


2025 | 3 | 133-159

Article title

Adopcje i rekonfiguracje. Socjologiczne ujęcie wykorzystania generatywnej sztucznej inteligencji w badaniach społecznych

Content

Title variants

EN
Adoptions and Reconfigurations: a Sociological Perspective on the Use of Generative Artificial Intelligence in Social Research

Languages of publication

PL

Abstracts

PL
Celem tego artykułu jest przyjrzenie się możliwościom i problemom związanym z wykorzystaniem technologii generatywnej sztucznej inteligencji w badaniach społecznych oraz zaproponowanie socjologicznej ramy pojęciowej do analizy zachodzących w tym obszarze przemian. Tekst rozpoczynamy od przeglądu aktualnie dostępnych zastosowań sztucznej inteligencji (ang. artificial intelligence, AI) w prowadzeniu badań społecznych. Następnie dokonujemy ich krytycznej analizy – przede wszystkim z uwagi na ryzyka, jakie ich zastosowanie niesie dla procesów tworzenia wiedzy o świecie społecznym i właściwości wytwarzanej wiedzy. Wreszcie, opierając się na współczesnej generacji teorii praktyk społecznych oraz modelu wielopoziomowej transformacji socjo-technologicznej zaproponowanym przez Franka Geelsa, pokazujemy, że wykorzystanie narzędzi AI może być rozumiane nie tyle jako ich proste „użycie", ile – trafniej – jako wieloaspektowe, otwarte i dynamiczne „adoptowanie". Jak argumentujemy, efekty tych procesów nie są determinowane samą technologią, ale mimo to mogą się wiązać ze znaczną rekonfiguracją praktyk badawczych oraz szerszych układów społeczno-technologicznych, których te praktyki są częścią.
EN
The aim of this article is to explore the possibilities and implications of using digital tools based on generative artificial intelligence technologies in social research. We begin by providing an overview of the currently available applications of AI in social research. Next, we critically analyze these applications, focusing primarily on the risks they pose to the characteristics of the knowledge produced. Finally, drawing on contemporary social practice theory and the multilevel socio-technological transformation model proposed by Frank Geels, we demonstrate that the use of AI tools should be understood not merely as straightforward “application,” but rather as a multifaceted, open-ended, and dynamic process of “adoption.” We argue that the outcomes of these processes are not solely determined by the technology itself; however, they can lead to a reconfiguration of research practices and the broader socio-technological frameworks in which these practices exist.

Year

2025
Issue

3

Pages

133-159

Physical description

Contributors

  • Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie
  • Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie

References

  • Afeltowicz, Ł., Pietrowicz, K. (2013). Maszyny społeczne. Wszystko ujdzie, o ile działa. Wydawnictwo Naukowe PWN.
  • Argyle, L. P., Busby, E. C., Fulda, N., Gubler, J. R., Rytting, C., Wingate, D. (2023). Out of One, Many: Using Language Models to Simulate Human Samples. Political Analysis, 31(3), 337–351.
  • Argyle, L. P., Busby, E. C., Gubler, J. R., Hepner, B., Lyman, A., Wingate, D. (2025). Arti-“fickle” Intelligence: Using LLMs as a Tool for Inference in the Political and Social Sciences. arXiv. https://doi.org/10.48550/arXiv.2504.03822
  • Baek, J., Jauhar, S. K., Cucerzan, S., Hwang, S. J. (2024). Researchagent: Iterative Research Idea Generation over Scientific Literature with Large Language Models. arXiv. https://doi.org/10.48550/arXiv.2404.07738
  • Bail, C. A. (2024). Can Generative AI Improve Social Science? Proceedings of the National Academy of Sciences, 121(21). https://doi.org/10.1073/pnas.2314021121
  • Bassett, C., Roberts, B. (2023). Automation Anxiety: A Critical History – the Apparently Odd Recurrence of Debates about Computation, AI and Labour. W: S. Lindgren (red.), Handbook of Critical Studies of Artificial Intelligence (s. 79–93). Edward Elgar Publishing.
  • Bińczyk, E. (2007). Obraz, który nas zniewala. Współczesne ujęcia języka wobec esencjalizmu i problemu referencji. Universitas.
  • Bisbee, J., Clinton, J. D., Dorff, C., Kenkel, B., Larson, J. M. (2024). Synthetic Replacements for Human Survey Data? The Perils of Large Language Models. Political Analysis, 32(4), 401–416.
  • Bolanos, F., Salatino, A., Osborne, F., Motta, E. (2024). Artificial Intelligence for Literature Reviews: Opportunities and Challenges. Artificial Intelligence Review, 57(10), 259. https://doi.org/10.48550/arXiv.2402.08565
  • Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S. i in. (2021). On the Opportunities and Risks of Foundation Models. arXiv. https://doi.org/10.48550/arXiv.2108.07258
  • Browne, J., Cave, S., Drage, E., McInerney, K. (red.). (2023). Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines. Oxford University Press.
  • Bruch, E., Atwell, J. (2015). Agent-based Models in Empirical Social Research. Sociological Methods & Research, 44(2), 186–221. https://doi.org/10.1177/0049124113506405
  • Calegari, R., Ciatto, G., Mascardi, V., Omicini, A. (2021). Logic-based Technologies for Multiagent Systems: A Systematic Literature Review. Autonomous Agents and Multi-Agent Systems, 35(1), 1.
  • Camic, C., Gross, N., Lamont, M. (red.). (2011). Social Knowledge in the Making. University of Chicago Press.
  • Chopra, F., Haaland, I. (2023). Conducting Qualitative Interviews with AI. http://dx.doi.org/10.2139/ssrn.4572954
  • Christou, P. A. (2023). How to Use Artificial Intelligence (AI) as a Resource, Methodological and Analysis Tool in Qualitative Research? Qualitative Report, 28(7), 1968–1980. https://doi.org/10.46743/2160-3715/2023.6406
  • Collins, R. (1994). Why the Social Sciences Won’t Become High-Consensus Rapid Discovery Science. Sociological Forum, 9(2), 155–177.
  • Collins, K. M., Jiang, A. Q., Frieder, S., Wong, L., Zilka, M. i in. (2024). Evaluating Language Models for Mathematics through Interactions. Proceedings of the National Academy of Sciences, 121(24). https://doi.org/10.1073/pnas.231812412
  • Cornell University (2023). Generative AI in Academic Research: Perspectives & Cultural Norms. https://it.cornell.edu/sites/default/files/itc-drupal10-files/Generative%20AI%20in%20Research_%20Cornell%20Task%20Force%20Report-Dec2023.pdf#page=8.09
  • European Commission. (2024). Living Guidelines on Responsible Use of Generative AI. https://research-and-innovation.ec.europa.eu/document/download/2b6cf7e5-36ac-41cb-aab5-0d32050143dc_en?filename=ec_rtd_ai-guidelines.pdf
  • Feldman, M. S., Orlikowski, W. J. (2011). Theorizing Practice and Practicing Theory. Organization Science, 22(5), 1240–1253. http://hdl.handle.net/1721.1/66516
  • Feuerriegel, S., Hartmann, J., Janiesch, C., Zschech, P. (2024). Generative AI. Business & Information Systems Engineering, 66(1), 111–126. https://doi.org/10.1007/s12599-023-00834-7
  • Floridi, L. (2020). AI and Its New Winter: From Myths to Realities. Philosophy & Technology, 33, 1–3. https://doi.org/10.1007/s13347-020-00396-6
  • Geels, F. W. (2002). Technological Transitions as Evolutionary Reconfiguration Processes: A Multi-level Perspective and a Case-study. Research Policy, 31(8–9), 1257–1274. https://doi.org/10.1016/S0048-7333(02)00062-8
  • Geels, F. W. (2011). The Multi-level Perspective on Sustainability Transitions: Responses to Seven Criticisms. Environmental Innovation and Societal Transitions, 1(1), 24–40. https://doi.org/10.1016/j.eist.2011.02.002
  • Götz, F. M., Maertens, R., Loomba, S., van der Linden, S. (2023). Let the Algorithm Speak: How to Use Neural Networks for Automatic Item Generation in Psychological Scale Development. Psychological Methods, 29(3), 494–518. https://doi.org/10.1037/met0000540
  • Grossmann, I., Feinberg, M., Parker, D. C., Christakis, N. A., Tetlock, P. E., Cunningham, W. A. (2023). AI and the Transformation of Social Science Research. Science, 380(6650), 1108–1109. https://doi.org/10.1126/science.adi1778
  • Haman, M., Školník, M. (2023). Using ChatGPT to Conduct a Literature Review. Accountability in Research, 31(8), 1244–1246. https://doi.org/10.1080/08989621.2023.2185514
  • Hanchard, M. (2024). Towards a Practice-orientated Digital Sociology. W: Engaging with Digital Maps. Our Knowledgeable Deferral to Rough Guides (s. 93–131). Palgrave Macmillan.
  • Hofmann, V., Kalluri, P. R., Jurafsky, D., King, S. (2024). Dialect Prejudice Predicts AI Decisions about People’s Character, Employability, and Criminality. arXiv. https://doi.org/10.48550/arXiv.2403.00742
  • Huang, J., Tan, M. (2023). The Role of ChatGPT in Scientific Communication: Writing Better Scientific Review Articles. American Journal of Cancer Research, 13(4), 1148.
  • Hui, A., Schatzki, T., Shove, E. (red.). (2017). The Nexus of Practices. Connections, Constellations, Practitioners. Routledge.
  • Jain, S., Kumar, A., Roy, T., Shinde, K., Vignesh, G., Tondulkar, R. (2024). SciSpace Literature Review: Harnessing AI for Effortless Scientific Discovery. W: European Conference on Information Retrieval (s. 256–260). Springer Nature Switzerland.
  • Joyce, K., Smith-Doerr, L., Alegria, S., Bell, S., Cruz, T. i in. (2021). Toward a Sociology of Artificial Intelligence: A Call for Research on Inequalities and Structural Change. Socius, 7. https://doi.org/10.1177/2378023121999581
  • Keller, M., Sahakian, M., Hirt, L. F. (2022). Connecting the Multi-level-perspective and Social Practice Approach for Sustainable Transitions. Environmental Innova- tion and Societal Transitions, 44, 14–28. https://doi.org/10.1016/j.eist.2022.05.004
  • Krajewski, M. (2023). Wstęp. Mapując kontrowersje teorii praktyk społecznych. Studia Socjologiczne, 4(251), 11–16.
  • Lansing, J. S. (2003). Complex Adaptive Systems. Annual Review of Anthropology, 32(1), 183–204. https://doi.org/10.1146/annurev.anthro.32.061002.093440
  • Law, T., McCall, L. (2024). Artificial Intelligence Policymaking: An Agenda for Soci- ological Research. Socius, 10, 1–13. https://doi.org/10.1177/23780231241261596
  • Law, J. (2009). Seeing Like a Survey. Cultural Sociology, 3(2), 239–256. https://doi.org/10.1177/1749975509105533
  • Li, A., Chen, H., Namkoong, H., Peng, T. (2025). LLM Generated Persona is a Promise with a Catch. arXiv. https://doi.org/10.48550/arXiv.2503.16527
  • Lu, C., Lu, C., Lange, R. T., Foerster, J., Clune, J., Ha, D. (2024). The AI Scientist: Towards Fully Automated Open-ended Scientific Discovery. arXiv. https://doi.org/10.48550/arXiv.2408.06292
  • Luybashenko, I. (2025). Analizy jakościowe wspierane gen AI. W: K. Olejniczak, D. Batorski, J. Pokorski (red.), Generatywna AI w badaniach. Praktyczne zastosowania w ewaluacji polityk publicznych (s. 131–153). Polska Agencja Rozwoju Przedsiębiorczości.
  • Lupton, D. (2014). Digital Sociology. Routledge.
  • Maclure, J. (2020). The New AI Spring: A Deflationary View. AI & Society, 35(3), 747–750.
  • Marres, N. (2017). Digital Sociology: The Reinvention of Social Research. John Wiley & Sons.
  • Marres, N., Guggenheim, M., Wilkie, A. (2018). Inventing the Social. Mattering Press.
  • McCarthy, J., Minsky, M. L., Rochester, N., Shannon, C. E. (2006). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine, 27(4), 12. https://doi.org/10.1609/aimag.v27i4.1904
  • Miller, J. H., Page, S. E. (2007). Complex Adaptive Systems: An Introduction to Computational Models of Social Life. Princeton University Press.
  • Morgan, D. L. (2023). Exploring the Use of Artificial Intelligence for Qualitative Data Analysis: The Case of ChatGPT. International Journal of Qualitative Methods, 22, https://doi.org/10.1177/16094069231211248
  • Morgan, D. L., Nica, A. (2020). Iterative Thematic Inquiry: A New Method for Analyzing Qualitative Data. International Journal of Qualitative Methods, 19. https://doi.org/10.1177/1609406920955118
  • Obermeyer, Z., Powers, B., Vogeli, C., Mullainathan, S. (2019). Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
  • Osborne, T., Rose, N. (1999). Do the Social Sciences Create Phenomena? The Example of Public Opinion Research. The British Journal of Sociology, 50(3), 367–396. https://doi.org/10.1111/j.1468-4446.1999.00367.x
  • Oxford University Press. (2024). Researchers and AI: Survey Findings. https://academic.oup.com/pages/ai-survey-findings
  • Paduraru, C., Cristea, R., Stefanescu, A. (2024). Adaptive Questionnaire Design Using AI Agents for People Profiling. W: ICAART (3) (s. 633–640).
  • Pantzar, M., Shove, E. (2010). Understanding Innovation in Practice: A Discussion of the Production and Re-production of Nordic Walking. Technology Analysis & Strategic Management, 22(4), 447–461.
  • Park, J. S., Zou, C. Q., Shaw, A., Hill, B. M., Cai, C., i in. (2024). Generative Agent Simulations of 1,000 People. arXiv. https://doi.org/10.48550/arXiv.2411.10109
  • Peñalvo, F. J. G., Ingelmo, A. V. (2023). What do We Mean by GenAI? A Systematic Mapping of the Evolution, Trends, and Techniques Involved in Generative AI. IJIMAI, 8(4), 7–16. https://doi.org/10.9781/ijimai.2023.07.006
  • Petticrew, M., Roberts, H. (2008). Systematic Reviews in the Social Sciences: A Practical Guide. John Wiley & Sons.
  • Radford, A., Narasimhan, K., Salimans, T., Sutskever, I. (2018). Improving Language Understanding by Generative Pre-training. https://api.semanticscholar.org/CorpusID:49313245
  • Rani, V., Nabi, S. T., Kumar, M., Mittal, A., Kumar, K. (2023). Self-supervised Learning: A Succinct Review. Archives of Computational Methods in Engineering, 30(4), 2761–2775. https://doi.org/10.1007/s11831-023-09884-2
  • Rip, A., Voß, J. P. (2019). Umbrella Terms as Mediators in the Governance of Emerging Science and Technology. W: A. Rip, Nanotechnology and Its Governance (s. 10–33). Routledge.
  • Røpke, I. (2009). Theories of Practice – New Inspiration for Ecological Economic Studies on Consumption. Ecological Economics, 68(10), 2490–2497. https://doi.org/10.1016/j.ecolecon.2009.05.015
  • Rudin, C. (2019). Stop Explaining Black Box Machine Learning Models for High-Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
  • Rudnicki, S. (2023). Wystarczająco dobra wiedza. Badania user experience a wytwarzanie praktycznie użytecznej wiedzy o świecie społecznym. Wydawnictwo Naukowe Scholar.
  • Ruppert, E., Law, J., Savage, M. (2013). Reassembling Social Science Methods: The Challenge of Digital Devices. Theory, Culture & Society, 30(4), 22–46. https://doi.org/10.1177/0263276413484941
  • Russell Group. (2023). Russell Group Statement of Principles on Artificial Intelligence in Higher Education. https://russellgroup.ac.uk/media/6137/rg_ai_principles-final.pdf
  • Sampson, A. (2024). Using AI to Make Social Research More Inclusive and Impactful. https://faculty.ai/insights/articles/using-ai-to-make-social-research-more-inclusive-and-impactful
  • Savage, M., Burrows, R. (2007). The Coming Crisis of Empirical Sociology. Sociology, 41(5), 885–899. https://doi.org/10.1177/0038038507080443
  • Schatzki, T. (2001). Introduction: Practice Theory. W: T. Schatzki, K. Knorr-Cetina, E. von Savigny (red.), The Practice Turn in Contemporary Theory (s. 10–23). Routledge.
  • Schmidgall, S., Su, Y., Wang, Z., Sun, X., Wu, J., Yu, X. i in. (2025). Agent Laboratory: Using LLM Agents as Research Assistants. arXiv. https://doi.org/10.48550/arXiv.2501.04227
  • Shove, E. (2010). Beyond the ABC: Climate Change Policy and Theories of Social Change. Environment and Planning A, 42(6), 1273–1285.
  • Shove, E., Pantzar, M. (2005). Consumers, Producers and Practices: Understanding the Invention and Reinvention of Nordic Walking. Journal of Consumer Culture, 5(1), 43–64. http://dx.doi.org/10.1177/1469540505049846
  • Shove, E., Pantzar, M., Watson, M. (2012). The Dynamics of Social Practice: Everyday Life and How It Changes. Sage.
  • Shults, F. L. (2025). Simulating Theory and Society: How Multi-agent Artificial Intelligence Modeling Contributes to Renewal and Critique in Social Theory. Theory and Society, 1–23. https://doi.org/10.1007/s11186-025-09606-6
  • Si, C., Yang, D., Hashimoto, T. (2024). Can LLMs Generate Novel Research Ideas? A Large-scale Human Study with 100+ NLP Researchers. arXiv. https://doi.org/10.48550/arXiv.2409.04109
  • Sikorska, M. (2018). Teorie praktyk jako alternatywa dla badań nad rodziną prowadzonych w Polsce. Studia Socjologiczne, 229(2), 31–63. https://doi.org/10.24425/122463
  • Smagacz-Poziemska, M., Bukowski, A., Kurnicki, K. (2018). „Wspólnota parkingowania”. Praktyki parkowania na osiedlach wielkomiejskich i ich strukturalne konsekwencje. Studia Socjologiczne, 228(1), 117–142. https://doi.org/10.24425/119089
  • Stanford University. (n.d.). Responsible AI: Ethical Principles and Guidelines. Stanford University IT. https://uit.stanford.edu/security/responsibleai
  • Stehr, N., Grundman, R. (2001). The Authority of Complexity. British Journal of Sociology, 52(2), 313–329. https://doi.org/10.1080/00071310120045006
  • Stokel-Walker, C., Van Noorden, R. (2023). What ChatGPT and Generative AI Mean for Science. Nature, 614(7947), 214–216. https://doi.org/10.1038/d41586-023-00340-6
  • Torre-López, J., Ramírez, A., Romero, J. R. (2023). Artificial Intelligence to Automate the Systematic Review of Scientific Literature. Computing, 105(10), 2171–2194. https://doi.org/10.1007/s00607-023-01181-x
  • Törnberg, P. (2023). How to Use LLMs for Text Analysis. arXiv. https://doi.org/10.48550/arXiv.2307.13106
  • University of Oxford, Information Security. (n.d.). Guidelines on the Use of Generative AI. University of Oxford. https://communications.admin.ox.ac.uk/communications-resources/ai-guidance#collapse5321536
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L. i in. (2017). Attention is All You Need. Advances in Neural Information Processing Systems, 30. https://doi.org/10.48550/arXiv.1706.03762
  • Wang, H., Fu, T., Du, Y., Gao, W., Huang, K. i in. (2023). Scientific Discovery in the Age of Artificial Intelligence. Nature, 620(7972), 47–60. https://doi.org/10.1038/s41586-023-06221-2
  • Warde, A. (2014). After Taste: Culture, Consumption and Theories of Practice. Journal of Consumer Culture, 14(3), 279–303. http://dx.doi.org/10.1177/1469540514547828
  • Whitfield, S., Hofmann, M. A. (2023). Elicit: AI Literature Review Research Assistant. Public Services Quarterly, 19(3), 201–207. http://dx.doi.org/10.1080/15228959.2023.2224125
  • Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E. i in. (2018). AI Now Report 2018 (s. 1–62). AI Now Institute at New York University.
  • Williams, R. T. (2024). Paradigm Shifts: Exploring AI’s Influence on Qualitative Inquiry and Analysis. Frontiers in Research Metrics and Analytics, 9. http://dx.doi.org/10.3389/frma.2024.1331589
  • Xu, R., Sun, Y., Ren, M., Guo, S., Pan, R. i in. (2024). AI for Social Science and Social Science of AI: A Survey. Information Processing & Management, 61(3). https://doi.org/10.48550/arXiv.2401.11839
  • Yang, Z., Du, X., Li, J., Zheng, J., Poria, S., Cambria, E. (2023). Large Language Models for Automated Open-domain Scientific Hypotheses Discovery. arXiv. https://doi.org/10.48550/arXiv.2309.02726
  • You, Z., Lee, H., Mishra, S., Jeoung, S., Mishra, A., Kim, J., Diesner, J. (2024). Beyond Binary Gender Labels: Revealing Gender Biases in LLMs through Gender-neutral Name Predictions. arXiv. https://doi.org/10.48550/arXiv.2407.05271
  • Yuan, Z., Yuan, H., Tan, C., Wang, W., Huang, S. (2023). How Well Do Large Language Models Perform in Arithmetic Tasks? arXiv. https://doi.org/10.48550/arXiv.2304.02015

Document Type

Publication order reference

YADDA identifier

bwmeta1.element.desklight-53777009-3c7e-4771-a23c-4ff146087fad