


2024 | 1 | 181-193

Article title

Scientific coexistence and mutual learning between humans and ChatGPT 4.0

Title variants

PL
Naukowa koegzystencja i wzajemne uczenie się ludzi oraz ChatGPT 4.0

Abstracts

PL
This study explores the dynamics of scientific collaboration and mutual learning between an academic teacher-researcher and artificial intelligence, embodied by ChatGPT 4.0 (named "Alex"). The study frames AI, and ChatGPT 4.0 in particular, not only as a cognitive tool but also as a partner in dialogue. Using autoethnography, the analysis shows how human-AI interactions shape human perception, experience, and practices across various areas of academic life. It also highlights the complexity of these interactions and the need for a reflective and critical approach to the human-AI relationship, pointing to the evolving nature of educational and methodological paradigms in the era of artificial intelligence.

Year

2024

Issue

1

Pages

181-193

Dates

published
2024

Contributors

  • Polski Uniwersytet na Obczyźnie, Londyn

References

  • Aggarwal A., Tam C. C., Wu D., Li X., Qiao S. (2022). Artificial intelligence (AI)-based chatbots in promoting health behavioural changes: a systematic review, https://doi.org/10.1101/2022.07.05.22277263.
  • Anderson L. B., Kanneganti D., Houk M. B., Holm R. H., Smith T. (2023). Generative AI as a tool for environmental health research translation. GeoHealth, 7 (7), https://doi.org/10.1029/2023gh000875.
  • Andujar C. S., Gutiérrez-Martín L., Miranda-Calero J. Á., Blanco-Ruiz M., López-Ongil C. (2022). Gender biases in the training methods of affective computing: redesign and validation of the self-assessment manikin in measuring emotions via audiovisual clips. Frontiers in Psychology, 13, https://doi.org/10.3389/fpsyg.2022.955530.
  • Berens P., Cranmer K., Lawrence N. D., Luxburg U. v., Montgomery J. (2023). AI for science: an emerging agenda, https://doi.org/10.48550/arxiv.2303.04217.
  • Caliskan A., Bryson J. J., Narayanan A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356 (6334), 183–186, https://doi.org/10.1126/science.aal4230.
  • Ciechanowski L., Przegalinska A., Magnuski M., Gloor P. (2019). In the shades of the uncanny valley: an experimental study of human-chatbot interaction. Future Generation Computer Systems, 92, 539–548, https://doi.org/10.1016/j.future.2018.01.055.
  • Darcel K., Upshaw T., Craig-Neil A., Macklin J., Gray C. S., Chan T. C. Y., Pinto A. D. (2023). Implementing artificial intelligence in Canadian primary care: barriers and strategies identified through a national deliberative dialogue. Plos One, 18 (2), e0281733, https://doi.org/10.1371/journal.pone.0281733.
  • Denzin N. K. (2017). Critical qualitative inquiry. Qualitative Inquiry, 23 (1), 8–16. https://doi.org/10.1177/1077800416681864
  • Elisa F., Cesta A., Umbrico A., Orlandini A. (2021). Simplifying the AI planning modelling for human-robot collaboration. 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), https://doi.org/10.1109/ro-man50785.2021.9515431.
  • Ellis C. (2004). The Ethnographic I: A Methodological Novel about Autoethnography. Walnut Creek, California: Altamira Press.
  • Ellis C., Adams T. E., Bochner A. P. (2011). Autoethnography: an overview. Historical Social Research, 36 (4), 273–290, https://doi.org/10.12759/hsr.36.2011.4.273-290.
  • Klichowski M. (2020). People Copy the Actions of Artificial Intelligence. Frontiers in Psychology, 11, Article 1130, 1–7, https://doi.org/10.3389/fpsyg.2020.01130.
  • Köbis N., Bonnefon J., Rahwan I. (2021). Bad machines corrupt good morals. Nature Human Behaviour, 5 (6), 679–685, https://doi.org/10.1038/s41562-021-01128-2.
  • Köllen T. (2019). Diversity management: a critical review and agenda for the future. Journal of Management Inquiry, 30 (3), 259–272, https://doi.org/10.1177/1056492619868025.
  • Laakkonen M. (2021). Artificial intelligence (AI): hidden rules of our society. Hallinnon Tutkimus, 40 (4), 276–283, https://doi.org/10.37450/ht.112201.
  • Lee C. I., Houssami N., Elmore J. G., Buist D. S. (2020). Pathways to breast cancer screening artificial intelligence algorithm validation. The Breast, 52, 146–149, https://doi.org/10.1016/j.breast.2019.09.005.
  • Lemaignan S., Warnier M., Sisbot E. A., Clodic A., Alami R. (2017). Artificial cognition for social human-robot interaction: an implementation. Artificial Intelligence, 247, 45–69, https://doi.org/10.1016/j.artint.2016.07.002.
  • O’Meara S. (2019). AI researchers in China want to keep the global-sharing culture alive. Nature 569, S33–S35, https://doi.org/10.1038/d41586-019-01681-x.
  • OpenAI (2023). ChatGPT (version 4.0). Retrieved from https://chat.openai.com/.
  • Roche C., Wall P. J., Lewis D. (2022). Ethics and diversity in artificial intelligence policies, strategies and initiatives. AI and Ethics, 3 (4), 1095–1115, https://doi.org/10.1007/s43681-022-00218-9.
  • Smith J. (2023). Effects of Machine Learning Algorithms for Predicting and Optimizing the Properties of New Materials in the United States. European Journal of Physics Sciences, 6 (1), 23–34, https://doi.org/10.47672/ejps.1444.
  • Szwabowski O., Baron-Polańczyk E., Cywiński A., Gliniecka M., Lib W., Łuszczek K., Marek L., Perzycka E., Walat W., Warzocha T. (2022). A Story by Academic Teacher About Distance Education in the Time of Lockdown. Cultural Studies – Critical Methodologies, 22 (4), 1–10, https://doi.org/10.1177/15327086221094283.
  • Tenakwah E. S., Boadu G., Tenakwah E. J., Parzakonis M., Brady M., Kansiime P., Berman A. (2023). Generative AI and higher education assessments: a competency-based analysis, https://doi.org/10.21203/rs.3.rs-2968456/v1.
  • Thoring K., Huettemann S., Mueller R. M. (2023). The augmented designer: a research agenda for generative AI-enabled design. Proceedings of the Design Society, 3, 3345–3354, https://doi.org/10.1017/pds.2023.335.
  • Upshaw T., Craig-Neil A., Macklin J., Gray C. S., Chan T. C. Y., Gibson J. L., Pinto A. D. (2023). Priorities for artificial intelligence applications in primary care: a Canadian deliberative dialogue with patients, providers, and health system leaders. The Journal of the American Board of Family Medicine, 36 (2), 210–220, https://doi.org/10.3122/jabfm.2022.220171r1.
  • Vasoya N. H. (2023). The role of parents and educators in managing the risks of artificial intelligence. Asian Journal of Education and Social Studies, 41 (4), 1–5, https://doi.org/10.9734/ajess/2023/v41i4899.
  • Victor B. G., Kubiak S. P., Angell B., Perron B. E. (2023). Time to move beyond the ASWB licensing exams: can generative artificial intelligence offer a way forward for social work? Research on Social Work Practice, 33 (5), 511–517, https://doi.org/10.1177/10497315231166125.
  • Vlasceanu M., Dudík M., Momennejad I. (2021). Interdisciplinarity, gender diversity, and network structure predict the centrality of AI organizations, https://doi.org/10.31234/osf.io/dp3ef.

Identifiers

Biblioteka Nauki
63008364

YADDA identifier

bwmeta1.element.ojs-issn-2052-319X-year-2024-issue-1-article-b4641d88-151b-372b-a250-9fa09f4a0f7d