

2023 | 2(14) | 85-100

Article title

Artificial Intelligence: opportunities and concerns in the era of Big Data. Ethical and practical issues with decision-making and Generative AI in the era of ChatGPT

Authors

Content

Title variants

PL
Sztuczna inteligencja: możliwości i obawy w erze Big Data. Etyczne i praktyczne kwestie związane z podejmowaniem decyzji i generatywną sztuczną inteligencją w erze ChatGPT

Languages of publication

Abstracts

EN
Artificial Intelligence has made impressive progress in recent decades, and in the last few years in particular, numerous systems capable of simulating human responses in a realistic and coherent way have become available to the general public. This has opened the road to new applications of AI in a variety of contexts, from generative AI for creative work to the shortening of otherwise time-consuming tasks such as programming. However, it has also created unforeseen issues that have yet to be addressed, since legal and ethical guidelines for the use of these new AI tools are still lacking. This article analyses some of the most controversial applications of these new AI systems, highlighting both practical problems and ethical concerns, as well as possible ways in which they may be dealt with in the future.

Publisher

Year

Issue

Pages

85-100

Physical description

Dates

published
2023

Contributors

  • Centrum Nauki Experyment

References

  • Avrahami, O. and Tamir, B. (2021) ‘Ownership and Creativity in Generative Models’, arXiv. Available at: https://arxiv.org/abs/2112.01516.
  • Casal-Otero, L. et al. (2023) ‘AI literacy in K-12: A systematic literature review’, International Journal of STEM Education, 10(1). doi:10.1186/s40594-023-00418-7.
  • Cooper, D. (2022) Is DALL-E’s art borrowed or stolen?, Engadget. Available at: https://www.engadget.com/dall-e-generative-ai-tracking-data-privacy-160034656.html [Accessed: 01 August 2023].
  • Dastin, J. (2018) Amazon scraps secret AI recruiting tool that showed bias against women, Reuters. Available at: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G [Accessed: 31 July 2023].
  • Dayma, B. (2021) DALL-E Mini explained, W&B. Available at: https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mini-Explained-with-Demo--Vmlldzo4NjIxODA#the-datasets-used [Accessed: 01 August 2023].
  • De Bruijn, H., Warnier, M. and Janssen, M. (2022) ‘The perils and pitfalls of explainable AI: Strategies for explaining algorithmic decision-making’, Government Information Quarterly, 39(2), p. 101666. doi:10.1016/j.giq.2021.101666.
  • Dwivedi, Y.K. et al. (2023) ‘Opinion paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy’, International Journal of Information Management, 71, p. 102642. doi:10.1016/j.ijinfomgt.2023.102642.
  • Gille, F., Jobin, A. and Ienca, M. (2020) ‘What we talk about when we talk about trust: Theory of trust for AI in Healthcare’, Intelligence-Based Medicine, 1–2, p. 100001. doi:10.1016/j.ibmed.2020.100001.
  • Growcoot, M. (2023) AI image dataset demands money from photographer who requested removal of his photos, PetaPixel. Available at: https://petapixel.com/2023/04/26/ai-image-dataset-demands-money-from-photographer-who-requested-removal-of-his-photos/ [Accessed: 01 August 2023].
  • Haenlein, M. and Kaplan, A. (2019) ‘A brief history of artificial intelligence: On the past, present, and future of Artificial Intelligence’, California Management Review, 61(4), pp. 5–14. doi:10.1177/0008125619864925.
  • Hillier, M. (2023) Why does ChatGPT generate fake references?, TECHE. Available at: https://teche.mq.edu.au/2023/02/why-does-chatgpt-generate-fake-references/ [Accessed: 04 August 2023].
  • Hudnall, H. (2023) Fact check: Video altered to show Joe Biden making transphobic remarks, USA Today. Available at: https://eu.usatoday.com/story/news/factcheck/2023/02/09/fact-check-video-edited-show-joe-biden-making-transphobic-remarks/11211453002/ [Accessed: 04 August 2023].
  • Lee, J.Y. (2023) ‘Can an artificial intelligence chatbot be the author of a scholarly article?’, Science Editing, 10(1), pp. 7–12. doi:10.6087/kcse.292.
  • Li, X. and Zhang, T. (2017) ‘An exploration on artificial intelligence application: From security, privacy and ethic perspective’, 2017 IEEE 2nd International Conference on Cloud Computing and Big Data Analysis (ICCCBDA) [Pre-print]. doi:10.1109/icccbda.2017.7951949.
  • Morley, J. et al. (2020) ‘The ethics of AI in health care: A mapping review’, SSRN Electronic Journal [Preprint]. doi:10.2139/ssrn.3830408.
  • O’Neil, C. (2018) Weapons of math destruction: How big data increases inequality and threatens democracy. London, UK: Penguin Books.
  • Papp, D., Krausz, B. and Gyuranecz, F. (2022) ‘The AI is now in session – The impact of digitalisation on courts’, Cybersecurity and Law, 7(1), pp. 272–296. doi:10.35467/cal/151833.
  • Parra, D. and Stroud, S.R. (2023) The ethics of AI art, Center for Media Engagement. Available at: https://mediaengagement.org/research/the-ethics-of-ai-art/ [Accessed: 01 August 2023].
  • Peters, U. (2022) ‘Explainable AI lacks regulative reasons: Why ai and human decision-making are not equally opaque’, AI and Ethics, 3(3), pp. 963–974. doi:10.1007/s43681-022-00217-w.
  • Ray, P.P. (2023) ‘ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope’, Internet of Things and Cyber-Physical Systems, 3, pp. 121–154. doi:10.1016/j.iotcps.2023.04.003.
  • Torrance, A.W. and Tomlinson, B. (2023) ‘Training Is Everything: Artificial Intelligence, Copyright, and Fair Training’, arXiv. Available at: https://arxiv.org/abs/2305.03720.
  • Turing, A.M. (1950) ‘Computing machinery and intelligence’, Mind, LIX(236), pp. 433–460. doi:10.1093/mind/lix.236.433.
  • Vaccari, C. and Chadwick, A. (2020) ‘Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news’, Social Media + Society, 6(1), p. 205630512090340. doi:10.1177/2056305120903408.
  • Vayena, E. (2015) ‘Ethical challenges of Big Data in public health’, European Journal of Public Health, 25(suppl_3). doi:10.1093/eurpub/ckv169.024.
  • Wang, F. and Preininger, A. (2019) ‘AI in health: State of the art, challenges, and future directions’, Yearbook of Medical Informatics, 28(01), pp. 016–026. doi:10.1055/s-0039-1677908.
  • Zhang, Y. et al. (2021) ‘Ethics and privacy of Artificial Intelligence: Understandings from Bibliometrics’, Knowledge-Based Systems, 222, p. 106994. doi:10.1016/j.knosys.2021.106994.
  • Editorial policies: Artificial Intelligence (no date) Nature. Available at: https://www.nature.com/nature-portfolio/editorial-policies/ai [Accessed: 04 August 2023].
  • Artificial intelligence (no date) Oxford Advanced Learner’s Dictionary. Available at: https://www.oxfordlearnersdictionaries.com/definition/english/artificial-intelligence [Accessed: 04 August 2023].
  • European Parliamentary Research Service and Madiega, T. (2023) Artificial Intelligence Act.

Document Type

Publication order reference

Identifiers

Biblioteka Nauki
32388079

YADDA identifier

bwmeta1.element.ojs-doi-10_34813_psc_2_2023_6