The implementation of solutions based on artificial intelligence not only raises the level of innovation and accelerates the development of companies operating in the digital space but can also be seen as beneficial from the user’s perspective. By offering (seemingly) free tools designed to ease human work, providers encourage their often unreflective overuse, not only for entertainment but also for drafting reports, official documents, and even scientific texts. In these interactions, users enter data that may constitute trade secrets or be of a private nature. Such practices can deepen the imbalance between the benefits captured by companies offering AI-based services and those accruing to users, ultimately threatening the latter’s privacy. This article is both an attempt to answer the questions of whether artificial intelligence can be controlled (from the perspective of organizations and of users), how privacy can be protected in the age of artificial intelligence, and whether the right to be forgotten can be guaranteed, and a contribution to a broader discussion of the often unacknowledged consequences of using generative artificial intelligence services. For this purpose, a multiple case study method was used: the terms of service of selected AI-based services offered by companies, covering text generation, image generation, and immersive experiences, were analyzed. Examples of solutions that increase anonymity in the digital environment and enable the deletion of data from the internet are also presented.
Since the dissemination of generative artificial intelligence (GenAI) tools such as ChatGPT, GPT-4, HeyPi, and DALL-E, one of the contentious topics in educational projects has become plagiarism, given the ease with which such tools can produce content: written texts, mathematical solutions, or even programming code. For this reason, the US company Turnitin has launched a tool called Originality to detect the involvement of artificial intelligence (AI) in text generation. In other words, this tool aims to help educators and teachers identify written work that was likely composed by AI. Recently, however, an abundance of algorithms and innovative modifications of AI tools has emerged that can thwart this detection and obscure whether the author worked with integrity. This article therefore aims to outline: (a) the importance of contextual thinking when confronting GenAI tools, and (b) how long current solutions for verifying academic integrity will remain sufficient, and whether they are (in fact) sufficient now.