This article aims to clarify the essence of the concept of “artificial sociality” in the context of human-machine interaction, answering the main research question of this study: is artificial sociality a prerequisite or a result of this interaction? To achieve this aim, the authors conducted a logical analysis of the definitions of sociality and artificial sociality presented in the scientific literature and empirically studied artificial sociality in the context of human-machine interaction, using three methods: comparison of means, correlation analysis, and discriminant analysis. All three methods were applied to the same data: indicators of the potential of human-machine interaction and G. Hofstede’s six cultural dimensions in the countries of the world (n = 63). With the help of the cultural dimensions, the authors sought to interpret empirically the degree of “artificiality” of the culture of a particular country (based on the methodological approach that distinguishes the “natural” and the “artificial” in a culture), an “artificiality” that determines the development of artificial sociality. The main conclusions of the research are as follows: 1) sociality is understood by the authors not as the characteristics of agents included in the communication network, but as a result of the implementation of these characteristics - the mechanism of social interactions created and used by communicating agents, interactions that take various forms: cooperation, rivalry, grouping, merging, etc.; 2) artificial sociality presupposes - and thereby differs from natural sociality - an artificial (algorithmic), as opposed to natural (associative or intuitive), mechanism of interaction between social agents in the course of their communication; 3) artificial sociality arose in human society along with the development of writing and, later, various methods of processing and storing information (cataloging, archiving, etc.), i.e. long before the appearance of machines; it is determined by the relative “artificiality” of a culture and is a prerequisite, rather than a result, of human-machine interaction. The research was funded by the Erasmus+ Programme of the European Union, Eurokey project No. 2017-1-TR01-KA202-046115.
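As a purely illustrative aid (not part of the original study), the sketch below shows how the three analyses named above - comparison of means, correlation, and discriminant analysis - might be run on country-level data combining an indicator of human-machine interaction potential with Hofstede’s six cultural dimensions. The column names, the median split into high/low groups, and the synthetic 63-country data are assumptions introduced here for demonstration only.

```python
# Hypothetical sketch of the three analyses named in the abstract:
# comparison of means, correlation, and discriminant analysis.
# All data and column names are placeholders, not the study's dataset.
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
dimensions = ["pdi", "idv", "mas", "uai", "lto", "ivr"]  # Hofstede's six dimensions

# Synthetic data standing in for the 63-country sample used in the study.
df = pd.DataFrame(rng.uniform(0, 100, size=(63, 7)),
                  columns=["hmi_potential"] + dimensions)

# Assumed split: high- vs. low-HMI-potential countries at the median.
df["group"] = (df["hmi_potential"] >= df["hmi_potential"].median()).astype(int)

# 1) Comparison of means: Welch t-test of each cultural dimension across groups.
for dim in dimensions:
    high = df.loc[df["group"] == 1, dim]
    low = df.loc[df["group"] == 0, dim]
    t, p = stats.ttest_ind(high, low, equal_var=False)
    print(f"{dim}: mean difference = {high.mean() - low.mean():.2f}, p = {p:.3f}")

# 2) Correlation: Pearson r between HMI potential and each dimension.
for dim in dimensions:
    r, p = stats.pearsonr(df["hmi_potential"], df[dim])
    print(f"{dim}: r = {r:.2f}, p = {p:.3f}")

# 3) Discriminant analysis: predict group membership from the six dimensions.
lda = LinearDiscriminantAnalysis().fit(df[dimensions], df["group"])
print("LDA classification accuracy:", lda.score(df[dimensions], df["group"]))
```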
This paper explores the intersection of human psychology and advanced technology, focusing on how intelligent and emotive technology influences human behavior and emotional intelligence and, in the process, may affect our ability to show and feel empathy. Grounded in Alfred Adler's theory of human motivation, we examine how feelings of inferiority - vulnerability, powerlessness, perfectibility, and the need for affiliation - drive our increasing dependence on technology. The human tendency to treat inanimate objects as animate is heightened by the sophisticated communication capabilities of Generative AI (Gen AI), altering our interpersonal dynamics and communication signals. We analyze how this shift affects empathy, self-centeredness, and impatience, suggesting a need for conscious awareness of technology's limitations to preserve genuine human connections. By conducting a “technology dependency audit,” we encourage individuals to reflect on the extent to which their lives are mediated by technology. Ultimately, the paper argues for reclaiming our emotional and practical autonomy from technology to maintain authentic human relationships and emotional well-being.
Artificial intelligence (AI) is rapidly transforming communication processes across various sectors, including marketing, education, healthcare, and entertainment. This study explores the theoretical perspectives surrounding AI’s integration into communication, examining how AI-driven tools such as ChatGPT, MidJourney, and Google Gemini are reshaping content creation, personalisation, and human-machine interaction. While AI enhances efficiency and allows for real-time customisation of messages, it also presents ethical challenges related to privacy, data security, and algorithmic bias. By synthesising key academic studies, the study outlines the critical ethical considerations, including the risks of deepfakes and disinformation, and emphasises the need for ethical frameworks to guide responsible AI use. The text also discusses the new digital competencies required to navigate AI-enhanced communication environments, such as AI literacy, data proficiency, and ethical reasoning. Through a systematic literature review, this study contributes to the ongoing discourse on AI’s role in communication by offering a comprehensive theoretical framework that highlights both the opportunities and limitations of AI technologies. Future research should focus on addressing gaps in empirical studies, particularly concerning the long-term impacts of AI on decision-making and the ethical governance of AI-generated content.