The polysemy of generative artificial intelligence

Authors

G. Astorri

DOI:

https://doi.org/10.7346/-feis-XXIII-01-25_10

Keywords:

Generative Artificial Intelligence, LLMs, Education, AI Companionship, History Teaching

Abstract

This paper introduces into the pedagogical landscape a critique of the function of large language models (LLMs) as mediators in relational and teaching practices. It explores the genesis and functioning of generative artificial intelligence and, in particular, of LLMs, which are now widely used in formal, non-formal, and informal education settings. After a brief theoretical introduction to these devices, two illustrative cases are presented: Humy.AI and Replika. Both companies offer a ‘relational agent’ that, built on a specific LLM, performs similar tasks in very different fields: Humy.AI is designed to support history teaching, while Replika is an AI companion designed to act as a virtual partner capable, for example, of empathic listening and non-judgmental dialogue. The critique rests on the observation that, in both cases, a large language model, simply because it can imitate natural language and respond effectively at the syntactic level, becomes an epistemological actor and a decision-maker in the formative processes of individuals, just as it already is in the quantitative organization of certain processes. It is therefore urgent to understand to what extent, and in what capacity, an agent based on generative artificial intelligence can act as a mediator in these processes. While empirical research must clearly identify the terms of this revolution, it is equally important to take a clear position and not to blur the pedagogical responsibilities that every educator must answer for, so as to avoid the excessive delegation to algorithms and generative-AI-based agents that already occurs in specific areas of human resources, robotics, and algorithmic workplace management.

References

Aresu, A. (2024). Geopolitica dell’Intelligenza Artificiale. Feltrinelli.

Avital, M., & Te’eni, D. (2009). From generative fit to generative capacity: Exploring an emerging dimension of information systems design and task performance. Information Systems Journal, 19, 345–367. https://doi.org/10.1111/j.1365-2575.2007.00291.x

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

Bravo, F. (2024). EU data cooperatives: L’ingresso delle cooperative di dati nell’ordinamento europeo. Giappichelli.

Bridle, J. (2022). Ways of being: Beyond human intelligence. Allen Lane.

Buongiorno, F. (2024). Fenomenologia delle reti neurali. Per un concetto polisemico di intelligenza (artificiale). In M. Galletti & S. Zipoli Caiani (Eds.), Filosofia dell’Intelligenza Artificiale. Sfide etiche e teoriche (pp. 45–62). Il Mulino.

Coeckelbergh, M. (2020). AI ethics. MIT Press.

Contini, M., Fabbri, M., & Manuzzi, P. (2006). Non di solo cervello: Educare alle connessioni mente–corpo–significati–contesti. Raffaello Cortina.

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Dario, N. (2014). On the concept of generativity. Formazione & insegnamento, 12(4), 83–94. https://ojs.pensamultimedia.it/index.php/siref/article/view/1613

Dreyfus, H. L. (1972). What computers can’t do. Harper & Row.

Dreyfus, H. L. (1992). What computers still can’t do: A critique of artificial reason. MIT Press.

Esposito, E. (2024). L’intelligenza degli algoritmi. Psiche, 2, 537–548. https://doi.org/10.7388/115432

Fabbri, M. (2022). Essere insegnanti essere genitori: La competenza comunicativa in educazione. FrancoAngeli.

Gallese, V., & Morelli, N. (2023). Cosa significa essere umani: Dialogo tra neuroscienze e filosofia. Raffaello Cortina.

Gardner, H. E. (2002). Mente e cervello: Nuove prospettive in educazione. In E. Frauenfelder & F. Santoianni (Eds.), Le scienze bioeducative: Prospettive di ricerca (pp. 177–187). Liguori.

Giaccardi, C., & Magatti, M. (2014). Generativi di tutto il mondo, unitevi! Manifesto per la società dei liberi. Feltrinelli.

Kitchin, R. (2014). Big data, new epistemologies and paradigm shifts. Big Data & Society, 1(1). https://doi.org/10.1177/2053951714528481

Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Penguin Books.

Ma, Z., Mei, Y., & Su, Z. (2023). Understanding the benefits and challenges of using large language model-based conversational agents for mental well-being support. AMIA Annual Symposium Proceedings, 2023, 1105–1114. https://doi.org/10.48550/arXiv.2307.15810

Maples, B., Cerit, M., & Vishwanath, A. (2024). Loneliness and suicide mitigation for students using GPT-3-enabled chatbots. npj Mental Health Research, 3, 4. https://doi.org/10.1038/s44184-023-00047-6

Margiotta, U. (2019). Editoriale. Responsabilità pedagogica e ricerca educativa: Intelligenza collaborativa, formazione dei talenti e tecnologie dell’artificiale. Formazione & insegnamento, 17(1), 13.

Marriott, H. R., & Pitardi, V. (2024). One is the loneliest number… Two can be as bad as one: The influence of AI friendship apps on users’ well-being and addiction. Psychology & Marketing, 41(1), 86–101. https://doi.org/10.1002/mar.21899

Mitchell, M. (2022). L’intelligenza artificiale. Einaudi.

Moore, P. V. (2019). The mirror for (artificial) intelligence: In whose reflection? Comparative Labor Law & Policy Journal, 41(1), 47–67. https://doi.org/10.2139/ssrn.3423704

Pasquinelli, M. (2023). The eye of the master: A social history of artificial intelligence. Verso Books.

Perrotta, C., Selwyn, N., & Ewin, C. (2024). Artificial intelligence and the affective labour of understanding: The intimate moderation of a language model. New Media & Society, 26(3), 1585–1609. https://doi.org/10.1177/14614448221075296

Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386–408. https://doi.org/10.1037/h0042519

Schüll, N. D. (2012). Addiction by design: Machine gambling in Las Vegas. Princeton University Press.

Selwyn, N. (2016). Education and technology: Key issues and debates (2nd ed.). Bloomsbury Academic.

Terranova, T. (2012). Attention, economy and the brain. Culture Machine, 13, 1–19. https://culturemachine.net/wp-content/uploads/2019/01/465-973-1-PB.pdf

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433

Vallortigara, G. (2023). Il pulcino di Kant. Adelphi.

Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. MIT Press.

Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. W. H. Freeman.

Wittrock, M. C. (1990). Generative processes of comprehension. Educational Psychologist, 24, 345–376. https://doi.org/10.1207/s15326985ep2404_2

Zao-Sanders, M. (2025). How people are really using gen AI in 2025. Harvard Business Review. https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025

Zittrain, J. L. (2006). The generative internet. Harvard Law Review, 119(7), 1974–2040. http://www.jstor.org/stable/4093608

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.

Published

2025-09-29

How to Cite

Astorri, G. (2025). The polysemy of generative artificial intelligence. Formazione & Insegnamento, 23(S1), 60–66. https://doi.org/10.7346/-feis-XXIII-01-25_10