Does Generative Artificial Intelligence render the Debate-Based Teaching Methodology Obsolete?

 

L’Intelligenza Artificiale Generativa rende obsoleta la metodologia didattica fondata sul Debate?

 

Matteo Giangrande

Università degli Studi “G. d’Annunzio” Chieti–Pescara (Italy) – matteo.giangrande@unich.it

https://orcid.org/0000-0002-7978-138

 

ABSTRACT

This article investigates whether generative artificial intelligence (AI), exemplified by tools like ChatGPT, renders the debate methodology obsolete. It explores the educational sector’s responses, from initial resistance to strategic adoption, and the profound impact on competitive debate. While AI enhances preparation through tools and iterative prompting strategies, the study emphasises the preservation of uniquely human tasks, such as rhetorical skill, critical reflection, and ethical reasoning. Introducing concepts like “bundles of tasks” and “cognitive artifacts,” it argues for the integration of AI as a complement rather than a substitute. The analysis underscores the need for balanced policies and critical literacy to harmonise technological innovation with the enduring value of creativity, collaboration, and autonomy in debate. Far from obsolescence, debate evolves into a dynamic framework blending human and artificial intelligence.

 

Questo articolo indaga se l’intelligenza artificiale generativa (IA), esemplificata da strumenti quali ChatGPT, renda obsoleta la metodologia didattica del debate. Esamina le reazioni provenienti dal settore educativo, che spaziano da una iniziale resistenza fino a una strategica integrazione, e l’impatto profondo sull’ambito del debate competitivo. Pur riconoscendo che l’IA potenzia la fase preparatoria attraverso strumenti sofisticati e strategie iterative di prompting, lo studio pone l’accento sulla salvaguardia di compiti autenticamente umani, quali l’abilità retorica, la riflessione critica e il ragionamento etico. Introducendo concetti quali “aggregazioni di compiti” e “artefatti cognitivi”, sostiene l’integrazione dell’IA come complemento, non come sostituto. L’analisi evidenzia la necessità di politiche equilibrate e di una competenza critica per armonizzare l’innovazione tecnologica con il valore durevole della creatività, della collaborazione e dell’autonomia nel debate. Lungi dal cadere nell’obsolescenza, il debate evolve in una dinamica sintesi di intelligenza umana e artificiale.

 

KEYWORDS

Generative Artificial Intelligence, Competitive Debate, Cognitive Artifacts, Critical Thinking, Educational Innovation

Intelligenza Artificiale Generativa, Debate competitivo, Artefatti cognitivi, Pensiero critico, Innovazione formativa

 

CONFLICTS OF INTEREST

The Author declares no conflicts of interest.

 

SUBMITTED

March 15, 2025

 

ACCEPTED

April 7, 2025

 

PUBLISHED

April 14, 2025

 

 


 

1. The responses of the educational sphere and competitive Debate communities to the advent of Generative Artificial Intelligence

 

The launch of ChatGPT in November 2022, attracting 100 million users within two months, disrupted educational practice. Agencies such as the New York Department of Education and the Los Angeles Unified School District initially blocked access, fearing an erosion of critical thinking (Singer, 2023). Critics, including Gillmor (Hern, 2022) and Herman (2022), warned that traditional assessments risk becoming obsolete. Yet universities like Minnesota and scholars such as Ethan Mollick (Wharton School) began experimenting with ChatGPT’s potential and limitations, advocating for its responsible use (Henriksen et al., 2023).

In 2023, some institutions revoked their bans. New York’s public schools adopted guidelines for ethical usage (Banks, 2023), while the Los Angeles Unified School District launched “Ed,” a student support chatbot. Concurrently, workshops for educators and plagiarism detection tools (e.g., GPTZero and Turnitin) became pivotal. Universities such as Harvard (2023) and MIT (2024) introduced courses and workshops in AI literacy and prompt engineering, whereas Vanderbilt and Eureka Labs (Karpathy, 2024) explored advanced AI applications.

By the 2024–2025 academic year, ChatGPT adoption had intensified. Stanford and Berkeley examined its benefits for conceptual understanding but raised ethical concerns. The education industry felt the impact: companies like Chegg reported losses due to free AI services (Ott, 2023). At the secondary level, the David Game College began pilot exam-preparation programmes (Carroll, 2024), further highlighting AI’s expansion into pre-university education.

The advent of ChatGPT has reshaped teaching by making it more interactive and personalised, yet it has also introduced challenges such as plagiarism and cheating. This turning point in educational digitalisation reconfigures critical and creative thinking, as well as academic integrity, necessitating new policies that reconcile innovation with educational ethics.

Competitive debate largely followed broader AI trends in education. As early as 2018, IBM’s Project Debater demonstrated persuasive writing, oral comprehension, and argument modelling capabilities; in 2019, it mounted a credible challenge in a live contest against Harish Natarajan, a world-class debating champion. By 2021, a Nature editorial cautioned against data biases and potential propaganda, and stressed the urgent need for regulations analogous to pharmaceutical oversight (Cressey, 2021).

The sole known regulatory framework thus far stems from the National Speech and Debate Association, aligning with the Biden administration’s executive order on safe AI. It permits AI-based research and brainstorming but prohibits citing AI-generated content directly and requires verifiable sources to preserve critical thinking and educational principles (Roy, 2023). To mitigate risks to critical thinking, training in inclusive AI literacy has also begun.

Debaters and coaches utilise AI to refine training and preparation. Platforms such as DebateMe.ai and Yoodli offer AI-driven simulations, while iterative prompting is taught to develop argument strategies (DebateUS.org, n.d.). Some coaches harness AI to simulate scenarios, generate arguments, anticipate counterarguments, and provide real-time feedback, although others remain wary of dependency, limited critical engagement, and algorithmic biases (Baum & Villasenor, 2023).
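The iterative prompting mentioned above can be sketched as a simple generate–critique–revise loop. The helper below is illustrative only (the function names and prompt wording are my own, not part of DebateUS.org’s curriculum); `ask` stands in for any chat-model call, such as a thin wrapper around a commercial LLM API:

```python
# A minimal sketch of iterative prompting for argument development.
# Hypothetical helper names; `ask` is any function mapping a prompt string
# to a model reply string (e.g. a wrapper around a chat-completion API).

def iterative_prompting(ask, motion, rounds=3):
    """Refine a draft case by alternating generation and critique prompts."""
    draft = ask(f"Draft three arguments for the motion: {motion}")
    history = [draft]
    for _ in range(rounds - 1):
        # Ask the model to attack its own draft, then to repair it.
        critique = ask(f"List the weakest points in this case:\n{draft}")
        draft = ask(
            f"Rewrite the case to fix these weaknesses:\n{critique}\n\nCase:\n{draft}"
        )
        history.append(draft)
    return draft, history

# Usage with a stub model (a real deployment would call an LLM here):
def stub_model(prompt):
    return f"[model reply to: {prompt[:30]}...]"

final, history = iterative_prompting(
    stub_model, "This House would ban homework", rounds=3
)
```

The design point is that each round forces explicit critique before revision, which is precisely the debiasing habit the methodology aims to train.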

As Director of the National Debate Society of Italy (2020–2024) and a member of the Italian Ministry of Education’s Committee for the National Championships of Debate, I observed similar dynamics in Italy. In April 2024, I conducted the first coaching workshop on the ethical and strategic use of AI in debate teaching methodology (Giangrande, 2024).[1]

This paper—presented orally at the 2024 Summer School of the Italian Association for University Teaching—addresses the question central to debate educators: Does generative AI render the debate methodology obsolete? My answer, in the negative, follows this structure: §2 examines how Italian educators (2014–2022) viewed themselves as an avant-garde; §3 outlines the worst-case scenario posed by AI progress; §4 introduces the concepts of a “bundle of tasks” and ChatGPT as a “cognitive artefact”; §5 discusses irreplaceable tasks; §6 identifies emerging new tasks; and §7 provides heuristic conclusions. Finally, in §8, I shall respond to possible objections against my argument.

Drafted on 1 December 2024, with data current to 24 September 2024, this contribution acknowledges that AI’s rapid evolution may soon render some arguments outdated, necessitating a re-evaluation of the paradigm: the future interplay between AI and debate remains profoundly uncertain.

 

2. Debate enthusiasts as avant-garde educators

 

Between 2014 and 2022, Italian advocates of debate perceived themselves as innovative educators, introducing debate for transversal skills, establishing national school networks, and earning both institutional recognition and international achievements. Although there were attempts at innovation, OECD-PISA data (INVALSI, 2022) show a continued prevalence of teacher-centred, transmissive methods in Italy. Debate challenges this model by fostering active knowledge construction, student engagement, and critical analysis and reflection, aligning with European recommendations on key competences (De Conti & Giangrande, 2018). It promotes transversal skills such as critical thinking, debiasing, evidence-based argumentation, media literacy, persuasive communication, collaboration, and leadership, reinforcing reflective learning through practice, feedback, and self-assessment. Its interdisciplinary nature and dialectical exchange distinguish it from traditional pedagogy and other educational approaches (Akerman & Neale, 2011).

Networks such as “WeDebate” and the “Società Nazionale Debate Italia” broadened debate’s reach in schools and civil society through training, events, tournaments, and Erasmus projects, thereby consolidating a national community able to engage internationally and connect with global educational networks. Having adapted to both curricular and extracurricular contexts—including mini-debates and online debate during the pandemic—debate evolved from limited use to a more widespread presence. Notably, it was incorporated into the “Avanguardie Educative” initiative, acknowledged by the Ministry of Education via “DebateItalia” and the National Championships, and endorsed by Parliament and the European Commission (2020–2022). Successes at the 2024 European Schools Debating Championship further attest to a marked increase in competence levels.

 

3. AI and Debate methodology: worst-case scenario

 

3.1. The current state of Generative AI

 

By September 2024, generative AI stands at the nexus of technological progress, strategic vision, and extensive infrastructural investment, bolstered by advancements in deep learning and intensified competition among key players that amplify its transformative potential. Below are several key observations:

 

(1)   OpenAI’s o1 model (USD 20 per month) excels in STEM, scoring 83% on a qualifying exam for the International Mathematics Olympiad and 93% on Codeforces, aided by techniques such as reinforcement learning and chain-of-thought reasoning. Its o1-mini version, at roughly 80% lower cost, further broadens access.

(2)   OpenAI’s Five-Tier Classification ranks AI from conversational chatbots to Innovators (Level 4) and Organisations (Level 5). With o1 at Level 2, the company is developing technology for higher tiers (OpenAI, 2024).

(3)   Statements from Sam Altman and Ilya Sutskever: On 23 September 2024, Altman anticipated virtual experts capable of revolutionising science and societal systems, cautioning that insufficient infrastructure could hinder AI and fuel inequality. On 6 October 2023, Sutskever (2023) wrote on X: “if you value intelligence above all other human qualities, you’re gonna have a bad time”. The following month, Altman was briefly removed as CEO by the board; leaks emerged regarding Q* (subsequently “Strawberry,” finally “o1” in 2024); and October 2023 would become o1’s knowledge cut-off. Debate persists over Sutskever’s intended message.

(4)   Rivalry between OpenAI, Google, and others is intensifying. GAIIP (Microsoft, BlackRock, United Arab Emirates) is investing USD 100 billion in data centres and nuclear reactors, while Google designs compact reactors for AI applications (Microsoft, 2024).

(5)   Emergent Challenges:

a.      Although o1-mini democratises AI, insufficient infrastructure risks confining it to an elite;

b.     Q* raises ethical concerns;

c.      Dependence on energy and specialised chips necessitates nuclear reactors;

d.     Altman foresees autonomous AI yielding “massive prosperity.”

(6)   The debate on AI in education oscillates between apocalyptic and integrative visions. Critics warn of dehumanisation, homogenisation, and technological dependence (Holmes et al., 2021), whereas optimists highlight AI’s potential to democratise, personalise, and expand educational access (Bulathwela et al., 2021). An integrated approach aims to balance innovation with human interaction by employing technology for personalisation and accessibility, albeit contingent upon adequate infrastructure. Bias, inequality, and diminished empathy, creativity, and autonomy remain critical risks. Ultimately, successful AI integration should emphasise and enhance human learning (Ifenthaler et al., 2024).

 

3.2. Worst-case scenario

 

Unrestrained AI integration risks dehumanising education, displacing human capabilities and reducing learning to a mechanistic, meaningless process. In debate, the perceived “infallible authority” of AI curtails autonomy and creativity, imposing algorithmic frameworks that sacrifice diversity and originality. Critical thinking withers, replaced by prepackaged answers that foster passivity and superficiality.

Unequal AI access exacerbates educational disparities, undermining democracy and entrenching inequality (Vosberg, 2024). In disadvantaged settings, inadequate human support heightens technological dependency. Without rigorous regulation and global investment, AI may reinforce structural inequalities and market-driven pressures, ultimately dehumanising and trivialising education. The worst-case scenario envisions education reduced to passive imitation, stripped of authenticity, empathy, and transformative capacity.

 

4. The notions of bundles of tasks and cognitive artefact

 

4.1. Bundles of tasks

 

Ethan Mollick, a Wharton School professor and leading figure on innovation and AI’s impact on education and work, explores AI integration as co-worker, co-teacher, and coach in Co-Intelligence and his newsletter One Useful Thing. He examines real-world examples of human–machine collaboration, highlighting the need to master strategies that amplify human skills while minimising risks.

A critic of complete professional replacement by AI, Mollick proposes the concept of “bundles of tasks,” wherein AI automates repetitive and analytical functions, preserving activities requiring creativity, empathy, and judgement. In the legal field, for instance, AI manages research tasks, freeing professionals for higher-order duties and essential human prerogatives such as negotiation and ethics.

In education, Mollick experiments with AI as both tutor and mentor for personalised learning, warning that a passive or “crutch-like” approach undermines autonomy and critical thinking. He offers seven classroom strategies for AI integration—AI tutor, AI coach, AI mentor, AI teammate, AI tool, AI simulator, and AI student—to balance the benefits and risks, mitigating biases, errors, and overreliance. Underlying his perspective is the view that AI serves as a complementary tool, enhancing work and education without supplanting human competence. Mollick thus advocates for strategic and informed integration that safeguards creativity and autonomy, requiring ongoing adaptability and a deep understanding of the technology’s possibilities and limits to optimise synergies between human and artificial intelligence.

 

4.2. Cognitive artefact

 

Cassinadri (2024) examines ChatGPT as a “cognitive artefact” within social epistemology and extended cognition theory. In ChatGPT and the Technology-Education Tension, he classifies cognitive artefacts into three categories: substitutive, tools that reduce cognitive effort; complementary, devices that augment capabilities without replacing them; and constitutive, essential elements enabling otherwise unachievable activities. He describes ChatGPT as a “computational cognitive artefact” capable of playing substitutive roles (e.g., drafting texts), complementary roles (creative support for complex projects), or constitutive roles (real-time automated translation), depending on context and usage.

Cassinadri highlights the need for epistemically virtuous usage, one that integrates ChatGPT strategically to enhance learning without undermining cognitive autonomy (see also King, 2023).

5. Irreplaceable tasks in Debate methodology

 

Assuming that debate methodology can be conceptualised as a “bundle of tasks,” which tasks are involved? When understood this way, debate comprises an interconnected array of indispensable components: (a) topic exegesis, entailing analytical inquiry and decoding of hidden implications; (b) information gathering, grounded in critical scrutiny of sources to build a solid epistemic basis; (c) argument construction, aimed at rigorous, rhetorically robust discourse; (d) rhetorical proficiency, refined through tone, diction, expressiveness, and nonverbal communication; (e) simulation-based preparation, essential for direct confrontation; (f) strategic coordination, critical for team synergy; (g) refutation, rigorously dismantling opposing positions; (h) synthesis, to distil and amplify persuasiveness; (i) critical metacognition, centred on self-evaluation and growth; and (j) ethical development, underpinning a morally sound dialectic. This multidimensional framework elevates debate as a tool for cultural and cognitive advancement, transcending mere oratory and shaping transversal skills and critical awareness.

As a cognitive artefact, ChatGPT occupies three roles in debate: (1) substitutive, automating research, synthesis, and initial brainstorming; (2) complementary, fostering regulated creativity and providing targeted, instantaneous feedback during simulations; and (3) constitutive, particularly in large-scale data analysis or real-time translation. Its effectiveness hinges on students’ critical engagement, which is vital to avert passive dependency.
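As an illustration only, the mapping between Cassinadri’s role taxonomy and these debate tasks can be written as a small lookup table (task names and role assignments paraphrase the article’s own lists; none of this is an established API):

```python
# Illustrative only: the role taxonomy of §4.2 applied to selected debate
# tasks of §5. Labels are paraphrases of the article, not a standard schema.

ROLES = {"substitutive", "complementary", "constitutive", "human-only"}

TASK_ROLES = {
    "information gathering": "substitutive",   # automated research and synthesis
    "argument construction": "complementary",  # regulated creative support
    "simulation": "complementary",             # instant feedback as sparring partner
    "real-time translation": "constitutive",   # enables otherwise unachievable work
    "rhetorical delivery": "human-only",       # tone, nonverbal cues, empathy
    "ethical judgement": "human-only",
}

def ai_may_assist(task):
    """Return True if the analysis above allows AI support for the task."""
    role = TASK_ROLES[task]
    assert role in ROLES
    return role != "human-only"
```

The table makes the paper’s claim inspectable at a glance: only the last two rows are closed to AI, and it is exactly those rows that §5 identifies as debate’s irreplaceable core.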

ChatGPT’s involvement varies according to the competitive debate format. In face-to-face impromptu debates without technological support, participants rely solely on personal cognitive resources, emphasising mental agility, active listening, emotional regulation, and eloquence. While ChatGPT remains inaccessible during the event itself, it proves invaluable in the preparatory phase, enhancing analytical and synthetic abilities. In face-to-face prepared debate, extended timelines allow for in-depth research and strategic collaboration, blending critical thinking with technology: ChatGPT can assist in data gathering, scenario simulation, and rebuttal development but, if used uncritically, may erode autonomy and authenticity. In online impromptu debate—though it increases economic accessibility and overcomes geographical barriers—technological reliance and plagiarism risks grow, potentially undermining the integrity and depth of dialectical processes.

Even in a worst-case scenario, debate preserves certain irreplaceable tasks. (1) Rhetorical and oratory skills—rooted in live, voice-based interaction—are indispensable for modulating tone, nonverbal communication, and empathy. (2) Interpersonal collaboration and conflict management rely on social dynamics that AI cannot replicate. (3) Metacognitive development, predicated on self-regulation and introspection, remains inherently human, as do (4) intellectual integrity and intercultural understanding, which require direct engagement that values difference and diverse perspectives. It is difficult to find another educational methodology offering such a wealth of stimuli.

That worst-case scenario, however, appears unrealistic, since many tasks are readily integrated without being indiscriminately replaced. (1) ChatGPT accelerates source access for information-gathering; (2) it provides adaptable frameworks for building arguments; (3) it functions as a virtual sparring partner in simulated practice, enhancing dialectical preparation.

The strength of debate lies in oral, live interaction, the core of “giving and asking for reasons”: it verifies comprehension, hones critical acumen, and refines analytical skills in persuasive communication. Ethical integration of ChatGPT must enhance rather than supplant learners’ critical autonomy. Investment in face-to-face formats remains vital to safeguarding debate’s oral and social dimensions. Central to this endeavour is instruction in critical AI use, prioritising rhetorical, collaborative, and metacognitive skills that artificial intelligence cannot replicate.

 

6. Emerging Tasks

 

The integration of AI in debate redefines human roles, introducing new domains such as metacognitive self-monitoring, strategic planning, and quantitative analysis. While it augments cognitive capabilities, it also raises complex epistemological and ethical questions. AI revolutionises self-monitoring by promoting rigorous debiasing and self-corrective reflection, employing data processing to identify biases, fallacies, and argumentative weaknesses (Grassini, 2023).

In competitive debate preparation, AI simulates intricate scenarios, anticipates opponents’ strategies, suggests innovative countermeasures, and serves as a sparring partner to enhance cognitive agility and improvisation. It tests propositions against diverse debating styles and builds personalised hypothetical scenarios. It further refines pathos with compelling narratives and emotional data, maximising audience engagement (Baidoo-Anu & Owusu Ansah, 2023).

In parallel, it advances platform-based approaches. Archival technologies generate perpetual records of performances, tracking skills development; simultaneous translations expand transnational accessibility; and archived debates, integrated into curricula, serve as paradigm tools for learning and research.

The AI–debate nexus opens unprecedented empirical pathways. Advanced metrics analyse dialectical configurations and strategic patterns; investigations examine cognitive, emotional, and social impacts; and comparisons uncover disparities and congruences between human and AI-mediated debates. Still, ethical challenges arise: safeguarding creativity and intellectual integrity, ensuring equitable access, and mitigating algorithmic biases to preserve fairness and authenticity in decision-making (Kumar et al., 2024).

 

7. Heuristic conclusions

 

The integration of generative AI into both educational and competitive debate heralds a historic opportunity to redefine critical education by broadening cognitive horizons and reinforcing transversal skills. Three points merit emphasis:

 

(1)   As a microcosm of cognitive, social, and ethical processes, debate preserves irreplaceable features—emotional persuasion, situational adaptability, and empathy—that demand the distinctiveness of human thought and the intensity of interpersonal interaction. In a landscape saturated with automated written, audio, and video content, it reaffirms authenticity and critical discernment as essential values.

(2)   Far from constituting a threat, AI serves as a catalyst for new tasks, enhancing human capabilities through simulations, strategic analyses, and bias detection. Yet its ethical integration requires safeguarding intellectual autonomy and avoiding passive dependency.

(3)   In the era of AI, critical literacy is indispensable: students and educators must learn with and about AI, examining its limitations, biases, and potential. Within this perspective, debate becomes an epistemic laboratory probing the boundaries between human autonomy and artificial cognitive assistance.

 

AI thus inaugurates an interdisciplinary “bundle of tasks” that expands transversal competences, affirming debate as a refined educational method. The challenge lies in harmonising technological power with the uniqueness of human thought so that AI elevates—rather than replaces—the transformative potential of education.

 

8. Replies to potential objections against the argument

 

The core logical structure of the argument, as articulated succinctly in this article, is as follows:

 

– If an educational practice necessarily involves tasks that cannot be automated, it possesses an irreducibly human core resistant to AI.

– Debate necessarily involves tasks that cannot be automated.

∴ Debate necessarily possesses an irreducibly human core resistant to AI.

– If debate possesses an irreducibly human core, then AI can only serve integrative or complementary functions with respect to this core.

∴ In debate, AI can only serve integrative or complementary functions relative to the irreducibly human core.

– If AI can only serve integrative or complementary functions, then ChatGPT can enhance but not fully replace irreducibly human tasks.

– ChatGPT can indeed enhance human cognitive tasks but cannot fully replace them.

∴ In debate, ChatGPT can enhance but not replace the irreducibly human core.

– If ChatGPT enhances without replacing the irreducibly human core, then its responsible integration does not nullify the educational value intrinsic to debate.

∴ In debate, the responsible integration of ChatGPT does not nullify its essential educational value.

 
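The chain just stated amounts to repeated applications of modus ponens. A compact propositional rendering (abbreviations are mine: A = debate necessarily involves non-automatable tasks; H = debate has an irreducibly human core; C = AI serves only integrative or complementary functions; E = ChatGPT enhances without fully replacing; V = responsible integration preserves debate’s educational value):

```latex
\[
\begin{array}{lll}
1. & A \rightarrow H & \text{premise} \\
2. & A               & \text{premise} \\
3. & H               & \text{from 1, 2 (modus ponens)} \\
4. & H \rightarrow C & \text{premise} \\
5. & C               & \text{from 3, 4} \\
6. & C \rightarrow E & \text{premise} \\
7. & E               & \text{from 5, 6} \\
8. & E \rightarrow V & \text{premise} \\
9. & V               & \text{from 7, 8}
\end{array}
\]
```

Rendered this way, the objections that follow can be read as attacks on specific premises: §8.1 targets premise 2, §8.2 premise 4, and §8.3 premise 8.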

With respect to this argument, I can foresee three sets of objections: 1) concerning the purported “irreducibility” of human debate; 2) concerning the scope and actual role of AI; 3) concerning the educational impact and the responsibility of its use. Below I present each objection in brief and propose equally brief possible responses.

 

8.1. On the purported “irreducibility” of human debate

 

One can refute the argument proposed in this paper by arguing that it is not logically warranted to conclude that what cannot be automated today will remain so indefinitely. There is no proof of a “core” beyond AI’s reach. Even empathy, creativity, and interpretative nuances could potentially be replicated through further advances. AI debate demonstrations offer no evidence of insurmountable barriers; without proof of an irreducible core, the thesis of “enhancement without replacement” is weakened.

A possible response to this kind of refutation could highlight that while artificial intelligence can generate text that appears creative or empathetic, it merely simulates these attributes: it lacks consciousness, intentionality, and the phenomenological dimension inherent to authentic human debate—an act intrinsically relational, social, and ethically charged. This qualitative distinction is not contingent but intrinsic, rooted in AI’s fundamental nature as statistical pattern-processing rather than sentient subjectivity. Indeed, debating is not merely about producing formally correct arguments; it entails agency: deciding whether and how to argue, listening to one’s opponent, and reacting on relational and pragmatic planes (tone of voice, body language, emotional states, historical-cultural context), all while bearing responsibility for what is stated. Such actions, intrinsically tied to subjective situatedness, cannot originate from a machine inherently devoid of subjective participation. Moreover, it is not a matter of “today versus a future technology.” The distinction is ontological rather than temporal: human cognition and experience embody an irreducible dimension beyond mere computational input-output operations. AI may convincingly mimic empathetic or creative expressions, yet inherently lacks the subjective experiential essence constitutive of these phenomena (Findlay et al., 2024). Thus, the phenomenon of “debating AI” merely demonstrates sophisticated linguistic competence, reinforcing rather than refuting the fundamental distinction between mechanical statement generation and authentic conscious engagement.

 

8.2. On the scope and actual role of AI

 

Another point of rebuttal that can be advanced to the argument presented in this article is that even conceding a human core, AI might redefine the borders of debate, taking over the majority of argumentative functions and thus relegating the human element to a marginal role, thereby weakening the notion of an irreducible core.

This refutation can be countered in the following way. Content generation constitutes merely one aspect of debate; its essence resides in contextual interpretation, real-time judgement, emotional attunement, persuasive sensitivity, and adaptive responsiveness to interlocutors and audience. Though AI may facilitate preliminary tasks such as source retrieval or strategic planning, the overarching direction—intentional judgement, critical discernment, and responsible deployment of knowledge—remains intrinsically human. Regardless of its sophistication, AI functions solely as an adjunct to human capabilities: expediting information retrieval, proposing argumentative frameworks, and facilitating preparatory tasks. Nevertheless, the sense-making dimension—determining one’s argumentative aims, rationales, and ethical-rhetorical strategies—remains exclusively the debater’s province. Though procedural tasks diminish, interpretative judgement and responsibility are not diminished but accentuated. Paradoxically, even were AI to perform nearly all executive tasks, the residual fraction—conscious, contextually grounded choice—would remain indispensable to authentic debate. Absent human intentionality and engagement, debate ceases to be genuinely formative, degenerating into mere automated text production. Hence, the irreducible core’s significance is not eroded by technological advancement but rather accentuated as its indispensable foundation.

 

8.3. On educational impact and responsibility of use

 

The third objection one might raise against the argument proposed here is that integrating AI could alter the formative nature of debate, inducing students to rely on ready-made solutions; even with “responsible and regulated” usage, there remains the risk of undermining deep learning and the educational value inherent in the practice.

We reply to this type of refutation that integrating AI does not sanction unrestrained delegation of tasks by students to technology. Rigorous educational contexts impose stringent limits: while AI supports inquiry and ideation, ultimate analysis, discussion, and appraisal remain firmly within students’ purview (Elstad, 2024). Within this framework, AI functions as an exploratory instrument—expanding source access and hypothesis testing—never supplanting deliberate reflection and individual judgement. Precisely because AI provides instant solutions, it necessitates and enables training students in critical scrutiny—source verification, relevance evaluation, and bias detection. Far from being displaced, evaluative and reflective processes are strengthened by a tool proficient in generating ostensible truths. One frequently gains deeper insight from interacting critically with generative AI’s imperfect outputs than from mechanical database queries. What renders debate such a potent educational vehicle is the personal elaboration that occurs: taking a stance, defending it, confronting other viewpoints, and negotiating meanings and priorities. The expressive and relational dimension remains intact; judicious AI use stimulates students to critically assess, reinterpret, and employ its outputs as a starting point rather than a conclusive endpoint. The pedagogical core—cultivating critical thought and rigorous argumentation—endures precisely because it is anchored in the learner, not algorithmic computation.

 

In conclusion, notwithstanding objections regarding AI integration into debate methodology, the fundamentally human core of argumentation—the practice of giving and receiving reasons—remains irreplaceable: it resides in conscious intentionality, relational sensitivity, and moral accountability—qualities inherently inaccessible to artificial intelligence. Far from eroding debate’s pedagogical value, tools like ChatGPT may expand its horizon, enriching critical discourse and reflective inquiry. While AI enhances informational access and cognitive stimulation, educators must incorporate it judiciously, safeguarding independent thought and empathetic discernment.

This article thus asserts that debating as pedagogical practice—comprising attentive listening, meaning negotiation, and intentional discourse—is not obviated by advancements in AI technologies but rather finds in them a strategic ally, contingent upon preserving the irreducibly human essence of argumentation. Future challenges will entail achieving a judicious balance between technological possibility and pedagogical exigency: a dynamic equilibrium which, if navigated with discernment and ethical awareness, reasserts debate’s potency as a vehicle for critical education in the age of artificial intelligence.

This stance currently strikes me as the most compelling, though it is not definitive.


 

References

 

Akerman, R., & Neale, I. (2011). Debating the evidence: An international review of current situation and perceptions. CfBT Education Trust in association with the English-Speaking Union.

Baidoo-Anu, D., & Ansah, S. O. (2023). Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning. Journal of AI, 7(1), 52–62. https://doi.org/10.61969/jai.1337500

Banks, D. C. (2023, May 18). ChatGPT caught NYC schools off guard. Now, we’re determined to embrace its potential. Chalkbeat. https://www.chalkbeat.org/newyork/2023/5/18/23727942/chatgpt-nyc-schools-david-banks/

Baum, J., & Villasenor, J. (2023, May 8). The politics of AI: ChatGPT and political bias. Brookings. https://www.brookings.edu/articles/the-politics-of-ai-chatgpt-and-political-bias/

Bulathwela, S., Pérez-Ortiz, M., Holloway, C., & Shawe-Taylor, J. (2021). Could AI democratise education? Socio-technical imaginaries of an EdTech revolution. arXiv. https://doi.org/10.48550/arXiv.2112.02034

Carroll, M. (2024, August 31). UK’s first ‘teacherless’ AI classroom set to open in London. Sky News. https://news.sky.com/story/uks-first-teacherless-ai-classroom-set-to-open-in-london-13200637

Cassinadri, G. (2024). ChatGPT and the technology-education tension: Applying contextual virtue epistemology to a cognitive artifact. Philosophy and Technology, 37(14), 1–28. https://doi.org/10.1007/s13347-024-00701-7

Cressey, D. (2021, April 1). Am I arguing with a machine? AI debaters highlight need for transparency. Nature. https://www.nature.com/articles/d41586-021-00867-6

DebateUS.org. (n.d.). How to write a PF debate case with AI in 30 minutes or less (and what to do with the saved time). https://debateus.org/how-to-write-a-pf-debate-case-with-ai-in-30-minutes-or-less-and-what-to-do-with-the-saved-time/

De Conti, M., & Giangrande, M. (2018). Debate. Teoria, pratica, pedagogia. Pearson.

Elstad, E. (2024). AI in education: Rationale, principles, and instructional implications. arXiv. https://doi.org/10.48550/arXiv.2412.12116

Findlay, G., Marshall, W., Albantakis, L., David, I., Mayner, W. G. P., Koch, C., & Tononi, G. (2024). Dissociating artificial intelligence from artificial consciousness. arXiv. https://doi.org/10.48550/arXiv.2412.04571

Giangrande, M. (2024, April 3). Nuove frontiere della ricerca nel Debate. [Presentation for the Rete Wedebate]. https://www.sn-di.it/wp-content/uploads/2024/12/Nuove-frontiere-della-ricerca-nel-Debate.pdf

Harvard University. (2023). Artificial intelligence courses. https://pll.harvard.edu/subject/artificial-intelligence

Henriksen, D., Woo, L. J., & Mishra, P. (2023). Creative uses of ChatGPT for education: A conversation with Ethan Mollick. TechTrends, 67(4), 595–600. https://doi.org/10.1007/s11528-023-00862-w

Herman, D. (2022, December 9). The end of high-school English. The Atlantic. https://www.theatlantic.com/technology/archive/2022/12/openai-chatgpt-writing-high-school-english-essay/672412/

Hern, A. (2022, December 4). AI bot ChatGPT stuns academics with essay-writing skills and usability. The Guardian. https://www.theguardian.com/technology/2022/dec/04/ai-bot-chatgpt-stuns-academics-with-essay-writing-skills-and-usability

Holmes, W., Bialik, M., & Fadel, C. (2021). Artificial intelligence in education: Addressing ethical challenges in education. AI and Ethics, 1(2), 123–134. https://doi.org/10.1007/s43681-021-00096-7

IBM Research. (2018). IBM debating technologies. https://research.ibm.com/haifa/dept/vst/debater.shtml

Ifenthaler, D., Majumdar, R., Gorissen, P., Judge, M., Mishra, S., Raffaghelli, J., & Shimada, A. (2024). Artificial intelligence in education: Implications for policymakers, researchers, and practitioners. Technology, Knowledge and Learning, 29(4), 1693–1710. https://doi.org/10.1007/s10758-024-09747-0

INVALSI. (2022). Indagine internazionale OCSE PISA 2022: I risultati degli studenti italiani in matematica, scienze e lettura. FrancoAngeli.

Karpathy, A. (2024, July 16). Introducing Eureka Labs. https://eurekalabs.ai/

King, M., & ChatGPT. (2023). Why teachers should explore ChatGPT’s potential. Nature. https://doi.org/10.1038/d41586-023-03505-5

Kumar, S., Verma, A. K., & Mirza, A. (2024). Digital revolution, artificial intelligence, and ethical challenges. In Digital transformation, artificial intelligence and society (pp. 161–177). Springer. https://doi.org/10.1007/978-981-97-5656-8_11

Microsoft. (2024, September 17). BlackRock, Global Infrastructure Partners, Microsoft, and MGX launch new AI partnership to invest in data centres and supporting power infrastructure. Microsoft News Center. https://news.microsoft.com/2024/09/17/blackrock-global-infrastructure-partners-microsoft-and-mgx-launch-new-ai-partnership-to-invest-in-data-centers-and-supporting-power-infrastructure/

MIT News. (2024, August 27). First AI + Education summit is an international push for ‘AI fluency’. https://news.mit.edu/2024/first-ai-education-summit-0827

Mollick, E. (2024). Co-intelligence: Living and working with AI. Penguin Random House.

Mollick, E., & Mollick, L. (2023). Assigning AI: Seven approaches for students, with prompts. arXiv. https://doi.org/10.48550/arXiv.2306.10052

OpenAI. (2024). Learning to reason with LLMs. OpenAI Blog. https://openai.com/index/learning-to-reason-with-llms/

Ott, M. (2023, May 2). Chegg’s stock plunges on fears of competition from ChatGPT. AP News. https://apnews.com/article/chegg-stock-shares-chatgpt-ai-a877423bd4a8f67b1494363fc81cf52a

Roy, R. (2023, September 8). Artificial, but not exactly intelligent: AI in debate & NSDA regulations. Ethos Debate. https://www.ethosdebate.com/guest-post-artificial-but-not-exactly-intelligent-ai-in-debate-nsda-regulations-by-rik-roy/

Singer, N. (2023, August 24). Despite cheating fears, schools repeal ChatGPT bans. The New York Times. https://www.nytimes.com/2023/08/24/business/schools-chatgpt-chatbot-bans.html

Sutskever, I. [@ilyasut]. (2023, October 6). if you value intelligence above all other human qualities, you’re gonna have a bad time [Tweet]. X. https://x.com/ilyasut/status/1710462485411561808

Vosberg, S. (2024). The potential impact of artificial intelligence on equity and inclusion in education (OECD Education Working Papers, No. 23). OECD Publishing. https://www.oecd.org/en/publications/the-potential-impact-of-artificial-intelligence-on-equity-and-inclusion-in-education_15df715b-en.html



[1] A few remarks about my “first-hand” point of view on the matter. In my capacity as a high school teacher of History and Philosophy—where the debate format has long served as an evaluative vehicle enabling students to develop transversal competencies such as dialectical discernment, collaborative synergy, and critical thinking—I have, over the past two years, directly observed the emergence of new tensions precipitated by the sudden, widespread introduction of generative artificial intelligence tools. Although these tools provide immediate cognitive ‘scaffolding’ during individual preparation, they have compelled a fundamental rethinking of the traditional balance between independent study, primary-source research, and collaborative simulation. Consequently, I have found it essential to shift substantial portions of preparatory work (ordinarily conducted at home) into the supervised classroom context, where the teacher’s presence and the counterpoint of peer discussion can highlight any potential diminutions in conceptual understanding or emotional engagement, thereby forestalling undesirable forms of automated dependency. Concurrently, in my roles as a debate coach and judge—alongside organising national competitions—I have been obliged to maintain a delicate equilibrium between the stimulating affordances of generative software (enabling simulated sparring sessions or instantaneous argument-mapping) and the risk of an unbridled turn to copy-and-paste automation that could erode personal growth and reflective engagement. Indeed, the propensity, particularly among certain high-performing debaters, to rely on AI as a convenient surrogate for the coach—thus bypassing the labour of sustained exploration and dialectical exchange—has necessitated the introduction of anti-fraud protocols, designated moments of ethical reflection on safeguarding intellectual autonomy, and heightened vigilance on the part of adjudicators.
These measures aim to dispel the lingering uncertainty surrounding the authenticity of human contributions, an uncertainty that, if left unaddressed, might compromise the very essence of the educational experience and the cultivation of critical thinking that debate aspires to foster.