A Freirean Reckoning with AI and Capitalism in the Pursuit of Humanising Education across Technology-Mediated Contexts

Tiffany Karalis Noel and William Liang

2026 VOL. 13, No. 1

Abstract: This commentary examines the integration of generative artificial intelligence (AI) into contemporary education through a Freirean lens, arguing that AI does not disrupt schooling so much as extend a neoliberalised system driven by efficiency, standardisation, and performative learning. Far from signalling a pedagogical shift, AI thrives in environments where intellectual struggle and dialogic inquiry have long been subordinated to outcomes-based accountability. Situating this critique within the field of open and distance learning (ODL), this article highlights how AI may exacerbate pressures towards automation, scale, and depersonalised instruction, particularly in development contexts where ODL is positioned as a mechanism for widening access. Through classroom and ODL-based examples, this commentary contends that the ethical task is not to reject AI but to reimagine its use in ways that deepen humanisation, cultivate agency, and strengthen relational forms of learning. It concludes with implications for ODL practitioners committed to equity, access, and social transformation.
Keywords: Artificial Intelligence, humanising education, technology-enabled learning

Introduction

This commentary argues that generative Artificial Intelligence does not represent a pedagogical disruption to contemporary education but rather an intensification of neoliberal logic that has long prioritised efficiency, standardisation, and performative learning over dialogic inquiry and humanisation. Drawing on Freirean critical pedagogy, the analysis situates AI within broader structures of power, reproduction, and control, with particular attention to how these dynamics manifest themselves in open and distance learning (ODL) systems and learning-for-development contexts. By foregrounding dialogic humanisation, critical consciousness, and praxis, the commentary proposes that the ethical task is not to reject AI, but to reimagine its use in ways that deepen relational learning, learner agency, and educational justice.

When Freire (1970) wrote of the teacher-student contradiction and urged a pedagogy of the oppressed grounded in dialogic humanisation, or learning through mutual dialogue that affirms the humanity and agency of both teacher and student, he was pointing to a truth that continues to haunt our educational systems today. He understood that schooling could either cultivate critical consciousness, as when students analyse how historical redlining policies perpetuate contemporary inequalities, or function as a mechanism of social reproduction, maintaining power structures and positioning learners as passive recipients of predetermined knowledge through, for example, test-based tracking that reinforces existing hierarchies under the guise of merit. In learning-for-development contexts, where education is framed as a pathway to social mobility, democratic participation, and collective wellbeing, these tensions carry especially high stakes, as pedagogical practices can either exacerbate structural inequities or support emancipatory transformation.

For practitioners in ODL, these concerns carry particular urgency. ODL provision often depends on large enrolments, limited instructional time, and diverse learner needs across dispersed regions. Generative AI enters this environment with the promise of rapid support and increased capacity, yet it can also intensify existing tendencies toward automation and reduced human presence. In development contexts, where communication, accompaniment, and culturally responsive teaching are essential for persistence, the risk of further depersonalisation is especially pronounced. This commentary therefore invites ODL practitioners to consider how AI might be used not only to manage practical constraints but also to sustain humanising interactions, foster dialogue, and strengthen relationships that support learner growth. Such concerns sit at the centre of ODL practice, where student support, tutor presence, and meaningful interaction have long been recognised as conditions for equitable participation. In this sense, AI does not transform ODL so much as amplify systemic pressures already shaping its pedagogical landscape.

AI as an Extension of Neoliberal Schooling

In the present moment, the ascendance of generative AI does not so much fracture the foundations of education as reveal the brittle scaffolding upon which those foundations were built. Although AI’s classroom presence may appear to depart from traditions of human-centred instruction, teacher-student dialogue, and cultivated understanding, it is more accurately read as a logical extension of an era governed by efficiency, compliance, and outcomes-obsessed capitalist rationality (Fox, 2024). From test-based teacher evaluations to the prioritisation of career readiness over civic imagination, the integration of AI—whether for essay writing, automated feedback, or lesson planning—reflects how schooling, and, increasingly, technology-enabled and distance learning initiatives framed as vehicles for widening access, have become subsumed within systems of quantification, control, commodification, and performance-based accountability (Apple, 2006; Biesta, 2010; Giroux, 2014).

Within ODL practice, these arguments are not abstract. They are experienced through institutional demands for rapid feedback turnaround, dashboard-driven monitoring of learner progress, and standardised content delivery that leave limited space for dialogue, relational presence, or pedagogical responsiveness (Anderson & Dron, 2011; Tait, 2014). For ODL educators, this reshapes the nature of teaching itself, shifting professional judgement toward compliance with metrics rather than responsiveness to learners, and constraining opportunities for the slow, relational work through which trust, motivation, and critical engagement are cultivated. In this sense, AI appears at home in contemporary classrooms not because it improves education but because it performs fluently within a system already trained to value efficiency over encounter.

Long before ChatGPT could draft a five-paragraph essay in seconds, education had been reorganised to align with the temporal logic of capital. Learning was compressed into standardised intervals such as forty-minute class periods and bell schedules, while outcomes were codified into metrics such as test scores, grade point averages, and graduation rates. Intellectual growth, once understood as a gradual and nonlinear process of discovery, was increasingly sacrificed for deliverables that could be quantified, ranked, and monetised. Block scheduling displaced exploratory learning, multiple-choice benchmarks displaced open-ended dialogue, and so on. As Biesta (2010) argues, the language of learning outcomes became a grammar of control that reduced education’s relational and ethical dimensions to instrumental transactions. AI-assisted writing tools simply accelerate this logic by allowing students to generate polished essays without engaging in the difficult interior labour that deep thinking requires.

What emerges, therefore, is not necessarily the absence of learning altogether, but a convincing performance of it. For instance, a student may submit an eloquent essay on climate change that cites multiple sources and follows academic conventions but offers limited traces of the student’s own questions, intellectual conflicts, or evolving stance on the issue. The performance of learning remains intact, even as its transformative substance recedes.

The Simulation of Learning in Outcomes-Based Education

Having situated AI within the logic of neoliberal schooling, we now turn to how this same logic reshapes learning itself, producing not the absence of learning but its simulation. The problem is not that AI introduces these conditions, but that it intensifies an ecosystem in which struggle, inquiry, and doubt have already been marginalised. Rather than inhabiting uncertainty, students are trained to align with predetermined standards, reinforcing a culture in which correctness displaces inquiry.

Teachers, already burdened by administrative demands and constrained by mandated curricula, are afforded little time or institutional permission to cultivate creativity, resistance, or intellectual risk. In this vacuum, AI becomes seductive precisely because it offers a streamlined pathway to the appearance of understanding in an ecosystem that rewards those who are fluent, organised, and efficient. Yet, as Freire (1970) reminds us, the appearance of liberation may be the most dangerous form of oppression. When students come to equate polished text with independent thought, they unwittingly internalise a model of education that confuses performance with learning. These dynamics are not new, but they become more acute within open and distance learning contexts, where scale, automation, and limited relational presence further intensify the conditions under which learning is simulated rather than lived.

Open and Distance Learning as a High-Stakes Site

The simulation of learning is not confined to traditional classrooms. In many ODL contexts, particularly those serving geographically dispersed, economically marginalised, or time-constrained learners, the pressures toward scale and efficiency are even more acute. Consider the ODL instructor responsible for hundreds of students across multiple regions who turns to AI-generated feedback to meet institutional turnaround expectations. The comments may be coherent and formally correct, but the relational dimension that distance learners often depend on to persist, such as being seen, recognised, and accompanied, recedes.

In another case, an AI-powered mobile learning platform in a low-bandwidth region automatically generates lesson explanations whose linguistic and cultural assumptions are detached from local knowledge systems, reinforcing an extractive rather than dialogic model of learning. Together, these examples reflect decades of research cautioning that distance learning systems can widen participation while simultaneously reproducing inequities when human presence is subordinated to technological delivery (Anderson & Dron, 2011; Tait, 2014).

Students increasingly recognise that school often demands performance rather than authentic engagement, and teachers are correspondingly positioned as managers of appearance rather than facilitators of intellectual risk. As Sarofian-Butin (2025, March 19) observes, educators become “marks” in a staged performance of circulating through classrooms, giving feedback, and sustaining a system they know is fraying beneath the surface. Against this backdrop, Freire’s concept of praxis offers a necessary ethical and pedagogical interruption.

Praxis, Humanisation, and the Limits of Automation

These scenarios reveal the point at which pedagogy yields to productivity and the human element of education is deferred in favour of operational efficiency. Freire’s (1970) concept of praxis, or the dialectical unity of reflection and action, offers a critical counterpoint to this automation of thought. In a truly liberatory classroom, students are not only asked to produce knowledge but to locate themselves within it. They might explore gentrification by interviewing local residents or examine climate justice through the lens of their community’s access to clean water. In such spaces, learning unfolds through uncertainty, failure, contradiction, and transformation rather than through rehearsed correctness.

Yet the dominant structures of neoliberal schooling, and the massified forms of ODL and technology-enabled learning built upon similar logic, offer few incentives for such vulnerability. Curriculum pacing guides reward speed over depth, while admissions pressures encourage students to package their identities rather than investigate them. As Giroux (2014) argues, education under neoliberalism becomes a space of private investment rather than public good, training learners to optimise themselves for labour markets instead of interrogating the logic that shapes those markets. Within this paradigm, AI functions as another optimisation tool, rendering learning superficially impressive while hollowing out the reflective struggle at its core.

Understanding the challenge before us therefore requires recognising that AI is not an isolated phenomenon but an actor within a broader script authored by decades of policy decisions such as No Child Left Behind and Race to the Top. These reforms embedded a culture of quantification into public education and primed the system for tools that reward surface-level performance over critical thought. The same script now governs ODL systems organised around dashboards, predictive analytics, and large-scale content distribution. It privileges speed over reflection, certainty over complexity, and coverage over inquiry. AI excels in this environment because it delivers precisely what this system demands: fluent, efficient, and ostensibly correct outputs.

Let us not forget that there have been other moments in history when new tools provoked panic, disorientation, and pedagogical reconsideration. Two familiar examples come to mind: the calculator once drew fire for its perceived threat to basic numeracy, with critics warning that students would forget how to compute without it, while the internet unsettled long-held assumptions about knowledge acquisition and authority, prompting fears that Wikipedia would replace libraries and that students would no longer need to remember facts. In each case, educators were eventually called upon to articulate the philosophical conditions under which such tools could be used with purpose. The difference now is the extent to which the human process of learning, such as asking difficult questions or dwelling in uncertainty, has already been devalued by systems that have gradually excised critical pedagogy from their bloodstream (McLaren, 2007). In this context, AI does not so much disrupt education as outperform a version of schooling that may have surrendered its soul.

Reimagining AI as a Pedagogical Provocation

Reclaiming education’s soul requires decentring efficiency as the ultimate value in both in-person and distance learning environments. It demands that we prioritise patience, cultivate ambiguity, and restore the legitimacy of intellectual discomfort. Students must be invited into processes where discovery is valued more than delivery, and where knowledge is co-constructed through dialogue and situated experience (hooks, 1994).

Reimagining AI within this framework means treating it not as a substitute for thought but as a provocation for it. Learners might critique AI-generated essays for conceptual shallowness, annotate algorithmic responses to detect bias or oversimplification, or compare these outputs with their own evolving drafts to analyse differences in voice, stance, and epistemic positioning. In these practices, AI becomes a site of inquiry rather than a shortcut to performance.

These commitments, however, cannot remain abstract. They must be translated into concrete pedagogical choices, particularly within ODL and learning-for-development environments, where pressures toward automation and scale are most acute.

Implications for ODL and Learning-for-Development Practice

For practitioners working in ODL and technology-enabled environments, these questions carry particular urgency. ODL systems are often celebrated for expanding access, flexibility, and learner autonomy, but they also operate within structures that can intensify the pressures of efficiency, standardisation, and scale. Generative AI fits easily into this landscape because it promises rapid feedback, automated support, and streamlined content delivery, which may appear to solve real constraints such as large enrolments, dispersed learners, and limited instructional time. However, without a humanising pedagogical frame, AI is more likely to reinforce hierarchies, obscure power, or distance learners from substantive engagement.

For ODL and learning-for-development practitioners, this analysis suggests several core pedagogical commitments. First, AI should be positioned as an object of critical inquiry rather than a substitute for student thinking. Learners should interrogate AI outputs for bias, cultural assumptions, and conceptual limitations. Second, reflective practices should require students to locate themselves in relation to automated text, articulating where their perspectives converge with or diverge from algorithmic responses. Third, institutions should resist framing AI primarily as a tool for scale, surveillance, or efficiency, and instead protect tutor presence, peer dialogue, and community-based inquiry as non-negotiable conditions of learner support. Finally, in development contexts, AI deployment should be evaluated for its linguistic, cultural, and epistemic alignment with local knowledge systems rather than assumed to be neutral or universally applicable.

Within learning-for-development practice, principles of humanising pedagogy provide a critical ethical compass for AI integration in ODL systems. Dialogic engagement, respect for learners’ cultural and epistemic identities, and the cultivation of critical consciousness require that AI support, rather than replace, relational learning processes. This entails designing AI-supported activities that invite dialogue, contextual interpretation, and learner voice, rather than reinforcing extractive, standardised, or externally imposed models of knowledge. Ethical AI use is therefore not a technical compliance issue but a relational and political commitment to dignity, agency, and educational justice.

For ODL practitioners, then, the challenge is not whether to integrate AI but how to do so in ways that honour humanisation, equity, and relational learning. This includes designing assignments in which AI outputs serve as objects of critique rather than templates for correctness, embedding reflective prompts that require students to situate themselves in relation to automated text, resisting institutional pressures to use AI primarily for monitoring or productivity, and protecting relational infrastructures such as tutor presence, peer dialogue, and community-based inquiry that AI cannot simulate but that remain essential for learner persistence and transformation.

This work, particularly in ODL and learning-for-development contexts, is contemplative, imperfect, and often at odds with the dominant culture of schooling. But if education is not to continue as a training ground for docile technocrats, it must inhabit the tension between what schooling has become and what it might still be. Classrooms must function as spaces of refusal: refusal of market logic that ranks students by test scores; refusal of false rigour that equates worksheets with learning and overwork with excellence; and refusal of the automation of spirit that turns teaching and learning into procedural labour. As scholarship on distance education pedagogy and student support has long cautioned, feedback systems organised around scale can prioritise surface-level correctness over conceptual struggle, reshaping how learners engage with knowledge and positioning instructors as monitors rather than dialogic partners (Anderson & Dron, 2011; Tait, 2014). These dynamics unfold within a sociopolitical landscape structured by capitalist imperatives and neoliberal reform, where schools are managed as businesses and students are reduced to data points. As Freire (1970) reminds us, education is never neutral.

The pedagogical challenge of our time, therefore, is less about integrating AI more effectively and more about building educational systems capable of questioning the logic that has made such integration feel inevitable. For ODL, this challenge is especially consequential, as the field has historically advocated for learner support and relational pedagogy as counterbalances to isolation, attrition, and the pressures of scale. Reclaiming this tradition requires resisting performance-driven models of learning and reaffirming education as a public, ethical, and relational project.

Practitioner-Focused Implications for ODL and Learning-for-Development

For ODL and learning-for-development practitioners, the analysis offered in this commentary translates into the following commitments:

  1. Prioritise equity over efficiency. AI adoption should be evaluated by its impact on access, participation, and epistemic recognition rather than gains in speed or scale.
  2. Protect relational pedagogy. Tutor presence, peer dialogue, and community-based inquiry should remain central, particularly in development contexts where persistence depends on human connection.
  3. Embed culturally relevant and contextualised learning. AI systems should be interrogated for linguistic, cultural, and epistemic bias and adapted to support local knowledge systems.
  4. Cultivate critical consciousness. Learners should analyse and resist algorithmic authority, using AI outputs as sites for dialogue, critique, and reflective positioning.
  5. Frame ethical AI use as a pedagogical responsibility. Ethical integration is a humanising commitment to dignity, agency, and educational justice, not merely a technical obligation.

Together, these commitments position AI as a pedagogical site where struggles over equity, voice, and humanisation are actively negotiated within ODL and learning-for-development practice.

Reclaiming Education as a Human Project

In conclusion, education should be reclaimed as a space of shared inquiry, ethical formation, and agency cultivated through uncertainty rather than eliminated by it. If the future of education is to matter, it must be imagined not through the lens of technical efficiency but through a moral and political commitment to the full humanity of learners and educators alike. In ODL and learning-for-development contexts, this commitment requires that technology serve humanisation rather than replace it, and that AI be judged not by what it accelerates but by what it preserves: dialogue, dignity, and becoming. Only through such commitments can education remain not merely a system of delivery, but a human project grounded in justice and relationship.

References

Anderson, T., & Dron, J. (2011). Three generations of distance education pedagogy. The International Review of Research in Open and Distributed Learning, 12(3), 80-97. https://doi.org/10.19173/irrodl.v12i3.890

Apple, M.W. (2006). Educating the "right" way: Markets, standards, god, and inequality (2nd ed.). Routledge.

Biesta, G. (2010). Good education in an age of measurement: Ethics, politics, democracy. Routledge.

Freire, P. (1970). Pedagogy of the oppressed. Herder and Herder.

Fox, N.J. (2024). Artificial intelligence and the black hole of capitalism: A more-than-human political ethology. Social Sciences, 13(10), Article 507. https://doi.org/10.3390/socsci13100507

Giroux, H.A. (2014). Neoliberalism’s war on higher education. Haymarket Books.

hooks, b. (1994). Teaching to transgress: Education as the practice of freedom. Routledge.

McLaren, P. (2007). Life in schools: An introduction to critical pedagogy in the foundations of education (5th ed.). Allyn & Bacon.

Sarofian-Butin, D. (2025, March 19). In the age of AI, is education just an illusion? The Chronicle of Higher Education. https://www.chronicle.com/article/in-the-age-of-ai-is-education-just-an-illusion

Tait, A. (2014). From place to virtual space: Reconfiguring student support for distance and e-learning in the digital age. Open Praxis, 6(1), 138-150. https://doi.org/10.5944/openpraxis.6.1.102


Author Notes

Tiffany Karalis Noel, PhD, is an educator, researcher, and leadership development specialist whose work focuses on humanising pedagogy and the cultivation of agency and critical consciousness in educational contexts. Email: tbkarali@buffalo.edu (https://orcid.org/0000-0003-1989-1643)

William Liang is a student and education journalist whose writing explores the social, philosophical, and cognitive implications of Artificial Intelligence in contemporary education. Email: liang18834@gmail.com (https://orcid.org/0000-0001-6483-570X)


Cite as: Karalis Noel, T., & Liang, W. (2026). A Freirean reckoning with AI and capitalism in the pursuit of humanising education across technology-mediated contexts. Journal of Learning for Development, 13(1), 123-129.