Degrees of Context

Generic chatbots fail in education settings because they lack the context that educational institutions hold. Tools like ChatGPT, Copilot, and Gemini can be useful - but they are fundamentally ignorant of everything that makes education personal. At present, most of them cannot access a student's grade history, attendance patterns, timetable, prior assessment feedback, or the accumulated understanding of their learning gaps. For example, if a student asks about a forthcoming assignment deadline, a chatbot that responds contextually should be able to reply with the date and time of the deadline, the mark required to maintain the grade average for a university place, and opportunities to access study support ahead of the deadline. Multiple systematic reviews confirm that generic chatbots "lack deep contextual understanding, struggle with ambiguous or complex questions, and may give inaccurate or superficial answers" (Ohio State University, 2024).
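To make the deadline example concrete, here is a minimal sketch of the data such a contextual reply draws together. Everything in it - the student record fields, the assignment weighting, the support slots - is invented for illustration; a real assistant would fetch these from the student information system and timetabling APIs.

```python
from dataclasses import dataclass
from datetime import datetime

# All names and figures below are illustrative stand-ins for calls a real
# assistant would make to institutional systems.

@dataclass
class StudentContext:
    grade_average: float   # current average, as a percentage
    target_average: float  # average needed to hold the university offer

def mark_needed(ctx: StudentContext, weight: float) -> float:
    """Mark required on this assignment to keep the average on target."""
    # target = (1 - weight) * current + weight * required, solved for required
    return (ctx.target_average - (1 - weight) * ctx.grade_average) / weight

def answer_deadline_query(ctx: StudentContext, deadline: datetime,
                          weight: float, support_slots: list[str]) -> str:
    required = mark_needed(ctx, weight)
    return (f"Your assignment is due {deadline:%A %d %B at %H:%M}. "
            f"A mark of {required:.0f}% or above keeps you on track for "
            f"a {ctx.target_average:.0f}% average. Study support before "
            f"the deadline: " + "; ".join(support_slots) + ".")

print(answer_deadline_query(
    StudentContext(grade_average=68.0, target_average=70.0),
    deadline=datetime(2026, 3, 13, 17, 0), weight=0.25,
    support_slots=["Tuesday writing clinic", "Thursday maths drop-in"]))
```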

This explains a striking paradox: most educators believe AI is useful, yet relatively few actively use it in teaching. The technology exists; the contextual grounding that would make it useful does not. The next wave of AI in education must close this gap - moving from general-purpose chatbots to hyper-contextual assistants grounded in institutional data.

A Brief History of Context in Educational AI

The rule-based era of the 1990s and 2000s established that context matters. Carnegie Learning's Cognitive Tutor, built on John Anderson's ACT-R cognitive architecture, demonstrated that AI systems maintaining student models could produce measurable results - students completed problem sets significantly faster and with notably higher test scores than in conventional environments. These intelligent tutoring systems established a four-component architecture still relevant today: domain model, student model, tutoring model, and interface.
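A minimal sketch of that four-component split is shown below. The interfaces and the threshold policy are invented to illustrate the general pattern, not Cognitive Tutor's actual design.

```python
from typing import Protocol

class DomainModel(Protocol):
    """What is true in the subject: problems, solutions, skills."""
    def is_correct(self, problem: str, answer: str) -> bool: ...

class StudentModel(Protocol):
    """What the system believes about this learner."""
    def update(self, skill: str, correct: bool) -> None: ...
    def mastery(self, skill: str) -> float: ...

class TutoringModel(Protocol):
    """Pedagogical decisions made from the student model's state."""
    def next_step(self, mastery: float) -> str: ...

class Interface(Protocol):
    """How problems, hints, and feedback reach the learner."""
    def present(self, message: str) -> None: ...

# A trivial tutoring policy of the kind such systems encode:
class ThresholdTutor:
    def next_step(self, mastery: float) -> str:
        if mastery < 0.4:
            return "show a worked example"
        if mastery < 0.8:
            return "offer a scaffolded hint"
        return "advance to a harder problem"

print(ThresholdTutor().next_step(0.55))  # -> offer a scaffolded hint
```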

ChatGPT, launched in November 2022, became the fastest-growing consumer application in history. The tool could pass law school exams yet often produced what researchers described as "patently false" academic citations (Brookings Institution, 2023). It could do certain remarkable things - but not the specific things any particular school needed.

The current transition represents a fundamental pivot. Retrieval-Augmented Generation (RAG) research has exploded, enabling AI systems to ground responses in institutional knowledge bases. Khan Academy's Khanmigo - integrated with over 100,000 lessons - exemplifies this approach by guiding students contextually rather than generically (Khan Academy, 2024). PowerSchool's PowerBuddy answers district-specific questions by accessing real-time documentation including policies, schedules, and handbooks.
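In outline, RAG retrieves the most relevant institutional documents and prepends them to the model's prompt, so answers are grounded in local policy rather than general training data. The sketch below uses a toy keyword-overlap score where a production system would use vector embeddings, and the policy snippets are invented.

```python
POLICIES = {
    "attendance": "Absences must be reported to the school office by 09:00 on the day.",
    "coursework": "Coursework extensions require a form signed by the subject teacher.",
    "devices": "Personal devices may be used in lessons at the teacher's discretion.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank policy snippets by naive word overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(POLICIES.values(),
                    key=lambda text: len(words & set(text.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Ground the model's answer in the retrieved institutional context."""
    context = "\n".join(retrieve(query))
    return (f"Answer using only this school's policies:\n{context}\n\n"
            f"Question: {query}")

print(build_prompt("How do I get a coursework extension?"))
```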

What Context Enables

When AI assistants gain access to student data and curriculum materials, outcomes transform. A Harvard CEPR study found that schools using AI-powered early warning systems achieved meaningful reductions in chronic absenteeism and course failures after just one year (Education Week, 2024). These systems work by combining signals that individually seem minor - moderate absenteeism plus a slight GPA drop plus a minor behaviour incident - to identify at-risk students far earlier than traditional single-indicator thresholds would.
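A toy version of that signal-combination logic is shown below. The weights and thresholds are invented for the sketch - real systems calibrate them against historical outcomes - but the shape is the point: no single indicator trips an alarm, yet together they do.

```python
def risk_score(absence_rate: float, gpa_drop: float, incidents: int) -> float:
    """Combine weak signals into one score; each term is capped at 1.0."""
    return (0.5 * min(absence_rate / 0.10, 1.0)   # vs. a 10% absence threshold
          + 0.3 * min(gpa_drop / 0.5, 1.0)        # vs. a 0.5-point GPA drop
          + 0.2 * min(incidents / 2, 1.0))        # vs. two behaviour incidents

# Each signal alone sits below its traditional threshold...
score = risk_score(absence_rate=0.07, gpa_drop=0.3, incidents=1)
if score >= 0.6:  # ...but the combination flags the student early
    print(f"Flag for early outreach (score {score:.2f})")
```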

A 2025 Dartmouth study with medical students using a curated AI teaching assistant found that students "overwhelmingly trusted the curated knowledge more than generally available chatbots," demonstrating what researchers called "precision education" - instruction tailored to each learner's specific needs and context (Dartmouth, 2025).

The pattern is consistent: contextual AI outperforms generic AI. Georgia State University's Pounce chatbot handles over 200,000 questions annually on registration and financial aid. MagicSchool AI's district implementations have achieved dramatic improvements in literacy outcomes. Context is the difference between a clever tool and a useful one.

Oversight Through Dashboards and Alerts

Contextual AI assistants require robust oversight mechanisms. SchoolAI's Mission Control dashboard provides real-time student progress tracking with engagement dips highlighted, sentiment analysis distinguishing confidence issues from misconceptions, and safety alerts that immediately notify teachers of critical concerns (SchoolAI, 2024). Khan Academy's Khanmigo offers teachers full access to student chat histories and flagged content alerts.
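The alerting side of such dashboards reduces to rules applied over chat telemetry. A deliberately simple sketch follows, with invented keywords and thresholds; real products use trained classifiers rather than keyword lists.

```python
SAFETY_TERMS = {"hurt", "hopeless", "unsafe"}   # illustrative only

def review_session(messages: list[str], prev_count: int) -> list[str]:
    """Return alerts a teacher dashboard would surface for one student."""
    alerts = []
    # Safety rule: any flagged term triggers an immediate notification.
    if any(term in m.lower() for m in messages for term in SAFETY_TERMS):
        alerts.append("SAFETY: notify teacher immediately")
    # Engagement rule: activity falling to under half the prior baseline.
    if prev_count > 0 and len(messages) < 0.5 * prev_count:
        alerts.append("ENGAGEMENT: activity dipped sharply this week")
    return alerts

print(review_session(["I feel hopeless about this essay"], prev_count=12))
```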

Academic research distinguishes between "mirroring dashboards" that provide information and "advising dashboards" that provide interpretation support (van Leeuwen et al., 2023). The American Federation of Teachers recommends that districts establish ethical oversight committees with regular transparent reporting on AI use (AFT, 2024).

If these dashboards become commonplace, the stakeholders who make up school, college and university communities will all need a say in determining what information can and cannot be surfaced. It remains to be seen whether institutions will converge on common ground about what is surfaced.

Governance: The PARTNER Framework

Responsible deployment of contextual AI requires principled governance. The PARTNER framework (Hart & Hussain, 2025) sets out six core requirements for such deployments.

These principles align with emerging global standards. UNESCO's 2023 guidance represents the first international framework specifically addressing generative AI in education, mandating data privacy protection and human oversight (UNESCO, 2023). The EU AI Act classifies many educational AI applications as high-risk, requiring transparency and human control over systems that "may determine the educational and professional course of a person's life."

The Privacy Tension: Local vs. Cloud

Context creates capability, but also risk. The more an AI system knows about students, the more valuable it becomes and the more dangerous a breach would be. This creates an unresolved tension between privacy-first local systems and cloud-based platforms with greater capabilities.

FERPA, signed into law in 1974, was not designed for AI systems. Its protections do not extend to AI-generated data like predictive performance insights or behavioural analytics. The US CLOUD Act complicates matters further by allowing US authorities to compel US-based tech companies to hand over data regardless of where it is stored, creating jurisdictional conflicts with local data protection laws (Ethical Data Initiative, 2025).

Middle-ground solutions are emerging. Federated learning allows collaborative model training without raw data leaving institutional servers. On-premise AI deployments, where vendors install and manage infrastructure while data stays within institutional facilities, offer another path. Education-specific products like ChatGPT Edu explicitly do not train on student data, offering more protected environments than consumer versions.
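Federated learning's core move can be shown in miniature: each institution fits a model on data that never leaves its servers, and only the fitted parameters travel to a coordinator for averaging. The sketch below uses a one-parameter toy model and invented data; production federated averaging would also weight each update by local dataset size.

```python
def local_fit(data: list[tuple[float, float]]) -> float:
    """Least-squares slope through the origin for y ~ w * x, fitted locally."""
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, y in data)
    return num / den

school_a = [(1.0, 2.1), (2.0, 3.9)]   # stays on school A's servers
school_b = [(1.0, 1.8), (3.0, 6.3)]   # stays on school B's servers

# Only the fitted parameters reach the coordinator - never the raw records.
global_w = sum(local_fit(d) for d in [school_a, school_b]) / 2
print(f"Global model slope: {global_w:.2f}")
```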

Multi-Agent Architecture: A Technical Note

The most sophisticated contextual assistants employ multi-agent architectures - specialist agents collaborating rather than a single monolithic system. This translates into a "composite assistant" built from dedicated components: a grades agent accessing academic records, a timetable agent managing scheduling, a counselling agent providing advising support, each with differentiated access permissions matching its function.
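A sketch of that composite pattern follows: a router dispatches each query to a specialist agent, and each agent carries only the access scopes its role requires. Agent names, scopes, and canned replies are all invented for illustration.

```python
from typing import Callable

# Each agent pairs an access-scope set with a handler; the handlers here
# are canned stand-ins for real specialist agents.
AGENTS: dict[str, tuple[set[str], Callable[[str], str]]] = {
    "grades":      ({"read:grades"},    lambda q: "Your current average is 68%."),
    "timetable":   ({"read:timetable"}, lambda q: "Next class: maths, room 12, 10:00."),
    "counselling": ({"read:pastoral_notes", "write:referrals"},
                    lambda q: "I can book you a wellbeing appointment."),
}

def route(query: str) -> str:
    q = query.lower()
    topic = ("grades" if "grade" in q or "average" in q
             else "timetable" if "class" in q or "timetable" in q
             else "counselling")
    scopes, handler = AGENTS[topic]
    # Permissions are differentiated: no agent can alter academic records.
    assert all("write:grades" not in s for s, _ in AGENTS.values())
    return handler(query)

print(route("When is my next class?"))
```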

Anthropic's engineering research found that multi-agent systems with specialised agents significantly outperformed single-agent systems (Anthropic, 2025). Google's 2025 Virtual TA pilot with the University of Michigan uses agentic AI across thousands of students - one of the largest educational AI agent deployments to date.

When a campus chatbot has access to a broad set of data about each student, and when that data is coupled with well-defined rules and parameters, the chatbot can respond and act with a high degree of context - what we call hyper-context. When it can also draw on a student's historical and present data, as well as planned future data points, it can operate with situational context. Both capacities are strengthened when the agents that comprise the chatbot behave cooperatively.

The Co-Design Imperative

These systems only work if they are built with the campus community, not merely for it. The evidence is unambiguous: educational AI built without genuine participation from teachers, students, and administrators faces adoption barriers, trust deficits, and outright failure.

InBloom, backed by over $100 million in Gates Foundation funding, collapsed because it ignored stakeholder concerns about data privacy. Summit Public Schools' personalised learning platform prompted student walkouts citing concerns about data exploitation and the elimination of "human interaction, teacher support, and discussion and debate with our peers" (Edutopia, 2019). A recent systematic review found that AI adoption in authentic educational settings remains limited partly due to "ignoring the stakeholders' needs" and "a lack of pedagogical contextualisation" (Topali et al., 2025).

Washington State's guidance articulates this imperative clearly: a human-centred AI learning environment "prioritises needs of students, teachers, administrators" and ensures students "actively shape their learning experience with AI" (OSPI, 2024).

In one of the largest generative AI chatbot projects in Australia, the Catholic Education Network has launched CeChat, a general-purpose chatbot used by students, teachers and school administrators across its school network. Co-design is a core aspect of the programme and has underpinned the ongoing success of the service.

Good co-design in practice, as demonstrated by the CeChat programme, involves schools participating as active partners rather than passive users throughout the lifecycle of the technology. Schools entered the pilot with clear readiness criteria, leadership backing, and an expectation of sustained engagement, ensuring feedback was grounded in real classroom and administrative use. Development followed an agile, iterative model in which educators tested evolving versions of the chatbot, shared structured insights from day-to-day practice, and directly influenced design decisions and the product roadmap. Pedagogical frameworks anchored discussions about teaching and learning, while ethical safeguards and values were shaped collaboratively. In this way, co-design functioned not as consultation at the margins, but as a continuous partnership in which context, practice, and values actively shape the system's behaviour and how it is governed.

Advocates of participatory design such as Bødker and Kyng (2018) argue that it allows groups of individuals to influence big issues, and one of the biggest issues in education right now is the design, use, governance and impact of AI.

"By big issues we mean for people, in various communities and practices, to take control and partake in the shaping and delivery of technological solutions, processes of use, and future developments that matter to them and their peers" (Bødker and Kyng, 2018, p. 2).

Conclusion

The transition from generic chatbots to hyper-contextual educational assistants represents not merely a technological upgrade but a fundamental reimagining of what AI can accomplish in schools and universities. When AI systems understand individual students' academic histories, connect to curriculum materials, and access institutional data, they achieve outcomes that generic tools cannot approach.

But context creates complexity. Institutions must navigate governance frameworks requiring transparency and human oversight, privacy tensions between capability and protection, and the co-design imperative that places educators and students at the centre of development. The path forward requires integration, principled governance through frameworks like PARTNER, and genuine partnership with the communities these systems are meant to serve.

The next wave of educational AI will succeed not through superior algorithms alone, but through superior integration with the human systems - pedagogical, institutional, and social - that give education its meaning.

If you would like to learn more about enabling context in your student- or staff-facing chatbots, please get in touch.

References

American Federation of Teachers. (2024). Commonsense guardrails for using advanced technology in schools. https://www.aft.org/sites/default/files/media/documents/2024/Commonsense_Guardrails_AI_0604.pdf

Anthropic. (2025). How we built our multi-agent research system. https://www.anthropic.com/engineering/multi-agent-research-system

Bødker, S., & Kyng, M. (2018). Participatory design that matters—Facing the big issues. ACM Transactions on Computer-Human Interaction, 25(1), 4:1-4:31. https://doi.org/10.1145/3152421

Brookings Institution. (2023). Should schools ban or integrate generative AI in the classroom? https://www.brookings.edu/articles/should-schools-ban-or-integrate-generative-ai-in-the-classroom/

Dartmouth College. (2025). AI can deliver personalized learning at scale, study shows. https://home.dartmouth.edu/news/2025/11/ai-can-deliver-personalized-learning-scale-study-shows

Education Week. (2024). Most schools have early-warning systems. How well do they work? https://www.edweek.org/leadership/most-schools-have-early-warning-systems-how-well-do-they-work/2024/02

Edutopia. (2019). Common edtech mistakes—and how schools can avoid them. https://www.edutopia.org/article/common-edtech-mistakes-how-schools-can-avoid/

Ethical Data Initiative. (2025). Data sovereignty in the age of AI: Data policy in higher education. https://ethicaldatainitiative.org/2025/11/06/data-sovereignty-in-the-age-of-ai-data-policy-in-higher-education/

Hart, J., & Hussain, A. (2025). Ada and FirstPass: Bolton College's digital assistants for students, teachers and campus support teams. In AI-Powered Pedagogy and Curriculum Design (pp. 205–221). Routledge.

Khan Academy. (2024). Meet Khanmigo: Khan Academy's AI-powered teaching assistant & tutor. https://www.khanmigo.ai/

Office of Superintendent of Public Instruction, Washington State. (2024). Human-centered artificial intelligence in schools. https://ospi.k12.wa.us/student-success/resources-subject-area/human-centered-artificial-intelligence-schools

Ohio State University. (2024). The rise of chatbots in higher education: Transforming teaching, learning, and student support. https://ascode.osu.edu/news/rise-chatbots-higher-education-transforming-teaching-learning-and-student-support

SchoolAI. (2024). Mission Control: See and support students and teachers across your school. https://schoolai.com/products/mission-control

Topali, P., Ortega-Arranz, A., Rodríguez-Triana, M. J., Er, E., Khalil, M., & Akçapınar, G. (2025). Designing human-centered learning analytics and artificial intelligence in education solutions: A systematic literature review. Behaviour & Information Technology, 44(5), 1071–1098. https://doi.org/10.1080/0144929X.2024.2345295

UNESCO. (2023). Guidance for generative AI in education and research. https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research

van Leeuwen, A., Strauß, S., & Rummel, N. (2023). Participatory design of teacher dashboards: Navigating the tension between teacher input and theories on teacher professional vision. Frontiers in Artificial Intelligence, 6, 1039739. https://doi.org/10.3389/frai.2023.1039739
