Fulfilling the Promise of AI in Education

Since the release of ChatGPT by OpenAI in 2022, the potential for Artificial Intelligence (AI) to revolutionize education has never seemed greater. Generative AI (GenAI), and in particular chatbots based on Large Language Models (LLMs), hold the promise of a personal tutor for everyone. This tutor of the future can explain difficult concepts, quiz the learner, and provide feedback on their answers.

However, generative AI’s greatest strength also poses its biggest risk. Like any machine-learning model that generates outputs based on patterns discovered while training on a (large) set of data, a GenAI model can make mistakes. Your photo library app might not recognize all the pictures of your favourite pet. Similarly, ChatGPT can convincingly fabricate all kinds of false ‘facts’: non-existent court cases, deaths of famous people who are still alive, and so on.

In the field of GenAI such mistakes are often called ‘hallucinations’, a misleading term disliked by many experts, since it suggests a bug or error in the model. From the model’s perspective, however, nothing goes wrong. Everything that is generated is ‘hallucinated’; most of it just happens to be true, while some of it is complete nonsense. The generative process that makes these models so powerful and convincing is the same process that makes them ‘hallucinate’. You could say the two are sides of the same coin.

Therefore ChatGPT and its peers add a disclaimer stating that they can make mistakes and that their responses should always be verified. Obviously, in an educational setting a simple disclaimer is not enough. It would be the equivalent of adding a paragraph to the beginning of every school textbook that says: “anything in this book might be wrong, please use other books to verify its content”. That is not what learners expect from a textbook, and neither should they expect it from its future replacement, the AI tutor.

Fortunately, an approach to applying generative AI models has emerged that greatly mitigates this problem: Retrieval Augmented Generation (RAG). In this approach, generative models are restricted to generating responses based on content retrieved from an external source. If that content is reliable and of good quality, so are the chatbot’s answers. RAG plays to the strengths of LLMs: it uses their generative capabilities to provide a great natural language interface, while using external content as the source of information.
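To make the idea concrete, here is a minimal sketch of the RAG flow in Python. The retrieval step here is a naive keyword-overlap ranking (real systems typically use vector embeddings), and the curated collection and prompt wording are illustrative assumptions, not part of any particular product:

```python
def retrieve(question: str, collection: list[str], top_k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        collection,
        key=lambda passage: len(q_words & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(question: str, passages: list[str]) -> str:
    """Assemble a prompt that restricts the model to the retrieved content."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say so.\n"
        f"Sources:\n{context}\n"
        f"Question: {question}"
    )


# A small, hypothetical curated collection, as an educational expert might assemble.
collection = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The mitochondria is the powerhouse of the cell.",
    "Chlorophyll absorbs light, mainly in the blue and red wavelengths.",
]

question = "How does photosynthesis work?"
prompt = build_prompt(question, retrieve(question, collection))
```

The resulting `prompt` would then be sent to the LLM; because the instruction ties the answer to the retrieved sources, the model’s response can also cite them.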

In the context of education, this means the AI tutor can give higher-quality personalized quizzes and assessments, explain difficult concepts better, and give more appropriate feedback. Moreover, the tutor can give references to the resources its responses are based on. To ensure these tutors do not go off-topic and that they answer at the learner’s level, the external content provided to the model needs to be a small collection containing only relevant material. To guarantee that these small collections are as appropriate for the target audience as possible, they should be curated by educational experts. When these conditions are met, AI tutors can have a real positive impact on the education of the future.
