First published in The Hindu, 31 Jan 2026
For better and worse, AI is upending education. While bots can accelerate and enhance learning, they can also supplant deep engagement and reflection. How can educators help students reap the pluses of AI without succumbing to its pitfalls? At what stage of the learning process should AI be introduced?
In an article in Harvard Magazine, Olivia Farrar reports on a conference at the university where professors shared thoughts on how they were integrating AI while trying to safeguard student learning and authenticity. Teddy Svoronos, a lecturer at the Harvard Kennedy School, recommends a “traffic light” model for AI usage: green indicates that AI can be used sans restrictions, yellow suggests limited access, and red means AI is banned.
When introducing a topic, tasks are usually marked red so that students first wrestle with concepts on their own. In the exploration stage, students may be allowed full access to AI to both broaden and deepen their understanding. They are then asked to reflect on questions such as: How did AI change my understanding? What aspects did AI overlook or not address adequately? For one assignment, students had to engage in a “Socratic dialogue” with a bot trained on the course content. However, the conversation was evaluated by the professor rather than the bot.
Tari Tan, a lecturer in neurobiology, asks students to first make notes on a lesson and then compare their notes to what ChatGPT produces for the same content. During this exercise, students analyze the “quality of their prompts” and how biases may impact the process. By doing this exercise, students realize that while AI can be a useful tool, it is not an “infallible source of information.”
Depends on the use
In an article in Psyche, Nick Kabrel avers that AI, like any tool, is not inherently a boon or bane, as it’s oft made out to be, but depends greatly on how it is deployed. Overdependence and mindless use can indeed jeopardize our cognitive capacities. He cites research showing that students who depended more on AI had poorer critical reasoning skills. Another study revealed that when students used AI to help them write an essay, they didn’t remember what they had produced a few minutes after completing the assignment.
To avoid these pitfalls, Kabrel recommends that you engage with the bot more deliberately and strategically. First, you need to identify your long-term goals as a learner or a professional in a particular field. If you want to become a design thinking consultant, for example, you need to be able to ideate and generate creative ideas on your own. If you use a bot to come up with designs before engaging in the hard work yourself, you will not flex your creative muscles, and are then more likely to be replaced by a bot in the future. Don’t be lured by the short-term gains of saving time or scoring better grades on an assignment.
Kabrel recommends a “sandwich” approach when using AI. Always come up with your own thoughts first. Then you may ask AI to “critique your work,” suggest alternative viewpoints, or find lacunae in your arguments. Finally, you need to evaluate the suggestions AI offers and decide which ones strengthen your work. This way, your learning is enhanced rather than hampered by AI.
Most importantly, don’t believe everything AI says. Know that it can hallucinate and outright lie. Double-check the sources it cites as it is known to fabricate citations.
You may also use AI like a tutor who pushes the envelope of your thinking. Kabrel calls this the non-directive mode, where you ask the bot to flag potential flaws in your reasoning, or to highlight where you have made a mistake in your problem-solving, without giving away the answers. This way, you still have to do the hard work of figuring out what’s wrong and fixing the errors.
The writer is visiting faculty at the School of Education, Azim Premji University, Bengaluru, and the co-author of Bee-Witched.