Dear Edtech: Before You Build Another Learning Mode
A teacher’s open letter on motivation, student agency, and cognitive commitment
Recently I ran a poll to gather views on how the newfound plausibility of ChatGPT-5’s answers could show up in assessments and student work, and what the implications might be for teachers and educators.
Mark Berry, a digital lead in a school, replied:
I’ve always said to staff — you know the students, you know their capabilities, their writing styles. Sit down with the student and ask them questions about their assignment. It’ll become very clear, very quickly if they understand what “they’ve” written, how they’ve structured it, and what conclusions they’ve come to.
AI can be used to assist, but I fear many students will see it as a way to churn out their homework in 30 seconds without even bothering to read it.
This struck a chord with some ideas about motivation, power, and agency that have been circling in my head since the current fashion for ‘learning modes’ hit our screens.
After a discussion with a technical person in a large edtech company yesterday, I realised something important: the failure of these learning modes to enable learning is a design issue, not an implementation flaw.
Traditionally in the technology business, if a tool was to be built, it would be built in consultation with the sector it was made for: the client sector. That means learning about the sector and the research within it. That hasn’t happened for education.
Why?
It’s that old ‘those who can’t, teach’ mindset. Everyone thinks they can have a go. Yet it is difficult to evidence being a ‘good’ teacher (in terms of advancing student wellbeing and attainment, and enabling life choices). Anyone who home-schooled during lockdown knows this.
That awful OpenAI Podcast with Leah Belsky clearly shows how big tech holds this mindset towards the field.
The field of pedagogy. It has a name.
So why has pedagogy been ignored? The answer lies in the lack of respect for the profession.
No-one has bothered to learn how we have studied how to learn. They look to cognitive science for immediate, tangible, and reproducible answers.
I find myself repeatedly commenting on posts with ‘What about motivation?’ Depending on your viewpoint, motivation is more than cognitive science: motivation is linked to engagement, engagement to enjoyment, enjoyment to purpose, and so on. There’s a subjective/objective — or cognition/affect — debate here.
Who am I to say what a child will enjoy?
The only way to know this is by spending time with them and asking. Chatbots can do this, but professionals know when to use this information appropriately. We will not use it to tailor personal advice or go beyond whatever context it was discussed within. It goes into a folder in our brains and comes out when we want to try to link a topic to a child’s interests.
Chatbots don’t do this. They use this interest information in relation to statistical norms, then produce biased results. These results could be something such as a suggested career. Or something slyer, such as plausible advice on how to fit in with friends, in the case of the more plausible ChatGPT-5.
So how do students retain power in this context? What power and agency do they have?
They have the agency to decide where they spend their attention, and whether to perform cognitive commitment. I am defining this term, cognitive commitment, in relation to concerns about cognitive offloading: it is the act of a student deciding whether to commit cognitive effort, or to offload to AI.
Have we thought that maybe the next generation wants to offload?
It may not be what’s best for their learning in our terms, from our pre-existing pedagogical paradigm, but perhaps we should respect their choices?
This becomes messier for children under 18. How much cognitive offloading do we allow as responsible adults? As teachers? As parents? How much will Edtech companies define as 'safe'?
Like chocolate, how much is good for them, how much rots their teeth, and how much causes a health problem?
While we are off busily exploring these questions and sorting ourselves into pro- and anti-AI factions, students and children have picked up the agency themselves.
Empathise with a student for a minute: Given a boring homework sheet? I have the power to refuse to cognitively commit to the process of learning through engaging with the sheet. So I’ll stick it through a chatbot and focus on the product, not the process. Task done. I feel no shame.
Or perhaps empathise with the flip-side: Given an open task that matches your interest? I’ll commit cognitively to this. I’ll give it my attention. I’ll perform the cognitive science versions of deep learning and add to my schema. I'm motivated.
So while we are deciding how to integrate, enable, and protect students in their AI use, I argue they are already exercising their agency by choosing where to put their attention, and how they perform cognitive commitment.
Attention and cognitive commitment are quanta — transactional quanta. Like photons, they come in packets, and as Young’s double-slit experiment demonstrated, these quanta behave differently when forced into different interactions. Read: as a teacher you can get attention and cognitive commitment in different scenarios, but you have to work out what these scenarios are, then learn how to replicate them. It’s a complex and dynamic system: you and the learner. And that system isn’t closed — there are at least 30 interacting units in your class.
Viewed in this light, learning becomes more about motivation and less about hard science. How does a teacher motivate? What is it about the professional that can motivate (or demotivate) a classroom full of children or students?
Motivating students takes experience and skill. It is a dynamic and rapidly changing process. If I were modelling this in terms of physics, I’d need to define a series of very small intervals and take tangents to the student-function to get a rough idea of where a student was at any one moment.
Through this lens, learning would be an integral sum of all the instantaneous parts — all the small decisions and changes in trajectory the teacher continually manages. And for each child. In the whole class. This is what makes the profession challenging. This is what takes skill. This is what is not being reproduced in chatbots and learning modes.
So a plea to edtech companies: learn about pedagogy. Learn about motivation. Learn about the profession.
Study a teacher.
Observe and transcribe the million different decisions they manage and optimise in an hour’s lesson.
Then — and only then — will you have the basis for a real ‘learning mode’.