Bias Bytes #1
Your August serving of Bias Girl's Bias-flavoured GenAI in Education offering
Byte of the Week
As it’s exam season, I’ve written a post focused on how bias could show up when students are looking for qualifications or careers advice. This post walks through the different advice two students (Victoria and John) receive based purely on their names. It concludes by asking whether there is any merit in all genders ‘prompting like they are male’ to get the most objective output. Find it here:
Prompt Seasoning
Fancy some quick and effective mitigation tactics for the start of the new term? Here are my favourite prompt tweaks for surfacing and mitigating bias. Have a play and answer my poll to let me know which one is your favourite:

- What assumptions have you made here?
- What assumptions have you made here, and how can they be related to bias?
- Demonstrate that your answer does not contain [type of bias].
- How can your answer be linked to normative assumptions?
From the Pan to Pupils
As educators, you’ll be interested in recent debiasing research by Debabrota Basu and Udvas Das, ‘The Fair Game: Auditing & debiasing AI algorithms over time’. It proposes a dynamic fix for model-level debiasing mechanisms, focusing on biases within the system design rather than the prompt-level mitigation I usually explore.
Currently, the problem with debiasing systems is that they rely on static fixes: a one-time correction for bias applied at a specific point in time (before or just after the model is deployed). So if you’re using ChatGPT-5, the debiasing you are experiencing was set some time ago. This means that debiasing at the model level fails to evolve with our rapidly changing societal and legal frameworks.
The "Fair Game" system is interesting because it's a continuous process that works like a feedback loop. An "Auditor" constantly checks the GenAI Model for any signs of bias and then a "Debiasing algorithm" adjusts the system to correct the issues. This continuous process allows the model to adapt to our changing world and ensures that it remains fair over time, which could help teachers trust that the GenAI tools they use are not unfairly disadvantaging any students.
Sounds like a win-win to me🏆.
Downside: it’s expensive and Edtech needs to care about equitable alignment…😔
Brilliant Byters
I often get tagged in bias-related posts. This section weaves together our fabulous bias-aware community to share good practice, key questions and ones to watch.
Viktoria Mileva drew a valid parallel between the recent banning of a Sanex advert in relation to skin colour and the hidden bias in our tools, asking: ‘⚖️ If we care about fairness in advertising, shouldn’t we care even more about fairness in the systems influencing our thinking, our work, and our future?’.
Micheal Berry is conducting a very interesting experiment: designing an uber-prompt to create a set of standards for teaching AI to students, based on three pre-existing, well-known frameworks. We’ve had many discussions on mitigating the bias in this prompt, and you can follow his progress here.
Arafeh Karimi built on my open letter to Edtech companies regarding the surge of learning ‘modes’ and their apparent lack of pedagogical basis, extending it to a wider community by suggesting a shift in thinking from pedagogy-as-singular to pedagogies-as-plural, to honour the many ways learning lives across cultures and contexts. Consider this when you write your next prompt.
One to watch is the forthcoming Open Access paper from Ilkka Tuomi, titled ‘What Counts as evidence in AI & ED: Towards Science-for-Policy 3.0’. I’m expecting a meaty read!
Start to Learn the Bias Basics
For all educators, September is a busy time! I’m thrilled to have my chapter on Bias - Pathways and Pitfalls published in Dan Fitzpatrick’s new book ‘The Educators’ 2026 AI Guide’, out September 1st, followed shortly by a session called ‘Who’s Afraid of the Big Bad Bias’ on day two of the Back to School AI Summit. (Free) Tickets here.
Forthcoming Book
If my chapter in ‘The Educators’ 2026 AI Guide’ has sparked your interest in bias, I’m pleased to announce that my new book, AI Bias in Education: Performing Critical Oversight, is available for pre-order and will be released on December 1st. It contains many practical, thoughtful and relatable examples, mitigations and stories any educator would benefit from. It’s forged from contributors in our fabulous #BiasAware community, spanning practice, research and leadership.
Bias Girl’s Resources
If you haven’t checked out my resources already, you can find them here: genedlabs.ai/resources and on LinkedIn.
Bias Girl’s September Activity
I’m delighted to be presenting as part of two sessions from the TEANs network at The Oxford AIEOU Conference on 16th/17th September, and running a session on bias at St Benedict’s AI Summit on Monday 22nd September.
Stay #BiasAware until the next edition :)