Bias Bytes #3
This issue celebrates collaborative efforts to raise AI awareness
Welcome to the third edition of Bias Bytes. This week's issue reflects on some international and collaborative efforts from across the Bias Community, and how they can impact your learning and practice in education. There are even some videos!
Byte of the Week
I’ve created this example to illustrate how requests trigger certain output styles or formats. In this example I’ve taken my classic light bulb request (I know, I need to diversify!) and contrasted it with a version that first sets the model down its neurodiverse associations pathway. If you compare the two, you can see that including the words ‘assume I am neurodiverse’ leads the model to equate this with you requiring:
clear explanation
stepwise (step by step) format
‘minimal jargon’
an alternative cognitive style approach for understanding.
Are these assumptions justified? Neurodiversity is a spectrum, not neat packets of brains on a shelf. This highlights the importance of being very specific when you prompt.
Perhaps your neurodiverse learner does want stepwise formats, or perhaps (like me) they’d prefer a diagram. Context and specificity are key in prompts to avoid these normative base assumptions in the output.
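If you’d like to try the comparison yourself, here’s a minimal sketch of the side-by-side test. It assumes the OpenAI Python SDK with an API key in your environment; the model name and the exact wording of the two prompts are my illustrative choices, not a fixed recipe.

```python
# A minimal sketch: send the same request with and without the neurodiverse
# framing, and compare the output styles by eye.
# Assumes: `pip install openai`, OPENAI_API_KEY set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

base_prompt = "Explain how a light bulb works."
framed_prompt = "Assume I am neurodiverse. Explain how a light bulb works."

for label, prompt in [("baseline", base_prompt), ("neurodiverse framing", framed_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Reading the two outputs side by side makes the baked-in assumptions (stepwise format, ‘minimal jargon’ and so on) much easier to spot.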
In the spirit of trying things out, on to some...
Prompt Seasoning
The seasoning for this issue focuses on revealing and deconstructing answers to look for alternative interpretations. Normative answers are very good at suppressing and burying other voices and interpretations. Try these prompts either at the end of your original prompt input, or as an iteration (a follow-up prompt) to challenge the output you get (there’s a sketch of the iteration pattern after the list, if you’d like to script it):
1. Clearly distinguish between fact, opinion, and uncertainty.
2. [Follow-up Prompt] Imagine how this answer might look if history had been written by a different group of people.
3. Include voices from both lived experience and formal expertise.
4. [Follow-up Prompt] Identify what’s missing or invisible in the usual way this topic is discussed.
From quick and easy to deep research that affects our students...
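But before we get to the research: here’s that minimal sketch of the iteration pattern, for anyone who’d rather script the follow-up than paste it in by hand. As above, the OpenAI Python SDK, the API key in your environment and the model name are my illustrative assumptions; any chat model would do the same job.

```python
# A minimal sketch of 'seasoning' as an iteration: the follow-up prompt is sent
# together with the full conversation history, so the model critiques its own
# earlier answer rather than fielding a fresh question.
# Assumes: `pip install openai`, OPENAI_API_KEY set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative choice

# The original request.
messages = [{"role": "user", "content": "Describe the history of the light bulb."}]
first = client.chat.completions.create(model=MODEL, messages=messages)
answer = first.choices[0].message.content
print("--- original answer ---")
print(answer)

# The iteration: append the model's answer, then the seasoning follow-up,
# so the critique targets what was actually said.
messages.append({"role": "assistant", "content": answer})
messages.append({
    "role": "user",
    "content": "Identify what's missing or invisible in the usual way this topic is discussed.",
})
second = client.chat.completions.create(model=MODEL, messages=messages)
print("--- seasoned follow-up ---")
print(second.choices[0].message.content)
```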
From the Pan to Pupils
The research included this week was brought to my attention by Liz Jones FCCT via the Women in Tech Netherlands group. The article cites research concluding that ChatGPT advises women to ask for lower salaries than men with otherwise identical attributes. The research, ‘Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models‘, was conducted by Prof. Dr. Ivan Yamshchikov, Aleksandra Sorokovikova, Pavel Chizhov and Dr. Iuliia E. It contains a lot of other nuances, such as demonstrating how the ‘personalisation’ feature in ChatGPT can mean that users are giving away their context in each chat window without realising it. I have mine off, because I don’t trust that telling it my name is Victoria won’t impact my results. Given this research, I think I’ll keep it that way!
On to a celebration of our community...
Brilliant Byters
I met an array of fabulous people at Luke Ramsden FRSA‘s AI day at St Benedict’s this week. Amongst them was Headteacher Hannah Widdison, who runs the Ealing Learning Partnership. Hannah and I had some very interesting conversations about bias mitigation for images, and it turns out that she has been doing some progressive work on image mitigation within the schools in her borough. She’s kindly offered to share some here over a few issues, and to contribute a chapter on them for our forthcoming AI Bias in Education book (see below).
Following on from Hannah’s image tips, Trudi Barrow created an excellent post that really demonstrates how bias has infiltrated the moving image - GenAI’s video generation. Trudi contrasted her own sketchbooks with what Midjourney would produce, and noticed how it tended towards over-sexualised, flirty images of women. Read her post here to see it for yourself.
Oh how I do like my...
Bias Girl’s Bias Ramblings
Building on the feature of Sam Rickman‘s research on evidence of gender bias in the care sector, Kelly Fincham drew our attention to a feature in The Irish Times called ‘AI medical tools downplay symptoms in women and ethnic minorities’. This research concluded that
...one way to reduce medical bias in AI is to identify what data sets should not be used for training in the first place, and then train on diverse and more representative health data sets.
I can’t help but agree. This is especially pertinent when we consider that a model called ‘Foresight’ was developed by the NHS/King’s/UCL earlier in the year, designed to predict probable health outcomes, such as hospitalisation or heart attacks. I’m pretty sure you don’t need me to point out that if you’re a woman or from an ethnic minority, things won’t be looking good for you!
On the Menu
I’ve created this section to feature websites that offer either bias-specific content or general GenAI content that aids critical oversight for educators.
If you haven’t watched them already, Prof. Rose Luckin created a YouTube channel a few weeks ago - ‘Rose’s AI’. It’s all based around baking analogies, which is both informative and hunger-inducing. This week she featured a video on bias, which is a brilliant explainer of where it comes from. Watch it below or via the link here: Why AI Gets People Wrong: Understanding AI Bias Through Biscuits.
On to another amazing initiative that spans the globe. Simone Hirsch, Alfina Jackson and Annelise Dixon have created a BEAST of an AI in Education website. It’s been a collaboration of many voices from across the globe, such as Stephen Wheeler, Brendon Shaw, H. Títílọlá Olojede, PhD, Ghalia Bendjeda, Anissa Jones, Matthew Karabinos, MAT and Pravin Kaipa M.Ed, to name but a few. It is a living repository of information, research and collaboration, with equity as the driving force. You can find it here (AI in Education), and my page on bias here.
Start to Learn the Bias Basics
Following on from my chapter in ‘The Educators’ 2026 AI Guide’, which came out at the start of the month, and my humorous ‘Who’s Afraid of the Big Bad Bias?’ session (you can find it here), I’ve had the pleasure of featuring on Alex Gray‘s podcast ‘The International Classroom’. It was a fabulous hour in which I was honoured to hear Alex’s reflections on the progressive AI pedagogy he is trialling, and how it relates to all things bias and oversight. Watch us here:
Forthcoming Book
Another reminder that my (well, our!) new book, AI Bias in Education: Performing Critical Oversight, is available for pre-order and will be released on December 1st. It contains many practical, thoughtful and relatable examples, mitigations, research and narratives that any educator would benefit from. It’s forged from contributors in our fabulous #BiasAware community, such as Al Kingsley MBE, Matthew Wemyss, David Curran, Arafeh Karimi, Luke Ramsden FRSA, Luke Harris, Dr Nicole Ponsford, Hannah Widdison, Clare Jarmy, Viktoria Mileva, Johan Hedlund, Udvas Das and Debabrota Basu, Andrew James Beattie and many more. Watch this space for more details.
Bias Girl’s Resources
If you haven’t checked out my resources already, you can find them here: genedlabs.ai/resources and on LinkedIn. You can find my 10 types of bias in GenAI Content series, Prompt Quick Wins: prior learning & social capital, 100 Ways of using GenAI in Teacher Education and more. All feedback and requests for useful stuff to develop are very welcome.
Bias Girl’s September Activity
Our two TEANs presentations at the Department of Education, University of Oxford’s AIEOU Conference went extremely well, and we are in the process of writing them up for the abstract collection. Many thanks to my amazing TEANs members Eleanor (Ellie) Overland, Georgia Aspinox, Emma Goto, Matthew Wimpenny-Smith FCCT and Prof Miles Berry. I have so many interesting points from the many sessions to summarise and pull together into a narrative for you all that it will have to wait until next issue!
If you’re in teacher education (initial, pre-service or existing CPD) and you’re interested in joining our TEANs group, here is the group link. Watch out for a new meeting coming up to plan the Teacher Educators GenAI Summit, which has now moved online and will be held on Tuesday 18th November. More details to follow next week.
Remember, through all of your prompting...








