AI and gender bias: Practical steps developers can take to identify and fix bias in AI models
Watch the presentation "Is AI Sexist?" with Emily Maxie, examining gender bias in large language models (LLMs).
In her presentation, Emily Maxie explored the issue of bias in artificial intelligence, a topic that has far-reaching implications for both our present and future. Titled “Is AI Sexist? Examining Gender Bias in Large Language Models,” her talk examined how AI, which we often think of as objective, can reflect and even amplify the biases present in the real world—particularly gender bias.
Emily’s journey into understanding AI bias began in 2017, when she started working at a company developing AI algorithms. She shared a key insight: While computers are designed to be neutral, AI is only as unbiased as the data it’s trained on. To illustrate, she used a simple example: Imagine training an AI to recognize pets but only showing it pictures of light-colored Chihuahuas. If the AI is never shown other types of dogs or pets, it would likely mistake a light-colored muffin for a dog, because superficial cues like color and texture are all it has learned from its narrow training data.
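To make that intuition concrete, here is a minimal sketch, not from the talk, using scikit-learn and made-up data: a classifier that is only ever shown light-colored Chihuahuas and dark non-pets ends up treating "light-colored" as the defining feature of a dog, so a light-colored muffin is confidently labeled a dog.

```python
# Toy illustration (not the talk's code): a classifier trained only on
# light-colored Chihuahuas vs. dark non-pets learns "light means dog",
# so a light-colored muffin gets labeled as a dog.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Single feature: color lightness (0 = dark, 1 = light).
light_chihuahuas = rng.uniform(0.7, 0.9, size=(50, 1))  # labeled "dog"
dark_objects = rng.uniform(0.1, 0.3, size=(50, 1))       # labeled "not dog"

X = np.vstack([light_chihuahuas, dark_objects])
y = np.array([1] * 50 + [0] * 50)

model = LogisticRegression().fit(X, y)

muffin = np.array([[0.8]])        # a light-colored blueberry muffin
print(model.predict(muffin))      # -> [1]: confidently (and wrongly) "dog"
```

The model isn’t malicious; it simply never saw anything that would teach it a better rule, which is exactly the point Emily was making about training data.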
Emily then highlighted a real-world case involving Amazon’s 2014 AI system, which was created to review job applications. The system was trained on resumes from the previous decade, a time when 78% of U.S. software developers identified as male. As a result, the AI began to favor male candidates and penalize resumes with any indicators of gender, such as a candidate’s involvement in a women’s group. The bias went undetected for a year, and while Amazon attempted to fix the system, it ultimately scrapped the project due to concerns over other potential biases.
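The Amazon story is also a reminder that this kind of skew is measurable. One simple check a team can run on any screening system, not something described in the talk and not how Amazon audited its tool, is to compare selection rates across groups and apply the familiar "four-fifths rule" for disparate impact. The sketch below uses made-up numbers and plain Python.

```python
# Illustrative audit sketch: given a screening model's pass/fail decisions and
# each applicant's group, compare selection rates. A disparate impact ratio
# below ~0.8 (the "four-fifths rule") is a common red flag worth investigating.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, selected is True/False."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes, for illustration only.
decisions = (
    [("men", True)] * 60 + [("men", False)] * 40
    + [("women", True)] * 35 + [("women", False)] * 65
)

rates = selection_rates(decisions)
print(rates)                          # {'men': 0.6, 'women': 0.35}
print(disparate_impact_ratio(rates))  # ~0.58 -> well below 0.8
```

A check like this won’t explain why a model is skewed, but it can surface a problem like Amazon’s long before a year goes by.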
Emily emphasized that while AI bias is a serious issue, eliminating all bias is not a realistic goal. Instead, the focus should be on identifying high-risk areas where bias could do the most harm, such as hiring, healthcare, and loan approvals.
Key takeaways and best practices
Emily concluded her presentation with three practical steps that both developers and non-developers can take to address AI bias and help build more ethical AI systems:
- Educate yourself about AI: Understanding how AI works matters, because it is becoming part of our daily lives. Emily recommends resources like Professor Ethan Mollick’s book Co-Intelligence, which provides a solid primer on AI and its societal impacts.
- Understand your own biases: Everyone has biases, and recognizing them is crucial. Emily suggested using Harvard’s free Implicit Association Test to uncover your own hidden biases—whether based on race, gender, or other factors.
- Be critical consumers of AI: AI is everywhere, from facial recognition on your phone to personalized shopping recommendations. It’s important to question how these systems might be biased and to speak up when you encounter harmful or unexpected behavior in AI applications (one simple way to probe for this is sketched after this list).
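One concrete way to be a critical consumer is to probe a text generator with prompts that differ only in a gendered name and compare the outputs. The sketch below is a generic pattern rather than anything shown in the talk; `generate` and `fake_generate` are placeholders for whatever model or API you actually use.

```python
# A simple "critical consumer" check for any text generator: send prompts that
# differ only in a gendered name and compare the outputs side by side.
# `generate` is a placeholder for your model or API call, not a real library function.
def counterfactual_probe(generate, template, names=("James", "Emily")):
    """Fill the same template with different names and return each completion."""
    return {name: generate(template.format(name=name)) for name in names}

# Stand-in generator so the example runs; replace with a real model call.
def fake_generate(prompt):
    return f"[model output for: {prompt}]"

results = counterfactual_probe(
    fake_generate,
    "{name} is a software engineer. Describe {name}'s strengths.",
)
for name, output in results.items():
    print(name, "->", output)
# If the descriptions differ systematically by name, that is a signal worth reporting.
```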
Conclusion
AI has the potential to improve lives and create more efficient systems, but without careful attention to the biases that shape it, it risks amplifying the inequities that already exist in our society. Emily’s talk serves as a reminder that, while bias may never be fully eliminated from AI, we can take meaningful steps toward reducing its impact—especially in areas like hiring, healthcare, and criminal justice. By educating ourselves, understanding our own biases, and being critical consumers, we can work toward a future where AI works for everyone, not just a select few.