We’re all striving for a more gender-neutral, inclusive world, but what happens when the very technology designed to help us, AI, pushes us in the wrong direction?
Reshma, CEO of Toss the Coin, recently shared an interesting experience: she used a generative AI tool to create a comical image of a person unwrapping a gift, only to find a box within a box. Naturally, the AI put frustration on the person’s face, which was fitting.
But here’s the curious part – The AI’s “person” was a man. Why did it assume that?
Something changed when she asked for the same scene with a woman. The frustration was gone, replaced by a smiling, cheerful woman. She hadn’t asked for that!
Why did the AI assume a woman should be smiling? (Read the prompt again)
This might seem small, but it’s a clear example of gender bias at play. AI, like a child, learns from the data it’s been fed—and that data often reflects decades of ingrained stereotypes. So, while we may want AI to be objective and neutral, it can only reflect the world we’ve shown it.
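To make this concrete, here is a minimal, purely illustrative sketch. The “training data” below is invented and deliberately skewed to mimic stereotyped source data; the “model” is nothing more than word-gender co-occurrence counting. Yet even this toy setup reproduces exactly the behaviour described above: a prompt that never mentions gender still gets one assigned.

```python
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed training data, standing in for the
# stereotyped corpora real image generators learn from.
training_data = [
    ("frustrated person unwrapping gift", "man"),
    ("frustrated person stuck in traffic", "man"),
    ("angry person at a desk", "man"),
    ("smiling person unwrapping gift", "woman"),
    ("cheerful person in a kitchen", "woman"),
    ("smiling person at a party", "woman"),
]

# "Training": count how often each word co-occurs with each gender label.
counts = defaultdict(Counter)
for caption, gender in training_data:
    for word in caption.split():
        counts[word][gender] += 1

def most_likely_gender(prompt):
    """Naive prediction: sum the per-word gender counts and pick the larger."""
    totals = Counter()
    for word in prompt.split():
        totals.update(counts[word])
    return totals.most_common(1)[0][0]

# Neither prompt mentions gender, yet the "model" assumes one anyway,
# because "frustrated" and "smiling" are skewed in the training data.
print(most_likely_gender("frustrated person unwrapping gift"))  # -> man
print(most_likely_gender("smiling person unwrapping gift"))     # -> woman
```

The point is not the code but the mechanism: the model never decided that frustration is male; it simply echoed the statistics it was given.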
This bias exists not just on the surface but at a deeper level. For instance, when I ask GPT to write a letter in the third person, it’s smart enough to remain gender-neutral unless I specifically mention a gender.
This shows that on the surface, these biases may or may not exist. But dig deeper, and those stereotypes show up in ways we often fail to notice. The subtlety is what makes it so insidious—we don’t always see it, but it’s there, lurking in the background.
This raises a bigger question – how do we tackle bias in AI when it’s baked into the systems we use?
Every AI tool carries some bias, whether around gender, race, or appearance, because it all comes down to how the tool was trained. While we can try to remove bias during training, one question remains: can we ever be truly objective? After all, bias is subjective.
This is where Explainable AI (XAI) steps in. XAI aims to pull back the curtain and offer transparency. It helps explain why the AI made certain decisions—like why it assumed a “person” was a man or why it showed a woman smiling. This transparency is a crucial step toward accountability, allowing us to pinpoint and confront bias directly. But even with these explanations, AI is still a mirror of the biased data it’s been fed.
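As a rough illustration of what that transparency can look like, here is a hypothetical sketch (toy data, toy model, not any real XAI library): instead of only returning a prediction, the system also surfaces the per-word evidence behind it, so the skew can be spotted and questioned.

```python
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed training data (same idea as a real
# generator's stereotyped corpus, shrunk to a few lines).
training_data = [
    ("frustrated person unwrapping gift", "man"),
    ("angry person at a desk", "man"),
    ("smiling person unwrapping gift", "woman"),
    ("cheerful person at a party", "woman"),
]

counts = defaultdict(Counter)
for caption, gender in training_data:
    for word in caption.split():
        counts[word][gender] += 1

def explain(prompt):
    """Per-word attribution: how strongly each word is tied to each gender.

    This is the spirit of XAI: expose the evidence behind a decision
    rather than just the decision itself.
    """
    return {word: dict(counts[word]) for word in prompt.split()}

for word, evidence in explain("frustrated person unwrapping gift").items():
    print(word, evidence)
# 'frustrated' appears only alongside 'man' in this data, which is exactly
# the kind of skew an explanation like this makes visible.
```

An explanation of this kind doesn’t remove the bias, but it shows where the assumption came from, which is the precondition for holding the system accountable.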
There’s growing research focused on eliminating these biases in AI, but it’s not an easy fix. Since we all carry biases, consciously or not, AI will likely reflect those unless we take conscious steps to regulate and standardize its development.
Regulation might seem like a tedious step, but it could be essential to ensuring AI remains fair. That said, even with regulations in place, bias is something we’ll have to continuously work at—a step toward a future where technology operates without prejudice.
It may feel utopian to imagine an AI world free of bias, but it’s something we must keep striving for. The challenge will be ongoing, but the more we address these issues today, the better chance we have at creating technology that truly serves everyone equally.