Whenever you read reports about technology behaving in racist or sexist ways, like when Google's search algorithms recently supplied pictures of half-dressed actresses when asked for "South Indian masala", you quickly realise that it's rarely the technology itself that deserves the blame; it's the creators and users of the technology who have passed their biases on to it.
Last year, while researching image recognition software, Vicente Ordóñez, a computer science professor at the University of Virginia, noticed a strange sexist pattern: the software would associate pictures of kitchens with women, not men, for example.
He decided to test two huge collections of labelled photos that are used to "train" image recognition software, including one supported by Microsoft and Facebook. He found that these large image collections came with built-in gender biases: a proclivity to link images of shopping and cleaning with women, and coaching and shooting with men.
The research also found that the software wasn't just reflecting the biases it was trained with, but actually magnifying them. If image recognition software is "trained" on a set of pictures that carries these biases, the software ends up making even stronger associations than the data itself contains.
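To see how amplification can happen, here is a deliberately simplified sketch (the numbers and the fallback rule are hypothetical, not from the researchers' actual models): if a model falls back on the majority gender from its training data whenever an image is ambiguous, its predictions end up more skewed than the training data was.

```python
# Hypothetical illustration of bias amplification (toy numbers, not real data).
# Suppose 67% of "cooking" images in the training set show women, and 40% of
# new images are too ambiguous for the model to tell the gender directly.

train_woman_rate = 0.67   # assumed skew in the training data
ambiguous_rate = 0.40     # assumed share of ambiguous test images

# For clear images the model gets the gender right, so they mirror the 67%
# skew; for every ambiguous image it guesses the majority class, "woman".
predicted_woman_rate = (1 - ambiguous_rate) * train_woman_rate + ambiguous_rate

print(round(predicted_woman_rate, 2))  # prints 0.8 -- higher than the 0.67 in the data
```

The model's output skew (about 80%) exceeds the skew in its training set (67%): the bias has been amplified, not merely reproduced.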
While it's pretty galling in and of itself that new technologies are being trained and programmed to mirror the real-world biases we're trying so hard to remove, this could also have larger consequences. As Wired points out, if replicated inside tech companies, these biases could affect widely used photo recognition software, and could seep into tools that analyse social media photos and activity in order to glean customer preferences.
We already have enough reason to see that this is a problem: back in 2015, Google's photo software tagged two black people as gorillas, presumably because the images it was trained on overwhelmingly linked white people to "humans", leaving black people and others to…be assigned to animals, I suppose.
In all seriousness though, this is something we should be careful about. In the early days of the Internet, the virtual world was hailed as a new, neutral space where gender and racial biases could be erased, giving us a sort of opportunity to start over. Now, seeing how easily the Internet is used to troll, harass and attack women, and looking at the distinctly gendered ways that technology itself is learning to behave, it's clear that we can't just sit back and take it for granted that the virtual world will be any less sexist than the real one. In fact, given how much of our daily lives is shifting to the virtual world, we should be particularly vigilant in fighting the sexism we encounter there.