How to keep human biases out of AI
Drawing on years of experience in creating AI systems, Kriti Sharma examines how human biases become built into machine learning. She says most AI systems learn by looking at what happened in the past, which gives them an underlying conservatism.
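The claim that systems trained on past decisions simply reproduce them can be illustrated with a toy sketch. The hiring records and group labels below are entirely hypothetical, and the "model" is deliberately minimal: it just learns the majority outcome seen for each group, which is enough to show how historical bias carries forward.

```python
from collections import Counter, defaultdict

# Hypothetical historical hiring records, skewed against group "B"
# (invented data, for illustration only).
history = [
    ("A", "hired"), ("A", "hired"), ("A", "hired"), ("A", "rejected"),
    ("B", "rejected"), ("B", "rejected"), ("B", "rejected"), ("B", "hired"),
]

def train(records):
    """Learn the majority past outcome for each group."""
    outcomes = defaultdict(Counter)
    for group, outcome in records:
        outcomes[group][outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train(history)
print(model)  # {'A': 'hired', 'B': 'rejected'} -- the past skew is reproduced
```

Nothing in the model is explicitly biased; the conservatism comes entirely from learning what happened before.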
She also says many AI programmers, probably unconsciously, perpetuate their own biases, for example by giving “servant” AIs such as Alexa and Siri female names and voices. It is possible from a technical perspective to create AIs without such biases, she says. It requires building greater diversity into the learning system and being aware of the need to counter stereotypes in AI interfaces and language.