Wednesday, May 11, 2016

Is Artificial Intelligence Racist?

Twenty years ago, Microsoft launched Clippy, the AI office assistant that became a lightning rod of scorn for the company. Two decades later, not resting on its laurels, Microsoft decided to demonstrate its AI prowess, this time by releasing a “chatbot” capable of carrying on conversations.
Unfortunately, the chatbot soon began making racially inflammatory comments, to the point that Microsoft found itself in the somewhat familiar position of shutting an AI project down and apologizing for it.
But what went wrong? Is AI really racist? 
The answer is “it depends.” Think of AI as similar to the immature child prodigy who you know could easily pick the locks on your house or reprogram your car’s on-board computer. You’re somewhat nervous about the kid falling in with the wrong crowd and using those abilities to do bad things before developing some sort of moral restraint.
Put another way, AI depends on data. To train a chatbot to have conversations, one very logical approach would be to build a massive database of conversations, compile a bunch of questions and statements, and then catalog the most common responses. If the chatbot reads a lot of Arnold Schwarzenegger movie scripts, it will likely be predisposed to violent tendencies and to telling people “I’ll be back.” If the database of conversations contains a lot of dialogue that could be considered racist, then the chatbot’s responses will be racist as well.
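The cataloging approach described above can be sketched in a few lines. Everything here is hypothetical, from the toy corpus to the `respond` helper, but it illustrates the core point: whatever reply dominates the training data becomes the bot’s reply, bias and all.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus of (prompt, response) pairs.
# A real chatbot would be trained on millions of logged conversations.
corpus = [
    ("hello", "hi there"),
    ("hello", "hi there"),
    ("hello", "go away"),
    ("will you come back?", "I'll be back"),   # the Schwarzenegger effect
    ("will you come back?", "I'll be back"),
]

# Catalog every response ever seen for each prompt.
response_counts = defaultdict(Counter)
for prompt, reply in corpus:
    response_counts[prompt][reply] += 1

def respond(prompt):
    """Reply with the most common response for this prompt in the training data."""
    if prompt not in response_counts:
        return "I don't understand"
    return response_counts[prompt].most_common(1)[0][0]

print(respond("hello"))                # the majority reply wins
print(respond("will you come back?"))  # a corpus full of action scripts answers accordingly
```

The bot has no notion of what its replies mean; it simply echoes the statistics of its corpus, which is exactly why a corpus full of racist dialogue produces a racist bot.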
Along similar lines, many large data sets depend on smartphones or devices like the Apple Watch. It’s not hard to imagine that such devices are more likely to be owned by wealthy people. Data sets built from such devices will therefore skew toward the wealthy, and AI built on top of them could in turn “discriminate” against the less fortunate in a society.
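This kind of sampling bias is easy to demonstrate with made-up numbers. In the sketch below, the population figures and ownership probabilities are invented for illustration: device ownership rises with income, so any statistic computed from device owners overstates how wealthy the population actually is.

```python
import random

random.seed(0)

# Hypothetical population: 800 lower-income people and 200 wealthy ones.
population = [30_000] * 800 + [150_000] * 200

# Assumed ownership rates: wealthy people are far more likely to own the device,
# so the "data set" is just the subset of people who happen to own one.
sample = [income for income in population
          if random.random() < (0.9 if income > 100_000 else 0.2)]

true_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)

print(f"true mean income:    {true_mean:,.0f}")
print(f"device-owner mean:   {sample_mean:,.0f}")  # skews well above the true mean
```

An AI trained on the device-owner sample would model a population far wealthier than the real one, which is the “discrimination” the paragraph above describes.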
So is AI racist? Yes, no, and maybe so.
