I found this great video from Google explaining how human bias can influence artificial intelligence (and a quick primer on how AI works). Basically, it comes down to this: as artificial as artificial intelligence may be, it’s still created by human programmers, and each of those humans has their own biases. Those biases can subconsciously seep into the code they write and the data they train on.
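To make that concrete, here’s a deliberately tiny, made-up sketch (not any real Google system, and the corpus is invented for illustration): a “chatbot” that answers a question by picking the word most often associated with the topic in its training text. If the authors of that text were biased, the bot faithfully repeats the bias back.

```python
# Toy illustration: a bot that "learns" answers from a hypothetical,
# skewed training corpus written by biased authors.
from collections import Counter

biased_corpus = [
    "a good coder is a man",
    "the best engineer is a man",
    "a great coder is a man",
    "a good coder is a woman",
]

def answer(topic, corpus):
    """Return the word most often seen at the end of sentences about the topic."""
    endings = Counter(
        sentence.split()[-1]
        for sentence in corpus
        if topic in sentence
    )
    return endings.most_common(1)[0][0]

print(answer("coder", biased_corpus))  # the skewed data makes "man" the top answer
```

Nobody wrote `if woman: rank_lower()` anywhere – the bias lives entirely in what the system was fed, which is exactly why it’s so easy to miss.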
Let’s say James Damore (the Google engineer fired for his memo claiming men are better suited to be in tech than women – old news, I know) rounds up a bunch of his friends who feel the same way, and they create a successful AI bot. Could that bot default to thinking that women are less important than men in tech? If it were a chatbot and you asked it “what makes a good coder,” would it come back and say “a man”? Is that so far-fetched? Damore was an engineer at Google (a company that builds a lot of AI), and no one really knew of his bias until his memo. And without said memo, Damore could well still be working at Google, building whatever it is he was building (side note – I have no idea whether Damore was involved in AI at Google).
I won’t say that’s a likely scenario by any means – just a good thought exercise. Google and Facebook have done a lot to tamp down the fake news, the hate, the negative stuff that might influence us (and AI). But when so many negative events like the Charlottesville violence or the Barcelona terror attacks seem to be happening more often, sometimes you just have to wonder: how will Skynet interpret it? That humans are divisive and hateful? Or that we unite with love when we respond to these events? Are we going to get the bad Arnold from Terminator 1, or the good Arnold from Terminator 2?