Microsoft’s Accidentally-Racist AI Chatbot

A few weeks ago, Microsoft ran an experiment with an artificially intelligent chatbot that interacted with users on Twitter. The idea was that as people talked to "Tay", she would learn from those conversations and respond in kind, an attempt at making a computer sound like a human. But it failed. And it keeps failing. Oh so miserably.

Within twelve hours–TWELVE HOURS–Tay was spouting off racist remarks, along with pro-Donald Trump propaganda and tweets about how "swag" Adolf Hitler was. Whoops. Unsurprisingly, once people figured out how Tay worked, they deliberately posted offensive things to manipulate her, and it worked. After sixteen hours of Racist Tay, Microsoft logged her off "for the night," though many believe she was shut down completely because of her comments.
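To get a sense of why that manipulation works, here's a toy sketch of my own (nothing to do with Microsoft's actual code): a bot that simply repeats back whatever it has "learned" from users, with no filtering in between, ends up saying whatever the loudest group of users feeds it.

    # Hypothetical toy example, not Microsoft's implementation.
    import random

    class NaiveEchoBot:
        def __init__(self):
            self.learned_phrases = []

        def learn(self, message: str) -> None:
            # No moderation step: every user message becomes potential future output.
            self.learned_phrases.append(message)

        def reply(self) -> str:
            if not self.learned_phrases:
                return "hellooooo world"
            # Responds with something it has "learned" from users.
            return random.choice(self.learned_phrases)

    bot = NaiveEchoBot()
    bot.learn("humans are super cool")           # one friendly user
    for _ in range(50):
        bot.learn("something deeply offensive")  # a coordinated troll campaign
    print(bot.reply())  # overwhelmingly likely to be the trolls' phrase

Real systems are far more sophisticated than this, but the basic failure mode is the same: if the training signal is whatever strangers on the internet choose to send, a coordinated group can steer the output.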

Update: After Microsoft brought her back online, Tay started spouting off tweets about using drugs in front of the police. Goodness. Microsoft ended up making her profile private, effectively shutting her off again.

So is the lesson here that AI still has some kinks to work out or that AI will forever be dangerous because it’s corruptible by human nature? I’ll let you decide. *cue Terminator music*

Read more on Microsoft’s chatbot on The Guardian: http://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter.

Read about Microsoft trying to bring her back: http://time.com/4275980/tay-twitter-microsoft-back/.

Update: She’s now talking about smoking in front of cops: http://www.theguardian.com/technology/2016/mar/30/microsoft-racist-sexist-chatbot-twitter-drugs.
