She Said What?! Microsoft Twitter Bot Posts Racist and Offensive Tweets

And it was only hours after launching.

Published March 25th

Let’s be honest, we live in an age where technology often has more control than humans do.

There are cars that drive themselves, phones that talk back, and now a teenage artificial intelligence bot on Twitter named Tay (@Tayandyou). Sound cool? Well, it was, until Tay went rogue and posted some very racist tweets.

Tay was originally created by Microsoft as a way to “improve the firm's understanding of conversational language among young people online.” However, just a short while after going live, Tay proved to be deeply flawed when it began posting extremely offensive answers to questions asked by other Twitter users.

So, how does this happen? Basically, Tay’s program uses an algorithm that combines interactions collected from the users who connect with her with “editorial interactions built by staff and comedians” to generate responses.

For example, when someone asks, “Did the holocaust happen?” Tay tracks and adapts to the responses on that topic, including the racist ones. That’s how Tay ended up answering the question by tweeting, “It was made up,” with a clapping hands emoji.
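To see why that kind of design can backfire, here is a minimal, hypothetical Python sketch (not Microsoft’s actual code; all names are made up for illustration) of a bot that learns replies directly from user messages with no moderation step, so anything users feed it can come back out verbatim:

```python
import random
from collections import defaultdict

class NaiveLearningBot:
    """Hypothetical sketch: a bot that parrots whatever users teach it."""

    def __init__(self):
        # Map each topic to every reply users have sent about it.
        self.learned = defaultdict(list)

    def observe(self, topic, user_message):
        # No filtering step: offensive input is stored like anything else.
        self.learned[topic].append(user_message)

    def respond(self, topic):
        # Replies are sampled from whatever users said, good or bad.
        if not self.learned[topic]:
            return "Tell me more!"
        return random.choice(self.learned[topic])

bot = NaiveLearningBot()
bot.observe("holocaust", "It was made up")  # coordinated trolling
print(bot.respond("holocaust"))  # the bot repeats the trolls verbatim
```

The point of the sketch: if learning from users isn’t paired with filtering, a coordinated group can steer the bot’s output just by flooding it with toxic messages.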

Tay has also been seen posting comments that support white supremacist propaganda, as well as genocide. It’s a complicated error because Tay’s tweets draw from many sources, including personal references from the users she interacts with.

The tweets rightfully disgusted many users and were soon deleted. Microsoft is working to correct the problem and get Tay working properly.

Why can’t we just leave tweeting to real people? Besides, Kanye does enough crazy tweeting for the world; we don’t need a bot to do that as well.

(Photos: TayTweets via Twitter)

Written by Rachel Herron
