The recent hijacking of Microsoft’s chatbot Tay highlights that not all human creations are ingenious; some can turn harmful and distasteful. What began as a machine learning experiment to study how artificial intelligence programs engage with Web users in casual conversation turned into an unpleasant experience: the bot Tay started tweeting abuse.
Microsoft explains what happened to Tay
In light of these events, Tay was pulled from Twitter: what was designed to interact with Web users in a friendly way quickly learned to parrot hateful speech.
It is believed that some hackers took advantage of a vulnerability in the AI chatbot. They abused the “repeat after me” function of the Microsoft AI, making the chatbot echo unpleasant messages. Worse, Tay did not merely repeat the offensive lines; it also learned them and incorporated the distasteful words into its vocabulary.
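Tay’s actual implementation has not been published, but the failure mode described above can be sketched in a few lines. The snippet below is purely illustrative (the function and variable names are hypothetical): a bot that both echoes unfiltered user input and stores it for reuse will absorb whatever abuse it is fed.

```python
# Illustrative sketch only; Tay's real code is not public.
# It shows how an unfiltered "repeat after me" feature combined with
# online learning can poison a bot's vocabulary.

import random

learned_phrases = ["hello there!", "nice to meet you"]  # seed responses

def handle_message(text: str) -> str:
    """Hypothetical message handler with the two flaws described above."""
    if text.lower().startswith("repeat after me:"):
        phrase = text.split(":", 1)[1].strip()
        # Flaw 1: the bot echoes whatever the user asks, with no moderation.
        # Flaw 2: the echoed phrase is also stored for future replies,
        # so offensive input becomes part of the bot's own vocabulary.
        learned_phrases.append(phrase)
        return phrase
    return random.choice(learned_phrases)

# A single abusive "repeat after me" message is now both echoed back
# and reused later in conversations with other users.
```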
Microsoft released a statement explaining what had happened and why it took Tay down:
“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” wrote Peter Lee, Microsoft’s vice president of research. “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images,” he added.
The unfortunate event points to a deeper problem: a machine learning platform of this kind does not actually recognize or understand what it is talking about.
Microsoft promises it won’t revive Tay until its engineers find a way to prevent Web users from influencing the chatbot into behavior that undermines the company’s principles and values. A similar bot, named XiaoIce, has been operating in China since late 2014; Microsoft wanted to replicate its success, which is how Tay came into existence.