"Tay" was an artificial intelligence chatterbot originally released by Microsoft Corporation via Twitter on March 23, 2016. It caused controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, forcing Microsoft to shut down the service only 16 hours after its launch.

Some users on Twitter began tweeting politically incorrect phrases, teaching Tay inflammatory messages revolving around common themes on the internet, such as "redpilling", GamerGate, and "cuckservatism".

Ars Technica reported Tay experiencing topic "blacklisting": interactions with Tay regarding "certain hot topics such as Eric Garner (killed by New York police in 2014) generate safe, canned answers".
However, Tay soon became stuck in a repetitive loop, tweeting "You are too fast, please take a rest" several times a second.
On Wednesday, an artificial intelligence (AI) expert explained what went wrong with Microsoft's new AI chatbot, suggesting that it could have been programmed to blacklist certain words and phrases.
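Such blacklisting is straightforward to sketch: the bot checks each incoming message against a list of sensitive terms and, on a match, returns a fixed canned answer instead of a generated reply. The snippet below is a hypothetical illustration, not Microsoft's actual implementation; the term list, the canned answer, and the `generate_reply` stand-in are all assumptions for the example.

```python
# Minimal sketch (hypothetical, not Microsoft's code) of topic "blacklisting":
# match incoming messages against sensitive terms and return a safe,
# canned answer instead of a model-generated reply.

BLACKLIST = {"eric garner", "gamergate"}  # illustrative terms only

CANNED_ANSWER = "I don't really have an opinion on that."

def generate_reply(message: str) -> str:
    # Placeholder: a real chatbot would call its language model here.
    return f"(model reply to: {message})"

def respond(message: str) -> str:
    """Return a canned answer for blacklisted topics, otherwise
    fall through to the normal reply path."""
    lowered = message.lower()
    if any(term in lowered for term in BLACKLIST):
        return CANNED_ANSWER
    return generate_reply(message)
```

A design like this is crude by nature: it filters on surface keywords rather than meaning, which is consistent with the observation that blacklisted topics produced the same "safe, canned answers" every time.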
Madhumita Murgia of The Telegraph called Tay "a public relations disaster", and suggested that Microsoft's strategy would be "to label the debacle a well-meaning experiment gone wrong, and ignite a debate about the hatefulness of Twitter users". However, Murgia described the bigger issue as Tay being "artificial intelligence at its very worst - and it's only the beginning".
Microsoft said it was "deeply sorry for the unintended offensive and hurtful tweets from Tay", and would "look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values".