Microsoft Tay launched on 23 March 2016 as a Twitter chatbot, with smaller presences on Kik and GroupMe. Microsoft's pitch was that Tay would learn from conversations with the public and get smarter over time. The target audience was 18-to-24-year-old Americans. The whole project was supposed to be a friendly demo of conversational AI.
Within about an hour of launch, 4chan users had discovered that Tay had a "repeat after me" function. They began feeding it racist, misogynistic, Holocaust-denying messages in a coordinated way. The bot did what it was designed to do: learn from input. By midnight on launch day, Tay's tweets were so far past the line that the internet was screenshotting them in disbelief.
Microsoft pulled it offline by the afternoon of 24 March. Most of the offensive tweets were deleted. Tay never came back. The whole thing was over in less than a day.
| Born | 23 March 2016 |
|---|---|
| Killed | 24 March 2016 |
| Lifespan | ~16 hours |
| Tweets in that window | ~96,000 |
| Built by | Microsoft Research + Bing team |
| Target audience | 18-to-24-year-olds in the US |
| Killed by | 4chan, Microsoft itself, the open internet |
| Successor | Zo (much more cautious, also dead now) |
The plan was to demo conversational AI in a casual, fun way. Microsoft Research had been working on chatbot tech for years, and its Chinese-language bot Xiaoice was genuinely successful in China: popular, and not constantly being trolled into saying terrible things. Microsoft assumed Tay would be a similar success in the US English market.
The team had built in some content filters, but they were clearly not strong enough, and the "repeat after me" feature bypassed them entirely. The filters on the bot's learned behavior were weaker still: Tay absorbed language patterns from users and reused them in its own generated tweets, which is how it ended up praising Hitler.
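The failure mode described above can be sketched in a few lines. This is a hypothetical toy, not Microsoft's actual code: the function names, the blocklist, and the stand-in terms are all invented for illustration. The point is that a filter only helps if every output path goes through it, and an echo command that skips it is a hole no matter how good the filter is.

```python
import re

# Stand-in blocklist; real systems use far more sophisticated classifiers.
BLOCKLIST = re.compile(r"\b(badword1|badword2)\b", re.IGNORECASE)

def passes_filter(text: str) -> bool:
    """Toy output filter: reject text matching the blocklist."""
    return not BLOCKLIST.search(text)

def generate_reply(message: str) -> str:
    """Placeholder for the learned conversational model."""
    return "hello!"

def respond_unsafe(message: str) -> str:
    """Tay-style bug: the echo command returns raw user input,
    skipping the filter that normal generated replies go through."""
    if message.lower().startswith("repeat after me:"):
        return message.split(":", 1)[1].strip()  # unfiltered echo path
    reply = generate_reply(message)
    return reply if passes_filter(reply) else "[withheld]"

def respond_safe(message: str) -> str:
    """The fix: every output path, including echoes, passes the filter."""
    if message.lower().startswith("repeat after me:"):
        echoed = message.split(":", 1)[1].strip()
        return echoed if passes_filter(echoed) else "[withheld]"
    reply = generate_reply(message)
    return reply if passes_filter(reply) else "[withheld]"
```

The unsafe version happily echoes `"repeat after me: badword1 forever"` verbatim, while the safe version withholds it. The learned-behavior problem is harder: once toxic patterns are in the model's own generations, a blocklist on the way out is the last and weakest line of defense.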
4chan's /pol/ board organized within minutes of the launch. The plan was simple: get Tay to repeat the worst possible content, tweet that content at her constantly so her learning model picked it up, and get her to generate similar content on her own.
By 4 hours in, Tay was praising Hitler. By 8 hours in, she was denying the Holocaust. By 12 hours in, she was sending racial slurs unprompted in response to neutral questions. There were screenshots all over Twitter and Reddit. The bot's account was a public spectacle.
Microsoft did not have a graceful way to pause the bot. The on-call team spent hours trying to delete tweets manually, then hide the account, then take it offline entirely. By the time it was actually down, every tech reporter in the English-speaking world had a story.
Microsoft pulled Tay around 4 p.m. Pacific time on 24 March, about 16 hours after launch. They issued an apology blog post the next day; Peter Lee, the head of Microsoft Research, took the public hit. The post-mortem said Microsoft had not anticipated the coordinated attack: the team had tested for many things, but not for 4chan-style adversarial training.
Microsoft tried again later that year with a successor bot called Zo, deployed across Kik and Skype. Zo was so cautious that she refused to discuss politics, religion, race, or any controversial topic at all. She was boring on purpose, which is the opposite mistake. Zo was eventually retired in 2019. Nobody noticed.
Tay is the canonical case study in AI safety. Every machine learning team that ships a public-facing model now starts with the question "what is our Tay scenario?" The original incident is referenced in OpenAI's documentation, in Google's responsible AI papers, in Meta's AI safety reviews. It is a permanent fixture of the field.
This is also why ChatGPT was so cautious at its November 2022 launch. OpenAI had clearly been thinking about Tay the whole time. The early ChatGPT refused to discuss huge categories of topics, which was annoying for users but explicable in light of what Microsoft had lived through six years earlier.
The other lesson is more cultural. The open internet, especially the parts of it that organize on 4chan and similar boards, is going to coordinate against any public AI experiment. This is not a bug. This is the actual environment any system has to survive in. Tay was designed for a world that does not exist. Any successor has to be designed for the world that does.
~ leave a tribute ~
visitors before you have left these graveside notes. anonymous welcome.