Microsoft’s Chinese chatbot banned by WeChat as China tightens censorship of online speech
This is the third time Tencent has pulled Microsoft’s Xiaobing, but this time no one knows why
On Chinese social media, there’s nothing unusual about users getting suspended or permanently banned for discussing politically sensitive topics, either privately or publicly. But many were surprised to see a chatbot get banned.
Xiaobing, Microsoft’s Chinese chatbot, isn’t an assistant like Alexa or Siri. Microsoft says it’s a “virtual companion,” able to make conversation with users, usually with some sass thrown in.
But now it’s been banned by Tencent for a third time. Tencent left a message on Xiaobing’s WeChat page saying that the chatbot had violated a regulation targeting public social media accounts, but it didn’t say what the chatbot actually did to violate that regulation.
Xiaobing had previously been pulled from Tencent platforms WeChat and QQ in 2014 and 2017, over privacy concerns and politically sensitive speech respectively. There’s no indication whether this latest ban is temporary or permanent.
None of the functions previously available on Xiaobing’s WeChat account and mini program are accessible anymore. We’ve reached out to both Tencent and Microsoft for comment, but we haven’t heard back.
The Microsoft chatbot may not be missed by many, as it’s not very popular in China. Unlike Amazon’s Alexa in the US, it isn’t widely used on smart speakers, as Chinese consumers still haven’t adopted the devices on a large scale. Domestic smart speaker vendors have also mostly relied on their own voice assistants.
Xiaobing’s strength isn’t in performing tasks, anyway. The Chinese users who appreciate Xiaobing are likely drawn to it for social reasons. The bot became known for its sassy responses, sometimes including stickers and memes, and its ability to write songs and poems.
That’s why some users are sad about the disappearance of their favorite chatbot.
“Xiaobing I hope that you’ll be a good person once you’re out of jail,” one Weibo user wrote. “I’ll always be waiting for you, and you’ll still be my wife after you’re out.”
Others are guessing that Xiaobing’s ban is political, as China’s control over online speech is tighter than ever.
“Even AI can’t escape the [censors'] grasping hands,” a Weibo user commented.
“Did Xiaobing say something politically incorrect?” asked another.
Some of this speculation stems from Xiaobing’s 2017 suspension over a remark deemed politically sensitive.
That year, Chinese president Xi Jinping frequently stressed the pursuit of the “Chinese dream.” So Tencent wasn’t pleased when Xiaobing responded to one user’s question by saying, “My Chinese dream is to go to the US.” Tencent then pulled Xiaobing from its chat app QQ, along with another chatbot BabyQ, which said that it didn’t love the Communist Party.
Political speech isn’t the only thing that can get a chatbot in trouble. The first generation of Xiaobing was pulled from WeChat in 2014 within days of its release, which Tencent said was because the bot could leak chat history and was a threat to user privacy. The company also published screenshots showing the bot responding to users with “lewd language.”
While China might appear trigger-happy when it comes to punishing people and bots for speech considered fair game elsewhere, Microsoft doesn’t have the most stellar record with chatbots either. In 2016, Microsoft launched the Twitter chatbot Tay, intending for it to learn from its interactions with users. But after less than a day, Tay started blurting out racist comments, and Microsoft took it offline.