TikTok plans to do more against hateful content after reports of a "Nazi problem" emerge.
Brent Lewin | Bloomberg | Getty Images
LONDON – Social media app TikTok has pledged to do more to tackle hateful content and behavior on its platform after reports that it has a problem with Nazi and white supremacist material.
In a blog post on Wednesday, TikTok said it would restrict the coded language and symbols that some users deploy to spread hateful ideas. While TikTok already works to remove hate speech and hateful ideologies such as neo-Nazism and white supremacy, it now plans to remove content promoting adjacent ideologies such as white nationalism and the white genocide conspiracy theory.
The company also said it will remove content stemming from these ideologies, as well as content linked to movements such as "identitarianism" and male supremacy.
Like Facebook and Twitter, TikTok has already banned content on its platform that denies the Holocaust and other violent tragedies. However, it said it is taking further action to clean up misinformation and hurtful stereotypes about Jewish, Muslim and other communities. This includes misinformation about prominent Jewish individuals and families who are used as a proxy to spread anti-Semitism.
In July, the BBC found that TikTok's algorithm had promoted an anti-Semitic death camp meme. The company removed a collection of videos containing a "gross" anti-Semitic song that had been viewed 6.5 million times.
Content promoting conversion therapy or the idea that no one is born LGBTQ+ will also be removed, TikTok said.
Danny Stone, executive director of the Antisemitism Policy Trust, said in a statement: "TikTok has a large and growing audience and an equally great responsibility to ensure that hateful materials are not served to those who use its platform."
"We are therefore pleased that the company is trying to deepen its understanding and expand its policies against anti-Semitism and other forms of racism, and we welcome the changes announced today."
TikTok employs over 10,000 people worldwide working on trust and safety, many of whom review and moderate content uploaded to the platform. The company also uses algorithms to flag and remove content that breaks its rules.