Twitter has introduced a mute filter, muted conversations and new hate-speech reporting infrastructure in an attempt to combat the trolls, eggs and hate speech on the platform.
Twitter has expanded its mute feature to let users filter out specific phrases, keywords and hashtags, a capability similar to the comment filter Instagram launched in September this year. B&T chatted with Marne Levine, Instagram's global chief operating officer, about that moderation feature.
Twitter's new mute tool also lets users mute entire conversation threads, so they can stop receiving notifications from a specific thread without removing it from their timeline or blocking any users.
Twitter also announced a "hateful conduct" reporting option: when users report an "abusive or harmful" tweet, they'll now see an option for "directing hate against a race, religion, gender, or orientation". This gives users a more direct way to report hate speech, whether it targets them or others, whenever they see it happening.
Twitter explained the changes in its announcement:

Twitter is the fastest way to see what’s happening and what everyone is talking about. What makes Twitter great is that it’s open to everyone and every opinion. We’ve seen a growing trend of people taking advantage of that openness and using Twitter to be abusive to others.
The amount of abuse, bullying, and harassment we’ve seen across the Internet has risen sharply over the past few years. These behaviors inhibit people from participating on Twitter, or anywhere. Abusive conduct removes the chance to see and share all perspectives around an issue, which we believe is critical to moving us all forward. In the worst cases, this type of conduct threatens human dignity, which we should all stand together to protect.
Because Twitter happens in public and in real-time, we’ve had some challenges keeping up with and curbing abusive conduct. We took a step back to reset and take a new approach, find and focus on the most critical needs, and rapidly improve. There are three areas we’re focused on, and happy to announce progress around today: controls, reporting, and enforcement.
Twitter has long had a feature called “mute” which enables you to mute accounts you don’t want to see Tweets from. Now we’re expanding mute to where people need it the most: in notifications. We’re enabling you to mute keywords, phrases, and even entire conversations you don’t want to see notifications about, rolling out to everyone in the coming days. This is a feature we’ve heard many of you ask for, and we’re going to keep listening to make it better and more comprehensive over time.
Our hateful conduct policy prohibits specific conduct that targets people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. Today we’re giving you a more direct way to report this type of conduct for yourself, or for others, whenever you see it happening. This will improve our ability to process these reports, which helps reduce the burden on the person experiencing the abuse, and helps to strengthen a culture of collective support on Twitter.
And finally, on enforcement, we’ve retrained all of our support teams on our policies, including special sessions on cultural and historical contextualization of hateful conduct, and implemented an ongoing refresher program. We’ve also improved our internal tools and systems in order to deal more effectively with this conduct when it’s reported to us. Our goal is a faster and more transparent process.
We don’t expect these announcements to suddenly remove abusive conduct from Twitter. No single action by us would do that. Instead we commit to rapidly improving Twitter based on everything we observe and learn.