No, it isn’t an edit button. But Twitter is experimenting with a new feature to get you to stop and rethink sending out nasty tweet replies.
The company appears to be designing the feature to tone down the toxic conversations that can erupt on the platform.
“When things get heated, you may say things you don’t mean,” Twitter said on Tuesday. “To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.”
The company told PCMag the experiment will only focus on tweet replies for a small number of English-language speakers. To determine what language counts as harmful, Twitter will draw on tweets that have previously been reported to the company for abusive behavior. AI-powered algorithms will then flag the content before it gets posted.
Affected users will see a prompt highlighting the questionable choice of words and asking whether they'd like to revise the language before posting the reply.
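The flow the article describes — previously reported tweets inform what counts as harmful, a reply gets flagged, and the user is prompted before publishing — can be sketched loosely. To be clear, this is a hypothetical illustration, not Twitter's actual system: the phrase list, threshold, and function names are all invented, and Twitter's real approach uses trained AI models rather than a keyword list.

```python
# Hypothetical sketch of a flag-and-prompt flow (NOT Twitter's implementation).
# In the real system, a trained model scores the reply; here a simple phrase
# list stands in for that model so the gating logic is visible.

# Stand-in for language learned from tweets reported as abusive.
FLAGGED_PHRASES = {"you idiot", "shut up", "garbage take"}

def should_prompt_revision(reply: str, threshold: int = 1) -> bool:
    """Return True if the reply contains enough flagged language
    to warrant a 'rethink this reply' prompt."""
    text = reply.lower()
    hits = sum(phrase in text for phrase in FLAGGED_PHRASES)
    return hits >= threshold

def submit_reply(reply: str) -> str:
    """Gate publishing behind the revision prompt."""
    if should_prompt_revision(reply):
        return "prompt: revise before publishing?"
    return "published"
```

Under this sketch, `submit_reply("Shut up, what a garbage take")` would trigger the revision prompt, while an innocuous reply would publish immediately; the key design point is that the check happens before the tweet is sent, not after.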
The experimental feature sounds similar to what Instagram introduced in December to stop bullying. The social media platform also began using AI-powered algorithms to detect when users potentially post offensive content, which can result in Instagram giving them the option to revise the caption.
Twitter said it’ll review the results from the “rethink a reply” experiment before determining the next steps, which could include adopting the feature for all users.
In the meantime, some users are calling on the company to add an “edit button” instead. But earlier this year, Twitter CEO Jack Dorsey threw cold water on the idea. His main concern is that an edit button would let users alter existing tweets that have already been retweeted thousands of times. “We’ve considered a one-minute window, or a thirty-second window to correct something. But that also means we’d have to delay sending that tweet out,” Dorsey told Wired in January.