Twitter labelling Trump’s tweets raises questions about what should be allowed on social media
It isn’t news to anyone that President Donald Trump is a prolific tweeter. The president routinely takes to Twitter to vent frustrations, criticize opponents and make policy pronouncements. He has even been known to post more than 100 tweets in a single day.
Throughout his presidency, Trump has faced criticism for his social media habits. From his @realDonaldTrump account, the president routinely promotes conspiracy theories, shares false information, and launches personal attacks online.
Historically, Twitter positioned itself as a champion of free expression, allowing nearly all content to remain on the platform, even content that was harmful or objectionable. Today, the platform is often criticized for uneven application of its own policies about which posts or accounts stay up and which are taken down or suspended.
Twitter has had to grapple with these criticisms, and last month adopted new measures to help curb the spread of coronavirus-related mis- and disinformation. In early May 2020, the platform started adding fact-checking labels to posts.
Then, on May 26, the US president found his tweets subject to a fact-check. In a flurry of tweets, Trump claimed that elections held by mail-in ballot would result in election fraud. On two of these, Twitter added a label reading “Get the facts about mail-in ballots” and redirected users to news articles from mainstream media sources and fact-checkers describing Trump’s claims of voter fraud as unsubstantiated.
The move came as a surprise to many. A Twitter spokesperson explained that the flagged tweets contained “potentially misleading information about voting processes and have been labeled to provide additional context around mail-in ballots.”
Three days later, Twitter again took action on one of the president’s tweets.
This time, the note Twitter added was not a fact-check but a notice stating that the tweet had violated Twitter policy by ‘glorifying violence.’ The flagged tweet was a response to the protests that arose in Minnesota following the death of George Floyd during a police arrest.
Twitter chose to leave the tweet up, despite its violation of the platform’s terms of service, deeming the content to be in the public interest because it came from the president of the United States. To limit the tweet’s reach, however, Twitter disabled likes, retweets, and replies on the post.
These two instances raise a number of important questions about communication on social media platforms, and about what responsibility, if any, these companies have to limit or qualify speech and to stop the spread of false information on their sites.
Trump Tweet #1 — Should Twitter Help Users Tell What’s True?
Trump Tweet #2 — Violent Threats and the Public Interest
Companies such as Twitter and Facebook have been under pressure to stop the spread of false and misleading information, but there is no consensus on how to do so. The platforms themselves have been reluctant to intervene in what users post beyond taking down content that clearly violates their terms and conditions, but the current climate of information disorder has created significant pressure to act.
Twitter CEO Jack Dorsey and Facebook founder and CEO Mark Zuckerberg each set content policy for their respective companies. Both men have been criticized for allowing dangerous misinformation to spread quickly on their platforms, but in the case of Donald Trump’s posts, their opinions and actions have diverged.
Discussing the decision to add flags to Trump’s tweets, Jack Dorsey explained that “Our intention is to connect the dots of conflicting statements and show the information in dispute so people can judge for themselves. More transparency from us is critical so folks can clearly see the why behind our actions.”
In contrast, Mark Zuckerberg defended his decision to leave Trump’s posts alone: “We have a different policy than Twitter on this … I believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online. I think in general, private companies shouldn’t be, especially these platform companies, shouldn’t be in the position of doing that.”
Content moderation: Social media platforms’ practice of determining what user content is acceptable to show, monitoring posts, and removing what violates the site’s terms.

Fact-check: The product of research conducted to investigate a claim and determine whether it is accurate based on the best available evidence.

Public interest: Generally speaking, ‘public interest’ refers to the concerns that citizens share in common. In a media context, the concept typically relates to accessing information that the public has a right to know.