As platform manipulation tactics continue to evolve, Twitter said it is expanding its rules to better reflect how it identifies fake accounts and what types of inauthentic activity violate its guidelines, ahead of the US mid-term elections in November.
“Some of the factors that we will take into account when determining whether an account is fake include use of stock or stolen avatar photos, use of stolen or copied profile bios and use of intentionally misleading profile information, including profile location,” Del Harvey, Vice President of Trust and Safety at Twitter, said in a blog post.
“We are expanding our enforcement approach to include accounts that deliberately mimic or are intended to replace accounts we have previously suspended for violating our rules,” she added.
Twitter has also expanded the criteria under which it will take action on accounts that claim responsibility for a hack, including those that issue threats or offer public incentives to hack specific people and accounts.
“In August, we removed approximately 50 accounts misrepresenting themselves as members of various state Republican parties,” said Yoel Roth, Head of Site Integrity at Twitter.
“We have also taken action on Tweets sharing media regarding elections and political issues with misleading or incorrect party affiliation information,” he added.
In August, Twitter removed 770 accounts engaging in coordinated behaviour which appeared to originate in Iran.
“Our automated detections continue to identify and challenge millions of potentially spammy and automated accounts per week. In the first half of September, we challenged an average of 9.4 million accounts each week,” said Roth.
Twitter has also seen a decline in the average number of spam-related reports it receives from users each day — from approximately 17,000 per day in May to approximately 16,000 per day in September.