
Twitter’s ML initiative will study impact of its algorithm

In recent years, social media algorithms have been cited as the cause of all manner of problems. This view is usually held by content creators or users of these platforms, with the companies behind the algorithms often failing to provide public insight into how they work.

Twitter could be changing that as far as its own algorithm is concerned, an algorithm we have criticised on more than one occasion for creating an echo chamber in terms of the content you’re served up or interact with.

This change comes in the form of the newly announced Responsible Machine Learning Initiative.

The initiative will see Twitter enlist data scientists and engineers from across the company to investigate any potential “unintentional harms” that the platform’s algorithms may cause.

“When Twitter uses ML, it can impact hundreds of millions of Tweets per day and sometimes, the way a system was designed to help could start to behave differently than was intended. These subtle shifts can then start to impact the people using Twitter and we want to make sure we’re studying those changes and using them to build a better product,” explains the company in a blog post regarding the announcement.

“Leading this work is our ML Ethics, Transparency and Accountability (META) team: a dedicated group of engineers, researchers, and data scientists collaborating across the company to assess downstream or current unintentional harms in the algorithms we use and to help Twitter prioritise which issues to tackle first,” it adds.

Twitter will also be sharing its findings, particularly anything it unearths with regard to bias within its algorithms. To that end, the company says it will publish insights into the following areas in the coming months:

  • “A gender and racial bias analysis of our image cropping (saliency) algorithm
  • A fairness assessment of our Home timeline recommendations across racial subgroups
  • An analysis of content recommendations for different political ideologies across seven countries.”

While it is encouraging to see Twitter take bias on its platform seriously, it is not the first company to do so. It will be interesting to see whether the findings of these studies result in actual refinements to the algorithm and tangible improvements to how the platform itself works.

[Image – Photo by MORAN on Unsplash]
