Twitter continues working to find out why its image cropping appears to be biased

A few weeks ago, Twitter users spotted something concerning about the way the social network was cropping preview images in tweets.

Users discovered that, at times, Twitter would crop an image so that a white face was shown instead of a black face. This was troubling given that the cropping is driven by machine learning, and many worried it was the result of bias in the software.

True to its word, Twitter has investigated the matter and reported back to its users. Before we dive into its findings, it's important to highlight how Twitter tested the neural network that crops images.

Much like the users who spotted the problem, Twitter began by testing image saliency (the areas of an image you are most likely to look at) against images featuring two demographic groups.

Each trial combined two faces into a single image, with their order in the image randomised.

“Then, we located the maximum of the saliency map, and recorded which demographic category it landed on. We repeated this 200 times for each pair of demographic categories and evaluated the frequency of preferring one over the other,” wrote Twitter.
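Twitter hasn't published the code for this experiment, but based on its description the procedure is straightforward to sketch. Below is a minimal Python illustration of the pairing test; the `saliency_map` function is a hypothetical stand-in for Twitter's trained saliency model, which isn't public, and the image-stacking details are our own assumptions.

```python
import numpy as np

def saliency_map(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for Twitter's trained saliency model.

    Assumed to return a 2-D array of per-pixel saliency scores with
    the same height and width as the input image.
    """
    raise NotImplementedError

def run_pairing_trials(faces_a, faces_b, trials=200, rng=None):
    """Run the pairing test Twitter describes: combine one face from
    each demographic group into a single image (order randomised),
    find the maximum of the saliency map, and record which face it
    landed on. Assumes all face crops share the same dimensions."""
    rng = rng or np.random.default_rng()
    counts = {"group_a": 0, "group_b": 0}
    for _ in range(trials):
        face_a = faces_a[rng.integers(len(faces_a))]
        face_b = faces_b[rng.integers(len(faces_b))]
        # Randomise which group's face goes on top of the stacked image.
        a_on_top = rng.random() < 0.5
        combined = np.vstack([face_a, face_b] if a_on_top else [face_b, face_a])
        # Locate the global maximum of the saliency map.
        flat_idx = np.argmax(saliency_map(combined))
        row, _col = np.unravel_index(flat_idx, combined.shape[:2])
        # The top half of the stacked image belongs to whichever face
        # was placed first in the stack.
        top_won = row < combined.shape[0] // 2
        winner = "group_a" if top_won == a_on_top else "group_b"
        counts[winner] += 1
    return counts  # how often each group "won" the crop
```

With a real saliency model plugged in, a strong skew in the returned counts toward one group is exactly the kind of systematic preference this test is designed to surface.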

“While our analyses to date haven’t shown racial or gender bias, we recognize that the way we automatically crop photos means there is a potential for harm. We should’ve done a better job of anticipating this possibility when we were first designing and building this product,” the social network said.

But right after stating that it hadn't found evidence of bias, Twitter went on to say that it is working to “decrease our reliance on ML-based image cropping” by giving users the ability to choose how their images are cropped.

“Going forward, we are committed to following the ‘what you see is what you get’ principles of design, meaning quite simply: the photo you see in the Tweet composer is what it will look like in the Tweet,” Twitter said, noting that there would be some exceptions, such as when photos aren’t a standard size or are really long or wide.

We’re not entirely sure what the dimensions of a “standard size” image are, given that “standard” likely differs from one smartphone to the next.

“Bias in ML systems is an industry-wide issue, and one we’re committed to improving on Twitter. We’re aware of our responsibility, and want to work towards making it easier for everyone to understand how our systems work. While no system can be completely free of bias, we’ll continue to minimize bias through deliberate and thorough analysis, and share updates as we progress in this space,” the firm said.

Well, that’s confusing considering Twitter said earlier in its statement that “our analyses to date haven’t shown racial or gender bias”. So which is it, then? Are no systems free of bias, or is Twitter’s different in that regard?

The good news is that Twitter is conducting more analysis of its systems and will share its findings as it goes. There’s even talk of making that analysis open source so that others can help keep Twitter accountable.

[Source – Twitter]
