Facebook is creating its own internal teams to study the algorithms on its primary social media platform and Instagram, with the objective of finding out whether said algorithms are racially biased.
This was initially reported by the Wall Street Journal (paywall) earlier in the week, but now The Verge has received comment from a Facebook spokesperson to confirm that it is indeed proceeding with the study.
The newly formed teams will be “tasked with ensuring fairness and equitable product development are present in everything we do,” the spokesperson told The Verge.
“We will continue to work closely with Facebook’s Responsible AI team to ensure we are looking at potential biases across our respective platforms,” they added.
Given Facebook’s previous reluctance to examine bias in its algorithms, this is quite an about-face for the company, and a welcome one. The change appears to have been spurred on by the Black Lives Matter movement and the anti-racism protests Stateside.
There is also the factor of big brands pulling digital advertising from Facebook in recent weeks, but the company has not commented on that matter with regard to this latest change.
“The racial justice movement is a moment of real significance for our company. Any bias in our systems and policies runs counter to providing a platform for everyone to express themselves,” Vishal Shah, VP of Product at Instagram, told The Verge in a statement.
“While we’re always working to create a more equitable experience, we are setting up additional efforts to continue this progress — from establishing the Instagram Equity Team to Facebook’s Inclusive Product Council,” he added.
It is good to see Facebook taking potentially racially biased elements of its platforms seriously, but the fact that it is creating its own internal teams does give us pause. One need only look at the recently formed Oversight Board as an example of a body that does not wield real power within the organisation.
Hopefully these newly formed teams will not suffer the same fate, and meaningful changes to address racial bias within the algorithms will be made in future.