IBM’s cloud software can now detect bias in AI and tell you how it works


Bias is among the most important things to watch for when developing software that makes use of artificial intelligence (AI).

This is one of the reasons why so many AI developers champion diversity in their teams. Even then, however, not everybody is perfect and no team can manage its bias flawlessly.

The good news is that IBM has now added bias detection to its IBM Cloud solution.

The software is fully automated and not only detects bias but also details how an AI programme is making decisions. This is all done in real-time.

“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice. We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making,” senior vice president of Cognitive Solutions at IBM, David Kenny said in a statement.

IBM says that the explanations for how an AI is making decisions are provided in easy-to-understand terms. Users are able to see which factors weighed the decision in one direction compared to another, the confidence in the recommendation, and the factors behind that confidence.
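To picture the kind of explanation described above, consider a toy linear scoring model: each feature's weight multiplied by its value gives that factor's contribution to the decision, and a logistic function over the total score gives a confidence figure. This is a hypothetical sketch for illustration only, not IBM's actual method; the weights and applicant values are invented.

```python
import math

# Assumed (invented) model weights and applicant data for illustration.
weights = {"income": 0.8, "debt": -1.2, "years_employed": 0.5}
applicant = {"income": 1.4, "debt": 0.6, "years_employed": 2.0}

# Each factor's contribution is weight * value; the sum is the decision score.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# A logistic (sigmoid) over the score stands in for confidence in "approve".
confidence = 1 / (1 + math.exp(-score))

# Show factors ranked by how strongly they pushed the decision either way.
for factor, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{factor:>15}: {c:+.2f}")
print(f"confidence(approve) = {confidence:.2f}")
```

Here the dashboard-style output would show, for example, that employment history pushed the decision towards approval while debt pulled against it, together with an overall confidence figure.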

All of this information is presented in a visual dashboard, which means a business can access it at a glance.

The firm says that these new IBM Cloud capabilities work with AI built on a variety of platforms including Watson, TensorFlow, SparkML, AWS SageMaker, and AzureML.

“In addition, IBM Research is making available to the open source community the AI Fairness 360 toolkit – a library of novel algorithms, code, and tutorials that will give academics, researchers, and data scientists tools and knowledge to integrate bias detection as they build and deploy machine learning models. While other open-source resources have focused solely on checking for bias in training data, the IBM AI Fairness 360 toolkit created by IBM Research will help check for and mitigate bias in AI models. It invites the global open source community to work together to advance the science and make it easier to address bias in AI,” IBM said.
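One of the standard fairness metrics a toolkit of this kind computes is the "disparate impact" ratio: the rate of favourable outcomes for an unprivileged group divided by the rate for a privileged group. The following is a plain-Python sketch of that metric for illustration, not the AI Fairness 360 toolkit's actual API; the group labels and loan data are invented.

```python
def disparate_impact(outcomes, groups, favourable=1,
                     unprivileged="B", privileged="A"):
    """Ratio of favourable-outcome rates: unprivileged / privileged.

    A value near 1.0 suggests parity between groups; values below
    roughly 0.8 are commonly treated as a red flag for bias.
    """
    def rate(group):
        # Favourable-outcome rate within one group.
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(1 for o in selected if o == favourable) / len(selected)

    return rate(unprivileged) / rate(privileged)


# Toy example: loan approvals (1 = approved) for two demographic groups.
outcomes = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(round(disparate_impact(outcomes, groups), 2))  # prints 0.33
```

In this toy data, group A is approved 75% of the time and group B only 25%, giving a ratio of about 0.33, well below the informal 0.8 threshold, which is exactly the kind of disparity such a toolkit is designed to surface.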

This software could prove invaluable to small businesses that want to enter the AI sector but lack the funds to employ a large, diverse team.

[Image – CC 0 Pixabay]

Brendyn Lotz

Brendyn Lotz

Brendyn Lotz writes news, reviews, and opinion pieces for Hypertext. His interests include SMEs, innovation on the African continent, cybersecurity, blockchain, games, geek culture and YouTube.