Key Recommendations for Keeping Bias Out of AI


Eliminating bias is a challenging but necessary step for successful AI deployments.

Removing bias from AI solutions is critical not only for ensuring fair and equal treatment of end users, but also for safeguarding a business's security and success. The consequences of ignoring bias in AI are clearly illustrated in a recent DataRobot survey, in which 36% of respondents reported that their businesses suffered from AI bias in at least one algorithm, resulting in unequal treatment of users based on gender, age, race, sexual orientation and religion. Among those respondents, 62% reported lost revenue, 61% lost customers, 43% lost employees, and 35% incurred legal fees because of lawsuits or legal action.

Enterprise adoption of technologies such as Conversational AI is growing rapidly, meaning more and more end users from diverse backgrounds and lived experiences are interacting with AI. While it has always been important to prevent bias in AI, the need for businesses to build anti-bias AI strategies has never been more urgent.

However, this challenge should not deter organizations from deploying AI. In fact, as more organizations across all industries rely on AI to optimize business operations and improve customer experiences, those who don’t implement AI risk falling behind their competitors.

So how do organizations take the leap into AI while also ensuring their solutions don’t harm the very people they are trying to help? Let’s explore some recommendations for addressing bias in enterprise AI.

Build AI Development Teams with a Focus on Diversity

Gartner anticipates that by 2023, all organizations will expect AI development and training personnel to “demonstrate expertise in responsible AI” to ensure their AI solutions achieve algorithmic fairness.

There is good reason for this expectation. While AI is not inherently biased, algorithms are influenced by the biases and prejudices of their human creators. Although we may not yet be at the point when responsible AI expertise is a requirement for all AI development personnel, there are steps organizations can take today to ensure developers are able to detect and address bias in AI solutions.

Every developer on an AI project, whether a new addition or an existing team member, should receive training on how to recognize and avoid bias in AI. In a recent study exploring ageism in AI for healthcare, the World Health Organization found that healthcare AI solutions are often embedded with designers’ “misconceptions about how older people live and engage with technology.” The WHO recommends training AI programmers and designers, regardless of their age, to recognize and avoid ageism both in their work and in their own perceptions of older people.

This advice applies to detecting and eliminating not just ageism, but also sexism, racism, ableism and other biases that may lurk within AI algorithms. However, while training programs can help to limit bias, nothing compares to the positive impact of building a diverse analytics team. As noted in a recent article from McKinsey, “bias in training data and model outputs is harder to spot if no one in the room has the relevant life experience that would alert them to issues.” The teams that plan, create, execute and monitor the technology should be representative of the people they intend to serve.
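Diverse teams can also be supported with simple tooling. As one illustration (not a prescribed method), a team might profile its training data by demographic group before any modeling begins, so that underrepresentation is flagged early. In the Python sketch below, the age_group column, the sample data and the 15% threshold are all hypothetical placeholders:

```python
import pandas as pd

# Hypothetical training set: 1,000 rows with an illustrative age_group column.
df = pd.DataFrame({
    "age_group": ["18-34"] * 700 + ["35-54"] * 250 + ["55+"] * 50,
    "outcome":   [1, 0] * 500,
})

def flag_underrepresented(data: pd.DataFrame, column: str,
                          min_share: float = 0.15) -> pd.Series:
    """Report demographic groups whose share of the data falls below min_share."""
    shares = data[column].value_counts(normalize=True)
    flagged = shares[shares < min_share]
    for group, share in flagged.items():
        print(f"WARNING: group '{group}' makes up only {share:.1%} of the data")
    return flagged

flag_underrepresented(df, "age_group")  # flags '55+' at 5.0%
```

A check like this does not replace a diverse team; it simply gives the team a systematic early warning that a group may be missing from the data it is modeling.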

For example, many of the AI leaders featured in our Women in AI series emphasize the critical role women play in eliminating gender bias in AI, including Andrea Mandelbaum, Founder of Mc Luhan Consulting.

Note: In commemoration of International Women’s Day (March 8) and Women’s History Month, Amelia has been highlighting several participants in our Women in AI series on our LinkedIn, Twitter and Facebook pages. Be sure to follow along to read insights from AI leaders on how we can break gender biases in STEM.

Monitor AI Solutions Every Step of the Way

Another step organizations can take to avoid bias is to make a practice of regularly conducting fairness audits of their AI algorithms. As stated in an article from Harvard Business Review, one of the keys to eliminating bias from AI is subjecting the system to “rigorous human review.”

Several leaders in the AI and automation field have already put this recommendation into practice. Alice Xiang, Head of Sony Group’s AI Ethics Office, explains that she regularly tells her business units to conduct fairness assessments, not as an indicator that something is wrong with their AI solution, but because fairness is something they should continuously monitor. Similarly, Dr. Haniyeh Mahmoudian, Global AI Ethicist at DataRobot, emphasizes the importance of monitoring AI at every step of development to ensure bias does not become part of the system; this process also helps AI teams determine whether their product is ready for public deployment.
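To make this concrete, here is one minimal sketch of what an automated check inside such a fairness assessment could look like; it is not Sony’s or DataRobot’s actual process. It computes a demographic parity gap, the spread in positive-outcome rates across groups, for a hypothetical binary classifier. The data, group labels and 0.10 alert threshold are illustrative assumptions:

```python
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rates across groups.

    A gap of 0 means every group receives positive outcomes at the same
    rate; larger gaps indicate potential disparate treatment.
    """
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: model approvals for two demographic groups.
preds  = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.60 for this toy data
if gap > 0.10:  # illustrative threshold; set per your own risk tolerance
    print("Gap exceeds threshold -- escalate for human review")
```

Demographic parity is only one of several fairness definitions (equalized odds and equal opportunity are common alternatives), and which applies depends on the use case; the point of the sketch is that such a check can be automated and run at every step of development, as Mahmoudian recommends.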

Be Transparent with End Users About the Use of AI

Even after building a diverse AI development team, training team members on responsible AI practices and regularly assessing algorithms throughout the development process, organizations cannot afford to let their guard down.

Once companies deploy their AI product, they should be transparent with end users about how the algorithm was developed, what the product is intended to do, and the point of contact users can reach with questions or concerns. Dissolving the mystique of AI encourages open dialogue between companies and users, empowers developers to leverage user feedback to improve their solutions, and reduces harm by ensuring any algorithmic bias that does surface is resolved in a timely manner.
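One lightweight way to put this transparency into practice is to publish a “model card” alongside the deployed product: a short, user-facing document stating what the model is for, what data shaped it, its known limitations and whom to contact. The sketch below shows a hypothetical minimal card as a Python data structure; every field value is an invented placeholder, not a real product:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    """Minimal user-facing documentation published alongside a deployed model."""
    name: str
    intended_use: str
    training_data_summary: str
    known_limitations: list[str]
    contact: str  # where end users can ask questions or report suspected bias

card = ModelCard(
    name="loan-triage-v2",  # invented model name
    intended_use="First-pass routing of loan applications to human reviewers",
    training_data_summary="Applications from 2019-2023; see accompanying data sheet",
    known_limitations=["Not validated for applicants under 21"],
    contact="ai-ethics@example.com",  # placeholder address
)

print(json.dumps(asdict(card), indent=2))  # publish with the product documentation
```

Publishing such a card, together with a clearly staffed feedback channel, is one simple way to create the open dialogue described above.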

The potential for bias in AI is not an insurmountable obstacle; however, it also cannot be left unaddressed. When organizations deploy AI, they are responsible for making sure their technology treats users fairly, regardless of their race, gender, age, ability, sexual orientation or religion. With anti-bias AI strategies that include diverse AI teams, continuous monitoring for bias in AI, and open dialogue with end users, organizations can create truly extraordinary experiences for all.
