

Introduction to Responsible AI


Micaela Kaplan

April 21, 2021


When we are young, most of us are taught that being responsible is a choice. But what about the hard, or even impossible, choices? Some situations in life are exceptionally difficult, putting us at odds with social norms or with our own values and morals. They may fall deep in the grey zone, or feel like any choice is a lose-lose. We are all faced with decisions that have no good answer.

Artificial intelligence (AI) is no different. Should a self-driving car steer you into a tree to protect a puppy? Should the likelihood of recidivism as predicted by a model determine jail sentences? Should college entrance exam scores be predicted and used as truth when the exam cannot be taken due to a global pandemic?

While these choices may sound extreme, all of them are very real questions that have come up in the AI world, and in global news media, in the last few years. As humans try to simplify their day-to-day lives with machines, we often allow those machines to tackle not only life’s easy questions, but also some of its hardest. In fact, there has already been an abundance of situations in which a model predicts an outcome that reflects and perpetuates the many injustices of our world.

Additionally, computer algorithms lack the ability to reason for themselves or consider the many cultural norms and societal contracts that govern our ways of behaving. Models in today’s world have a real, tangible, and sometimes life-changing impact on the lives of real people, and this brings to light an important new side of machine learning and AI.

A Rose By Any Other Name…

Whether you call it Responsible AI, ethics in AI, AI misuse, detecting bias in AI, or something else, all of these terms refer to the need to ensure that machine learning doesn’t add to or play into the injustices of our world. Many also use these terms to discuss ways in which machine learning and AI can help us counteract and fight those same problems.

At the CallMiner Research Lab, we don’t have all the answers to creating perfectly responsible AI systems, but we do understand exactly how important it is that we think about and actively work towards building tools that are inclusive and transparent. Understanding exactly what that means and how to achieve it is a process that involves learning, open conversation, and constant self-evaluation and change. Responsible AI is not a final state, but rather, a continuous cycle of detection, evaluation, and improvement.

Through our Responsible AI blog series, we will divulge the details of how the CallMiner Research Lab envisions and implements Responsible AI. Our approach is driven by one relatively simple idea – creating, implementing, and using AI in a responsible way is everyone’s responsibility.

Responsible AI is not simply a checklist that one researcher completes one time throughout the research process. It cannot even be delegated to a single team.

Responsible AI is a company-wide effort that succeeds through the diversity and dedication of those working to achieve it.

It is a process that must be evaluated by a diverse set of minds and backgrounds in order to account for a larger universe of perspectives and experiences. This process must also be repeated throughout the life of an AI-driven tool, from the time the idea is conceived, through its development and deployment, and consistently as users begin to explore and understand it.

Our framework for Responsible AI is based around five foundational ideas.

  1. Find Injustice - The goal of Responsible AI is to recognize and address the injustices of our world, which manifest themselves within our data or are reflected by our models.
  2. Embrace the Unknown - No one person will have the answer to any given problem, and there are some questions that we come across that have yet to be answered anywhere in the field.
  3. Strive for Transparency - Few algorithms are biased by nature. We must monitor the training and application of these algorithms to guarantee fair and ethical AI.
  4. Adapt as Knowledge Grows - Responsible AI is an imperfect, incomplete, and relatively new aspect of the field, and any approach must be able to adapt to the new challenges and injustices that appear every day.
  5. Work without Ego - The first step to Responsible AI is well-documented self-evaluation and meaningful and sometimes uncomfortable conversations that help us work towards a brighter future.
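As one concrete illustration of what "monitoring the training and application of these algorithms" can look like in practice, here is a minimal sketch of a single transparency check: comparing a model's rate of favorable predictions across demographic groups. This is an illustrative example, not CallMiner's actual tooling; the group labels and predictions are hypothetical.

```python
# Minimal sketch of one fairness check: the demographic parity gap.
# Hypothetical data and group names; not CallMiner's actual tooling.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1 = favorable outcome)."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in favorable-prediction rate between any two groups.

    A gap near 0 suggests the model treats groups similarly on this one
    metric; a large gap is a signal to investigate, not a verdict either way.
    """
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs for two groups of applicants.
preds = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 favorable
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 favorable
}
print(f"demographic parity gap: {demographic_parity_gap(preds):.3f}")
# → demographic parity gap: 0.375
```

No single metric settles whether a system is fair; the many formal definitions of fairness can even conflict with one another, which is exactly why the process above stresses continuous detection, evaluation, and improvement rather than a one-time checklist.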

We hope that by being transparent about our Responsible AI efforts, we can raise awareness of how important it is to approach AI and machine learning from this perspective, and add to the discussions that will bring about industry-wide action in the coming years.
