CIOs Are Discovering That Their Software May Be Biased

It turns out that software can make mistakes too
Image Credit: Ron Mader

As CIOs we tend to trust the programs that we put in place to run the companies that we work for. We have development teams that take requirements and convert them into working code that shows the importance of information technology. We also pay an army of testers to make sure that the software that we deploy does what it is supposed to do. However, what CIOs are starting to discover is that there is the very real possibility that bias may be finding its way into our software. The result of this is that our company may end up making poor decisions. CIOs need to understand what is going on and how it can be fixed.

How Can Software Be Wrong?

The person with the CIO job knows that an algorithm is simply a set of instructions that tells a computer how to accomplish a task. Today algorithms range from simple computer programs, defined and implemented by humans, to far more complex artificial-intelligence systems trained on terabytes of data. Either way, human bias can work its way into them. A good example of this lies in facial recognition systems, which are trained on millions of faces. If those training databases aren't sufficiently diverse, the systems are less accurate at identifying faces with skin colors they've seen less frequently. CIOs are starting to fear that this could lead to police forces using software to disproportionately target innocent people solely because of their appearance.
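To make this concrete, here is a minimal sketch of the kind of per-group accuracy check a CIO's team could run on a classifier such as a facial recognition matcher. The model, dataset and group labels are hypothetical placeholders; the point is that a respectable overall accuracy number can hide a large gap between groups.

```python
# A minimal sketch of a per-group accuracy check. The dataset format, group
# labels and predict() function below are assumptions for illustration only.
from collections import defaultdict

def accuracy_by_group(records, predict):
    """records: iterable of (features, true_label, group) tuples.
    predict: the model's prediction function (assumed to exist)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for features, true_label, group in records:
        total[group] += 1
        if predict(features) == true_label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Example use (hypothetical objects):
# scores = accuracy_by_group(validation_set, model.predict)
# print(scores)  # e.g. {'group_a': 0.97, 'group_b': 0.83} -> investigate the gap
```

If one group's accuracy sits far below the others, the training data (or the model) deserves a closer look before the system is deployed.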

One piece of software that is used by courts to help determine sentencing is a program called COMPAS. It has become the subject of fierce debate and rigorous analysis by journalists at ProPublica and researchers at Stanford, Harvard and Carnegie Mellon, among others. Unfortunately the results are often frustratingly inconclusive. No matter how much we know about the algorithms that control our lives, finding ways to make them "fair" may be difficult or even impossible. Yet as biased as algorithms can be, at least we know that they can be consistent. When it comes to humans, biases can vary widely from one person to the next.
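Much of the debate comes down to which error rates you compare across groups. As a rough illustration only (the field names below are hypothetical, not COMPAS's actual data format), the sketch computes a false positive rate by group: how often people who did not go on to reoffend were nonetheless labeled high risk.

```python
# A minimal sketch of a group-by-group false positive rate comparison.
# Row format and field names are assumptions for illustration.
from collections import defaultdict

def false_positive_rate_by_group(rows):
    """rows: iterable of dicts with keys 'group', 'high_risk' (the algorithm's
    label) and 'reoffended' (the observed outcome)."""
    flagged = defaultdict(int)    # non-reoffenders labeled high risk
    negatives = defaultdict(int)  # all non-reoffenders
    for row in rows:
        if not row["reoffended"]:
            negatives[row["group"]] += 1
            if row["high_risk"]:
                flagged[row["group"]] += 1
    return {g: flagged[g] / negatives[g] for g in negatives if negatives[g]}
```

A large gap between groups in this rate is exactly the kind of finding that is easy to compute and very hard to agree on how to fix.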

As people in the CIO position look to algorithms to increase consistency, save money or just manage complicated processes, our reliance on them is starting to worry politicians, activists and technology researchers. What CIOs need to realize is that the aspects of society that computers are often used to facilitate have a history of abuse and bias: who benefits from government services, who gets the job, who is offered the best interest rates and, of course, who goes to jail. Some CIOs talk about getting rid of bias in algorithms, but that's not what we'd be doing even in an ideal state. We need to realize that there is no such thing as an unbiased tool for discriminating between people, for deciding who deserves this job or that treatment. The algorithm discriminates by design, so the real question for CIOs is: what bias do you want it to have?

Dealing With Bias In Software

Steps are being taken to deal with bias in software. Back in 2018, New York City became the first government in the U.S. to pass a law intended to address bias in the algorithms used by the city. The law does little more than create a task force to study the matter and make recommendations. New York State's top insurance regulator clarified in early January that existing laws preventing insurers from discriminating based on race, religion, national origin and more also apply to algorithms that determine life insurance qualifications and rates by training on homeownership records, internet use and other unconventional data sources.

Determining what biases an algorithm has is very difficult; measuring the potential harm done by a biased algorithm can be even harder. An increasingly common type of algorithm predicts whether parents will harm their children, basing the decision on whatever data is at hand. If a parent has a low income and has used government mental-health services, that parent's risk score goes up. But for another parent who can afford private health insurance, that data simply isn't available. This creates a bias against low-income parents.
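A toy example makes the missing-data problem easy to see. The feature names and weights below are entirely made up; the point is that when a signal only exists in public records, two parents with the same history can receive very different scores.

```python
# A minimal sketch of how missing data alone can tilt a risk score.
# Feature names and weights are hypothetical.
def risk_score(parent):
    score = 0
    # This signal only exists for families who used public services; private
    # care leaves no record, so the feature silently defaults to False.
    if parent.get("used_public_mental_health_services", False):
        score += 2
    if parent.get("prior_referrals", 0) > 0:
        score += 3
    return score

low_income_parent = {"used_public_mental_health_services": True, "prior_referrals": 0}
insured_parent = {"prior_referrals": 0}  # same history, but invisible to the system

print(risk_score(low_income_parent))  # 2
print(risk_score(insured_parent))     # 0
```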

The irony is that, in adopting these modernized systems, communities are resurfacing debates from the past, when the biases and motivations of human decision makers were called into question. CIOs need to realize that panels that assess the bias of algorithms should include not only data scientists and technologists, but also legal experts familiar with the rich history of laws and cases dealing with identifying and remedying bias, as in employment and housing law. CIOs are waking up to the impending regulatory and compliance burden. Rentlogic, a firm that rates New York City apartments by health and safety standards, has employed an algorithm auditor in order to build trust with customers and to prepare for future regulation. Eventually there may be something like a chief algorithm officer at big companies like Google.
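One simple check an algorithm auditor might borrow from employment law is the "four-fifths rule," which flags any group whose selection rate falls below 80% of the highest group's rate. The sketch below uses hypothetical numbers and is only meant to show how mechanical a first-pass audit like this can be.

```python
# A minimal sketch of a four-fifths (80%) rule check. Group names and counts
# are made up for illustration.
def selection_rates(decisions):
    """decisions: dict mapping group -> (selected_count, applicant_count)."""
    return {g: sel / total for g, (sel, total) in decisions.items()}

def four_fifths_check(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (rate / best >= threshold) for g, rate in rates.items()}

decisions = {"group_a": (45, 100), "group_b": (30, 100)}
print(four_fifths_check(decisions))  # group_b fails: 0.30 / 0.45 is below 0.8
```

Passing a check like this doesn't prove a system is fair, but failing it is a clear signal that the panel of experts needs to dig deeper.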


What All Of This Means For You

If we can't trust our software, then who can we trust? CIOs are starting to realize that the software systems that they have put in place to run their firms may have a problem. These systems have been designed and implemented by people. Those people have their own set of biases. These biases can then find their way into the software systems that they create. Once this happens, the decisions that are being made by these software systems may not be fair. It's our job as CIOs to understand that we may have a problem on our hands and then find a way to deal with it.

Human bias can be worked into the software that is deployed at our firms. A good example of this is facial recognition systems, which are only as good as the database of faces that they have been trained on. Finding ways to make algorithms fairer can be very difficult to do. One of the problems that we are running into is that algorithms are being used to manage tasks that relate to people who have been discriminated against in the past. Laws are being passed that will prevent software from discriminating against minorities. One of the biggest problems is that algorithms use data, and if data is missing, then an algorithm may reach a wrong answer. Panels can be used to assess the bias of algorithms. An algorithm auditor may become a part of every firm in the future.

In order to solve a problem, CIOs have to first understand that a problem exists. CIOs have started to become aware that the software that is being used to run their companies may have bias worked into it. Fixing bias in a computer algorithm is not an easy thing to do; however, it can be done. If CIOs are willing to invest the time and energy required to uncover and fix bias in their software, then they can have more trust in the results that the software is producing.


– Dr. Jim Anderson Blue Elephant Consulting –
Your Source For Real World IT Department Leadership Skills™


Question For You: How can a CIO detect if software that they are using contains a bias?


Click here to get automatic updates when The Accidental Successful CIO Blog is updated.
P.S.: Free subscriptions to The Accidental Successful CIO Newsletter are now available. Learn what you need to know to do the job. Subscribe now: Click Here!

What We’ll Be Talking About Next Time

CIOs are, among other things, responsible for making sure that the business keeps running smoothly. Although we generally only get involved in hiring when the company is hiring people to work in the IT department, it turns out that we can also play a role in hiring in other parts of the company. Many companies are struggling to find enough workers to keep their business moving. What this means for CIOs is that we are going to have to step in and see if we can use the importance of information technology to solve the company's hiring problems. We all know what this means: robots.