Although CIOs would like to spend their time dealing with issues like servers, networking, and how best to move legacy applications into the cloud, all too often we get dragged into more down-to-earth issues. One of the biggest issues that a lot of companies have to deal with is crime. Crimes that can affect a business can range from stealing inventory to assaulting workers. As CIOs we have access to some pretty high-powered technology. Is there any way that we could harness this technology in order to help our companies deal with the crime that they are facing on a daily basis?
The Power Of Being Able To Read The Future
Let’s face it: it was a seductive pitch to city governments and police departments. They could use predictive software to deter crime before it was committed. Using artificial intelligence-powered algorithms, cities could chew up data on incident reports, weather, time, and other variables, learn historical patterns, and spit out forecasts faster, cheaper, and more accurately than human analysts. Using big data, they could then put cops in the right place at the right time to help discourage crime. Federal funding helped push tools like this to police departments in Los Angeles, New York, and elsewhere in the 2010s. What CIOs need to understand is that more recently those tools have faced pushback. Criminal-justice advocates warn that because the incidents reported to the tools disproportionately involve low-income people or people of color, the tools could lead to outsize police footprints in those communities and result in unequal enforcement relative to total crime. At the same time, some academics question how effective the tools really are.
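At its simplest, the "learn historical patterns, spit out forecasts" idea is just counting where and when incidents have clustered before. Here is a minimal sketch of that counting approach; the grid cells, hours, and incident records are entirely hypothetical, and a real system would fold in weather and many other variables:

```python
from collections import Counter

# Hypothetical historical incident records: (grid_cell, hour_of_day).
# Real systems would join incident reports with weather, day of week, etc.
incidents = [
    ("A1", 22), ("A1", 23), ("A1", 22), ("B3", 9),
    ("B3", 22), ("C2", 14), ("A1", 21), ("B3", 10),
]

def forecast_hot_spots(records, hour, top_n=2):
    """Rank grid cells by how many past incidents occurred near this hour."""
    counts = Counter(
        cell for cell, h in records if abs(h - hour) <= 1  # +/- 1 hour window
    )
    return [cell for cell, _ in counts.most_common(top_n)]

print(forecast_hot_spots(incidents, hour=22))  # ['A1', 'B3']
```

Note that this toy version makes the bias concern in the paragraph above easy to see: the forecast is driven entirely by where incidents were *reported* in the past, so any skew in the historical reports flows straight into the output.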
In Santa Cruz, CA, the police department stopped using AI software in 2017. The city became the first in the nation to effectively ban such technology, with local officials warning it could contribute to racial profiling and strain police ties with the community. More recently, lawmakers in Oakland, CA, and New Orleans have also voted to prohibit the tools. The Los Angeles Police Department says it stopped using AI tools last year because of budget cuts, a move that came after an internal watchdog called for more oversight of predictive analytics. The LAPD and Chicago Police Department have also halted programs to predict potential repeat offenders. Now, predictive-policing companies are starting to rebrand their products and rethink their approach, focusing less on “forecasting” crime and more on tracking police, both to provide more oversight and to learn which behaviors correlate with reduced crime.
Providing Less Forecasting But More Tracking
As CIOs we are excited about the possibilities offered by AI tools. We understand the benefits that they can offer to our companies. However, when it comes to fighting crime in cities, the evidence that predictive software reduces crime more effectively than human analysts is mixed. The challenge is that nobody has access to the data required to conduct an independent evaluation, because most predictive tools are privately made. That makes it difficult to prove causation between the tools and reductions in crime. The thinking is that the AI tools’ core data – incident reports about shootings, burglaries, and other crimes – is the least biased information that data analytics tools can use. Police departments that use AI tools often install global positioning systems in police cars or radios to track where individual officers are patrolling. This allows them to compare deployments to crime forecasts. In the future, departments could increasingly employ bird’s-eye views of officer locations to monitor officers’ movements and improve accountability, in addition to forecasting crime hot spots.
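Comparing deployments to crime forecasts can be as simple as checking what fraction of the forecast hot spots actually received a patrol visit. The sketch below illustrates that coverage check; the cell names and GPS pings are invented for illustration, not drawn from any real system:

```python
# Hypothetical data: forecast hot-spot cells for a shift, plus GPS pings
# showing which grid cells officers actually patrolled during that shift.
forecast_cells = {"A1", "B3", "C2"}
patrol_pings = ["A1", "A1", "D4", "B3", "D4", "A1"]

def forecast_coverage(forecast, pings):
    """Fraction of forecast hot spots that got at least one patrol visit."""
    visited = set(pings)
    return len(forecast & visited) / len(forecast)

# Two of the three forecast cells (A1, B3) were visited; C2 was not.
print(round(forecast_coverage(forecast_cells, patrol_pings), 2))  # 0.67
```

A metric like this is what shifts the tools from "forecasting" toward accountability: it reports on where officers went, not just on where crime might occur.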
AI-powered algorithms can spot patterns and trends across data sets to help police supervisors put officers in the right places. The goal is to track granular details of their work, from making traffic stops to visiting local businesses. The tools provide reporting that command staff can go back to, look at, and say, “These tactics correlate with the largest reductions in crime,” and then focus on those types of activities. There may also be an opportunity to use predictive software to help target social work and mental-health services for high-risk areas. Crime-forecasting firms are quick to distinguish their products from other data-mining or facial-recognition technologies that use personal information to attempt to identify suspects. Some AI tools rely on reported incidents and don’t use data points such as arrests – a metric for police activity rather than total crime – for fear of perpetuating a cycle of over-policing in communities of color. The tools don’t collect analogous information about nonphysical spaces where cyber- or white-collar crime occurs, which could make enforcement for such incidents less likely.
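The "which tactics correlate with reductions in crime" question boils down to a correlation computation across logged activities and outcomes. Here is a minimal sketch using a plain Pearson correlation; the tactic names and weekly numbers are made up, and correlation alone would not prove that a tactic caused the change:

```python
from statistics import mean

# Hypothetical weekly records for one beat: counts of two tactics and the
# change in reported incidents the following week (negative = fewer crimes).
weeks = [
    {"visits": 2, "stops": 5, "crime_change": -1},
    {"visits": 8, "stops": 4, "crime_change": -6},
    {"visits": 4, "stops": 6, "crime_change": -3},
    {"visits": 1, "stops": 7, "crime_change":  0},
]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

changes = [w["crime_change"] for w in weeks]
for tactic in ("visits", "stops"):
    r = pearson([w[tactic] for w in weeks], changes)
    print(f"{tactic}: r = {r:.2f}")
```

In this made-up data, business visits correlate strongly with crime reductions while traffic stops do not, which is the kind of signal command staff would then dig into, since correlation is only the starting point, not proof of cause.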
CIOs can understand that the complexities of such data sets pose questions that many governments aren’t currently equipped to answer. But to get the answers that they need, they could create advisory or oversight groups with this kind of expertise. A handful of jurisdictions have created such bodies in recent years for the purpose of researching government-used algorithms. If you’re going to put these programs in place, one of the essential things you are going to have to do is to build trust. It is important to re-examine the historical data and not just rely on it. In Santa Cruz, lawmakers left open the possibility of using crime-forecasting and facial-recognition tools if the city council approves them, but only based on a review of peer-reviewed research into bias.
What All Of This Means For You
As CIOs we are exposed to a great deal of technology every day. We understand what technology can do and which problems can be solved by applying it. Our businesses operate in cities and towns, and we understand that crime is a serious issue for both the towns that we operate in and our businesses. If there were a way to anticipate where a crime was going to happen, then it seems like it could benefit both our businesses and the cities in which we operate.
The original idea was to use artificial intelligence (AI) technology to crunch a great deal of data in order to predict where crime would happen in the future. However, the use of tools to do this is starting to get pushback because of the focus that they have put on low-income people or people of color. Police departments that had rolled out these types of services have started to discontinue them. The companies that make these AI tools have started to rebrand them as police tracking tools instead of crime predictors. One of the challenges that these tools face is that they may be biased because of bias in the data that has been fed into them. Going forward, the plan is to have the tools track the actions of police officers in order to determine where they should be and when they should be there. Advisory boards may also be required to ensure that the algorithms are being used correctly.
There is no question that the new technologies that we use are exciting. They can help to speed tasks up and they can help us to sort through a great deal of data. However, we need to be careful to make sure that the data that these sophisticated tools are fed is balanced and correct. As long as CIOs can keep an eye on how the AI tools are operating, then perhaps we will be able to use these tools to make our lives more secure.
Question For You: Do you think that AI tools can be used to predict crime without discriminating against different groups?
Click here to get automatic updates when The Accidental Successful CIO Blog is updated.
P.S.: Free subscriptions to The Accidental Successful CIO Newsletter are now available. Learn what you need to know to do the job. Subscribe now: Click Here!
What We’ll Be Talking About Next Time
As the person with the CIO job, the one thing that you would like from your employees is their honest feedback. You need this if you are going to be able to deliver on the importance of information technology. You don’t have any way of telling if you are doing your job correctly unless they tell you how things are going. However, there’s a very good chance that the people who work for you are scared of you. They think that if they speak up, they’ll get fired. In order to get around this problem, should you let them tell you what they think anonymously?