As a CIO you have probably already heard about facial recognition technology. This is the technology that can allow your firm to do away with things that don’t seem to work all that well, like passwords, fingerprints, and security fobs. Instead, all an employee has to do to gain access to an area or an application is to show a computer their face and they are in. Since everyone always has their face with them and since all of our faces are unique, this seems like a great solution to a lot of the security problems that the person with the CIO job is currently facing. However, despite the importance of information technology, it turns out that this may not be the magic bullet that we’ve been looking for.
The Challenges Of Facial Recognition Systems
Facial-recognition systems have long been touted as a quick and dependable way to identify everyone from employees to hotel guests. However, they are now in the crosshairs of the bad guys. For many years, researchers have warned about the technology’s vulnerabilities, and recent schemes have confirmed their fears and underscored the difficult but necessary task of improving the systems. In just the past year, thousands of people in the U.S. have tried to trick facial identification verification in order to fraudulently claim unemployment benefits from state workforce agencies. There were more than 80,000 attempts to fool the selfie step in government ID matchups. These included people wearing special masks, using deepfakes (lifelike images generated by AI), or holding up images or videos of other people.
Thousands of people have used masks and dummies to try to trick facial identification verification. One-to-one facial recognition has become one of the most widely used applications of artificial intelligence these days, allowing people to make payments via their phones, walk through passport checking systems, or verify themselves as workers. An example of this is the drivers for Uber Technologies, who must regularly prove that they are licensed account holders by taking selfies on their phones and uploading them to the company. Uber uses Microsoft Corp.’s facial-recognition system to authenticate them, and it is rolling out the selfie-verification system globally because it has had to deal with drivers hacking its system in order to share their accounts.
Amazon.com Inc. and smaller vendors like Idemia Group, Thales Group and AnyVision Interactive Technologies sell facial-recognition systems for identification purposes. The technology works by mapping a face to create a so-called face print. These systems are typically more accurate at verifying a single individual than at spotting faces in a crowd. Still, this form of biometric identification does have its limits.
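To make the idea of a face print concrete, here is a minimal sketch of how one-to-one verification typically works: the system stores a numeric face print (an embedding vector) for each enrolled person, then scores how similar the live capture’s print is to the stored one. The vectors and the threshold below are made-up toy values for illustration only, not any vendor’s actual model or settings.

```python
import math

def cosine_similarity(a, b):
    """Score how alike two face prints are (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(stored_print, live_print, threshold=0.85):
    """One-to-one check: does the live face match the enrolled face print?"""
    return cosine_similarity(stored_print, live_print) >= threshold

# Toy embeddings standing in for real face prints.
enrolled = [0.9, 0.1, 0.4]       # face print captured at enrollment
same_person = [0.88, 0.12, 0.41]  # a new selfie of the same person
impostor = [0.1, 0.9, 0.2]        # someone else entirely

print(verify(enrolled, same_person))  # True  (prints are nearly parallel)
print(verify(enrolled, impostor))     # False (prints point different ways)
```

The key design point is that the threshold trades convenience against security: set it too low and impostors slip through; set it too high and legitimate employees get locked out.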
The Bad Guys Go After Facial Recognition Systems
In a recent security report it was stated that experts expect to see fraudsters increasingly create “Frankenstein faces,” using AI to combine facial characteristics from different people to form a new identity in order to fool facial ID systems. The analysts said that this strategy is part of a fast-growing type of financial crime known as synthetic identity fraud, in which fraudsters use an amalgamation of real and fake information to create a new identity. Until recently, it had been activists protesting surveillance who targeted facial-recognition systems. Privacy campaigners in the U.K., for instance, have painted their faces with asymmetric makeup that is specially designed to scramble the facial-recognition software powering cameras as they walk through urban areas.
Criminals have even more reasons to do the same. They want to spoof people’s faces in order to access the digital wallets on their phones. They are also spoofing faces to get through high-security entrances at hotels, business centers, or hospitals. Any access control system that has replaced human security guards with facial-recognition cameras may be at risk. The idea of fooling automated systems dates back several years. In 2017, a male customer of the insurance company Lemonade tried to fool its AI application for assessing claims by dressing in a blond wig and lipstick and uploading a video saying that his $5,000 camera had been stolen. Lemonade’s AI system, which had been designed to analyze such videos for signs of fraud, flagged the video as suspicious and found that the man was trying to create a fake identity. It turns out that the man had previously made a successful claim under his normal guise.
It turns out that there are two ways to protect facial recognition systems from being fooled. One is to update the underlying AI models so that they can recognize novel attacks by redesigning the algorithms that underpin them. The other is to train the models with as many examples of altered faces as possible. Your goal should be to show the system the kinds of faces that could spoof it. These are known as adversarial examples. Unfortunately, it can take up to 10 times the number of images needed to train a facial-recognition model in order to also protect it from spoofing. This is a costly and time-consuming process. It takes so long because for each person you also need to add that person with adversarial glasses, with an adversarial hat, and so on, so that the system can learn all of the combinations.
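The adversarial-training approach described above can be sketched as a simple data-augmentation step: every enrolled person gets paired with every spoofing variant, which is exactly where the multiplied training cost comes from. The person names and variant labels below are purely illustrative placeholders; in a real pipeline each entry would be an actual labeled image.

```python
from itertools import product

def augment_with_adversarial_examples(people, variants):
    """Pair every enrolled person with every adversarial variant
    (glasses, hat, printed photo, ...) so the model also sees
    spoofed versions of each face during training."""
    dataset = [(person, "clean") for person in people]
    dataset += [(person, variant) for person, variant in product(people, variants)]
    return dataset

people = ["alice", "bob"]
variants = ["adversarial_glasses", "adversarial_hat",
            "printed_photo", "deepfake_video", "silicone_mask"]

training_set = augment_with_adversarial_examples(people, variants)
print(len(training_set))  # 2 clean + 2 people * 5 variants = 12 examples
```

Even in this toy case the dataset grows six-fold; with more spoofing variants per person, the “up to 10 times the images” figure quoted above is easy to reach.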
What All Of This Means For You
In terms of new technology, facial recognition is one technology that the person in the CIO position needs to make sure that they stay on top of. The power of facial recognition technology is that it can replace people and potentially provide better security for the company’s applications and locations. However, as with all new things, the scammers have been alerted and they are in the process of trying to come up with ways to fool these systems.
People who are filing claims for unemployment benefits have been trying to fool facial recognition systems for quite some time. Uber uses facial recognition software to verify the identity of its drivers. Facial recognition systems can be purchased from companies such as Amazon. The people who want to defeat facial recognition systems are resorting to creating fake faces built from photos of many different people. Criminals are motivated to spoof facial recognition systems in order to gain access to digital wallets and restricted locations. CIOs can protect their companies from the bad guys by updating their underlying facial recognition AI models and by taking the time to train those models to spot fakes.
There is no doubt that facial recognition technology is a powerful new tool. However, CIOs need to understand that the bad guys also realize how powerful this tool is and are actively trying to find ways to fool it. If CIOs can take the time to understand how their facial recognition systems work and then take steps to boost their reliability, then they may have a tool that the company can use safely.
– Dr. Jim Anderson
Blue Elephant Consulting –
Your Source For Real World IT Department Leadership Skills™
Question For You: Do you think that facial recognition software will ever be accurate enough to use safely?
Click here to get automatic updates when The Accidental Successful CIO Blog is updated.
P.S.: Free subscriptions to The Accidental Successful CIO Newsletter are now available. Learn what you need to know to do the job. Subscribe now: Click Here!
What We’ll Be Talking About Next Time
CIOs are trying to find out what works for everyone. Now that we’ve moved into a new era where everyone is not in the office, the person with the CIO job has to find a way to allow everyone to work together in order to realize the importance of information technology. Realizing that each of their employees may be operating on a different schedule is the first step in solving this problem. On a given project there may be workers who are both “morning people” and “afternoon people”. Just to make things even more difficult, some people may be located in different countries. What’s the best way for CIOs to get their people to work together?