An article published by The Wall Street Journal (WSJ) on April 3, 2024, brings to light the alarming rise of deepfakes and their potential to disrupt businesses and financial institutions. As artificial intelligence (AI) rapidly advances, making voice and image generation more realistic than ever, malicious actors are now targeting enterprises with sophisticated deepfake technology.

According to Bill Cassidy, the chief information officer at New York Life, fraudulent calls have always been a concern, but the ability of AI models to imitate an individual’s voice patterns and give instructions over the phone poses entirely new risks. The WSJ report highlights that banks and financial services providers are among the first to face these threats, with Kyle Kappel, U.S. Leader for Cyber at KPMG, emphasizing the rapid pace at which this space is evolving.

Cassidy told the WSJ:

There were always fraudulent calls coming in. But the ability for these [AI] models now to imitate the actual voice patterns of an individual giving instructions to somebody with the phone to do something—these sorts of risks are brand new.

The article cites a recent demonstration by OpenAI of its Voice Engine technology, which can recreate a human voice from a mere 15-second clip. Although OpenAI has chosen not to release the technology publicly until potential misuse risks are better understood, the implications are clear: bad actors could leverage AI-generated audio to manipulate the voice-authentication software financial institutions use to verify customers and grant account access. The WSJ report mentions that Chase Bank fell victim to an AI-generated voice during an experiment, although the bank stated that additional information is required to complete transactions and other financial requests.

The scale of the deepfake problem is evident from a report by identity verification platform Sumsub, which revealed a staggering 700% increase in deepfake incidents within the fintech sector in 2023 compared to the previous year. In response, companies are working diligently to implement more robust security measures to counter the impending wave of generative AI-fueled attacks.

New York Life’s Cassidy, for example, is collaborating with the company’s venture capital group to identify startups and emerging technologies specifically designed to combat deepfakes. He suggests that the best defense against the generative AI threat may be a form of generative AI itself.

The WSJ article also highlights the potential misuse of AI to generate fake driver’s license photos for setting up online accounts. In response, Alex Carriles, chief digital officer of Simmons Bank, has adjusted identity verification protocols. Instead of uploading pre-existing pictures, customers must now photograph their driver’s licenses through the bank’s app and take selfies while following prompts to look in different directions. This approach aims to thwart the use of generic AI deepfakes that may not be able to mimic these specific movements.
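The verification flow described above amounts to a challenge-response liveness check: the app issues unpredictable prompts, and a pre-recorded or generated clip is unlikely to follow them. A minimal sketch of that idea, assuming a hypothetical per-frame pose detector supplies the observed head poses (the detector itself, prompt names, and function names here are illustrative, not Simmons Bank's actual implementation):

```python
import secrets

# Directions the app might ask the user to look toward.
DIRECTIONS = ["left", "right", "up", "down"]

def issue_challenge(length: int = 3) -> list[str]:
    # Use an unpredictable sequence so a replayed or pre-generated
    # video cannot anticipate the prompts.
    return [secrets.choice(DIRECTIONS) for _ in range(length)]

def verify_liveness(challenge: list[str], observed_poses: list[str]) -> bool:
    # Pass only if the detected head poses match every prompt in order.
    # In a real app, observed_poses would come from a face-landmark or
    # pose-estimation model run on the selfie video frames.
    if len(observed_poses) != len(challenge):
        return False
    return all(seen == asked for seen, asked in zip(observed_poses, challenge))

challenge = issue_challenge()
print("Prompts:", challenge)
# A live user who follows the prompts produces matching poses:
print(verify_liveness(challenge, list(challenge)))  # True
# A static or replayed clip that ignores a prompt fails:
print(verify_liveness(challenge, challenge[:-1]))  # False
```

The security of such a check rests on the randomness of the prompts and the quality of the pose detection, not on the comparison logic itself, which is why banks pair it with document checks rather than relying on it alone.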

Carriles acknowledges the challenge of striking a balance between a seamless user experience and security measures robust enough to prevent attackers from exploiting the system. Interestingly, not all banks are equally concerned about the deepfake threat. KeyBank CIO Amy Brady considers the bank’s delayed adoption of voice authentication software a stroke of luck, given the current risks associated with deepfakes. Brady says she will hold off on implementing voice authentication until more effective tools for detecting impersonations become available.