Generative AI fraud: how to help protect your organization

As cutting-edge tech makes payment fraud easier at scale, organizations should take these basic account security measures.

Imagine this: An employee in the finance department of a large multinational firm attends a video call with the firm’s CFO along with several of the employee’s colleagues. During the call, the CFO asks the employee to initiate a $25 million transfer to an outside account. No one else on the call seems to find the request odd, so the employee does just that. 

The problem? All the other participants on that video call were fake, avatars that looked and sounded exactly like the employees they claimed to be. The money, however, was all too real—and it was gone. This is not a hypothetical example. It actually happened in Hong Kong in early 2024. 

While still relatively rare, identity fraud cases using “deepfake” techniques like this rose tenfold from 2022 to 2023, according to digital identity verification provider Sumsub. And the problem is likely to grow worse. Programs like ChatGPT and DALL-E—artificial intelligence systems capable of carrying on conversations in human-like language and creating lifelike images from text prompts—are merely the best known of a proliferating set of generative AI applications. Other technology, like that used in the Hong Kong fraud, can create convincing digital clones of real individuals from nothing more than a photo and a snippet of recorded speech. 

Not new—just easier, cheaper and more realistic

Identity fraud and its digital variants aren’t new. Phishing attacks—sending emails that appear to be from reputable companies in order to trick victims into revealing sensitive information such as passwords and credit card numbers—can be traced back to the 1990s. These attacks leverage social engineering, which refers to a broad range of malicious activities using psychological manipulation to mislead victims into making security mistakes or giving away sensitive information.

Social engineering attacks happen in one or more steps. A perpetrator first investigates the intended victim to gather the background information—such as potential points of entry and weak security protocols—needed to proceed with the attack. Then, the attacker attempts to gain the victim’s trust before provoking actions that break security practices, such as revealing sensitive information or granting access to critical resources.

The barrier to entry for highly sophisticated AI-based fraud continues to fall as more generative AI applications become widely available. Just this year, the owner of a popular social media app collaborated with researchers at a Chinese university known for its military research to create a program that allows users to change their voice into another person’s in real time using generative AI technology. 

These technologies have the ability to significantly amplify existing fraud threats. While it’s been easy enough in the past to create fake ID documents or to impersonate a bank employee over the phone, today’s fraudsters have far greater capabilities at their disposal, including: 

  • More consistent, more customizable and higher quality software
  • Less skill or experience needed to make fake documents and believable bots
  • Mass automated customization, using personal information harvested at scale
  • Better synthetic voice generation with less ‘true voice’ capture needed
  • Bots capable of holding more flexible conversations

As a result, it’s reasonable to expect the speed and frequency of attacks to keep climbing, especially since AI enables fraud at a scale that is difficult for human defenders to counteract.

How to help protect against advanced AI fraud

The best way to help defend your organization against deepfakes and other identity threats is to implement—and enforce—security policies and to require ongoing training. Encourage employees to be wary of even the most innocuous-seeming communications that touch on security, and never to deviate from approval procedures for payments. 

Basic account security measures include: 

Recognizing social engineering attempts: Employees should be immediately suspicious of unusual requests, particularly those that introduce new information (such as a new account number) or that circumvent established security measures. 

Callbacks: If an employee suspects the party in contact could be an imposter, they should end that communication and call the customer back on a known good number (one that has been used previously with verified success), or reach out to a different contact who can verify the request.

Deepfake detection: This is essentially “fighting fire with fire”: a number of vendors offer systems that leverage artificial intelligence to detect anomalies indicating a voice, image or video may be AI-generated. Like antivirus software, however, deepfake detection must continually stay ahead of its adversaries and, for that reason, should not be considered foolproof. Instead, it should be treated as a potential indicator of fraud that warrants further investigation.

Communication policies: If you are a vendor, make sure your customers know what to expect when you communicate with them. Elements include how you will verify identity and the types of information your agents will and will not ask for.

What to expect when communicating with Capital One

Our communication practices can help you keep your organization’s sensitive information secure. Here’s what to expect when you contact us or when we contact you.

When you contact us:

  • When an authorized representative of your organization initiates contact with Capital One, we’ll verify their identity before sharing account information or performing transactions on your organization’s behalf.
  • Our agents may ask the caller to verify information we have on file or ask other questions to confirm their identity.
  • Our agents will not ask the caller to provide an online banking password or the live token number used to authenticate your identity on the Intellix platform.

When we contact you:

  • Our Fraud Department may contact an authorized user in your organization if we detect unexpected activity on your corporate account. Fraud agents will require identity verification before discussing the account in question.
  • Our agents will never ask you to provide an online banking password or live token number used to authenticate your identity on the Intellix platform.

Additionally, it is best to access Intellix using the web address published on the Capital One website or in your onboarding documents, as fraudsters may create fake sites in an attempt to steal client credentials.

Maintaining vigilance

As the fraud landscape continues to evolve with advancements in generative AI, businesses must remain vigilant and proactive in implementing robust defense mechanisms.

Recognizing social engineering attempts, being wary of unusual requests and utilizing deepfake detection systems are crucial steps in safeguarding sensitive information and mitigating the risks associated with generative AI identity fraud. Additionally, fostering a culture of continuous training and adherence to established security protocols is paramount. 

By staying ahead of emerging threats and leveraging innovative solutions, you can help protect your business and your customers from the detrimental impact of AI-driven payments fraud.