Though the term deepfake came about in 2017, the ability to modify and manipulate videos dates back to the 1997 Video Rewrite program. It allowed modifying video footage of a person speaking to depict that person mouthing the words from a completely different audio track. The technique of blending images, videos and altering voice has been used in cinema for even longer, but it was expensive and time-consuming.
A deepfake algorithm can convincingly impersonate a real person’s appearance, actions and voice. With the growth of social media and digital technologies, the technique has now become something of an art form with usage growing rapidly. Scams using deepfake technology and AI pose a new challenge for businesses, as conventional security tools designed to keep impostors out of corporate systems are not designed to spot fake voices or manipulated videos.
Anti-fraud technology companies are in the process of developing defenses to detect deepfakes, while organizations have begun to take deepfakes very seriously. Google has built a database of 3,000 deepfakes to help researchers and cybersecurity professionals develop tools to combat the fake videos. Facebook and Microsoft are working with leading US universities to build a database of fake videos for research.
Two years ago, the New York Times, the BBC, CBC/Radio-Canada and Microsoft launched Project Origin to create technology that proves a message actually came from the source it purports to be from. Project Origin has since become part of the Coalition for Content Provenance and Authenticity, along with Adobe, Intel, Sony and Twitter. Early versions of software that traces the provenance of information online already exist; the only question is who will use them (Forbes).
The risk has become so real that the US government has been debating the growing threat posed by deepfakes and other AI-generated false information, and what it could mean for the country’s national security. In June 2022, a bill to combat the spread of disinformation via deepfake video alteration technology was introduced in Congress (Congress.gov).
The Lethal New Kid on the Block
Deepfake is being used today to create astonishingly convincing digitally manipulated voices, photos and videos, even fake identities.
Deepfake photos are used to create non-existent persons (aka sock puppets) who are active online and in traditional media. Such photos are typically generated along with manipulated but genuine-looking metadata for the fabricated identity.
Deepfake apps that enable users to substitute their faces onto those of characters in films and TV shows are already popular on social media platforms.
New deepfake software allows adding, editing, or deleting words from the transcript of a video, and the changes are reflected seamlessly in the video.
Audio deepfakes have already been used in social engineering scams, tricking people into believing they are speaking with a trusted person. An energy firm’s CEO was scammed over the phone into transferring €220,000 to a Hungarian bank account by a fraudster who used audio deepfake technology to mimic the voice of the parent company’s chief executive.
The volume of deepfakes has grown at an exponential rate, from around 14,000 in 2019 to 145,000 in 2021 (TechCrunch). In one of the most significant deepfake phishing attacks to date, a bank manager in the United Arab Emirates was tricked by fraudsters using AI voice cloning into transferring $35 million (Forbes). Fraudsters are also breaking into video conversations: a recent survey reveals that more than 30% of companies experienced attacks on their videoconferencing systems in 2021.
Deepfake Frauds in Banks: A Few Scenarios
New Account Fraud
Also known as application fraud, this type of fraud occurs when fake or stolen identities are used specifically to open bank accounts. A fraudster can create a deepfake of an applicant and use it to open an account, bypassing most of the usual checks. The criminal can then use that account to launder money or run up large amounts of debt, and once proficient, create fake identities at scale to attack financial services providers globally.
With ghost fraud, criminals use the personal data of a deceased person to access online services, tap into savings accounts and credit scores, and apply for cars, loans or benefits. Deepfake technology lends credibility to such applications: the bank officials checking an application see a convincing moving, speaking figure on screen and believe they are dealing with a live human being.
Synthetic Identity Fraud
Among the most sophisticated deepfake tactics, synthetic identity fraud is extremely difficult to detect. Rather than stealing an identity, criminals combine fake, real and stolen information to ‘create’ someone who doesn’t exist.
These synthetic identities are then used to apply for credit/debit cards or complete other transactions to help build a credit score for the new, non-existent ‘customer’.
Costing businesses billions each year (Pymnts.com), synthetic identity fraud is the fastest-growing type of financial crime, and deepfake technology adds another layer of validity to these types of attacks.
Fraudulent Claims by Deceased Persons
Using deepfakes, fraudsters can also make insurance or other claims on behalf of deceased individuals. Claims can successfully continue to be made on pensions, life insurance and benefits for many years after a person dies, whether by a family member or a professional fraudster. Here, deepfakes are used to convince the bank that the customer is still alive.
A Tempting Target for a Deepfake Heist
Where there’s money, there’s crime. Fraudsters can be trusted to leverage new technology in their efforts to gain access to accounts, set up fraudulent accounts or steal money. It is just a matter of time before deepfakes become another new normal for digital rogues defrauding banks.
Most banks demand a government-issued ID and selfie to determine a person’s digital identity when creating a new account online. An impostor can use deepfake technology to easily create a photo to meet the requirement.
Deepfake technology that mimics the human voice is already being used to target call centers. We will soon start seeing deepfake technology used to bypass face recognition controls, including those using state-of-the-art liveness tests. Banks will have to develop invisible behind-the-scenes controls that compensate for the vulnerabilities in current authentication processes and protocols.
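One way to picture such a behind-the-scenes control is a risk score that combines independent session signals, so that a convincing deepfake face or voice alone is not enough to pass. The following is a minimal sketch; the signal names, weights and threshold are purely illustrative assumptions, not taken from any real product.

```python
# Illustrative sketch: combine independent session signals into a fraud
# risk score, so a convincing deepfake alone cannot pass authentication.
# Signal names and weights are hypothetical, not from any real product.

def risk_score(signals: dict) -> float:
    """Weighted sum of risk indicators, each expected in [0, 1]."""
    weights = {
        "new_device": 0.3,       # login from a device never seen before
        "geo_mismatch": 0.2,     # IP geolocation far from usual locations
        "typing_anomaly": 0.25,  # keystroke cadence unlike the customer's
        "replayed_media": 0.25,  # injected video stream instead of a live camera
    }
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

def decision(signals: dict, threshold: float = 0.5) -> str:
    """Escalate to step-up authentication when the combined risk is high."""
    return "step_up_auth" if risk_score(signals) >= threshold else "allow"
```

With this kind of layered check, a session that passes face matching but arrives from an unknown device with an injected media stream would still be challenged.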
Even as banks widen the span of their digitisation efforts to cater to increasing online activity, it is vital to put equal (if not more) emphasis on stronger measures to protect assets and reputation from newer, emerging threats.
It is therefore critical to recognize this emerging threat and proactively implement smarter, enterprise-wide defense mechanisms.
How Can Banks Combat Deepfake Fraud?
Many banks already use automated deepfake detection software. While these auto-detection technologies work well today against amateur deepfakes, they will not suffice going forward: deepfakes will soon achieve ultra-realism that existing technologies cannot detect.
As with any security measure, employee training and vigilance are paramount. Banks must invest time and effort in making staff aware of deepfakes through examples (e.g., an unexpected call from a senior bank executive asking them to perform an urgent, non-standard task). Banks can also institute internal security questions to help employees confirm a caller’s identity if required.
There are also advanced identity verification solutions with embedded ‘liveness’ detection that spot advanced spoofing attacks, including deepfakes, and check whether a remote user is physically present. But deepfakes can bypass these methods and impostors can still game the system unless the ID verification technology has certified liveness detection validated by a competent authority.
As of now, most attempts have had flaws, with enough tell-tale signs of manipulation. There are also tools that help tell fact from fiction. Social and traditional media can use these tools to identify deepfakes and delete or label them, so that users in other industry sectors who rely on the information do not end up as unsuspecting victims. Another solution is for imaging technologies to add ‘digital noise’ to image, voice and video files, making it harder for a fraudster to use them to produce deepfakes.
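The ‘digital noise’ idea can be sketched as adding a small perturbation to pixel values: imperceptible to a human viewer, but degrading the file as source material for a face-swap model. A minimal illustration on a grayscale pixel grid follows (pure Python, no imaging libraries); note that production systems use carefully crafted adversarial perturbations rather than the plain random noise assumed here, and the amplitude of 2 is an arbitrary choice.

```python
import random

def add_digital_noise(pixels, amplitude=2, seed=42):
    """Return a copy of a 2-D grayscale pixel grid with small random
    perturbations added to each value, clamped to the 0-255 range.
    The change is meant to be invisible to humans while spoiling the
    image as training or source material for deepfake generators."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    noisy = []
    for row in pixels:
        noisy.append([
            max(0, min(255, p + rng.randint(-amplitude, amplitude)))
            for p in row
        ])
    return noisy

image = [[120, 121, 119], [122, 120, 118]]  # toy 2x3 grayscale patch
protected = add_digital_noise(image)
```

Each pixel moves by at most the amplitude, so the protected copy looks identical to the eye while no longer matching the original bit-for-bit.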
Biometrics provides financial services institutions with a highly secure and highly usable means of verifying and authenticating online users.
Biometric face verification enables an online user to verify their face against the image in a trusted document (such as a passport or driver’s licence). This is ideal for the first interaction with a new customer, for example at onboarding.
Online face authentication then enables a returning customer to authenticate themselves against the original verification every time they want to log in to their account.
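The verification-then-authentication flow above is commonly built on face embeddings: at onboarding, the selfie’s embedding is compared against the trusted document photo; on later logins, a fresh capture is compared against the enrolled template. A minimal sketch using cosine similarity follows; the embeddings are stand-in vectors (a real system would obtain them from a face-recognition model) and the 0.8 threshold is illustrative.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def verify(document_embedding, selfie_embedding, threshold=0.8):
    """Onboarding: does the live selfie match the trusted document photo?"""
    return cosine_similarity(document_embedding, selfie_embedding) >= threshold

def authenticate(enrolled_embedding, login_embedding, threshold=0.8):
    """Returning customer: does a new capture match the enrolled template?"""
    return cosine_similarity(enrolled_embedding, login_embedding) >= threshold

# Stand-in embeddings; a real system would compute these with a face model.
passport = [0.90, 0.10, 0.40]
selfie   = [0.85, 0.15, 0.42]
impostor = [0.10, 0.90, 0.10]
```

The same comparison primitive serves both steps; only the reference template differs (document photo at onboarding, enrolled embedding thereafter).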
Financial Institutions must also:
Invest in newer technology. Reverse image search technologies make it possible to trace the original versions of images. Research on reverse video search, however, is scarce, and the capability does not yet exist in a freely available form. Banks must collaborate with niche fintech solution vendors to develop this technology and release it publicly.
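Reverse image search typically rests on perceptual hashing: an image is reduced to a short fingerprint that survives resizing and re-encoding, and near-matches are found by Hamming distance between fingerprints. A minimal difference-hash (dHash) sketch over a grayscale pixel grid follows; real pipelines first resize the image (commonly to 9x8) with an imaging library, which is omitted here.

```python
def dhash(pixels):
    """Difference hash: for each row of a 2-D grayscale grid, record
    whether each pixel is brighter than its right-hand neighbour."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(h1, h2):
    """Count differing bits; a small distance suggests the same source image."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

original  = [[10, 20, 30], [90, 50, 40]]     # toy grayscale patch
reencoded = [[11, 21, 29], [88, 52, 41]]     # same image after mild compression
unrelated = [[200, 10, 150], [5, 180, 20]]   # a different image entirely
```

Because the hash encodes brightness gradients rather than exact values, the re-encoded copy hashes identically to the original while the unrelated image lands far away, which is what makes fingerprint lookup practical at scale.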
Enrol social media firms. The massive volume of information that social media companies have, could potentially provide solutions. Regulators, banking industry leaders and policymakers can encourage social media companies to disseminate their data to social scientists (while preserving social media user privacy), to enable discovering new solutions to fight deepfakes.
Push for legal reforms. Banks and fintechs can spearhead the need for a stronger framework that makes deepfake technology application vendors accountable for enabling deepfake crime.
Altering voices, videos and photos has become an effortless affair, and determining a person’s identity with 100% accuracy is already a challenge. The latest addition to the list of appearance-modifying technologies, deepfake is also the most formidable because of its astonishing capabilities.
From having to replace customer funds to incurring penalties to losing trust and reputation, deepfake-led data breaches or account takeovers can have a devastating impact on a bank.
And as is historically evident with any form of crime, especially financial crime, the criminals usually stay a step ahead, so yesterday’s solutions will simply not be enough for today’s newer challenges.