Featured Article: 3,000% Increase in Deepfake Fraud

A new report from identity-verification company Onfido shows that the availability of cheap generative AI tools led to deepfake fraud attempts increasing by 3,000 per cent (a thirty-one-fold rise) in 2023.

Free And Cheap AI Tools

Although deepfakes have now been around for several years, as the report points out, deepfake fraud has become significantly easier and more accessible due to the widespread availability of free and cheap generative AI tools. In simple terms, these tools have democratised the ability to create hyper-realistic fake images and videos, which were once only possible for those with advanced technical skills and access to expensive software.

Prior to the public availability of AI tools, for example, creating a convincing fake video or image required a deep understanding of computer graphics and access to high-end, often costly, software (a barrier to entry for would-be deep-fakers).

Document and Biometric Fraud – The New Frontier

The Onfido data reveals a worrying trend: while physical counterfeits remain prevalent, there’s a notable shift towards digital manipulation of documents and biometrics, facilitated by the availability and sophistication of AI tools. Fraudsters are not only altering documents digitally but also exploiting biometric verification systems through deepfakes and other AI-assisted methods. The Onfido report highlights a dramatic rise in the rate of biometric fraud, which doubled from 2022 to 2023.

Deepfakes – A Growing Threat

As reinforced by the findings of the report, deepfakes pose an emerging and significant threat, particularly in biometric verification. The accessibility of generative AI and face-swap apps has made the creation of deepfakes easier and highly scalable, evidenced by the thirty-one-fold increase in deepfake attempts in 2023 compared with the previous year.

Minimum Effort (And Cost) For Maximum Return

As the Onfido report points out, simple ‘face swapping’ apps (i.e. apps that leverage advanced AI algorithms to seamlessly superimpose one person’s face onto another in photos or videos) offer ease of use and effectiveness in creating convincing fake identities. They are part of an influx of readily available online AI-assisted tools that are providing fraudsters with a new avenue into biometric fraud. For example, the Onfido data shows that biometric fraud attempts are clearly higher this year than in previous years, with fraudsters favouring tools like face-swapping apps to target selfie biometric checks and create fake identities.

The kind of fakes these cheap, easy apps create has been dubbed “cheapfakes”, and this conforms with something that’s long been known about online fraudsters and cyber criminals – they seek methods that require minimum effort, minimum expense and minimum personal risk, yet deliver maximum effect.

Sector-Specific Impact of Deepfakes

The Identity Fraud Report shows that (perhaps unsurprisingly) the gambling and financial sectors in particular are bearing the brunt of these sophisticated fraud attempts. The lure of cash rewards and high-value transactions in these sectors makes them attractive targets for deepfake-driven fraud. In the gambling industry, for example, fraudsters may be particularly attracted to sign-up and referral bonuses. In the financial industry, where fraud tends to centre on money laundering and loan theft, Onfido reports that digital attacks are easy to scale, especially when incorporating AI tools.

Implications For UK Businesses In The Age of (AI) Deepfake-Driven Fraud

The surge in deepfake-driven fraud highlighted by the somewhat startling statistics in Onfido’s 2024 Identity Fraud Report suggests that UK businesses navigating this new landscape may require a multifaceted approach, balancing the implementation of cutting-edge technologies with heightened awareness and strategic planning. In more detail, this could involve:

– UK businesses prioritising the reinforcement of their identity verification processes. Traditional methods may no longer suffice against the sophistication of deepfakes, so adopting AI-powered solutions specifically designed to detect and counter deepfake attempts could be the way forward. This could work as long as such systems can keep pace with advancements in fraudulent techniques (more advanced techniques may emerge as more sophisticated AI tools become available).

– The training of staff, i.e. educating them about the nature of deepfakes and how they can be used to perpetrate fraud. This could empower employees to better recognise potential threats and respond appropriately, particularly in sectors like customer service and security, where human judgment plays a key role.

– Maintaining customer trust. UK businesses must navigate the fine line between implementing robust security measures and ensuring a frictionless customer experience. Transparent communication about the security measures in place and how they protect customer data can help in maintaining and even enhancing customer trust.

– As the use of deepfakes in fraud rises, regulatory bodies may introduce new compliance requirements and UK businesses will need to ensure that they stay abreast of these changes both to protect customers and remain compliant with legal standards. This in turn could require more rigorous data protection protocols or mandatory reporting of deepfake-related breaches.

– Collaboration with industry peers and participation in broader discussions about combating deepfake fraud may also be a way to gain valuable insights. Sharing knowledge and strategies, for example, could help in developing industry-wide best practices. Also, partnerships with technology providers specialising in AI and fraud detection could offer access to the latest tools and expertise.

– Since deepfake fraud may be an ongoing threat, long-term strategic planning may be essential. This perspective could be integrated into long-term business strategies, thereby (hopefully) making sure that resources are available and allocated not just for immediate solutions but also for future-proofing against evolving digital threats.

What Else Can Businesses Do To Combat Threats Like AI-Generated Deepfakes?

Other ways that businesses can contribute to the necessary comprehensive approach to tackling the AI-generated deepfake threat may also include:

– Implementing biometric verification technologies that require live interactions (so-called ‘liveness solutions’), such as head movements, which are difficult for deepfakes to replicate.

– The use of SDKs (software development kits – platform-specific building blocks for developers) rather than APIs alone. SDKs can provide better protection against fraudulent submissions because they incorporate live capture and device integrity checks, whereas an API endpoint alone can be sent pre-recorded or manipulated media.
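The challenge-response idea behind the ‘liveness’ checks described above can be illustrated with a minimal sketch. This is not Onfido’s implementation – all names and thresholds here are hypothetical – but it shows the core principle: the server issues an unpredictable action that a pre-rendered deepfake video cannot anticipate, and only accepts a matching response made within a short time window.

```python
import secrets
import time

# Hypothetical challenge pool; a real system would draw on a pose/expression
# model analysing the live camera feed.
CHALLENGES = ["turn_head_left", "turn_head_right", "nod", "smile"]


def issue_challenge():
    """Pick an unpredictable action and record when it was issued."""
    return {"action": secrets.choice(CHALLENGES), "issued_at": time.time()}


def verify_response(challenge, detected_action, max_seconds=10.0):
    """Accept only if the detected action matches the challenge and was
    performed promptly - a replayed or pre-rendered video can't react
    to a challenge it has never seen."""
    elapsed = time.time() - challenge["issued_at"]
    return detected_action == challenge["action"] and elapsed <= max_seconds


challenge = issue_challenge()
# Simulate a genuine user performing the requested action...
print(verify_response(challenge, challenge["action"]))  # True
# ...and a submission that doesn't match the challenge.
print(verify_response(challenge, "blink"))  # False
```

The unpredictability of the challenge is what matters: because the fraudster cannot know in advance which action will be requested, a static deepfake or replayed clip fails the check even if it is visually convincing.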

The Dual Nature Of Generative AI

Although, as you’d expect an ‘Identity Fraud Report’ to do, the Onfido report focuses solely on the threats posed by AI, it’s important to remember that AI tools can be used by all businesses to add value, save time, improve productivity, get more creative, and to defend against the AI threats. AI-driven verification tools, for example, are becoming more adept at detecting and preventing fraud, underscoring the technology’s dual nature as both a tool for fraudsters and a shield for businesses.

What Does This Mean For Your Business?

Even tempering the startling statistics with the knowledge that Onfido sells its own deepfake (liveness) detection solution and SDKs, the report still paints a rather worrying picture for businesses. The Onfido 2024 Identity Fraud Report’s findings, highlighting a 3,000 per cent increase in deepfake fraud attempts driven by readily available generative AI tools, signal a pivotal shift in the landscape of online fraud. This shift could pose new challenges for UK businesses but also open avenues for innovative solutions.

For businesses, the immediate response may involve upgrading identity verification processes with AI-powered solutions tailored to detect and counter deepfakes. However, it’s not just about deploying advanced technology. It’s also about ensuring these systems evolve with the fraudsters’ tactics. Equally crucial is the role of employee training in recognising and responding to these sophisticated fraud attempts.

As regulatory landscapes adjust to these emerging threats, staying informed and compliant is also likely to become essential. The goal is not only to counter current threats but to build resilience and innovation for future challenges.