Protecting against deepfakes in the era of LLMs

The Rise of AI and Deepfakes in Financial Scams: A Growing Concern

The advent of artificial intelligence (AI) has brought numerous advancements, but it has also given rise to deepfakes, which pose significant risks, particularly in financial scams. The increasing sophistication of large language models (LLMs) has further amplified the potential for deepfake technology to be exploited in the financial sector, where the stakes are particularly high.

A Global Cat-and-Mouse Game

Regulatory bodies and fintech companies worldwide are engaged in a relentless battle against fraudsters, striving to prevent or minimize user losses. This ongoing struggle is characterized by continuous efforts to stay ahead of increasingly sophisticated fraudulent activities.

The Human-Like Qualities of Deepfake Voices

In a recent interview, Lei Chen from FinVolution Group highlighted the human-like qualities of LLM-generated voices, which make it challenging to distinguish them from genuine ones. FinVolution Group, a US-listed fintech platform, provides credit services and anti-fraud technologies across the pan-Asian region and beyond.

Chen, Vice President of FinVolution and head of its big data and AI division, noted that earlier attempts at voice replication only produced similar-sounding voices. In contrast, modern deepfake technology creates textually coherent and free-flowing dialogues that closely mimic real human conversation, significantly increasing the likelihood of deception.

The recent release of GPT-4o by OpenAI demonstrates the advancements in mimicking human speech. Using text-to-speech (TTS) technology, GPT-4o can generate highly natural and emotionally rich voice outputs, making detection increasingly difficult.

Chen views these advancements with mixed feelings. While they represent a significant step towards artificial general intelligence, he is concerned about the risks they pose, particularly in financial transactions where both the elderly and financial professionals are at heightened risk of being deceived by fraudsters.

Real-World Implications

In February, an accountant at the Hong Kong branch of a multinational company transferred HK$200 million across 15 transactions, believing he was following instructions from the company’s chief financial officer and other staff during a video meeting. Only later did he realize that the participants were deepfake imposters. Such incidents are becoming alarmingly common.

The Surge in Deepfake-Related Fraud

A report by Sumsub, an identity verification service provider, revealed a tenfold increase in reported deepfake-related frauds globally from 2022 to 2023. Countries like the Philippines, Vietnam, the United States, and Belgium have seen particularly sharp increases in fraud attempts.

Chen emphasized that while fraudulent videos attract more attention, voice forgery presents a greater challenge. Voice cloning and recognition are more complex than image processing, involving intricate processing logic and various personal traits like accents, intonation, speech habits, and dialects. This complexity makes voice forgery more difficult to detect and combat.

Combatting Voice-Based Fraud

As the global tech community shifts from traditional deep learning models to LLMs, detecting synthesized voices becomes even more challenging. Future advances in fake voice recognition will rely on LLMs for more precise detail capture, according to Qiang Lyu, an algorithm scientist at FinVolution Group.

FinVolution has been vigilant against voice-based fraud attempts, logging and intercepting more than 1,000 such cases in China over a two-to-three-month period last year.

Innovative Approaches to Fraud Detection

To better identify financial scams involving cloned voices, novel approaches are needed. Lei Chen suggests modeling detection methods on the polygraph: just as a polygraph picks up subtle physiological tremors, detection systems can analyze subtle fluctuations in a conversation to surface emotional shifts or signs of deception. LLMs, trained on vast amounts of real data, can then pinpoint minute deceptive details.
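To make the idea of cue-based screening concrete, here is a minimal sketch of a first-pass filter that scores a call transcript against simple deception heuristics before any heavier model is applied. The keyword list, weights, and threshold are invented for illustration and are not FinVolution's actual method; a production system would use a trained classifier rather than a word list.

```python
# Illustrative heuristic screen for scam-call transcripts.
# Cues, weights, and threshold are assumptions made for this sketch.

URGENCY_CUES = {"immediately": 2, "urgent": 2, "confidential": 1,
                "wire": 2, "transfer": 1, "gift card": 3}

def risk_score(transcript: str) -> int:
    """Sum the weights of every cue phrase found in the transcript."""
    text = transcript.lower()
    return sum(w for cue, w in URGENCY_CUES.items() if cue in text)

def flag_for_review(transcript: str, threshold: int = 4) -> bool:
    """Escalate the call to deeper (e.g. LLM-based) analysis if risky."""
    return risk_score(transcript) >= threshold

call = "This is urgent and confidential - wire the funds immediately."
print(risk_score(call))       # 2 + 1 + 2 + 2 = 7
print(flag_for_review(call))  # True
```

A screen like this only triages: its job is to decide which conversations merit the costlier, more precise analysis the article describes.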

FinVolution is also exploring ways to leverage voice recognition technology to protect users’ financial well-being. This includes integrating voiceprint services into its apps, enabling users to record a voiceprint at registration that can later be used to verify their identity during sensitive operations.
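The verification step behind such a voiceprint service typically compares an embedding of the new audio sample against the embedding stored at registration. The sketch below shows only that comparison logic, using cosine similarity and a made-up threshold; the fixed embedding vectors are placeholders for what a real speaker-encoder model would produce from audio, and none of this reflects FinVolution's actual implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify_voiceprint(enrolled, candidate, threshold=0.85):
    """Accept only if the candidate embedding is close enough
    to the voiceprint captured at registration."""
    return cosine_similarity(enrolled, candidate) >= threshold

# Placeholder embeddings; a real system would derive these from
# audio with a speaker-encoder model.
enrolled = [0.9, 0.1, 0.4]
same_speaker = [0.88, 0.12, 0.41]
impostor = [0.1, 0.9, -0.2]

print(verify_voiceprint(enrolled, same_speaker))  # True
print(verify_voiceprint(enrolled, impostor))      # False
```

The design choice worth noting is the threshold: set too low it admits cloned voices, set too high it locks out legitimate users on a bad microphone, so real deployments tune it against measured false-accept and false-reject rates.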

The Role of Regulation

Chen emphasizes the importance of collaboration with regulatory authorities in combating deepfakes. “Clear legislation and stringent enforcement are necessary to ensure the proper use of personal data and privacy,” he says. Proactive strategies, such as labeling datasets, can help fintech companies promptly detect instances of misuse and mitigate potential damages.

Conclusion

In the face of a constantly evolving AI landscape, fintech companies like FinVolution must bolster their defense mechanisms to protect consumers and third-party partners. By fostering innovations in regulatory technology (regtech) and maintaining strict data governance, the industry can raise the bar for fraudsters and deter reckless AI usage. Persistent efforts to showcase superior AI capabilities can act as a potent deterrent, promoting the responsible use of technology for the greater good.

Source: fintechfutures.com
