New Elliptic Study Predicts AI as the Future of Crypto Crime
By Harvey Hunter
Published on: June 7, 2024 09:09 EDT | Reading time: 2 minutes
A recent Elliptic report has documented the emergence of AI-driven crypto crime, signaling a new era of cyber threats in which the technology is being used for deepfake scams, state-sponsored attacks, and other sophisticated illicit activities.
The report acknowledges the positive impact of artificial intelligence (AI) across industries, including the cryptoasset sector, where it has spurred numerous projects poised to redefine the AI crypto landscape.
However, as with any new technology, there is a risk of malicious actors exploiting these advancements for illicit purposes.
Elliptic therefore emphasizes the importance of monitoring early indicators of illicit activity, so that emerging risks can be addressed in their infancy and long-term innovation can flourish.
Impersonating Celebrities, Leaders, and Crypto Executives
Elliptic has highlighted the use of deepfakes and AI-generated content to promote crypto scams.
Deepfake videos often leverage the likeness of prominent figures such as Elon Musk and former Singaporean Prime Minister Lee Hsien Loong to endorse fraudulent investment schemes.
These videos create a false sense of legitimacy by implying official backing, thereby deceiving potential victims.
Sophisticated “pig butchering” romance scams involve sustained communication with victims before the fraud is carried out.
Elliptic also reported instances of deepfake technology impersonating high-ranking executives during virtual meetings, exploiting their authority to authorize significant transactions that impact both the corporate and crypto sectors.
AI-Powered Illicit Markets
Tools like ChatGPT can be used to generate code or probe existing code for vulnerabilities, posing a significant threat to the crypto industry.
Decentralized crypto applications often rely on open-source code, like smart contracts, making them susceptible to cyberattacks. The emergence of DeFi auditing aims to mitigate these risks.
While AI has the potential to streamline smart contract audits, auditors caution about its current limitations. There is a concern that hackers could exploit AI to identify vulnerabilities in DeFi protocols quickly.
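To illustrate the kind of work AI-assisted audit tools automate, here is a minimal, purely illustrative sketch of pattern-based vulnerability scanning in Python. The rules below are simplified heuristics for two well-known Solidity anti-patterns (the `Vault` contract and the rule names are hypothetical examples, not from the Elliptic report); real audits, with or without AI, go far beyond pattern matching.

```python
import re

# Naive heuristic rules for two common Solidity anti-patterns.
# These are illustrative only and produce false positives/negatives.
RULES = {
    "tx.origin used for authorization": re.compile(r"\btx\.origin\b"),
    "external call with value before state update (reentrancy risk)":
        re.compile(r"\.call\{value:"),
}

def scan_contract(source: str) -> list[str]:
    """Return the names of heuristic rules triggered by the source code."""
    return [name for name, pattern in RULES.items() if pattern.search(source)]

# Hypothetical vulnerable contract used as input for the scan.
sample = """
contract Vault {
    function withdraw(uint amount) external {
        require(tx.origin == owner);
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok);
        balances[msg.sender] -= amount;
    }
}
"""

for finding in scan_contract(sample):
    print("warning:", finding)
```

The concern raised by auditors is that the same automation cuts both ways: a model that can flag such patterns for a defender can enumerate them for an attacker at scale.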
On dark web forums, large language models (LLMs) are being explored for crypto-related crimes, including reverse-engineering wallet seed phrases and automating scams such as phishing and malware distribution.
Dark web markets offer illicit GPT variants specifically tailored for AI crypto crime, designed to bypass the safeguards built into legitimate models.
The report mentions WormGPT, described as the “enemy of ChatGPT”, which openly promotes itself for facilitating phishing emails, carding, malware creation, and generating malicious code.
US Alerts on North Korean AI Crypto Crime
The United Nations has attributed over 60 cryptocurrency heists to North Korean state actors, resulting in the theft of over $3 billion from 2017 to 2023. Reports suggest that North Korea is exploring AI to enhance their hacking capabilities.
Anne Neuberger, the US Deputy National Security Advisor for Cyber and Emerging Technologies, has expressed concerns about the increasing use of AI in criminal activities.
The report also highlights North Korea’s advancements in AI research since 2013, focusing on applications like facial recognition and potential military uses. Kim Il Sung University is actively involved in AI program development, collaborating with Chinese entities in this domain.
While Elliptic has not found direct evidence of hostile state actors using AI on blockchains, these groups are experimenting with large language models (LLMs) to improve their hacking skills.