Artificial intelligence (AI) scams pose a clear and present danger to individuals and corporations in 2025. A glance in the rearview mirror at 2024 offers a glimpse of what's coming: a proliferation of voice clones, deepfakes, and AI-powered phishing. Cybercriminals are growing ever more sophisticated, exploiting every available technology to bypass existing defenses. 2025 is expected to be a year of tremendous challenge as AI-driven scams rise, especially in online dating, online banking, and the fintech industry.
We are seeing huge numbers of impressively realistic fake job advertisements across social media channels such as Telegram, Instagram, Facebook, X, and TikTok. Scores of people are posing as AI models so that their likenesses can be marketed for deepfakes. AI is developing at breakneck speed, and many under-resourced IT security teams cannot keep pace with these rapid and unprecedented advances. Generative AI is a particularly troublesome threat vector: the Deloitte Center for Financial Services (DCFS) estimates that generative AI-enabled fraud could produce losses exceeding $40 billion within three years, up from $12.3 billion in 2023.
This rapid proliferation of AI-driven scams underscores the importance of addressing vulnerabilities in digital systems. Among the most insidious threats is malicious code infiltrating applications and websites, a primary vehicle for fraudsters to deploy their tactics. Such exploits are devastating in industries that rely on online platforms, where trust is paramount. For a deeper understanding of how these attacks work and how to prevent them, consult the detailed guide to understanding malicious code. By recognizing these patterns, businesses and individuals can stay ahead of evolving AI-fueled fraud techniques.
The Federal Bureau of Investigation is Taking the AI Threat Seriously
Indeed, the Federal Bureau of Investigation (FBI) routinely warns about criminals abusing artificial intelligence for nefarious purposes. It is precisely this threat vector, AI fraud, that is the number one concern for corporations globally. The evidence is abundant, with face-swapping programs readily available on Telegram. The implications of such technology are profound, and not in a good way. The developers behind these deepfake and face-swapping tools have created software so realistic that the human eye cannot quickly identify its output. The scam potential of such software is off the charts, particularly in online dating applications.
Astonishingly, algorithms that gauge 'chatter' about criminal scams and AI show exponential growth in recent years. On Telegram, for example, messages in criminal channels devoted to deepfakes and AI for fraudulent activity spiked from 47,000 in 2023 to 350,000 by the end of 2024 (Point Predictive), an increase of almost 650%. At these growth rates, social media platforms, forums, blogs, seemingly legitimate dating sites, and other websites and applications will be overwhelmed with AI-generated scams. Fraud experts are rightly concerned that 2025 will yield record figures.
What Are BEC Attacks?
The scams most likely to target corporate employees are Business Email Compromise (BEC) attacks, which abuse AI technology and leverage deepfakes for fraudulent activity. Hong Kong witnessed such an instance in which AI-generated content was used to impersonate company executives on Zoom. The audiovisual realism of the onscreen characters was so convincing that employees were persuaded to transfer $30 million to the fraudsters. Unfortunately, these scams are commonplace, with cybercriminals leveraging every tool to con unsuspecting victims. US-based companies face similar challenges: according to Medius, half of accounting professionals in the United States were victims of deepfake AI fraud in 2023.
Extortion Attacks on High-Profile Officials
Deepfake extortion via email is yet another sophisticated scam employed across Southeast Asia. In Singapore, dozens of public servants from scores of government agencies were targeted in an extortion plot: emails demanded a ransom of $50,000 in cryptocurrency. The deepfakes used AI-generated material to depict the targeted officials in compromising positions, such as taking bribes or being unfaithful to their partners, threatening to destroy their credibility. If the officials refused to pay, the fraudsters threatened to release the photos and videos to the media. The source images were taken from YouTube and LinkedIn, a reminder that the raw material for such scams, like the AI software itself, is more readily available than ever.
Sextortion Scams
This is a particularly worrisome AI-generated scam because it disproportionately targets younger people. Scammers pose as women interested in a romantic relationship, sending convincing, realistic images of attractive women to unsuspecting victims, who are then asked to send compromising pictures of themselves. The tables are then turned on the victim, and money is demanded. Failure to pay results in the scammer sending the compromising videos and images to family members and friends. These scams are engineered for urgency: the victim is given little time to consult anyone before the compromising content is supposedly released to the public.
AI Dating Scams
Artificial intelligence dating scams are a sad reality that millions of people are discovering. Many of these scams originate in Nigeria, where computer-savvy fraudsters con unsuspecting victims out of vast sums of money. Shockingly, AI chatbots with ultra-lifelike features, voices, and appearances can interact in real time with victims who are none the wiser. These chatbots speak fluently, with emotion, purpose, and nuance, mimicking the authenticity of a human love interest. Autonomous chatbots are expected to swamp the dating scene in 2025.
Imposter Scams
One of the most common AI scams is the imposter scam, in which scammers pretend to be close friends or relatives of the victim, or sometimes present as high-profile figures interested in communicating with them. According to the FTC's 2023 Consumer Sentinel Network Data Book (CSNDB), one in five imposter scam victims lost money, with a typical loss of $800 per person at the time. We can expect a much higher number of imposter scams in 2025, with attendant losses also rising.
Sources:
https://www.aarp.org/money/scams-fraud/info-2024/biggest-scams-2025.html
https://www.experian.com/blogs/ask-experian/the-latest-scams-you-need-to-aware-of/