AI use in cybercrime to look out for


AI can change every business. The internet took brick-and-mortar stores online and made them available worldwide. AI can become your digital assistant, automate tedious tasks, and make your life even easier.

But there's a downside. Whenever something becomes effortless, we give something up that's hard to get back. The internet gave us easy access to entertainment, information, and connection to the rest of the world, but it cost us digital privacy. Now that AI has entered the market, and there's no going back, we should abandon the belief that anything we see online is necessarily true or real.

Cybercriminals use tools like ChatGPT to craft social engineering attacks, malware, and encryption tools. As generative AI improves, so do their scams. Here are a few examples of AI use in cybercrime to watch out for.

Fake social media accounts

Creating and maintaining a fake social media account is a breeze thanks to AI. Scammers use these fake accounts for romance scams, brand impersonation, spreading fake news, and other malicious activities. In December 2023, for example, an experienced cybercriminal launched a platform that uses AI to generate social media content, including posts and replies for Telegram, Twitter (now X), Facebook, and Instagram. For a subscription fee, the service manages fake social media accounts, and a standalone platform lets buyers create and enhance the content with AI themselves.

Case studies from this operation included AI-generated nude images of female models, celebrities, and influencers, and the content looked disturbingly realistic. Others were copies of successful crypto traders, with the cybercriminal mimicking their images, backgrounds, and content. Worst of all, the platform can generate content for 200 accounts simultaneously: images, text, reels, and chats. And scammers are only getting started.

Deepfakes

On New Year's Eve, the cybercriminal underground delivered a gift of its own: AI-powered deepfake services. Bad actors can order lip syncs, full deepfakes (lip sync plus face replacement), and voice cloning. Users pay per video length (usually 30 seconds) and receive the desired content. The damage deepfakes can cause is often irreparable.

Bad actors can use presidents, influencers, celebrities, and even your family members to create fake social media profiles and push products, services, or agendas. They can also use deepfakes for fake charities, crypto scams, and swindling unsuspecting victims out of their money. Impersonating corporate executives can damage a brand or extract millions of dollars, as when a finance worker paid out $25 million after a video call with what proved to be a deepfake of their CFO.

Spam tools and services

Most people recognize spam because it has been around for so long. You get a junk email offering free money or an inheritance from an African prince, and you simply ignore it. The services we use daily have filters that keep most spam from ever reaching you, but some still slips through. Scammers aren't standing still: they keep refining their strategies, and AI has given them superpowers.

One cybercriminal who had been spamming for 15 years used ChatGPT to power his spam services. The AI let him randomize the text of each scam message, raising his success rate: more emails reached inboxes, and he claimed that with ChatGPT, nothing was impossible. Others using the tool reported success rates of around 70% and claimed to bypass Gmail, Outlook, Yahoo, and other webmail filters.

Cybercriminals shopping for new tools now demand ChatGPT integration precisely because of this randomization: when every email carries unique text, it is far more likely to slip past anti-spam filters. We're living in dangerous times.

How can you protect yourself? 

When everything can be falsified, the new rule of the internet must become: don't trust anything. Don't trust your eyes, and don't trust your ears. That's hard, because humans aren't wired to question every piece of information that reaches them. Keep your guard up online, and never let it down.

To protect yourself, start with the basics: cybersecurity and network security tools. Use a combination of a firewall, antivirus, a VPN, a password manager, and two-factor authentication (2FA). Together, these tools cover the technical side. That's the easy part. The hard part is what you click on, what you download, and what information you share.
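2FA is worth understanding rather than just installing. Most authenticator apps implement TOTP (RFC 6238): the app and the server share a secret, and both derive a short code from that secret plus the current time, so a phished password alone is useless without the rotating code. The sketch below shows the derivation using only Python's standard library; the `totp` helper name and the example secret are illustrative, not any particular app's API.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, time_step=30, digits=6, now=None):
    """Derive a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32.upper())
    # Counter = number of time steps since the Unix epoch.
    counter = int((time.time() if now is None else now) // time_step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the last nibble of the digest, then mask the sign bit.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides recompute the code from the shared secret and the clock, nothing secret ever travels over the network at login time; with the RFC 6238 test secret (`"12345678901234567890"` base32-encoded) and `now=59`, the 8-digit code comes out to the spec's published vector, 94287082.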

Cybercriminals no longer focus on brute-force attacks that hammer at your account directly. Like fishermen, they set hooks in the water and wait for you to bite. And you will; it's just a matter of time.

Hackers mount careful social engineering attacks that run for months. One of the best examples is the Axie Infinity hack: the attackers targeted one of the technical founders with a fake company, a fake team, and a fake job offer to plant malware in a PDF file. Then, over several months, they expanded the malware's reach across the network until they could steal more than $600 million worth of crypto tokens.

No one can prepare for something like that without having seen it happen before. That's why you need to stay on top of cybersecurity trends and remain vigilant about your digital safety. Use your security tools consistently, and never skimp on cybersecurity best practices. They exist for a reason.
