AI Scams Posing Greater Cybersecurity Risk

07.03.23 11:09 AM

Artificial intelligence (AI) continues to leave its mark on the world of business, enhancing productivity and efficiency in a wide variety of ways. However, AI is also causing threats to evolve. Malicious actors can increasingly launch believable scams thanks to AI technology, creating new risks for individuals and businesses alike.

Understanding emerging threats will allow your business to update its practices and more effectively shield your organization and its assets from attacks. Here’s a look at how AI is altering the threat landscape.

How AI Scams Are Increasing Cybersecurity Risk

Generative Text AIs

One of the methods many people use to identify scams is to review the text of suspicious emails or text messages for clues. Spelling and punctuation errors can be giveaways, depending on their nature. Similarly, awkward word choices or the incorrect use of common phrases can indicate a scam.

However, spotting scam messages is becoming more challenging due to the wide availability of generative-text AI technologies. This type of tech can create natural-sounding content based on simple prompts and offers a high degree of spelling and grammar accuracy. As a result, a scammer can have the technology make the content for email and text scams instead of writing it themselves, eliminating many of the red flags that people use to identify threats.

AI-Created Imagery

With AI technology, individuals can alter existing images in highly believable ways. For example, a scammer can take a photograph of someone close to the target, combine it with a picture of a current disaster, and use the result to convince the target that someone they know needs help. The scammer can then message the target to request financial assistance, scamming them out of money.

Additionally, AI technology can help create documents or other written communications that appear to come from a legitimate business. For example, capturing a logo to generate authentic-looking letterhead is relatively simple with AI support. Similarly, altering the details of a business card to let a scammer pose as a legitimate employee isn’t tricky. There are far more potential uses for AI-created imagery that could allow malicious actors to solicit funds, capture sensitive user credentials, and more.

AI-Generated Audio

Some AI technologies can use small snippets of a person’s voice to functionally replicate how they speak. Scammers can use these tools to convince a target that they’re talking to someone they know, such as a family member or colleague. A person is less likely to question red flags in a conversation if the speaker sounds familiar, making it easier for malicious actors to get the money or information they’re after.

Additionally, AI-generated audio could potentially get past voice-based verification systems. A malicious actor can create recordings that mimic the authorized party, and when those recordings are played, the verification system may not detect a difference between the audio and the authorized person speaking, letting the scammer through.

How to Protect Your Organization from AI Scams

Protecting your company from AI scams requires a multi-faceted approach. First, educate employees about the tools and strategies malicious actors are using, as awareness can increase the likelihood that workers will think twice before engaging with suspicious messages. Additionally, teach them techniques that support due diligence, such as who to contact to verify the authenticity of any messages, images, or phone calls.

Finally, use multi-factor authentication, including on voice-based verification systems. With a combination approach, a scammer who can mimic an authorized party’s voice can still be stopped by the second required authentication factor. That extra requirement dramatically enhances security, so it’s a wise addition regardless.

Book a free consultation with VocalPoint to go over options to help fortify your organization against bad actors. 

Nathan Weatherford