This New AI Scam Could Fool Anyone - Here's How to Beat It

Criminals are using deepfake technology in a scary new AI scam, but knowing the red flags can help you avoid becoming their next victim.

Picture this: You answer the phone one day and hear a voice that sounds like your child’s. They tell you that they have been kidnapped and need cash for a ransom right away. You scramble to help—only to realize that the voice on the other end of the line doesn’t belong to your child but instead is part of a sophisticated, terrifying new AI scam that uses deepfake phone calls.

That’s what happened to Arizona mother Jennifer DeStefano, who recently testified about her experience to the Senate. And unfortunately, her story is all too common. As artificial intelligence (AI) technology becomes cheaper and more accessible, criminals are frequently using it to impersonate the voices of our friends and loved ones to trick us into sending them money. According to the Federal Trade Commission, scammers stole more than $12.5 billion from Americans in 2024, with imposter scams accounting for $2.95 billion of those losses.

The good news? You can beat scammers at their own game. Reader’s Digest spoke with five cybersecurity experts, including the head of the Identity Theft Resource Center, to learn how to spot these new AI scam calls, how they can put your personal information at risk and what to do if you become a target. Read on to find out how to protect yourself and stop scammers in their tracks.

What is the new AI scam call, exactly?

A clever scammer with a good AI program needs little more than a few-second recording of a loved one’s voice to clone it and apply their own script. From there, they can play the audio over the phone to convince their victim that someone they love is in a desperate situation and needs money immediately. These aren’t your typical four-word phone scams—they’re much more advanced.

In one of the most common examples, parents or grandparents receive a call from their children or grandchildren claiming they need money for ransom or bail, like the AI deepfake scam DeStefano encountered. “We have seen parents targeted and extorted for money out of fear that their child is in danger,” says Nico Dekens, director of intelligence and collection innovation at ShadowDragon. Eva Velasquez, CEO of the Identity Theft Resource Center, says that the center also receives reports of AI scam calls that convince victims their relative needs money to pay for damages from a car accident or other incident.

Other scams include using a manager or executive’s voice in a voicemail instructing someone to pay a fake invoice, as well as calls that sound like law enforcement or government officials demanding that the target share sensitive information over the phone. As you can see, this type of AI-powered phishing can take many forms, but the through line is the sense of urgency the scammer creates. The goal is to get you to panic and make an impulsive decision.

How does this AI scam work?

It may take a few steps to pull together an AI scam, but the technology speeds up the process to such an extent that these cons are worryingly easy to produce compared with voice scams of the past. Here are the three steps involved.

Step 1: Collect the recording

To carry out an AI scam call, criminals first must find a five- to 10-second audio recording of a loved one’s voice, such as a clip from YouTube or a post on Facebook or Instagram.
Yep, that’s all it takes for an AI algorithm to create an eerily accurate clone. To do that, the scammer feeds the recording into an AI tool that learns the person’s voice patterns, pitch and tone—and, crucially, can simulate that voice saying new words. These tools are widely available and cheap or even free to use, which makes them even more dangerous, according to experts. For example, generative AI voice models like Microsoft’s VALL-E need only a three-second audio “training” clip of someone speaking to create a replica of their voice. “As you can imagine, this gives a new superpower to scammers, and they started to take advantage of that,” says Aleksander Madry, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory.

Step 2: Give a script to the AI

Once the AI software learns the person’s voice, con artists can tell it to create an audio file of that cloned voice saying anything they want. Their next step is to call you and play the AI-generated clip (also called a deepfake). The call might come from a local area code to convince you to answer the phone, but don’t be fooled—the bad guys are capable of spoofing their phone numbers. Many phone-based fraud scams originate from countries with large call-center operations, like India, the Philippines or even Russia, according to Velasquez.

Step 3: Set the trap

The scammer will tell you that your loved one is in danger and that you must send money immediately in an untraceable way, such as with cash, via a wire transfer or using gift cards. Although this is a telltale sign of a scam, most victims panic and agree to send the money. “The nature of these scams plays off of fear, so in the moment of panic these scams create for their victims, it is also emotional and challenging to take the extra moment to consider that it might not be real,” Dekens says.

Scammers are also relying on the element of surprise, according to Karim Hijazi, managing director of SCP & CO, a private investment firm focused on emerging technology platforms. “The scammers rely on an adequate level of surprise in order to catch the target off guard,” he says. “Presently, this tactic is not well known, so most people are easily tricked into believing they are indeed speaking to their loved one, boss, co-worker or a law enforcement professional.”

How has AI made scams easier to run—and harder to spot?

Imposter scams are nothing new, but artificial intelligence has made them more sophisticated and convincing. “AI did not change much in terms of why people do scams—it just provided a new avenue to execute them,” Madry says. “Be it blackmail, scam, misinformation or disinformation, they now can be much cheaper to execute and more persuasive.”

AI has been around for decades for both criminal and everyday use—think AI password cracking and AI assistants like Alexa and Siri—but it used to be expensive and required a massive amount of computing power to run. As a result, shady characters needed a lot of time and expertise with specialized software to impersonate someone’s voice using AI. That’s no longer the case. “Now, all of this is available for anyone who just spends some time watching tutorials on YouTube or reading how-to docs and is willing to tinker a bit with the AI systems they can download from the internet,” says Madry. On top of that, Velasquez notes that scammers running earlier imposter phone scams had to blame a poor connection or a bad accident to explain why the voice sounded different.
But today’s technology “has become so good that it is almost impossible for the human ear to be able to tell that the voice on the other end of the phone is not the person it purports to be,” says Alex Hamerstone, a director with the security consulting firm TrustedSec.

Original Source: https://www.rd.com/article/ai-scam/