FBI Warns Smartphone Users—Hang Up And Create A Secret Word Now
source: forbes.com (contributed by Artemus founder, Bob Wallace) | image: fbi.gov
Update, Dec. 07, 2024: This story, originally published Dec. 05, now includes details of innovative technological solutions for smartphone users looking to protect themselves from the kinds of AI-generated scams the FBI has warned about. An update on Dec. 06 added details on reporting smartphone crime to the FBI along with additional input from security experts.
The use of AI in smartphone cyber attacks is increasing, as recent reports have revealed: from tech support scams targeting Gmail users to fraudulent gambling apps and sophisticated banking fraud that defeats biometric protections, to name but a few. Now the Federal Bureau of Investigation has issued a public service announcement warning of how generative AI is being used to facilitate such fraud, and advising smartphone users to hang up and create a secret word to help mitigate these cyber attacks. Here’s what the FBI warned you must do.
FBI Warns Of Generative AI Attacks Against Smartphone Users
In public service alert number I-120324-PSA, the FBI has warned that cyber attackers are increasingly turning to generative AI to commit fraud on a larger scale and to increase the believability of their schemes. “These tools assist with content creation and can correct for human errors that might otherwise serve as warning signs of fraud,” the FBI said. Because, as the FBI admits, it can be difficult to tell what is real and what is AI-generated today, the public service announcement sets out what everyone should look out for and how to respond to mitigate the risk. Although not all the advice is aimed directly at smartphone users, the smartphone remains a primary delivery mechanism for many AI deepfake attacks, especially those using both facial and vocal cloning, so it is this advice I am focusing on.
The FBI warned of the following examples of AI being used in cyber attacks, mostly phishing-related.
- The use of generative AI to produce photos shared with victims to convince them they are speaking to a real person.
- The use of generative AI to create images of celebrities or social media personas promoting fraudulent activity.
- AI-generated short audio clips mimicking the voice of a loved one or close relative in a crisis situation, asking for financial assistance.
- AI-generated real-time video chats with alleged company executives, law enforcement, or other authority figures.
- AI-created videos to “prove” the online contact is a “real person.”
AI is going to start blurring our everyday reality as we head into the new year, according to Siggi Stefnisson, cyber safety chief technical officer at Gen, the trust-based security platform whose brands include Norton and Avast. “Deepfakes will become unrecognizable,” Stefnisson warned. “AI will become sophisticated enough that even experts may not be able to tell what’s authentic.” All of which means, as the FBI has suggested, that people are going to have to ask themselves every time they see an image or watch a video: is this real? “People with bad intentions will take advantage,” Stefnisson said. “This can be as personal as a scorned ex-partner spreading rumors via fake photos on social media or as extreme as governments manipulating entire populations by releasing videos that spread political misinformation.”
The FBI Says To Hang Up And Create A Secret Word
To mitigate the risk of these smartphone-based AI cyber attacks, the FBI has warned that the public should do the following:
- Hang up the phone, then verify the identity of the person who called you by researching their contact details online and calling the number you find directly.
- Create a secret word or phrase known to your family and contacts so it can be used for identification purposes in the event of a genuine emergency call (a code sketch of the same shared-secret principle follows this list).
- Never share sensitive information with people you have met only online or over the phone.
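The FBI’s secret word is, at heart, a shared-secret check performed by humans. For readers curious what that principle looks like in software, here is a minimal Python sketch, entirely my own illustration rather than anything the FBI prescribes, that stores only a salted hash of the agreed phrase and compares a caller’s answer against it in constant time:

```python
# Illustrative shared-secret check: store a salted hash of the family
# phrase (never the phrase itself), then verify what a caller says.
# All names and parameters here are assumptions for illustration only.
import hashlib
import hmac
import os

def enroll(phrase: str):
    """Run once, when the family agrees on the phrase; keep salt + digest."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", phrase.strip().lower().encode(), salt, 200_000)
    return salt, digest

def verify(spoken: str, salt: bytes, digest: bytes) -> bool:
    """Compare the spoken phrase against the stored hash in constant time."""
    candidate = hashlib.pbkdf2_hmac(
        "sha256", spoken.strip().lower().encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)
```

In practice, of course, the FBI’s advice needs no software at all: the point is simply that the phrase is agreed in advance and never shared with strangers.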
Shaken Not Stirred—A James Bond Approach To The Smartphone Deepfake Problem
In their technical research paper, Shaking the Fake: Detecting Deepfake Videos in Real Time via Active Probes, Zhixin Xie and Jun Luo from Nanyang Technological University, Singapore, have proposed a system called SFake to determine whether a smartphone video is actually generated by AI. SFake, the researchers said, “innovatively exploits deepfake models’ inability to adapt to physical interference,” by actively sending probes that trigger good old-fashioned mechanical vibrations on the smartphone. “SFake determines whether the face is swapped by deepfake based on the consistency of the facial area with the probe pattern,” Xie and Luo said. After testing, the clever duo concluded that “SFake outperforms other detection methods with higher detection accuracy, faster process speed, and lower memory consumption.” This could be one to watch in the future of mobile deepfake detection.
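To make the mechanism concrete, here is a short Python sketch of the consistency check at the heart of the idea. This is not the authors’ code: the optical-flow measurement, correlation test, and threshold are all my own assumptions about how such a probe-versus-face comparison could be implemented.

```python
# Sketch of SFake's core idea: trigger a vibration "probe," then test
# whether motion inside the facial region tracks the probe pattern.
# A genuine camera feed shakes with the handset; a deepfake model
# re-synthesising the face struggles to reproduce that motion.
import cv2
import numpy as np

def face_motion_signal(frames, face_box):
    """Mean optical-flow magnitude inside the face box, per frame pair."""
    x, y, w, h = face_box
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)[y:y+h, x:x+w]
    signal = []
    for frame in frames[1:]:
        cur = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)[y:y+h, x:x+w]
        flow = cv2.calcOpticalFlowFarneback(
            prev, cur, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        signal.append(np.linalg.norm(flow, axis=2).mean())
        prev = cur
    return np.array(signal)

def looks_genuine(frames, face_box, probe_pattern, threshold=0.6):
    """High correlation with the probe suggests the face is real footage."""
    motion = face_motion_signal(frames, face_box)
    n = min(len(motion), len(probe_pattern))
    corr = np.corrcoef(motion[:n], np.asarray(probe_pattern)[:n])[0, 1]
    return corr >= threshold  # threshold is an assumed, untuned value
```

The design insight is that the defender controls the probe timing, so the expected motion pattern is known in advance and a generative model would have to reproduce it in real time to pass.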
Honor Magic 7 Pro Builds Deepfake Detection Into The Smartphone Itself
The soon-to-be-released Magic 7 Pro flagship smartphone from Honor looks like it will bring scam protections right into the handset with an innovative on-device AI deepfake detection feature. According to Honor, the deepfake detection platform has been “trained through a large dataset of videos and images related to online scams, enabling the AI to perform identification, screening, and comparison within three seconds.” If any suspected deepfake content is detected, the user gets an immediate warning to deter them from continuing the engagement, potentially saving them from expensive fraud.
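In outline, such a feature is a lightweight classifier scoring a short burst of video-call frames on the device itself. The sketch below shows one plausible shape for that flow; the class names, the predict() interface, and the 0.8 threshold are my illustrative assumptions, not Honor’s actual implementation.

```python
# Hypothetical on-device deepfake check: average per-frame "fake" scores
# over a few seconds of a video call and warn the user above a threshold.
from typing import Callable, Iterable, Protocol

class FrameClassifier(Protocol):
    def predict(self, frame) -> float: ...  # returns P(frame is fake)

def check_video_call(frames: Iterable,
                     model: FrameClassifier,
                     warn: Callable[[str], None],
                     threshold: float = 0.8) -> bool:
    """Score a short frame burst; warn and return True if likely fake."""
    scores = [model.predict(f) for f in frames]
    if scores and sum(scores) / len(scores) >= threshold:
        warn("Suspected deepfake detected. Verify the caller through "
             "another channel before continuing.")
        return True
    return False
```

Running the classifier on-device also keeps call frames off the network, which is presumably one reason the screening happens locally rather than in the cloud.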
How To Report AI-Powered Smartphone Fraud Attacks To The FBI
If you believe you have been a victim of a financial fraud scheme, please file a report with the FBI Internet Crime Complaint Center. The FBI requests that, when doing so, you provide as much of the following information as possible (a simple checklist sketch follows this list):
- Any information that can assist with the identification of the attacker, including their name, phone number, address and email address, where available.
- Any financial transaction information, including dates, payment types and amounts, account numbers, the name of the financial institution that received the funds and, finally, any recipient cryptocurrency addresses.
- As complete a description of the attack as possible: the FBI asks that you describe your interaction with the attacker, explain how contact was initiated, and detail what information was provided to them.
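For anyone gathering these details before filing, here is a small Python record mirroring the list above. The field names are my own checklist, not an official IC3 schema; leave anything you do not know empty rather than guessing.

```python
# Checklist record for the details the FBI asks for when filing at the
# Internet Crime Complaint Center (ic3.gov). Field names are illustrative.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Ic3FraudChecklist:
    # Identifying information about the attacker, where available
    attacker_name: Optional[str] = None
    attacker_phone: Optional[str] = None
    attacker_address: Optional[str] = None
    attacker_email: Optional[str] = None
    # Financial transaction details
    transactions: List[dict] = field(default_factory=list)  # date, payment type, amount, account number
    receiving_institution: Optional[str] = None
    crypto_addresses: List[str] = field(default_factory=list)
    # Description of the attack
    how_contact_was_initiated: Optional[str] = None
    information_provided: Optional[str] = None
    interaction_description: Optional[str] = None
```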