Google is testing AI scam call detection for Android

Scam calls are becoming ever more prevalent. Whether they come from robocallers or voice phishers trying to steal sensitive information from their victims, Americans are estimated to receive around two billion scam calls per month.

Experts believe one reason for the recent growth is the boom in AI technology over the past couple of years. AI makes such scams much easier, letting scammers quickly generate new messages for each target and create convincing-sounding synthetic voices. Particularly sophisticated scammers can even clone the voices of public figures.

Given this growing problem, there is a clear need for more sophisticated protection against voice scams. Google may have an AI-powered solution in the works, built on Gemini Nano, an AI model designed specifically to run on-device.

Scanning calls for suspicious activity

Google vice president for engineering Dave Burke discussed the potential feature at the Google I/O conference for software developers. When someone receives a call, the feature will scan it for suspicious activity and send the phone owner an alert if it detects anything amiss. Such red flags could include language generally associated with fraudulent calls, such as a “bank representative” asking for sensitive information like PIN codes or passwords.
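
Google has not published technical details or a developer API for the feature, so the Kotlin sketch below is purely illustrative: it checks a snippet of call transcript against a hard-coded list of red-flag phrases, standing in for whatever on-device analysis Gemini Nano would actually perform. Every name and phrase in it is hypothetical.

```kotlin
// Hypothetical sketch only: Google has not published an API for this feature.
// It illustrates the general idea of flagging red-flag phrases in an on-device
// call transcript; it does not use Gemini Nano or any real Android interface.

// Phrases commonly associated with fraudulent calls (illustrative list).
private val RED_FLAG_PHRASES = listOf(
    "verify your pin",
    "read me the code we just sent",
    "transfer the funds to a safe account",
    "confirm your password to keep your account open"
)

/** Returns true if the transcript snippet contains a known red-flag phrase. */
fun looksLikeScam(transcriptSnippet: String): Boolean {
    val normalized = transcriptSnippet.lowercase()
    return RED_FLAG_PHRASES.any { phrase -> phrase in normalized }
}

fun main() {
    val snippet = "This is your bank. Please verify your PIN so we can secure the account."
    if (looksLikeScam(snippet)) {
        // In the real feature, Android would surface an on-screen alert instead.
        println("Warning: this call matches patterns seen in scam calls.")
    }
}
```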

Burke assured the audience that the call monitoring happens exclusively on the device, so conversations remain private. He did not give a specific release date, noting that users will need to opt in to testing, but said the tech giant would share more information later this year. For now, however, Gemini Nano is only available on Google Pixel 8 Pro and Samsung Galaxy S24 series devices.

Privacy concerns

While this may seem like a positive step forward for fraud protection, not all privacy advocates are keen on the idea, citing the potential for abuse by surveillance companies, stalkers, and others. Even though the conversations theoretically stay on the device, they could still be vulnerable to hackers or police data requests.

Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, described the potential development as terrifying, telling NBC News:

“It’s very easy for advertisers to scrape every search we make, every URL we click, but what we actually say on our devices, into the microphone, historically hasn’t been monitored.”
