
Why Does My Bank Want My Voice to Login?


(Image: A man holds a hand to his ear while the other hand holds a smartphone, displaying a microphone icon and connected dots, close to his face. Metamorworks via iStock/Getty Images Plus)

We’re all used to giving out a bit of personal data to get into our financial accounts: Social Security numbers, our birthdays and so on. However, a growing number of financial institutions are asking for a sample of our voices. Should we be concerned?

What is Voice ID?

Voice authentication is a form of biometric authentication, similar to a fingerprint. It relies on voice recognition software, which verifies a customer’s identity by detecting the unique patterns in a short speaking sample.

Why does my bank want my voice?

Banks have to try something new because our personal data is no longer so private, according to James Lee, chief operating officer of the nonprofit Identity Theft Resource Center outside of San Diego. “All of that data’s been compromised. Our Social Security numbers, our driver’s license numbers, where we live, our phone numbers, you know. …That’s all readily available,” Lee said.

He and other cybersecurity experts warn that a readily available pool of personal data contributes to fraudulent logins and financial theft.

Isn’t it easy for hackers to use AI to clone my voice?

Recent advances in generative artificial intelligence have led to better, cheaper and publicly accessible AI voice cloning models.


And as we see and hear more deepfakes, it might feel like a bad idea to use our voices to access our accounts.

But Lee downplayed those concerns, saying that most of us don’t have a big enough vocal profile on the internet to attract the attention of hackers or to make it easy for them to develop convincing clones.

What about celebrities? Podcasters? Or, really, anybody who posts videos on social media? Lee argued that most hackers like to hack at scale, and most of us don’t have that much money in our financial accounts.

“For somebody to appropriate your voice, it’s a little more difficult, and identity criminals don’t like to do things that are difficult. They like to do things that are easy,” Lee said. “So the risk to any one individual is relatively low.”

Reasons to think twice

Your financial data is arguably your most sensitive data, even if you don’t have a lot of money in your accounts.

Also, publicly available generative AI tools are notoriously insecure. Most of the companies that produce the software make little or no attempt to ensure that the humans being copied have consented to the process.

“It seems like it’s fast becoming normalized insanity, where even questioning it is made to make you feel old,” said Justin Kloczko, a tech and privacy advocate for Consumer Watchdog. “It’s not really safe, and you really shouldn’t feel crazy.”

What are banks saying?

KQED reached out to a number of financial institutions for this story, but only Wells Fargo responded:

“Wells Fargo uses a layered approach to authentication. One such layer is a service called ‘Voice Verification,’ which allows customers to use a unique voiceprint to access certain accounts. This service must be paired with other identity verification methods to allow access to customer accounts. A customer’s voice ID by itself will not grant access to user accounts.”


Eva Velasquez, who heads the Identity Theft Resource Center outside of San Diego, explains that a layered approach means the bank is using multiple factors to determine whether a login attempt should be deemed credible. “They’re pinging for the location. Is that a known device to [the bank]?”

“I’m all for adding more [layers],” Velasquez said. “You pick up a single twig; you can break it with no effort. You bundle 20 or 30 of them, and you can’t.”
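The layered approach Velasquez describes can be sketched in code. This is a hypothetical illustration only, not how Wells Fargo or any bank actually implements its checks; the factor names and the two-layer threshold are assumptions made for clarity.

```python
# Minimal sketch of "layered" authentication: no single factor,
# including a voiceprint match, is enough to grant access on its own.
# All names and thresholds here are hypothetical illustrations.

def allow_login(voice_match_score: float,
                known_device: bool,
                usual_location: bool,
                passed_knowledge_check: bool) -> bool:
    """Count how many independent layers a login attempt passes."""
    layers_passed = sum([
        voice_match_score >= 0.9,   # voiceprint closely matches enrollment
        known_device,               # device previously seen for this customer
        usual_location,             # location consistent with history
        passed_knowledge_check,     # e.g., answered a security question
    ])
    # Require at least two layers; a voice match alone never suffices.
    return layers_passed >= 2

# Even a near-perfect voice clone, used from an unknown device in an
# unusual location, fails on its own:
print(allow_login(0.95, False, False, False))  # False
print(allow_login(0.95, True, False, False))   # True
```

The point of bundling factors is exactly Velasquez’s twig analogy: an attacker who defeats one layer (say, with a cloned voice) still has to defeat another, independent one.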

What comes next?

As AI becomes more powerful, the financial sector knows it’s in an arms race with hackers to keep our data and our money secure.

“We’re going to need AI to defend against AI,” Deborah Guild, chair of the Financial Services Sector Coordinating Council, said at a recent event that brought together representatives from government and industry to talk about threats from AI. “We as an industry need to mount a coordinated defense. We have to get better and faster at sharing actionable insights,” she said.

In March, the U.S. Treasury Department released a report entitled Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Sector after conducting in-depth interviews with 42 firms of all sizes, from global, too-big-to-fail financial institutions to local banks and credit unions. The report promises that industry-wide standards for generative AI-powered ID technology are coming soon.
