Banks need to be ready for better deep fakes

Banks are required by law to know their customers. Deep fake technology in the hands of outlaws is making that hard job even harder.

Vijay Balasubramaniyan, CEO at Pindrop, joins us to talk about the world of deep fakes and how this technology is impacting banking.

A few takeaways from the conversation:

  • Deep fake technology is being used to scam banks and their customers in a number of ways, including business email compromise and new account opening.
  • Increasing reliance on virtual assistants may make it more challenging for banking institutions to know for sure whether they're dealing with a legitimate customer.
  • Anti-deep fake technology may help, but just as important is awareness and vigilance, along with a response plan when a deep fake attack is detected.


Vijay Balasubramaniyan, co-founder and CEO at Pindrop, welcome to the BAI Banking Strategies podcast.

Great to be here, Terry. Really looking forward to this.

Vijay, I would think that most listeners know what we mean when we say deep fake. But just to make sure everyone's on the same page, can you give us a brief definition of what a deep fake is?

Yeah. So a deep fake is a synthetic piece of audio or video that uses a field of artificial intelligence called deep learning to create extremely believable likenesses of a real target.

Deep fakes have been around for some years now, and as technology has advanced, their quality has gotten better. So as we think about a deep fake versus the real thing being faked, I’m wondering where we are in terms of quality. Are we yet at the point where the fake can be indistinguishable from the genuine article? And if we’re not quite there yet, how close are we?

The deep fakes out there right now are really close to the real thing. Pindrop ran a study using pieces of audio. When you look at deep fakes, you can either deep fake the audio or deep fake the video. We did this with audio: we had people listen to both real audio and deep fake audio and measured how well they could distinguish between the two. Humans had 57% accuracy in detecting a deep fake versus the real thing. What that means is they're only 7% better than random chance. You could have a monkey randomly say, "This is a deep fake, this is real," and it would be just 7% worse than humans at detecting this. So it turns out these deep fakes are getting very real.

So creating that perfect imitation, no doubt it’s a huge challenge. So many moving pieces, so many things that have to come together. What are some of the hurdles that the deep fake technology still has to clear in order to get to that perfection level?

The first thing they have to get past is access to information. If they want to deep fake, for example, you, Terry, they need access to tons of your audio or tons of your video to make it really realistic. The second challenge, which they haven't solved yet, is replicating deeply human traits. On the video side, it's the rate at which you blink, the way you move your face, or the fact that you might have a receding hairline. In the case of audio, it's things like the way you pronounce certain sounds, like S and F. The final piece of the equation is, "How do you do all of this in real time?" You can always create a deep fake audio or video that doesn't require any real-time interaction, just a file that gets uploaded. But when it has to hold a real-time interaction with a real person, that's another challenge they have to overcome.

When I hear the words “deep fake,” my first thought is a doctored video of House Speaker Nancy Pelosi from a few years ago where the video was slowed down to make it sound like she was slurring her words. While that may have been a political dirty trick, it really wasn’t much of a trick technology-wise, right? Given improved quality, what are some of the ways that deep fakes are being used these days, and what are the fakers trying to accomplish?

Nancy Pelosi’s video is what we call a “cheap fake,” not a deep fake, because all you’re doing is video editing, slowing down the video and things like that. But you see deep fakes used both for good and for bad. For example, deep fakes are used to give people back their voice. There’s a great example where Sonantic created a deep fake voice for Val Kilmer, who had lost his voice to throat cancer. So there are situations where deep fakes are used for good purposes, but they’re starting to be used for things like disinformation and misinformation. We’re starting to see them used in scams against financial institutions. We’re starting to see them used in scams involving job applications; the FBI just put out an advisory saying people are applying for jobs using deep fakes. So there are good uses, but increasingly more nefarious uses are emerging for deep fakes.

You mentioned financial services – let’s bring that conversation around to banking specifically. What can you tell us about how deep fakes are being used in banking today, particularly in U.S. banking, if you know anything about that?

We’re starting to see attacks on the U.S. banking side where deep fakes are being used in a couple of ways. One is the traditional bucket, the biggest bucket of fraud that most organizations see, including banks: business email compromise. That’s where you get an email purporting to be from someone in the organization, and it actually turns out to be fake, asking you to wire money or click on a link. Even though it’s called business email compromise, it’s essentially an attempt to social engineer you into doing something you didn’t want to do, and they’re now shifting that to phone calls. They’re actually calling people at the bank and saying, “Hey, I need to wire this money out,” because they’re supposedly trying to do an acquisition or have a really important deal they need the money for. They’re using deep fakes to impersonate really important people at organizations: CEOs, branch managers, comptrollers, CFOs. That’s one area of attack we’re seeing on the banking side. The second area is account openings. Using deep fake technology, you can create a synthetic identity when opening an account, and more importantly, you can create a synthetic identity with the likeness of, say, a dead person and start doing transactions on behalf of that person. So new account openings and business email compromise are the two areas in banking where we’re seeing deep fakes come alive.

There’s not much surprise that banks and credit unions would be targeted by deep fakes, given the potential financial rewards at stake. But can you give us a sense of how much success the deep fakers have been having in their efforts to scam the bankers?

Yeah, the biggest one is a $35 million bank heist that played out across the UAE and Hong Kong, where a branch manager was asked to wire $35 million on behalf of a client. He actually wired that $35 million across a wide variety of accounts. The reason it started getting attention in the U.S. is that about $400,000 of that $35 million got wired to a fake account at a U.S. bank, and the UAE organization leading the investigation was trying to figure out how to get the $35 million back. We’re also seeing it with newer fintech organizations like Binance: a fake version of their chief communications officer was used to ask people for money in exchange for getting listed. Then there was a $243,000 deep fake attack in the U.K. That one wasn’t quite against a bank, it was against an energy company, but the money still went out of a bank account. In these cases, the deep fake was used because the people being impersonated, like the CEO of the German parent company, had accents, and those accents were faithfully represented in the deep fake.

We know that among the fraud purveyors out there, there are many who are skilled at the technology side. What do you know about how bad actors may be tinkering with technology, adapting it, customizing it to better fit their criminal purposes with deep fakes? Or do they even need to have a high level of technical savvy in order to create and use deep fakes?

The scary thing is how easy it is to create a basic-level deep fake, and because these attacks aren’t yet widely prevalent, if you’re the person on the other side, you have no reason to believe it’s actually a deep fake on the other end. These guys are combining the deep fake with the classic tactic of urgency: “I want this money wired now. I’m on a plane traveling, so I’m going to be out of pocket for the next two hours. By the time I land, since I’m the CEO of the company, I want this money wired out because otherwise we will lose the acquisition.” So they’re using deep fake technologies, many of which are readily available, alongside the urgency tactics they’ve always used, to convince the person on the other end that it’s the real person. The thing we’re finding super interesting is that in addition to this new technology, they’re using their old techniques. In one of these deep fake cases where millions of dollars were wired, the CEO wasn’t a very public persona, so there were no external recordings of him. But the attackers had already hacked in, got recordings of internal all-hands communications, and used those to create the deep fake. So they’re using just basic deep fake technology, but part of making a deep fake convincing is having access to tons of audio or video, and they’re using their existing techniques to get that access.

Banking institutions are increasingly relying on biometrics for identity verification and for other purposes as well. I recently read an article claiming that deep fake technology may actually be able to defeat the biometric authentication technology that banks are using. What do you think about the suggestion that biometrics may already be obsolete even before they're fully rolled out?

Any security technology needs to take into account that when you create it, motivated attackers will try to find a way to beat it. That’s true of biometrics as well. A lot of current biometric technologies don’t account for the fact that there could be a deep fake on the other end. So how well do you do at identifying those fakes? This is why, when we create technology, we are really particular about making sure our biometrics can detect deep fakes as well. That means looking at the way people speak: are they fluctuating their voice the way humans do? Are they pronouncing fricatives, the S and F sounds, properly? You have to have techniques that let you identify when a deep fake is there. But you’re absolutely right: if a biometric technology doesn’t take into account that a motivated attacker could be using a deep fake on the other end, it will get beaten. That is true of all security technologies.

Vijay, when you’re out talking to clients and you’re out talking to prospects, what is your read on how concerned they are about deep fakes now, given all of the other fraud-related things that they already have on their plate? For those that are worried, what are they specifically worried about?

What we’re seeing is that the largest organizations, the top banks, are all worried about deep fakes. As you go further down, institutions have so much on their plate that they don’t have big enough teams to worry about things like deep fakes. But when we talk to the top 10 banks, they’re all worried about deep fakes. The reason is just the extent of impact a deep fake could have. If a CEO, a CFO, a comptroller, an individual branch manager or a personal wealth manager gets deep faked, it can be really big, to the tune of millions of dollars. And while these attacks are still relatively infrequent, some of these banks have actually experienced an attack themselves.

When they talk to you about what they’re specifically worried about, how do you respond? I mean, what advice do you give clients regarding how they can better protect themselves and how they can better protect their customers from this rising technology?

Yeah. One is to adopt technology that can detect deep fakes: technology that lets you determine whether it’s really your private wealth advisor calling in or a deep fake of them. How do you detect that it’s a deep fake rather than your customer, especially when the caller is asking for a million-dollar transfer, even though a million-dollar wire transfer would be natural for that customer? How do you make sure that when accounts are being opened, they’re not deep fake accounts built on synthetic identities? And because everything is remote right now, how do you make sure that when you’re hiring employees who will handle critical data, those remote employees are real people and not synthetic employees created using deep fake technology? We ask them to look at each of these areas so they understand the impact deep fakes could have. In addition, we talk about providing advisories to their customers: when you get a call from a private wealth advisor, make sure you’re doing the right set of checks and that you’re not being scammed. Several banks have put advisories on their websites because of the onslaught of deep fake attacks. These organizations are not only providing website advisories to protect their customers, but also starting to adopt technologies that will protect them in a more programmatic way.

Vijay, we spoke earlier in this conversation about how deep fakes are being used in banking today, but let me ask you now to look forward. How do you think the use of this technology might change in the next few years, and then maybe a little further out as well?

The thing that worries me most is that, at least as far as banking is concerned, there don’t seem to be many reasons why deep fakes would be used for good. The only potential case is when organizations create a virtual assistant that represents them, and that’s a synthetic voice, not quite a deep fake. But look at how these organizations are starting to deal with customers. Traditionally, they would talk to you over the phone. Now they’re starting to use newer channels like WhatsApp and Zoom. These newer channels are definitely very customer-friendly, but how are you making sure it’s really the customer on the other end? You’re going to see more and more synthetic identities being created to open new accounts as well as take over accounts, and this is where banks have to be very vigilant going forward, especially as they increase the number of ways in which they interface with customers. The other angle I worry about is misinformation: a deep fake video of the bank’s CEO, CFO or comptroller saying next week’s earnings call is going to be terrible because they missed their numbers. Those are things they have to worry about. I think we’re going to see more and more of that, because it’s the easiest way to fundamentally change an outcome, whether with stocks or with political elections. So we’re going to see both of those increase in the coming year.

Vijay Balasubramaniyan, CEO at Pindrop, many thanks again for joining us on the BAI Banking Strategies podcast.

Absolutely, Terry. This was a wonderful pleasure.

Terry Badger is Managing Editor at BAI and host of the BAI Banking Strategies podcast.