Some people will go a long way to sidestep the potential cyber threats that artificial intelligence could unleash upon our planet.
Like the planet next door.
Tesla co-founder and CEO Elon Musk wants to colonize Mars and create a society free from the theoretical danger of AI. Of course, that danger isn’t currently a reality. Nor is human life on Mars. But the man who melds dreams with reality as well as any earthling has marked his calendar for the 2020s.
Much closer to home, you don’t need Musk’s future-thinking bent to see the wide range of tech attack tools taking aim at the financial services industry. The folks at FireEye are seeing targeted intrusions and data breaches against financial institutions, typically conducted to accomplish a variety of goals, David Mainor, FireEye’s manager of financial crime analysis, tells BAI. These attacks:
compromise employees’ and customers’ personally identifiable information (PII) for use in illicit schemes elsewhere;
compromise existing customer accounts;
leverage administrative capabilities to transfer funds into, or out of, individual customer accounts;
manipulate investment values for purposes such as increasing the value of the perpetrators’ holdings or creating market instability;
create customer accounts for specifically illicit uses;
initiate transactions involving funds or investments directly from the targeted firm; and
obtain information about the targeted entity’s strategy, future business development plans, and/or its customers.
So how do fraudsters accomplish their nefarious goals?
By using “diverse tactics, techniques, and procedures to gain an initial foothold on the network of a financial institution,” Mainor says. “Tactics that are notable due to their popularity include delivery of malware, exploitation of web-facing vulnerabilities and illicit access via third parties associated with the targeted institution.”
Here are five of the top threat vectors facing financial institutions:
1. State actors—especially North Korea
North Korea’s state-sponsored army of hackers packs a robust arsenal of weapons, including viruses, watering hole attacks and manipulation of the SWIFT banking messaging system. The North Korean threat to financial institutions “is very real and rather large,” says R.J. Burney, a cybersecurity expert who works with the FBI and U.S. Special Operations Command.
Aside from 2016’s Bangladesh Central Bank attack, which initially netted more than $100 million before some of the funds were ultimately recovered, the North Koreans have also hit a bank in Taiwan for $60 million.
North Korean cyber threat actors such as TEMP.Hermit have targeted victims in Southeast Asia, Europe and the Americas, according to FireEye. In some of these cases they are suspected of stealing funds in the millions of dollars. In other incidents, the North Koreans have pursued cryptocurrency theft as a means of funding their various activities.
The methods used in the Taiwan attack, particularly the attackers’ in-depth knowledge of the bank’s SWIFT systems and the steps taken to cover their tracks, are “indicative of a highly proficient actor,” according to the report, which points out it wasn’t the first attack attributed to the Lazarus Group.
2. ATM jackpotting
True to its name, jackpotting compromises an ATM to spit out cash at the breakneck speed of up to 40 bills every 30 seconds. In an insidious combination of James Bond villainy and cyberspace ubiquity, fraudsters pose as ATM technicians, right down to disguising themselves in service uniforms.
They then open the ATM using a generic key the Secret Service says is relatively easy to buy on the internet. Once inside, the bogus technician hooks up a laptop or a palm-sized external drive known as a “black box,” then uses a cellphone to command the machine to dispense cash.
But do they scoop up the cash? Not always. A second conspirator often swoops in to do the mop-up work.
3. Contact center fraud
Turning institutions’ aversion to customer friction to their advantage, cyber crooks pilfer billions from often lightly defended contact centers, according to a security expert who specializes in contact center defense systems.
“While it’s impossible to know precisely how much money flows through these call centers, it’s estimated that fraud losses in 2017 approached $14 billion,” says Shawn Hall, director of fraud prevention and strategy at Pindrop Labs.
It’s a big problem on the rise—and exacerbated by the very nature of contact center design, says Hall.
Contact centers handle 36 billion interactions yearly, and agents are measured on how quickly they can resolve each call, according to Pindrop’s recent study on call center fraud. Last year, the fraud rate in banking call centers worldwide jumped by more than 60 percent, says Hall, with one in every 867 calls pegged as fraudulent. Overall, nearly two-thirds of all financial institution fraud, he notes, is traceable to call centers.
“Contact centers are often forgotten in the fight against fraud,” says Tricia Phillips, a cybersecurity analyst with Gartner. “Research shows that by 2020, 75 percent of organizations will sustain a targeted, cross-channel fraud attack with the contact center as the primary point of compromise.”
In March, Gartner released a study on call center fraud risk that found contact centers vulnerable because they are organizationally and architecturally separate from other financial institution acceptance channels, such as web self-service or mobile applications. As a result, they fall outside the fraud and loss prevention halo that shields digital channels.
Moreover, the ready availability of personal information sets up contact centers as a prime target. Data breaches have given criminals a large pool of information to draw on, including driver’s license and account numbers. Crooks then fill the gaps with information from social media and other sources. With this data in hand, fraudsters wield social engineering to deceive contact center agents, who have limited identity verification tools at their disposal.
4. The real threat of synthetic fraud
Instead of stealing someone else’s identity, a breed of fraudsters simply invents one, says Colin Carvey, who formerly served as vice president of identity solutions at TransUnion.
Using a technique called synthetic identity fraud, fraudsters create small armies of rogue identities and apply for credit using Social Security numbers that are not purloined but invented. They then use the newly created credit profiles to pilfer from banks, auto dealers and virtually anyone who accepts credit for goods and services.
“Creating the identity is very simple,” Carvey says. As detailed in a presentation at BAI Beacon, an initial investment of $79.99—less than the cost of three Swiffer Wet Jets—can mop up ill-gotten gains of $107,269.63, on average. That’s more than 1,300 times the initial cash outlay.
So a fake identity, armed with an invented Social Security number, can create havoc. But it takes work and, in many cases, willing partners along the way: “newborn” synthetic identities need time to grow up and develop a credit history before they’re worth anything.
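Carvey’s economics can be sanity-checked with a quick back-of-the-envelope calculation using the figures cited from the BAI Beacon presentation:

```python
# Back-of-the-envelope check of the synthetic-identity economics cited above.
initial_outlay = 79.99      # reported startup cost of one synthetic identity
average_haul = 107_269.63   # reported average ill-gotten gains

roi_multiple = average_haul / initial_outlay
print(f"Return multiple: {roi_multiple:,.0f}x")  # Return multiple: 1,341x
```

That works out to roughly 1,341 times the initial cash outlay, consistent with the “more than 1,300 times” figure above.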
5. And finally—yes—artificial intelligence
Artificial intelligence is already showing its potential to do financial services a world of good.
Yet if banks are moving fast to adopt the technology, the fear is that malefactors can move faster.
“Fraudsters are more agile than the banking groups they’re trying to infiltrate, so they’re often able to develop malicious code and methods that take advantage of process vulnerabilities and security lapses,” says Will Griffith, financial services industry lead for Teradata.
Banks, he says, “are on the defensive, reacting to the fraudsters’ moves, using analytics to detect anomalous behaviors.” And the aggressive fraudsters behave like insurgents.
“It’s a form of asymmetric warfare with the fraudster choosing the time, place and magnitude of their attacks,” says Griffith. “The criminal’s intent is to blend in with normal customers and transactions. For the banks, it’s a little like trying to find a needle in a stack of needles.”
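As a rough illustration of the “needle in a stack of needles” analytics Griffith describes, here is a minimal, hypothetical sketch of outlier detection on transaction amounts. The function, sample data and threshold are invented for illustration; real bank systems weigh many behavioral signals (device, geography, timing), not amounts alone.

```python
import statistics

def flag_anomalies(amounts, threshold=3.5):
    """Flag amounts far outside an account's norm via a modified z-score.

    Uses the median and the median absolute deviation (MAD), which stay
    stable even when the data contain the very outliers being hunted.
    Toy example only, not any bank's actual detection pipeline.
    """
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # no spread in the data; nothing stands out
    # 0.6745 rescales MAD so the score is comparable to a standard z-score.
    return [a for a in amounts if 0.6745 * abs(a - med) / mad > threshold]

# A customer's usual small purchases, plus one transfer that stands out.
history = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8, 44.1, 9800.0]
print(flag_anomalies(history))  # [9800.0]
```

A robust statistic like MAD is used here because a single large fraud inflates an ordinary mean and standard deviation enough to hide itself, which is exactly the blending-in problem Griffith describes.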
And the danger posed by fraudsters armed with AI and machine learning is only going to rise, experts say.
In the summer of 2017, Ukrainian banks were among that country’s institutions hit by a ransomware attack called Petya, which locked up computer files. While ransomware attacks are nothing new, the Petya attack had a twist, one that concerns financial industry security experts.
Petya’s authors used artificial intelligence to spot and exploit vulnerabilities in Ukrainian security systems, says Mason Wilder, research specialist with the Association of Certified Fraud Examiners (ACFE). He adds that the malevolent use of AI by Petya’s authors provides a harbinger of things to come: “In the future, artificial intelligence will likely pose much more of a threat than it does now.”