How banks are using conversational AI to tighten security

Can malicious threat actors be stopped in an era of increasing cyber threats? Businesses will have to remain vigilant to stay ahead of future attacks, but there is one technology lending a helping hand in the battle against social engineering: conversational AI.
Designed to communicate realistically but without the human vulnerabilities that make us susceptible to deception, conversational AI is proving to be the one thing social engineers can’t deceive. By applying artificial intelligence to evaluate the dialogue in real time, these agents can engage customers on sensitive transactions without being vulnerable to social engineers who could compromise a customer’s account.
AI workers (or “digital employees”) are no longer limited to money transfers and mortgage application processes. They are now serving as the first line of defense between banks and malicious actors.
Much like hackers searching for vulnerabilities in a security system, social engineers are constantly looking for weaknesses in customer support. They want to find a person they can really have a conversation with – someone who will listen and seem empathetic to whatever problem they present.
Those problems usually involve a sob story – a man whose daughter broke her leg and who needs access to his account to pay for her cast, or a woman whose dog was hit by a car and requires expensive, life-saving surgery. They will say anything to get what they want – and if they have to, they’ll call back several times until they find a customer agent who can be deceived. Then, once they believe their story has been accepted, they’ll strike by asking for account details or some other tidbit that will let them breach the customer’s account.
Not falling for the sob story
If conversational AI is on the other end of the line, malicious actors will not get what they want. Unlike a scripted chatbot, conversational AI is empathetic rather than robotic, but it still won’t fall for a sob story. In fact, the technology won’t fall for any story at all. When users want to access an account, they must first provide the information required to verify their identity. If they don’t, conversational AI won’t budge.
This works by separating account access – which must be safeguarded at all times – from the emotional aspect of the conversation. Thus, if a customer calls and says he lost his credit card and would like a new one shipped to an address that differs from the one on file, conversational AI will be ready.
The technology can express understanding and offer a supportive response while still insisting that customers confirm their identity. This keeps conversational AI from inadvertently offending legitimate customers while protecting them from the threat of account takeover.
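The article doesn’t detail how such a system is built, but a minimal sketch of the pattern it describes – identity verification decoupled from the dialogue layer – might look like the following in Python. The session state, verification check, and card-replacement handler are all hypothetical names invented for illustration, not IPsoft’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class SessionState:
    # Set only by the verification step; the dialogue layer can never flip it.
    identity_verified: bool = False

def verify_identity(state: SessionState, answers: dict) -> None:
    # Stand-in for a real check (knowledge-based questions, a one-time
    # passcode, voice biometrics, etc.). The expected values are made up.
    expected = {"last4_ssn": "1234", "zip_on_file": "60601"}
    state.identity_verified = (answers == expected)

def handle_card_replacement(state: SessionState, new_address: str) -> str:
    # Empathy and account actions are decoupled: the reply can be warm and
    # supportive, but the action itself is gated on the verification flag.
    if not state.identity_verified:
        return ("I'm sorry about your card, and I'd be glad to help. "
                "First, I need to verify your identity.")
    return f"Identity confirmed. A replacement card will ship to {new_address}."

session = SessionState()
# A sob story alone never sets identity_verified, so the request is refused:
print(handle_card_replacement(session, "123 Elsewhere Ave."))
verify_identity(session, {"last4_ssn": "1234", "zip_on_file": "60601"})
print(handle_card_replacement(session, "123 Elsewhere Ave."))
```

The design point is that no amount of conversational persuasion touches the verification flag; only the verification step can set it.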
Banks are one of many targets
All financial institutions are potentially vulnerable to malicious actors looking to score an easy buck at the expense of consumers. Payment card fraud is just one area of concern, and losses there could exceed $40 billion within 20 years. Accordingly, financial institutions must remain on high alert at all times to prevent both sophisticated hacks and attacks that are more social in nature.
If a financial institution proves to be too difficult to breach, malicious actors may look for another way in.
This is what happened when a woman was targeted by social engineers who convinced her cellular carrier to provide and activate a new SIM card for her service plan. The malicious actors then used that SIM card to intercept text messages and bypass the two-factor authentication protecting the woman’s financial accounts, allowing them to steal $30,000 in cryptocurrency. Her banks had nothing to do with this hack, but if her cellular carrier had relied on conversational AI to verify a customer’s identity, her accounts might have been protected.
Conversational AI is just one way artificial intelligence can improve cybersecurity. AI can also be used to block phishing emails that carry malicious VBScript or JavaScript attachments. Better still, AI can compare websites against a machine-learning model trained on thousands of confirmed phishing sites to ensure consumers aren’t browsing a dangerous page.
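The article doesn’t name a specific model, but the website comparison it describes is essentially a supervised classifier trained on known phishing pages. A hedged sketch using scikit-learn, with a toy inline dataset standing in for the thousands of confirmed phishing sites, might look like this:

```python
# A sketch of the phishing-site comparison the article alludes to: a
# classifier trained on labeled URLs. The four inline examples are toy
# stand-ins for a real corpus of confirmed phishing and legitimate sites.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

urls = [
    "http://secure-login.examplebank.com.verify-account.ru/login",  # phishing
    "http://paypa1-support.xyz/confirm",                            # phishing
    "https://www.examplebank.com/",                                 # legitimate
    "https://www.wikipedia.org/",                                   # legitimate
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# Character n-grams catch lookalike spellings ("paypa1") and stacked
# subdomains without hand-written rules.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(urls, labels)

# Probability that an unseen URL is phishing:
print(model.predict_proba(["http://examplebank.com.login-check.top/"])[0][1])
```

In production, a model like this would be trained on far larger labeled feeds and combined with page-content and domain-reputation signals rather than URL text alone.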
As developers advance the technology and new use cases emerge, AI could prove to be the strongest weapon in the fight against cyber threats.
Ergun Ekici is vice president of emerging technologies at IPsoft.