Security-related concerns have turned into stumbling blocks for banks trying to put robotic process automation (RPA) and cognitive computing to work. A recent Cognizant survey of 302 financial institution executives found that uncertainty, coupled with a lack of privacy and security standardization, slows banks’ efforts to adopt these innovative capabilities.
More specifically, 91 percent say security and compliance represent moderately to highly difficult aspects of working on automation projects. Four out of five (81 percent) list privacy and security as the top external barriers to adoption, with about half saying those considerations present top internal barriers as well.
How banks tackle these challenges could determine the extent to which automation fulfills its promise for the industry. An opportunity exists to relieve people of routine tasks that involve interacting with data and IT systems (RPA). In automation’s more advanced state, gleaning meaningful information from large amounts of data can shape business decisions that cannot be codified, but are instead based on context or specific situations (cognitive computing).
Yet an unflinching focus on security, while it protects banks on the one hand, shuts them out of technical advancement on the other.
Automation versus risk amplification
On their own, security concerns aren’t necessarily negative. In large part, they are essential to the long-term success of any modernization initiative in the banking community. Neither are security concerns unique to automation. They can crop up with any new process or technology.
Automation amplifies well-known security challenges in ways banking executives may feel unprepared to face, including:
Controls. Automation deploys software bots as a quick fix for fairly mundane tasks. For instance, an RPA bot might process credit card applications, while one with cognitive capabilities carries out Anti-Money Laundering (AML) procedures. But should something go wrong with the underlying data or systems, the bots could wreak havoc on connected business processes.
Data integrity. As with any new piece of software, automation products can come with programming bugs or malware. The complicating factor with bots stems from their potential to handle data from disparate systems that may not be in sync. Let’s suppose a bank installs bots to provide a 360-degree view of the customer across different lines of business. If there’s an issue with a particular bot’s logic, it could compromise data integrity systemwide.
User privileges. Between ongoing consolidation and external partnerships (such as with fintech companies), banks already cope with the complexity of providing users an appropriate level of access. If directories aren’t in good shape, new automation may exacerbate existing problems.
Privacy. Banks have longstanding processes for handling sensitive information. For example, customers can shield their Social Security numbers by entering them into a fully automated system or providing only the last four digits to a customer service representative. Any vulnerabilities typically lie in processes that migrate data from one system to another. A human looking to take this data can copy only so many numbers at a time. However, a bot can harvest thousands of sensitive data points in seconds.
System availability. Many of today’s denial-of-service (DoS) attacks work by hacking a web server or application. Automation, however, offers cybercriminals a tempting new target: bots whose compromise can take down multiple systems at once. For example, bots with repeated, high-frequency access to multiple external websites become targets for hackers who covet their pre-approved access.
Regulator insight. In general, regulators view automation favorably because it can improve on traditional linear regression models and error-prone manual processes. But automation bots are trained with historical data—and sometimes with deep learning and neural networks layered in. As a result, regulators may struggle to understand how the bots work and how they arrive at decisions.
Inherent bias. Automation is commonly viewed as objective—sensibly enough, since bots have no emotions. But even bots can adopt a bias if the input data instills one. Consider credit scoring models that omit or overly emphasize certain data points, resulting in decisions skewed toward certain outcomes.
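The mechanics of inherited bias can be illustrated with a toy scoring model. This is a minimal sketch, not a real credit model; the data, the region field, and the approval rule are all invented for illustration. The point is only that a model which learns from skewed historical decisions will reproduce the skew.

```python
# Toy illustration: a "model" that learns approval rates from
# historical decisions inherits any skew present in that history.
# All data and field names here are invented for illustration.
from collections import defaultdict

# Skewed history: region "A" applicants were approved far more
# often than otherwise-identical region "B" applicants.
history = [
    {"region": "A", "approved": True},
    {"region": "A", "approved": True},
    {"region": "A", "approved": True},
    {"region": "A", "approved": False},
    {"region": "B", "approved": True},
    {"region": "B", "approved": False},
    {"region": "B", "approved": False},
    {"region": "B", "approved": False},
]

def train(records):
    """Learn the historical approval rate per region."""
    totals = defaultdict(lambda: [0, 0])  # region -> [approvals, count]
    for r in records:
        totals[r["region"]][0] += r["approved"]
        totals[r["region"]][1] += 1
    return {region: a / n for region, (a, n) in totals.items()}

def score(model, applicant):
    """Approve when the learned rate clears a fixed threshold."""
    return model[applicant["region"]] >= 0.5

model = train(history)
print(model)                           # {'A': 0.75, 'B': 0.25}
print(score(model, {"region": "A"}))   # True
print(score(model, {"region": "B"}))   # False
```

Two applicants who differ only by region receive opposite decisions, purely because the training data was skewed—the bot itself has no intent, but the bias survives intact.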
The power of governance
Fortunately, all of these issues are manageable. Banks can address them via a disciplined approach to governance that stresses these actions:
- Document the process to be automated—including how it unfolds, the bot’s part in it and the data used to train it.
- Work across the bank’s internal functions to identify security, risk and compliance needs along with the robot logic or functionality required to address them.
- Deploy supervisory bots to detect and alert for unusual activity or errors that arise from process bots.
- Have a well-rehearsed contingency plan for when a bot breaks down and no human exists to back it up.
- Create an audit trail of the specific actions each bot carries out, preserving audit files for as long as regulations and internal policies require.
- Run periodic test cases to show how well the automation model works.
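Two of the actions above—the audit trail and the supervisory bot—can be combined in a single pattern. The sketch below is a minimal illustration in Python; the class, thresholds, action names, and log structure are assumptions for the example, not any RPA vendor’s actual API.

```python
# Minimal sketch: an audit trail plus a supervisory rate check.
# The thresholds, action names, and log format are illustrative
# assumptions, not any RPA platform's real interface.
import time
from collections import deque

class AuditedBot:
    """Wraps a bot action: every call is logged, and a supervisory
    check flags bursts of activity above a rate threshold."""

    def __init__(self, name, max_actions_per_window, window_seconds=60):
        self.name = name
        self.max_actions = max_actions_per_window
        self.window = window_seconds
        self.audit_log = []     # retained per regulation and policy
        self.recent = deque()   # timestamps inside the rate window
        self.alerts = []

    def act(self, action, payload):
        now = time.time()
        # Audit trail: record what the bot did, to what, and when.
        self.audit_log.append(
            {"bot": self.name, "action": action, "payload": payload, "at": now}
        )
        # Supervisory check: drop stale timestamps, then test the rate.
        self.recent.append(now)
        while self.recent and now - self.recent[0] > self.window:
            self.recent.popleft()
        if len(self.recent) > self.max_actions:
            self.alerts.append(
                f"{self.name}: {len(self.recent)} actions in "
                f"{self.window}s exceeds limit of {self.max_actions}"
            )

bot = AuditedBot("credit-card-intake", max_actions_per_window=3)
for i in range(5):
    bot.act("process_application", {"application_id": i})

print(len(bot.audit_log))  # 5 -- every action preserved for audit
print(bot.alerts[0])       # rate alert raised on the burst
```

In practice the supervisory logic would run as a separate bot watching a shared log, so that a compromised process bot cannot silence its own alerts; the rate check here stands in for whatever anomaly detection a bank’s risk function specifies.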
Security and all its intersections—from internal controls to data management and regulatory oversight—rank among the most difficult aspects of any automation project. On top of that, mature products don’t yet exist for bot security. Thus platform vendors to date have left it up to banks to decide how to handle security-related issues.
The good news is that by and large, financial institutions have the expertise they need to put these concerns to rest. The key is to take what they already know about risk, compliance and governance and apply it to automation.
Once that happens, banks will sweep aside the impediments and capture the value of this transformative technology: a revolution that isn’t so much bought as bot.
Sriniketh Chakravarthi is senior vice president, banking and financial services leader at Cognizant.