
Bank to the future: How 1991’s security imperatives still apply

In the April 1991 issue of Bank Management (now BAI Banking Strategies), I co-authored an article titled “Protecting the Bank’s Information Asset.” Of course, we’ve seen monumental shifts in banking, payments and technology in the 26 years since. But what hasn’t changed? BAI thought it would be instructive (and fun) to revisit the security priorities from the original article. Here’s what we found.

Setting the stage: Early Macs, scarce attacks

In 1991, the financial services industry used Macintosh computers, mostly for art and graphics, while many businesses with heavy investments in mainframes ran 3270 emulations of legacy CICS applications on Windows (version 3.0, to be precise). Google in its earliest form outside Stanford University was still seven years away; the iPhone, 16 years. Here’s how the rest of the landscape looked:

  • The software systems ACF2 and RACF competed for dominance over mainframe data security.
  • Banks such as Security Pacific led innovation in the emerging area of high-speed check imaging.
  • Visa was quietly researching the future of debit authentication.
  • Card-not-present (CNP) fraud was limited to mail-order and telephone (“MOTO”) sales.
  • Total Systems, Inc. (now TSYS) was still developing its TS2 credit card processing platform, which would launch in 1994.
  • The internet, HTML, and HTTP were just emerging, along with GSM and CDMA digital cellular technology.

Of the information “losses” companies reported at the time, 70 percent originated from internal sources and were mostly attributed to poor employee training. Less than two percent were attributed to computer viruses or hackers. Fast forward to IBM’s 2016 Cost of Data Breach Study, which reported: “Most data breaches continue to be caused by criminal and malicious attacks.”

That was then … that is now

Here is the original 1991 list of tips addressing the growing risk of information loss, along with updated commentary on each tip’s relevance in 2017.

In 1991, we said:

  1. Start with a security policy: Look for evidence that security policies and objectives are reflected in management’s overall goals. A security policy should define data as a corporate asset and assign responsibility for protecting it. Why this is relevant in 2017: Most experts agree that security policies mark the first step in the long and often winding journey of establishing an information asset protection program.
  2. Build security awareness: The most effective programs deliver a recurring message over a long period and challenge employees to take responsibility for following published rules and guidelines. Why this is relevant in 2017: Security awareness training is now required by every major security and compliance framework.
  3. Overcome fragmentation: For companies running multiple platforms, security quality can vary greatly among environments because the security offerings are not consistent. Why this is relevant in 2017: We especially see fragmentation within companies that have grown through merger and acquisition. Hackers will attack the weakest link to access the entire network.
  4. Classify corporate data: Some data is sensitive to disclosure; some is not. Without classification, all data must be protected the same way. Since security resources are limited, this ultimately means data is over- or under-protected. Why this is relevant in 2017: With the rapid migration of on-premises data and applications to the cloud, the need for classification has never been greater. In fact, the NIST Risk Management Framework states: “The first and arguably most important step [of risk assessments] is to determine the criticality and sensitivity of the information being processed by the system…” While some DLP platforms partially address this classification issue, most systems still lack tools capable of continuous discovery and classification—particularly for endpoints (see the first sketch after this list).
  5. Promote data ownership: Push for a program that clearly identifies who can access the bank’s information. Why this is relevant in 2017: PCI further mandates that covered systems implement role-based access to protect cardholder data. While formalized data ownership is still not mainstream, the notion is closely related to data classification: once the classification and location of data are established, ownership and access control can be applied.
  6. Give programmers direction and training: Left to their own devices, programmers will build the security right into the application, often making it impossible to control security centrally. Why this is relevant in 2017: Vulnerability training recommended to programmers by the Open Web Application Security Project (OWASP) has become a standard practice in financial services. But 26 years later I would add that, “left to their own devices,” programmers will not even build security into the application unless there is a user story pre-groomed in the backlog and prioritized for the next sprint. Agile projects should be held to organizational security standards, and a persona for the security domain should be established and well represented in the epics and user stories.
  7. Limit programmer authority: Because no one can wreak more havoc (even unintentionally) than someone with intimate knowledge of your systems, programmer access to production data should be limited. Why this is relevant in 2017: While startups struggle with separation of duties, banks and processors have an easier time with this. But the temptation to let developers into the production environment for Tier 3 technical support (for example) has not significantly diminished.
  8. Plan for contingencies: Strong security is only half of an information asset protection program. The other half is disaster recovery. Why this is relevant in 2017: The original article was written just after Hurricane Hugo impacted the Charlotte banking hub. This update is written on the heels of Hurricanes Harvey and Irma. Recovery plans are essential: Just ask folks in Houston and Miami.
  9. Pay attention to distributed and departmental systems: In many companies, centralized data processing no longer represents the lion’s share of application systems. Why this is relevant in 2017: While today we don’t speak as much in terms of “departmental systems,” we have seen a migration of data from disparate systems to various cloud-based platforms. While this consolidated corporate data is an obvious focus for structured backup and recovery programs, endpoint data is often overlooked and has become the new concern for data governance and risk management.
  10. Don’t forget telecommunications: Make sure the contingency plan includes provisions for recovering the network. Why this is relevant in 2017: Without the network, there is no system. At the time of the original article, major banks still ran proprietary SNA/SDLC networks. Today, we all share dependence on the public internet—particularly for consumer web and mobile applications—with little or no control over its availability.
  11. Test the plan: Insist that plans are tested regularly and completely. Why this is relevant in 2017: Some things never change. Never, never assume your backup works (see the second sketch after this list).
  12. NEW: Given the recent, high-profile data breaches and their broad implications, if I were to add anything else to this list it would be this: know your customer (KYC) and implement next-generation identity management capabilities into onboarding and authentication processes.
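
To make tip 4 concrete, here is a minimal sketch of what continuous discovery and classification might look like on an endpoint. It walks a folder, flags files containing card-number-like strings that pass the Luhn checksum, and labels everything else internal. The folder path, labels and single detection pattern are illustrative assumptions, not any particular DLP product’s behavior.

```python
"""Minimal sketch of endpoint data discovery and classification.

The folder path, labels and single card-number pattern are illustrative
assumptions only, not a real DLP product's configuration.
"""
import re
from pathlib import Path

# Runs of 13-16 digits, optionally separated by spaces or dashes.
PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def classify(path: Path) -> str:
    """Label a file Confidential if it appears to hold cardholder data."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return "Unreadable"
    for match in PAN_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_ok(digits):
            return "Confidential"
    return "Internal"

if __name__ == "__main__":
    for f in Path("./shared-drive").rglob("*.txt"):  # hypothetical location
        print(f"{classify(f):12} {f}")
```

A production tool would add more detectors (Social Security numbers, account numbers), run on a schedule rather than on demand, and feed its labels into the ownership and access controls described in tip 5.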

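In the same spirit, tip 11’s “never assume your backup works” lends itself to automation. The sketch below, with hypothetical paths, compares SHA-256 digests of source files against their copies in a restored backup directory and reports anything missing or altered; a real program would run after every scheduled restore test and alert on any mismatch.

```python
"""Minimal sketch of an automated backup restore check.

Source and restore paths are hypothetical; in practice this would run
after every scheduled restore test and page someone on a mismatch.
"""
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def verify(source: Path, restored: Path) -> list[str]:
    """Return relative paths missing or altered in the restored copy."""
    failures = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        copy = restored / src.relative_to(source)
        if not copy.is_file() or sha256(copy) != sha256(src):
            failures.append(str(src.relative_to(source)))
    return failures

if __name__ == "__main__":
    bad = verify(Path("/data/ledger"), Path("/mnt/restore-test/ledger"))
    print("backup verified" if not bad else f"FAILED: {bad}")
```
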
Meanwhile, we promise to check back in the future to see whether this list still holds up as artificial intelligence, blockchain and robotic process automation become the norm. Expect to hear from us, oh, sometime around 2034.


Steve Bacastow is the founder of cybersecurity startup QuickVault, Inc. and a partner in the payments industry consultancy Collective Dynamics.