Reconsidering Excel in financial services

The invention of relational cell-based spreadsheet technology in the late 1970s was truly a game changer. Today the technology is so accepted and so woven into daily life that using it is almost as unremarkable as turning on a faucet. Yet we should remember that Dan Bricklin's VisiCalc electronic spreadsheet hit the market only in 1979, when it began turning microcomputers into powerful business tools.

The elegance of the technology is that it lets the user relate one cell or group of cells to another and perform a host of tasks. Outside of word processing, I can't think of a more revolutionary technology I have seen in my lifetime. So it's not surprising that banks quickly adopted VisiCalc, then Lotus 1-2-3, and eventually, in the 1980s, Microsoft Excel. Spreadsheet technology has obviously saved a lot of time over the years, but perhaps more importantly, it has almost certainly reduced human calculation error as well.

However, Excel's day in banking is basically over, and for several important reasons. The very traits that make it so elegant as software and so ubiquitous in business make it too dangerous for many major bank functions. Today's banking environment demands consistency and transparency in how data is calculated, and Excel fails a basic tenet of software development: the ability to break logic into discrete units and test each one in a meaningful, repeatable way.

Cascading Errors

By its very nature, Excel is a user-specific and customizable program. Users arrange a set of rows and columns to perform a specific task, and then relate those cells to each other by function and formula. In effect, each person using Excel for a function develops his or her own software program, tweaked to personal idiosyncrasies and specifications. Each spreadsheet is its own program for its own user or user group.

This has several important implications. First, only that specific user really knows how the program works. New people can examine each cell to study it, but doing so is, at a minimum, complicated and time-consuming. Depending on the complexity of the program, it can be practically impossible.

For example, suppose you're running a bank and calculating the allowance for loan and lease losses (or ALLL). Imagine that this calculation was performed in an Excel file developed by Person A, who then leaves the bank. It would be very difficult for Person B to truly understand how Person A structured the ALLL calculation. That experience is gone with Person A, so Person B must try to re-create an understanding of it. And if Person B leaves, Person C must re-create Person B's understanding of Person A's work, an endless chain of inefficiency and wasted mindshare that invites calculation errors.

Meanwhile, each user can change the homemade program to improve it, but does anyone go back and retest every relational aspect? I doubt it. Believe me, I'd rather use Excel for a calculation than paper and pencil, but the complexity of formulas and macros within Excel makes errors hard to find, especially once you move beyond adding a column of numbers and start relating multiple columns of data to one another. As a developer of software myself, I know too well that every time you change a program in one place, it may break in several other places, and you have to be able to check for and fix those breaks.

One mistake on a spreadsheet can create problems on other spreadsheets and throughout a complex calculation, a snowball that turns into an avalanche of trouble. The common name for this among Excel users is “cascading errors.” When data comes from multiple sources and is accessed by multiple people in the bank, mistakes are even more likely. In fact, a 2008 analysis of multiple studies on spreadsheet errors reported that 88% of the spreadsheets audited in those studies contained errors. A mid-2014 survey for Sageworks found that 65% of respondents used Excel to calculate the ALLL reserve, and that users had serious concerns about the cascading effect of errors.
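To make the cascade concrete, here is a minimal, hypothetical sketch in Python. The function, loan balances, and loss rate are invented for illustration and do not come from the article; the point is simply that one mistyped input silently distorts every downstream figure that depends on it, just as a bad cell distorts every formula that references it.

```python
# Hypothetical toy model (illustrative figures only): a "reserve" formula
# that depends on upstream inputs, the way a spreadsheet cell depends on
# the cells it references.

def alll_reserve(loan_balances, loss_rate):
    """Reserve estimate = total loan balance * historical loss rate."""
    return sum(loan_balances) * loss_rate

# Correct inputs: three loan balances and a 1.5% historical loss rate.
correct = alll_reserve([2_000_000, 1_500_000, 750_000], 0.015)

# One typo upstream: 1,500,000 entered as 15,000,000. Nothing flags it;
# the error simply flows through to every dependent calculation.
typo = alll_reserve([2_000_000, 15_000_000, 750_000], 0.015)

print(correct)  # 63750.0
print(typo)     # 266250.0 -- the reserve is overstated more than fourfold
```

In a real workbook this distorted reserve would itself feed further formulas (capital ratios, provisioning, reporting), which is exactly the cascade the article describes.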

Another major limitation of using Excel for many major banking functions is the difficulty of storing immense amounts of data in a way that is logical and resistant to error. For example, many banks using Excel to perform historical credit analysis for the ALLL are storing it in a huge data file with numerous tabs, numerous users, and complex, sophisticated calculations covering many loans over a long period of time.

Unlike Legos, which can be used to build something complex and structurally sound because the pieces all have a uniform base, Excel doesn’t provide a consistent framework when it is being used to create a massive database over time. That’s a big problem when it comes to banking and developing a calculation with a large impact on risk management. In these instances, you want some flexibility, but you want some structure and even rigidity, because the risk of being wrong is very high.

Data storage issues tied to Excel also relate to where the spreadsheet files are saved – to a personal computer or a network computer – raising questions about the accessibility and security of these critical databases. I'd imagine that most banks have homemade Excel programs scattered all over the place, programs that become obsolete as the people who developed them leave the bank.

Excel remains a great program for more basic functions, such as individual loan analysis, and it should not be stripped out of all banking operations. However, as the regulatory environment continues to demand greater transparency, efficiency, and defensible methodologies for complex risk management processes, financial institutions will benefit from moving away from disconnected spreadsheets toward more robust systems, such as automation technology or sophisticated information management systems.

Mr. Hamilton is the chairman and co-founder of Raleigh, N.C.-based Sageworks. He can be reached at [email protected]. Mary Ellen Biery, research specialist at Sageworks, contributed to this article.