Leveraging Data with High-Performance Analytics

Financial services firms possess the data to make sound decisions – whether to underwrite a loan, extend an offer to retain a customer or reduce exposure in a certain market. The problem is that the volume of data is so overwhelming that making sound decisions quickly is difficult. Either analytical run times grow too long, or the cost of using the data effectively grows too large. Neither outcome is acceptable in today’s high-pressure market. The game-changer is high-performance analytics, which lets banks put their data to work efficiently.

Howard Rubin, a pioneer in technology economics and a strategic adviser on business-technology strategy, estimates that the need for computing power is growing two to five times faster than revenue. It’s not just about doing things faster; it’s about doing more with the data to get accurate recommendations. Can your institution test multiple scenarios and build and analyze multiple models that affect everything from crafting marketing offers to rebalancing a portfolio? Or does it take so long that the market has changed by the time the answer comes back?

High-performance analytics isn’t a one-size-fits-all solution; it’s a way of thinking about data that draws on a variety of options to reduce run times, trim costs and increase analytic bandwidth. By using parallel, distributed processing (dividing a program into pieces that separate processors execute simultaneously) via in-database, in-memory and/or grid computing, organizations can run advanced analytical applications more quickly and efficiently; a minimal code sketch of this divide-and-conquer pattern follows the examples below. Here are some examples of potential benefits:

Assessing the risk: A large investment bank constantly recalculates the value at risk (VaR) of its complex portfolio of financial instruments, a computation that used to take 18 hours. Now it takes 12 minutes. That means the bank can quickly determine exposure, portfolio value at risk and funding liquidity risk, and can rapidly fine-tune its response to changes in interest rates or exchange rates, for instance.

Calculating loan defaults: A major U.S. bank has reduced the loan default calculation time for a mortgage book of more than 10 million loans from 96 hours to just four. Early detection of high-risk accounts is crucial to estimating the likelihood of default, forecasting losses and deciding how to hedge risks most effectively.

The right marketing offer: A financial services firm was spending five hours analyzing predictive models for marketing offers aimed at acquiring new customers. A high-performance analytics solution reduced the run time to less than three minutes. Why does that matter? Because the firm can now analyze many more models. That ability translated directly into a 1% lift in new customers, which represents the potential for tens of millions of dollars in incremental revenue on a customer lifetime value basis.

Protecting the customer base: Keeping a customer is much less expensive than finding a new one. One large U.S. financial services firm uses data from 17 million customers and 19 million daily transactions – which generate more than 10,000 database variables – as an early-warning system to detect customer disengagement. Certain interactions and transactions trigger alerts so front-line staff can contact the customer immediately whenever there is an indication that the relationship needs nurturing. This could involve addressing a request for a new product or service, or any other type of concern. The bank’s solution is generating more than $250 million in incremental revenue annually.
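The kind of speed-up described in these examples comes from splitting work into pieces, as noted above. Below is a minimal, single-machine sketch of that divide-and-conquer pattern in Python; the portfolio data, worker count and toy “risk score” are illustrative assumptions, not any bank’s actual workload or a particular vendor’s implementation.

```python
# Minimal, single-machine sketch of parallel, divide-and-conquer processing.
# The portfolio data and the toy per-loan "risk score" are invented for illustration.
from concurrent.futures import ProcessPoolExecutor
import random

def score_chunk(balances):
    """Score one slice of the portfolio (a stand-in for a heavier analytical model)."""
    return sum(b * 0.02 for b in balances)   # toy expected-loss figure: 2% of balance

def parallel_score(balances, workers=4):
    """Split the portfolio into chunks and score the chunks on separate processes."""
    chunk = max(1, len(balances) // workers)
    pieces = [balances[i:i + chunk] for i in range(0, len(balances), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(score_chunk, pieces))

if __name__ == "__main__":
    portfolio = [random.uniform(1_000, 500_000) for _ in range(1_000_000)]
    print(f"Total toy expected loss: {parallel_score(portfolio):,.2f}")
```

Real high-performance platforms apply the same principle across many machines and far heavier models, which is where hour-to-minute reductions like those above come from.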

Depending on the need, there are different options for high-performance analytics, including leveraging Hadoop for large-scale storage of semi-structured data. Hadoop provides an efficient storage mechanism and processing framework for large volumes of data that may not have been captured previously. In-memory analytics will play an ever-increasing role, but it is the ability to combine these technologies that will set businesses apart. Companies should understand how and when to deploy the following technologies:

In-memory analytics. Solves complex problems in near-real time with highly accurate insights by processing analytical computations and big data in memory, distributed across a dedicated set of nodes or blades. Through distributed, multi-threaded architectures, scalable in-memory processing is far faster than traditional disk-based processing when running new scenarios or complex analytical computations. It gives you concurrent, multi-use, in-memory access to data of any size, so you can instantly explore and visualize data and tackle problems without computing constraints. New insights can be gathered by experimenting (e.g., running better queries and executing more complex analytic models) with complete data – not just a sample.
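As a rough, single-node analogue of that pattern (real in-memory platforms distribute both the data and the computation across many nodes), the sketch below loads a dataset into memory once and then runs several stress scenarios against it without rereading from disk. The file name, columns, shock sizes and 45% loss-given-default are assumptions made for illustration.

```python
# Single-node sketch of the in-memory pattern: read the data once, then run
# many scenarios against it without going back to disk. "loans.csv", its columns
# (balance, pd) and the stress shocks and 45% loss-given-default are assumptions.
import pandas as pd

loans = pd.read_csv("loans.csv")          # one disk read; held in memory after this

scenarios = {"base": 0.00, "mild_stress": 0.01, "severe_stress": 0.03}

results = {}
for name, shock in scenarios.items():
    stressed_pd = (loans["pd"] + shock).clip(upper=1.0)   # shocked default probability
    results[name] = float((stressed_pd * loans["balance"] * 0.45).sum())

for name, expected_loss in results.items():
    print(f"{name:>14}: expected loss {expected_loss:,.0f}")
```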

In-database analytics. Speeds time to insight and enables better data governance by performing data integration and analytic functions inside the database, so you don’t have to move or convert data repeatedly. It uses a massively parallel processing (MPP) database architecture for faster execution of key tasks such as data management, data discovery, model development and model deployment. This makes calculations faster, reduces unnecessary data movement and promotes better governance. For decision makers, it means faster access to analytical results and more agile, accurate decisions. It can reduce – or even eliminate – the need to replicate or move large amounts of data between a data warehouse and the analytical environment or data marts for multiple passes of data preparation and analytics.
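A minimal sketch of the push-down idea follows, with SQLite standing in for an MPP warehouse: a toy logistic default score is written in SQL and evaluated where the data lives, so only the aggregated result crosses the network. The table, columns and model coefficients are invented for the example.

```python
# Sketch of the push-down idea: a toy logistic default score is expressed in SQL
# and evaluated inside the database, so only the aggregated answer leaves it.
# SQLite stands in for an MPP warehouse; table, columns and coefficients are made up.
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (id INTEGER, ltv REAL, dti REAL)")
conn.executemany("INSERT INTO loans VALUES (?, ?, ?)",
                 [(1, 0.85, 0.40), (2, 0.60, 0.25), (3, 0.95, 0.55)])
conn.create_function("EXP", 1, math.exp)  # register EXP in case this build lacks math functions

# Score and aggregate where the data lives instead of extracting row-level data.
avg_pd = conn.execute("""
    SELECT AVG(1.0 / (1.0 + EXP(-(-4.0 + 3.0 * ltv + 2.5 * dti))))
    FROM loans
""").fetchone()[0]
print(f"Average predicted default probability: {avg_pd:.3f}")
```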

Grid computing. Promotes efficiency, lower cost and better performance by processing jobs in a shared, centrally managed pool of IT resources. You can split individual jobs and run each piece in parallel across multiple symmetric multiprocessing (SMP) machines using shared physical storage. Because a grid contains multiple servers, jobs run on the best available resource, and if a server fails, its jobs can be seamlessly transitioned to another server, providing a highly available business analytics environment. Organizations can fully utilize the computing resources they have today and cost-effectively scale out as needed, adding capacity one processing unit at a time for an incremental increase in IT spending.
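The sketch below is a single-machine analogue of that pattern: many independent jobs are submitted to a shared pool of workers, and any job that fails is resubmitted, much as a grid scheduler would move work from a failed server to another. The job list and the simulated failure are fabricated for illustration; a real grid adds central scheduling, shared storage and cross-server failover.

```python
# Local analogue of a grid scheduler: independent jobs run on a shared worker pool,
# and any job that fails is resubmitted (a grid would reroute it to another server).
# The job list and the simulated transient failure are invented for illustration.
import random
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_job(name):
    """Stand-in for a batch analytics job (model fit, report, stress run)."""
    if random.random() < 0.2:                  # simulate an occasional worker failure
        raise RuntimeError(f"{name} lost its worker")
    return f"{name} finished"

def run_on_grid(jobs, workers=4):
    results, pending = [], list(jobs)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        while pending:
            futures = {pool.submit(run_job, j): j for j in pending}
            pending = []
            for fut in as_completed(futures):
                try:
                    results.append(fut.result())
                except RuntimeError:
                    pending.append(futures[fut])   # resubmit the failed job
    return results

if __name__ == "__main__":
    print(run_on_grid([f"job-{i}" for i in range(10)]))
```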

By exploring such high-performance analytics options, banks can take better advantage of data in ways that directly affect the bottom line.

Mr. Collins is a senior vice president and chief technology officer with Cary, N.C.-based SAS Institute Inc. He can be reached at [email protected].