Three steps to smarter data management

The combination of multi-jurisdiction risk reporting and increasing product complexity has created a data explosion within financial services. While trusted data is integral to uncovering value, managing risk and optimizing the balance sheet, maintaining that trusted data is becoming a challenge for financial institutions.

This problem is only intensifying with the passage of the Dodd-Frank Act and over-the-counter (OTC) clearing reform. With infrastructures straining toward capacity and systems still siloed, the silver bullet for financial institutions handling this information will be smarter data management. As a result, banks are implementing information management practices that span data categorization, data collection, defensible deletion and compliance monitoring to prepare for the next wave of regulation.

As banks develop a sustainable solution to their growing data problem, the first step is to sort information into three buckets: 1) what can be deleted, 2) what constitutes a business record and 3) what falls into the gray area in between. For many chief information officers, organizing data this way represents a valuable step toward controlling the data surge. In many cases, banks are also deploying applications to meet their array of data management needs.

While manual classification has been the more traditional approach, automated classification systems are growing in popularity as the data explosion intensifies. Employees simply cannot devote the time required to look through mountains of data and make individual decisions about content categorization, placement and classification. With automated systems, by contrast, banks can apply rules to their information. These rules can drive either mass classification or capture of specific attributes, streamlining data retrieval for regulatory reporting, fire drills, eDiscovery and litigation.
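
To make the idea concrete, the sketch below shows how such rules might be applied: each rule maps a pattern to one of the three buckets described earlier, and anything unmatched falls into the gray area for human review. The rule patterns, record types and retention periods are invented for illustration and do not reflect any particular vendor's rule set.

```python
import re
from enum import Enum

class Bucket(Enum):
    DELETE = "deletable"
    BUSINESS_RECORD = "business record"
    GRAY_AREA = "gray area"

# Each rule pairs a pattern with a bucket and the attributes to capture.
# These examples are hypothetical.
RULES = [
    (re.compile(r"ISDA|credit support annex", re.I),
     Bucket.BUSINESS_RECORD, {"retention_years": 7}),
    (re.compile(r"out of office|lunch order", re.I),
     Bucket.DELETE, {}),
]

def classify(text):
    """Return the first matching bucket and its attributes; anything
    unmatched falls into the gray area for human review."""
    for pattern, bucket, attributes in RULES:
        if pattern.search(text):
            return bucket, attributes
    return Bucket.GRAY_AREA, {}

print(classify("Amended ISDA Master Agreement, March 2012"))
# (<Bucket.BUSINESS_RECORD: 'business record'>, {'retention_years': 7})
```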

Once banks have categorized their data and isolated the unimportant information, the next step is to delete it. Though it’s natural for banks to want to retain everything, especially as regulations force them to become ever more transparent, data retention creates its own problem: risk. The more data is retained, the more personally identifiable information is exposed to a potential breach. Furthermore, retaining inessential data means more time spent searching for useful information.

Having tools in place that sort through and automatically delete petabytes of data can ensure organizations keep the right data – and only the data necessary to run the business and meet compliance and regulatory guidelines.
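
Such a defensible-deletion sweep amounts to a retention policy applied item by item, with anything under legal hold excluded. The following is a minimal sketch under assumed category names and retention periods; a production tool would typically also record each decision for audit purposes.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical retention periods per category; real policies vary by
# jurisdiction and record type.
RETENTION = {
    "business record": timedelta(days=7 * 365),
    "deletable": timedelta(days=30),
}

@dataclass
class Item:
    path: str
    category: str
    created: date
    legal_hold: bool = False  # items under legal hold are never deleted

def expired(item, today):
    limit = RETENTION.get(item.category)
    return (limit is not None
            and not item.legal_hold
            and today - item.created > limit)

def sweep(items, today=None):
    """Split items into those to keep and those eligible for deletion."""
    today = today or date.today()
    keep, delete = [], []
    for item in items:
        (delete if expired(item, today) else keep).append(item)
    return keep, delete
```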

The final step to better data management lies in surfacing valuable data for business users. Machine-learning tools enable banks to understand their most important client agreements by collecting data from highly complex derivatives contracts, replacing manual processes. For example, automated collection of legal and eligibility data from OTC derivatives agreements now provides highly granular data at low cost to multiple data consumers – legal, credit, collateral and the front office. These services are invaluable: banks have lost more than $25 million on a single trade because of issues such as using the wrong interest rates, posting the wrong type of collateral or being arbitraged by competitors. By using data capture tools the right way, banks can be smarter with their data and manage risk, collateral and pricing more effectively.
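
At its simplest, this kind of attribute capture turns free-text agreement language into structured fields that downstream systems can query. The toy example below uses two regular expressions where a real system would apply trained models across large contract corpora; the field names and sample text are invented for illustration.

```python
import re

# Hypothetical fields captured from a credit support annex (CSA).
FIELDS = {
    "threshold_amount": re.compile(r"Threshold[:\s]+(USD\s[\d,]+)", re.I),
    "eligible_collateral": re.compile(r"Eligible Collateral[:\s]+([^\n.]+)", re.I),
}

def extract(agreement_text):
    """Pull structured fields out of free-text agreement language."""
    out = {}
    for name, pattern in FIELDS.items():
        match = pattern.search(agreement_text)
        if match:
            out[name] = match.group(1).strip()
    return out

sample = "Threshold: USD 5,000,000. Eligible Collateral: cash and US Treasuries."
print(extract(sample))
# {'threshold_amount': 'USD 5,000,000', 'eligible_collateral': 'cash and US Treasuries'}
```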

By implementing these systems for better data management, banks and other financial institutions take a key step toward controlling their information. The right combination of information classification, deletion and data capture tools will provide both banks and regulators with relief, security and peace of mind.

Mr. Lines is director, Financial Services, for San Francisco-based Recommind Inc. He can be reached at [email protected].