
Best Practices for Compliance Risk Assessment

Editor’s Note: For an update, read the 2018 article: Today’s best practices for compliance risk assessment.

Putting together a compliance risk assessment is pretty much standard procedure by now. Although risk assessment methodology in general has been around for quite a while, its prominence in the compliance field is a fairly recent phenomenon. For many compliance officers, formulating the Bank Secrecy Act (BSA)/Anti-Money Laundering (AML) risk assessment about five years ago was their first experience putting one together.

Fair lending soon followed (initially just for the largest banks; by now, nearly everyone), and we are now at the point where risk assessments are critical to the compliance function overall. Examiners expect banks to know where their compliance risks are and to devote resources to the areas that present the greatest risk to the institution. There is even a growing expectation that banks perform an enterprise-wide compliance risk assessment – that is, evaluate any and all compliance risks across the institution, rate them, and prioritize accordingly.

That is a daunting task, to be sure, especially since many compliance officers weren’t “raised” that way. We’re used to putting out fires when they crop up, preparing for new regulatory requirements, and generally providing advice; however, this new approach is the way of the future. This isn’t just a compliance concern – banks are increasingly being charged with understanding their operational, credit, market, and reputation risk profiles as well. Some see compliance risk as a subpart of operational risk, but this is a chicken-or-the-egg question: does compliance risk result from the way banks conduct operations, or are operations conducted the way they are because of legal and regulatory requirements? In the end it doesn’t matter; we have to evaluate compliance risk regardless.

So how best to do it? There is no one “right” way, but there are some best practices that have developed through much trial and error, and that’s what we’ll discuss here. The end game is to effectively evaluate the bank’s risk of violating laws or regulations and then to adequately mitigate that risk through well-designed and well-executed controls.

To start with, compliance risk belongs to the business units. They own it because the business processes involving the bank’s products, services, and customer interactions take place in those units, not in the compliance department or anywhere else. The compliance department exists to help business units identify risks and develop controls to mitigate them, but those controls should be performed within the lines. Business units must take ownership of the process.

Whatever can be done to achieve that buy-in within the business (and “because the regulators say so” usually won’t do it) will make the process easier and ultimately more effective. An approach that aspires to make everyone’s lives easier, by focusing time and effort on processes that present greater risk, is a much easier sell.

Rate Inherent Risk. This is often the most difficult concept to explain to those in the business units. Inherent risk is the risk of violations if there were absolutely no controls in place. No compliance department, no monitoring, no testing, nothing. It can be a difficult concept simply because inherent risk isn’t always explained very well. Here’s a typical conversation:

“What’s the inherent compliance risk for flood insurance in this line of business?”

“Oh, we’re good – the risk is very low. We do many things to ensure everything is done correctly and timely when it comes to flood.”

The point being missed, obviously, is that inherent risk is measured as if controls and mitigation strategies did not exist, but correcting that misunderstanding can be taken as an insult. “What do you mean my risk is high? I just told you all the things we do to prevent violations.” And if compliance produces dashboard-type reports, people usually don’t like seeing something in their area show up as red, even though it’s not a personal affront to the way the business conducts itself.

The key is to step back and make sure people understand that you’re not judging anyone’s performance when evaluating inherent risk. You’re merely obtaining a thorough understanding of the products, services, and processes involved in order to evaluate where compliance risk may lie. That takes detailed knowledge of both the regulatory requirements and the business processes. From there, the next step is to pin an objective label on it: apply a rating.

What scale should be used? There are generally two schools of thought: the low-moderate-high scale and the one-to-five scale. No regulatory requirement dictates any particular measurement system, as long as a conclusion is reached (in the form of a rating) and supported by a logical rationale. Low-mod-high is used throughout the Interagency BSA/AML Examination Manual, so that’s what many banks used when formulating that particular risk assessment, but again it’s not the only game in town.

There are two interrelated issues with the low-mod-high scale. The first is that many banks like to present risk information using a dashboard format, and low-mod-high corresponds nicely with green-yellow-red. This is great in theory, but in practice many people have a strong negative reaction when they see red, so the tendency is to avoid red at all costs. This can lead to underestimating risk. High inherent risk is not an indictment of the bank; it just signifies an identified elevated risk level. If inherent risk is underestimated, sufficient controls will likely not be put into place.

What results is a tendency to bunch everything in the middle: “We’ll rate yellow to avoid too much red, but on the other hand we don’t want to be seen as ignoring risk, so too many things shouldn’t be green, either.” The dashboard ends up mostly yellow.

The related issue is that when looking at this too-yellow dashboard, people start making distinctions within the moderate/yellow rating. “This one is kind of a low-moderate, but this other one is kind of a high-moderate,” and so forth. So even though you nominally have a three-point scale, the waters get muddied quickly and you end up with more landing points. So why not set it up that way in the first place?

A one-to-five scale (or something even more granular) takes care of this problem by allowing finer degrees of judgment. The grades can still be color-coded (blue for two, orange for four, and so on) to present information in a dashboard format, if that is what management wants.
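As a minimal sketch of how such a scale might be represented, here is one possible mapping of ratings to labels and dashboard colors. Beyond the blue-for-two and orange-for-four examples above, the labels and colors are illustrative assumptions, not a standard.

```python
# Illustrative mapping of a one-to-five risk scale to labels and dashboard
# colors. Any consistent, documented scheme would serve the same purpose.
RISK_SCALE = {
    1: ("low", "green"),
    2: ("low/moderate", "blue"),
    3: ("moderate", "yellow"),
    4: ("moderate/high", "orange"),
    5: ("high", "red"),
}

def describe_rating(rating: int) -> str:
    """Render a numeric rating with its label and dashboard color."""
    label, color = RISK_SCALE[rating]
    return f"{rating} ({label}, shown as {color})"

print(describe_rating(4))  # 4 (moderate/high, shown as orange)
```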

The only caution here is how to label the categories. Certainly terms such as high, high/mod, moderate, low/mod, and low work, but some may try to get creative and use terms such as “acceptable” or “allowable.” In compliance there is really no such thing as an “acceptable risk,” and we’ve all had conversations with those who claim they’ll “accept or take on the risk.” The risk assessment should not lead examiners (or anyone else) to think that the bank is prepared to allow violations of law or regulations. Compliance risk must be managed and mitigated, not allowed to occur. Don’t let terminology get you in trouble.

Evaluate Controls. Controls are processes that mitigate, or address and reduce, the inherent risks that have been identified. They can be automated or manual, but ideally they should be preventive, meaning they stop a violation from taking place. Detective controls, such as identifying past instances of noncompliance, are certainly useful for spotting what may continue into the future, but they only count problems that have already occurred; they don’t stop the problem from happening in the first place. Many argue these aren’t controls at all, but rather quality control or testing mechanisms.

An overlooked fact about controls is that there are really two aspects to their evaluation: design effectiveness and execution (or operational) effectiveness.

Design effectiveness sounds pretty obvious, but if a control is not designed properly it won’t matter how well it operates. For example, consider a control designed to ensure an adequate amount of flood insurance is in place on all structure-secured loans. To be effective, the control must be designed to identify any loan made by the institution that is secured by a structure; ensure the loan has a flood policy in place before closing (assuming the property sits in a flood hazard zone); and ensure the amount of coverage is adequate to satisfy the regulatory minimum.

An effectively designed preventive control would prevent loans from closing unless the proper amount of insurance is in place. This could be an automated process, where the closing package is halted by the system unless proper documentation of coverage is present, or it could be a manual check-off procedure. Evaluating design effectiveness means considering the reliability of the control: will it identify exceptions in every necessary instance? Are all systems and business lines covered by design? Can it easily be circumvented (manual processes tend to fall into this category)? These are a few of the factors that contribute to the evaluation of control design effectiveness.

Design effectiveness should be evaluated using a scale, just like inherent risk. There is no mandated methodology here either, but it is easier to use the same scale as elsewhere. Even a well-designed control serves no purpose, however, if it’s not put into place to do its job. That factor must also be evaluated.
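To make the design concrete, here is a minimal sketch of what the automated version of such a preventive control might look like. The field names, the coverage cap, and the simplified coverage rule are illustrative assumptions, not the actual regulatory calculation.

```python
from dataclasses import dataclass

# Illustrative cap only; actual limits depend on the program and property type.
ASSUMED_MAX_COVERAGE = 250_000

@dataclass
class Loan:
    loan_id: str
    secured_by_structure: bool
    in_flood_hazard_zone: bool
    principal: float
    insurable_value: float
    flood_coverage: float  # documented policy coverage; 0 if none

def may_close(loan: Loan) -> bool:
    """Gate the closing: return True only if the loan may proceed."""
    if not (loan.secured_by_structure and loan.in_flood_hazard_zone):
        return True  # the control does not apply to this loan
    # Simplified stand-in for the regulatory minimum coverage calculation.
    required = min(loan.principal, loan.insurable_value, ASSUMED_MAX_COVERAGE)
    return loan.flood_coverage >= required
```

Because the check gates the closing itself rather than flagging exceptions afterward, it is preventive rather than detective.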

Execution or operational effectiveness evaluates how well the control performs in practice: does it do what it was designed to do? In our flood example, if an automated control is designed to identify all structure-secured loans (the design is effective) but, due to technical deficiencies, it doesn’t always find all loans in a certain origination system, the control’s operation is not very effective.

This rating is judged independently of design effectiveness; a poorly designed control can operate perfectly and therefore earn a favorable execution effectiveness rating. However, the control’s overall rating cascades: it cannot be higher than the design effectiveness rating. In other words, if a control is not designed properly, it won’t matter how well it operates – the control will not be effective. So if a bank uses a one-to-five scale and a control’s design effectiveness is rated three (it’s designed moderately well) while its execution effectiveness is rated five (it operates extremely well), the control’s overall effectiveness rating cannot be higher than three. It wasn’t designed to catch all issues before they occur, so it’s not a highly effective control overall.
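The cascade rule reduces to a simple calculation. In this sketch, capping the overall rating at the design rating comes straight from the example above; also capping it at the execution rating (that is, taking the minimum of the two) is an assumption consistent with that logic.

```python
def overall_control_rating(design: int, execution: int) -> int:
    """Overall control effectiveness on a 1-5 scale (5 = most effective).

    The cascade rule: the overall rating cannot exceed design effectiveness.
    Capping by execution effectiveness as well (taking the minimum) is an
    assumption consistent with the same logic.
    """
    return min(design, execution)

# The example from the text: design rated 3, execution rated 5 -> overall 3.
assert overall_control_rating(design=3, execution=5) == 3
```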

Both elements are essential to properly evaluate compliance controls. A focus solely on how a control operates without also examining how it was designed risks overestimating its overall effectiveness.

Rate Residual Risk. Sometimes called controlled risk or something similar, this is the ultimate evaluation of where the institution stands after inherent risk is measured and controls applied. It answers the question “where do we stand right now?” This is also the critical rating from the examiners’ perspective, since it shows where the bank’s gaps are and where resources should be dedicated to further reduce the risk.

It should be measured in the same fashion as inherent risk, using the same scale (whatever that might be depending on the bank). A key point here is to ensure that the ultimate rating is supported by documentation, so examiners, auditors, management, or other interested parties can see the assumptions, methodology, and process behind the rating.

As an administrative matter, the residual risk rating cannot be higher than the inherent risk rating, no matter how well the controls are designed and executed. This makes sense, since controls serve to reduce inherent risk, not increase it. It is possible that the applied controls don’t move the needle on the inherent risk rating at all, but that’s a judgment call (and also a call for further action to tighten up the controls).

Residual risk ultimately dictates where the compliance officer needs to dedicate time and resources. And since resources aren’t unlimited (especially in the compliance field), banks should prioritize their action plans based on the highest residual risk ratings.
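As a sketch of how these last two ideas might combine in practice, the following steps inherent risk down by control effectiveness, enforces the cap (residual can never exceed inherent), and ranks assessment areas by residual risk. The step-down formula and the hypothetical area names and ratings are illustrative assumptions, not a prescribed methodology.

```python
def residual_risk(inherent: int, control_effectiveness: int) -> int:
    """Residual risk on a 1-5 scale (5 = highest risk).

    Stepping inherent risk down by control effectiveness is an illustrative
    assumption; the firm rule from the text is the cap: residual risk can
    never exceed inherent risk.
    """
    stepped_down = inherent - (control_effectiveness - 1)
    return max(1, min(inherent, stepped_down))

# Hypothetical assessment areas: (inherent risk, overall control effectiveness).
areas = {
    "flood insurance": (5, 3),
    "fair lending": (4, 3),
    "privacy notices": (2, 5),
}

# Prioritize action plans by highest residual risk first.
ranked = sorted(
    ((name, residual_risk(i, c)) for name, (i, c) in areas.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{name}: residual risk {score}")
```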

As long as banks have a well-thought-out plan of attack for their compliance risk assessments and adequately document their methodology, assumptions, and conclusions, they’ll be okay as far as the examiners are concerned. But this isn’t solely an exercise for the examiners’ sake; assessing risk is an important way to determine where the hot spots are in the bank and to avoid trouble in the future. In this age of rapid regulatory change, it’s absolutely essential.

Mr. Pry is a senior director with Washington, D.C.–based Treliant Risk Advisors LLC. He can be reached at [email protected].