MEASURING OPERATIONAL RISK

A number of people are trying to quantify and model operational risk. Are they trying to measure the unmeasurable?

By Nina Mehta

Derivatives people spend the bulk of their time looking at risks, but unless those risks are embodied in a number, they don’t really exist. The risk of data entry errors, rogue trading, a computer virus gnawing through databases and corrupting everything in sight—these operational risks are often as difficult to quantify as they are to compare. Nonetheless, attempts are underway to quantify and model elements of operational risk for a range of purposes—including capital allocation, better risk awareness and management, greater operational efficiency, and improved shareholder value. Other efforts are focusing on techniques to benchmark operational risks across an enterprise or across individual business units as a way to identify an institution’s weaknesses and shore up risk control. And still other efforts are geared toward decreasing costly operational failures in the thorny, flesh-torn area of data quality and management.

The key step in modeling operational risk is measuring exposures—and this means developing a loss database. The difficulty with developing such a database, however, is that no single bank has all the data needed, since the firsthand acquisition of that knowledge would have put it out of business. Banks can track and measure low-impact, high-frequency losses in, say, their back offices, but the kind of infrequent catastrophic risks that could bring down a bank like Barings can only be modeled.

One bank that has absorbed operational risk into its risk management mantra is Bankers Trust, which has developed what may be the most in-depth loss database of external events in the industry. The bank uses statistical analysis as well as actuarial models from the insurance sector to generate loss distributions for both common operational losses and high-severity tail events. The bank, moreover, is confident enough about its ability to produce good metrics that it has used a plug-in operational risk number for capital allocation for the last two years.
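
The article does not show Bankers Trust's models, but the actuarial frequency/severity approach it describes can be sketched in a few lines. This is a minimal illustration, assuming Poisson-distributed loss counts and lognormal severities with made-up parameters (not the bank's): simulate aggregate annual losses, then read off an expected loss and a tail percentile of the kind a capital allocation might reference.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_annual_losses(n_years=50_000,
                           freq_lambda=25,               # assumed mean number of loss events per year
                           sev_mu=11.0, sev_sigma=1.8):  # assumed lognormal severity parameters
    """Monte Carlo a frequency/severity model: Poisson event counts,
    lognormal loss sizes, summed into an aggregate annual loss."""
    counts = rng.poisson(freq_lambda, size=n_years)
    return np.array([rng.lognormal(sev_mu, sev_sigma, size=n).sum() for n in counts])

losses = simulate_annual_losses()
expected_loss = losses.mean()             # the routine, high-frequency cost of doing business
tail_loss_99 = np.percentile(losses, 99)  # a 1-in-100-year aggregate loss
print(f"expected annual loss:      {expected_loss:,.0f}")
print(f"99th percentile loss:      {tail_loss_99:,.0f}")
print(f"cushion above expectation: {tail_loss_99 - expected_loss:,.0f}")
```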

Other banks, recognizing the benefits of getting previously untracked risks under control—or at least in sight—are now playing catch-up, some with alacrity but more in fits and starts. They are beginning to track loss history, build their own databases, and map operational risk factors and linkages between losses and their probable causes. Another reason for the increased attention paid to operational risk these days is the steep rise in the cost of insurance coverage for these risks, which to date is the only proven hedge for pricey disasters.

Svelte models

NetRisk, a Greenwich, Conn.-based risk advisory and software firm, believes it has found a way to address operational risk. Last month it released RiskOps, a product that offers clients a database of external loss events, plus analytics to model operational risk. NetRisk broadly defines operational risk as the risks associated with managing a firm’s “nonfinancial assets.” These assets, says Dan Mudge, a senior risk adviser, include people (the attendant risks are of internal fraud, lack of training, processing errors and so on); physical assets (the loss of a business environment because of a fire, for instance, or the loss of bearer bonds in a bank’s transaction-processing office); technology (year 2000 problems, a failure in the money-transfer system or a virus corrupting internal databases); business relationships (unhappy customers requiring higher servicing costs, and lawsuits); and regulatory and other external issues.

RiskOps tracks losses in these areas across the financial services industry. Its database is compiled from information in annual reports, news stories, Lexis-Nexis and legal-database searches, and articles in financial publications. To ensure the relevance of the loss distribution information it’s selling, the company has built scaling techniques into its software so data samples can be adjusted for clients based on size, comparability of business and other factors. RiskOps is supported by Swiss Re New Markets, which is providing financial support and analytics based on its insurance industry expertise.
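
NetRisk does not disclose its scaling techniques, but the general idea can be sketched under simple assumptions: scale each external loss to the client's size using a proxy such as gross revenue and a power-law exponent. Everything below, including the exponent and the sample events, is illustrative rather than a RiskOps parameter.

```python
def scale_external_losses(losses, source_revenues, target_revenue, exponent=0.5):
    """Rescale external loss amounts to a target firm's size.
    Assumes severity grows with a size proxy (here, gross revenue) via a
    power law; the exponent is a modelling choice, not a RiskOps parameter."""
    return [loss * (target_revenue / revenue) ** exponent
            for loss, revenue in zip(losses, source_revenues)]

# External events as (loss, reporting firm's revenue), both in $ millions -- illustrative only
external = [(120.0, 8_000.0), (45.0, 2_500.0), (300.0, 20_000.0)]
losses, revenues = zip(*external)
print(scale_external_losses(losses, revenues, target_revenue=5_000.0))
```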

The RiskOps external loss database, however, is only the first part of a two-tiered plan. The second phase is the creation of an industry consortium database, which means, says Mudge, “institutions sharing private, nonpublic information.” NetRisk’s plan is to “aggregate internal information through indices and other means of disguise and then send it back [to clients], but in a form where it wouldn’t be easy to determine individual institutions’ contributions to the database.” To make the plan work, the firm says it will wait until it has four or five institutions on board so there’s enough of a critical mass to generate loss distributions in different areas and properly mask the data sources.

Some industry observers, however, believe a third-party aggregation of internal data from financial institutions will be extremely difficult to get off the ground. The primary impediment to building such a database, says Duncan Wilson, the head of IBM’s European risk consulting practice, is that the information is extremely sensitive and banks have had good reason not to share it in the past. Deborah Williams, a principal and research director at Meridien Research in Newton, Mass., who wrote a report last May called “Operational Risk Management Technologies,” agrees. “So far it appears to be a chicken-and-egg thing,” she says. “Everybody wants [the information] but only after everybody’s contributed to it. If all of my peers do it, they say, then I’ll do it too. But I don’t want to be the first.”

The wariness of banks on this issue accentuates a central difference between market and credit risk, on the one hand, and operational risk on the other. “With market risk,” says Williams, “people won’t tell you what model they’re using, but the fact that they have market risk is well-known. It doesn’t reflect on the larger organization as a whole.” Operational risk, however, gets to the heart of the institution. No unit in any bank or financial institution, she points out, “is exempt from operational risk, and there’s really no excuse for operational controls not to be good.” Everybody has operational risk failures, they tend to be kept under wraps, and they’re the kind of things that can cost people their jobs. Even within a company, industry observers note, people are sometimes reluctant to discuss the operational risks in their departments.

Another software vendor that has developed a tool to model operational risk is Toronto-based Algorithmics. The company will release its product, called Algorithmics ORM (Operational Risk Management), in mid-year. “We’re able to give firms an overall risk-adjusted performance measure for their businesses by adding operational risk to market and credit risk measures,” says Jack King, the director of the company’s operational risk business unit. Algorithmics ORM will measure back-office exposures in transaction processing by looking at the likelihood of a transaction failing at various control points—for instance, the point at which details such as margins are verified and matched between front-office and back-office systems.

Under this regime, the amount of a bank’s operational risk exposure is the market value of transactions that fail to match or settle on time, according to the bank’s existing standards. “Because banks have so many transactions, there’s a pretty high frequency of failures-to-settle,” says King, and those measurements generate “good statistics.” Algorithmics’ software can also be linked into event modeling for a range of low-frequency, high-loss events.
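
A rough sketch of the measurement King describes, with invented field names and a simplified pair of control points: exposure is taken as the market value of transactions that fail to match or settle on time, and the sheer volume of transactions yields failure-rate statistics by control point.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Transaction:
    trade_id: str
    market_value: float
    matched: bool          # front-office and back-office details agree
    settled_on_time: bool  # settled within the bank's own standard

def operational_exposure(transactions):
    """Sum the market value of transactions that failed to match or to settle
    on time -- the exposure measure described in the article."""
    return sum(t.market_value for t in transactions
               if not (t.matched and t.settled_on_time))

def failure_rates(transactions):
    """Failure frequency by control point -- the 'good statistics' that a
    high transaction volume makes possible."""
    counts = Counter()
    for t in transactions:
        if not t.matched:
            counts["matching"] += 1
        if not t.settled_on_time:
            counts["settlement"] += 1
    n = len(transactions)
    return {point: c / n for point, c in counts.items()}

book = [Transaction("T1", 5_000_000, True, True),
        Transaction("T2", 2_000_000, False, True),
        Transaction("T3", 750_000, True, False)]
print(operational_exposure(book))  # 2,750,000 of market value exposed to operational failure
print(failure_rates(book))
```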

Because operational risk itself is a moving target, its definition, at least for vendors, depends on what the software in question can address and quantify. Algorithmics sees operational risk as “the risk of loss to the book value from a failure in the transaction processing of the firm.” This definition is less broad than NetRisk’s, but broad enough to embrace most of the disasters that have landed banks, unhappily, in the headlines. Rogue traders, unverified option-pricing models and lack of back-office independence all fall within the definition. A fire in a bank’s processing room would also be an operational risk (although a fire in a bank’s head office wouldn’t). And in the case of fraud, King points out, “a person may be transferring money to a personal account, hiding money in other accounts or misvaluing transactions—and all these are failures in the proper processing of transactions.”

Why model?

One of the more ambitious reasons for modeling operational risk is to address future losses proactively through capital allocation. This means figuring out loss probability distributions across business units and setting aside capital to cover potential losses. But however attractive this may be to some banks, the facts on the ground indicate that there are few banks with true capital allocation schemes for operational risk. King points out that the Algorithmics software can help a bank “motivated” by the capital allocation issue, but that a good operational risk system “should also help an institution improve operational efficiency. If [management] can get a handle on operational risk, it can look at losses under particular scenarios and make better decisions about how to spend money on operations.”

Dan Mudge at NetRisk is also reluctant to emphasize the number as the light at the end of the tunnel—at least for now. “The [operational risk] analysis is compatible with what people do for market and credit risk,” he says, “but our goal is not just to produce a capital number. It’s to identify risk, to quantify it and to prioritize the allocation of resources.” The analytics, he points out, can highlight areas where a firm has gaps in its insurance coverage and where it might want to add protection. If, for instance, a firm decides there’s a 1 percent chance of a $100 million loss as a result of rogue trading over the course of five years, it may decide that the best strategy is to buy insurance for that extreme outlier event and to self-insure the lesser, more manageable risks.
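
Working Mudge's example through makes the arithmetic concrete. The premium quote and the expected cost of the smaller, retained losses below are assumptions invented for illustration; only the 1 percent chance of a $100 million loss comes from the example.

```python
# Illustrative arithmetic for Mudge's rogue-trading example.
p_tail = 0.01                    # chance of the extreme rogue-trading loss over five years
tail_loss = 100_000_000          # size of that loss
expected_tail_cost = p_tail * tail_loss      # $1 million expected cost of the outlier
quoted_premium = 1_500_000                   # hypothetical five-year premium for tail cover
expected_small_losses = 4_000_000            # assumed expected cost of frequent, manageable losses

# The strategy in the example: buy cover for the outlier (paying a spread over
# its expected cost to shed the volatility) and self-insure the predictable losses.
print(f"expected cost of tail event: {expected_tail_cost:,.0f}")
print(f"quoted premium for cover:    {quoted_premium:,.0f}")
print(f"retained expected losses:    {expected_small_losses:,.0f}")
```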

“Our goal is not just to produce a capital number. It’s to identify risk, to quantify it and to prioritize the allocation of resources.”
Dan Mudge
NetRisk

Another advantage of modeling, adds Mudge, is that the more data collected, the more likely it is that people will “start to think objectively” about operational risk categories and subcategories. Large discrepancies in the incidence of certain errors might suggest that firms are not yet defining a risk the same way—or that one firm hasn’t managed to tackle a given problem or the measurement of that problem.

Nevertheless, many in the industry don’t believe that loss databases are the way to go. Since operational risk varies between institutions, and is based on institution-specific hazards and the valuation of risk control and monitoring, many say that coming up with an operational risk number—or even a loss frequency distribution—will mean little to most managers. A handful of the financial institutions Williams and her colleagues at Meridien talked to simply didn’t think operational risk was worth analyzing—at least, not with hard and fast numbers. The only way to address operational risks, they said, was to install organizational checks and balances and proper controls. IBM’s Wilson adds that technology should be an issue only at the end of the process, once a bank knows clearly where its operational risks lie and how they can be handled.

“Measuring exposure to operational risk is one of the areas where there is no industry standard,” says Alan Bray, a partner in Deloitte & Touche’s U.K. financial services practice. Traditional risk management processes focus on quantitative, analytic methods, but a large aspect of operational risk, he notes, is about “getting away from looking at risk in a narrow sense, and looking at the qualitative risks as well—for example, the quality of the staff an organization is employing.” But how can such a touchy-feely risk be quantified? “That’s where people are trying to make subjective judgments of likelihood and impact,” he says, “without really assessing the issue of probable financial loss.” Operational risk is different from market or credit risk in that best practice is still evolving. In addition, adds Bray, it is extremely difficult to quantify the potential cost of longer-term effects such as reputational damage.

Tracking the information flow

Deloitte & Touche has decided that the best way to identify and assess operational risks is to combine the information gathered in workshops conducted internally with other sources of data such as loss-event databases. Its software classifies and benchmarks operational risks against best practices identified through the firm’s work with other banks. For reporting purposes, the software uses a “traffic light system,” but underneath is a “wealth of data on experienced risk” that’s sufficient to support a value-at-risk analysis, says Bray.

The software IBM uses in its consulting business spits out an operational risk number at the end of the process. This is as it should be, says Duncan Wilson, although he also stresses the importance of identifying causes—and not simply loss values—for transaction and risk management failures. IBM advocates a “balanced score-carding” process by which clients’ compliance with identified best practices is quantified. Wilson and his colleagues then try to convince institutions to allocate a piece of capital to operational risks once they know their error rate.
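
A hypothetical sketch of the score-carding idea: weight compliance with identified best practices into a single score, then scale a capital charge by the shortfall. The categories, weights and base charge below are invented for illustration, not IBM's methodology.

```python
def compliance_score(scores, weights):
    """Weighted average compliance with best practices, each scored 0 to 1."""
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_weight

def operational_capital(score, base_charge):
    """Scale a base capital charge by the compliance shortfall -- a simple
    stand-in for allocating capital once the error rate is known."""
    return base_charge * (1.0 - score)

scores  = {"segregation of duties": 0.9, "reconciliation": 0.7, "model validation": 0.5}
weights = {"segregation of duties": 3.0, "reconciliation": 2.0, "model validation": 1.0}
s = compliance_score(scores, weights)
print(round(s, 3), operational_capital(s, base_charge=50_000_000))
```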

“A person may be transferring money to a personal account, hiding money in other accounts or misvaluing transactions—all of these are failures in the proper processing of transactions.”
Jack King
Algorithmics

For software companies not prepared to take on the task of modeling operational risks and building loss databases, there remains the large, untamed area of data quality—a major slice of the operational risk pie for most financial institutions. Not surprisingly, many software companies are now stepping into the breach. Front-office trading systems are adding functionality to ratchet down operational failures, especially in the area of unauthorized trading, by requiring double validations for trades above certain thresholds, setting limits and adding tactical risk control features. But a number of vendors are defining the back office as the principal stomping ground for unchecked operational risk.

Two companies that have recently come out with what can be described as data-cleansing reconciliation systems are CSK Software and Financial Technologies Inc. Two months ago, CSK released PaceMaker, which last month went live in Deutsche Bank, its first client. PaceMaker is a transaction-flow monitor, says Jerry Goldman, the company’s U.K. managing director, that’s designed to catch data errors, inconsistencies and breaks in the processing of a trade that would otherwise cause the trade to fail. If the front office puts one amount on a trade, for example, and the back office enters another amount, the trade won’t settle. PaceMaker addresses this by establishing one location where a complete set of data on a trade exists, and then doing a compound reconciliation to ensure that the data on different systems are consistent. The software also produces metrics on where failures are coming from, adds Goldman, “so if a bank knows 20 percent of its failures are coming from SWIFT confirmations or because counterparty X is not providing correct settlement instructions, that information can guide the bank to change procedures or legacy systems.”
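
A minimal sketch of the kind of compound reconciliation Goldman describes, under stated assumptions (the trade fields and system names are invented, not PaceMaker's): hold one master record per trade, compare each downstream system's copy against it, and report which system and which field broke, so failure metrics can be tallied by source.

```python
def reconcile(master, systems, fields=("notional", "counterparty", "settle_date")):
    """Compare each system's copy of every trade against the master record
    and report (trade, system, field) for each break."""
    breaks = []
    for trade_id, golden in master.items():
        for system_name, records in systems.items():
            record = records.get(trade_id)
            if record is None:
                breaks.append((trade_id, system_name, "missing"))
                continue
            for field in fields:
                if record.get(field) != golden.get(field):
                    breaks.append((trade_id, system_name, field))
    return breaks

master = {"T100": {"notional": 10_000_000, "counterparty": "X", "settle_date": "1998-06-01"}}
systems = {
    "front_office": {"T100": {"notional": 10_000_000, "counterparty": "X", "settle_date": "1998-06-01"}},
    "back_office":  {"T100": {"notional": 1_000_000,  "counterparty": "X", "settle_date": "1998-06-01"}},
}
print(reconcile(master, systems))  # [('T100', 'back_office', 'notional')] -- this trade would fail to settle
```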

New York-based FTI has also launched a system that addresses the fractious issue of data control. According to Rob Flatley, senior vice president of sales and marketing, the company tackles the aspect of operational risk that lives at the “information aggregation layer,” beneath risk management systems and other systems installed to provide financial controls. Since debt and equity systems often don’t speak to one another across an enterprise, risk management is less efficient than it could be. FTI’s Global Financial Data Model is therefore designed to provide “data standards” for transactional information, market data and client-counterparty information. The system “defines the structure [in a database] for relating securities, issuers, clients, counterparties and transactions,” says Flatley. This permits institutions to bring together information across debt and equity systems, and to measure exposures and pending settlements more accurately—and faster.
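
A hypothetical sketch of the kind of relational structure Flatley describes, relating issuers, securities, counterparties and transactions so that pending exposures can be aggregated across debt and equity systems. The type and field names are invented for illustration, not FTI's data model.

```python
from dataclasses import dataclass

@dataclass
class Issuer:
    issuer_id: str
    name: str

@dataclass
class Security:
    security_id: str
    issuer: Issuer
    asset_class: str  # "debt" or "equity"

@dataclass
class Counterparty:
    counterparty_id: str
    name: str

@dataclass
class Transaction:
    trade_id: str
    security: Security
    counterparty: Counterparty
    quantity: float
    price: float
    settled: bool = False

def exposure_by_counterparty(transactions):
    """Aggregate pending (unsettled) exposure across debt and equity trades,
    once both feed a common data model."""
    totals = {}
    for t in transactions:
        if not t.settled:
            totals[t.counterparty.name] = totals.get(t.counterparty.name, 0.0) + t.quantity * t.price
    return totals

acme = Issuer("I1", "Acme Corp")
bond, stock = Security("S1", acme, "debt"), Security("S2", acme, "equity")
dealer = Counterparty("C1", "Dealer A")
trades = [Transaction("T1", bond, dealer, 1_000, 99.5),
          Transaction("T2", stock, dealer, 10_000, 25.0)]
print(exposure_by_counterparty(trades))  # {'Dealer A': 349500.0}
```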

Collectively, the operational risk systems on or near the market focus on the flow of data and information in a firm in order to determine the location of operational exposures. There are many naysayers and dark realists watching what happens in this new risk discipline, but if these heterogeneous operational risks do ultimately prove to be manageable, perhaps no risk will have to lie untended in the future, beyond the reach of a model.
