Modeling Risk In Its Full Complexity

Peter Davies, president of Askari, a State Street company, explains how temporal simulation techniques provide an integrated way to analyze risk across several business functions.

There is one unalterable fact in finance—time changes everything. Risk management systems are supposed to tell us about probable loss, yet they completely ignore time.

Most people in the financial world are focused narrowly on changes in market rates. But a host of other equally important factors are changing as well. Loan demand is changing. Competitive margins and spreads are changing. The costs of systems and personnel are changing. By the end of any given day, or in the next four quarters, a number of factors completely unrelated to market rates might have affected the profitability of our businesses.

But things are even more complicated than that. While all these changes are occurring around us, we are busy actively managing our portfolios. If the dollar declines 20 percent, we don’t sit still—we constantly balance hedges, rebalance portfolios, change allocation schemes and update volatility estimates.

What is the conceptual model behind all this? We need to describe how the portfolios we know and value today will change through time as they’re exposed to uncertain external variables and are actively managed. We need to take the present value of the portfolio we are managing today and project that through time into the uncertain future.

Temporal simulation techniques can help bring time back into risk management. It’s important, however, to recognize that the concept of time varies, depending on what one is trying to do. People doing short-term liquidity management for the bank care about the next 30 days; those managing assets against a life insurance portfolio think in terms of a 16-year duration; those dealing with other financial problems may be thinking about 25-year payout profiles. In this case, there are three different time periods, with three different sets of scenarios and three different sets of strategies. So how can we actually build something that considers the time element in our risk analyses? What do we need?

Since time passes every day, we clearly need to model daily portfolio events. People in the risk management business have gotten a little sloppy when it comes to modeling daily events. Some of us have asked: Do we really need a risk management system that actually knows when a coupon is paid on a particular day? That depends. We might need that level of accuracy if it makes a big difference in a portfolio just before or after the end of a quarter. Depending on how the accounting system works and how bonuses are paid, it may account for big differences indeed.

We need to account for the strategic events that occur every few days and force us to rebalance our portfolios—and we need to account for the weekly and quarterly strategy sessions in which we recalibrate our allocation rules.

Moreover, we need to account for the way market rates diffuse over time. To do this effectively, we need processes that include multiple paths and multiple time periods. After all, people analyzing their portfolios at the end of a particular period need to understand the paths they took to get there, not simply the portfolios’ finite, discrete future values. This analysis may involve multiperiod Monte Carlo analysis or multiperiod historical simulation.
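
To make the idea concrete, here is a minimal sketch of a multipath, multiperiod Monte Carlo simulation in Python. The mean-reverting short-rate dynamics and every parameter value are assumptions chosen purely for illustration, not a description of any particular firm's model; the point is simply that each path is kept whole, so end-of-period statistics can be computed on the route taken as well as on the terminal value.

# Illustrative sketch: multipath, multiperiod Monte Carlo for a short rate.
# The mean-reverting (Vasicek-style) dynamics and parameter values are
# assumptions chosen for illustration only.
import numpy as np

def simulate_rate_paths(r0=0.05, kappa=0.15, theta=0.06, sigma=0.01,
                        n_paths=10_000, n_quarters=8, seed=42):
    """Simulate quarterly short-rate paths: dr = kappa*(theta - r)dt + sigma*dW."""
    rng = np.random.default_rng(seed)
    dt = 0.25                                   # quarterly time steps
    paths = np.empty((n_paths, n_quarters + 1))
    paths[:, 0] = r0
    for t in range(1, n_quarters + 1):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        paths[:, t] = paths[:, t - 1] + kappa * (theta - paths[:, t - 1]) * dt + sigma * dw
    return paths

paths = simulate_rate_paths()
# Path-dependent statistics: not just the terminal rate, but the route taken.
terminal = paths[:, -1]
path_max = paths.max(axis=1)
print(f"mean terminal rate: {terminal.mean():.4f}")
print(f"95th percentile of the path maximum: {np.percentile(path_max, 95):.4f}")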

We also need to account for constantly changing external variables such as econometric data. This can involve things like personal income levels, anticipated loan demand, changes in the business cycle and the behavior of credit card receivables. All these should be incorporated in our model if they affect the performance of the portfolio.

We need to describe the elaborate policies, rules and games we play in the active management of our portfolios—in reinvestment, refinancing and rebalancing. Most of us don’t have the patience or ability to describe how we do what we do to a computer system. In fact, we don’t necessarily know anything about it until we actually do it. But it is possible to put system benchmarks into a computer. These are what I like to think of as “policy-neutral” strategies—the sorts of things that we do as part of the normal management of our portfolios. Once we enter these into the system, we can measure the risks we take over time as we deviate from our normal patterns.

What are the practical benefits of temporal simulation as a process? The short answer is that it provides a single integrated way of looking at risks. Risk measurement, control and management are difficult to do once, let alone many times. Yet that’s what we do. Firms routinely spend $3 million on a value-at-risk system, then spend $20 million on a credit system and another $1.5 million on an asset/liability system. All of them have different representations of positions, different customer hierarchies and different scenarios. The result is that different pieces of businesses are being managed on entirely different assumptions and are measured with entirely different mechanisms. It never adds up.

Temporal simulation is the only alternative that integrates risk analysis into a single simulation with an integrated risk model and integrated technology. It is necessarily more complex because it describes the risks we are exposed to in greater detail.

Temporal simulation takes a number of different risk measures into account. Sensitivity risk is the measure of the surface of a portfolio’s values intraday. Value-at-risk measures what that portfolio looks like when exposed to a range of alternative scenarios. Credit risk measures what portfolios look like when exposed to alternative scenarios through time. Asset/liability risk measures what discrete models for rates, spreads and loan demand will look like in the future, as well as how one might use different strategies to manage the exposure and the different net incomes, balance sheets and cash flows that each strategy generates.

The ultimate risk measure, however, is business risk, which pulls all the various risks together. What is really being managed at the enterprise level is the volatility of net income and net worth of the enterprise through time. Not only should we care whether the net income reported in the fourth quarter of 1998 is below expectations, but we also want to know why—because of credit losses, because of bad competitive margins or because we screwed up on cost-of-funds assumptions. Ultimately, we want to know why we were unable to manage our budgets and point to the things that caused this to happen.

All these different systems and mechanisms tend to make attempts at synthesis impossible. In the consulting my company has performed over the past two years, we’ve noticed that people in organizations never agree on important assumptions—and that they do that on purpose. That poses a particular problem: We can never really see the effect of different market and product assumptions because we can never put them together and compare them. What would happen if we took Caroline’s portfolio and strategy and Jack’s and someone else’s, and exposed them to all of the scenarios? How sensitive would the strategies be to the scenarios being assumed? We could never get an answer, because of the different mechanisms and systems involved.

If we put all the assumptions into one risk mechanism, however, we can test the validity of the assumptions. Temporal simulation offers a more inclusive, more flexible and more consistent solution. It is the only way to achieve detail-level risk and aggregation analysis with consistency across risk types, along with estimates of future portfolio values and returns.

The growth of credit derivatives is forcing this kind of synthesis. In most of our organizations, we have two totally different concepts of time and risk analysis. Credit risk analysis typically involves a few events distributed through long periods of time, in a way that is generally insensitive to scenarios. Market risk analysis, by contrast, involves shorter-term analysis of the volatilities to present value, with multiple scenarios. These two approaches are now being combined into a single transaction price. And all of a sudden, we are presented with two views of the world, two views of the bank, two views of what we are supposed to be doing for a living, combined into a single market. This helps people realize that they need a way of measuring risk that enables them to scale up and down that continuum, to deal with both credit and market risk as a single phenomenon.

I’m convinced of one thing above all: To have a future in risk management, one needs to include the future in risk measurement. The time element must be put back into risk measures.


Hedging Efficiently with VAR

Alvin Kuruc, senior vice president, and Bernard Lee, financial engineer, at Infinity, a SunGard company, explain how VAR can be used as a guideline for judging alternative hedging strategies.

The goal of hedging is to offset the risk inherent in a relatively illiquid position by taking positions in liquid instruments. While in theory this is achievable, in practice it can be done only in an approximate way. There are numerous considerations that come into play. More frequent rebalancing will reduce tracking error but will increase transaction costs. There is often a trade-off between liquidity and basis risk. The futures contract that best matches a particular exposure, for example, may be relatively illiquid. Would it be better to use a different contract instead? If so, how do we adjust the hedge for the resulting basis risk?

In order to address these questions in a systematic manner, we need a quantitative approach to the hedging problem. The success of value-at-risk methods over the past five years has demonstrated the feasibility of quantifying the risk of a financial portfolio as a single number using statistical methods. We thus propose the following paradigm: Formalize the hedging problem as one of minimizing the risk of the hedged portfolio as assessed by an appropriate VAR measure. VAR provides a measure by which the risk-reduction benefits of alternative hedging strategies can be quantified and considered in relation to their costs. For example, it is possible to quantify both the risk contributions of individual portfolio assets and the risk reductions achieved by individual hedging vehicles. A particularly inefficient hedging vehicle can be identified and replaced by a potentially more efficient alternative. The transaction cost of adding each alternative hedging vehicle can be weighed against the projected variance reduction. A final decision can be made by comparing the current Sharpe ratio of the aggregate portfolio to those computed for alternative options for allocating risk capital.

The minimum-VAR approach is a particularly appealing alternative for hedging interest rate instruments. Duration-based hedging approaches are limited in that they only hedge against parallel shifts in the yield curve. Other common approaches hedge against shifts in all the par rates used to build the yield curve, but involve taking positions in each of the curve-building instruments. The minimum-VAR approach allows the operator to specify the hedge instruments to be used and computes an optimum hedge based on the relative likelihood of yield-curve shifts. The well-known observation that almost all yield-curve movements can be explained by two or three factors suggests that it should be possible to construct efficient hedges with a relatively small number of hedge instruments. The minimum-VAR hedge can be efficiently computed for parametric variance/covariance VAR methodologies using delta approximations. The computation amounts to solving a “weighted-least-squares” problem, which is an efficient, analytic computation.
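
As a rough illustration of how the computation reduces to weighted least squares, the Python sketch below solves for hedge positions that minimize the variance of the hedged portfolio under a delta approximation. The three-factor covariance matrix, portfolio sensitivities and hedge-instrument sensitivities are invented numbers chosen only to make the algebra concrete.

# Minimal sketch of a minimum-variance ("minimum-VAR") hedge under a delta
# approximation. The factor set, covariance matrix and sensitivities are
# made-up numbers; the method reduces to a weighted-least-squares solve.
import numpy as np

def min_var_hedge(port_delta, hedge_deltas, cov):
    """
    port_delta  : (k,) portfolio sensitivities to k risk factors
    hedge_deltas: (k, m) sensitivities of m candidate hedge instruments
    cov         : (k, k) covariance matrix of risk-factor changes
    Returns hedge positions w minimizing variance of (port_delta + hedge_deltas @ w).
    """
    a = hedge_deltas.T @ cov @ hedge_deltas      # normal-equation matrix
    b = -hedge_deltas.T @ cov @ port_delta
    return np.linalg.solve(a, b)

# Three yield-curve factors (2y, 5y, 10y par rates), two hedge instruments.
cov = np.array([[1.00, 0.90, 0.80],
                [0.90, 1.00, 0.95],
                [0.80, 0.95, 1.00]]) * (0.05 ** 2)       # illustrative scale
port_delta = np.array([2_000.0, -15_000.0, 4_000.0])     # $/bp per factor
hedge_deltas = np.array([[1_000.0,     0.0],
                         [    0.0, 1_000.0],
                         [  200.0,   500.0]])            # $/bp per unit of each hedge

w = min_var_hedge(port_delta, hedge_deltas, cov)
residual = port_delta + hedge_deltas @ w
print("hedge positions:", w)
print("residual std ($):", np.sqrt(residual @ cov @ residual))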

One of the key benefits of the minimum-VAR approach is that the use of a single meaningful objective function provides a consistent means of extending the methodology. For example, limits on the size of positions in certain hedging vehicles can be accommodated simply by minimizing the risk measure subject to those constraints.

Gamma hedging for interest rate instruments is a potentially daunting problem since there is a gamma component for each pair of maturities, making it infeasible to hedge every gamma exposure exactly. By using a delta-gamma portfolio approximation, however, one can derive an explicit formula for the variance of the portfolio. An obvious approach is to compute hedge positions that minimize the variance of the hedged portfolio as assessed by the delta-gamma VAR measure. This can be further extended to a delta-gamma-vega VAR expression by including volatilities in the set of nominated risk factors. While it will no longer be possible to obtain an analytic expression for the variance-minimizing hedge position, a solution can be achieved by using numerical techniques. In most cases, a good starting point for the numerical minimization problem will be the hedging positions obtained from the delta approximation.
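
A sketch of how that numerical step might look follows. It uses the standard delta-gamma variance expression for normally distributed risk factors and starts the search from the delta-only hedge, as suggested above; all sensitivities, gammas and the covariance matrix are fabricated for illustration, and scipy's general-purpose minimizer stands in for whatever optimizer a production system would use.

# Sketch of delta-gamma variance minimization, starting from the delta-only
# hedge. All numbers are invented for illustration.
import numpy as np
from scipy.optimize import minimize

def dg_variance(w, delta0, gamma0, hedge_deltas, hedge_gammas, cov):
    """Variance of a delta-gamma P&L approximation with normal risk factors."""
    d = delta0 + hedge_deltas @ w
    g = gamma0 + sum(wi * gi for wi, gi in zip(w, hedge_gammas))
    gs = g @ cov
    return d @ cov @ d + 0.5 * np.trace(gs @ gs)

k = 3                                              # three risk factors
cov = 0.0025 * (0.5 * np.eye(k) + 0.5)             # illustrative covariance
delta0 = np.array([2_000.0, -15_000.0, 4_000.0])   # portfolio deltas ($/bp)
gamma0 = np.diag([-300.0, 800.0, -150.0])          # portfolio gammas (made up)

hedge_deltas = np.array([[1_000.0,     0.0],
                         [    0.0, 1_000.0],
                         [  200.0,   500.0]])
hedge_gammas = [np.diag([50.0, 0.0, 10.0]),        # option-like hedge instruments
                np.diag([0.0, -40.0, 20.0])]

# Delta-only starting point (weighted least squares, as in the previous sketch).
a = hedge_deltas.T @ cov @ hedge_deltas
w0 = np.linalg.solve(a, -hedge_deltas.T @ cov @ delta0)

res = minimize(dg_variance, w0,
               args=(delta0, gamma0, hedge_deltas, hedge_gammas, cov))
var0 = dg_variance(w0, delta0, gamma0, hedge_deltas, hedge_gammas, cov)
print("delta-only start :", np.round(w0, 2), "variance:", round(var0, 2))
print("delta-gamma hedge:", np.round(res.x, 2), "variance:", round(res.fun, 2))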

Using a statistical approach to address the basis risk present in almost any practical hedging problem is nothing new; Hull uses correlations to compute hedge ratios in Chapter 2 of Options, Futures and Other Derivatives. In the past, however, deployment of large-scale multivariate methods for this purpose would have entailed a substantial investment in development and maintenance. Today, the widespread availability of value-at-risk systems makes such deployment possible with a relatively small, incremental investment.


Quantifying Operational Risk

Chris Hamilton, a partner at KPMG, explains how to measure operational risk using earnings volatility and other factors.

The quantification of operational risk is potentially more challenging than that of market or credit risk, for a number of reasons. To start, there is only a limited amount of historical data to work with. Operational risk, moreover, has many components, including business and event risk, and the cause-and-effect issues are complex. In addition, there are more people with vested interests in operational risk than there are in market risk or credit risk.

Historically, most financial institutions have viewed operational risk as a matter of managing processes and procedures effectively and efficiently. This approach is focused on specific event-related risks, particularly off the back of mega-mergers or major systems implementations.

I believe that institutions first need to be able to stand back and view operational risk in a broader context that relates that risk to fluctuations in broad economic factors such as interest rates, stock prices, and overall income and employment. This will not only help executive management consider the business mix of their companies strategically, but will help them to prioritize investments in risk control initiatives and prevent indiscriminate and potentially inappropriate cost-cutting.

So let’s take a top-down approach to the problem. Analysts often find it useful to divide operational risk into different components. Event risk refers to rare, potentially catastrophic incidents such as natural disasters, systems breakdowns and large-scale fraud. Business risk refers to changes in the external economic environment. Analysts look at these two components separately, since event and business risk arise from distinct causes and call for different remedies or mitigation techniques.

Business risk partly reflects operating leverage, via the influence of fixed costs on profitability. A 20 percent revenue drop, for example, clearly hits net profits harder if fixed costs are 60 percent rather than 10 percent of revenues.

In a multiline financial institution, operational risk for the total institution and for each line of business is dominated by business risk, with event risk somewhat diversified away. Business risk itself derives from fluctuations in broad economic factors such as interest rates, stock prices, the entry of new competitors, and overall income and employment. Business risk can also arise from sweeping changes in the regulatory and technological environments. These factors simultaneously affect many of the products and services offered by particular financial institutions, and do not get diversified away at the institutional or individual business levels.

If relatively good financial information exists, the best approach to the problem uses earnings volatility to measure operational risk and the associated capital it requires. The goal is to allow the organization to understand and quantify the source of risk for individual businesses and products, using a limited amount of data in a relatively short time frame. Even if the time series is limited, the strategic information can be quite useful.

This approach makes a number of important assumptions about the sources of risk. It assumes that the volatility of a company’s earnings is directly proportional to the sum total of its various risks. It then breaks those risks out into different components, which are measured separately (a simple numerical sketch follows the list below).

  • Market risk is driven by the volatility of trading profits and net interest margins.
  • Credit risk is driven by the volatility of charge-offs.
  • Event risk is driven by the volatility of noncredit losses such as litigation, fraud and so forth.
  • Business risk is driven by the volatility of income for a particular line of business, which is based on a combination of its spreads, fees and direct costs.
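
A simple numerical sketch of that decomposition, with fabricated quarterly series standing in for an institution's management-accounting data, might look like the following. The 99 percent multiplier used to turn a volatility into an indicative capital figure is an assumption added for illustration only.

# Illustrative decomposition of earnings volatility into the components above.
# The quarterly series are fabricated stand-ins for management-accounting data,
# and the 99 percent multiplier is an assumption used only to show how a
# volatility figure might translate into an indicative capital number.
import numpy as np

quarters = 12
rng = np.random.default_rng(0)

# Hypothetical quarterly drivers for one line of business ($ millions).
trading_and_nim  = 40 + rng.normal(0, 6, quarters)   # market risk driver
charge_offs      = 10 + rng.normal(0, 3, quarters)   # credit risk driver
noncredit_losses =  2 + rng.normal(0, 1, quarters)   # event risk driver
business_income  = 55 + rng.normal(0, 8, quarters)   # spreads, fees, direct costs

components = {
    "market":   trading_and_nim,
    "credit":   charge_offs,
    "event":    noncredit_losses,
    "business": business_income,
}

z_99 = 2.33   # one-tailed 99 percent multiplier, assuming rough normality
for name, series in components.items():
    vol = series.std(ddof=1)
    print(f"{name:9s} volatility: {vol:6.2f}   indicative capital: {z_99 * vol:6.2f}")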

Focusing on revenues and noncredit losses, however, is not a perfect solution. These volatility measures contain a great deal of statistical noise that is not indicative of operational risk. Short-term revenue for particular businesses can be affected by market and credit risks, accounting conventions and other factors unrelated to operational risk.

In order to determine capital allocations that reasonably reflect operational risk by itself, it’s necessary to follow a five-step process of data cleaning and enhancement (a simplified sketch of the regression and allocation steps follows the list).

  1. Calculate the revenue volatility and associated operational risk capital for the institution.
  2. Compute the revenue volatility for the individual businesses.
  3. Adjust the individual business measures by removing fluctuations arising from extraneous factors.
  4. Use regression analysis to estimate the portion of adjusted volatility attributable to operational risk factors such as personnel turnover, growth rate and size.
  5. Derive the capital allocations implied by the operational risk factors determined using volatility measures.
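
The sketch below illustrates steps 4 and 5 under heavy simplification: adjusted revenue volatilities for a handful of hypothetical business lines are regressed on three assumed operational risk factors, and the fitted volatilities are scaled to an institution-level capital figure. Every number is invented; the factor list and the proportional-allocation rule are assumptions, not a prescription.

# A simplified sketch of steps 4 and 5: regress each business's adjusted revenue
# volatility on a handful of operational risk factors, then use the fitted
# values to drive capital allocations. All figures are invented.
import numpy as np

# One row per business line: [personnel turnover %, growth rate %, log size]
factors = np.array([
    [12.0,  5.0, 8.5],
    [25.0, 15.0, 7.2],
    [ 8.0,  3.0, 9.1],
    [18.0, 10.0, 7.8],
    [30.0, 20.0, 6.9],
])
adjusted_vol = np.array([14.0, 31.0, 9.0, 22.0, 38.0])   # $ millions, after step 3

# Step 4: ordinary least squares with an intercept.
X = np.column_stack([np.ones(len(factors)), factors])
coefs, *_ = np.linalg.lstsq(X, adjusted_vol, rcond=None)
fitted_vol = X @ coefs

# Step 5: capital proportional to the volatility explained by operational
# factors, scaled so allocations sum to the institution-level figure (step 1).
total_op_risk_capital = 250.0                            # assumed, $ millions
allocations = total_op_risk_capital * fitted_vol / fitted_vol.sum()

for i, (fv, cap) in enumerate(zip(fitted_vol, allocations), 1):
    print(f"business {i}: fitted volatility {fv:5.1f}, capital {cap:6.1f}")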

This approach reduces noise in the data by a factor of almost 500. In addition, the regression procedure ensures that businesses with similar operational risk factors obtain similar amounts of business operational risk capital.

Earnings-at-risk is another straightforward methodology that is easily understood by business people, and produces results in a short time period. The major limitation of this methodology is that it focuses on risks the institution has already taken but provides little insight into the current risk profile. Earnings-at-risk works best in stable environments where it is possible to assume that the institutional risk profile has remained reasonably steady over recent years.

In the near future, we will see an increasing interest in addressing these issues, with a consistent approach focused on capturing the interrelationships among different businesses and analyzing the internal and external drivers of risk. The operational risk framework will become an integrated part of the strategic planning and decision-making process, helping management allocate and manage resources, prioritize corporate efforts and investments, and make decisions about business acquisitions and divestitures.


The views and opinions are those of the author and do not necessarily represent the views and opinions of KPMG Peat Marwick LLP.


Spread Risk

Steve Pelletier, vice president at Theoretics, examines U.S. swap spreads and their impact on risk management.

Last summer, many experienced traders incurred large losses with positions that were supposedly hedged. Spreads of all types, from sovereign to corporate, behaved in a volatile fashion. U.S. five-year swap spreads rose from 45.5 basis points at the end of July to 79.5 basis points by the end of August, before peaking in mid-October at 97.5 basis points. This “six-standard-deviation event” wreaked havoc on traders and hedgers alike. In the aftermath, it is useful to examine the statistical properties of swap spreads and consider the implications of this analysis for risk management practices.

Swap spreads are the derivatives market’s equivalent to corporate spreads. Swap spreads represent the credit spread over Treasury notes that high-quality borrowers would pay in the market: They are added to Treasury yields to generate swap rates and yield curves that are used to value financial instruments. For example, assuming that the five-year Treasury yields 4.75 percent and the five-year swap spread is trading at 75 basis points, the five-year swap rate would be quoted at 5.5 percent.

From 1992 to July 1998, five-year spreads traded in a range of about 20 basis points to 60 basis points. As Figure 1 shows, these spreads climbed dramatically in response to global market uncertainty, rising to a high of 97.5 basis points last October before settling down. Although this volatility seems extreme, it is not unprecedented. The largest positive percentage change occurred in April 1994; the largest negative percentage change occurred in September 1993. Figure 2 shows descriptive statistics for the percentage change in spreads over two intervals, January 1992–October 1998 and January 1992–July 1998. Because of the large sample size, excluding the most recent data has only a small effect on the values. A trader or risk manager using only the data available before the most recent volatility, therefore, would have reached similar conclusions about the nature of spread risk.

Figure 2: Five-year swap spreads, daily percentage change
                                    1/92–10/98   1/92–7/98
Max                                      22.6        22.6
Min                                     -15.8       -15.8
Std. dev.                                3.37        3.35
Mean                                     0.02       -0.00
Correlation to Treasury notes (%)       -4.23       -2.24

Another issue regarding spreads is whether they are well approximated by a normal distribution. In finance, it is common to assume normality for risk management and derivatives pricing. Figure 3 compares the histogram of actual percentage changes in spreads with the theoretical normal approximation. From the figure, it is clear that swap spreads are not normally distributed. The histogram displays extreme clustering around the mean, with almost 50 percent of the changes close to the mean of 0.02 percent. In addition, the distribution shows a greater chance of outliers than normality would suggest. The historical probability of an event outside of three standard deviations is 1.35 percent, four times larger than the 0.3 percent predicted by a normal curve. In lay terms, swap spreads either change significantly or not at all.
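
A short sketch of that check appears below. The series of daily percentage changes is a fat-tailed placeholder rather than actual swap-spread data, but the calculation, comparing the observed frequency of moves beyond three standard deviations (and the mass near the mean) with the normal benchmark, is the one described above.

# Normality check on a series of daily percentage changes. The series here is
# a fat-tailed placeholder, not actual swap-spread data.
import numpy as np
from scipy import stats

changes = stats.t.rvs(df=3, size=1700, random_state=7) * 2.0

mu, sd = changes.mean(), changes.std(ddof=1)
beyond_3sd = np.mean(np.abs(changes - mu) > 3 * sd)
near_mean  = np.mean(np.abs(changes - mu) < 0.5 * sd)

print(f"observed P(|move| > 3 sd): {beyond_3sd:.2%}  (normal: {2 * (1 - stats.norm.cdf(3)):.2%})")
print(f"observed P(|move| < 0.5 sd): {near_mean:.2%}  (normal: {2 * stats.norm.cdf(0.5) - 1:.2%})")
print(f"excess kurtosis: {stats.kurtosis(changes):.2f}  (normal: 0)")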

It is also useful to examine the stability of the important statistics over time. Volatility, as measured by the daily standard deviation, and the correlation to five-year Treasury yields, are two measures that have a direct impact on risk measures such as limit calculations and value-at-risk. Figures 3 and 4 show the rolling 65-day volatility and correlation to yields, respectively. Both of these measures are rather unstable, oscillating around their average values. Two observations can be made, based on these graphs. First, although the markets, including swap spreads, were considered extremely volatile from August 1998 to October 1998, the rolling volatility for that period—although above the average—had been exceeded in six previous periods. Traders and risk managers therefore cannot use “extreme volatility” as an excuse for poor performance or unexpected profit and loss variations. Second, the rolling correlation graph shows a relationship that may be changing. The correlation between five-year spreads and yields became more negative than ever in November 1997 and has remained at historically negative levels for about a year now.

The volatility in swap spreads has a number of important implications for traders and risk managers.

Interest Rate Swap Book Management: Typically, when swap traders enter into a swap, they offset the interest rate risk with a trade in Treasury notes. A trader receiving $100 million on a five-year swap, for instance, would match the trade with a sale of five-year notes. But a “hedged” book can still be exposed to a good deal of risk. Consider what occurred during the week of August 19–25, 1998. Assume that a desk entered a short position in five-year swaps (receiving a fixed rate), while hedging with Treasuries. The approximate dollar value per basis point risk on each leg would be $44,000. Figure 4 shows the actual market rates along with the associated profit and loss on this hedged position. Over a one-week holding period, the position lost approximately $900,000! Clearly, spread risk is significant, and needs to be carefully considered by swap desks when establishing trading limits.

Figure 4: Receiving $100 million five-year swap vs. short $100 million five-year Treasury note
Profit and loss (in $ thousands)
                 Treasury yield   Swap spread   Swap rate   Receive swap   Short note   Total
Aug. 19, ’98         5.34 %         55.5 bps      5.895 %          0            0           0
Aug. 25, ’98         5.11 %         75.5 bps      5.865 %       +132       -1,012        -880
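
The arithmetic behind Figure 4 can be worked through explicitly. The sketch below takes the $44,000-per-basis-point figure and the two market snapshots from the figure; the signs follow the position described (receive fixed on the swap, short the five-year note).

# The Figure 4 arithmetic, step by step. The $44,000 per basis point figure
# and the rate snapshots come from the text and figure above.
dv01 = 44_000                                # dollar value of a basis point, each leg

# Aug. 19 -> Aug. 25, 1998
swap_rate_chg_bp = (5.865 - 5.895) * 100     # -3 bp
tsy_yield_chg_bp = (5.11 - 5.34) * 100       # -23 bp

# Receive fixed: gains when the swap rate falls.
receive_swap_pnl = -swap_rate_chg_bp * dv01  # about +$132,000
# Short the note: loses when the Treasury yield falls.
short_note_pnl = tsy_yield_chg_bp * dv01     # about -$1,012,000

total = receive_swap_pnl + short_note_pnl    # about -$880,000
print(f"swap leg: {receive_swap_pnl:+,.0f}")
print(f"note leg: {short_note_pnl:+,.0f}")
print(f"total:    {total:+,.0f}")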

Option Pricing: Most theoretical option models make assumptions about the distribution of the underlying asset when calculating fair value. The most commonly assumed distributions include the bell-shaped normal or lognormal distributions. One popular product is called a “spread-lock” option. This gives the user the right to pay or receive at a fixed spread over Treasuries at some future date. For example, a buyer of a spread-lock call option on a five-year swap, struck at 50 basis points, will profit if spreads widen to 55 basis points. Given the historical distribution of five-year swap spreads, it would seem foolish to price this option using an assumption of normality. A more appropriate and robust model would account for the distribution shown in Figure 3. The resulting option prices are likely to be lower for at-the-money options and higher for out-of-the-money options.

Value-at-risk: Many institutions are placing increasing emphasis on the calculation of VAR, and use this number religiously to track performance. Two of the major inputs to VAR calculation are volatility (see Figure 5) and correlation between assets. The variability of standard deviation and correlation demonstrates that it is difficult to determine the appropriate numbers to use for these two inputs. For example, the long-term correlation between five-year swap spreads and five-year Treasury notes is -4.2 percent, which is virtually no correlation at all. However, over the last year, the correlation has been averaging about -40 percent. A manager using this value would have failed to recognize the directional risk a spread position contributed to his swap book over the previous few months. In this case, using correlation over a shorter horizon (for instance, 65 days) would have more accurately mapped the true VAR. This is not to say that using a long-term measure is incorrect, but rather suggests that VAR is a crude estimate of risk and must be used as a rough benchmark only.
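
As a rough illustration of how much the correlation input matters, the sketch below computes a parametric VAR for a book with exposure to both Treasury yields and swap spreads, once with the long-term correlation of -4.2 percent and once with roughly -40 percent. The basis-point volatilities, exposures and the 99 percent multiplier are placeholder assumptions; only the two correlation figures come from the discussion above.

# Sensitivity of a two-factor parametric VAR to the correlation input.
# Volatilities and exposures are placeholders; only the correlations are
# taken from the discussion above.
import numpy as np

def parametric_var(dv01s, vols_bp, rho, z=2.33):
    """One-day parametric VAR for two risk factors (yields and spreads)."""
    corr = np.array([[1.0, rho], [rho, 1.0]])
    cov = np.outer(vols_bp, vols_bp) * corr
    return z * np.sqrt(dv01s @ cov @ dv01s)

dv01s = np.array([10_000.0, 44_000.0])   # $/bp to Treasury yields, swap spreads
vols_bp = np.array([6.0, 2.0])           # assumed daily basis-point volatilities

for rho in (-0.042, -0.40):
    print(f"correlation {rho:+.3f}: one-day VAR ${parametric_var(dv01s, vols_bp, rho):,.0f}")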

Three Federal Reserve interest rate cuts have helped to calm the markets and reduce volatility. Indeed, judging by the equity market’s rapid climb to near-record highs, one would think that the events of the past summer were nothing more than a bad dream. Smart professionals, however, will not forget the summer of 1998 as a time when hedges no longer served as hedges (as swap traders who were short Treasuries found out), and when long-term statistical relationships deteriorated (that is, when swap spreads and interest rates became negatively correlated).
