Roundtable: The Limits of VAR

Does the increasing reliance on value-at-risk create a whole new set of problems? Are we better off with nothing at all?

On February 7, Derivatives Strategy invited a number of leading figures in the derivatives business to participate in a series of roundtable discussions as part of the 1998 Derivatives Hall of Fame award ceremonies. The first session focused on the limitations of value-at-risk methodologies. The second session (the transcript of which starts on Page 24) tackled the issue of the limits of modeling in general.

During the first session, critics of VAR argued that the increasing reliance on VAR methodologies introduces new risks and that false confidence in a flawed methodology often only makes things worse. The standard errors in competing VAR calculations and the unreliability of correlation matrices make most VAR numbers virtually meaningless. Critics also noted that widespread hedging with VAR could lead to a breakdown in correlations in the financial markets—and lead traders to find ways to get around the assumptions in VAR models. Some also noted that VAR neglects risk/return measures as well as significant operational and credit risks.

Although proponents of VAR readily admitted some of these shortcomings, they argued that stress testing, not VAR, should be used for worst-case scenarios and that quantifying the risks in the best way possible is better than doing nothing at all. All agreed that new methods need to be found to build judgments into the modeling process in order to understand causal relationships that may not be picked up by VAR models.

"The essence of civilization is measurement, and value-at-risk measures things in a way most people can understand. Although VAR is most valuable because it forces people into a process of thinking about risk, we also need more intuitive measures of risk. Asian financial institutions got into trouble because they didn't measure market and credit risk. Value-at-risk won't tell you the worst case; that's what stress testing is for. It is better to quantify and look for ways to improve predictions than do nothing at all.”
—Philippe Jorion

PARTICIPANTS
Joe Kolman, editor, Derivatives Strategy
Michael Onak, Americas director of Arthur Andersen's Derivatives and Treasury Risk Management Consulting Group
Philippe Jorion, professor, University of California, Irvine, and author of Value at Risk: The New Benchmark for Controlling Derivatives Risk
Nassim Taleb, senior adviser, Banque Paribas, and author of Dynamic Hedging: Managing Vanilla and Exotic Options
Emanuel Derman, managing director, Goldman Sachs
Blu Putnam, president, CDC Asset Management
Richard Sandor, president, Hedge Financial
Stan Jonas, managing director, Societe Generale/FIMAT
Ron Dembo, president, Algorithmics
George Holt, managing director of quantitative finance, Arthur Andersen
Richard Tanenbaum, partner, Savvysoft
William Margrabe, president, William Margrabe Group
Dan Mudge, partner, NetRisk
James Lam, vice president, global risk management, Fidelity Investments
Jim Rozsypal, senior manager of Arthur Andersen's Derivatives and Treasury Risk Management Consulting Group

Philippe Jorion: My commute to this conference was a bit longer than that of most of you, since I came from Orange County. One of the advantages of having a long commute is that you can spend time reading on the plane. And I wanted to tell you about a very good book, The Measure of Reality, which asks the following question.

Some 900 years ago, Europe was populated by dull people in a barbaric state of civilization. Europe was much less advanced than the Arab world, for instance. Then, 400 years afterward, Europe embarked on a wave of imperialism that seems to have no precedent in history. The book asks, How did this happen?

The thesis of the book is that this change happened as Europe started to "measure” the material world. What happened was a change in the mentality that induced people to start measuring space and time. It was the beginning of clocks, measuring space, measuring the material world and the environment. That led to huge technological progress in Europe and explains why Westerners seem to have been able to conquer all other civilizations they met.

In a way, value-at-risk is along the same lines. What you are doing is trying to measure risk in a systematic way. I view VAR as an extension of the idea of measuring things. The benefit of VAR is that it measures risk in a way that most people can understand. The new aspect of VAR is that it is applicable to the total risk of a portfolio and that it accounts for leverage.

In my view, the widespread use of VAR is a huge improvement over previous practices. If you go back to a number of derivatives disasters we have had in the last few years, the only common lesson you can draw from these is that there was a lack of risk controls. These debacles occurred in different markets and for different reasons. In some cases, there was a rogue trader. But the only common theme across all of these disasters is that there was a lack of sensitivity to the risks and a lack of control.

That's what has led to the widespread use of VAR. And it explains why VAR has become a benchmark for measuring market risk.

I also want to emphasize that what's useful about VAR is not the end number, the number that you just compute and report. Although it's useful to have a number to report to people on a board, we need to have a more intuitive understanding of risk. The main benefit of VAR is that it imposes a process on the firm. By computing VAR, the firm is forced to think about risk in ways that it was not doing before.

"Measuring events that are unmeasurable can sometimes make things worse. A measuring process that lowers your anxiety level can mislead you into a false sense of security. The general adoption of value-at-risk by investors will lead to a generalized breakdown of correlations. Being scientific does not mean being quantitative. Undue emphasis on mathematics can allow you to lose site of the real world. You have to start with knowledge of what's going on in the world and then possibly refine it with statistical methods, not the other way around.”
—Nassim Taleb

I'm sure we'll hear from other people across the table about the drawbacks of VAR, but my view is that it's an extremely useful tool. It gives a first-order approximation to the risk of a financial institution. It is now being applied to bank trading portfolios. Now VAR is spreading from financial institutions to institutional investors.

In my view, this is a very positive development. But I want to emphasize again—what's really important about VAR is not the end number but the process that forces an institution to think about risk.

Joe Kolman: Nassim, do you want to respond?

Nassim Taleb: Yes, I'd like to answer Philippe's points, but not in the same order.

The first question concerns measuring, whether you can measure randomness or not. Frank Knight of the University of Chicago defined two forms of random events. One of them he called measurable risk, the other he called nonmeasurable uncertainty. I think that it's a grave mistake to try to mix the two.

Measurable risk is when you have a handle on the randomness. If I throw a die, for example, I can pretty much measure my risk because I know that I have a one-in-six probability of a three coming up. Nonmeasurable uncertainty is when I'm throwing the dice without knowing what's on them. In the real world, most social events are nonmeasurable because nobody hard-coded the rules of the game.

One example of an arrogant attempt to measure events that were too complex for us to quantify terminated quite eloquently with the fall of the Berlin Wall. We had people who wanted to build a scientific society based on the abilities of the social planners who could measure things and act accordingly. History is teaching us that getting more sophistication in measuring things does not necessarily represent progress. It may bring us backward, as it did with scientific communism.

Jorion's second point is that since the process of using VAR is a good effort, VAR has got to be a good method. I contest such auxiliary justification, as it could be used to defend things like astrology. The fact that soul-searching and risk-studying is a good thing does not necessarily mean that a specific method that measures risk will be an improvement. Besides, people here who are traders probably know that having a little bit of anxiety is better than having no anxiety at all. And a number that lowers your anxiety by misleading you could be bad for you.

The third point I'd like to make concerns the distortions caused by the generalized application of VAR. We know what naive diversification has led to. It led to what we saw last October. A phenomenon called contagion resulted from the fact that people were hedging Korea with Brazil and Mexico with Russia. These contagion effects will be magnified by a generalized use of VAR because it would break down correlations. There are two enemies we have in the financial markets. One is excessive leverage based on measurement, even if it's initially the right measurement. The second is the feedback effect that leads to what I call illusory diversification. Our activities may invalidate our measurements. All stock markets go down together.

Jorion: There are two comments I want to make. The first is that Nassim said you cannot measure risk. This just goes against the whole idea of probability theory. All of the theory of finance has been trying to measure risks. You can say, as Nassim does, that you should throw away all of these advances and all of the theory and replace it with 2,500 years of market experience. But I don't know where we were 2,500 years ago. Personally, I think that there has been enormous improvement since.

The other point I want to make is about Asia, and here there are two subpoints. First, the parameters are not stable. I agree that when we measure volatilities and correlations, stability is an important issue. These parameters may change over time, but the focus, I think, should be to try to form better forecasts of volatility and correlation as opposed to throwing away the whole process.

There's another thing I want to emphasize. The reason why Asian financial markets are in such big trouble is that Asian financial institutions have a history of not measuring market risk and not measuring credit risk. And now Asia has to clean up its financial system. In that respect, I think the U.S. financial markets are much more advanced than all other financial markets because American markets are really at the leading edge of this measuring exercise.

Taleb: I would like to dispel this idea that being against naive measurement means an unconditional opposition to all of the progress we have had in economics, probability theory and social science. Some of it is good, but only when applied properly. Being scientific does not necessarily mean being quantitative. Medicine is very scientific, in the sense that it has rigor in its search for the truth, but it is not yet quantitative.

"We try to build judgment into quantitative models using prices as well as the factors that determine prices, but most VAR models are missing those judgments. VAR numbers may be flaky, but it's possible to measure your confidence in those numbers and act accordingly.”
—Blu Putnam

Sophisticated models are great provided that you can understand something about the assumptions. Most people don't understand the extent of the assumptions.

I'd like to read something written about 55 years ago. "Too large a proportion of mathematical economics are merely concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.” The author is John Maynard Keynes, and that's the concluding paragraph of his General Theory.

Kolman: Thank you. I'd like to open it now for questions or comments.

Emanuel Derman: I have a question for Nassim. Can you give me an example of what it means to be scientific but not quantitative?

Taleb: Yes. Look at a time series—say of Mexico. Someone who's purely quantitative, just from looking at the data, might infer that there is no volatility and that Mexican currency presents no risks to investors.

But take someone who has read the newspaper, who's rigorous in his thinking, and who took the time to get familiar with the dynamics of foreign exchange markets and central bank reserves. He would know that something that takes place in Korea could spread to Mexico. Such an approach is scientific—that is, rigorous in searching for the truth and understanding the risks—but is not the least bit quantitative.

Being scientific does not mean using inductive models that are based on pure statistics.

Derman: Yes, but once you've taken account of the extra effects that you've mentioned that people were ignoring, it still seems to me you could be justified in then applying some sort of statistics to the variable you've encountered.

Taleb: Exactly. You start by searching for logical and causal relationships. Then later on, you can refine your analysis by using statistical methods if you still feel the need to do so. One of the mistakes I made was using VAR, without much intuition about the data, and losing money. Then I got wiser and learned of the trap's existence. Now I start by trying to understand what is going on in the world, and I use statistical methods as a mere appendage to my reasoning. Value-at-risk should be nothing but a small footnote in the way we view the risks. Not the dominating tool.

Kolman: I'd like to hear from Blu Putnam, who is head of CDC Investment Management, a company that runs two or three billion dollars using various sorts of quantitative strategies. I know that, as an end-user, Blu has his own feelings about VAR.

Blu Putnam: I am in the camp that believes that the process of using VAR results in a positive outcome. As an industry, we are studying risks more thoroughly than we once did. But I am also in the camp that believes that if you are a slave to history, you are dead. And, many VAR users are going to experience unintended problems because of their slavish use of historical risk data.

At CDC, we do what Nassim is suggesting. We try to build judgment into our quantitative models and try to forecast risks and correlations in a systematic way that involves using not only the price series, but also the factors that we think determine the price series, our excess return forecasts and our recent forecasting errors.

Let's say you get a situation like 1993, where the U.S. Fed Funds rate was flat, at 3 percent, for the full year. If one uses only historical price data of U.S. short-term debt securities, VAR will tell you there is very little risk in the U.S. interest rate market, since the historical standard deviation of the price series had been heading lower and lower as the Federal Reserve held short-term interest rates fixed. Of course, in February 1994, fixed-income markets blew up. Had you been looking at factors like inflation, employment, growth and things like that, you would have heard a lot of noise and known that there was a potential storm brewing. A value-at-risk calculation based solely on the recent history of the price series will, by construction, never see a storm coming, and worse, the message that will be sent is that life is getting increasingly less risky—until the storm hits and it is too late.
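To make the mechanics concrete, here is a minimal sketch, on hypothetical numbers rather than any panelist's actual model, of how a VAR estimated only from a calm price history compares with the kind of move that actually arrives:

```python
import numpy as np

def historical_var(returns, confidence=0.95):
    """One-day historical-simulation VAR: the loss at the chosen
    percentile of the observed return distribution."""
    return -np.percentile(returns, 100 * (1 - confidence))

rng = np.random.default_rng(0)
calm_year = rng.normal(0.0, 0.0005, 250)   # a year of very quiet daily returns
actual_shock = -0.03                       # the kind of move that arrived in early 1994

var_before = historical_var(calm_year)
print(f"VAR forecast from the calm history: {var_before:.4%}")
print(f"Size of the shock vs. that VAR:     {abs(actual_shock) / var_before:.0f}x")
```

The history-only forecast is a small fraction of the loss that shows up, which is Putnam's point: without factor-based judgment, the model keeps reporting that life is getting less risky right up until the storm hits.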

"VAR should be used as just one tool, not as the only risk measure. You should also use other techniques in case your assumptions don't hold.”
—Michael Onak

I think that the more good quality judgment and factor-based forecasting that you can build into your quantitative tools for risk forecasting, the better off you are. Unfortunately, most VAR systems, as practiced, do not do any of that.

Kolman: How would you modify the systems, if you were setting standards for the new VAR or modified VAR?

Putnam: I think if a company had no measure of risk before and it started using historical VAR, it is probably better off than it was at the beginning. Doing some type of VAR, even a historically based system, at least recognizes the importance of correlated positions in risk measurement. Of course, the user also must recognize that if quality risk forecasting is the objective, then historically based VAR processes using only lagged price series data are going to produce poor risk forecasts, are of limited usefulness and can even hurt by giving the user a false sense of security.

Kolman: The issue as far as I can tell is whether there's an advantage to using a model—VAR—that may have flaws over nothing at all. Isn't that the issue?

Michael Onak: I think what we've been saying is that VAR is just one tool and that there needs to be more means of measuring risk. That can include both judgments and the tracking of other risk factors. There may be other methods that tell you when the tools you've been using and the assumptions you've had are no longer relevant. The danger is that some companies look at VAR as the One Big Measure.

"Quantitative issues are important, but let's remember that rogue trading and leverage were responsible for most derivatives disasters.”
—Richard Sandor

Taleb: I would agree with you that if people used it as just one tool of many, it may not be as harmful. But the reliance on VAR has hurt me, and that's why I believe it can hurt other people. Another point is the variance of that tool. Value-at-risk could be right. You can come up with an estimator for your risk that may be in the long run accurate. Unlike many measures, however, such a number has a huge standard error. So, here you have a risky estimator of risk. That's what bothers me about it. And that's why I consider that, in many cases, not using VAR could leave people better off.

Richard Sandor: I find myself pleased with the quantitative developments. But let's take a look at the derivatives problems that have occurred. Barings was rogue trading. Orange County was rogue trading and leverage. Korea was leverage. Mexico was leverage and/or undemocratic capital markets.

So if you focus on quantitative techniques, you miss virtually every major problem that systemically occurred. I think we all need to work together to focus on things like overleveraged economies, overleveraged firms and rogue trading, as well as quantitative techniques. That's where the problems seem to come from.

Jim Rozsypal: There's one element that I believe has been overlooked in the past several years, and that's the human element of VAR. It is people who must understand the technology and understand how to interpret the results of VAR analysis and balance it with other means of testing. And it's people who need to be able to articulate the results to management, shareholders and analysts.

While much of the money to date has been spent on technology and quantitative analysis, many market participants will have to continually invest in people and training in order to enhance the risk monitoring capabilities. This, hopefully, will lead to an evolution of VAR. Not a revolution against VAR.

Stan Jonas: Value-at-risk is a strange analysis, because in a sense the level at which the analysis is carried out is the wrong point. It is at the human level that risk resides. At the risk of paraphrasing the National Rifle Association, positions don't lose money, people do. Portfolios in and of themselves are not risky. Banks are not risky. Traders are risky.

I once had to explain to my father that a bank didn't really make its money taking deposits and lending out money to poor folk so they could build houses. I explained that the banks actually traded for a living. They take risk. And so the first question is: What is the function of having a trading operation if not to take risks?

Institutions exist to take risk. Traders and portfolio managers and investors are not interested in having no risk. They're interested in having as much risk as they can take, relative to their survival.

You have to ask: "What are the goals of the traders and their management?” Any VAR system is going to produce responses. Every bank today, rational or not, is developing methods to maximize the amount of risk potential they have under the measuring system given by either the BIS or the Federal Reserve, or their own self-monitoring efforts. Individuals are no different. Any rational individual is going to say, if you're giving me the dice, I want to throw them as many times as possible before I leave the table.

"Don't forget that you need people to be able to understand and articulate the results of your VAR analyses. Effective risk monitoring means a sizable commitment in people and training.”
—Jim Rozsypal

What will happen in VAR is what has happened with every other monitoring system: creative traders will figure out how to arbitrage the system. All traders, when they run complicated books, will use the parameters that give them the most flexibility within the modeling system.

What will be the consequences? I think we will see what we all saw in the Asian marketplace before last July. Not only was every VAR system incorrect ex post, it was totally inappropriate. In fact, as a direct consequence of the classic "peso problem,” and much like portfolio insurance in 1987, the prevalence of VAR and the apparent statistical comfort it gave people probably increased the size and the risk of the exposure that banks were willing to take ex ante. It would be interesting to see an academic exercise, Philippe: Exactly what did these VAR systems signify, and what, systematically, were the consequences of having so many participants using, effectively, the same "stop loss” programs? The equity market in 1987, Sandy Grossman's work on the consequences of portfolio insurance and later papers may give us a powerful starting point for this investigation.

Of course, it's ironic that JP Morgan, as the progenitor of both CreditMetrics and VAR, had to take such a large haircut recently against its positions in the Asian marketplace. Not to mention the fact that it probably ignored the operational risks of dealing with counterparties who will legally avoid their obligations—what I call "documentation arbitrage.” What happened? Where was that vaunted 24-hour system so widely advertised and even sold to others?

Kolman: Stan, you're implying that there are people out there who are aware of how people use VAR and trade against them because they know what their trading assumptions are. Have you actually seen that on the trading desk?

"Any VAR risk system will inevitably encourage traders to figure out ways to take the most risk they can.”
—Stan Jonas

Jonas: I think you see it particularly, as Nassim pointed out, in these contagion effects. Imagine if eventually you had everybody, including institutional traders and hedge funds, using a variant of a VAR system. Perhaps it's more like a RAROC [risk-adjusted return on capital] system, where they don't want to lose any more than a certain amount of their capital at any given time, because having a large drawdown is the cardinal sin in the trading world. It would look bad for that ultimate trade: raising even more money. That means everybody has the same proportional drawdown level, the limit on what they can lose.

The magic goal in this not-so-imaginary world is to have a fund or trading program that manages to make, say, 30 percent annually and never has more than a 3 percent monthly drawdown, peak to trough. But what happens when everybody has these same drawdown parameters and then you have a random shock to the system?

"The calculation errors in VAR are so enormous, the number is almost meaningless. Errors in forecasting correlation matrices make VAR inherently flawed.”
—Ron Dembo

My first premise is that after a given period of time, everybody has pretty similar trades. After 10 successful years, everybody is doing the Thai baht carry trade. Why? Because even though you think it might be a risky trade, all your friends are getting rich doing it, and after a while it becomes difficult to resist the pull. You don't want to be the only person at the hedge fund cocktail party who is not doing the trade du jour. Plus, the statistics show that it's a risk-free trade. After eight years, it's an immutable fact—Thailand doesn't devalue. So you begin to look like a person who is not scientific—you're a victim of your own unfounded insecurities, a man of the past. I mean, are not the data there before you? All your friends are getting wealthy. Why don't you, too, take on these risk-free trades?

What results then is that people have portfolios that are diversified in virtually the identical fashion. This begs the question: diversified relative to what? And then you have some shock to the system. Under a VAR approach, which in the last analysis is nothing but a sophisticated stop-loss strategy, everybody tries to shrink the size of their aggregate portfolio. Because under a VAR system, when bad things happen, the way to make sure you don't lose any more than a certain amount of money is to shrink all of your portfolio. You "randomly” have to shrink the size of the portfolio. You may, in some cases, be able to decide which portions you want to shrink, but usually what you'll end up shrinking is the most liquid part of the portfolio. You get out of the things that you can get out of. We've seen this time and time again, particularly in the mortgage market, where after portfolio shrinkage all we are left with is the toxic waste.

Explicitly, what we see is that Brazilian Brady Cs begin being sold as a hedge against things that happened in Korea, because the same people owned both Brady Cs and Korean bonds. As anyone with a well-diversified emerging market portfolio would, they start selling Brady bonds. All of a sudden, Brady bonds begin moving down. That triggers other sales, so you end up having contagion effects. Then you can see that if everybody has a similar portfolio, everybody can't shrink their portfolio at once, because, in this world, the major fallacy of diversification is that somebody else has to be outside of the ostensibly diversified system to hold the risk. And those value buyers, whoever they may be, are traditionally slow to come into play.

So, for a long period, it's the famous Keynesian analogy. When you go to the football stadium and you can't see because somebody's standing in front of you, you get up and you can see the game. But if everybody gets up, nobody can see the game. And this is exactly what happens when you have a VAR system where people have similar portfolios and they all try to shrink their portfolios at the same time.

And then we coin the phrase contagion: people start looking on their screens and start trading U.S. bond futures against where the Malaysian stock market is trading. Thus you see how the contagion becomes concretized in traders' actions.

Ron Dembo: There's nothing wrong with an attempt to look at risk scientifically, or to try and find some scientific measures. I don't think it's a futile exercise, given the amount of data and information in financial markets.

Value-at-risk, per se, is a good idea. But the way it's measured today, VAR is bad news because the calculation errors can be enormous. Often the number that is computed is almost meaningless. In other words, the number has a large standard error. If you take a particular methodology like RiskMetrics and you compare our RiskMetrics implementation with 10 others, you'd find 10 different numbers. And the differences in the numbers can be large.
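To make that concrete, here is a toy comparison, on simulated returns rather than any vendor's data, of two common ways of computing the same 95 percent VAR; with fat-tailed data the two figures routinely disagree:

```python
import numpy as np

rng = np.random.default_rng(1)
# Fat-tailed daily returns (Student-t, 3 degrees of freedom), scaled to ~1% volatility
returns = 0.01 * rng.standard_t(df=3, size=500) / np.sqrt(3.0)

# Approach 1: parametric (variance-covariance), which assumes normality
var_parametric = 1.645 * returns.std()

# Approach 2: historical simulation, the 5th percentile of observed returns
var_historical = -np.percentile(returns, 5)

print(f"parametric 95% VAR:  {var_parametric:.4%}")
print(f"historical 95% VAR:  {var_historical:.4%}")
# Same data, same confidence level; the two "VAR numbers" generally disagree.
```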

I also find a real problem with the idea that one can forecast a correlation matrix. If you try and forecast the correlation matrix, you've got a point estimate in the future. The errors that we've seen, resulting from correlation effects, dominate the errors in market movements at the time. So the correlation methodology for VAR is inherently flawed. It is also flawed because it requires the use of a scaling factor to value forward in time.

Any methodology that's purely historically based, such as RiskMetrics, for example, is bound to fail. One needs to treat correlation as a random matrix. There are methodologies that do this. When real crashes occur they result in massive changes in correlation, which are quite difficult to forecast.

Let's look at the examples cited here today—the Mexican situation or the Korean situation. If you had taken the last 30 days of data, as RiskMetrics might, they would have been meaningless in terms of helping you measure your risk going forward.

So, there's nothing wrong with the concepts. There's nothing wrong with going to a scientific measure. What's wrong is relying on measures that have so much standard error. Comparing VAR numbers across systems is like comparing apples to oranges. Regulators are allowing banks to allocate capital based on these numbers. We can't get away from it. But we've got to recognize the fact that the numbers are really flaky.

Putnam: I do not disagree that the numbers are flaky. But it is possible to put a confidence band or a probability distribution around your forecasts of risks and start to think about decision-making under uncertainty. If you do that, then you have some sense of how flaky the numbers really are, and you can monitor changes in them in the same way.

The process at CDC is a basic version of modern portfolio theory and Bayesian statistics. We assume that correlations are random variables and then we let them evolve through time in our quantitative tools that are simultaneously, in an integrated and consistent fashion, also generating excess return and risk forecasts. That integrated process gives us not only forecasts of risks and correlations, but a probability distribution as well. When these quantitative tools send a message about changes in the risk environment, they are giving us information about the level of confidence. Essentially, our perspective is that in these times of integrated global markets, one must spend as much time on forecasting risks and correlations as on forecasting excess returns.
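As a rough sketch of the general idea of letting correlations evolve through time, here is a generic exponentially weighted estimate on made-up data; it is not CDC's proprietary process, just the textbook technique:

```python
import numpy as np

def ewma_corr(x, y, lam=0.94):
    """Exponentially weighted correlation: recent data counts more,
    so the estimate is itself a time series rather than one fixed number."""
    cov = var_x = var_y = 1e-10
    path = []
    for xi, yi in zip(x, y):
        cov   = lam * cov   + (1 - lam) * xi * yi
        var_x = lam * var_x + (1 - lam) * xi * xi
        var_y = lam * var_y + (1 - lam) * yi * yi
        path.append(cov / np.sqrt(var_x * var_y))
    # Early values are burn-in noise; later ones adapt to regime shifts.
    return np.array(path)

rng = np.random.default_rng(2)
a = rng.normal(0, 0.01, 500)
b = 0.3 * a + rng.normal(0, 0.01, 500)   # two hypothetical, loosely related markets
print(ewma_corr(a, b)[-1])               # the latest, time-varying correlation estimate
```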

Taleb: It all comes back to one problem. If the distributions were stationary and you had a recurring event that happens every five years or so, the world would be a better place. Because then we would, in 20 or 30 years, detect these problems and build defenses against them.

The problem is that the distributions are not stationary. What is alarming is not the fact that we have a standard error, as Ron mentioned so eloquently. The problem is that we don't quite know what the standard error is. When I use a thermometer, I may be aware that there is one or two degrees of error in my measurement of the temperature. But here, I don't know much about the instrument, particularly when it comes to rare events.

Finally, we should not confuse risk with variance. Most people believe that risk is variance. Risk is not variance, except for a symmetric normal distribution. Risk is what can really hurt you. What can hurt you is a large move down, and these are entirely uncharted waters for us.

Kolman: Philippe, do you want to respond to any or all of these comments?

Jorion: I think we should remember what VAR does and what it does not do. If you have 95 percent confidence on your VAR number, you should expect it to be exceeded on about one day out of 20. It doesn't give you the worst loss, but an estimate of the range of possible gains and losses. There's actually no way to compute the worst loss, because if you go back in time—you can go back to the crash of 1987, and you can go back to the depression of 1929—you'll always find losses far out in the tail of the distribution. If that's the source of the problem, then you should do stress testing in addition to VAR.
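To illustrate the "one day in 20" point, here is a small backtesting sketch on simulated P&L (hypothetical numbers) that simply counts how often losses breach a 95 percent VAR:

```python
import numpy as np

rng = np.random.default_rng(3)
pnl = rng.normal(0.0, 0.01, 1000)   # hypothetical daily P&L, as a fraction of capital
var_95 = 1.645 * pnl.std()          # one-day 95% parametric VAR

breaches = int(np.sum(pnl < -var_95))
print(f"{breaches} breaches in {len(pnl)} days (expected about {0.05 * len(pnl):.0f})")
# Far more breaches than expected suggests the model understates risk; but even a
# well-calibrated 95% VAR says nothing about how bad the breach days are, which is
# why Jorion points to stress testing for the worst case.
```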

There was another point mentioned that I think is valid. Five years ago, traders were paid a bonus on their profits, but the bonus didn't take risk into account. They were given free options. Of course, the value of an option goes up with volatility. So traders have an incentive to increase the volatility of their positions, just as banks have an incentive to increase their risk, like savings and loans. It's the same problem.

The way to solve that is to impose a penalty on traders for the risk they are taking. That's exactly what VAR is doing. Now we say, "Okay, fine, you can take risks, but then you are going to have a penalty, using a measure that tries to take the position into account.”

Then, of course, traders will ask, "How am I going to try to game the penalty?” They are going to try to come up with an investment position that shows no VAR. I agree that it is a problem: traders have an incentive to game the system. And that's why I wouldn't recommend that VAR be simply a computer system that spits out a number. You need to have people who understand what the positions are. So in that respect, I think there is a lot of common ground between the two sides of the table.

I think, however, that it is better to try to quantify than to do nothing. The focus should be on trying to find better predictive measures of risk. If you look at the EMS crisis of September 1992, you'll find that the historical data give you no information about risks. But if you go to the option data, you'll find that the implied volatility in option data gave a good forecast of the future risks.
