The Jorion-Taleb Debate
The Q&A in our December/January issue, "The World
According to Nassim Taleb," contained a carefully reasoned attack on
Value-at-Risk and the methodology behind it. It has already inspired a slew
of talk on web pages across the derivatives world-and an equally reasoned
defense from Philippe Jorion, one of the leading experts on Value-at-Risk.
Here's Jorion's response, and Taleb's rebuttal. If you'd like to join the
debate, please email us at firstname.lastname@example.org. We'll publish more responses
in future issues.
In Defense of VAR
By Philippe Jorion
In a recent interview in Derivatives Strategy, Nassim Taleb delivered
a blistering attack on Value-at-Risk (VAR). The gist of the message was
that VAR is utterly useless as a risk management tool, as is much of the
field of financial engineering. This view is somewhat unusual given the
widespread interest in VAR, which is now used extensively by U.S. financial
institutions. It will be extended further following the recent Securities
and Exchange Commission ruling
that public corporations must disclose quantitative information about
their derivatives activity. All of this effort would be wasted if VAR were indeed useless.
In his discussion, Nassim Taleb brings up some important points that
are too often ignored and should be re-emphasized. I take issue, however,
with a number of other arguments.
First, the purpose of VAR is not to describe the worst possible outcomes. It is simply to provide an estimate of the range of possible gains and losses.
Many derivatives disasters have occurred because senior management did not
inquire about the first-order magnitude of the bets being taken. Take the
case of Orange County, for instance. There was no regulation that required
the portfolio manager, Bob Citron, to report the risk of the $7.5 billion
investment pool. As a result, Citron was able to make a big bet on interest
rates that came to a head in December 1994, when the county declared bankruptcy
and the portfolio was liquidated at a loss of $1.64 billion. Had a VAR requirement
been imposed on Citron, he would have been forced to tell investors in the
pool: "Listen, I am implementing a triple-legged repo strategy that
has brought you great returns so far. However, I have to tell you that the
risk of the portfolio is such that, over the coming year, we could lose
at least $1.1 billion in one case out of 20."
The advantage of such a statement is that this quantitative measure is
reported in units that anybody can understand-dollars. Whether the portfolio
is leveraged or filled with derivatives, its market risk can be conveyed
to a nontechnical audience effectively.
It is fairly clear that, had such an announcement been made, investors
would have been more careful about investing in the pool (or would have
disciplined Citron). In addition, it would have been harder for investors
to claim they were misled. Fewer lawsuits would have been filed. Perhaps
other embarrassing debacles such as Barings, Procter & Gamble or Gibson
Greetings would have been avoided. Derivatives disclosure should increase
transparency and stability in financial markets.
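The dollar figure in a statement like Citron's can be reproduced with a back-of-the-envelope parametric VAR. The sketch below is illustrative only: the 9 percent annual volatility is an assumption chosen to match the article's numbers, not Orange County's actual risk profile.

```python
from statistics import NormalDist

# Hypothetical parametric (delta-normal) VAR behind a statement like
# "we could lose at least $X in one case out of 20".
# The volatility figure is an assumption for this sketch.
portfolio_value = 7.5e9      # dollars
annual_volatility = 0.09     # assumed annual return volatility (9%)
confidence = 0.95            # "one case out of 20"

z = NormalDist().inv_cdf(confidence)           # ~1.645 for 95%
var_dollars = z * annual_volatility * portfolio_value

print(f"1-year 95% VAR: ${var_dollars / 1e9:.2f} billion")
```

Under these assumed inputs the calculation lands near the $1.1 billion figure quoted above, which is the point of the exercise: the risk of a leveraged pool can be stated in dollars that anyone can read.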
VAR has other benefits as well. By now, all U.S. commercial banks monitor the VAR of their trading portfolios on a daily basis. Suppose a portfolio
VAR suddenly increases by 50 percent. This could happen for a variety of
reasons-market volatility could have increased overnight, a trader could
be taking inordinate risks, or a number of desks could be positioned on
the same side of a looming news announcement. More prosaically, a position
could have been entered erroneously. Any of these factors should be cause
for further investigation, which can be performed by reverse-engineering
the final VAR number. Without it, there is no way an institution can get
an estimate of its overall risk profile.
The Orange County example, however, points out one of the limitations
of VAR, which is inherent in the definition. We would expect situations
in which the range of VAR is exceeded, for instance in 5 percent of the
cases using a 95 percent confidence level. This was the case in Orange County,
where a particularly volatile bond market led to a loss of $1.6 billion,
in excess of the VAR estimate. Practically speaking, there is no way to
provide an estimate of the absolute worst outcome (in the same sense that
the tails of continuous probability distributions are unlimited). Nor should
we expect an institution to be protected against all possible losses, however
unlikely. As Federal Reserve Chairman Alan Greenspan stated, "When
market forces...break loose of economic fundamentals, ...sound policy actions,
and not just bank capital, are necessary to preserve financial stability."
Still, VAR must be complemented by stress-testing. This involves looking at the effect of extreme scenarios on the portfolio. This is particularly
useful in situations of "dormant" risks, such as fixed exchange
rates, which are subject to devaluations. Stress-testing is much more subjective
than VAR because it poorly accounts for correlations and depends heavily
on the choice of scenarios. Nevertheless, I would advocate the use of both methods.
A second misconception raised in the discussion is that VAR involves
a covariance matrix only and does not work with asymmetric payoffs. This
is not necessarily the case. A symmetric, normal approximation may be appropriate
for large portfolios, in which independent sources of risk, by the law of
large numbers, tend to create normal distributions. But the delta-normal
implementation is clearly not appropriate for portfolios with heavy option
components, or exposed to few sources of risk, such as traders' desks. Other
implementations of VAR do allow asymmetric payoffs.
A third, more specific point is that the VAR approach is useless because volatilities and correlations change over time. This is debatable. Even
when changes occur, the degree of precision in daily volatilities is much
higher than that in expected returns. Traders routinely take positions based
on views that are even less reliable than risk measures. It is difficult
to tell whether traders are right or wrong; we do know, however, when they
are taking large risks. Also, we have successfully learned to model volatilities
that change over time (such as using GARCH models). Even better, we can
use risk measures implied from option data, which are the best forecasts
one could expect. It seems to me that our goal should be to try to improve
our risk forecasts, instead of discarding the whole VAR approach and relying
on "market lore."
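One simple, widely used way to "model volatilities that change over time" is an exponentially weighted moving average of squared returns, the scheme popularized by the RiskMetrics methodology of the same period. A minimal sketch, with made-up returns and the conventional 0.94 daily decay factor:

```python
# Exponentially weighted (RiskMetrics-style) volatility forecast.
# The decay factor and the sample returns are illustrative assumptions.
def ewma_volatility(returns, lam=0.94):
    """Recursive variance update: var_t = lam * var_{t-1} + (1 - lam) * r_t^2."""
    var = returns[0] ** 2          # seed with the first squared return
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return var ** 0.5

daily_returns = [0.001, -0.004, 0.012, -0.020, 0.003]  # hypothetical
print(f"forecast daily vol: {ewma_volatility(daily_returns):.4%}")
```

The recursion weights recent squared returns most heavily, so the forecast rises quickly after a turbulent day and decays afterward, which is exactly the time variation at issue in this paragraph.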
Taleb also discusses a more general issue, which is that of the usefulness of scientific improvements. His point is that VAR is useless because it
is not perfect (unlike measures in the physical sciences). Admittedly, VAR
is not perfect. Our world, however, is constructed by engineers, not physicists.
And engineering has been described as the "art of the approximation."
The same definition applies to VAR. In fact, risk managers are less concerned
about precision than the traders who have to price derivatives. The advent
of derivatives has been compared to allowing us to drive at a faster speed
in financial markets. VAR is like a wobbly speedometer. Even so, it gives
a rough indication of speed. Derivatives disasters have occurred because
drivers or passengers did not worry at all about their speed. Of course,
there can be other sources of crashes (such as blown tires, for instance).
Such accidents can be compared to operational risks, against which VAR provides
no direct protection. Still, a wobbly speedometer is better than nothing.
Finally, let me turn to one issue on which we agree (at last). Nassim
Taleb points out an important problem, what I would call the "VAR dialectic"
issue. If a risk manager imposes a VAR system to penalize traders for the
risks they are incurring, traders may have an incentive to "game"
their VAR. In other words, they could move into markets or securities that
appear to have low risk for the wrong reasons. For instance, currency traders
in 1994 could have taken large positions in the Mexican Peso, which had
low historical volatility but high devaluation risk. Or, traders exposed
to a delta-normal VAR could take short straddles with zero delta (like Barings'
Leeson); this position appears profitable, but only at the expense of future
possible losses that may not be captured by VAR.
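The Leeson-style gaming of a delta-normal VAR can be illustrated with a toy short straddle: at the strike its delta is roughly zero, so a delta-based measure reports almost no risk, while a large move in either direction produces a loss far exceeding the premium collected. The strike and premium below are made-up numbers.

```python
# Toy short-straddle P&L at expiry: collect the premium, pay out the
# absolute distance of spot from strike. All figures are illustrative.
def short_straddle_pnl(spot, strike=100.0, premium=4.0):
    return premium - abs(spot - strike)

print(short_straddle_pnl(100.0))  # at the money: keep the full premium, 4.0
print(short_straddle_pnl(130.0))  # a large move: a loss of 26.0
```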
This adaptive response explains why the actual benefits from technical innovations are generally less than hoped for. Going back to the driving example, the
addition of safety features such as antilock brakes and airbags has saved
fewer lives than initially expected, because some drivers may be lulled
into a false sense of safety. Even so, the net effect of these innovations
is beneficial. My car has both antilock brakes and an airbag.
In the context of portfolio management, gaming by traders can be compared to the general problem of in-sample portfolio optimization, which is known
to create optimistic views of risk. I fully agree that this is a serious
limitation of VAR. This is why risk management is not simply a black box,
but a dynamic process in which a competent risk manager must be aware of
the human trait for adaptation.
It seems premature to describe VAR as "charlatanism." In spite of naysayers, VAR is an essential component of sound risk management systems.
VAR gives an estimate of potential losses given market risks. In the end,
the greatest benefit of VAR lies in the imposition of a structured methodology
for critically thinking about risk. Institutions that go through the process
of computing their VAR are forced to confront their exposure to financial
risks and to set up a proper risk management function. Thus the process
of getting to VAR may be as important as the number itself. These desirable
features explain the widespread view that the "quest for a benchmark
may be over."
Philippe Jorion is a professor of finance at the University of California at Irvine. He has a degree in "the art of the approximation"
(i.e. engineering) from the University of Brussels, and an MBA and Ph.D.
from the University of Chicago. He has published widely in academic and
practitioner-oriented journals. His latest book, Value at Risk: The New
Benchmark for Controlling Market Risks, was published by Irwin Professional
in late 1996.
By Nassim Taleb
Philippe Jorion is perhaps the most credible member of the pro-VAR camp. I will answer his criticism while expanding on some of the more technical
statements I made during the interview (Derivatives Strategy, December-January).
Indeed, while Jorion and I agree on many core points, we mainly disagree
on the conclusion: mine is to suspend the current version of VAR as potentially
dangerous malpractice, while his is to supplement it with other methods.
My refutation of VAR does not mean that I am against quantitative risk
management-having spent most of my adult life as a quantitative trader,
I learned the hard way the pitfalls of such methods. I am simply against
the application of unseasoned quantitative methods. I think that VAR would
be a wonderful measurement if financial models were designed for that purpose
and if we knew something about their parameters. The validity of VAR is
linked to the problem of probabilistic measurement of future events, particularly
those deemed infrequent (more than two standard deviations) and those that
concern multiple securities. I conjecture that the methods we currently
use to measure such tail probabilities are flawed.
The definition I used for the VAR came from the informative book by Philippe Jorion entitled Value at Risk: The New Benchmark for Controlling Market
Risks: "It summarizes the expected maximum loss (or worst loss) over
a target horizon within a given confidence interval." It is the uniqueness,
precision and misplaced concreteness of the measure that bother me. I would
rather hear risk managers make statements like "at such price in security
A and at such price in security B, we will be down $150,000." They
should present a list of associated crisis scenarios without unduly attaching
probabilities to the array of events, until such time as we can show a better
grasp of probability of large deviations for portfolios and better confidence
with our measurement of "confidence levels." There is an internal
contradiction between managing risk (that is, standard deviation) and using
a tool with a possibly higher standard error than that of the measure itself.
I find that the professional risk managers I heard recommend a "guarded" use of the VAR on grounds that it "generally works" or "it
works on average" do not share my definition of risk management. The
risk management objective function is survival, not profits and losses (see
rule of thumb No. 8). One trader, according to Chicago legend, "made
$8 million in eight years and lost $80 million in eight minutes." According
to the same standards, he would be "in general," and "on
average," a good risk manager.
Nor am I swayed by the usual argument that the VAR's widespread use by
financial institutions should give it a measure of scientific credibility.
Banks have the ingrained habit of plunging headlong into mistakes together
where blame-minimizing managers appear to feel comfortable making blunders
so long as their competitors are making the same ones. The state of the
Japanese and French banking systems, the stories of lending to Latin America,
the chronic real estate booms and busts, and the S&L debacle provide
us with an interesting cycle of communal irrationality. I believe that the
VAR is the alibi bankers will give shareholders (and the bailing-out taxpayer)
to show documented due diligence, claiming that their blow-up came
from truly unforeseeable circumstances and events with low probability-not
from taking large risks they did not understand. But my sense of social
responsibility will force me to point my finger menacingly. I maintain that
the due-diligence VAR tool encourages untrained people to take misdirected
risk with shareholders', and ultimately the taxpayers', money.
The act of reducing risk to one simple quantitative measure on grounds
that "everyone can understand it" clashes with my culture. As
rule of thumb No. 1 from "trader lore" recommends, do not venture
in businesses and markets you do not understand. I have no sympathy for
warned people who lose money in these circumstances.
Praising VAR because it would have prevented the Orange County and P&G debacles is a stretch. Many VAR defenders made a similar mistake. These
events arose from issues of extreme leverage-and leverage is a deterministic,
not a probabilistic, measurement. If my leverage is 10 to one, a 10 percent
move can bankrupt me. A Wall Street clerk would have picked up these excesses
using an abacus. VAR defenders make it look like the only solution, but
there are simpler and more reliable ones. Thanks to Ockham's razor, scientific
methodology does not allow the acceptance of a solution on casual corroboration
without first ascertaining whether more elementary ones are available (like
one you can keep on a napkin).
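The napkin calculation in question is a one-liner: with leverage L, the adverse move that wipes out equity is simply 1/L, and no probability distribution enters the argument. A minimal sketch with illustrative leverage ratios:

```python
# Deterministic leverage check: equity is exhausted when the adverse move
# in the assets equals 1/leverage. No statistics needed.
def wipeout_move(leverage):
    return 1.0 / leverage

for lev in (2, 10, 20):  # illustrative leverage ratios
    print(f"{lev}:1 leverage -> a {wipeout_move(lev):.0%} adverse move wipes out equity")
```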
I disagree with the statement that "the degree of precision in daily volatilities is much higher than that in expected returns." My observations
show that the one-week volatility of volatility is generally between 5 and
50 times higher than the one-week volatility (too high for the normal kurtosis).
Nor do I believe that the ARCH-style modeling of heteroskedasticity that
appeared to work in research papers, but has failed thus far in many dealing
rooms, can be relied upon for risk management. The fact that the precision
of the risk measure (volatility) is volatile and intractable is sufficient
reason to discourage us from such a quantitative venture. I would accept
VAR if indeed volatility were easy to forecast with a low standard error.
The Science of Misplaced Concreteness
As for the defense of engineering, I would like to stress that the applications of its methods to the social sciences in the name of progress have led
to economic and human disasters (see Joseph Stiglitz's Whither Socialism?
for a description of some of the arrogant uses of engineering methods in
economic policy). The critics of my position resemble the Marxist defenders
of a more "scientific" society who seized the day in the 1960s,
who portrayed Friedrich von Hayek as backward and "unscientific."
I hold that, in economics and the social sciences, engineering has been
the science of misplaced and misdirected concreteness. Perhaps old John
Maynard Keynes had the insight of the problem when he wrote: "To convert
a model into a quantitative formula is to destroy its usefulness as an instrument of thought."
If financial engineering means the creation of financial instruments
that improve risk allocation, then I am in favor of it. If it means using
engineering methods to quantify the immeasurable with great precision, then
I am against it.
During the interview I was especially careful to require technology to
be "flawless," not "perfect." While perfection is unattainable,
flawlessness can be, as it is a methodological consideration and refers
to the applicability for the task at hand.
Marshall, Allais and Coase used the term charlatanism to describe the
concealment of a poor understanding of economics with mathematical smoke.
Philosophers of science used the designation charlatanism in the context
of a theory that does not lend itself to falsification (Popper) or gradual
corroboration (the Bayesians). No self-respecting scientist ever thought
anyone would hold on to a falsified theory and no stronger word than charlatanism
was created. (I would have used it.) Using VAR before 1985 was simply the
result of a lack of insight into statistical inference. Given the fact that
it has been falsified in 1985, 1987, 1989, 1991, 1992, 1994 and 1995, it
can be safely pronounced plain charlatanism. The prevalence of between 7
and 30 standard-deviation events (using whatever information on parameters
was available before the event) can convince the jury that the model is
wrong. A hypothesis test between the validity of the model and the rarity
of the events would certainly reject the hypothesis of the rare events.
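The arithmetic behind this falsification claim is easy to check: under a normal model, a 7-standard-deviation daily drop is so rare that observing even one is strong evidence against the model. A rough sketch (the 250 trading days per year is an assumption):

```python
from statistics import NormalDist

# Back-of-the-envelope: if daily returns were normal, how long should we
# expect to wait for a single 7-standard-deviation down move?
p = NormalDist().cdf(-7)               # one-sided tail probability
days_per_year = 250                    # assumed trading calendar
years_to_wait = 1 / (p * days_per_year)
print(f"P(move < -7 sd) = {p:.2e}; expected wait ~{years_to_wait:.1e} years")
```

The expected wait runs to billions of years, so a model that is repeatedly "surprised" by such moves, as in the years Taleb lists, is rejected rather than refined.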
Trading as Clinical Research
Why do I put trader lore high above "scientific" methods? Traders Stan Jonas, Victor Niederhoffer and George Soros hold that trading is "lab-coat"
scientific research. I go beyond that and state that traders are clinical
researchers, like medical doctors working on real patients-a more truth-revealing
approach than simulated laboratory experiments. Most of the problems with
statistics, according to the consensus among science historians, lie in
the fact that observations are theory-laden (some, such as Feyerabend, believe
they are plain theory). An opinionated econometrician will show you (and
will produce) the data that will confirm his side of the story (or his departmental
party line). I hold that active trading is the only near-data-mining-free
approach to understanding financial markets. You only have one life and
cannot retrofit your experience. As a result, clinical experiences of the
sort are not just the best verifiable accounts of the accuracy of a method-they
are the only ones. Whatever the pecuniary motivation, trading is a disciplined,
truth-seeking proposition. We are trained to look into reality's garbage
can, not into the elegant world of models. Unlike professional researchers,
traders are never tempted to relax assumptions to make their models more tractable.
Option traders present the additional attribute of making their living
trading the statistical properties of the distribution, therefore carefully
observing all of its higher-order wrinkles. They are rational researchers
who deal with the unobstructed Truth for a living and get (but only in the
long term) their paycheck from the Truth without the judgment or agency
of the more human and fallible scientific committee.
Charlatanism: a Technical Argument
At an econometric level, the problem of VAR is whether the (properly
integrated) processes we observe are (weakly) stationary. If they are weakly
stationary, then ergodic theory states that we can estimate parameters with
a confidence level in some proportion to the sample size. Assuming "stationarity,"
for higher dimensional processes like a vector of uncorrelated securities
returns, and for Markov switching distributions with strong asymmetry, we
may need centuries, sometimes hundreds of centuries of data. Some people
compute a monstrous covariance matrix with limited data points, and make
up additional data using a poor application of the bootstrap technique (or
the Geman and Geman Gibbs Sampler). Clearly no amount of quantitative sophistication
will expand your information set-by a similar argument no amount of mathematical
knowledge will help me estimate someone's phone number.
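The dimensionality point can be made concrete by counting parameters: a covariance matrix over N securities has N(N+1)/2 distinct entries, which quickly dwarfs any realistic history. The universe size and sample length below are illustrative assumptions.

```python
# Parameter count for the "monstrous covariance matrix": it grows
# quadratically with the number of securities, while the data grow
# only linearly with the length of the available history.
def covariance_parameters(n_assets):
    """Distinct entries in a symmetric n x n covariance matrix."""
    return n_assets * (n_assets + 1) // 2

n_assets = 500                  # a modest trading-room universe (assumed)
days = 2 * 250                  # two years of daily data (assumed)
params = covariance_parameters(n_assets)
print(f"{params} parameters vs {n_assets * days} observations")
```

With these assumed numbers there are barely two observations per parameter before accounting for nonstationarity, which is the sense in which "no amount of quantitative sophistication will expand your information set."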
At a more philosophical level, the casual quantitative inference in use
in VAR (which consists of estimating parameters from past frequencies) is
too incomplete a method. Rule No. 1 conjectures that there is no "canned,"
standard way to explore stressful events-they never look alike because humans
adjust. It is indeed hard to reconcile standard naive inference (based
on past frequencies) and the dialectic of historical events (people adjust).
The crash of 1987 caused a sharp rally in bonds. This became a trap during
the mini-crash of 1989 (I was caught myself). Adjusting the VAR by "fattening
the tails" as an after-the-fact adaptation to stressful events that have
already happened is dangerously naive. Thus the VAR
is like a Maginot line. In other words, there is a tautological link between
the harm of the events and their unpredictability, since harm comes from
surprise. As rule of thumb No. 2 conjectures (see Box), nothing predictable
can be truly harmful and nothing truly harmful can be predictable. We may
be endowed with enough rationality to heed past events (people rationally
remember events that hurt them).
Furthermore, the simplified mean-variance paradigm was designed as a
tool to understand the world, not to quantify risk. This explains its survival
in financial economics as a pedagogical tool for MBA students. It is therefore
too idealized for risk management, which requires higher moment analysis.
It also ignores the forays made by market microstructure theory. As a market-maker,
the fact of having something in your portfolio can be more potent information
than all of its past statistical properties-securities do not randomly land
in portfolios. A bank's position increase in a Mexican security signifies
an increase in the probability of devaluation. The position might originate
from the niece of an informed government official trading with a local bank.
Having been picked off routinely, traders (who survived the sitting duck
stage) adjust for these asymmetric information biases better than the "scientific" risk manager.
The greatest risk we face, therefore, is that of the mis-specification
of financial price dynamics by the available models. The two-standard-deviations
(and higher) VAR is very sensitive to model specification. The sensitivity
is compounded with every additional increase in dimension (that is, in the
number of securities included). For portfolios of 75 securities (a small
portfolio for a trading room), I have seen frequent seven- and higher standard-deviation
variations during quiet markets. Thus VAR is not adapted for the brand of
diversified leverage we usually take in trading firms. I call this the risk
of incompleteness, or the model risk. A model might show you some risks,
but not the risks of using it. Moreover, models are built on a finite set
of parameters, while reality affords us infinite sources of risks.
Options may or may not deliver an estimation of the consensus on volatility and correlations. We can compute, in some markets, some transition probabilities
and, in some currency pairs with liquid crosses, joint-transition probabilities
(hence local correlation). We cannot, however, use such pricing kernels
as gospel. Option traders do not have perfect foresight, and, as much as
I would like them to, cannot be considered prophets. Why should their forecast
of the second moment be superior to that of a forward trader's future price?
I only see one use of covariance matrices: in speculative trading, where the bets are on the first moments of the marginal distributions, and where
operators rely on the criticized "trader lore" for higher moments.
Such a technique, which I call generalized pairs trading, has been carried
out in the past with a large measure of success by "kids with a Brooklyn accent."
A use of the covariance matrix that is humble enough to limit itself to
conditional expectations (not risks of tail events) is acceptable, provided
it is handled by someone with the critical and rigorous mind that develops
from the observation of and experimentation with real-time market events.
Nassim Taleb, a veteran option arbitrageur, is the author of Dynamic
Hedging: Managing Vanilla and Exotic Options. He holds an MBA from Wharton
and is soon to defend a Ph.D. thesis in option pricing at Universite Paris
Dauphine. His next book (coauthored with Helyette Geman) will be called
Applied Option Theory. Aside from option theory, Taleb is interested in
the philosophy of statistical inference.
Trader Risk Management Lore: Taleb's Major Rules of Thumb
Rule No. 1- Do not venture in markets and products you do not
understand. You will be a sitting duck.
Rule No. 2- The large hit you will take next will not resemble
the one you took last. Do not listen to the consensus as to where the risks
are (that is, risks shown by VAR). What will hurt you is what you expect the least.
Rule No. 3- Believe half of what you read, none of what you hear. Never study a theory before doing your own observation and thinking. Read
every piece of theoretical research you can-but stay a trader. An unguarded
study of lower quantitative methods will rob you of your insight.
Rule No. 4- Beware of the nonmarket-making traders who make a
steady income-they tend to blow up. Traders with frequent losses might hurt
you, but they are not likely to blow you up. Long volatility traders lose
money most days of the week. (Learned name: the small sample properties
of the Sharpe ratio).
Rule No. 5- The markets will follow the path to hurt the highest
number of hedgers. The best hedges are those you alone put on.
Rule No. 6- Never let a day go by without studying the changes
in the prices of all available trading instruments. You will build an instinctive
inference that is more powerful than conventional statistics.
Rule No. 7- The greatest inferential mistake: "This event
never happens in my market." Most of what never happened before in
one market has happened in another. The fact that someone never died before
does not make him immortal. (Learned name: Hume's problem of induction).
Rule No. 8- Never cross a river because it is on average 4 feet deep.
Rule No. 9- Read every book by traders to study where they lost
money. You will learn nothing relevant from their profits (the markets adjust).
You will learn from their losses.