The World According to Robert Jarrow
Robert Jarrow is one of the leading derivatives theoreticians. In 1986,
he worked with David Heath, a fellow Cornell professor, and Andrew Morton,
a Ph.D. student, to develop the Heath-Jarrow-Morton option pricing model,
which is now used extensively in the derivatives world. He is currently
the Ronald P. and Susan E. Lynch Professor of Investment Management at Cornell
University. He has also served as a consultant to a number of derivatives
dealers and software developers and is currently director of research for
Kamakura Corp. Jarrow has published four books: Option Pricing (1988); Finance
Theory (1988); Modelling Fixed-Income Securities and Interest Rate Options
(1996); and Derivative Securities (1996). He spoke with editor Joe Kolman.
Derivatives Strategy: There seems to be quite a difference of
opinion in the derivatives community about how to value certain kinds of
interest rate products. There is your model, the Heath-Jarrow-Morton model; the Hull and White model; and others. Why is all this so important?
RJ: First, the volume of trading in the OTC derivatives market
is enormous. Lots of wealth changes hands in trading every day. Millions
or billions of dollars, theoretically, are at stake. The sheer size of the
market makes the valuation important.
Second, many of the products that are traded are not tremendously liquid. If you don't have an active market for the instrument, it's difficult for
a market-maker to come up with a price, and to come up with another instrument
to hedge it. In essence, all a dealer is doing is selling or buying an instrument, taking a spread and then eliminating the risk. To eliminate the risk, you
have to create the same product synthetically in the opposite direction.
Different models will give different recipes for synthetic construction.
How well you hedge or eliminate the risk depends on the model.
DS: How would you distinguish the different models?
RJ: There are a number of different dimensions upon which these
models can be distinguished. The first level is the number of factors. A
factor is an abstract notion. It represents an economic force impacting
the movement of interest rates. Monetary policy is an example of a factor
that's influencing interest rates. A two-factor model would imply that there
are two economic forces. The two factors could be correlated but still independently
influence interest rates; for example, monetary policy and government spending.
The more factors, the more realistic the model is thought to be.
Another dimension that distinguishes models is the actual evolution of
what's called the spot rate of interest or the forward rate curve. Different
types of evolutions reflect judgments concerning future movements of interest
rates. Do you allow for negative interest rates, for example? Do you want
the evolution to have only parallel shifts in the yield curve or to have
much richer shifts over time?
DS: Ho and Lee had the first answer for that problem.
RJ: The Ho-Lee model was the first term-structure model developed in this area. It was published in 1986. The Ho-Lee model was a significant
improvement over what came before, but it had a number of failings. It was
a one-factor model, and the way the term structure evolved over time was
through parallel shifts. Parallel shifts of the term structure are an acceptable
first approximation, but...
DS:...it's not realistic because the interest rates across the
yield curve don't move in parallel.
RJ: That's correct. It had a parallel shift, which was not realistic. The Ho and Lee model was also formulated in discrete time, using a binomial
lattice. A binomial lattice is a method for calculating interest rate derivative
values on a computer. It is discrete time because, in a binomial lattice,
time is divided into discrete intervals of equal length, say a week, and
cash flows can only occur at these time intervals. It is called binomial
because at each time interval, only one of two possibilities can occur.
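As a sketch of the idea (the drift and volatility figures below are assumed for illustration, not calibrated values), a recombining short-rate lattice in the spirit of Ho-Lee can be built as follows:

```python
import math

# A minimal sketch of a recombining binomial lattice for the short rate,
# in the spirit of Ho-Lee. All parameters are illustrative assumptions.
def short_rate_lattice(r0, drift, sigma, dt, steps):
    """Return a list of levels; level i holds the i + 1 short rates
    reachable after i up/down moves (up-then-down and down-then-up
    recombine into the same node)."""
    lattice = []
    for i in range(steps + 1):
        # j counts the number of up moves out of i total moves.
        level = [r0 + drift * i * dt + sigma * math.sqrt(dt) * (2 * j - i)
                 for j in range(i + 1)]
        lattice.append(level)
    return lattice

lattice = short_rate_lattice(r0=0.05, drift=0.001, sigma=0.01, dt=1 / 52, steps=3)
for i, level in enumerate(lattice):
    print(i, [round(r, 5) for r in level])
```

Note that nothing in this construction keeps the rates positive, consistent with the interview's point that allowing negative rates is a modeling choice one must confront.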
The discrete time nature of the model turned out to be a limitation as
well. This is because the model was parameterized in terms of this binomial lattice. By parameterized, I mean that the parameters to be estimated were those of the binomial lattice itself: the probability of interest rates moving up in a week to a single value, and down in a week to a single value.
It is difficult to conceptualize how that corresponds to real movement
of interest rates. So to understand a discrete time model, one often looks at the limit of the model as the time intervals get small, and Ho-Lee did not do that. So it was very difficult to estimate the parameters in the original formulation of the model. That was the motivation for HJM.
We looked at the continuous time limit of Ho-Lee and also looked at generalizing
it to multiple factors.
DS: And the Hull and White model?
RJ: Hull and White were actually working on their model about
the same time we were. The time at which papers are published doesn't necessarily
correspond to the time the working papers are being circulated.
They overcame the problem in a slightly different way. Their approach
is often called the spot rate approach, because it concentrates on the evolution
of the spot rate of interest. That is the interest rate that can be earned
over an instant in time, without risk. It's a conceptualization. They allowed
for a richer evolution of the spot rate of interest than that implied by
the discrete-time Ho and Lee model.
The HJM model, however, is so general that it's possible to understand
any approach, including Hull and White, within it. The HJM model has an
arbitrary number of factors, and arbitrary evolutions for the term structure
of interest rates. The Hull and White approach, on the other hand, is much more restricted in this regard. You can actually view the Hull and White model
as a special case of HJM.
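The sense in which Hull-White sits inside HJM can be sketched precisely (the notation here is mine, not from the interview). In a one-factor HJM model, the instantaneous forward rate $f(t,T)$ evolves as

```latex
df(t,T) = \alpha(t,T)\,dt + \sigma(t,T)\,dW_t,
\qquad
\alpha(t,T) = \sigma(t,T)\int_t^T \sigma(t,u)\,du,
```

where the second equation is the no-arbitrage drift restriction under the risk-neutral measure. Choosing the exponentially damped volatility $\sigma(t,T) = \sigma e^{-a(T-t)}$ recovers the Hull-White (extended Vasicek) spot rate dynamics, which is why Hull-White can be viewed as a special case of HJM.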
DS: Your model was criticized for being impractical when it first came out.
RJ: That was a misconception. Actually, let me try to correct
a couple of misconceptions about HJM. Early in the development of HJM, the
industry thought that HJM models required multiple factors, and always assumed
non-negative interest rates. This is not necessarily true. Another misconception
is that HJM models are quite difficult to compute. Conceptually it may be
the better model, but if you can't compute with it, then of what use is it?
The first myth can be debunked by pointing out that the HJM model is
really a class of models. Within this class there are subcases in which
computation is quite easy. For example, some subcases don't necessarily have to have positive interest rates. Furthermore, multiple factors do not make computation impossible. Computing technology has advanced to such a state
that it is easy to compute with two or three factors. When you start getting
more, four or five, then computation does become an issue.
DS: What do you do when you need to compute with a model with
more than four factors and non-negative interest rates?
RJ: Sometimes the only way you can compute with these models is with trees.
Trees are a generalization of a binomial lattice. A binomial lattice
can be thought of in the following way. At each node of a binomial tree,
one of two things can happen: you branch up or down. It's called a lattice because over two steps, the same node is reached by an up/down as by a down/up move; the branches recombine.
Lattice computation is fast relative to computations in a tree. For some models in the HJM class, the best way to compute it is using a tree. A tree
grows exponentially, whereas a lattice grows linearly. Exponential growth
slows computation time.
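The growth difference is easy to see by counting nodes: after n steps, a recombining lattice has n + 1 nodes at the final level, while a non-recombining binary tree has 2^n. A minimal illustration:

```python
# Node counts at step n: a recombining binomial lattice has n + 1 nodes,
# while a non-recombining binary tree has 2**n (every path gets its own node).
def lattice_nodes(n):
    return n + 1

def tree_nodes(n):
    return 2 ** n

for n in (1, 5, 10, 20):
    print(n, lattice_nodes(n), tree_nodes(n))
```

At 20 steps the lattice has 21 terminal nodes but the tree has over a million, which is why tree-based computation needs the efficiency tricks discussed below.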
DS: How do you overcome computational complexity?
RJ: There has been a lot of research in efficient ways of computing with respect to trees. David Heath, my colleague, has spent a lot of time
thinking about that. He has actually developed an efficient computer code
for computing with trees, demonstrating its feasibility. For example, one
needs to use tricks, such as unequal time steps, and to be careful about where the
time steps occur during the life of an interest rate product. Sometimes
the code has to be designed specific to the interest rate product. But with
tricks and the existing computer technology, computing with trees is feasible.
DS: A lot of traders like to think instinctively and prefer to
use the simplest models possible.
RJ: But there is a danger. You can use a model that's too simple for the product at hand. It can give incorrect prices, precisely because the model is too simple.
Let's say you're dealing with a spread option, where you're betting that long rates are going to move differently from short rates. A one-factor
model can't capture that because it basically has all the rates moving up
together or down together. It may be in different proportions, but they
would all be moving in the same direction. So the spread option would have
very little value under this model.
To price and hedge a spread option accurately, you are going to need
at least a two-factor model. The richer the model, the more realistic the
model, the better the hedging is going to be. And there's a tradeoff between
the richness of the model and the computational complexity.
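A stylized Monte Carlo sketch makes the point; every input here (rate levels, volatilities, strike, the payoff itself) is an assumption for illustration only. With one factor, the short and long rates move off the same shock and the spread barely varies; with two independent factors, the spread has real volatility and the option is worth noticeably more:

```python
import random

# Stylized comparison: value of a payoff max(long - short - K, 0) when
# terminal rates are driven by one common factor vs. two independent
# factors. All numbers are assumed for illustration, not calibrated.
random.seed(0)

def spread_option_value(n_factors, n_paths=100_000, k=0.01):
    sigma_short, sigma_long = 0.01, 0.008  # assumed rate volatilities
    total = 0.0
    for _ in range(n_paths):
        z1 = random.gauss(0, 1)
        # One factor: both rates share the same shock (in different
        # proportions). Two factors: independent shocks.
        z2 = z1 if n_factors == 1 else random.gauss(0, 1)
        short = 0.05 + sigma_short * z1
        long_ = 0.06 + sigma_long * z2
        total += max(long_ - short - k, 0.0)
    return total / n_paths

v1 = spread_option_value(1)
v2 = spread_option_value(2)
print(v1, v2)  # the two-factor value is substantially larger
```

Under one factor the spread's volatility collapses to the small difference of the two rate volatilities, so the option looks nearly worthless; the two-factor model restores the spread risk the product actually carries.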
DS: What's the tradeoff?
RJ: The more realistic the model, the more time-consuming it is
to compute and estimate, and to understand and use intelligently. People
tend to want to use the simplest model for the product and the application
at hand so that intuition can play a part. If the models become too complex,
you lose a lot of the intuition.
DS: Do you think that this is really a serious problem-that dealers are using models that are too simple for their needs?
RJ: I think of it as an evolutionary process. You'll always move
to a more complex model or approach when there's a value in doing so. It's
a chicken-or-egg problem. As interest rates become more volatile, moving to a more complex model becomes important. This need drives the development of newer models and their implementation. On the other hand, the development
of newer models and implementation creates a need as well, because as you
know better how to create synthetics, you are more willing to trade. Why?
You understand the risks.
DS: In addition to doing a lot of work in interest rate models,
you're also doing a lot of work with asset liability problems that financial
institutions such as regional banks face. How do their problems differ from the problems faced by derivatives dealers?
RJ: The problems are related, but different. In both cases you're dealing with products whose value changes with interest rates. The over-the-counter
derivatives products are priced theoretically, such that the net present
value of the interest rate derivative is zero. You basically find the price
that the instrument has to have so that there are no arbitrage opportunities.
When there are no arbitrage opportunities, it means the price is fair. If
a buyer and a seller think the price is fair, the net present value of the
product is zero. That's how the models work in the area of interest rate derivatives.
It's different in the commercial banking arena. A commercial bank can issue nonmaturity demand deposits, time deposits, savings accounts,
checking accounts, NOW accounts. You have to be a commercial bank to do
that. Economists would say there are barriers to entry. Not everyone can
issue these securities, and consumers are willing to forgo some interest
rate payments in return for a certain amount of service.
DS: ...and the guarantees.
RJ: Right. The interest rate products that a bank issues have
a positive net present value, because the bank is not paying "market rates"; it is paying rates a little lower. So the bank can take that dollar and invest it at a market rate, and pocket the difference.
The more deposits they have, the more they can collect on that difference. All the existing modeling technologies used for interest rate products explicitly assume net present values of zero. If you were to apply these same technologies
to demand deposits, you'd value a dollar demand deposit at a dollar, that
is, zero net present value. We know that is not the case.
The way I've modeled this positive net present value is by introducing
the notion of a segmented market. It's the same notion as having a barrier
to entry. It says there are rents to be earned from issuing demand deposits.
"Rent" is an economic term for the profit one gets from holding a special monopoly position.
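A back-of-the-envelope version of that rent (balance, rates and horizon all assumed, and the deposit balance held fixed for simplicity) is just the present value of the spread the bank earns between the market rate and the deposit rate:

```python
# Stylized "rent" on a demand deposit: the bank pays depositors a
# below-market rate and reinvests the balance at the market rate.
# All inputs are illustrative assumptions; real balances vary with rates.
def deposit_rent_pv(balance, market_rate, deposit_rate, years):
    pv = 0.0
    for t in range(1, years + 1):
        spread_income = balance * (market_rate - deposit_rate)
        pv += spread_income / (1 + market_rate) ** t
    return pv

pv = deposit_rent_pv(balance=100.0, market_rate=0.05, deposit_rate=0.02, years=10)
print(round(pv, 2))
```

Note the contrast with the zero-NPV pricing of dealer products: here the deposit is worth more than its face value to the bank, and a fuller model would also let the balance and the deposit rate respond to interest rate moves.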
DS: It's like a franchise.
RJ: That's right. Basically what happens is that within this model, we allow the banks to earn this rent. The rent is then priced using these
interest rate models in a consistent fashion.
DS: How does this differ from the traditional ways of dealing
with asset liability issues?
RJ: The traditional ways are outdated and a little naive. The
key problem is trying to understand how the demand deposit balances and
the rates paid on demand deposits change with interest rates.
DS: This sounds a little like the problem of how do you model
mortgage-backed securities and the consumer behavior on mortgages.
RJ: Yes, it's very similar. There you have irrational prepayment, which is the term used to describe prepayment behavior that we can't model
as a function of interest rates. It's important to value this prepayment
behavior when pricing mortgage-backed securities. In the pricing of demand
deposits, there is the option to withdraw the funds. The traditional approaches
to asset liability modeling don't really handle this option properly. They
are much too simplistic.