The 1999 Derivatives, Risk Management and
Technology Roundtable

MODERATORS
Ed Berko, head of derivatives systems practice, financial risk management group, PricewaterhouseCoopers
Joe Kolman, editor, Derivatives Strategy
PARTICIPANTS
Roger D. Lang, director of global risk programs, Compaq Computer Corp.
Michael Kowalski, software development manager of the server group at Askari, a business unit of State Street Corp.
Sunil Panikkath, senior research economist, SAS Institute Inc.
Reto Tuffli, CEO, Centerprise Services Inc.
David Miller, director of risk management systems, BankBoston
David Hamilton-Brown, managing director of global debt capital markets and derivatives technology, CIBC World Markets
Julie Shapiro, vice president of VAR systems, Chase Manhattan Bank
Till Guldimann, vice chairman, Infinity, a SunGard company
Victor Masch, deputy director of market risk management, AIG Inc.
Emmanuel Fruchard, director of financial engineering, Summit Systems
Alex Tsigutkin, CEO, Axiom Software Laboratories Inc.
Michael Cappi, senior vice president, Quadrian Inc.
Val Tannen, professor of information technology, University of Pennsylvania

On June 14, more than 75 senior risk management and technology professionals gathered for a special roundtable discussion on technology and risk management, sponsored by PricewaterhouseCoopers, Compaq Computer Corp., and Infinity, a SunGard company.

Ed Berko: Managing risk across several different locations has led to a number of widespread failures, involving both risk managers and the technologists who support them. Many firms that have tried to consolidate data globally have fumbled badly, but those that have taken a more local approach have done no better. So what can we learn from these mistakes, and how have the most successful firms avoided them?

Roger D. Lang: The critical issue is clearly the integration of data and the timely reporting of data. At Compaq, we’ve noticed that the most successful projects use software that has been designed to do enterprise risk. A lot of products out there were developed for other things such as pricing and have been “morphed” into enterprise risk products. Systems developed for enterprise risk, however, have a strong emphasis on data integration, data architecture, data mapping and message enrichment—the important issues.

Michael Kowalski: It’s easy to talk about engineering an enterprise-wide risk management system whose goal is to get to a Nirvana of perfect integration of global data. But we know that technology cannot, in fact, engineer solutions to administrative problems. At the same time, however, there will be a flaw in the technology if we ignore the administrative problem or the fact that we are living in a less-than-perfect world.

“Many firms that have tried to consolidate data globally have fumbled badly, but those that have taken a more local approach have done no better.”
—Ed Berko
PricewaterhouseCoopers

I think one of the jobs of software should be to report ambiguities and to sense inadequate or incomplete data instead of simply crunching numbers. I also think there should be a responsible, hierarchical way of tagging the data from the multiple sources, so that when you produce a number, you’ll know if it’s the perfect number with Melbourne, Hong Kong, New York and Boston all integrated. Maybe today Melbourne was off-line for some reason. You may still have to put out a number, but it should be responsibly generated.

I don’t know of systems that are really good at qualifying their answers. It’s generally an all-or-nothing proposition. That’s partly a technical failure, but it’s also partly the result of the quest for all-or-nothing: either we’re going to have all of this enterprise data or we’re simply going to have a lousy number.
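
A minimal sketch, in Python, of the kind of qualified answer Kowalski describes, with all names and numbers invented for illustration: each regional feed carries its own status tag, and the aggregate is published together with a qualifier saying which sources it actually reflects.

```python
from dataclasses import dataclass

@dataclass
class RegionalFeed:
    """One source of risk data, tagged with its own status."""
    region: str        # e.g., "Melbourne" or "Hong Kong"
    exposure: float    # the number this source contributes
    online: bool       # did today's feed actually arrive?

def qualified_total(feeds):
    """Aggregate what is available, but say openly what is missing."""
    included = [f for f in feeds if f.online]
    missing = [f.region for f in feeds if not f.online]
    total = sum(f.exposure for f in included)
    qualifier = "complete" if not missing else f"excludes {', '.join(missing)}"
    return total, qualifier

feeds = [
    RegionalFeed("New York", 120.0, True),
    RegionalFeed("Boston", 45.0, True),
    RegionalFeed("Hong Kong", 80.0, True),
    RegionalFeed("Melbourne", 30.0, False),  # off-line today
]
total, qualifier = qualified_total(feeds)
print(f"Exposure: {total} ({qualifier})")  # Exposure: 245.0 (excludes Melbourne)
```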

Sunil Panikkath: Technology can help solve what have been described as administrative problems. In particular, well-proven technology exists to keep track of logistical problems associated with bringing together data from multiple sources. What is most important in this context is metadata—data about all aspects of data, such as sources of data, processes that transform the data, the owners of such processes, the systems on which data-handling is done and so on. Such an “administrator” software system can, for instance, intelligently and promptly notify the appropriate personnel when a particular link in a long data-handling chain fully or partially fails, for whatever reason, and the results at the end of the chain need to be reinterpreted. This is not wishful thinking. Such software exists and is widely used by many organizations.
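
A sketch of the “administrator” role Panikkath describes, with step names, owners and systems all invented: a registry of metadata about each link in the chain makes it possible to notify the right person when a link fails and to flag every downstream result as needing reinterpretation.

```python
from dataclasses import dataclass

@dataclass
class ChainStep:
    """Metadata about one link in a data-handling chain."""
    name: str    # what the step does
    owner: str   # who to notify when it fails
    system: str  # where the step runs

def run_chain(steps, run_step, notify):
    """Run each step in order; on failure, notify the owner and return
    the downstream steps whose results now need reinterpretation."""
    for i, step in enumerate(steps):
        if not run_step(step):
            notify(step.owner, f"step '{step.name}' on {step.system} failed")
            return steps[i + 1:]
    return []

steps = [
    ChainStep("FX feed -> staging", "ops-asia", "staging-db"),
    ChainStep("staging -> warehouse", "dw-team", "warehouse"),
    ChainStep("warehouse -> VAR report", "risk-tech", "risk-engine"),
]
suspect = run_chain(
    steps,
    run_step=lambda s: s.name != "staging -> warehouse",  # simulate a failure
    notify=lambda owner, msg: print(f"notify {owner}: {msg}"),
)
print("reinterpret:", [s.name for s in suspect])  # the VAR report is suspect
```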

Reto Tuffli: What’s extremely important for a successful systems solution is the integration of the risk management process with technology. Senior management usually has the right set of objectives in mind, and codifies them in organizational policies and procedures. The disconnect often comes when the process itself can’t be sufficiently monitored—and that’s where technology can help. It’s not simply a question of having metadata for the data flows going from traders to a central warehouse. Risk management includes the process of how responsibilities pass from one person to another, in terms of data ownership, reconciling the data, looking at risk management reports and so forth.

David Miller: I’d like to reinforce the point about the political unsexiness of data and how it’s affected by basic processes such as budgets. We’re all more interested in the enterprise-wide system than in what the enterprise-wide repository of information is and how data are collected, maintained and run. One difficulty is in getting the right budget groups—both technology and business—together. The technology people may say they don’t want to be in charge of integrity. But in the end, who is? Is it the feeding system? Is it the risk business group? These are the areas where a lack of political will, sponsorship and sufficient budgets is killing us.

Joe Kolman: So who should be in charge of the quality of data in a global operation? Should we practice risk management on a global basis or a local basis?

David Hamilton-Brown: If the assumption is that the system will be implemented in a complex institution with multiple locations, multiple systems and so on, local management also needs to be involved, especially in the initial implementation of the project. To get local management on board, a carrot-and-stick approach may work. A stick may be that senior corporate management will be mad if local risk managers don’t cooperate. A carrot may be telling local risk managers that the system will enable them to measure their real risk-adjusted performance and return. Another carrot might be a more flexible limits structure. If implementation of the risk management system leads to a value-at-risk-based set of limits, for instance, local management will be able to allocate more limits to more profitable businesses.

“Systems in which the producers of data don’t use the data are basically flawed—and central risk managers will always wind up with bad data.”
—Till Guldimann
Infinity

The answer to whether risk management should be global or local is: neither. We have to reflect on how the business is organized. We have a 24-hour trading window, so it’s vitally important to share the workload. If there are risk managers and traders in one location producing the numbers and having the first cut, the data will have to be refined on a global basis. But with everybody putting their shoulders to the wheel, solving the problem on a local basis and then consolidating later on, we’re going to make more effective use of the 24-hour window to get this stuff processed.

In some of our systems at CIBC World Markets, for instance, we have a global mirroring of databases around the world at the transaction level. When a trade is entered, it’s reflected in a mirrored database in various locations. As part of the daily process, the traders and middle-office people look at those databases and produce risk numbers for the traders themselves, while the regional risk managers produce the first cuts of the data.

I think the root of the problem is an approach that separates the corporation’s overall risk calculation system from the primary risk calculation methodologies and systems of the traders and middle-office people responsible for day-to-day risk control. Moving the data off the primary system, translating them into a separate location and then recalculating the risk doubles the workload and introduces complications into the process. We can solve this problem by having risk managers, traders and middle-office people look at the same systems.

We certainly need to have data warehouses to consolidate the data, but let’s focus on getting the data and calculation methods right in the original source systems. Then when we extract the data, it will be a simplified set of data because we’re not moving transactions around. We can move around risk data and the Greeks, and we can calculate VAR in those data warehouses as a secondary step.
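
A sketch of that secondary step under standard delta-normal assumptions, with exposures and covariances invented for illustration: the warehouse receives only net delta exposures per risk factor, not transactions, and computes VAR from them and a covariance matrix of daily factor returns.

```python
import numpy as np

def delta_normal_var(deltas, cov, z=2.33):
    """One-day VAR from net factor exposures: z * sqrt(d' * Sigma * d),
    with z = 2.33 for a 99% confidence level."""
    d = np.asarray(deltas)
    return z * np.sqrt(d @ cov @ d)

# Net exposures shipped from the source systems, in dollars per risk factor.
deltas = [1_000_000, -400_000]
cov = np.array([[0.00010, 0.00002],   # covariance of daily factor returns
                [0.00002, 0.00040]])
print(f"1-day 99% VAR: ${delta_normal_var(deltas, cov):,.0f}")  # about $28,000
```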

Julie Shapiro: At Chase we produce VAR reports every day for the entire global bank. Initially, the focus was on a big system that could crunch all the numbers and produce reports every day, but we eventually learned that that was only half the problem. The other half is looking at these numbers and assessing their correctness. The big emphasis now is on pushing the diagnostic capabilities and tools back out to the people closer to the data, and involving them more in the process of ensuring the data’s correctness.

Till Guldimann: I think the problem of data management from the Chase perspective has been not only to get the producers of data to use good tools to check the data’s validity, but also to have them use the data. Systems in which the producers of data don’t use the data are basically flawed—and central risk managers will always wind up with bad data.

Miller: It’s important to allow technology to help people examine how the business makes money. By sending the data back to the traders and by making sure they have the ability to query their positions, you’re bringing the right resource to the people who need it.

“One of the jobs of software should be to report ambiguities and to sense inadequate or incomplete data instead of simply crunching numbers.”
—Michael Kowalski
State Street Corp.

Victor Masch: By now, senior management in financial institutions realize the importance of having an enterprise-wide risk management system, and are generally in favor of implementing such a system. The big question is what they want to do with it. Is it simply to get regulators off their backs, or is senior management going to look at the results and use them in the management of the business? The second case is obviously more desirable.

Kolman: I want to bring our attention back to the issue of data warehouses, since it’s one of the most common ways to organize global risk data. Is this the right approach, or are other alternatives more attractive?

Emmanuel Fruchard: Firms that have many systems often have to rely on data warehouses. They may have 10 systems around the world to handle fixed-income and equity trades, but it’s difficult to have only one system that is the repository of data for all those trades. Plain-vanilla trades are generally easy to integrate. The real problem is in integrating exotic trades in the fixed-income, foreign exchange and equity areas, because they involve in-house models that are not easy to replicate in other systems. At Summit, a solution we found attractive is to have a single system that holds the most exotic trades and then, when you want to calculate VAR, to import the plain-vanilla trades from the other systems. This is much easier, and it can be done in real time or once a day.
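
A toy sketch of that pattern, with all names invented: the exotic trades live natively in one system, and the plain-vanilla trades are pulled in from the satellite systems just before the VAR run.

```python
class TradeStore:
    """The single system that already holds the exotic trades natively."""
    def __init__(self, exotic_trades):
        self.trades = {t["id"]: t for t in exotic_trades}
    def upsert(self, trade):
        self.trades[trade["id"]] = trade  # idempotent: re-imports are safe

def import_vanilla(store, feeds):
    """Pull plain-vanilla trades from the other systems, once a day
    or on demand, so VAR can be calculated in one place."""
    for fetch in feeds:
        for trade in fetch():
            store.upsert(trade)

store = TradeStore([{"id": "EX1", "type": "exotic-swap", "notional": 5e6}])
london_feed = lambda: [{"id": "LN1", "type": "vanilla-irs", "notional": 1e7}]
import_vanilla(store, [london_feed])
print(len(store.trades))  # 2: one native exotic plus one imported vanilla
```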

Guldimann: I think the basic problem comes when you marry the complex end of finance, which is derivatives, with large-volume, simple types of demands, which come out of the high-volume trading areas. Trying to combine everything in one big mother system often won’t work because there are too many different or diametrically opposed requirements there. We shouldn’t try to solve all risk management problems with derivatives technology or extremely complex approaches, because that simply leads to deadlocks and huge budgets.

In the end, this may mean that we have different trading systems for different trading operations, and perhaps an overall integrated risk management system—but only for the integrated risk view. How detailed that will be is a question to be answered by the various users. But the idea that you can have as much detail at the center as you have in each trading unit is an illusion.

Alex Tsigutkin: Metadata is, of course, the answer to this issue, but there is also the question of what exactly metadata means in terms of risk management and data warehousing. Emmanuel Fruchard mentioned that one solution is to bring together components of simple trades. But there are more effective solutions that can deal with any component of a trade that is dynamically captured and stored. Instead of trying to create one big data source of common elements from multiple systems, you can deal with the sources of data on an individual basis and therefore reference every element of information as the appetite for richness of data changes. This can apply not only to transactional and dealer-related sources, but to market-data and legacy-data sources as well. Techniques that allow us to encapsulate and capture not only the common elements but all the elements from each source are the key to modern risk management.

Another important issue is how to delegate responsibility for data quality to the original source. One of the biggest issues in technology is figuring out, once we’ve aggregated the data, where they came from. The answer is ensuring that the information arriving from separate sources carries its own identity, so you can go back to the original source and find the owner of that information. It also becomes difficult to delegate these activities if you haven’t brought the data to a central repository and are relying instead on the availability of transactional data in the original sources, as with virtual warehouses. The problem with that is that the data are only good if the original systems have kept them well, and that rarely happens.
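
A minimal sketch (sources and owners invented) of data that carries its own identity: every element keeps a source tag, so that after aggregation any number can still be traced back to its originating system and its owner.

```python
from dataclasses import dataclass

@dataclass
class TaggedValue:
    """A data element that never loses track of where it came from."""
    value: float
    source: str  # originating system

OWNERS = {"sydney-fi-ledger": "A. Chen", "ny-swaps-db": "B. Ortiz"}

def aggregate(values):
    """Sum the values but keep the provenance of every contribution."""
    total = sum(v.value for v in values)
    lineage = {v.source: OWNERS.get(v.source, "unknown") for v in values}
    return total, lineage

total, lineage = aggregate([TaggedValue(10.5, "sydney-fi-ledger"),
                            TaggedValue(-3.2, "ny-swaps-db")])
print(total)    # 7.3
print(lineage)  # {'sydney-fi-ledger': 'A. Chen', 'ny-swaps-db': 'B. Ortiz'}
```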

“The big emphasis now is on pushing the diagnostic capabilities and tools back out to the people closer to the data, and involving them more in the process of ensuring the data’s correctness.”
—Julie Shapiro
Chase Manhattan Bank

Hamilton-Brown: I’m troubled by the idea of copying data from A to B and then giving the data from B back to the user of A, since the data at A are probably better than the data at B. We have to bring those two sources of data together. Maybe virtual data warehouses are a better solution to the problem of moving data around, because having everybody look at the same data is better than copying data and saying my data are better than yours or yours are better than mine.

Michael Cappi: Data warehouses are clearly not a panacea. At the same time, there is a great deal of dynamic, decentralized and global data. There are products that propose a virtual data warehouse, in which all of the data stay in their native form, in the database in which they are created, and none of the data move. A global data dictionary has all the metadata and information about those databases, and is able to go out and bring in the data that are required, regardless of the data’s profile—and it can do it in real time. Technologies like that may be more effective at measuring various kinds of risk, for different products as well as on an enterprise level.
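
A toy sketch of the virtual-warehouse idea, with all sources and fields invented: the data never move; a global data dictionary knows where each logical field lives in its native database, and a query resolves through the dictionary on demand.

```python
# Each "database" stays in its native form; here, simple in-memory dicts.
ny_equities = {"IBM": {"position": 1200}}
ldn_fx = {"EURUSD": {"position": -5_000_000}}

# The global data dictionary: logical field -> how to fetch it natively.
DICTIONARY = {
    "equity_position": lambda key: ny_equities[key]["position"],
    "fx_position": lambda key: ldn_fx[key]["position"],
}

def federated_get(field, key):
    """Resolve a logical field through the dictionary; no data are copied."""
    return DICTIONARY[field](key)

print(federated_get("equity_position", "IBM"))  # 1200
print(federated_get("fx_position", "EURUSD"))   # -5000000
```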

Val Tannen: Information integration aimed at decision-support analytics suffers from pervasive pitfalls—the data change often, the data are provided by independent sources over which you sometimes have little or no control, and the data are heterogeneous.

While it’s true that administrative headaches should have administrative solutions, it is also the case that current technology can help by insulating the analytics from the vagaries of the data.
