Friday, July 30, 2010

Ray, Extreme Risk Management

Christina Ray has written an intriguing though somewhat frustrating book. Extreme Risk Management: Revolutionary Approaches to Evaluating and Measuring Risk (McGraw-Hill, 2010) draws in part on methodologies of the intelligence community and argues for their efficacy in managing financial risk. More specifically, she contends that causal models, coupled with expert knowledge, are better predictors of black swan events than statistical models that rely on correlation.

It would be a cheap shot to note that U.S. intelligence saddled the country with the costly results of false positives and missed deadly events. I am enough of a philosopher, a word I use loosely here, to analyze conflicting methodologies on their theoretical merits. Well, that’s not quite true. I am somewhat biased against statistical models perhaps because, statistically, I don’t think they work particularly well. So I was prepared to be convinced by an argument for a causal model, all the while holding my breath because causality is such a thorny concept. I can’t say that I became a true believer, but certain elements of the causal model as developed in this study are intellectually attractive.

Although Ray has written an actionable book for financial professionals and regulators who engage in varieties of stress testing and whose mission is to prevent their firms or the system from getting slapped around by fat tails, I’m going to focus in this post on a couple of topics I consider important to the individual trader or investor: first, road signs of regime change and, second, inferring causality from historical market behavior.

Ray criticizes stochastic volatility models such as GARCH because they assume reversion to a long-term mean and “implicitly assume system stability.” But what, she asks, if this assumption isn’t true? “What if instead the system evolves to an enduring new state in which risk is substantially higher?” (p. 139) Looking at historical VIX levels and bifurcating them into normal and extreme expressions of risk, she argues that “such very different patterns of volatility [illustrated with contour plots] provide some empirical evidence that risk regimes can shift and that stochastic volatility models may be ineffectual at the point of transition.” (p. 141)
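
To see why that worries her, here is a minimal sketch of my own (not from the book, and with invented numbers): a toy VIX-like series that shifts to an enduring higher-risk state, bifurcated into normal and extreme observations with an arbitrary threshold. A forecast anchored to the old long-run mean, which is what mean reversion implicitly trusts, badly understates risk once the new regime settles in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "VIX-like" series: a calm regime followed by an enduring high-risk regime.
# All numbers here are invented for illustration only.
calm = 15 + 3 * rng.standard_normal(750)       # pre-shift level around 15
stressed = 35 + 8 * rng.standard_normal(250)   # post-shift: enduringly higher risk
vix = np.concatenate([calm, stressed])

# A mean-reverting view anchors its long-horizon forecast to the old long-run mean.
long_run_mean = calm.mean()

# Bifurcate observations into "normal" and "extreme" expressions of risk
# with an arbitrary threshold (Ray's contour plots are far richer than this).
threshold = 25.0
normal = vix[vix < threshold]
extreme = vix[vix >= threshold]

print(f"Long-run mean from the calm history: {long_run_mean:5.1f}")
print(f"Normal regime : n={normal.size:4d}, mean={normal.mean():5.1f}")
print(f"Extreme regime: n={extreme.size:4d}, mean={extreme.mean():5.1f}")
# At the point of transition, a model that keeps pulling forecasts back toward
# long_run_mean is exactly the "ineffectual" case Ray describes.
```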

As for inferring causality from historical market behavior, Ray outlines two approaches—the theory-driven approach and the data-driven approach. In the former “the analyst hypothesizes some model of a system and then attempts to determine whether observational data bear out or contradict that theory.” In the latter, “a modeler assumes no prior knowledge about systems behavior, and instead attempts to infer causality from empirical data alone.” The first approach “requires an iterative process, in which prior knowledge is refined to posterior knowledge as theory and experience merge.” The data-driven approach relies on inductive reasoning, much of it fleshed out by researchers in the field of artificial intelligence. It also differentiates among types of causality such as potential causes, genuine causes, spurious association, and genuine causation with temporal information. (pp. 226-27) By the way, here Ray cites the work of Judea Pearl from the 1990s; for those interested in his research, he has a website that offers its visitors two “gentle” introductions.
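
To give a flavor of what the data-driven approach has to sort out, here is a small Python sketch of my own (an illustration, not Ray’s or Pearl’s code): a toy system in which a common driver Z genuinely causes both X and Y, so the raw correlation between X and Y is spurious association, and it collapses once the common cause is conditioned on.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Toy system: Z genuinely causes both X and Y; X does not cause Y.
# Any correlation between X and Y is therefore spurious association.
z = rng.standard_normal(n)
x = 0.8 * z + 0.6 * rng.standard_normal(n)
y = 0.7 * z + 0.7 * rng.standard_normal(n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def partial_corr(a, b, c):
    """Correlation of a and b after regressing the conditioning variable c out of each."""
    resid_a = a - np.polyval(np.polyfit(c, a, 1), c)
    resid_b = b - np.polyval(np.polyfit(c, b, 1), c)
    return corr(resid_a, resid_b)

print(f"corr(X, Y)     = {corr(x, y):+.2f}   # looks like a real relationship")
print(f"corr(X, Y | Z) = {partial_corr(x, y, z):+.2f}   # vanishes once the common cause is held fixed")
```

Distinguishing genuine from spurious causes in real market data is of course far harder than this; the sketch only shows the kind of conditional-independence reasoning the inductive, data-driven approach leans on.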

Ray continues, and here let me quote her at greater length: “In spite of the fact that the global financial system continually evolves in small ways and large, historical data can nevertheless provide evidence of causality. Although old cause-and-effect relationships may change and new ones form, there are still carryovers from state to state, not the least of which are behavioral effects. Human nature is one of the few constants, and human preferences—especially with respect to risk aversion—are one of the major drivers of market prices, perhaps even more important than any invisible hand.

“A firm that recognizes the causal capacity of the current state—and then first notices indicators that may trigger a chain reaction that uses that capacity—can beat the crowd by initiating changes in strategy that mitigate loss or generate profit.” (pp. 227-28)

By way of clarification, the author redefines the concept of causal capacity to mean “a measurement of a system’s capacity for change. In a complex system, causal capacity might be thought of as the spring loading of a fragile or robust system that makes it susceptible to shocks caused by the actions of free agents.” (p. 170)

Extreme Risk Management is not for the novice trader or investor or the quantitatively challenged, even though there is almost no math in the book. It is also not a book that could ever be described as a good read because the prose is often dense and the arguments truncated. Moreover, it draws from so many fields that almost by definition the reader will lose her way now and again. It assumes a familiarity with some notions I’ve written about in this blog, such as degrees of freedom and complex adaptive systems, and with a host of others better left to those more qualified than I.

I’m sure that most people who read this book will learn from it and move on to something else in their lives. I, however, came away from it with a “to-do” list—references to follow up on, ideas that are enticing but need more development. This may not be the litmus test of a good book, but I personally appreciate an author who can stimulate me to do some work on my own.

3 comments:

  1. Moving Beyond Risk Management

    But what, she asks, if this assumption isn’t true? “What if instead the system evolves to an enduring new state in which risk is substantially higher?” (p. 139)

    In a world of financial innovation, I argue that it is “uncertainty” and not “risk” that should be the randomness component of focus. Uncertainty is different from, rather than a higher degree of, risk. In brief, “risk” is present when future events occur with measurable probability, while “uncertainty” is present when the likelihood of future events is indefinite or incalculable.

    Investments that lack cash flow and are valued on a mark-to-model basis are uncertain. Absent randomness segmentation, indeterminate information cannot be processed effectively and efficiently by determinate metrics.

    As long as your qualifier is “risk,” you are dealing with a deterministic state that more than likely will be governed by a one-size-fits-all approach held hostage to a single-scale bright line. Sooner or later a critical mass learns how to acquire the current “hot” resource with no money down and begins marching herd-like in the same direction. The Broughton Bridge effect creates a “Minsky” moment from the excess credit that no money down produces.

    This reinforces the troubling trend of larger and more frequent boom-bust cycles. It is why reality-oriented regulatory governance requires that policymakers move beyond risk management to randomness governance of both determinate and indeterminate underlying economic conditions.

    Stephen A. Boyko

    Author of “We’re All Screwed: How Toxic Regulation Will Crush the Free Market System” and a series of articles on capital market governance.

    http://www.traderspress.com/detail.php?PKey=671

  2. "Absent randomness segmentation, indeterminate information cannot be processed effectively and efficiently by determinate metrics."

    Yikes! Written by The Dense Prose Society, methinks.

  3. I found an overview of Ms. Ray's book here. This has prompted me to start reading it and to solicit ideas from others. The connectivist approach is compelling and I'll "invest" some effort in learning more. As I ramp up, I hope to glean more wisdom from this community.
