Tuesday, May 11, 2010

Redleaf and Vigilante, Panic

I first encountered Andrew Redleaf, founder and CEO of Whitebox Advisors, when he was a guest lecturer in Robert Shiller’s Yale economics course, available online. Even though he was somewhat ill at ease in the classroom, he came across as an intriguing thinker.

Panic: The Betrayal of Capitalism by Wall Street and Washington (Richard Vigilante Books, 2010), co-authored by Redleaf and Richard Vigilante, the communications director of Redleaf’s hedge fund, is a compelling work. It is for the most part an intellectual history of the financial meltdown, demonstrating how Wall Street became the victim of its own faulty paradigms. Unfortunately, the book could not be written in the past tense because most of these paradigms are still secure atop their pedestals. Until there is a paradigm shift, we will continue to experience recurring economic havoc.

The overarching paradigm is that efficient markets are superior to free markets, that human judgment is inferior to structures and systems, and that, by extension, “public securities markets—computerized, blazingly fast, effusively liquid—are as close as mankind has ever come to realizing the perfectly efficient market of classical economic theory.” (p. 7) Even Thursday’s tape action (the May 6 flash crash) has not called this model into question; rather, the solutions being bandied about focus simply on coordinating the existing structures and systems.

The authors proceed to dissect the notion of efficiency and some of its equally flawed ideological relatives. Among them: that investors are paid for taking risk, that if the so-called smart money (mutual fund managers are singled out because their performance is a matter of public record) can’t beat the market no one can, that technical analysis does not work because scientifically rigorous studies demonstrate that it provides at best only a minimal edge often erased by commissions, and that the primary skill of finance is diversification.

The book’s arguments are carefully developed. They are often nuanced, so a summary will not do them justice. With that caveat I’m going to look briefly at the flawed idea that can most easily be separated out from the main argument of the book—that technical analysis doesn’t work.

The weak form of the efficient market hypothesis claims that technical analysis is bunk because “the very next price change in a publicly traded stock will be statistically indistinguishable from a random change.” Wrong, claim the authors. “In an efficient market, prices are fully ‘determined’ by the flow of information that the market is processing.” (p. 81) If a market is efficient, we are, in the words of yesterday’s blog post, dealing with epistemic uncertainty, not stochastic uncertainty.
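To make the weak-form claim concrete, here is a minimal sketch of the standard random-walk check (my own illustration, not from the book). It simulates a price series whose changes are pure noise and computes the lag-1 autocorrelation of returns, the statistic the weak form predicts should be indistinguishable from zero; real price data would simply replace the simulated returns.

```python
import numpy as np

# Minimal sketch (not from the book): under the weak-form EMH, the next
# price change is statistically indistinguishable from random, so returns
# should show no exploitable serial correlation.
rng = np.random.default_rng(0)

# Hypothetical price series: a pure random walk, the weak-form ideal.
daily_returns = rng.normal(loc=0.0, scale=0.01, size=2500)
prices = 100 * np.exp(np.cumsum(daily_returns))

# Lag-1 autocorrelation of returns; approximately zero for a random walk.
lag1 = np.corrcoef(daily_returns[:-1], daily_returns[1:])[0, 1]
print(f"lag-1 autocorrelation of returns: {lag1:+.4f}")
```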

Critics of technical analysis would not be moved by this argument. Instead, they would press on, citing the numerous studies that have shown the limited value of technical analysis. The authors don’t fault the studies; they simply note that the academics were necessarily constrained by the rigors of scientific methodology. They couldn’t do what “adroit market practitioners do.” They couldn’t pick and choose, highlighting time periods when past prices predicted future prices and ignoring those blocks of time when they didn’t, or pointing to the handful of stocks where technical analysis worked and excluding evidence from the overwhelming majority. Savvy investors don’t have the scientific scruples of academicians. Here let me quote at length: “We assume that potentially profitable anomalies appear and disappear as market conditions change. We assume that such anomalies are almost certain to be more powerful and profitable for some sets of securities than for others. . . . When we build quantitative tools, our goal is not to find algorithms that work for all eternity across any arbitrarily defined class of securities. We look for tools that deliver very strong results over time periods biased to the near term. And in building the universe of securities to which to apply the algorithm, we do not choose some neutrally defined class that would please an academic such as every stock in the S&P or every large cap. We select a subset of securities with favorable characteristics that make them good candidates for the algorithm. . . . Once we go live, we monitor the universe, tossing securities whose behavior no longer seems to be well predicted by the algorithm and adding others that seem promising.” (p. 85)
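That passage describes a concrete workflow: apply a signal over near-term-biased evaluation windows, restrict it to a curated universe, and continuously toss securities the algorithm no longer predicts well while adding promising candidates. Here is a hedged Python sketch of that loop. It is my own illustration, not Whitebox’s actual system; the momentum signal, the hit-rate scoring, and the thresholds are all hypothetical placeholders.

```python
from typing import Callable, Dict, List

# Hedged sketch of the adaptive-universe process the quote describes.
# Nothing here is Whitebox's actual method; the signal, the hit-rate
# scoring, and the thresholds are hypothetical placeholders.

Signal = Callable[[List[float]], float]  # maps a price history to a forecast


def momentum(prices: List[float]) -> float:
    """Toy signal: trailing five-day price change (positive = up forecast)."""
    return prices[-1] - prices[-5] if len(prices) >= 5 else 0.0


def predictive_fit(prices: List[float], signal: Signal) -> float:
    """Score how well the signal has predicted this security lately.

    Fraction of days on which the signal's sign matched the next move,
    evaluated over a deliberately short, near-term-biased window.
    """
    window = prices[-60:]  # bias the evaluation toward the near term
    hits = total = 0
    for i in range(1, len(window)):
        forecast = signal(window[:i])
        move = window[i] - window[i - 1]
        if forecast != 0 and move != 0:
            hits += (forecast > 0) == (move > 0)
            total += 1
    return hits / total if total else 0.0


def rebalance_universe(universe: Dict[str, List[float]],
                       candidates: Dict[str, List[float]],
                       signal: Signal,
                       drop_below: float = 0.50,
                       add_above: float = 0.60) -> Dict[str, List[float]]:
    """Toss names the algorithm no longer predicts well; add promising ones."""
    kept = {t: p for t, p in universe.items()
            if predictive_fit(p, signal) >= drop_below}
    for ticker, prices in candidates.items():
        if ticker not in kept and predictive_fit(prices, signal) > add_above:
            kept[ticker] = prices
    return kept
```

Run on a schedule, `rebalance_universe` reproduces the monitor-toss-add cycle the authors describe, with the near-term bias living in the 60-day evaluation window and the “favorable characteristics” screen living in how the candidate pool is built.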

In this review I’ve barely scratched the surface of this book. For instance, I’ve not said a word about the role of Washington in the whole mess. Well, that’s the problem with good books—just too many ideas in them! This one also has the added benefit of being well written, with a healthy dose of humor. There’s even a theoretical rationale for the good style: “If economics were about entrepreneurship [which the authors advocate], it would not look like physics. It would look a little like philosophy. Mostly it would look like literature.” (p. 49)

Panic is one of the best books I’ve read in a long time and one of the very few I can wholeheartedly recommend to everyone—liberal or conservative, investor or trader—who appreciates contrarian thinking.

1 comment:

  1. In reading the May 11th post entitled “Redleaf and Vigilante, Panic,” I was intrigued by the prospect of combining, in reverse order, the May 10th (“Stochastic and epistemic uncertainty”) and May 11th posts. I believe that the combined whole is more instructive than the parts for effective and efficient governance of complex, global markets.

    If there is complexity, there is uncertainty. To achieve real regulatory reform, policymakers have to move beyond risk management to randomness governance of both determinate and indeterminate underlying economic conditions. Trying to govern both risk and uncertainty with the legacy, one-size-fits-all deterministic regime is analogous to having one set of driving instructions for both the U.S. and U.K.

    Central to randomness is the bright line that divides determinate from indeterminate underlying economic environments. Structure matters — stochastic risk can be further delineated by firm and market economic structures. From the market perspective, randomness is demarcated using FASB 157 as the bright line: determinate investments are marked to market, while indeterminate investments are marked to model. From the firm perspective, randomness is demarcated using cash flow as the bright line: positive cash flow corporations are determinate investments that strive to maximize value, while negative cash flow enterprises are indeterminate investments that strive to minimize their burn rate.
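    As a reader’s aid only (my sketch, not the commenter’s), the two bright lines reduce to a pair of trivial tests; the field names are hypothetical:

    ```python
    from dataclasses import dataclass

    # Hypothetical rendering of the comment's two bright-line tests;
    # field names are illustrative only.
    @dataclass
    class Investment:
        marked_to_model: bool  # FASB 157: True = mark-to-model, False = mark-to-market
        firm_cash_flow: float  # trailing cash flow of the issuing firm

    def market_regime(inv: Investment) -> str:
        """Market bright line: valuation method under FASB 157."""
        return "indeterminate" if inv.marked_to_model else "determinate"

    def firm_regime(inv: Investment) -> str:
        """Firm bright line: sign of the firm's cash flow."""
        return "determinate" if inv.firm_cash_flow > 0 else "indeterminate"
    ```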

    Why does structure matter? It provides better information correlation and transparency. In a world of financial innovation and bubbles, it is uncertainty — not risk — that should be the randomness component of focus. When the low hanging investment fruit of a boom period has been picked, bad information leads to flawed price discovery that creates vapor assets. The result is larger and more frequent boom-bust cycles.

    Recall the S&L meltdown, where the indeterminate “asset” on the books of many insolvent S&Ls was “regulatory goodwill” – the regulator’s reward for acquiring an even more insolvent thrift. Who could have foreseen the Resolution Trust Corporation’s (RTC) liquidations that occurred when minimum reserve requirements became illusory in a setting where capital consisted of vapor assets? Shortly thereafter, “Dot Comets” whose initial public offerings were priced at 200 percent of forecasted sales became financial road kill when their projections were not met. Collateralized debt obligations (CDOs) and credit default swaps were the subprime boom’s vapor assets, where NINJA mortgagors were given property rights to enable questionable securitizations at even more questionable AAA-rated prices. But when the bubble burst, “questionable” became “improbable” as deterministic metrics lacked the robustness to manage indeterminate investments.

    Distinguishing “stochastic” from “epistemic” uncertainty is relevant to the governance of today’s capital market. Is it preferable to solve the “stochastic problems” of scale (too-big-to-fail) and scope (too-random-to-regulate), or is it preferable to epistemically fix the “market” by segmenting the underlying economic condition into predictable, probabilistic, and uncertain governance regimes? I argue in favor of the latter and believe it to be consistent with the essence of Senator Dodd’s proposal to provide market practitioners with better information.

    Thanks for a thought-provoking read.


    Stephen A. Boyko


    Author of “We’re All Screwed: How Toxic Regulation Will Crush the Free Market System” and a series of five SFO articles on capital market governance.

    http://w-apublishing.com/Shop/BookDetail.aspx?ID=D6575146-0B97-40A1-BFF7-1CD340424361
