High frequency traders are a fact of life in the markets. They justify their activity by claiming that they provide liquidity. But, Jea Yu argues in Way of the Trade: Tactical Applications of Underground Trading Methods for Traders and Investors (Bloomberg/Wiley, 2013), the algos/HFTs generate volume and magnify momentum “by luring in and trapping the greatest number of participants on the WRONG side of the trade so they can kidnap all the liquidity and ransom it out to the highest bidders. … They don’t steal liquidity, just as kidnappers don’t steal their victims. They just borrow long enough to extort the highest prices for the return.” (p. 15)
Whatever we might think of high frequency traders (and the debate rages on), retail traders simply don’t have the firepower to compete directly with them. They need new tools to navigate a treacherous landscape.
Jea Yu, cofounder of UndergroundTrader.com and the author of three earlier books, introduces the reader to the ideal trader for these conditions: the hybrid market predator who has “the precision timing of execution, risk averse scaling, and technical analysis of the daytrader, the premeditative assertiveness tethered by patience and risk management of the swing trader, and the relentless investigative fundamental prowess of the investor. Whereas all three roles have butted heads in the past, now they are components that converge to manifest into a more efficient market predator that can seamlessly shift between skillsets to adapt to changing landscapes, climate, and terrain.” (p. 43) Of course, as we know, easier said than done.
The ideal trader seeks to exploit pockets (think football). “The caliber of a trader … can be gauged on how efficiently he can spot, time, and work the pockets in the market. The pockets that pertain to execution are transparency, liquidity, and momentum. The pockets that pertain to conditions are divergence, reversion, and convergence. These pockets construct the elusive window of opportunity.” (p. 48)
In this book Yu offers both general guidelines and specific trading strategies, complete with color charts. Among the general guidelines, he describes the eight-step process required to turn an idea into profit (sketched in code after the list):

1. Constantly monitor the macro market conditions.
2. Filter/qualify the idea to determine whether it presents a viable opportunity.
3. Analyze the risk/reward and support/resistance.
4. Devise a trading plan with triggers, scaling points, stops, and set-ups.
5. Factor in the macro market context: convergence/divergence/fades.
6. Execute the trade and manage risk (size + set-up + duration).
7. Manage the trade by monitoring the technicals against the macro market.
8. Exit the trade: scale out of the position and scale down exposure into liquidity pockets.
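To make the workflow concrete, here is a minimal Python sketch of the eight steps as a pipeline. Everything in it, from the class and function names to the 2:1 reward/risk threshold, is my own illustration, not code or thresholds from the book.

```python
# A minimal sketch of the eight-step idea-to-profit workflow.
# All names and thresholds here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class TradeIdea:
    symbol: str
    macro_bias: str      # step 1: the macro market condition being monitored
    reward_risk: float   # step 3: reward-to-risk at the planned entry

def qualifies(idea: TradeIdea) -> bool:
    """Steps 2-3: filter the idea and demand a favorable reward/risk."""
    return idea.reward_risk >= 2.0  # hypothetical threshold

def plan_trade(idea: TradeIdea) -> dict:
    """Steps 4-5: triggers, scaling points, and stops, framed by macro context."""
    return {
        "trigger": "break above resistance",
        "stop": "below nearest support",
        "macro_context": idea.macro_bias,            # step 5: convergence or fade
        "exit": "scale out into liquidity pockets",  # step 8
    }

# Steps 6-7 (execution and ongoing management) would consume this plan.
idea = TradeIdea(symbol="XYZ", macro_bias="bullish", reward_risk=2.5)
if qualifies(idea):
    print(plan_trade(idea))
```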
He describes in detail “the strongest price pattern” he has ever played—what he calls the perfect storm pattern trade. “Simply put, this powerful pattern forms when three or more pups/mini pups (or inverse pups/mini inverse pups) form and converge simultaneously on three or more separate time-frame charts (of the seven total time frames).” (p. 157) I’m not going to explain this pattern—or for that matter the meaning of “pups”—here; Yu himself spends almost fifty pages on it.
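The convergence arithmetic at the heart of the pattern is, at least, easy to sketch. In the toy code below, the list of seven time frames is my assumption and signal detection is reduced to booleans; identifying an actual pup on a given chart is the part that takes those fifty pages.

```python
# A toy convergence check: the pattern requires pups/mini pups (or their
# inverses) firing on three or more of the seven time frames at once.
# The time-frame set is assumed; pup detection is deliberately stubbed out.
TIME_FRAMES = ["1m", "5m", "15m", "60m", "daily", "weekly", "monthly"]

def perfect_storm(signals: dict, minimum: int = 3) -> bool:
    """True when aligned signals fire on at least `minimum` time frames."""
    return sum(bool(signals.get(tf)) for tf in TIME_FRAMES) >= minimum

# Hypothetical reading: pups on the 5m, 15m, and daily charts.
print(perfect_storm({"5m": True, "15m": True, "daily": True}))  # True
print(perfect_storm({"5m": True, "15m": True}))                 # False
```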
Anyone who opens this book thinking that trading is easy and that the market simply hands over profits for the asking (the “15 minutes a day for 50% annual returns” crowd) will be in for a rude awakening. For those who are serious about making money in the markets, however, Yu himself opens windows. There’s something for traders at every level in this book. It even takes a stab at options trading. And for those who are happier with streaming video, each copy of the book comes with its own unique access code to a 70- or 90-minute (depending on which description you believe) video course.
Wednesday, August 21, 2013
Clark and Mills, Masterminding the Deal
After a lengthy hiatus, M&A seems to be back in fashion. So it’s time to take another look at the often rocky road that companies face when they contemplate acquiring or merging with another company. (Historical data show that “two-thirds or more of takeovers reduce the value of the acquiring company.”) Masterminding the Deal: Breakthroughs in M&A Strategy and Analysis by Peter J. Clark and Roger W. Mills (Kogan Page, 2013) is both a guidebook for corporate boards and executives and a research tool for investors.
The first step to M&A success is to get the merger valuation methodology right. The authors describe four methods: event studies, total shareholder return, value gap, and incremental value effect. None of these is a standalone guarantor of success; in fact, the authors recommend combining the last two discounted cash flow methods.
Value gap “reflects the commonsense notion that for a merger to be successful, post-merger improvements in the combined company—synergies—must exceed the [acquisition purchase premium] paid by the acquirer to secure control of that target.” (p. 92) As the poster child for what not to do, Hewlett-Packard paid stratospheric premiums for each of its three major acquisitions (3Par, Palm, and Autonomy) when Leo Apotheker was CEO.
Incremental value effect looks to the discounted cash flow-based “valuation of the two principals (acquirer and acquiree) on a standalone basis and combined, including consideration of both realizable synergies and a purchase premium adjustment in the latter.” (p. 108)
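Reduced to arithmetic, the two measures are simple comparisons, as in the sketch below. The function names and figures are mine, with all discounting collapsed into present values for brevity; this is not the authors’ own notation.

```python
# A minimal sketch of the two DCF-based measures; names and numbers are
# illustrative, not the authors' formulas or data.
def value_gap(pv_synergies: float, purchase_premium: float) -> float:
    """Positive only when realizable synergies exceed the premium paid."""
    return pv_synergies - purchase_premium

def incremental_value_effect(dcf_acquirer: float, dcf_target: float,
                             dcf_combined: float, purchase_premium: float) -> float:
    """Combined valuation (synergies in, premium out) vs. the standalone sum."""
    return (dcf_combined - purchase_premium) - (dcf_acquirer + dcf_target)

# Hypothetical deal, in $ millions: $900m of synergies against a $1.2bn premium.
print(value_gap(900, 1_200))                                   # -300: value-destroying
print(incremental_value_effect(10_000, 4_000, 15_100, 1_200))  # -100: dilutive
```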
In their attempt to assess why some mergers succeed while most fail, the authors offer a ranking scheme by merger type. The most successful deals are made by bottom trawlers (87-92%). Then, in decreasing order of success, come bolt-ons, line extension equivalents, consolidation mature, multiple core related complementary, consolidation-emerging, single core related complementary, lynchpin strategic, and speculative strategic (15-20%). Speculative strategic deals, which prompt “a collective financial market response of ‘Is this a joke?’ have included the NatWest/Gleacher deal, Coca-Cola’s purchase of film producer Columbia Pictures, AOL/Time Warner, eBay/Skype, and nearly every deal attempted by former Vivendi Universal chief executive officer Jean-Marie Messier.” (pp. 159-60)
More simply put, acquisitions fail for three key reasons. The acquirer could have selected the wrong target (Conseco/Green Tree, Quaker Oats/Snapple), paid too much for it (RBS-Fortis/ABN Amro, AOL/Huffington Post), or poorly integrated it (AT&T/NCR, Terra Firma/EMI, Unum/Provident).
Although this book abounds in acronyms (fortunately there is a list of what they stand for at the back of the book), it also has some good turns of phrase, original and borrowed. For instance, banker-dealmakers, whose models depend on revenues from merger activity rather than merger success, are, in the words of Reggie Jackson, “the straw that stirs the drink.”
Masterminding the Deal may subscribe to the view that you can’t manage what you can’t measure, but it does not subject the reader to the nitty-gritty of measurement. It’s a book of principles, not an exercise in number-crunching. A book that more deal-chasers should read.
Monday, August 19, 2013
Rabins, The Why of Things
Aristotle suggested that “men are never satisfied until they know the ‘why’ of a thing,” where to know the “why of a thing” is to know its cause(s). Centuries later, causation remains an intractable philosophical problem even as we’ve loosened and redefined the bonds between cause and effect in an attempt to deal with it.
Peter V. Rabins, a psychiatrist at the Johns Hopkins School of Medicine, offers a many-model account in The Why of Things: Causality in Science, Medicine, and Life (Columbia University Press, 2013). According to his three-facet schema, there are three conceptual models of causal logic, four levels of analysis, and three logics by which to gain causal knowledge.
Here I will restrict myself to a brief summary of the conceptual facet of causality—that is, to the categorical, probabilistic, and emergent models.
Categorical logic as applied to causality is binary—something either is or is not the cause of something else, and if it is, it acts directly to bring about an event. This model represents the most common view of causation despite the fact that it is beset by a host of philosophical problems.
In the probabilistic model, common in financial valuation and forecasting, “causes are conceptualized as events that affect the likelihood that another event will occur. In this model, causes act as influences, risk factors, predispositions, modifiers, and buffers.” (p. 45) Here the binary is replaced with the continuous, the categorical with gradations of probability.
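The usual way to formalize this is probability raising: C counts as a causal influence on E to the degree that E is likelier with C than without it. That formalization is my gloss, not Rabins’s; a toy illustration, with invented numbers:

```python
# Probability raising as a toy formalization of the probabilistic model:
# the strength of C's influence on E is P(E | C) - P(E | not C).
# The numbers are invented for illustration.
def probability_raising(p_e_given_c: float, p_e_given_not_c: float) -> float:
    """Positive values mark C as an influence/risk factor, not a binary cause."""
    return p_e_given_c - p_e_given_not_c

# An event seen 30% of the time with the factor present, 10% without:
print(probability_raising(0.30, 0.10))  # 0.2 -- a graded influence
```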
There is some support for collapsing these two models. For instance, the “dramatic success of the computer and the digital camera … illustrate that complex, graded phenomena can ultimately be coded digitally. Perhaps nature is constructed digitally (categorically), while humans are constructed to perceive it continuously. Or, conversely, perhaps nature functions continuously, but humans have constructed categorical concepts to simplify it.”
Rabins argues, however, that both models should be kept because they have different functions and strengths (as well as limitations) and because “the choice of a specific model is determined or at least strongly influenced by the circumstances or events being considered.” (p. 57)
The third model, the one most applicable to understanding financial markets as complex adaptive systems, takes an emergent, nonlinear approach. The idea here is that systems are interrelated wholes that require models such as chaos theory, complexity theory, self-organizing systems, and network theory.
What are some of the characteristics of nonlinearity that provide a springboard for both defining and characterizing nonlinear causality?
“First, nonlinear change occurs in systems that have a large number of elements. One or two water molecules would not form ice, nor would a system made up of only two tectonic plates generate an earthquake. The presence of a large number of elements increases the number of potential interactions and increases the probability that an uncommon or unanticipated outcome will occur.” (p. 67)
Following from this first characteristic is a second: limited predictability.
A third, all too familiar characteristic is that outliers are more likely to occur.
Fourth, “some changes that precipitate an event appear to be quite small.” For instance, “the formation of ice and the development of superconductor status … seem to occur after small changes in temperature.” (p. 69) We know, of course, that most “sudden” events occur “after a period of gradual and often unrecognized change or accumulation.” (p. 70)
Finally, nonlinear causality combines “top-down” and “bottom-up” approaches. “The top-down approach begins with a systemwide, big-picture view and identifies interactions at that macro level. The bottom-up approach, on the other hand, starts with the smallest elements and builds a causal explanation based on the interactions at the micro level.” (p. 71)
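The first of these characteristics is easy to quantify in miniature: the number of possible pairwise interactions grows with the square of the number of elements, which is why large systems have so many more channels for surprise. A quick illustration:

```python
# Pairwise interaction counts, C(n, 2) = n * (n - 1) / 2, grow
# quadratically with the number of elements in the system.
from math import comb

for n in (2, 10, 100, 10_000):
    print(f"{n:>6} elements -> {comb(n, 2):>10} possible pairwise interactions")
# 2 -> 1, 10 -> 45, 100 -> 4950, 10000 -> 49995000
```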
No single model can capture the activity that occurs in financial markets; no single model can describe the nature of financial markets as a whole. We live in a many-model world where, as the saying goes, all models are wrong but some are useful.