Continuing the journey that I began with "causation" and continued in "Granger causality and cointegration," today I’m pitting the recipient of a MacArthur “genius” grant against a Nobel Prize winner. Nancy Cartwright, a philosopher of science (no, not the voice of Bart Simpson), claims that the argument structure of Granger causality is “exceedingly simple.” But, she continues, “the premises are concomitantly exceedingly strong.” That is, “every possible source of variation of every kind must be controlled if a valid conclusion is to be drawn.” (Hunting Causes and Using Them [Cambridge University Press, 2007], p. 29)
Her critique is straightforward though stylistically dense. First, we must “suppose that for populations picked out by the right descriptions K_i, if X and Y are probabilistically dependent and X precedes Y then X causes Y. If any population P contains such a K_i as a subpopulation, then X causes Y in P in the sense that for some individuals in P, X will cause Y (in the ‘long run’). …
“The argument is deductive because of the way the K_i are supposed to be characterized. Begin from the assumption that if X and Y are probabilistically dependent in a population that must be because of the causal principles operating in that population. (Without this kind of assumption it will never be possible to establish any connection between probabilities and causality.) The trick then is to characterize the K_i in just the right way to eliminate all possible accounts of dependency between X and Y other than that X causes Y (there is no correlation in K_i between X and any ‘other’ causes of Y, there is no ‘selection bias’, etc.). Given that K_i is specified in this way, if X and Y are probabilistically dependent in population K_i, there is no possibility left other than the hypothesis that X causes Y.” That is, everything that has occurred up to the time of the putative cause must be held fixed.
“Of course the epistemic problems are enormous. How are we to know what to include in K_i? … Knowledge of just the right kind is thought to be rare in the social sciences…. Granger causality solves the problem of our ignorance about just what to put in the descriptions K_i by putting in everything that happens previous to X. That of course is literally impossible so in the end very specific decisions about the nature of the K’s must be made for any application.” (p. 30)
If we say that X Granger-causes Y, we have to know that “all other sources of probabilistic dependence have been randomized over or controlled for” and that “we are studying systems where all dependencies are due to causal connections.” (pp. 33-34) This is most likely impossible.
Which takes us back to Logic 101. If the premises of a deductive argument are true, the conclusion must be true. If we aren’t sure whether the premises of the argument are true but are willing to assign a 90% probability of their being true, it is “reasonable to assign a probability of 90 percent to the conclusion.” But, Cartwright continues, that “is very different from the case where we are fairly certain, may even take ourselves to know, nine out of ten of the premises, but have strong reason to deny the tenth. In that case the method can make us no more certain of the conclusion than we are of that doubtful premise. Deductions can take us from truths to truths but once there is one false premise, they cannot do anything at all.” (p. 34)
Cartwright’s critique ultimately hinges on her characterization of Granger causality as a deductive scheme that clinches causal inferences rather than merely inductively vouches for them. Kevin Hoover grants her claim that “many arguments take the form of clinchers, conditional on background assumptions.” But, he counters, “she is wrong to imply that advocates of these forms of argument are insensitive to the tentativeness and the fallibility of those strong background assumptions. Such sensitivity means that arguments that take the form of clinchers are, in reality, always practically vouchers.” (review of Cartwright’s book) In another piece (“RCTs and the Gold Standard”) Hoover repeats his contention, arguing that all methods—clinchers and vouchers—“require good judgment to draw relevant conclusions—and a great deal of it. Since judgment cannot be eliminated, we had best get on with managing it. This however requires judgment!”
To my mind Cartwright’s criticism stands unscathed. There is a philosophical chasm between “X Granger-causes Y” and “In my judgment X Granger-causes Y.” The former is intended to be viewed probabilistically; the latter introduces elements of subjectivity and uncertainty.
* * * *
Now that you’ve eaten your Brussels sprouts, tomorrow you’ll get dessert—a very short non-financial YouTube video that you’ve probably already seen but then again maybe you haven’t.
Monday, November 29, 2010
Price, The Conscious Investor
John Price, author of The Conscious Investor: Profiting from the Timeless Value Approach (Wiley, 2011), began his career as a research mathematician and for thirty-five years taught math, physics, and finance at universities around the world. He then morphed into an entrepreneur, developing stock screening software that emulates Warren Buffett’s investing strategies. And, as is evident from this book, he didn’t neglect his writing skills. He proceeds with the analytical precision of a mathematician but with the facility and clarity of a careful wordsmith.
Price describes over twenty methods of valuation. He explains the circumstances in which each method is most appropriate. He also evaluates each method’s strengths and weaknesses.
Here I am going to confine myself to describing the screen that underlies Price’s own investing system. He focuses on earnings forecasts, offering objective methods in place of the strategies of analysts, which are tainted with behavioral biases. Critically, he screens to find companies that are actually amenable to growth forecasts. They share three characteristics. “The first two, stable growth in earnings and stable return on equity, are based on histories of financial data taken from the financial statements. The third one, strong economic moat, is based on the ability of the company to protect itself from competitors.” (p. 292) Since many readers will be familiar with Warren Buffett’s notion of moats, I will discuss only the first two characteristics and how to measure them.
Price developed a proprietary function called STAEGR which “measures the stability or consistency of the growth of historical earnings per share from year to year, expressed as a percentage in the range of 0 to 100 percent. … STAEGR of 100 percent signifies complete stability, meaning that the data is changing by exactly the same percentage each year. The function has the feature of adjusting for data that could overly distort the result, such as one-off extreme data points, negative data, and data near zero. It also puts more emphasis on recent data.” This function is “independent of the actual growth. This means that whether a company has high or low stability of earnings is independent of whether the earnings are growing or contracting. In this way the two measures, stability and growth, complement each other in describing qualities of historical earnings.” (p. 294)
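Since STAEGR itself is proprietary, I can only gesture at what such a measure might look like. The sketch below is my own crude, hypothetical stand-in, not Price’s formula: it scores how closely log earnings per share track a constant-growth path, weights recent years more heavily, and reports the result on a 0 to 100 percent scale, in the spirit of the description quoted above.

```python
import numpy as np

def stability_of_growth(eps, recency_half_life=3.0):
    """Crude, hypothetical stand-in for a stability-of-growth score (0-100%).

    This is NOT Price's proprietary STAEGR. It simply measures how closely
    log earnings per share follow a constant-growth (log-linear) path,
    weighting recent years more heavily, as the book's description suggests.
    """
    eps = np.asarray(eps, dtype=float)
    t = np.arange(len(eps))
    w = 0.5 ** ((len(eps) - 1 - t) / recency_half_life)   # heavier weight on recent years
    y = np.log(np.clip(eps, 1e-6, None))                  # crude guard against zero or negative EPS
    X = np.column_stack([np.ones_like(t), t])              # weighted least-squares fit of log EPS on time
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    resid = y - X @ beta
    ss_res = np.sum(w * resid ** 2)
    ss_tot = np.sum(w * (y - np.average(y, weights=w)) ** 2)
    return 100 * max(0.0, 1 - ss_res / ss_tot)              # 100 = perfectly steady growth

print(stability_of_growth([1.00, 1.12, 1.25, 1.41, 1.58, 1.77]))  # a steady grower scores near 100
```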
In measuring the stability of return on equity, Price assumes the clean surplus relationship, which, on a per-share basis, states that the initial book value + earnings per share - dividends per share = the resulting book value. Under that assumption, “whenever return on equity is constant, … the growth rate of earnings each year is approximately equal to return on equity times the dividend retention rate. … [I]f a company pays no dividends, the growth rate of earnings and return on equity will match very closely.” (p. 297) Price does not contend that return on equity implies information about the growth of earnings, since return on equity is defined in terms of earnings. Rather, he suggests that stability in return on equity and in the payout ratio enables the analyst to estimate stability in the growth of earnings.
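For readers who want the algebra behind that claim, here is a brief reconstruction of the standard clean-surplus argument (my own summary, not a quotation from the book), where B is book value per share, E earnings per share, D dividends per share, p the payout ratio, and r the return on equity:

```latex
B_t = B_{t-1} + E_t - D_t \qquad \text{(clean surplus, per share)}

\text{With } D_t = p\,E_t \text{ and a constant } r = E_t / B_{t-1}:\quad
E_{t+1} = r B_t = r\bigl(B_{t-1} + (1-p)E_t\bigr) = E_t + r(1-p)E_t

g = \frac{E_{t+1} - E_t}{E_t} = r\,(1-p) = \text{ROE} \times \text{retention rate}
```

When the payout ratio p is zero, g equals r, which is exactly the no-dividend case Price singles out.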
Once he has screened for stability, he calculates margins of safety for forecasts of earnings, P/E ratios, and dividend payout ratios. I have no space here to describe his techniques. Suffice it to say that he has developed methods by which to “decrease the size and frequency of negative earnings surprises in a consistent manner for large databases of companies.” (p. 318)
The Conscious Investor serves two important functions. First, it critically assesses a range of valuation methods. Second, it is an accessible case study in the application of quantitative methods to fundamental analysis. Perhaps I should add a third: it seems that Price’s techniques are actually profitable in the real world. Using the Conscious Investor system, to which he sells subscriptions for a fairly hefty fee, he booked audited returns of 19.45% per year over five years versus the S&P 500 index’s actual return of 2.82%.
Friday, November 26, 2010
And the winners are . . .
A reader suggested that I do what practically every bookseller or newspaper book review section does: highlight the best of 2010. I thought about this suggestion; I even tried compiling a list. And I threw up my hands. There’s no need to follow my tortuous mental processes.
Instead, I decided to redefine this task. Challenging my Internet-compromised memory, I set out to recall a few books that I’ve reviewed since I launched this blog that have made a difference to the way I think. They are also books that I’ve returned to over time. Without further ado, here they are in alphabetical order.
John B. Abbink, Alternative Assets & Strategic Allocation. Another Yale philosophy Ph.D. gone astray, but intriguingly so.
Steven Drobny, Inside the House of Money. The interview with Jim Leitner is one that I keep going back to.
Scott E. Page, The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies. A brilliant book, with lots of ramifications for the markets.
Andrew Redleaf and Richard Vigilante, Panic. Among other things, distinguishing between the scientific scruples of academicians and the savvy of investors.
Josh Waitzkin, The Art of Learning. A book everyone should read.
This list reflects my intellectual predilections, with a bit of a tilt toward hedge fund thinking. But as I review the list these are all books I feel comfortable recommending.
Tuesday, November 23, 2010
Carr, The Shallows
I don’t know about you, but I can certainly identify with Nicholas Carr’s problem. He writes: “Over the last few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. … [When reading] my concentration starts to drift after a page or two. I get fidgety, lose the thread, begin looking for something else to do. I feel like I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.” In The Shallows: What the Internet Is Doing to Our Brains (W. W. Norton & Company, 2010) Carr makes a compelling case that, despite its usefulness and in part because of its addictiveness, the Internet is making us shallower thinkers.
Neuroscience has taught us that the brain is very plastic. But “for all the mental flexibility [neuroplasticity] grants us, it can end up locking us into ‘rigid’ behaviors.” That is, plastic is not the same as elastic. Repeated mental activity can alter our neural circuitry. The mental skills we exercise increasingly take up more brain map space; the circuits of those we neglect can weaken or dissolve. With concerted effort we can rebuild skills we’ve lost, but by and large the vital paths in our brain are the paths of least resistance. Many of these paths take us straight to Google.
As Carr writes: “The influx of competing messages that we receive whenever we go online not only overloads our working memory; it makes it much harder for our frontal lobes to concentrate our attention on any one thing. The process of memory consolidation can’t even get started. And, thanks once again to the plasticity of our neuronal pathways, the more we use the Web, the more we train our brain to be distracted—to process information very quickly and very efficiently but without sustained attention. That helps explain why many of us find it hard to concentrate even when we’re away from our computers. Our brains become adept at forgetting, inept at remembering. Our growing dependence on the Web’s information stores may in fact be the product of a self-perpetuating, self-amplifying loop. As our use of the Web makes it harder for us to lock information into our biological memory, we’re forced to rely more and more on the Net’s capacious and easily searchable artificial memory, even if it makes us shallower thinkers.”
Carr warns: “When we outsource our memory to a machine, we also outsource a very important part of our intellect and even our identity.”
And, yes, while writing this piece I checked my e-mail and looked at some charts. My own brain is constantly being, as T. S. Eliot wrote, “distracted from distraction by distraction.” Perhaps it’s time for some serious effort at rewiring.
Monday, November 22, 2010
Çaliskan, Market Threads
We all know that cotton prices have been soaring. Or do we? Koray Çaliskan’s Market Threads: How Cotton Farmers and Traders Create a Global Commodity (Princeton University Press, 2010) is a fascinating study of the international cotton market. Its theoretical focus is the nature of price and how it is determined in a range of environments, from commodity futures markets to providers of indexed spot prices to the markets for buying and selling the commodity itself. In the process of developing this theme, the author takes us to the cotton fields of Egypt and Turkey as well as to the global markets and individual companies that buy and sell the physical cotton that is used in our cheap T-shirts as well as our 400-thread-count sheets.
Çaliskan challenges the classic model of supply and demand. In its stead he proposes that most prices are “prosthetic devices” that are “made, produced, and challenged by a multiplicity of actors in a market process that happens in a variety of trading places.” (p. 85) These prosthetic prices are trading tools that are used to make actual prices—that is, prices that normally result from bargaining and become contractual prices to buy or sell a certain number of bales of a particular variety of cotton.
If you are at all curious about how cotton is grown and harvested, how it is graded, and how it moves around the world this book is rich in detail. Çaliskan did fieldwork in Turkey and Egypt, sometimes literally working in the fields alongside local farmers. He spent time with cotton traders in these countries as well. He even enrolled in a two-month training program in Memphis, Tennessee, designed for future cotton traders.
A couple of random takeaways. Cotton farmers in Turkey sell their entire crop immediately after harvest to repay the loans they took out (rarely from banks) to grow the cotton; “growing cotton requires farmers to borrow heavily.” (p. 144) As a result, they are forced to sell into a market where prices are depressed. By contrast, as I recently learned, cotton farmers in the United States receive government subsidies to store their cotton until the price becomes more favorable.
Children, sometimes as young as seven, work in the fields performing such tasks as collecting cotton-leaf worm eggs and harvesting the crop. “There is usually no reward for good work, yet mistakes are punished either through defamation, or at times beatings. … An overseer told [the author] that he was warned by the field’s owner not to hit children’s hands, but their backs instead if necessary, for the hands are needed the most. … These small hands are perhaps the cheapest and most abundant labor force behind the making of a commodity for the world market.” (p. 172)
Although the author undertook extensive ethnographic research and shares it abundantly, this book is really about the dynamics among the players that determine that elusive thing called price. It demonstrates in vivid detail how essential knowledge and bargaining skills are to finding a price and how far removed from reality the traditional model of supply and demand can be. All in all, an intellectually exciting book which I thoroughly enjoyed reading.
Saturday, November 20, 2010
Taleb's aphorisms
Just in time for holiday sales Nassim Taleb is back with what The New York Times dubs a "happily provocative new book of aphorisms," The Bed of Procrustes. If you are among the unwashed who don't understand the reference, "the Procrustes of Greek mythology was the cruel and ill-advised fool who stretched or shortened people to make them fit his inflexible bed." It's easy to understand why Taleb invoked this fool: "we humans, facing limits of knowledge, and things we do not observe, the unseen and the unknown, resolve the tension by squeezing life and the world into crisp commoditized ideas, reductive categories, specific vocabularies, and prepackaged narratives, which, on the occasion, has explosive consequences."
The book is short and inexpensive. I will undoubtedly succumb and buy it even though it evokes mixed memories of exchanged aphoristic barbs with a titan in his (very different) field. Academic cleverness can easily turn ugly. But then why should the battles for intellectual capital be any different from those for other forms of capital?
Friday, November 19, 2010
Granger causality and cointegration
I think we can all agree that if there is any causality in the financial markets it is not the same as classic scientific causality. Most tellingly, there are no financial laws that dictate that the occurrence of B (the effect) depends on the occurrence of A (the cause). Moreover, the often cited requirement of spatial contiguity is irrelevant. About the only thing that seems to be left from classic scientific causality is antecedence—that is, that the cause must be prior to the effect. But even that may be called into question. Think of the common situation in which the anticipation of an event gives rise to market movement. The event hasn’t yet occurred, yet it has to be referenced in describing the cause.
Let’s start our journey with the form of causality most popular with those engaged in time series forecasting. It is named after its developer, Clive W. J. Granger, winner of the Nobel Prize in Economics (along with Robert Engle) in 2003. In his own words:
“The basic ‘Granger Causality’ definition is quite simple. Suppose that we have three terms, X_t, Y_t, and W_t, and that we first attempt to forecast X_{t+1} using past terms of X_t and W_t. We then try to forecast X_{t+1} using past terms of X_t, Y_t, and W_t. If the second forecast is found to be more successful, according to standard cost functions, then the past of Y appears to contain information helping in forecasting X_{t+1} that is not in past X_t or W_t. … Thus, Y_t would ‘Granger cause’ X_{t+1} if (a) Y_t occurs before X_{t+1}; and (b) it contains information useful in forecasting X_{t+1} that is not found in a group of other appropriate variables.
"Naturally, the larger W_t is, and the more carefully its contents are selected, the more stringent a criterion Y_t is passing. Eventually, Y_t might seem to contain unique information about X_{t+1} that is not found in other variables which is why the 'causality' label is perhaps appropriate."
Granger’s concept, as applied to time series, essentially says that although the current value of a time series can often be predicted from its own past values, the introduction of a second time series can improve predictive accuracy. This second time series, however, must be related to the first in a particular way. Otherwise, pairs of non-stationary time series can be highly correlated but not causally related. For instance, bread prices in Britain and sea levels in Venice both rise over time and hence are correlated, but they are clearly not causally connected. Enter the concept of cointegration.
In his Nobel lecture Granger explained: “if a pair of series [is] cointegrated then at least one of them must cause the other.” What does it mean for two series to be cointegrated? Here are three ways of picturing cointegration. First, Granger’s own. He compares a time series to a roughly stretched out string of pearls. Suppose, he says, that there were two similar strings of pearls, both laid out (or thrown) on the same table. “Each would represent smooth series but would follow different shapes and have no relationship. The distances between the two sets of pearls would also give a smooth series if you plotted it. However, if the pearls were set in small but strong magnets, it is possible that there would be an attraction between the two chains, and that they would have similar, but not identical, smooth shapes. In that case, the distance between the two sets of pearls would give a stationary series and this would give an example of cointegration.”
Here’s another example of cointegration offered by Thomas Karier in his book Intellectual Capital: Forty Years of the Nobel Prize in Economics (Cambridge University Press, 2010). (I can only assume he doesn’t own an untrained basset hound.) “Suppose a person and a dog are free to wander in any direction and we track their movements. If the person and the dog are unrelated, then there may not be any apparent relationship between the two paths. However, if the dog belongs to the person, then their paths should coincide more frequently and Granger would say that the two paths are cointegrated.” (pp. 270-71)
And finally, from Kevin D. Hoover comes what many would consider the best image for participants in the markets: the randomly-walking drunk and his faithful, sober friend who follows him to make sure he does not hurt himself. “Because he is following the drunk, the friend, viewed in isolation, also appears to follow a random walk, yet his path is not aimless; it is largely predictable, conditional on knowing where the drunk is.”
Pairs trading is often based on cointegration because theoretically this approach guarantees mean reversion in the long run although, as we know, not profits. (Those interested in pursuing this line of thinking might want to start with the Trading with Matlab blog post "Pairs Trading—Cointegration Testing." But, like the untrained basset hound, I’m wandering.)
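For anyone curious what such a test looks like in practice, here is a minimal sketch (simulated prices of my own devising, not a recommendation of any real pair) using the Engle-Granger test in statsmodels:

```python
import numpy as np
from statsmodels.tsa.stattools import coint

# Two simulated "prices" sharing a common random-walk component,
# like the drunk and the sober friend who follows him
rng = np.random.default_rng(1)
drunk = np.cumsum(rng.normal(size=1000))                  # the shared random walk
a = drunk + rng.normal(scale=0.5, size=1000)
b = 0.8 * drunk + rng.normal(scale=0.5, size=1000)

# Engle-Granger test; the null hypothesis is NO cointegration
t_stat, p_value, _ = coint(a, b)
print(f"p-value = {p_value:.4f}")  # a small p-value suggests the pair is cointegrated
```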
The question is whether Granger causality is the best we can come up with. What are its flaws? That will be the subject of the next post in this series.
Thursday, November 18, 2010
Day, Investing in Resources
Anyone thinking about adding commodities to his portfolio, especially gold, would do well to read Adrian Day’s Investing in Resources: How to Profit from the Outsized Potential and Avoid the Risks (Wiley, 2010). It is sensible, well documented, and written in fluid prose.
Day is a gold bug, arguing that the precious metal is “the asset of choice for the next few years.” For one thing, it will perform well in a variety of scenarios; it is not subject to the “major risk associated with the resource complex, namely slowing demand from a major recession.” (p. 79) In fact, Day identifies fourteen reasons for gold to continue to go up, from monetary instability and reflation policies to supply and demand imbalances.
Let’s assume that we buy into the thesis that we are in the midst of a commodities super cycle yet know that commodities can be extremely volatile. What is the best way to gain exposure to commodities and at the same time control risk?
Addressing the second question first, Day suggests that investors divide their commodity investments into a core portfolio and a trading portfolio. “The core is intended to provide exposure to the broad complex for the duration of the super cycle. Here you will buy with less regard to price, hold for the long term, and accept volatility. In the trading portfolio, however, price is more critical, you will hold for shorter periods trying to maximize gains, and you will attempt to use volatility to your benefit. The goals are different: One is intended to provide certain exposure to the sector for the duration of the cycle, so what you own is critical; the other is intended to maximize gains from the sector, so how and when you own is critical.” (p. 125) Sometimes the same stock is included in each portfolio. Indeed, one of Day’s favorite strategies is “to take a long-term position in a favorite stock and then trade around the edges.” (p. 126)
Day carefully assesses the various ways of gaining exposure to commodities. He looks at the pros and cons of holding physical commodities, primarily precious metals, including numismatic coins (“an area rife with ignorance and worse”). He points out a spate of problems with commodity ETFs—the not-insignificant drag on returns from the need to continually roll over futures contracts, their complicated and often unwelcome tax treatment, and CFTC rules that enforce daily trading limits and overall investment levels for various commodities.
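To see why the roll matters, a quick hypothetical illustration with my own numbers, not Day’s: suppose spot crude sits at $80 all year while the next-month future consistently trades at $81, a mild contango.

```latex
\text{monthly roll cost} \approx \frac{81 - 80}{80} = 1.25\%, \qquad (1 - 0.0125)^{12} \approx 0.86
```

A fund that rolls every month could thus surrender roughly 14 percent over a year to contango even though the spot price went nowhere.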
He devotes three chapters to investing in mining companies: the major producers, the junior producers or developmental companies, and the explorers. For investors, he contends, “the selection criteria and investing tactics should be different for each group.” (p. 149)
After his detailed analysis of investing in gold and gold companies, Day moves on to the other metals: silver, platinum, copper, and the base metals and rare earths. In the final hundred pages of the book he looks at the energy sector and, briefly, at agriculture. He concludes with some suggestions for building a commodities portfolio.
Investing in Resources is a thoughtful, practical book for anyone who thinks that the commodity “Super Cycle” has years left to play out.
Wednesday, November 17, 2010
Causation, the beginning of a journey
We have a seemingly insatiable urge to find causes for things. From “The market sold off because…” to “smoking causes cancer” to (and I kid you not) "Fear of hell makes us richer, Fed says." If x causes y, we seem to be in the comfortable world of rationality. There are reasons why y occurred; it was not some random or inexplicable event.
We distrust correlations because they often appear to lack rational, defensible foundations. Perhaps worse, they often masquerade as causal relations. James Stock, a Harvard economist, offered an example that is perhaps even more bizarre than the “fear of hell” study. “He noted that U.S. national income has been growing significantly for at least the last 100 years, and at the same time Mars has been slowly but steadily getting closer to the Earth. Because of these two long-term trends, it is virtually guaranteed that a simple statistical correlation would support the hypothesis that U.S. national income is determined by the country’s proximity to Mars.” (Thomas Karier, Intellectual Capital, p. 269)
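The point is easy to reproduce. Here is a small sketch (mine, not Stock’s or Karier’s) showing that two made-up series that merely share an upward trend come out highly correlated:

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(100)

# Two unrelated series that both drift upward over a century
national_income = 3.0 * years + rng.normal(scale=5.0, size=100)
mars_proximity = 0.01 * years + rng.normal(scale=0.1, size=100)

print(np.corrcoef(national_income, mars_proximity)[0, 1])  # typically above 0.9
```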
And yet some correlations intuitively seem more causally linked than others. Let’s assume that we were presented with two equity index trading systems, one based on the relative strength or weakness of the U.S. dollar and the other on the motion of the planets. Let’s assume further that, backtested over ten years, these systems had identical profiles. Which system would most people be likely to embrace? The former, I assume, since we can more or less explain the sometimes simple, at other times complex relationship between equities and currencies, whereas most people would be hard pressed to make any sense of the relationship between equities and the planets.
There are two main paths we can follow in trying to sort all this out. One is to embrace a kind (and possibly kinds) of causality that is either weaker than or different from “mainstream” causality. The second is to redefine the kinds of effects we are expecting, from invariable or regular to probabilistic (and “probabilistic” covers a wide range of possibilities). Ideally, these exploratory paths will eventually converge.
I’ve decided to begin a series of posts, erratically spaced, to inquire into these issues. Quite frankly, I don’t know where we’ll end up. Perhaps back where we began. Perhaps with such a watered-down version of causality that it’s virtually indistinguishable from correlation. But with any luck the journey will be educational. Maybe it will even inspire some ideas for system development and testing.
Tuesday, November 16, 2010
Proofiness
I’m going to do some posts on causality and correlation—undoubtedly fewer than I originally planned because I realized soon enough that struggling with a philosophical concept that has stymied the best minds for centuries and writing a trading/investing blog are not really compatible enterprises. Prefatory to this ill-conceived yet even in its truncated form perhaps enlightening venture, let me suggest that you “taste-test” Proofiness: The Dark Arts of Mathematical Deception by Charles Seife (Viking, 2010). Here are some links that provide interviews with the author and/or tidbits from the book.
"Lies, Damned Lies, and ‘Proofiness’"
"Fibbing with Numbers"
"The Dark Art of Statistical Deception"
"Lies, Damned Lies, and ‘Proofiness’"
"Fibbing with Numbers"
"The Dark Art of Statistical Deception"
Monday, November 15, 2010
Thomsett, Trading with Candlesticks
The prolific Michael C. Thomsett, probably best known for his books on options, has a new release: Trading with Candlesticks: Visual Tools for Improved Technical Analysis and Timing (FT Press, 2011). He describes the key single-stick signs, double-stick moves, and complex stick patterns. For those unfamiliar with the intricacies of candlestick charting, this is a clear, well-illustrated account.
It is also a sobering book for anyone who thinks that candlestick charts in and of themselves provide entry and exit signals. As Thomsett writes in what is perhaps an overstatement, “By itself, the chart—candlestick or other type—has limited value. … The candlestick chart is the easel, and the broader indicators are the paint.” (p. 18)
Over and over again Thomsett illustrates false signals, especially in single sticks—the marubozu that is followed by a downtrend, the dragonfly doji where price breaks below the doji’s lower shadow. Even complex candlestick patterns are often unreliable. In brief, it is insufficient to recognize a candlestick pattern; the pattern must be analyzed. In the case of reversal patterns, “the analysis should include judgment about whether the signal is true or false, the degree of strength or weakness in the reversal, and whether or not it confirms another indicator (or is confirmed in turn). Confirmation can include additional candlestick patterns, moving averages, and traditional technical signs.” (p. 91)
Candlestick charts monitor price. But “focusing solely on price trends is a mistake because changes in volume indicate changes in trading activity, and such changes often accompany or even anticipate changes in price trends. The same is true for changes in volatility levels; broadening trading ranges or repeated violations of support and resistance indicate coming price changes.” (p. 119) Combining such indicators as on-balance volume or Chaikin money flow with candlestick price trend analysis can “improve timing and bolster an initial indicator.” (p. 126) Recognizing changes in volatility as evidenced in such chart patterns as triangles and wedges is also an important part of a trader’s analysis.
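As a rough illustration of what such confirmation might look like in code (my own sketch, not Thomsett’s rules; the OHLCV column names are hypothetical), the snippet below flags a bullish engulfing candle only when on-balance volume is also rising:

```python
import numpy as np
import pandas as pd

def confirmed_bullish_engulfing(df: pd.DataFrame) -> pd.Series:
    """Expects hypothetical columns: open, high, low, close, volume."""
    o, c, v = df["open"], df["close"], df["volume"]

    # Bullish engulfing: a down candle followed by an up candle whose body engulfs it
    engulfing = (c.shift(1) < o.shift(1)) & (c > o) & (o <= c.shift(1)) & (c >= o.shift(1))

    # On-balance volume as a simple confirming indicator
    obv = (np.sign(c.diff()).fillna(0) * v).cumsum()
    obv_rising = obv > obv.rolling(10).mean()

    return engulfing & obv_rising  # True only where the pattern and the indicator agree
```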
Added to the mix are trendlines, Bollinger bands, MACD, overbought and oversold indicators (RSI and stochastics), support and resistance levels, and traditional patterns such as the head-and-shoulders formation. The rationale for all this “paint” is that a system with a series of confirming signals is more accurate than a single-indicator system. Thomsett does, however, warn against ending up with a canvas that uses so many colors that it becomes an unintelligible mess.
Thomsett is writing for the novice who wants to learn about candlestick patterns and who aspires to join the legions of chartists and technicians. His book is a reasonable place to start, especially since he dampens enthusiasm, stressing the ever-present possibility of false signals, no matter how many indicators confirm.
Friday, November 12, 2010
Pressfield, The War of Art
Steven Pressfield’s The War of Art: Break Through the Blocks and Win Your Inner Creative Battles was published in 2002, but I just came across it. I consider it a find. Pressfield is a novelist, which means that the book is several cuts above the standard self-help manual stylistically. Perhaps even more important, it is infused with humor, so it’s fun to read. Herewith a few excerpts.
The key theme of the book is what prevents us from achieving our dreams, especially our creative dreams, and how to overcome it. Pressfield labels this destructive force Resistance. It manifests itself most notably in procrastination and rationalization. What does it feel like? “First, unhappiness. We feel like hell. A low-grade misery pervades everything. We’re bored, we’re restless. We can’t get no satisfaction. There’s guilt but we can’t put our finger on the source. We want to go back to bed; we want to get up and party. We feel unloved and unlovable. We’re disgusted. We hate our lives. We hate ourselves. Unalleviated, Resistance mounts to a pitch that becomes unendurable. At this point vices kick in. Dope, adultery, web surfing.” (p. 31)
Is the problem fear? No, Pressfield writes. “Fear is good. Like self-doubt, fear is an indicator. Fear tells us what we have to do. … [T]he more fear we feel about a specific enterprise, the more certain we can be that that enterprise is important to us.” (p. 40)
So what then is the problem? Those who are defeated by Resistance “share one trait. They all think like amateurs. They have not yet turned pro.” (p. 62) Pros are not weekend warriors. They show up every day, no matter what, and they stay on the job all day. They master the necessary techniques, receive praise or blame in the real world, and have a sense of humor. They love what they do, but they play for money. “The more you love your art/calling/enterprise, the more important its accomplishment is to the evolution of your soul, the more you will fear it and the more Resistance you will experience facing it. The payoff of playing-the-game-for-money is not the money (which you may never see anyway, even after you turn pro). The payoff is that playing the game for money produces the proper professional attitude. It inculcates the lunch-pail mentality, the hard-core, hard-head, hard-hat state of mind that shows up for work despite rain or snow or dark of night and slugs it out day after day.” (pp. 73-74)
The professional “is prepared, each day, to confront his own self-sabotage. … He is prepared to be prudent and prepared to be reckless, to take a beating when he has to, and to go for the throat when he can. He understands that the field alters every day. His goal is not victory (success will come by itself when it wants to) but to handle himself, his insides, as sturdily and steadily as he can.” (p. 82)
The professional also “dedicates himself to mastering technique not because he believes technique is a substitute for inspiration but because he wants to be in possession of the full arsenal of skills when inspiration does come.” (p. 84)
Some food for thought.
Wednesday, November 10, 2010
Defining risk, an allegedly impossible task
Glyn A. Holton, a contributor to Haslett’s Risk Management, tackles a fundamental conceptual problem, “Defining Risk” (pp. 113-123). His thesis is that risk lies at the intersection of subjective probability and operationalism.
He starts with Hume, who laid the philosophical foundation for both streams when he wrote: “Though there be no such thing as Chance in the world; our ignorance of the real cause of any event has the same influence on the understanding, and begets a like species of belief or opinion.”
Holton then criticizes objectivists such as Frank Knight and Keynes who believed that risk is real. Knight distinguished between objective or measurable probabilities and subjective or unmeasurable probabilities, designating the former as risk and the latter as uncertainty. For Keynes probabilities apply not to individual propositions but to pairs of propositions where one proposition is not known to be true or false and the second is the evidence for the first.
It’s not important to go into Holton’s arguments against objectivism; we can skip straight to his own efforts to define risk. As a first stab, he suggests that risk has two essential components: exposure (a person has a personal interest or stake in what transpires) and uncertainty. He admits, however, that to define risk as “exposure to a proposition of which one is uncertain” is flawed.
It is indeed flawed if one accepts Percy Bridgman’s philosophy of operationalism, developed in his 1927 book The Logic of Modern Physics, which contends that “we mean by any concept nothing more than a set of operations.” From Bridgman’s viewpoint, Holton’s preliminary definition of risk would be inadequate because it is intuitive; it “depends on the notions of exposure and uncertainty, neither of which can be defined operationally.”
The paper’s conclusion is that there is no true risk. “At best, we can operationally define our perception of risk.” Therefore, when assessing risk metrics such as delta, value-at-risk, or beta in financial applications, we can never ask whether they capture true risk or whether they misrepresent risk. The most we can ask is whether a particular risk metric is useful, whether it will “promote behavior that management considers desirable.”
Holton’s conclusion may seem intellectually unsatisfactory, but at least it’s a position worth arguing against.
Tuesday, November 9, 2010
Quant equation archive
I just found this site, sitmo.com, and thought I would pass along my discovery even though it might be old hat for some of my readers. It has a collection of option calculators as well as a quant equation archive. Lots of fascinating stuff here.
Monday, November 8, 2010
Baker & Nofsinger, eds., Behavioral Finance
Behavioral Finance: Investors, Corporations, and Markets, edited by H. Kent Baker and John R. Nofsinger (Wiley, 2010) is a must-have book for anyone who wants a comprehensive review of the literature on behavioral finance. In thirty-six chapters academics from around the world write about the key concepts of behavioral finance, behavioral biases, behavioral aspects of asset pricing, behavioral corporate finance, investor behavior, and social influences. The book is hefty (757 pages of typographically dense text), and each contribution includes an extensive bibliography. But this is not simply a reference book; it reads surprisingly well.
Why should we study behavioral finance? “Anyone with a spouse, child, boss, or modicum of self-insight knows that the assumption of Homo economicus is false.” (p. 23) In our investing and trading, indeed in all the financial decisions we make, we are prone to behavioral biases; we are often inconsistent in our choices. Only if we understand the kinds of emotional pulls that negatively affect our financial decisions can we begin to address them as problems. Some of the authors offer suggestions for overcoming these problems.
Here are a few takeaways from the book that give a sense of its tone and breadth.
First, I am happy to report that the literature shows that “high-IQ investors have better stock-picking abilities” than low-IQ investors and they “also appear more skillful because they incur lower transaction costs.” (p. 571) I figure that everyone reading this review falls into the Lake Wobegon category.
Second, individual investors can form powerful herds. “[T]rading by individuals is highly correlated and surprisingly persistent. …[I]ndividual investors tend to commit the same kind of behavioral biases at or around the same time [and hence] have the potential of aggregating. If this is the case, individual investors cannot be treated merely as noise traders but more like a giant institution in terms of their potential impact on the markets.” (p. 531)
Third, what are some of the behavioral factors affecting perceived risk? Although the author lists eleven factors, I’ll share just two. “Benefit: The more individuals perceive a benefit from a potential risky activity, the more accepting and less anxiety (fear) they feel…. Controllability: People undertake more risk when they perceive they are personally in control because they are more likely to trust their own abilities and skills….” (p. 139)
And finally, investors’ attitude toward risk is not fixed. They care about fluctuations in their wealth, not simply the total level. “[T]hey are much more sensitive to reductions in their wealth than to increases,” and “people are less risk averse after prior gains and more risk averse after prior losses.” (p. 355) Interestingly, CBOT traders tend to exhibit a different pattern, reducing risk in the afternoon if they’ve had a profitable morning.
As should be expected in this kind of volume, there is a fair amount of repetition. The same studies are quoted by several authors. We read about such topics as overconfidence and the disposition effect multiple times. The context is different, the principles are the same. But through repetition we come to appreciate the scope of behavioral finance (and often its limitations as well).
Although this book is certainly no primer, the reader needs only a passing familiarity with behavioral finance to profit from it. And for those who are better acquainted with the field, it is a useful compendium and an excellent research tool. It has earned a place in my library.
Friday, November 5, 2010
Nyaradi, Super Sectors
John Nyaradi’s Super Sectors: How to Outsmart the Market Using Sector Rotation and ETFs (Wiley, 2010) is for the most part a book for the novice investor who wants to be a bit more active in the market. The general plan is to use ETFs combined with straightforward signals to find winners and manage losses.
After introducing ETFs and the classic S&P sector rotation model, Nyaradi recommends expanding one’s horizon beyond the nine basic sectors in the search for ETFs that will outperform. The “new science of sector rotation” has a much larger palette from which to select, including international offerings, currencies, and commodities. Moreover, given government intervention in the economy after the “Great Recession,” a trading plan that uses sector rotation can no longer rely exclusively on the traditional economic cycle. Instead, Nyaradi suggests a more technical approach.
He offers several trading systems, all mechanical and easy to implement. The first is “almost like buy and hold” and relies on a long-term moving average for buy and sell signals. The second uses support and resistance lines on point and figure charts. The third system invokes the familiar golden crossover.
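For readers who like to see such rules written down, here is a minimal sketch of the familiar golden cross idea. To be clear, this is my own illustration, not Nyaradi’s exact rules; the 50- and 200-day windows and the pandas Series input are assumptions made for the example.

```python
# A minimal golden-cross sketch (illustrative, not Nyaradi's rules):
# be long while the 50-day moving average sits above the 200-day average.
import pandas as pd

def golden_cross_signals(close: pd.Series) -> pd.Series:
    """Return 1 while the fast average is above the slow one, else 0."""
    fast = close.rolling(50).mean()
    slow = close.rolling(200).mean()
    return (fast > slow).astype(int)

# Usage: signals = golden_cross_signals(etf_closes)
# Entries and exits occur where signals.diff() changes value.
```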
Nyaradi’s own system for trading ETFs is a bit more complicated because it looks for confirmation of the likelihood of a profitable trade. He therefore uses five signals to determine if and when to enter a trade, two based on point and figure charts, two on technical indicators, and the last on relative strength. He then describes how to score these five signals. The ETFs with the highest scores have the highest probability for profit.
As we all know, getting into a trade is less than half the battle. The hard part is managing the trade and knowing when and how to get out. Nyaradi discusses position sizing, stop placement, and exit strategies.
After a chapter on the psychology of trading, he gets to what the title promised—five super sectors. No nail biters here, though room for debate. They are Asia, energy, health care, technology, and financials.
The most interesting part of the book for any reader who is not a rank novice is the chapter entitled “Ask the Experts.” Nyaradi has gathered a cast of eighteen top investors, traders, and managers to pick their brains about what they view as potential super sectors. The interviewees are Larry Connors, Marc Faber, Keith Fitz-Gerald, Todd Harrison, Gene Inger, Carl Larry, Timothy Lutts, Tom Lydon, John Mauldin, Lawrence G. McMillan, Paul Merriman, Robert Prechter, Jim Rogers, Matthew Simmons, Sam Stovall, Cliff Wachtel, and Gabriel Wisdom and Michael Moore. The interviews average two and a half pages each.
Thursday, November 4, 2010
Beck, The Gartley Trading Method
It’s odd that just as buy and hold is pronounced dead a spate of books arrives advocating a more patient approach to trading. Ross L. Beck’s The Gartley Trading Method: New Techniques to Profit from the Market’s Most Powerful Formation (Wiley, 2010) is the latest. Beck invokes Jesse Livermore’s classic quotation, “It never was my thinking that made the big money for me. It always was my sitting.”
Although there are several modern renditions of the Gartley pattern (the most notable coming from Larry Pesavento and Scott Carney), Beck returns to the original pattern to begin building his own version. In Profits in the Stock Market (1935), H. M. Gartley discussed this pattern under the heading “One of the Best Trading Opportunities.” It is a reversal pattern after a substantial trend move (A-B). There is a corrective B-C leg with expanding volume typical of what happens when traders cover their shorts in the first instance or close out their longs in the second case. Focusing on the first case to keep things simple, the B-C rally exceeds the previous rallies in the A-B downtrend in both price and time. “And when a minor decline, after canceling a third to a half of the preceding minor advance (B-C) comes to a halt,” Gartley writes, “with volume drying up again, a real opportunity is presented to buy stocks, with a stop under the previous low.” (p. 44)
In its general outlines this pattern should be familiar to traders, though not under the Gartley trademark. Think, for instance, of the Trader Vic 1-2-3.
It’s hard to leave well enough alone. Beck re-labels the pattern and suggests that “in addition to conforming to Elliott Wave, … the real key to making this pattern work has to do with angles and the geometry of W. D. Gann.” (p. 73) He also agrees with Scott Carney that “the best Gartleys are the ones that complete at .786.” So overlaid on Gartley’s simple pattern are Fibonacci ratios, Elliott waves, and Gann geometry.
Beck also introduces trade-continuation Gartleys. Here there is no need for volume analysis, and Gartley’s A-B leg is absent. Instead, “when there appears to be a corrective phase, such as an Elliott Wave 4 taking place during an impulsive phase, then look for the following: 1. AB = CD is apparent in the corrective phase. 2. AB = CD price projection clusters appear with one of the [Fibonacci] ratios.” (p. 75)
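To make the ratio talk concrete, here is a back-of-the-envelope sketch, mine rather than Beck’s, that checks the two conditions mentioned above for a prospective bullish pattern: a D point completing near a .786 retracement of the X-A leg, and rough AB = CD leg equality. The swing-point labels and tolerances are illustrative assumptions.

```python
# Illustrative ratio checks for a bullish pattern (not Beck's code).
def completes_at_786(x: float, a: float, d: float, tol: float = 0.02) -> bool:
    """True if D retraces roughly 78.6% of the X-to-A advance."""
    retrace = (a - d) / (a - x)
    return abs(retrace - 0.786) <= tol

def ab_equals_cd(a: float, b: float, c: float, d: float, tol: float = 0.05) -> bool:
    """True if the C-to-D leg is roughly the same length as the A-to-B leg."""
    return abs(abs(d - c) - abs(b - a)) <= tol * abs(b - a)
```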
Beck then outlines entry and exit strategies. Entry strategies include the Fibonacci entry method, the 1-bar reversal entry method, the candlestick entry method, and the technical indicator entry method. Exit strategies are more difficult; Beck advocates a single in/scale out approach. He demonstrates his technique with several case studies.
In the appendixes he looks at Elliott Wave theory, Gann’s mysterious emblem, and the Wolfe wave.
Fibonacci and Elliott wave traders will find a lot to like in this book. The basic Gartley pattern lends itself to all sorts of emendations, and Beck’s is one of the more clearly articulated.
Wednesday, November 3, 2010
“The Grieving Owl”—the analyst and the trader
To those who have wondered both silently and in writing whether I ever read anything that isn’t financially related, the short answer is “yes.” Rarely, however, can I mine any of these books for blog post material. For instance, I recently finished John Le Carré’s Our Kind of Traitor. Since it had a Russian money laundering backdrop, I thought it might prove fruitful. Wrong. But then, to make a long story short, and a short story very short, came David Sedaris’ Squirrel Seeks Chipmunk: A Modest Bestiary (Little, Brown, 2010) and “The Grieving Owl.” I’ve taken the liberty of subtitling this story “the analyst and the trader,” even though the book carries the usual disclaimer: “Any similarity to real persons, living or dead, is coincidental and not intended by the author.” I should add my own disclaimer: The portrait of the “trader” may be unduly harsh, but don’t forget which owl is painting it. Take it in the whimsical spirit in which it is offered.
Here are the salient passages.
“It’s not just that they’re stupid, my family—that, I could forgive. It’s that they’re actively against knowledge—opposed to it the way that cats, say, are opposed to swimming, or turtles have taken a stand against mountain climbing. All they talk about is food, food, food, which can be interesting but usually isn’t.”
. . .
“One of the things an owl learns early is never engage with the prey. It’s good advice if you want to eat and continue to feel good about yourself. Catch the thing and kill it immediately, and you can believe that it wanted to die, that the life it led—this mean little exercise in scratching the earth or collecting seeds from pods—was not a real life but just some pale imitation of it. The drawback is that you learn nothing new.”
The narrator owl hasn’t learned this survivalist lesson well; he wants to learn new things. And so he bargains with his potential prey, the rat: “Teach me something new, and I’ll let you go.” The rat obliges and is duly freed to take off across the parking lot. But “just as he reached the restaurant’s back door,” the narrator owl continues, “my pill of a brother swooped down and carried him away. It seemed he had been following me, just as, a week earlier, I’d been trailed by my older sister, who ate the kitten I had just interrogated, the one who taught me the difference between regular yarn and angora, which is reportedly just that much softer.
“‘Who’s the smart one now?’ my brother hooted as he flew off over the steak house. I might have given chase, but the rat was already dead—done in, surely, by my brother’s talons the second he snatched him up. This has become a game for certain members of my family. Rather than hunt their own prey, they trail behind me and eat whoever it was I’d just been talking to. ‘It saves me time,’ my sister explained after last week’s kitten episode.
“With the few hours she saved, I imagine she sat on a branch and blinked, not a thought in her empty head.” (pp. 74-75)
Tuesday, November 2, 2010
Models
Emanuel Derman, author of the popular My Life as a Quant, is always worth reading. His piece in Haslett’s Risk Management (Wiley, 2010), “Models” (pp. 681-88), contrasts hobbyists’ models, scientists’ models, and financial models. Hobbyists are satisfied, sometimes delighted, with resemblance: a model airplane resembles the real thing. Scientists with their models aim to foretell the future and control it. These models can be either fundamental (laws of the universe) or phenomenological. Phenomenological models “make pragmatic analogies between things one would like to understand and things one already understands from fundamental models.” They are approximations and “often have a toylike quality.”
Financial models “are used less for divination than for interpolation or extrapolation from the known dollar prices of liquid securities to the unknown dollar values of illiquid securities.” For example, the Black-Scholes model “proceeds from a known stock price and a riskless bond price to the unknown price of a hybrid security—an option—much in the same way one estimates the value of fruit salad from its constituent fruits or, inversely, the way one estimates the price of one fruit from the prices of the other fruits in the salad. None of these metrics is strictly accurate, but they all provide immensely helpful ways to begin to estimate value.”
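To put the fruit-salad analogy in concrete terms, here is the textbook Black-Scholes call formula in a few lines of Python. This is a standard sketch rather than code from Derman’s chapter, and the sample inputs are my own.

```python
# Textbook Black-Scholes call price (a standard sketch, not from Derman's chapter):
# the option value is assembled from the known ingredients, the stock price S,
# the riskless discount factor exp(-r*T), the volatility sigma, and the strike K.
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# e.g. bs_call(100, 100, 0.02, 0.2, 1.0) comes out to roughly 8.9
```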
Financial models transform intuitive linear quantities into nonlinear dollar values. We can transform price per square foot into the dollar value of an apartment; this is intuitively easy because price per square foot “captures much of the variability of apartment prices. Similarly, P/E describes much of the variability of share prices. Developing intuition about yield to maturity, option-adjusted spread, default probability, or return volatility is harder than thinking about price per square foot. Nevertheless, all of these parameters are clearly related to value and easier to think about than dollar value itself. They are intuitively graspable, and the more sophisticated one becomes, the richer one’s intuition becomes. Models are developed by leapfrogging from a simple, intuitive mental concept (e.g., volatility) to the mathematics that describes it (e.g., geometric Brownian motion, the Black-Scholes model), to a richer mental concept (e.g., the volatility smile), to experienced-based intuition about it, and, finally, to a model (e.g., a stochastic volatility model) that incorporates the new concept.”
Alas, “the gap between a successful financial model and the correct value is nearly indefinable because fair value is finance’s fata morgana, undefined by prices, which themselves are not stationary. So, model success is temporary at best. If fair value were precisely calculable, markets would not exist.”
The essence of financial modeling is to use the known price of a security that is as similar as possible to the security whose value you want to know. “The law of one price [that any two securities with identical estimated future payoffs, no matter how the future turns out, should have identical current prices]—this valuation by analogy—is the only genuine law in quantitative finance, and it is not a law of nature. It is a general reflection on the practices of human beings—who, when they have enough time and enough information, will grab a bargain when they see one.” The modeler’s job is to show that “the target and the replicating portfolio have identical future payoffs under all circumstances.” That’s tricky, of course. The Black-Scholes model, for example, sees a future that is not real since stock returns are not normally distributed nor do stock prices move continuously.
Financial models change over time to reflect changing economic conditions and increasing financial sophistication, but their correctness is always uncertain and this uncertainty is much vaguer than probabilistic risk. In the final analysis “models are best regarded as a collection of parallel, inanimate ‘thought universes’ to explore. Each universe should be internally consistent, but the financial/human world, unlike the world of matter, is vastly more complex and vivacious than any model we could ever make of it.”
* * *
A footnote to this summary of Derman’s paper. In The Business of Options (Wiley, 2001) Martin O’Connell recalls a 1985 seminar on interest rate options where he was one of four speakers. The best known was Myron Scholes. “One of the participants was quite persistent in hassling Dr. Scholes about perceived imperfections in his model. Finally, things came to a head when the guy said: ‘Your model is just wrong.’ Dr. Scholes, who so far had not said anything funny or ironic, came back with: ‘Of course, it’s wrong. That’s why we call it a model.’” (p. 37)
Monday, November 1, 2010
Tatro, Trade the Trader
The key argument of Quint Tatro’s Trade the Trader: Know Your Competition and Find Your Edge for Profitable Trading (FT Press, 2010) is that since average investors have flocked to technical analysis “you must be willing to trade on the failure of these patterns to exploit the crowd’s movement for your own benefit.” (p. 32)
Tatro simplifies patterns, reducing them to lateral trends and angular trends (though he does reference such well-known patterns as the head and shoulders). Trend lines provide a guide for the trader to either go with the trend or trade a trend break. Often these basics will work. But, Tatro writes, “The problem for most investors is that their belief about what technical analysis is ends with the basics when it should, in fact, just begin there. If at its core technical analysis is the graphical representation of traders’ emotions and now most of those traders are seeking to capitalize by using basic technical analysis, your goal is no longer to assume the basics will always work. Instead, you must sometimes be able to alter your strategy to trade the traders who are trading the basics.” (p. 81)
This book devotes a lot of space to such fundamentals as pattern recognition, picking your time frame, developing your plan, using and controlling risk, and dealing with emotions. It explains a way to determine entry points and how to set stops. Only then does Tatro embark on describing how to trade the trader. We know the saying that from failed moves come fast moves. The challenge, however, is to distinguish between situations that will lead to expected moves and those that will result in failed moves.
Tatro admits that trading pattern failures “can be a dangerous game,” especially for those who try to anticipate failures. He says that it has been his experience that “trading pattern failures is best done only after the traditional pattern has in fact failed, thereby giving you a clear level from which to place your stop.” (p. 144)
Trade the Trader is a well-crafted book from which beginning traders can obviously profit. But even those with more experience can learn from Tatro’s ability to simplify—and in the process clarify—trading patterns and market sentiment.