In Three Feet from Gold: Turn Your Obstacles into Opportunities! (Sterling, 2009) Sharon L. Lechter and Greg S. Reid continue the work of Napoleon Hill, author of Think and Grow Rich. In his self-help classic, published during the Great Depression, Hill wrote about a man who abandoned his dreams of becoming rich by prospecting for gold a mere three feet before someone else, to whom he sold both his mine and his equipment, found a major gold vein. The message: the most common cause of failure is quitting.
Teasing out this message, the authors distinguish between the goal of the quitter and that of the winner. The quitter wanted to become rich, the winner wanted to become a gold miner. The quitter wanted instant success, the winner had studied mining for more than a decade before he made his investment. The winner had stickability, which came from being committed as opposed to being merely interested.
There are, of course, morals in this story for the trader. Good traders love what they do and want to become good traders; they don’t trade simply as a means to becoming rich. Some traders pan for gold (and most likely will come up empty), others exploit a major gold vein. All traders go to work every day hoping that the value of the gold they discover will exceed their costs of exploration and extraction.
Unfortunately a trader’s hopes are often dashed, and he may feel like quitting. At least, assuming that he employs effective risk management tools, he has the choice of whether to quit or not. Sometimes quitting is in fact the correct decision. If he isn’t committed to the life of a trader, perhaps it’s time to move on to something better suited to his personality and talents. If he’s treading water, making no progress in his trading skills, perhaps he would be wise to go back to school. (I use the “go back to school” phrase in its broadest sense.) But if he can’t think of anything he’d rather do for a living and if he knows that he is a more accomplished trader than he used to be, however painful any temporary setback, what choice does he really have? As Napoleon Hill wrote, “Living life to the fullest is a lot like shooting the rapids in a rubber raft. Once you’ve made the commitment, it’s difficult to change your mind, turn around, and paddle upstream to placid waters.” (p. ix)
Thursday, April 29, 2010
Wednesday, April 28, 2010
Risk is not the same as uncertainty
For many people semantic distinctions are niceties that are the stock in trade of rigid English teachers and zealous copy editors. They are ever so quickly glossed over by the infamous word “whatever.” But in the financial world a semantic distinction can sometimes differentiate profit from loss, innocence from culpability. We may care less about semantic precision in everyday conversation with our coworkers or neighbors, but when we’re building models or writing contracts we need a grasp not only of mathematics or the law but also of the subtleties of language. (And, no, I’m not going down the slippery slope of the meaning of “is.”)
In financial literature risk is often equated with uncertainty. But Terje Aven argues in Misconceptions of Risk that this equation fails to take consequences into account. He gives the extreme example of a case where there are only two possible outcomes, 0 and 1, corresponding to no and one fatality. “[T]he decision alternatives are A and B, having uncertainty (probability) distributions (0.5, 0.5) and (0.0001 and 0.9999), respectively. Hence, for alternative A there is a higher degree of uncertainty than for alternative B, meaning that risk according to this definition is higher for alternative A than for B. However, considering both dimensions, both uncertainty and the consequences, we would, of course, judge alternative B to have the highest risk as the negative outcome 1 is nearly certain to occur.” (p. 52)
Continuing on the fatality theme, Aven writes that we can predict the number of traffic fatalities in a given country over the course of the coming year with a high level of precision since there are rather small variations in traffic deaths from year to year. “The variance is small. Hence, seeing risk as uncertainty means that we have to conclude that the risk is small, even though the number of fatalities are many thousands each year. Again we see that this perspective leads to a non-intuitive language.” (p. 52)
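Aven’s point is easy to check with a few lines of code. The sketch below uses only his two alternatives (nothing from a real model) and computes both the variance, the uncertainty-only measure of risk, and the expected consequence of each:

```python
# Aven's two alternatives over the outcomes 0 (no fatality) and 1 (one fatality).
# p = (P(outcome = 0), P(outcome = 1))

def mean(p):
    """Expected consequence of an alternative."""
    return p[0] * 0 + p[1] * 1

def variance(p):
    """Uncertainty (variance) of an alternative."""
    m = mean(p)
    return p[0] * (0 - m) ** 2 + p[1] * (1 - m) ** 2

A = (0.5, 0.5)        # maximal uncertainty about the outcome
B = (0.0001, 0.9999)  # fatality nearly certain

assert variance(A) > variance(B)  # A is "riskier" by the uncertainty-only definition
assert mean(B) > mean(A)          # yet B's expected consequence is far worse
```

The two assertions capture the mismatch: ranking by variance alone puts A on top, while any sensible judgment of risk puts B on top.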
For traders the distinction between risk and uncertainty should be apparent. All trades have uncertain outcomes; there is no way to measure this uncertainty. It matters not how good your trading strategy is; the fact remains that the outcome of each and every trade is uncertain. The trader hopes that he has devised a winning strategy that will pay off over a number of trades, but he can never know whether it will. The future may not resemble the past sufficiently to make his backtested system pay off, and outliers may (indeed, will) occur that will skew the results of his strategy dramatically, positively or negatively.
Trading risk, however, can and must be managed. Risk management is neither a perfect nor a precise science; the uncertainty inherent in the marketplace can bedevil even the best designed risk management scheme. But risk management is nonetheless the line in the sand between winners and losers. A mediocre trading strategy with great risk management will almost always beat a great trading strategy with poor risk management. (Note the lack of symmetry here; I didn’t say a poor trading strategy with great risk management would outperform.) Why is it, then, that traders are so lazy about devising risk management guidelines? I really don’t know. I’m a risk manager by temperament and I think it’s terrific fun to try to figure out ways to manage trades. (I’m not so great when it comes to portfolio management, but just give me another ten years or so!)
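For what it’s worth, here is a minimal sketch of one common risk management guideline, fixed-fractional position sizing. The numbers are hypothetical and this is one technique among many, not a method drawn from any book discussed here:

```python
# Fixed-fractional sizing: risk at most a set fraction of equity on any one trade.

def position_size(equity, risk_fraction, entry, stop):
    """Shares to buy so that a stop-out loses at most risk_fraction of equity."""
    risk_per_share = abs(entry - stop)
    if risk_per_share == 0:
        raise ValueError("entry and stop must differ")
    return int((equity * risk_fraction) // risk_per_share)

# With $100,000 equity, risking 1% on a trade entered at $50 with a $48 stop:
shares = position_size(100_000, 0.01, 50.0, 48.0)  # $1,000 at risk / $2 per share = 500 shares
```

The appeal of a rule like this is precisely that it works regardless of how good the strategy is: it caps the damage from the uncertainty no strategy can eliminate.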
Tuesday, April 27, 2010
Leibowitz et al. The Endowment Model of Investing
The Endowment Model of Investing: Return, Risk, and Diversification by Martin L. Leibowitz, Anthony Bova, and P. Brett Hammond (Wiley, 2010) is a serious study by market practitioners, two from Morgan Stanley and one from TIAA-CREF Asset Management. They look at asset allocation and portfolio risk through the lens of beta. The book is therefore valuable for CIOs as well as for systems builders who want to incorporate beta into their models.
Beta, we know, measures how much a particular asset is expected to move in response to a one percent change in the overall equity market. Equivalently, beta is “the correlation between the asset (or portfolio) return and the market return, multiplied by the ratio of their volatilities.” The authors are quick to point out that although the relation between portfolio or asset class volatility and beta is “linear and positive,” market volatility has “a powerful inverse and nonlinear effect on beta.” From these relationships, one can draw some important conclusions about the benefit of adding nonstandard assets with allegedly low correlations and betas when compared to traditional equities. A low beta, the authors write, “can result from three nonexclusive conditions: (1) low correlation between an asset class and the market; (2) low asset class volatility; or (3) high equity market volatility, or any combination of 1, 2, or 3.” Thus, they continue, “an asset class may have a low correlation with U.S. equity, but still have a relatively significant beta sensitivity.” (p. 14)
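The quoted formula is simple enough to sketch. The numbers below are hypothetical, chosen only to illustrate the authors’ point that a low-correlation asset class can still carry a significant beta when its volatility is high relative to the market’s:

```python
# Beta as quoted: correlation times the ratio of volatilities.

def beta(correlation, asset_vol, market_vol):
    return correlation * (asset_vol / market_vol)

# A private-equity-like asset class: correlation of only 0.3 with U.S. equity,
# but 30% volatility against a 15% volatility market:
b = beta(0.3, 0.30, 0.15)  # 0.6 -- hardly negligible despite the low correlation

# The same formula also shows the inverse effect of market volatility:
# double the market's volatility and, correlation held fixed, beta is halved.
b_calm = beta(0.3, 0.30, 0.30)  # 0.3
```

Nothing here is from the book beyond the formula itself; the inputs are made up to make the arithmetic visible.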
Although the authors focus on beta-based asset allocation, they do not neglect alpha. Contrary to many portfolio models they place non-traditional assets such as non-U.S. equity, real estate, hedge funds, and private equity in the portfolio’s alpha core, ascertaining the maximum acceptable limits for each asset class, and then they add traditional liquid assets—U.S. equities, U.S. bonds, and cash—to achieve the desired risk level for the entire portfolio. They refer to these traditional assets as swing assets.
Of course, given the hit that endowment funds took during the market meltdown, the authors devote considerable space to what they call stress betas, created by the tightening of correlations of asset classes with equities. Curiously, but I suppose predictably, we learn that the typical well-diversified portfolio might experience more short-term downside risk than the traditional 60/40 equity/bond portfolio during times of market turbulence.
This book is thorough in its analysis at the same time that it offers a “how-to” manual for building a diversified endowment-style portfolio, a rare combination of scholarship and actionable ideas. It has many useful exhibits (the generic name for figures, graphs, and tables) compliments of Morgan Stanley Research. All in all, it’s a fine piece of thoughtful financial writing.
Monday, April 26, 2010
Feedback, nudges, and stickK
Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard H. Thaler and Cass R. Sunstein (Yale University Press, 2008) revisits a lot of familiar behavioral economic territory in support of the authors’ increasingly popular idea of libertarian paternalism. (Ben Bernanke, for one, seems to be a fan.) I’m going to limit this post to a single topic: feedback.
I wrote about it earlier in “Trading and the problem of random reinforcement.” Thaler and Sunstein address the topic very briefly but add a couple of thoughts that are worth mentioning.
They make the fairly obvious point that “learning is most likely if people get immediate, clear feedback after each try” and take the example of learning to putt. Even a golfing doofus can figure out how hard to hit the ball if he stays in one place and takes ten practice shots toward the same hole. On the other hand, if he doesn’t get to see where the balls are going he’ll never improve. The markets, alas, offer an even tougher challenge. We know that, although they provide immediate feedback, much of it is anything but clear; on the contrary, for the most part it is random. Ten similar trades will have unpredictably different outcomes. What kind of rotten feedback is that?
Moreover, Thaler and Sunstein suggest another problem with getting good feedback—in trader talk, that we normally get feedback only on the trades we take, not the ones we either reject or don’t know about. “Unless people go out of their way to experiment, they may never learn about” sometimes far superior “alternatives to the familiar ones.” (p. 75)
When feedback doesn’t work, the authors say a nudge might be appropriate. A web site that they tout, developed by Yale academics, is stickK. It provides a free venue for people to enter into commitment contracts, which show the value you put on achieving your goals. You can put either money or your reputation on the line. As the stickK FAQ reads, “stickK was founded on the principle that creating incentives and assigning accountability are the two most important keys to achieving a goal. Thus, the ‘Commitment Contract’ was born. Entirely unique to each person, a Commitment Contract obliges you to achieve your goal within a particular time-frame. Not only are you challenging yourself by saying ‘Hey, I can do this,’ you’re also putting your reputation at stake. If you are unsuccessful, we’ll let your friends know about it. Oh but wait, there’s more. . . . Sometimes losing face with your friends might not be enough to keep you on track. So, what is the one thing no one can stand to part with? You guessed it! Cold hard cash. As a true test of your commitment, StickK will let you put your money on the line for any Commitment Contract. Achieve your goal and you don’t pay a thing (and you’re much happier than before, aren’t you?). But if you aren’t successful, you forfeit your money to a charity, an anti-charity or even that neighbor who keeps stealing your newspaper.”
A commitment contract is a more powerful incentive than simply filling in your own report card. Now all you have to do is figure out some measurable goal that doesn’t invite cheating!
Sunday, April 25, 2010
Are you "Smart"?
I found the reference to Shel Silverstein's very funny poem "Smart" in Thaler and Sunstein's Nudge (Yale University Press, 2008). They added an equally funny footnote. "Silverstein had personally given Thaler permission to use the poem in an academic paper published in 1985 . . . but the poem is now controlled by his estate, which, after several nudges (otherwise known as desperate pleas), has denied us permission to reprint the poem here. Since we would have been happy to pay royalties, unlike the Web sites you will find via Google, we can only guess that the managers of the estate (to paraphrase the poem) don't know that some is more than none." (p. 77)
Saturday, April 24, 2010
Weekend fare
In a Harvard Business Review blog Peter Bregman writes about his experience of getting lost hiking in "Don't Get Distracted by Your Plan."
From the Hack the Market blog comes an interesting piece on constructing a pairs portfolio. Although it highlights the features of Stratbox, there are a lot of good conceptual tips that would work with any software.
The Deipnosophist blog always offers good material, some financial, some just plain fun. In the latter category I recommend Raymond Crowe's hand shadows video (April 13 post).
Friday, April 23, 2010
Two interpretations of probability
To many of my readers this will be old hat; I wrote about the topic only a short time ago in "Plight of the Fortune Tellers: two views of probability." But let’s have another go at it anyway. Terje Aven in Misconceptions of Risk distinguishes between the frequentist and the subjective (which he upgrades to the knowledge-based) views of probability.
A frequentist probability is the percentage of “successes” if an experiment is repeated an infinite number of times. Since this definition defies empirical replication we need to estimate the probability. We do this with standard statistical calculations. Yet even here, Aven argues, “the estimation error is expressed by confidence intervals and the use of second-order knowledge-based probabilities.” (p. 31) That is, knowledge-based probabilities (even if at a meta-level) are an integral part of calculating frequency-based probabilities.
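To make the point concrete, here is a sketch of the standard frequentist estimate together with its confidence interval. The method (a normal-approximation binomial interval) is my choice for illustration, not Aven’s:

```python
# Frequentist estimation: the unknown "true" probability is estimated from
# repeated trials, and the estimation error is expressed as a confidence
# interval -- itself a second-order statement about our uncertainty.

import math

def estimate_with_ci(successes, trials, z=1.96):
    """Point estimate and ~95% normal-approximation confidence interval."""
    p_hat = successes / trials
    se = math.sqrt(p_hat * (1 - p_hat) / trials)
    return p_hat, (p_hat - z * se, p_hat + z * se)

p_hat, (lo, hi) = estimate_with_ci(55, 100)
# p_hat = 0.55, with an interval of roughly (0.45, 0.65): the width of the
# band is where the knowledge-based judgment sneaks back in.
```

The interval narrows as trials accumulate, but it never disappears, which is Aven’s point: even the frequentist cannot escape expressing uncertainty about his estimate.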
We don’t have to agonize unduly over sullying frequentist theories of probability with the intrusion of meta-order gunk since they never adequately explained Wall Street action in the first place. So on to the subjective view of probability. Here “probability is a measure of uncertainty about future events and consequences, seen through the eyes of the assessor and based on some background information and knowledge.” (p. 32) Uncertainty is the fundamental concept, probability a tool used to express this uncertainty. Probability is a measure of our best guesses based on available knowledge.
The problem is that our available knowledge may be flawed or woefully incomplete, our assumptions may be off base, and we may not have noticed that black swan swimming in the pond. Probability assignments often camouflage uncertainties. To take Aven’s example, assume that the management of an offshore oil installation is concerned about leakages and undertakes a special maintenance regime. The risk assessor, after compiling background information on the effectiveness of this maintenance program, assigns a leakage probability of 10%. This number, however painstakingly derived, masks a host of hidden uncertainties about corrosive forces at work that could result in unpleasant surprises. If we focus exclusively on the probability assignment we will not have captured the essence of risk. In Aven’s words, “It is common to define and describe risk using probabilities and probability distributions. However, . . . the estimated or assigned probabilities are conditioned on a number of assumptions and suppositions. They depend on background knowledge. Uncertainties are often hidden in such background knowledge, and restricting attention to the estimated or assigned probabilities could camouflage factors that could produce surprising outcomes. By jumping directly into probabilities, important uncertainty aspects are easily truncated, meaning that potential surprises could be left unconsidered.” (p. 31) Uncertainty, Aven contends, should be the pillar of risk, not probability.
Thursday, April 22, 2010
A bird in the hand
Sam Silverstein’s book No More Excuses: The Five Accountabilities for Personal and Organizational Growth (Wiley, 2010) covers a lot of familiar territory. Today I’ll confine my post to a single suggestion—that sometimes it makes good sense to get rid of a profitable product or service (or strategy) and replace it with something more in line with current goals. Silverstein writes, “We are told as we are brought up, ‘A bird in the hand is worth two in the bush.’ I never hear anyone ask, ‘What if there are three birds in the bush? Or five? Or ten? How old is the bird in your hand, anyway? What if the bird that’s in your hand is losing weight and isn’t looking as chipper as it once did?’” (p. 74)
We have to get rid of some old, familiar stuff, the author argues, to make room for new things, to continue to innovate. We have to allocate our resources strategically to stay ahead of the pack. I know, yada yada yada. But true nonetheless. If we stick exclusively with the familiar we are unlikely to have any experience with the optimum.
Wednesday, April 21, 2010
Recognizing a spoken word, predicting a market
Renaissance Technologies, the renowned quantitative hedge fund firm, is now headed by two men who came from the world of voice-recognition technology. See the WSJ article from March 16. A Renaissance researcher enticed them to join the firm almost 20 years ago because, as he said, “I realized that there are some deep technical links between the way speech recognition is done and some good ways of predicting the markets.”
I’m not planning to go very far into yet another unknown world, but here’s a linguistic thought that might just trigger some trading ideas. In “Spoken Word Recognition,” a contribution to Traxler and Gernsbacher’s Handbook of Psycholinguistics, 2d ed. (Academic Press, 2006), Delphine Dahan and James S. Magnuson argue that we cannot recognize a word simply by looking at a string of phonemes. I won’t follow out their argument, but I was struck by one paragraph. See if it rings a bell with you.
“What purpose might . . . fine-grained sensitivity serve? One challenge posed by assuming that words are identified from a string of phonemes is the embedding problem; most long words have multiple shorter words embedded within their phonemic transcriptions (e.g., . . . unitary contains you, unit, knit, it, tarry, air, and airy). . . . Successful spoken word recognition depends on distinguishing words from embeddings. However, the embedding problem is significantly mitigated when subphonemic information in the input is considered. For example, listeners are sensitive to very subtle durational differences (in the range of 15-20 ms) that distinguish phonemically identical syllables that occur in short words (ham) from those embedded in longer words (hamster).” (pp. 250-51)
Think, for instance, about comparing charts with volume bars to charts with time bars. Volume-based bars can sometimes differentiate between bars that look identical on a time-based chart. Can we distinguish bars that will immediately reverse from those that belong to a trend by separating out “sub-bar” information? There are lots of ways to play around with this idea, especially if you have access to relatively high frequency data. Go for it!
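For anyone who wants to play with the idea, here is a toy sketch of building volume bars from a tick stream. The data format and the 1,000-share threshold are hypothetical, and a production version would need to handle session breaks and the final partial bar:

```python
# Build volume bars from ticks given as (price, size) tuples: each bar
# closes once its cumulative size reaches the threshold, so stretches
# that look identical on a time-based chart can resolve into different
# numbers of volume bars.

def volume_bars(ticks, threshold):
    bars, cum, o, h, l = [], 0, None, float("-inf"), float("inf")
    for price, size in ticks:
        if o is None:
            o = price
        h, l = max(h, price), min(l, price)
        cum += size
        if cum >= threshold:
            bars.append({"open": o, "high": h, "low": l,
                         "close": price, "volume": cum})
            cum, o, h, l = 0, None, float("-inf"), float("inf")
    return bars

ticks = [(100.0, 300), (100.5, 400), (99.8, 500), (100.2, 800), (100.1, 200)]
bars = volume_bars(ticks, 1000)  # two complete bars from five ticks
```

The same five ticks, bucketed by clock time instead, might all fall into a single bar; that difference is the “sub-bar” information the analogy points at.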
I’m not planning to go very far into yet another unknown world, but here’s a linguistic thought that might just trigger some trading ideas. In “Spoken Word Recognition,” a contribution to Traxler and Gernsbacher’s Handbook of Psycholinguistics, 2d ed. (Academic Press, 2006), Delphine Dahan and James S. Magnuson argue that we cannot recognize a word simply by looking at a string of phonemes. I won’t follow out their argument, but I was struck by one paragraph. See if it rings a bell with you.
“What purpose might . . . fine-grained sensitivity serve? One challenge posed by assuming that words are identified from a string of phonemes is the embedding problem; most long words have multiple shorter words embedded within their phonemic transcriptions (e.g., . . . unitary contains you, unit, knit, it, tarry, air, and airy). . . . Successful spoken word recognition depends on distinguishing words from embeddings. However, the embedding problem is significantly mitigated when subphonemic information in the input is considered. For example, listeners are sensitive to very subtle durational differences (in the range of 15-20 ms) that distinguish phonemically identical syllables that occur in short words (ham) from those embedded in longer words (hamster).” (pp. 250-51)
Think, for instance, about comparing charts with volume bars to charts with time bars. Volume-based bars can sometimes differentiate between bars that look identical on a time-based chart. Can we distinguish bars that will immediately reverse from those that belong to a trend by separating out “sub-bar” information? There are lots of ways to play around with this idea, especially if you have access to relatively high-frequency data. Go for it!
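To make the volume-bar idea concrete, here is a minimal sketch of grouping raw ticks into bars of roughly equal traded volume rather than equal time. The tick data and the volume threshold are hypothetical; any real implementation would read from a data feed.

```python
def volume_bars(ticks, threshold):
    """Group (price, volume) ticks into OHLC bars, each holding
    at least `threshold` total volume."""
    bars, bucket, accumulated = [], [], 0
    for price, vol in ticks:
        bucket.append(price)
        accumulated += vol
        if accumulated >= threshold:
            bars.append({
                "open": bucket[0],
                "high": max(bucket),
                "low": min(bucket),
                "close": bucket[-1],
                "volume": accumulated,
            })
            bucket, accumulated = [], 0
    return bars

# Hypothetical ticks: (price, volume)
ticks = [(100.0, 300), (100.5, 400), (100.3, 500),
         (101.0, 700), (100.8, 600)]
bars = volume_bars(ticks, threshold=1000)
print(bars)
```

Two time bars covering the same span could look identical while the volume bars above reveal very different sub-bar activity.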
Tuesday, April 20, 2010
Richard, Confidence Game
Christine S. Richard’s Confidence Game: How a Hedge Fund Called Wall Street’s Bluff (Wiley, 2010) is a fascinating tale. It recounts Bill Ackman’s relentless efforts to expose flaws in MBIA’s business model even as he was amassing a huge short position against the company in the form of CDSs.
The story reaches all the way back to John Mitchell, Nixon’s subsequently disgraced attorney general, who early in his career created the “moral-obligation” bond. Mitchell’s idea was to bypass local referendums on debt issuance; the local government would set up a bond-issuing authority and then pledge that it “intended” to help pay off the authority’s debt if necessary. It did not, however, pledge “its full faith and credit to the repayment of the bonds,” so no taxpayer vote was necessary. (p. 63)
The municipal bond market relied on this implicit understanding between public officials and investors, and the muni bond insurance market, with help from the ratings agencies, exploited it. “If a smart investor could find bonds that were safer than they appeared, an even more astute businessperson could create a business guaranteeing these bonds. The bond insurance business was simple: In exchange for receiving an upfront insurance premium, the bond insurer agreed to cover all interest and principal payments over the life of the bond if the issuer defaulted. As long as the bond insurer maintained its triple-A rating, the bonds remained triple-A.” (p. 5) The insurer had to set aside some fraction of the amount of each bond it guaranteed in case of default. Given the extreme leverage involved, the insurance company had to underwrite to a no-loss standard.
MBIA’s triple-A rating was essential to its survival. Hence the paradox of “a company being triple-A rated even though the only thing that stood between the company and its collapse was a triple-A rating.” (p. 117) MBIA’s survival depended on Moody’s credit-rating system. And Moody’s helped MBIA and other municipal bond insurers because its rating system skewed the risk transfer involved in underwriting municipal bonds. It rated municipalities on the assumption, contrary to history, that they would never be bailed out. Municipal bonds with the same likelihood of default as corporate bonds were rated lower. When it rated the insurers, however, it factored in the “moral-obligation” quality of the underwritten bonds. As an investor wrote in an e-mail to the author, “Moody’s overrates MBIA and massively underrates municipals, thus generating this huge business of bond insurance. It is a racket that the taxpayers are subsidizing private bond-insurance companies. The ultimate irony is that the triple-A-rated bond insurers, who are rated on the corporate scale, are in fact riskier than the A-rated muni issuers, who are rated on the muni scale but are much less likely to default than the insurers. Who is insuring whom?” (p. 175)
Christine Richard’s book documents, thanks in large part to Bill Ackman’s meticulous research in building his case against MBIA, how a company that launched its business on Mitchell’s concept of bypassing pesky legalities extended this concept. For instance, MBIA backed CDOs at “super-senior levels” through CDS contracts. But insurance companies were prohibited by law from writing swaps. Time for a quick bypass. MBIA set up “an orphaned subsidiary,” a shell company owned by an apparently unaffiliated charity which sold credit-default swaps; MBIA guaranteed its obligations. Or when a securitization plan fell apart (an MBIA-owned company had bought tax liens which it then intended to use as collateral to back bonds, which MBIA would then insure) and MBIA ended up with about $500 million in exposure, a private placement, known in house as the “Caulis Negris” deal, made the problem vanish with the rating agencies none the wiser. “Caulis Negris” is bad Latin for “black hole.”
No one is a hero in Richard’s book. Although Bill Ackman is the focus of the story and some might consider him a hero, his motives were mixed; after all, he made more than $1 billion betting against MBIA. It was a bet that took a long time to pay off; year after year, as his investment showed a growing paper loss, Ackman pressed his case against MBIA in as many venues as possible. As a result he wasn’t simply ignored like Harry Markopolos in the Bernie Madoff affair; he was vilified in the press and investigated by both the SEC and Eliot Spitzer.
This is not a book with a single lesson. Its many interwoven threads point to the difficulties of financial regulation—short sellers can perform a valuable service even when they’re acting in their own best interest, rating agencies can create inequalities by making faulty assumptions in their models, no-loss underwriting standards can mask serious potential risk. And all this with no assumption of fraud.
Monday, April 19, 2010
Does risk equal expected value?
Over a series of posts (not consecutive because I realize that not everyone finds risk an intriguing subject) I’m going to sample Misconceptions of Risk by Terje Aven (Wiley, 2010). As traders and investors we’re always told to manage risk (often mistakenly viewed as potential negative consequences; see “Risk management and profit targets”). But how many of us know what we are supposed to manage, let alone how to manage it? Aven analyzes nineteen common views of risk, exposing their strengths, weaknesses, and limitations. My sampling will be much more modest. Let’s start this short series with the first view, that risk equals expected value.
Expected value, as calculated in probability theory, is the sum of each possible outcome weighted by its associated probability. For example, the expected value of rolling a fair die, which is also the average outcome over a reasonably large number of rolls, is 3.5 (1 x 1/6 + 2 x 1/6 + 3 x 1/6 + 4 x 1/6 + 5 x 1/6 + 6 x 1/6). Or one could recast expected value in terms of the probability of success or failure; the probability of rolling a 2 is 1/6 and of not rolling it is 5/6.
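The die calculation above is easy to verify in a few lines (using exact fractions to avoid rounding):

```python
from fractions import Fraction

# Expected value of a fair die: each outcome weighted by its probability.
outcomes = [1, 2, 3, 4, 5, 6]
ev = sum(x * Fraction(1, 6) for x in outcomes)
print(ev)  # 7/2, i.e. 3.5

# Recast as probability of success or failure.
p_two = Fraction(1, 6)
p_not_two = 1 - p_two  # 5/6
```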
But does the very simple concept of expected value give us the information we need to make a decision? Aven looks at a financial form of Russian roulette: if the revolver discharges you lose $24 million, if it doesn’t you win $6 million. By the same math as above the expected gain is $1 million (-24 x 1/6 + 6 x 5/6). Sounds like a good deal, doesn’t it? Unfortunately, if you know only the expected gain you don’t understand that there is a 1 in 6 possibility of losing $24 million.
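The roulette arithmetic works out the same way, and the code makes the point of the paragraph explicit: the single expected-value number says nothing about the tail.

```python
from fractions import Fraction

# Aven's financial Russian roulette (figures in millions of dollars):
# lose 24 with probability 1/6, win 6 with probability 5/6.
expected_gain = -24 * Fraction(1, 6) + 6 * Fraction(5, 6)
print(expected_gain)  # 1, i.e. an expected gain of $1 million

# The expected value alone hides this:
loss_probability = Fraction(1, 6)  # a 1-in-6 chance of losing $24 million
```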
And what happens to the notion of expected value when we apply it outside the world of gambling with its well-controlled probability distributions? Aven shifts to a portfolio perspective, looking at 100 projects (we could easily substitute trades for projects) each with the same risk profile as the Russian roulette example. Assuming that the projects are independent of one another, we can invoke the central limit theorem to calculate the probability distribution of the average gain and plot a Gaussian probability curve. Ah, what a beautifully ordered statistical world!
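A quick Monte Carlo sketch of Aven's 100-project portfolio shows the central limit theorem at work (the simulation setup is mine, not Aven's): the average gain clusters around the expected value of 1 with a standard deviation near the theoretical 11.18/√100 ≈ 1.12.

```python
import random
import statistics

random.seed(42)

# One portfolio: 100 independent "Russian roulette" projects,
# each losing 24 with probability 1/6 or gaining 6 with probability 5/6.
def portfolio_average(n_projects=100):
    return statistics.mean(
        -24 if random.random() < 1 / 6 else 6 for _ in range(n_projects)
    )

# Distribution of the average gain across many simulated portfolios.
averages = [portfolio_average() for _ in range(10_000)]
print(round(statistics.mean(averages), 2))   # close to 1
print(round(statistics.stdev(averages), 2))  # close to 1.12
```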
But there are three major problems with this simple way of viewing and measuring risk—outliers that dominate calculated averages, dependency (that is, are the trades really independent of one another?), and the uncertainty over whether the future will resemble the past. In brief, once we leave the Las Vegas Strip risk becomes a much thornier issue.
Saturday, April 17, 2010
An addendum to the last post
So here I am, doing my last series of wide casts before logging off and having dinner, and what do I find but an article on The New York Times web site entitled "An Open Mind"? It's a portal to the wealth of online courses available from top-tier universities. Go have your own free feast!
Ignorance is not bliss
I hate being ignorant! Well, that’s an overstatement. I am willing to accept my ignorance in fields where I have a track record of seemingly forgetting things faster than I learn them. Don’t ask me about Greek mythology, astronomy, or human anatomy. No matter how hard I try, no matter how much I reprimand myself for my woeful ignorance, I still don’t know where my liver is. And this despite the fact that I have a six-page laminated anatomy chart, compliments of my doctor niece. (Well, yeah, I can find it on the chart, but once I put the chart away my mind goes blank.)
Yet here I am writing a blog that draws on a wide swath of financial literature, increasingly including books that say “a knowledge of linear algebra is desirable.” And where did I stop my math education? With calculus. So I decided that learning linear algebra should be my next “hobby.” This was a great decision. I started watching Gilbert Strang’s MIT lecture series on YouTube and have been enthralled. Not only does linear algebra strike a chord with me intellectually, but I can instinctively see so many ways quants can use it.
Here’s a Cutty Sark toast to my liver, wherever it is!
Friday, April 16, 2010
Correlation as a tool in analyzing performance
As promised in my post on correlation on Tuesday (and, by the way, it's always wise to read comments to these posts), I'm returning to the theme, this time from a practical perspective. The source for today’s post is Kenneth L. Grant’s Trading Risk: Enhanced Profitability through Risk Control (Wiley, 2004). In his chapter on understanding profit/loss patterns he suggests that the trader can often identify areas of strength and weakness in his performance by performing correlation analysis on the time series of his returns.
The first order of business is to choose the time series to analyze. There is no need to be obsessively granular here, except perhaps on rainy weekends when the trader has nothing better to do than look for correlations based on one-minute data. Let’s say the trader chooses a daily P/L time series and a time span of a week or a month.
What kinds of correlations should the trader run? One obvious choice is to see how correlated his daily returns are with the daily returns of market benchmarks. Grant encourages the trader “to be creative, running correlations between your P/L and as many market time series as you can think of—whether they make intuitive sense or not. While you may not find any surprising interdependencies, you may gain one or two insights into the external market patterns that are likely to have the most dramatic impact on your performance. This is particularly true if you perform the analysis over multiple time periods.” (p. 74)
If the trader is pursuing more than one strategy or trading in different accounts, Grant suggests that he do cross correlation analysis. That is, find out how the strategies or accounts correlate with one another. Is the trader really pursuing independent strategies? Or are they just variations on a theme?
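A cross-correlation check is a one-function exercise. Here is a minimal sketch with hypothetical daily P/L series for two strategies; a high correlation would suggest they are variations on a theme rather than independent sources of return.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation: covariance divided by the product
    of the standard deviations."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (statistics.stdev(xs) * statistics.stdev(ys))

# Hypothetical daily P/L for two supposedly distinct strategies.
strategy_a = [120, -80, 45, 200, -30, 60, -10, 90]
strategy_b = [100, -60, 50, 180, -40, 70, 5, 80]
print(round(pearson(strategy_a, strategy_b), 3))  # near 1: not independent
```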
Serial correlation can also be a useful, even though lagging, metric. There are two major types of serial correlation. First, autocorrelation “measures the levels at which today’s absolute performance is tied to absolute performance in the recent past.” Do returns exhibit momentum or mean reversion? Second, autoregression measures deviation from the mean of the data set. Unfortunately most discussions of autoregression quickly become very mathematical. Grant says simply: “Suppose you are trading an account that has an average daily P/L of, say, $10,000. The account would be considered autoregressive if it tended to perform excessively well or badly on a routine basis on days after it deviated from this mean significantly.” (p. 77)
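Lag-1 autocorrelation, the momentum-versus-mean-reversion question, can be sketched like this (the P/L numbers are invented for illustration):

```python
import statistics

def autocorrelation(series, lag=1):
    """Lag-k autocorrelation of a return series."""
    m = statistics.mean(series)
    num = sum((series[t] - m) * (series[t - lag] - m)
              for t in range(lag, len(series)))
    den = sum((x - m) ** 2 for x in series)
    return num / den

# A hypothetical mean-reverting daily P/L: gains tend to follow losses.
pl = [10, -8, 12, -9, 11, -10, 13, -7, 9, -11]
print(round(autocorrelation(pl), 3))  # strongly negative: mean reversion
```

A strongly positive value would instead point to momentum in the trader's returns.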
And then there are the “kitchen sink” correlations. That is, try correlating any two things that move in relation to one another. Grant urges creativity. For example, try correlating returns with market volume, volatility, time of day or day of the week, length of holding period, or transaction size.
It’s easy to become a correlation junkie because running correlations makes performance analysis more transparent and less emotional. Just remember basic statistical principles when running these correlations. And, Grant adds, “gird yourself against the temptation to undertake aggressive modifications of your trading behavior on the basis of a direct interpretation of a single statistic or even a combination of quantitative indicators. . . . Instead, . . . use these statistics as a general diagnostic in your portfolio management tool kit.” (p. 38)
Thursday, April 15, 2010
A potpourri
Yale Hirsch, founder of the Stock Trader’s Almanac, has long collected words of wisdom on a range of topics to use in his annual almanacs. The Capitalist Spirit (Wiley, 2010) gives a larger canvas for inspirational, informational, and sometimes funny thoughts.
Here are a few samples.
“See your destination in your mind. . . . Start walking. . . . Think ahead as you walk. ‘It’s like driving a car at night. You can see only as far as your headlights, but you can make the whole trip that way.’ E. L. Doctorow (b. 1931), author of Ragtime” (p. 41, by Roy H. Williams)
* * *
Philip Humbert offers and elaborates on ten characteristics of reachable goals: they are specific, simple, significant, strategic, measurable, rational, tangible, written, shared, and consistent with your values. (pp. 142-43)
* * *
Two selections from “puns to put a smile on your face” (p. 205):
Does the name Pavlov ring a bell?
A pessimist’s blood type is always b-negative.
* * *
Finally, for those who enjoy number sequences:
Wednesday, April 14, 2010
The life cycle of a hedge fund investment strategy
Hedge Fund Alpha: A Framework for Generating and Understanding Investment Performance (World Scientific, 2009) is a collection of papers edited by John M. Longo of the Rutgers Business School. In this post I am going to focus on one paper that I think should be of general interest, “From Birth to Death: The Lifecycle of a Hedge Fund Investment Strategy” by Longo and Yaxuan Qi.
The authors claim that the viability of a hedge fund is threatened if it has a single year of double-digit losses or two consecutive years of losses of any magnitude. They then analyze three well-known strategies—the small firm anomaly, momentum anomaly, and accrual anomaly—adjusting them to include a hedge component. Would following any of these single-factor strategies have enabled a hedge fund to survive the authors’ study periods?
The first hypothetical fund, the Hedged Size Anomaly Fund, goes long the smallest decile of firms in the CRSP universe, sells short the Russell 2000, and adds the rebate from the short sales. The authors use the 30-day T-bill to approximate this rebate. The returns of this hypothetical fund between 1981 and 2005 were never outright terrible, but between 1984 and 1990 they were, for the most part, negative; the average return over this period was -0.57%. The authors suggest that the fund would undoubtedly have gone out of business at some point during these seven lean years.
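The construction of these hedged funds reduces to simple arithmetic: long-portfolio return minus short-index return plus the short rebate. A sketch with hypothetical percentages (not the paper's figures):

```python
def hedged_return(long_ret, short_index_ret, tbill_ret):
    """Annual return of a long/short fund: long leg minus the shorted
    index, plus the short-sale rebate (proxied by the 30-day T-bill)."""
    return long_ret - short_index_ret + tbill_ret

# Hypothetical year: small-cap decile +8%, Russell 2000 +12%, T-bill +5%.
print(hedged_return(0.08, 0.12, 0.05))  # roughly +1%
```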
The second hypothetical fund is the Hedged Momentum Anomaly Fund, begun in 1993, the year of a comprehensive study of the momentum strategy, and ended in 2005. This fund goes long the top decile momentum portfolio of the firms in the CRSP universe, sells short the Russell 1000 Growth index, and again uses the 30-day T-bill to approximate the rebate. This fund had much more impressive returns overall, but it turned in a very poor performance in 1998. It lost more than 14% in a year in which the S&P 500 was up 28.6% and the Russell 1000 Growth gained 38.7%. The fund would have faced serious scrutiny, perhaps implosion, during this time.
The last hypothetical fund is the Accrual Anomaly Fund, begun in 1996 and ended in 2005. It follows a fundamentally based strategy that analyzes earnings quality. This fund buys a portfolio of low accrual firms and sells short a similar-sized basket of high accrual firms, once again adding into its performance metrics the returns on the 30-day T-bill. This fund saw some exceptionally profitable years but also back-to-back double-digit losses (17.47% and 20.24%) in 1997 and 1998. So it might have folded during or after these grim years.
All three funds had respectable to very good average returns and CAGRs: (1) 9.56% and 8.82%, (2) 21.81% and 19.78%, and (3) 21.55% and 12.92%. But the returns were cyclical. An equally weighted portfolio of these three strategies, as should be evident to readers of recent posts in this blog, was superior in terms of buying survival time for the hedge fund: its average return was 17.33%, its CAGR 15.80%, and the only bad year was 1998, with a gross loss of 2.48%.
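The benefit of equal weighting can be sketched with made-up annual returns (these are not the paper's series). Blending cyclical strategies smooths the bad years, and the CAGR of the blend can never exceed its arithmetic average return:

```python
import statistics

def cagr(returns):
    """Compound annual growth rate from a list of annual returns."""
    growth = 1.0
    for r in returns:
        growth *= 1 + r
    return growth ** (1 / len(returns)) - 1

# Hypothetical annual returns for three cyclical strategies.
size     = [0.30, -0.15,  0.25, 0.10]
momentum = [0.20,  0.35, -0.14, 0.28]
accrual  = [0.40, -0.18,  0.30, 0.22]

# Equally weighted blend, year by year.
blend = [statistics.mean(ys) for ys in zip(size, momentum, accrual)]
print([round(r, 3) for r in blend])  # worst blended year is mild
print(round(cagr(blend), 4))
```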
The authors conclude that if alpha is indeed cyclical, a hedge fund management company should either offer a multistrategy fund or run multiple funds under its management umbrella. (The authors don’t consider the possibility of successful regime switching.) Moreover, they argue, “hedge fund companies should act in a similar manner to [p]harmaceutical firms, creating a pipeline of products (i.e. hedge fund strategies) in the event that their primary product/strategy falters.” (p. 260)
The individual trader/investor would be wise to follow this advice.
Tuesday, April 13, 2010
Correlation
Sometimes the seemingly simplest concepts give me the greatest grief. Correlation is a prime example. We know that in the world of finance correlation measures how two or more securities move in relation to one another. Web sites give us chart overlays and numbers, positive or negative (between +1 and -1), representing the correlation between two assets over a given period of time. It all seems so straightforward and yet I’ve always had the nagging suspicion that my understanding was superficial. Perhaps as a result I’ve always distrusted the concept.
I turned to Carol Alexander’s Market Models: A Guide to Financial Data Analysis (Wiley, 2001) to try to sort things out. Alexander gives a more elegant definition than I provided: “Correlation is a measure of co-movements between two return series.” (p. 5) It is a standardized form of covariance; that is, “for two random variables X and Y the correlation is just the covariance divided by the product of the standard deviations.” (p. 7) Thus far we’re in the realm of elementary statistics.
But now things get dicey, and I’m starting to understand why I was always uncomfortable with the notion of correlation. Let me quote from Alexander rather than sound pretentious summarizing something I just learned. “The greater the absolute value of correlation, the greater the association or ‘co-dependency’ between the series. If two random variables are statistically independent then a good estimate of their correlation should be insignificantly different from zero. We use the term orthogonal to describe such variables. However, the converse is not true. That is, orthogonality (zero correlation) does not imply independence, because two variables could have zero covariance and still be related (the higher moments of their joint density could be different from zero). In financial markets, where there is often a non-linear dependence between returns, correlation may not be an appropriate measure of co-dependency.” (p. 7)
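Alexander's point about orthogonality is easy to demonstrate: a series and its own square can have exactly zero correlation despite being perfectly dependent. A minimal sketch, using only the standard library:

```python
import statistics as st

def correlation(xs, ys):
    """Pearson correlation: covariance divided by the product of standard deviations."""
    n = len(xs)
    mx, my = st.fmean(xs), st.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return cov / (st.stdev(xs) * st.stdev(ys))

# A symmetric series and its square: totally dependent, yet orthogonal.
x = [-2, -1, 0, 1, 2]
y = [v * v for v in x]      # y is fully determined by x

print(correlation(x, y))    # 0.0 -- zero correlation despite total dependence
```

The dependence here is nonlinear (quadratic), which is precisely the case Alexander flags: the linear correlation coefficient simply cannot see it.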
Since I am not a statistician, I’m not going to try to follow the various threads of Alexander’s claims here. For those who are interested, there’s a good analysis of linear correlation and the problems of measuring nonlinear relations at statsoft.com.
But, leaving the world of statistics briefly, it makes intuitive sense that the returns of two assets can be uncorrelated but not independent of one another, especially if we view markets as adaptive systems. For instance, I pulled up a spreadsheet from February of commodity inter-market correlations (compliments of MRCI) that showed a zero correlation over a 180-trading-day period between orange juice and soybean meal returns. But in the world of managed futures, where portfolio managers work hard to get just the right mix, who would say that these two return series were truly independent of one another?
Returning to the world of statistics, we are hit with the distinction between constant and time-varying correlation models. The constant model assumes that the relationship between the two assets is stable over time; correlation is independent of the time at which it is measured. But aren’t financial markets quintessentially temporal? Alexander writes that “in currency markets, commodity markets and equity markets it is not uncommon for time-varying correlation estimates to jump around considerably from day to day. Commonly, cross-market correlation estimates are even more unstable.” (pp. 15-16)
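The instability Alexander describes shows up as soon as you estimate correlation over a rolling window instead of the full sample. The sketch below uses simulated (hypothetical) daily returns; the point is only that the rolling estimates scatter far more widely than the single full-sample number suggests.

```python
import random
import statistics as st

def corr(xs, ys):
    """Sample Pearson correlation."""
    mx, my = st.fmean(xs), st.fmean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (len(xs) - 1)
    return cov / (st.stdev(xs) * st.stdev(ys))

random.seed(1)
r1 = [random.gauss(0, 0.01) for _ in range(250)]  # one year of hypothetical daily returns
r2 = [random.gauss(0, 0.01) for _ in range(250)]

full_sample = corr(r1, r2)  # the single "constant" estimate

# 20-day rolling estimates of the same correlation.
window = 20
rolling = [corr(r1[i:i + window], r2[i:i + window])
           for i in range(len(r1) - window + 1)]

print(full_sample, min(rolling), max(rolling))
```

A constant-correlation model would report only `full_sample`; the rolling series makes the day-to-day jumping around visible.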
So how in the world does one hedge correlation risk? The answer, in a nutshell, is that one can’t. Adjusting “mark-to-model values for uncertainty in correlation estimates” hasn’t worked out all that well. We know that in a crisis everything becomes highly correlated.
Correlation, Alexander concludes, is unobservable and can only be estimated within the context of a model. Therefore, she says, the analysis of correlation is a very complex subject. (p. 19) At least I now feel justified in my sense of unease with the concept.
Nonetheless, in a future post I’m going to overcome my theoretical squeamishness to look at how a trader can use correlation analysis to dissect his trading performance.
Monday, April 12, 2010
Fabozzi et al., Quantitative Equity Investing
Quantitative Equity Investing: Techniques and Strategies (Wiley, 2010) by Frank J. Fabozzi, Sergio M. Focardi, and Petter N. Kolm is a dense book. More than 500 pages long and replete with mathematical formulas, it is intended for students, academics, and financial practitioners, especially those who have a working knowledge of linear algebra and probability theory. It may not be bedtime reading, but it thoroughly and clearly covers topics that other books only allude to. For anyone who wants to know how quantitative equity investing strategies are developed this is an ideal text.
The three major themes of the book are financial econometrics (linear regressions and time series), factor-based trading strategies, and portfolio optimization. The authors also look at some issues of trade execution, such as transaction costs and algorithmic trading.
It is impossible in this short space to do justice to even a single topic in the book. I have somewhat arbitrarily chosen to sample a tiny section, just over a page, with the subhead “Time Aggregation of Models and Pitfalls in the Selection of Data Frequency.” The authors invoke the familiar distinction between continuous- and discrete-time models. Continuous-time models resemble the differential equations found in physics; an example is the Black-Scholes option pricing model. Discrete-time models, by contrast, have definable time steps. For instance, if we are looking at the returns of a model we might specify a daily, weekly, or monthly time frame. The authors ask two companion questions. “Given a process that we believe is described by a given model, can we select the time step arbitrarily?” And “Can we improve the performance of our models considering shorter time steps?” (p. 174)
The authors write that “there is no general answer to these questions. Most models currently used are not invariant after time aggregation. [That is, we cannot successfully use the same model at different time steps.] Therefore, in the discrete world in general, we have to accept the fact that there are different models for different time steps and different horizons. We have to decide what type of dynamics we want to investigate and model. . . . Using shorter time steps . . . might result in a better understanding of short-term dynamics but might not be advantageous for making longer-term forecasts.” (p. 174) This might seem intuitively obvious, but we must remember that most chartists claim that patterns are time invariant. What works on a one-minute chart, they claim, works equally well on a daily chart.
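One concrete aspect of time aggregation is mechanical: log returns add across time steps, so a daily series can always be re-expressed at a weekly step, even though a model fitted at one step need not hold at the other. A small sketch with hypothetical prices:

```python
import math

# Hypothetical daily closing prices (11 prices -> 10 daily returns).
prices = [100, 101, 99, 102, 104, 103, 105, 108, 107, 110, 112]

daily_log = [math.log(b / a) for a, b in zip(prices, prices[1:])]

# Aggregate to a 5-day ("weekly") step by summing log returns in blocks of 5.
step = 5
weekly_log = [sum(daily_log[i:i + step])
              for i in range(0, len(daily_log) - step + 1, step)]

# Aggregation preserves total growth over the full period.
assert abs(sum(daily_log) - math.log(prices[-1] / prices[0])) < 1e-12
print(weekly_log)
```

The aggregation itself is exact; what is *not* invariant, as the authors stress, is any model fitted to the daily series and then applied at the weekly step.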
This book does not take on the “softer side” of equity investing. It sticks to its quantitative knitting, combining theoretical analysis with practical recommendations—for example, suggestions for mitigating model risk, ways to perform portfolio sorts, and conditions under which VWAP is an ideal trading strategy (one condition is that the trader’s strategy has little or no alpha!). As such, it should be required reading for everyone who either builds quantitative models or aspires to do so.
Sunday, April 11, 2010
Julia Child quotations
Light Sunday fare, compliments of Julia Child (quotes collected on qotd.org).
• Everything in moderation, including moderation.
• Drama is very important in life: You have to come on with a bang. You never want to go out with a whimper.
• If it's beautifully arranged on the plate, you know someone's fingers have been all over it.
• If you're afraid of butter, use cream.
• The measure of achievement is not winning awards. It's doing something that you appreciate, something you believe is worthwhile. I think of my strawberry souffle. I did that at least twenty-eight times before I finally conquered it.
• The only time to eat diet food is while you're waiting for the steak to cook.
• Life itself is the proper binge.
Friday, April 9, 2010
Merrill’s patterns
Sometimes it’s good to go back to basics. Arthur Merrill, the author of Behavior of Prices on Wall Street, compared the market to a warped roulette wheel. It has a bias and often tips its hand. One way it does this is graphical, in the relation of swing legs to one another.
John Bollinger brought Merrill to the attention of the online world some years ago when he posted Merrill’s 16 W and 16 M patterns. These patterns, of course, all have four legs, but they differ in their message to traders. They’re a one-page CliffsNotes for pattern traders.
Thursday, April 8, 2010
Four switches and a light bulb, the solution
If you don’t remember the puzzle from Paul Wilmott’s terrific book Frequently Asked Questions in Quantitative Finance, go back to the post "Solving fiendish problems" and yesterday’s hint. And if you tried and failed to figure it out, here’s the two-step solution. First, “turn on [switches] 1 and 2, and go and have some coffee.” Second, “turn off 1 and turn on 3, then go quickly into the room and touch the lamp.” (p. 384) If it is hot and dark it is controlled by switch 1, if it is hot and light it is controlled by switch 2, if it is cold and light it is controlled by switch 3, and—as by now even the worst puzzle solver should have figured out—if it is cold and dark it is controlled by switch 4.
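The solution is really a two-bit decision table: the bulb's temperature and its light each carry one bit of information, which is exactly enough to distinguish four switches. Encoded as a sketch:

```python
# After the two-step procedure (1 and 2 on, wait; 1 off, 3 on, enter), the
# bulb's (hot, lit) state identifies the controlling switch uniquely.
answer = {
    (True,  False): 1,  # hot and dark:   on long enough to heat, now off
    (True,  True):  2,  # hot and light:  on the whole time
    (False, True):  3,  # cold and light: just switched on, no time to heat
    (False, False): 4,  # cold and dark:  never on
}

print(answer[(True, False)])  # -> 1
```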
A word of advice: if your brain seems to be controlled by switch 4 in dealing with such puzzles, just move on. Success is rarely found in a cold, dark world.
The tea party movement, not your great-grandfather’s populism
Since we are constantly being bombarded with news about the tea party movement, I thought it might be interesting to go back in history to the populism of the late nineteenth century, its antipathy toward Wall Street, and its bizarre embrace of big government. I am relying on Steve Fraser’s Every Man a Speculator (Harper, 2006) for this post.
The early 1890s were economically horrific. The country was experiencing “20 percent unemployment, an avalanche of bankruptcies and bank failures, farm evictions, shuttered mills, panic on Wall Street, and the marching feet of Coxey’s Army of the unemployed.” (p. 203)
The American farmer was in a particularly unenviable position. With the expansion of railroads all over the world, he now had global competition; the farmer who exported into a glutted commodities world market saw wheat go from $1.37 a bushel in 1870 to 50 cents in 1894 and cotton from 23 cents a pound to 7 cents. Commodity exchanges with their wildly fluctuating prices confounded him. “To survive this mercantile cyclone, farmers hooked themselves up to long lines of credit that stretched circuitously back to the financial centers of the East. . . . In a sense, the farmer was the looniest speculator, the most deluded gambler of them all. He was wagering he would somehow master this fathomlessly intricate global game, pay off his many debts, and come out with enough extra to play another round. On top of that he was betting on the kindness of Mother Nature, always supremely risky.” (p. 197) Indeed, the American farmer had just experienced a devastating drought in 1889-90.
When things did not go well for the farmer, who was to blame? “[A]grarian anger tended to pool around the strangulating system of currency and credit run out of the great banking centers of the East, and especially Wall Street.” (p. 197) The “money power” was constricting the money supply, rationing credit, and depressing prices. All the while the Street was feeding its ravenous appetite for excess of every stripe. Wall Street machinations were described as “a devil’s dance . . . an orgy of fiduciary harlotry.” (p. 218)
Curiously, the populist remedy was government intervention and activism. The lynchpin of their economic policy was the so-called subtreasury plan. The idea was to “wrest control of the monetary system from the Wall Street elite and vest it in the hands of the U.S. Treasury.” The subtreasury was envisaged to be essentially a “purchasing and marketing cooperative run by the government. It would make low-interest loans to farmers in legal-tender treasury notes in return for their crops. Then it would warehouse that agricultural output, releasing or holding back supplies from the market so as to maintain stable prices.” (p. 199)
Of course I could fast forward to agricultural subsidies and assorted other governmental interventions in a “pure” free market system. But I have no interest in being a political commentator. I was just struck by the disparity between the populist solutions of the 1890s and the populist beefs of 2010.
Wednesday, April 7, 2010
When you keep missing the basket
Just think of the UConn women’s basketball team in the first half of the championship game, which, as the New York Times reporter quipped, “might have set the women’s game back a couple of decades, with both teams shooting at a staggeringly inept pace.” UConn went scoreless for 10 minutes 37 seconds and ended the half with a mere 12 points.
Assuming some prankster hadn’t moved the baskets up or down an inch, I couldn’t fathom how the two best women’s teams in the NCAA were logging stats worthy of junior high players. In the second half UConn got its rhythm back while Stanford “continued firing away like cross-eyed gunslingers.” And thus it was that UConn triumphed for its 78th consecutive victory and coach Geno Auriemma’s seventh national title, “even as perfection never looked so imperfect.”
We don’t know, of course, why there was such a disparity between UConn’s play in the first and second halves. Was it a case of nerves? Was the Stanford defense shutting down UConn’s shooting game and did the coaches or the players figure out a way around the defense during halftime? Whatever the case, halftime seemed to be essential.
A good trader who starts missing the basket repeatedly needs more than a timeout. If she’s going to change her momentum she needs to get off the court, go into the locker room, figure out what’s been going wrong and how she’s going to improve it, and be cajoled (in whatever way is most effective—yelling at stupidity or encouraging the best player in her trading arsenal to step up and take charge) by her inner coach. She can then say with confidence, “I know there’s a run coming.” Oh yes, the UConn player who said this also made it happen.
Four switches and a light bulb, a hint
As with so many things in life, especially things that stump us, “the trick is to realize that there is more to the bulb than light.” (And if this hint isn’t sufficient, read the comments to yesterday’s post.)
Schneeweis, Crowder, and Kazemi, The New Science of Asset Allocation
Asset allocation is a topic that has long fascinated me; I’m infatuated with the idea of moving parts that sometimes correlate and sometimes don’t, that sometimes spike and sometimes dive, all being blended into a portfolio that can outperform benchmarks, sometimes significantly. Individual investors are usually told to diversify among stocks, bonds, and perhaps real estate and commodities and periodically rebalance. The recommended percentage to allocate to each asset class may vary somewhat from person to person depending on age and risk tolerance, but the general principle is simple and straightforward. Unlike many other recommendations from financial advisors, it’s not even obviously wrong. It’s just boring and intellectually unsatisfying.
Thomas Schneeweis, Garry B. Crowder, and Hossein Kazemi teamed up to write The New Science of Asset Allocation: Risk Management in a Multi-Asset World (Wiley, 2010). As is increasingly the case with financial book titles, this particular title both overpromises and misleads. The book does not blaze new trails, and although it has ample charts, graphs, and the occasional mathematical formula it will not appeal to hard-core quants. That said, the book provides a very readable overview of the tradeoff between risk and return and the role that asset allocation plays in the inevitable balancing act. It provides some intellectual underpinnings to the process of asset allocation, a process that requires both number-crunching and personal judgment. It is also a valuable source of data--for instance, tables that give benchmark returns along with their standard deviations, information ratios, maximum drawdowns, and correlations to other indices.
The book covers a range of topics--risk measurement; alpha and beta; strategic, tactical, and dynamic asset allocation; and core and satellite investment. It also devotes ample space to alternative investments (hedge funds, managed futures, private equity, real estate, and commodities).
Even though I thoroughly enjoyed reviewing material with which I was familiar and occasionally incorporating new insights, I decided for the purposes of this post to look at a simple way to monitor the risk-return profile of a fund or portfolio—adjusting its volatility. The authors argue throughout the book that risk is multidimensional, so this is nothing more than a “quick and dirty” technique. Assume that the five-year historical volatility on a portfolio’s pro-forma returns has been 10% while during the same period the average implied volatility of the U.S. equity market as measured by the VIX has been 18%. That is, the portfolio’s volatility has been about 55% of the VIX. Assume further that the portfolio manager’s job is to keep this ratio roughly constant, increasing portfolio risk if the VIX falls and hedging out some of the portfolio’s volatility with index futures if the VIX spikes. The authors provide the formula for accomplishing this task, a formula that I’ll file away for the time that I am no longer an active trader but have a multimillion dollar portfolio that needs this kind of adjustment, perhaps in my next life.
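Since the book's exact formula isn't reproduced here, the sketch below is my own illustration of the idea, not the authors' formula: keep portfolio volatility at a fixed ratio of the VIX, hedging away the excess when volatility runs above target. The function name and the hedge arithmetic are assumptions for illustration.

```python
def hedge_fraction(portfolio_vol, vix, target_ratio=0.55):
    """Fraction of portfolio volatility to hedge out (0 = no hedge needed).

    Target volatility is target_ratio * VIX. Above target, hedge (e.g. with
    index futures) enough to bring volatility back; below it, there is room
    to add risk instead.
    """
    target_vol = target_ratio * vix
    if portfolio_vol <= target_vol:
        return 0.0                          # under target: room to add risk
    return 1 - target_vol / portfolio_vol   # share of volatility to hedge away

# VIX at 36 with portfolio vol at 14%: target vol is 19.8%, so no hedge.
print(hedge_fraction(0.14, 0.36))
# VIX back at 18 with the same 14% vol: target is 9.9%, hedge about 29%.
print(hedge_fraction(0.14, 0.18))
```

The asymmetry is the interesting part: a VIX spike can actually *relax* the constraint if portfolio volatility hasn't risen as fast, which is one reason hedging pressure doesn't map one-for-one onto the VIX level.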
Okay, so perhaps my readers aren’t all managing funds (though I know that some are) or sitting on multimillion dollar personal portfolios. This book is valuable even for the intraday S&P futures trader, which is why I brought up the portfolio volatility adjustment example. If the VIX gets to elevated levels, portfolio managers who are managing risk start hedging. It’s important to know the depth of the teams on each side of the trade.
Tuesday, April 6, 2010
Solving fiendish problems
William Byers, in How Mathematicians Think (Princeton University Press, 2007), recalls the interview with Andrew Wiles on Nova. Wiles is the mathematician who proved Fermat’s last theorem after seven years of dedicated work, focus, determination, and, yes, a little help from his friends. (By the way, for Malcolm Gladwell fans, there’s a video from the 2007 New Yorker conference in which he talks about the importance of stubbornness and collaboration in Wiles’s triumph.)
Wiles described the process of solving what I have dubbed fiendish problems: “Perhaps I can best describe my experience of doing mathematics in terms of a journey through a dark unexplored mansion. You enter the first room of the mansion and it’s completely dark. You stumble around bumping into the furniture, but gradually you learn where each piece of furniture is. Finally after six months or so, you find the light switch, you turn it on, and suddenly it’s all illuminated. You can see exactly where you were. Then you move into the next room and spend another six months in the dark. So each of these breakthroughs, while sometimes they’re momentary, sometimes over a period of a day or two, they are the culmination of—and couldn’t exist without—the many months of stumbling around in the dark that precede them.” (p. 1)
Problems in the financial markets aren’t nearly so fiendish as proving Fermat’s last theorem. But haven’t we all been in that dark unexplored mansion? Actually, that’s the wrong question. Aren’t we all still in that mansion? Perhaps we’re in room two or three, perhaps in room six or seven, but I’d wager that most of us still spend more time bumping into furniture than seeing the light.
And this reminds me of the puzzle of four switches and a light bulb from Paul Wilmott’s Frequently Asked Questions in Quantitative Finance (Wiley, 2007; a second edition is now available). “Outside a room there are four switches, and in the room there is a light bulb. One of the switches controls the light. Your task is to find out which one. You cannot see the bulb or whether it is on or off from outside the room. You may turn any number of switches on or off, any number of times you want. But you may only enter the room once.” (p. 383) Tomorrow I’ll share the “trick” to the solution; the day after, I’ll outline the steps necessary to identify the correct light switch.
Monday, April 5, 2010
I’ve lost my keys!
No, but I have lost the URL for the site (compliments of Andrew Lo?) that asks visitors to distinguish between a random chart and a real chart. I’ve also lost the reference (and I suspect it’s in one of my books) that said something along the following lines: the distinction is subtle, but therein lies $$$$. I don’t need the second reference because the concept is emblazoned in my obviously shrinking brain. But I would really appreciate some help on the first front. Just post it as a comment, and many thanks!
Kotok and Sciarretta, Invest in Europe Now!
There are books that stretch my brain and those that don’t. It is often not a function of the books themselves but rather of the breadth of my knowledge. I received three books from Wiley to review and they span the spectrum. I decided to start with the fastest read, Invest in Europe Now! Why Europe’s Markets Will Outperform the U.S. in the Coming Years by David R. Kotok and Vincenzo Sciarretta (Wiley, 2010). The book’s 215 pages of text are divided into three parts: macro issues, stock-specific strategies, and guru chapters. The third part, which comprises almost half the text, is a series of interviews that Sciarretta did with ten experts on the European markets.
I am not a macro investor, nor do I consider myself a seer. Moreover, to me the word “now” means a period much shorter than the time it takes to write and publish a book. So instead of writing a real review, I’m going to use this opportunity to explore two topics: what investment strategies have worked in European equities, and what I would have written had I been considered a guru on investing in Hungary (and given only a paragraph’s worth of space).
There has been extensive research in recent years on building multifactor portfolios. I wrote about a fairly new research paper, "Diversification Across Characteristics," a couple of weeks ago. Kotok and Sciarretta’s book provides data from the European markets.
The Eurozone, the authors admit, does not offer the American investor much diversification when it comes to trading strategies; generally speaking, what works in U.S. equities works in European equities. Using the EURO STOXX index as their database, an Italian firm designed and analyzed several 10-stock portfolios (both single factor and multifactor) over a ten-year period—December 31, 1998 through December 31, 2008. The portfolios were rebalanced every three years, leaving the fateful 2008 period to stand alone.
Over the course of the first nine years the best single-factor strategy was low enterprise-value-to-sales stocks, with relative strength coming in second. EV/sales returned 16.2%, relative strength 14.4%, and the DJ EURO STOXX 3.7%. Once 2008 was added to the previous nine years, performance sagged, but EV/sales was still the best overall at 8.1% and relative strength second best at 6.8%. The index lost 2.9%.
The three multifactor portfolios combined value and previous-year momentum. The winner used as its value component a price-to-cash-flow ratio below 10 and a dividend yield greater than 2%. This multifactor portfolio outperformed all single-factor portfolios over both the nine-year and ten-year time horizons: 19.4% and 11.1% respectively.
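The screens above are easy to sketch in code. The value thresholds (P/CF below 10, yield above 2%) come from the text; everything else here, including the tickers, the data, and the ranking logic, is my own made-up illustration, not the methodology of the firm that ran the study.

```python
# Each stock: (ticker, EV/sales, price-to-cash-flow, dividend yield %,
# previous-year return). All figures are invented for illustration.
stocks = [
    ("AAA", 0.8, 7.0, 2.5, 0.30),
    ("BBB", 1.5, 12.0, 1.0, 0.45),
    ("CCC", 0.6, 9.0, 3.1, 0.10),
    ("DDD", 2.2, 8.5, 2.2, 0.25),
]

# Single-factor portfolio: rank the universe by EV/sales, cheapest first.
ev_sales_rank = sorted(stocks, key=lambda s: s[1])

# Multifactor portfolio: first apply the value filter (P/CF < 10 and
# dividend yield > 2%), then rank the survivors by previous-year momentum.
value_pool = [s for s in stocks if s[2] < 10 and s[3] > 2]
multifactor = sorted(value_pool, key=lambda s: s[4], reverse=True)

print([s[0] for s in ev_sales_rank])  # low EV/sales first
print([s[0] for s in multifactor])    # value survivors, best momentum first
```

Note how the momentum leader (BBB) is excluded from the multifactor list by the value filter; combining factors is a screen-then-rank operation, not a simple average of two rankings.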
To move beyond the contents of this book, if the authors urge us to invest in Europe now, what about Eastern Europe, more specifically Hungary? (I know more about Hungarian politics than anyone without an ounce of Hungarian blood in her should, so now might be an appropriate time to share a couple of observations.) Hungary has been digging itself out of its financial hole with the help of IMF and EU loans and a stringent austerity program. So far so good. The problem is that national elections are coming up very soon (April 11 is the first round) and the prediction is that the opposition party will win, some say by a landslide. To a Western investor this might sound like good news: a right-of-center party will replace the socialists. The problem is that in Hungary things are topsy-turvy: the socialists have been the advocates of democracy, capitalism, and foreign investment while their opponents have a checkered record in areas normally considered investor friendly. For instance, they have illegally broken contracts with foreign companies. In general they are less democratic, less capitalistic, and more nationalistic. I am, of course, making neither a political nor an investment recommendation, just urging due diligence before investing in a country that has already outperformed most world markets. As of April 1 the BUX ETF, registered in Hungary and a tracker of the BUX Index, has a one-year return of 108.82% and has gained 14.55% year to date.
Friday, April 2, 2010
ETFreplay
I may be a little late to the party, but I just found this site and decided to pass along my discovery. ETFreplay.com is glorious eye candy for the ETF investor.
Thursday, April 1, 2010
The fruit of knowledge
This tiny piece from Raymond Smullyan’s This Book Needs No Title (p. 119) is completely off topic but I found it philosophically amusing. So here it is, in toto:
“The reason Adam ate of the fruit of knowledge was that he didn’t know any better. Had he had just a little more knowledge, he would have known enough not to do such a damn fool thing!
“Can we return to the Garden of Eden? Well, if we returned completely, if we entered again into the complete state of innocence, we would no longer have the knowledge to prevent us from eating the apple again. And so again we would fall out of grace. It seems, therefore, that to regard the Garden of Eden, the state of innocence, as the perfect state is simply a mistake. It has the obvious imperfection of being internally unstable and self-annihilating.
“Too bad there weren’t two trees of knowledge in the Garden of Eden, a big tree and a little tree. The only knowledge to be imparted by the little tree should be, ‘It is a mistake to eat of the big tree.’”