Archive for March, 2007

Strategy begins with tactical asymmetry

Friday, March 30th, 2007

What happens when every MBA has read Porter? Or when every general has read Clausewitz and Sun Tzu? What happens to strategy when everyone knows about it?

What’s left is the exploitation of tactical asymmetry — usually in the form of a competitor’s inflexibility.

To see why, try a thought experiment: what if a natural monopoly opportunity is revealed simultaneously to two companies of equivalent resources and competitive position?

If firms were rational agents with equivalent information, then each would invest one half of the present value of the monopoly over its lifetime, and NPV across that entire industry would be near zero, no matter who won.

But real-world agents are not rational, and the bias is toward overinvestment. In particular, individuals within firms have personal incentives to overestimate market size and odds of success, and to abandon failed efforts only reluctantly. As a result, real-world versions of the thought experiment tend to produce negative NPV.

This negative NPV results from the symmetry of the two positions: even though the industry may be a highly profitable natural monopoly in the steady state, no firm can reach a dominant position. Yet it continues to invest as if it can.
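To make the arithmetic concrete, here is a toy version of the thought experiment. The dollar figures, and the assumption that the odds of winning track each firm’s share of total spending, are invented purely for illustration:

    # Toy model of the two-firm monopoly race; all figures are invented.
    # Assume the monopoly is worth $100M (present value) to whoever wins,
    # and that each firm's odds of winning track its share of total spending.
    PV_MONOPOLY = 100.0  # $M

    def expected_npv(my_spend, rival_spend):
        """Expected NPV to one firm under the proportional-odds assumption."""
        p_win = my_spend / (my_spend + rival_spend)
        return p_win * PV_MONOPOLY - my_spend

    # Rational, symmetric case: each firm commits half the prize.
    print(expected_npv(50.0, 50.0))   # 0.0  -> industry-wide NPV is roughly zero

    # Overconfident case: each firm overrates its odds and spends $70M.
    print(expected_npv(70.0, 70.0))   # -20.0 -> industry-wide NPV turns negative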

Historical examples abound. English canals and American railroads, obviously natural monopolies, yielded many investment failures in part because of such overinvestment.

The symmetry is the problem. So, to have a reasonable expectation of positive NPV in chasing a monopoly, you must begin with a significant tactical asymmetry over competitors — resources (money, incumbency), information (insight, access, surprise), or preferably all of the above.

For a venture capitalist, this would be the second half of an investment evaluation. The first half would be “is the market likely to be a monopoly (i.e., to yield supernormal profits to the winners) at all?” If so, then to have fair odds of success, there must be a tactical advantage that can be exploited as a strategy.

MSN Search Is in Free Fall

Friday, March 30th, 2007

MSN Search (or Live Search, or whatever they call it this month) now accounts for under 6% of my search engine traffic. In fact, even this is overstated, because it includes all links from “unknown search engines.”

Google accounts for 68% to 91% of my search engine traffic, depending upon the site.

My sites are all ranked #1 for their primary search phrases, and my site content is highly diversified (video games, Palm image software, and investment newsletters), so my results should be reasonably representative of search traffic generally. The strong implication is that Microsoft’s share of search traffic has fallen by more than half in the past year.

This may explain their desperation to attract advertisers.

Index funds are parasites

Thursday, March 29th, 2007

Mr. Bogle, don’t take this the wrong way. Index funds are a great invention: they are simple and tax efficient, they perform well, and they push down investment management fees generally. All of these things benefit individual investors.

“Parasite” refers only to the fact that index funds benefit from market efficiency, while providing none.

Index funds track an overall market index by maintaining an appropriately weighted portfolio that mirrors the index. This performs well by riding the rising tide of the general stock market at very low cost.
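For the mechanics: a cap-weighted index fund simply holds each stock in proportion to its market capitalization. A minimal sketch, with tickers and figures invented for illustration:

    # Minimal cap-weighted portfolio; tickers and market caps are invented.
    market_caps = {"AAA": 300e9, "BBB": 150e9, "CCC": 50e9}

    total_cap = sum(market_caps.values())
    weights = {t: cap / total_cap for t, cap in market_caps.items()}

    fund_assets = 10e6  # a hypothetical $10M fund
    holdings = {t: w * fund_assets for t, w in weights.items()}

    print(weights)   # {'AAA': 0.6, 'BBB': 0.3, 'CCC': 0.1}
    print(holdings)  # dollars held per stock, mirroring the index

Nothing in the calculation asks whether any stock is cheap or expensive; new money is allocated by market cap alone.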

But imagine if all equity capital were invested through index funds. Then index funds would comprise the entire market. No mechanism would exist for buying stocks below intrinsic value, and selling above intrinsic value. Markets would not be efficient, and stocks would follow a path that bore no relationship to value.

That imaginary situation cannot occur, because in the real world, “value arbitrageurs” always step in to restore market efficiency. But the fact is that index funds are nonmarket entities. Realistically, as index funds take an ever larger fraction of total market capitalization, a market distortion might begin to appear in the form of increased volatility.

Investors extrapolate. Thus, when markets have gone up recently, they pile into index funds, which then automatically buy stocks without regard to price. If this is done in sufficient volume, stock prices generally will rise to historically unusual price/earnings ratios. This creates a self-reinforcing state: rising stocks cause rising index investments, which cause rising stocks.

But individual investors also fear loss more than they desire gain. Thus, when a general market decline begins, redemptions accelerate. If index funds accounted for a large percentage of all stocks, and if price-earnings ratios were at historically high levels, then such a decline/redemption cycle could also be self-reinforcing. “Self-reinforcing decline/redemption cycle” is a very polite way of describing a market crash.
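A toy feedback loop makes the point. Every coefficient below is invented; the only claim is that when flows chase returns and flows move prices, a small shock compounds in either direction:

    # Toy flow/price feedback; every coefficient here is invented.
    def simulate(periods, flow_sensitivity, price_impact, shock):
        price, last_return = 100.0, shock           # start from one external shock
        for _ in range(periods):
            flow = flow_sensitivity * last_return   # investors extrapolate
            last_return = price_impact * flow       # blind buying/selling moves price
            price *= 1 + last_return
        return round(price, 1)

    print(simulate(12, 2.0, 0.6, +0.02))  # a small up-shock compounds into a melt-up
    print(simulate(12, 2.0, 0.6, -0.02))  # a small down-shock compounds into a slide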

But that can’t happen, the efficient market theorist might argue, because the remaining non-index investors will simply arbitrage all that out, restoring efficient pricing. OK, let’s assume the theorist is right there. In that situation, who makes all the money? Hint: it’s not the index fund investor. Suddenly, they are on the wrong side of the efficiency equation. The arbitrageur makes the money by betting against the blind money going into or out of the market.

Thus it seems two things are true. First, index funds cannot grow past a certain point without ceasing to perform as they have. The change seems likely to emerge not as lower total returns, but rather as much higher volatility.

Second, for index funds to work, a mechanism of market efficiency must exist in the form of managed investments. That role might be small — perhaps only 10% of total market capitalization — but it must exist. And the larger the fraction of total capital under index fund management, the greater and more likely the distortion, and thus the higher the potential returns to traditional managed investments would become.

Thus, in the long run, index funds may actually benefit the few that remain as traditional investment managers.

After Stock Screening

Thursday, March 29th, 2007

Computer-driven stock selection began in the late 1960s, most notably with Ed Thorp’s firm, Convertible Hedge Associates. Thorp applied information theory principles to portfolio allocation, and sought simple convergence plays, e.g., warrants underpriced relative to the underlying stock.

The 1970s were a heyday for such strategies. Small but systematic market inefficiencies lurked everywhere. Computers were not widely available, so the few who had access and capability were at an advantage.

By the mid-1990s, most such opportunities had been competed away as understanding became widespread. But the democratization of data and computation was not finished.

By 2000, a small fund could afford fundamental and other data from specialized services like Compustat ($12,000/yr).

By 2002, an individual could obtain reasonably accurate U.S. company and stock data from free services like MSN Money. One could also filter the data set for free, to find all companies fitting a particular profile.

In 2006, these capabilities were spreading to worldwide markets, beginning with London.

It’s reasonable to assume that, very soon, investment managers will have no information advantage in terms of data and screening capability. It will all be instant and free, for everyone.
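As an illustration of how little is left of the screening advantage, here is the kind of filter anyone can now run. The data file, column names, and cutoffs are hypothetical; any free fundamentals export would do:

    # Hypothetical value screen; the CSV layout and thresholds are invented.
    import csv

    def screen(path, max_pe=12.0, min_roe=0.15, max_debt_to_equity=0.5):
        """Return tickers whose fundamentals fit a simple value profile."""
        hits = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if (float(row["pe_ratio"]) <= max_pe
                        and float(row["return_on_equity"]) >= min_roe
                        and float(row["debt_to_equity"]) <= max_debt_to_equity):
                    hits.append(row["ticker"])
        return hits

    print(screen("fundamentals.csv"))  # a candidate list, not a decision

The screen produces candidates; the intrinsic-value judgment, the part that is still scarce, comes afterward.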

What’s left? Plenty. As always, strategy — defined as the profit that remains when your competitors execute perfectly — provides the direction.

Financial industry structure always forces a short-term bias. Because clients can pull their money out any time, there is irresistible pressure for investment managers to focus on short-term gain. This is a competitive inflexibility shared by almost the entire investment management industry. As a result, long-term strategies are likely to retain a larger advantage, for longer, than any other strategy.

Short-term bias forces non-fundamental strategies. When a fund manager is under pressure to hang onto clients by delivering an extra 100 basis points this month, he has no time to wait for intrinsic value to be recognized. That can take months or years, and he’ll be fired or out of business by then. So his effort inevitably goes to non-fundamental strategies, those that try to guess short-term price movements. Individual investors tend to do the same, simply due to impatience. Thus intrinsic value strategies are less competitively exposed.

Index funds are a non-fundamental strategy. Index funds simply buy everything — they do not have the flexibility to stay out. Again, intrinsic value strategies are less competitively exposed.

Non-fundamental strategies are destabilizing in aggregate. Relative-price investment schemes — those that buy or sell based on a stock’s price relative to other stocks — are popular, yet distorting and destabilizing in the aggregate. This definition emphatically does include index funds. Since such “investors” are simply reacting to one another, trends are amplified. This is the reason stock prices are not normally distributed (the “fat tails” distribution problem). It is the source of bubbles and panics, and will always create opportunity for the investor that makes decisions only based on price relative to intrinsic value.
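A crude simulation illustrates the fat-tails point. This is a stylized stand-in, not anyone’s actual trading model: when the size of today’s move feeds on the size of yesterday’s, the return distribution develops much fatter tails than independent random returns:

    # Toy illustration of fat tails from feedback; parameters are arbitrary.
    import random, statistics

    def kurtosis(xs):
        m = statistics.fmean(xs)
        var = statistics.fmean([(x - m) ** 2 for x in xs])
        return statistics.fmean([(x - m) ** 4 for x in xs]) / var ** 2

    random.seed(0)
    n = 200_000
    independent = [random.gauss(0, 1) for _ in range(n)]

    feedback = [0.0]
    for _ in range(n - 1):
        # Today's volatility rises with the size of yesterday's move: a crude
        # stand-in for traders reacting to whatever just happened.
        sigma = (0.5 + 0.5 * feedback[-1] ** 2) ** 0.5
        feedback.append(sigma * random.gauss(0, 1))

    print(kurtosis(independent))  # about 3, the normal-distribution benchmark
    print(kurtosis(feedback))     # well above 3: fat tails, more extreme days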

In summary, what’s left is intrinsic value, always intrinsic value. If it says to stay out of the market altogether, then so be it. But that won’t always be true.

Windows Vista – Orphan in the Sky

Wednesday, March 28th, 2007

A popular theme of science fiction writers is a sort of high-tech cargo cult, in which ignorant descendants of an advanced culture live, dependent and uncomprehending, on the towering technical achievements of their forefathers. This idea turns up in everything from Heinlein (“Orphans of the Sky”) to Star Trek (#38, “The Apple”; #63, “For the World Is Hollow and I Have Touched the Sky” — thanks to Eric for the correction).

There is historical precedent. Radiocarbon dating of the Egyptian pyramids yielded an archaeological surprise: the biggest and most advanced ones came first. Thereafter, they became progressively more primitive, as successive generations essentially made imperfect, poorly understood copies of the original.

One might argue the Enlightenment was similar: a new reasoning methodology fomented by Descartes, Galileo and Bacon triggered a blinding flash of collective insight, fading ever since, even as we discover ever more.

A more immediate example appears to be Windows Vista and, more generally, Microsoft product development. Here are some forgotten technologies of the Forefathers of Microsoft:

Compatibility engineering. This is the primary complaint about both Vista and AdCenter. These two products are arguably the top two strategic priorities for the company, so if both suffer the same failing, the problem is systemic.

Sustained initiatives. Traditionally, companies feared Microsoft in part because it could sustain a strategic focus for years at a time, chipping away at a market until it took over. This characterized Excel in the 80s, Windows in the 90s, and Xbox in the 00s. This seems less true today. What looks like sustained focus today is merely a group of insular departments just doing what they did last year, over and over again.

Speed. Say what you want about MSFT, but in the 80s and 90s, by and large, they wrote fast code. They were early commercial adopters of C/C++, they knew downcoding, they knew about designing data structures for speed, etc. It’s hard to see those strengths in the newer products. AdCenter is slow compared to lightning-quick alternatives (Google). Vista is slow compared to alternatives (Intel Macs). In both cases, no advantage is offered in return for this slowness.

The overall impression is that Microsoft is now a pilotless, decaying spaceship, carrying the childlike descendants of its brilliant forebears to a distant, forgotten destination.

AdCenter: desperate, but not serious

Friday, March 23rd, 2007

Free advertising offers I’ve received from AdCenter:

July 2006: $30
Feb 2007: $100
Mar 2007: $200

Tables don’t communicate the drama as well as a graph:

On current trends, within a few months, it will be cheaper for MSFT to just buy out my business than to convince me to try their ad system.

At any rate, $200 is a lot of money, so I visited AdCenter and found the front end has been completely overhauled, but contains the same amateur-hour website errors.

  1. Incompatible with Safari browser. Safari is inconsequential for the general population (probably 2%), but widely used by web designers (probably 30%) — AdCenter’s primary target audience. But OK, I can use Firefox.
  2. Fails silently on incompatible browsers. AdCenter doesn’t bother to detect and warn of incompatible browsers — it simply fails in various comical/nonsensical ways, halfway through whatever task you were attempting. Browser detection (“sorry, this site is incompatible with Safari”) is vanishingly easy to implement, so it’s utterly amateur for an organization of MSFT’s size to ignore it.
  3. Credit card entry page is 1200 pixels wide. Good thing I upgraded my monitor last month. Clearly no one outside the ivory tower tested this app, or they would have noticed most people don’t have a monitor this big.
  4. On Firefox, the voluminous “Terms and Conditions” legalese appears in a tiny text box only 2 lines high by 20 characters wide. It’s like reading Tolstoy through a keyhole.
  5. On credit card validation, received mysterious “Alert: www.contentmedianetwork.com has sent an incorrect or unexpected message. Error Code: -12263.” I click “OK,” and am greeted with “Thanks for signing up!” So everything is OK now? Who was contentmedianetwork.com? What was error -12263? The mysterious unexplained errors are nothing like a normal Web experience. They remind me more of… Windows.
  6. Inside the app’s campaign center, web pages load forever, evidently to keep a live connection with the server. How can that scale? They keep a connection open for every user simultaneously?

It is absolutely shocking how far behind Microsoft is in this arena.

Obesity — why now?

Tuesday, March 20th, 2007

The previous post (“Adaptive Morality”) implied that the American obesity epidemic results from one of two things: (i) immoral behavior, or (ii) some new cause which currently has low awareness.

The latter seems more likely. Why? One reason is that obesity arose abruptly in America only in the past 20 years, after over 60 years of gustatory plenty. But the bigger clue is a related, very odd epidemic: thin parents with fat children. Since parents exercise some control over their children’s diet, slim parents would usually tend to produce slim children, barring some new obesity cause with heretofore low awareness.

Here are some possibilities. These are unproven hypotheses, merely correlated trends: all of them have arisen in the past 20 years, exactly concurrent with the obesity explosion.

  • “First-person shooter” video games. I suspect that the chronically activated adrenal response from hours of fight-or-flight action games may flood susceptible individuals with cortisol, causing weight gain. (3/21/07 Addendum: There is some evidence obesity correlates with video games but not television — consistent with the adrenal argument.)
  • Unregulated, powerful, poorly understood stimulant cocktails like Red Bull. Again, I suspect, but can’t prove, that these may chronically activate adrenaline and flood one with cortisol.
  • Prescribed child stimulants like Ritalin, again because of a potential cortisol link. (3/21/07 Addendum: Eric notes that child stimulants are actually prescribed for weight loss. Since stimulants are known to fail long-term — all weight is regained when drugs are stopped — we seem to have made a case for immoral behavior by parents or their doctors.)

Correlation is not causation. But it would be worth some research.

Adaptive Morality

Tuesday, March 20th, 2007

Morality is a mutually agreed behavioral rule set that arises among groups to assign immediate personal benefits and costs to positive and negative externalities. Specific rules, then, adapt to the specific externalities present in a given group, as awareness of those externalities develops.

Examples. Random killing always has negative externalities for all groups, so murder remains universally wrong. But when penicillin, birth control and welfare are invented, the externalities of STDs and starving unwed mothers are reduced, so premarital sex becomes de facto acceptable after millennia of taboo.

Obesity has both a personal externality (you live poorly and die early) and a public externality (health costs). A newly developed country like South Korea gains awareness of this only over time, so the moral rules don’t respond immediately. One would therefore expect a brief explosion of obesity, followed by a recovery.

But wait a minute. For decades, the US has enjoyed a diet rich in fat and sugar, and practically unlimited in quantity. Beginning about 40 years ago, three simple rules (eat only when hungry, no sugar, no saturated fat) became widely known and publicly promoted. So why did the obesity epidemic begin only 15 years ago? Are we becoming immoral?

Possibly, but there is another potential explanation: some new cause of obesity, for which there is low awareness.

Public Policy as a Design Challenge

Wednesday, March 14th, 2007

The goal of government is to maximize quality of life for its citizens. The functional requirements for achieving this are covered pretty well in the Constitution: basically justice, domestic safety and economic opportunity.

Serving those requirements is an infinite variety of potential actions, so it makes sense to prioritize based on return on investment to the nation collectively. This provides the greatest good, for the greatest number, in the shortest time. High-ROI policies should be done immediately, and low-ROI policies should be delayed. Extreme examples:

HIGH ROI – Teach people to read, write, and use the Internet.

LOW OR NEGATIVE ROI – Pay farmers not to grow things.

The top priority should be for policies that simultaneously decrease spending and increase quality of life. The number of such options is surprisingly large. To name just a few:

  • All government document filing should be via Web browser, with no surcharge. This by itself could save a double-digit percentage of all government spending, through decreased staffing. For example, to renew your driver’s license in California, the fee is currently higher over the Internet than in person, even though the in-person service costs 10 times more to provide. The in-person service should cease to exist — all renewals should be web-based. This should also be true for filing court documents, paying property taxes, and a thousand other government interactions. Billions of hours are wasted on both sides, at public expense, by making you write something down on paper and deliver it to a live person in an office, who then transcribes the info into a computer. The money saved should be applied to providing public Web access at libraries and government offices, and to educating everyone in how to conduct such business over the Web — a very high-ROI use of funds.
  • Phase out services provided in non-English languages. Not that English is superior — on the contrary, Spanish is objectively better, as it’s more grammatically consistent, easier to learn, and easier to pronounce. (Bonus: it’s also better for poetry, because so many words rhyme.) No, the reason is simply that standardization yields a huge return on investment, and most people here already speak English. Nearly all other large nations with a diverse linguistic heritage — India, the Philippines, Indonesia, and many countries in Africa — standardize on a single language for government and commerce. Not to do so here means that we pay money to decrease our own economic output, which is, to be succinct, crazy. The money saved should be applied to providing free English lessons to anyone who can prove legal U.S. resident status — again, a very high-ROI use of funds.
  • Derive military vehicles from standard commercial platforms. Currently, army jeeps and trucks are developed as they were in World War II — purely to “effectiveness” specifications, with little regard for return on investment. As a result, in 90% of real-world applications, a Humvee provides less utility than a loaded Chevy Suburban, but costs three times as much because it is produced in low volume. Simply armoring a Suburban saves money, increases reliability, and lets soldiers travel in plush leather comfort. Kidding about the leather.

The point here is that we are missing obvious opportunities to improve output, benefiting the lives of all Americans, for free or even negative cost.