Nine out of 10 active managers underperformed their benchmark for the year ended June 30, 2012, says the just-released S&P Indices Versus Active (SPIVA®) study. Does this strike you as over the top, and if so do you suppose it’s because the study is flawed?
Do you really think investment managers can all go brain dead? That they have all taken stupid pills? Of course not. The SPIVA study is seriously flawed, as are all studies that involve peer group comparisons. So it's dumb to believe it, even though everyone seems to be talking about it.
The fact is that classification bias in peer groups determines winners and losers. A manager's rank has little to do with genuine success or failure, and the result is bad decisions: peer group evaluations lead to hiring losers and firing winners.
Let me clarify what I mean by classification bias. "Bias" is a systematic, as opposed to random, distortion of a statistic resulting from the sampling procedure. Peer groups are very poor samples of investment manager approaches.
The three central questions of manager due diligence are (1) What does this manager do?, (2) Does he do it well?, and (3) Why? The second question addresses the fact that most managers can be replicated with a passive collection of mutual funds and/or exchange-traded funds, so we want to know if the manager adds enough value above this passive alternative blend to justify his fee, taxes, etc.
But the unfortunate reality is that this question is usually addressed with a peer group comparison, which is won or lost primarily because of the manager's differences rather than any value added. For example, an index hugger will look good (earn a good peer group rank) when his style is in favor, because he will be pure in a land of the impure. But should he get hired as an active manager?
Classification bias causes managers to win or lose solely because they're different rather than better.
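The effect is easy to demonstrate with a toy simulation (a hypothetical sketch of my own, not the SPIVA methodology): give every manager in a "large-cap blend" peer group exactly zero skill, but let them differ in how much they tilt toward growth versus value. In a year when growth beats value, the top of the peer group fills up with growth-tilted managers and the bottom with value-tilted ones, even though no one added any value. All of the numbers below (spread sizes, noise level) are made-up assumptions for illustration.

```python
import random

# Hypothetical peer group: every manager has ZERO alpha (no skill),
# but each one tilts differently toward growth vs. value.
random.seed(42)

N = 500                   # managers in the peer group
market_return = 0.08      # shared benchmark return (assumed)
style_spread = 0.06       # growth beat value by 6 points this year (assumed)

managers = []
for _ in range(N):
    growth_tilt = random.uniform(-1.0, 1.0)  # -1 = pure value, +1 = pure growth
    noise = random.gauss(0.0, 0.02)          # luck, not skill
    ret = market_return + growth_tilt * style_spread / 2 + noise
    managers.append((ret, growth_tilt))

# Rank the peer group by realized return, best first.
managers.sort(reverse=True)

top_decile = managers[: N // 10]
bottom_decile = managers[-(N // 10):]

avg_tilt_top = sum(t for _, t in top_decile) / len(top_decile)
avg_tilt_bottom = sum(t for _, t in bottom_decile) / len(bottom_decile)

print(f"avg growth tilt, top decile:    {avg_tilt_top:+.2f}")
print(f"avg growth tilt, bottom decile: {avg_tilt_bottom:+.2f}")
# Nobody here has any skill, yet the "winners" are simply the managers
# whose style happened to be in favor -- different, not better.
```

Run it and the top decile shows a strong positive growth tilt while the bottom decile tilts toward value; flip the sign of `style_spread` and the same zero-skill managers trade places. That is classification bias in miniature.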