Nothing is more powerful than an idea whose time has come. The invisible hand of the market sets in motion a clear chain of causality:
- If you have a good idea (quality)
- experts/gatekeepers will recognize its potential (predictions based on experience)
- and as a result it will turn out to be a success (as people’s decision-making is based on quality)
Textbook examples of inferior products winning out (the QWERTY keyboard, VHS, etc.) show that this is not always the case, as other factors come into play as well (network effects, lock-in, etc.). These examples, however, are treated as mere exceptions to the rule that the best will win. Is this really the case — does the rule even exist?
Do experts know anything?
Given the amount of money spent on experts’ predictions, there is remarkably little evidence that experts have any clue about what the future holds. One explanation could be that experts simply need better predictive processes to improve their forecasting. But as Nassim Taleb argues in his book The Black Swan: The Impact of the Highly Improbable (excellent review by James Surowiecki), the real reason for their poor performance is that the job of forecasting is simply impossible. We assume that historical patterns enable us to extrapolate what will happen. History, Taleb counters, tells us that the radical outliers outside the realm of regular expectations have had the biggest impact, and that these outliers account for just about everything of significance around us.

In a world of classic bell-curve distributions (coined ‘Mediocristan’ by Taleb) most things happen close to the middle and are therefore relatively easy to predict. But the few events that shape the world happen outside the center of the bell curve (in ‘Extremistan’). They represent a break with what has come before, and it is precisely this nature that makes experts fail to predict them. They happen in every domain — bestsellers, technological innovations, stock market returns, etc. — and outweigh everything else. Extracting generalizable stories from these events in hindsight can make us believe that they could have been predicted — which might be emotionally satisfying but is at the same time practically useless.
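The Mediocristan/Extremistan contrast can be made concrete with a toy simulation (my own illustration, not Taleb’s; the distributions and parameters are arbitrary choices): in a thin-tailed world, the top 1% of observations barely move the total, while in a heavy-tailed world they can dominate it.

```python
import random

random.seed(0)
N = 100_000

# Mediocristan: height-like data, normally distributed (mean 170, sd 10)
normal = [random.gauss(170, 10) for _ in range(N)]

# Extremistan: wealth-like data, heavy-tailed (Pareto, shape alpha = 1.1)
heavy = [random.paretovariate(1.1) for _ in range(N)]

def top_share(xs, frac=0.01):
    """Fraction of the total accounted for by the top `frac` of samples."""
    xs = sorted(xs, reverse=True)
    k = int(len(xs) * frac)
    return sum(xs[:k]) / sum(xs)

print(f"top 1% share, Mediocristan: {top_share(normal):.3f}")
print(f"top 1% share, Extremistan:  {top_share(heavy):.3f}")
```

In the Gaussian world the top 1% contribute only slightly more than their headcount (a share near 0.01), whereas in the Pareto world a handful of outliers account for a large fraction of everything — which is why averaging over history tells you little about the next outlier.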
Does quality matter?
But since all media companies employ experts to assess the potential of ideas (music, movies, books, etc.), there must be certain criteria they can agree on that have an impact on success. Maybe it’s quality? Let’s assume we are able to objectively identify the quality of one of these products — does it really matter?
According to a study by three Columbia sociologists, quality’s impact on popularity is very limited:
“In our study, … 14,000 participants … were asked to listen to, rate and, if they chose, download songs by bands they had never heard of. Some of the participants saw only the names of the songs and bands, while others also saw how many times the songs had been downloaded by previous participants. This second group – in what we called the “social influence” condition – was further split into eight parallel “worlds” such that participants could see the prior downloads of people only in their own world. …
In all the social-influence worlds, the most popular songs were much more popular (and the least popular songs were less popular) than in the independent condition. At the same time, however, the particular songs that became hits were different in different worlds, just as cumulative-advantage theory would predict. …
In fact, intrinsic “quality,” which we measured in terms of a song’s popularity in the independent condition, did help to explain success in the social-influence condition. …. But the impact of a listener’s own reactions is easily overwhelmed by his or her reactions to others. The song “Lockdown,” by 52metro, for example, ranked 26th out of 48 in quality; yet it was the No. 1 song in one social-influence world, and 40th in another. Overall, a song in the Top 5 in terms of quality had only a 50 percent chance of finishing in the Top 5 of success.”
Although a 50 percent chance for a quality song to end up in the Top 5 of success doesn’t sound too bad to me (even though the universe consists of only 48 songs), the impact of quality seems fairly limited, and what becomes a hit seems to be determined largely by the previous choices of others, making it a random walk. Or as Cory Doctorow so tellingly put the implications of power laws for media products: “Content isn’t king. … Conversation is king. Content is just something to talk about.” The more people I can talk with about a certain song, movie, or book, the more relevant it is for me and the more likely I am to choose it based on the choices of others. Proof of this can be seen when a devastating book review leads to an uplift in sales almost as large as that from raving reviews.
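The cumulative-advantage dynamic behind these results can be sketched with a toy simulation in the spirit of the study (heavily simplified and scaled down: the quality scores, the choice rule, and the listener count are all made up, and the real experiment worked differently). Each “world” starts identically, yet the same song can finish near the top in one world and near the bottom in another.

```python
import random

random.seed(1)
N_SONGS, N_LISTENERS, N_WORLDS = 48, 2_000, 8

# Hypothetical "intrinsic quality" scores, identical in every world.
quality = [random.random() for _ in range(N_SONGS)]

def run_world(social_weight=1.0):
    """One social-influence world: each listener picks a song with
    probability proportional to its quality plus a cumulative-advantage
    term (its prior download count)."""
    downloads = [0] * N_SONGS
    for _ in range(N_LISTENERS):
        weights = [quality[s] + social_weight * downloads[s]
                   for s in range(N_SONGS)]
        choice = random.choices(range(N_SONGS), weights=weights)[0]
        downloads[choice] += 1
    return downloads

# Final rank of song 0 (1 = most downloaded) in each parallel world
for w in range(N_WORLDS):
    downloads = run_world()
    rank = sorted(downloads, reverse=True).index(downloads[0]) + 1
    print(f"world {w}: song 0 finished at rank {rank}")
```

Because early downloads feed back into later choices, small random differences between worlds get amplified into large, durable gaps in popularity — the “Lockdown” pattern (No. 1 in one world, 40th in another) falls out of the feedback loop itself, not out of quality.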
Nobody knows anything?
So if experts’ predictions aren’t worth much for finding the next hit, and the choices of others matter just as much as quality, how much can I know in advance about the chances of success? Unfortunately, not very much.
Surowiecki offers some hope, however: by involving a diversified, independent, and decentralized crowd of people and aggregating their evaluations before publishing, we have a very good chance of coming up with significantly better predictions. Or, in Surowiecki’s reply to screenwriter William Goldman’s famous assertion that “Nobody knows anything”: “But everybody, it turns out, may know something.”
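Why aggregating a diverse, independent crowd helps can be shown with a minimal sketch (my own illustration of the statistical idea, not Surowiecki’s method; the true value and noise level are arbitrary): when individual errors are independent and roughly centered on the truth, they cancel out in the average, so the crowd’s aggregated estimate beats the typical individual.

```python
import random
import statistics

random.seed(2)
TRUE_VALUE = 100.0   # the quantity the crowd is estimating (arbitrary)
N_PEOPLE = 1_000

# Hypothetical crowd: each guess is noisy, but the errors are
# independent and diverse rather than systematically biased.
guesses = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N_PEOPLE)]

typical_error = statistics.median(abs(g - TRUE_VALUE) for g in guesses)
crowd_error = abs(statistics.mean(guesses) - TRUE_VALUE)

print(f"typical individual error: {typical_error:.2f}")
print(f"aggregated crowd error:   {crowd_error:.2f}")
```

The catch, and the reason the hedge matters, is the independence assumption: the social-influence worlds above are exactly what happens when that assumption breaks, and the crowd stops canceling its errors and starts amplifying them.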