quillp is live – our blog has moved

August 10, 2008

After many months of hard work, quillp is finally live. Check it out at http://www.quillp.com

Our blog has also moved to its new home at http://blog.quillp.com, so there will be no further posts here.

Looking forward to welcoming you on quillp – where books find friends


Skype is down – FastTrack came back to haunt them…

August 17, 2007

Some of you might remember a year in the distant past of internet time: 2002. Only five years have passed, but things could hardly have changed more: the record labels and the RIAA were fighting Kazaa and Morpheus, trying to jam shut Pandora’s box (well, so much hasn’t changed here, after all…), and telcos were still charging ridiculous rates for long-distance calls while former wizard and analyst darling Bernie Ebbers saw his WorldCom empire file for Chapter 11.

Today Kazaa and Morpheus are gone, while the record labels are still fighting legal battles against their file-sharing successors, LimeWire et al., still trying to defend an outdated business model by legal and regulatory means, just like the telcos who saw their profitable long-distance business crumble under the pressure of VoIP. The pressure on both of these industries was, and is, to no small degree exerted by one protocol: FastTrack. And once again the protocol has come back to haunt the ones who use it. This time around it is Skype, the VoIP company everybody loves. Until now.

Just like Kazaa and Morpheus, the major file-sharing platforms in 2002, Skype is based on P2P technology, which should make it resistant to failures thanks to its decentralized nature: users’ computers connect to each other directly, peer-to-peer. So far the theory, and Kazaa’s courtroom argument back in 2002, which went like this: since Kazaa is based on the FastTrack P2P protocol, there is nothing Kazaa can do to stop the illegal file-sharing, as a P2P network cannot be shut down centrally. But while Kazaa was distributing its file-sharing client along with a nice set of spyware, Morpheus came without spyware; both were using the Kazaa-owned FastTrack protocol. For that reason Morpheus’s user base was much bigger than Kazaa’s. And while the courts and labels were, sort of, buying Kazaa’s argument of an unstoppable P2P network, Kazaa did the unthinkable to kill the competition from Morpheus: it released a new version of FastTrack, and from one minute to the next all Morpheus users were disconnected from its “P2P” network.

A great strategic move to kill off the competition? Not really. If FastTrack is soooo P2P and therefore beyond any central control by Kazaa, how can Kazaa kill off Morpheus simply by releasing a new version of the protocol? The courts weren’t buying Kazaa’s argument anymore, and the labels wanted millions in compensation.

The Kazaa founders moved on to found Skype on the same protocol that Kazaa was based on, settling the labels’ Kazaa claims out of the millions they made with the $4.1bn takeover of Skype by eBay in 2005. Since yesterday, the millions of Skype users who weren’t around in the Kazaa/Morpheus days of the FastTrack protocol know what relying on a network of central nodes can mean: eBay is seeing its worst-case scenario come true, and Skype has been down for over 24 hours now. Morpheus never really recovered from that blow, as enough competitors were just around the corner. It’s no different for Skype: Google Talk, Yahoo Messenger, etc. are not that bad after all once you’ve made the switch. And more than 24 hours of downtime should be just as much time as users need to do so…


Popularity – just random or science of success?

July 13, 2007

Nothing is more powerful than an idea whose time has come. The invisible hand of the market sets in motion a clear chain of causality:

  1. If you have a good idea (quality)
  2. experts/gatekeepers will recognize its potential (predictions based on experience)
  3. and as a result it will turn out to be a success (as people’s decision making is based on quality)

Textbook examples of inferior products winning out (the QWERTY keyboard, VHS, etc.) show that this is not always the case, as other factors come into play as well (network effects, lock-in, etc.). These examples, however, are usually mentioned as mere exceptions to the rule that the best will win. But is that really so? Does the rule even hold in general?

Do experts know anything?

Given the amount of money spent on experts’ predictions, there is remarkably little evidence that experts have any clue what the future will hold. One explanation could be that experts simply need a better predictive process to improve their forecasting. But as Nassim Taleb suggests in his book The Black Swan: The Impact of the Highly Improbable (excellent review by James Surowiecki), the very reason for their poor performance is that the job of forecasting is simply impossible. We assume that historical patterns enable us to extrapolate what will happen. History, Taleb argues, tells us the opposite: the radical outliers outside the realm of regular expectations have had the biggest impact, and these outliers account for just about everything of significance around us. In a world of classic bell-curve distributions (coined ‘Mediocristan’ by Taleb) most things happen close to the middle and are therefore relatively easy to predict. But then there are a few events that shape the world and happen outside the center of the bell curve, in ‘Extremistan’. They represent a break with what has come before, and because of this very nature experts fail to predict them. They happen in every domain (bestsellers, technological innovations, stock market returns, etc.) and outweigh everything else. Extracting generalizable stories from these events in hindsight can make us believe they could have been predicted, which might be emotionally satisfying but is practically useless.
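To make the difference concrete, here is a minimal simulation sketch; the choice of distributions and all parameters are my own illustrative assumptions, not Taleb’s. In a thin-tailed world the largest single observation barely registers, while in a heavy-tailed world one outlier can account for a substantial share of the whole total.

```python
# Mediocristan vs. Extremistan: a toy comparison (illustrative
# parameters only). In a thin-tailed (normal) world no single sample
# matters much; in a heavy-tailed (Pareto) world a lone outlier can
# dominate the sum of everything else.
import random

random.seed(42)
N = 100_000

# Mediocristan: e.g. human heights, roughly normally distributed.
heights = [random.gauss(170, 10) for _ in range(N)]

# Extremistan: e.g. book sales, sketched here as a Pareto distribution
# with a heavy tail (alpha close to 1 lets outliers dominate).
sales = [random.paretovariate(1.1) for _ in range(N)]

for name, xs in [("Mediocristan (heights)", heights),
                 ("Extremistan (sales)", sales)]:
    share = max(xs) / sum(xs)
    print(f"{name}: largest single observation = {share:.4%} of the total")

# Typical result: the tallest 'person' contributes a vanishing fraction
# of total height, while the single biggest 'bestseller' alone can
# account for a sizable slice of all sales.
```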

Does quality matter?

But since all media companies employ experts to assess the potential of ideas (music, movies, books, etc.), there must be some criteria everybody can agree have an impact on success. Maybe it’s quality? Let’s assume we were able to objectively identify the quality of one of these products: would it really matter?

According to a study by three Columbia sociologists (Matthew Salganik, Peter Dodds, and Duncan Watts), quality’s impact on popularity is very limited:

“In our study, … 14,000 participants … were asked to listen to, rate and, if they chose, download songs by bands they had never heard of. Some of the participants saw only the names of the songs and bands, while others also saw how many times the songs had been downloaded by previous participants. This second group – in what we called the “social influence” condition – was further split into eight parallel “worlds” such that participants could see the prior downloads of people only in their own world. …

In all the social-influence worlds, the most popular songs were much more popular (and the least popular songs were less popular) than in the independent condition. At the same time, however, the particular songs that became hits were different in different worlds, just as cumulative-advantage theory would predict. …

In fact, intrinsic “quality,” which we measured in terms of a song’s popularity in the independent condition, did help to explain success in the social-influence condition. …. But the impact of a listener’s own reactions is easily overwhelmed by his or her reactions to others. The song “Lockdown,” by 52metro, for example, ranked 26th out of 48 in quality; yet it was the No. 1 song in one social-influence world, and 40th in another. Overall, a song in the Top 5 in terms of quality had only a 50 percent chance of finishing in the Top 5 of success.”

Although a 50 percent chance for a quality song to end up in the success Top 5 doesn’t sound too bad to me (even though the universe consists of only 48 songs), the impact of quality seems to be fairly limited, and what becomes a hit seems to be determined largely by the previous choices of others, making it something of a random walk. Or, as Cory Doctorow so tellingly put the implications of power laws for media products: “Content isn’t king. … Conversation is king. Content is just something to talk about.” The more people I can talk with about a certain song, movie, or book, the more relevant it is for me and the more likely I am to choose it based on the choices of others. Proof of this can be seen when a devastating book review lifts sales almost as much as a raving review does.
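The cumulative-advantage dynamic the study describes is easy to sketch in code. Below is a toy re-creation, not the researchers’ actual setup: the social-influence weight, the number of listeners, and the choice rule are all my assumptions. Forty-eight songs get fixed ‘quality’ scores, and in each of eight parallel worlds every new listener is partly swayed by the download counts left by earlier listeners. Different worlds routinely crown different No. 1 songs.

```python
# A toy version of the 'social influence worlds' experiment
# (illustrative parameters, not the study's actual design).
import random

random.seed(7)
NUM_SONGS, LISTENERS, WORLDS = 48, 5000, 8
quality = [random.random() for _ in range(NUM_SONGS)]  # fixed intrinsic appeal
Q = sum(quality)

def run_world(social_weight=0.8):
    """Simulate one world: listeners mix quality with current popularity."""
    downloads = [1] * NUM_SONGS  # every song starts with one download
    for _ in range(LISTENERS):
        total = sum(downloads)
        weights = [(1 - social_weight) * (q / Q) + social_weight * (d / total)
                   for q, d in zip(quality, downloads)]
        song = random.choices(range(NUM_SONGS), weights=weights)[0]
        downloads[song] += 1
    return downloads

# Rank songs by intrinsic quality (1 = best) for comparison.
quality_rank = {s: r + 1 for r, s in enumerate(
    sorted(range(NUM_SONGS), key=quality.__getitem__, reverse=True))}

for w in range(WORLDS):
    counts = run_world()
    hit = max(range(NUM_SONGS), key=counts.__getitem__)
    print(f"world {w}: No. 1 is song {hit:2d} (quality rank {quality_rank[hit]})")
```

Quality nudges the odds, but early random downloads compound, so the same song can be a smash in one world and an also-ran in the next.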

Nobody knows anything?

So if experts’ predictions aren’t worth much for finding the next hit, and the choices of others matter just as much as quality, how much can I know in advance about the chances of a success? Unfortunately, not very much.

Surowiecki offers some hope, however: by involving a diversified, independent, and decentralized crowd of people and aggregating their evaluations in advance of publishing, we have a very good chance of coming up with significantly better predictions. Or, as Surowiecki replies to screenwriter William Goldman’s famous assertion that “nobody knows anything”: “But everybody, it turns out, may know something.”
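Surowiecki’s point is easy to illustrate with a quick sketch; the numbers below are my own assumptions. If the judges’ errors are independent, averaging a thousand mediocre estimates yields a prediction far more accurate than almost any single judge, with the crowd’s error shrinking roughly with one over the square root of the crowd’s size.

```python
# Wisdom-of-crowds toy example (illustrative numbers): many independent,
# individually noisy judges beat almost any single judge once averaged.
import random
import statistics

random.seed(1)
TRUE_VALUE = 100.0   # e.g. a book's real sales potential, in thousands
CROWD_SIZE = 1000

# Each judge errs with independent, zero-mean noise. Independence and
# diversity of errors are exactly the conditions Surowiecki stresses.
guesses = [TRUE_VALUE + random.gauss(0, 30) for _ in range(CROWD_SIZE)]

crowd_error = abs(statistics.fmean(guesses) - TRUE_VALUE)
avg_individual_error = statistics.fmean(abs(g - TRUE_VALUE) for g in guesses)

print(f"average individual error: {avg_individual_error:.1f}")
print(f"error of the crowd's mean: {crowd_error:.1f}")
```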


Wisdom of Crowds – or stupidity of the mob?

July 5, 2007

A couple of days ago I came across a review of Andrew Keen’s book The Cult of the Amateur. As in Keen’s (must read!) debate with Chris Anderson (podcast), he makes the point that today’s internet, Web 2.0, is killing our culture because the traditional gatekeepers are being removed, opening cultural, economic, and political life up to amateurs, with a negative impact on quality, as the crowd can only produce mediocrity. To quote Keen in his own words from his debate with Anderson:

“I still think that the wisdom that I value — the scarcity, to put it in economic terms — is not in the crowd, but in people with talent and experience, whether they exist in political life, in economic life or cultural life.”

To defend the pre-Web 2.0 state of affairs that Keen prefers, two conditions would have to hold:

  1. The people in gatekeeper positions have talent and experience, enabling them to discover all the other talent effectively and to foster it efficiently
  2. The result of this process is output of high quality

Focusing on publishing as just one example of cultural life, this is clearly not the case (as previously discussed):

  • the traditional process of screening new authors and estimating sales is clearly neither effective nor efficient: a lot of talent goes undiscovered, as the discovery process is not based on talent alone and not all talent gets through, while plenty of other talent is overestimated (at least from a sales point of view), resulting in return rates of about 40% on average
  • as everybody familiar with the bestseller lists can attest, they are not exactly a beacon of literary excellence: publishing houses do not operate in an economic vacuum and therefore have to publish what sells to the crowds, the very crowds that by Keen’s terms are responsible for mediocrity in the Web 2.0 environment; I argue that because economic rationale dictates what gets published and what doesn’t, the same is already true in the pre-Web 2.0 world

These obvious shortcomings in today’s publishing are at the heart of what we are trying to solve with quillp (to be launched shortly) and of what I believe the technologies around Web 2.0 are best suited to enable people to do. I therefore couldn’t disagree more with Keen’s argument: neither with his underlying premise that the current process is effective and ensures quality, nor with his assumption that quality has to suffer if this process is opened up to amateurs.

I believe there is a tremendous amount of unused creative potential out there that we can tap into and open up to everybody by providing the right tools for sharing and evaluating ideas, without the outdated bottlenecks.

Strangely enough, at the same time that Keen favors a few gatekeepers over selection by the crowds, he cites an article from the Wall Street Journal to prove his point: of the 900,000 registered users at Digg.com, only 30 were responsible for submitting one-third of the postings on the home page. Isn’t this exactly the selection by a few experts that he is so fond of? Or is it a question of the authority to be called an expert? But who is better suited to determine who the expert in a certain field is: a few employees at a publisher or newspaper, or a crowd of people especially interested in a specific topic?

I think what Keen really wants to stress with this quote is the room for manipulation he sees in amateurs vs. professionals, their hidden agendas. If I recall the events that led to the current war in Iraq correctly, however, it is the professional media that was misled the most, and the independent amateurs who gave voice to a more pluralistic debate. That the professional media should be immune to hidden agendas, especially when they are owned by arms-industry magnates as in France (Lagardere, Dassault), is highly questionable, as Jürgen Altwegg points out in the Frankfurter Allgemeine Zeitung (July 4th, 2007); the internet, Altwegg (a professional journalist by Keen’s standards) argues, gives more room for tough questions, putting more pressure on journalists to address these issues.

As Surowiecki points out in his excellent book The Wisdom of Crowds, certain elements are required to make a crowd’s decision wise. Arguing for or against crowdsourcing in general terms therefore doesn’t make a lot of sense. The design of the tools enabling the process is crucial for the value it manages to provide.

P.S. Funnily enough, I only came across the review of Keen’s book debating the value of crowdsourcing through one of my favorite crowdsourcing tools: using the affinities with other readers that I discovered through my favorites on del.icio.us, I can now quickly filter millions of documents, as other gatekeepers whom I determined to be relevant to my interests do the filtering for me.

Update (July 7th, 2007):
There has been quite some coverage of this topic recently – some interesting links:

Update (July 10th, 2007):
“A Luddite argument is one in which some broadly useful technology is opposed on the grounds that it will discomfit the people who benefit from the inefficiency the technology destroys. An argument is especially Luddite if the discomfort of the newly challenged professionals is presented as a general social crisis, rather than as trouble for a special interest.” Clay Shirky in Andrew Keen: Rescuing ‘Luddite’ from the Luddites (July 9th, 2007)

And here is another post by Clay Shirky, quoting Scott Bradner: “The Internet means you don’t have to convince anyone else that something is a good idea before trying it.” The upshot is that the internet’s output is data, but its product is freedom. (July 10th, 2007)

Update (July 20th, 2007):
The Good, the Bad, And the ‘Web 2.0’ (Andrew Keen and David Weinberger in Wall Street Journal; July 18th, 2007)


My-oh-my: $100 million miomi web 2.0 frenzy?

June 29, 2007

Is this just another of those turning points that we will look back on as the culmination of irrational exuberance around the latest buzzword, the way the sale of business.com marked the domain market and the bankruptcy of boo.com marked the dot-com bubble? Or am I just totally missing the potential of an amazing idea?

miomi wants to provide a tool that enables everybody to map their personal experiences on a timeline. You will then be able to browse through time, discovering what your friends did while you went shopping for toilet paper, or discover new friends because they had the same need at the same time. Microsoft is heralding it as the next YouTube or Skype.

The problems I have with this concept are the following:

  1. Will I really map all the details of my life when I am busy enough living it? A certain degree of automation is possible through cameras supporting geocoding and date-tagging, which will make mapping and sharing straightforward. The ubiquity of those devices in the near future and broadband access from everywhere will make this process pretty seamless. But what is miomi’s added value over a flickr map mashup, which is already available today?
  2. Will the community of mappers find the right level of abstraction for mapping their activities, one that actually provides some interesting insights for others? Since there is probably a positive correlation between the events I want to remember and the events I take pictures of, the ubiquity of camera functionality described above helps here as well, but so do its limitations if miomi wants to differentiate itself. If I want to go beyond what I am already doing through flickr map mashups and map additional events and experiences that are less well documented, how do I really create relevance for others? This leads me to the third problem:
  3. Am I really interested in what everybody else is doing? As the newsfeeds on Facebook demonstrate, it might be very interesting to stalk my friends and have a topic for starting a conversation, but do I really care that someone I don’t know was shopping at the shop around the corner at the same time I shopped there? Does this create a level of affinity I want to build a friendship on, even if it is just a virtual one?
  4. How can $100 million be needed to build a platform like miomi? We are in the post-dot-com-bubble Web 2.0 era, after all, not in Web 1.0. Although strategic reasons, scaring away competitors and generating free media coverage, might have played a role in inflating the communicated number beyond the real one, it sounds a little far-fetched.

Looking at the press coverage so far, it seems the strategy of creating a splash without much reflection by the journalists covering the story has worked quite well. Or am I just not getting how brilliant this idea really is? I’d love to hear what you think!

More articles on miomi:

Update (July 5th, 2007):
Interesting insight regarding my doubts about the $100 million: apparently the check only read ‘Whatever it takes’. The size of the fund is £50 million, or about $100 million, which made the perfect line for the media frenzy: they are investing $100 million… Maybe some of these journalists should be checking their sources.

Here is where I got this update from – comment #5:
Visualblog: Der neue Web 2.0 Wahnsinn: Miomi (German)


Startup how-to

June 28, 2007

The engines are running hot and everybody here is working hard on putting all the pieces together. It’s always a great feeling to see something that started out as a general idea materialize and become a reality, with all attention to detail. And as everybody who has been through this process can attest, it’s quite a rollercoaster ride!

Time to sit back and contemplate the lessons others have learned. Here are some of the best I have found:

So – all set but lacking a groundbreaking idea like the mobile loo locator MizPee? Try these sites for some inspiration:


Facebook or MySpace – the Aristocrats…

June 26, 2007

Interesting article by Danah Boyd on class divisions between Facebook and MySpace. It is not totally surprising, though, that a platform like Facebook, which grew out of the Ivy League space, seems to cater more to the tastes of the university crowd than a messier, more anarchic network like MySpace, which grew bottom-up out of the alternative music space and seems to be the favorite of the ‘working class’ kids.

As choices by my peers do affect my decision making, even a slight preference for a certain network among my friends can have a dramatic impact on the overall affinity of the entire group:

“To see how freedom of choice could create such unequal distributions, consider a hypothetical population of a thousand people, each picking their 10 favorite blogs. One way to model such a system is simply to assume that each person has an equal chance of liking each blog. This distribution would be basically flat – most blogs will have the same number of people listing it as a favorite. A few blogs will be more popular than average and a few less, of course, but that will be statistical noise. The bulk of the blogs will be of average popularity, and the highs and lows will not be too far different from this average. In this model, neither the quality of the writing nor other people’s choices have any effect; there are no shared tastes, no preferred genres, no effects from marketing or recommendations from friends.

But people’s choices do affect one another. If we assume that any blog chosen by one user is more likely, by even a fractional amount, to be chosen by another user, the system changes dramatically. Alice, the first user, chooses her blogs unaffected by anyone else, but Bob has a slightly higher chance of liking Alice’s blogs than the others. When Bob is done, any blog that both he and Alice like has a higher chance of being picked by Carmen, and so on, with a small number of blogs becoming increasingly likely to be chosen in the future because they were chosen in the past.

Think of this positive feedback as a preference premium. The system assumes that later users come into an environment shaped by earlier users; the thousand-and-first user will not be selecting blogs at random, but will rather be affected, even if unconsciously, by the preference premiums built up in the system previously.” 

It is therefore only natural that members of different sociodemographic groups, making choices that depend on the choices of their peers, are a perfect example of power laws at play, resulting in the divide Boyd witnessed. Even a minor skew of the early adopters within each network towards a certain sociodemographic (which clearly existed between Facebook and MySpace from the get-go) is therefore very likely to result in a permanent skew in sociodemographic affiliation.
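The model Shirky describes takes only a few lines of code to run; the size of the ‘fractional amount’ below is my own assumption. A thousand users each pick ten favorites out of a hundred blogs, and every pick slightly raises a blog’s chance of being picked by later users. The resulting distribution is anything but flat. Swap blogs for social networks and give the earliest users a slight sociodemographic skew, and you get the Facebook/MySpace divide.

```python
# Shirky's 'preference premium' sketched as a simulation
# (the 0.05 boost per pick is an illustrative assumption).
import random

random.seed(3)
BLOGS, USERS, PICKS = 100, 1000, 10
weights = [1.0] * BLOGS  # every blog starts with an equal chance
counts = [0] * BLOGS

for _ in range(USERS):
    chosen = set()
    while len(chosen) < PICKS:  # each user picks 10 distinct blogs
        blog = random.choices(range(BLOGS), weights=weights)[0]
        chosen.add(blog)
    for blog in chosen:
        counts[blog] += 1
        weights[blog] += 0.05  # the preference premium for later users

top = sorted(counts, reverse=True)
print("five most picked: ", top[:5])
print("five least picked:", top[-5:])
print(f"the top 10 blogs collect {sum(top[:10]) / sum(top):.0%} of all picks")
```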


Web 2.0 – Who’s participating?

June 25, 2007

BusinessWeek ran Web Strategies That Cater To Customers – a big title for a meager paragraph that doesn’t tell you anything more than “there are people out there who care about what you are doing – tune in to what they’re doing”. Thanks for this strategic enlightenment.

The piece did have a couple of interesting stats, though, from Forrester on the growth of social media usage, on what people are doing, and on who participates. On average, participation still follows the 1% rule (roughly 1% of users create content, 9% interact with it, and 90% just consume), so the interesting question is how to incentivize participation in the right way. Some musings on that later.


Let the tail wag

June 24, 2007

It’s been almost three years since Joi Ito asked the question “Will the tail wag?”. Will there always be a clear separation between producers and consumers of content, or will the lines blur completely? Will most consumers create content as well?

Although there has never been a completely clear separation between the two, as all producers are consumers and many consumers are also producers of content, Joi’s question is a very valid one. The rise of the environment known by the buzzword Web 2.0 has changed the rules of the game: consumers are no longer producing content only in their private realms. They now have access to a global audience, and crowdsourcing is starting to substitute for the traditional content-selection gatekeepers within big media. Bands, authors, and journalists have proven their talent and marketability directly in the consumer space, landing deals with big media after the fact.

Too many books are being printed

The publishing industry has to deal with annual return rates of roughly 40% – or $7 billion. That is a lot of trees cut down only to be destroyed as unsold books at the end of the value chain, by my standards and by any ecological and economic standard as well. Although marketing also plays a role, with print runs inflated for a bigger splash at the book stores, publishers are clearly not doing a very good job of estimating the sales potential of a specific title.

Too few authors get published

On the other hand, many aspiring authors are never discovered, as the few editors at publishing houses are unable to screen the 6 million manuscripts in circulation. Those that do get screened are decided upon by the gut feeling of a single editor. Prominent cases of disastrously wrong decision making spring to mind, with twelve publishers rejecting J. K. Rowling before she was finally allowed to start her billion-dollar career.

It is therefore obvious that the traditional process of content selection is seriously flawed: too few gatekeepers create a bottleneck for the vast amount of creativity out there, preventing many great authors from being discovered or even screened, while the wrong books are printed in the wrong quantities because the decision making is based on the gut feel of a few.

Wisdom of crowds

The internet opens unlimited opportunities to tap into the wisdom of crowds. Anybody can publish, anybody can screen and review, and the decision to publish certain stories as a book can be based on more relevant criteria than the gut feel of one editor. Authors with the most positive ratings from readers get a publishing contract. Publishers get more input for sales predictions by gaining direct insight into their target group prior to any print run. And special-interest topics with limited expected sales potential still find their audience online.

Over the last couple of years I have spent quite some time thinking about how to structure a platform that facilitates this process. I’m thrilled that, with a great team of friends working on the development, this platform is now becoming a reality. Stay tuned for our beta launch soon.