Paying (or not) online

The continuing – and ever-more raucous – debate on paid content online got me thinking about what I pay for and why. Here’s the list:

  • MobileMe – not sure why I pay for this, except that I was caught up in the hype and haven’t got round to doing anything about it. There are probably better services like Dropbox, but there you are…
  • Flickr – I’ve been a Pro user for a long time, inspired to pay extra to remove the monthly upload limits.
  • Remember the Milk – I upgraded to paid status in order to sync this excellent to-do list program with my iPhone.
  • MindMeister – a really superb online mind-mapping tool. I upgraded to remove the cap on the number of charts I could create and share.
  • Evernote – a brilliant note application. I was managing fine with the free account until it became really useful and I found myself dangerously near my monthly limit. I was happy to upgrade.
  • Wall Street Journal – just because… I have subscribed for a long time. Would I miss it if my subscription lapsed? A bit, I suspect…

What’s interesting about this list is that, with the one exception at the end, what I pay for is functionality rather than straight content. In every case the service has many competitors, but I was eventually tempted to pay by trying it out, often for a long time, and really getting used to the value it provided.

All these services offered rich functionality, but with a restriction that would only become an inconvenience once I was getting a lot of value (and use) out of the service – and not until then (that is the important point). The services that tried to get my money by crippling themselves until I paid never convinced me to become a customer, because I never got beyond the (irritated) experimentation stage.
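To make that mechanism concrete, here is a minimal sketch of a usage cap that only bites once the service has already proved its value. All plan names, limits and numbers are invented for illustration; this is not how any of the services above actually work.

```python
# Illustrative sketch of a soft usage cap: casual users never notice it,
# heavy (i.e. high-value) users hit it only after the habit has formed.
FREE_MONTHLY_UPLOAD_MB = 40  # hypothetical free-tier allowance


def can_upload(user, size_mb):
    """Paid users are never limited; free users are limited only once
    their monthly usage would exceed the allowance."""
    if user["plan"] == "paid":
        return True
    return user["used_this_month_mb"] + size_mb <= FREE_MONTHLY_UPLOAD_MB


def upload(user, size_mb):
    if not can_upload(user, size_mb):
        return "Monthly limit reached – upgrade to keep going"
    user["used_this_month_mb"] += size_mb
    return "OK"


casual = {"plan": "free", "used_this_month_mb": 5}
power = {"plan": "free", "used_this_month_mb": 39}
print(upload(casual, 2))  # OK – the cap is invisible to light use
print(upload(power, 5))   # upgrade prompt, only after heavy use
```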

I’m more and more convinced that this kind of model is the one with the most legs on the web right now, and I expect we will see a lot more currently free services offering this kind of “freemium” model, described so well in Chris Anderson’s recent book. This is the model that, in the content world, the FT is trying with its article limit.

It’s a model well worth studying more.


Of Murdoch, the BBC and Google

An interesting Observer today (for a change) which has prompted some thoughts on the Murdoch/BBC and Google Books debates.

First up, Murdoch. The Observer gave continuing coverage to the reaction to James Murdoch’s speech in its story about a dinner row between Robert Peston and Murdoch(!). Murdoch’s argument, in case you missed it, is that the BBC is “state-sponsored” media and, because of the “hypothecated tax” of the licence fee, can ride roughshod over the interests of commercial broadcasters.

A comment piece by Will Hutton, arguing that Murdoch’s arguments at the Edinburgh Television Festival were “specious and out of date”, puts the counter position very nicely; Murdoch is hardly without an axe to grind, and if we want examples of market distortion we need look no further than the past practices of his own company, Sky.

It was the voracious bidding for sports rights – cross-subsidised by the vast Murdoch media empire – which set up Sky for its own meteoric rise to (pay TV) market dominance. The complaints about the BBC from Murdoch, and from other parts of the media, look more like a hurt industry casting around for someone to blame as ad markets go south. Google is the other target, along with, more generally, “the internet”.

In fact, the traditional media industry has seen steady declines for years. The biggest problem is not the BBC or the internet but the industry’s own lack of foresight in building out multiple revenue streams.

All businesses need a mix of models if they are not to suffer disproportionately in the downturn (even as they capitalise disproportionately in the upturn). The healthiest have a blend of cyclical and counter-cyclical models and a spread of markets and sectors.

The very characteristic of the BBC that Murdoch is complaining about – the fact that it is too big and is into everything – will probably turn out to be its Achilles heel. It is very unlikely that the BBC will enjoy above-inflation increases in the licence fee in future – in fact, if the Tories win the next election a cut is probably on the cards. This means that its self-proclaimed need to cover the widest possible remit will see it stretching its resources more and more.

This in turn means that those in the niches will beat the BBC in their particular parts of the market. Already, the BBC is not the best for sport, and it shouldn’t be the best for hyper-local services either, once those really get going. My own company, Reed Business Information, is in 17 business-to-business markets in the UK, and in none of them are we beaten by the BBC – or, for that matter, Google. We simply have more resources relative to our niches.

Does the same argument then apply to Google? Well, yes and no. In its feature on Google’s project to digitise all the world’s books – and more particularly the out-of-court settlement in the US which gives it protection from copyright suits – the Observer sets out the benefits and the pitfalls of Google’s adventure into books.

It is undoubtedly a good thing that all the world’s books should be available to everyone – just think what that could do for education in the developing world, for example. But there are clearly dangers if Google, as a commercial enterprise, has exclusive rights to those books.

Google argues that it is motivated by philanthropy – we have only its word for that, and circumstances can change. There are parallels here with Sky and sports rights. Sky was able to outbid the incumbents and therefore won a major advantage in the marketplace. Google, by virtue of its exceptionally deep pockets and technical skills, has been able to accomplish something in the book field that many thought impossible.

If competition is to reign then there either needs to be a second digital library of all the world’s books (who would want Microsoft, for instance, as one of the few companies with deep enough pockets, as an alternative?) or some other plan needs to be considered.

How about an international equivalent of the British Library, perhaps as a branch of the United Nations, mandated with digitising every book and making them available at a nominal (or no) cost to anyone who wants to build a service on top of the basic library? A sort of digital Library of Alexandria? There would be a lot of detail to be worked out, but nobody said this transition to a better digital world wouldn’t be complicated!

The comment piece arguing that Google needs to be policed so that it doesn’t monopolise the digital space is, I think, well argued; clearly there have been huge benefits from Google’s efforts to digitise the world’s information (the digital map space wouldn’t exist in its current excellent form had it not been for Google’s determination and ingenuity). However, beware the unintended consequences.


Translations which are good enough

The current issue of Wired had a very thought-provoking article which argued that “good enough” was now a viable business model in many fields. The examples of technologies where consumers had apparently determined that what was on offer was good enough included Skype (patchy quality, but free), Flip video cameras (not brilliant quality and lacking in features, but dead simple) and netbooks (fairly low powered but small, convenient and with great battery life).

It has occurred to me lately that another technology which is rapidly approaching the “good enough” hurdle is online translation. I’ve been amazed recently that the language pair translation which Google offers now within Gmail is actually good enough to enable me to easily understand emails written in another language. Some language translations are better than others, and the syntax leaves something to be desired, but I really believe we’re nearly there.

This has been brought about by mining the internet to enhance machine translation. And crowdsourcing has now started to play its part: Facebook has just applied for a patent for its Digg-like crowdsourced translation system, which enabled the very rapid translation of the site into multiple languages.
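As an illustration of the crowdsourcing idea – and emphatically not Facebook’s actual system, whose internals I don’t know – here is a tiny sketch of a Digg-style vote tally that surfaces the community’s preferred translation of a phrase. The phrases, candidates and votes are invented for the example.

```python
from collections import defaultdict

# phrase -> candidate translation -> vote count
votes = defaultdict(lambda: defaultdict(int))


def submit_translation(phrase, candidate):
    """Register a candidate translation for a source phrase."""
    votes[phrase][candidate] += 0  # create the entry even with no votes yet


def vote(phrase, candidate):
    """One community member up-votes a candidate translation."""
    votes[phrase][candidate] += 1


def best_translation(phrase):
    """Return the candidate with the most votes, if any exist."""
    candidates = votes.get(phrase)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)


# Usage: two suggested French renderings of an interface string
submit_translation("Sign up", "Inscrivez-vous")
submit_translation("Sign up", "Enregistrez-vous")
vote("Sign up", "Inscrivez-vous")
vote("Sign up", "Inscrivez-vous")
vote("Sign up", "Enregistrez-vous")
print(best_translation("Sign up"))  # -> "Inscrivez-vous"
```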

In my view we’ll now see an explosion of “good enough” translations of journalism which was hitherto limited to the country of origin. Better to have the gist than nothing at all. And the one thing you can rely on is that the quality will get better and better. Babelfish here we come!

Writing for both audiences

I was interested in this great post from Yelvington which describes the two things necessary to cater for the twin newspaper audiences – the loyal regular reader and the occasional visitor, perhaps brought in by a search engine.

The answer, says Yelvington, is the beat blog and the topic page. I was particularly taken with the handbook-like detail he gives to help the receptive improve. Worth a read.



Twitter the platform

The innovations around Twitter just keep on coming, proving, if nothing else, that an open platform is the best spur to innovation there is.

Yesterday I noticed a couple of posts like this one which point to a simple discussion thread created by an application called Tiny Thread. It’s a simple but powerful add-on to the Twitter platform.

Meanwhile, the energetic Dave Winer posted about his mash-up with Disqus, which allows you to start a discussion from a single thread.

And yesterday I also read about the latest iPhone app to take advantage of the open Twitter APIs, Twuner, which will read you your tweets on your iPhone.

Twitter may be struggling for a business model, but it is a beacon for innovation in the tech space second to none at the moment. Keep your fingers crossed they remain independent.

Microsoft woes

I wonder whether the news that Microsoft, still the world’s largest software company, posted profits down 29% in the second quarter may turn out to be a very significant turning point?

Microsoft has in the past been a shining example of the law of increasing returns, where everything it produces pushes more developers to develop for its platform, and therefore more people buy its software, and so on, and so on.

But now there are credible alternative platforms growing at speed: particularly Google in search, advertising, cloud based productivity apps, and now mobile; and a resurgent Apple in the desktop and especially the mobile space.

It could be that the relentless upgrade cycle which drove the company’s profits for so many years is finally faltering, as corporates have delayed the upgrade to Vista because of the well-publicised woes of the platform, and enough alternatives have arrived on the scene to at least prompt a review of IT strategy.

Of course it is far too early to call the end of Microsoft’s growth, but the earnings dip is surely a sign of more trouble to come.  

The new community site foundations?

I attended a very stimulating meeting yesterday morning with the Flightglobal team, discussing the future of navigation on the site. It’s a hard topic, and nobody anywhere has all the answers, partly because web publishing is still an evolving medium. However, a lot of progress was made and some general principles were agreed which I’m sure will make a huge improvement to the site in time.

One of the ideas discussed was that at the core of the site would be a wiki-like resource which allowed for a basic inventory of (in this case) aircraft, to which much of the site’s content would link. This made me start thinking that probably this should be at the heart of all good community sites – although what information is collected will obviously vary from industry to industry: it could be buildings, chemicals, people, companies, or just about any collection of “things” that makes a particular industry tick. And in some there could be more than one core information base required.
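To make the idea a little more concrete, here is a rough sketch of that wiki-like inventory of industry “things” and the way site content would link to it. The names, fields and example data are purely illustrative assumptions, not a real schema.

```python
from dataclasses import dataclass, field

@dataclass
class Thing:
    """One entry in the community-maintained inventory (an aircraft,
    building, chemical, company... whatever makes the industry tick)."""
    slug: str                                      # stable identifier, e.g. "airbus-a380"
    name: str
    facts: dict = field(default_factory=dict)      # community-editable attributes
    history: list = field(default_factory=list)    # prior revisions, wiki-style

@dataclass
class Article:
    """A piece of editorial content that links back to the inventory."""
    title: str
    body: str
    things: list = field(default_factory=list)     # slugs of Things this story references


def edit_fact(thing, key, value, editor):
    """A correction becomes a new revision of the Thing, not a new story."""
    thing.history.append((editor, key, thing.facts.get(key)))
    thing.facts[key] = value


a380 = Thing(slug="airbus-a380", name="Airbus A380")
edit_fact(a380, "first_flight", "2005-04-27", editor="volunteer42")
story = Article(title="A380 deliveries slip again", body="...", things=[a380.slug])
```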

Up to now this has tended to be the “job” of the site managers, building key content landing pages for the most important “things” in a market – usually driven by the need for more search engine reach.

However, I think the key will be to open up this process to the audience and to encourage the collaborative building of the ultimate resource for the particular industry.

That doesn’t mean there won’t be a role for the journalists and site managers. For one thing, they will need to seed this enterprise and keep it on the straight and narrow as it grows (hopefully helped by an army of industry super-user volunteers).

But the other key role will be in the building of topic wiki pages which will become the other core foundation of the community site of the future: this is the “story so far” wiki page curated by journalists, but open to contribution by all, as described by Jeff Jarvis. The idea is that stories evolve over time and the old fashioned “story” approach does not reflect the realities of the new web.

With these as its roots, a community site should be well on its way to defining and reflecting its community – provided it continues to do all the other things well.

Elsevier’s experimental articles

Elsevier has announced an experimental project to design the scientific “article of the future”. There are currently two prototypes, which our sister company is inviting comment on from the scientific community here and here, and from my standpoint these look like pretty good attempts to update the scientific article formula. I particularly like the way that each of the constituent parts of the article – abstract, text, results, figures, references, authors etc. – is given its own tab and a treatment relevant to its particular information needs; the analysis of the references struck me as a particularly useful fusion.

However, not everyone has been so impressed with the effort so far. Paul Carville, writing on the Online Journalism Blog, is particularly underwhelmed. He has a detailed critique which looks at things such as design, information architecture and innovation, and he is less than glowing:

I think this area of publishing is indeed long overdue a complete overhaul of its staid online publishing practices, and any move to define a new specification for doing so should be welcomed. Even the otherwise impressive nature.com only goes so far in its presentation of research papers, and there is much room for improvement. But when the result is as underwhelming, cumbersome and shortsighted as this, I despair.

As Paul says, the reinvention of the scientific article is long overdue, so I look forward to seeing what else emerges in the coming months.