Category Archives: Society

Round One: Coronavirus

I took a picture of London from the top of the North Downs in 2017, struck by the visibility of the pollution hovering over the city. Last week I stopped again at the same spot and took roughly the same photo.

What is so striking is that after only five weeks of lockdown, the dramatic drop in traffic has had such a noticeable visual effect on the air quality.

TomTom, the navigation company, has provided graphs of various cities around the world showing the change in traffic.

What a difference it would make if we could effect a change like this but without the huge downside of a pandemic.

Some cities, such as Milan, are already planning to reclaim some of their streets, inspired by the experience of the traffic drop. And given that social distancing is likely to be here to stay for quite some time – at least until vaccines are widely available – others are bound to follow suit.

Wired reports that many cities around the world have already blocked off city streets to provide more open spaces for people to safely navigate.

We could of course go back to normal after the pandemic is over, but as The Economist eloquently illustrated, coronavirus is merely Round One; the next battle is the big one.

There have been notable examples of selfless co-operation during the coronavirus challenge, but also many examples of narrow-minded, nationalistic responses following the lead of the catastrophically inadequate President of the United States.

We can only hope the sobering example of fighting a pandemic will create real impetus for change which can create a common will to deal with the biggest global challenge of all. Fingers crossed.

Incoherence in Government

A story in this morning’s Guardian perfectly illustrates the policy incoherence at the heart of the current government.

It concerns Britain’s National Cycle Network, a linked chain of over 16,500 miles of cycleways which is used each year by half as many people as use the trains.

Sustrans, the organisation responsible for the network, says it would cost £2.8bn to bring the paths up to scratch, as many are potholed or damaged, have difficult obstructions on them, or rejoin highways at difficult or dangerous places.

Meanwhile, we have a Government facing many significant challenges such as meeting the climate change goals, currently likely to be missed, and a National Health Service struggling to cope in the face of an ailing population made sick by obesity and dirty air.

One obvious part of the solution to these challenges is to reduce the amount we drive significantly and to encourage the population to exercise more.

So you would think getting the population on their bikes, as some of our Continental neighbours do so well, would be an obvious part of the plan.

More than half of the UK population lives within a mile of their nearest route and 4.4 million people used the Network last year, making 786m trips.

And each year the network saves the UK economy nearly £90m through reduced road congestion, according to Sustrans. Its health benefits save the NHS the equivalent of 2,206 nurses’ salaries, and leisure and tourist trips contribute £2.5bn to local economies, the charity claims.

The benefits are therefore obvious.

The Government’s response? In his recent budget, the chancellor, Philip Hammond, pledged £30bn for road improvements targeted primarily at motorists.

What about cycling?

Jesse Norman, the government’s cycling and walking minister, said: “This report shows that more needs to be done to make [the network] fully accessible, and that’s why earlier this year the government dedicated £1m to support initial work repairing and upgrading sections of this popular network.”

Doesn’t quite stack up, does it?

Remaking Post-Industrial Cities

Interesting talk this lunchtime at the RSA from Don Carter about his new book Remaking Post-Industrial Cities, which looks at 10 cities in the US and Europe and charts their decline and recovery. 

Don Carter, Carnegie Mellon University

Carter looks at the history of the cities in three phases:

  • The industrial powerhouse phase, from 1865 to 1945
  • Renaissance, from 1946 to 1985
  • Re-invention from 1986 to 2015

He argues that there are clear parallels between all the cities he has studied and that lessons can be drawn. 

First up, turning cities around in the post-industrial period takes time and determination. It is important to realise that the scale is large – metropolitan and long-term. This means a strong vision of what kind of city is being built is critical. And it means strong leadership and being prepared to take risks. Often it has involved very significant investment, such as the Olympics in Barcelona, but these grand plays aren’t enough on their own, as they can fail.

The successful cases have all developed diversified economies, have strengthened the central city and have invested in culture, heritage and quality of life. 

The overriding impression at the end, though, underlined by perceptive questions from the audience, was that while the city may recover, many of the people who made their lives there often don’t, and that tectonic societal upheavals, such as the election of Trump, or Brexit, or populism in Italy, may be the cost.

Maybe we can look back on the cities themselves in 20 years with satisfaction that they recovered so well, but what happened to the broader society in the meantime is quite another question. 

Don Carter is an architect, urban designer and developer of international renown. He is currently Director of Urban Design and Regional Engagement at the Remaking Cities Institute, Carnegie Mellon University.

Instant judgements

There is something very disturbing about the modern habit of making instant judgements about everything, simple or complex. This has been blamed on the 24-hour news cycle which is said to force quicker and quicker stories out for fear that a particular news outlet is going to look slow. It has also been blamed on the rise of social media which has encouraged us all to believe that our voice has a right to be heard and that our opinions are as valid as anyone’s. It has also been blamed on short, modern attention spans.

Whatever the cause, the net effect of all this is quite seriously bad, in my view. Of course it is a good thing that we are not all meekly waiting to be told what to think by those in power, elected or otherwise. In the past this mental attitude of deference has led to some terrible iniquities, the details of which come out on a regular basis (think sex scandals involving children’s homes or Churches of one denomination or another, or the horror stories coming out thanks to the #MeToo campaign, even now touching China, it seems).

But there is a downside, too. Take yesterday’s news that there was a terrorist attack on the Houses of Parliament. We don’t know much about Salih Khater, the driver of the car which crossed lanes in front of Parliament and injured three cyclists and pedestrians before crashing into the barriers. Yesterday he was a terrorist. Today, the police say they haven’t found anything to link him to terrorism and it seems his motives (or reasons – it could be that he or the car malfunctioned for all we know) are a mystery as yet.

So yesterday it was a daring terror attack, today we are not quite sure, tomorrow or probably sometime later we will find out the truth. 

What effect does this have, though? It feeds the impression that we are living in dangerous times, that we are under attack from people who would harm us. And the initial conclusions are, I suspect, seldom reset. 

In this sound-bite and tweet-driven world there is little room for complexity or subtlety. So far we have reaped the benefits of transparency and ease of publication but at the cost of polarisation and populism.  Consider the reaction to the dreadful collapse of the motorway bridge in Genoa which is being blamed by Italy’s senior politicians on privatisation, corruption and the EU before any investigation has even begun. How could you possibly draw a conclusion like that so quickly? The answer is you can’t, but that doesn’t really work in a rapid-fire world. 

We have yet to work out how to restore some balance and reason in this new environment. But we really should be trying harder. 

The wrong way to do driverless cars

I’m a great supporter of driverless cars. I think they have the potential to dramatically change the world, making much better use of resources, revolutionising mobility for all and radically improving our towns and cities.

Paradoxically, however, I am not so keen on Philip Hammond’s announcement that the UK aims to be the first country in the world to permit them on public roads without any “safety attendant” on board.

I’m just not convinced that the Government has developed a solid appreciation for the benefits of technology. After all, this is the country where more than half of schools don’t even offer a computer science GCSE, according to a report from the Royal Society.

In fact, I think this has, like it seems everything these days, more to do with Brexit than anything else.

Having alienated the conventional motor industry, which is warning of the dire consequences of leaving the customs union, it probably seems like a really smart move to become the go-to place for manufacturers testing and developing self-driving cars, which the smart money says are the future. This way we can secure our place in the world when conventional car manufacturing relocates to the Continent.

But recklessly throwing off safeguards simply in order to pursue narrow short-term economic objectives could set the development of self-driving cars back decades. The implementation of self-driving cars is multifaceted and complex, as much from a societal as a technical perspective. It will require careful collaboration across countries and disciplines, as well as exceptionally well calibrated communication with the populations they are supposed to be benefiting. None of these things seems to be particularly in the UK’s skillset at the moment.

We’ve already witnessed the outcry over a fatal accident in which a Tesla driving itself failed to see a lorry crossing in front of it. This is in sharp contrast to the coverage given to the 1.25 million people estimated to be killed by human-driven cars each year around the world. And this was a case in which there was a clear responsibility on the driver to stay alert and intervene if necessary.

The first (pretty-well inevitable) fatality by a self-driving car could quite easily set off a backlash which sets the development of this transformational technology back decades. And that would be a tragedy, not least for the millions whose lives would have been saved by the technology in the interim.

When the extraordinary becomes ordinary

One sentence in one article I read this last week caused me to sit up and reflect more, I think, than any other: “Carmakers are threatened more by the end of the combustion engine than by Brexit.” Just a throw-away line in a story by Phillip Inman in the Observer on the current state of play in the politics of Brexit, but still…

It’s extraordinary that we are now talking in throw-away terms about:

  1. leaving the political and economic union which has defined us for the past 40 years without, it seems, anything like a plan for the future; and
  2. the death of the internal combustion engine which has been so central to our lives for a century.

It’s extraordinary how much radical change we can take in our stride without really blinking. And I didn’t even mention the $12 trillion spent on quantitative easing since 2008 (whatever happened to “you can’t just print money because inflation will sky rocket”?) and Donald Trump…

Inside the echo chamber

The unforeseen nature of Donald Trump’s victory yesterday, and before that of the Brexit Leave Campaign’s, says something quite profound about the way in which US and UK populations now consume their information and form their views.

As the Independent said today, it was the social media “echo chamber” which allowed the pro-Clinton US electorate to misread the strength of the Trump campaign and as a consequence probably caused the Democrat machine to mishandle the response. The same can be said of Brexit.

I am currently reading Kevin Kelly’s excellent book The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future which I am finding compelling and well argued. There is one thing upon which I disagree with him, though.

He is profoundly optimistic that the global superbrain, which is how he characterises the internet, will broaden our viewpoint and make us much less certain and as a consequence much more questioning – characteristics he believes are very positive. This is how he puts it:

“Ironically, in an age of instant global connection, my certainty about anything has decreased. Rather than receiving truth from an authority, I am reduced to assembling my own certainty from the liquid stream of facts flowing through the web. Truth, with a capital T, becomes truths, plural. I have to sort the truths not just about things I care about, but about anything I touch, including areas about which I can’t possibly have any direct knowledge. That means that in general I have to constantly question what I think I know. We might consider this state perfect for the advancement of science, but it also means that I am more likely to have my mind changed for incorrect reasons.”

Kelly describes his surfing habits, hopping from one site to another, encountering multiple points of view and surprising facts. But I think his optimism on this count is misplaced. Most people don’t use the internet like this. Rather they experience the online world through a very few, powerful portals – Facebook and Twitter being, in the West at least, probably the two most potent. But these sites are heavily filtered, reflecting back on us the views of our friends and those who we have chosen to follow. This is a self-selecting sample and the algorithms reinforce this bias ruthlessly.
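The reinforcing effect of these filters can be made concrete with a deliberately crude sketch (my own toy model, not any platform’s actual ranking code): if a feed simply ranks posts by closeness to a user’s past leaning, a population split roughly 50/50 looks like near-unanimity on screen.

```python
import random

random.seed(42)

def personalised_feed(user_leaning, posts, k=10):
    """Toy ranking: score each post by how close it sits to the user's
    past leaning and show only the top k. This is NOT a real platform's
    algorithm, just an illustration of similarity-based filtering."""
    ranked = sorted(posts, key=lambda p: abs(p - user_leaning))
    return ranked[:k]

# Posts span the full opinion spectrum, -1 (anti) to +1 (pro),
# split roughly 50/50 like a close referendum.
posts = [random.uniform(-1, 1) for _ in range(1000)]

user = 0.6  # a user who leans moderately one way
feed = personalised_feed(user, posts)

pool_avg = sum(posts) / len(posts)
feed_avg = sum(feed) / len(feed)
print(f"average leaning of all posts: {pool_avg:+.2f}")
print(f"average leaning of the feed:  {feed_avg:+.2f}")
# The feed's average sits almost exactly at the user's own leaning,
# so a divided population looks like consensus on screen.
```

The point is not the numbers but the mechanism: no malice is needed, only a ranking function that optimises for agreement.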

So, while we may see the odd beyond-the-pale post which slips through (probably from an unreconstructed relative), the overwhelming impression is that the world reassuringly largely shares our view. This, as both Brexit and Trump’s election clearly demonstrate, may simply not be the case.

Relatively speaking the internet has been around very little time and social media even less. And yet, as is now apparent, it forms a central part of the public ideas space. We have yet to work out how to properly harness it, or deal with its downsides. One suggestion which I read yesterday (but now can’t find unfortunately) is that Facebook add a button so that we can flip our personal filter algorithm. This way, in an instant, we will get to see what people with opposite views to us are actually saying.

In addition, there are excellent fact-checking sites which could automatically supply fact-based counterpoints. These are the kinds of partnerships that Facebook, Twitter et al could usefully be pursuing.

But this is just the start.

We need a real debate about the way in which information is being used and consumed and how to improve the quality of both debate and understanding.

Mainstream media organisations are the fuel which feeds the social media furnace. Throughout the Brexit and Trump campaigns the media fell victim to false balance in a devastating way. By presenting two sides of a very unequal argument as somehow equivalent, journalists may feel they are carrying out their duty of fairness, but they are not. By not pointing out, for example, the weight of scientific evidence and consensus for man-made global warming, or the evidence for the safety of vaccines, and instead giving equal platforms to both sides of the debate, they add fuel to the furnace in a way which promotes the creation of bubbles. Allowing lies to go unchallenged editorially, as happened in both campaigns, creates the impression that there are no truths, only opinions.

We are now living with the fallout.

 

Thinking Digitally in Tyneside

The Sage, Gateshead

This year was something of a turning point for Thinking Digital as the Tyneside-based event, regarded as a kind of home-grown TED since its launch in 2008, this year branched out into satellite events in London and Manchester. The original event was slimmed down from two days to one and there were worries that the unique quality which was Thinking Digital may be lost in the changes.

So how did it fare? Really rather well, actually. The conference’s first segment, called Sport, Culture and Terrorism, started off a little unpromisingly with a slightly underwhelming account of IBM’s partnership with Wimbledon tennis by Bill Jinks, IBM’s CTO for Sales & Distribution in the UK.

Yes, the fact that the partnership has lasted 27 years is quite remarkable – this is longer than quite a few marriages. And, yes, the stats are pretty impressive – 21.1m unique devices, 71m visits, 542m page views. And there were some interesting details – such as the pains they take to paint the wifi and 4G aerials green to preserve the ancient mystique, and the fact that they employ 48 tennis players as data analysts to process the sensor data from all around the site so that they can maintain their reputation for having all the information as it happens.

But at root this felt like a usual tale of one old business harnessing the power of technology to speak to modern, global audiences across platforms. Jinks did hint that Watson might be brought into play in the future but was rather hazy on the details.

One specific did emerge which put the spotlight on IBM’s technical prowess – the cloud services are provisioned entirely through predictive analytics based on previous traffic patterns, the popularity of players and the like. It’s a shame there weren’t more details of this kind.

The second session was, if anything, the weakest of the day in my opinion. Irini Papadimitriou is Digital Programmes Manager at the V&A, the UK’s leading museum of fashion and design, and responsible for programmes such as the annual Digital Design Weekend. She spoke about various collaborations involving the venerable museum. These included the Met Office and V&A climate and fashion hackathon (which apparently led to Helen Storey’s Dress for Our Time, on show during the Paris climate talks), and other projects bringing together scientists and designers, economists and designers, and lots of different people to look at the recycling of old electronics. It all looked well-intentioned but it was hard to grasp the real relevance to the V&A’s mission, or what the legacy of such collaborations was. Perhaps I’m being unfair.

Things started to look up on the third presentation, given by veteran cyber security expert Mikko Hypponen, Chief Research Officer of F-Secure.

Hypponen explained that he hunts hackers for a living, and he says that one of the most important lessons he has learned is that you have to understand your enemy. It is quite a different proposition to protect your networks against hacktivists, or criminals, or nation states, or terrorists.

Complexity, says Hypponen, is the enemy of security. When they get large enough, all networks will be breached. He points out that all 500 of the Fortune 500 are hacked right now. You can’t avoid it, so you need resilience.

“Security is getting better but we keep running into the old problems,” he said.

He used a good example of a scam from 1989 and one from 2016, both of which were essentially the same ransom trojan although the former was actually on a floppy disk.

Ransom software companies have a great business model, he says: “selling data back to the people who value it most – you.” The Cryptolocker Trojan, for instance, has so far made €300m and is, in fact, a “cybercrime unicorn.” And, he points out, they don’t pay tax.

“If there is one thing you learn today it’s: Don’t click the enable content button,” he said. It was by clicking this kind of link that both the 1989 and the 2016 trojans were able to gain access.

What of the future? The Internet of Things will bring a lot more challenges. With IoT, no device will be small enough that it won’t end up online, he argues.

But he is broadly optimistic: “The internet has brought us so much more good than bad and I hope the same will apply to IoT.”

There is already a problem with many industrial control systems being accessible through the internet. “If you scan the internet you find things which shouldn’t be there,” he says, such as generators, swimming pool systems, even hospital bed charts. And all the examples he showed on screen were not password protected.

Perhaps the biggest shift, though, was the fact that the world was now entering a cyber arms race: “Most of the things attributed to governments are spying rather than cyberwar.” Last December’s attack in Kiev against a power company, for instance, was Russia engaged in cyberwar. In the event it wasn’t that serious and they recovered power in a couple of hours, but things are escalating. “Last year the US launched drones to kill hackers twice”, he said.

But it is still the simple things that keep failing us. The attack on the Ukrainian power company started in November, when one of the employees was sent an Excel document with an “enable this content” button.

“Don’t click the button.”

Session two, entitled Blockchains and Bass Drums brought together John Thorp, Sarah Meiklejohn and Ed Hipkin.

John Thorp, described as “an internationally recognized thought leader in the field of value and benefits management” opened by saying that the track record of organisations in getting value out of technology is poor.

“I joined IBM in Canada in 1984 which was going to be the year of the electronic health record. We are still waiting.”

What is needed is a real shift of mindset – moving from technology delivery to a real focus on business, he said.

Best practice is the approach most companies rely on. But best practice works only for simple environments; it doesn’t work in the complex environments we now find in all large firms. What is needed is “emerging practices”.

“When things aren’t working we need to do something different.” In modern large companies we are managing an uncertain journey to an unknown destination, he said. “Leadership needs to move from top down to distributed capability and projects need to be led by different people at different times according to need.”

This is anathema to the industrial mindset which is, he says, “top down, risk averse and controlling.” Modern challenges call for a collaborative, networked environment.

“There is a huge leadership deficit in the public and private sectors,” he said. “I’ve never done a consulting job where someone in the business didn’t already know the answer.”

Sarah Meiklejohn, a Lecturer in the Departments of Computer Science and Security and Crime Science at University College London, was next up discussing the poster child for the distributed environment – the blockchain.

Most people, she said, had a very sketchy view about the issue of online privacy but there were principles which people did hold dear: confidentiality, integrity and what she called data democracy (having a say in how your data is used).

“Goals do matter to people, for instance when we find our government is spying on us or when a company we buy from has child labour in its supply chain.”

Transparency is the only way to ensure democracy on the internet, she says.

That’s where she thinks blockchain, essentially a distributed ledger and the technology underpinning the cryptocurrency Bitcoin, comes in.

“Transparency is a real USP for the first companies who adopt it, and if we find the killer apps then we will see a lot of progress.” At the moment, she argues, we have a “technology hammer looking for nails.”

The session ended with Ed Hipkin (aka bassdrummer). He explained briefly his inspiration (he was blown away the first time he heard dance music on the school playing field in the ’90s and has since been trying to get his drums to sound more like his heroes’ music) before going on to give a fabulous and very well received demo.

Session three was called “The Searchers”.

First up was Will Dracup, the CEO of Biosignatures. He spoke about proteomics, which he described as looking at blood protein signatures for differences between those with a disease and those without.

There had, he said, been too little progress so far – “We are eight years into a nine-month project.” The goal is to look for unique signatures for prostate cancer and other diseases. “The principle is that you can take a blood test and diagnose many diseases.”

But bad science is holding us back, he says. What is needed is blind testing in all studies. “Science is getting a bad reputation because too many stories in the press contradict each other – wine causes cancer, wine prevents cancer.”

Next up was James Murray, the Search Advertising Lead for Microsoft UK. On the face of it Bing has a big problem: it is way behind Google in both public awareness and market share. There is even a verb, “to Google”, synonymous with the act of search. But Murray says the company isn’t discouraged – after all, he says, owning the verb isn’t enough. He illustrated the point with another brand that became a verb – to Hoover. How many people use the term “doing the hoovering”, he asked the audience – virtually everyone. Now how many people own a Hoover? Less than a quarter. Now, “who owns a Dyson?” Three quarters of the room. QED.

“Bing is trying to be the Dyson of search,” he said, by reinventing search as a contextual technology.

People often use the wrong terms for what they are searching for, says Murray, so the key to being useful is to sort the context to provide the right answer at the right time. For example when the film Jurassic World was launched many people were actually searching for “Jurassic Park release date”. Giving the “right” answer in this example means returning the strictly wrong answer.

“Search engines are very good at patterns once they know what they are looking for.”

He listed several different types of context to illustrate how Microsoft are thinking about the issue:

  • Emotional. Microsoft is starting to research facial monitoring in order to understand how the user is feeling. In the MS Research labs in Cambridge, he says, you don’t need to sign in as the reception computers read faces to grant access – and even, futuristically, check your calendar to summon lifts and choose floors in order to get you to your meeting on the fifth floor.
  • Environmental. The search engine can know that your usual favourite coffee is Costa and so would normally direct you to the nearest one, but now it knows it’s raining so it offers you another chain much nearer so you don’t get wet.
  • Social. “I am different with my wife than when I’m at work”, he says, and he’d like the search engine to understand that.
  • External. There are other things like global recession, or Brexit, or climate change, which also have a bearing, he says, but the biggest external context is your own culture and language. “Disney are really good at this,” he says. “Disney makes many versions of a film for different places to account for the cultural nuances.”

Context, he says, is king. How different this really is from Google’s approach is open to question, though, so we shall have to wait and see.

Last up in the session was “tech jester” Tom Scott, who describes himself as someone who makes things with lines of code, video editing tools, and a few metres of network cable. Scott gave an entertaining talk about the history of emoji which demonstrated just how unexpectedly powerful seemingly simple things can be if they are widely adopted. He explained that there are now permanent committees deciding which emojis are given official Unicode status, which means they will be adopted worldwide and visible on every machine.

“The serious point is that in 2017 there will be a condom emoji which means teens all over the world will be able to text each other about safe sex.”

The final session was called Present at the Creation.

First up was Joe Faith, who sold his first software – a computer game – at 14, and is now a Product Manager at Google.

Google, he says, is the “least process driven company I have ever worked with”.  And the reason is because process “doesn’t fit the people who work there”.

What drives Google instead are strong core values, he says.

One of the key ones is Focus on Users.

“The shallow sense of focussing on users is talking to users,” he says. “The deeper meaning is adoption before money.” For example, he says, with the development of the Android operating system it was not clear where the money was coming from at the beginning.

The success of the adoption-before-money approach depends on two things, he says: the scalability digital gives you and venture capital firms who understand the model.

The real difference comes when you ask for really big improvements. “What’s the 10x?” is the question most asked about new projects in Google. “How is it much better? What does it do for the users? How would you get there?”

He says the 10x ideal is so powerful because “10x is big (not incremental) but not too big.” Also, you are looking for 10x in one dimension not all, he says. “It forces you to rethink the basics.”

The key to the Google approach is to launch and iterate, he says. “There is a lot you don’t know about innovative products by definition so the key is to launch as quickly as possible and learn as quickly as possible whether it’s worth it.”

Google always front-loads the technical risk, he says, as this is the thing which is really going to kill you.

Google Docs was “not good when it came out”, he says. And Chrome, Google’s browser, now the most popular in the world, was poor at first. “But it was fast and auto updated.” These were the 10x’s. Getting users to update browsers to combat security issues was a serious problem, so a browser able to auto update would be a major improvement. And being fast is the main thing users want from a browser. “The first version was just a box on the screen – there wasn’t even a button,” he said. But it auto updated, which meant that those Googlers who were persuaded to try the product didn’t have to do anything – it just kept getting better and better automatically.

Focussing on the user and looking for the 10x is easy to say but hard to do, argues Faith. “You are always working on problems outside your comfort zone. It means you have to kill projects. And it means you will get difficult feedback.”

Next up was Katherine Harmon Courage, an award-winning freelance journalist and contributing editor for Scientific American magazine, whose new book Cultured is coming out next Spring.

She gave a fascinating talk about, of all things, the large intestine.

“Microbiomes are everywhere – mouth, soil, washrooms,” she said, but the gut is hot, acidic and lacking in oxygen, so studying our own was hard because the bacteria didn’t survive outside the body.

Eventually, though, she said, we developed better culture environments, and then genetic sequencing was the big leap forward about 10 years ago. “There are hundreds or thousands of species on and in you and they are changing all the time.”

Now we can study these organisms, we are beginning to look at their interactions and how they affect our health.

One of the problems with modern healthcare is that antibiotics wipe out good bacteria as well as bad, which can result in some serious conditions such as Clostridium difficile colitis, which occurs when Clostridium difficile (C. diff) outcompetes other gut bacteria.

One of the ways this condition is treated is “fecal microbial transplant”, which is pretty much what it sounds like and has a bit of an image problem, says Harmon Courage.

The future is to create well-balanced biome mixes in the lab and tackle a wider range of conditions through simple pills, she says.

In the meantime, eat more fermented foods, she advises.

“Fermented products are all around the world,” she says. Miso, for instance, is created in ancient vats and with human hands. “Kimchi and miso have much more bacteria than probiotic yogurt in the West.”

There have been recent studies which show that the live bacteria in yogurt in the West don’t survive long in the gut, and so some have questioned their efficacy.

But, she says, the key is to eat them all the time. “Then it doesn’t matter if they don’t survive.”

The final talk was from Mary Teresa Rainey, a tech and advertising industry veteran who was awarded an OBE for Services to Advertising in 2015.

Rainey gave a highly personal account of her involvement with the young Steve Jobs and Apple. She was a young advertising exec working in a small team on the TV commercial for the Lisa computer. She recalled a film shoot for the ad, which was directed by Ridley Scott, who had already made Blade Runner but was far from having the cult status that he later enjoyed.

The star was a very young Kevin Costner who, she recalled, had a dog “and I had to look after it.” She did a bad job and the dog ran onto the set. “Ridley Scott just said ‘damnit, let the dog be in the picture’, and it turned out to be a star”, she said.

Speaking about Steve Jobs, with whom she worked closely on the Macintosh project as one of only six agency insiders, she said he instinctively understood communications and design. She is convinced he was a genius.

Steve had the “revolutionary idea of personal computing”, she said, and it was this idea of revolution which inspired the now legendary “1984” ad. She recalled how the Board of Apple didn’t like the commercial at all, but Steve was convinced. So as a callow 23-year-old she “had to persuade the board”.

The ad ran only once, during the Super Bowl (the Apple Board insisted that all other slots be cancelled). But Steve was right, she says, and the ad is now regarded as one of the finest ever made.

“Steve was a hot person not a cold person”, she said. “He could be rash, passionate and gesticulating. But he also often broke into a grin, or jumped up and down on the table.”

Another great thing about Steve Jobs was that he was genuinely only interested in talent. “There were a lot of great women in Apple,” she said. “He was a great supporter of talent, whoever they were.”

The more things change the more some things stay the same, she says. “Ideas are a powerful patent for brands. Technology changes but humans don’t. Powerful communications trump everything.”

All in all, a packed programme with a lot of food for thought. To my mind it remains to be seen whether the Newcastle event can keep its unique status – I rather doubt it as Manchester and London grow in stature – but I certainly hope so.

The era of emulation and what it means for us

Robin Hanson

What is the next phase for humanity? Robin Hanson set out to answer this question in a thought-provoking and lively talk to London Futurists on March 19th.

He argues that humanity has been through several distinct economic growth phases, each of which has been “exponential” in character. The first lasted nearly 200,000 years from the moment Homo Sapiens first emerged as hunter-gatherers. These early humans were vastly superior to the animals they replaced, successfully exploiting their environment through the use of organisation and tools. The next economic era began with the arrival of agriculture about 10,000 years ago and brought about a huge acceleration in development, with efficient use of labour and larger and more sophisticated societies. This ended with the birth of the third era, the industrial era, which started around 1760. Again, an exponential increase in economic output and efficiency. This gave way to the computer age in which we now live. The doubling times of these eras have become shorter and shorter, with world GDP today doubling roughly every 15 to 20 years.
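As a rough way to see what these doubling times mean, a doubling time can be converted into an implied annual growth rate and back. This is a small illustrative calculation of my own, not from Hanson’s talk:

```python
import math

def annual_growth_rate(doubling_years):
    """Annual growth rate implied by a given doubling time in years."""
    return 2 ** (1 / doubling_years) - 1

def doubling_time(annual_rate):
    """Doubling time in years implied by a given annual growth rate."""
    return math.log(2) / math.log(1 + annual_rate)

# World GDP doubling every ~15 years implies growth of roughly 4.7% a year.
print(f"{annual_growth_rate(15):.1%}")  # → 4.7%

# An economy doubling every month would grow 2**12 = 4096-fold in a year.
print(2 ** 12)  # → 4096
```

Run the other way, today’s typical growth figures of a few percent a year land squarely in the 15-to-20-year doubling range Hanson cites, which is what makes his “doubling every week or month” question so startling.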

What, Hanson asked, could create an economy which doubles every week or month?

And the answer he comes up with is – robots.

What Hanson means by “robots” is true general artificial intelligence and he argues there are three ways to do this: better software, a comprehensive theory of intelligence, or emulating a human brain.

And it is his belief that the most likely scenario is that we will first develop the capacity to emulate a human brain and that this should happen “sometime in the next century”.

All we need, he argues, are “many parallel computers” which are capable of scanning a human brain, modelling every brain cell type and recording what we see and then “running the model”.

This doesn’t mean we need to understand how a brain works – he thinks we may be centuries away from this. But we would be able to run what he calls “EMs” – short for emulations.

If we had them there would be a new age – the age of EM.

It is this new era, then, that he sets out to describe. Running in software, EMs are effectively immortal – “like houses and cars, if we choose”. But it’s unlikely EMs will choose to be – much more likely that they will spawn short-lived versions of themselves to carry out repetitive or one-off tasks and then shut these down when they have served their purpose.

The new age will have new morals – EMs will probably be OK with termination and respooling.

Partly this is simply a result of obsolescence – “Currently if the economy doubles every 15 years your skills as an individual become obsolete in that time.” This is why we retire and let the next generation learn the next set of skills. “In the world of the EM faster emulation means faster obsolescence.”

They will run faster because, even though these new consciousnesses are essentially human brains, “human brains are parallel so more hardware means more speed.” And they will take up very little space, as they only really need to inhabit robot bodies when they have to do something in the physical rather than the virtual world. Hanson believes most of the time they will inhabit a purely virtual environment.

Hanson sees the birth of EMs as inevitable – they will be developed to speed economic development. And in the early days humans will own the EMs – much like slaves were owned. But just like slaves, some EMs will “buy” their freedom, and from there they will quickly make up more and more of the economy (which may now be doubling in a matter of weeks or days). Because they are so cheap to create (an EM could be copied millions of times at very little cost) and so cheap to run, he says wages will effectively fall far below human subsistence levels.

Humans will be eclipsed. The whole human race will retire.

Whether that retirement is a happy or a tragic one is very much up to us, he believes, as we will be quite rich enough as a whole to ensure a good outcome, although those riches will be extremely unequally distributed.

But either way, we might be retiring into a very different world. “Robots don’t need nature” he says. “They may choose to save nature but don’t need to.”

And if we are thinking all this doesn’t sound too good, and that we humans are bound to resist, he doesn’t really buy the “robot wars” scenario, either. “There wasn’t a farmer-industry war during the switch to the industrial era.”

So if this new era could begin soon, how long will it last? Hanson believes that because EMs will be running so fast the whole era could last just a couple of years. After that, maybe they will develop true software AI which will spawn the next era – who knows….

Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University.