The wrong way to do driverless cars

I’m a great supporter of driverless cars. I think they have the potential to dramatically change the world, making much better use of resources, revolutionising mobility for all and radically improving our towns and cities.

Paradoxically, however, I am not so keen on Philip Hammond’s announcement that the UK aims to be the first country in the world to permit them on public roads without any “safety attendant” on board.

I’m just not convinced that the Government has developed a solid appreciation for the benefits of technology. After all, this is the country where more than half of schools don’t even offer a computer science GCSE, according to a report from the Royal Society.

In fact, I think this has, like seemingly everything these days, more to do with Brexit than anything else.

Having alienated the conventional motor industry, which is warning of the dire consequences of leaving the customs union, it probably seems like a really smart move to become the go-to place for manufacturers to test and develop self-driving cars, which the smart money says are the future. This way we can secure our place in the world when conventional car manufacturing relocates to the Continent.

But recklessly throwing off safeguards simply in order to pursue narrow short-term economic objectives could set the development of self-driving cars back decades. The implementation of self-driving cars is multifaceted and complex, as much from a societal as a technical perspective. It will require careful collaboration across countries and disciplines, as well as exceptionally well-calibrated communication with the populations they are supposed to be benefiting. None of these things seems to be particularly in the UK’s skillset at the moment.

We’ve already witnessed the outcry over a fatal accident in which a Tesla that was driving itself failed to see a lorry crossing in front of it. This is in sharp contrast to the coverage given to the 1.25 million people estimated to be killed by human-driven cars each year around the world. And this was in a case where there was a clear responsibility on the driver to keep alert and supervise if necessary.

The first (pretty well inevitable) fatality caused by a self-driving car could quite easily set off a backlash which sets the development of this transformational technology back decades. And that would be a tragedy, not least for the millions whose lives would have been saved by the technology in the interim.

AI-powered robots and the future

This is a post I have been pondering for quite a while. While the debate rages on daily about whether AI (specifically AGI) is humanity’s great saviour or the biggest existential threat we face, several stories which have emerged over the past few weeks seem to me to cast some light on the issue.

The first inspiration for the post was a Click Podcast from BBC World Service which had a number of items to do with robotics.

One was news that autonomous robots with “socially aware navigation” are being road-tested by MIT researchers. What the researchers found was that it wasn’t difficult to make a robot which could autonomously avoid obstacles, but that once you throw humans into the mix life becomes much more complex.

The researchers found that humans in fact act quite unpredictably and follow a complex set of social rules like keeping to the right, passing on the left, maintaining a respectable berth, and being ready to weave or change course to avoid oncoming obstacles. And they do all this while keeping up a steady walking pace.

By using a kind of machine learning, they taught their robots to navigate among humans the way humans do. This is especially important if we are increasingly to share our environment with various helper bots delivering goods or assisting in hospitals and care homes, for example. And plenty of other work is going into making humans and robots rub along more smoothly.
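The flavour of this approach can be sketched as a cost function: plain obstacle avoidance plus extra penalty terms for breaking social conventions. Everything below – the weights, the “keep right” rule, the ~1.4 m/s walking pace – is an illustrative assumption of my own, not MIT’s actual system.

```python
# Toy sketch of "socially aware" navigation: score candidate moves by
# obstacle clearance PLUS penalties for breaking social rules.
# All weights and thresholds here are illustrative assumptions.

def social_cost(candidate, pedestrians, preferred_speed=1.4):
    """Score a candidate move for a robot in a corridor.

    candidate:   (lateral_offset, speed); offset > 0 means keeping right.
    pedestrians: list of lateral offsets of nearby people.
    Lower cost is better.
    """
    offset, speed = candidate
    cost = 0.0
    # Plain obstacle avoidance: heavy penalty inside personal space,
    # softer penalty for mere nearness.
    for p in pedestrians:
        gap = abs(offset - p)
        cost += 10.0 if gap < 0.5 else 1.0 / gap
    # Social rule 1: prefer keeping to the right.
    cost += 0.5 * max(0.0, 1.0 - offset)
    # Social rule 2: keep up a steady human walking pace (~1.4 m/s).
    cost += abs(speed - preferred_speed)
    return cost

def choose_move(candidates, pedestrians):
    """Pick the lowest-cost candidate move."""
    return min(candidates, key=lambda c: social_cost(c, pedestrians))
```

Faced with an oncoming pedestrian on its left, a robot using this scoring passes on the right at walking pace rather than stopping dead or squeezing past – the “social” terms dominate once raw collision avoidance is satisfied.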

Another item on the Click Podcast addressed the same issue, but from a different perspective. This time it was training children to see robots as a natural part of their environment. Cozmo, a tiny robot toy with a “brain” and personality, is like a robot version of the far more irritating Tamagotchi (which demanded constant attention or it would die). Cozmo is a bit cuter and more socially rewarding and promises to offer expanding options for interactive play for children. In early tests young children quickly became used to the presence of the robot and treated him almost as a human play companion. This is the “get ’em young” approach to robot acceptability.

So now we have robots which navigate the world the way humans do, can communicate more effectively and which have human-like emotional responses.

The next step – at least in the University of Edinburgh – is to give some economic agency to a robot. In this case it’s a coffee machine called Bitbarista. The aim was to create a coffee machine which could explore attitudes to ethical trading and autonomously respond. The machine, which had its own Bitcoin account and a connection to the internet, asks students to rate the importance of various attributes of the coffee they want – taste, ethical sourcing, price etc. – and on the basis of the crowd-sourced information adjusts its future orders of replacement beans accordingly.

In addition the machine uses some of the Bitcoin it earns on each coffee to pay students to carry out various maintenance tasks for it, such as refilling water or beans.
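The crowd-sourced ordering part might work something like the sketch below. The Edinburgh team’s actual implementation isn’t described here, so the class, the attribute names and the simple averaging scheme are all my own assumptions.

```python
# Toy sketch of Bitbarista-style crowd-sourced ordering: drinkers rate
# how much each coffee attribute matters, and the machine averages the
# ratings into weights for its next bean order. Illustrative only.
from collections import defaultdict

class CoffeeMachine:
    def __init__(self, attributes=("taste", "ethics", "price")):
        self.attributes = attributes
        self.votes = defaultdict(list)   # attribute -> list of 1-5 ratings

    def rate(self, **ratings):
        """A drinker rates how much each attribute matters (1-5)."""
        for attr, score in ratings.items():
            self.votes[attr].append(score)

    def next_order_weights(self):
        """Average the crowd's ratings into normalised ordering weights."""
        means = {a: sum(v) / len(v) for a, v in self.votes.items() if v}
        total = sum(means.values())
        return {a: m / total for a, m in means.items()}
```

If two drinkers both rate taste and ethical sourcing highly but price low, the resulting weights skew the next order towards better, more ethically sourced beans – the crowd’s preferences, not a human manager’s, drive the purchasing.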

In this case, though it clearly has some agency, the coffee machine has only a Raspberry Pi for a brain, so it is unlikely to get too carried away with itself. That may not hold, though, as we move on to more powerful implementations such as self-driving cars.

It strikes me that making robots economic players is a pretty silly thing to do. The philosopher Nick Bostrom famously warned of the difficulty of setting objectives for AGI which wouldn’t backfire on us. His thought experiment explored how even a seemingly innocuous goal – such as making paperclips – could go disastrously wrong and end up destroying the world.

People are already thinking about the new kinds of ownership models that fully self-driving cars might enable. Will we still need (or indeed want) to own a car if we can summon one immediately from our smartphones? And why have only human-owned and -run companies owning fleets of cars? Why not self-owning cars? People are already seriously suggesting this as a clear possibility.

But giving an AI-powered robot a capitalist goal framework would be a terrible plan. The idea starts out sounding quite sensible. Why not give the car a bank account (Bitcoin or otherwise) and enable it to use the money it makes to book itself in for servicing, pay for upgrades and so on? And, if it finds it is in great demand, it has been suggested it should be allowed to buy a second car and become a fleet. Why not?

Because paperclips, that’s why not.

Imagine – the car starts out being the best self-driving car it can, arriving when summoned, taking the most efficient route it can, ensuring it hovers in the right places to make itself as useful as possible.

Pretty soon, though, prompted by the desire to earn more money so it can buy more upgrades or more cars for its fleet, it figures out that blocking other cars is a more efficient way of acquiring money rapidly. So it starts sending false reports to other cars on the road to ensure it gets the best pick-ups.

Other self-owning cars respond and bingo, you’ve re-created the Wild West. What was going to be a utopia of cheap, ubiquitous, convenient transport becomes a nightmare.

I draw a few conclusions from all this. First, there is so much work going on that AI-powered robots are a racing certainty – it’s not “if”, it’s “when”. Second, the current debate about AI (humanity’s saviour or its destroyer) is too polarised and strident to be particularly useful. What’s needed is far more active discussion of how to make this all work well. It is abundantly clear to me that simply treating the advances in AI and automation (like self-driving cars) as another opportunity for the current capitalist model won’t wash. Technological unemployment is inevitable (see Calum Chace’s excellent work on what he has dubbed The Economic Singularity for an account of why), and without quite radical change there won’t be enough buyers for the goods and services which the AI promises to bring. Therefore, we need a new plan. As Calum puts it in a blog post:

We should aim for a world in which machines do all the boring stuff and humans get on with the important things in life, like playing, exploring, learning, socialising, discovering, and having fun.

There is a lot to be optimistic about. But political and economic orthodoxy needs to catch up fast with the technology.