Can robots be moral beings?

In short, we can make robots which display altruistic behaviour, but they aren’t moral agents because we create them, says Joanna Bryson from Bath University. And what’s more, we should not pretend that they are.

Speaking at a London Futurists session today, she said the key difference between robots and children is that, although we can guide the development of children, ultimately they are free agents. We make robots entirely, so they can’t be said to have moral agency.

But there are powerful forces at work. We humans have an overwhelming urge to impute agency to all sorts of animate and inanimate objects – think dogs and cats, and stuffed rabbits.

Soldiers using bomb disposal robots in Iraq became very attached to them: they would rescue them and want them repaired rather than replaced by new robots.

But, she says, there are serious moral hazards involved in treating robots as morally responsible. “Governments and manufacturers are going to want the robots to be responsible so they don’t have to pay when things go wrong.” Take the “killer robots” which are very much in the news at the moment. It isn’t the robots that are the killers, she argues. It is the politicians who have ultimate responsibility for the cost/benefit trade-offs programmed into them. But that is not how it is likely to be portrayed if something goes wrong.

Despite the apparent attractiveness of developing AI robots in our image, Bryson argues it probably doesn’t make any sense to try to make robots more like us.

“All the things that are important to us are because of our evolution, because we are apes.” Not only does imputing our values to robots not make sense, it may even be counterproductive. “It may not make them any better.”

And she doesn’t worry about crossing some magic line where one minute we don’t have AI and the next minute we do – the so-called intelligence explosion.

Neither does she believe that, just because of AI, the world is suddenly in danger of being turned into a giant paperclip factory, as Nick Bostrom has suggested; she points out that we are already doing that to the world, albeit making more than just paperclips.

She believes it won’t happen like that; AI is simply getting better all the time (there are already AIs that pass the Turing Test, she argues). She does think, though, that we need to consider carefully how we want to proceed – much as we did with nuclear and chemical weapons.

For that reason she was involved with an initiative sponsored by the EPSRC and the AHRC to update Asimov’s famous laws of robotics.
Principles for designers, builders and users of robots

  • Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
  • Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws & fundamental rights & freedoms, including privacy.
  • Robots are products. They should be designed using processes which assure their safety and security.
  • Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
  • The person with legal responsibility for a robot should be attributed.

What really lies behind Google’s acquisition strategy

Last month I wrote about Google’s acquisition spree and was somewhat critical of the depth of the analysis. I promised a follow-up on what I thought was possibly really going on, so here it is – better late than never.

Whenever Google acquisitions get discussed, it seems the explanation is always somehow connected to data. The argument goes that Google’s only real business model is advertising, and advertising thrives on data (which makes it more relevant and therefore more effective). Therefore data is the reason behind whatever acquisition is being discussed.

I don’t entirely subscribe to this view. I believe there is something a bit deeper and more far-reaching happening.

If you take a look at the list of companies that Google has acquired, among the firms more obviously connected to Google’s current core business model there are a good number which fall into the categories of robotics, artificial intelligence and human-computer interfaces.

Google’s recent initiatives include Calico, which supports research into ageing and health, the much-publicised driverless car and the infamous Google Glass project. 

Meanwhile Google Ventures, the venture capital arm, is busy investing in life sciences, among other things. 

What have these things got in common? They are all thematically relevant to a particular view of how the future will unfold in 30 to 50 years’ time. The clue, I think, is in the appointment of Ray Kurzweil as director of engineering, who is officially there to “work on new projects involving machine learning and language processing”. Wikipedia describes him like this:

He has written books on health, artificial intelligence (AI), transhumanism, the technological singularity, and futurism. Kurzweil is a public advocate for the futurist and transhumanist movements, as has been displayed in his vast collection of public talks, wherein he has shared his primarily optimistic outlooks on life extension technologies and the future of nanotechnology, robotics, and biotechnology.

Google is investing, one way or another, in most of the key technologies central to Kurzweil’s optimistic vision, which puts it very much at the forefront of making it all happen. Google could well become the first example of a transhumanist corporation.