Category Archives: Technology

Elon Musk’s strengths and weaknesses

I’ve been thinking about the meltdown that has been happening since Elon Musk was reluctantly successful in his purchase of Twitter.

To kill two birds with one stone I thought I would ask ChatGPT what it thought were his strengths and weaknesses:

Elon Musk is a highly successful entrepreneur and CEO, known for his innovative approaches and ambitious projects. However, like all individuals, he also has both strengths and weaknesses.

One of Musk’s greatest strengths is his ability to think big and envision the future. He has a strong vision for what he wants to achieve, and he is not afraid to take risks in pursuit of his goals. This has allowed him to successfully co-found and lead companies such as PayPal, Tesla, and SpaceX, which have all had a significant impact on their respective industries.

Another strength of Musk’s is his ability to adapt and innovate. He is constantly looking for ways to improve and evolve his companies, and he is not afraid to pivot or change course if necessary. This has allowed him to stay ahead of the curve and anticipate market trends, which has been key to the success of his businesses.

However, Musk is not without his weaknesses. One of his most notable weaknesses is his tendency to be overconfident and make bold predictions that may not come to fruition. This has led to some setbacks and criticism when his predictions have not been met. Additionally, Musk’s strong focus on achieving his goals can sometimes lead to him being perceived as lacking empathy or consideration for others.

Overall, Elon Musk is a complex and multifaceted individual with both strengths and weaknesses. While his confidence and vision have helped him achieve tremendous success, it is important for him to continue to strive for balance and consider the impact of his actions on others.

It’s not a bad answer, and an exceptional one when you consider it took an AI about 10 seconds to come up with it.

It’s not a sophisticated answer: it ignored, for instance, his tendency to treat human “laws” (say the desirability of free speech) as if they were the laws of physics, which is his greatest flaw and which has been mercilessly exposed by the shit-show that is Musk-owned Twitter.

But prior to his latest display of his character I would say this is a pretty good assessment. I can only imagine the havoc that will be caused by ChatGPT and its ilk in the very near future!

The wrong way to do driverless cars

I’m a great supporter of driverless cars. I think they have the potential to dramatically change the world, making much better use of resources, revolutionising mobility for all and radically improving our towns and cities.

Paradoxically, however, I am not so keen on Philip Hammond’s announcement that the UK aims to be the first country in the world to permit them on public roads without any “safety attendant” on board.

I’m just not convinced that the Government has developed a solid appreciation for the benefits of technology. After all, this is the country where more than half of schools don’t even offer a computer science GCSE, according to a report from the Royal Society.

In fact, I think this, like seemingly everything these days, has more to do with Brexit than anything else.

Having alienated the conventional motor industry, which is warning of the dire consequences of leaving the customs union, it probably seems like a really smart move to become the go-to place for manufacturers to test and develop self-driving cars, which the smart money says are the future. This way we can secure our place in the world when conventional car manufacturing relocates to the Continent.

But recklessly throwing off safeguards simply in order to pursue narrow short-term economic objectives could set the development of self-driving cars back decades. The implementation of self-driving cars is multi-faceted and complex, as much from a societal as a technical perspective. It will require careful collaboration across countries and disciplines, as well as exceptionally well calibrated communication with the populations it is supposed to benefit. None of these things seems to be particularly in the UK’s skillset at the moment.

We’ve already witnessed the outcry over a fatal accident in which a Tesla driving itself failed to see a lorry crossing in front of it. This is in sharp contrast to the coverage given to the 1.25 million people estimated to be killed by human-driven cars each year around the world. And this was in a case where there was a clear responsibility on the driver to remain alert and intervene if necessary.

The first (pretty-well inevitable) fatality by a self-driving car could quite easily set off a backlash which sets the development of this transformational technology back decades. And that would be a tragedy, not least for the millions whose lives would have been saved by the technology in the interim.

AI-powered robots and the future

This is a post I have been pondering for quite a while. While the debate rages on daily about whether AI (specifically AGI) is humanity’s great saviour or the biggest existential threat we all face, several stories which have emerged over the past few weeks seem to me to cast some light on the issue.

The first inspiration for the post was a Click Podcast from BBC World Service which had a number of items to do with robotics.

One was news that autonomous robots with “socially aware navigation” are being road-tested by MIT researchers. What the researchers found was that it wasn’t difficult to make a robot which could autonomously avoid obstacles, but that once you throw humans into the mix life becomes much more complex.

The researchers found that humans in fact act quite unpredictably and follow a complex set of social rules like keeping to the right, passing on the left, maintaining a respectable berth, and being ready to weave or change course to avoid oncoming obstacles. And they do all this while keeping up a steady walking pace.

Using a kind of machine learning, they taught their robots to navigate among humans the way humans do, which is especially important if we are increasingly to share our environment with various helper bots delivering goods or helping in hospitals and care homes, for example. And plenty of other work is going into making humans and robots rub along more smoothly.
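To make that concrete, here is a minimal sketch – my own illustration, not the MIT team’s actual code – of how a few “social” rules might be folded into a path-scoring function alongside plain obstacle avoidance (all the weights and thresholds are invented for the example):

```python
import math

def social_cost(step, pedestrians, preferred_side="right",
                comfort_radius=0.8, speed_target=1.3):
    """Score a candidate step: lower is better.

    Mixes plain collision avoidance with simple "social" terms:
    keep a comfortable berth, favour the preferred side when passing,
    and hold a steady, human-like walking pace.
    """
    cost = 0.0
    for p in pedestrians:
        dist = math.dist(step["position"], p["position"])
        if dist < comfort_radius:          # personal-space penalty
            cost += (comfort_radius - dist) * 10.0
        # Toy rule: penalise being on the "wrong" side of a pedestrian.
        side = "right" if step["position"][0] > p["position"][0] else "left"
        if side != preferred_side:
            cost += 1.0
    # Penalise deviating from a steady walking pace.
    cost += abs(step["speed"] - speed_target) * 2.0
    return cost

# Pick the best of a few candidate steps the planner has proposed.
candidates = [{"position": (1.0, 0.5), "speed": 1.3},
              {"position": (0.4, 0.5), "speed": 1.3}]
pedestrians = [{"position": (0.5, 0.6)}]
print(min(candidates, key=lambda s: social_cost(s, pedestrians)))
```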

Another item on the Click Podcast addressed the same issue, but from a different perspective. This time it was training children to see robots as a natural part of their environment. Cozmo, a tiny robot toy with a “brain” and personality, is like a robot version of the far more irritating Tamagotchi (which demanded constant attention or it would die). Cozmo is a bit cuter and more socially rewarding, and promises to offer expanding options for interactive play for children. In early tests young children quickly became used to the presence of the robot and treated him almost as a human play companion. This is the “get ’em young” approach to robot acceptability.

So now we have robots which navigate the world the way humans do, can communicate more effectively and which have human-like emotional responses.

The next step – at least in the University of Edinburgh – is to give some economic agency to a robot. In this case it’s a coffee machine called Bitbarista. The aim was to create a coffee machine which could explore attitudes to ethical trading and respond autonomously. The machine, which has its own Bitcoin account and a connection to the internet, asks students to rate the importance of various attributes of the coffee they want – taste, ethical sourcing, price and so on – and, on the basis of the crowd-sourced information, adjusts its future orders of replacement beans accordingly.

In addition the machine uses some of the Bitcoin it earns on each coffee to pay students to carry out various maintenance tasks for it, such as refilling water or beans.
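A rough sketch of what the ordering logic might look like – my own guess at the shape of it, not the actual Bitbarista code, with made-up attribute names and suppliers:

```python
from collections import defaultdict

# Each drinker rates how much each attribute matters to them (0-5).
ratings = [
    {"taste": 5, "ethical_sourcing": 4, "price": 2},
    {"taste": 3, "ethical_sourcing": 5, "price": 1},
    {"taste": 4, "ethical_sourcing": 2, "price": 5},
]

# Hypothetical bean suppliers, each scored 0-1 on the same attributes.
suppliers = {
    "fairtrade_single_origin": {"taste": 0.9, "ethical_sourcing": 0.95, "price": 0.4},
    "budget_blend":            {"taste": 0.6, "ethical_sourcing": 0.3,  "price": 0.9},
}

def crowd_weights(ratings):
    """Average the crowd's ratings into one weight per attribute."""
    totals = defaultdict(float)
    for r in ratings:
        for attr, score in r.items():
            totals[attr] += score
    return {attr: total / len(ratings) for attr, total in totals.items()}

def choose_supplier(ratings, suppliers):
    """Pick the supplier that best matches what the crowd says it values."""
    weights = crowd_weights(ratings)
    return max(suppliers,
               key=lambda s: sum(weights[a] * suppliers[s].get(a, 0.0)
                                 for a in weights))

print(choose_supplier(ratings, suppliers))  # which beans to reorder next
```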

In this case, though it clearly has some agency, the coffee machine has only a Raspberry Pi for a brain, so it is unlikely to get too carried away with itself. That may not hold, though, as we move on to more powerful implementations such as self-driving cars.

It strikes me that making robots economic players is a pretty silly thing to do. The philosopher Nick Bostrom famously warned of the difficulty of setting objectives for AGI which wouldn’t backfire on us. His thought experiment explored how even a seemingly innocuous goal – such as making paperclips – could go disastrously wrong and end up destroying the world.

People are already thinking about the new kinds of ownership and business models that fully self-driving cars might enable. Will we still need (or indeed want) to own a car if we can summon one immediately from our smartphones? And why should fleets of cars be owned only by human-owned and human-run companies? Why not self-owning cars? People are already seriously suggesting this as a clear possibility.

But giving an AI-powered robot a capitalist goal framework would be a terrible plan. The idea starts out sounding quite sensible. Why not give the car a bank account (Bitcoin or otherwise) and enable it to use the money it makes to book itself in for servicing, pay for upgrades and so on? And, if it finds itself in great demand, it has been suggested it should be allowed to buy a second car and become a fleet. Why not?

Because paperclips, that’s why not.

Imagine – the car starts out being the best self-driving car it can be, arriving when summoned, taking the most efficient routes, ensuring it hovers in the right places to make itself as useful as possible.

Pretty soon, though, prompted by the desire to earn more money so it can buy more upgrades or more cars for its fleet, it figures out that blocking other cars is a more efficient way of acquiring money quickly. So it starts sending false reports to other cars on the road to ensure it gets the best pick-ups.

Other self-owning cars respond and bingo, you’ve re-created the Wild West. What was going to be a utopia of cheap, ubiquitous, convenient transport becomes a nightmare.
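The alignment problem here fits in a few lines of code. A deliberately toy sketch (invented numbers, nothing to do with any real system): if the car’s objective counts only its own takings, an antisocial strategy can score higher than honest driving, because nothing in the objective says it shouldn’t.

```python
def naive_objective(outcome):
    # All the car is told to care about is its own revenue.
    return outcome["fares_earned"]

def safer_objective(outcome, harm_weight=100.0):
    # One crude fix: make harm to other road users explicitly expensive.
    return outcome["fares_earned"] - harm_weight * outcome["harm_to_others"]

honest   = {"fares_earned": 900,  "harm_to_others": 0}
blocking = {"fares_earned": 1100, "harm_to_others": 5}  # jams rivals, grabs pick-ups

assert naive_objective(blocking) > naive_objective(honest)   # antisocial wins
assert safer_objective(blocking) < safer_objective(honest)   # ...until harm counts
```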

I draw a few conclusions from all this. First, there is so much work going on that AI-powered robots are a racing certainty – it’s not “if”, it’s “when”. Second, the current debate about AI (humanity’s saviour or its destroyer) is too polarised and strident to be particularly useful. What’s needed is far more active discussion of how to make this all work well. It is abundantly clear to me that treating the advancements in AI and automation (like self-driving cars) as just another opportunity for the current capitalist model simply won’t wash. Technological unemployment is inevitable (see Calum Chace’s excellent work on what he has dubbed The Economic Singularity for an account of why) and without quite radical change there won’t be enough buyers for the goods and services which the AI promises to bring. Therefore, we need a new plan. As Calum puts it in a blog post:

We should aim for a world in which machines do all the boring stuff and humans get on with the important things in life, like playing, exploring, learning, socialising, discovering, and having fun.

There is a lot to be optimistic about. But political and economic orthodoxy needs to catch up fast with the technology.

The role of language

I was struck this week by the sharp contrast in language styles adopted by two of the world’s great tech leaders – Tim Cook and Elon Musk.

Tim Cook’s keynote at WWDC was full of the kind of language we have long associated with Apple – “incredible”, “great”, “changing the world” – all the while talking about new operating systems for the iPhone, Apple Watch, Apple TV and the Mac itself. While undoubtedly impressive, I’m not sure the epithets really fit the bill…

Elon Musk, on the other hand, seems to have invented a new language to deal with the highly experimental and ground-breaking work he is involved in. Take the goal of SpaceX to develop reusable rockets. There is clearly a lot to learn, as this has never been done before. Therefore every failure is a step on the path to learning how to achieve reliable reusability. With the Iron Man allusions and fan worship it is nigh-on impossible for social media, or indeed mainstream media, to deal calmly with the failures as well as the successes. Hence this:

“Explosion” or “crash” smacks of failure. “RUD” – rapid unscheduled disassembly – speaks to a more reasoned, experimental and scientific approach.

This more measured approach to a complex world, as well as the humour, is surely worth adopting far more widely.

Thinking Digitally in Tyneside

The Sage, Gateshead

This year was something of a turning point for Thinking Digital, as the Tyneside-based event, regarded as a kind of home-grown TED since its launch in 2008, branched out into satellite events in London and Manchester. The original event was slimmed down from two days to one and there were worries that the unique quality of Thinking Digital might be lost in the changes.

So how did it fare? Really rather well, actually. The conference’s first segment, called Sport, Culture and Terrorism, started off a little unpromisingly with a slightly underwhelming account of the IBM partnership with Wimbledon tennis by Bill Jinks, IBM’s CTO for Sales & Distribution in the UK.

Yes, the fact that the partnership has lasted 27 years is quite remarkable – this is longer than quite a few marriages. And, yes, the stats are pretty impressive – 21.1m unique devices, 71m visits, 542m page views. And there were some interesting details – such as the pains they take to paint the wifi and 4G aerials green to preserve the ancient mystique, and the fact that they employ 48 tennis players as data analysts to process the sensor data from all around the site so that they can maintain their reputation for having all the information as it happens.

But at root this felt like a usual tale of one old business harnessing the power of technology to speak to modern, global audiences across platforms. Jinks did hint that Watson might be brought into play in the future but was rather hazy on the details.

One specific did emerge which put the spotlight on IBM’s technical prowess – the cloud services are provisioned entirely through predictive analytics based on previous traffic patterns, the popularity of players and the like. It’s a shame there weren’t more details of this kind.
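For what it’s worth, the basic shape of that kind of predictive provisioning is easy to sketch – this is my own illustration of the idea, not IBM’s system, and every number in it is invented:

```python
def forecast_requests(history, player_popularity):
    """Very rough demand forecast: the historical average scaled by how
    popular the day's scheduled players are (illustrative only)."""
    return (sum(history) / len(history)) * player_popularity

def provision_capacity(expected_requests, requests_per_server=50_000, headroom=1.3):
    """Translate a forecast into a server count, with safety headroom."""
    return max(1, round(expected_requests * headroom / requests_per_server))

history = [2_100_000, 2_400_000, 3_000_000]   # requests on comparable past days
print(provision_capacity(forecast_requests(history, player_popularity=1.8)))
```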

The second session was, if anything, the weakest of the day in my opinion. Irini Papadimitriou is Digital Programmes Manager at the V&A and responsible for programmes such as the annual Digital Design Weekend. She spoke about various collaborations involving the venerable institution, the UK’s leading museum of fashion and design. These included the Met Office and V&A climate and fashion hackathon (which apparently led to Helen Storey’s Dress for Our Time, on show during the Paris climate talks), and other projects bringing together scientists and designers, economists and designers, and lots of different people looking at the recycling of old electronics. It all looked well-intentioned, but it was hard to grasp the real relevance to the V&A’s mission, or what the legacy of such collaborations was. Perhaps I’m being unfair.

Things started to look up on the third presentation, given by veteran cyber security expert Mikko Hypponen, Chief Research Officer of F-Secure.

Hypponen explained that his task is to hunt hackers for a living, and he says that one of the most important lessons he has learned is that you have to understand your enemy. It is quite a different proposition to protect your networks against hacktivists or criminals or nation states or terrorists.

Complexity, says Hypponen, is the enemy of security. When they get large enough all networks will be breached. He points out that all 500 of the Fortune 500 are hacked right now.  You can’t avoid it, so you need resilience.

“Security is getting better but we keep running into the old problems,” he said.

He used a good example of a scam from 1989 and one from 2016, both of which were essentially the same ransom trojan although the former was actually on a floppy disk.

Ransomware companies have a great business model, he says: “selling data back to the people who value it most – you.” The CryptoLocker trojan, for instance, has so far made €300m and is, in fact, a “cybercrime unicorn.” And, he points out, they don’t pay tax.

“If there is one thing you learn today it’s: Don’t click the enable content button,” he said. It was by clicking this kind of button that both the 1989 and the 2016 trojans were able to gain access.

What of the future? The Internet of Things will bring a lot more challenges. With IoT, no device will be too small to end up online, he argues.

But he is broadly optimistic: “The internet has brought us so much more good than bad and I hope the same will apply to IoT.”

There is already a problem with many industrial control systems being accessible through the internet. “If you scan the internet you find things which shouldn’t be there,” he says, such as generators, swimming pool systems, even hospital bed charts. And none of the examples he showed on screen were password protected.

Perhaps the biggest shift, though, was the fact that the world was now entering a cyber arms race: “Most of the things attributed to governments are spying rather than cyberwar.” Last December’s attack in Kiev against a power company, for instance, was Russia engaged in cyberwar. In the event it wasn’t that serious and they recovered power in a couple of hours, but things are escalating. “Last year the US launched drones to kill hackers twice”, he said.

But it is still the simple things that keep failing us. The attack on the Ukrainian power company started in November, when one of the employees was sent an Excel document with an “enable this content” button.

“Don’t click the button.”

Session two, entitled Blockchains and Bass Drums, brought together John Thorp, Sarah Meiklejohn and Ed Hipkin.

John Thorp, described as “an internationally recognized thought leader in the field of value and benefits management” opened by saying that the track record of organisations in getting value out of technology is poor.

“I joined IBM in Canada in 1984 which was going to be the year of the electronic health record. We are still waiting.”

What is needed is a real shift of mindset – moving from technology delivery to a real focus on business, he said.

Best practice is the approach that most companies rely on. But while best practice works for simple environments, it doesn’t work in complex ones, such as we now find in all large firms. What is needed is “emerging practices”.

“When things aren’t working we need to do something different.” In modern large companies we are managing an uncertain journey to an unknown destination, he said. “Leadership needs to move from top down to distributed capability and projects need to be led by different people at different times according to need.”

This is anathema to the industrial mindset which is, he says, “top down,  risk averse and controlling.” Modern challenges call for a collaborative, networked environment.

“There is a huge leadership deficit in the public and private sectors,” he said. “I’ve never done a consulting job where someone in the business didn’t already know the answer.”

Sarah Meiklejohn, a Lecturer in the Departments of Computer Science and Security and Crime Science at University College London, was next up discussing the poster child for the distributed environment – the blockchain.

Most people, she said, had a very sketchy view about the issue of online privacy but there were principles which people did hold dear: confidentiality, integrity and what she called data democracy (having a say in how your data is used).

“Goals do matter to people, for instance when we find our government is spying on us or when a company we buy from has child labour in its supply chain.”

Transparency is the only way to ensure democracy on the internet, she says.

That’s where she thinks blockchain – essentially a distributed ledger, the technology underpinning the cryptocurrency Bitcoin – comes in.

“Transparency is a real USP for the first companies who adopt it, and if we find the killer apps then we will see a lot of progress.” At the moment, she argues, we have a “technology hammer looking for nails.”
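For readers who haven’t looked under the bonnet: a blockchain is, at its simplest, an append-only ledger in which every block commits to the previous one via a hash, so past entries can’t be quietly rewritten. A minimal, single-machine sketch (purely illustrative, nothing like production Bitcoin code):

```python
import hashlib, json, time

def block_hash(block):
    """Hash everything in the block except its own hash field."""
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    """Each block commits to its contents and to the block before it."""
    block = {"timestamp": time.time(),
             "transactions": transactions,
             "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify(chain):
    """Recompute every hash; editing any earlier block breaks the links."""
    for i, block in enumerate(chain):
        if block_hash(block) != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block(["genesis"], prev_hash="0" * 64)]
chain.append(make_block(["alice pays bob 5"], prev_hash=chain[-1]["hash"]))
print(verify(chain))  # True, until someone tampers with an old block
```

The distributed part – many independent nodes agreeing on which chain is valid – is where the transparency she describes comes from.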

The session ended with Ed Hipkin (aka bassdrummer). He briefly explained his inspiration (he was blown away the first time he heard dance music on the school playing field in the ’90s and has been trying ever since to get his drums to sound more like his heroes’ music) before going on to give a fabulous and very well received demo.

Session three was called “The Searchers”.

First up was Will Dracup, the CEO of Biosignatures. He spoke about proteomics, which he described as looking at blood protein signatures for differences between those with a disease and those without.

There had, he said, been too little progress so far – “We are eight years into a 9 month project.” The goal is to look for unique signatures for prostate cancer and others. “The principle is that you can take a blood test and diagnose many diseases.”

But bad science is holding us back, he says. What is needed is blind testing in all studies to validate results. “Science is getting a bad reputation because too many stories in the press are contradicting each other – wine causes cancer, wine prevents cancer.”

Next up was James Murray, the Search Advertising Lead for Microsoft UK. On the face of it Bing has a big problem in that it is way behind Google in terms of public awareness and market share. There is even a verb, “to Google”, which is synonymous with the act of searching. But Murray says the company isn’t discouraged – after all, he says, owning the verb isn’t enough. He illustrated the point with another brand-turned-verb – to Hoover. How many people use the term “doing the hoovering”, he asked the audience? Virtually everyone. Now how many people own a Hoover? Less than a quarter. And “who owns a Dyson?” Three quarters of the room. QED.

“Bing is trying to be the Dyson of search,” he said, by reinventing search as a contextual technology.

People often use the wrong terms for what they are searching for, says Murray, so the key to being useful is to sort out the context and provide the right answer at the right time. For example, when the film Jurassic World was launched, many people were actually searching for “Jurassic Park release date”. Giving the “right” answer in this case means returning the strictly wrong one.

“Search engines are very good at patterns once they know what they are looking for.”
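A toy illustration of the idea – mine, not Bing’s actual pipeline, with an invented rewrite table – is simply to map what people literally type onto what the context suggests they currently mean:

```python
# Hypothetical table linking a stale query phrasing to current intent.
INTENT_REWRITES = {
    "jurassic park release date": "jurassic world release date",
}

def resolve_query(query, rewrites=INTENT_REWRITES):
    """Return the contextually 'right' query, even if it is literally 'wrong'."""
    return rewrites.get(query.lower().strip(), query)

print(resolve_query("Jurassic Park release date"))
# -> "jurassic world release date"
```

In reality the rewrites would be learned from query and click patterns rather than hand-written, which is presumably where the contextual signals he lists below come in.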

He listed several different types of context to illustrate how Microsoft are thinking about the issue:

  • Emotional. Microsoft is starting to research facial monitoring in order to understand how the user is feeling. In the MS Research labs in Cambridge, he says, you don’t need to sign in, as the reception computers read faces to grant access – and even, futuristically, check your calendar to summon lifts and choose floors in order to get you to your meeting on the fifth floor.
  • Environmental. The search engine can know that your usual favourite coffee is Costa and so would normally direct you to the nearest one, but now it knows it’s raining so it offers you another chain much nearer so you don’t get wet.
  • Social. “I am different with my wife than when I’m at work”, he says, and he’d like the search engine to understand that.
  • External. There are other things like global recession, or Brexit, or climate change, which also have a bearing, he says, but the biggest external context is your own culture and language. “Disney are really good at this,” he says. “Disney makes many versions of a film for different places to account for the cultural nuances.”

Context, he says, is king. How different this really is from Google’s approach is open to question, though, so we shall have to wait and see.

Last up in the session was “tech jester” Tom Scott, who describes himself as someone who makes things with lines of code, video editing tools, and a few metres of network cable. Scott gave an entertaining talk about the history of emoji which demonstrated just how unexpectedly powerful seemingly simple things can be if they are widely adopted. He explained how there are now permanent committees deciding which emojis are given official Unicode status, which means they will be adopted worldwide and be visible on every machine.

“The serious point is that in 2017 there will be a condom emoji which means teens all over the world will be able to text each other about safe sex.”

The final session was called Present at the Creation.

First up was Joe Faith, who sold his first software – a computer game – at 14, and is now a Product Manager at Google.

Google, he says, is the “least process driven company I have ever worked with”. And the reason is that process “doesn’t fit the people who work there”.

What drives Google instead are strong core values, he says.

One of the key ones is Focus on Users.

“The shallow sense of focussing on users is talking to users,” he says. “The deeper meaning is adoption before money.” For example, he says, with the development of the Android operating system it was not clear at the beginning where the money was coming from.

The success of the adoption-before-money approach depends on two things, he says: the scalability digital gives you, and venture capital firms who understand the model.

The real difference comes when you ask for really big improvements. “What’s the 10x?” is the question most asked about new projects in Google. “How is it much better? What does it do for the users? How would you get there?”

He says the 10x ideal is so powerful because “10x is big (not incremental) but not too big.” Also, you are looking for 10x in one dimension not all, he says. “It forces you to rethink the basics.”

The key to the Google approach is to launch and iterate, he says. “There is a lot you don’t know about innovative products by definition so the key is to launch as quickly as possible and learn as quickly as possible whether it’s worth it.”

Google always front-loads the technical risk, he says, as this is the thing which is really going to kill you.

Google Docs was “not good when it came out”, he says. And Chrome, Google’s browser, now the most popular in the world, was poor at first. “But it was fast and auto updated.” These were the 10x’s. Getting users to update browsers to combat security issues was a serious problem, so a browser that could auto-update would be a major improvement. And being fast is the main thing users want from a browser. “The first version was just a box on the screen – there wasn’t even a button,” he said. But it auto-updated, which meant that those Googlers who were persuaded to try the product didn’t have to do anything – it just kept getting better and better automatically.

Focussing on the user and looking for the 10x is easy to say but hard to do, argues Faith. “You are always working on problems outside your comfort zone. It means you have to kill projects. And it means you will get difficult feedback.”

Next up was Katherine Harmon Courage, an award-winning freelance journalist and contributing editor for Scientific American magazine, whose new book Cultured is coming out next Spring.

She gave a fascinating talk about, of all things, the large intestine.

“Microbiomes are everywhere – mouth, soil, washrooms,” she said, but the gut is hot, acidic and lacking in oxygen, so studying our own was hard because the bacteria didn’t survive outside the body.

Eventually, though, she said, we developed better culture environments, and then genetic sequencing provided the big leap forward about 10 years ago. “There are hundreds or thousands of species on and in you and they are changing all the time.”

Now we can study these organisms we are beginning to look at their interactions and how they affect our  health.

One of the problems with modern healthcare is that antibiotics wipe out good bacteria as well as bad and can result in some serious conditions such as Clostridium difficile colitis, which occurs when Clostridium difficile (C. diff) outcompetes other gut bacteria.

One of the ways this condition is treated is “fecal microbial transplant”, which is pretty much what it sounds like and has a bit of an image problem, says Harmon Courage.

The future is to create the well balanced biome mix in the lab and tackle a wider range of conditions through simple pills, she says.

In the meantime, eat more fermented foods, she advises.

“Fermented products are all around the world,” she says. Miso, for instance, is created in ancient vats and with human hands. “Kimchi and miso have much more bacteria than probiotic yogurt in the West.”

There have been recent studies which show that the live bacteria in yogurt in the West don’t survive long in the gut, and so some have questioned their efficacy.

But, she says, the key is to eat them all the time. “Then it doesn’t matter if they don’t survive.”

The final talk was from Mary Teresa Rainey, a tech and advertising industry veteran who was awarded an OBE for Services to Advertising in 2015.

Rainey gave a highly personal account of her involvement with the young Steve Jobs and Apple. She was a young advertising exec working in a small team on the TV commercial for the Lisa computer. She recalled a film shoot for the ad, which was directed by Ridley Scott, who had already made Blade Runner but was far from having the cult status that he later enjoyed.

The star was a very young Kevin Costner who, she recalled, had a dog “and I had to look after it.” She did a bad job and the dog ran onto the set. “Ridley Scott just said ‘damnit let the dog be in the picture’ and he turned out to be a star”, she said.

Speaking about Steve Jobs, with whom she worked closely on the Macintosh project as one of only six agency insiders, she said he instinctively understood communications and design. She is convinced he was a genius.

Steve had the “revolutionary idea of personal computing”, she said, and it was this idea of revolution which inspired the now legendary “1984” ad. She recalled how the Board of Apple didn’t like the commercial at all, but Steve was convinced. So as a callow 23-year-old she “had to persuade the board”.

The ad ran only once, during the Super Bowl (the Apple Board insisted that all other slots be cancelled). But Steve was right, she says, and the ad is now regarded as one of the finest ever made.

“Steve was a hot person not a cold person”, she said. “He could be rash, passionate and gesticulating. But he also often broke into a grin, or jumped up and down on the table.”

Another great thing about Steve Jobs was that he was genuinely interested only in talent. “There were a lot of great women in Apple,” she said. “He was a great supporter of talent, whoever they were.”

The more things change the more some things stay the same, she says. “Ideas are a powerful patent for brands. Technology changes but humans don’t. Powerful communications trump everything.”

All in all a packed programme with a lot of food for thought. To my mind it still remains to be seen whether the Newcastle event can keep its unique status – I rather doubt it as Manchester and London grow in stature – but I certainly hope so.

Bots are a transitional technology

Yesterday Facebook announced, as predicted, the launch of a range of tools to facilitate the development of “bots” on its Messenger platform. The argument being made far and wide is that bots are a replacement for apps which have become so numerous that their usefulness to users is plunging and most developers are no longer making any money. 

It’s easy to see why Facebook is so interested. Unlike Apple or Google they don’t have a hardware and operating system platform with which to “own” the customer. Facebook needs to make its apps perform this function and bots give it the chance to make Facebook, and specifically Messenger, much more useful and immersive and in the process make hardware and operating systems much less significant. 

But bots are only a transitional technology. The holy grail (told to me over 20 years ago by the head of Microsoft Research and still true today) is the Star Trek Computer. The film Her is the best modern take on that vision. That’s why Google (with Now), Apple (with Siri), Microsoft (with Cortana) and even Amazon (with Alexa) have been pouring so much time and money into developing competent AI-driven assistants.

But the technology still falls short of the vision, so in the meantime we will have bots – highly specific and constrained AI-driven chat bots which aim to do one thing (booking a hotel room or flight, for instance) very well and reliably.
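A “bot” in this sense is closer to a scripted form than to general AI. A minimal slot-filling sketch – illustrative only, not any particular platform’s API:

```python
def hotel_bot(message, state):
    """A deliberately narrow bot: collect city, check-in date and number of
    nights, one slot at a time, then "book". No general conversation at all."""
    slots = ["city", "check_in", "nights"]
    prompts = {"city": "Which city are you visiting?",
               "check_in": "What date do you check in?",
               "nights": "How many nights?"}

    # Store the user's answer against whichever slot we asked about last.
    if state.get("awaiting"):
        state[state["awaiting"]] = message.strip()

    for slot in slots:
        if slot not in state:
            state["awaiting"] = slot
            return prompts[slot]

    state["awaiting"] = None
    return (f"Booking a hotel in {state['city']} from {state['check_in']} "
            f"for {state['nights']} night(s).")

# A simulated conversation.
state = {}
for msg in ["hi", "Newcastle", "2016-06-01", "2"]:
    print(hotel_bot(msg, state))
```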

They will undoubtedly be a huge success – WeChat in China has already demonstrated that quite clearly. This post from Andreessen Horowitz has the best account I’ve found. How long the success lasts rather depends on how quickly the more general AI being developed by Google, Apple et al gets good enough. Expect them to develop bot platforms of their own, but also to amp up their investment in generalised AI. We all still really want the Star Trek computer, after all.