The lessons we can learn when Sergey Brin says, “I didn’t see AI coming”

18/08/2017

Sergey Brin, the co-founder of Google and one of Silicon Valley’s most successful entrepreneurs, returned to the WEF stage in Davos earlier this year and told the audience, “I didn’t pay attention to it at all, to be perfectly honest. Having been trained as a computer scientist in the 90s, everybody knew that AI didn’t work. People tried it, they tried neural nets and none of it worked.”

Fast-forward a few years and Google Brain, the company’s AI research project, has advanced so much that it now, as Brin puts it, “touches every single one of our main projects, ranging from search to photos to ads … everything we do. The revolution in deep nets has been very profound, it definitely surprised me, even though I was sitting right there.” This comment comes from an individual sitting at the very heart of advances in the field. It is astonishing, and humbling from the point of view of our own life experiences, to hear one of the leading proponents of advancing technology, a key inventor of a hugely successful company that constantly pushes the boundaries of the possible, explain just how hard it is to see what’s coming. “What can these things do? We don’t really know the limits,” said Brin. “It has incredible possibilities. I think it’s impossible to forecast accurately.”

The lessons implied in Brin’s comments are not new, but they keep resurfacing. In life as in AI, no-one has a crystal ball, so what can you do to anticipate the ‘what next’ as far as you can see, and steer through uncertain times and change that is almost impossible to predict? That is one aspect of being at the helm, responsible for decisions today that affect business tomorrow. The other aspect is simply being human. Brin’s comment is a deeply telling psychological insight into one of the many flaws in human behaviour: our survival as a species depends on staying with the herd, so it is not usually a good idea to counter the populist view. Survival instinct tells us that the herd prevails, and we are comforted by all being ‘right’ together, thinking along the same tramlines rather than breaking up the pack. In leadership situations we all sit in a meeting room, vehemently agreeing on the consensus direction and paying no attention to the nagging doubt sitting in the corner.

Brin makes a telling comment: “everybody knew that AI didn’t work”. Well, that is a little like saying everybody knew the earth was flat. Everybody knew the sun revolved around the earth. Everybody knew that fresh air made people ill. The list goes on. Just because it did not seem possible at the time, everyone agreed it was not possible.

Brin also says, “I didn’t pay attention to it at all”. That is because he had heard that everyone knew it didn’t work, so he stopped listening. And when you stop listening, you don’t hear any dissenting voices.

If the computer scientists in Silicon Valley in the 1990s had invited psychologists, nurses, artists, theologians, financiers, anarchists or investigative journalists to be part of the innovation and thinking processes around AI, might there have been a different outcome? Would that have added different voices, speaking in different tones, coming at the problem from different life experiences? Might there have been a challenging perspective on why ‘everyone knew it didn’t work’, one that could have led to a breakthrough moment and earlier advances?

In an increasingly complex world, you can never have too many perspectives. It is better to invite critical assessment, fresh perspectives and non-consensus thinking. Today, AI has come back from the realm of the impossible to be touted as the only possible future. However, a new perspective yet to come will undoubtedly put paid to that as well. That is, if anyone is listening to the dissenting voices.