
A discussion about AI’s conflicts and challenges — Posted on Jun 18, 2017


Thirty-five years ago, having a PhD in computer vision was about as unfashionable as it got, as artificial intelligence languished at the bottom of the trough of disillusionment.

Back then it could take a day for a computer vision algorithm to process a single image. How times change.

“The competition for talent at the moment is absolutely ferocious,” says Professor Andrew Blake, who obtained his computer vision PhD in 1983 and is now, among other things, a scientific advisor to FiveAI, a UK-based autonomous vehicle software startup aiming to trial driverless cars on London’s roads in 2019.

Blake founded Microsoft’s computer vision group and was managing director of Microsoft Research, Cambridge, where he was involved in the development of the Kinect sensor — an early sign of computer vision’s rising star (even if Kinect itself did not achieve the kind of consumer success Microsoft might have hoped for).

He’s now research director at the Alan Turing Institute in the UK, which supports data science research — which of course means machine learning and AI — including probing the ethics and societal implications of AI and big data.

So how can a startup like FiveAI hope to compete in this fierce fight for AI expertise with tech giants like Uber and Google, which are of course also working on autonomous vehicle projects?

And, thinking of society as a whole, is it a risk or an opportunity that such powerful tech giants are throwing everything they’ve got at trying to make AI breakthroughs? Might the AI agenda be hijacked, and progress in the field monopolized, by a set of very specific commercial agendas?
