Google Engineer Cites Complexities Around AI Algorithms Explaining Themselves

Google has been saying for a year or so that it is moving from a mobile-first world to an artificial-intelligence-first world, but we've been told that, at least in search, machine learning won't take over the algorithm – at least not yet.

So I saw this interesting tweet from Paul Haahr, a top search ranking engineer at Google for over 15 years. He cited a NY Times article titled Can A.I. Be Taught to Explain Itself? and wrote "This article by @cliffkuang on Explainable AI is an excellent introduction to hard, important problems. These issues are coming up at work every day; the article describes them well and gives some reasons for optimism."

It reminded me of when he spoke at SMX some time ago and said Google doesn't fully understand RankBrain. I believe this article gets at what he meant by that, because RankBrain is a machine-learning-based AI search feature.

If Google cannot debug or fully understand how the algorithm improves itself and makes certain decisions, then how can Google debug it when it goes bad?

"As machine learning becomes more powerful, the field's researchers increasingly find themselves unable to account for what their algorithms know — or how they know it." That is not just a science-fiction fear out of a Terminator movie; it is a real problem engineers must grapple with in order to understand how to improve their AI.

Anyway, reading the article might shed some light on those challenges and give you a sense of how far off we are from robots killing us.

Forum discussion at Twitter.
