
Jürgen Schmidhuber, AI & the Deep Learning RNNaissance

For more than 30 years, Jürgen Schmidhuber has been a legendary figure in machine learning and neural networks. Since the 1980s, he has spearheaded work on universal self-referential, self-improving learning algorithms, practical recurrent neural networks, the formal theory of curiosity and creativity, and many other topics related to deep learning and artificial intelligence. In 2015 he published a comprehensive survey of deep neural networks with a detailed history of the field.

Jürgen gave an exclusive Q&A for AI With The Best, titled “the Deep Learning RNNaissance”, and we are excited to share some of his thoughts on deep learning and the future of AI.

Deep Learning RNNaissance

“Machine learning and pattern recognition are currently being revolutionized by ‘Deep Learning’ (DL) Neural Networks (NNs). I summarize work on DL since the 1960s, and our own work since 1991. Our Long Short-Term Memory (LSTM) Recurrent Neural Networks helped to revolutionize handwriting recognition, speech recognition, machine translation, image captioning, and other fields. We also built the first reinforcement learning agent that learns complex video game control based on high-dimensional vision.” (Schmidhuber’s abstract, 2016).

Sentiment analysis using Deep Learning

Sentiment analysis has become increasingly important within Deep Learning, with many convinced of the benefits of a free flow of information and participatory communication. Jürgen points out that sentiment analysis is often applied to sequential data such as text, speech and video. He says, “Recently, people have used Long Short-Term Memory to obtain better results than previous approaches.”
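To make this concrete, here is a minimal sketch (not from the talk, and with all names and sizes as illustrative placeholders) of how an LSTM is typically wired up for sentiment classification: token ids are embedded, the LSTM reads the sequence, and its final hidden state is mapped to sentiment classes.

```python
# Minimal LSTM sentiment classifier sketch (PyTorch); sizes are placeholders.
import torch
import torch.nn as nn

class SentimentLSTM(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded text
        embedded = self.embed(token_ids)          # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)      # hidden: (1, batch, hidden_dim)
        return self.classifier(hidden[-1])        # (batch, num_classes) sentiment logits

# Example: score a batch of two (already tokenized) sentences.
model = SentimentLSTM()
dummy_batch = torch.randint(0, 10000, (2, 20))    # placeholder token ids
print(model(dummy_batch).shape)                   # torch.Size([2, 2])
```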

Enthusiasm has grown for the development of neural network-based pattern recognition systems. When asked whether he had an AI demo prepared, Jürgen’s response was:

“Do you have a smartphone?”

He explained how Google’s speech recognition works, and said that the new version of Google Voice (which is now available to billions of smartphone users) is based on Long Short-Term Memory (LSTM) networks. He added:

“Google had a blog recently on how they used LSTM networks to improve their speech recognition not only by 5% or 10%, which would have been great, but by 49%.”

Normally, improving speech recognition by 5–10% would already mean a lot. But 49%?! That answers the question about the AI demo: it’s more than a demo, it’s a masterpiece!

More about Pyramidal Multi-Dimensional LSTM (PyraMiD-LSTM) networks

Jürgen highlighted that PyraMiD-LSTM is another LSTM variant, best suited to analyzing 2-dimensional data like images or 3-dimensional data like videos.

The PyraMiD-LSTM is easy to parallelize, particularly for three-dimensional data (like stacks of photos of brain slices). PyraMiD-LSTM outperformed the widely used, more traditional, but also less general convolutional NNs (CNNs) on tasks of pixel-wise brain image segmentation. That’s remarkable, because in earlier work of 2012, Jürgen’s own team (with lead author Dan Ciresan) still used CNNs to win a brain segmentation competition. PyraMiD-LSTM may soon challenge CNNs in many other domains.

Jürgen continued by mentioning one of the most important health benefits of this work: “it can be applied to cancer detection.” For each pixel of an image, it can take into account the context of the entire image, not just a small patch around the pixel. This requires a recurrent network, typically an LSTM.

“That will help you in getting the entire temporal context for each pixel that you are trying to classify or segment.”
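PyraMiD-LSTM itself rearranges multi-dimensional LSTM scans into a pyramidal topology, which is beyond a short snippet. As a rough, simplified illustration of the underlying idea (recurrent scans can give every pixel context from the whole image rather than from a small patch), the toy sketch below runs bidirectional LSTMs along image rows and then along columns to produce per-pixel class scores. It is an invented stand-in with made-up sizes, not an implementation of PyraMiD-LSTM.

```python
# Toy sketch: per-pixel labels with full-image recurrent context via
# row-wise and column-wise bidirectional LSTM scans (PyTorch).
# This is a simplification for illustration, not PyraMiD-LSTM.
import torch
import torch.nn as nn

class RowColLSTM(nn.Module):
    def __init__(self, in_ch=1, hidden=32, num_classes=2):
        super().__init__()
        self.row_lstm = nn.LSTM(in_ch, hidden, batch_first=True, bidirectional=True)
        self.col_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Conv2d(2 * hidden, num_classes, kernel_size=1)  # per-pixel scores

    def forward(self, x):
        b, c, h, w = x.shape                                      # x: (batch, channels, H, W)
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)         # one sequence per image row
        rows, _ = self.row_lstm(rows)                             # (b*h, w, 2*hidden)
        feat = rows.reshape(b, h, w, -1)
        cols = feat.permute(0, 2, 1, 3).reshape(b * w, h, -1)     # one sequence per image column
        cols, _ = self.col_lstm(cols)                             # (b*w, h, 2*hidden)
        feat = cols.reshape(b, w, h, -1).permute(0, 3, 2, 1)      # (b, 2*hidden, H, W)
        return self.head(feat)                                    # (b, num_classes, H, W)

model = RowColLSTM()
print(model(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])
```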

When will AI begin to have emotions?

“For a long time,” Jürgen said, “our little AI systems have already had emotions.” He stated that many traditional “reinforcement learning” and “evolutionary” algorithms can be used to give AIs emotions: they learn to avoid hunger (negative numbers from sensors measuring low battery charge) and pain, e.g., by finding the charging station in time without painfully bumping into obstacles. Such AIs automatically become fearful of situations they learn to associate with pain, and fond of pleasurable situations.
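As a toy illustration of this idea (invented here, not taken from the talk), the sketch below uses tabular Q-learning in a one-dimensional corridor: every step drains the battery a little (“hunger”), bumping into a wall hurts (“pain”), and reaching the charging station is rewarded. The environment and all numbers are assumptions made for the example.

```python
# Tabular Q-learning sketch of the "pain and hunger" idea; all values are illustrative.
import random

N_CELLS = 6            # corridor cells 0..5; the charging station is at cell 5
ACTIONS = [-1, +1]     # move left or right
Q = [[0.0, 0.0] for _ in range(N_CELLS)]
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(state, action):
    nxt = state + ACTIONS[action]
    if nxt < 0 or nxt >= N_CELLS:          # bumping into a wall: "pain"
        return state, -5.0, False
    if nxt == N_CELLS - 1:                 # reached the charging station in time
        return nxt, +10.0, True
    return nxt, -1.0, False                # battery drains every step: mild "hunger"

for episode in range(500):
    s, done = 0, False
    while not done:
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda i: Q[s][i])
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q toward reward plus discounted best future value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
        s = s2

# Learned policy: the agent moves right toward the charger and avoids the painful wall.
print([max((0, 1), key=lambda i: Q[s][i]) for s in range(N_CELLS)])
```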

Asked about the upcoming challenges for machine learning over the next 10 years, Jürgen explained:

“I think that within not so many years we’ll be able to build an NN-based AI (an NNAI) that incrementally learns in mostly unsupervised fashion to become as smart as a little animal, say, a little crow or a capuchin monkey, learning to plan and reason and decompose a wide variety of problems into quickly solvable (or already solved) subproblems, in a very general way. Through our formal theory of fun it is even possible to implement curiosity and creativity, to build unsupervised artificial explorers and scientists.”

Jürgen thinks this is achievable and that, once it is done, the remaining gap to a system with human-level intelligence won’t be too large. He says that unsupervised learning is mostly about finding regularities in the input stream from the outer world, and explains that a regularity is something in the data that makes the data compressible, e.g., repetitions or symmetries.

“Regularities mean you can compress the data, for example, through unsupervised predictive coding of the incoming stream of observations. We have exploited this for a quarter century.”
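A tiny sketch of this compression view (invented for illustration, not from the interview): a regular stream compresses far better than a structureless one, and a predictor that captures the regularity turns the stream into near-empty residuals, which is the essence of predictive coding. The hard-coded period-4 predictor stands in for a learned predictive model such as an LSTM.

```python
# Regularities make data compressible; a good predictor leaves little residual to encode.
import random, zlib

regular = [i % 4 for i in range(1000)]              # repeating pattern 0,1,2,3,0,1,2,3,...
noise = [random.randrange(4) for _ in range(1000)]  # same alphabet, no structure

print(len(zlib.compress(bytes(regular))), len(zlib.compress(bytes(noise))))
# the regular stream compresses to far fewer bytes than the structureless one

def predictive_code(stream, period=4):
    """Keep only the errors of a 'repeat what happened `period` steps ago' predictor."""
    return [(x - stream[i - period]) % 4 if i >= period else x
            for i, x in enumerate(stream)]

residuals = predictive_code(regular)
print(sum(1 for r in residuals if r != 0))  # only a few warm-up symbols are non-zero;
                                            # the regularity has been predicted away
```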

When asked about the dangers of AI, Jürgen stated that we should not be guided by silly movie plots like that of “The Matrix,” where AIs live off the energy of human brains, which produce maybe 20 Watts; this is implausible, since the power plant needed to keep the humans alive produces much more energy than that. He said those movies are based on extremely unreasonable goal conflicts between humans and AIs.

In line with an answer from his reddit AMA, he admits that there is no lasting way of controlling systems much smarter than humans that pursue their own goals and are curious and creative, similar to the way humans and other mammals are, but potentially on a much grander scale.

But he thinks we should maintain hope that there won’t be too many goal conflicts between “us” and “them,” since all beings are mostly interested in those they can best compete and collaborate with. Jürgen says: “Politicians are interested in other politicians. Scientists are interested in other scientists. 10 year old girls are interested in other 10 year old girls. Goats are interested in other goats. Super-smart AIs will be mostly interested in other super-smart AIs, not in humans. Just like humans are mostly interested in other humans, not in ants. Although we are much smarter than ants, we don’t extinguish them, except for the few that invade our homes. The weight of all ants is still comparable to the weight of all humans.”

Jürgen also says: “Human interests are mainly limited to a very thin film of biosphere around the third planet, full of poisonous oxygen that makes many robots rust. The rest of the solar system, however, is not made for humans, but for appropriately designed robots. Some of the most important explorers of the 20th century already were (rather stupid) robotic spacecraft. And they are getting smarter rapidly.

“Let’s go crazy. Imagine an advanced robot civilization in the asteroid belt, quite different from ours in the biosphere, with access to many more resources (e.g., the earth gets less than a billionth of the sun’s light). The belt contains lots of material for innumerable self-replicating robot factories. Robot minds or parts thereof will travel in the most elegant and fastest way (namely by radio from senders to receivers) across the solar system and beyond. AIs will colonize the entire galaxy within a few million years. Although they’ll be fascinated at least for a while with life and human civilization and their own origins in the biosphere, in the long run they’ll be much more interested in the incredible new opportunities for robots and software life in places hostile to biological beings.”

Originally posted on BeMyApp Media