Google’s Peter Norvig on the Failures and Feats of Machine Learning

To Peter Norvig, Director of Research at Google Inc, all software engineering will change as AI becomes an important part of the programmer’s toolbox. 

The co-author of Artificial Intelligence: A Modern Approach (one of the leading textbooks on AI) is so convinced of the potential of machine learning that he cautions programmers that the biggest danger is innovating too much.

“Machine learning allows us to go fast, to develop applications that we couldn’t do before and go places we couldn’t go before. When you go fast, there’s always the possibility that accidents will happen and that’s what we have to be careful about now,” he says.

Speaking from a hotel room via teleconference, Norvig is presenting for AI With The Best, an online conference that caters to software engineers as well as aspiring AI entrepreneurs. But he is here today to outline some of the challenges AI development teams face.

“In traditional software, a programmer or programming team sits and writes down rules by hand, then we come up with a system. But in machine learning, it’s the computer – not the human – that’s creating the system. They’re doing that by looking at data and coming up with an algorithm,” he explains.
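To make that contrast concrete, here is a minimal sketch in Python (the spam-detection task, the toy data and the use of scikit-learn are illustrative assumptions, not something Norvig described): the first classifier is a rule written down by hand, the second is derived by the computer from labelled examples.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Traditional software: a programmer writes the rule down by hand.
def is_spam_by_rule(message: str) -> bool:
    return "free money" in message.lower() or "winner" in message.lower()

# Machine learning: the computer creates the system by looking at data.
messages = ["Free money, click now", "Lunch at noon?",
            "You are a winner!", "Meeting moved to 3pm"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (toy examples)

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)

model = LogisticRegression()
model.fit(features, labels)  # the "rules" now live in learned weights

print(is_spam_by_rule("Claim your free money"))                        # hand-written decision
print(model.predict(vectorizer.transform(["Claim your free money"])))  # learned decision
```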

These algorithms allow machines to recognise patterns in much the same way we do, which is why the field of machine learning is of such interest to software developers. Norvig recognises that many developers are looking to leverage machine learning to disrupt conventional business practices that rely on traditional programming. It is a shift away from the rigid constraints of hand-written rules to the untrammeled horizon of the decision boundary.

In a related Google research paper, machine learning is described as “the high-interest credit card of technical debt”. “Technical debt is this idea that as you’re going fast and trying to bring a product to market, you may cut some corners to allow you to move fast, but it builds up. It makes it more difficult to maintain and change the system over time, and eventually you have to go and pay that debt off.”

This “debt” builds up partly because machine-learned systems take away the debugging feedback programmers are used to. “In traditional software, we break it down into components, there’s individual functions and classes and so on… If there’s a bug, you can start testing and you can say, ‘I know the bug is in this one module because the inputs coming in are good and the outputs coming out are bad. It must be here.’ With machine learning, we’re usually not able to do that because the bug is just in the system as a whole. You can’t say it’s in this one spot and not in another.”
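The difference in how faults are localised can be sketched in code (the parsing function and the commented-out model check below are hypothetical illustrations, not examples Norvig gave): a unit test can pin a bug to a single function, while a wrong prediction from a trained model has no single component to point at.

```python
# Traditional software: a failing test localises the bug to one module.
def parse_price(text: str) -> float:
    # If the check below fails, we know the bug lives in this function:
    # a good input came in and a bad output came out.
    return float(text.strip().lstrip("$"))

assert parse_price(" $19.99 ") == 19.99  # unit test for a single component

# Machine learning: a wrong prediction cannot be blamed on one spot.
# The behaviour emerges from the training data, the feature encoding and
# the learned weights together, so debugging means inspecting the system
# as a whole rather than stepping into a single function:
#
#   prediction = model.predict(features_for_one_example)
#   if prediction != expected_label:
#       # no single module is "the" bug; examine data, features and model
#       ...
```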

Norvig seems to delight in such challenges with unflappable enthusiasm. On a recent trip to Boston, he had a reminder appear on his phone to return his rental car. “It had the address and it had a little pin on the map. So far everything is great, this is a good reminder. It popped up at the right time, it had the right location, and then it said time of travel, 23 minutes by bicycle. It kind of spoiled it all.”

These interconnected pieces of logic, which we take for granted in our own cognition, pose some of the greatest difficulties in machine learning. AI algorithms cannot assign values to input they are not taught to recognise. Norvig explains that the pop-up notification had no idea he would be returning the rental car in the rental car itself.

These problems are indicative of the substantive challenges facing AI programmers. “How are we going to build a system that does the right thing if we don’t even know what the right thing is? We need more work on understanding that and understanding what it is we want to build our systems to optimize. If we have competing goals for groups of people, how do we combine those competing goals together?” These difficulties can be hugely important. “There’s an issue of fairness… The minority groups tend to be ignored if the only thing you’re trying to optimize is the overall accuracy. We have to decide what’s fair, which groups should we make sure we do a good job for, even if that group is smaller than the majority.”
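Norvig’s fairness point can be shown with a small worked example (the numbers below are hypothetical, chosen only to illustrate the arithmetic): a system can report excellent overall accuracy while doing a poor job for a minority group, which is exactly what optimising a single aggregate number hides.

```python
# Hypothetical evaluation results for a classifier across two groups.
groups = {
    "majority": {"total": 950, "correct": 940},  # 98.9% accurate
    "minority": {"total": 50,  "correct": 25},   # 50.0% accurate
}

overall_correct = sum(g["correct"] for g in groups.values())
overall_total = sum(g["total"] for g in groups.values())

print(f"overall accuracy: {overall_correct / overall_total:.1%}")  # 96.5%
for name, g in groups.items():
    print(f"{name:8s} accuracy: {g['correct'] / g['total']:.1%}")
```

Optimising only the first number would never surface the gap on the last line; deciding which groups the system must serve well is the judgement call Norvig is describing.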

It is these challenges that Norvig will keep working on, and his belief in machine learning carries that work forward with optimism and verve. “We’ve done a great job as an industry, but we don’t have that level of tools for doing machine learning yet. We’re just starting to figure out what the right tools are.” No matter what those tools turn out to be, you can bet that Peter Norvig can’t wait to get his hands on them.