Rodney Brooks – A Better Lesson

Professor, I completely agree with your reasoning, but I’d like to expand on this part of your argument: “for most machine learning problems today a human is needed to design a specific network architecture for the learning to proceed well.”

It seems that the predominant belief today is that progress in AI is automatic, and it is often compared to the discovery of the steam engine (Erik Brynjolfsson) or electricity (Andrew Ng). Some (Kai-Fu Lee) go as far as saying that the era of discovery in AI is over and that we have entered a new era of commercialization (hence the Made in China 2025 plan simply calls for more of the same, i.e. more manufacturing robots and more facial recognition). I believe nothing is further from the truth, and I agree with [1], which argues that “AI’s greatest economic impact will come from its potential as a new method of invention that ultimately reshapes the nature of the innovation process and the organization of R&D”, thus crushing Eroom’s law [2], which has been hanging over technological progress and total factor productivity for the last 40+ years.

However, there are two seemingly hopeless problems with using AI in R&D. First, today’s AI is all about pattern matching, not creativity: from object recognition in images to speech understanding and machine translation, the focus has been on machines learning existing human skills and automating human jobs, not on coming up with new ideas. Second, today we treat AI as a hammer in search of a nail, and hence obsess over any discipline that throws off big data, such as advertising, intelligent vehicles, large corpora of text, and games. This approach breaks down when faced with the enormous search spaces of chemistry and biology, such as drug discovery in a universe of roughly 10^60 potential molecules, making sense of gene regulation, or mapping mammalian metabolic pathways; the back-of-the-envelope sketch below shows just how hopeless brute force is at that scale.
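To make that scale concrete, here is a short sketch in Python; the screening rate of one billion candidates per second is my own deliberately generous assumption, not a figure from any particular lab:

```python
# Back-of-the-envelope arithmetic: why brute-force search breaks down in
# chemistry. The screening rate is a hypothetical, generous assumption of
# one billion candidate evaluations per second.

SEARCH_SPACE = 10 ** 60      # commonly cited size of drug-like chemical space
RATE_PER_SECOND = 10 ** 9    # assumed screening throughput (hypothetical)
SECONDS_PER_YEAR = 3.154e7

years = SEARCH_SPACE / RATE_PER_SECOND / SECONDS_PER_YEAR
print(f"Exhaustive screening would take ~{years:.1e} years")
# Prints ~3.2e43 years, roughly 10^33 times the age of the universe.
```

Even a trillionfold speedup barely dents the exponent, which is why smarter search, not more brute force, is the only way in.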

My answer to both problems is to redefine AI as IA, or Intelligence Amplification, a term that grew out of the now-forgotten science of Cybernetics, developed by Norbert Wiener and W. Ross Ashby in the 1940s and 1950s. They argued that the role of machines was to amplify human intelligence, augment human decision-making and, thus, improve human productivity. In Cybernetics, the human was never supposed to be automated away. Unfortunately, as you taught us in 6.836, Cybernetics fell out of favor after the seminal Dartmouth Workshop of 1956 founded the field of AI, which set out to build thinking machines that would eventually succeed at stacking geometric shapes using logic programming but fail miserably in the messy world of humans.

I would argue that a necessary condition for a successful IA process is the presence of a trifecta: (1) domain experts (chemists, biologists, etc.), (2) algorithms experts, and (3) powerful specialized machines. Together, domain experts and algorithms experts can co-invent a specific network architecture and then train it on fast machines, which is exactly your argument; the sketch below illustrates this division of labor. I think this co-invention aspect puts the debate with Rich Sutton to rest, since his position leaves no space for a human in the loop.
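As an illustration only, here is a minimal sketch in Python with PyTorch of how that division of labor might look; all the names, the fingerprint encoding, and the architecture are hypothetical choices of mine, not anyone’s published method:

```python
import torch
import torch.nn as nn

# Domain-expert choice (hypothetical): represent each molecule as a
# fixed-length fingerprint bit vector.
FINGERPRINT_BITS = 2048

class ActivityPredictor(nn.Module):
    """Algorithms-expert choice (hypothetical): a small MLP over that encoding."""

    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FINGERPRINT_BITS, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # e.g., a predicted binding affinity
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# From here the fast machines take over: GPU training, hyperparameter sweeps.
model = ActivityPredictor()
fake_batch = torch.randint(0, 2, (8, FINGERPRINT_BITS)).float()
print(model(fake_batch).shape)  # torch.Size([8, 1])
```

The point is not the particular layers but the order of operations: the representation and the architecture are co-invented by humans first, and only then does compute scale the training.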

References

1. “AI Is Reinventing the Way We Invent,” MIT Technology Review. https://www.technologyreview.com/s/612898/ai-is-reinventing-the-way-we-invent/
2. “Eroom’s Law,” Wikipedia. https://en.wikipedia.org/wiki/Eroom%27s_law

Sincerely,

Gleb Chuvpilo

PS In case you are nostalgic about Subsumption Architecture (I know I am!):
https://people.csail.mit.edu/chuvpilo/papers/chuvpilo-2002-6.836-project.pdf


