Computational Sensorimotor Learning
Pulkit Agrawal
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2018-133
September 23, 2018
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-133.pdf
Our fascination with human intelligence has historically led AI research to directly build autonomous agents that solve intellectually challenging problems such as chess and Go. The same philosophy of direct optimization has percolated into the design of systems for image/speech recognition and language translation. But the AI systems of today are brittle and solve problems very differently from humans, as evidenced by their severely limited ability to adapt or generalize. Evolution took a very long time (approx. 3.5 billion years) to develop the sensorimotor skills of an ape, and a relatively short time (approx. 18 million years) to develop apes into present-day humans that can reason and use language. There is probably a lesson to be learned here: by the time organisms with simple sensorimotor skills evolved, they had possibly also developed the apparatus that could easily support more complex forms of intelligence later on. In other words, by spending a long time solving simple problems, evolution prepared agents for more complex problems. The same principle is probably at play when humans rely on what they already know to find solutions to new challenges. The principle of incrementally increasing complexity, as evidenced in evolution, child development, and the way humans learn, may therefore be vital to building human-like intelligence.
A prominent theory in developmental psychology suggests that seemingly frivolous play is a mechanism by which infants conduct experiments to incrementally increase their knowledge. Infants' experiments, such as throwing objects, hitting two objects against each other, or putting objects in their mouths, help them understand how forces affect objects, how objects feel, how different materials interact, and so on. In this way, play prepares infants for later life by laying the foundation of a high-level framework of experimentation for quickly understanding how things work in new (and potentially non-physical/abstract) environments and for constructing goal-directed plans.
I have used ideas from infant development to build mechanisms that allow robots to learn about their environment through experimentation. Results show that such learning allows an agent to adapt to new environments and to reuse past knowledge to quickly succeed at novel tasks.
Advisor: Jitendra Malik
BibTeX citation:
@phdthesis{Agrawal:EECS-2018-133,
    Author = {Agrawal, Pulkit},
    Title = {Computational Sensorimotor Learning},
    School = {EECS Department, University of California, Berkeley},
    Year = {2018},
    Month = {Sep},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-133.html},
    Number = {UCB/EECS-2018-133}
}
EndNote citation:
%0 Thesis
%A Agrawal, Pulkit
%T Computational Sensorimotor Learning
%I EECS Department, University of California, Berkeley
%D 2018
%8 September 23
%@ UCB/EECS-2018-133
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2018/EECS-2018-133.html
%F Agrawal:EECS-2018-133