Learning Open-World Robot Navigation from Experience
Dhruv Shah
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2024-155
August 4, 2024
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-155.pdf
This report presents a novel approach to long-range robot navigation that combines machine learning with high-level planning. We posit that robust navigation in challenging, real-world environments requires both the ability to learn skills from the robot's past experience and an explicit memory for planning and search. For the former, we describe an algorithm for experiential learning for visual navigation, where the robot learns navigation behaviors directly from its past experience in the real world. For the latter, we design algorithms and systems that combine the low-level learned policy with a high-level topological memory, enabling long-range navigation and exploration in a scalable manner. By combining a learned policy with a topological graph, our system can determine how to reach a visually indicated goal even in the presence of variable appearance and lighting, making it robust for real-world deployment. To reach goals in previously unseen environments, we use a latent goal model to learn a density function over reachable future goals and plan over this distribution using the topological memory. Finally, we extend this system to utilize side information, such as schematic roadmaps or satellite imagery, as a planning heuristic. Combining a learned low-level policy with a high-level planner and a learned heuristic allows our learning-based system to navigate kilometer-scale environments in a variety of locations and lighting conditions without any human intervention or collisions. The robotic systems developed in this report serve as a prototype for deploying machine learning models in challenging real-world environments, using a combination of learning and planning.
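The high-level planning component described above can be illustrated with a minimal sketch: heuristic graph search over a topological memory, where nodes are past observations, edge costs would come from a learned policy's predicted traversability, and the heuristic would come from side information such as a schematic roadmap or satellite image. The function and variable names here are hypothetical, not from the report; this is A* search under those assumptions, not the report's actual implementation.

```python
import heapq
import math

def plan_over_topological_memory(graph, start, goal, heuristic):
    """A* search over a topological graph.

    graph:     dict mapping node -> list of (neighbor, cost) pairs; in the
               system described, cost would be estimated by a learned policy.
    heuristic: function node -> estimated cost-to-go; in the system described,
               this could be derived from overhead side information.
    Returns (path, cost), or (None, inf) if the goal is unreachable.
    """
    frontier = [(heuristic(start), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        for neighbor, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost < best_cost.get(neighbor, math.inf):
                best_cost[neighbor] = new_cost
                priority = new_cost + heuristic(neighbor)
                heapq.heappush(frontier, (priority, new_cost, neighbor, path + [neighbor]))
    return None, math.inf
```

In deployment, the low-level learned policy would then be invoked to drive the robot between consecutive nodes of the returned path.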
Advisors: Sergey Levine
BibTeX citation:
@mastersthesis{Shah:EECS-2024-155,
  Author   = {Shah, Dhruv},
  Title    = {Learning Open-World Robot Navigation from Experience},
  School   = {EECS Department, University of California, Berkeley},
  Year     = {2024},
  Month    = {Aug},
  Url      = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-155.html},
  Number   = {UCB/EECS-2024-155},
  Abstract = {This report presents a novel approach to long-range robot navigation that combines machine learning with high-level planning. We posit that robust navigation in challenging, real-world environments requires both the ability to learn skills from past experience of the robot, as well as an explicit memory for planning and search. For the former, we describe an algorithm for experiential learning for visual navigation, where the robot can learn navigation behaviors directly from its past experience in the real world. For the latter, we design algorithms and systems that combine the low level learned policy with a high-level topological memory, enabling long range navigation and exploration in a scalable manner. By combining a learned policy with a topological graph, our system can determine how to reach a visually indicated goal even in the presence of variable appearance and lighting, making it robust for real-world deployment. To reach goals in previously unseen environments, we use a learned latent goal model to learn a density function of reachable future goals and plan over this distribution using the topological memory. Finally, we extend this system so that it can utilize side information, such as schematic roadmaps or satellite imagery, as a planning heuristic. Combining a learned low-level policy with a high-level planner and a learned heuristic allows our learning-based system to navigate kilometer-scale environments in a variety of locations and lighting conditions without any human intervention or collisions. The robotic systems developed in this report serve as a prototype for deploying machine learning models in challenging real-world environments, using a combination of learning and planning.},
}
EndNote citation:
%0 Thesis
%A Shah, Dhruv
%T Learning Open-World Robot Navigation from Experience
%I EECS Department, University of California, Berkeley
%D 2024
%8 August 4
%@ UCB/EECS-2024-155
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-155.html
%F Shah:EECS-2024-155