Kuffner, J. J. and LaValle, S. M. (2000). RRT-Connect: An efficient approach to single-query path planning. In Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No. 00CH37065), volume 2, pages 995–1001. IEEE.
Kulkarni, T. D., Narasimhan, K., Saeedi, A., and Tenenbaum, J. (2016). Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. Advances in Neural Information Processing Systems, 29.
LaValle, S. M. (2006). Planning Algorithms. Cambridge University Press.
Levy, A., Konidaris, G., Platt, R., and Saenko, K. (2018). Learning multi-level hierarchies with hindsight. In International Conference on Learning Representations.
Li, C., Xia, F., Martín-Martín, R., Lingelbach, M., Srivastava, S., Shen, B., Vainio, K. E., Gokmen, C., Dharan, G., Jain, T., et al. (2021). iGibson 2.0: Object-centric simulation for robot learning of everyday household tasks. In 5th Annual Conference on Robot Learning.
Li, C., Xia, F., Martín-Martín, R., and Savarese, S. (2020). HRL4IN: Hierarchical reinforcement learning for interactive navigation with mobile manipulators. In Conference on Robot Learning, pages 603–616. PMLR.
Lu, D. V., Hershberger, D., and Smart, W. D. (2014). Layered costmaps for context-sensitive navigation. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 709–715. IEEE.
Meng, Z., Sun, H., Teo, K. B., and Ang, M. H. (2018). Active path clearing navigation through environment reconfiguration in presence of movable obstacles. In 2018 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), pages 156–163. IEEE.
Nachum, O., Gu, S. S., Lee, H., and Levine, S. (2018). Data-efficient hierarchical reinforcement learning. Advances in Neural Information Processing Systems, 31.
Pasula, H. M., Zettlemoyer, L. S., and Kaelbling, L. P. (2007). Learning symbolic models of stochastic domains. Journal of Artificial Intelligence Research, 29:309–352.
Patel, U., Kumar, N. K. S., Sathyamoorthy, A. J., and Manocha, D. (2021). DWA-RL: Dynamically feasible deep reinforcement learning policy for robot navigation among mobile obstacles. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 6057–6063. IEEE.
Puig, X., Ra, K., Boben, M., Li, J., Wang, T., Fidler, S., and Torralba, A. (2018). VirtualHome: Simulating household activities via programs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8494–8502.
Samsani, S. S. and Muhammad, M. S. (2021). Socially compliant robot navigation in crowded environment by human behavior resemblance using deep reinforcement learning. IEEE Robotics and Automation Letters, 6(3):5223–5230.
Silver, T., Chitnis, R., Tenenbaum, J., Kaelbling, L. P., and Lozano-Pérez, T. (2021). Learning symbolic operators for task and motion planning. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3182–3189. IEEE.
Sun, H., Zhang, W., Yu, R., and Zhang, Y. (2021). Motion planning for mobile robots–focusing on deep reinforcement learning: A systematic review. IEEE Access.
Szot, A., Clegg, A., Undersander, E., Wijmans, E., Zhao, Y., Turner, J., Maestre, N., Mukadam, M., Chaplot, D., Maksymets, O., Gokaslan, A., Vondrus, V., Dharur, S., Meier, F., Galuba, W., Chang, A., Kira, Z., Koltun, V., Malik, J., Savva, M., and Batra, D. (2021). Habitat 2.0: Training home assistants to rearrange their habitat. In Advances in Neural Information Processing Systems (NeurIPS).
Toussaint, M. (2015). Logic-geometric programming: An optimization-based approach to combined task and motion planning. In Twenty-Fourth International Joint Conference on Artificial Intelligence.
Wang, M., Luo, R., Önol, A. Ö., and Padir, T. (2020). Affordance-based mobile robot navigation among movable obstacles. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2734–2740. IEEE.
Wang, Z., Garrett, C. R., Kaelbling, L. P., and Lozano-Pérez, T. (2021). Learning compositional models of robot skills for task and motion planning. The International Journal of Robotics Research, 40(6-7):866–894.
Xia, F., Li, C., Martín-Martín, R., Litany, O., Toshev, A., and Savarese, S. (2021). ReLMoGen: Integrating motion generation in reinforcement learning for mobile manipulation. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 4583–4590. IEEE.
Xiang, F., Qin, Y., Mo, K., Xia, Y., Zhu, H., Liu, F., Liu, M., Jiang, H., Yuan, Y., Wang, H., et al. (2020). SAPIEN: A simulated part-based interactive environment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11097–11107.
Zeng, K.-H., Weihs, L., Farhadi, A., and Mottaghi, R. (2021). Pushing it out of the way: Interactive visual navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9868–9877.
Task and Motion Planning Methods: Applications and Limitations