
Corona, E., Alenyà, G., Gabas, A., and Torras, C. (2018). Active garment recognition and target grasping point detection using deep learning. Pattern Recognition, 74:629–641.
Ganin, Y. and Lempitsky, V. (2015). Unsupervised domain adaptation by backpropagation. In Int. Conf. on Machine Learning, pages 1180–1189.
Garcia-Camacho, I., Borràs, J., and Alenyà, G. (2022). Knowledge representation to enable high-level planning in cloth manipulation tasks. In ICAPS Workshop on Knowledge Engineering for Planning and Scheduling.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In IEEE Conf. on Computer Vision and Pattern Recognition, pages 770–778.
Hoque, R., Seita, D., Balakrishna, A., Ganapathi, A., Tanwani, A. K., Jamali, N., Yamane, K., Iba, S., and Goldberg, K. (2020). Visuospatial foresight for multi-step, multi-task fabric manipulation. In Robotics: Science and Systems.
Jangir, R., Alenyà, G., and Torras, C. (2020). Dynamic cloth manipulation with deep reinforcement learning. In IEEE Int. Conf. on Robotics and Automation, pages 4630–4636.
Kampouris, C., Mariolis, I., Peleka, G., Skartados, E., Kargakos, A., Triantafyllou, D., and Malassiotis, S. (2016). Multi-sensorial and explorative recognition of garments and their material properties in unconstrained environment. In IEEE Int. Conf. on Robotics and Automation, pages 1656–1663.
Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Kumar, S., Cherian, A., Dai, Y., and Li, H. (2018). Scalable dense non-rigid structure-from-motion: A Grassmannian perspective. In IEEE Conf. on Computer Vision and Pattern Recognition, pages 254–263.
Lee, R., Abou-Chakra, J., Zhang, F., and Corke, P. (2022). Learning fabric manipulation in the real world with human videos. arXiv preprint arXiv:2211.02832.
Li, Y., Wang, Y., Yue, Y., Xu, D., Case, M., Chang, S.-F., Grinspun, E., and Allen, P. K. (2018). Model-driven feedforward prediction for manipulation of deformable objects. IEEE Transactions on Automation Science and Engineering, 15(4):1621–1638.
Lippi, M., Poklukar, P., Welle, M. C., Varava, A., Yin, H., Marino, A., and Kragic, D. (2020). Latent space roadmap for visual action planning of deformable and rigid object manipulation. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pages 5619–5626.
Liu, Z., Mao, H., Wu, C.-Y., Feichtenhofer, C., Darrell, T., and Xie, S. (2022). A ConvNet for the 2020s. In IEEE/CVF Conf. on Computer Vision and Pattern Recognition, pages 11976–11986.
Long, M., Cao, Z., Wang, J., and Jordan, M. I. (2018). Conditional adversarial domain adaptation. Advances in Neural Information Processing Systems, 31.
Loshchilov, I. and Hutter, F. (2016). SGDR: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983.
Mariolis, I., Peleka, G., Kargakos, A., and Malassiotis, S. (2015). Pose and category recognition of highly deformable objects using deep learning. In Int. Conf. on Advanced Robotics, pages 655–662.
Matas, J., James, S., and Davison, A. J. (2018). Sim-to-real reinforcement learning for deformable object manipulation. In Conf. on Robot Learning, pages 734–743.
Pumarola, A., Agudo, A., Porzi, L., Sanfeliu, A., Lepetit, V., and Moreno-Noguer, F. (2018). Geometry-aware network for non-rigid shape prediction from a single view. In IEEE Conf. on Computer Vision and Pattern Recognition, pages 4681–4690.
Qian, J., Weng, T., Zhang, L., Okorn, B., and Held, D. (2020). Cloth region segmentation for robust grasp selection. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pages 9553–9560.
Ramisa, A., Alenyà, G., Moreno-Noguer, F., and Torras, C. (2013). FINDDD: A fast 3D descriptor to characterize textiles for robot manipulation. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pages 824–830.
Ramisa, A., Alenyà, G., Moreno-Noguer, F., and Torras, C. (2016). A 3D descriptor to detect task-oriented grasping points in clothing. Pattern Recognition, 60:936–948.
Saito, K., Watanabe, K., Ushiku, Y., and Harada, T. (2018). Maximum classifier discrepancy for unsupervised domain adaptation. In IEEE Conf. on Computer Vision and Pattern Recognition, pages 3723–3732.
Schulman, J., Lee, A., Ho, J., and Abbeel, P. (2013). Tracking deformable objects with point clouds. In IEEE Int. Conf. on Robotics and Automation, pages 1130–1137.
Seita, D., Jamali, N., Laskey, M., Tanwani, A. K., Berenstein, R., Baskaran, P., Iba, S., Canny, J., and Goldberg, K. (2018). Deep transfer learning of pick points on fabric for robot bed-making. In Robotics Research: The 19th Int. Symposium ISRR.
Sermanet, P., Lynch, C., Chebotar, Y., Hsu, J., Jang, E., Schaal, S., and Levine, S. (2018). Time-contrastive networks: Self-supervised learning from video. In IEEE Int. Conf. on Robotics and Automation, pages 1134–1141.
Tan, M. and Le, Q. (2019). EfficientNet: Rethinking model scaling for convolutional neural networks. In Int. Conf. on Machine Learning, pages 6105–6114.
Thananjeyan, B., Kerr, J., Huang, H., Gonzalez, J. E., and Goldberg, K. (2022). All you need is LUV: Unsupervised collection of labeled images using UV-fluorescent markings. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pages 3241–3248.
Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., and Jégou, H. (2021). Training data-efficient image transformers & distillation through attention. In Int. Conf. on Machine Learning, pages 10347–10357.
Tzeng, E., Hoffman, J., Saenko, K., and Darrell, T. (2017). Adversarial discriminative domain adaptation. In IEEE Conf. on Computer Vision and Pattern Recognition, pages 7167–7176.