shapes, two series of experiments using many different clothing items from five clothing categories were conducted. Although the method of bringing clothing items into one of the limited shapes is still under development, the classification is highly accurate once an item has been successfully reshaped into one of the limited shapes. Although more thorough experiments are needed before drawing firm conclusions, the current results show that feeding the observed information back into the model-building process yields common category models that are highly discriminative among different categories and, at the same time, tolerant of intra-category shape variation.
Because, in the proposed framework, the state of the clothing item (e.g., Shape A of trousers in Fig. 10(a)) is identified at the same time as its category, the method can be directly connected to subsequent actions for specific tasks such as folding or spreading into a fixed shape. The results are also highly compatible with the model-driven method of (Y. Kita and Kita, 2014) for performing such further tasks.
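As a purely illustrative sketch (not taken from the proposed system), the following Python snippet shows how a classification result that carries both the category and the limited-shape state could be dispatched to a follow-up manipulation routine; the category labels, shape names, and action functions are hypothetical assumptions.

# Hypothetical sketch: dispatching a recognized (category, shape) result
# to a subsequent manipulation action. Labels and routines are illustrative
# assumptions, not the paper's API.
from typing import Callable, Dict, Tuple

def fold_trousers_from_shape_a() -> None:
    # Placeholder for a task-specific folding sequence.
    print("Executing folding sequence for trousers in Shape A")

def spread_shirt_to_flat() -> None:
    # Placeholder for spreading a shirt into a fixed flat shape.
    print("Spreading shirt into the fixed flat shape")

# Map (category, limited shape) -> follow-up action.
ACTION_TABLE: Dict[Tuple[str, str], Callable[[], None]] = {
    ("trousers", "Shape A"): fold_trousers_from_shape_a,
    ("shirt", "Shape B"): spread_shirt_to_flat,
}

def dispatch(category: str, shape: str) -> None:
    """Select the follow-up task from the classification result."""
    action = ACTION_TABLE.get((category, shape))
    if action is None:
        raise ValueError(f"No action defined for {category} in {shape}")
    action()

if __name__ == "__main__":
    # Example: the classifier returned trousers reshaped into Shape A.
    dispatch("trousers", "Shape A")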
ACKNOWLEDGEMENTS
The authors thank Dr. Y. Kawai and Mr. T. Ueshiba for their support of this research. This work was supported by a Grant-in-Aid for Scientific Research, KAKENHI (16H02885).
REFERENCES
A. Doumanoglou, A. Kargakos, T.-K. K. S. M. (2014). Autonomous active recognition and unfolding of clothes using random decision forests and probabilistic planning. In International Conference on Robotics and Automation (ICRA) 2014, pages 987–993.
B. Willimon, S. and Walker, I. (2013). Classification of clothing using midlevel layers. ISRN Robotics, pages 1–17.
B. Willimon, S. B. and Walker, I. (2011). Model for unfolding laundry using interactive perception. In Int'l Conf. on Intelligent Robots and Systems (IROS '11), pages 4871–4876.
F. Osawa, H. S. and Kamiya, Y. (2007). Unfolding of massive laundry and classification types by dual manipulator. Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol. 11, No. 5, pages 457–463.
Gould, D. A. D. (2004). Complete Maya Programming. Morgan Kaufmann Pub.
Hamajima, K. and Kakikura, M. (2000). Planning strategy for task of unfolding clothes (classification of clothes). Journal of Robotics and Mechatronics, Vol. 12, No. 5, pages 577–584.
Hu, J. and Kita, Y. (2015). Classification of the category of clothing item after bringing it into limited shapes. In Proc. of International Conference on Humanoid Robots 2015, pages 588–594.
I. Mariolis, G. Peleka, A. K. and Malassiotis, S. (2015). Pose and category recognition of highly deformable objects using deep learning. In International Conference on Robotics and Automation (ICRA) 2015, pages 655–662.
J. Maitin-Shepard, M. Cusumano-Towner, J. L. and Abbeel,
P. (2010). Cloth grasp point detection based on
multiple-view geometric cues with application to
robotic towel folding. In Proc. of IEEE Int’l Conf.
on Robotics and Automation (ICRA ’10).
Kaneko, K., Kanehiro, F., Kajita, S., Hirata, M., Akachi, K., and Isozumi, T. (2004). Humanoid Robot HRP-2. In Proc. of IEEE Int'l Conf. on Robotics and Automation (ICRA '04), pages 1083–1090.
P. Yang, K. Sasaki, K. S. K. K. S. S. and Ogata, T. (2017). Repeatable folding task by humanoid robot worker using deep learning. IEEE Robotics and Automation Letters, Vol. 2, pages 397–403.
S. Miller, M. Fritz, T. D. and Abbeel, P. (2011). Parameterized shape models for clothing. In Proc. of IEEE Int'l Conf. on Robotics and Automation (ICRA '11), pages 4861–4868.
Stria, J. and Hlavac, V. (2018). Classification of hanging garments using learned features extracted from 3d point clouds. In Proc. of Int. Conf. on Intelligent Robots and Systems (IROS 2018), pages 5307–5312.
Ueshiba, T. (2006). An efficient implementation technique of bidirectional matching for real-time trinocular stereo vision. In Proc. of 18th Int. Conf. on Pattern Recognition, pages 1076–1079.
Y. Kita, F. Kanehiro, T. U. and Kita, N. (2014). Strategy for folding clothing on the basis of deformable models. In Proc. of International Conference on Image Analysis and Recognition 2014, pages 442–452.