
Occlusion-Robust and Efficient 6D Pose Estimation with Scene-Level Segmentation Refinement and 3D Partial-to-6D Full Point Cloud Transformation

Topics: 3D Deep Learning; Categorization and Scene Understanding; Deep Learning for Visual Understanding; Image-Based Modeling and 3D Reconstruction; Object Detection and Localization; Segmentation and Grouping

Authors: Sukhan Lee 1; Soojin Lee 1 and Yongjun Yang 2

Affiliations: 1 Department of Artificial Intelligence, Sungkyunkwan University, Suwon, Republic of Korea; 2 Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Republic of Korea

Keyword(s): Object 6D Pose, Panoptic Segmentation, Dual Associative Point Autoencoder, Point Cloud, Occluded Object.

Abstract: Accurate estimation of the 6D pose of objects is essential for 3D scene modeling, visual odometry, and map building, as well as for robotic manipulation of objects. Recently, various end-to-end deep networks have been proposed for object 6D pose estimation, with accuracies reaching the level of conventional regimes but with much higher efficiency. Despite this progress, accurate yet efficient 6D pose estimation of highly occluded objects in a cluttered scene remains a challenge. In this study, we present an end-to-end deep network framework for 6D pose estimation with particular emphasis on highly occluded objects in a cluttered scene. The proposed framework integrates an occlusion-robust panoptic segmentation network performing scene-level segmentation refinement and a dual associative point autoencoder (AE) that directly reconstructs the full 6D camera and object frame-based point clouds corresponding to a captured 3D partial point cloud through latent space association. We evaluated the proposed deep 6D pose estimation framework on the standard benchmark dataset LineMod-Occlusion (LMO) and obtained top-tier performance on the current leaderboard, validating the effectiveness of the proposed approach in terms of efficiency and accuracy.
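Once the dual associative AE has reconstructed index-aligned point clouds of the same object in the camera frame and the object frame, the remaining step is a rigid registration between the two clouds. The sketch below is a minimal numpy implementation of the standard Kabsch/Umeyama alignment that such a pipeline could use for this final step; the function name is illustrative and this is not the paper's actual implementation.

```python
import numpy as np

def kabsch_pose(obj_pts: np.ndarray, cam_pts: np.ndarray):
    """Recover rotation R and translation t with cam ≈ obj @ R.T + t,
    given two (N, 3) point clouds whose rows correspond by index.
    Standard Kabsch/Umeyama alignment without scale."""
    # Center both clouds on their centroids.
    mu_obj = obj_pts.mean(axis=0)
    mu_cam = cam_pts.mean(axis=0)
    # Cross-covariance between the centered clouds.
    H = (obj_pts - mu_obj).T @ (cam_pts - mu_cam)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_cam - R @ mu_obj
    return R, t
```

Because the AE outputs correspondences by construction (point i of the object-frame cloud matches point i of the camera-frame cloud), this closed-form solve avoids the iterative nearest-neighbor matching that ICP-style refinement would need.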

CC BY-NC-ND 4.0


Paper citation in several formats:
Lee, S.; Lee, S. and Yang, Y. (2024). Occlusion-Robust and Efficient 6D Pose Estimation with Scene-Level Segmentation Refinement and 3D Partial-to-6D Full Point Cloud Transformation. In Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP; ISBN 978-989-758-679-8; ISSN 2184-4321, SciTePress, pages 763-771. DOI: 10.5220/0012457700003660

@conference{visapp24,
author={Sukhan Lee and Soojin Lee and Yongjun Yang},
title={Occlusion-Robust and Efficient 6D Pose Estimation with Scene-Level Segmentation Refinement and 3D Partial-to-6D Full Point Cloud Transformation},
booktitle={Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP},
year={2024},
pages={763-771},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012457700003660},
isbn={978-989-758-679-8},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP
TI - Occlusion-Robust and Efficient 6D Pose Estimation with Scene-Level Segmentation Refinement and 3D Partial-to-6D Full Point Cloud Transformation
SN - 978-989-758-679-8
IS - 2184-4321
AU - Lee, S.
AU - Lee, S.
AU - Yang, Y.
PY - 2024
SP - 763
EP - 771
DO - 10.5220/0012457700003660
PB - SciTePress