Paper ID | SS-3DPU.2
Paper Title | SEEING BY HAPTIC GLANCE: REINFORCEMENT LEARNING BASED 3D OBJECT RECOGNITION
Authors | Kevin Riou, Suiyi Ling, CAPACITÉS SAS, France; Guillaume Gallot, CAPACITÉS, France; Patrick Le Callet, University of Nantes, France
Session | SS-3DPU: Special Session: 3D Visual Perception and Understanding
Location | Area B
Session Time | Tuesday, 21 September, 15:30 - 17:00
Presentation Time | Tuesday, 21 September, 15:30 - 17:00
Presentation | Poster
Topic | Special Sessions: 3D Visual Perception and Understanding
Abstract | Humans are able to perform 3D recognition through a limited number of haptic contacts between a target object and their fingers, without seeing the object. This capability is referred to as a 'haptic glance' in cognitive neuroscience. Most existing 3D recognition models were developed for dense 3D data. However, in many real-life use cases, where robots collect 3D data by haptic exploration, only a limited number of 3D points can be collected. In this study, we therefore focus on the intractable problem of how to obtain cognitively representative 3D key-points of a target object with limited interactions between the robot and the object. A novel reinforcement learning based framework is proposed, in which the haptic exploration procedure (the agent iteratively predicts the next position for the robot to explore) is optimized jointly with the 3D recognition objective on the actively collected 3D points. As the model is rewarded only when the 3D object is accurately recognized, it is driven to find a sparse yet efficient haptic-perceptual 3D representation of the object. Experimental results show that our proposed model outperforms state-of-the-art models.
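The abstract describes a sparse-reward exploration loop: a policy proposes the next contact location, the robot touches the object there, and the episode is rewarded only if the object is recognized correctly from the collected points. The sketch below is a minimal, illustrative rendering of that loop under assumed interfaces, not the authors' implementation: the names (`policy`, `touch`, `classify`, `MAX_GLANCES`, `NUM_CLASSES`) and the random stand-ins for the learned policy and classifier are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 4   # assumed number of object categories
MAX_GLANCES = 10  # assumed budget of haptic contacts per episode

def policy(points):
    """Predict the next 3D position to probe, given the points touched so far.
    The paper's agent is a learned network; this stand-in samples randomly."""
    return rng.uniform(-1.0, 1.0, size=3)

def touch(obj_surface, query):
    """Simulate a haptic contact: return the surface point nearest the query."""
    dists = np.linalg.norm(obj_surface - query, axis=1)
    return obj_surface[np.argmin(dists)]

def classify(points):
    """Recognize the object from the sparse point set.
    The paper trains this jointly with the policy; this stand-in guesses."""
    return rng.integers(NUM_CLASSES)

def run_episode(obj_surface, true_label):
    points = []
    for _ in range(MAX_GLANCES):
        query = policy(np.array(points) if points else np.zeros((0, 3)))
        points.append(touch(obj_surface, query))
    pred = classify(np.array(points))
    # Sparse reward: granted only on correct recognition, which is what
    # drives the agent toward informative contact locations.
    return 1.0 if pred == true_label else 0.0

# Toy object: random points on a unit sphere standing in for a 3D surface.
surface = rng.normal(size=(500, 3))
surface /= np.linalg.norm(surface, axis=1, keepdims=True)
print("episode reward:", run_episode(surface, true_label=0))
```

In a trained version of this loop, the episode reward would be back-propagated to the policy (e.g. via a policy-gradient method), so that the choice of contact locations and the recognition model improve together.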