Abstract
Recently, much research has focused on learning to grasp novel objects, an important but still unsolved problem, especially for service robots. While some approaches perform well in certain cases, they require human labeling and can hardly be used in clutter with high precision. In this paper, we apply a deep learning approach to the problem of grasping novel objects in clutter. We focus on two-fingered parallel-jaw grasping with an RGB-D camera. First, we propose a 'grasp circle' method, parameterized by the size of the gripper, to find more potential grasps at each sampling point at lower cost. Considering the challenge of collecting large amounts of training data, we collect training data directly from cluttered scenes with no manual labeling. We then extract effective features from the RGB and depth data, for which we propose a bimodal representation and use two-stream convolutional neural networks (CNNs) to handle the processed inputs. Finally, experiments show that, compared to several existing popular methods, our method achieves a higher grasp success rate on the original RGB-D cluttered scenes.
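The abstract does not specify the two-stream architecture or how the RGB and depth streams are fused. Below is a minimal sketch, assuming two small convolutional branches over 64x64 grasp-candidate patches with late fusion by concatenation; the branch depths, patch size, and fusion choice are illustrative assumptions, not the authors' design.

```python
# Minimal sketch of a two-stream CNN for scoring RGB-D grasp candidates.
# Architecture details (layer sizes, patch size, fusion) are assumptions.
import torch
import torch.nn as nn


def conv_branch(in_channels: int) -> nn.Sequential:
    """A small convolutional stream for one modality (RGB or depth)."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=5, padding=2),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),                      # 64x64 -> 32x32
        nn.Conv2d(32, 64, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),                      # 32x32 -> 16x16
    )


class TwoStreamGraspNet(nn.Module):
    """Scores a grasp candidate from paired RGB and depth patches."""

    def __init__(self):
        super().__init__()
        self.rgb_stream = conv_branch(in_channels=3)    # RGB patch
        self.depth_stream = conv_branch(in_channels=1)  # depth patch
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 64 * 16 * 16, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 1),                # grasp success logit
        )

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # Late fusion: concatenate the two feature maps along channels.
        fused = torch.cat([self.rgb_stream(rgb), self.depth_stream(depth)], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    net = TwoStreamGraspNet()
    rgb = torch.randn(4, 3, 64, 64)     # batch of RGB grasp patches
    depth = torch.randn(4, 1, 64, 64)   # corresponding depth patches
    print(net(rgb, depth).shape)        # -> torch.Size([4, 1])
```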
http://ift.tt/2HTsRkz