- Time: 2017/12/21 | Posted by: Shenzhen Kenjia Technology Co., Ltd.
In the field of artificial intelligence, tactile sensing is one of the key technologies for making robots grasp objects as skillfully as humans do. Recently, a computer science team at Carnegie Mellon University in the United States has been training a robot called "Baxter" through continuous trial and error, working toward a new generation of industrial robot that combines visual and tactile processing.
The robot grasps with the help of a FingerVision gripper at the end of its arm. FingerVision is a gripper printed on a desktop 3D printer. Its exterior is covered with a transparent silicone sleeve marked with black dots used for detection. When FingerVision grasps an object, the black dots on the surface of the sleeve deform. A built-in miniature camera captures the deformation of these dots, and FingerVision uses this information to judge the contact and make the corresponding grasping response.
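The dot-tracking idea described above can be sketched in a few lines: compare the dot positions in a no-contact reference frame with those in the current camera frame, and treat the mean displacement as a crude contact signal. The following is a toy NumPy illustration under our own assumptions about the data format (dots already located as pixel coordinates); it is not the actual FingerVision code.

```python
import numpy as np

def estimate_deformation(ref_dots, cur_dots):
    """Estimate per-dot displacement between a reference frame (no contact)
    and the current frame. Each input is an N x 2 array of dot centroid
    pixel coordinates (a hypothetical representation of the camera output)."""
    disp = cur_dots - ref_dots
    # The mean displacement magnitude serves as a crude contact-force proxy.
    return disp, float(np.linalg.norm(disp, axis=1).mean())

# Toy example: three dots, shifted slightly as if the sleeve deformed.
ref = np.array([[10.0, 10.0], [20.0, 10.0], [15.0, 20.0]])
cur = ref + np.array([[0.5, 0.0], [0.5, 0.1], [0.4, -0.1]])
vectors, magnitude = estimate_deformation(ref, cur)
print(round(magnitude, 3))  # mean dot displacement in pixels
```

In a real system the dot centroids would come from blob detection on each camera frame, and the displacement field (not just its mean) would be used to infer force direction and slip.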
In addition, the robot uses AI self-learning. Baxter feeds the visual and tactile information collected by FingerVision into a neural network loosely modeled on the human brain. After cross-matching the processed images against ImageNet, the world's largest image recognition database, the team found that the robot's recognition accuracy was 10% higher than that of robots using image data alone.
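The article does not describe the network's architecture, but one common way to combine two modalities like this is "late fusion": extract a feature vector from each modality and concatenate them before a classifier layer. The sketch below illustrates that general idea with random toy features in NumPy; the sizes, weights, and fusion scheme are assumptions, not the CMU team's design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors: one from the camera image, one from the
# FingerVision tactile (dot-deformation) signal. Sizes are arbitrary.
visual_feat = rng.normal(size=128)
tactile_feat = rng.normal(size=32)

# Late fusion: concatenate the modalities, then apply one linear layer.
fused = np.concatenate([visual_feat, tactile_feat])   # shape (160,)
W = rng.normal(size=(10, 160)) * 0.1                  # toy classifier weights
logits = W @ fused
probs = np.exp(logits) / np.exp(logits).sum()         # softmax over 10 classes
print(fused.shape, probs.shape)
```

The reported 10% accuracy gain is consistent with the intuition behind such fusion: tactile features carry information (slip, compliance, contact shape) that image pixels alone do not.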
Currently, FingerVision can complete a series of grasping actions by tactilely sensing whether an object is slipping and adjusting its grip accordingly, even with delicate objects such as a banana peel. FingerVision grips firmly when it contacts a familiar object and withdraws the arm when it touches an unfamiliar one.
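The slip-to-grip feedback described above can be sketched as a simple control loop: when the dot pattern is seen translating across the sensor (slip), tighten the grip in proportion to the slip speed. The units, gain, and force cap below are illustrative assumptions, not values from the CMU work.

```python
def adjust_grip(grip_force, slip_speed, gain=0.5, max_force=10.0):
    """If slip is detected (dot pattern translating), tighten the grip
    proportionally, up to a safety cap; otherwise hold the current force.
    All units and the gain are illustrative assumptions."""
    if slip_speed > 0.0:
        grip_force = min(grip_force + gain * slip_speed, max_force)
    return grip_force

# Simulated slip readings over a few control ticks (px/frame, hypothetical).
force = 2.0
for slip in [0.0, 1.2, 0.8, 0.0]:
    force = adjust_grip(force, slip)
print(force)  # 3.0
```

Capping the force matters for exactly the banana-peel case mentioned above: the controller should stop tightening before it crushes a delicate object.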
Once Carnegie Mellon's FingerVision matures, it will allow robots to take a step further in perception. Perhaps, as FingerVision's developers say, robots will one day work more safely and efficiently alongside humans.