Nvidia researchers train robots to pick up objects using synthetic datasets

Nvidia has created a way to use data generated in a virtual environment to train robots to pick up objects in the real world. The convolutional neural network, trained using synthetic data, detects the location of objects in real time from an RGB camera, allowing a Baxter robot to pick them up.
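
As a rough illustration of that kind of perception loop (a generic sketch, not the researchers' network or code: the model file name, its output format, and the camera index are all assumptions), a real-time detector reading frames from an ordinary RGB camera might look like this in Python:

```python
import cv2
import torch

# Hypothetical: a CNN trained on synthetic images, exported with
# TorchScript. "pose_net.pt" is a placeholder, not Nvidia's model.
model = torch.jit.load("pose_net.pt")
model.eval()

cap = cv2.VideoCapture(0)  # a plain RGB camera; no depth sensor needed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV delivers BGR; convert to RGB and to a normalized float tensor
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        # Assumed output: per-object locations for the robot's grasp
        # planner to consume; the real network's outputs may differ.
        detections = model(tensor)
    # ... hand detections off to the robot controller here ...
cap.release()
```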

Cans of soup, a mustard bottle, and a box of Cheez-Its were used in trials in which the robot picked up an item and gently placed it into a human's hands.

To generate their synthetic data, researchers at Nvidia's robotics lab in Seattle created a custom plugin for Unreal Engine 4 that produced two sets of more than 120,000 labeled synthetic images.

The generator randomizes factors such as object positions, lighting, and shadows, so that the robot can operate in more dynamic environments.
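
This technique is commonly known as domain randomization. The minimal Python sketch below shows the general idea; it is not Nvidia's Unreal Engine plugin, and the parameter names, value ranges, and `render_scene` callback are hypothetical stand-ins for whatever a real rendering engine exposes:

```python
import random

def randomize_scene_params():
    """Sample random scene parameters for one synthetic image.

    All names and ranges here are illustrative; a real generator
    would drive the rendering engine's own scene API.
    """
    return {
        # Object pose: random position (meters) and yaw (degrees)
        "object_position": [random.uniform(-0.5, 0.5),
                            random.uniform(-0.5, 0.5),
                            random.uniform(0.0, 0.3)],
        "object_yaw_deg": random.uniform(0.0, 360.0),
        # Lighting varied so the network cannot overfit to one setup
        "light_intensity": random.uniform(0.2, 2.0),
        "light_direction": [random.uniform(-1.0, 1.0) for _ in range(3)],
        # Shadows toggled on and off to vary their appearance
        "cast_shadows": random.choice([True, False]),
    }

def generate_dataset(n_images, render_scene):
    """Render n_images frames, each under freshly randomized parameters.

    `render_scene` is a hypothetical callback that takes the sampled
    parameters and returns an (image, label) pair from the engine.
    """
    return [render_scene(randomize_scene_params()) for _ in range(n_images)]
```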

“When we mix these two together during our training process, what we find is that the network is able to [operate] at the level — or even better than — competing state-of-the-art networks that were trained on real data. So this is the first time we've seen that sort of result of training on synthetic data beating a network that was trained on real data,” Nvidia researcher Stan Birchfield told VentureBeat in a phone interview.

The paper and its findings build upon work released earlier this year by Nvidia researchers in which robots were trained to pick up objects by ingesting large amounts of data generated in a virtual environment.

The code used to create the plugin has been made publicly available, so researchers can train robots for environments less controlled than academic labs.

“Robots are making their way into everyday applications, so there's vertical markets like agriculture and manufacturing — and then there's more horizontal markets like home robots and health care robots and those sorts of things. And I think in all of these markets it's going to be important for robots to perceive the world in a safe manner and a reactive manner so they can react to the changes of the world around them. And so this technology we developed, we think it is a meaningful step in that direction,” Birchfield said.

The findings are being presented this week at the Conference on Robot Learning (CoRL) in Zurich, Switzerland.

Alongside Birchfield, the study was authored by Nvidia head of robotics research Dieter Fox, Jonathan Tremblay, Thang To, Balakumar Sundaralingam, and Yu Xiang.
