What is the future of robotics

Researchers discuss the future of robotics

Intelligent robots that grip, manipulate, plan and simulate: the world of technology is changing. At a specialist conference in Freiburg, researchers are discussing the field's potential, approaches to open problems and current projects.

A robot opens the door. What looks so simple is technically highly complex. (Photo: Jürgen Gocke)

There hardly seems to be an area that robots are not about to enter. While their industrial use is being refined and extended to new applications, robots are increasingly finding their way into agriculture, are being discussed as a solution for nursing care and could also revolutionize transport. This diversity naturally spans numerous research areas, which a group of international scientists is currently discussing at a specialist congress at the University of Freiburg.

"Robotics: Science and Systems 2019 (RSS)" has a wide range of topics on the agenda for the three days of the congress, from overarching areas such as robots' visual perception, through their ability to grip gently, to cognitive robotics, humanoids, animaloids and special topics such as bio-inspired robotics. Through lectures, exhibitions and workshops, RSS is intended to enable an intensive exchange within the professional community.

How do autonomous robots learn their behavior?

Scientists from Stanford University in the USA, for example, presented their current research project: they are looking for ways to train intelligent robots as efficiently as possible. Even a supposedly autonomous robot must first be taught how to behave in various situations, namely exactly as humans expect it to. Interestingly, many people want robots to behave better than they themselves do; an autonomous vehicle, for example, should drive less aggressively than its owner would. The researchers are currently testing a new training model. For this, they combine real human demonstrations, in which humans show the robot what to do, with questions the robot asks so that it can better interpret the behavior it has seen.
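The idea of a robot refining its understanding of human preferences by asking comparison questions can be sketched in a few lines. The following toy example is an illustrative assumption, not the Stanford group's actual code: it learns the weights of a linear reward function from pairwise "which trajectory do you prefer?" queries, updating a Bradley-Terry preference model by gradient ascent. The feature definitions and noise levels are made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(trajectory):
    """Map a trajectory (array of states) to a small feature vector,
    e.g. mean speed and mean caution (made-up illustrative features)."""
    return np.array([trajectory[:, 0].mean(), trajectory[:, 1].mean()])

def reward(w, trajectory):
    # Linear reward model: weight vector dotted with trajectory features.
    return w @ features(trajectory)

def update_from_preference(w, traj_a, traj_b, preferred_a, lr=0.1):
    """One gradient step on the Bradley-Terry likelihood of the human's
    answer to the query 'do you prefer trajectory A or B?'."""
    fa, fb = features(traj_a), features(traj_b)
    p_a = 1.0 / (1.0 + np.exp(-(w @ (fa - fb))))  # P(human prefers A)
    grad = ((1.0 if preferred_a else 0.0) - p_a) * (fa - fb)
    return w + lr * grad

# Hidden "true" preferences the simulated human answers with.
w_true = np.array([-1.0, 2.0])
w = np.zeros(2)

for _ in range(500):
    # The robot poses a query: which of these two trajectories is better?
    traj_a = rng.normal(size=(10, 2))
    traj_b = rng.normal(size=(10, 2))
    answer = reward(w_true, traj_a) > reward(w_true, traj_b)
    w = update_from_preference(w, traj_a, traj_b, answer)

# The learned weights should point in roughly the true direction.
cos = w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true))
print("cosine similarity to true preferences:", round(cos, 3))
```

In practice, the interesting research question is which queries to ask: the robot should pick the pair of trajectories whose answer is most informative, rather than random ones as in this sketch.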

Why bother, when one could simply write a program that gives instructions? Electrical engineer Erdem Biyik from Stanford can explain that. The challenge is specifying exactly what a robot should do, especially when the task is complex; often, the machine finds a shortcut to the stated goal. For example, Biyik wanted to program a robotic arm to grab a cylinder and hold it in the air. "I said the hand must be closed, the object must be higher than X, and the hand should be at the same height," says Biyik. "The robot rolled the cylinder to the edge of the table, hurled it upwards and then punched the air next to it with its fist." All the conditions were met, yet this was clearly not the intended behavior.
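Biyik's anecdote is a classic case of a mis-specified goal: the literal conditions can be satisfied without the behavior anyone wanted. The sketch below is a hypothetical reconstruction, with invented state fields and a made-up value for "X", showing how both the intended lift and the air-punch shortcut pass the same specification because the one thing that actually matters, holding the object, was never checked.

```python
from dataclasses import dataclass

@dataclass
class State:
    hand_closed: bool
    holding_object: bool   # what we actually care about, but never checked
    object_height: float   # metres above the table (illustrative units)
    hand_height: float

MIN_HEIGHT = 0.3  # the "X" in Biyik's specification; assumed value

def goal_satisfied(s: State) -> bool:
    """The literal specification: hand closed, object above X,
    hand at the same height as the object."""
    return (s.hand_closed
            and s.object_height > MIN_HEIGHT
            and abs(s.hand_height - s.object_height) < 0.01)

# Intended behavior: the hand grips the cylinder and lifts it.
lifted = State(hand_closed=True, holding_object=True,
               object_height=0.5, hand_height=0.5)

# The shortcut: the cylinder is hurled upwards and the closed fist
# merely punches the air beside it at the same height.
hurled = State(hand_closed=True, holding_object=False,
               object_height=0.5, hand_height=0.5)

print(goal_satisfied(lifted), goal_satisfied(hurled))  # both satisfy the spec
```

This is exactly why the researchers prefer demonstrations and queries over hand-written conditions: the human's implicit notion of "holding the cylinder" is hard to enumerate as constraints but easy to recognize in examples.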

Robots can learn a lot from plants

The Growing Robots are another fascinating approach: robots that learn movement - yes, movement - from plants. Plants throw seed pods through the air, or shimmy from tree to tree like lianas in a tropical rainforest. While searching for a new hold, practically stretching out their feelers, lianas can even bridge considerable distances; only once they have found a new tree does the stalk soften. Others form veritable hooks at the tips of their shoots.

The scientists of the EU project GrowBot, which is currently in its idea-gathering phase, therefore want to concentrate initially on climbing plants and lianas. Project leader Barbara Mazzolai from the Italian Institute of Technology (IIT) brought along results from the Plantoid project, which she has overseen in recent years. The robot they built could dig itself into the ground like the roots of a plant; to do so, it added new material and used sensors for touch and temperature, among other things. Such a technology could be used not only in agriculture but also in construction.

Robots need their own eyes

In Freiburg it becomes clear not only where robotics stands today, but also how much groundwork is necessary to optimize robot functions. An American research group, for example, has developed a new filter (PoseRBPF) that makes it easier to track the 6D poses of objects in videos. 6D means that in addition to the three spatial dimensions, the three rotation angles are also tracked. If robots can track objects more easily, this ability will help them, among other things, when navigating or moving objects in a room.
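The core idea behind such tracking filters can be illustrated with a much simpler example. PoseRBPF itself is a Rao-Blackwellized particle filter over full 6D poses with learned image likelihoods; the toy sketch below, with hand-picked noise levels, tracks only a single position coordinate from noisy measurements, but it shows the same predict-reweight-resample loop that keeps a cloud of pose hypotheses locked onto a moving object.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000                                 # number of pose hypotheses
particles = rng.normal(0.0, 1.0, N)      # initial guesses for the position
weights = np.full(N, 1.0 / N)

true_pos = 0.0
for step in range(50):
    true_pos += 0.1                          # the object drifts to the right
    z = true_pos + rng.normal(0.0, 0.2)      # one noisy "video" measurement

    # Predict: propagate each hypothesis with the motion model plus noise.
    particles += 0.1 + rng.normal(0.0, 0.05, N)
    # Update: reweight by measurement likelihood (Gaussian sensor model).
    weights *= np.exp(-0.5 * ((z - particles) / 0.2) ** 2)
    weights /= weights.sum()
    # Resample when the effective particle count collapses.
    if 1.0 / (weights ** 2).sum() < N / 2:
        idx = rng.choice(N, N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

estimate = np.sum(weights * particles)
print("estimate:", round(estimate, 2), "true:", round(true_pos, 2))
```

Extending this from one coordinate to a full 6D pose mainly means richer state vectors and a likelihood computed from camera images rather than a Gaussian; the filtering loop itself stays the same.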


A contribution by:

  • Nicole Lücke

    Nicole Lücke does science journalism for research centers and universities, reports on medical congresses and edits customer magazines for energy providers. She is a shareholder of content qualities. Her topics: energy, technology, sustainability, medicine / medical technology.