A few million years ago, early humans discovered that they could use the objects around them as ‘tools’, and with that, the Stone Age began. This can be considered one of the pivotal moments in human history, because it set us on the path to all the progress we have made since. Had we not discovered as cavemen that random objects could be used to our advantage, neither the Large Hadron Collider nor the James Webb Space Telescope would exist.
As we discuss this in 2022, chances are a similar story is unfolding for robots. Despite advances in AI, machine learning, and other new-age technologies, robots have so far only been able to use the tools they were trained on. A robot deployed for emergency excavation, for example, can perform only its specific tasks; if even a minor piece of its equipment fails, the robot becomes useless. This limits robots’ ability to serve humans in danger and to do their job when it is needed most.
Keng Peng Tee, Ganesh Gowrishankar, and their colleagues at I2R, A*STAR, Singapore and UM-CNRS LIRMM in France undertook extensive research to make robots more versatile across a wide range of work. Their work was recently published in Nature Machine Intelligence and is attracting significant attention from around the world.
They designed a framework that makes it easier for robots to find objects in their environment that could serve as tools and then use those objects to physically perform tasks, even if they have never seen them before.
Past studies in robotics have shown that systems that can use tools to complete physical tasks have a lot of potential. However, all of the methods those studies discuss require prior training with the tools.
Tee and Gowrishankar therefore drew on their past work on human tool use, ‘embodiment’, and how humans characterize tools. This enabled the researchers to develop a new cognitive framework that allows robots to identify useful objects in their surroundings and use them to perform tasks the way we do.
One of the fundamental ways we judge whether an object could be useful as a tool is by using our own hands as a reference. We rely on the shape and size of our hands, and the corresponding features of the object, to assess whether it would help us achieve our objective. The newly developed cognitive framework takes this thought process as its basis.
Robots using the framework can thus intuitively identify candidate tools and validate their potential by matching them against the functionality of their own limbs. They then generate the necessary skills using their existing controllers and cameras.
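To make the idea concrete, here is a minimal sketch of that matching step in Python. This is not the authors’ implementation: the feature representation, the dimensions, the scoring rule, and the threshold are all hypothetical, and a real system would extract such features from camera data rather than hand-coded values.

```python
# Illustrative sketch only: a robot checks whether an object seen by its
# camera can substitute for one of its own limb functions by comparing
# simple geometric features, using the limb itself as the reference.
# All names, dimensions, and thresholds here are invented.

from dataclasses import dataclass

@dataclass
class FunctionalFeature:
    name: str          # e.g. "flat pushing surface" on the gripper
    length_cm: float   # characteristic size of the limb part
    width_cm: float

@dataclass
class ObjectPart:
    label: str
    length_cm: float
    width_cm: float

def suitability(limb: FunctionalFeature, part: ObjectPart) -> float:
    """Score (0..1) how well an object part mirrors a limb feature,
    echoing how humans judge tools against their own hands."""
    length_ratio = min(part.length_cm, limb.length_cm) / max(part.length_cm, limb.length_cm)
    width_ratio = min(part.width_cm, limb.width_cm) / max(part.width_cm, limb.width_cm)
    return 0.5 * (length_ratio + width_ratio)

def pick_tool(limb: FunctionalFeature, candidates: list[ObjectPart],
              threshold: float = 0.6) -> ObjectPart | None:
    """Return the best-matching candidate, or None if nothing qualifies."""
    best = max(candidates, key=lambda p: suitability(limb, p), default=None)
    if best is not None and suitability(limb, best) >= threshold:
        return best
    return None

# Example: the gripper's flat palm vs. two objects seen by the camera.
palm = FunctionalFeature("flat pushing surface", length_cm=8, width_cm=4)
seen = [ObjectPart("book edge", 20, 2), ObjectPart("spatula blade", 9, 5)]
print(pick_tool(palm, seen))  # -> the spatula blade, closest match to the palm
```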
As a result, a robot’s ability to use this framework depends largely on its visual apparatus, i.e., its cameras and sensors. This can be a limitation, since the shape and size of an object are not the only properties that matter when using it. For instance, weight and surface hardness are just as important as shape and size when we want to use an item as a hammer.
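As a purely illustrative extension of the sketch above, a suitability score could fold in such non-visual properties. The weighting, the target mass, and the hardness scale below are invented for illustration, and the non-visual values would have to come from sensors other than a camera (for example, a tactile probe or a test lift).

```python
# Hypothetical extension: combine the visual shape match with weight and
# surface hardness, which a camera cannot measure directly.

def hammer_suitability(shape_score: float, mass_kg: float, hardness: float) -> float:
    """Blend shape matching with non-visual properties (all weights invented).

    hardness is a normalized 0..1 estimate; mass is compared against an
    assumed preferred hammering mass of ~0.5 kg.
    """
    mass_score = min(mass_kg, 0.5) / max(mass_kg, 0.5)  # peaks at 0.5 kg
    return 0.4 * shape_score + 0.3 * mass_score + 0.3 * hardness

# A visually perfect match that is too light and soft scores poorly (~0.47):
print(hammer_suitability(shape_score=0.95, mass_kg=0.05, hardness=0.2))
# A decent shape match with hammer-like mass and hardness scores well (~0.82):
print(hammer_suitability(shape_score=0.7, mass_kg=0.45, hardness=0.9))
```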
The researchers aim to add more layers of perception to help robots make better and faster decisions about tool use. Their framework is intended to suit both existing and new robots.
Amazing, isn’t it? There is so much we can expect from robots in the future as they begin to draw parallels with software writing software!