INteractive robots that intuitiVely lEarn to inVErt tasks by ReaSoning about their Execution
Acronym
INVERSE
Description of the granted funding
Despite impressive advances in Artificial Intelligence (AI), current robotic solutions fall short of expectations when asked to operate in partially unknown environments. Above all, robots lack the cognitive capabilities to understand a task deeply enough to perform it in a different domain. As humans, during the learning process we gain deep insights into the execution of a process, which allows us to replicate it in a different domain with little effort. We are also able to invert the task execution and to react to contingencies by focusing attention on the most critical prediction phases. Replicating these cognitive processes in AI-driven robots is challenging, however, as it requires a profound rethinking of the robot learning paradigm itself. The robot needs to understand how to act and to imagine, as humans do, the possible consequences of its actions in another domain. This demands a novel framework that embraces different levels of abstraction, starting from physical interaction with the environment, passing through active perception and understanding, and ending with decision-making. The INVERSE project aims to provide robots with these essential cognitive abilities by adopting a continual learning approach. After an initial bootstrap phase, used to create initial knowledge from human-level specifications, the robot refines its repertoire by capitalising on its own experience and on human feedback. This experience-driven strategy makes it possible to frame different problems, such as performing a task in a different domain, as problems of fault detection and recovery. Humans play a central role in INVERSE, since their supervision helps limit the complexity of the refinement loop, making the solution suitable for deployment in production scenarios. The effectiveness of the developed solutions will be demonstrated in two complementary use cases designed to be realistic instantiations of actual work environments.
Starting year
2024
End year
2027
Granted funding
KONECRANES GLOBAL OY (FI), Participant: 283 906.25 €
DEMAG CRANES & COMPONENTS GMBH (DE), Third party: 150 468.75 €
MTU CIVITTA FOUNDATION (EE), Participant: 352 442.50 €
VSI CIVITTA FOUNDATION (LT), Participant: 51 307.50 €
STEINBEIS 2I GMBH (DE), Participant: 624 500 €
C.R.E.A.T.E. CONSORZIO DI RICERCA PER L'ENERGIA, L'AUTOMAZIONE E LE TECNOLOGIE DELL'ELETTROMAGNETISMO (IT), Participant: 587 500 €
UNIVERSITA DEGLI STUDI DI TRENTO (IT), Coordinator: 1 057 750 €
MONDRAGON GOI ESKOLA POLITEKNIKOA JOSE MARIA ARIZMENDIARRIETA S COOP (ES), Participant: 561 285 €
BOGAZICI UNIVERSITESI (TR), Participant: 745 000 €
TECHNISCHE UNIVERSITAET WIEN (AT), Participant: 1 130 322.50 €
DEUTSCHES ZENTRUM FUER LUFT- UND RAUMFAHRT EV (DE), Participant: 1 142 457.50 €
CENTRO RICERCHE FIAT SCPA (IT), Participant: 628 750 €
Amount granted
7 999 874 €
Funder
European Union
Funding instrument
HORIZON Research and Innovation Actions
Framework programme
Horizon Europe (HORIZON)
Call
Programme part
Digital, Industry and Space / Artificial Intelligence and Robotics
Topic
Novel paradigms and approaches, towards AI-driven autonomous robots (AI, data and robotics partnership) (RIA) (HORIZON-CL4-2023-DIGITAL-EMERGING-01-01)
Call ID
HORIZON-CL4-2023-DIGITAL-EMERGING-01
Other information
Funding decision number
101136067