SofToss: Learning to Throw Objects With a Soft Robot

Antonelli Michele Gabrio;
2023-01-01

Abstract

In this article, we present, for the first time, a soft robot control system (SofToss) capable of throwing life-size objects toward target positions. SofToss is an open-loop controller based on deep reinforcement learning (RL) that, given the target position, generates an actuation pattern for the tossing task. To deal with the highly nonlinear dynamics of soft robots, we deploy a neural network to learn the relationship between the actuation pattern and the landing position, i.e., the direct model (DM) of the task. An RL method is then used to predict the actuation pattern for a given goal position. The proposed controller was tested on a modular soft robotic arm, I-Support, by tossing four objects of different shapes and weights into 140-mm-square target boxes. Almost 65% of the throws succeeded across two actuation modalities (i.e., partial, keeping one module of the soft arm passive, and complete, with both modules active). This performance rises to 85% if the number of actuated modules can be chosen for each throwing direction. Furthermore, the results show that the proposed learning-based, real-time controller achieves performance comparable to that of an optimization-based, non-real-time controller. Our study contributes to the foundations for bringing soft robots into everyday life and industry by enabling them to perform more complex, dynamic tasks.
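The abstract outlines a two-stage pipeline: a neural network first learns the direct model (DM) from actuation patterns to landing positions, and an RL method then produces the actuation pattern for a desired goal. The following is only a minimal sketch of such a pipeline: the network sizes, the dimensions `ACT_DIM` and `POS_DIM`, the placeholder data, and the policy-trained-through-the-DM scheme are all illustrative assumptions, not the authors' implementation, whose deep RL details are not given in this abstract.

```python
# Minimal sketch (assumptions, not the paper's code): learn a direct model
# mapping actuation pattern -> landing position, then train a policy that
# inverts it for a given target. All dimensions and data are placeholders.
import torch
import torch.nn as nn

ACT_DIM, POS_DIM = 6, 2  # hypothetical: 6 actuation values, 2-D landing point

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, out_dim))

# --- Stage 1: fit the DM on recorded (actuation, landing) pairs ---
dm = mlp(ACT_DIM, POS_DIM)
actions = torch.rand(1000, ACT_DIM)    # placeholder for logged throws
landings = torch.randn(1000, POS_DIM)  # placeholder for tracked landings
opt_dm = torch.optim.Adam(dm.parameters(), lr=1e-3)
for _ in range(200):
    opt_dm.zero_grad()
    nn.functional.mse_loss(dm(actions), landings).backward()
    opt_dm.step()

# --- Stage 2: train a policy to output the actuation for a goal position ---
for p in dm.parameters():              # freeze the DM while training the policy
    p.requires_grad_(False)
policy = mlp(POS_DIM, ACT_DIM)
opt_pi = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(200):
    targets = torch.randn(64, POS_DIM)                 # sampled goal positions
    pred_landing = dm(torch.sigmoid(policy(targets)))  # actuation kept in [0, 1]
    opt_pi.zero_grad()
    nn.functional.mse_loss(pred_landing, targets).backward()
    opt_pi.step()
```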
Files for this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11697/219504
Citations
  • PubMed Central: not available
  • Scopus: 2
  • Web of Science: 0