Many underwater intervention tasks are today performed using manned submersibles or Remotely Operated Vehicles (ROVs) in tele-operation mode, while Autonomous Underwater Vehicles (AUVs) are mostly employed in survey applications. The low bandwidth and significant time delay inherent in acoustic subsea communications are a considerable obstacle to remotely operating a manipulation system, making it impossible for remote controllers to react to problems in a timely manner. As a result, only a few AUVs are equipped with manipulators for underwater intervention.
SAUVIM (Semi Autonomous Underwater Vehicle for Intervention Mission) has been developed to address this challenging task. Today, it is one of the first underwater vehicles, if not the only one, capable of autonomous manipulation.
With no physical link to the surface and no human occupants, SAUVIM will permit intervention in dangerous areas, such as the deep ocean, in missions to retrieve hazardous objects, or in classified areas.
The key element in underwater intervention performed with SAUVIM is autonomous manipulation. This is a challenging technology milestone: the capability of a robot system to perform intervention tasks requiring physical contact with unstructured environments, without continuous human supervision.
Unlike teleoperated manipulation systems, which are controlled by human operators with the aid of visual and other sensory feedback, an autonomous manipulation system must be capable of assessing a situation, including self-calibration based on sensory information, and of executing or revising a course of manipulation actions without continuous human intervention. It is sensible to view the development of autonomous manipulation as a gradual transition from human-teleoperated manipulation. Within this transition, the most noticeable aspect is the rise in the level of abstraction of the information exchanged between the system and the human supervisor.
In teleoperation with ROVs, the user sends and receives low-level information to directly set the position of the manipulator with the aid of visual feedback. As the system becomes more autonomous, the user may provide only a few higher-level decision commands, interacting with the task description layer; the management of lower-level functions (e.g., driving the motors to achieve a particular task) is left to the onboard system. The level of autonomy is thus related to the level of information the system needs to perform a particular intervention. At the task execution level, the system must be capable of acting and reacting to the environment, making extensive use of sensor data processing.
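The split described above, where the operator issues a symbolic decision command and the onboard system expands it into pre-programmed low-level functions, can be sketched as follows. This is an illustrative sketch only; the task names, primitive names, and dispatch structure are assumptions for the example, not SAUVIM's actual software interfaces.

```python
# Hypothetical sketch of the layered command structure: the supervisor
# sends one high-level decision command; the onboard task-description
# layer expands it into pre-programmed low-level motion primitives.

TASK_LIBRARY = {
    # symbolic command -> ordered low-level primitives (names illustrative)
    "unplug_connector": ["locate_target", "approach", "grasp", "extract", "stow"],
    "retrieve_object":  ["locate_target", "approach", "grasp", "lift", "stow"],
}

def execute_task(command: str) -> list[str]:
    """Expand a high-level decision command into the low-level steps the
    onboard controller would drive; returns the executed sequence."""
    if command not in TASK_LIBRARY:
        raise ValueError(f"unknown task: {command}")
    executed = []
    for primitive in TASK_LIBRARY[command]:
        # In a real system each primitive would close a sensor-based
        # control loop driving the motors; here we only record the step.
        executed.append(primitive)
    return executed

print(execute_task("unplug_connector"))
```

The point of the layering is that only the one-line command crosses the operator boundary, while the multi-step sequence runs entirely onboard.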
Instead of directly operating the manipulator, the user may provide higher-level commands during a particular mission, such as “unplug the connector”. In this approach, the operator's role is to decide, after an analysis of the data, which particular task the vehicle is ready to execute, and then to send the corresponding decision command. The low-level control commands are generated by a pre-programmed onboard subsystem, while the virtual reality model in the local zone uses only the sparse symbolic information received through the low-bandwidth channel to reproduce the actual behavior of the system.
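The symbolic traffic that crosses the low-bandwidth acoustic link can be pictured as compact task-level event messages rather than streamed joint trajectories; the surface-side virtual reality model replays these events against its own kinematic model. The message format and field names below are purely hypothetical, chosen to show how little data such a symbolic update needs.

```python
import json

# Illustrative encoding of a symbolic status event sent over the
# low-bandwidth acoustic link. Field names ("t", "s", "ok") are
# assumptions for the example, not an actual SAUVIM protocol.

def encode_event(task: str, step: str, status: str) -> bytes:
    """Pack one task-level event into a small message."""
    return json.dumps({"t": task, "s": step, "ok": status}).encode()

def decode_event(msg: bytes) -> dict:
    """Recover the symbolic event on the surface side, where the
    virtual reality model replays it against its own vehicle model."""
    return json.loads(msg.decode())

msg = encode_event("unplug_connector", "grasp", "done")
print(len(msg), decode_event(msg))
```

A message of this kind is a few tens of bytes per task step, which is what makes supervision feasible over an acoustic channel where streaming raw sensor or trajectory data is not.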