The sea represents one of Portugal's main resources. Novel ways of exploring and exploiting maritime opportunities are of particular interest given the proposed expansion of Portugal's continental shelf. Land-based and air-based swarm robotics systems have been studied extensively, but the same does not hold true for swarms in aquatic environments, mainly because tasks at sea are usually expensive to conduct due to the operational requirements of support crews and manned vehicles. We propose an alternative approach: collectives of relatively simple and inexpensive aquatic robots (swarms). This approach, in which robots are easily replaceable, has high potential for application to essential tasks such as prospecting sites for aquaculture, environmental monitoring, sea life localization, bridge inspection, sea border patrolling, and so on. Many of these tasks require distributed sensing, scalability, and robustness to faults, which can be facilitated by collectives of robots with decentralized control based on principles of self-organization.
With this project, we hope to make three main contributions: (i) apply our novel control synthesis approach to a set of maritime tasks in real-world conditions, (ii) develop a scalable, heterogeneous, and fault-tolerant ad-hoc network architecture for swarms of aquatic robots, and (iii) release all the developed hardware components and software under an open-source license, so that other researchers (and enthusiasts) can build their own aquatic robots.
We designed and produced a total of 10 simple, small (60 cm in length), and inexpensive (≈300 EUR/unit) robots using widely available, off-the-shelf hardware. Each robot is a differential-drive mono-hull boat. The maximum speed of the robot in a straight line is 1.7 m/s (3.3 kt), and the maximum angular velocity is 90°/s. The on-board control of each robot runs on a Raspberry Pi 2 single-board computer. Robots communicate through an IEEE 802.11g (Wi-Fi) ad-hoc wireless network by broadcasting UDP datagrams, which are received by neighboring robots and the monitoring station. The robots form a distributed network without any central coordination or single point of failure. Each robot is equipped with GPS and compass sensors, and broadcasts its position to neighboring robots every second.
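To make the communication scheme concrete, the following is a minimal Java sketch of the kind of periodic UDP position broadcast described above. The message format, port number, and broadcast address are assumptions for illustration only; the actual protocol used on the robots may differ.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

/** Illustrative sketch: broadcast the robot's position once per second over UDP. */
public class PositionBroadcaster {

    private static final int PORT = 8888;                       // assumed port
    private static final String BROADCAST_ADDR = "255.255.255.255";

    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);
            InetAddress broadcast = InetAddress.getByName(BROADCAST_ADDR);

            while (true) {
                // Placeholder values; on the real robot these would come from
                // the GPS and compass drivers.
                double lat = 38.7486, lon = -9.1532, headingDeg = 90.0;

                // Assumed message layout: id;lat;lon;heading;timestamp
                String msg = String.format("robot-01;%f;%f;%f;%d",
                        lat, lon, headingDeg, System.currentTimeMillis());
                byte[] payload = msg.getBytes(StandardCharsets.UTF_8);

                socket.send(new DatagramPacket(payload, payload.length, broadcast, PORT));
                Thread.sleep(1000);  // broadcast the position every second
            }
        }
    }
}
```

Because the datagrams are simply broadcast on the ad-hoc network, any robot or the monitoring station within range can listen on the same port, which is what removes the need for central coordination.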
One of the main goals we set early in the project was the construction of inexpensive prototype hardware (inexpensive when compared with the majority of commercial unmanned surface vehicles available on the market) to serve as a platform for development and research. The main characteristics of this prototype are: (i) it is easy to manufacture, allowing large-scale deployment, and (ii) it makes use of off-the-shelf and widely available components (such as motors and sensors), keeping the cost as low as possible and making maintenance easier.
# Hardware
We used CAD systems to design the hull and the majority of the support components. The hulls were then milled using a CNC machine, and 12 support components were 3D printed. The use of digital manufacturing allowed us to quickly iterate and optimize the hull designs. For hull production we used extruded polystyrene foam (XPS), since it is buoyant, easily machinable, and inexpensive. In total, we produced 19 different hulls: nine prototypes and 10 operational units. The final batch of operational robots was coated in epoxy resin and fiberglass in order to waterproof the hull and make it resistant to impacts. Each unit costs approximately 300 EUR in materials. Even though we manufactured our robots using this particular method, the open-source nature of the platform easily allows for different manufacturing processes, design choices, sensory payloads, and actuators to be used.
# Control
Each robot is controlled by an artificial neural network-based controller. The inputs of the neural network are the normalized readings of the sensors, and the outputs of the network control the robot's actuators. The sensor readings and actuation values are updated every 100 ms. The neural network controlling each robot has two outputs, which control the linear speed and the angular velocity, respectively. These two values are converted to left and right motor speeds. We implemented three virtual sensors for the detection of points and objects of interest in the task environment. The sensor values are obtained by pre-processing the GPS locations of the entities in the environment that the robot is currently aware of, together with the robot's current heading and position, as given by the GPS and compass.
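The conversion from the two network outputs to motor speeds can be sketched as standard differential-drive mixing. The snippet below assumes the outputs are in [0, 1] and that the mapping ranges match the robot specs given earlier; the names, ranges, and constants are illustrative, not taken from the project's code.

```java
/** Illustrative differential-drive mixing of the two controller outputs. */
public class DifferentialDriveMixer {

    private static final double MAX_SPEED = 1.7;                    // m/s, from the robot specs
    private static final double MAX_TURN_RATE = Math.toRadians(90); // rad/s, from the robot specs

    /** Returns {leftMotor, rightMotor}, each clamped to [-1, 1]. */
    public static double[] mix(double speedOutput, double turnOutput) {
        // Map network outputs from [0, 1] to physical ranges (assumed encoding).
        double linear  = speedOutput * MAX_SPEED;                   // [0, MAX_SPEED]
        double angular = (turnOutput * 2.0 - 1.0) * MAX_TURN_RATE;  // [-MAX, +MAX]

        // Turning is achieved by an asymmetry between the two motor commands.
        double left  = linear / MAX_SPEED - angular / MAX_TURN_RATE;
        double right = linear / MAX_SPEED + angular / MAX_TURN_RATE;

        return new double[] {clamp(left), clamp(right)};
    }

    private static double clamp(double v) {
        return Math.max(-1.0, Math.min(1.0, v));
    }
}
```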
The evolutionary process used to synthesize behavior for the swarm relies exclusively on simulation for the evaluation of the candidate solutions. It was therefore necessary to implement a simulator in which the performance of the controllers could be adequately assessed. We extended our framework JBotEvolver, a Java-based open-source framework that has been used in a large number of evolutionary robotics studies. We modeled the robots in the simulator and developed a middle layer that is shared by the on-board control software on the hardware platform (Raspberry Pi) and the simulation in JBotEvolver, enabling the same codebase and controllers to be executed both on the real robots and in simulation.
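The following Java sketch illustrates the general idea behind such a middle layer: the controller is written against abstract sensor and actuator interfaces, and each platform (simulator or Raspberry Pi) provides its own implementation. The interface and class names here are hypothetical and do not reflect JBotEvolver's actual API.

```java
/** Abstraction the controller sees, regardless of platform. */
public interface RobotIO {
    /** Normalized sensor readings, refreshed every control cycle (100 ms). */
    double[] readSensors();

    /** Apply the controller outputs (e.g. linear speed and turn rate). */
    void actuate(double[] outputs);
}

// Placeholder for the evolved network; only the propagate() call matters here.
interface NeuralNetwork {
    double[] propagate(double[] inputs);
}

// A control loop written against RobotIO is oblivious to whether readSensors()
// is backed by the simulator's world model or by the real GPS/compass drivers.
class ControlLoop {
    private final RobotIO io;

    ControlLoop(RobotIO io) {
        this.io = io;
    }

    void step(NeuralNetwork network) {
        io.actuate(network.propagate(io.readSensors()));
    }
}
```

With this separation, the same evolved controller class can be loaded unchanged by the simulator and by the on-board control software, which only differ in which RobotIO implementation they supply.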
This project is being developed at BioMachines Lab, ISCTE-IUL, and Instituto de Telecomunicações.