Nithin MATHEWS' Ph.D. thesis supplementary website


On this website, we provide the source code of the robot controllers we used to conduct the experiments presented in the Ph.D. thesis entitled "Beyond self-assembly: Mergeable nervous systems, spatially targeted communication, and supervised morphogenesis for autonomous robots". The website first describes the software and system prerequisites required to execute the robot controllers, both in simulation (on a local PC) where possible and on real robots. The executable controller commands are then presented, grouped by thesis chapter and listed under the corresponding section titles and numbers from the thesis.

The marXbot and eye-bot controllers are programmed using a forked version of ARGoS2 -- the predecessor of the open source project ARGoS3 (or just ARGoS) that can be used to develop controllers for both simulated and real robots. This ARGoS version includes multiple customizations such as (i) the possibility for the user to interact with the robots (to inject faults) and with the stimulus (light source) during an ongoing simulation using built-in UI elements, (ii) the real-time visualization of messages sent through a mergeable nervous system (MNS) and of the body plan available to the brain robot, (iii) a behavioral toolkit enabling the seamless reuse and combination of independent behavioral components, (iv) the implementation of SWARMORPH-script, and (v) wireless Ethernet-based communication for physically attached real (i.e., not simulated) marXbots.

Controllers for the AR.Drone were developed based on an adaptation of a software package freely available for research. The software relies on SDK version 1.6 and firmware version 1.3.3 installed on the robot. Refer to the official developer guide to find out how to install the firmware on the AR.Drone and connect to it using WiFi. We used this adapted version of the software package to channel video streams from the AR.Drone to a remote PC where vision algorithms were run. Position control data computed on the basis of these streams were then transmitted from the PC back to the AR.Drone in real time using the same channel.



Contents

ARGoS2
AR.Drone software package
3.1.3 Speed, precision and other features
3.3.2 Unprecedented features and self-healing features
4.1.3 Experiments and results
4.3.3 Experiments and results
5.2 Case study nr. 1
5.3 Case study nr. 2
5.4 Quantifying performance benefits

ARGoS2

The ARGoS2 framework enables the execution of marXbot and eye-bot controllers both in simulation and on the real robots. Follow these steps to acquire ARGoS2 and to set up the experiments presented in this thesis on a Linux-based system.

Make sure your system meets the following library and package requirements:

cmake >= 2.6
make >= 3.81
Xi >= 1.0.4
FreeImage >= 3.9.3
GLUT >= 3.6
GSL (with dev package)
libpng (with headers)
libboost >= 1.32.0
libboost-dev >= 1.32.0
OpenCV >= 2.1
ImageMagick >= 7.0.0-*
Python >= 2.7
SDL 1.2
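To get a quick overview of what is already installed on your system, you can query some of the tools directly. This is only a spot check and does not cover the development libraries, which are easiest to install via your package manager as shown below:

 >> cmake --version
 >> make --version
 >> python --version
 >> convert --version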

For systems that support apt-get, executing the following command line should be sufficient to meet the requirements:
								
 sudo apt-get install cmake libxmu-dev libxi-dev freeglut3-dev libqt4-opengl-dev libgsl0-dev g++ libfreeimage-dev imagemagick python libsdl1.2-dev libboost-all-dev libsdl-gfx1.2-dev
							

Download and extract the source tree:
								
 >> wget http://iridia.ulb.ac.be/~mathews/PhD/supp/downloads/argos2.tar.gz
 >> sudo tar xvzf argos2.tar.gz
							

Compile the simulation framework:
								
 >> cd argos2
 >> ./build_simulation_framework.sh
							

In order to compile the controller for the real marXbot (also referred to as "foot-bot"), download the toolchain from https://wiki.epfl.ch/mobots-robots/toolchain. Then, untar the corresponding archive on your computer such that the toolchain is installed at /usr/local/angstrom (see the sketch after the command below). Cross-compile the controller using the following command:
								
 >> ./build_real_robot.sh footbot
							
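For reference, extracting the toolchain (the step described above) could look like the following sketch. The archive path is a placeholder and the exact directory layout depends on how the archive is packaged -- the end result should be that the toolchain resides at /usr/local/angstrom:

 >> cd /usr/local
 >> sudo tar xf /path/to/toolchain-archive
 >> ls /usr/local/angstrom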

Please refer to this document to see how you can copy and execute controllers on a real marXbot. The document also provides assistance in resolving potential issues you may encounter when cross-compiling the controller.

Test your setup by running the first experiment in ARGoS2:
								
 >> ./build/simulator/argos -c user/nithin/experiments/mns/basic.xml
							

When executed, the experiment shows the formation of an "x"-like MNS robot composed of 9 robots. The formation is initiated by a brain robot (pre-defined in basic.xml) only if the stimulus (i.e., the light source) is illuminated in green. When the stimulus is perceived to be blue, the MNS robot reacts by shrinking its size, i.e., by disconnecting one robotic unit after the other. A red stimulus halts both the growth and the shrinkage process.

All MNS-related logic is compiled into a single controller called bt_swarmorph_behavioral_controller, which needs to be copied onto each marXbot that is part of the experiment. The *.xml file used to configure the experiment (in this case basic.xml) and the SWARMORPH-script (i.e., the *.ml file linked in the *.xml) also need to be copied. Therefore, executing the test controller on a real marXbot requires you to (i) copy the controller and the two other files (see the sketch after the command below), (ii) log on to the marXbot, and (iii) execute the following command:
								
 >> cd nithin
 >> ./bt_swarmorph_behavioral_controller -c basic.xml -i fsmc
							
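For reference, the copy step (i) above could, for example, be carried out as follows. The user name and IP address of the marXbot are assumptions that depend on how your robot is set up; the target directory nithin matches the one used in the command above:

 >> scp bt_swarmorph_behavioral_controller basic.xml basic.ml root@<marxbot-ip>:nithin/
 >> ssh root@<marxbot-ip>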


Snapshots of the experiment executed using simulated (left) and real marXbots (right). The result of the real robot experiment can also be seen in this video.




AR.Drone software package


Download and extract the source tree:
								
 >> wget http://iridia.ulb.ac.be/~mathews/PhD/supp/downloads/ardrone.tar.gz
 >> sudo tar xvzf ardrone.tar.gz
							

Compile the software package:
								
 >> cd ardrone/helisimple/src/
 >> make
							

Connect to the ad-hoc WiFi network of your AR.Drone (make sure your firewall allows UDP connections on ports 5555 and 5557) and execute the following command to see a small GUI come up with a video stream from the AR.Drone's front-facing camera:
								
 >> ../bin/heli
							
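If the UDP ports mentioned above are blocked on your PC, open them in your firewall first. On a system using ufw, for example, this could be done as follows (adapt to whichever firewall you use):

 >> sudo ufw allow 5555/udp
 >> sudo ufw allow 5557/udp
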
The camera feed can be toggled between the front-facing and downward-facing camera using the keys Z and X. Takeoff and landing can be controlled using the keys Q and A, respectively. Pitch, roll, and yaw are controlled using the keypad numbers. Alternatively, all control can be handled via any standard joypad attached to the PC.

The software package also includes the image post-processing algorithms required to detect foot-bots underneath the AR.Drone. The parameter configuration of these algorithms needs to be adapted to the respective ambient light setting and is handled in ardrone/config.txt in a rather straightforward manner:
								
 config.altitude=1000

 cv.thresh.binary=98
 cv.thresh.binary.target=242

 cv.h.value.red=180
 cv.s.value.red=60
 cv.v.value.red=161

 cv.h.value.green=59
 cv.s.value.green=45
 cv.v.value.green=182

 cv.h.value.blue=25
 cv.s.value.blue=100
 cv.v.value.blue=165

 cv.h.value.target=92
 cv.s.value.target=4
 cv.v.value.target=251

 estc.group.size=4
 estc.total.robots.exp=8
							
As the content of the configuration shows, all vision-related parameters are defined here. The binary threshold parameters determine how the light source is detected, while the HSV values of the RGB colors are self-explanatory. Other configuration options include the standard flight altitude (in cm), which can be manually overridden using keypad or joystick input, and the ESTC parameters relevant for the experiments in Chapter 4.

The software package also includes an intuitive tool that can be used to fine-tune the camera configuration. Execute the following command to run the tool:
								
 >> ../bin/heli 4
							


3.1.3 Speed, precision and other features

Videos of experiments conducted in this section can be found at:

http://iridia.ulb.ac.be/supp/IridiaSupp2011-001/index.html

We study the speed and precision with which connections can be formed using EDSA. For this purpose, we placed an extending robot with an open extension point to its rear (i.e., at 180 degrees) in the center of a circle of 80 cm radius. We then placed a second robot at 12 equally spaced starting positions on the circle. We considered 8 starting orientations for each starting position. For each combination of starting position and starting orientation, we let the robots execute EDSA. Execute the following command to reproduce the experiment in ARGoS2:
								
 >> ./build/simulator/argos -c user/nithin/experiments/mns/precision.xml
							

The experiment is composed of an extending foot-bot and a free foot-bot seeking recruitment. In order to change the position of the free foot-bot, as described in the experiment, edit the arena section in user/nithin/experiments/mns/precision.xml. When executing the experiment for the first time, you should see an initial setup in which the robots are clearly tagged with IDs.




3.1.3.1 Adaptive recruitment. Self-assembling robots need to be able to adapt to changing mission conditions. The recruitment algorithm at the core of EDSA is able to adapt to such conditions, including malfunctioning recruits or the availability of better suited robots. This is possible because the high-speed communication provided by the mxRAB device allows the mapping from extension points to recruited robots to be updated at every control cycle. Hence, EDSA is able to adapt to new conditions while maintaining an optimal resource allocation w.r.t. the number of robots allocated per extension point. Execute the following experiment, composed of an extending robot placed in the center of the frame and two free robots, of which the one closest to the extension point (at 180 degrees) is deactivated during the first 5 seconds of the experiment. The extending robot is shown to recruit the other free robot situated farther from the extension point. When the free robot closer to the extension point is activated, the extending robot adapts to the new situation by releasing the initial recruit and recruiting the robot that has newly become available. The robot that was initially recruited leaves the self-assembly process and becomes available for other tasks.
								
 >> ./build/simulator/argos -c user/nithin/experiments/mns/adaptive_recruitment.xml
							


3.1.3.2 Enhanced parallelism. MNS robots are able to physically connect to one another and thereby merge into larger MNS robots of different shapes and sizes. First, MNS robots consisting of a single robotic unit self-assemble into a larger spiral-shaped MNS robot with a single brain unit. Then, the MNS robot splits and each of its robotic units becomes a one-unit MNS robot. The process is repeated three times during which the MNS robots merge into three larger MNS robots with different shapes. Execute the following command to run the experiment in ARGoS2:
								
 >> ./build/simulator/argos -c user/nithin/experiments/mns/parallelism.xml
							


3.1.3.3 Morphology growth in motion. Independently of shape and size, MNS robots display consistent sensorimotor reactions to a stimulus, whilst autonomously merging their bodies and robot nervous systems. We programmed the robotic units so that when a green stimulus (of which the position and color can be changed during simulation using the UI-elements on the top) enters a robot’s sensor range, the robot "points" at the stimulus by illuminating its three closest green LEDs (in a composite MNS robot, these are the closest LEDs on the closest constituent robotic unit). When the stimulus is "too" close (i.e., proximity to any part of the MNS robot’s body exceeds a threshold), the robot retreats from the stimulus. Execute the following command to run the experiment in ARGoS2:
								
 >> ./build/simulator/argos -c user/nithin/experiments/mns/growth_in_motion.xml
							






3.3.2 Unprecedented features and self-healing features

Videos of experiments conducted in this section can be found at:

https://www.nature.com/articles/s41467-017-00109-2#Sec14


3.3.2.1 Borrowing hardware capabilities of peer robots. When a merge between two MNS robots occurs, only a single message needs to be passed up the merged nervous system from the connecting MNS robot to the brain of the MNS robot to which it connects. The information contained in the message is incrementally updated by each intermediate unit with local topological information, and the newly formed MNS robot incorporates all the sensing, actuation and computational capabilities of the units in the new body. In this experiment, two differently configured marXbots (one has a magnetic gripper able to grip previously prepared objects) merge into a single MNS robot. Note that marXbots able to grip other objects are not implemented in simulation -- the experiment can therefore only be executed using real robots. Copy the files bt_swarmorph_behavioral_controller, magnetic_gripper.xml, and magnetic_gripper.ml to a marXbot with a magnetic gripper, log on to the robot, and execute the following command:
								
 >> ./bt_swarmorph_behavioral_controller -c magnetic_gripper.xml -i fsmc
							


3.3.2.2 Autonomous adaptation to varying scales and morphologies. MNS robots are able to physically connect to one another and thereby merge into larger MNS robots of different shapes and sizes. First, MNS robots consisting of a single robotic unit self-assemble into a larger spiral-shaped MNS robot with a single brain unit. Then, the MNS robot splits and each of its robotic units becomes a one-unit MNS robot. The process is repeated three times during which the MNS robots merge into three larger MNS robots with different shapes. Execute the following command to run the experiment in ARGoS2:
								
 >> ./build/simulator/argos -c user/nithin/experiments/mns/concept.xml
							


3.3.2.3 Morphology-independent sensorimotor coordination. Independently of shape and size, MNS robots display consistent sensorimotor reactions to a stimulus, whilst autonomously merging their bodies and robot nervous systems. We programmed the robotic units so that when a green stimulus (of which the position and color can be changed during simulation using the UI-elements on the top) enters a robot’s sensor range, the robot "points" at the stimulus by illuminating its three closest green LEDs (in a composite MNS robot, these are the closest LEDs on the closest constituent robotic unit). When the stimulus is "too" close (i.e., proximity to any part of the MNS robot’s body exceeds a threshold), the robot retreats from the stimulus. Execute the following command to run the experiment in ARGoS2:
								
 >> ./build/simulator/argos -c user/nithin/experiments/mns/sensorimotor_coordination.xml
							


3.3.2.4 Fault-detection and self-healing properties. The experiment forms an "x"-like shape. When a fault is injected into the brain or any other unit (for instance by entering the marXbot ID -- of the brain or another robotic unit -- into the input field at the top of the ARGoS2 UI), the MNS robot detects the fault and self-heals using a heartbeat mechanism. Execute the following command to run the experiment in ARGoS2:
								
    >> ./build/simulator/argos -c user/nithin/experiments/mns/self_healing.xml
							



4.1.3 Experiments and results

Videos of experiments conducted in this chapter can be found at:

http://iridia.ulb.ac.be/supp/IridiaSupp2013-005/index.html
http://iridia.ulb.ac.be/supp/IridiaSupp2009-006/

Robot controllers to establish one-to-one STC links are presented using foot-bots and an eye-bot as the initiator robot. Run the following two commands for the lattice and random distribution, respectively:

								
    >> ./build/simulator/argos -c user/nithin/experiments/estc/1to1_grid.xml
							

To modify the total number of foot-bots (potential recipient robots) and the number of colors (signals) used, change the corresponding attributes in the xml node robots_in_grid.
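The full content of the node is not reproduced here; to locate it in the configuration file before editing, a simple search is sufficient, for example:

 >> grep -n "robots_in_grid" user/nithin/experiments/estc/1to1_grid.xml
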
								
    >> ./build/simulator/argos -c user/nithin/experiments/estc/1to1_random.xml
							

To modify the number of colors (signals) used in the iterative elimination process, change the corresponding attribute in the xml node robots_in_grid. To modify the total number of robots, change the attribute quantity in the xml node entity.

To execute the initiator robot behavior on the AR.Drone as described in the thesis, run the following command in the AR.Drone software package:
								
 >> ../bin/heli 6
							


A frame taken from an experiment executed using the AR.Drone and four randomly distributed marXbots. The frame shows the light source, the location of which is used by the AR.Drone to feed the PID controller (used for the hovering behavior) that continuously minimizes the distance between the light source and the center of the image received from the downward-pointing camera. The frame also shows the plexiglass we used to shield the marXbots from the AR.Drone when executing emergency landing behaviors.



4.3.3 Experiments and results

Videos of experiments conducted in this chapter can be found at:

http://iridia.ulb.ac.be/supp/IridiaSupp2013-005/index.html
http://iridia.ulb.ac.be/supp/IridiaSupp2009-006/

Robot controllers to establish one-to-many STC links are presented using foot-bots and an eye-bot as the initiator robot. Run the following two commands for the lattice and random distribution, respectively:

								
    >> ./build/simulator/argos -c user/nithin/experiments/estc/1toN_grid.xml
							

To modify the total number of foot-bots (potential recipient robots) and the number of colors (signals) used, change the corresponding attributes in the xml node robots_in_grid.
								
    >> ./build/simulator/argos -c user/nithin/experiments/estc/1toN_random.xml
							

To modify the number of colors (signals) used in the iterative elimination process, change the corresponding attribute in the xml node robots_in_grid. To modify the total number of robots, change the attribute quantity in the xml node entity.

To change the group size that should be selected by the eye-bot, modify the attribute group_size in the xml node estc.


Snapshots from simulation showing one eye-bot and 25 foot-bots. Left: foot-bots are in a lattice distribution (Moore neighborhood). Right: foot-bots are randomly distributed. The first foot-bot to be selected is the one closest to the light source shown in yellow. All foot-bots are in the communication range (i.e., in the field of view of the eye-bot) at all times.


To execute the initiator robot behavior on the AR.Drone as described in the thesis, adapt the parameters related to the total number of marXbots in the experiment and the group size in config.txt (see the sketch after the command below) and run the following command in the AR.Drone software package:
								
 >> ../bin/heli 8
							
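As a sketch, the two ESTC parameters could also be set from the command line before starting the controller. The values below are examples only, and the relative path to config.txt depends on the directory you run the commands from:

 >> sed -i 's/^estc\.total\.robots\.exp=.*/estc.total.robots.exp=9/' ardrone/config.txt
 >> sed -i 's/^estc\.group\.size=.*/estc.group.size=3/' ardrone/config.txt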


A frame taken from an experiment executed using the AR.Drone and 9 marXbots distributed in a square lattice. As no light source is available, the AR.Drone uses the center of the bounding box that includes all marXbots in the field of view as the input to its PID controller enabling the hovering behavior.



5.2 Case study nr. 1

Videos of experiments conducted in this section can be found at:

http://iridia.ulb.ac.be/supp/IridiaSupp2017-007/index.html

The AR.Drone was flown manually in this preliminary case study. However, the control algorithms (environment modeling, on-board simulations) and communication algorithms (ESTC, transmission of self-assembly instructions) were executed entirely autonomously by the AR.Drone. The AR.Drone analyzes the images returned by its downward-pointing camera to locate the POI by detecting the point with the highest light intensity above the thresholds that can be reached by the foot-bot LEDs. All communication from the AR.Drone to the foot-bots occurs over wireless Ethernet. Start the following controller using the AR.Drone software package before flying it manually towards the light source:
								
 >> ../bin/heli 7
							
Foot-bots use their LEDs illuminated in RGB colors to transmit messages to the AR.Drone. They remained autonomous throughout the course of all experiments. The foot-bot controller can be executed as follows:
								
 >> ./build/simulator/argos -c user/nithin/experiments/sm/cs1.xml
							


5.3 Case study nr. 2

Videos of experiments conducted in this section can be found at:

http://iridia.ulb.ac.be/supp/IridiaSupp2017-007/index.html

The eye-bot is assumed to have flown in advance and attached itself to the ceiling at a height of 2.96 m. Upload the following controller to the eye-bot and execute it by running:
								
 >> ./bt_eyebot_swarmorph_controller -c eyebot_sm_cs2.xml -i esmc
							
The foot-bots use their light sensors to detect the light source and drive towards it. The foot-bots depend on the eye-bot to provide the supervision necessary to successfully solve the hill-crossing task. Execute the following controller to run the experiment:
								
 >> ./build/simulator/argos -c user/nithin/experiments/sm/cs2.xml
							


5.4 Quantifying performance benefits

Videos of experiments conducted in this section can be found at:

http://iridia.ulb.ac.be/supp/IridiaSupp2010-007/index.html


Controllers for the experiments in this section were developed in a preliminary version of ARGoS (which we refer to as ARGoS1) -- at a time when the marXbot and eye-bot hardware were not yet available and still under development. Please acquire this version of ARGoS and set up the experiments as follows:

								
 >> sudo apt-get install libode-dev;
 >> wget http://iridia.ulb.ac.be/~mathews/PhD/supp/downloads/argos1.tar.gz
 >> sudo tar xvzf argos1.tar.gz
							

Set the environment variable and compile the simulation framework:
								
 >> export AHSSINSTALLDIR=/path/to/argos1
 >> cd argos1
 >> ./build_simulation_framework.sh swarmanoid
							
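To avoid having to set the variable again in every new shell, you can append the export to your shell profile; the path below is the same placeholder as above:

 >> echo 'export AHSSINSTALLDIR=/path/to/argos1' >> ~/.bashrc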

As the design of the communication hardware of both eye-bots and foot-bots was not yet complete, all communication between the aerial and ground robots is handled using a previously established LED- and camera-based communication protocol. The three control strategies can be executed as follows:
								
 >> cd user/nithin
 >> ./simulation_build/argos -c experiments/eyebot_footbot/ncc.xml
 >> ./simulation_build/argos -c experiments/eyebot_footbot/lsm.xml
 >> ./simulation_build/argos -c experiments/eyebot_footbot/rgs.xml 
							


Preliminary version of ARGoS showing an eye-bot and 10 foot-bots in the deployment area surrounded by walls on three sides and a gap on the fourth. The light source is seen on the right.