PhDSupervision:Alessandro Stranieri
Revision as of 16:37, 8 June 2012



Ph.D. Student: Alessandro Stranieri
Ph.D. started on: January 4th, 2010
Phone number (office): +32-2-650 31 70
Phone number (mobile): +32 488 98 51 37




Week 13

Monday, June 4th

I began porting the behaviors that I implemented and tested in simulation to the real robots. I spent some time calibrating the robots' proximity sensors and debugging the run-time reported speed. In the end, the controllers seem to work fairly well, but quite differently than in the simulated experiments.

Tuesday, June 5th

Performed internal and external parameter calibration for the 4 installed cameras. I tested the operators on still and moving robots, and the computed real-world positions seem to be almost perfect.

Wednesday, June 6th

Implemented the marker decoding strategy and tested it on live-captured sequences of frames. The marker decoding works really well, although it sometimes misses a marker. My idea is to create a new set of markers containing a 3x3 grid instead of a 4x4 one, which is more expressive than needed.
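A 3x3 grid encodes far fewer IDs than a 4x4 one, which is the point. A minimal sketch of what decoding such a grid could look like (the names and the bit layout are my own illustration, not the actual implementation):

```python
# Hypothetical sketch of decoding a 3x3 binary marker grid into an ID.
# The bit layout (row-major) and all names are assumptions.

def decode_marker(cells):
    """cells: 3x3 list of 0/1 values sampled from the marker image.
    Returns the marker ID as an integer (row-major bit order)."""
    marker_id = 0
    for row in cells:
        for bit in row:
            marker_id = (marker_id << 1) | bit
    return marker_id

# A 3x3 grid encodes 2**9 = 512 distinct IDs, far fewer than the
# 2**16 = 65536 of a 4x4 grid -- hence "less expressive".
example = [[1, 0, 1],
           [0, 1, 0],
           [0, 0, 1]]
print(decode_marker(example))  # row-major bits 101010001 -> 337
```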

Thursday, June 7th

Performed further tests with the tracking system and at the same time began tuning the aforementioned behaviors on the real robots. I am noticing several issues concerning the use of the proximity sensors and their calibration. I will investigate them.

Friday, June 8th

Worked on the Wiki to add the PostDocs supervision page.

Began working on some heavy ARGoS refactoring. I will work on my own branch of the simulator, and test the installation on two different virtualized systems, in order to reduce compilation problems for the users. The first major changes will be:

  • Migration from OpenCV 2.1 to OpenCV 2.4
  • Integration of OpenCV source code directly into ARGoS
  • Refactoring of the image filtering process, in order to allow the integration of custom-made image processing operators.

Week 12

Monday, May 28th

Completed the main parts of obstacle avoidance, random walk and odometry behaviors that will be used in my experiments and to test the Arena Tracking System. They still need some tuning and testing.

Tuesday, May 29th

Started to set up the procedure and the tools to perform the camera calibration for the arena tracking system.

Week 11

Monday, May 21st and Tuesday, May 22nd and Wednesday, May 23rd

I did some preliminary work on a behavior that I can use to perform some experiments with the real robots. For my project I intend to use some simple odometry, so that the robots can estimate their positions with respect to a common reference system. My idea is to use my project as a first real test of the tracking system. In fact, by tracking the robots while they are navigating, I can also measure the accuracy of their odometry.
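The odometry in question can be sketched as plain differential-drive dead reckoning (a minimal Python sketch; the function name, wheel base and speeds are illustrative, not the actual controller code):

```python
import math

# Minimal dead-reckoning sketch for a differential-drive robot.
# All names and parameters here are illustrative assumptions.

def odometry_step(x, y, theta, v_left, v_right, wheel_base, dt):
    """Integrate one time step of wheel speeds (m/s) into the pose."""
    v = (v_left + v_right) / 2.0             # linear speed
    omega = (v_right - v_left) / wheel_base  # angular speed
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Straight-line motion: both wheels at 0.1 m/s for 10 steps of 0.1 s
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = odometry_step(*pose, 0.1, 0.1, wheel_base=0.14, dt=0.1)
print(pose)  # -> roughly (0.1, 0.0, 0.0)
```

Comparing poses integrated this way against the tracking system's ground truth is exactly what would measure the odometry drift.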

Thursday, May 24th

I spent a couple of hours cleaning the arena, getting rid of the several things left by others. I almost freed the space under the four installed cameras, so that I can run experiments with the moving robots. I tested on a real Foot-bot a simple RandomWalk behavior, developed first for the simulator, which I will use to test the marker detection.


Week 10

Monday, May 14th

I began to finalize the work on the camera synchronization strategy. I want to enable the system to work in synchronous time-steps, within each of which the acquisition of all the cameras is triggered in parallel.

Tuesday, May 15th

I created a simple test to verify the acquisition strategy. Through a simple interface I can configure, start and stop the acquisition of multiple acquisition devices. I first tested the strategy using simple images stored on the hard drive, and everything worked as expected. In the next days I will test the performance using the real cameras.
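The synchronous time-step idea can be sketched as a barrier that releases all camera threads at once (a minimal Python sketch with faked camera I/O; the real implementation is C++ with actual cameras, and all names here are assumptions):

```python
import threading
import queue

# Sketch of triggering several cameras in lock-step time-steps.
# Camera I/O is faked with a string; names are design assumptions.

NUM_CAMERAS = 4
NUM_STEPS = 3
barrier = threading.Barrier(NUM_CAMERAS)
frames = queue.Queue()

def camera_thread(cam_id):
    for step in range(NUM_STEPS):
        barrier.wait()                       # all cameras start the step together
        image = f"cam{cam_id}-step{step}"    # stand-in for a real frame grab
        frames.put((step, cam_id, image))

threads = [threading.Thread(target=camera_thread, args=(i,))
           for i in range(NUM_CAMERAS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(frames.qsize())  # -> 12 frames: 4 cameras x 3 steps
```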

Wednesday, May 16th

I dedicated the day mostly to two activities:

1) fixing some small issues with the camera on the E-Puck. Now it is working much better, although Arne's student reports that the WiFi connection sometimes makes the whole system freeze. This requires a pretty painful restart of the robot, but I am afraid there is not much I can do about it.

2) improving the design of the tracking system and continuing to test its performance.

Thursday, May 17th

I carried on the work on my project. In a simple test, two robots acquire two artificially generated images of the same scene but from different points of view. In the scene I introduced an object which both robots can detect, but whose position in the 3D space cannot be established. In the test the first robot creates a simple occupancy grid, marking as occupied all the cells where the object could lie. Then the same robot receives some information from the second one, and updates the occupancy grid. In the resulting occupancy grid, all the cells that cannot possibly contain the object have been pruned out.

I also dedicated a couple of hours to helping Ali compile ARGoS for the real robot, and to tutoring him on some basics of creating a controller for a real robot.

Friday, May 18th

I carried out some improvements on the tracking system software. Now it is possible to start and stop the acquisition of all cameras from a single point. I further tested that all the cameras perform their acquisition and processing strictly within a time step. The individual threads controlling each camera are now synchronized.

Week 9

Monday, May 7th

Started to work on the integration of the EPuck omnidirectional camera into the ARGoS system for the real robot. The work is needed so that Arne's student can perform some final tests with the real EPuck for his thesis. This is going to be quite a heavy job, as I am also using this opportunity to start a big refactoring of the vision support for all the real robots.

Tuesday, May 8th

I carried out another step in the integration of the EPuck omnidirectional camera. Now it is possible to use the camera on the real robot as in simulation. Some tests and some code clean-up are needed, but I will take care of them once the robot is fully usable.

Wednesday, May 9th

Together with Arne, I started the setup of the EPuck with the new board, on which the camera is mounted. Due to some problems with the hardware, we were not able to finish the job. In the meantime, I helped Arne's student compile a first controller, which will be the starting point of his work.

Thursday, May 10th

After some first tests with the E-Puck camera, I began fixing the acquisition and image processing operators, so that now the E-Puck can perceive the LED colors as intended in Arne's student's experiment. The same day I continued working on my project. Still working in a sort of simulated scenario, I enabled the robot to perceive a simple object, detected by its color, and to reproject into a 2D grid space all the portions of the space that could be occupied.

Friday, May 11th

I performed a first calibration for the E-Puck camera. I created a mapping from image space to real-world distances, so that now, when the robot perceives an LED color, it can transform its position in the image into an approximation of the distance to the LED source. The results I obtained are pretty good, and I showed them to Arne's student, who started to port his simulation code to a real robot controller. During the following week I will support his work, helping him in case of problems related to the use of the camera.
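Such an image-to-distance mapping could be as simple as a small calibration lookup table with linear interpolation between entries (a sketch; the table values below are made up for illustration, not the actual calibration):

```python
# Sketch of an image-space -> distance mapping via a calibration
# lookup table. The (pixel radius, metres) pairs are illustrative.

CALIBRATION = [(10, 0.05), (40, 0.15), (80, 0.40)]

def pixel_to_distance(radius):
    """Linearly interpolate a distance from the calibration table."""
    pts = CALIBRATION
    if radius <= pts[0][0]:
        return pts[0][1]
    if radius >= pts[-1][0]:
        return pts[-1][1]
    for (r0, d0), (r1, d1) in zip(pts, pts[1:]):
        if r0 <= radius <= r1:
            t = (radius - r0) / (r1 - r0)
            return d0 + t * (d1 - d0)

print(pixel_to_distance(25))  # midway between 10 and 40 px -> 0.1 m
```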

Week 8

Wednesday, May 2nd

I performed several tests to try to improve the image processing performance of the camera system. I managed to optimize the code and the Halcon library parameters, and I was able to reach a processing time of about 25 ms, which is still twice the desired one. During the day I also had a meeting with Mauro and a future Master student at IRIDIA who will work on the thesis I proposed.

Thursday, May 3rd

I dedicated most of the day to my project. At the moment I am working on a simplified version of a possible experimental scenario. The scenario consists of two robots with partially overlapping views, and in each view there is an object that the robots can easily segment out of the image. Each robot also holds a representation of the environment as an occupancy grid, meaning that the surrounding space is divided into cubic cells of a specified resolution. A robot that has computed the points in the image generated by the object can reproject them into the grid, marking the cells that could potentially be occupied by the object. Assuming that two or more robots have detected the same object, each of them will mark a number of cells in its own space. The intersection of the cells marked by every robot defines a volumetric hull, which should completely contain the object in the real world. My current task is to implement all the logic that performs the estimation of the occupied cells by fusing the information extracted by each robot.
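The fusion step amounts to intersecting the per-robot occupancy estimates: a cell survives only if every robot considers it possibly occupied. A minimal sketch (2D grids for brevity, and all names are my own illustration, not the actual code):

```python
# Sketch of fusing per-robot occupancy estimates by intersection:
# cells not marked possibly-occupied by every robot are pruned.

def intersect_grids(grids):
    """Keep only cells marked possibly-occupied in every robot's grid."""
    rows, cols = len(grids[0]), len(grids[0][0])
    return [[all(g[r][c] for g in grids) for c in range(cols)]
            for r in range(rows)]

robot1 = [[1, 1, 0],
          [1, 1, 0],
          [0, 0, 0]]
robot2 = [[0, 1, 1],
          [0, 1, 1],
          [0, 0, 0]]
fused = intersect_grids([robot1, robot2])
print(fused)  # only the overlapping middle column survives
```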

Friday, May 4th

I worked on the visualization of the occupancy grid generated by a robot. This provides a means of understanding which parts of the 3D space the robot considers occupied and which not.

I continued to test the performance of the tracking system, specifically on the marker detection task. Currently, the marker detection is performed sequentially for each frame, as the Halcon library is able to internally parallelize every operation, exploiting all the cores present on the machine. For the last tests I was able to use one of the newest nodes, which has 32 cores. Unfortunately the performance improvement is not as large as expected. For this reason, I decided to change the approach and have a thread for each camera performing the marker detection. This approach should work, given sufficiently powerful cores. During the next week I will test this other option.

Still regarding the Tracking System project, I almost completed my research on a possible server that we could use. I also showed it to Arne, who confirmed that it should be exactly what I need. Next week, I will send an e-mail to the people at Sysgen to ask for some details and to see whether we can get an offer.


Week 7

I dedicated almost the whole week to developing the main architecture of the tracking system. In particular I outlined the implementation of:

  • The strategy for the synchronized acquisition of images. The acquisition of a frame from each camera is triggered via software at a specific time slot. This is done so that the information extracted from the images is time-consistent.
  • The strategy for the image processing. When a camera acquires a frame, it pushes it into a queue. The operator which performs the marker detection waits for frames to be available at the other end of the queue, pops them and starts the processing.

Both implemented strategies worked as predicted. Unfortunately, when I tested them on the cluster, I realized that the processing time for a frame is not fast enough to ensure real-time tracking. Currently one frame is processed in about 50 ms. As the frame rate we are aiming for is 4 fps, this processing needs to be made about 3x faster.
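The queue-based strategy described above can be sketched as a producer/consumer pipeline (a Python sketch; the real system is C++, and the "detection" here is a placeholder):

```python
import threading
import queue

# Sketch of the acquire-then-process pipeline: camera threads push
# frames into a queue; a detection worker pops and processes them.

frames = queue.Queue()
results = []
STOP = object()  # sentinel to shut the worker down

def detector():
    while True:
        frame = frames.get()       # blocks until a frame is available
        if frame is STOP:
            break
        results.append(f"markers-in-{frame}")  # stand-in for detection

worker = threading.Thread(target=detector)
worker.start()
for i in range(5):
    frames.put(f"frame{i}")        # a camera thread would do this
frames.put(STOP)
worker.join()
print(len(results))  # -> 5
```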

On Friday, I started to configure one new epuck board, in order to test the proper functioning of the omnidirectional camera. This is to support some final experiments of Arne's student.

Week 6

Journal

Tuesday, April 10th

The four cameras are installed in a 2x2 configuration. The connection to a local desktop PC through the switch has been tested, and it seems to work fine. I tested the simultaneous acquisition of the four cameras, and it performs within the desired frame-rate limits. It must also be considered that the images are all sent to a slow interface on the desktop PC.

Wednesday, April 11th

Tested marker detection in one camera. I placed a marker on 5 footbots and the program detected them, giving position and orientation.

Thursday, April 12th

I carried on the work on the simulated scenario, where two robots exchange and match features extracted from their images. I solved the geometric problem which enables a robot to better search for matching candidate points for each point received from its neighbors.

Friday, April 13th

The feature matching works relatively well, but a lot of work can still be done. I tested the estimation of the 3D position of a point on a robot which has received a set of points from its neighbors.

Plans for the next week

Arena tracking system

I will complete the implementation of two parts of the tracking system:

  • Timed acquisition. The test should show that the cameras acquire a frame within a global time slot.
  • Speed of detection. While the cameras are running in parallel, the images should be processed without compromising the acquisition speed.

Thesis project

I decided to modify the approach to the environment mapping. Instead of detecting single points in the image, I will try to estimate the shape of nearby objects. According to this approach, neighboring robots exchange the descriptions of shapes segmented in the images acquired by the individual platforms. The robots then try to match the received descriptions to those extracted locally, and use them to estimate the size and position of the object that generated each shape.


Week 5

Journal

Monday, March 12th

Worked on the image acquisition code for the arena cameras. Each camera is handled by its own thread, so that acquisition can run separately on each core.

Tuesday, March 13th

Started to write the code to configure the cameras from XML. I also roughly implemented the C++ code to perform a timed acquisition of the frames, in order to force the acquisition at a specified frame rate. I tested this implementation and it seems to work: the program is able to make a frame available within the time specified. This means that, given a desired frame rate of 4 fps, each new frame is available within a quarter of a second. Despite that, the acquisition times I recorded seemed longer than expected. This might be due to the fact that the switch hasn't yet been properly tuned and that the network interface on the PC is slower than the one which is actually going to be used.
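The timed-acquisition logic can be sketched as a loop that pads each iteration to a fixed time slot, so that at 4 fps every frame gets a 250 ms slot (a Python sketch of the C++ logic; grab() stands in for the real, variable-duration acquisition call):

```python
import time

# Sketch of forcing acquisition at a fixed frame rate: each iteration
# sleeps for whatever remains of its time slot. grab() is a placeholder.

def timed_acquisition(fps, num_frames, grab):
    slot = 1.0 / fps
    timestamps = []
    for _ in range(num_frames):
        start = time.monotonic()
        grab()                              # the real acquisition call
        timestamps.append(time.monotonic())
        remaining = slot - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)           # pad the slot to hold the rate
    return timestamps

# Fake camera that takes 10 ms per grab, sampled at 20 fps (50 ms slots)
stamps = timed_acquisition(fps=20, num_frames=5,
                           grab=lambda: time.sleep(0.01))
print(len(stamps))  # -> 5
```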


Plans for the next week

Arena tracking system

If we have received the camera screws, we will install the remaining 4 cameras. At this point I will start performing acquisition speed tests, meaning that I will test whether a system of 4 cameras, each managed by a single thread, can meet the specified timing requirements. Once this has been assessed I will test the marker detection code.

Week 4

Journal

Monday, March 5th

Completed the first draft of the UML for the vision packages in ARGoS. Fixing the scripts for the arena camera calibration.

Tuesday, March 6th

Installed the first camera in the arena. We tested the connection through the switch, and it seems to work fine. I also performed a calibration test, with which I am having some issues, but they should be easy to fix.

Wednesday, March 7th

I worked on the design of the first experiment. Formerly, we wanted to work on an object recognition problem. After some consideration, I decided to conceive the first experiment instead as an image segmentation and shape estimation problem. The main reasons are the following:

  • object recognition is a difficult problem and should be performed once the object in the image has been segmented out from the background;
  • segmentation is already an interesting problem. A multi-robot system can benefit from knowing the positions and shapes of nearby objects. A single robot is not able to localize an unknown object from its image, nor is it able to construct its representation. Furthermore, objects occluding each other can potentially be perceived as the same one.

The objective of the first experiment will be to have multiple robots perceiving different objects in the environment, successfully discriminating among them and localizing them. All the robots must collaborate in order to create a shared representation of the environment. The performance of the experiment can be measured with respect to three aspects:

  • Adherence of the constructed representation to the real world;
  • Comparison with the result obtained by a single robot using structure from motion;
  • Quantity of information exchanged between the robots.

The first step is to design an experimental environment that suits the purposes of the experiment. For the moment I would think of a simple solution, consisting of a small set of robots (4-5) placed in an environment populated by a small set of objects. The objects should be placed so that, for some of the robots, objects partially or entirely cover one another. Having acquired an image on each of the robots, I could then work offline.

In the next phase I would work offline on the acquired images. The objective would be to test different feature detectors to see which ones are best suited to identifying the same portion of an object imaged from different viewpoints. This would then enable me to study how to fuse the information exchanged by neighboring robots to perform the object segmentation from the background.

Thursday, March 8th

Set up the ceiling camera on the robots in vertical position, and calibrated the internal parameters of each camera.

Friday, March 9th

Took the first set of pictures with the footbot. I set up a very small scenario in which five robots are placed at different positions near three preys, viewing them with the ceiling camera mounted in frontal position. I recorded the positions and orientations of the footbots and of the preys. The idea is to test the feature detectors implemented in OpenCV to see whether the same parts of the real world can be matched across several views. The same point in the real world can appear quite different when imaged by different cameras from different viewpoints. If this point can be detected and matched in two or more images, its position in the real world can be estimated. Two or more robots can exchange the descriptions of the points they detected and try to match them. Once the points have been matched, the robots can use their relative positions and orientations to determine the coordinates of the points they detected. This test is important, as the possibility for the robots to merge the information they extract from the acquired images depends on it.
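The matching step can be sketched as a nearest-neighbour search over the exchanged descriptors. This toy version uses plain lists in place of a real OpenCV detector; the descriptors and the distance threshold are made up for illustration:

```python
# Sketch of nearest-neighbour matching of feature descriptors
# exchanged by two robots. Descriptors and threshold are illustrative.

def l2(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match(desc_a, desc_b, max_dist=0.5):
    """For each descriptor of robot A, find the closest one of robot B,
    keeping only matches below the distance threshold."""
    matches = []
    for i, da in enumerate(desc_a):
        j, dist = min(((j, l2(da, db)) for j, db in enumerate(desc_b)),
                      key=lambda p: p[1])
        if dist <= max_dist:
            matches.append((i, j))
    return matches

robot_a = [[0.1, 0.9], [0.8, 0.2]]
robot_b = [[0.82, 0.18], [0.5, 0.5], [0.12, 0.88]]
print(match(robot_a, robot_b))  # -> [(0, 2), (1, 0)]
```

Once a pair is matched, the two robots' known relative poses allow triangulating the point's real-world position.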

Plans for the next week

Arena tracking system

  • Fix calibration procedure
  • Install other three cameras
  • Perform image acquisition test.

Thesis project

Week 3

Journal

Tuesday, February 28th

Implemented three scripts to perform the camera internal parameter calibration. The three scripts are used to:

  • Create a PostScript file representing the calibration plate to use for calibration. A parameter of this script is the desired size of the plate.
  • Interactively acquire a sequence of frames from the camera. While running, the window displays whether the calibration plate has been detected, so that the user can keep only those frames which are useful.
  • Extract the camera internal parameters. The script opens the image files present in a given directory and runs the calibration plate detection. At the end, it writes the calibration parameters to a file.

Wednesday, February 29th

Collected 4 papers:

  • A Multi-View Probabilistic Model for 3D Object Classes
  • A Fast and Robust Descriptor for Multiple-view Object Recognition
  • Unsupervised Feature Selection via Distributed Coding for Multi-view Object Recognition
  • Towards an efficient distributed object recognition system in wireless smart camera networks

Thursday, March 1st

Collected 4 papers:

  • Coding of Image Feature Descriptors for Distributed Rate-efficient Visual Correspondences.
  • Evaluation of Interest Point Detectors and Feature Descriptors for Visual Tracking
  • Cooperative Multi-Robot Localization under Communication Constraints
  • Distributed Robust Data Fusion Based on Dynamic Voting

Had a meeting with Mauro. After the meeting, we agreed that the objective of the first experiment will be the study of an object detection problem in a Swarm Robotics system. The main vision is to have a group of robots surrounding an object. The swarm of robots is capable of recognizing the object only by fusing the information each robot processes. The design and realization of this experiment bring a series of problems, of which the first one, in my opinion, is how to model the uncertainty. This will be addressed in the following two days.

Friday, March 2nd

Today I began working on the refactoring of the parts of ARGoS concerning vision. This was requested by a student of IDSIA, and I decided to begin the work, as I will anyway need more flexibility when implementing the image processing operators for the robots.

Plans for the next week

Arena tracking system

The plan is to finish the calibration process for all the cameras that we have in the lab. I will buy the wooden board to use as a support for the calibration pattern sheet, run the calibration script on the four cameras and then install them in a 2x2 matrix at the far left side of the arena. Once the installation is done, I will also calibrate the extrinsic parameters. The test for this phase will be the detection of a robot marker at its exact position in the arena coordinate system.

Week 2

Journal

Monday, February 13th

Studied the Halcon code and examples to perform a calibration. Discussed some preliminary aspects with Dhananjay, who will help me during the installation and calibration process.

Tuesday, February 14th

Decided the strategy to perform calibration and installation. The process will mainly consist of two phases. The first phase is dedicated to the calibration of the internal camera parameters. This can be done before the cameras are actually installed, by acquiring different views of a specific calibration pattern. The second phase is dedicated to the installation of the cameras on the ceiling and the computation of the extrinsic parameters. This means calculating the transformation from the position of an object in an image to its position in the real world, given the known distance of the object from the camera center. This implies that the positioning of the calibration pattern center at a given position in the arena coordinate system must be done in a fairly precise way. I agreed with Dhananjay that I will build a vertical pole to which we will attach the frame. This vertical pole will then be positioned at known points on the ground. This also means that we should agree on what are the x and y axes and the origin of the arena coordinate system.
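For a ceiling camera looking straight down, that transformation amounts to a pinhole back-projection onto the known ground plane (a simplified sketch; all numbers and names are illustrative, not the actual calibration values):

```python
# Sketch of the image -> arena transformation for a downward-facing
# ceiling camera: given intrinsics (fx, fy, cx, cy) and the known
# height Z of the object plane below the camera, a pixel maps to
# arena coordinates. All values here are illustrative assumptions.

def pixel_to_arena(u, v, fx, fy, cx, cy, Z, cam_x, cam_y):
    """Back-project pixel (u, v) onto the plane Z metres below the camera
    mounted at arena position (cam_x, cam_y)."""
    x = (u - cx) * Z / fx + cam_x  # metric offset from the optical axis
    y = (v - cy) * Z / fy + cam_y
    return x, y

# A pixel at the image centre maps to the point directly under the camera:
print(pixel_to_arena(320, 240, fx=500, fy=500, cx=320, cy=240,
                     Z=2.5, cam_x=1.0, cam_y=2.0))  # -> (1.0, 2.0)
```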

Studied a solution for a supervision page on the IRIDIA wiki. A possible way to do it could be the following. I can place a link in the main page of the wiki to a page called something like “PhD Supervision”. The link is only accessible to the people belonging to a group called “PhDSupervision”, to which at the beginning only Marco and Mauro belong. From the page “PhD Supervision”, one can create a page for each student, readable and editable again only by the people belonging to the group. For the moment I have two issues:

  • The link in the main page, although not accessible, will still be visible to everyone logged in. Maybe I can find a solution for that.
  • In MediaWiki, it is easy to restrict access to a page on a group basis, but it seems more complicated on a user basis.

There are two ways to proceed quickly, while I see whether something can be done:

  • Including the PhD students in the “PhDSupervision” group. This means every PhD student can view the supervision pages of the other students.
  • Creating a group for each student and granting access also to this group. I actually find this solution messy.

Plans for the next week

During the next week I won’t be able to dedicate much time to anything, as I am in Parma for the MIBISOC seminars.

Arena tracking system

Since it doesn’t take a lot of effort, I will prepare all the code needed for the calibration. Once I am back, it will only be a matter of creating the calibration pattern and running the software.

Thesis project

Week 1

Journal

Friday, February 3rd

Bought the first batch of material to build the camera supports. Started planning the supports' shape and structure. Planned the camera positioning and marked the positions for the cameras along the row closer to the arena windows.

Monday, February 6th

Bought other material for the camera supports. Prepared a master support. Based on this master, manufactured 6 different frames to speed up the preparation of the final camera supports.

Tuesday, February 7th

Had the material bought on Tuesday replaced with a set of properly cut pieces. Finished preliminary work on the support. 16 supports are ready.

Wednesday, February 8th

Collected and studied 5 papers concerning my project:

  • Vision-Based Global Localization and Mapping for Mobile Robots
  • Vision-Based Localization and Data Fusion in a System of Cooperating Mobile Robots
  • Collective Perception in a Robot Swarm
  • Distributed Sensor Fusion for Object Position Estimation by Multi-Robot Systems
  • Distributed Multirobot Localization

The comments on the above-mentioned papers, as well as the next ones, will be given in a separate PDF file, which I will soon make available.

Thursday, February 9th

Began the creation of a frame grabber for the E-Puck camera. This is needed in order to enable Arne’s and Manuele’s students to work with the front camera on the E-Puck.

Friday, February 10th

Meeting with Mauro. The main purpose of this meeting was to outline the first steps to take towards the definition of the thesis subject and of the first experiments. We agreed on a set of aspects that have to be addressed, mainly concerning the general message of my work and the settings of the first experiments. As a result of the meeting, we established that the next two/three weeks will be dedicated to:

  • Elaboration of a clear and sound message that will be the main purpose of my thesis. This means considering the definition of what for the moment I will call Swarm Perception, and listing the related works.
  • Acquisition of a first data-set of images to work on a validation experiment. To put it briefly, the experiment involves a swarm of robots which use vision and local communication to improve the estimation of an object's distance. The purpose of this first test is twofold: study the kind of information that the robots should be able to extract from the acquired images and how much of it they should exchange with each other; and show that the idea of enhancing environment perception by means of a “Swarm Robotics” approach is worth investigating.

This phase will actually have a further purpose. The vision support in ARGoS needs refactoring aimed at:

  • Easing the introduction of new computer vision algorithms on the robots;
  • Updating the currently used version of OpenCV;
  • Introducing image acquisition capabilities for the E-Puck;
  • [Optional] Allowing the use of computer vision algorithms also in simulation (to be discussed with Carlo).

During this first phase I will propose a possible course of action to address this point. The output will eventually be the steps to be taken in order to accomplish that. If everything goes according to plan, the mentioned steps will be taken in what Mauro and I agreed to be the second phase, which will be dedicated to: refactoring of the vision support in ARGoS, and coding of the new software modules that will be used in my work.

Given the higher priority of the first phase, I will not comment any further on the second one.

Plans for the next week

As for this past week, I will dedicate Monday and Tuesday to the work on the tracking system, while the rest of the week will be dedicated to the work on my project.

Arena tracking system

With the help of Ali, I will begin the installation of four cameras in a 2x2 configuration. Before this, we will mark once and for all the positions of all cameras on the ceiling wooden structure. At the same time, I will start working on:

  • A vision-based procedure to aid the camera positioning. This means that I will try to use the camera itself and software to aid the alignment of the camera with the ground plane.
  • The strategy for the camera calibration. This step will surely involve the construction of a proper calibration structure (a board with a pre-defined pattern).

This second point is likely to take also the following week.

Thesis project

Regarding my project, I plan to dedicate the rest of the week to the following activities:

  • Creation of a document where I define the focus of my studies. In the beginning this document will mainly contain: the definition of my study and my comments on the long-term vision; my comments on the related works I have been reading and collecting so far.
  • Creation of a second document, where I will describe my first experimental work.

I would consider the first document an incubation draft of what could end up in the introduction and state of the art of my thesis. The second document instead would eventually grow into a conference paper. These two documents will be prepared in LaTeX, but I plan to share the PDF versions on Google Docs.