PhDSupervision:Alessandro Stranieri
Revision as of 12:22, 16 February 2012
Week 2
Monday, February 13th
Studied the Halcon code and examples to perform a calibration. Discussed some preliminary aspects with Dhananjay, who will help me during the installation and calibration process.
Tuesday, February 14th
Decided the strategy for calibration and installation. The process will mainly consist of two phases. The first phase is dedicated to the calibration of the internal (intrinsic) camera parameters. This can be done before the cameras are actually installed, by acquiring different views of a specific calibration pattern. The second phase is dedicated to the installation of the cameras on the ceiling and the computation of the extrinsic parameters, i.e. the transformation from the position of an object in the image to its position in the real world, given the known distance of the object from the camera centre. This implies that the calibration pattern centre must be positioned at given points in the arena coordinate system in a fairly precise way. I agreed with Dhananjay that I will build a vertical pole to which we will attach the pattern frame. This vertical pole will then be positioned at known points on the ground. This also means that we should agree on what the x and y axes and the origin of the arena coordinate system are.
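The extrinsic step described above (pixel position plus known distance to arena position) can be sketched in a few lines of pinhole-camera geometry. This is a minimal illustration, not the actual calibration code: the real setup uses Halcon's routines, and the intrinsic matrix `K`, the function name `pixel_to_arena`, and the numeric values below are hypothetical.

```python
import numpy as np

def pixel_to_arena(u, v, K, Z):
    """Back-project pixel (u, v) to metric coordinates, assuming the
    camera looks straight down at the arena and the object lies at a
    known depth Z (distance from the camera centre along the optical
    axis). K is the 3x3 intrinsic camera matrix."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalised viewing ray
    return Z * ray  # (X, Y, Z) in the camera frame

# Hypothetical intrinsics: focal length 500 px, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A point imaged at the principal point lies on the optical axis
print(pixel_to_arena(320, 240, K, Z=2.0))  # -> [0. 0. 2.]
```

A full extrinsic calibration would additionally apply the rotation and translation from the camera frame to the arena frame, which is exactly what positioning the pattern at known arena points makes it possible to estimate.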
Studied a solution for a supervision page on the IRIDIA wiki. A possible way to do it could be the following. I can place a link in the main page of the wiki to a page called something like “PhD Supervision”. The link is only accessible to the people belonging to a group called “PhDSupervision”, to which at the beginning only Marco and Mauro belong. From the “PhD Supervision” page, one can create a page for each student, again readable and editable only by the people belonging to the group. For the moment I have two issues:
- The link in the main page, although not accessible, will still be visible to everyone logged in. Maybe I can find a solution for that.
- In MediaWiki it is easy to restrict access to a page on a group basis, but it seems more complicated on a per-user basis.
There are two ways to proceed quickly, while I see whether something better can be done:
- Including the PhD students in the “PhDSupervision” group. This means every PhD student can view the supervision pages of the other students.
- Creating a group for each student and granting access also to this group. I actually find this solution messy.
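The group-based restriction described above can be expressed in MediaWiki's `LocalSettings.php`. The following is only a hedged configuration sketch: it assumes the Lockdown extension is installed and uses a hypothetical namespace id (100) for the supervision pages; the actual wiki configuration may well differ.

```php
// Sketch only -- assumes the Lockdown extension is installed.
// Define a custom namespace for the supervision pages (id 100 is hypothetical).
$wgExtraNamespaces[100] = 'PhDSupervision';
$wgExtraNamespaces[101] = 'PhDSupervision_talk';

// Restrict reading and editing of that namespace to the PhDSupervision
// group (plus sysops). Group membership is assigned via Special:UserRights.
$wgNamespacePermissionLockdown[100]['read'] = array( 'PhDSupervision', 'sysop' );
$wgNamespacePermissionLockdown[100]['edit'] = array( 'PhDSupervision', 'sysop' );
```

Note that namespace-level lockdown would not by itself hide the link on the main page, which matches the first issue listed above.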
Plans for the next week
During the next week I won’t be able to dedicate much time to anything, as I will be in Parma for the MIBISOC seminars.
Arena tracking system
Since it doesn’t take a lot of effort, I will prepare all the code needed for the calibration. Once I am back, it will only be a matter of creating the calibration pattern and running the software.
Thesis project
Week 1
Friday, February 3rd
Bought the first batch of material to build the camera supports. Started planning the supports’ shape and structure. Planned the camera positioning and marked the positions for the cameras along the row closer to the arena windows.
Monday, February 6th
Bought further material for the camera supports. Prepared a master support. Based on this master, manufactured 6 different frames to speed up the preparation of the final camera supports.
Tuesday, February 7th
Had the material bought on Monday replaced with a set of properly cut pieces. Finished the preliminary work on the supports; 16 supports are ready.
Wednesday, February 8th
Collected and studied 5 papers concerning my project:
- Vision-Based Global Localization and Mapping for Mobile Robots
- Vision-Based Localization and Data Fusion in a System of Cooperating Mobile Robots
- Collective Perception in a Robot Swarm
- Distributed Sensor Fusion for Object Position Estimation by Multi-Robot Systems
- Distributed Multirobot Localization
The comments on the above-mentioned papers, as well as the next ones, will be given in a separate PDF file, which I will soon make available.
Thursday, February 9th
Began the creation of a frame grabber for the E-Puck camera. This is needed in order to enable Arne’s and Manuele’s students to work with the front camera on the E-Puck.
Friday, February 10th
Meeting with Mauro. The main purpose of this meeting was to outline the first steps towards the definition of the thesis subject and of the first experiments. We agreed on a set of aspects that have to be addressed, mainly concerning the general message of my work and the settings of the first experiments. As a result of the meeting, we established that the next two or three weeks will be dedicated to:
- Elaboration of a clear and sound message that will be the main purpose of my thesis. This means considering the definition of what for the moment I will call “Swarm perception” and listing the related works.
- Acquisition of a first data set of images to work on a validation experiment. To put it briefly, the experiment involves a swarm of robots which use vision and local communication to improve the estimation of an object’s distance. The purpose of this first test is twofold: to study the kind of information that the robots should be able to extract from the acquired images, and how much of this information they should exchange with each other; and to show that the idea of enhancing the robots’ perception of the environment by means of a “Swarm Robotics” approach is worth investigating.
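The validation experiment above leaves open how the exchanged estimates are combined. One standard way to fuse independent noisy measurements (purely an illustrative assumption here, not the method decided in the meeting, and `fuse_estimates` is a name I introduce for the sketch) is inverse-variance weighting, where more confident robots contribute more to the fused value:

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Combine independent noisy distance estimates by inverse-variance
    weighting. Returns the fused estimate and its variance, which is
    never larger than the smallest individual variance."""
    w = 1.0 / np.asarray(variances, dtype=float)       # confidence weights
    e = np.asarray(estimates, dtype=float)
    fused = np.sum(w * e) / np.sum(w)                  # weighted mean
    fused_var = 1.0 / np.sum(w)                        # combined uncertainty
    return fused, fused_var

# Three robots estimate the same object distance (metres), with the
# third one being noisier than the other two
d, var = fuse_estimates([1.9, 2.1, 2.3], [0.04, 0.04, 0.16])
print(d, var)
```

The fused variance is smaller than any single robot's, which is the quantitative sense in which exchanging information "improves the estimation of an object's distance".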
This phase will actually have a further purpose. The vision support in ARGoS needs refactoring work aimed at:
- Easing the introduction of new computer vision algorithms on the robots;
- Updating the currently used version of OpenCV;
- Introducing image acquisition capabilities for the E-Puck;
- [Optional] Allowing the use of computer vision algorithms also in simulation (to be discussed with Carlo).
During this first phase I will propose a possible course of action to address this point. The output will eventually be the steps to be taken in order to accomplish that. If everything goes according to plan, these steps will be taken in what Mauro and I agreed to be the second phase, which will be dedicated to:
- Refactoring of the vision support in ARGoS;
- Coding of the new software modules that will be used in my work.
Given the higher priority of the first phase, I will not comment any further on the second one.
Plans for the next week
As for this past week, I will dedicate Monday and Tuesday to the work on the tracking system, while the rest of the week will be dedicated to the work on my project.
Arena tracking system
With the help of Ali, I will begin the installation of four cameras in a 2x2 configuration. Before this, we will mark once and for all the positions of all cameras on the ceiling’s wooden structure. At the same time, I will start working on:
- A vision-based procedure to aid the camera positioning. This means that I will try to use the camera itself and software to aid the alignment of the camera with the ground plane.
- The strategy for the camera calibration. This step will surely involve the construction of a proper calibration structure (a board with a pre-defined pattern).
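One possible form the vision-based alignment aid could take (a sketch under my own assumptions, not the procedure actually chosen): when the optical axis is perpendicular to the ground plane, a square marker on the floor projects to a square in the image, so the imbalance between opposite side lengths of the detected quadrilateral indicates residual tilt. The function name `alignment_error` and the corner values are hypothetical.

```python
import numpy as np

def alignment_error(corners):
    """Given the four corners of a detected square floor marker, in
    image pixels and ordered around the quadrilateral, return a
    tilt indicator: the largest difference between opposite side
    lengths, normalised by the mean side length. A camera axis
    perpendicular to the ground gives (up to noise) a value of 0."""
    c = np.asarray(corners, dtype=float)
    sides = np.linalg.norm(np.roll(c, -1, axis=0) - c, axis=1)
    return max(abs(sides[0] - sides[2]), abs(sides[1] - sides[3])) / sides.mean()

perfect = [(100, 100), (200, 100), (200, 200), (100, 200)]   # aligned camera
tilted  = [(100, 100), (210, 100), (200, 200), (110, 200)]   # residual tilt
print(alignment_error(perfect))  # -> 0.0
print(alignment_error(tilted) > 0.01)
```

In practice the corner detection itself would come from the camera software, and the operator would adjust the mount until the indicator drops below a chosen threshold.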
This second point is likely to take up the following week as well.
Thesis project
Regarding my project, I plan to dedicate the rest of the week to the following activities:
- Creation of a document where I define the focus of my studies. In the beginning this document will mainly contain: the definition of my study and my comments on the long-term vision; my comments on the related works I have been reading and collecting so far.
- Creation of a second document, where I will describe my first experimental work.
I would consider the first document an early draft of what could eventually go into the introduction and state of the art of my thesis. The second document would instead eventually grow into a conference paper. Both documents will be prepared in LaTeX, but I plan to share the PDF versions on Google Docs.