Scenario suggestions

From IridiaWiki
Revision as of 17:12, 30 April 2008 by Arne (talk | contribs)

IRIDIA collective proposal

Our proposal involves extending the scenario to allow for more sophisticated multiple hand-bot behavioural dynamics. We propose replacing the single book to be retrieved (which was criticised by the referees) with multiple objects spread across several shelves. The objects would be rectangular, with varying attributes (e.g. material, size, weight, LED colour). The task would then be to first identify an appropriate subset of the objects, retrieve them from the shelves, and then (possibly) use them to build some form of structure.

We imagine that eye-bots could detect the presence of objects on a shelf, but not discriminate between objects with different attributes; such discrimination would require close-up sensing by the hand-bots. Thus the eye-bots could direct foot-bots and hand-bots to the appropriate shelves, but only the hand-bots could pick out the required subset of objects. Hand-bot discrimination could work either through close-up camera-based sensing, or through some kind of manipulation (lifting it, giving it a squeeze). For an attribute like object length (or weight), multiple hand-bots might need to cooperate to find and select the right objects.
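The two-stage sensing split described above could be sketched as follows. This is a minimal illustration, not part of the Swarmanoid codebase; all class names, function names, and the cooperation thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ShelfObject:
    """Hypothetical object with the attributes mentioned above."""
    material: str
    length_cm: float
    weight_kg: float
    led_colour: str

def eye_bot_scan(shelves):
    """Eye-bot: coarse sensing only — which shelves hold any objects.
    Attributes are invisible at this range."""
    return [i for i, objs in enumerate(shelves) if objs]

def hand_bot_select(objects, wanted):
    """Hand-bot: close-up discrimination by an attribute predicate."""
    return [o for o in objects if wanted(o)]

def needs_cooperation(obj, max_single_weight_kg=1.0, max_single_length_cm=30.0):
    """Attributes like weight or length may require multiple hand-bots
    (thresholds here are illustrative placeholders)."""
    return obj.weight_kg > max_single_weight_kg or obj.length_cm > max_single_length_cm
```

An eye-bot would thus narrow the search to occupied shelves, after which hand-bots apply the attribute predicate (e.g. `lambda o: o.led_colour == "red"`) and escalate to cooperative manipulation when `needs_cooperation` holds.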

Depending on the parameters we use to set up this scenario, we think there are many interesting research possibilities, especially for exploring swarm dynamics, both across heterogeneous swarms and within a swarm of homogeneous hand-bots. For example, with a high density of hand-bots, the hand-bots could collectively search the vertical plane by forming some kind of communicating network, much as the eye-bots search the ceiling plane. Alternatively, when the hand-bot density is low, foot-bots could be used as markers to delineate already-explored segments of the plane. Multiple hand-bots might also be required to lift long, bendy, or overly heavy objects.
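The low-density case, with foot-bots marking already-explored segments, could be sketched as below. The segment model and function names are purely illustrative; a real controller would be distributed rather than a single loop.

```python
def next_unexplored(segments, marked):
    """Return the first shelf-plane segment not yet marked by a foot-bot."""
    for seg in segments:
        if seg not in marked:
            return seg
    return None

def explore_plane(segments):
    """Hand-bot sweeps the vertical plane; after each segment is searched,
    a foot-bot parks there as a marker so no segment is revisited."""
    marked = set()
    visited_order = []
    while (seg := next_unexplored(segments, marked)) is not None:
        visited_order.append(seg)   # hand-bot searches this segment
        marked.add(seg)             # foot-bot marks it as explored
    return visited_order
```

The point of the sketch is the invariant: the set of marked segments only grows, so each segment is searched exactly once even though no robot holds a global map.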

To integrate this proposal into the existing Swarmanoid documentation, we propose extending the existing task and environment complexity matrices as follows.

Task Complexity Parameters:

  • Search / Select
    • Retrieve single object
    • Retrieve multiple objects
    • Retrieve subset of objects with certain set of attributes
    • Retrieve different types of objects with given attributes in given ratios
    • Retrieve objects in a given order


Environment Complexity Parameters:

  • Target Object Quantity
    • 1 object to be found and retrieved (default)
    • Many objects to be found and retrieved
  • Target Object Attributes
    • All objects are of the same type
    • Objects have varying attributes (material, length, weight, etc.)
  • Target Object Movability
    • Single robot can move object
    • Cooperation required due to nature of object (e.g., too heavy for a single robot) (default)
  • Target Object Grippability
    • Gripping easy: object designed to be easily grippable (default)
    • Gripping hard or requires cooperation
  • Target Object Location
    • Target on floor
    • Target raised (e.g., on shelf or table) (default)
  • Target Object Visibility (size, luminosity, etc.)
    • Easily detectable: object designed to make recognition easy (default)
    • Not easy to detect
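One way to make the environment complexity matrix concrete is to encode it as a simulation configuration. The sketch below is hypothetical (these type names are not part of any existing Swarmanoid tooling); its defaults follow the matrix above.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Movability(Enum):
    SINGLE_ROBOT = auto()
    COOPERATION_REQUIRED = auto()

class Grippability(Enum):
    EASY = auto()
    HARD_OR_COOPERATIVE = auto()

class Location(Enum):
    FLOOR = auto()
    RAISED = auto()          # e.g. on shelf or table

class Visibility(Enum):
    EASY = auto()
    HARD = auto()

@dataclass
class EnvironmentComplexity:
    """One row per parameter in the matrix; defaults marked (default) above."""
    object_quantity: int = 1                 # 1 object (default)
    uniform_attributes: bool = True          # all objects of the same type
    movability: Movability = Movability.COOPERATION_REQUIRED
    grippability: Grippability = Grippability.EASY
    location: Location = Location.RAISED
    visibility: Visibility = Visibility.EASY

default_env = EnvironmentComplexity()
```

A specific experiment would then be a single `EnvironmentComplexity` instance, which makes it easy to enumerate or sweep the scenario space systematically.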