Video
- Multi-Robot Exploration and Target Searching (Simulation by Player/Stage and GloMoSim)
Multi-robot searching for targets and to "bridge" communication gaps, with random mobility
Multi-robot searching for targets and to "bridge" communication gaps, with swarm intelligence
The above two video clips show multi-robot cooperative exploration and target searching in indoor environments, using random mobility and swarm intelligence. Random mobility simply lets robots move aimlessly with collision avoidance; swarm intelligence lets nearby robots choose different orientations to enlarge the coverage of the environment. In these videos, the left side is the simulation window, in which robots explore and search for both embodied targets and virtual targets (communication gaps). The right side is the monitor window, in which the simulation status (including the hop-counts of each static sensor) is presented.
These two simulation clips show that:
1. This is an ad-hoc network. Static sensors try to establish connections to the reference nodes at the corners, and each static sensor builds its hop-count table. A sensor can localize itself in the environment by hop-count-based localization if it is connected to at least 3 reference nodes (for triangulation); a sketch of this localization step follows the list.
2. Mobile robots search for and find embodied targets. When a target is found, the robot stays beside the target and sends target-found information to the outside (through the four reference nodes at the corners). This robot also starts to localize itself and the target by hop-count-based localization.
3. Mobile robots search for and find communication gaps. When a communication gap is found, the robot deploys a new static sensor at the gap to enhance the network connectivity. As connectivity improves, the localization of the sensors improves.
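The following is a minimal Python sketch of the hop-count-based localization in item 1, in the DV-hop style: hop counts to the corner reference nodes are converted into coarse range estimates and combined by least squares. The corner coordinates in REF_POSITIONS and the average per-hop distance AVG_HOP_DIST are illustrative assumptions, not values from the simulations.

import numpy as np

# Hypothetical corner reference nodes (metres) and assumed per-hop distance.
REF_POSITIONS = {
    "R1": (0.0, 0.0),
    "R2": (10.0, 0.0),
    "R3": (10.0, 10.0),
    "R4": (0.0, 10.0),
}
AVG_HOP_DIST = 1.5

def localize(hop_table):
    """Estimate a sensor's (x, y) from its hop-count table.

    hop_table maps reference-node id -> hop count; at least three entries
    are required, matching the triangulation condition in item 1 above.
    """
    refs = [(np.array(REF_POSITIONS[r]), h * AVG_HOP_DIST)
            for r, h in hop_table.items() if r in REF_POSITIONS]
    if len(refs) < 3:
        return None                       # not enough connectivity to localize
    # Linearize the range equations |x - p_i|^2 = d_i^2 by subtracting the
    # last one, then solve the resulting least-squares system for (x, y).
    pN, dN = refs[-1]
    A = np.array([2.0 * (pN - p) for p, _ in refs[:-1]])
    b = np.array([d**2 - dN**2 - p.dot(p) + pN.dot(pN) for p, d in refs[:-1]])
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return tuple(xy)

# Example: a sensor 3, 5 and 4 hops from R1, R2 and R4 gets a coarse estimate.
print(localize({"R1": 3, "R2": 5, "R4": 4}))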
Multi-robot searching to "bridge" communication gaps, with perimeter-based searching algorithm
Multi-robot searching to "bridge" communication gaps, with swarm intelligence searching algorithm
The above two video clips show multi-robot cooperative target searching in outdoor environments, using perimeter-based searching and swarm-intelligence searching. Perimeter-based searching lets robots search along the boundary of a sensor cluster (a group of interconnected sensors) to find the communication gaps between clusters; swarm-intelligence searching lets nearby robots choose different orientations to enlarge the coverage (a sketch of this orientation-dispersion heuristic follows the list below). In these videos, the left side is the simulation window, in which robots search for virtual targets (communication gaps). The right side is the monitor window, in which the simulation status (including the hop-counts of each static sensor) is presented.
These two simulation clips show that:
1. This is an ad-hoc network. Static sensors try to establish connections to the reference nodes at the corners, and each static sensor builds its hop-count table (the four numbers shown around each static sensor).
2. Mobile robots search for and find communication gaps. When a communication gap is found, the robot deploys a new static sensor at the gap to enhance the network connectivity.
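Below is a minimal sketch of the orientation-dispersion idea behind the swarm-intelligence search: when robots come within a mutual-detection range, each one turns away from the centroid of its nearby peers so that their search areas spread apart. The detection radius and the turn-away rule are my own simplifying assumptions, not necessarily the exact rule used in the videos.

import math

NEIGHBOR_RADIUS = 2.0        # assumed mutual-detection range (metres)

def choose_heading(my_pos, my_heading, neighbor_positions):
    """Return a new heading (radians) that steers away from nearby robots."""
    near = [(x, y) for (x, y) in neighbor_positions
            if math.dist(my_pos, (x, y)) < NEIGHBOR_RADIUS]
    if not near:
        return my_heading                      # no one nearby: keep exploring
    # Head directly away from the centroid of the nearby robots.
    cx = sum(x for x, _ in near) / len(near)
    cy = sum(y for _, y in near) / len(near)
    return math.atan2(my_pos[1] - cy, my_pos[0] - cx)

# Example: two robots meeting head-on choose opposite directions,
# which enlarges the jointly covered area.
print(choose_heading((0.0, 0.0), 0.0, [(1.0, 0.0)]))   # approximately pi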
- Multi-Robot Exploration and Target Searching (Experiment by AI-MRKIT Robot)
Multi-robot searching for targets, with swarm intelligence
Multi-robot searching for communication gaps, with swarm intelligence
The above two video clips show multi-robot cooperative exploration and target searching in indoor environments, using swarm intelligence, which lets nearby robots choose different orientations to enlarge the coverage of the environment.
These two experiment video clips show that:
1. This is an ad-hoc network. Static sensors try to establish connections to the reference nodes at the corners, and each static sensor builds its hop-count table. A sensor can localize itself in the environment by hop-count-based localization if it is connected to at least 3 reference nodes (for triangulation).
2. Mobile robots search for and find embodied targets. When a target is found, the robot stays beside the target and sends target-found information to the outside (through the four reference nodes at the corners). This robot also starts to localize itself and the target by hop-count-based localization.
3. Mobile robots search for and find communication gaps. When a communication gap is found, the robot stays there as a relay to bridge the gap. As network connectivity improves, the localization of the sensors is enabled or improved.
- Multi-Robot Exploration and Target Searching (Experiment by KOALA Robot)
Multi-robot searching for targets, with random mobility
Multi-robot searching for targets, with swarm intelligence
The above two video clips show multi-robot cooperative exploration and target searching in indoor environments, using random mobility and swarm intelligence. Random mobility simply lets robots move aimlessly with collision avoidance; swarm intelligence lets nearby robots choose different orientations to enlarge the coverage of the environment. These two experiment video clips show mobile robots searching for and finding targets. When a target is found, the robot stays beside the target and sends target-found information to the outside.
Multi-robot searching for targets, with random mobility
Multi-robot searching for targets, with swarm intelligence
The above two simulation clips further show and compare the difference between random mobility and swarm intelligence.
- Multi-Robot Cooperative Target Tracking (Simulation by Webots)
Multi-robot tracking of targets, without cooperation
Multi-robot tracking of targets, with cooperation
The above two simulation clips show and compare the difference between non-cooperative and cooperative tracking. If two robots find the same target, it is highly desirable that one of the robots keeps tracking the target while the other leaves; if both robots track the same target, robot resources are wasted. A minimal sketch of this conflict-resolution rule follows.
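The sketch below assumes the two robots can exchange their positions and that the robot closer to the shared target is the one that keeps tracking it; the tie-break on robot id is also an assumption for illustration.

import math

def resolve_conflict(robot_id, my_pos, target_pos, peer_id, peer_pos):
    """Return True if this robot should keep tracking the shared target."""
    my_dist = math.dist(my_pos, target_pos)
    peer_dist = math.dist(peer_pos, target_pos)
    if my_dist != peer_dist:
        return my_dist < peer_dist          # the closer robot keeps the target
    return robot_id < peer_id               # deterministic tie-break

# Example: the farther robot gives up the target and resumes searching.
print(resolve_conflict("A", (0, 0), (1, 1), "B", (5, 5)))   # True for robot A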
Multi-robot tracking of targets, original potential field-based tracking
Multi-robot tracking of targets, potential field-based control with all-adjust heuristic
Multi-robot tracking of targets, potential field-based control with selective-adjust heuristic
Multi-robot tracking of targets, two learning robots
Multi-robot tracking of targets, three learning robots
The above five simulation clips show and compare different tracking algorithms, as follows (a sketch of the potential-field controller with the selective-adjust heuristic appears after the list):
1. With the original potential-field-based tracking, if the attractive and repulsive forces balance, the robots move together with the same target, which is an undesirable lack of cooperation.
2. The heuristic that lets all robots decrease their attractive force toward the target (when robots detect each other) may not achieve cooperation if the adjustment of the attractive force is not appropriate.
3. The heuristic that lets only the farther robot decrease its attractive force improves the cooperation.
4. When a robot is learning, it may randomly choose different behaviors. If the behavior is correct and a reward is received, the robot is reinforced to choose the same behavior in similar scenarios in the future.
5. When a large number of robots are learning, they may interfere with each other, so coordinating the learning is important.
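The following sketch illustrates items 1-3: an attractive force pulls the robot toward its target, a repulsive force pushes it away from other robots, and under the selective-adjust heuristic only the farther robot weakens its attraction so that it leaves the shared target. The gains, the repulsion range, and the 0.2 reduction factor are illustrative assumptions, not the values used in the papers listed below.

import numpy as np

K_ATT, K_REP = 1.0, 0.5      # assumed attractive/repulsive gains
REP_RANGE = 3.0              # repulsion acts only within this range (metres)

def tracking_force(my_pos, target_pos, other_robots, i_am_farther):
    """Resultant potential-field force on one robot (2-D)."""
    my_pos = np.asarray(my_pos, dtype=float)
    target_pos = np.asarray(target_pos, dtype=float)
    # Selective-adjust heuristic: only the farther robot reduces attraction.
    k_att = 0.2 * K_ATT if i_am_farther else K_ATT
    force = k_att * (target_pos - my_pos)              # attraction to target
    for other in other_robots:
        diff = my_pos - np.asarray(other, dtype=float)
        d = np.linalg.norm(diff)
        if 0.0 < d < REP_RANGE:
            # Repulsion grows as robots get closer and vanishes at REP_RANGE.
            force += K_REP * (1.0 / d - 1.0 / REP_RANGE) * diff / d
    return force              # the robot moves along this resultant vector

# Example: the farther robot's pull toward the shared target is much weaker
# than the closer robot's, so the closer one keeps tracking while the other leaves.
print(tracking_force((4.0, 0.0), (0.0, 0.0), [(2.0, 0.0)], i_am_farther=True))
print(tracking_force((2.0, 0.0), (0.0, 0.0), [(4.0, 0.0)], i_am_farther=False))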
- Multi-Robot Mobility-Enhanced Localization in Ad-Hoc Sensor Networks (Simulation by Webots)
Multi-robot mobility-enhanced localization, by random mobility
Multi-robot mobility-enhanced localization, by intelligent mobility
The above two simulation clips show multi-robot mobility-enhanced localization in ad-hoc networks, using random and intelligent mobility. In these videos, the static sensors are not shown; please refer to Static Sensor Location for the locations of the static sensors. (Static sensors S8, S12, and S27 are in sparse areas; they need more neighbors to obtain better hop-counts for localization.)
These two simulation clips show that:
1. Random mobility lets robots move aimlessly; it is difficult for them to reach the sensor-sparse areas and improve the hop-counts and localization accuracy.
2. Auction-based intelligent mobility directs robots toward the sensor-sparse areas, so that they can proactively improve the hop-counts and the localization accuracy (a sketch of the auction follows).
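A minimal sketch of the auction in item 2, assuming travel distance is the bid and each sensor-sparse area is awarded to the lowest bidder; the actual bid function and coordination protocol may differ.

import math

def auction(robot_positions, sparse_areas):
    """Greedily assign robots to sparse areas; returns {robot_id: area}."""
    assignments, free_robots = {}, dict(robot_positions)
    for area in sparse_areas:                     # one auction per sparse area
        if not free_robots:
            break
        bids = {rid: math.dist(pos, area) for rid, pos in free_robots.items()}
        winner = min(bids, key=bids.get)          # lowest travel cost wins
        assignments[winner] = area
        del free_robots[winner]                   # each robot wins at most once
    return assignments

# Example with two robots and two hypothetical sparse spots (e.g. near S8 and S27):
print(auction({"robotA": (0, 0), "robotB": (8, 8)}, [(2, 1), (7, 9)]))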
Publications
Zheng LIU, Marcelo H. ANG Jr., and Winston Khoon Guan SEAH.
A Potential Field-based Approach for Multi-Robot Tracking of Multiple Moving Targets.
In Proc. 1st International Conference on Humanoid, Nanotechnology, Information Technology,
Communication and Control, Environment, and Management, March 2003, Manila, Philippines.
Zheng LIU, Marcelo H. ANG Jr., and Winston Khoon Guan SEAH.
A Searching and Tracking Framework for Multi-Robot Observation of Multiple Moving Targets.
International Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol.8, No.1,
pp.14-22, 2004.
Zheng LIU, Marcelo H. ANG Jr., and Winston Khoon Guan SEAH.
Searching and Tracking for Multi-Robot Observation of Moving Targets.
In Proc. 8th Conference on Intelligent Autonomous Systems, March 2004, Amsterdam, Netherlands, pp.157-164.
Zheng LIU, Marcelo H. ANG Jr., and Winston Khoon Guan SEAH.
Multi-Robot Concurrent Learning in Museum Problem.
In Proc. 7th International Symposium on Distributed Autonomous Robotic Systems, June 2004, Toulouse, France.
Zheng LIU, Marcelo H. ANG Jr., and Winston Khoon Guan SEAH.
Multi-Robot Concurrent Learning of Fuzzy Rules for Cooperation.
In Proc. 6th IEEE International Symposium on Computational Intelligence in Robotics and Automation,
June 2005, Espoo, Finland, pp.713-719.
Tirthankar BANDYOPADHYAY, Zheng LIU, Marcelo H. ANG Jr., and Winston Khoon Guan SEAH.
Visibility-based Exploration in Unknown Environment Containing Polygonal Obstacles.
In Proc. 12th International Conference on Advanced Robotics, July 2005, Seattle, USA, pp.484-491.
Zheng LIU, Marcelo H. ANG Jr., and Winston Khoon Guan SEAH.
Reinforcement Learning of Cooperative Behaviors for Multi-Robot Tracking of Multiple Moving Targets.
In Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, August 2005, Edmonton, Canada, pp.1289-1294.
Winston Khoon Guan SEAH, Hwee-Xian TAN, Zheng LIU, and Marcelo H. ANG Jr.
Multiple-UUV Approach for Enhancing Connectivity in Underwater Ad-hoc Sensor Networks.
In Proc. MTS/IEEE OCEANS Conference, September 2005, Washington, USA, pp.2263-2268.
Winston Khoon Guan SEAH, Zheng LIU, Joo Ghee LIM, S.V. RAO, and Marcelo H. ANG Jr.
TARANTULAS: Mobility-enhanced Wireless Sensor-Actuator Networks.
In Proc. IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing,
June 2006, Taichung, Taiwan, pp.548-551.
Zheng LIU, Marcelo H. ANG Jr., and Winston Khoon Guan SEAH.
Multi-Robot Concurrent Learning of Cooperative Behaviors for the Tracking of Multiple Moving Targets.
International Journal on Vehicle Autonomous Systems (IJVAS), Vol.4, Nos.2-4, pp.196-215, 2006.
Terence Chung Hsin SIT, Zheng LIU, Marcelo H. ANG Jr., and Winston Khoon Guan SEAH.
Multi-Robot Mobility Enhanced Hop-Count-based Localization in Ad-Hoc Networks.
Journal of Robotics and Autonomous Systems, Vol.55, No.3, pp.244-252, March 2007.
Last update: August 2008