Research

Our focus is on visual navigation based on images grabbed using a single camera, while we also work on fusing data from additional sensors where available (e.g. inertial data, GPS). The main challenges we face are the robustness and efficiency of the algorithms, since our helicopters exhibit high motion dynamics and provide limited computational capabilities on-board.

You can find all relevant Publications here, as well as some freely available Software that we have developed here. Below are a few videos of our helicopters and algorithms in action. For a complete list of videos covering both earlier and more recent work, check out our dedicated Youtube Channel.




Visual-Inertial SLAM for a Small Helicopter in Large Outdoor Environments


M. W. Achtelik, S. Lynen, S. Weiss, L. Kneip, M. Chli and R. Siegwart. Accepted for publication in IROS 2012.

In this video, we present our latest results towards fully autonomous flights with a small helicopter. Using a monocular camera as the only exteroceptive sensor, we fuse the visual input with inertial measurements to achieve a self-calibrating, power-on-and-go system, able to perform autonomous flights in previously unknown, large, outdoor spaces. Our framework achieves Simultaneous Localization And Mapping (SLAM) with previously unseen robustness in onboard aerial navigation for small platforms with natural restrictions on weight and computational power. We demonstrate successful operation in flights at altitudes between 0.2 and 70 m, trajectories of 350 m length, as well as dynamic maneuvers at track speeds of 2 m/s. All flights shown are performed autonomously using vision in the loop, with only high-level waypoints given as directions.
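
As a rough illustration of the loosely-coupled fusion idea (a sketch, not the framework from the paper), the example below propagates a position/velocity state with IMU accelerations in an EKF and corrects it with position fixes from the monocular SLAM system; all noise values, rates, and the gravity-compensated acceleration input are assumptions.

```python
# A rough sketch of loosely-coupled visual-inertial fusion (illustrative,
# not the framework from the paper): an EKF propagates position/velocity
# with IMU accelerations and corrects with monocular-SLAM position fixes.
import numpy as np

class SimpleVioEkf:
    def __init__(self, accel_noise=0.5, vision_noise=0.05):  # assumed values
        self.x = np.zeros(6)        # state: position (3) and velocity (3)
        self.P = np.eye(6)          # state covariance
        self.q = accel_noise ** 2   # IMU acceleration noise variance
        self.r = vision_noise ** 2  # visual position noise variance

    def propagate(self, accel, dt):
        """Predict with one gravity-compensated IMU acceleration sample."""
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)  # position integrates velocity
        B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])
        self.x = F @ self.x + B @ accel
        self.P = F @ self.P @ F.T + self.q * (B @ B.T)

    def update(self, vision_pos):
        """Correct with a position estimate from the visual SLAM system."""
        H = np.hstack([np.eye(3), np.zeros((3, 3))])
        S = H @ self.P @ H.T + self.r * np.eye(3)
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (vision_pos - H @ self.x)
        self.P = (np.eye(6) - K @ H) @ self.P

# Example: IMU samples at a high rate, vision fixes at a lower rate (assumed).
ekf = SimpleVioEkf()
ekf.propagate(np.array([0.0, 0.0, 0.1]), dt=0.005)
ekf.update(np.array([0.0, 0.0, 0.0]))
```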


sFly: Search and Rescue of Victims in GPS-Denied Environments


Demo proposal of the sFly European Project (2009-2011). The demo simulates a search-and-rescue operation in an outdoor, GPS-denied disaster scenario. No laser, GPS, Vicon, or other external cameras are used for navigation and mapping; only onboard cameras and IMUs. All processing runs onboard, on a Core2Duo processing unit. This video gives an overview of the existing modules used for the final demonstration. The mission consists of three helicopters first collecting images to create a common global map of the working area, then taking up positions for optimal surveillance coverage of the area, and finally detecting the transmitter positions.
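
Purely as an illustration of this three-phase mission flow, a minimal sketch; the phase names and executor interface are hypothetical, not sFly code:

```python
# A hypothetical sketch of the three-phase mission flow described above;
# the phase names and executor interface are illustrative, not sFly code.
from enum import Enum, auto

class Phase(Enum):
    BUILD_GLOBAL_MAP = auto()         # all helicopters collect images
    TAKE_COVERAGE_POSITIONS = auto()  # spread out for optimal surveillance
    LOCATE_TRANSMITTERS = auto()      # estimate the transmitter positions

def run_mission(execute_phase):
    """Run the phases in order; execute_phase blocks until a phase is done."""
    for phase in Phase:
        print(f"starting {phase.name}")
        execute_phase(phase)

run_mission(lambda phase: None)  # placeholder executor
```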


Automatic 3D mapping & texturing during flight


An example of automatic mapping during a controlled flight of 200 m over a village near Zurich. The helicopter flies at an altitude of more than 15 m. In addition to performing visual SLAM, a 3D mesh is created from the 3D points of the SLAM map and texture is projected onto the faces of the mesh.
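
A minimal sketch of what the texturing step could look like under a standard pinhole-camera assumption (not the actual pipeline): mesh vertices from the SLAM map are projected into a keyframe image to obtain per-vertex texture coordinates. The function name, intrinsics values, and pose conventions below are assumptions.

```python
# A minimal sketch of the texturing step under a standard pinhole-camera
# assumption (not the actual pipeline): mesh vertices from the SLAM map are
# projected into a keyframe image to obtain per-vertex texture coordinates.
import numpy as np

def texture_coordinates(vertices_w, R_cw, t_cw, K, image_size):
    """Project Nx3 world-frame vertices into one keyframe image.

    R_cw, t_cw map world points into the camera frame; K is the 3x3
    intrinsics matrix. Visibility and occlusion checks are omitted.
    Returns (u, v) normalized to [0, 1] for use as texture coordinates.
    """
    pts_c = vertices_w @ R_cw.T + t_cw  # world frame -> camera frame
    uvw = pts_c @ K.T                   # apply camera intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]       # perspective division
    return uv / np.array(image_size)    # normalize by image width/height

# Example with an identity pose and assumed intrinsics for a 640x480 image:
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
verts = np.array([[0.0, 0.0, 5.0], [1.0, -0.5, 6.0]])
print(texture_coordinates(verts, np.eye(3), np.zeros(3), K, (640, 480)))
```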


Autonomous MAV Navigation with On-Board Monocular SLAM


Autonomous flights of a UAV in unknown environments. Using feeds from a single camera and an IMU, we perform keyframe-based SLAM to estimate the pose, which is fed to the UAV controller. All calculations are done on-board. To maintain constant computational complexity, old keyframes are deleted from the SLAM map, resulting in a visual-odometry-style approach. This framework is demonstrated first in an indoor flight (~50 m) and later outdoors on a trajectory of about 25 m. The video is played at 2x normal speed.
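
To illustrate the constant-complexity map management, a sliding-window sketch that drops the oldest keyframe once a fixed budget is exceeded; the class and method names below are hypothetical, not the actual implementation.

```python
# A hypothetical sketch of the constant-complexity map management described
# above (names are illustrative): once the map exceeds a fixed keyframe
# budget, the oldest keyframe is dropped, giving a visual-odometry-style
# sliding window instead of an ever-growing SLAM map.
from collections import deque

class SlidingWindowMap:
    def __init__(self, max_keyframes=10):
        self.keyframes = deque()
        self.max_keyframes = max_keyframes

    def add_keyframe(self, keyframe):
        self.keyframes.append(keyframe)
        while len(self.keyframes) > self.max_keyframes:
            old = self.keyframes.popleft()       # delete the oldest keyframe
            self._remove_exclusive_landmarks(old)

    def _remove_exclusive_landmarks(self, keyframe):
        # Drop landmarks observed only by the removed keyframe so the
        # optimization problem stays bounded (bookkeeping assumed).
        pass

# Usage: a budget of 3 keyframes keeps only the most recent three.
m = SlidingWindowMap(max_keyframes=3)
for i in range(5):
    m.add_keyframe(f"kf{i}")
print(list(m.keyframes))  # -> ['kf2', 'kf3', 'kf4']
```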

For more videos, check out our dedicated Youtube Channel.
