Keyframe-Based Visual-Inertial Odometry and SLAM Using Nonlinear Optimisation
Here, we fuse inertial measurements with visual measurements: owing to the complementary characteristics of these sensing modalities, the combination has become a popular choice for accurate SLAM in mobile robotics. While historically the problem has been addressed with filtering, advances in visual estimation suggest that nonlinear optimisation offers superior accuracy while remaining computationally tractable thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a probabilistic cost function that combines the reprojection errors of landmarks with inertial error terms. We ensure real-time operation by limiting the optimisation to a bounded window of keyframes, applying marginalisation to older states. Keyframes may be spaced arbitrarily in time, while old measurements are still kept as linearised error terms.
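As a sketch of this formulation, following the cost function stated in the IJRR 2015 paper listed below, the objective sums weighted visual reprojection residuals over cameras $i$, keyframes $k$ and observed landmarks $j$, plus weighted inertial residuals between successive keyframe states:

```latex
J(\mathbf{x}) =
    \sum_{i} \sum_{k} \sum_{j \in \mathcal{J}(i,k)}
        {\mathbf{e}_r^{i,j,k}}^{\top} \mathbf{W}_r^{i,j,k} \, \mathbf{e}_r^{i,j,k}
    \;+\; \sum_{k} {\mathbf{e}_s^{k}}^{\top} \mathbf{W}_s^{k} \, \mathbf{e}_s^{k}
```

Here $\mathbf{e}_r^{i,j,k}$ are the landmark reprojection errors, $\mathbf{e}_s^{k}$ are the IMU error terms between consecutive keyframe states, and the weights $\mathbf{W}$ are the corresponding inverse measurement covariances (information matrices).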
Former collaborators:
Simon Lynen (Previously ETH Zurich, now Google)
Dr Mike Bosse (Previously ETH Zurich, now Zoox)
Dr Vincent Rabaud (Previously Willow Garage, now OpenCV Foundation)
Dr Kurt Konolige (Previously Willow Garage, Google)
Andreas Forster (Previously ETH Zurich, now Facebook)
Dr Margarita Chli (ETH Zurich)
Prof. Roland Siegwart (ETH Zurich)
Dr Paul Furgale (Previously ETH Zurich, now Facebook)
2020
OKVIS 2.0 for the FPV Drone Racing VIO Competition 2020 (S Leutenegger), 2020.
2019
Journal Articles
Fully autonomous micro air vehicle flight and landing on a moving target using visual–inertial estimation and model-predictive control (D Tzoumanikas, W Li, M Grimm, K Zhang, M Kovac and S Leutenegger), In Journal of Field Robotics, volume 36, 2019.
Conference and Workshop Papers
KO-Fusion: dense visual SLAM with tightly-coupled kinematic and odometric tracking (C Houseago, M Bloesch and S Leutenegger), In 2019 International Conference on Robotics and Automation (ICRA), 2019.
2017
Conference and Workshop Papers
Dense RGB-D-inertial SLAM with map deformations (T Laidlow, M Bloesch, W Li and S Leutenegger), In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017.
2015
Journal Articles
Keyframe-based visual–inertial odometry using nonlinear optimization (S Leutenegger, S Lynen, M Bosse, R Siegwart and P Furgale), In The International Journal of Robotics Research, SAGE Publications, volume 34, 2015.
2014
Conference and Workshop Papers
A synchronized visual-inertial sensor system with FPGA pre-processing for accurate real-time SLAM (J Nikolic, J Rehder, M Burri, P Gohl, S Leutenegger, PT Furgale and R Siegwart), In 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014.
PhD Thesis
Unmanned solar airplanes: Design and algorithms for efficient and robust autonomous operation (S Leutenegger), PhD thesis, ETH Zurich, 2014.
2013
Journal Articles
State estimation for legged robots: consistent fusion of leg kinematics and IMU (M Bloesch, M Hutter, MA Hoepflinger, S Leutenegger, C Gehring, CD Remy and R Siegwart), In Robotics, MIT Press, volume 17, 2013.
Optical Flow and SLAM with Event Cameras (Imperial College)
Event cameras are novel camera systems that sense intensity changes independently per pixel and report these events of change (brighter or darker by a specific amount) with a very accurate timestamp. As such, they are inspired by biology (the retina) and offer the potential to overcome the difficulties with motion blur and dynamic range that standard frame-based cameras face.
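To make the data format concrete, the following minimal Python sketch shows the kind of asynchronous event stream such a sensor produces; the field names are illustrative and not tied to any particular camera driver:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One asynchronous brightness-change event reported by a single pixel."""
    x: int         # pixel column
    y: int         # pixel row
    t: float       # timestamp in seconds (microsecond-level accuracy in practice)
    polarity: int  # +1 = brighter, -1 = darker, by the sensor's contrast threshold
```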
We have been looking at two different challenges. First, we reconstruct both video and optical flow purely from the events; the approach is designed to deal with arbitrary scene content. Second, we tackle the reconstruction of semi-dense depth and intensity keyframes along with general camera motion, where the scene is assumed to be static: effectively, SLAM with an event camera.
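As a toy illustration of a common first building block in such pipelines, the sketch below accumulates signed event polarities into a brightness-increment image over a short time window. The helper is hypothetical and intentionally simple: the works above go much further, jointly estimating intensity and optical flow (Bardow et al.) or fusing events into semi-dense keyframe depth and intensity (Kim et al.).

```python
import numpy as np

def accumulate_events(events, height, width, t0, t1):
    # events: iterable of records with x, y, t, polarity attributes,
    # as in the Event sketch above. Hypothetical helper for illustration.
    delta_L = np.zeros((height, width), dtype=np.float32)
    for e in events:
        if t0 <= e.t < t1:          # keep only events inside the time window
            delta_L[e.y, e.x] += e.polarity
    return delta_L
```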
Former collaborators:
Patrick Bardow (Previously Dyson Robotics Lab at Imperial College London, now Google)
Prof. Andrew Davison (Imperial College London)
Hanme Kim (Previously Robot Vision Group at Imperial College London, now Toyota Research Institute)
2019
Event-based vision: A survey (G Gallego, T Delbruck, G Orchard, C Bartolozzi, B Taba, A Censi, S Leutenegger, A Davison, J Conradt, K Daniilidis and others), In arXiv preprint arXiv:1904.08405, 2019.
2016
Conference and Workshop Papers
Real-time 3D reconstruction and 6-DoF tracking with an event camera (H Kim, S Leutenegger and AJ Davison), In European Conference on Computer Vision, 2016.
Simultaneous optical flow and intensity estimation from an event camera (P Bardow, AJ Davison and S Leutenegger), In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
2015
Conference and Workshop Papers
Towards visual SLAM with event-based cameras (M Milford, H Kim, S Leutenegger and A Davison), In The Problem of Mobile Sensors workshop in conjunction with RSS, 2015.
Place recognition with event-based cameras and a neural implementation of SeqSLAM (M Milford, H Kim, M Mangan, S Leutenegger, T Stone, B Webb and A Davison), In Innovative Sensing for Robotics: Focus on Neuromorphic Sensors Workshop at IEEE International Conference on Robotics and Automation (ICRA), 2015.