
Released

Conference Paper

A Self-Contained Teleoperated Quadrotor: On-Board State Estimation and Indoor Obstacle Avoidance

MPS-Authors
Odelga, M
Project group: Autonomous Robotics & Human-Machine Systems, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Project group: Cybernetics Approach to Perception & Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resource

https://icra2018.org/
(Table of contents)

Fulltext (public)

ICRA-2018-Odelga.pdf
(Any fulltext), 2 MB

Supplementary Material (public)
There is no public supplementary material available.
Citation

Odelga, M., Stegagno, P., Kochanek, N., & Bülthoff, H. (2018). A Self-Contained Teleoperated Quadrotor: On-Board State Estimation and Indoor Obstacle Avoidance. In IEEE International Conference on Robotics and Automation (ICRA 2018) (pp. 1-8).


Cite as: https://hdl.handle.net/21.11116/0000-0001-7D4E-D
Abstract
Indoor operation of unmanned aerial vehicles (UAVs) poses many challenges due to the lack of GPS signal and the confines of cramped spaces. The presence of obstacles in an unfamiliar environment requires reliable state estimation and active collision-avoidance algorithms. In this paper, we present a teleoperated quadrotor UAV platform equipped with an on-board miniature computer and a minimal set of sensors for this task. The platform is capable of highly accurate state estimation, tracking of the desired velocity commanded by the user, and collision-free navigation. The robot estimates its linear velocity through a Kalman filter that integrates inertial and optical flow (OF) readings with corresponding distance measurements. An RGB-D camera provides visual feedback to the operator and depth measurements to build a probabilistic, robo-centric obstacle model that allows the robot to avoid collisions. The platform is thoroughly validated in experiments in an obstacle-rich environment.
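The velocity-estimation scheme described in the abstract — a Kalman filter fusing inertial readings (prediction) with optical-flow measurements scaled by distance (correction) — can be illustrated with a minimal single-axis sketch. All parameters and noise levels below are hypothetical; this is not the authors' implementation, which fuses full 3D inertial and OF data on board.

```python
import numpy as np

def predict(v, P, accel, dt, q):
    # Propagate velocity with the measured acceleration; uncertainty grows.
    v = v + accel * dt
    P = P + q * dt
    return v, P

def update(v, P, of_rate, distance, r):
    # Optical flow yields an angular rate (rad/s); a metric velocity
    # requires the distance to the observed surface: z = of_rate * distance.
    z = of_rate * distance
    K = P / (P + r)          # Kalman gain (scalar case)
    v = v + K * (z - v)
    P = (1.0 - K) * P
    return v, P

# Simulate a quadrotor accelerating at 0.5 m/s^2 for 1 s at 1.5 m altitude.
rng = np.random.default_rng(0)
v_true, v_est, P = 0.0, 0.0, 1.0
dt, q, r = 0.01, 0.05, 0.04
for _ in range(100):
    v_true += 0.5 * dt
    accel = 0.5 + rng.normal(0, 0.1)            # noisy IMU acceleration
    v_est, P = predict(v_est, P, accel, dt, q)
    of = v_true / 1.5 + rng.normal(0, 0.02)     # noisy optical-flow rate
    v_est, P = update(v_est, P, of, 1.5, r)

print(f"true: {v_true:.2f} m/s, estimated: {v_est:.2f} m/s")
```

The correction step highlights why the paper pairs OF readings with distance measurements: without the range to the surface, optical flow constrains only the ratio of velocity to distance, not the metric velocity itself.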