The overall aim of the present work is to develop a robust and simple method for estimating the relative position of an unmanned aerial vehicle (UAV) with respect to a runway and for automatically landing the UAV based on camera images. Various automatic landing systems based on INS, GPS, ILS and tracking radar are not always available or applicable because of their complexity, cost or operating-environment limits. Vision-based landing is attractive since it is passive, low cost and requires no special equipment other than a camera and an onboard vision processing unit. An uncalibrated camera is used to provide high-accuracy position information for automatic landing on the runway. Besides the camera, only an airspeed sensor is added, because the velocity relative to the air is crucial during landing. A special feature of this method is that neither a calibrated camera (in terms of focal length and mounting angles relative to the aircraft) is required, nor do special points of the runway need to be identified and their 3-D locations known. The only 3-D world reference that must be known is the width of the runway stripe. This robustness and the minimal requirement for real-world 3-D data reduce the effort needed for certification, calibration and maintenance of a camera-based positioning system. The proposed algorithm extracts the runway boundary lines using the Hough transform and estimates the position and lateral offset of the UAV with respect to the runway centre line. An aircraft model for simulation and control-system design is also presented. Finally, simulation results obtained with the FlightGear simulation software, in which the described method has been implemented, are discussed.
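The boundary-line extraction step mentioned above can be illustrated with a minimal pure-Python sketch of Hough voting on a synthetic edge map. The edge points, the image-centre column and all variable names below are illustrative assumptions, not the paper's implementation:

```python
import math
from collections import Counter

def hough_accumulate(points, theta_steps=180):
    """Standard Hough voting: each edge point votes for every line
    rho = x*cos(theta) + y*sin(theta) passing through it."""
    acc = Counter()
    for x, y in points:
        for ti in range(theta_steps):
            theta = math.pi * ti / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(ti, round(rho))] += 1
    return acc

# Synthetic edge map (hypothetical): two vertical runway boundaries
# at image columns x = 10 and x = 30.
edges = [(10, y) for y in range(21)] + [(30, y) for y in range(21)]
acc = hough_accumulate(edges)

# Keep the strongest vertical-line cells; theta index 0 means a vertical
# line, so rho equals the x coordinate of that line in the image.
best = max(acc.values())
rhos = sorted({rho for (ti, rho), votes in acc.items()
               if votes == best and ti == 0})

runway_centre = (rhos[0] + rhos[-1]) / 2.0   # centre line between boundaries
image_centre_x = 15                          # assumed principal-point column
lateral_offset = runway_centre - image_centre_x
print(rhos, lateral_offset)                  # -> [10, 30] 5.0
```

In practice the edge points would come from an edge detector applied to the camera image, and the peak search would accept a band of near-vertical angles rather than exactly vertical lines; the recovered offset in pixels is then scaled to metres using the known runway-stripe width.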