Humans locate and track objects and other people in the environment using audio, vision, and an intuitive sense of the direction of arrival (DoA) of sound; we also exploit the results of earlier searches in other rooms. In this project, we implement this human search-and-rescue behavior on a robot whose goal is to track a human. The project has two parts. First, to detect a sound source on a map, we use stereo audio from two microphone arrays, each of which outputs a DoA estimate; by triangulation, a potential goal with the highest probability of human presence is selected. Second, we consider a floor with multiple rooms. To find a person who may be in another room, and who may call out only once or not at all, the robot begins searching the nearest room and keeps updating the probability of human presence in all rooms based on the current result. If the person is not found in the current room and a sound is heard, the robot travels in that direction, searches the rooms in that direction first, and confirms human presence using vision. We implement this behavior through a novel algorithm inspired by Monte Carlo localization, differing in how points are generated, and we make use of probability and sensor fusion. --Author's abstract
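The abstract's first stage, intersecting the DoA bearings from the two microphone arrays to pick a candidate goal, can be sketched as a simple two-ray triangulation. This is an illustrative reconstruction, not the authors' implementation; the function name, the assumption of known 2-D array positions, and the map-frame bearing convention are all hypothetical.

```python
import numpy as np

def triangulate(p1, theta1, p2, theta2):
    """Intersect two bearing rays (DoA estimates) to locate a sound source.

    p1, p2: 2-D map positions of the two microphone arrays (hypothetical setup).
    theta1, theta2: bearings in radians, measured in the map frame.
    Returns the intersection point as a NumPy array, or None if the
    bearings are (near-)parallel and no unique intersection exists.
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
    A = np.column_stack((d1, -d2))
    b = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    if abs(np.linalg.det(A)) < 1e-9:
        return None  # parallel bearings: source direction ambiguous
    t1, _ = np.linalg.solve(A, b)
    return np.asarray(p1, dtype=float) + t1 * d1

# Example: arrays at (0,0) and (2,0) hearing the source at 45° and 135°
# intersect at the point (1, 1).
goal = triangulate((0.0, 0.0), np.pi / 4, (2.0, 0.0), 3 * np.pi / 4)
```

In a full system the intersection point would then seed the room-probability update, with each negative room search lowering that room's probability and renormalizing the rest.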
Gaurav Singh, Paul Ghanem, Taşkın Padır