3-D rigid body tracking using vision and depth sensors

Gedik O. S., ALATAN A. A.

IEEE Transactions on Cybernetics, vol.43, no.5, pp.1395-1405, 2013 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 43 Issue: 5
  • Publication Date: 2013
  • DOI: 10.1109/tcyb.2013.2272735
  • Journal Name: IEEE Transactions on Cybernetics
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.1395-1405
  • Ankara Yıldırım Beyazıt University Affiliated: No


Model-based 3-D tracking of rigid objects is generally required in robotics and augmented reality (AR) applications, where accurate pose estimates increase reliability and reduce overall jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, while trackers relying on depth sensors alone are not suitable for AR applications. In this paper, an automated 3-D tracking algorithm is proposed that fuses vision and depth sensors via an extended Kalman filter. A novel measurement-tracking scheme, based on estimating optical flow from the intensity and shape index map data of the 3-D point cloud, significantly increases both 2-D and 3-D tracking performance. The proposed method requires neither manual pose initialization nor offline training, while enabling highly accurate 3-D tracking. Its accuracy is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively in the rendered scenes. © 2013 IEEE.
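The abstract's central idea, fusing image-plane and depth measurements in an extended Kalman filter, can be illustrated with a minimal sketch. The paper's filter tracks full 6-DoF rigid-body pose; the toy version below instead tracks a single 3-D point, correcting a prior estimate with a measurement vector that stacks a pinhole-camera pixel observation (the "vision" channel) with a raw depth reading (the "depth" channel). The focal length, noise covariances, and all function names are illustrative assumptions, not from the paper.

```python
import numpy as np

f = 500.0  # assumed focal length in pixels (illustrative, not from the paper)

def h(x):
    """Measurement model: pixel (u, v) from pinhole projection, plus raw depth Z."""
    X, Y, Z = x
    return np.array([f * X / Z, f * Y / Z, Z])

def jacobian(x):
    """Jacobian of h with respect to the state x = (X, Y, Z)."""
    X, Y, Z = x
    return np.array([
        [f / Z, 0.0, -f * X / Z**2],
        [0.0, f / Z, -f * Y / Z**2],
        [0.0, 0.0, 1.0],
    ])

def ekf_update(x, P, z, R):
    """One EKF correction step fusing the stacked measurement z = (u, v, depth)."""
    H = jacobian(x)                       # linearize the measurement model at x
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - h(x))            # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x_new, P_new

# Example: prior at (0.1, 0.0, 2.0) m, noisy measurement of a nearby true point.
x0 = np.array([0.1, 0.0, 2.0])
P0 = np.eye(3) * 0.05
z = h(np.array([0.12, 0.01, 1.95])) + np.array([0.5, -0.5, 0.005])
R = np.diag([1.0, 1.0, 0.01**2])          # pixel noise vs. depth-sensor noise
x1, P1 = ekf_update(x0, P0, z, R)
```

Because the depth channel is far less noisy than the pixel channel here, the gain weights it more heavily, which is the essence of why fusing the two sensors outperforms either alone.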