Visual navigation with SURF feature matching (simulation version)

Last week, I gave a presentation on the progress of my research work in front of the laboratory members. It is mainly about autonomous mobile robot navigation based on computer vision technology. Please refer to the video below for further detail:

As shown above, there are two main displays, one on the left-hand side and one on the right-hand side. I call the left-hand one the “Command prompt debug display”. Its main purpose is to give detailed information on what is actually happening while the program is running.

On the right-hand side are the “Reference image panel” and the “Real-time image panel”. The Reference image panel holds a group of images that were taken in a preliminary experiment. Each image represents a scene where many image feature points are detected in the landscape along the robot’s path. An image categorized as a reference is considered the best scene for matching and can be used as a landmark at a significant distance and direction.

As for the Real-time image panel, it shows the live view from the robot’s camera.
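Selecting which reference scene the live view currently corresponds to can be sketched as picking, per frame, the reference image whose descriptors yield the most unambiguous matches. The sketch below is a minimal NumPy illustration, not the actual program: it assumes SURF descriptors have already been extracted as float arrays, and the function name `best_reference` is hypothetical.

```python
import numpy as np

def best_reference(query_desc, reference_descs, ratio=0.7):
    """Pick the reference scene whose descriptors agree most with the live frame.

    query_desc:      (N, D) array of descriptors from the current camera frame
    reference_descs: list of (M_i, D) arrays, one per stored reference scene
    Returns (index of the best reference, per-reference match counts).
    """
    def count_matches(ref):
        n = 0
        for d in query_desc:
            # distances from this query descriptor to every reference descriptor
            dists = np.linalg.norm(ref - d, axis=1)
            i, j = np.argsort(dists)[:2]
            # ratio test: accept only clearly unambiguous nearest neighbours
            if dists[i] < ratio * dists[j]:
                n += 1
        return n

    scores = [count_matches(r) for r in reference_descs]
    return int(np.argmax(scores)), scores
```

In a navigation loop, the winning index would identify the landmark (and hence the approximate position and heading) associated with that reference image.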

In this experiment, I am attempting real-time scene matching that could be used for visual navigation of an autonomous mobile robot.

Source code and related work can be found at the following links:
Previous work
http://www.cs.gunma-u.ac.jp/~ohta/TkbChrng.html
