Archive for the ‘Research & Activity’ Category

[Wisata Jepang] Tokyo Sky Tree Night View

Kamehame in Odaiba!

If you are planning a trip to Japan, take some time to stop by Odaiba (a waterfront district of Tokyo). The area is full of interesting sights, such as the Rainbow Bridge, a replica of the Statue of Liberty, and the Fuji Television headquarters.
The view is at its best as evening falls: the glow of passing traffic and the illumination of the Rainbow Bridge add to the liveliness of Tokyo at night.
While enjoying the night view of Odaiba, we also took a moment to practice our Kamehame pose on the plaza near the bridge (hehe…)

*Photo edited with a smartphone app



On-the-Job Training

After about three months of training at the head office in Tokyo and at the Chiba office, I was finally given the opportunity to do on-the-job training (OJT) at the IBM Tokyo Laboratory in Toyosu.

Master Thesis Defense 2013

Master Thesis Defense:
Date: February 21st, 2013
Time: 14:55 – 15:55
Place: Media Room, Department of Computer Science, Gunma University Kiryu Campus
*This paper has been accepted for publication in the Asian Conference on Pattern Recognition 2013 (in conjunction with the IAPR – International Association for Pattern Recognition).
Thesis: PDF file will be available soon

“Performance Evaluation of Feature Descriptors for Application in
Outdoor-scene Visual Navigation”

Abstract: Over the past decades, scientists have made numerous efforts to develop autonomous mobile robots. One popular method for visual navigation is the scene matching algorithm: the robot localizes its position by finding matches between the input running scenes and a set of reference images. However, achieving an appropriate matching between the compared images is not an easy task, and frequent changes in illumination intensity are among the biggest challenges in scene matching experiments. Unfortunately, there has been no quantitative evaluation of feature matching performance focusing specifically on the problem of illumination changes. This thesis investigates a number of popular feature detectors and descriptors for matching outdoor environments and observes their performance in three different lighting scenes, i.e., sunny, cloudy daytime, and cloudy evening, captured along the same route by the autonomous mobile robot. For an equal comparison, we applied Lowe's matching procedure [Low04] to all compared methods, using the matching percentage and the ROC curve as evaluation measurements. In the experimental results, the hybrid of the FAST detector and the SURF descriptor gives the best evaluation measurement, showing the largest area under the ROC curve. In addition, FAST and SURF extract local image features quickly, which is favorable for real-time applications. SIFT, on the other hand, remains stable in nearly all situations, while ASIFT extracts the highest number of keypoints, although it suffers from high computational complexity and redundant matches.
Video

Bachelor Thesis 2011

Bachelor Thesis Presentation:
Date: February 23rd, 2011
Time: 13:00 – 15:00, 7 sessions (10-minute presentation & 5-minute discussion)
Place: Media Room, Department of Computer Science, Gunma University Kiryu Campus

(in Japanese)
“Polygon recognition algorithm for the purpose of image pattern matching and its application to autonomous mobile robot”

『図形パターン照合 のための多角形認識アルゴリズムとその自律走行ロボットへの応用』

The problem of detecting and recognizing polygon shapes in images is one of the important research topics in the field of image processing. In this paper, we present an effective approach to recognizing polygon shapes based on object contour approximation. The method was then deployed in our participation in the Tsukuba Challenge autonomous mobile robot competition. Experimental results indicate that the proposed technique shows promising performance in recognizing a unique (triangular) shape on the automatic door near the goal area.
(Translation of the Japanese abstract:) Matching of figure patterns is required in a variety of image-recognition settings. In this work, we designed and tested an algorithm based on polygon recognition. The algorithm recognizes a shape by computing the similarity between an object contour in the image and the target shape. We describe an example of applying this algorithm to environment recognition for our robot in the Tsukuba Challenge autonomous mobile robot competition: it recognizes the distinctive (triangular) shape on the automatic door near the goal and guides the robot to the goal.

PowerPoint slides: Presentation.pptx


Recommended MATLAB toolboxes for image processing and computer vision

MATLAB 2012a Student Version
Add-on products that extend MATLAB and Simulink:
– Control System Toolbox
– Image Processing Toolbox
– Optimization Toolbox
– DSP System Toolbox
– Signal Processing Toolbox
– Simulink Control Design
– Statistics Toolbox
– Symbolic Math Toolbox

The Computer Vision System Toolbox is an additional purchase.
I recommend this toolbox to ease your work on computer vision!
Sample source code:
Detect SURF Interest Points in a Grayscale Image

% Detect interest points in a grayscale image and mark their locations.
I = imread('c:/images/cameraman.tif');
points = detectSURFFeatures(I);
imshow(I); hold on;
plot(points.selectStrongest(10));


[OpenCV] SIFT implementation in OpenCV 2.4

#include "opencv2/opencv.hpp"

#include <stdio.h>
#include <opencv2/legacy/legacy.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/nonfree/nonfree.hpp>
#include <opencv2/nonfree/features2d.hpp>

using namespace std;
using namespace cv;

int main()
{
	//source images
	const char* img1_file = "C:/images/box.png";
	const char* img2_file = "C:/images/box.png";

	// image read
	Mat tmp = cv::imread( img1_file, 1 );
	Mat in  = cv::imread( img2_file, 1 );

	// SIFT feature detector and feature extractor
	// (OpenCV 2.4 defaults: contrastThreshold = 0.04, edgeThreshold = 10, sigma = 1.6)
	cv::SiftFeatureDetector detector;
	cv::SiftDescriptorExtractor extractor;

	/* In case of SURF, use the following two lines instead:
	cv::SurfFeatureDetector detector;
	cv::SurfDescriptorExtractor extractor; */

	// Feature detection
	std::vector<KeyPoint> keypoints1, keypoints2;
	detector.detect( tmp, keypoints1 );
	detector.detect( in, keypoints2 );

	// Feature display
	Mat feat1,feat2;
	drawKeypoints(tmp,keypoints1,feat1,Scalar(255, 255, 255),DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
	drawKeypoints(in,keypoints2,feat2,Scalar(255, 255, 255),DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
	imwrite( "feat1.bmp", feat1 );
	imwrite( "feat2.bmp", feat2 );
	int key1 = keypoints1.size();
	int key2 = keypoints2.size();
	printf("Keypoint1=%d \nKeypoint2=%d", key1, key2);

	// Feature descriptor computation
	Mat descriptor1,descriptor2;
	extractor.compute( tmp, keypoints1, descriptor1 );
	extractor.compute( in, keypoints2, descriptor2 );
	/*printf("Descriptor1=%d \nDescriptor2=%d", descriptor1.rows, descriptor2.rows);*/

	// corresponded points
	std::vector<DMatch> matches;

	// L2 distance based matching. Brute Force Matching
	BruteForceMatcher< L2<float> > matcher;

	// Flann-based matching
	//FlannBasedMatcher matcher;

	// display of corresponding points
	matcher.match( descriptor1, descriptor2, matches );

	// matching result
	Mat result;
	drawMatches( tmp, keypoints1, in, keypoints2, matches, result );

	// output file
	imwrite( "result.bmp", result );

	// display the result
	namedWindow("SIFT", CV_WINDOW_AUTOSIZE );
	imshow("SIFT", result);
	waitKey(0); //press any key to quit

	return 0;
}

Input image: box.png


