Archive for June, 2012

[Books] Brain Rules by John Medina

In this post I would like to summarize a book I am currently reading. The purpose is simple: I just want to point out the important ideas I got from the book.


“Brain Rules”
12 Principles for Surviving and Thriving at Work, Home, and School

by John Medina (official website, or order from Amazon.com)

exercise
Rule #1: Exercise boosts brain power.

survival
Rule #2: The human brain evolved, too.

wiring
Rule #3: Every brain is wired differently.

attention
Rule #4: We don’t pay attention to boring things.

short-term memory
Rule #5: Repeat to remember.

long-term memory
Rule #6: Remember to repeat.

sleep
Rule #7: Sleep well, think well.
In one study, a 26-minute afternoon nap improved NASA pilots’ performance by 34 percent.

stress
Rule #8: Stressed brains don’t learn the same way.

sensory integration
Rule #9: Stimulate more of the senses.

vision
Rule #10: Vision trumps all other senses.

gender
Rule #11: Male and female brains are different.

exploration
Rule #12: We are powerful and natural explorers.

Visual navigation with SURF feature matching (simulation version)

Last week, I gave a presentation on the progress of my research work in front of the laboratory members. It is mainly about autonomous mobile robot navigation based on computer vision technology. Please refer to the video below for further detail:

As shown above, there are two main displays, one on the left-hand side and one on the right-hand side. I call the left-hand side the “Command prompt debug display”. Its main purpose is to give detailed information on what is actually happening while the program is running.

On the right-hand side are the “Reference image panel” and the “Real-time image panel”. The Reference image panel shows a group of images that were previously taken in a preliminary experiment. Each image represents a scene where many image-feature points are detected from the landscape along the robot’s path. An image categorized as a reference is considered the best scene for matching and can be used as a landmark at a significant distance and direction.

The Real-time image panel shows the live view from the robot’s camera.

In this experiment, I am trying to do real-time scene matching that could be used for visual navigation of an autonomous mobile robot.
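To illustrate the idea, here is a minimal sketch of the two steps described above: matching feature descriptors between the live view and a reference image, and then picking the reference image (landmark) that matches the live view best. SURF itself lives in the opencv-contrib package, so this sketch works directly on descriptor arrays with NumPy instead; the function names, the 0.75 ratio threshold (Lowe’s ratio test), and the `min_matches` cutoff are my own illustrative choices, not from the original code.

```python
import numpy as np

def match_descriptors(query, reference, ratio=0.75):
    """Match query descriptors to reference descriptors with a ratio test.

    query, reference: (N, D) arrays of feature descriptors (e.g. 64-D SURF).
    Returns a list of (query_index, reference_index) pairs.
    """
    matches = []
    for qi, q in enumerate(query):
        # Euclidean distance from this query descriptor to every reference one
        dists = np.linalg.norm(reference - q, axis=1)
        if len(dists) < 2:
            continue
        nearest, second = np.argsort(dists)[:2]
        # Accept only if the best match is clearly better than the runner-up
        if dists[nearest] < ratio * dists[second]:
            matches.append((qi, int(nearest)))
    return matches

def best_landmark(live_desc, reference_sets, min_matches=10):
    """Pick the reference image whose descriptors match the live view best.

    Returns the index of the best reference set, or None if no reference
    image reaches the minimum number of good matches.
    """
    counts = [len(match_descriptors(live_desc, ref)) for ref in reference_sets]
    best = int(np.argmax(counts))
    return best if counts[best] >= min_matches else None
```

In a real run, the descriptor arrays would come from a SURF detector applied to the reference images and to each camera frame; the robot would then treat the winning reference image as its current landmark.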

Source code and related work can be found at the following links:
Previous work
http://www.cs.gunma-u.ac.jp/~ohta/TkbChrng.html