Posts Tagged ‘OpenCV’

Install OpenCV 4.0 on macOS

Step 1. Install XCode
1.1. First, we need to install the latest Xcode.
Grab the info and download the binary from the below Apple website:

Or, you may download Xcode from the Apple App Store: find the Xcode app and install it.

1.2. After the installation completes, open Xcode and accept the license agreement.

Step 2. Install Homebrew

2.1. Install the Mac community package manager, Homebrew.

$ ruby -e "$(curl -fsSL <homebrew-install-script-url>)"

Then, update the Homebrew definitions:

$ brew update

2.2. Add Homebrew to PATH
To keep things simple, don’t forget to add Homebrew to your working PATH in the ~/.bash_profile file.

$ echo "# Homebrew" >> ~/.bash_profile
$ echo 'export PATH=/usr/local/bin:$PATH' >> ~/.bash_profile

Step 3. Install OpenCV prerequisites using Homebrew

3.1. Install Python 3.6

$ brew install python3

Verify that the Python installation is OK by typing the below commands:

$ which python3

$ python3

Python 3.6.5 (default, Jun 17 2018, 12:13:06) 
[GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.2)] on darwin
Type "help", "copyright", "credits" or "license" for more information.

3.2. Install CMake and QT (optional)
If you need to work on the OpenCV project with CMake and QT, execute the below commands. If you only need Python, skip this step.

$ brew install cmake
$ brew install qt5

Later in this installation, we need to specify the QT path to a variable:

$ QT5PATH=/usr/local/Cellar/qt/5.12.2

*Make sure the above path is available in your environment (check it with: ls -l /usr/local/Cellar).

Step 4: Install Python dependencies for OpenCV 4
We will install the Python dependencies for OpenCV 4 in this procedure.

$ sudo -H pip3 install -U pip numpy

Now that pip is up to date, we can install virtualenv and virtualenvwrapper, two tools for managing virtual environments. Python virtual environments are a best practice for Python development, and it is recommended to take full advantage of them.

$ sudo -H python3 -m pip install virtualenv virtualenvwrapper

$ export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python3

$ echo "export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python3" >> ~/.bash_profile
$ echo "# Virtual Environment Wrapper" >> ~/.bash_profile
$ echo "source /usr/local/bin/" >> ~/.bash_profile

$ source /usr/local/bin/

The virtualenvwrapper tool provides several terminal commands:
- mkvirtualenv : makes a new virtual environment
- rmvirtualenv : destroys a virtual environment
- workon : activates a virtual environment
- deactivate : deactivates the current virtual environment
Refer to the virtualenvwrapper documentation for more information.

Now, let’s create a Python virtual environment for OpenCV.
In this command, the virtual environment for Python 3 and OpenCV 4 is named py3cv4. You may choose your own environment name.

$ mkvirtualenv py3cv4 -p python3

The command result may look like this:

Running virtualenv with interpreter /usr/local/bin/python3
Using base prefix '/usr/local/Cellar/python/3.6.5_1/Frameworks/Python.framework/Versions/3.6'
New python executable in /Users/admin/.virtualenvs/cv/bin/python3.6
Also creating executable in /Users/admin/.virtualenvs/cv/bin/python
Installing setuptools, pip, wheel...
virtualenvwrapper.user_scripts creating /Users/admin/.virtualenvs/cv/bin/predeactivate
virtualenvwrapper.user_scripts creating /Users/admin/.virtualenvs/cv/bin/postdeactivate
virtualenvwrapper.user_scripts creating /Users/admin/.virtualenvs/cv/bin/preactivate
virtualenvwrapper.user_scripts creating /Users/admin/.virtualenvs/cv/bin/postactivate
virtualenvwrapper.user_scripts creating /Users/admin/.virtualenvs/cv/bin/get_env_details

Next, let’s install NumPy, CMake, and other libraries while we’re inside the environment.

$ pip install cmake numpy scipy matplotlib scikit-image scikit-learn ipython dlib

# quit virtual environment
$ deactivate

Step 5: Compile OpenCV 4 for macOS

5.1. Download OpenCV 4
Navigate to our working folder and download both opencv and opencv_contrib.
In these commands, we will create the opencv and opencv_contrib folders inside the home folder.

$ mkdir -p ~/opencv ~/opencv_contrib
$ git clone
$ cd opencv
$ git checkout master
$ cd ..

$ git clone
$ cd opencv_contrib
$ git checkout master
$ cd ..

Navigate back to OpenCV repo and create & enter a build directory.

$ cd ~/opencv
$ mkdir build
$ cd build

Now we are ready to execute CMake.
Make sure to use the workon command before executing the cmake command as shown below.
Note: I am using py3cv4 as the virtual environment name. If you are using another name, change the commands below to match your own environment.
The build will take quite some time (in my environment it took about 50 minutes ^^;).

$ workon py3cv4
$ cmake \
    -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
    -D OPENCV_PYTHON3_INSTALL_PATH=~/.virtualenvs/py3cv4/lib/python3.7/site-packages \
    -D PYTHON3_LIBRARY=`python -c 'import subprocess ; import sys ; s = subprocess.check_output("python-config --configdir", shell=True).decode("utf-8").strip() ; (M, m) = sys.version_info[:2] ; print("{}/libpython{}.{}.dylib".format(s, M, m))'` \
    -D PYTHON3_INCLUDE_DIR=`python -c 'import distutils.sysconfig as s; print(s.get_python_inc())'` \
    -D BUILD_opencv_python2=OFF \
    -D BUILD_opencv_python3=ON \
    -D OPENCV_ENABLE_NONFREE=ON \
    -D WITH_TBB=ON \
    -D WITH_V4L=ON \
    ..
The OPENCV_ENABLE_NONFREE=ON flag is required in OpenCV 4 if you want access to patented algorithms (such as SIFT and SURF) for educational purposes.
Once CMake has finished, review the configuration summary it prints in the terminal.

*If you need QT in your project, don’t forget to add the QT-related options to the cmake command, for example -D WITH_QT=ON -D CMAKE_PREFIX_PATH=$QT5PATH.
The QT5PATH variable should be defined in the previous step (3.2. Install CMake and QT).


Up to this step, if your CMake output looks good, you can kick off the compilation via:

$ make -j$(sysctl -n hw.physicalcpu)
$ sudo make install

When the process reaches 100%, the build and installation are complete.

5.2. Install imutils

$ workon py3cv4
$ pip install imutils

Step 6: Test your macOS + OpenCV 4

#Activate your Virtual Environment
$ workon py3cv4

$ python
>>> import cv2
>>> cv2.__version__
>>> exit()

Let’s Run Our First OpenCV Application!
You may clone one of my OpenCV samples from the below GitHub resource.

#Activate your Virtual Environment
$ workon py3cv4

$ git clone

Cloning into 'opencv'...
remote: Enumerating objects: 46, done.
remote: Counting objects: 100% (46/46), done.
remote: Compressing objects: 100% (42/42), done.
remote: Total 46 (delta 2), reused 46 (delta 2), pack-reused 0
Unpacking objects: 100% (46/46), done.

$ ls 

$ cd opencv/1_experiment/1_face_recognition_adrian/

#Execute the Python program for Realtime Face Recognition: 
$ python --detector face_detection_model \
	--embedding-model openface_nn4.small2.v1.t7 \
	--recognizer output/recognizer.pickle \
	--le output/le.pickle

#Change directory to Face Detection program
$ cd ../2_face_detection_deeplearning

$ python --image leaders.jpg --prototxt deploy.prototxt.txt --model res10_300x300_ssd_iter_140000.caffemodel 

#When you finish the program, deactivate Virtual Environment
$ deactivate

Sample result:


Install OpenCV 4 on macOS (C++ and Python)

Install OpenCV 4 on macOS

Harris Corner Detection


You may refer to the previous article: click here (Corner Detection).

*source code and other files (pictures, etc) will be updated soon.

Kinect and OpenCV

After struggling for several days with all the stuff related to OpenKinect (libfreenect) and Microsoft Visual Studio 2008, I was finally able to run the experiment: grabbing the Kinect RGB and depth images wrapped with OpenCV 2.1 library functions.

Special thanks to Tisham Dhar, who wrote a very nice article on his blog.
You can access the source code from his Google Code page: freenectopencv.cpp

Alternatively, the below code is taken from Tisham’s page and combined with a Canny filter operation:
(comment out all the glview.c code and replace it with the below source code)

/* freenectopencv.cpp
   Copyright (C) 2010 Arne Bernin
   This code is licensed to you under the terms of the GNU GPL, version 2 or version 3. */
# Makefile for Ubuntu; assumes that libfreenect.a is in /usr/lib and libfreenect.h is in /usr/include.
# Make sure you have the latest version of freenect from git!

CXXFLAGS = -O2 -g -Wall -fmessage-length=0 `pkg-config opencv --cflags` -I /usr/include/libusb-1.0

OBJS = freenectopencv.o

LIBS = `pkg-config opencv --libs` -lfreenect

TARGET = kinectopencv

all: $(TARGET)

$(TARGET): $(OBJS)
	$(CXX) -o $(TARGET) $(OBJS) $(LIBS)

clean:
	rm -f $(OBJS) $(TARGET)

# End of Makefile

#include <stdio.h>
#include <string.h>
#include <math.h>

#include <libfreenect.h>
#include <pthread.h>


#include <cv.h>
#include <highgui.h>

#define FREENECTOPENCV_WINDOW_D "Depthimage"
#define FREENECTOPENCV_WINDOW_N "Normalimage"

IplImage* depthimg = 0;
IplImage* rgbimg = 0;
IplImage* tempimg = 0;
IplImage* canny_img = 0;
IplImage* canny_temp = 0;
pthread_mutex_t mutex_depth = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t mutex_rgb = PTHREAD_MUTEX_INITIALIZER;
pthread_t cv_thread;

// callback for depth image, called by libfreenect
void depth_cb(freenect_device *dev, void *depth, uint32_t timestamp)
{
        // wrap the raw 16-bit depth buffer (this line was missing from the excerpt)
        cv::Mat mydepth = cv::Mat(480, 640, CV_16UC1, depth);
        cv::Mat depth8;

        mydepth.convertTo(depth8, CV_8UC1, 1.0 / 4.0);
        // lock mutex for opencv depth image
        pthread_mutex_lock( &mutex_depth );
        memcpy(depthimg->imageData, depth8.data, 640 * 480);
        // unlock mutex
        pthread_mutex_unlock( &mutex_depth );
}

// callback for rgb image, called by libfreenect
void rgb_cb(freenect_device *dev, void *rgb, uint32_t timestamp)
{
        // lock mutex for opencv rgb image
        pthread_mutex_lock( &mutex_rgb );
        memcpy(rgbimg->imageData, rgb, FREENECT_VIDEO_RGB_SIZE);
        // unlock mutex
        pthread_mutex_unlock( &mutex_rgb );
}

// thread for displaying the opencv content
void *cv_threadfunc (void *ptr) {
        cvNamedWindow( "Canny Image", CV_WINDOW_AUTOSIZE );
        cvNamedWindow( "Depth Canny", CV_WINDOW_AUTOSIZE );

        // use image polling
        while (1) {
                // lock mutex for depth image
                pthread_mutex_lock( &mutex_depth );
                // apply the Canny filter to the depth image and show it
                cvCanny(depthimg, canny_temp, 50.0, 200.0, 3);
                cvShowImage("Depth Canny", canny_temp);
                // unlock mutex for depth image
                pthread_mutex_unlock( &mutex_depth );

                // lock mutex for rgb image
                pthread_mutex_lock( &mutex_rgb );
                // copy the rgb frame into tempimg and show it
                // (this conversion from rgbimg was missing from the excerpt)
                cvCvtColor(rgbimg, tempimg, CV_BGR2RGB);
                cvShowImage(FREENECTOPENCV_WINDOW_N, tempimg);

                // Canny filter on the gray-scale rgb frame
                cvCvtColor(tempimg, canny_img, CV_BGR2GRAY);
                cvCanny(canny_img, canny_img, 50.0, 200.0, 3);
                cvShowImage("Canny Image", canny_img);
                // unlock mutex
                pthread_mutex_unlock( &mutex_rgb );

                // wait for quit key (ESC)
                if( cvWaitKey( 15 ) == 27 )
                        break;
        }

        return NULL;
}


int main(int argc, char **argv)
{
        freenect_context *f_ctx;
        freenect_device *f_dev;

        int res = 0;
        int die = 0;
        printf("Kinect camera test\n");

        if (freenect_init(&f_ctx, NULL) < 0) {
                printf("freenect_init() failed\n");
                return 1;
        }

        if (freenect_open_device(f_ctx, &f_dev, 0) < 0) {
                printf("Could not open device\n");
                return 1;
        }

        // allocate the shared images (these allocations were missing from the excerpt)
        depthimg   = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 1);
        rgbimg     = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 3);
        tempimg    = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 3);
        canny_img  = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 1);
        canny_temp = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 1);

        freenect_set_depth_callback(f_dev, depth_cb);
        freenect_set_video_callback(f_dev, rgb_cb);
        freenect_set_video_format(f_dev, FREENECT_VIDEO_RGB);

        // create opencv display thread
        res = pthread_create(&cv_thread, NULL, cv_threadfunc, (void*) depthimg);
        if (res) {
                printf("pthread_create failed\n");
                return 1;
        }

        printf("init done\n");

        // start the depth and video streams, then pump libfreenect events
        freenect_start_depth(f_dev);
        freenect_start_video(f_dev);
        while (!die && freenect_process_events(f_ctx) >= 0);

        return 0;
}


Please note that I am using: libfreenect for Windows + Microsoft Visual Studio 9 (2008) + OpenCV 2.1

SURF-based Image Recognition

Here are the steps:

1. Convert the model image to gray-scale and compute its SURF features.

2. Turn on the camera and grab the real-time input image. Convert each frame to gray-scale.

3. Compute the SURF features of the gray-scale camera frame.

4. Compare the “model image” and the “input image” (camera frame): for every feature of the model and every feature of the camera frame, determine whether they represent the same point (by computing their distance and thresholding it).

5. Once we have obtained the pairs of associated points, estimate the homography that matches all these pairs (using RANSAC or the least-median-of-squares algorithm).

6. Draw the projection of the input frame in the illustration frame using this homography.
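The homography part of steps 4–6 can be sketched in plain NumPy. The helper below is a hypothetical, minimal DLT (direct linear transform) estimator: given at least four point correspondences, it recovers the 3×3 homography H mapping src points to dst points. A real pipeline would wrap it in RANSAC to reject bad feature matches, as in step 5.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src[i] -> dst[i] (needs >= 4 pairs).

    Minimal DLT sketch; a real pipeline wraps this in RANSAC.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two rows of the DLT system A h = 0
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the right singular vector with the smallest singular value
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply homography H to a 2-D point (homogeneous divide included)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# four correspondences under a known transform (scale 2, shift (1, 3))
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(1, 3), (3, 3), (3, 5), (1, 5)]
H = homography_dlt(src, dst)
print(np.round(project(H, (0.5, 0.5)), 6))   # the square's center maps to (2, 4)
```

With clean correspondences the recovered H is exact; with noisy SURF matches you would estimate it repeatedly on random 4-point subsets and keep the H with the most inliers (RANSAC).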

References:

Application in outdoor visual navigation:

SIFT implementation in OpenCV 2.4


[OpenCV] Corner Detection

Let’s try to detect corners in an image.
In this sample, I am using my own picture, building.jpg.
You can edit the below source code and try to detect corners in a different image.

I am using two types of functions to detect corners:
1. cvCornerMinEigenVal
The function cvCornerMinEigenVal calculates and stores, for every pixel, the minimal eigenvalue of the derivative covariation matrix, i.e. min(λ1, λ2).

2. cvCornerHarris
The Harris corner detector. You can refer to the original paper, “A Combined Corner and Edge Detector”.
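To make the two corner measures concrete, here is a hypothetical NumPy sketch that computes both responses from the derivative covariation matrix M: min(λ1, λ2) (what cvCornerMinEigenVal stores) and the Harris score R = det(M) − k·trace(M)² (what cvCornerHarris computes). The finite-difference gradients and the uniform 3×3 window are simplifying assumptions, not OpenCV’s exact implementation.

```python
import numpy as np

def structure_tensor_responses(img, k=0.04):
    """Return (min-eigenvalue response, Harris response) for a 2-D image."""
    # image gradients by central finite differences
    Iy, Ix = np.gradient(img.astype(float))

    # sum the tensor entries over a uniform 3x3 window
    def box3(a):
        p = np.pad(a, 1, mode='edge')
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)

    # eigenvalues of M = [[Sxx, Sxy], [Sxy, Syy]] via trace/determinant
    tr = Sxx + Syy
    det = Sxx * Syy - Sxy * Sxy
    disc = np.sqrt(np.maximum((Sxx - Syy) ** 2 / 4 + Sxy ** 2, 0))
    lam_min = tr / 2 - disc        # min(lambda1, lambda2), as cvCornerMinEigenVal
    harris = det - k * tr ** 2     # R = det(M) - k*trace(M)^2, as cvCornerHarris
    return lam_min, harris

# synthetic image: a bright square, so its top-left pixel is a strong corner
img = np.zeros((20, 20))
img[8:, 8:] = 1.0
lam_min, harris = structure_tensor_responses(img)
print(lam_min[8, 8] > lam_min[2, 2])   # corner beats flat region → True
print(harris[8, 8] > harris[2, 2])     # → True
```

Both measures are near zero in flat regions (gradients vanish, so both eigenvalues of M are tiny) and large only where the gradient direction varies, which is exactly what distinguishes a corner from an edge.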

You may also refer to the next article : click here (Harris Corner Detection).

Source code :

#include <stdio.h>
#include <cv.h>
#include <highgui.h>

int main (void)
{
	int i, corner_count = 150;
	IplImage *dst_img1, *dst_img2, *src_img_gray;
	IplImage *eig_img, *temp_img;
	CvPoint2D32f *corners;

	//image file
	char imagePath[256] = "c:\\images\\building.jpg";
	printf("%s\n", imagePath);

	dst_img1 = cvLoadImage (imagePath, CV_LOAD_IMAGE_ANYCOLOR | CV_LOAD_IMAGE_ANYDEPTH);
	dst_img2 = cvCloneImage (dst_img1);
	src_img_gray = cvLoadImage (imagePath, CV_LOAD_IMAGE_GRAYSCALE);
	eig_img = cvCreateImage (cvGetSize (src_img_gray), IPL_DEPTH_32F, 1);
	temp_img = cvCreateImage (cvGetSize (src_img_gray), IPL_DEPTH_32F, 1);
	corners = (CvPoint2D32f *) cvAlloc (corner_count * sizeof (CvPoint2D32f));

	// (1)Corner detection using cvCornerMinEigenVal
	cvGoodFeaturesToTrack (src_img_gray, eig_img, temp_img, corners, &corner_count, 0.1, 15);
	cvFindCornerSubPix (src_img_gray, corners, corner_count,
					  cvSize (3, 3), cvSize (-1, -1), cvTermCriteria (CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03));
	// (2)Draw the detected corner
	for (i = 0; i < corner_count; i++)
	cvCircle (dst_img1, cvPointFrom32f (corners[i]), 3, CV_RGB (255, 0, 0), 2);

	//Message for debugging
	printf("MinEigenVal corner count = %d\n", corner_count);

	// (3)Corner detection using cvCornerHarris
	corner_count = 150;
	cvGoodFeaturesToTrack (src_img_gray, eig_img, temp_img, corners, &corner_count, 0.1, 15, NULL, 3, 1, 0.01);
	cvFindCornerSubPix (src_img_gray, corners, corner_count,
					  cvSize (3, 3), cvSize (-1, -1), cvTermCriteria (CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03));
	// (4)Draw the detected corner
	for (i = 0; i < corner_count; i++)
	cvCircle (dst_img2, cvPointFrom32f (corners[i]), 3, CV_RGB (0, 0, 255), 2);

	//Message for debugging
	printf("Harris corner count = %d\n", corner_count);

	// (5)Display the result
	cvNamedWindow ("EigenVal", CV_WINDOW_AUTOSIZE);
	cvShowImage ("EigenVal", dst_img1);
	cvNamedWindow ("Harris", CV_WINDOW_AUTOSIZE);
	cvShowImage ("Harris", dst_img2);
	cvWaitKey (0);

	cvDestroyWindow ("EigenVal");
	cvDestroyWindow ("Harris");
	cvReleaseImage (&dst_img1);
	cvReleaseImage (&dst_img2);
	cvReleaseImage (&eig_img);
	cvReleaseImage (&temp_img);
	cvReleaseImage (&src_img_gray);

	return 0;
}

Result :
