Tuesday, August 16, 2011

Kinect and OpenKinect on Apple Mac OSX

Adrian, in Kinect-3D!
The Kinect is a revolutionary sensor for robotics - the low price point has driven an explosion of interest and applications in the field. (Hopefully we will see the same for the Neato XV11 low-cost laser scanner one day.) I was very impressed by the data the Kinect returns; it is a fantastic device.

To get started, just follow the OSX install instructions at OpenKinect. You will need MacPorts. A copy of the instructions is here:
sudo port install git-core
sudo port install libtool
sudo port install libusb-devel
git clone https://github.com/OpenKinect/libfreenect.git
cd libfreenect
mkdir build
cd build
ccmake ..
cmake ..
make
sudo make install
(In ccmake, press 'c' to configure and then 'g' to generate the makefiles.) Very simple.

Now check under System Profiler to see that the Kinect device is detected.

I had the following error when I tried to read from the device:
Isochronous transfer error: 1 

Thankfully, this was easy to solve with the fully self-contained installer from the AS3 Kinect OSX download package. You should now be able to read the color image and depth map, and command the tilt servo.
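If you would rather grab frames from your own code than rely on the bundled demos, libfreenect also ships a synchronous wrapper that hides the usual callback plumbing. Below is a minimal sketch, assuming the libfreenect_sync wrapper was built and installed along with the library (headers under /usr/local/include/libfreenect, linked against -lfreenect_sync -lfreenect):

#include <stdio.h>
#include <stdint.h>
#include "libfreenect_sync.h"

int main() {
    void *depth = 0;
    void *rgb = 0;
    uint32_t ts_depth = 0, ts_rgb = 0;

    // Grab one 11-bit depth frame (640x480 uint16_t values) from device 0.
    if (freenect_sync_get_depth(&depth, &ts_depth, 0, FREENECT_DEPTH_11BIT) < 0) {
        fprintf(stderr, "No depth frame - is the Kinect plugged in?\n");
        return 1;
    }

    // Grab one RGB frame (640x480, 3 bytes per pixel).
    if (freenect_sync_get_video(&rgb, &ts_rgb, 0, FREENECT_VIDEO_RGB) < 0) {
        fprintf(stderr, "No video frame\n");
        return 1;
    }

    printf("Raw depth value at the center pixel: %u\n",
           (unsigned)((uint16_t *)depth)[240 * 640 + 320]);

    // Command the tilt servo: nod up 15 degrees, then back to level.
    freenect_sync_set_tilt_degs(15, 0);
    freenect_sync_set_tilt_degs(0, 0);

    freenect_sync_stop();
    return 0;
}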

The next step will likely be to interpret the data in some form. The Point Cloud Library (PCL) will be very useful for processing it. A tutorial on the Point Cloud Library is available, although many features are only glossed over.

But to begin with PCL, you will need Eigen, a fast vector and matrix math library (also used in MRPT).
First download Eigen and unzip it. Building Eigen is quite straightforward:
cd eigen-eigen-3.0.1
mkdir build
cd build/
cmake ../
sudo make install
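
Eigen is header-only, so the install step above simply copies the headers into /usr/local/include/eigen3 - there is nothing to compile or link. Here is a quick sanity check you can try (the numbers are made up purely for illustration), rotating a point 90 degrees about the z-axis:

#include <iostream>
#include <Eigen/Dense>

int main() {
    // Rotation of 90 degrees about the z-axis.
    Eigen::Matrix3f R;
    R << 0, -1, 0,
         1,  0, 0,
         0,  0, 1;

    Eigen::Vector3f p(1.0f, 0.0f, 0.5f);   // a point in front of the sensor
    Eigen::Vector3f q = R * p;             // rotate it

    std::cout << "Rotated point: " << q.transpose() << std::endl;
    return 0;
}

Build it with g++ eigen_test.cpp -I /usr/local/include/eigen3/ -o eigen_test (no libraries required).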

PCL has a self-installing package, and will install itself to /usr/local/lib/libpcl_*
Here is a simple test program from the tutorial:


#include <iostream>
#include <cstdlib>            // rand(), RAND_MAX
#include "pcl/io/pcd_io.h"
#include "pcl/point_types.h"

int main (int argc, char** argv) {
    pcl::PointCloud<pcl::PointXYZ> cloud;

    // Fill in the cloud data
    cloud.width = 5;
    cloud.height = 1;
    cloud.is_dense = false;
    cloud.points.resize(cloud.width * cloud.height);

    // Populate the five points with random coordinates in [0, 1024).
    for (size_t i = 0; i < cloud.points.size(); ++i) {
        cloud.points[i].x = 1024 * rand() / (RAND_MAX + 1.0);
        cloud.points[i].y = 1024 * rand() / (RAND_MAX + 1.0);
        cloud.points[i].z = 1024 * rand() / (RAND_MAX + 1.0);
    }

    pcl::io::savePCDFileASCII("test_pcd.pcd", cloud);
    std::cerr << "Saved " << cloud.points.size() << " data points to test_pcd.pcd." << std::endl;

    for (size_t i = 0; i < cloud.points.size(); ++i)
        std::cerr << "    " << cloud.points[i].x << " " << cloud.points[i].y << " " << cloud.points[i].z << std::endl;

    return 0;
}

This can be compiled and linked directly with:
g++ pcd_write.cpp -c -I /usr/local/include/pcl-1.0/ -I /usr/local/include/eigen3/
g++ pcd_write.o /usr/local/lib/libpcl_io.dylib -o pcd_write
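
To check the file on disk is valid, you can read it straight back in with PCL's loader. Here is a minimal sketch mirroring the write example above (loadPCDFile should return a negative value on failure); it compiles and links exactly like pcd_write:

#include <iostream>
#include "pcl/io/pcd_io.h"
#include "pcl/point_types.h"

int main (int argc, char** argv) {
    pcl::PointCloud<pcl::PointXYZ> cloud;

    // Load the cloud written by pcd_write back into memory.
    if (pcl::io::loadPCDFile<pcl::PointXYZ>("test_pcd.pcd", cloud) < 0) {
        std::cerr << "Could not read test_pcd.pcd" << std::endl;
        return 1;
    }

    std::cout << "Loaded " << cloud.width * cloud.height
              << " data points from test_pcd.pcd" << std::endl;

    for (size_t i = 0; i < cloud.points.size(); ++i)
        std::cout << "    " << cloud.points[i].x << " "
                  << cloud.points[i].y << " "
                  << cloud.points[i].z << std::endl;

    return 0;
}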

Finally, here is a YouTube video showing some very impressive visual SLAM surface reconstruction with the Kinect (see the KinectFusion project page).


Of course, you will probably want some image stabilization technology to get smooth input videos, so why not try an Owl?


Happy Hacking!
