Demo software to visualize, calibrate and process Kinect camera output


This software was partly developed in the RoboticsLab and aims at providing a simple toolkit to start playing with Kinect data and to develop standalone computer vision programs without the hassle of integrating existing libraries. The project is divided into a library called nestk and some demo programs using it. The library itself is easy to integrate into an existing project using CMake: just copy the nestk folder as a subfolder of your project and you should be able to start working with Kinect data (a minimal CMakeLists.txt sketch is given below). You can get more information on the nestk page.
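For instance, a minimal top-level CMakeLists.txt for such a project could look like the sketch below. This is only an illustration: it assumes the nestk folder was copied into your project and that nestk's own CMakeLists.txt defines a library target named nestk (check nestk's files for the exact target and include paths).

cmake_minimum_required(VERSION 2.8)
project(my_kinect_app)

# nestk was copied as a subfolder of this project.
add_subdirectory(nestk)
include_directories(${CMAKE_SOURCE_DIR}/nestk)

add_executable(my_kinect_app main.cpp)
# "nestk" is assumed to be the library target defined by nestk's CMakeLists.
target_link_libraries(my_kinect_app nestk)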

Current features include:

  • Grab Kinect images and visualize / replay them
  • Support for libfreenect and OpenNI/Nite backends
  • Extract skeleton data / hand point position (Nite backend)
  • Integration with OpenCV and PCL
  • Multiple Kinect support and calibration
  • Calibrate the camera to get point clouds in metric space (libfreenect)
  • Export to Meshlab/Blender using .ply files
  • Demo of 3D scene reconstruction using a freehand Kinect
  • Demo of people detection and localization
  • Demo of gesture recognition and skeleton tracking using Nite
  • Demo of 3D model estimation of objects lying on a table (based on PCL table top object detector)
  • Demo of multiple Kinect calibration
  • Linux, MacOSX and Windows support

Support


Please send your questions, patches, etc. to rgbdemo@googlegroups.com . You need to subscribe first; the archives of the mailing list are available at http://groups.google.com/group/rgbdemo .

New: we now also offer custom research and development services through our new company: manctl.

Download


New features since v0.6.0


  • Various bug fixes
  • Compatibility with OpenCV 2.3.1
  • Cleaned up the viewer interface
  • Experimental infrared support with OpenNI (still buggy)
  • Camera/Projector calibration thanks to Mariano Tepper and Christian Parsons

New features since v0.5.0


  • Grabbing and calibration of multiple Kinects
  • Detection and modeling of objects lying on a table
  • Better PCL integration

Videos are available showing the new object detection demo, the multi-Kinect calibration, the 3D freehand reconstruction and the people detection features, along with a snapshot of the skeleton and hand point tracking.

Running test programs from binaries


Mac binaries

You can get Intel Mac binaries here: RGBDemo-0.6.1-Darwin.dmg (LGPL license). Just copy the RGBDemo folder to your Applications directory.

Windows binaries

You can get Win32 binaries here: RGBDemo-0.6.1rc1-Win32.zip (LGPL license).

  • You will have to install the OpenNI/Nite drivers. You can download them from the OpenNI website, or use the copy provided in the Drivers directory.
  • Important: you need to install OpenNI first, then SensorKinect, then Nite, in this order. You can find the license key for Nite on the same website. The free license key for Kinect devices is 0KOIk2JeIBYClPWVnMoRKn5cdY4=

Compiling from source


Compilation on Linux (Ubuntu)

  • OpenCV >= 2.2 is required. It can be downloaded from the OpenCV website. Note: OpenCV 2.3.1 is not supported in RGBDemo 0.6.0; it is supported in the latest github version though.
  • An optional install of PCL (you then need to enable the NESTK_USE_PCL CMake variable; see the example after the build commands below)
  • Install required packages, e.g. on Ubuntu 10.10:
sudo apt-get install libboost-all-dev libusb-1.0-0-dev libqt4-dev libgtk2.0-dev cmake libglew1.5-dev libgsl0-dev libglut3-dev libxmu-dev
  • Untar the source, use the provided scripts to launch cmake and compile:
tar xvfz rgbdemo-0.6.1-Source.tar.gz
cd rgbdemo-0.6.1-Source
./linux_configure.sh
./linux_build.sh
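
If you installed PCL, you can enable its support by turning on the corresponding CMake variable before rebuilding, e.g. (assuming the build directory created by linux_configure.sh):

cmake -DNESTK_USE_PCL=1 build
./linux_build.sh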

Compilation on Mac

You will need:

  • An install of Qt
  • An install of OpenCV >= 2.2.
  • An optional install of PCL (you then need to enable the NESTK_USE_PCL CMake variable)
  • Note: as of version 0.5.0, libusb is included in the library, so there is no need to install it.

Then run the following commands:

tar xvfz rgbdemo-0.6.1-Source.tar.gz
cd rgbdemo-0.6.1-Source
./macosx_configure.sh
./macosx_build.sh

The configure script might ask for libusb installation. Say yes if you don’t have it installed.

If you still experience some issues with libusb, or have a custom install, you can try:

cmake -DLIBUSB_1_INCLUDE_DIR=$HOME/libusb/include -DLIBUSB_1_LIBRARY=$HOME/libusb/lib/libusb-1.0.dylib build

supposing that you have it installed in $HOME/libusb.

Compilation on Windows

It has been tested with MinGW and Visual Studio 10 so far. Note that the OpenNI backend is NOT available for MinGW.

You cannot use both the libfreenect and OpenNI backends on Windows; you have to choose one of them. By default, the OpenNI backend will be compiled.

If you want to compile with the libfreenect backend, you will first need to install the libfreenect drivers from OpenKinect Windows.

If you want to compile using Visual Studio 2008:

  • Install the Qt binaries for MSVC 2008.
  • Install OpenCV >= 2.2.0 from source.
  • An optional install of PCL (you then need to enable the NESTK_USE_PCL CMake variable)
  • Install OpenNI, SensorKinect, and Nite (in this order).
  • Add the Qt bin path to the Path environment variable, or specify the QMAKE path in CMake
  • Run CMake
  • Open the generated solution in Visual Studio.

If you want to compile using Visual Studio 2010:

Here is a step-by-step procedure for MinGW, in case you want to use libfreenect:

  • Install the Qt opensource package for Windows. This will also install MinGW.
  • Add C:\Qt\2010.05\mingw\bin to the Path environment variable
  • Install and run cmake on rgbdemo
  • Disable the NESTK_USE_OPENNI cmake variable
  • Open the CMakeLists.txt in Qt Creator or compile manually using mingw32-make.

Running the viewer


  • Binaries are in the build/bin/ directory. You can give it a try without calibration using:
build/bin/rgbd-viewer

If you get an error such as:

libusb couldn't open USB device /dev/bus/usb/001/087: Permission denied.
libusb requires write access to USB device nodes.
FATAL failure: freenect_open_device() failed

Give access rights to your user with:

sudo chmod 666 /dev/bus/usb/001/087

Or install the udev rules provided by libfreenect.
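
For reference, the libfreenect rules file grants write access to the three Kinect USB devices. A minimal sketch is shown below; the vendor/product IDs are the usual Microsoft Kinect ones, but prefer the rules file shipped with your libfreenect version:

# /etc/udev/rules.d/51-kinect.rules (sketch)
# Xbox NUI Camera
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ae", MODE="0666"
# Xbox NUI Motor
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02b0", MODE="0666"
# Xbox NUI Audio
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ad", MODE="0666"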

Switching between backends

There are two supported backends for Kinect devices: libfreenect and OpenNI/Nite. By default, if the NESTK_USE_OPENNI CMake variable is enabled, demo programs will choose the OpenNI backend. If you want to switch to the libfreenect backend, you can use the --freenect command line option:

build/bin/rgbd-viewer --freenect

High resolution mode

When using the OpenNI backend, you can enable the high RGB resolution mode to get 1280×1024 color images at 10 Hz with the --highres option:

build/bin/rgbd-viewer --highres

Calibrating your Kinect (libfreenect backend)


Note: this is only necessary if you want to use the libfreenect backend.

A sample calibration file is provided in data/kinect_calibration.yml. However, you should be able to get a more accurate mapping by estimating new parameters for each Kinect. Below is the procedure I follow.

1. Build a calibration pattern as shown on the KinectCalibration page. You can use the Chessboard_A4.pdf or Chessboard_A3.pdf file in the data/ directory for this. I recommend printing the chessboard on a sheet of paper and gluing it onto a piece of cardboard. It is no longer necessary to cut the cardboard around the paper.

2. Grab some images of your chessboard using the viewer (File / Grab frame, or Ctrl-G). WARNING: you need to grab images in Dual IR/RGB mode (enable it in the Capture menu). By default it will save them into directories grab1/view????. These directories contain the raw files raw/color.png, raw/depth.raw and raw/intensity.raw, which correspond to the color image, the depth image (in meters), and the IR image normalized to grayscale.

To get an optimal calibration, grabbed images should ensure the following:

  • Cover as much of the image area as possible. Especially check for coverage of the image corners.
  • Try to get the chessboard as close as possible to the camera to get better precision.
  • For depth calibration, you will need some images with both IR and depth. For stereo calibration, however, the depth information is not required, so feel free to cover the IR projector and get very close to the camera to better estimate the IR intrinsics and stereo parameters. The calibration algorithm will automatically determine which grabbed images can be used for depth calibration.
  • Move the chessboard to various angles.
  • I usually grab a set of 30 images to average the errors.
  • Typical reprojection error is < 1 pixel. If you get significantly higher values, it means the calibration probably failed.

3. Run the calibration program:

build/bin/calibrate_kinect_ir --pattern-size 0.025 grab1

The pattern size corresponds to the size in meters of one chessboard square. It should be 0.025 (25 mm) for the A4 model.

This will generate the kinect_calibration.yml file storing the parameters for the viewer, and two files, calibration_rgb.yaml and calibration_depth.yaml, for ROS compatibility.
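
For reference, kinect_calibration.yml is an OpenCV FileStorage YAML file. Its layout is roughly as follows; the values and exact key names here are illustrative, so inspect your generated file:

%YAML:1.0
rgb_intrinsics: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 525., 0., 320., 0., 525., 240., 0., 0., 1. ]
rgb_distortion: !!opencv-matrix
   rows: 1
   cols: 5
   dt: d
   data: [ 0., 0., 0., 0., 0. ]
# depth_intrinsics / depth_distortion follow in the same format, together
# with the rotation and translation between the depth and color cameras.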

Note for the Mac binaries: if there is a grab1 directory in the current directory, it will be loaded automatically.

Running the viewer with calibration

  • Just give it the path to the calibration file:
build/bin/rgbd-viewer --calibration kinect_calibration.yml

New since RGBDemo v0.4.0: if there is a kinect_calibration.yml file in the current directory, it will be loaded automatically.

New since RGBDemo v0.5.0: if you are using the OpenNI backend, then the calibration parameters will be determined automatically.

  • The main frame is the color-encoded depth image. By moving the mouse, you can see the distance in meters at a particular pixel. Images are now undistorted.
  • You can filter out some values and normalize the depth color range with the filter window (Show / Filters). The Edge filter is recommended.
  • You can get a very simple depth-threshold based segmentation with Show / Object Detector.
  • You can get a 3D view with Show / 3D Window. By default you get a grayscale point cloud; you can also activate color rendering and, finally, textured triangles.
  • You can save the mesh using the Save current mesh button; it will be stored into a current_mesh.ply file that you can open with Meshlab.
  • The associated texture is written into a current_mesh.ply.texture.png file and can be loaded into the UV editor in Blender.

Calibrating your Kinect (OpenNI backend)


Note: you might first want to read the previous section to get an overview of the calibration system.

OpenNI comes with a predefined calibration stored in the firmware that can directly output aligned depth and color images with a virtual constant focal length. Most applications will be happy with this calibration and do not require any additional step. However, some computer vision applications such as robotics might need a more accurate calibration. RGBDemo comes with two utilities to refine the predefined calibration.

Using calibrate-openni-intrinsics

This program takes as input a set of images of a checkerboard, as output by rgbd-viewer. It also takes the initial calibration file, which can be created using File / Save calibration file in rgbd-viewer.

Here is an example of usage:

./calibrate-openni-intrinsics --pattern-size 0.025 grab1 calibration.yml

The openni_calibration.yml file should contain the refined parameters.

Getting Infrared Images


  • You can activate the IR mode in the Capture menu. There is also a dual RGB/IR mode alternating between the two modes.

Note: this is currently only available with the libfreenect backend

Moving the Tilt motor


This is only possible with the libfreenect backend. Open the Filters window and you can set the Kinect tilt on the bottom slider.

Replay mode


  • You can grab RGB-D images using the File/Grab Frame command. This stores the files into viewXXXX directories (see the Calibration section), which can be replayed later using the fake image grabber. This can be activated using the --image option:
build/bin/rgbd-viewer --calibration kinect_calibration.yml --image grab1/view0000
  • You can also replay a sequence of images stored in a directory with the --directory option:
build/bin/rgbd-viewer --calibration kinect_calibration.yml --directory grab1

This will cycle through the set of viewXXXX images inside the grab1 directory.

Note: you will also need a calibration file if you used the OpenNI backend to grab the images. You can get one by running the viewer and selecting File/Save calibration parameters.

Interactive scene reconstruction


  • You can try an experimental interactive scene reconstruction mode using the build/bin/rgbd-reconstructor program. This is similar to the interactive mapping of Intel RGBD, but still at a preliminary stage. The relative pose between image captures is determined using SURF feature point matching and RANSAC (see the sketch at the end of this section).

In this mode, point clouds are progressively aggregated into a single reference frame using a surfel representation to avoid duplicates and smooth out the result.

  • Note: as of version 0.5.0, you can enable ICP refinement with the --icp option if the NESTK_USE_PCL CMake variable was enabled (the default on Linux).
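
To make the idea concrete, here is a minimal sketch of this kind of pairwise registration; it is an illustration, not RGBDemo's actual code. It matches SURF features between two color images, back-projects the matched pixels to 3D using the depth images and the RGB intrinsics, and estimates the relative transform with RANSAC via cv::estimateAffine3D. The file names, image size, depth file layout (assumed to be a headerless 640x480 float dump in meters) and intrinsics are illustrative; note that SURF lives in OpenCV's features2d module up to 2.3 and moves to the nonfree module in 2.4.

#include <cstdio>
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

// Assumed loader for the raw/depth.raw dumps: headerless 640x480 floats.
static cv::Mat1f loadRawDepth(const char* path)
{
    cv::Mat1f depth(480, 640);
    FILE* f = std::fopen(path, "rb");
    if (!f) { std::cerr << "cannot open " << path << std::endl; std::exit(1); }
    std::fread(depth.data, sizeof(float), 640 * 480, f);
    std::fclose(f);
    return depth;
}

int main()
{
    cv::Mat gray1 = cv::imread("grab1/view0000/raw/color.png", 0);
    cv::Mat gray2 = cv::imread("grab1/view0001/raw/color.png", 0);
    cv::Mat1f depth1 = loadRawDepth("grab1/view0000/raw/depth.raw");
    cv::Mat1f depth2 = loadRawDepth("grab1/view0001/raw/depth.raw");

    // Illustrative RGB intrinsics; take the real ones from your calibration.
    const float fx = 525.f, fy = 525.f, cx = 319.5f, cy = 239.5f;

    // 1. Detect and describe SURF features in both color images.
    cv::SurfFeatureDetector detector(400);
    cv::SurfDescriptorExtractor extractor;
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    detector.detect(gray1, kp1); extractor.compute(gray1, kp1, desc1);
    detector.detect(gray2, kp2); extractor.compute(gray2, kp2, desc2);

    // 2. Brute-force descriptor matching.
    cv::Ptr<cv::DescriptorMatcher> matcher =
        cv::DescriptorMatcher::create("BruteForce");
    std::vector<cv::DMatch> matches;
    matcher->match(desc1, desc2, matches);

    // 3. Back-project the matches that have a valid depth to 3D (meters).
    std::vector<cv::Point3f> pts1, pts2;
    for (size_t i = 0; i < matches.size(); ++i)
    {
        const cv::Point2f& p1 = kp1[matches[i].queryIdx].pt;
        const cv::Point2f& p2 = kp2[matches[i].trainIdx].pt;
        float z1 = depth1((int)p1.y, (int)p1.x);
        float z2 = depth2((int)p2.y, (int)p2.x);
        if (z1 <= 0.f || z2 <= 0.f) continue; // no depth measured there
        pts1.push_back(cv::Point3f((p1.x-cx)*z1/fx, (p1.y-cy)*z1/fy, z1));
        pts2.push_back(cv::Point3f((p2.x-cx)*z2/fx, (p2.y-cy)*z2/fy, z2));
    }

    // 4. RANSAC estimate of the 3D transform between the two point sets.
    //    estimateAffine3D fits a general affine model; a real pipeline would
    //    constrain or refine it to a rigid transform (cf. the --icp option).
    cv::Mat transform;
    std::vector<uchar> inliers;
    cv::estimateAffine3D(pts1, pts2, transform, inliers, 0.01, 0.99);
    std::cout << "3x4 transform:\n" << transform << std::endl;
    return 0;
}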

People detection


  • Launch rgbd-people-tracker. You need to specify a configuration file. Here is an example command line:
build/bin/rgbd-people-tracker --config data/tracker_config.yml

Calibration and config files will be loaded automatically if they are in the current directory.

Body tracking and gesture recognition


  • Launch rgbd-skeletor.

If you make the calibration pose, you should be able to see your joints. If you are interested in a minimal body tracking example, have a look at nestk/tests/test-nite.cpp. Enable the NESTK_BUILD_TESTS CMake variable to compile it (in the same way as NESTK_USE_PCL in the compilation section).

Model acquisition of objects lying on a table


  • Launch rgbd-object. You might want to enable the --highres flag to get better color textures.

The Kinect must be looking at a dominant plane. Hitting “Acquire new models” should compute a 3D model for all the objects on the table. Note that objects that are too close to each other (within about 5 cm) might get merged into a single one. The models can be saved into individual objectXX.ply files using the “Save meshes” button. On the right image you will see a reprojection of the models onto the color image, along with the estimated volume of each object in mm³. A rough sketch of the underlying pipeline is given after the note below.

  • Note: PCL support is required by this demo.
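
A rough sketch of such a table-top pipeline is shown below; this is an illustration of the technique, not the actual PCL table top object detector code. It segments the dominant plane with RANSAC, removes its points, and clusters what remains into individual objects. The input file name and the thresholds are assumptions (PCL 1.x API):

#include <vector>
#include <pcl/ModelCoefficients.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/search/kdtree.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>

int main()
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile("scene.pcd", *cloud); // assumed input cloud

    // 1. Find the dominant plane (the table) with RANSAC.
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    pcl::PointIndices::Ptr plane(new pcl::PointIndices);
    pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.01); // 1 cm plane tolerance
    seg.setInputCloud(cloud);
    seg.segment(*plane, *coeffs);

    // 2. Remove the plane points, keeping the objects lying on the table.
    pcl::PointCloud<pcl::PointXYZ>::Ptr objects(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::ExtractIndices<pcl::PointXYZ> extract;
    extract.setInputCloud(cloud);
    extract.setIndices(plane);
    extract.setNegative(true);
    extract.filter(*objects);

    // 3. Euclidean clustering: points closer than 5 cm end up in the same
    //    cluster, which matches the merging behavior described above.
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    std::vector<pcl::PointIndices> clusters;
    pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
    ec.setClusterTolerance(0.05); // 5 cm
    ec.setMinClusterSize(100);
    ec.setSearchMethod(tree);
    ec.setInputCloud(objects);
    ec.extract(clusters);
    // Each entry of "clusters" indexes one candidate object model.
    return 0;
}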

Using multiple kinects


  • Launch rgbd-multikinect. You need to plug each Kinect into a different USB hub. You can set the number of connected devices with the --numdevices flag. Then you can switch between devices using the number keys, or the Devices menu. A calibration file can be set for each device, using e.g. --calibration2 calibration2.yml to set the parameters of the second device. To calibrate the extrinsics of each Kinect, you can:
  1. Use the 3D view and check the calibration mode option to manually move the current view until it matches the reference one.
  2. Once you’re close to a good alignment, the “Refine with ICP” button can help to finalize the registration.
  3. Another option is to use the calibrate-multiple-kinects program. You will first need to grab images of checkerboards seen by both cameras using uncalibrated rgbd-multikinect. Then you can call the calibration program with, e.g.:

./calibrate-multiple-kinects grab0 grab1 calibration1.yml calibration2.yml --pattern-size 0.025

grab0 and grab1 are the directories containing the grabbed checkerboards; grab0 corresponds to the reference camera, and grab1 to the Kinect whose extrinsics will be computed. calibration1.yml and calibration2.yml are the calibration files containing the intrinsics of each Kinect. These can be obtained automatically from OpenNI by using File/Save Calibration Parameters in rgbd-multikinect after activating the corresponding device. These files are usually identical, though. --pattern-size is the same as in the calibration section. If calibration is successful, a calibration_multikinect.yml file will be generated, containing the computed R_extrinsics and T_extrinsics matrices, respectively the 3D rotation matrix and the 3D translation vector of the second camera w.r.t. the first one (a minimal example of reading and applying these matrices is given at the end of this section).

This file can then be fed to rgbd-multikinect:

./rgbd-multikinect --calibration2 calibration_multikinect.yml --numdevices 2
  • Note: PCL support is required by this demo.
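
To use the result in your own code, here is a minimal sketch (an illustration, not part of RGBDemo) of reading the generated extrinsics with OpenCV and mapping a point from the second camera into the reference frame, assuming the matrices are stored as standard OpenCV YAML matrices:

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    // Read the extrinsics produced by calibrate-multiple-kinects.
    cv::FileStorage fs("calibration_multikinect.yml", cv::FileStorage::READ);
    cv::Mat R, T;
    fs["R_extrinsics"] >> R; // 3x3 rotation of camera 2 w.r.t. camera 1
    fs["T_extrinsics"] >> T; // 3x1 translation of camera 2 w.r.t. camera 1

    // A sample 3D point seen by the second Kinect (meters).
    cv::Mat p2 = (cv::Mat_<double>(3, 1) << 0.1, 0.2, 1.0);

    // With the usual convention, the same point in the reference frame is:
    cv::Mat p1 = R * p2 + T;
    std::cout << p1 << std::endl;
    return 0;
}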