Documentation.Demos History


January 29, 2012, at 06:36 PM by 87.217.160.138 -
Added line 6:
* [[ObjectModelAcquisition|rgbd-scan-markers]]: Object model acquisition using a marker board.
January 29, 2012, at 06:36 PM by 87.217.160.138 -
Changed lines 1-154 from:
!! Running the viewer (rgbd-viewer)
----

* Binaries are in the @@build/bin/@@ directory. You can give the viewer a try without calibration:
[@
build/bin/rgbd-viewer
@]

If you get an error such as:

[@
libusb couldn't open USB device /dev/bus/usb/001/087: Permission denied.
libusb requires write access to USB device nodes.
FATAL failure: freenect_open_device() failed
@]

Grant your user write access to the device node (note that this must be redone each time the device is re-plugged):
[@
sudo chmod 666 /dev/bus/usb/001/087
@]
Or, for a permanent fix, install the udev rules provided by libfreenect, as sketched below.
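
A minimal sketch of such a rule file (the Microsoft vendor/product IDs below are the usual ones for the Kinect camera, motor and audio devices; the rules file shipped with libfreenect is the authoritative version):
[@
# /etc/udev/rules.d/51-kinect.rules -- illustrative sketch
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ae", MODE="0666"   # camera
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02b0", MODE="0666"   # motor
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ad", MODE="0666"   # audio
@]
Re-plug the Kinect after installing the rule so that it takes effect.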

!!! Switching between backends

There are two supported backends for Kinect devices, @@libfreenect@@ and @@OpenNI/Nite@@. By default, if the @@NESTK_USE_OPENNI@@ CMake variable is enabled, demo programs will choose the `OpenNI backend. To switch to the libfreenect backend, use the @@--freenect@@ command-line option:
[@
build/bin/rgbd-viewer --freenect
@]
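
If you want libfreenect to be the default instead, the variable can also be flipped at configure time; a sketch, assuming an out-of-source build in @@build/@@:
[@
cd build
cmake -DNESTK_USE_OPENNI=0 ..
make
@]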

!!! High resolution mode

When using the `OpenNI backend, you can enable high-resolution mode to get 1280x1024 color images at 10 Hz with the @@--highres@@ option:
[@
build/bin/rgbd-viewer --highres
@]

!!! Running the viewer with calibration

* Just give it the path to the calibration file:
[@
build/bin/rgbd-viewer --calibration kinect_calibration.yml
@]

'''New since `RGBDemo v0.4.0''': if there is a @@kinect_calibration.yml@@ file in the current directory, it will be loaded automatically.
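
The options above should combine as usual; for instance, a calibrated high-resolution session:
[@
build/bin/rgbd-viewer --calibration kinect_calibration.yml --highres
@]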

* You should get a window similar to this:
%height=240px% Attach:viewer_output_main_v2.png

* The main frame is the color-encoded depth image. By moving the mouse, you can see the distance in meters to a particular pixel. With calibration loaded, images are undistorted.

* You can filter out some values and normalize the depth color range with the filters window (Show / Filters). The Edge filter is recommended.
%height=240px% Attach:viewer_output_filters.png

* You can get a very simple depth-threshold based segmentation with Show / Object Detector.
%height=240px% Attach:viewer_output_detection.png

* You can get a 3D view in Show / 3D Window.
%height=240px% Attach:viewer_output_view3d_cloud.png

* By default you get a grayscale point cloud. You can activate color:
%height=240px% Attach:viewer_output_view3d_cloud_color.png

* And finally, textured triangles:
%height=240px% Attach:viewer_output_view3d_triangles.png

* You can also save the mesh using the @@Save current mesh@@ button; it will be stored into a @@current_mesh.ply@@ file that you can open with [[http://meshlab.sourceforge.net/|MeshLab]]:
%height=320px% Attach:viewer_output_meshlab.png

* Or import into [[http://www.blender.org/|Blender]]:
%height=320px% Attach:viewer_output_blender.png

* The associated texture is written into a @@current_mesh.ply.texture.png@@ file and can be loaded into the UV editor in Blender.
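
For a quick look at the saved mesh from a shell (assuming a MeshLab install that accepts a file argument on the command line):
[@
meshlab current_mesh.ply
@]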

!! Getting Infrared Images
----

* You can activate the IR mode in the capture menu. There is also a dual RGB/IR mode that alternates between the two.
%height=280px% Attach:viewer_output_ir.png
'''Note: this is currently only available with the libfreenect backend.'''

!! Moving the Tilt motor
----

This is only possible with the @@libfreenect@@ backend. Open the @@Filters@@ window and set the Kinect tilt with the bottom slider.

!! Replay mode
----

* You can grab RGBDImages using the @@File/Grab Frame@@ command. This stores the files into @@viewXXXX@@ directories (see the Calibration section), which can be replayed later using the fake image grabber, activated with the @@--image@@ option:
[@
build/bin/rgbd-viewer --calibration kinect_calibration.yml --image grab1/view0000
@]

* You can also replay a sequence of images stored in a directory with the @@--directory@@ option:
[@
build/bin/rgbd-viewer --calibration kinect_calibration.yml --directory grab1
@]
This will cycle through the set of @@viewXXXX@@ images inside the @@grab1@@ directory.
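
For reference, a grabbed sequence is laid out roughly as follows, one @@viewXXXX@@ directory per frame (the exact files inside each view depend on the grabber and backend):
[@
grab1/
  view0000/
  view0001/
  view0002/
  ...
@]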

'''Note:''' You will also need a calibration file if you used the `OpenNI backend to grab the images. You can get one by running the viewer and selecting @@File/Save calibration parameters@@.
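
A typical sequence, assuming the parameters were saved under the auto-loaded name @@kinect_calibration.yml@@:
[@
build/bin/rgbd-viewer                  # grab frames, then File/Save calibration parameters
build/bin/rgbd-viewer --calibration kinect_calibration.yml --directory grab1
@]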

!! Interactive scene reconstruction
----

* You can try an experimental interactive scene reconstruction mode using the @@build/bin/rgbd-reconstructor@@ program. This is similar to the interactive mapping of [[http://ils.intel-research.net/projects/rgbd|Intel RGBD]] but still at a preliminary stage. The relative pose between image captures is determined using SURF feature point matching and RANSAC.

In this mode, point clouds are progressively aggregated into a single reference frame, using a surfel representation to avoid duplicates and smooth out the result.

* '''Note:''' As of version 0.5.0, you can enable ICP refinement with the @@--icp@@ option if the @@NESTK_USE_PCL@@ CMake variable was enabled (the default on Linux).
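
For example, assuming @@rgbd-reconstructor@@ accepts the same @@--calibration@@ flag as the viewer:
[@
build/bin/rgbd-reconstructor --calibration kinect_calibration.yml --icp
@]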

!! People detection
----

* Launch @@rgbd-people-tracker@@. You need to specify a configuration file. Here is an example command line:
[@
build/bin/rgbd-people-tracker --config data/tracker_config.yml
@]
Calibration and config files will be loaded automatically if they are in the current directory.

!! Body tracking and gesture recognition
----

* Launch @@rgbd-skeletor@@.
If you strike the calibration pose, you should be able to see your joints. If you are interested in a minimal body-tracking example, have a look at @@nestk/tests/test-nite.cpp@@; enable the @@NESTK_BUILD_TESTS@@ CMake variable to compile it, as sketched below.
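
A configure-and-build sketch (the test binary name is a guess derived from the source file name):
[@
cd build
cmake -DNESTK_BUILD_TESTS=1 ..
make    # then look for a test-nite binary under build/bin
@]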

!! Model acquisition of objects lying on a table
----

* Launch @@rgbd-object@@. You might want to enable the @@--highres@@ flag to get better color textures.
The Kinect must be looking at a dominant plane. Hitting @@Acquire new models@@ should compute a 3D model for each object on the table. Note that objects closer than about 5 cm to each other might get merged into a single model. The models can be saved into individual @@objectXX.ply@@ files using the @@Save meshes@@ button. On the right image you will see a reprojection of the models onto the color image, along with the estimated volume of each object in mm³.
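
For example, with high-resolution textures enabled:
[@
build/bin/rgbd-object --highres
@]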

* Note: PCL support is required by this demo.

!! Using multiple kinects
----

* Launch @@rgbd-multikinect@@. You need to plug each Kinect into a different USB hub. You can set the number of connected devices with the @@--numdevices@@ flag, then switch between devices using the number keys or the @@Devices@@ menu. A calibration file can be set for each device, e.g. @@--calibration2 calibration2.yml@@ sets the parameters of the second device. To calibrate the extrinsics of each Kinect, you can:

# Use the 3D view with calibration mode checked to manually move the current view until it matches the reference one.
# Once you're close to a good alignment, the "Refine with ICP" button can help to finalize the registration.
# Another option is to use the @@calibrate-multiple-kinects@@ program. You will first need to grab images of checkerboards seen by both cameras using uncalibrated @@rgbd-multikinect@@. Then you can call the calibration program with, e.g.:

[@
./calibrate-multiple-kinects grab0 grab1 calibration1.yml calibration2.yml --pattern-size 0.025
@]

@@grab0@@ and @@grab1@@ are the directories containing the grabbed checkerboards: @@grab0@@ corresponds to the reference camera, and @@grab1@@ to the Kinect whose extrinsics will be computed. @@calibration1.yml@@ and @@calibration2.yml@@ are the calibration files containing the intrinsics of each Kinect. These can be obtained automatically from `OpenNI by using @@File/Save Calibration Parameters@@ in @@rgbd-multikinect@@ after activating the corresponding device; the files are usually identical. @@--pattern-size@@ is the same as in the calibration section. If calibration is successful, a @@calibration_multikinect.yml@@ file will be generated, containing the computed @@R_extrinsics@@ and @@T_extrinsics@@ matrices, respectively the 3D rotation matrix and the 3D translation vector of the second camera w.r.t. the first one.

This file can then be fed to @@rgbd-multikinect@@:
[@
./rgbd-multikinect --calibration2 calibration_multikinect.yml --numdevices 2
@]

* Note: PCL support is required by this demo.
to:
* [[Viewer|rgbd-viewer]]: RGBD Image grabber and viewer.
* [[Reconstructor|rgbd-reconstructor]]: Freehand 3D modeling of a scene (RGBD SLAM).
* [[PeopleDetection|rgbd-people-tracker]]: People detector.
* [[BodyTracking|rgbd-skeletor]]: Body tracking and gesture recognition. Wrapper around Nite events.
* [[ObjectModelAcquisition|rgbd-scan-topview]]: Rough object model acquisition from a single top view.
* [[MultipleKinect|rgbd-multikinect]]: Multiple kinect grabber and viewer.
January 26, 2012, at 06:13 PM by 163.117.150.79 -
Changed line 1 from:
!!! Running the viewer (rgbd-viewer)
to:
!! Running the viewer (rgbd-viewer)
Changed lines 23-24 from:
!!!! Switching between backends
to:
!!! Switching between backends
Changed lines 30-31 from:
!!!! High resolution mode
to:
!!! High resolution mode
Changed lines 37-38 from:
!!!! Running the viewer with calibration
to:
!!! Running the viewer with calibration
Deleted line 45:
Changed line 74 from:
!!! Getting Infrared Images
to:
!! Getting Infrared Images
Changed line 81 from:
!!! Moving the Tilt motor
to:
!! Moving the Tilt motor
Changed line 86 from:
!!! Replay mode
to:
!! Replay mode
Changed line 102 from:
!!! Interactive scene reconstruction
to:
!! Interactive scene reconstruction
Changed line 111 from:
!!! People detection
to:
!! People detection
Changed line 120 from:
!!! Body tracking and gesture recognition
to:
!! Body tracking and gesture recognition
Changed line 126 from:
!!! Model acquisition of objects lying on a table
to:
!! Model acquisition of objects lying on a table
Changed line 134 from:
!!! Using multiple kinects
to:
!! Using multiple kinects
January 26, 2012, at 05:47 PM by 163.117.150.79 -
Changed lines 1-5 from:
(:htoc:)

[[<<]]

!! Running the viewer (rgbd-viewer)
to:
!!! Running the viewer (rgbd-viewer)
January 26, 2012, at 05:47 PM by 163.117.150.79 -
Changed line 5 from:
!!! Running the viewer (rgbd-viewer)
to:
!! Running the viewer (rgbd-viewer)
January 26, 2012, at 05:38 PM by 163.117.150.79 -
Added lines 2-3:

[[<<]]
January 26, 2012, at 05:37 PM by 163.117.150.79 -
Deleted lines 0-1:
(:title [=Kinect `RGBDemo v0.6.1=]:)
Deleted lines 1-2:

[[<<]]
January 26, 2012, at 05:37 PM by 163.117.150.79 -
Added lines 1-6:
(:title [=Kinect `RGBDemo v0.6.1=]:)

(:htoc:)

[[<<]]

January 26, 2012, at 05:29 PM by 163.117.150.79 -
Deleted lines 0-2:
!! Running the demos
----

January 26, 2012, at 05:29 PM by 163.117.150.79 -
Changed lines 1-4 from:
!!! Running the viewer
to:
!! Running the demos
----

!!! Running the viewer (rgbd-viewer)