More than a year ago I started adding Kinect support to ARToolKit using libfreenect. I had some results but didn't have enough time to publish a fully tested version of it. Now I know it isn't very useful, because Ubuntu already detects the Kinect as a webcam (creating a /dev/videoX device for it, or at least that happened to me with Ubuntu 12.04. UPDATE: I figured out that this happens because of the gspca_kinect module introduced in the Linux 3.0 kernel), so you can use it with ARToolKit via the GStreamer video input (a sample pipeline is sketched at the end of this post). Still, I wanted to learn more about how ARToolKit works on the inside. I've now added some more features and support not only for the RGB camera but also for the IR camera, the tilt motor and the LED.
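To give an idea of what those extra features involve, here is a minimal sketch using libfreenect's synchronous wrapper (libfreenect_sync). My patch is based on the asynchronous callback API from the libfreenect examples, so treat this only as an illustration of the calls involved; the include path and the link flags (-lfreenect_sync -lfreenect) may differ depending on how libfreenect was installed.

```c
#include <stdio.h>
#include <stdint.h>
#include "libfreenect_sync.h"   /* may be <libfreenect/libfreenect_sync.h> on your system */

int main(void)
{
    void     *ir_frame;
    uint32_t  timestamp;

    /* Ask the first Kinect (index 0) for one 8-bit IR frame.
     * In this mode the image is 640x488 pixels, one byte per pixel. */
    if (freenect_sync_get_video(&ir_frame, &timestamp, 0,
                                FREENECT_VIDEO_IR_8BIT) < 0) {
        fprintf(stderr, "No Kinect found\n");
        return 1;
    }
    printf("Got IR frame, timestamp %u\n", (unsigned)timestamp);

    freenect_sync_set_tilt_degs(15, 0);   /* tilt the head up 15 degrees   */
    freenect_sync_set_led(LED_GREEN, 0);  /* switch the front LED to green */

    freenect_sync_stop();                 /* stop the background capture thread */
    return 0;
}
```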
The nature of the IR camera makes detecting markers very difficult: they look very blurry unless they are very close to the Kinect. But hey, it's been fun hacking with libfreenect. I'll try to publish the code ASAP; I still want to test it more.
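In the meantime, if you just want the webcam route mentioned above: assuming your ARToolKit build has the GStreamer video module enabled, something along these lines should open the /dev/videoX device that gspca_kinect creates. The exact pipeline string (the caps, the bpp value, use-fixed-fps) is an assumption that depends on your GStreamer 0.10 and ARToolKit versions, so take it as a starting point rather than a recipe.

```c
#include <stdio.h>
#include <AR/video.h>

int main(void)
{
    ARUint8 *image;

    /* Pipeline for ARToolKit's GStreamer video module; the
     * "identity name=artoolkit" element is where ARToolKit taps the frames. */
    char config[] =
        "v4l2src device=/dev/video0 use-fixed-fps=false ! ffmpegcolorspace ! "
        "capsfilter caps=video/x-raw-rgb,bpp=24 ! "
        "identity name=artoolkit ! fakesink";

    if (arVideoOpen(config) < 0) {
        fprintf(stderr, "Could not open the Kinect video device\n");
        return 1;
    }
    arVideoCapStart();              /* start grabbing frames                  */
    image = arVideoGetImage();      /* one RGB frame, or NULL if none yet     */
    printf("Got %s frame\n", image ? "a" : "no");
    arVideoCapStop();
    arVideoClose();
    return 0;
}
```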
Can you publish the code, or at least write about it, so that we can see how you get hold of the depth data? Other people have done it with ROS (http://adriangerardcooke.com/kinect-on-ubuntu-12-04/). I will take a look at that, but if you have also done it, it could be useful.
Sorry, wrong link: http://www.ros.org/wiki/ar_kinect.
Yes, I'll publish it ASAP. It's not very complicated; I based the code on one of the libfreenect basic examples. I hope I can finish it this week, or at least publish a working but simple version.
Be aware that it only reads the data from the camera and depth sensor; it does not do any pose estimation or anything related to that.
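Since you asked specifically about the depth data: the shortest way I know to get at it is through libfreenect's synchronous wrapper, roughly like this. My code uses the callback API instead, so this is just a sketch of the idea.

```c
#include <stdio.h>
#include <stdint.h>
#include "libfreenect_sync.h"

int main(void)
{
    uint16_t *depth;      /* 640x480 frame of 11-bit raw disparity values */
    uint32_t  timestamp;

    if (freenect_sync_get_depth((void **)&depth, &timestamp, 0,
                                FREENECT_DEPTH_11BIT) < 0) {
        fprintf(stderr, "No Kinect found\n");
        return 1;
    }
    /* depth[y * 640 + x] is the raw value for pixel (x, y) */
    printf("Raw depth at the centre pixel: %u\n",
           (unsigned)depth[240 * 640 + 320]);

    freenect_sync_stop();
    return 0;
}
```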
Hello, any updates on the source code?
What you did here answers the question I had in mind. At least I now have proof that ARToolKit can work with the IR camera.
Thank you very much!