ir tracking in live performance

Here are some notes about how we went about tracking performers' movements in real time for Encoded. For some additional background, please check out Toby K's excellent article Introduction to tracking live performers using threshold tracking and infrared light.

lighting

In order to track performers independently of the computer projections, we lit the scene with infra-red (IR) light. There are a number of ways to do this, but we chose two: traditional stage lights with Lee 181 (Congo Blue) gels, and a specially purchased IR floodlight. For the stage lights, doubling up the Lee 181 gels further reduces the visible light.

Using normal stage lights gives us a number of advantages. They are much more controllable and allow us to light the performers without also lighting the screen/wall that we are projecting onto. The aim is for the performers to be clearly lit with IR light while the background stays as dark as possible. (See Toby K's article for more on this.)

The IR floodlight we used is from InfraRed Intelligence, but similar units can certainly be found elsewhere. The advantage is that the light they emit is invisible to the naked eye, unlike the Congo Blue-gelled stage lights, which are dim but still faintly visible. The disadvantage is that they are floodlights, so it's harder to focus the IR illumination on just the performers and not the background.

camera

We use a black-and-white Firefly MV camera, specifically the 0.3 MP B&W Firefly MV 1394a Camera, 1/3-Inch CMOS, CS-Mount. This offers a good frame rate (60 fps) and sufficient resolution for real-time tracking for our purposes.
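
If you want a quick sanity check that the camera is delivering frames at the expected rate, something like the following sketch (Python/OpenCV) works. The device index, and whether the Firefly MV shows up through OpenCV's FireWire backend on your machine, are assumptions about your particular setup.

```python
# Minimal sketch: open the camera and confirm frame size and frame rate.
# The device index (0) and the FireWire backend are assumptions about
# how the Firefly MV appears on your system.
import cv2

cap = cv2.VideoCapture(0)        # or cv2.VideoCapture(0, cv2.CAP_FIREWIRE)
cap.set(cv2.CAP_PROP_FPS, 60)    # request the camera's full 60 fps

if not cap.isOpened():
    raise RuntimeError("could not open camera")

ok, frame = cap.read()
if ok:
    print("frame size:", frame.shape, "reported fps:", cap.get(cv2.CAP_PROP_FPS))

cap.release()
```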

This camera does not come with a lens. We chose one from Peau Productions, a Vari-Focal CS Lens, 2.8-10mm. The camera has a C/CS mount, so you could choose another lens if you needed a wider field of view or had other particular requirements. We've found this one suitable for most of the performance situations we find ourselves in.

In order to let the IR light through while blocking visible light, we needed to fit a small filter behind the lens of the camera. This requires unscrewing the lens, cutting a small square of filter material and placing it so that it covers the sensor. The lens is then simply screwed back into place. We've found that the lens holds the filter material in place without requiring any adhesive, as long as the filter is carefully cut to size.

The filter that we used is a Lee number 87, purchased from MediaVision Australia. We bought a single 100mm x 100mm sheet, which was more than enough; smaller sheets are probably also available. Each camera only requires a square of about 10mm x 10mm.

processing

We use a small tracking module based on the OpenCV library to process the video from the camera. We don't use blob tracking or skeletal tracking, but lower-level optical-flow tracking. This gives us a matrix each frame describing how each pixel has moved since the previous frame. The advantage is that anything moving can be used to 'stir' the fluid simulation we've been using in our performances. However, this can of course also be a disadvantage if there are other moving objects on stage that you don't want to track. Blob tracking could also be used here, but we've not pursued this much ourselves as we haven't yet had the need. It would allow blobs below a minimum or above a maximum size to be excluded from the analysis, for example, but may also tend to introduce discontinuities into the tracking.
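
As a rough illustration of the kind of per-frame flow field described above, here's a minimal sketch using OpenCV's dense (Farneback) optical flow from Python. The camera index and the stir_fluid hook are placeholders for whatever capture and fluid-simulation code you're actually using.

```python
# Minimal optical-flow sketch (OpenCV, Python). Each iteration produces an
# HxWx2 'flow' array: the (dx, dy) movement of every pixel since the
# previous frame. The stir_fluid() hook is hypothetical.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)   # camera index is an assumption for your setup
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if frame.ndim == 3 else frame

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if frame.ndim == 3 else frame

    # dense optical flow between the previous frame and this one
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # e.g. use the flow magnitude to decide how strongly each point
    # 'stirs' the fluid simulation
    magnitude = np.linalg.norm(flow, axis=2)
    # stir_fluid(flow, magnitude)    # hypothetical fluid-sim hook

    prev_gray = gray

cap.release()
```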

other options

If you don't want to go with the lower-level IR tracking described above, the Microsoft Kinect might be worth considering. It works exceptionally well, and can provide detailed skeletal tracking (i.e. the 3D position of hands, elbows, knees, feet, etc.), which opens up lots of interesting creative possibilities. The downside is that it has only a limited range (around 5m) and tends to get confused in dance performances because of the range of unusual poses and body positions that it encounters.

To get started, an application like Synapse or Simple OpenNI will let you get the skeleton data out via Open Sound Control (OSC), which you can then feed into the audio/visual application of your choice.
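
As a rough sketch of what the receiving end can look like (here using the python-osc library in Python), the port number and address patterns below are illustrative only, so check your tracking application's documentation for what it actually sends.

```python
# Minimal OSC receiver sketch using python-osc. The port (12345) and the
# idea that joint messages arrive as "/righthand x y z" etc. are
# assumptions; consult your tracker's documentation for the real details.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_message(address, *args):
    # e.g. a skeleton tracker might send an address per joint,
    # followed by x, y, z floats
    print(address, args)

dispatcher = Dispatcher()
dispatcher.set_default_handler(on_message)   # catch every incoming message

server = BlockingOSCUDPServer(("127.0.0.1", 12345), dispatcher)
server.serve_forever()
```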

fluid sim

Some basic instructions for those who attended the Boulder, Colorado workshop.