Sometimes, when projecting computer images, you need to adjust the projector according to the geometry and position of the projection surface. For example:

  • You need to align the images from two or more projectors to create a single large image on a screen
  • You need to project onto a screen that is not rectangular
  • You need to project onto a screen that is not orthogonal to the projector

In these cases you need to physically distort the image before it reaches the screen: you can move, tilt, or flip the projector, or place a window between the projector and the screen.

But what if you could do that in software? What if you could just press a button and the whole projection system would calibrate itself, detecting the position and shape of the screen and adjusting the image BEFORE it is projected, so that the projected image you see in the end is perfect?

Well, a guy called Johnny Chung Lee did exactly this. In a 2004 paper he presented a technique based on light sensors placed around the screen. Before projecting the desired images, his system projected a series of light patterns and used the sensor readings to detect the position of the screen.

This isn't really complicated. Whenever the projector shows a new pattern, the system checks every sensor and records whether it is sensing light at that moment. Since each pattern is a set of binary stripes, the sequence of light/no-light readings at a sensor spells out its screen coordinate in binary: after running through all the patterns, the system can compute each sensor's position from which frames that sensor did and did not receive light.
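To make the idea concrete, here is a minimal sketch of the decoding step in C++. It assumes plain binary stripe patterns (one coordinate bit per projected frame), which is a simplification of Lee's Gray-coded patterns; the function and variable names are mine, not from the paper.

```cpp
#include <cstdint>
#include <vector>
#include <iostream>

// Decode one screen coordinate from a series of binary stripe patterns.
// Assumption: for pattern k, pixel column x is lit iff bit k of x is 1,
// so a sensor's light/no-light readings spell out its x coordinate in
// binary. Repeat with horizontal stripes to get y.
uint32_t decodeCoordinate(const std::vector<bool>& sawLight) {
    uint32_t coord = 0;
    for (size_t bit = 0; bit < sawLight.size(); ++bit) {
        if (sawLight[bit]) coord |= (1u << bit);
    }
    return coord;
}

int main() {
    // Example: 10 patterns cover a 1024-pixel-wide projection. A sensor
    // that saw light on patterns 1, 3, 5, 6, 7, and 9 sits at
    // x = 0b1011101010 = 746.
    std::vector<bool> readings = {false, true, false, true, false,
                                  true,  true, true,  false, true};
    std::cout << "sensor x = " << decodeCoordinate(readings) << "\n";
    return 0;
}
```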

The video below shows my version of Johnny Lee's experiment; it was essentially a reverse-engineering exercise. Actually, my project had one advantage: I didn't need to send Gray code patterns, since I was controlling the clock on both ends. :)
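For reference, Gray code orders the stripe patterns so that adjacent screen positions differ in exactly one pattern, which limits the damage when a reading at a stripe boundary is ambiguous. Here is a minimal sketch of the conversion, using the standard bit tricks rather than code from either project:

```cpp
#include <cstdint>

// Binary -> Gray: adjacent values differ in exactly one bit.
uint32_t toGray(uint32_t b) { return b ^ (b >> 1); }

// Gray -> binary: each binary bit is the XOR of all higher Gray bits,
// computed here with the shift-doubling trick.
uint32_t fromGray(uint32_t g) {
    for (uint32_t shift = 1; shift < 32; shift <<= 1) {
        g ^= g >> shift;
    }
    return g;
}
```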

I used an Arduino and cheap LDRs to detect the patterns. If I had used solid-state light detectors instead, it could have been really fast!
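For the curious, here is a minimal Arduino-style sketch of what the sensing side could look like. The pin, threshold, frame timing, and delay-based synchronization are all illustrative assumptions, not the actual code from my project.

```cpp
// Sample one LDR (in a voltage divider on A0) once per projected
// pattern and record whether it saw light. All constants below are
// illustrative assumptions, not values from the original project.
const int LDR_PIN = A0;
const int LIGHT_THRESHOLD = 512;    // tune for your LDR and divider
const int NUM_PATTERNS = 10;        // e.g. 10 bits for a 1024-px axis
const unsigned long FRAME_MS = 100; // LDRs are slow; give them time

bool sawLight[NUM_PATTERNS];

void setup() {
    Serial.begin(9600);
    for (int bit = 0; bit < NUM_PATTERNS; ++bit) {
        delay(FRAME_MS);  // crude sync: wait for pattern `bit` to show
        sawLight[bit] = analogRead(LDR_PIN) > LIGHT_THRESHOLD;
    }
    // Reassemble the bits into the sensor's coordinate (plain binary,
    // since the projector and the Arduino share the frame clock).
    unsigned int coord = 0;
    for (int bit = 0; bit < NUM_PATTERNS; ++bit) {
        if (sawLight[bit]) coord |= (1u << bit);
    }
    Serial.print("sensor coordinate: ");
    Serial.println(coord);
}

void loop() {}
```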