Often, in a computer projection system, the projector and the projection screen are not correctly aligned. Sometimes we don't even have a flat (planar) screen. Other times we want to join the images of several projectors to form a single, larger image.
Many modern video projectors provide the means to adjust the image to fit a tilted wall, but these adjustments have their limitations. Fortunately for you, your boss and your co-workers, these things are not a big problem when you're just watching a PowerPoint presentation. But sometimes image quality is critical. For example, in a flight simulator you want full immersion. If the images you see are distorted or not perfectly aligned, your brain won't believe you're in a flying capsule, thousands of feet above the earth. And if you're inside a flight simulator, you want your brain deluded by the images your eyes see!
So, let's say that you want to project the following image:
As you can see, the image is too large to fit on a single projector. So we need to slice it into 3 different images and use 3 horizontally aligned video projectors to create a single large image.
Let's say our flight simulator uses a cylindrical screen, like the one seen from above in the next image:
The first problem is depicted here in blue: if the projections have overlapping parts, we can't get the original large image we're looking for. We end up with something like this:
You can clearly see the problem here: besides not being correctly aligned, the image has brighter spots where the slices overlap. This isn't good for an immersion system.
Moreover, remember that our screen is cylindrical, but the projectors send flat/planar images. This leads to a distorted image, since the projector "expects" the image to hit a flat surface, not a cylindrical one. The problem is even worse when we use a spherical surface.
And how can we solve these problems? Often we can simply adjust the angles of the projectors and use modern (and really expensive!) projectors that can pre-distort the images according to the screen geometry. But this takes a long time and may not be perfect, because there's a lot of human interaction.
So, how about creating a system to automatically detect the screen position and its geometry, so we could mathematically calculate the distortions in advance? Then we could fix the image before it's projected, perfecting the image that hits the screen surface.
This project uses an Arduino and some light sensors to help the computer generating the image detect where the screen edges are. If you look back at our projection system above, this is equivalent to finding where the red dots are on the screen. If our system detects where those dots are, it can adjust the images to be aligned around them. We aren't fixing the geometry distortions yet, but we could minimize that problem with a lot of detection points (red dots) around the image, breaking a spherical surface into a series of small flat surfaces.
The sensor uses an APC220 to establish a wireless connection with the computer generating the image. Using the data collected by the sensor, the computer can adjust the image before sending it to the projectors. (In the video below I've used a USB cable due to some problems with the wireless receiver driver.)
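The post doesn't show the computer-side code, so here's a minimal sketch of how the host could read the corner-sensor states over the APC220's transparent serial link. The port name, baud rate, and one-byte message format (one bit per LDR) are my assumptions, not the original project's protocol:

```python
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # hypothetical port where the APC220 shows up
BAUD = 9600            # assumed baud rate; adjust to your module's setup

def read_sensor_bits(link: serial.Serial) -> list[bool]:
    """Assumed protocol: the Arduino sends one status byte per sample,
    where bits 0-3 flag whether each corner LDR currently sees light."""
    raw = link.read(1)
    if not raw:
        raise TimeoutError("no data from the sensor board")
    status = raw[0]
    return [bool(status & (1 << i)) for i in range(4)]

if __name__ == "__main__":
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        print(read_sensor_bits(link))  # e.g. [True, True, False, False]
```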
You can see the light sensors (LDRs) placed on each corner of our projection screen (the white piece of paper behind the circuitry).
This project is an update of a previous rig I built in 2011. The original project was inspired by ("reverse-engineered" from) a paper published in 2004 by Johnny Chung Lee.
In the original project I followed Johnny's original work, using a sequence of binary patterns to detect the screen position. This method is really fast. But this time I was more concerned with precision than speed, so I replaced the pattern sequence with 2 scanning beams sweeping the screen.
This didn't just give me more precision. It's also more futuristic, with this Hollywood-like light beam crossing the room, scanning every inch of it. The visual effect is so good that it's hard not to think of several other applications for this project, like video mapping.
So, check the video of this rig working:
(Note: some parts of this video were edited to speed up the light beam sweep)
Algorithm:
The surface detection algorithm is pretty simple and straightforward once you understand it: you need to find the 4 corners of the projection surface as it is "seen" by the projector lens (imagine that the projector lens is an eye).
So you just need to find the 4-sided polygon with vertices P1, P2, P3 and P4. Why isn't this simply a rectangle, like the projection surface itself? Because in a rectangle all 4 angles are 90 degrees. Unless the projection surface is perfectly aligned in front of the projector, you'll always get something more like a rhomboid or a trapezoid.
So let's define our four points as P1(x1,y1), P2(x2,y2), P3(x3,y3) and P4(x4,y4). We need to find four vertical positions (y1, y2, y3 and y4) and four horizontal positions (x1, x2, x3 and x4).
I'll explain the process for detecting the horizontal positions, since the vertical detection follows the exact same algorithm, except that it works on the Y-axis instead of the X-axis:
The detection projects a sequence of patterns. Those patterns hit the light sensors placed on the corners of the projection surface, and this information (light or no light) is sent back to the computer, so it can compute an approximation of the four x positions (one for each light sensor). The further we advance in the pattern sequence, the better the approximation of the x positions.
The first pattern divides the projection screen into two halves: the left side is white (lit) and the right side is black (no light). Once this pattern is projected onto the surface, the 4 sensors will receive light or not depending on which half of the screen they are in. So the computer reads the sensors and determines whether each of x1, x2, x3 and x4 is on the left or the right side, depending on whether its sensor is receiving light.
Imagine that we are projecting a Full HD image (1920x1080). At this point, we know that each x corresponds to a pixel either in [0-959] or in [960-1919]. A sketch of how such a pattern could be rendered is shown below.
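The post doesn't include the pattern-generation code; here's a minimal sketch of one way to render such a stripe with NumPy (the function name and the idea of parameterizing it by interval are mine):

```python
import numpy as np

WIDTH, HEIGHT = 1920, 1080  # Full HD projector resolution

def halving_pattern(low: int, high: int) -> np.ndarray:
    """Grayscale frame that is white (lit) on columns [low, mid) and
    black elsewhere. The first pattern uses low=0, high=WIDTH."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)
    mid = (low + high) // 2
    frame[:, low:mid] = 255  # lit columns
    return frame

first = halving_pattern(0, WIDTH)  # left half [0-959] lit, right half dark
```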
So, by repeating this process and halving the pattern width each time, we get better and better approximations: on the first step we detected that x1 was in the left half; on the second step we detect that x1 is in the right quarter (half of a half) of that left half. This goes on until we have good enough approximations for all 4 points. Since each pattern halves the search interval, 11 patterns (2^11 = 2048 ≥ 1920) are enough to narrow each horizontal position down to a single pixel.
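Putting it together, here's a minimal, self-contained sketch of this halving search. In the real rig the readings would arrive from the Arduino over the serial link; here the sensors are simulated with hypothetical positions so the sketch runs stand-alone:

```python
WIDTH = 1920  # Full HD horizontal resolution

TRUE_X = [312, 1544, 1498, 371]  # hypothetical sensor x positions (pixels)

def sample_sensors(lit_from: int, lit_to: int) -> list[bool]:
    """Simulate projecting a pattern that is lit on columns
    [lit_from, lit_to) and reading the four LDRs: True = sees light."""
    return [lit_from <= x < lit_to for x in TRUE_X]

def find_x_positions() -> list[int]:
    # One [low, high) search interval per sensor; every step halves
    # each interval until it pins down a single pixel column.
    intervals = [[0, WIDTH] for _ in range(4)]
    for _ in range(11):  # 11 halvings resolve 1920 columns to 1 pixel
        # Light the left half of each sensor's current interval.
        # (The real rig projects one combined pattern per step; the
        # simulation evaluates it per sensor for simplicity.)
        for i, (low, high) in enumerate(intervals):
            mid = (low + high) // 2
            lit = sample_sensors(low, mid)[i]
            intervals[i] = [low, mid] if lit else [mid, high]
    return [low for low, _ in intervals]

print(find_x_positions())  # converges to TRUE_X
```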
At this point we repeat the same process to detect the vertical positions, and we finally have the 4 points that correspond to the corners of the projection screen.
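The post doesn't say how the original rig applies the correction; as one possible approach, here's a sketch that pre-warps a frame onto the detected quadrilateral using OpenCV's perspective transform (the corner values are placeholders, and the test frame is synthetic):

```python
import cv2
import numpy as np

frame_w, frame_h = 1920, 1080

# The four detected corners of the projection surface, in projector
# pixel coordinates (placeholder values for illustration):
detected = np.float32([[312, 207],    # P1: top-left
                       [1544, 233],   # P2: top-right
                       [1498, 901],   # P3: bottom-right
                       [371, 880]])   # P4: bottom-left

# Where those corners live in the undistorted frame: its own corners.
target = np.float32([[0, 0], [frame_w - 1, 0],
                     [frame_w - 1, frame_h - 1], [0, frame_h - 1]])

# Homography that maps the full frame into the detected quadrilateral;
# the physical projection then lands aligned on the screen.
H = cv2.getPerspectiveTransform(target, detected)

# A synthetic test frame stands in for the actual image slice:
image = np.full((frame_h, frame_w, 3), 40, np.uint8)
cv2.rectangle(image, (0, 0), (frame_w - 1, frame_h - 1), (255, 255, 255), 20)

warped = cv2.warpPerspective(image, H, (frame_w, frame_h))
cv2.imwrite("pre_warped.png", warped)
```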