
When Jouni and I were at the Surface training in Munich, Jouni wrote an application that uses the raw images from the cameras. The table contains five cameras: four that cover the corners and one that covers the center. A process then converts the raw images into a binary version, with each pixel either black or white, for faster processing. The screen is divided into a grid of one-inch squares. The next step is to place each frame's contact data into shared memory and tell the SDK that the data is available and ready to use.
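Roughly, the binarization and hand-off steps could look like the Python sketch below. This is not the Surface SDK's real API: the threshold value, the frame size, and the shared-memory name are assumptions of mine, and the actual SDK notification is only hinted at in a comment.

    import numpy as np
    from multiprocessing import shared_memory

    THRESHOLD = 128  # assumed grey level separating "black" from "white" pixels

    def binarize(raw_frame):
        """Collapse a greyscale camera frame to 0/1 pixels for cheaper processing."""
        return (raw_frame >= THRESHOLD).astype(np.uint8)

    # One stitched greyscale frame built from the five cameras (fake data here).
    raw = np.random.randint(0, 256, size=(768, 1024), dtype=np.uint8)
    binary = binarize(raw)

    # Hand the frame to a consumer process through shared memory, then signal
    # that the data is ready (the real SDK notification mechanism is not shown).
    shm = shared_memory.SharedMemory(create=True, size=binary.nbytes, name="frame_buffer")
    view = np.ndarray(binary.shape, dtype=binary.dtype, buffer=shm.buf)
    view[:] = binary          # copy the binary frame into the shared block
    # ... tell the SDK / consumer process that data is available here ...

    shm.close()
    shm.unlink()              # release the shared block when done

Thresholding early keeps the per-frame work cheap: everything downstream only has to deal with black-or-white pixels.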
The use of cameras rather than a plain touch screen is what makes Microsoft Surface so special. Things that happen on the surface are recognized, and even things that happen above the surface are noticed: detection starts about one inch above the tabletop.


[Image: the resulting image]
You mentioned that the screen is divided into a grid of one-inch squares. Do you know why they do this?
Hi Nesher,
I'm not sure, but I assume this is done for performance reasons: each square is checked to see whether any contact is placed on it that needs further processing.
(Sorry for not answering earlier).
Regards
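To illustrate the speculation in the reply above, here is a small sketch (again, not the Surface SDK's API) of how a one-inch grid could be used to limit further processing to squares that actually contain white pixels. The cell size of 32 pixels per inch is an assumption.

    import numpy as np

    CELL_PX = 32  # assumed pixel size of a one-inch square in the stitched image

    def occupied_cells(binary, cell_px=CELL_PX):
        """Return grid coordinates of one-inch squares containing any white pixel.

        Only these squares would need the more expensive contact analysis,
        which would be the performance benefit of the grid."""
        h, w = binary.shape
        cells = []
        for y in range(0, h, cell_px):
            for x in range(0, w, cell_px):
                if binary[y:y + cell_px, x:x + cell_px].any():
                    cells.append((y // cell_px, x // cell_px))
        return cells

    # A fake binary frame with a single finger-sized contact.
    frame = np.zeros((768, 1024), dtype=np.uint8)
    frame[100:115, 400:412] = 1
    print(occupied_cells(frame))   # -> [(3, 12)]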