Friday, January 23, 2009

Contact: The act or state of touching

A contact with the Surface carries more information than simply placing a finger on a traditional touch-enabled screen. In 'Microsoft Surface language', a Contact is any object that touches the Surface screen, and the Surface SDK makes it possible to recognize different kinds of objects. The kinds you can use are Finger, Tagged object, or Blob.
  • Finger is self-explanatory: your finger touching the touch-enabled screen.
  • Tagged objects are objects with an identity tag placed on them, which makes it possible to tell exactly which object has been placed on the Surface table.
  • Blob is any arbitrary untagged object placed on the screen. You might have seen the demo where a wine glass gets a surrounding aura – voilà, a blob.
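To make the three contact kinds concrete, here is a small conceptual sketch. This is not the .NET Surface SDK itself – all names and fields below are illustrative – but it shows how code might branch on which kind of object touched the screen:

```typescript
// Conceptual sketch (illustrative names, not Surface SDK APIs): the three
// contact kinds modeled as a discriminated union.
type Contact =
  | { kind: "finger" }                               // a fingertip on the screen
  | { kind: "tag"; tagValue: number }                // object carrying an identity tag
  | { kind: "blob"; width: number; height: number }; // arbitrary untagged object

// A handler can branch on the contact kind to decide how to react.
function describe(contact: Contact): string {
  switch (contact.kind) {
    case "finger":
      return "finger touch";
    case "tag":
      return `tagged object #${contact.tagValue}`;
    case "blob":
      return `blob of ${contact.width}x${contact.height} pixels`;
  }
}
```

The tag value is what lets an application react to a specific physical object – the wine glass demo only needs the blob's outline, while a tagged game piece can be identified individually.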
OK, so what do we do with contacts? The touch-enabled Surface controls can handle events from user interactions – that is, the different states of the contacts on the screen. These interactions are called gestures or manipulations.

Gestures are user actions whose meaning does not depend on the element being touched: a tapping or pressing action is a tap or a push whether the target is a button or a custom user control. Manipulations, on the other hand, are interactions whose meaning depends on which control is being touched and how – a manipulation has a context. For one control a twirling motion means rotating the control itself, while for another the same user interaction means twisting the control's content.
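The gesture/manipulation distinction can be sketched as follows. Again this is a conceptual illustration, not Surface SDK code – the `Control` interface and both controls are made up – but it shows a gesture meaning the same thing everywhere while the same manipulation is interpreted differently per control:

```typescript
// Conceptual sketch (illustrative, not Surface SDK APIs).
interface Control {
  onTap(): string;                  // gesture: identical semantics everywhere
  onTwirl(degrees: number): string; // manipulation: control-specific meaning
}

const photo: Control = {
  onTap: () => "selected",
  onTwirl: (deg) => `rotated by ${deg} degrees`,   // twirl rotates the photo
};

const dial: Control = {
  onTap: () => "selected",
  onTwirl: (deg) => `value turned up by ${deg}`,   // same twirl twists the dial's content
};
```

A tap on either control yields the same result, but the twirl is resolved against the context of the control receiving it.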

Using the WPF layer of the Surface SDK, the Surface controls can capture the following events:
  • ContactDown
  • ContactUp
  • ContactTapGesture
  • ContactHoldGesture
  • ContactEnter
  • ContactLeave
  • ContactChanged
More about these contact events when we return to the WPF layer in upcoming blog posts.
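As a teaser for how those events get consumed, here is a minimal sketch of wiring handlers to the event names listed above. The event names come from the SDK, but the emitter class itself is purely illustrative – in the real WPF layer you would attach handlers to routed events on the controls instead:

```typescript
// Conceptual sketch: a tiny emitter for the Surface contact-event names.
// The ContactEventSource class is illustrative, not a Surface SDK type.
type ContactEvent =
  | "ContactDown" | "ContactUp" | "ContactTapGesture" | "ContactHoldGesture"
  | "ContactEnter" | "ContactLeave" | "ContactChanged";

class ContactEventSource {
  private handlers = new Map<ContactEvent, Array<() => void>>();

  // Register a handler for one of the contact events.
  on(event: ContactEvent, handler: () => void): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  // Invoke every handler registered for the event.
  raise(event: ContactEvent): void {
    for (const h of this.handlers.get(event) ?? []) h();
  }
}
```

A typical contact lifetime would then raise ContactDown when the object first touches the screen, ContactChanged as it moves, and ContactUp when it is lifted.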
