Sunday, January 9, 2011

Sonicspree on Surface 2

The new version of Sonicspree is completely rewritten from the ground up. It has a new design, new architecture and new frameworks, but the same engaging and social gameplay. The goal of the game is to find the album cover matching the song currently playing before one of your competitors does. The faster you do it, the more points you get; if you guess wrong, you lose some of your points.
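The scoring rule described above can be sketched as a small function. This is only an illustration: the exact point values, the decay rate and the penalty are my own assumptions, not taken from the game.

```python
def score_guess(correct, seconds_elapsed, max_points=100, penalty=25):
    """Hypothetical Sonicspree-style scoring: a correct guess earns more
    points the faster it comes; a wrong guess costs a fixed penalty.
    All constants here are illustrative, not the game's real values."""
    if not correct:
        return -penalty
    # Points decay linearly with time, but a correct guess never earns less than 10.
    return max(10, max_points - int(seconds_elapsed * 3))
```

With these made-up constants, an instant correct guess earns 100 points, one after ten seconds earns 70, and a wrong guess costs 25.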

One of the biggest challenges with the first version of Sonicspree was how to handle music and album covers when installing the application on different units. That version used local MP3 files with embedded artwork. This time we wanted a more flexible solution, so we turned to the Swedish music streaming service Spotify. This version of Sonicspree is powered by music from Spotify.

Sonicspree_3_

Sonicspree was built using Visual Studio 2010 and Blend 4 in tight collaboration between developers, user interface designers and interaction designers. To keep a clear separation in the developer/designer workflow we used the Model-View-ViewModel (MVVM) design pattern and the MVVM Light Toolkit.
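The developer/designer split that MVVM enables can be illustrated with a minimal, framework-free sketch (Python for brevity; the real app used C#, XAML bindings and the MVVM Light Toolkit, and the class and property names below are invented for illustration):

```python
class Song:
    """Model: plain data, no knowledge of the UI."""
    def __init__(self, title, album_art_url):
        self.title = title
        self.album_art_url = album_art_url

class GameViewModel:
    """ViewModel: exposes state to the View and raises change
    notifications, the role ViewModelBase plays in MVVM Light."""
    def __init__(self):
        self._current_song = None
        self._listeners = []

    def on_property_changed(self, callback):
        self._listeners.append(callback)

    @property
    def current_song(self):
        return self._current_song

    @current_song.setter
    def current_song(self, song):
        self._current_song = song
        for notify in self._listeners:
            notify("current_song")  # in WPF, data bindings react here
```

The designers can then restyle the View freely, because it only binds to ViewModel properties and never touches the model directly.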

One game in Sonicspree now consists of several game rounds, and each game round consists of five songs. The dice still play an important role in Sonicspree: they can be rolled before each game round to decide which genres to use.

Sonicspree_1

When a song starts playing, the players drag the hidden album covers to the center of the table to reveal them. When someone finds the correct cover, they drag it home to their own nest to make a guess. If the guess is correct a new song starts playing; if it is wrong, the nest shakes and spits the cover back out.

Sonicspree_2

Saturday, January 8, 2011

Surface 2 and Sonicspree

This week at CES in Las Vegas, Microsoft showed the next version of Surface to the public for the first time. It was during the Ballmer keynote that the world got a first glimpse of what’s to come. The hardware for the first version of Microsoft Surface was built by Microsoft itself, but this time they have teamed up with Samsung to create the “Samsung SUR40 for Microsoft Surface”, which is the official name. The new unit is pretty much everything you wanted and asked for in an upgrade, plus some extra! It is a 40-inch 1920x1080 display covered with Gorilla Glass (the same kind of glass as in many smartphones). The first version of Surface had a matte glass finish, which is very different from the feel we will now experience.

microsoft_surface_sur40-580x386

The new version is only four inches thick and uses a completely new technology called PixelSense. With PixelSense, every pixel acts like a “camera” and can supply information about what is happening on the glass, and all this information is processed at 60 frames per second. Since it is only four inches thick, the SUR40 can also be wall mounted, and the SDK will notice what angle the unit is at and can respond accordingly. It will only detect changes in angle in landscape mode, not in portrait mode. An application can change its appearance based on how much the monitor is tilted. As a developer you will not get an event when the tilt changes, but that is understandable since it still is a pretty large piece of hardware!

There has been a TAP program running for the new Surface since May 2010. Connecta has once again teamed up with Ergonomidesign to develop an application on the new hardware with the new SDK. One of the biggest challenges we have had is the limited access to hardware. The first time we got to see it was in November, on site at Microsoft in Redmond, and we are still waiting for continuous access to hardware. So we have basically developed everything using only the new SDK and sent bits to the Surface team in Redmond for testing. The reports we have gotten back have been positive, and that really is a good rating for the SDK, i.e. it is possible to develop great apps without having access to hardware. But to get it perfect you still need hardware, because several people using the app at the same time will expose things you never see in a simulated environment.

Speaking of simulation, the old simulator from v1 is gone. Please welcome the Microsoft Surface Input Simulator!

inputsimulator

With the Microsoft Surface Input Simulator, the application runs in full screen on your monitor and input is simulated to let you as a developer work with fingers, blobs and tags, similar to how it was done in v1. The input simulator can also simulate monitor tilt in 360 degrees. One limitation so far is that your screen resolution has to be 1920x1080 to get the exact same view as on real hardware, but that high resolution is not a requirement to start your app.

The application that we have created for the Samsung SUR40 for Microsoft Surface is a new version of the popular music game Sonicspree. We will write more about it in coming posts, but here are the new logo and a screenshot.

icon

gameplay

Please follow #Sonicspree on Twitter for continuous news about where you will be able to experience Sonicspree and the Samsung SUR40 for Microsoft Surface in the coming months.

Friday, August 6, 2010

Swag

The Surface team is taking Swag to a whole new level!

Thanks Anders for the lovely swim cap!



Monday, April 12, 2010

Surface Toolkit for Windows Touch Beta!

Today is a big day for a Microsoft developer like myself. Not only is Visual Studio 2010 released, but we also see a long-awaited sign of life from the Surface team! Today, for Surface Partners only, Microsoft releases the Microsoft Surface Toolkit for Windows Touch Beta. This was first announced at PDC and is said to contain Microsoft Surface controls, templates, and samples that make it easy to create applications optimized for multi-touch interaction that run on Windows Touch PCs.

The Microsoft Surface Toolkit for Windows Touch Beta is a set of controls, APIs, templates, sample applications and documentation currently available to Surface developers. With the .NET Framework 4.0, Windows Presentation Foundation 4.0, and this toolkit, Windows Touch developers can quickly and consistently create advanced multi-touch experiences for Windows Touch PCs. One really interesting part is that this toolkit is supposed to provide a jump-start for Surface application developers to prepare for the next version of Microsoft Surface.

What the next version of Surface is, or when it arrives, is still hidden from the masses, but boy, do I look forward to it!

I will be back with more after I have downloaded and played with the toolkit!


Tuesday, March 2, 2010

Mobile Surface coming?

At the only Surface session at PDC09, Robert Levy said that they are looking at ways to make Surface smaller, cheaper and vertical. This is of course very interesting, especially to see how they have solved one of the key features of Surface - interaction with physical objects.

This is now starting to bear fruit, and rumors are surfacing that something will appear at TechFest 2010. TechFest is an internal annual event that brings researchers from Microsoft Research's locations around the world to Redmond to share their latest work.

http://blogs.zdnet.com/microsoft/?p=5435&tag=col1;post-5435

Thursday, February 11, 2010

A case of OCGM

Recently there has been some talk about OCGM and its impact on NUI. OCGM is a design philosophy proposed by Ron George and is supposed to be to NUI what WIMP is to GUI. OCGM is pronounced Occam, as in Occam’s Razor, and it is an abbreviation for:

  • Objects – “Objects are the core of the experience. They can have a direct correlation with something physical, or they can just be objects in the interface.”
  • Containers – “Containers will be the “grouping” of the objects. This can manifest itself in whatever the system sees fit to better organize or instruct the user on interactions. They do not have to be, nor should they be, windows. They can be any sort of method of presentation or relationship gathering as seen fit.”
  • Gestures – “Gestures are actions performed by the user that initiate a function after its completion and recognition by the system. This is an indirect action on the system because it needs to be completed before the system will react to it.”
  • Manipulations – “Manipulations are the direct influences on an object or a container by the user. These are immediate and responsive. They are generally intuitive and mimic the physical world in some manner. The results are expected and should be non-destructive. These are easily performed and accidental activations should be expected and frequent.”

I’ve cited Ron from his first post about OCGM. I recommend you read the post and I also recommend reading this paper about OCGM by Ron George and Joshua Blake.

To understand OCGM further, I would like to do a little retrospective on one of my previous Microsoft Surface applications and see how the application fits into OCGM (or should it be the other way around?).

My first Surface project was SonicSpree. To summarize the application: SonicSpree is a game of guessing songs, where the player’s goal is to match the song currently playing with its corresponding album art. The actual game element is to find the correct album art and then drag it into the player’s nest / home. A simple idea. Finding the album art, though, is like playing Memory. From the start, all album art cards are facing down, but a card can be flipped by dragging it into the center. When it is face up, the player can make a guess by dragging the card into his or her home to receive a point.

sonicspree
If we start by identifying the kinds of objects used in SonicSpree, the most obvious one is of course the album art card the users actually interact with to play the game. The other kind of objects used in SonicSpree are the physical dice: a new game round is started by throwing the dice onto the Surface.

As for the containers, SonicSpree uses two of them: the player’s nest and “the edge”, as we have called it during development. The nest holds the correct album art the player has collected, and the mysterious edge is the container holding all album art cards that are not currently being interacted with by the players. As you can see, the containers don’t resemble each other, but they both help organize the same kind of objects.

Continuing with the manipulations used in SonicSpree, this is where it gets a bit interesting. First, moving the album art cards. This is probably the most basic manipulation in table-based multi-touch NUI, especially on Microsoft Surface, where ScatterView is a very easy and basic control to use. The next part I am not sure about: whether the events count as several manipulations, or whether the entire sequence of events counts as a gesture. What I am referring to is the throwing and removal of the physical dice. First, you as a player throw the dice to randomly select music, and secondly you remove the dice to start the game. These are actually the two most natural manipulations you can make with a couple of dice. On the other hand, the whole sequence of events (throwing and removing the dice) can be seen as a gesture, as it starts a new game round on completion. Or can it actually be both?

Talking about gestures, I think I can define two more gestures in the game. First, moving an album art card from the edge to the middle of the screen (illustrated as the circle in the picture above) to flip the card and actually see the album art. Secondly, moving a flipped card into a player’s nest to make a guess.

I will end my retrospective here. I think SonicSpree adopted the OCGM philosophy quite well; perhaps it was thanks to the UX and design people of Ergonomidesign? I think OCGM can give us NUI developers the language and abstraction to create NUI applications, not only multi-touch ones. Maybe in time we will see a more specific design philosophy, like WIMP, for multi-touch NUI, but I like OCGM.

Wednesday, January 20, 2010

ISurfaceScrollInfo and You, Epilogue

The holidays are long gone and now it is time for me to end this blog series. The previous post was the last one about actually implementing the ISurfaceScrollInfo interface, but I wanted to end by talking about the solution for SurfacePagePanel.

The behavior of the SurfacePagePanel is to show only one page (list item) at a time, or as close to one page as it can. To do that I need the SurfacePagePanel to take control of the panning between pages. If you remember from my last post, I mentioned that I implemented a “peek” functionality. Peeking allows the user to look at the adjacent pages, but with a rubber-band kind of feeling. I think you need the rubber-band feeling on a Microsoft Surface because:

  1. The area of use is larger.
  2. The panel is probably not constrained by a physical border, like the edge of a mobile device.

How is the peek functionality implemented in SurfacePagePanel? Although I mentioned the solution in Part Three, I had to rewrite the code. Why? Because I didn’t understand it! ;) Nothing made sense to me when I read the code, so I ended up rewriting it. However, the idea is the same as before: to keep the x-value of the output vector within a certain range. In my code I use a logarithmic function to cap the x-values. But that is not all. To make the explanation easier, I start by showing a graph of two curves:

peak_math
figure 1: logarithmic and linear curve

Well, the curves represent how the corresponding mathematical functions map an input value to an output value - in our case, mapping the x-value of the input vector to the output vector. If I were to use only the logarithmic function, the panning would go faster than the contact movement at the beginning of the pan, because of the inclination of the curve. Therefore I mixed in a linear curve. The idea is to let the linear curve control the mapping of the x-value up to a crossing point (where the two curves intersect); after that I use the logarithmic function. To control the crossing point, I alter the altitude of the logarithmic curve by multiplying the function by a specified factor. In the graph above I have used a factor of 30, which means that when the x-value reaches 60, the logarithmic function seizes control of the mapping. This is how it looks in code:

```
public Vector ConvertToViewportUnits(Point origin, Vector offset)
{
    if (_isMoving || !_panningOrigin.HasValue)
    {
        return new Vector(0.0, 0.0);
    }

    const int logBase = 2;
    const double scaleFactor = 0.2;

    var elasticityLength = GetScrollOwnerElasticityLength() * scaleFactor;
    var absHorizontalOffset = Math.Abs(offset.X);
    var direction = offset.X / absHorizontalOffset;
    absHorizontalOffset *= scaleFactor;
    var thresholdFactor = elasticityLength / Math.Log(elasticityLength, logBase);
    var cappedOffset = Math.Min(absHorizontalOffset, Math.Log(absHorizontalOffset, logBase) * thresholdFactor);

    return new Vector(cappedOffset * direction, offset.Y);
}
```

Where thresholdFactor is computed, I determine the crossing point of the logarithmic function using the Elasticity property of the ScrollOwner. That is how the peek function is implemented.
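To sanity-check the capping math, here is a small Python transcription of the C# above (the function name and the zero-offset guard are mine; everything else mirrors the original). It confirms that the mapping stays linear below the crossing point and switches to the logarithmic curve above it:

```python
import math

def capped_offset(offset_x, elasticity_length, scale_factor=0.2, log_base=2):
    """Python mirror of ConvertToViewportUnits' capping math:
    linear mapping up to the crossing point, logarithmic beyond it."""
    if offset_x == 0:
        return 0.0  # guard added here; the C# divides by |offset.X|
    length = elasticity_length * scale_factor
    direction = math.copysign(1.0, offset_x)
    x = abs(offset_x) * scale_factor
    # thresholdFactor places the linear/log intersection exactly at x == length.
    threshold_factor = length / math.log(length, log_base)
    return min(x, math.log(x, log_base) * threshold_factor) * direction

# With elasticity_length=300 the scaled length is 60: an input of 150
# (scaled x = 30) maps linearly, while 500 (scaled x = 100) gets capped.
```

Because thresholdFactor is length / log(length), both curves evaluate to length at x = length, which is exactly the crossing-point behavior the graph shows.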

To change page, the user can either peek far enough or use a flick gesture. To detect this I listen to the ContactUp event in the SurfacePagePanel. This is the code executed on the event:

```
private void OnScrollOwnerContactUp(object sender, ContactEventArgs e)
{
    // The first contact has been captured.
    if (_isMoving || !e.Contact.IsFingerRecognized ||
        e.Contact.IsTagRecognized || _scrollOwner.ContactsCaptured.Count > 1 || !_panningOrigin.HasValue)
    {
        return;
    }

    var point = e.GetPosition(ScrollOwner);
    var destinationIndex = DetermineNextFocusedChildIndex(point);
    _panningOrigin = null;
    e.Handled = true;
    MoveViewportToChild(destinationIndex);
}
```

Essentially, I first get the page that I will move to, which can be the next, the previous or the current one. Then I programmatically pan to that page using a KeyFrame animation. For now I have inserted a “bounce” effect, just like on the iPhone and Android, and the code for all of this looks like this:

```
private AnimationTimeline BuildMovementAnimation(double offset, double direction, Duration animationDuration)
{
    var turningPointTime = TimeSpan.FromMilliseconds(animationDuration.TimeSpan.TotalMilliseconds * 0.7);
    var turningPointOffset = offset + (direction * GetBounceElasticityLength());
    var destinationOffset = offset;

    var animation = new DoubleAnimationUsingKeyFrames { Duration = animationDuration };
    var startFrame = new SplineDoubleKeyFrame(HorizontalOffset, KeyTime.FromTimeSpan(TimeSpan.FromSeconds(0.0)));
    var turningPointFrame = new SplineDoubleKeyFrame(turningPointOffset, KeyTime.FromTimeSpan(turningPointTime), new KeySpline(0.8, 0.8, 0.0, 1.0));
    var endFrame = new SplineDoubleKeyFrame(destinationOffset, KeyTime.FromTimeSpan(animationDuration.TimeSpan), new KeySpline(0.5, 1.0, 0.5, 1.0));

    animation.KeyFrames.Add(startFrame);
    animation.KeyFrames.Add(turningPointFrame);
    animation.KeyFrames.Add(endFrame);

    return animation;
}
```

As you can see, the bounce always occurs after 70% of the animation duration.
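The keyframe arithmetic can be sketched as a tiny helper (Python, mirroring BuildMovementAnimation's timing and overshoot math; the helper name and the default bounce length are my own, standing in for GetBounceElasticityLength()):

```python
def movement_keyframes(offset, direction, duration_ms, bounce_length=40.0):
    """Mirror of BuildMovementAnimation's arithmetic: three keyframes where
    the overshoot ('bounce') peaks at 70% of the animation duration.
    bounce_length is an assumed stand-in for GetBounceElasticityLength()."""
    turning_point_time = duration_ms * 0.7
    turning_point_offset = offset + direction * bounce_length
    return [
        (0.0, None),                                 # start frame: current HorizontalOffset
        (turning_point_time, turning_point_offset),  # overshoot past the destination
        (duration_ms, offset),                       # settle on the destination offset
    ]
```

For example, panning to offset 500 in a 300 ms animation with direction 1 overshoots to 540 at 210 ms before settling back at 500.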

Well, that concludes this blog series about how I implemented ISurfaceScrollInfo for the SurfacePagePanel. I hope you liked it, and happy “surfacing”!