Thursday, February 11, 2010

A case of OCGM

Recently there has been some talk about OCGM and its impact on NUI. OCGM is a design philosophy proposed by Ron George and is supposed to be to NUI what WIMP is to GUI. OCGM is pronounced Occam, as in Occam’s Razor, and it’s an abbreviation for:

  • Objects – “Objects are the core of the experience. They can have a direct correlation with something physical, or they can just be objects in the interface.”
  • Containers – “Containers will be the “grouping” of the objects. This can manifest itself in whatever the system sees fit to better organize or instruct the user on interactions. They do not have to be, nor should they be, windows. They can be any sort of method of presentation or relationship gathering as seen fit.”
  • Gestures – “Gestures are actions performed by the user that initiate a function after its completion and recognition by the system. This is an indirect action on the system because it needs to be completed before the system will react to it.”
  • Manipulations – “Manipulations are the direct influences on an object or a container by the user. These are immediate and responsive. They are generally intuitive and mimic the physical world in some manner. The results are expected and should be non-destructive. These are easily performed and accidental activations should be expected and frequent.”

I’ve cited Ron from his first post about OCGM. I recommend you read the post and I also recommend reading this paper about OCGM by Ron George and Joshua Blake.

To understand OCGM further, I would like to do a little retrospective on one of my previous Microsoft Surface applications and see how the application fits into OCGM (or should it be the other way around?).

My first Surface project was SonicSpree. To summarize the application: SonicSpree is a game of guessing songs, where the player’s goal is to match the song currently playing with its corresponding album art. The actual game element is to find the correct album art and then drag it into the player’s nest / home. A simple idea. Finding the album art, though, is like playing Memory. From the start, all album art cards are face down but can be flipped by dragging a card into the center. When face up, the player can make a guess by dragging the album art card into his or her home to receive a point.

sonicspree
If we start with identifying what kinds of objects are used in SonicSpree, the most obvious one is of course the actual album art card the users interact with to play the game. The other kind of objects used in SonicSpree are the physical dice. A new game round can be started by throwing the dice onto the Surface.

As for the containers, SonicSpree uses two of them: the player’s nest and “the edge”, as we have called it during development. The nest holds the correct album art cards the player has collected, and the mysterious edge is actually the container holding all album art cards that are not currently being interacted with by the players. As you can see the containers don’t resemble each other, but they both help organize the same kind of objects.

Continuing with the manipulations used in SonicSpree, this is where it gets a bit interesting. First, moving the album art cards. This is probably the most basic manipulation in table based multi touch NUI, especially on Microsoft Surface where ScatterView is a very easy and basic control to use. The next part I’m not sure about: whether the events count as several manipulations or whether the entire sequence of events counts as a gesture. What I am referring to is the throwing and removal of the physical dice. First, you as a player throw the dice to randomly select music, and secondly you remove the dice to start the game. These are actually the two most natural manipulations you can make with a couple of dice. But on the other hand, the whole sequence of events (throwing and removing the dice) can be seen as a gesture, as it starts a new game round on completion. Or can it actually be both?

Talking about gestures, I think I can define two more gestures in the game. First, moving an album art card from the edge to the middle of the screen (illustrated as the circle in the picture above) to flip the card and actually see the album art. Secondly, moving a flipped card into a player’s nest to make a guess.

I will end my retrospective here. I think SonicSpree adopted the OCGM philosophy quite well; perhaps it was thanks to the UX and design people of Ergonomidesign? I think OCGM can give us NUI developers the language and abstraction to create NUI applications, not only multi touch ones. Maybe in time we will see a more specific design philosophy, like WIMP, for multi touch NUI, but I like OCGM.

Wednesday, January 20, 2010

ISurfaceScrollInfo and You, Epilogue

The holidays are long gone and now it’s time for me to end this blog series. The previous post was the last one about actually implementing the ISurfaceScrollInfo interface, but I wanted to end with talking about the solution for SurfacePagePanel.

The behavior of the SurfacePagePanel is to show only one page (list item) at a time, or as close to only one page as it can get. To do that I need the SurfacePagePanel to take control of the panning between pages. If you remember from my last post, I mentioned that I implemented a “peek” functionality. Peeking allows the user to look at the adjacent pages, but with a rubber band kind of feeling. I think you need the rubber band feeling on a Microsoft Surface because:

  1. The area of use is larger.
  2. The panel is probably not constrained by a physical border, like the edge of a mobile device.

How is the peeking functionality implemented in SurfacePagePanel? Although I mentioned the solution in Part Three, I had to rewrite the code. Why? Because I didn’t understand it! ;) Nothing made sense to me when I read the code so I ended up rewriting it. However, the idea is the same as before: to keep the x-value of the output vector within a certain range. In my code I use a logarithmic function to cap the x-values. But that is not all. To make the explanation easier I start with showing a graph of two curves:

peak_math
figure 1: logarithmic and linear curve

Well, the curves represent how the corresponding mathematical functions map an input value to an output value, in our case mapping the x-value of the input vector to the output vector. If I were to use only the logarithmic function, the panning would go faster than the contact movement at the beginning of the panning, because of the inclination of the curve. Therefore I mixed in a linear curve. The idea is to let the linear curve control the mapping of the x-value until a crossing point (where the two curves intersect). After that I use the logarithmic function. To control the crossing point, or the intersection, I alter the altitude of the logarithmic curve by multiplying the function with a specified factor. In the graph above I’ve used a factor of 30. This means that when the x-value reaches 60, the logarithmic function seizes control of the mapping. This is how it looks in code:

236 public Vector ConvertToViewportUnits(Point origin, Vector offset)
237 {
238     if (_isMoving || !_panningOrigin.HasValue)
239     {
240         return new Vector(0.0, 0.0);
241     }
242
243     const int logBase = 2;
244     const double scaleFactor = 0.2;
245
246     var elasticityLength = GetScrollOwnerElasticityLength() * scaleFactor;
247     var absHorizontalOffset = Math.Abs(offset.X);
248     var direction = offset.X / absHorizontalOffset;
249     absHorizontalOffset *= scaleFactor;
250     var thresholdFactor = elasticityLength / Math.Log(elasticityLength, logBase);
251     var cappedOffset = Math.Min(absHorizontalOffset, Math.Log(absHorizontalOffset, logBase) * thresholdFactor);
252
253     return new Vector(cappedOffset * direction, offset.Y);
254 }

At line 250 I determine the crossing point factor of the logarithmic function using the Elasticity property of the ScrollOwner. That’s how the peek function is implemented.
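To see the effect of the crossing point factor in isolation, here is a small stand-alone sketch of just the capping formula. The elasticity value of 64 is a made-up example, not taken from the real panel:

```csharp
using System;

public class PeekCapDemo
{
    // Mirrors the capping above: min(x, log(x) * factor), where the factor is
    // chosen so the linear and logarithmic curves cross at elasticityLength.
    public static double Cap(double x, double elasticityLength, int logBase)
    {
        var thresholdFactor = elasticityLength / Math.Log(elasticityLength, logBase);
        return Math.Min(x, Math.Log(x, logBase) * thresholdFactor);
    }

    public static void Main()
    {
        const double elasticity = 64.0; // made-up elasticity length

        Console.WriteLine(PeekCapDemo.Cap(32.0, elasticity, 2));          // linear region: 32
        Console.WriteLine(PeekCapDemo.Cap(128.0, elasticity, 2) < 128.0); // capped: True
    }
}
```

Below the elasticity length the offset passes through unchanged, and above it the logarithm flattens the movement, which is exactly the rubber band feeling described earlier.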

To change page the user can either peek far enough or use a flick gesture. To detect that, I listen to the ContactUp event in the SurfacePagePanel. Look at the code executed on the event:

private void OnScrollOwnerContactUp(object sender, ContactEventArgs e)
{
    //The first contact has been captured.
    if (_isMoving || !e.Contact.IsFingerRecognized ||
        e.Contact.IsTagRecognized || _scrollOwner.ContactsCaptured.Count > 1 || !_panningOrigin.HasValue)
    {
        return;
    }

    var point = e.GetPosition(ScrollOwner);
    var destinationIndex = DetermineNextFocusedChildIndex(point);
    _panningOrigin = null;
    e.Handled = true;
    MoveViewportToChild(destinationIndex);
}
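DetermineNextFocusedChildIndex itself is not shown in this post, so here is a hypothetical, self-contained sketch of the kind of decision it has to make. The 30% threshold and the helper’s shape are my assumptions, not the actual SurfacePagePanel code:

```csharp
using System;

public class PageSwitchSketch
{
    // Assumed logic: the horizontal drag distance since the contact was captured
    // decides whether we go to the next page, the previous page, or stay put.
    public static int NextIndex(double panDistance, double pageWidth,
                                int currentIndex, int pageCount)
    {
        const double switchThreshold = 0.3; // assumption: 30% of a page width

        if (panDistance < -pageWidth * switchThreshold && currentIndex < pageCount - 1)
        {
            return currentIndex + 1; // dragged far enough to the left: next page
        }
        if (panDistance > pageWidth * switchThreshold && currentIndex > 0)
        {
            return currentIndex - 1; // dragged far enough to the right: previous page
        }
        return currentIndex;         // not far enough: snap back to the current page
    }

    public static void Main()
    {
        Console.WriteLine(PageSwitchSketch.NextIndex(-120.0, 300.0, 1, 5)); // 2
        Console.WriteLine(PageSwitchSketch.NextIndex(50.0, 300.0, 1, 5));   // 1
        Console.WriteLine(PageSwitchSketch.NextIndex(200.0, 300.0, 1, 5));  // 0
    }
}
```

The real implementation works on the contact point and the panning origin rather than a precomputed distance, but the thresholding idea is the same.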

Essentially: first I get the page that I will move to, which can be either the next, the previous or the current one. Second, I programmatically pan to that page. That is done using a KeyFrame animation. I’ve also inserted a “bounce” effect, just like on the iPhone and Android, and the code for doing all this looks like this:

private AnimationTimeline BuildMovementAnimation(double offset, double direction, Duration animationDuration)
{
    var turningPointTime = TimeSpan.FromMilliseconds(animationDuration.TimeSpan.TotalMilliseconds * 0.7);
    var turningPointOffset = offset + (direction * GetBounceElasticityLength());
    var destinationOffset = offset;

    var animation = new DoubleAnimationUsingKeyFrames { Duration = animationDuration };
    var startFrame = new SplineDoubleKeyFrame(HorizontalOffset, KeyTime.FromTimeSpan(TimeSpan.FromSeconds(0.0)));
    var turningPointFrame = new SplineDoubleKeyFrame(turningPointOffset, KeyTime.FromTimeSpan(turningPointTime), new KeySpline(0.8, 0.8, 0.0, 1.0));
    var endFrame = new SplineDoubleKeyFrame(destinationOffset, KeyTime.FromTimeSpan(animationDuration.TimeSpan), new KeySpline(0.5, 1.0, 0.5, 1.0));

    animation.KeyFrames.Add(startFrame);
    animation.KeyFrames.Add(turningPointFrame);
    animation.KeyFrames.Add(endFrame);

    return animation;
}

I’m sorry for the code formatting, but once again I blame the blog theme ;). As you see, the bounce always occurs after 70% of the animation duration.

Well, that concludes this blog series about how I implemented ISurfaceScrollInfo for the SurfacePagePanel. I hope you liked it, and happy “surfacing”!

Sunday, January 17, 2010

Surface at PDC09

(Should have posted this two months ago… Found it as a draft in Live Writer today…)

On the last day of PDC 2009 I attended the only session about Microsoft Surface. It was presented by Robert Levy, who is a Program Manager on the Surface team, and his equivalent from the WPF team. The session was interesting and it’s always fun to see the stuff that the Surface team has cooked up. This time it was the Surface Monster that stole the show. Videos and more info are available at http://www.surface.com/monster

The WPF demos showed off the multi touch capabilities of WPF 4, where you can scale, rotate and transform objects on touch enabled hardware. The example is similar to the one in my previous blog post (multi touch). Please note that that example is for beta 1 of Visual Studio and that some things have changed in beta 2. I plan to post an upgrade soon.

The session was probably perfect for those who were new to touch or new to Surface, and it also drew some applause from the crowd. For a developer already up and running with Surface it was fun but not much new. I would have liked to see another session on the program, like Surface SDK Deep Dive, Performance Tips When Developing for Microsoft Surface or Surface from the Trenches – Experiences from a Real World Surface Project. What I look for is more depth with two sessions: one introduction and one a bit deeper.

One new announcement was the Surface Touch Pack for Windows 7 that will enable you to use the same controls in WPF for Windows 7 as you do in Surface which is really cool! Robert also “announced” that they are working on units that will be cheaper, thinner and wall mountable. That wasn’t too hard to guess and no real details were available. I was hoping to hear something about Surface SDK 2.0 that hopefully will be on the way with some new controls, new gestures and maybe detection of a hand in the contact events.

Earlier in the conference I got a chance to show the two applications I have been part of the development team for, SonicSpree and HelpingHands. The feedback from Robert and another guy whose name I can’t remember was all positive. I’m not sure they liked what we had done with the element menu, but I think they bought the reasons we had for doing what we did. We have changed the behavior of the element menu in HelpingHands so that it will act more like a toggle menu and stick even when the user removes her finger.

All in all it has been a good PDC from a Surface point of view. Cool stuff in the session, one on one time with the team, and confidence that the applications we develop at Connecta are top notch!

Sunday, December 20, 2009

ISurfaceScrollInfo and you, Part Three.

Hi again. Last time I talked about the IScrollInfo interface and how it is implemented in my SurfacePagePanel. Now it’s time to talk about ISurfaceScrollInfo! As I said in the last post, ISurfaceScrollInfo extends the IScrollInfo interface with the capability to react to two basic NUI (Natural User Interface) gestures associated with the Microsoft Surface: Panning and Flicking. Both are common NUI gestures, but I believe Panning is the most common and natural one.

So what are Panning and Flicking? If you are not interested in reading my explanation you can skip this section! Panning and Flicking are like moving an object, say an apple. Panning is equivalent to picking up the apple and placing it back gently on another spot: the movement begins when you pick it up and stops when you place it back down again. You as a user are in control of its movement. Flicking is more like throwing the apple: you are not directly in control of its movement. The moment you release your contact from the Microsoft Surface, the virtual physics kicks in and scrolls the item list until it stops.

ISurfaceScrollInfo extends IScrollInfo with three new methods, which help you control the scrolling when Panning and Flicking:

  • ConvertFromViewportUnits(origin, offset) : vector - Converts horizontal and vertical offsets, in viewport units, to device-independent units that are relative to the given origin in viewport units.
  • ConvertToViewportUnits(origin, offset) : vector - Converts horizontal and vertical offsets to viewport units, given an offset in device-independent units that are relative to the given origin in viewport units.
  • ConvertToViewportUnitsForFlick(origin, offset) : vector - Converts horizontal and vertical offsets to viewport units, given an offset in device-independent units that are relative to the given origin in viewport units.

I’ve inserted the actual documentation summary from MSDN for each method. What can also be read in the documentation is that the results from the convert-to methods are later used when setting the vertical and horizontal offsets (using the SetHorizontalOffset and SetVerticalOffset methods from the IScrollInfo interface). To be more precise, ConvertToViewportUnits is called all the time during panning, and ConvertToViewportUnitsForFlick is called once the panning is complete, if needed. The documentation isn’t that clear on when ConvertFromViewportUnits is called by the framework, but it is supposed to reverse the conversion done by the convert-to methods.

So how are these methods implemented in SurfacePagePanel? The standard implementation would be to just return the offset argument, as is done in the Continuous Panning List. But in my case I want to control the panning and flicking. Unlike how the iPhone and Android based phone UIs work, I want to constrain the panning movement. The constraint is keeping the currently focused page in the center, while still being able to peek at the next item on each side of the page. This constraint I have in the ConvertToViewportUnits method:

public Vector ConvertToViewportUnits(Point origin, Vector offset)
{
    if (_isMoving || !_panningOrigin.HasValue)
    {
        return new Vector(0.0, 0.0);
    }

    var absOffset = Math.Abs(GetScrollOwnerElasticityLength());
    var direction = offset.X / absOffset;
    const int scaleFactor = 4; // trial and error generated factor for better user experience.
    var offsetChoice = Math.Min(absOffset, Math.Log10(absOffset) * scaleFactor);

    return new Vector(offsetChoice * direction, offset.Y);
}

The important part here is that I’m using the logarithmic calculation to keep the constraint, as it caps the horizontal offset. This is an example of how you can alter the panning.

If we continue with ConvertToViewportUnitsForFlick you will see that it is not as exciting as ConvertToViewportUnits :

public Vector ConvertToViewportUnitsForFlick(Point origin, Vector offset)
{
    _hasFlicked = true;
    return new Vector(0.0, 0.0);
}

Here I return an empty vector, and there is a reason for it: it prevents the ScrollViewer from continuing to scroll when flicking. I use flicking as one of the ways to indicate a page switch, so I need to control the scrolling myself. In my next and last post of this blog series I will talk about how I finalized my SurfacePagePanel.

Oh, by the way. Remember that the arguments to ConvertToViewportUnitsForFlick are based on the result from ConvertToViewportUnits. In my solution I got a nasty little side effect. The offset argument to ConvertToViewportUnitsForFlick can be used to determine the direction of the flick, but due to my calculation in ConvertToViewportUnits the flick direction was occasionally reversed. Meaning, when the user flicks to the left, the offset indicates right. I can’t explain why and how it occasionally was reversed, but it did happen.

So what did I do for the implementation of ConvertFromViewportUnits? Well, I used the standard implementation and returned the offset argument. As I don’t see any negative side effects in doing that, I leave it at that. Secondly, I’m not really sure how I should properly implement it in my case. If you know more about ConvertFromViewportUnits and want to share it with me, feel free to send me an email explaining it! To prevent spam mails, my email is: first name dot surname at Connecta dot se. My first name and surname are shown as the author of this post.
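For completeness, the pass-through implementation I describe is a one-liner. Here it is as a compilable sketch, with simple stand-ins for the WPF Point and Vector types so it builds on its own:

```csharp
using System;

// Stand-ins for System.Windows.Point and System.Windows.Vector.
public struct Point { public double X, Y; }
public struct Vector
{
    public double X, Y;
    public Vector(double x, double y) { X = x; Y = y; }
}

public class ConvertFromSketch
{
    // The standard implementation: no conversion at all, just return the offset.
    public Vector ConvertFromViewportUnits(Point origin, Vector offset)
    {
        return offset;
    }

    public static void Main()
    {
        var panel = new ConvertFromSketch();
        var result = panel.ConvertFromViewportUnits(new Point(), new Vector(3.0, 4.0));
        Console.WriteLine(result.X + ", " + result.Y); // 3, 4
    }
}
```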

With this, I have gone through the methods in the ISurfaceScrollInfo interface. But I feel like writing another blog post to wrap things up with my SurfacePagePanel. This post marks the end of implementing ISurfaceScrollInfo. Stay tuned for the epilogue of the ISurfaceScrollInfo and You series.

Wednesday, December 2, 2009

ISurfaceScrollInfo and you, Part Two.

It’s been a while since I wrote the last post, but blame it on the flu. Now I have the energy to continue this “blog series”.

In the last post I talked about how to generally create a custom panel in WPF, and I also showed how I implemented MeasureOverride and ArrangeOverride for our SurfacePagePanel. Now I will continue this blog series with a part of the ISurfaceScrollInfo interface. I will actually start by looking at IScrollInfo, which ISurfaceScrollInfo extends.

Before diving into the IScrollInfo interface I will post a few reference links which have helped me. Read the reference links because there’s a lot of information there:

IScrollInfo, when implemented, tells a ScrollViewer how a particular panel is scrolled. If a panel doesn’t implement IScrollInfo the ScrollViewer will scroll the panel according to some default behavior. Before jumping into how to implement IScrollInfo I want to explain a couple of concepts you need to understand:

  • Viewport
  • Extent

The Viewport is the area of the panel that is visible to the user. Looking at figure 1, the Viewport is represented by the red solid rectangle. For instance, in our case the SurfacePagePanel is supposed to reside within a SurfaceListBox. The SurfaceListBox controls how much we are able to see of its items, and thus its size will implicitly be our Viewport.

The Extent, on the other hand, is the total area of all measured items, and it is seen as the dotted rectangle in figure 1. If we once again look at our case, arranging 10 items horizontally where each item is 300 pixels wide and 300 pixels high will give us an Extent which is 3000 pixels wide (10 items times 300 pixels) and 300 pixels high. In other words, the Extent is the total area needed to display all items at once.

viewport_extent2
figure 1: Viewport (red rectangle) and Extent (dotted rectangle).
The gray rectangles represent items.
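The Extent arithmetic above is trivial, but as a quick sanity check, here it is as code (the item sizes are the example values from the text):

```csharp
using System;

public class ExtentDemo
{
    public static void Main()
    {
        const int itemCount = 10;
        const double itemWidth = 300.0;
        const double itemHeight = 300.0;

        // Widths add up when items are arranged horizontally...
        double extentWidth = itemCount * itemWidth;
        // ...while the height is simply the height of the tallest item.
        double extentHeight = itemHeight;

        Console.WriteLine(extentWidth);  // 3000
        Console.WriteLine(extentHeight); // 300
    }
}
```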

If we continue looking at the members of the IScrollInfo interface we see there’s a lot of them. However, many of the members are scrolling methods (methods which are called for certain user actions):

  • MouseWheelUp()
  • LineUp()
  • PageUp()
  • Etc…

In a Surface context these methods are not that important (mainly because you need a mouse and keyboard to trigger them), so they don’t have an implementation.

Moving on to the IScrollInfo members that compose the real scrolling logic:

  • ViewportWidth – The width of the Viewport
  • ViewportHeight – The height of the Viewport
  • ExtentWidth – The width of the Extent
  • ExtentHeight – The height of the Extent
  • VerticalOffset – How much the Viewport is offset vertically relative to the upper left corner of the Extent
  • HorizontalOffset – How much the Viewport is offset horizontally relative to the upper left corner of the Extent
  • SetVerticalOffset – Sets the Viewport’s vertical offset
  • SetHorizontalOffset – Sets the Viewport’s horizontal offset
  • CanVerticallyScroll – Whether the panel can scroll its content vertically
  • CanHorizontallyScroll – Whether the panel can scroll its content horizontally
  • MakeVisible – Scrolls a specific item (specified as a Visual) to a desired location (specified as a rectangle).

Now, let’s go through how these members are implemented in the SurfacePagePanel. As you might imagine, the Viewport and Extent are calculated during the measuring pass:

152 protected override Size MeasureOverride(Size availableSize)
153 {
154     var resultSize = new Size(0, 0);
155     var extent = new Size(0, 0);
156
157     foreach (UIElement child in Children)
158     {
159         child.Measure(availableSize);
160         resultSize.Width = Math.Max(resultSize.Width,
161             child.DesiredSize.Width);
162         resultSize.Height = Math.Max(resultSize.Height,
163             child.DesiredSize.Height);
164         extent.Width += child.DesiredSize.Width;
165     }
166
167     resultSize.Width = double.IsPositiveInfinity(availableSize.Width)
168         ? resultSize.Width : availableSize.Width;
169     resultSize.Height = double.IsPositiveInfinity(availableSize.Height)
170         ? resultSize.Height : availableSize.Height;
171     extent.Height = resultSize.Height;
172
173     if ((_viewport != resultSize || _extent != extent)
174         && ScrollOwner != null)
175     {
176         _viewport = resultSize;
177         _extent = extent;
178
179         ScrollOwner.InvalidateScrollInfo();
180     }
181
182     return resultSize;
183 }

Code 1: Viewport and Extent calculated in MeasureOverride.

I hope you can read the code; there isn’t much space with this blogspot theme. The Viewport is simply the availableSize given to us (or the size of the largest child element in case of an infinite availableSize). The Extent, on the other hand, is actually calculated. The Extent’s width is the sum of all the child elements’ measured widths (row 164), and the height is simply the height of the resultSize (row 171), which indirectly is the height of the Viewport (row 176).

When both the Viewport and the Extent are calculated, ViewportWidth, ViewportHeight, ExtentWidth and ExtentHeight are easily implemented, as they just return the values of the corresponding properties of the Viewport and Extent.
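Those members are not shown in the post, so here is what I imagine they look like, sketched as a compilable class with simple stand-ins for the WPF Size and Point types; the field names follow the MeasureOverride code above, and the example sizes are made up:

```csharp
using System;

public class ScrollInfoSketch
{
    // Stand-ins for System.Windows.Size / System.Windows.Point.
    struct Size { public double Width, Height; public Size(double w, double h) { Width = w; Height = h; } }
    struct Point { public double X, Y; }

    Size _viewport = new Size(800, 600);  // set in MeasureOverride
    Size _extent = new Size(3000, 600);   // set in MeasureOverride
    Point _viewportOffset;                // updated when scrolling

    // The trivial pass-through members.
    public double ViewportWidth { get { return _viewport.Width; } }
    public double ViewportHeight { get { return _viewport.Height; } }
    public double ExtentWidth { get { return _extent.Width; } }
    public double ExtentHeight { get { return _extent.Height; } }
    public double HorizontalOffset { get { return _viewportOffset.X; } }

    public static void Main()
    {
        var panel = new ScrollInfoSketch();
        Console.WriteLine(panel.ExtentWidth);   // 3000
        Console.WriteLine(panel.ViewportWidth); // 800
    }
}
```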

As our SurfacePagePanel can only scroll horizontally, CanVerticallyScroll is set to false, VerticalOffset always returns 0 and SetVerticalOffset is not really implemented. As a side note: at first I threw a NotImplementedException from the SetVerticalOffset method, but the fact is SetVerticalOffset is called at least once by the ScrollViewer. So don’t go throwing NotImplementedException everywhere, because you never know if or when it hits you in the face.

Let’s look at the corresponding properties and methods for the horizontal behavior. As you might’ve expected, it’s SetHorizontalOffset that controls the position of the Viewport. To control the Viewport offset a translate transform is used as the panel’s RenderTransform. Changing the translate transform also changes what is seen through the Viewport. As seen in the code below, SetHorizontalOffset validates the input and calls the SetViewport method, which is a general method for setting the Viewport.

public void SetHorizontalOffset(double offset)
{
    if (!CanHorizontallyScroll)
    {
        return;
    }

    if (offset == _viewportOffset.X)
    {
        return;
    }

    SetViewport(offset, _viewportOffset.Y);
}

Code 2: Implementation of SetHorizontalOffset.

private void SetViewport(double newHorizontalOffset, double newVerticalOffset)
{
    //Cap the offset values.
    newHorizontalOffset = Math.Max(0,
        Math.Min(newHorizontalOffset, ExtentWidth - ViewportWidth));
    newVerticalOffset = Math.Max(0,
        Math.Min(newVerticalOffset, ExtentHeight - ViewportHeight));

    _viewportOffset = new Point(newHorizontalOffset, newVerticalOffset);
    _renderTransform.X = -_viewportOffset.X;
    _renderTransform.Y = -_viewportOffset.Y;

    if (ScrollOwner != null)
    {
        ScrollOwner.InvalidateScrollInfo();
    }
}

Code 3: Implementation of SetViewport.

As seen in the code for SetViewport, the ScrollOwner is notified about the changes by calling InvalidateScrollInfo. This is important to keep the ScrollViewer in sync with the panel’s scrolling data.

To summarize, in this post we talked about the Viewport and Extent and their role in scrolling a panel’s content. I also showed how the scrolling, or the placement of the Viewport, is implemented in the SurfacePagePanel using a translate transform.

Next post will be about the ISurfaceScrollInfo, I promise!

Thursday, November 12, 2009

Helping Hands

The last two months I have been working with a team at Ergonomidesign to create a new Surface Application. It is called Helping Hands and envisions the future of integrated health care. How do you, as a potential patient, prevent your lifestyle from becoming an illness? How can you recognize and prevent e.g. Coronary Artery Disease, before it is too late? Welcome to have a look at the future of patient management and treatment in 2015.

The application will be exhibited at the world’s largest medical trade fair, Medica/Compamed, in Germany next week, and by then we will be able to say and show a lot more about the application.

The team creating Helping Hands consists of graphical designers, interaction designers and developers from Ergonomidesign and developers and architects from Connecta.

Right now I can only show you a glimpse of what’s in the application so more is to come during next week!

Ergonomidesign_Future_of_Health_Care_no_logo

 

Also check out the extremely cool custom byte tag with the look of a dragon!

bild

Thursday, October 29, 2009

ISurfaceScrollInfo and you, Part One.

Before getting into detail about how to implement the ISurfaceScrollInfo, I want to talk about creating a custom panel. But why create a custom panel? Of course there are several ways (as usual) to accomplish a behavior like the page panel, but I think creating a new panel is the best way to take advantage of the WPF framework.

The idea of implementing a custom panel is that we can use it with a ListBox control, a SurfaceListBox to be more specific. By implementing a custom panel for a ListBox, we can tell the ListBox how we want to lay out its items.

Now to creating a custom panel. In general, when a panel displays its content it does two things: measuring and arranging. First everything in the panel is measured. Measuring is needed to determine the size of the panel, and that size depends on one thing: the sizes of the contained items. It is during measuring we have the chance to determine how much space we need to lay out the content. In WPF this is the moment when the items’ DesiredSize is set.

After all items are measured, they are arranged. The point of arranging is quite obvious. This is when we place each item relative to the others. Where items are placed depends heavily on their size, and that’s why measuring is done before the arrangement. The arrangement is “view independent”, which means you don’t have to think about where you place the items with respect to what is actually shown to the user. In our case, this is taken care of by the SurfaceScrollViewer, and that’s the whole point of implementing ISurfaceScrollInfo later on.

To control the measurement and arrangement there are two methods that need to be overridden in our custom panel:

  • protected override Size MeasureOverride(Size availableSize)
  • protected override Size ArrangeOverride(Size finalSize)

Big surprise, huh? As you see, MeasureOverride receives a size which describes the available space we have to lay out our items. Constraints, in other words. Here’s an example: a ListBox which measures 300 by 80 gives an availableSize of 298 by 76. The size returned from the method is the space we want (we may or may not get it). In our implementation only basic measurements are done:

protected override Size MeasureOverride(Size availableSize)
{
    var resultSize = new Size(0, 0);

    foreach (UIElement child in Children)
    {
        child.Measure(availableSize);
        resultSize.Width = Math.Max(resultSize.Width, child.DesiredSize.Width);
        resultSize.Height = Math.Max(resultSize.Height, child.DesiredSize.Height);
    }

    resultSize.Width = double.IsPositiveInfinity(availableSize.Width) ? resultSize.Width : availableSize.Width;
    resultSize.Height = double.IsPositiveInfinity(availableSize.Height) ? resultSize.Height : availableSize.Height;

    return resultSize;
}

It’s important that we call Measure on each child, or else we won’t have any desired sizes. We can also see that we tell WPF that we need the same space as given to us to lay out our items. (Notice the safety precaution if we get an infinite available size. It can happen.)

Now for arranging the items. The argument here is the size that WPF is willing to give us and, as before, it may or may not be equal to the size we wanted earlier. In this implementation all items are placed horizontally. Nothing fancy:

protected override Size ArrangeOverride(Size finalSize)
{
    if (Children.Count == 0)
    {
        return finalSize;
    }

    var startOffset = 0.0;
    foreach (UIElement child in Children)
    {
        var destination = new Rect(startOffset, 0.0, child.DesiredSize.Width, child.DesiredSize.Height);
        child.Arrange(destination);
        startOffset += child.DesiredSize.Width;
    }

    return finalSize;
}

Here we return the same size as given to us. The documentation only says: “The actual size used.”, but I think this is probably important when doing more advanced layouts. In our case returning the same size works fine.

That’s all we need to do for measuring and arranging our items. Next post we will start looking at the ISurfaceScrollInfo interface! Stay tuned.