Article Zero: Building a UI Platform

27 Feb 2005
Building a UI platform in C#

Introduction

With regard to the graphical user interface, the flow of visionary ideas runs essentially:

Vannevar Bush -> Douglas Engelbart -> Alan Kay

It is interesting that in 2005, sixty years after Vannevar Bush wrote about the memex, forty years after Engelbart invented the mouse, and nearly thirty years after Kay invented the GUI, we sit with a black-box implementation of a GUI, born from the watered-down ideas of these greats. What we need is a modern, object-oriented library of highly focused classes that can be combined to create cutting-edge UI as necessary.

That is the aim of this article series, and it’s a big goal, so let’s get moving.

A quick survey of the modern UI landscape (at least as it relates to C#) reveals:

  • Avalon, oh yes, it’s coming
  • Xamlon, here, and we encourage you to check it out
  • VG.net, draw cutting edge vector graphics via 100% managed code
  • MyXaml, get declarative with C# today

Essentially: you can draw cool vector graphics in your C# application via Xamlon, you can move your declarative UI code from C# into XML using MyXaml, and with VG.net you can leverage GDI+ through C# wrapper classes that draw vector graphics.

Any one of these technologies can help you improve your application, save Avalon, which remains in beta. But at the end of the day, we find ourselves back where we started: the controls we depend on remain trapped in a black box, and a huge area of potential innovation remains utterly blocked. We think C# and the .NET Framework are big enough and strong enough to rectify this situation. So let’s get down to brass tacks...

Next Step

First, we need a simple application (call it ApplicationZero) to help us grow the core control logic. This necessarily means hit-testing, drag-and-drop, hot control detection (more hit-testing), and, most importantly, smooth, fast, flashless painting. Here is ApplicationZero:

In this application, you can drag the solid rectangle and drop it in the empty frame. The solid rectangle changes color when the mouse is positioned over it, and retains that color during a drag. The frame changes color when the solid rectangle intersects it during a drag. Dropping the solid rectangle anywhere over the frame causes it to land squarely in the frame (a “snap-to” behavior).

Some tests for ApplicationZero:

  • Paint the solid rectangle.
  • Paint the frame.
  • Paint the solid rectangle hot.
  • Paint the frame hot.
  • Drag the solid rectangle one pixel to the left.
  • Drag the solid rectangle until it intersects the frame.
  • Drag the solid rectangle over the frame and drop it.

Coding these tests isn’t so trivial. How do we know that the solid rectangle painted correctly? How do we simulate a drag? Let’s take on the painting problem first.

Painting Strategy

To get flashless painting, we are definitely going to have to buffer the painting operations. That is, we will need to paint controls to a Bitmap and then blast select areas of that bitmap to the screen. This now age-old technique was invented by Dan Ingalls at Xerox’s Palo Alto Research Center in the 1970s. If you want to get down and dirty with blending and 2D graphics, there are numerous excellent resources available, some listed at the end of this article.

For now, we’re going to cut to the chase: Windows Forms relies on GDI+ (through System.Drawing) to do its drawing. The GDI+ equivalent of Dan Ingalls’ BitBlt is the DrawImage method, which we will use to draw portions of an offscreen bitmap to the Graphics object of a Windows Forms Form. To achieve the cool effect you see in ApplicationZero (where the frame appears beneath the translucent rectangle), GDI+ uses alpha-blending. Alpha-blending creates a serious performance problem for DrawImage, because the illusion of transparency is created by combining the pixels of the two images. In GDI+ this can be slow: very slow. To get DrawImage working efficiently, we have to use a special type of Bitmap, one that allows a simpler (and faster) combining algorithm to be used (note the PixelFormat passed to the constructor in the code that follows).

Taking some baby-steps toward workable buffering code we have:

Create the buffer:

using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;

Form form = new Form();

// Format32bppPArgb (premultiplied alpha) lets DrawImage skip the
// expensive per-pixel conversion, which keeps the blits fast.
Bitmap offscreen = new Bitmap(form.ClientSize.Width,
      form.ClientSize.Height, PixelFormat.Format32bppPArgb);

using (Graphics graphics = Graphics.FromImage(offscreen))
{
    graphics.Clear(Color.White);
}

Draw the "control" offscreen:

using (Graphics graphics = Graphics.FromImage(offscreen))
{
    graphics.DrawRectangle(Pens.Black, 10, 10, 100, 100);
}

And finally, blast the “control” to the form:

using (Graphics graphics = form.CreateGraphics())
{
    // Copy just the control's area of the buffer onto the same spot on the
    // form (101 pixels wide/high so the outline's right and bottom edges
    // are included).
    Rectangle area = new Rectangle(10, 10, 101, 101);
    graphics.DrawImage(offscreen, area, area, GraphicsUnit.Pixel);
}

Simple as this code is, it embodies the essence of a buffered painting technique. With this strategy formulated, we can now consider testing. So how should we verify that a control is painting correctly? As it turns out, this is another benefit of buffer-based painting – a full representation of the form's client area is available in the offscreen bitmap. Sample a small area of that Bitmap, say, the area where a control is supposed to appear, and you’ve got a way to verify correctness.
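
To make that concrete, here is a minimal sketch of such a check, carried on from the snippet above (the PaintChecks class, its PixelMatches helper, and the coordinates are illustrative, not part of any existing framework):

using System.Drawing;

static class PaintChecks
{
    // Compare one pixel of the offscreen buffer against the expected color.
    // ToArgb() is used because Color equality also compares the color's name.
    public static bool PixelMatches(Bitmap offscreen, int x, int y, Color expected)
    {
        return offscreen.GetPixel(x, y).ToArgb() == expected.ToArgb();
    }
}

// For example, a pixel on the outline drawn at (10, 10) should be black:
// bool ok = PaintChecks.PixelMatches(offscreen, 10, 10, Color.Black);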

Sampling

Notice how we say sample. We are going to sample the area where the control should appear. This means we will save the value of only some of the pixels in the control area. Why? Why not save the whole offscreen bitmap per test? Good question. The main problem is that the entire offscreen bitmap is often too exact. It’s an over-specification of the test results. For example, assume we have a multi-control test which considers the interaction between a textbox and a radio button. It’s a stress test and contains other controls as well. If the painting for any of the controls on the form changes in the slightest, our testing framework will throw a red on that test.

But so what? Let's forge ahead. The next problem is researching the red: inspecting the two Bitmaps, finding the difference (or differences) between them, and then determining the cause. This can often lead to research unrelated to the case at hand, since the master Bitmap contains a full representation of all controls on the form. Now, that’s not all bad; sometimes you can find some really nasty bugs by researching things “unrelated to the case at hand.” The real problem, though, is that saving the entire Bitmap can result in all tests going red after only the slightest change to a single control. Want to run two thousand cases through a Bitmap Differ, manually researching why each change occurred? Not helpful.

Sampling, then, has proven to be a much more productive approach, because it lets us focus on the one control we are actually testing and ignore the ones we are not. This reduces the sensitivity of the master to changes in the system, and generally keeps entire test sets from going red after the slightest painting tweak.

If we are not going to use a Bitmap Differ to isolate the changes in painting results, what are we going to use? Glad you asked. We’ll use a sampling viewer, which displays the colors found in the pixels sampled, arranged (roughly) as the pixels are arranged in the control. Place two of these viewers side-by-side (one with the live results, the other with the master), and we can identify differences at a glance.
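
As a rough approximation of such a viewer, a small control like the one below paints each sampled color as a swatch; place two side by side and differences jump out. (SampleViewer and its simple left-to-right layout are invented here for illustration; the viewer in the framework arranges the swatches roughly as the pixels sit in the control.)

using System.Collections.Generic;
using System.Drawing;
using System.Windows.Forms;

// Illustrative stand-in for a sampling viewer: each sampled color is
// painted as a small square, left to right.
public class SampleViewer : Control
{
    private readonly List<Color> samples = new List<Color>();

    public List<Color> Samples
    {
        get { return samples; }
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        const int cell = 12;
        for (int i = 0; i < samples.Count; i++)
        {
            using (SolidBrush brush = new SolidBrush(samples[i]))
            {
                e.Graphics.FillRectangle(brush, i * cell, 0, cell - 1, cell - 1);
            }
        }
    }
}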

So let’s enhance our testing framework to support this new style of testing. We really need two things: a Windows form to host the test and a Monitor with sample viewers. The enhanced testing framework is shown below:

Appearing at the top left, we have the Windows host form, which is created by the case. All controls created during a test reside in this form. Our new case ancestor, PaintCase, has a setup routine which creates the form and a teardown routine which closes it. At the bottom left, we have the new Monitor, with two sample viewers – live results on the left, master on the right. The second case is selected and you can see that the colors match.
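
As a sketch of the setup and teardown behavior just described (the member names below are guesses, not the framework's actual API):

using System.Windows.Forms;

// Rough sketch of the PaintCase ancestor: setup creates a fresh host form
// for the test's controls, teardown closes it again.
public abstract class PaintCase
{
    protected Form HostForm;

    public virtual void Setup()
    {
        HostForm = new Form();
        HostForm.Show();
    }

    public virtual void Teardown()
    {
        if (HostForm != null)
        {
            HostForm.Close();
            HostForm.Dispose();
            HostForm = null;
        }
    }
}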

Let's look more closely at the samplers:

The sampling pattern consists of three diagonal pixels from each corner of the control and one pixel at the midpoint of each side of the control. For the control we are sampling here, this technique is certainly more than sufficient.
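
Generating that pattern for a control's rectangle is straightforward. A sketch, with the exact inward offsets assumed (the framework's real offsets may differ):

using System.Collections.Generic;
using System.Drawing;

static class Sampler
{
    // Three pixels stepping diagonally inward from each corner,
    // plus one pixel at the midpoint of each side: 16 samples in all.
    public static List<Point> SamplePoints(Rectangle area)
    {
        List<Point> points = new List<Point>();
        int right = area.Right - 1;
        int bottom = area.Bottom - 1;

        for (int i = 0; i < 3; i++)
        {
            points.Add(new Point(area.Left + i, area.Top + i));   // top-left corner
            points.Add(new Point(right - i, area.Top + i));       // top-right corner
            points.Add(new Point(area.Left + i, bottom - i));     // bottom-left corner
            points.Add(new Point(right - i, bottom - i));         // bottom-right corner
        }

        points.Add(new Point(area.Left + area.Width / 2, area.Top));   // top midpoint
        points.Add(new Point(area.Left + area.Width / 2, bottom));     // bottom midpoint
        points.Add(new Point(area.Left, area.Top + area.Height / 2));  // left midpoint
        points.Add(new Point(right, area.Top + area.Height / 2));      // right midpoint

        return points;
    }
}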

Coding ApplicationZero

To get to this point, we obviously have to do some coding. The basic architecture looks like this (a rough skeleton in code follows the list):

  • FormBlaster - contains the buffer and blasts the buffer to the form
  • Control - defines the basic "paintable" object
  • ControlOverlay - represents the ClientArea of the form, contains all child controls
  • Bounds - declares a single abstract routine: Contains, useful for hit-testing
  • Rectangle - a typical rectangle, implements Contains
  • Stone - the solid, draggable rectangle in ApplicationZero
  • StonePad - the drag target in ApplicationZero
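
Here is a rough skeleton of how these classes might fit together. Only the names and responsibilities come from the list above; the member signatures are guesses, and the article's Rectangle is renamed RectangleBounds in this sketch so it does not collide with System.Drawing.Rectangle:

using System.Collections.Generic;
using System.Drawing;

public abstract class Bounds
{
    // The single abstract routine used for hit-testing.
    public abstract bool Contains(Point point);
}

public class RectangleBounds : Bounds
{
    public Rectangle Area;

    public override bool Contains(Point point)
    {
        return Area.Contains(point);
    }
}

public abstract class Control
{
    public RectangleBounds Bounds = new RectangleBounds();

    // Draw this control into the offscreen buffer.
    public abstract void Paint(Graphics graphics);
}

public class ControlOverlay : Control
{
    public readonly List<Control> Children = new List<Control>();

    public override void Paint(Graphics graphics)
    {
        foreach (Control child in Children)
        {
            child.Paint(graphics);   // every child paints itself into the buffer
        }
    }
}

public class Stone : Control
{
    public override void Paint(Graphics graphics)
    {
        graphics.FillRectangle(Brushes.SteelBlue, Bounds.Area);   // color is illustrative
    }
}

public class StonePad : Control
{
    public override void Paint(Graphics graphics)
    {
        graphics.DrawRectangle(Pens.Gray, Bounds.Area);   // the empty frame
    }
}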

The flow of the Paint call is as follows:
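
Roughly: the blaster asks the ControlOverlay to paint into the offscreen buffer, the overlay walks its children so each control paints itself, and the blaster then copies the affected area of the buffer onto the form. Building on the skeleton above, a hedged sketch of FormBlaster (its members are guesses):

using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;

// Sketch of the paint flow, reusing the ControlOverlay skeleton above.
// FormBlaster owns the buffer; controls never touch the screen directly.
public class FormBlaster
{
    private readonly Form form;
    private readonly ControlOverlay overlay;
    private readonly Bitmap offscreen;

    public FormBlaster(Form form, ControlOverlay overlay)
    {
        this.form = form;
        this.overlay = overlay;
        offscreen = new Bitmap(form.ClientSize.Width,
            form.ClientSize.Height, PixelFormat.Format32bppPArgb);
    }

    public void Paint(Rectangle dirty)
    {
        // 1. Every control paints itself into the buffer...
        using (Graphics graphics = Graphics.FromImage(offscreen))
        {
            graphics.Clear(Color.White);
            overlay.Paint(graphics);
        }

        // 2. ...and only the dirty area of the buffer is blasted to the form.
        using (Graphics graphics = form.CreateGraphics())
        {
            graphics.DrawImage(offscreen, dirty, dirty, GraphicsUnit.Pixel);
        }
    }
}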

Simple, simple, simple, yes. But that’s exactly what we want at this point. If you were watching like a hawk, you probably noticed that the last two cases in the Case List have empty circle icons. This indicates that the case has no master. We have no implementation for “hotness” at this point, so these cases cannot paint correctly. In the next article (Article One: Drag And Drop), we finish ApplicationZero, taking up hit-testing and dragging.

Links

Updates

  • Feb 4th, 2005 - UICaseBaseSource.zip contains extensive comments on how to move forward with UICaseBase for a new set of cases

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
