Introduction
After a long time of writing mostly business logic and simple UIs (on the web and the desktop) that use off-the-shelf libraries, a while ago I decided to figure out what it takes to draw objects on a screen and build a user interface that lets the user move them around with simple drags, similar to dragging windows. Since I did not find an article on this, I decided to figure things out on my own and then post my findings and thoughts.
I originally wrote this for myself, but ultimately decided to publish this article for two reasons:
- It can serve as a learning tool for someone in the same spot I was in
- We can collaborate on different ways to do these things
If you find this interesting, have a look at my next article.
Background
I wanted to be able to draw shapes and move them around the screen, much like icons in the Windows Explorer UI.
Functional Goals
My objective was to have an application that could do the following:
- Draw arbitrary objects
- Let the user select or multi-select them
- Provide a selection box
- Let the user drag items around
- Zoom in and out
- Support multiple viewports of the same universe
- Use real-world units to draw the items
Simple but different, and enough to let me have some fun.
Design Goals
- Easy to extend
- Easy to port to a different platform, e.g., WPF or Java
- Maintainable
Code Overview (Design)
At a high level, the application consists of a universe of shapes (the model), a manager of gestures (the controller), and a UI which provides the drawing area and knows how to interpret gestures (the view). I guess this is more or less the MVC pattern. I also rely on interfaces such as IShape, at least in the areas where I felt I wanted to be extensible.
The View and Controller Logic
The view lives mostly in Form1.cs (great name) and in Canvas.cs, which does the drawing. Form1 tells the canvases to draw themselves and interprets gestures, which then get passed on to different managers (the controllers) that manipulate the objects. Pulling Canvas out into its own class makes it easy to create different viewports onto the same universe. In my sample, for example, one viewport shows the regular view while the other is zoomed and rotated.
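To give a feel for this split, here is a minimal sketch of the viewport side. The field names and constructor are my own inventions; only GetWorldToViewTransform, which appears later in the article, is taken from the source.

using System.Drawing;
using System.Drawing.Drawing2D;
using System.Windows.Forms;

// Each Canvas is an independent viewport onto the same universe, so two
// instances can show the same shapes zoomed or rotated differently.
public class Canvas : Control
{
    private readonly IShape _universe;   // shared model, handed in by Form1

    public Canvas(IShape universe)
    {
        _universe = universe;
        DoubleBuffered = true;           // avoid flicker while dragging
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        e.Graphics.Transform = GetWorldToViewTransform();  // world -> view
        _universe.Paint(e.Graphics);     // the universe draws all its shapes
    }

    // Identity placeholder; the real transform is discussed further down.
    private Matrix GetWorldToViewTransform() => new Matrix();
}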
Interpreting Gestures and Managing Them
The part that I found the most fun to get to work correctly was multi-selection. I wanted it to be very intuitive and standard and thus modeled it after Windows Explorer (at least to a great extent).
All gestures are interpreted by a gesture manager, which then tells an action manager how to manipulate the shapes. Again, the universe and the shape managers are decoupled.
Each action has its own manager, as I felt this simplified the overall logic flow. I suppose at some point it could become too much, but at least you know that every class will stay simple and have a single purpose, as sketched below.
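To illustrate the one-manager-per-action idea, here is a hedged sketch; none of these names (beyond the gesture manager concept itself and the HandleMouseDown signature shown later) come from the article:

using System.Drawing;

// Each action gets its own small manager, keeping every class single-purpose.
public interface IActionManager
{
    void MouseDown(PointF worldPoint, bool ctrl);
    void MouseMove(PointF worldPoint);
    void MouseUp(PointF worldPoint);
}

public class GestureManager
{
    private readonly IActionManager _drag;          // moves selected shapes
    private readonly IActionManager _selectionBox;  // rubber-band selection
    private IActionManager _active;

    public GestureManager(IActionManager drag, IActionManager selectionBox)
    {
        _drag = drag;
        _selectionBox = selectionBox;
    }

    public void HandleMouseDown(PointF p, bool ctrlPressed)
    {
        // Pressing on a shape starts a drag; pressing on empty space
        // starts a selection box. Later events go to the active manager.
        _active = HitShape(p) ? _drag : _selectionBox;
        _active.MouseDown(p, ctrlPressed);
    }

    private bool HitShape(PointF p) => false;   // placeholder for the real hit test
}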
The Shape Model
The basic object here is IShape. Whenever you want to introduce a new shape, you just need to implement IShape and probably ISelectable. The universe is made up of zero or more shapes (a list). One could question why ISelectable and IShape are separate; maybe I was a bit too eager to decouple things, and in a larger system the split would probably be more justified. What drove me to it at one point is that I wanted a drawing shape for the multi-select bounding box which was itself not selectable (although I also prevent it from being selected by never making it part of the universe). Even so, this could have been implemented as an IsSelectable property on the IShape interface. So, a design decision which would probably end up as just legacy code in a larger system.
Using the Code
Hopefully, the overview above tells most of the story. Here are some details that I think are important.
At the heart of the model, you have IShape, IDrawable, and ISelectable.
public interface IShape : IDrawable
{
    bool Contains(PointF p);        // point hit test
    void Move(PointF delta);        // translate by delta, in world units
    void FillRegion(Region region); // add this shape's area to the region
}
IDrawable and ISelectable are very simple interfaces, each with a single method, and here they are mostly used as tag interfaces. As I said above, it is debatable whether this split is needed, and at the end of the day it is a design decision. For example, it could let us very easily have background images which can never be touched or moved. In any case, whenever you add a new shape, you just implement those methods and interfaces and the shape can then be part of the shape universe.
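Based on that description, the two interfaces might look roughly like this (my reconstruction; only Paint is confirmed by the code below, and the ISelectable member is an assumption):

using System.Drawing;

public interface IDrawable
{
    void Paint(Graphics g);        // draw yourself onto the given surface
}

public interface ISelectable
{
    void Select(bool selected);    // assumed member: set the selected state
}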
The Paint method simply paints the shape onto the Graphics parameter that is passed in. You can assume that Paint for any object is done bottom-up, i.e., just draw your shape; if something is behind it, it will be hidden by this instance of the shape. For selected items, I just outline them. For performance reasons, the graphics path is recalculated only when the model changes. In this case, the only method that can change things is Move, but one could similarly add a Resize if needed.
public void Paint(Graphics g)
{
    g.FillPath(Brushes.Yellow, gp);    // gp is the cached GraphicsPath
    if (_isSelected)
        g.DrawPath(Pens.Black, gp);    // selected shapes get an outline
}

public void Move(PointF d)
{
    _border.X += d.X;
    _border.Y += d.Y;
    gp.Reset();                        // the model changed, so rebuild the cached path
    gp.AddEllipse(_border);
}
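For context, the fields those two methods rely on would look along these lines. This is my reconstruction of an ellipse shape; only Paint and Move above are quoted from the article, and the other members are the ones shown elsewhere in this text:

using System.Drawing;
using System.Drawing.Drawing2D;

public class EllipseShape : IShape, ISelectable
{
    private RectangleF _border;        // position and size in world units
    private readonly GraphicsPath gp = new GraphicsPath();
    private bool _isSelected;

    public EllipseShape(RectangleF border)
    {
        _border = border;
        gp.AddEllipse(_border);        // cache the path until the model changes
    }

    public void Select(bool selected) { _isSelected = selected; }

    // Paint and Move exactly as quoted above; Contains and FillRegion
    // exactly as shown further down.
    public void Paint(Graphics g) { g.FillPath(Brushes.Yellow, gp); if (_isSelected) g.DrawPath(Pens.Black, gp); }
    public void Move(PointF d) { _border.X += d.X; _border.Y += d.Y; gp.Reset(); gp.AddEllipse(_border); }
    public bool Contains(PointF p) { return gp.IsVisible(p); }
    public void FillRegion(Region r) { r.Union(gp); }
}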
The Contains and FillRegion methods help the framework see whether the mouse or the selection box is over a shape (if it is selectable). Again, I simply reuse the graphics path computed above. The Contains method is rather straightforward thanks to the IsVisible method on our friend GraphicsPath. FillRegion is used to check whether a bounding box is selecting our shape. The way this works is: the framework passes an empty region to FillRegion and expects this code to fill it with the desired shape. The selection box is then logically ANDed on top. If there is any overlap, the resulting Region will not be empty, indicating that the shape should be selected.
public void FillRegion(Region r)
{
    r.Union(gp);    // add the cached path's area to the region
}
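On the framework side, the selection test could then look like this (a sketch based on the description above; the method name and parameters are mine, not from the source):

using System.Drawing;

// Intersect the shape's region with the selection box; a non-empty
// result means the box overlaps the shape.
static bool IsInSelectionBox(IShape shape, RectangleF selectionBox, Graphics g)
{
    using (Region r = new Region())
    {
        r.MakeEmpty();              // Region() starts out infinite, so clear it first
        shape.FillRegion(r);        // the shape adds its own area
        r.Intersect(selectionBox);  // the logical AND described above
        return !r.IsEmpty(g);       // any overlap => selected
    }
}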
In the Canvas, Matrix from System.Drawing.Drawing2D comes to the rescue. It lets us change the orientation of the viewport and scale it as desired. The only thing to be careful about is to use the MatrixOrder.Append parameter; otherwise it may not work as expected. This transform can then be used on paint:
dc.Transform = GetWorldToViewTransform();
and when interpreting mouse selections:
gestureControl.HandleMouseDown(
((Canvas)sender).TransformToWorldCoordinates(e.Location),
IsControlPressed());
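Here is a sketch of what those two helpers could look like on the Canvas. The zoom, rotation, and pan fields are placeholders I made up; the important detail is MatrixOrder.Append, which composes each new operation after the existing ones instead of before:

using System.Drawing;
using System.Drawing.Drawing2D;

// World -> view: scale, then rotate, then pan.
Matrix GetWorldToViewTransform()
{
    var m = new Matrix();
    m.Scale(_zoom, _zoom, MatrixOrder.Append);       // e.g., 0.5f = zoomed out 50%
    m.Rotate(_rotationDegrees, MatrixOrder.Append);  // e.g., 10f degrees, for fun
    m.Translate(_panX, _panY, MatrixOrder.Append);   // viewport offset in pixels
    return m;
}

// View -> world: invert the same matrix and push the point through it.
PointF TransformToWorldCoordinates(Point viewPoint)
{
    using (Matrix m = GetWorldToViewTransform())
    {
        m.Invert();
        var pts = new[] { (PointF)viewPoint };
        m.TransformPoints(pts);
        return pts[0];
    }
}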
Points of Interest
Handling of user selection and gestures happens mostly in the mouse-up event, not in mouse-down as I originally thought; the mouse-down handler ends up doing very little (it simply returns false).
Each Canvas has its own coordinate system and can convert between it and a parent coordinate system (in this case, the world coordinates). This lets us do fun things such as having two completely different views of the universe. In my sample, one viewport shows a bigger portion of the universe (zoomed out by 50%) and is also rotated (for fun).
I have a shape group (which is itself also a shape); this, by the way, is how the universe is implemented. The idea is that it should be trivial to implement grouping of shapes (as in PowerPoint): all we would need to do is remove the involved shapes from the universe, add them to a shape group, and then add that group back to the universe.
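A sketch of such a composite might look like this (the class body is my reconstruction; the article only states that the universe works this way):

using System.Collections.Generic;
using System.Drawing;

// A group is itself a shape, so groups can nest (the composite pattern).
public class ShapeGroup : IShape
{
    private readonly List<IShape> _shapes = new List<IShape>();

    public void Add(IShape s)    { _shapes.Add(s); }
    public void Remove(IShape s) { _shapes.Remove(s); }

    // Painting bottom-up: later shapes draw over earlier ones.
    public void Paint(Graphics g)
    {
        foreach (var s in _shapes) s.Paint(g);
    }

    public bool Contains(PointF p)
    {
        foreach (var s in _shapes)
            if (s.Contains(p)) return true;
        return false;
    }

    public void Move(PointF delta)
    {
        foreach (var s in _shapes) s.Move(delta);
    }

    public void FillRegion(Region region)
    {
        foreach (var s in _shapes) s.FillRegion(region);
    }
}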
Hit testing was something I didn't quite know how to handle, specifically in a way that is easily reusable. I ended up filling a region with the shape to hit-test and doing a logical AND with the current cursor position. If anything comes up, there is a hit. This is nice because it easily handles many kinds of shapes, including hollow ones such as a donut.
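As a concrete example of the hollow case, a donut built from two ellipses with the even-odd fill mode is only hit on the ring itself (a standalone snippet, not code from the project):

using System.Drawing.Drawing2D;

var donut = new GraphicsPath(FillMode.Alternate);   // even-odd rule
donut.AddEllipse(0, 0, 100, 100);    // outer circle
donut.AddEllipse(25, 25, 50, 50);    // inner circle becomes the hole

bool onRing = donut.IsVisible(10, 50);   // true: inside outer, outside inner
bool inHole = donut.IsVisible(50, 50);   // false: the hole is not part of the shape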
Finally, I tried to do some form of conversion between real-life coordinates and sizes and the screen. I failed miserably at it. You can see the code for what I tried, but it didn't work, and at the end of the day it couldn't work if you have multiple monitors with different resolutions/DPIs (e.g., how do you make the shape change in pixel size as it moves from one monitor to the other?).
Enjoy!
History
- 15th March, 2011: Initial post