Introduction
.NET 3.5 introduced a few new language features; oh, and there was that little thing called LINQ. But with all the excitement, a few things sometimes get missed, such as the new 3D elements that are available to WPF developers. This article discusses the use of some of these new 3D related elements.
In order to demonstrate some of them (no, I don't use all of the new elements in the attached demo app), I picked something simple: a bunch of 3D meshes within a Viewport3D (3D scene) that can be clicked on to open blog entries. It allows users to switch between three different blogs: my own, Josh Smith's, and Karl Shifflett's.
Here is what this article contains:
- A Video of the Demo App
- The New .NET 3.5 Elements
- Demo App
- Tesselate Creation
- History
A Video of the Demo App
Due to the nature of 3D, the only way I can do the attached demo application any justice is to show you it in action. As such, please click on the image below to see a video of the demo application in action:
The New .NET 3.5 Elements
There are three new 3D related elements in .NET 3.5, each of which is described below. But in order to understand why these new elements are so cool, you need to go back to .NET 3.0 land. In .NET 3.0, ModelVisual3D elements were simple elements that did not support the routed event handling capabilities offered by their 2D counterparts. So when working with 3D elements and wanting to respond to mouse events or perform hit testing, you had to do it manually. That sounds OK, but the only place you could actually carry out hit testing, or even listen to routed events, was the Viewport3D (3D scene) that contained the various ModelVisual3D elements (3D objects).
This worked something like this:
private void viewport_MouseDown(object sender, MouseButtonEventArgs e)
{
    Viewport3D viewport = (Viewport3D)sender;
    Point location = e.GetPosition(viewport);

    // Ask the visual tree what (if anything) lies under the mouse
    HitTestResult hitResult = VisualTreeHelper.HitTest(viewport, location);
    if (hitResult != null && hitResult.VisualHit == SOME_VISUAL)
    {
        // React to a hit on a particular visual
    }

    // For 3D content, the result can be cast to the mesh-specific type
    RayMeshGeometry3DHitTestResult meshHitResult =
        hitResult as RayMeshGeometry3DHitTestResult;
    if (meshHitResult != null && meshHitResult.ModelHit == SOME_MODEL)
    {
        // React to a hit on a particular Model3D
    }
    if (meshHitResult != null && meshHitResult.MeshHit == SOME_MESH)
    {
        // React to a hit on a particular MeshGeometry3D
    }
}
Now this isn't that bad for one ModelVisual3D element (3D object). But if you have loads of them, this just isn't that fun. This is why the new ContainerUIElement3D/ModelUIElement3D elements are cool. They are proper, full blown elements that support input, focus, and events. So we don't have to write code like the above any more; we simply wire up a routed event handler and write the appropriate code in the code-behind. Much better. Let's have a look at these new elements in a bit more detail.
ContainerUIElement3D
ContainerUIElement3D simply provides a container for ModelUIElement3D objects. Like ModelUIElement3D, it is a UIElement3D, which provides support for input, focus, and events in 3D. Stealing from MSDN, here is an example with two cube ModelUIElement3D elements within a ContainerUIElement3D. Notice the use of the ContainerUIElement3D MouseDown routed event and the ModelUIElement3D MouseDown routed events. Full-fledged events on 3D objects, cool!
<Viewport3D>
    <Viewport3D.Camera>
        <PerspectiveCamera Position="8,3,0"
                           LookDirection="-8,-3,0" />
    </Viewport3D.Camera>

    <!-- The container, with its own MouseDown routed event -->
    <ContainerUIElement3D MouseDown="ContainerMouseDown">
        <ContainerUIElement3D.Transform>
            <RotateTransform3D>
                <RotateTransform3D.Rotation>
                    <AxisAngleRotation3D x:Name="containerRotation"
                                         Axis="0, 1, 0" Angle="0" />
                </RotateTransform3D.Rotation>
            </RotateTransform3D>
        </ContainerUIElement3D.Transform>

        <!-- First cube, with its own MouseDown routed event -->
        <ModelUIElement3D MouseDown="Cube1MouseDown">
            <ModelUIElement3D.Transform>
                <TranslateTransform3D OffsetZ="1.5" />
            </ModelUIElement3D.Transform>
            <ModelUIElement3D.Model>
                <GeometryModel3D Geometry="{StaticResource CubeMesh}">
                    <GeometryModel3D.Material>
                        <DiffuseMaterial x:Name="cube1Material"
                                         Brush="Blue" />
                    </GeometryModel3D.Material>
                </GeometryModel3D>
            </ModelUIElement3D.Model>
        </ModelUIElement3D>

        <!-- Second cube, with its own MouseDown routed event -->
        <ModelUIElement3D MouseDown="Cube2MouseDown">
            <ModelUIElement3D.Transform>
                <TranslateTransform3D OffsetZ="-1.5" />
            </ModelUIElement3D.Transform>
            <ModelUIElement3D.Model>
                <GeometryModel3D Geometry="{StaticResource CubeMesh}">
                    <GeometryModel3D.Material>
                        <DiffuseMaterial x:Name="cube2Material"
                                         Brush="Green" />
                    </GeometryModel3D.Material>
                </GeometryModel3D>
            </ModelUIElement3D.Model>
        </ModelUIElement3D>
    </ContainerUIElement3D>

    <!-- A light so the cubes are visible -->
    <ModelVisual3D>
        <ModelVisual3D.Content>
            <PointLight Color="White" Position="3, 10, 4" />
        </ModelVisual3D.Content>
    </ModelVisual3D>
</Viewport3D>
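The XAML above references three handlers (ContainerMouseDown, Cube1MouseDown and Cube2MouseDown). The MSDN sample's code-behind isn't reproduced here, but a minimal sketch of what handlers like these might do, using the named elements from the XAML above, looks like this:
// Clicking either cube simply changes its material brush.
private void Cube1MouseDown(object sender, MouseButtonEventArgs e)
{
    cube1Material.Brush = Brushes.Red;
    e.Handled = true; // stop the event bubbling up to the container
}

private void Cube2MouseDown(object sender, MouseButtonEventArgs e)
{
    cube2Material.Brush = Brushes.Orange;
    e.Handled = true;
}

// Clicking anywhere else on the container spins the whole group.
private void ContainerMouseDown(object sender, MouseButtonEventArgs e)
{
    containerRotation.Angle += 20;
}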
ModelUIElement3D
As previously stated, ModelUIElement3D is a new element that renders a 3D model and supports input, focus, and events. Using the same example as before, notice the ModelUIElement3D MouseDown routed event:
<ModelUIElement3D MouseDown="Cube2MouseDown">
    <ModelUIElement3D.Transform>
        <TranslateTransform3D OffsetZ="-1.5" />
    </ModelUIElement3D.Transform>
    <ModelUIElement3D.Model>
        <GeometryModel3D Geometry="{StaticResource CubeMesh}">
            <GeometryModel3D.Material>
                <DiffuseMaterial x:Name="cube2Material" Brush="Green" />
            </GeometryModel3D.Material>
        </GeometryModel3D>
    </ModelUIElement3D.Model>
</ModelUIElement3D>
Viewport2DVisual3D
Although the demo application doesn't actually use this, I can say a few words about this new element. Basically, the Viewport2DVisual3D element can be used within a Viewport3D (3D scene) to host an interactive 2D element, such as a Button. I've taken the following example code straight from MSDN; it shows how to place a button, a 2D object, on a 3D object. Note that you must set the IsVisualHostMaterial attached property on the material on which you wish to have the 2D visual placed.
<Viewport3D>
    <Viewport2DVisual3D>
        <!-- Give the hosting 3D surface a slight rotation -->
        <Viewport2DVisual3D.Transform>
            <RotateTransform3D>
                <RotateTransform3D.Rotation>
                    <AxisAngleRotation3D Angle="40" Axis="0, 1, 0" />
                </RotateTransform3D.Rotation>
            </RotateTransform3D>
        </Viewport2DVisual3D.Transform>

        <!-- The geometry, material, and hosted visual -->
        <Viewport2DVisual3D.Geometry>
            <MeshGeometry3D Positions="-1,1,0 -1,-1,0 1,-1,0 1,1,0"
                            TextureCoordinates="0,0 0,1 1,1 1,0"
                            TriangleIndices="0 1 2 0 2 3"/>
        </Viewport2DVisual3D.Geometry>

        <Viewport2DVisual3D.Material>
            <DiffuseMaterial Viewport2DVisual3D.IsVisualHostMaterial="True"
                             Brush="White"/>
        </Viewport2DVisual3D.Material>

        <Button>Hello, 3D</Button>
    </Viewport2DVisual3D>
</Viewport3D>
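One nice consequence (not shown in the MSDN snippet) is that the hosted 2D element behaves exactly as it would in a normal 2D visual tree, so its usual routed events still fire. As a minimal sketch, assuming the Button above were given x:Name="helloButton", you could wire its Click event in the code-behind like this:
// The hosted Button is a regular 2D element, so its normal routed
// events (Click, MouseEnter, etc.) fire even though it is rendered in 3D.
helloButton.Click += (sender, e) =>
{
    MessageBox.Show("Clicked a 2D Button that lives on a 3D surface");
};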
Demo App
So what does the attached demo app actually do? Well, it looks like the following diagram when it first starts:
A number of 3D objects (ModelUIElement3D elements) are shown, each representing a blog entry. Moving the mouse over one of these ModelUIElement3D elements causes it to grow in scale, and when the mouse moves out, it shrinks back to its original size.
This is achieved as follows:
<Tools:TrackballDecorator>
    <Viewport3D>
        <Viewport3D.Camera>
            <PerspectiveCamera x:Name="camera" Position="-2,2,40"
                               LookDirection="2,-2,-40" FieldOfView="90" />
        </Viewport3D.Camera>

        <!-- The spheres are added to this container in the code-behind -->
        <ContainerUIElement3D x:Name="container" />

        <ModelVisual3D>
            <ModelVisual3D.Content>
                <DirectionalLight Color="White" Direction="-1,-1,-1"/>
            </ModelVisual3D.Content>
        </ModelVisual3D>
    </Viewport3D>
</Tools:TrackballDecorator>
So this sets up the Viewport3D (3D scene) and does the normal 3D stuff like creating a Light and a Camera, but it also sets up a ContainerUIElement3D (a container to host other ModelUIElement3D elements).
The only other thing to note here is that I am using a TrackballDecorator element. This is not actually part of the .NET 3.0/3.5 frameworks, but is part of a DLL that the WPF 3D team released, called 3DTools, which is an open source CodePlex project located right here.
This TrackballDecorator element allows the user to rotate and zoom the entire Viewport3D (3D scene) in 3D space. It's very cool.
In the code-behind, a number of ModelUIElement3D elements are added to represent the blog entries, where each ModelUIElement3D element uses a tessellated MeshGeometry3D mesh (a sphere in my case). This is done something like this:
// Create a single shared sphere mesh and store it as a resource
this.Resources.Add("sphereMesh", Tesselate.Create(10, 10, 5));
.....
.....
ModelUIElement3D sphere1 =
    CreateSphere(brushes[1], points3D[0].X, points3D[0].Y, points3D[0].Z);
container.Children.Add(sphere1);
feedsForShapes.Add(sphere1, feeds[0]);
.....
.....
private ModelUIElement3D CreateSphere(Brush materialBrush, double OffsetX,
    double OffsetY, double OffsetZ)
{
    // Hook up the routed events directly on the 3D element
    ModelUIElement3D sphere3D = new ModelUIElement3D();
    sphere3D.MouseEnter += new MouseEventHandler(Sphere_MouseEnter);
    sphere3D.MouseLeave += new MouseEventHandler(Sphere_MouseLeave);
    sphere3D.MouseDown += new MouseButtonEventHandler(sphere3D_MouseDown);

    // Give the element its geometry and material
    GeometryModel3D sphere3D_Geom = new GeometryModel3D(
        this.Resources["sphereMesh"] as MeshGeometry3D,
        new DiffuseMaterial(materialBrush));
    sphere3D.Model = sphere3D_Geom;

    // Scale, then translate into position, then rotate
    Transform3DGroup transGroup = new Transform3DGroup();
    ScaleTransform3D scaleTrans = new ScaleTransform3D(1, 1, 1);
    TranslateTransform3D translateTrans =
        new TranslateTransform3D(OffsetX, OffsetY, OffsetZ);
    RotateTransform3D rotateTrans = new RotateTransform3D();
    rotateTrans.Rotation = new AxisAngleRotation3D(new Vector3D(0, 1, 0), 1);
    transGroup.Children.Add(scaleTrans);
    transGroup.Children.Add(translateTrans);
    transGroup.Children.Add(rotateTrans);
    sphere3D.Transform = transGroup;

    return sphere3D;
}
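The Sphere_MouseEnter/Sphere_MouseLeave handlers are what produce the grow/shrink effect mentioned earlier. The demo's exact implementation isn't reproduced here, but a minimal sketch of the idea, animating the ScaleTransform3D that was added as the first child of the Transform3DGroup (AnimateScale is just an illustrative helper), might look like this:
private void Sphere_MouseEnter(object sender, MouseEventArgs e)
{
    AnimateScale((ModelUIElement3D)sender, 1.5); // grow
}

private void Sphere_MouseLeave(object sender, MouseEventArgs e)
{
    AnimateScale((ModelUIElement3D)sender, 1.0); // shrink back
}

// Animate the ScaleTransform3D (first child of the Transform3DGroup)
private void AnimateScale(ModelUIElement3D sphere, double toValue)
{
    Transform3DGroup group = (Transform3DGroup)sphere.Transform;
    ScaleTransform3D scale = (ScaleTransform3D)group.Children[0];

    DoubleAnimation anim =
        new DoubleAnimation(toValue, TimeSpan.FromMilliseconds(300));
    scale.BeginAnimation(ScaleTransform3D.ScaleXProperty, anim);
    scale.BeginAnimation(ScaleTransform3D.ScaleYProperty, anim);
    scale.BeginAnimation(ScaleTransform3D.ScaleZProperty, anim);
}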
The actual blog entries are read using a very simple bit of XLINQ, which is as follows:
public List<FeedEntry> GetFeedEntries(FeedMember feedMember)
{
    try
    {
        // Load the RSS feed for the selected blog
        XElement feeds = XElement.Load(GetFeedUrl(feedMember));
        if (feeds.Element("channel") != null)
        {
            // Project the first 10 <item> elements into FeedEntry objects
            var items = (from f in feeds.Element("channel").Elements("item")
                         select new FeedEntry
                         {
                             Link = f.Element("link").Value,
                             Title = f.Element("title").Value
                         }).Take(10);
            return items.ToList();
        }
        else
        {
            return null;
        }
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine(ex.Message + "\r\n");
        return null;
    }
}
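Clicking a sphere is what actually opens a blog entry, via the sphere3D_MouseDown handler wired up in CreateSphere. Again, the demo's exact code isn't shown here, but assuming feedsForShapes is a Dictionary<ModelUIElement3D, FeedEntry>, a minimal sketch might be:
private void sphere3D_MouseDown(object sender, MouseButtonEventArgs e)
{
    ModelUIElement3D clickedSphere = (ModelUIElement3D)sender;

    // Look up the feed entry that this 3D element represents
    FeedEntry entry;
    if (feedsForShapes.TryGetValue(clickedSphere, out entry) && entry != null)
    {
        // Open the blog entry in the default browser
        System.Diagnostics.Process.Start(entry.Link);
    }
}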
The next section talks a little bit more about how the tessellated sphere meshes are created.
Tesselate Creation
Within WPF, there are no standard meshes that can be used for a GeometryModel3D's Geometry property. Sometimes frameworks ship example meshes; DirectX, for example, has its well-known teapot. No such luck in WPF: you have to create them manually. The following code shows how you can create a simple square MeshGeometry3D mesh:
<GeometryModel3D.Geometry>
    <MeshGeometry3D
        TriangleIndices="0,1,2 2,3,0"
        TextureCoordinates="0,1 1,1 1,0 0,0"
        Positions="-0.5,-0.5,0 0.5,-0.5,0 0.5,0.5,0 -0.5,0.5,0" />
</GeometryModel3D.Geometry>
Perhaps this could do with a little explanation. The Positions property lists the vertex positions in 3D space (X, Y, Z). We can see that if these were mapped out, we would get something like:
And the TriangleIndices property lists the indices into Positions for the triangles that make up GeometryModel3D.Geometry; in this case, a simple square made from two separate triangles. This is how 3D works: triangles are the building blocks of any 3D mesh. Let's see these two triangles:
But what I wanted for this article was a nice sphere, or, to be more precise, a tessellated sphere.
Tessellation: A tessellation or tiling of the plane is a collection of plane figures that fills the plane with no overlaps and no gaps. One may also speak of tessellations of parts of the plane or of other surfaces.
-- http://en.wikipedia.org/wiki/Tesselate
But what does that really mean in terms of a 3D mesh? Well, consider the following image:
We can see that by dividing the sphere into divisions both around it (theta) and up from the bottom pole to the top pole (phi), we create rectangles, and we can treat each rectangle just as we did the simple square MeshGeometry3D above. This would look something like the following figure, where we can see the triangles that form our mesh.
There is a helper class called Tesselate in the attached demo project which produces the appropriate MeshGeometry3D; here is the main part of that class:
public static MeshGeometry3D Create(int tDiv, int pDiv, double radius)
{
    // Angular step size around the sphere (theta) and pole-to-pole (phi)
    double dt = DegToRad(360.0) / tDiv;
    double dp = DegToRad(180.0) / pDiv;

    MeshGeometry3D mesh = new MeshGeometry3D();

    // Generate the vertices, normals, and texture coordinates
    for (int pi = 0; pi <= pDiv; pi++)
    {
        double phi = pi * dp;
        for (int ti = 0; ti <= tDiv; ti++)
        {
            double theta = ti * dt;
            mesh.Positions.Add(GetPosition(theta, phi, radius));
            mesh.Normals.Add(GetNormal(theta, phi));
            mesh.TextureCoordinates.Add(GetTextureCoordinate(theta, phi));
        }
    }

    // Stitch each rectangle of the grid together out of two triangles
    for (int pi = 0; pi < pDiv; pi++)
    {
        for (int ti = 0; ti < tDiv; ti++)
        {
            int x0 = ti;
            int x1 = (ti + 1);
            int y0 = pi * (tDiv + 1);
            int y1 = (pi + 1) * (tDiv + 1);

            mesh.TriangleIndices.Add(x0 + y0);
            mesh.TriangleIndices.Add(x0 + y1);
            mesh.TriangleIndices.Add(x1 + y0);

            mesh.TriangleIndices.Add(x1 + y0);
            mesh.TriangleIndices.Add(x0 + y1);
            mesh.TriangleIndices.Add(x1 + y1);
        }
    }

    mesh.Freeze();
    return mesh;
}
I can't remember exactly where this code came from, but it was more than likely from the WPF 3D team's blog: http://blogs.msdn.com/wpf3d/.
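The Create method relies on a few small helpers (DegToRad, GetPosition, GetNormal, and GetTextureCoordinate) that aren't shown above. They are standard spherical coordinate conversions; a sketch of how they are typically written (the usual formulation, not necessarily the demo's exact code) is:
private static double DegToRad(double degrees)
{
    return (degrees / 180.0) * Math.PI;
}

// Convert the (theta, phi) angles into a point on a sphere of the given radius
private static Point3D GetPosition(double theta, double phi, double radius)
{
    double x = radius * Math.Sin(theta) * Math.Sin(phi);
    double y = radius * Math.Cos(phi);
    double z = radius * Math.Cos(theta) * Math.Sin(phi);
    return new Point3D(x, y, z);
}

// For a sphere centred at the origin, the normal is just the
// position on a unit sphere
private static Vector3D GetNormal(double theta, double phi)
{
    return (Vector3D)GetPosition(theta, phi, 1.0);
}

// Map (theta, phi) onto a 0..1 texture coordinate
private static Point GetTextureCoordinate(double theta, double phi)
{
    return new Point(theta / (2 * Math.PI), phi / Math.PI);
}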
History
- 27/03/08: Initial release.