
Collision Detection in XNA

20 Apr 2009

[This article continues the prelude about collision detection published here. It refreshes and completes the older Collision Detection in XNA posts (parts I, II and III), written a long time ago, which many readers asked to have finished. Finally, here it is!]

Simple Collision Detection

XNA includes simple intersection tests for shapes such as bounding spheres, AABBs (Axis-Aligned Bounding Boxes), planes, rays, rectangles, frustums, etc., and any combination of them. Even more useful, 3D models already come with bounding spheres for their parts.

Using those tests, almost any kind of game-like intersection can be achieved. Keep in mind that a Mesh-Whatever intersection is expensive (depending on the number of polygons, of course) and should be left for special cases where very high accuracy is needed. So, it is usually preferable to approximate a complex geometry with a bunch of spheres or boxes than to use the real triangles (see Part 1).

There's a very good post at Sharky's blog about XNA collisions, especially focused on approximating generic shapes with bounding spheres. You can find it here.
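To illustrate how little code this needs, here is a minimal sketch of a coarse model-vs-model test using only the per-mesh bounding spheres XNA generates automatically. The method name CheckModelCollision and the world matrices are placeholders; for models with a bone hierarchy you would also apply each mesh's absolute bone transform before transforming the sphere.

public static bool CheckModelCollision(Model modelA, Matrix worldA,
                                       Model modelB, Matrix worldB)
{
    foreach (ModelMesh meshA in modelA.Meshes)
    {
        // Move the automatically generated sphere from model space to world space
        BoundingSphere sphereA = meshA.BoundingSphere.Transform(worldA);
        foreach (ModelMesh meshB in modelB.Meshes)
        {
            BoundingSphere sphereB = meshB.BoundingSphere.Transform(worldB);
            if (sphereA.Intersects(sphereB))
                return true;
        }
    }
    return false;
}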

Accurate Collision Detection

[Image: Despacho_2_Low]

As mentioned earlier, sometimes a more accurate intersection method is needed. For example, for lighting calculations (where tracing rays is the most usual approach to modeling lights; see the picture above), a Ray-Mesh intersection test seems to be the best option. In D3D, there is a Mesh.Intersect method ready for you that performs the desired test, but XNA has no such method, so we will have to implement it on our own.

To do it, we will need a system-memory copy of the geometry. Unfortunately, meshes are usually created with the ReadOnly flag in XNA (to make their management fast), which won't allow us to access their geometry at runtime. To work around this, we'll have to deal with Custom Content Processing.

Note: Here, you will find an introduction to Custom Content Processing.

Implementing a Custom Content Processor for Collision Detection

The solution to the previous problem is to write a custom content processor that stores a copy of the geometry information at build time, when there is still access to it. All the information needed will be stored in a new class we will name MeshData.

public class MeshData
{
    public VertexPositionNormalTexture[] Vertices;  // system-memory copy of the vertices
    public int[] Indices;                           // index buffer (3 indices per face)
    public Vector3[] FaceNormals;                   // one normal per face

    public MeshData(VertexPositionNormalTexture[] pVertices, int[] pIndices,
        Vector3[] pFaceNormals)
    {
        this.Vertices = pVertices;
        this.Indices = pIndices;
        this.FaceNormals = pFaceNormals;
    }
}

You can put here all the information you need. For the moment, storing the vertices, indices and face normals will be enough.

When Visual Studio processes each model with our content processor, it will write the model's data to an XNB file. When it finds a MeshData object, it will search for a writer able to serialize it, so we have to write our own ContentTypeWriter for the MeshData class:

[ContentTypeWriter]
public class ModelVertexDataWriter : ContentTypeWriter<MeshData>
{
    protected override void Write(ContentWriter output, MeshData value)
    {
        output.Write((int)value.Vertices.Length);
        for (int x = 0; x < value.Vertices.Length; x++)
        {
            output.Write(value.Vertices[x].Position);
            output.Write(value.Vertices[x].Normal);
            output.Write(value.Vertices[x].TextureCoordinate);
        }

        output.Write(value.Indices.Length);
        for (int x = 0; x < value.Indices.Length; x++)
            output.Write(value.Indices[x]);

        output.Write(value.FaceNormals.Length);
        for (int x = 0; x < value.FaceNormals.Length; x++)
            output.Write(value.FaceNormals[x]);
    }

    public override string GetRuntimeType(TargetPlatform targetPlatform)
    {
        return typeof(MeshData).AssemblyQualifiedName;
    }
    public override string GetRuntimeReader(TargetPlatform targetPlatform)
    {
        return "ContentProcessors.ModelVertexDataReader, ContentProcessors," +
            "Version=1.0.0.0, Culture=neutral";
    }
}

In a similar way, when the Content Pipeline reads the XNB file back, it will search for a deserializer for the MeshData type, so we have to write our own ContentTypeReader:

public class ModelVertexDataReader : ContentTypeReader<MeshData>
{
    protected override MeshData Read(ContentReader input, MeshData existingInstance)
    {
        int i = input.ReadInt32();
        VertexPositionNormalTexture[] vb = new VertexPositionNormalTexture[i];
        for (int x = 0; x < i; x++)
        {
            vb[x].Position = input.ReadVector3();
            vb[x].Normal = input.ReadVector3();
            vb[x].TextureCoordinate = input.ReadVector2();
        }

        i = input.ReadInt32();
        int[] ib = new int[i];
        for (int x = 0; x < i; x++)
            ib[x] = input.ReadInt32();

        i = input.ReadInt32();
        Vector3[] normals = new Vector3[i];
        for (int x = 0; x < i; x++)
            normals[x] = input.ReadVector3();

        return new MeshData(vb, ib, normals);
    }
}

Finally, here is the custom content processor itself, which builds a MeshData object for each model that goes through it. Note: Some parts have been taken from ZiggyWare.

[ContentProcessor(DisplayName = "Custom Mesh Processor")]
 public class PositionNormalTexture : ModelProcessor
 {
     public override ModelContent Process(NodeContent input,
         ContentProcessorContext context)
     {
         ModelContent model = base.Process(input, context);
         foreach (ModelMeshContent mesh in model.Meshes)
         {
             // Put the data in the tag.
             VertexPositionNormalTexture[] vb;
             MemoryStream ms = new MemoryStream(mesh.VertexBuffer.VertexData);
             BinaryReader reader = new BinaryReader(ms);

             VertexElement[] elems = mesh.MeshParts[0].GetVertexDeclaration();
             int num =
                  mesh.VertexBuffer.VertexData.Length / VertexDeclaration.GetVertexStrideSize(
                  elems, 0);

             vb = new VertexPositionNormalTexture[num];
             for (int i = 0; i < num; i++)
             {
                 foreach (VertexElement e in elems)
                 {
                     switch (e.VertexElementUsage)
                     {
                     case VertexElementUsage.Position:
                         vb[i].Position.X = reader.ReadSingle();
                         vb[i].Position.Y = reader.ReadSingle();
                         vb[i].Position.Z = reader.ReadSingle();
                         break;
                     case VertexElementUsage.Normal:
                         vb[i].Normal.X = reader.ReadSingle();
                         vb[i].Normal.Y = reader.ReadSingle();
                         vb[i].Normal.Z = reader.ReadSingle();
                         break;
                     case VertexElementUsage.TextureCoordinate:
                         if (e.UsageIndex != 0)
                             continue;
                         vb[i].TextureCoordinate.X = reader.ReadSingle();
                         vb[i].TextureCoordinate.Y = reader.ReadSingle();
                         break;
                     default:
                         Console.WriteLine(e.VertexElementFormat.ToString());
                         switch (e.VertexElementFormat)
                         {
                         case VertexElementFormat.Color:
                             reader.ReadUInt32();
                             break;
                         case VertexElementFormat.Vector3:
                             reader.ReadSingle();
                             reader.ReadSingle();
                             reader.ReadSingle();
                             break;
                         case VertexElementFormat.Vector2:
                             reader.ReadSingle();
                             reader.ReadSingle();
                             break;

                         }
                         break;
                     }
                 }
             } // for i < num

             reader.Close();

             int[] ib = new int[mesh.IndexBuffer.Count];
             mesh.IndexBuffer.CopyTo(ib, 0);
             Vector3[] normals = new Vector3[mesh.IndexBuffer.Count / 3];
             for (int i = 0, conta = 0; i < mesh.IndexBuffer.Count; i += 3, conta++)
             {
                 Vector3 v0 = vb[mesh.IndexBuffer[i]].Position;
                 Vector3 v1 = vb[mesh.IndexBuffer[i + 1]].Position;
                 Vector3 v2 = vb[mesh.IndexBuffer[i + 2]].Position;
                 Vector3 edge1 = v1 - v0;
                 Vector3 edge2 = v2 - v0;
                 Vector3 normal = Vector3.Cross(edge1, edge2);
                 normal.Normalize();
                 normals[conta] = normal;
             }

             mesh.Tag = new MeshData(vb, ib, normals);
         } // foreach mesh
         return model;
     }
 }
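Once a model is built with this processor (select "Custom Mesh Processor" as its Content Processor in the content project, and make sure the game references the assembly containing MeshData and the reader), the copied geometry is available at runtime through each mesh's Tag property. A minimal sketch, where "scene" is just a placeholder asset name:

// Inside LoadContent: retrieving the MeshData stored by the custom processor.
// "scene" is a placeholder asset name; the model must have been built with
// the "Custom Mesh Processor" selected in the content project.
Model model = Content.Load<Model>("scene");
foreach (ModelMesh mesh in model.Meshes)
{
    MeshData data = (MeshData)mesh.Tag;
    // data.Vertices, data.Indices and data.FaceNormals are now available
    // in system memory, ready for the collision tests below.
}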

Now that we have all the information needed, we will focus on the Collision Detection implementation itself.

Implementing the Ray-Mesh Test using the MeshData

Many people think that the D3D Mesh.Intersect method does some kind of optimized "magic" to test for intersection, but in fact it just loops through all the triangles of the mesh, performing a triangle-ray intersection test and keeping track of the closest collision point (or all of them, depending on the overload you use). Of course, it applies some well-known optimizations, like quickly discarding polygons, back faces, and so on. That is exactly what we have to do now with the information generated by the content processor.

The following method performs a Ray-Mesh test, taking as a parameter a MeshData object generated by the previous content processor. Note that a lot of optimization could be done here, such as quickly discarding triangles; just Google a bit for it.

public static bool RayMesh(Vector3 orig, Vector3 dir, MeshData pMesh,
    ref Vector3 pContactPoint, ref float pDist, ref int pFaceIdx)
{
    Vector3 closestContactPoint = Vector3.Zero;
    int closestFaceIdx = -1;
    float minT = float.MaxValue;

    // Walk every triangle of the mesh, keeping track of the closest hit
    for (int i = 0, countFace = 0; i < pMesh.Indices.Length; i += 3, countFace++)
    {
        int ia = pMesh.Indices[i];
        int ib = pMesh.Indices[i + 1];
        int ic = pMesh.Indices[i + 2];
        Vector3 v0 = pMesh.Vertices[ia].Position;
        Vector3 v1 = pMesh.Vertices[ib].Position;
        Vector3 v2 = pMesh.Vertices[ic].Position;

        double t = 0f;
        double u = 0f;
        double v = 0f;
        if (RayTriangle(orig, dir, v0, v1, v2, ref t, ref u, ref v))
        {
            // Hit found: keep it only if it is closer than the best one so far
            if (t < minT)
            {
                minT = (float)t;
                closestFaceIdx = countFace;
                closestContactPoint = orig + (dir * (float)t);
            }
        }
    }

    pContactPoint = closestContactPoint;
    pFaceIdx = closestFaceIdx;
    pDist = minT;
    return (minT < float.MaxValue);
}

The only part left is the Ray-Triangle intersection test itself, and there is plenty of information about it around the net, so I'll leave the details to you.
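For completeness, here is a minimal sketch of a RayTriangle implementation based on the well-known Möller-Trumbore algorithm, matching the signature used by RayMesh above. Take it as an illustration rather than the only possible approach.

// Ray-Triangle intersection test (Möller-Trumbore). t is the distance along
// the ray to the hit point; u and v are the barycentric coordinates of the
// hit inside the triangle.
public static bool RayTriangle(Vector3 orig, Vector3 dir,
    Vector3 v0, Vector3 v1, Vector3 v2,
    ref double t, ref double u, ref double v)
{
    const float Epsilon = 1e-6f;

    Vector3 edge1 = v1 - v0;
    Vector3 edge2 = v2 - v0;

    // The determinant tells us whether the ray is parallel to the triangle
    Vector3 pvec = Vector3.Cross(dir, edge2);
    float det = Vector3.Dot(edge1, pvec);
    if (det > -Epsilon && det < Epsilon)
        return false;

    float invDet = 1.0f / det;

    // First barycentric coordinate
    Vector3 tvec = orig - v0;
    u = Vector3.Dot(tvec, pvec) * invDet;
    if (u < 0.0 || u > 1.0)
        return false;

    // Second barycentric coordinate
    Vector3 qvec = Vector3.Cross(tvec, edge1);
    v = Vector3.Dot(dir, qvec) * invDet;
    if (v < 0.0 || u + v > 1.0)
        return false;

    // Distance along the ray; only hits in front of the origin count
    t = Vector3.Dot(edge2, qvec) * invDet;
    return t > Epsilon;
}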

Hope this is enough to get you started, and that you liked the post.

Enjoy!

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
