View the source online at http://celerity.codeplex.com/.
Introduction
This article will walk you through the process of designing and creating a game like Celerity: Sensory Overload, using XNA 4 in Visual Studio 2012 for a Windows 8 Ultrabook™ and publishing it in the Intel AppUp store.
This article is an entry in the runners-up Ultrabook™ Article Competition. As per the rules it borrows heavily from the original game article, but it focuses on teaching you how we made the game and dives deeper into the code.
Background
Celerity was an entry in the App Innovation Contest. It is a game inspired by classic tunnel racers and Johnny Lee's IR head-tracking experiments, both of which come up later in this article.
If you're unfamiliar with Celerity, check out this brief video to get the basic idea:
Contents
This article is divided into the overall tasks which went into making the game, followed by a reflection on the competition experience and other notes.
To enable some of these tasks to be done in parallel and hit the deadline I brought a few friends in to help: @Dave_Panic (3D trickery guru), @BreadPuncher (game theorist & graphic designer) and @PatrickYtting (maestro).
Reading the competition brief carefully, there was a clear emphasis on both showing off hardware sensor capabilities and coming up with something innovative. This led to two key themes for the game: sensor controls, and an innovative re-take on IR head tracking which can be achieved with only a basic webcam through face detection.
The inclinometer naturally lends itself to tilt-based steering, so the first idea was free-flight steering. Having tried a few tunnel games, some of which do use this, I found they tended to simply be more playable when the controls were based on a simpler rotation of the tunnel. The control was therefore constrained to tilting left and right, with the forward direction automatically remaining aligned with the tunnel.
At this stage the brief was "build some sort of tunnel game which works on the Ultrabook™, uses sensors for steering and achieves the head tracking effect with only a webcam input". From there we specified a minimum viable product.
I cannot stress enough how much defining the minimum viable product helped in this competition. Time was so tight, and the potential for endlessly adding small features which might blow the deadline was significant. As it turned out, the game shipped with a single feature beyond the bare minimum: the inclusion of sensor-fired smart bombs.
The minimum viable product was defined in abstract terms as being:
- Something which constitutes a game (to fit the competition category)
- Something which makes natural and logical use of Ultrabook™ sensors
- Something which includes our innovative camera-based head tracking idea
And specifically having the following features:
- Product shall feature moving through a textured 3D tunnel
- Product shall feature sensor-based steering via tunnel rotation
- Product shall feature head-tracking-informed view matrix adjustment
- Product shall feature a 3D avatar (ship) and 3D collidable hazards
- Product shall feature sounds and music
- Product shall feature touch-enabled, responsive UI
- Product shall feature a timer (so the player can attempt to improve on previous attempts)
- Product shall meet Intel AppUp criteria
As I was familiar with XNA and C#, I wanted to use them to build the game; however, there were a number of immediate obstacles to getting them working smoothly on Windows 8 which had to be overcome to make the project viable:
- Visual Studio 2012 does not support XNA projects
- XNA Game Studio 4 doesn't (seem to) install on Windows 8
- XNA Touch is deliberately disabled for Windows (eek!)
- Windows 8 libraries (for the sensors) can't be accessed in my then-current Windows 7 development environment
The Solution
To get XNA Game Studio 4 installed on Windows 8 I used Aaron Stebner's solution.
To get Visual Studio 2012 to recognise XNA projects I used Steve Beaugé's solution:
- Copy VS2010's XNA Game Studio 4 folder (in VS2010's extension folder) to VS2012's extension folder
- With a text editor, manually edit the new copy of the extension.vsixmanifest file to include the supported versions:
<SupportedProducts>
  <VisualStudio Version="11.0">
    <Edition>VSTS</Edition>
    <Edition>VSTD</Edition>
    <Edition>Pro</Edition>
    <Edition>VCSExpress</Edition>
    <Edition>VPDExpress</Edition>
  </VisualStudio>
</SupportedProducts>
If this isn't the first time you're running VS2012 you'll probably find that VS has cached the available extension list. You can tell it to refresh this list by entering the following in the Visual Studio Command Prompt:
"C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\devenv.exe" /setup
A nice little trick for opening the VS Command Prompt within a particular folder is to use a small shell script along these lines (a sketch, assuming a default VS2012 install path):
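@echo off
rem A sketch only: opens a VS2012 developer prompt in the folder passed as the first argument.
rem Adjust the path if Visual Studio is installed elsewhere.
cd /d %1
call "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\Tools\VsDevCmd.bat"
cmd /k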
If the extension trick fails, create a new XNA project in VS2010 and open it in VS2012. Now, not only do XNA projects open and play nicely, but the standard Content Pipeline works perfectly.
If you've ever developed your own game then you'll appreciate how dead and flat the experience can be until music and sound effects are added. I was very fortunate in having a first-rate professional composer on board, but most people will be looking at sourcing stock sound and music. I would recommend sticking with stock sound effects and advise against recording your own unless you either have some experience or find there is no other way to get a sound that works. Poorly rendered sound can really destroy the immersion in a game.
A simple web search for "royalty free music" or "royalty free sound effects" should return plenty of options. It might cost a small amount, but the quality will give your game a boost.
Remember to always double-check the license. Libraries of "royalty free" files often contain files which actually have restrictions on them, and the last thing you want is a company taking you to court.
Another approach might be to join an online game-making community and network with music and audio experts who may be willing to help for free or for a cut of profits.
When working with a technology like XNA, almost all aspects of UI layout are co-ordinate based. With no friendly vector panels to work with, it is especially challenging to create a responsive UI, that is, a UI which works well at different target screen resolutions. Part of the Intel AppUp testing process is to ensure that there is no clipped text at several different resolutions, so we had to bear this in mind.
The approach for Celerity was to divide the UI into 3 distinct logical panels:
- "TL" The top left panel which holds the icon and Back icon button
- "TR" The top right panel which holds various icon buttons such as Mute
- "UI" The main centre panel which holds everything else
A very simple approach to making the layout responsive was taken: anchor TL to the top left corner, TR to the top right corner and UI to the centre of the screen. Simple as the approach is, the UI always looks like it fits the screen well.
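As a rough sketch of what that anchoring amounts to in code (the helper names and panel sizes here are illustrative, not the shipped implementation):
// Illustrative anchoring helpers. Viewport and Vector2 come from
// Microsoft.Xna.Framework(.Graphics); panel sizes are supplied by the caller.
Vector2 AnchorTopLeft()
{
    return Vector2.Zero; // TL panel: pinned to the top left corner
}
Vector2 AnchorTopRight(Viewport vp, Vector2 panelSize)
{
    return new Vector2(vp.Width - panelSize.X, 0f); // TR panel: pinned to the top right corner
}
Vector2 AnchorCentre(Viewport vp, Vector2 panelSize)
{
    // UI panel: centred whatever the resolution
    return new Vector2((vp.Width - panelSize.X) / 2f, (vp.Height - panelSize.Y) / 2f);
}
Recomputing these origins whenever the resolution changes is all it takes for the panels to fit any screen.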
Even though the overall layout is slightly fluid, a traditional grid was used to lay out elements within the panels. I chose a 25x25 pixel grid, but use whatever works for you. Using a grid means that UI elements within a panel will appear orderly and aligned.
Once you have the basic layout described you'll need to work out the co-ordinates of the origin and size of each UI element. I chose to do this on paper, on a printed-out grid. It was an invaluable reference when I was actually building the UI in code, as things get very messy very quickly. Here's the sheet as I had it:
Whilst settling on this particular grid system, I created some wireframes for the UI. You'll notice that the final UI varies from these slightly, being condensed from several screens to one. This was partly to keep the app as simple as possible, and partly the result of natural evolution during software development. It's normal to discover a better way of doing things later on; the initial design is only a guide, not a contract.
I brought @BreadPuncher (a.k.a. Lorc) on board to handle vector icon design. His brief was very much to fit the Microsoft Design Language style so that everything would look consistent. For those of you not fortunate enough to have graphic skills or a friend who can provide them, your most likely options are either to network and find an artist/designer or to use stock graphics. Again, just search the internet for royalty free icons or graphics to suit your needs. There may be a small one-off fee, but it will often give your app the visual edge it needs. If your budget permits you can always commission a professional, of course.
I did encounter a nasty issue when importing the graphic .png files. The files rendered from Inkscape were appearing with alpha artefacts when rendered by XNA.
I fiddled for a long while with various combinations of XNA rendering modes, and whilst some of them produced the desired effect with his images, they would interfere with rendering other assets such as fonts.
Thankfully I stumbled on a tip-off to use the program PixelFormer. Whilst it's intended for editing, we simply used it to open the images from Inkscape and re-export them with pre-multiplied alpha enabled. Various sources on the internet suggested that Inkscape already exported with pre-multiplied alpha, but this didn't seem to be the case in practice. Running the images through PixelFormer did the trick.
Whilst this application is relatively small, there is still a benefit to dividing the codebase into logical elements or modules. I'll make no claims that I have achieved a perfect example of separation in this rushed project but the principle is sound.
95%+ of the code fell neatly into the following categories:
- Content and Content Libraries
- Input
- Game Logic
- Computer Vision
- 3D World & Contents
- UI Elements
- Audio
- Utility Classes
I created a folder and namespace for each. The Game class does little on its own other than to instantiate and co-ordinate activity between the modules based on the above categories:
- InputModule
- GameLogic
- CVModule
- WorldModule
- UILayer
- AudioModule
This gives us a very tidy Game class. The constructor simply contains:
public CelerityGame()
{
Content.RootDirectory = CeleritySettings.ContentRootDirectory;
graphics = GraphicsDeviceUtil.Create(this);
audio = new AudioModule();
cv = new CVModule();
input = new InputModule(this.Window.Handle);
logic = new GameLogic(audio, input);
ui = new UILayer(logic);
world = new WorldModule(graphics, audio, input, logic);
ui.OnClose += (s, e) => { this.Exit(); };
}
Then in each of the event methods the Game class calls the same method on any child modules which need processing. For example, in the LoadContent() method, the UILayer and WorldModule are told to load their content:
protected override void LoadContent()
{
spriteBatch = new SpriteBatch(GraphicsDevice);
ui.Load(GraphicsDevice, spriteBatch, Content);
world.Load(Content);
}
The one part of the code which did get somewhat untidy was the UI layer. The code for this isn't the best, but it works and for small changes isn't too bad to deal with. A major UI change could well bring tears to my eyes, however! Take this section as an idea which didn't quite pan out as well as I envisaged.
The classes I used for the UI layer were as follows:
- UILayer - Top-level controller
- UIGeometry - Coordinate, offset and size reference
- UIControlHierarchy - A list (i.e. not a true hierarchy) of all the composite controls in the UI
- UIEntity - Represents one composite control in the UI (e.g. a button); usually a specialised subclass is used
- UIEntityItem - Represents a sub-component of a composite control in the UI (e.g. the text on a button)
- UIDrawCondition - Represents a logical scenario in which the control is either drawn or not drawn
I'm certain there are better approaches for coding the UI so I don't want to dwell on this section. Feel free to explore the source code if you're interested, but I'd suggest checking out existing UI libraries before rolling your own complex UI engine.
One approach which might be of value is my use of a static ImageLibrary class and a .resx resource file as a means of accessing images via type-safe names rather than hard-coded strings.
First, create a resource file and define all your image paths:
Now create a static class called ImageLibrary, or something similar, and give it one private static variable as follows:
static GraphicsDevice graphics;
Add the following simple helper method:
static Texture2D Get(string path)
{
return Texture2D.FromStream(graphics, TitleContainer.OpenStream(path));
}
Now, for each image you want to use, add a public static Texture2D variable, e.g.:
public static Texture2D InputKeys;
public static Texture2D InputGamepad;
public static Texture2D InputTilt;
Then, to tie it all together and make the class useful, add a Load method like this:
public static void Load(GraphicsDevice graphicsDevice)
{
graphics = graphicsDevice;
InputKeys = Get(ResxImg.InputKeys);
InputGamepad = Get(ResxImg.InputGamepad);
InputTilt = Get(ResxImg.InputTilt);
}
You can now access the Texture2D data in code easily via the public static members of ImageLibrary. For example, to get the InputKeys image, you'd use:
var InputKeys = ImageLibrary.InputKeys;
One last trick is that you can create flat-colour general purpose textures on the fly by using this alternative code in the load method for a given texture:
TextureGrey = new Texture2D(graphics, 1, 1);
TextureGrey.SetData(new[] { Palette.OverlayGrey });
Where you have defined a palette somewhere along the lines of:
using Microsoft.Xna.Framework;
namespace Celerity.ColourPalette
{
public static class Palette
{
public static Color Accent = new Color(0, 204, 255, 255);
public static Color SecondaryAccent = new Color(255, 51, 0, 255);
public static Color AccentPressed = new Color(0, 204, 255, 128);
public static Color MidGrey = new Color(128, 128, 128, 255);
public static Color OverlayGrey = new Color(76, 76, 76, 165);
public static Color SemiTransparentWhite = new Color(255, 255, 255, 128);
public static Color White = Color.White;
public static Color Black = Color.Black;
}
}
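The 1x1 texture can then be stretched to any size at draw time; for example, to dim the whole screen behind a menu (the rectangle here is illustrative):
// Stretches the single grey pixel over the whole viewport
spriteBatch.Draw(ImageLibrary.TextureGrey,
    new Rectangle(0, 0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height),
    Color.White);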
The problem of creating a tunnel was broken down into two classes: one to deal with the section of the tunnel we can actually see, TunnelSection, and one to deal with the tunnel as a whole, Tunnel.
TunnelSection will have the following responsibilities:
- Construct the vertices of this piece of the tunnel
- Construct the texture coordinates for each vertex
- Draw the tunnel
Tunnel will:
- Maintain the position of the TunnelSections (it is actually the tunnel that moves, rather than the player, in order to prevent coordinates growing too large)
- Define the actual shape or curvature of the tunnel as a whole
A section of tunnel made from triangles has the following properties:
public float Radius { get; set; }
public int NumSegments { get; set; }
public int TunnelLengthInCells { get; set; }
private float cellSize;
The vertices are created by this method:
void ConstructVertices()
{
int numVertex = NumSegments * TunnelLengthInCells;
vertices = new VertexPositionColorTexture[numVertex];
float sectionAngle = 2 * (float)Math.PI / NumSegments;
int vertexCounter = 0;
for (int i = 0; i < TunnelLengthInCells; i++)
{
for (int j = 0; j < NumSegments; j++)
{
Matrix rotationMatrix = Matrix.CreateRotationZ(j * sectionAngle);
vertices[vertexCounter].Position = Vector3.Transform(
new Vector3(0.0f, this.Radius, 0.0f), rotationMatrix);
vertices[vertexCounter].Position.Z = -cellSize * i;
vertexCounter++;
}
}
}
First a new vertex is created with X and Z coordinates of zero and a Y coordinate equal to the desired radius. That point is then rotated around the origin by the appropriate angle before moving on to the next point. Once this has been done for a full circle the process is repeated, with each new ring pushed further away by the distance defined by cellSize.
Since the square sections that make up the tunnel ought to remain looking square, it is necessary to work out the distance between two adjacent points in the ring of vertices. This is done by creating a point at (0, radius, 0), rotating it around the Z axis by the appropriate angle (2 * (float)Math.PI / NumSegments), then measuring the distance between the two points, as follows:
float CalculateSectionSize()
{
Vector3 point1 = new Vector3(0.0f, this.Radius, 0.0f);
Vector3 point2 = Vector3.Transform(point1, Matrix.CreateRotationZ(2 * (float)Math.PI / NumSegments));
return Vector3.Distance(point1, point2);
}
With the vertices in place the next step is to fill the index buffer. The index buffer tells the GPU which vertices to use in which triangle. It’s a list that points to the index of each vertex in the vertex array.
void ConstructIndices()
{
int indexCount = TunnelLengthInCells * NumSegments * 6;
indices = new short[indexCount];
int indexCounter = 0;
for (int i = 0; i < vertices.GetUpperBound(0) - NumSegments; i += NumSegments)
{
for (int j = 0; j < NumSegments; j++)
{
indices[indexCounter] = (short)(i + j);
indices[indexCounter + 1] = (short)(i + j + NumSegments);
indices[indexCounter + 2] = (short)(i + j + 1);
if (j == NumSegments - 1)
{
indices[indexCounter + 2] = (short)i;
}
if (j < NumSegments - 1)
{
indices[indexCounter + 3] = (short)(i + j + 1);
indices[indexCounter + 4] = (short)(i + j + NumSegments);
indices[indexCounter + 5] = (short)(i + j + NumSegments + 1);
}
else
{
indices[indexCounter + 3] = (short)(i + j + NumSegments);
indices[indexCounter + 4] = (short)(i);
indices[indexCounter + 5] = (short)(i + j + 1);
}
indexCounter += 6;
}
}
}
Here the code loops through the vertices in the same order they were created, wiring up the triangles as it goes. The trick is to have them all winding clockwise so that backface culling will not make the triangles invisible; this requires a little mental visualisation to work out which vertices ought to be wired into each triangle. Also, notice there is a special case at the end of the ring of vertices. If this special case were not present the code would create triangles that corkscrewed along the tunnel, leaving a single-triangle gap at the start and end of the tunnel section.
Here is a TunnelSection:
And here are several TunnelSections sewn together to form a Tunnel, with a nice curve for good measure:
Multiple Colours with a single Texture
The tunnel and obstacles are rendered using the same basic texture; only the colour is altered. This supports a white tunnel with a variety of coloured obstacles, with the different colours either defined in code or calculated at run-time, without having to generate a huge number of near-identical texture files. It requires only some simple mathematics, which you have most likely encountered before.
Firstly a base greyscale texture is required:
In a shader program colours are represented using a value of 0.0 to 1.0 for each component. So, for example, white would be { Red = 1.0, Green = 1.0, Blue = 1.0 } and black would be { Red = 0.0, Green = 0.0, Blue = 0.0 }.
Given that anything multiplied by 1.0 remains unchanged and anything multiplied by 0.0 becomes 0.0, the texture may be tinted to any base colour by simply multiplying the components. Taking an all-white pixel from the base texture { R = 1.0, G = 1.0, B = 1.0 } and multiplying each component by 100% red { R = 1.0, G = 0.0, B = 0.0 } gives 100% red. Conversely, an all-black pixel from the texture { R = 0.0, G = 0.0, B = 0.0 } always comes out black.
Here you can see the results of multiplying each pixel in the texture by a shade of blue { R = 0.0, G = 0.5, B = 1.0 }:
This is very simple to implement in a shader program. First add a variable to the shader's parameters to hold the colour:
float4 Color;
The texture and texture sampler are defined thus:
texture TunnelTexture;
sampler2D textureSampler = sampler_state
{
    Texture = (TunnelTexture);
    MipFilter = LINEAR;
    MagFilter = LINEAR;
    MinFilter = LINEAR;
    AddressU = Wrap;
    AddressV = Clamp;
};
If you are unfamiliar with texture samplers, check out one of the many tutorials available online.
The next step is simply a matter of multiplying the value retrieved by the texture sampler by the value that was passed into the Color parameter, like so:
float4 output = Color * tex2D(textureSampler, input.TexUV);
output.a = 1.0f;
Depth Cueing
Depth Cueing or fading to black/fog colour is an important part of giving scenes depth, subtly helping the player judge distance and hiding objects coming into view. Fortunately it’s very easy to add to the shader code.
Celerity took the dead simple approach of fading to black based on the distance to the far clipping plane, which should be defined when the projection matrix is created. For example:
projection = Matrix.CreatePerspectiveFieldOfView((float)Math.PI / 4.0f, graphics.GraphicsDevice.Viewport.AspectRatio, 0.01f, farClip);
The far clip value is passed into the shader program, so a variable is added to the shader's parameters:
float FarClip;
The vertex shader output struct is modified to carry depth information like so:
struct VertexShaderOutput
{
float4 Position : POSITION0;
float2 TexUV : TEXCOORD0;
float Depth : TEXCOORD1;
};
The vertex shader output function is then modified to write the depth information to the struct:
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
VertexShaderOutput output;
float4 worldPosition = mul(input.Position, World);
float4 viewPosition = mul(worldPosition, View);
output.Position = mul(viewPosition, Projection);
output.TexUV = input.TexUV;
output.Depth = output.Position.z;
return output;
}
Now that the depth information can be accessed by the pixel shader function, this code will modify the output colour:
float dist = saturate(input.Depth / FarClip);
Dividing input.Depth by FarClip gives a value between 0.0 and 1.0, with 1.0 being the result if the current pixel is at or beyond the FarClip distance. The saturate intrinsic function makes sure the value does not exceed 1.0.
As the desired effect is fading to black at the maximum distance, each colour component is multiplied by 1.0 minus the result of the division:
dist = 1.0f - dist;
output.r *= dist;
output.g *= dist;
output.b *= dist;
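Putting the colour multiply and the depth cueing together, the complete pixel shader function looks something like this (a sketch assembled from the snippets above rather than the exact shipped code):
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Tint the greyscale texture with the externally supplied colour
    float4 output = Color * tex2D(textureSampler, input.TexUV);
    output.a = 1.0f;
    // Fade to black as the pixel approaches the far clipping plane
    float dist = saturate(input.Depth / FarClip);
    dist = 1.0f - dist;
    output.r *= dist;
    output.g *= dist;
    output.b *= dist;
    return output;
}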
Without Depth Cueing (no fade):
With Depth Cueing (fading to black at maximum distance):
Collision detection can be hard
Collision detection can be very challenging, especially within a 3D environment. The mathematics can be mind-bending in some cases. There is almost always a trade-off between accuracy and efficiency of the calculation, and the balance must always be based on the context.
Fortunately, in Celerity matters can be greatly simplified by the realisation that, although the game appears to be in 3 dimensions, it is really only operating in 2. The player's ship only ever travels forward down the tunnel or around the outside of the tunnel; it never moves along the Y axis, only the Z and X axes.
How the tunnel is made
The ship travels down an infinitely long tunnel. Represented literally, this would quickly run into floating point problems: the magnitude of some variables would increase rapidly, leading to large differences in precision and, in turn, large errors in calculations. In short, things would break.
Instead the ship and camera remain fixed, oriented around the origin, and the tunnel itself is moved towards the camera. To simplify the tunnel motion even further, the tunnel sections are only ever advanced in the Z direction. This approach requires translating the tunnel through X and Y in order to centre it about the origin, and rotating the camera so that it appears as though the player is looking down the tunnel.
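A minimal sketch of the idea (the member names are illustrative, not the shipped Tunnel code): each update the sections slide towards the camera, and any section that passes behind it is recycled to the far end:
// Illustrative only: slide every section towards the camera and recycle
// sections that have passed behind it, keeping all coordinates small.
void UpdateSections(float speed, float elapsedSeconds)
{
    float totalLength = sectionLength * sections.Count;
    foreach (TunnelSection section in sections)
    {
        section.ZOffset += speed * elapsedSeconds; // the tunnel moves, not the ship
        if (section.ZOffset > sectionLength)       // now behind the camera...
        {
            section.ZOffset -= totalLength;        // ...so recycle it to the far end
        }
    }
}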
Tunnel and objects are first created straight with no curves
Vertices are then perturbed to create the curves
The important thing about this is that the fronts and backs of the obstacles remain parallel to the XY plane. This means that a complex 3D non-axis-aligned collision detection can be replaced by an easy 2D axis-aligned collision detection.
Unwrapping the tunnel
As the player only moves along 2 dimensions, in order to determine if a collision has taken place only 3 pieces of information are required:
- The angle at which the object is located
- The angular width of the object
- The Z coordinate of the object
To visualise how the tunnel unwraps, imagine that the Celerity tunnel is a grid drawn on the inside of a toilet roll. Take a pair of scissors and cut along the roll so that it unfolds flat, showing you a flat grid:
The tunnel surface, unwrapped to form the new 2D coordinate system
The width of each obstacle can be calculated simply. As the obstacles are cubes the same size as each grid square in the tunnel wall, their width is equal to 2π/10. It's 2π as we're working in radians, with 10 being the number of sub-divisions in the tunnel. The Z coordinate is the same Z coordinate as used in the 3D representation of the tunnel.
The CollisionRect class was created to hold collision data for each obstacle:
class CollisionRect
{
public Vector2[] points;
public bool zUnset = true;
public CollisionRect(float tunnelCellSize, float tunnelNumSegments, float angle, float z)
{
float rads = (float)(2 * Math.PI);
float widthOver2 = (float)Math.PI / tunnelNumSegments;
float heightOver2 = tunnelCellSize / 2;
points = new Vector2[4]
{
new Vector2(rads - angle - widthOver2, z - heightOver2),
new Vector2(rads - angle + widthOver2, z - heightOver2),
new Vector2(rads - angle - widthOver2, z + heightOver2),
new Vector2(rads - angle + widthOver2, z + heightOver2)
};
}
public void SetZ(float z)
{
for(int i = 0; i < 4; i++)
{
points[i].Y += z;
}
this.zUnset = false;
}
}
The class is fairly simple. There is an array of type Vector2 to hold the coordinates of each corner of the box, and a constructor which creates the box.
An important consideration when updating the positions of the collision boxes is to copy the Z coordinate from the 3D world position rather than calculating the new position. If a new position were calculated, minute differences in the numbers would accumulate and the positions would quickly drift out of sync.
A similar class is used to maintain the player's position in the world. As there is no easy way to determine the angular size of the ship model, the data being implicit and tucked away inside the .fbx model, the width and height were determined through trial and error.
AABB or Axis Aligned Bounding Box collision in 2D
With all these elements in place collision detection becomes possible. In the 2D abstraction the process is simple, as it is only concerned with Axis Aligned Bounding Boxes, or AABBs.
2 intersecting AABBs
The algorithm for detecting intersection is as follows: for each point in the green box, if x > A.x and x < B.x and y > B.y and y < A.y then the point is inside the orange box. If any points are inside then the boxes intersect, or "collide".
Due to the earlier simplification of unwrapping the tunnel from a tube to a flat sheet, there is a special case to consider: the coordinates must wrap around. This is achieved by performing two detections on boxes that sit on the join/seam, at positions 0 or 9 in a zero-based tunnel of 10 segments: one detection in the box's normal position, and one transposed by ±2π depending on which side of the join the box is located. A sketch of the test follows.
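Here's a sketch of that test against the CollisionRect corners from earlier (corner 0 is the minimum, corner 3 the maximum; X is the unwrapped angle, Y is Z). The helper is mine, not the shipped code:
// Returns true if any corner of 'other' lies inside 'box' (the prose algorithm above).
static bool Intersects(CollisionRect box, CollisionRect other)
{
    Vector2 min = box.points[0]; // left/near corner
    Vector2 max = box.points[3]; // right/far corner
    foreach (Vector2 p in other.points)
    {
        if (p.X > min.X && p.X < max.X && p.Y > min.Y && p.Y < max.Y)
            return true;
    }
    return false;
}
For the seam boxes the same test is simply run a second time with 2π added to (or subtracted from) the X components of one of the boxes.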
I used XACT, Microsoft's cross-platform audio creation tool and library, to power Celerity's layered music and sound effects. It is both reasonably straightforward and fairly powerful.
I used the "Audio Creation Tool" to import a number of music layers and sound effects. In this tool you can easily group sounds into "Categories", which can be treated like audio channels in your game code. Categories can have various parameters sent to them dynamically, such as source location (wow!) and volume. The wave files themselves may be declared as looping so that in-game this is automatic.
Seeing it for the first time halfway through the project, I was nervous about taking on an unfamiliar technology, but I highly recommend it. I was up and running within an hour or so thanks to a very simple XACT tutorial. I barely scratched the surface of XACT in this project, but even with the basics we have a dynamic music score.
Here's a walkthrough of the AudioModule, the class used to handle audio logic in the game. Note the difference between Play (for one-shot SFX) and PlayCue (for looping song WAVs).
To code with XACT you'll first need the following using statement:
using Microsoft.Xna.Framework.Audio;
Now some simple instance members:
bool hasMusicStarted = false;
AudioEngine engine;
WaveBank waveBank;
SoundBank soundBank;
AudioCategory musicChannel1;
AudioCategory musicChannel2;
AudioCategory musicChannel3;
AudioCategory musicChannel4;
AudioCategory ambienceChannel;
AudioCategory sfxChannel;
The bool is just a flag we can check to see if we've already started the music playing, so we only do it once. The XACT objects contain most of the functionality, and the various AudioCategory objects represent the different channels.
The Initialize method is self-explanatory, initialising the instance variables:
public void Initialize()
{
engine = new AudioEngine(AudioLibrary.PathEngine);
waveBank = new WaveBank(engine, AudioLibrary.PathWaveBank);
soundBank = new SoundBank(engine, AudioLibrary.PathSoundBank);
musicChannel1 = engine.GetCategory(AudioLibrary.ChannelMusic1);
musicChannel2 = engine.GetCategory(AudioLibrary.ChannelMusic2);
musicChannel3 = engine.GetCategory(AudioLibrary.ChannelMusic3);
musicChannel4 = engine.GetCategory(AudioLibrary.ChannelMusic4);
ambienceChannel = engine.GetCategory(AudioLibrary.ChannelAmbience);
sfxChannel = engine.GetCategory(AudioLibrary.ChannelSFX);
}
The Update method is a little more interesting:
public void Update(float chaosFactor)
{
float muteMultiplier = GlobalGameStates.MuteState == MuteState.Muted ? 0f : 1f;
if (!hasMusicStarted)
{
hasMusicStarted = true;
if (CeleritySettings.PlayMusic)
{
soundBank.PlayCue(AudioLibrary.Music_Layer1);
soundBank.PlayCue(AudioLibrary.Music_Layer2);
soundBank.PlayCue(AudioLibrary.Music_Layer3);
soundBank.PlayCue(AudioLibrary.Music_ShortIntro);
soundBank.PlayCue(AudioLibrary.Ambience);
}
}
float normalVolume = 1f * muteMultiplier;
ambienceChannel.SetVolume(normalVolume);
musicChannel1.SetVolume(normalVolume);
musicChannel2.SetVolume(DynamicVolume(chaosFactor, 0.3f) * muteMultiplier);
musicChannel3.SetVolume(DynamicVolume(chaosFactor, 0.7f) * muteMultiplier);
musicChannel4.SetVolume(normalVolume);
sfxChannel.SetVolume(normalVolume);
engine.Update();
}
Here I multiply all volumes by a mute multiplier, allowing me to globally silence all channels at will. The next section is where the music is initially triggered with the PlayCue method. Then the volumes of the various channels are set, some dynamically according to the level of in-game tension, known as the "Chaos Factor". Finally we call Update() on the AudioEngine object.
Here's the DynamicVolume helper method:
float DynamicVolume(float chaosFactor, float threshold)
{
if (threshold >= 1f)
{
throw new ArgumentException("Audio Module - Dynamic Volume: Threshold must be less than 1.");
}
float off = 0f;
return chaosFactor > threshold ? (chaosFactor - threshold) / (1f - threshold) : off;
}
Next follows a number of publicly exposed PlayCue-based methods for different in-game sound effects, for example:
public void PlayCrash()
{
soundBank.PlayCue(AudioLibrary.Ship_HitsBlock);
}
The GameLogic class is very simple. It provides a means of triggering and responding to game events, tracking timings and, most importantly, managing the Chaos Factor: an abstract numeric value which represents the current global degree of tension. The Chaos Factor informs things such as the density at which blocks are generated, the speed of the player's ship and the complexity of the music. As it rises over time the game becomes increasingly harder, and the value is reset when the player crashes.
The Chaos Factor is calculated as a function of elapsed time, rising from 0 and asymptotically approaching 1 (after 20 seconds it reaches about 0.63):
public float ComputeChaos(double time)
{
return (float)(1.0 - (1.0 / Math.Exp(time / 20.0)));
}
Sensors
It was not as straightforward to access the sensors as I'd hoped. Another surprise in developing for Windows 8 was that the Sensor namespace was exclusive to WinRT, and therefore unavailable to desktop applications. Thankfully Intel provided the answer.
The trick was to open up the .csproj file with a text editor and simply add a target platform version number. This allows you to add a reference to "Windows" from Visual Studio, which would otherwise be unavailable. Inside this library is the Windows.Devices.Sensors namespace.
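For reference, the edit is along these lines; the exact version string may differ depending on your SDK:
<PropertyGroup>
  <TargetPlatformVersion>8.0</TargetPlatformVersion>
</PropertyGroup>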
I had problems combining this directly with my XNA project; however, simply putting the sensors in a separate project, which in turn referenced System.Runtime.WindowsRuntime.dll, allowed everything to build and inter-operate smoothly. I encapsulated any direct references to WinRT objects within a proxy class, so my XNA project was only interacting with simple data types from the other project.
The sensors I currently use from this namespace are:
- Inclinometer for indicating how much the device is tilted left/right for steering
- Accelerometer for providing a "Shaken" event, which is used to trigger a smart bomb
The code for reading these values is thankfully very straightforward.
Inclinometer
The readings on offer are Pitch, Roll and Yaw; these might be thought of as rotations about the X, Y and Z axes respectively. For tilting the screen left and right I simply subscribe my proxy to the inclinometer's ReadingChanged event, expose a public property holding the last roll value read, and then read that property from XNA once per Update call.
Something like this:
public class SensorProxy
{
const uint inclineDesiredInterval = 16;
Inclinometer incline;
public double InclineY { get; set; }
public SensorProxy()
{
incline = Inclinometer.GetDefault();
if(incline != null)
{
uint minInclineInterval = incline.MinimumReportInterval;
incline.ReportInterval = minInclineInterval;
incline.ReadingChanged += (s, e) => { InclineY = e.Reading.RollDegrees; };
}
}
}
Accelerometer
Wiring up the accelerometer was even easier than the inclinometer, as there is a pre-built Shaken event. This means I, as the developer, don't have to worry about calibrating the sensitivity. I just listen for an event and respond to it.
I wanted to entirely encapsulate all the sensor namespace objects, due to the earlier issues, so I kept the direct event internal to the proxy class and, in the handler, raised a new plain EventHandler event. This prevents the caller from having to reference the AccelerometerShakenEventArgs object.
public event EventHandler OnShaken;
Accelerometer accel;
accel = Accelerometer.GetDefault();
if(accel != null)
{
    accel.Shaken += (s, e) => { if (OnShaken != null) OnShaken(this, new EventArgs()); };
}
Screen Orientation Issue
When I first tested the inclinometer steering I ran straight into a somewhat ironic issue. I was tilting the screen left and the reading was coming through for a second or two, but then my screen started flipping around. The simple orientation sensor was detecting the change in angle and assuming I wanted to switch to portrait mode. With XNA running in full screen mode this went what I can only describe as "a bit mental". Auto-scaling horror!
This was clearly going to interfere with one of the core concepts of the application so I had to nip it in the bud. Thankfully a very simple method in my SensorProxy, called from XNA during the Initialize phase, does the trick.
public void LockOrientation()
{
    DisplayProperties.AutoRotationPreferences = DisplayOrientations.Landscape;
}
XNA Touch on Windows 8
As Shawn Hargreaves writes, touch in XNA was unfortunately deliberately disabled for Windows and limited to Windows Phone 7. That means that whilst the namespace Microsoft.Xna.Framework.Input.Touch does include the easy-to-use TouchPanel class, it simply doesn't work. It does nothing.
Mercifully, Shawn does indicate two approaches for making it work. To make touch work in Windows 8 we in fact turn to Windows 7's touch implementation.
I took the first of Shawn's suggestions, namely using the .NET Interop Library. Tucked away in the .zip file is the crucial assembly Windows7.Multitouch.dll. Once we've added a reference to this library in our project, using it in XNA requires a slight side-step, in that we have to provide our touch-handling class with an IntPtr to the application's window. That sounds tricky, but in reality we just pass it the following from our main Game instance:
input = new InputModule(this.Window.Handle);
On the receiving end, here is the constructor of my InputModule, the class I use to process the various forms of input:
using Windows7.Multitouch;
using Windows7.Multitouch.Win32Helper;
public InputModule(IntPtr windowHandle)
{
touchHandler = Factory.CreateHandler<TouchHandler>(windowHandle);
touchHandler.TouchUp += (s, e) =>
{
lastTouchPoint = e.Location;
hasUnprocessedTouch = true;
};
}
Whilst the sight of the IntPtr type may be scary for some, we let the built-in classes deal with the details. All we need to process is the simple and friendly TouchUp event (or others, as your needs may be). You may be able to spot from the above that this isn't technically a multi-touch implementation. Our simplistic UI didn't really warrant full multi-touch, as currently there are only ever single button taps to handle, but I'll likely rework this when I come to do thumb-controlled weapons in a future update.
Head Tracking in Celerity
In Johnny Lee's video, above, he is able to detect the position of the user's head in 3D space through infra red (IR) LEDs and an IR camera.
The effect is fantastic but requires a special IR sensor and also that the user wears an IR-emitting device. I needed to create this effect using only sensors on an Intel® Ultrabook™, and no other equipment. Thankfully the Ultrabook™ has an integrated webcam.
The application polls the webcam for frames and passes them to a Computer Vision (CV) image-processing library, EMGUCV. This returns a rectangle representing a detected face within the bounding box of the camera's view. As the user moves their face, the rectangle will move around relative to the bounds, giving me a relative X/Y offset of the user's face. The X & Y of the 3D world's view can be skewed in relation to the user's own physical position. Note that we need to flip the image horizontally as the webcam is looking in the opposite direction to us.
This effect can be taken even further, as the rectangle representing the user's face inherently has a size. This gets larger as they move towards the webcam, so we can also determine the relative Z position, too.
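To give a feel for how the tracked values might drive the camera, here's an illustrative (not the shipped) view matrix tweak, using the HeadPos vector computed in the CV code shown later; the scale factors are made-up tuning values:
// Illustrative only: nudge the eye position by the tracked head offsets.
// HeadPos components are in the -1..1 range produced by the CV module.
Vector3 eye = new Vector3(cv.HeadPos.X * 0.5f, cv.HeadPos.Y * 0.5f, 2f + cv.HeadPos.Z * 0.5f);
view = Matrix.CreateLookAt(eye, Vector3.Zero, Vector3.Up);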
It's helpful if the user starts with their head roughly central and not too close, which we encourage with the intro menu design. There the user can see if they're vaguely "calibrated" before starting.
Using EMGUCV
Whilst basic use of the library is fairly simple, this aspect of the program was not without its challenges. Performance of EMGUCV within XNA was initially terrible due to the basic single-threaded approach taken by XNA.
Rather than get too complex, I chose to limit the polling rate and also place calls to my QueryCamera() method like this:
Parallel.Invoke(() => QueryCamera(elapsedMilliseconds));
This worked really well for my desktop development machine but not so well on the Ultrabook™. The reason would appear to be the lack of drivers for the prototype Ultrabook™, as using a 3rd party webcam with drivers rather than the Ultrabook™'s own integrated device worked fine.
Using a 3rd party webcam was impractical for the competition, and the head tracking was a major selling point of our entry, so I instead made a workaround. I offered 3 "modes" of operation, which were in effect 3 polling speeds for the camera: off, slow and fast. The Ultrabook™ could only handle frame requests coming in about 8 times per second, whereas the desktop's "fast" mode operated at a much smoother 30 requests per second.
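The rate limiting itself can be as simple as accumulating elapsed time and only issuing a frame request once the current mode's interval has passed (a sketch with illustrative names; Parallel.Invoke comes from System.Threading.Tasks):
// Sketch of the polling limiter: query the camera only when the current
// mode's interval has elapsed. The intervals shown are illustrative.
float msSinceLastQuery = 0f;
float queryIntervalMs = 125f; // ~8 requests/sec for "slow"; ~33ms gives ~30/sec for "fast"

public void Update(float elapsedMilliseconds)
{
    msSinceLastQuery += elapsedMilliseconds;
    if (msSinceLastQuery >= queryIntervalMs)
    {
        msSinceLastQuery = 0f;
        Parallel.Invoke(() => QueryCamera(elapsedMilliseconds));
    }
}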
Here is the main method of interest in the CVModule. Don't be too alarmed by the formulae towards the end; they're just turning absolute rectangle positions into relative positions, with an exponential moving average (the EMA variables) smoothing the detected face size between frames. All the tricky face detection itself is handled by the library in the DetectHaarCascade() call.
void DetectFaces()
{
if (minFaceSize == null || minFaceSize.IsEmpty)
{
minFaceSize = new DR.Size(grayframe.Width / 8, grayframe.Height / 8);
}
var faces = grayframe.DetectHaarCascade(
haar,
scaleFactor,
minNeighbours,
HAAR_DETECTION_TYPE.DO_ROUGH_SEARCH,
minFaceSize
)[0];
IsFaceDetected = faces.Any();
if (IsFaceDetected)
{
foreach (var face in faces)
{
if (isFirstFaceCapture)
{
isFirstFaceCapture = false;
currentEMA.Width = face.rect.Width;
currentEMA.Height = face.rect.Height;
previousEMA.Width = face.rect.Width;
previousEMA.Height = face.rect.Height;
}
lastX = face.rect.X;
lastY = face.rect.Y;
lastWidth = face.rect.Width;
lastHeight = face.rect.Height;
currentEMA.Width = (int)(alphaEMA * lastWidth + inverseAlphaEMA * previousEMA.Width);
currentEMA.Height = (int)(alphaEMA * lastHeight + inverseAlphaEMA * previousEMA.Height);
previousEMA.Width = currentEMA.Width;
previousEMA.Height = currentEMA.Height;
}
}
DR.PointF ellipseCenterPoint = new DR.PointF(lastX + lastWidth / 2.0f, lastY + lastHeight / 2.0f);
DR.SizeF ellipseSize = new DR.SizeF(currentEMA.Width, currentEMA.Height);
FaceEllipse = new Ellipse(ellipseCenterPoint, ellipseSize, 0);
FaceCentrePercentX = 1 - ellipseCenterPoint.X / (float)grayframe.Width;
    FaceCentrePercentY = ellipseCenterPoint.Y / (float)grayframe.Height;
FaceSizePercentWidth = ellipseSize.Width / (float)grayframe.Width;
FaceSizePercentHeight = ellipseSize.Height / (float)grayframe.Height;
HeadPos.X = -1f + (2f * FaceCentrePercentX);
HeadPos.Y = -1f + (2f * FaceCentrePercentY);
HeadPos.Z = -1f + (2f * FaceSizePercentHeight);
}
This step nearly prevented our submission. For those unfamiliar with creating installers, a fiddly and arcane world of scripting awaits. The slightest fault at this stage and your target store will reject your submission.
If you have the time and patience I'd recommend that anyone looking to publish an XNA game look into WiX, but I only had a few hours to produce the installer so opted for an expensive but simple solution: Advanced Installer Professional. The beauty of it for XNA developers is that it understands out of the box what a dependency on XNA GS 4 is, so supporting XNA is as easy as checking a box.
Here is a visual guide to some simple settings in Advanced Installer which worked for me:
Product Details
This form is very simple but don't forget to give your installer an icon on this screen:
Install Parameters
I believe Intel have changed their policy on requiring Silent Installs now, but use these settings to be on the safe side:
Digital Signature
One of the nice perks of having Advanced Installer is that you can simply hand it your certificate and tell it to sign the installer for you. I find this much simpler and quicker than using SignTool.exe via a command line.
Pre-Requisites
This is the screen which made Advanced Installer worth its money for me. Here you tell it that the app is dependent on the user already having XNA GS 4 installed, and as a result it will automatically install it for them if they don't have it.
Launch Conditions
The flip-side of the pre-requisites is preventing the app from installing if the user has an incompatible operating system.
Files & Folders
This is where you add the files and folders which you want installed on the target machine. This is usually just a dump of your Release folder. You can also add a shortcut to the application to put on the user's desktop.
Since full blown XNA is desktop-only and won't work on WinRT, the Windows Store is not an option. Intel's AppUp store provides a good desktop alternative. The AppUp SDK provides options for license keys, upgrade mechanisms and all the goodies you'd expect.
To limit the chances of Celerity being rejected just before the deadline I opted to keep things very simple and did not implement the SDK. Whilst the SDK is potentially very handy, nothing in Celerity relied on it.
Whilst I won't advise on working with the SDK, here are a few tips for keeping the submission simple and upping your chances of acceptance:
Test, Test, Test!
You need to test your installer on a completely fresh install of the target operating system(s). It's so easy to forget to tell the installer that the app requires the .NET Framework or something similar. These easy-to-make problems are also easy to catch, so don't be lazy and test the installer on a fresh install. If you have the software to do it, you can test in a VM. Test on a spare PC. Test on your friend's PC. Everywhere you can.
Also, for hardware-dependent code make sure you test on the hardware. It's obvious, but so easy to skip. "Of course my gamepad code works! Look at it, it's so simple!" We caught ourselves thinking exactly that; a quick test revealed that left and right were reversed on the D-pad. Oops. The code looks fine with or without that little *-1, so test it with the actual hardware.
The principle isn't just to make your program as good as possible. A small installer flaw which your users may tolerate might mean rejection from the store.
Keywords
In the Application Description and Keywords sections in the AppUp Submission pages you have a chance to expose your app in the store's search results. Describe your app accurately and succinctly, but also bear in mind how your users are going to find your app in the store.
Spelling and Grammar
Your image is at stake every time you type something your users will see. Double-check it, and have someone else check it. This goes for both the store description of your app and any text in the app itself. Not only will mistakes irritate your users and cause them to think less of you, but many mistakes may lead to rejection from the store.
Compelling Screenshots
Your app may have limited visuals, but make sure the screenshots give your users an indication of what the app does and how. They will most likely look at the screenshots and decide based on those whether they want to give up their time and/or money to download your app. If the screenshots have virtually nothing on them, they have no reason to take the gamble.
Application Icon
AppUp applies a semi-transparent sheen to the image you provide, to give all the store apps a consistent look. I didn't realise until it was too late, but if you have a mostly white background (as we do) the effect is slightly lost, and the look of your icon will suffer alongside vivid, colourful icons. Don't get me wrong, I love our current icon, but only with hindsight do I recognise the opportunity to play into the visual filtering which will take place later down the line. Worth considering.
Pricing
Be honest about your app. Should you really be charging in the top 5% of apps in the store? Does your app offer functionality and quality in the same league as Acid Music? If it's your first app, consider giving it away. It's difficult to do when you've put your heart and soul into an app, but which would you prefer: a very limited number of sales and a small income, or zero income but loads of people enjoying and talking about your app? You can use your first launch to test the water with an idea, for the vanity of it, or perhaps to show people what you're capable of. It's not such a tragedy if you don't take any money for people enjoying your app.
Availability
It goes without saying that the more operating systems you support and the more countries you offer your app in, the more people stand to encounter it. The flip side is that, I'll repeat, you must test on every operating system you offer.
Performance Requirements
I found filling in this section of the submission tremendously difficult. How do I, as a developer, know what Windows Experience Index numbers to attribute to my app? All I can offer as advice is to start high, get your app accepted, and then expand outwards with future updates; when you get rejected you know you've gone too low. It worked for us! Not ideal, but what's a dev to do?
If anyone has better advice on this issue please post a comment below as I'm genuinely interested.
I believe this was the first code-related contest I've entered. It was a tiring, exciting, nail-biting, challenging and surprisingly social experience. The number of threads on the competition page and comments on people's articles are testament to the fantastic amount of support and constructive criticism the community gave itself.
In a conversation with one of my colleagues a few days before the deadline, he asked if we were going to make it on time, and I honestly had no idea. We'd made so much progress, but given the rate at which features were being completed and issues resolved, we were right on the borderline. I knew of 6 distinct and unrelated major issues still outstanding (writing an installer for the first time, signing an app for the first time, getting touch working in XNA, unimplemented collision detection, getting the sensors working, and a worrying tendency to just show a black screen). At that frenzied stage, however, a strange sense of peace came over me, as I knew that regardless of winning or even being accepted in time, we had pushed ourselves further than we realised we were able.
We had done ourselves proud.
My last piece of advice concerns how we managed to get those last issues nailed in the closing hours of the project. Searching the net for solutions, a couple of them were, in context, said to be impossible. How were they resolved?
Sheer, bloody-minded determination.
I taught myself a profound lesson in those days, one which does not appear in any of the software development texts I've encountered. I now earnestly believe that a trait of an effective programmer, which I briefly experienced for myself and now recognise in others, is a downright refusal to give up on a problem; that stubbornness has a funny habit of leading to a solution.
I'll close with a quote from Einstein: "It's not that I'm so smart, it's just that I stay with problems longer."