I made this game for someone else as a sample of how to do something. But looking at it, I thought the code might contain information useful to others, so I've decided to share it.
In this remake of an old game, I use the DirectDraw APIs to handle the rendering of graphics for several Windows Mobile resolutions. I make use of an old technique called mipmaps to address resolution differences while insulating the game's logic from them. The same binary will run on Windows Mobile Standard/Smartphone and Windows Mobile Professional/Pocket PC without any modification.
The game also makes use of the Samsung Windows Mobile SDK to take advantage of some of the user interaction features that Samsung phones provide, including the accelerometer, the wheel key, haptic feedback, and the notification LED. I discuss this program in three sections: the game's logic, DirectDraw, and the Samsung Windows Mobile SDK.
To run the game, you must install two CABs: one contains the Samsung Mobile SDK files, and the other contains the game. Even if you are not using a Samsung device, you will need to install the Samsung SDK CAB.
To compile this game, you will need to download the Samsung Windows Mobile SDK (even if you are not running this on a Samsung device). The SDK is available from their Tools and SDK page for free, and only requires a quick and simple registration.
This game does not support the accelerometer in HTC devices. Since I have no access to such a device or to its accelerometer implementation, I cannot test against it. However, if you have access to one and know how to use its accelerometer, the amount of code that requires updating is minimal.
The objective of this game is to clear a field of glass orbs. Orbs are erased whenever three of the same color are in contact with each other. The player shoots new orbs into the game field to attempt to match their colors. But if the collection of orbs reaches the bottom of the playing area, the game is over.
Windows Mobile devices come in all shapes and sizes, and this is a game that is heavily dependent on the shape of the playing field. I decided to make the playing field a square and fit it on the device screen. If your device's screen is wider than it is tall, then the areas on the far left and far right of the screen will be unused. If the screen is taller than it is wide, then the areas on the top and the bottom of the screen will be unused.
The game's logic itself is completely unaware of the actual size of the screen. The playing area (represented by the PlayArea class) uses floating point numbers to represent the position of all objects within the world. All items within the world have locations that are defined with a structure named PointF. Like the POINT structure, PointF has two fields named x and y; while POINT holds integer data in these fields, PointF uses floating point numbers. Within the world of this game, there exists only a single type of object: the orb.
The colored orbs within the game are represented by the Orb class. All orbs are the same size, so I use a static field named Orb::radius to set their radius (currently 32.0, for a diameter of 64 world units). An orb has a position, a velocity, a color, and a flag that can be set to indicate that it is falling off of the playing field. Orbs that are falling off the field cannot interact with any other orbs. So an orb set as falling is, for all purposes of interaction, gone, and nothing more than an ephemeral reminder of what once was.
The Orb class has a member method named IsTouching(Orb*). Given a reference to another orb, this function returns true if the orbs are within touching distance and false otherwise. As previously mentioned, a falling orb cannot interact with anything else; so even if two orbs overlap, if at least one of them is set to falling, this function will always return false. An orb's position is defined by the coordinates of its center point. We can use the Pythagorean Theorem to get the distance between the center points of two orbs. If p1 were the position of one orb and p2 the position of another, then the two orbs are touching if sqrt((p1.x-p2.x)^2+(p1.y-p2.y)^2) <= 2*Orb::radius. Mobile devices tend to have weaker processors, so whenever something can be made computationally simpler, it should be. In this case, I can avoid the square root by squaring both sides, yielding (p1.x-p2.x)^2+(p1.y-p2.y)^2 <= (2*Orb::radius)^2. The Orb::IsTouching method uses the latter inequality to determine whether two orbs are touching.
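A minimal, self-contained sketch of that squared-distance test follows. The PointF structure and the radius constant here are simplified stand-ins for the game's actual types, not the original code:

```cpp
#include <cassert>

// Simplified stand-in for the game's PointF structure.
struct PointF { float x; float y; };

const float kOrbRadius = 32.0f; // matches ORB_RADIUS in the game

// Two orbs touch when the squared distance between their centers is no
// more than the square of the touching distance (2 * radius). Squaring
// both sides avoids a sqrt call on a slow mobile CPU.
bool OrbsTouch(const PointF& p1, const PointF& p2)
{
    float dx = p1.x - p2.x;
    float dy = p1.y - p2.y;
    float touchDist = 2.0f * kOrbRadius;
    return (dx * dx + dy * dy) <= (touchDist * touchDist);
}
```

Two orbs whose centers are exactly 64 units apart are touching; beyond that, they are not.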
To move an orb, its velocity needs to be set to something other than zero; the Orb::SetVelocity(float x, float y) method is used to do this. Velocity is expressed in terms of coordinate displacement per second (remember, our world is 512 by 512 units in size). Once the velocity of the orb is set, the Orb::Step(float timeDelta) method is called periodically to calculate the orb's new position after the number of time units specified in timeDelta. If an orb begins to pass outside of the world coordinates, it will either bounce in the opposite direction (if it was leaving the left or right side of the world) or come to a complete stop (if it hits the top of the world). The only time an orb has a velocity carrying it toward the lower portion of the world is when it is falling. Falling orbs are allowed to pass out of the world coordinates at the bottom before being deleted and their memory reclaimed.
#pragma once
#include "stdafx.h"
#include "common.h"
#include <list>
using namespace std;
#define ORB_RADIUS 32
#define ORB_TOUCHING_DISTANCE2 ((ORB_RADIUS*2)*(ORB_RADIUS*2))
class Orb
{
private:
bool falling;
static const int touchingDistanceSquared =
ORB_TOUCHING_DISTANCE2;
static RECTF BoundingBox;
OrbColor color;
PointF position;
PointF velocity;
public:
static const int radius = ORB_RADIUS;
Orb(OrbColor color, float x, float y);
void SetVelocity(float x, float y);
void GetVelocity(PointF* vel);
void GetPosition(PointF* pos);
OrbColor GetColor();
bool IsTouching(Orb* otherOrb);
void Step(float timeUnit);
bool IsFalling() {return falling;}
void SetFalling();
bool IsMoving() {return (velocity.x!=0)||(velocity.y!=0);}
PointF GetPosition() { return position; }
};
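To make the movement rules concrete, here is a sketch of how a Step implementation might behave. This is not the game's actual code: the struct, constants, and the assumption that y grows downward (so the "top" of the world is small y) are mine:

```cpp
#include <cassert>

// Simplified stand-ins; the real Orb class carries more state.
struct PointF { float x; float y; };

const float kWorldSize = 512.0f; // PLAY_AREA_SIZE in the game
const float kRadius = 32.0f;     // ORB_RADIUS in the game

struct OrbSketch {
    PointF position;
    PointF velocity;
    bool falling;

    // Advance the orb by timeDelta seconds of its current velocity.
    void Step(float timeDelta)
    {
        position.x += velocity.x * timeDelta;
        position.y += velocity.y * timeDelta;

        // Bounce off the left and right walls by reflecting x velocity.
        if (position.x < kRadius)
            { position.x = kRadius; velocity.x = -velocity.x; }
        if (position.x > kWorldSize - kRadius)
            { position.x = kWorldSize - kRadius; velocity.x = -velocity.x; }

        // Come to a complete stop at the top of the world
        // (assuming y grows downward, as in screen coordinates).
        if (!falling && position.y < kRadius) {
            position.y = kRadius;
            velocity.x = velocity.y = 0.0f;
        }
        // Falling orbs are allowed to pass out through the bottom;
        // the play area deletes them once they leave the world.
    }
};
```

An orb fired straight up will travel until it reaches the ceiling, then stop dead with zero velocity.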
The playing area is represented by the PlayArea class. It contains a collection of all of the orbs in play and decoupled knowledge of the state of the game controller (more on this later). The PlayArea class also keeps track of the angle at which the player wishes to shoot the next orb, and it provides some valuable information on groups of orbs. If the player successfully makes a matching set of orbs, the PlayArea class will detect the match and all orbs involved in it. Orbs that are affected by the removal of matched orbs and left suspended in mid-air, unattached to other secured orbs, are detected by this class so that they can be set to falling status. PlayArea::LoadLevel loads an arrangement of orbs from a text file and sets up a level for us, and the PlayArea::Clear method can be used to remove all orbs from the playing field. Most of the information on the state of the game is contained within this class. I make use of the Standard Template Library's list class for containing the orbs. If you've never used the STL before, getting familiar with it is highly encouraged, as knowing it may lead to improved productivity.
#pragma once
#include "stdafx.h"
#include <list>
#include "common.h"
#include "Orb.h"
#include "GameController.h"
using namespace std;
#define PLAY_AREA_SIZE 512
class PlayArea
{
private:
list<Orb*> orbList;
list<Orb*> orbDestructionList;
GameController* gameController;
Orb* loadedOrb;
Orb* movingOrb;
int GetColorsInPlay();
OrbColor nextOrbColor;
public:
PlayArea(GameController* controller)
{gameController=controller;loadedOrb=NULL;movingOrb=NULL;}
PlayArea(int playAreaSize);
Orb* PlaceOrb(OrbColor color, float x, float y);
void LoadOrb(OrbColor color);
void SetCannonAngle(float angle);
void IncrementAngle(float incrementAmount);
void LoadLevel(LPWSTR levelData);
list<Orb*>* GetOrbList() { return &orbList; }
Orb* GetLoadedOrb() { return loadedOrb; }
Orb* GetMovingOrb() { return movingOrb; }
void FireOrb(float angle);
void StopMovingOrb();
REQUIRESDELETE MyOrbList* GetIntersectingOrbList(Orb*);
OrbColor GetNextOrbColor() { return nextOrbColor; }
void SelectNextColor();
REQUIRESDELETE list<Orb*>* GetMatchingSet(Orb* targetOrb);
list<Orb*>* GetSuspendedOrbs();
void DestroyOrb(Orb* target);
void Clear();
int GetOrbCount();
Orb* GetLowestOrb();
};
The interface of the GameLogic class is the simplest of those used within this program. The class exposes two methods: GameLogic::LoadLevel opens a text file and passes its contents to PlayArea::LoadLevel, and GameLogic::Step executes an iteration of the game's processing. Every time GameLogic::Step is called, moving orbs are advanced, the conditions for matching orbs or for game over are checked, and the appropriate response to those conditions is made.
The logic that occurs around a moving orb coming into contact with a stationary orb demands more explanation. Because orb movement is discrete instead of continuous, when one orb comes in contact with another, chances are the two orbs will slightly overlap. When this occurs, the orb is moved backwards to the point at which the orbs still touch but are not overlapping. The explanation for this process is simple, but the implementation is slightly more complex. The implementation appears below.
current->GetPosition(&orb2Position);
movingOrb->GetPosition(&orb1Position);
// Vector between the two orb centers.
float dx = orb2Position.x-orb1Position.x;
float dy = orb2Position.y-orb1Position.y;
// How far the moving orb has penetrated past the touching distance.
float targetDistance = Orb::radius*2-(float)sqrt((double)(dx*dx+dy*dy));
if(targetDistance>0.0f)
{
    movingOrb->GetVelocity(&velocity);
    float speed = (float)sqrt((velocity.x*velocity.x)+
        (velocity.y*velocity.y));
    // Time it would take to back out of the overlap at the current speed.
    float adjTime = targetDistance/speed;
    // Reverse the velocity, step backwards, then restore the velocity.
    movingOrb->SetVelocity(-velocity.x,-velocity.y);
    movingOrb->Step(adjTime);
    movingOrb->SetVelocity(velocity.x,velocity.y);
}
The game controller class exists as a level of abstraction over whatever the actual input device may be. Someone playing this game could be providing portions of input with the directional pad, the wheel key (such as the one on the Samsung Blackjack II SGH-i617), the accelerometer, or the screen. Rather than convolute the game's logic with the possible states of each of these input devices, I've created a class that packages the information the game needs to know. A simplified view of the input the user must provide is whether or not an action button was pressed (to fire the orb) and the selection of the angle at which the orb will be shot. So, the GameController class has two pieces of information: angle and actionButtonPressed. In the case of an accelerometer, a user can directly select an angle by tilting the device. In the case of the wheel key or control pad, the user incrementally selects an angle. GameController::SetAngle allows an angle to be set directly, and GameController::IncrementAngle increments (or decrements) the input angle. For the action button, GameController::PressActionButton sets the flag that notifies the game logic that the action button was pressed. It is up to the game to clear the flag with GameController::ClearActionButton. When the game needs to read the state of the controller, GameController::IsActionButtonPressed returns the state of the action button flag, GameController::GetAngle returns the angle in degrees, and GameController::GetAngleRad returns the angle in radians.
#pragma once
#include "Stdafx.h"
#include "common.h"
#define MAX_CONTROLLER_ANGLE 80.0f
class GameController
{
private:
float angle;
bool actionButtonPressed;
void CheckAngleBoundaries();
public:
GameController();
float IncrementAngle(float amount);
float GetAngle();
float GetAngleRad();
float SetAngle(float newAngle);
void PressActionButton();
void ClearActionButton();
bool IsActionButtonPressed();
};
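The header does not show how the angle is constrained, so here is a sketch of how the clamping might work. The struct name and the assumption that angles are clamped symmetrically to ±MAX_CONTROLLER_ANGLE are mine:

```cpp
#include <cassert>

const float kMaxAngle = 80.0f; // degrees, per MAX_CONTROLLER_ANGLE

// Sketch of a controller that clamps the cannon angle after each change.
struct ControllerSketch {
    float angle = 0.0f;
    bool actionButtonPressed = false;

    // Keep the angle within the playable range.
    void CheckAngleBoundaries()
    {
        if (angle > kMaxAngle)  angle = kMaxAngle;
        if (angle < -kMaxAngle) angle = -kMaxAngle;
    }

    // Nudge the angle (wheel key / d-pad input) and return the result.
    float IncrementAngle(float amount)
    {
        angle += amount;
        CheckAngleBoundaries();
        return angle;
    }

    // Convert the stored degrees to radians for the trig routines.
    float GetAngleRad() const
    {
        return angle * 3.14159265f / 180.0f;
    }
};
```

Repeatedly incrementing past the limit simply pins the angle at the boundary rather than letting the cannon aim downward.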
From the standpoint of the game's logic, the PlayAreaRenderer class handles the drawing of the game objects. In actuality, this class creates a to-do list containing all of the operations that must occur to render the game scene. The list of items to be performed is stored in the DisplayList class. Unlike most of the lists used within this program, the DisplayList class contains a statically sized list. Using a dynamic list here would have a negative impact on performance and memory fragmentation, since the list is regenerated several times a second (around 30-40 times, depending on the performance of the device).
#pragma once
#include "stdafx.h"
#include "common.h"
#include "PlayArea.h"
#include "Orb.h"
#include "DisplayList.h"
#include "OrbRenderer.h"
class PlayAreaRenderer
{
private:
OrbRenderer* orbRenderer;
PlayArea* playArea;
DisplayList* displayList;
GameController* gameController;
public:
PlayAreaRenderer(OrbRenderer* orbRend, PlayArea* pa,
GameController *gc, DisplayList* dl)
{
playArea = pa;
orbRenderer = orbRend;
displayList = dl;
gameController = gc;
}
void Render();
};
#pragma once
#include "stdafx.h"
#include "common.h"
struct DisplayItem
{
RECT itemSource;
RECT itemDestination;
};
class DisplayList
{
private:
DisplayItem itemList[100];
int itemCount;
public:
DisplayList(){itemCount=0;}
void Clear() {itemCount=0;}
void AddItem(RECT* itemSource, RECT* itemDestination);
void Render(IDirectDrawSurface* spriteSource,
IDirectDrawSurface* destinationSurface, POINT offset);
};
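A sketch of how AddItem might work against that fixed-size array follows. The Rect stand-in (replacing the Windows RECT) and the choice to silently drop items when the list is full are my assumptions, not the original implementation:

```cpp
#include <cassert>

// Portable stand-in for the Windows RECT structure.
struct Rect { int left, top, right, bottom; };

struct Item { Rect src; Rect dst; };

// Sketch of a statically sized display list: no allocation per frame,
// just an index into a fixed array that is cleared and refilled
// 30-40 times a second.
class DisplayListSketch {
    Item itemList[100];
    int itemCount;
public:
    DisplayListSketch() : itemCount(0) {}
    void Clear() { itemCount = 0; }
    int Count() const { return itemCount; }

    bool AddItem(const Rect& src, const Rect& dst)
    {
        if (itemCount >= 100) return false; // list full; drop the item
        itemList[itemCount].src = src;
        itemList[itemCount].dst = dst;
        ++itemCount;
        return true;
    }
};
```

Because the storage is never freed or reallocated between frames, regenerating the list is just an integer reset followed by array writes.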
The GameEventFeedback class serves as a decoupling layer for sending notifications back to the user. It is called whenever one of four events occurs: an orb is fired, an orb has stopped, the next orb color is decided, or the user has successfully made a color match. If I want to do anything to notify the user of one of these actions, the implementation for the notification is called from within this class. The code as presented here generates haptic feedback whenever an orb is fired or stops (more information on haptic feedback is available below in the "Samsung Windows Mobile SDK" section) and changes the color of the notification LED to communicate the next orb color.
#pragma once
#include "Stdafx.h"
#include "common.h"
#include "SmiHaptics.h"
class GameEventFeedback
{
SmiHapticsNote launchNote[1];
bool hasNotificationLed;
bool canBlink;
bool hasHapticFeedback;
SMI_HAPTICS_HANDLE hapticsHandle;
public:
GameEventFeedback();
~GameEventFeedback();
void OrbFired();
void NextColorDecided(OrbColor);
void OrbStopped();
void ColorMatchMade();
};
I mentioned earlier that this game can be rendered on several Windows Mobile devices with different resolutions. I handled rendering on multiple resolutions through a technique known as mipmaps. "Mip" is short for the Latin phrase multum in parvo, meaning "much in a small space". The use of mipmaps goes back more than 20 years, but they are still relevant today. Many 3D systems use mipmaps to hold texture detail, and Microsoft's DeepZoom technology uses them extensively. To explain what a mipmap is, look at the following image of the orbs.
The image contains the orbs that I used in the game at several resolutions. When an orb is being drawn to the screen, the set of orbs that most closely matches the current need is used. A class named OrbRenderer contains the logic for choosing an orb for a given need. The class's constructor contains the only logic that makes decisions based on resolution; most of the rest of the game is resolution independent. The constructor performs calculations based on the orb radius, the size of the play area in logical (world) units, and the physical size of the screen surface on which it will draw.
OrbRenderer::OrbRenderer(
float logicalPlayAreaSize,
float logicalOrbRadius,
float physicalPlayAreaSize,
DisplayList* displayList)
{
this->targetDisplayList=displayList;
logicalSize = logicalPlayAreaSize;
this->scalingFactor = physicalPlayAreaSize/logicalPlayAreaSize ;
destinationOrbSize.x=destinationOrbSize.y=
    (int)(logicalOrbRadius*2.0f*scalingFactor);
destinationOrbRadius = destinationOrbSize.x/2;
if(destinationOrbSize.x>=48)
{
sourceOrigin.x=0;
sourceOrigin.y=0;
orbDisplacement.x=64;
orbDisplacement.y=0;
orbImageSize.x=orbImageSize.y=64;
}
else if (destinationOrbSize.x>=24)
{
sourceOrigin.x=0;
sourceOrigin.y=64;
orbDisplacement.x=32;
orbDisplacement.y=0;
orbImageSize.x=orbImageSize.y=32;
}
else if (destinationOrbSize.x>=12)
{
sourceOrigin.x=0;
sourceOrigin.y=96;
orbDisplacement.x=16;
orbDisplacement.y=0;
orbImageSize.x=orbImageSize.y=16;
}
else
{
sourceOrigin.x=0;
sourceOrigin.y=112;
orbDisplacement.x=8;
orbDisplacement.y=0;
orbImageSize.x=orbImageSize.y=8;
}
}
Side-by-side screenshots of a VGA and QVGA rendering displayed at the same size.
DirectDraw exposes its functionality through COM interfaces. As with any COM interface, you must remember to release an interface when you no longer need it. Not doing so can lead to memory leaks, so you will hear me reiterate the importance of releasing COM interfaces throughout this article. The interface used to get access to most of the DirectDraw functionality is IDirectDraw. An instance of an IDirectDraw implementation is acquired through the function DirectDrawCreate. Here is the calling signature for DirectDrawCreate:
HRESULT WINAPI DirectDrawCreate(
GUID FAR* lpGUID,
LPDIRECTDRAW FAR* lplpDD,
IUnknown FAR* pUnkOuter
);
For our usage, only the second parameter is important; the other two parameters should be set to NULL. The function will create an object that implements IDirectDraw and assign its address to the interface pointer specified in the second parameter.
A DirectDraw program can run at one of two cooperation levels: exclusive or normal. The level selected will have an impact on how your program interacts with the rest of the system (hence the name "cooperation level"). In exclusive mode, you own the entire display and are able to make use of page flipping. In normal mode, you don't have access to page flipping and must be mindful of where you draw. Given those two pieces of information, exclusive mode looks to be the more attractive mode. But the freedom gained through exclusive mode brings more responsibility: while in exclusive mode, notifications (such as an incoming call) cannot get through, so you must keep watch for notifications and respond accordingly. For most of this article, we will be working in normal mode. The cooperation level is set through IDirectDraw::SetCooperativeLevel. The calling signature for this method follows:
HRESULT SetCooperativeLevel(
HWND hWnd,
DWORD dwFlags
);
The first parameter is the handle to the top-level window for your application. The second parameter is a flag indicating the mode to be set. To use exclusive mode, pass the flags DDSCL_FULLSCREEN | DDSCL_EXCLUSIVE
together. To use normal mode, pass the flag DDSCL_NORMAL
.
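Putting the two calls together, a minimal initialization might look like the sketch below. This is only a sketch: error handling is abbreviated, and hWnd is assumed to be your application's top-level window handle.

```cpp
#include <ddraw.h>

// Sketch: create the DirectDraw interface and select normal mode.
IDirectDraw* InitDirectDraw(HWND hWnd)
{
    IDirectDraw* pDD = NULL;
    if (FAILED(DirectDrawCreate(NULL, &pDD, NULL)))
        return NULL;
    if (FAILED(pDD->SetCooperativeLevel(hWnd, DDSCL_NORMAL)))
    {
        pDD->Release();   // always release COM interfaces on failure
        return NULL;
    }
    return pDD;           // caller must Release() when done
}
```

Note that even in this tiny sketch the interface is released on the failure path; forgetting to do this is the classic COM leak mentioned above.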
Once DirectDraw is initialized, we are almost ready to begin drawing. But we need an object on which to draw. In the DirectDraw API, the target of a drawing operation is called a surface. Surfaces implement the IDirectDrawSurface interface. A surface can be either on-screen or off-screen. An on-screen surface is associated with the viewable display memory. Off-screen surfaces can exist within a non-visible portion of the device's video memory or within main memory. The method IDirectDraw::CreateSurface will create an IDirectDrawSurface object for us. Its calling signature follows:
HRESULT CreateSurface(
LPDDSURFACEDESC lpDDSurfaceDesc,
LPDIRECTDRAWSURFACE FAR* lplpDDSurface,
IUnknown FAR* pUnkOuter
);
The first parameter contains information about the surface to be created. The second parameter is an output parameter: a pointer to the newly created surface will be stored in the variable it references. The third parameter can be NULL. The first parameter deserves more explanation. Exactly what is a DDSURFACEDESC?
Many functions in DirectX pack their parameters into a structure, and IDirectDraw::CreateSurface is no exception. The DDSURFACEDESC parameter packages the information about the surface that is to be created. Some of the fields are optional depending on the type of surface we are trying to create. To indicate which fields we have populated, a member named dwFlags must be set with flags identifying those fields. Another member, dwSize, must be set to the size of the DDSURFACEDESC structure. With future versions of DirectDraw, the size of DDSURFACEDESC could grow, so DirectDraw uses this field to know the size of the DDSURFACEDESC structure that we are using. To create a surface on the visible video memory, the only other member that must be set is ddsCaps. The ddsCaps member is of type DDSCAPS. The Windows Mobile implementation of DDSCAPS has a single field named dwCaps, which accepts flags. The flag DDSCAPS_PRIMARYSURFACE is needed to create a surface in visible video memory.
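As a sketch, creating the primary surface might look like this (pDD is assumed to be an already-initialized IDirectDraw interface; error handling is abbreviated):

```cpp
// Sketch: create the primary (visible) surface.
DDSURFACEDESC ddsd;
ZeroMemory(&ddsd, sizeof(ddsd));
ddsd.dwSize = sizeof(DDSURFACEDESC);          // required by DirectDraw
ddsd.dwFlags = DDSD_CAPS;                     // only ddsCaps is populated
ddsd.ddsCaps.dwCaps = DDSCAPS_PRIMARYSURFACE; // visible video memory

IDirectDrawSurface* pPrimary = NULL;
HRESULT hr = pDD->CreateSurface(&ddsd, &pPrimary, NULL);
// Remember to pPrimary->Release() when finished with the surface.
```

Zeroing the structure first ensures that any fields we did not flag are in a known state.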
In a typical program running at the normal cooperative level, unconfined drawing is undesirable. DirectDraw offers a mechanism for restricting the areas of the screen (or any other DirectDraw surface) on which we draw: DirectDraw clippers. Clippers implement the IDirectDrawClipper interface. These objects are based on GDI regions, so you can provide a list of rectangular regions to which drawing is restricted. For our program, we will use a much simpler method of setting up a clipper: to allow drawing only in the area occupied by our window, we can pass a handle to our window to IDirectDrawClipper::SetHWnd. The clipper itself is created through IDirectDraw::CreateClipper.
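A sketch of that window-based clipper setup follows (pDD, pPrimary, and hWnd are assumed to exist from the earlier initialization steps):

```cpp
// Sketch: restrict drawing on the primary surface to our window.
IDirectDrawClipper* pClipper = NULL;
if (SUCCEEDED(pDD->CreateClipper(0, &pClipper, NULL)))
{
    pClipper->SetHWnd(0, hWnd);      // clip to the window's visible area
    pPrimary->SetClipper(pClipper);  // the surface holds its own reference
    pClipper->Release();             // safe to release our reference now
}
```

Once the clipper is attached, blits to the primary surface are automatically confined to the portion of the window that is actually visible.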
If you are familiar with GDI programming, then the concept of blitting is nothing new. BLT stands for Bit Block Transfer: transferring the data in one block of memory to another. Most modern graphics APIs have an equivalent of this functionality. For the DirectDraw blits performed within this program, I've set black to be the color that represents transparency. Wherever my original mipmap images contain black pixels, no pixels will be rendered. If you want to test this out, draw black circles within the centers of my sphere images, and you will see donuts in the game instead of spheres.
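A sketch of how that black color key might be set up and used follows. The surface and rectangle names (pSprite, pPrimary, srcRect, destRect) are assumptions for illustration:

```cpp
// Sketch: use black as the transparent color during a blit.
// pSprite is assumed to be the off-screen surface holding the orb images.
DDCOLORKEY key;
key.dwColorSpaceLowValue  = 0;  // black in the surface's pixel format
key.dwColorSpaceHighValue = 0;
pSprite->SetColorKey(DDCKEY_SRCBLT, &key);

// DDBLT_KEYSRC tells Blt to skip source pixels matching the color key.
pPrimary->Blt(&destRect, pSprite, &srcRect, DDBLT_KEYSRC, NULL);
```

With the source color key in place, every blit from the sprite sheet leaves black pixels untouched on the destination, which is what makes the orbs appear round rather than as square tiles.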
Samsung's Windows Mobile SDK is the first (and, as yet, only) instance I've come across of an OEM supporting developers in accessing the specific features of their devices. On other devices, if you would like to access features not exposed through the Windows Mobile APIs, you have to resort to a bit of reverse engineering and hacking (or wait for someone else to do that) before you can use them. And if the feature is implemented a different way in a future revision of the device, you must take that into account too.
With Samsung's Windows Mobile SDK, the details of how a feature is implemented in each Samsung device are abstracted away by a consistent interface. Their SDK lets you query whether or not a capability exists, and if it exists, you are able to get further details about what it can and cannot do. More information on the SDK can be found at the Samsung Mobile Innovator site.
I used the SDK for very specific purposes. On the Samsung Blackjack, there is a jog-dial-type interface called the wheel key, placed just below the display. For this game, the wheel key is a natural method of control. If you monitor the Windows messages associated with the wheel key, they come across as VK_UP and VK_DOWN keypresses. Using the Samsung Mobile SDK, I can distinguish a VK_UP or VK_DOWN message caused by someone pressing the directional pad from one generated by the wheel key.
case VK_UP:
{
UINT scanCode = (lParam & 0x00FF0000) >> 16;
if(TRUE == SmiKeyMsgIsFromWheelKey(wParam,scanCode))
{
g_gameController->IncrementAngle(-5);
}
}
break;
case VK_DOWN:
{
UINT scanCode = (lParam & 0x00FF0000) >> 16;
if(TRUE == SmiKeyMsgIsFromWheelKey(wParam,scanCode))
{
g_gameController->IncrementAngle(5);
}
}
break;
Picture of Wheelkey on Samsung Blackjack (i617)
The accelerometer in Samsung devices can be read asynchronously or synchronously. Because of the discrete nature of this game, synchronous reads work fine for me. It only takes two lines of code to see if the game is being run on Samsung hardware with an accelerometer.
SmiAccelerometerCapabilities accelCaps;
g_hasAccelerometer = (SMI_SUCCESS == SmiAccelerometerGetCapabilities(&accelCaps));
If a Samsung accelerometer is present, then g_hasAccelerometer will be set to true. Within the game loop, the accelerometer vector is read. The accelerometer always returns a vector that points in whatever direction is physically down. If the user has changed the orientation of their device, then I must take this into account too. The orientation of the user's device can be found by reading a registry key. The key returns the value 0, 90, 180, or 270 to indicate by how many degrees the device's screen has been rotated.
DWORD GetScreenOrientation()
{
    DWORD retVal = 0; // default to no rotation if the key can't be read
    HKEY hKey;
    DWORD dataSize = sizeof(DWORD);
    if(ERROR_SUCCESS==RegOpenKeyEx(HKEY_LOCAL_MACHINE,
        TEXT("System\\GDI\\Rotation"),0,KEY_READ,&hKey))
    {
        RegQueryValueEx(hKey,TEXT("Angle"),NULL,NULL,(LPBYTE)&retVal,&dataSize);
        RegCloseKey(hKey);
    }
    return retVal;
}
This value is used to reorient the vector that we read from the accelerometer. Once I have a proper value, I perform a little trig to figure out by how many degrees the device has been rotated. When being played on a device with an accelerometer, the ball will always try to shoot in the direction that is physically pointed up.
if(g_hasAccelerometer)
{
SmiAccelerometerVector vect,rotatedVect;
if(SMI_SUCCESS==SmiAccelerometerGetVector(&vect))
{
switch(screenOrientation)
{
case 0:
rotatedVect = vect;
break;
case 90:
rotatedVect.x=-vect.y;
rotatedVect.y=vect.x;
rotatedVect.z=vect.z;
break;
case 180:
rotatedVect.x=-vect.x;
rotatedVect.y=-vect.y;
rotatedVect.z=vect.z;
break;
case 270:
rotatedVect.x=vect.y;
rotatedVect.y=-vect.x;
rotatedVect.z=vect.z;
break;
}
float angle;
float absAngle;
float distance;
if(rotatedVect.y==0)
angle = 0;
else
{
angle = -atan(rotatedVect.x / rotatedVect.y);
angle = (angle/(2*3.141592))*360.0;
}
g_gameController->SetAngle(angle);
}
}
Some Samsung devices have a notification LED that can illuminate in seven different colors: red, yellow, green, blue, teal, purple, and white. If this game is run on such a device, the color of the next orb is indicated by the notification LED. The LED will shine for up to 4 seconds to indicate the next color. The notification LED can be activated with SmiLedTurnOn. The function requires a COLORREF value that indicates the color, along with other parameters that indicate the blinking pattern and the length of time the LED should stay on. I use the following code in the GameEventFeedback::NextColorDecided method:
void GameEventFeedback::NextColorDecided(OrbColor color)
{
SmiLedAttributes attrib;
switch(color)
{
case Orb_Red: attrib.color=RGB(0xFF,0x00,0x00); break;
case Orb_Yellow: attrib.color=RGB(0xFF,0xFF,0x00); break;
case Orb_Green: attrib.color=RGB(0x00,0xFF,0x00); break;
case Orb_Blue: attrib.color=RGB(0x00,0x00,0xFF); break;
case Orb_Teal: attrib.color=RGB(0x00,0xFF,0xFF); break;
case Orb_Purple: attrib.color=RGB(0xFF,0x00,0xFF); break;
case Orb_White: attrib.color=RGB(0xFF,0xFF,0xFF); break;
default: attrib.color=0;break;
};
attrib.onTime=750;
attrib.offTime=250;
attrib.duration=4000;
attrib.pattern= SMI_LED_PATTERN_SOLID;
SmiLedTurnOn(SMI_LED_ID_NOTIFICATION,&attrib);
}
The notification LED is turned on here to indicate that a red orb will be presented next.
For Samsung devices with haptic feedback capabilities, you will feel feedback when an orb is fired or when it hits the other orbs. Haptic feedback is invoked by defining an array of haptic notes. A haptic note contains information on the intensity and the shape of the vibration. For my purposes, a single note suffices, so I created a one-element array to contain the note data. The array lives within the GameEventFeedback class.
SmiHapticsNote launchNote[1];
The constructor of the GameEventFeedback class detects whether or not haptic feedback hardware is present. If it is, the note is populated according to the hardware's capabilities.
GameEventFeedback::GameEventFeedback()
{
hapticsHandle=NULL;
hasNotificationLed = false; // assume no hardware until detection succeeds
canBlink = false;
hasHapticFeedback = false;
SmiLedCapabilities ledCaps;
SmiHapticsCapabilities hapticCaps;
SMI_RESULT result;
result = SmiLedGetCapabilities(SMI_LED_ID_NOTIFICATION,&ledCaps);
if(result==SMI_SUCCESS)
{
hasNotificationLed = true;
canBlink = ledCaps.blinkIsSupported;
}
result = SmiHapticsGetCapabilities(&hapticCaps);
if(SMI_SUCCESS==result)
{
if(SMI_SUCCESS==SmiHapticsOpen(&hapticsHandle))
{
hasHapticFeedback = true;
launchNote[0].duration=max(hapticCaps.minPeriod,400);
launchNote[0].magnitude=255;
if(hapticCaps.startEndMagIsSupported)
{
launchNote[0].startingMagnitude=64;
launchNote[0].endingMagnitude=255;
}
launchNote[0].period=max(hapticCaps.minPeriod,0.);
if(hapticCaps.noteStyleIsSupported)
launchNote[0].style=SMI_HAPTICS_NOTE_STYLE_STRONG;
}
else
{
hasHapticFeedback = false;
}
}
}
Once the note is created and a handle to the hardware is open, a note sequence can be played with a single call to SmiHapticsPlayNotes. This is done in both the GameEventFeedback::OrbFired and GameEventFeedback::OrbStopped methods.
void GameEventFeedback::OrbFired()
{
SmiHapticsPlayNotes(hapticsHandle,1,launchNote,FALSE,NULL);
}
void GameEventFeedback::OrbStopped()
{
SmiHapticsPlayNotes(hapticsHandle,1,launchNote,FALSE,NULL);
}
Some Samsung devices, such as the Epix, have an input device that functions as both a mouse pad and a directional pad, and the user can change the mode of the pad. In mouse mode, it won't do one much good in this game. So, the game automatically detects the presence of this input device and switches it to navigation mode when the program starts up.
SmiOpticalMouseCapabilities mouseCaps;
SmiOpticalMouseGetCapabilities(&mouseCaps);
if(mouseCaps.multiOperationModeIsSupported)
{
SmiOpticalMouseSetMode(SMI_OPTICAL_MOUSE_MODE_NAVIGATION);
}
Picture of the optical mouse pad on a Samsung phone
A common error that developers make when attempting to write their first graphics-intensive application on Windows Mobile is to use the default message processing structure that Visual Studio creates for you. The default structure is designed for desktop applications, where, unless something happens (an e-mail comes in, the user presses a button, and so on), the application is doing nothing. Attempting to use that structure in a graphics-intensive program, or any program whose visual state is in constant flux, will result in sub-par performance. After using the inappropriate structure, I've heard developers conclude that their sub-par experience is due to some inadequacy in Windows, or to the WM_PAINT message not having a high enough priority, when what is actually happening is that the wrong pattern is being applied to their program.
A more ideal message processing structure follows. With it, the program is able to continuously update its display. While you will achieve smoother video with this message processing loop, it also brings higher power consumption. So, when the game loses focus, the code reverts to the old message processing structure, in which the thread blocks until there is a message to process. If the user leaves the game running in the background, this prevents the game from draining the battery.
while(keepRunning)
{
if(g_bHasFocus)
{
if(PeekMessage(&msg,NULL,0,0,PM_REMOVE))
{
if(msg.message==WM_QUIT)
keepRunning=false;
if (!TranslateAccelerator(msg.hwnd, hAccelTable, &msg))
{
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
else
{
Sleep(0);
}
}
else
{
if(GetMessage(&msg,NULL,0,0))
{
if (!TranslateAccelerator(msg.hwnd, hAccelTable, &msg))
{
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
}
}
I've only given the game a single level. The level layout format is a plain text file. On each line, the word "ORB" is followed by the X and Y coordinates for the orb, and then a number indicating the orb color (see the OrbColor enumeration for a mapping of numbers to colors). Each line is terminated by a semicolon. If you decide to create new level files or edit the one I have provided, here are some things to note:
- All of your orbs should either touch each other or touch the ceiling. If they don't, then they will be flagged as suspended orbs and fall when the first orb is shot.
- The file is encoded with 16-bit characters. If you add a file using 8-bit characters, it will fail horribly.
- Remember that the coordinates are in a world that is 512 units by 512 units.
- Don't create any layouts with a sphere at the bottom of the screen. If you do, the game will be over as soon as it starts.
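A hypothetical three-orb layout following that format might look like the snippet below. The coordinates and color numbers are illustrative, not taken from the shipped level file; each orb sits at y=32 (touching the ceiling) and the centers are 64 units apart (touching each other):

```
ORB 192 32 1;
ORB 256 32 2;
ORB 320 32 1;
```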
In Closing
As mentioned before, I made this code as an example for someone, and I don't plan on updating it. However, portions of this code were derived from some research I am doing on the various Windows Mobile graphics APIs. Once I've completed my research, I hope to have examples up on using all of the graphics APIs. I skimmed through DirectDraw without going into detail in this article, but I have a much more detailed guide put together; the DirectDraw portion of the guide is about 15 pages so far. Direct3D, GDI/GDI+, DirectShow, and OpenGL ES will all be parts of the guide. I plan to publish each section as its own article in the weeks to come.
History
- 2009 July 30 - Initial publication.