Introduction
This walkthrough will cover the creation of an OpenGL application including the following topics:
- Angle Calculation
- Perspective
- Billboarding
- Depth Buffer
- Multipass Rendering
- Animation
- Accelerometer
- Touch events
- Persisting user settings
The application allows the user to:
- Move the camera anywhere in the scene
- Rotate the scene or the camera
- Show and hide objects in the scene
- Display the FPS
- Change the billboard method
- Use the phone angle to set the view angle
The project is built using Eclipse and the Android SDK.
Background
I created this app as an exercise in learning OpenGL. I couldn't find a fountain app for Android, so I figured that was a good place to start. About 10% of Android users are still limited to OpenGL ES 1.1, so I wrote this application against that version.
This tutorial assumes you already have the Eclipse environment up and running. If you are new to Eclipse and Android development, I recommend going through the temperature converter tutorial which can be found here.
Using the Code
You can create the project by going through the steps listed below. If you prefer to load the entire project, download and unzip the project file, then open Eclipse, choose File->Import..->General->Existing Projects and select the root folder of the FountainGL project.
Let's begin:
Start Eclipse (I'm using Eclipse Classic version 3.6.2).
Choose File -> New -> Project -> Android -> Android Project
Click Next.
Fill in the fields as shown below. You can use any version of Android 2.1 or later.
Click Finish.
Once the project is created, add this icon to the FountainGL\res\drawable-hdpi folder. You can drag it directly to the folder in Eclipse or use Windows Explorer. Overwrite the existing file in that folder.
icon.png
If you are not using a high resolution device (you probably are), you can copy the icon to the drawable-mdpi and drawable-ldpi folders also.
Right-click on the FountainGL project and choose New->Class.
Enter the Name, Package and Superclass as shown below. Also check the 2 checkboxes indicated (though we will overwrite these method stubs).
Click Finish.
Coding the FountainGLRenderer Class
This class will contain the bulk of our application code.
Open FountainGLRenderer.java.
Remove all the existing code from this file.
Add the package name and imports needed for our application.
package droid.fgl;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLContext;
import javax.microedition.khronos.egl.EGLDisplay;
import javax.microedition.khronos.opengles.GL10;
import javax.microedition.khronos.opengles.GL11;
import android.app.Activity;
import android.content.Context;
import android.content.res.Configuration;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.opengl.GLSurfaceView;
import android.opengl.GLSurfaceView.Renderer;
import android.opengl.GLU;
import android.os.Handler;
import android.os.SystemClock;
import android.view.MotionEvent;
import android.widget.FrameLayout;
import android.widget.TextView;
Create the FountainGLRenderer class. Our class will implement Renderer so we can combine our render code and the OpenGL callbacks in a single class.
public class FountainGLRenderer extends GLSurfaceView implements Renderer
{
Add the variables needed for the fountain and ball animation. elapsedRealtime() returns the number of milliseconds since system bootup.
private static float mAngCtr = 0;
long mLastTime = SystemClock.elapsedRealtime();
Add the variables needed for processing touch\drag events.
float mDragStartX = -1;
float mDragStartY = -1;
float mDownX = -1;
float mDownY = -1;
Add the variables used to store camera angle and position. We add .0001 to initial values because exact right (or 0) angles can lead to divide by 0 errors. We could check for 0 at each calculation, but this is easier.
static float mCamXang = 0.0001f;
static float mCamYang = 180.0001f;
static float mCamXpos = 0.0001f;
static float mCamYpos = 60.0001f;
static float mCamZpos = 180.0001f;
Add the variables used to set the camera view direction.
float mViewRad = 100;
static float mTargetY = 0;
static float mTargetX = 0;
static float mTargetZ = 0;
Add the variables used to set the scene rotation angle.
static float mSceneXAng = 0.0001f;
static float mSceneYAng = 0.0001f;
Add the variables used to store screen information.
float mScrHeight = 0;
float mScrWidth = 0;
float mScrRatio = 0;
float mClipStart = 1;
Add the constants used for angle conversion.
final double mDeg2Rad = Math.PI / 180.0;
final double mRad2Deg = 180.0 / Math.PI;
Add the mResetMatrix flag. This is set whenever the camera moves forward or back so we can update the clip region.
boolean mResetMatrix = false;
Add the variables used for FPS (Frames Per Second) calculation and display. Note the TextView can also be used to display debug information.
int[] mFrameTime = new int[20];
int mFramePos = 0;
long mStartTime = SystemClock.elapsedRealtime();
int mFPSDispCtr = 0;
float mFPS = 0;
TextView mTxtMsg = null;
final FountainGLRenderer mTagStore = this;
Handler mThreadHandler = new Handler();
Add the object index constants and buffer length array. We will store the vertex arrays in GPU memory, which requires an index and length when reading. We can't use 0 as an index because it is reserved by OpenGL.
final int mFLOOR = 1;
final int mBALL = 2;
final int mPOOL = 3;
final int mWALL = 4;
final int mDROP = 5;
final int mSPLASH = 6;
int[] mBufferLen = new int[] {0,0,0,0,0,0,0};
Add the parameters used for object creation. These are optimized for my Huawei Ideos. mBallHSliceCnt must be even because we will render the ball in 2 halves.
int mBallRad = 10;
int mBallVSliceCnt = 32;
int mBallHSliceCnt = 32;
int mStreamCnt = 10;
int mDropsPerStream = 30;
int mRepeatLen = 180/mDropsPerStream;
float mArcRad = 30;
float[][] dropCoords = new float[mStreamCnt*mDropsPerStream][3];
int mPoolSliceCnt = mStreamCnt;
float mPoolRad = 57f;
Add the variables used to store the accelerometer values. The accelerometer can be used to set the camera view angle. mOrientation stores the current phone orientation.
public float AccelZ = 0;
public float AccelY = 0;
int mOrientation = 0;
Add the variables used to store user options.
public boolean ShowBall = true;
public boolean ShowFloor = true;
public boolean ShowFountain = true;
public boolean ShowPool = true;
public boolean RotateScene = true;
public boolean UseTiltAngle = false;
public boolean MultiBillboard = true;
public boolean ShowFPS = true;
public boolean Paused = false;
Add the constructor for FountainGLRenderer. The activity is passed in so we can alter the layout and add a TextView for displaying the FPS. setRenderer() tells OpenGL that this class will do the rendering and initializes the surface. We also create the listener for the accelerometer so the view angle can be adjusted based on phone tilt. Note that the accelerometer returns the same X\Y values regardless of orientation, so we need to choose which sensor axis to read based on the current orientation.
FountainGLRenderer(Activity pActivity)
{
super(pActivity);
FrameLayout layout = new FrameLayout(pActivity);
mTxtMsg = new TextView(layout.getContext());
mTxtMsg.setBackgroundColor(0x00FFFFFF);
mTxtMsg.setTextColor(0xFF777777);
layout.addView(this);
layout.addView(mTxtMsg);
pActivity.setContentView(layout);
setRenderer(this);
((SensorManager)pActivity.getSystemService
(Context.SENSOR_SERVICE)).registerListener(
new SensorEventListener() {
@Override
public void onSensorChanged(SensorEvent event) {
if (mOrientation ==
Configuration.ORIENTATION_PORTRAIT)
AccelY = event.values[1];
else
AccelY = event.values[0];
AccelZ = event.values[2];
}
@Override
public void onAccuracyChanged
(Sensor sensor, int accuracy) {}
},
((SensorManager)pActivity.getSystemService(Context.SENSOR_SERVICE))
.getSensorList(Sensor.TYPE_ACCELEROMETER).get(0),
SensorManager.SENSOR_DELAY_NORMAL);
}
Add the onSurfaceCreated callback. This is called only once, when the surface is first created. We set the background color and create the vertex arrays for our objects.
@Override
public void onSurfaceCreated(GL10 gl1, EGLConfig pConfig)
{
GL11 gl = (GL11)gl1;
gl.glClearColor(0f, 0f, 0f, 1.0f);
BuildFloor(gl);
BuildBall(gl);
BuildPool(gl);
BuildWall(gl);
BuildDrop(gl);
BuildSplash(gl);
}
Add the BuildFloor method. This generates the vertices for the triangles that make up the floor. The floor is a 7x7 grid merged with a 6x6 grid. To create a checker pattern, we only draw alternate squares; the other squares are empty. After creating the vertex array, it is stored in GPU memory.
void BuildFloor(GL11 gl)
{
int sqrSize = 20;
float vtx[] = new float[1530];
int vtxCtr = 0;
for (int x=-130, offset=0; x<130; x+=sqrSize, offset=sqrSize-offset)
{
for (int y=-130+offset; y<130; y+=(sqrSize*2))
{
vtx[vtxCtr] = x;
vtx[vtxCtr+ 1] =-2;
vtx[vtxCtr+ 2] = y;
vtx[vtxCtr+ 3] = x+sqrSize;
vtx[vtxCtr+ 4] =-2;
vtx[vtxCtr+ 5] = y;
vtx[vtxCtr+ 6] = x;
vtx[vtxCtr+ 7] =-2;
vtx[vtxCtr+ 8] = y+sqrSize;
vtx[vtxCtr+ 9] = x+sqrSize;
vtx[vtxCtr+10] =-2;
vtx[vtxCtr+11] = y;
vtx[vtxCtr+12] = x;
vtx[vtxCtr+13] =-2;
vtx[vtxCtr+14] = y+sqrSize;
vtx[vtxCtr+15] = x+sqrSize;
vtx[vtxCtr+16] =-2;
vtx[vtxCtr+17] = y+sqrSize;
vtxCtr+=18;
}
}
StoreVertexData(gl, vtx, mFLOOR);
}
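As a sanity check on the vtx[1530] sizing: the columns alternate between 7 and 6 checker squares, giving 7x7 + 6x6 = 85 squares, and each square contributes 2 triangles x 3 vertices x 3 floats = 18 floats. The standalone sketch below reproduces the count (FloorCount is a hypothetical class, not part of the app):

```java
public class FloorCount {
    // Count the floats needed for a checkerboard floor spanning min..max
    // with sqrSize-unit squares, drawing only alternate squares,
    // mirroring the loop structure of BuildFloor.
    static int countFloats(int sqrSize, int min, int max) {
        int floats = 0;
        for (int x = min, offset = 0; x < max; x += sqrSize, offset = sqrSize - offset)
            for (int y = min + offset; y < max; y += sqrSize * 2)
                floats += 18; // 2 triangles * 3 vertices * 3 coordinates
        return floats;
    }

    public static void main(String[] args) {
        System.out.println(countFloats(20, -130, 130)); // 85 squares * 18 = 1530
    }
}
```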
Add the BuildBall method. The ball is created as a grid (longitude\latitude). The top portion of the method calculates all the vertices in the ball. The bottom portion arranges the vertices to generate triangles (each quad is 2 triangles). We only generate vertices for alternating quads. When we draw the ball, we will render the same vertices twice, rotating the ball and changing the color in between renders. Note that the top and bottom rows are created as quads (4 corners), even though they are rendered as triangles (3 corners). This is because every quad in the top row shares the same top vertex. OpenGL ignores triangles with no area, so performance is not an issue.
void BuildBall(GL11 gl)
{
float x[][] = new float[mBallVSliceCnt+1][mBallHSliceCnt+1];
float y[][] = new float[mBallVSliceCnt+1][mBallHSliceCnt+1];
float z[][] = new float[mBallVSliceCnt+1][mBallHSliceCnt+1];
for (int vCtr = 0; vCtr <= mBallVSliceCnt; vCtr++)
{
double vAng = 180.0 / mBallVSliceCnt * vCtr;
float sliceRad = (float) (mBallRad * Math.sin(vAng * mDeg2Rad));
float sliceY = (float) (mBallRad * Math.cos(vAng * mDeg2Rad));
float vertexY = sliceY;
float vertexX = 0;
float vertexZ = 0;
for (int hCtr = 0; hCtr <= mBallHSliceCnt; hCtr++)
{
double hAng = 360.0 / mBallHSliceCnt * hCtr;
vertexX = (float) (sliceRad * Math.sin(hAng * mDeg2Rad));
vertexZ = (float) (sliceRad * Math.cos(hAng * mDeg2Rad));
y[vCtr][hCtr]=vertexY+60;
x[vCtr][hCtr]=vertexX;
z[vCtr][hCtr]=vertexZ;
}
}
int hCnt = x[0].length;
int vCnt = x.length;
float vtx[] = new float[mBallVSliceCnt*mBallHSliceCnt/2*2*3*3];
int vtxCtr = 0;
for (int vCtr = 1; vCtr < vCnt; vCtr++)
for (int hCtr = 1+vCtr%2; hCtr < hCnt; hCtr += 2)
{
vtx[vtxCtr] = x[vCtr-1][hCtr-1];
vtx[vtxCtr+ 1] = y[vCtr-1][hCtr-1];
vtx[vtxCtr+ 2] = z[vCtr-1][hCtr-1];
vtx[vtxCtr+ 3] = x[vCtr][hCtr-1];
vtx[vtxCtr+ 4] = y[vCtr][hCtr-1];
vtx[vtxCtr+ 5] = z[vCtr][hCtr-1];
vtx[vtxCtr+ 6] = x[vCtr-1][hCtr];
vtx[vtxCtr+ 7] = y[vCtr-1][hCtr];
vtx[vtxCtr+ 8] = z[vCtr-1][hCtr];
vtx[vtxCtr+ 9] = x[vCtr][hCtr-1];
vtx[vtxCtr+10] = y[vCtr][hCtr-1];
vtx[vtxCtr+11] = z[vCtr][hCtr-1];
vtx[vtxCtr+12] = x[vCtr-1][hCtr];
vtx[vtxCtr+13] = y[vCtr-1][hCtr];
vtx[vtxCtr+14] = z[vCtr-1][hCtr];
vtx[vtxCtr+15] = x[vCtr][hCtr];
vtx[vtxCtr+16] = y[vCtr][hCtr];
vtx[vtxCtr+17] = z[vCtr][hCtr];
vtxCtr+=18;
}
StoreVertexData(gl, vtx, mBALL);
}
Add the BuildPool method. This creates the water as a triangle fan, where every triangle shares a common central vertex.
void BuildPool(GL11 gl)
{
float vtx[] = new float[(mPoolSliceCnt+2)*3];
int vtxCtr = 0;
vtx[vtxCtr] = 0;
vtx[vtxCtr+1] = 4f;
vtx[vtxCtr+2] = 0;
for (float fAngY = 0;fAngY <= 360;fAngY += 360/mPoolSliceCnt)
{
vtxCtr+=3;
vtx[vtxCtr] = mPoolRad*(float)Math.sin(fAngY*mDeg2Rad);
vtx[vtxCtr+1] = 4f;
vtx[vtxCtr+2] = mPoolRad*(float)Math.cos(fAngY*mDeg2Rad);
}
StoreVertexData(gl, vtx, mPOOL);
}
Add the BuildWall method. This creates the wall of the pool as a triangle strip, where every triangle shares a side with the triangle next to it. Note that the radius is set 2 units larger than the pool in order to prevent Z-fighting (triangle overlap). We will discuss Z-fighting later in this walkthrough.
void BuildWall(GL11 gl)
{
int wallSliceCnt = mPoolSliceCnt;
float wallRad = mPoolRad+2;
float vtx[] = new float[(wallSliceCnt+1)*2*3];
int vtxCtr = 0;
vtx[vtxCtr] = 0;
vtx[vtxCtr+1] = -1;
vtx[vtxCtr+2] = wallRad;
vtxCtr+=3;
vtx[vtxCtr] = 0;
vtx[vtxCtr+1] = 9;
vtx[vtxCtr+2] = wallRad;
for (float ftnAngY = 360/wallSliceCnt;
ftnAngY <= 360; ftnAngY += 360/wallSliceCnt)
{
vtxCtr+=3;
vtx[vtxCtr] = wallRad*(float)Math.sin(ftnAngY*mDeg2Rad);
vtx[vtxCtr+1] = -1;
vtx[vtxCtr+2] = wallRad*(float)Math.cos(ftnAngY*mDeg2Rad);
vtxCtr+=3;
vtx[vtxCtr] = wallRad*(float)Math.sin(ftnAngY*mDeg2Rad);
vtx[vtxCtr+1] = 9;
vtx[vtxCtr+2] = wallRad*(float)Math.cos(ftnAngY*mDeg2Rad);
}
StoreVertexData(gl, vtx, mWALL);
}
Add the BuildDrop method. This creates the vertices for a single drop in the fountain. Every drop has the same coordinates. When we draw the fountain, we use glTranslate and glRotate to adjust the position\angle of each drop.
void BuildDrop(GL11 gl)
{
float vtx[] = {
0f, 0f, 0,
-1f,-1f, 0,
1f,-1f, 0
};
StoreVertexData(gl, vtx, mDROP);
}
Add the BuildSplash method. This creates the vertices for all the splash triangles. A single splash is just a ring of triangles around the drop where it hits the water. The splash triangles never move but are scaled up through the pool when drawn. We'll discuss this later in the walkthrough.
void BuildSplash(GL11 gl)
{
int triCnt = 6;
int vtxCnt = mStreamCnt*9*triCnt;
float[] vtx = new float[vtxCnt];
int vtxCtr = 0;
for (float ftnAngY = 0;ftnAngY < 360;ftnAngY += 360/mStreamCnt)
{
float dropX = mArcRad*1.5f*(float)Math.sin(ftnAngY*mDeg2Rad);
float dropZ = mArcRad*1.5f*(float)Math.cos(ftnAngY*mDeg2Rad);
float mid = 0;
int triCtr = 0;
for (float sAngY = 0;sAngY < 360;sAngY += 360/(2*triCnt))
{
float realAngY = sAngY+ftnAngY;
float sX = (float)Math.sin(realAngY*mDeg2Rad)*(1+2*mid)+dropX;
float sZ = (float)Math.cos(realAngY*mDeg2Rad)*(1+2*mid)+dropZ;
vtx[vtxCtr] = sX;
vtx[vtxCtr+1] = 0+mid*3;
vtx[vtxCtr+2] = sZ;
if (mid%2==0)
{
if (triCtr == 0)
{
vtx[vtxCtr+triCnt*9-3] = sX;
vtx[vtxCtr+triCnt*9-2] = 0;
vtx[vtxCtr+triCnt*9-1] = sZ;
}
else
{
vtx[vtxCtr+3] = sX;
vtx[vtxCtr+4] = 0;
vtx[vtxCtr+5] = sZ;
vtxCtr+=3;
}
triCtr++;
}
else
if (triCtr == triCnt) vtxCtr+=3;
vtxCtr+=3;
mid = 1-mid;
}
}
StoreVertexData(gl, vtx, mSPLASH);
}
Add the StoreVertexData method. This stores the vertex data for each of the objects in GPU memory. Using GPU memory gives us a huge performance increase because we do not need to pass the vertex data to the GPU each time we render the scene. The vertex data is stored using an object index, and we will use this same index when rendering the objects. We also store the buffer length, which will be needed when we retrieve the data. GL_STATIC_DRAW indicates that the vertices will not be changed.
void StoreVertexData(GL11 gl, float[] pVertices, int pObjectNum)
{
FloatBuffer buffer = ByteBuffer.allocateDirect
(pVertices.length * 4)
.order(ByteOrder.nativeOrder())
.asFloatBuffer()
.put(pVertices);
(gl).glBindBuffer(GL11.GL_ARRAY_BUFFER, pObjectNum);
buffer.position(0);
(gl).glBufferData(GL11.GL_ARRAY_BUFFER,
buffer.capacity()*4, buffer, GL11.GL_STATIC_DRAW);
(gl).glBindBuffer(GL11.GL_ARRAY_BUFFER, 0);
mBufferLen[pObjectNum] = buffer.capacity()/3;
}
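The allocateDirect(length * 4) sizing reflects 4 bytes per float, and capacity() afterwards reports the float count, which is why the stored vertex count is capacity()/3. A minimal desktop-Java sketch of the same buffer setup, with the OpenGL calls omitted (BufferSizing is a hypothetical class name):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class BufferSizing {
    // Wrap a vertex array in a direct FloatBuffer, as StoreVertexData does.
    static FloatBuffer wrap(float[] vertices) {
        FloatBuffer buffer = ByteBuffer.allocateDirect(vertices.length * 4) // 4 bytes per float
                .order(ByteOrder.nativeOrder()) // match the device's byte order
                .asFloatBuffer()
                .put(vertices);
        buffer.position(0); // rewind so OpenGL reads from the start
        return buffer;
    }

    public static void main(String[] args) {
        // One triangle: the same coordinates BuildDrop uses.
        float[] triangle = { 0f, 0f, 0f, -1f, -1f, 0f, 1f, -1f, 0f };
        FloatBuffer b = wrap(triangle);
        System.out.println(b.capacity());     // 9 floats
        System.out.println(b.capacity() / 3); // 3 vertices
    }
}
```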
Add the onSurfaceChanged callback. This is called after onSurfaceCreated and each time the phone orientation changes. We initialize the viewport and projection matrix. glLoadIdentity() clears any transforms or rotations we have set. We calculate the distance between the camera and the scene center so we can set the clip region. glFrustumf (discussed later) sets the parameters for the projection view. We then enable the depth test so foreground objects are drawn over background objects. We switch to the ModelView matrix mode so we can draw objects using standard Cartesian coordinates. Lastly, we set mOrientation to the current phone orientation.
@Override
public void onSurfaceChanged(GL10 gl, int pWidth, int pHeight)
{
gl.glViewport(0, 0, pWidth, pHeight);
mScrHeight = pHeight;
mScrWidth = pWidth;
mScrRatio = mScrWidth/mScrHeight;
gl.glMatrixMode(GL11.GL_PROJECTION);
gl.glLoadIdentity();
float camDist = (float)Math.sqrt(mCamXpos*mCamXpos+mCamYpos*
mCamYpos+mCamZpos*mCamZpos);
mClipStart = Math.max(2, camDist-185);
gl.glFrustumf(
-mScrRatio*.5f*mClipStart,
mScrRatio*.5f*mClipStart,
-1f*.5f*mClipStart,
1f*.5f*mClipStart,
mClipStart,
mClipStart+185+Math.min(185, camDist));
gl.glEnable(GL11.GL_DEPTH_TEST);
gl.glMatrixMode(GL11.GL_MODELVIEW);
mOrientation = getResources().getConfiguration().orientation;
}
Begin the onDrawFrame callback. We render the scene here. This is called continuously by the OpenGL system, which assumes there is constant animation requiring constant screen updates. Continuous rendering can be turned off by calling setRenderMode(RENDERMODE_WHEN_DIRTY) and then calling requestRender() whenever the scene needs to be redrawn.
We cast the gl1 parameter to OpenGL 1.1 so we can use the additional 1.1 functionality. This cast will fail if 1.1 is not supported by the device. According to the Android website, every Android device now supports OpenGL ES 1.1.
@Override
public void onDrawFrame(GL10 gl1)
{
GL11 gl = (GL11)gl1;
Add the flag check in case the user moved the camera. If the camera distance changes, we need to update the clipping region so it is aligned with the scene. onSurfaceChanged does the actual update.
if (mResetMatrix)
{
onSurfaceChanged(gl, (int)mScrWidth, (int)mScrHeight);
mResetMatrix = false;
}
Add code to clear the color and depth buffers and reset the matrix. The color and depth buffers are recalculated for each frame.
gl.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity();
Add code to calculate the X angle based on the phone tilt. We will discuss angle calculations later in this walkthrough. AccelY and AccelZ are set in the sensor listener created in the constructor. Note that we don't let the angle pass 90 degrees because the scene would be upside down.
if (UseTiltAngle)
{
if (RotateScene)
{
float HypLen = (float)Math.sqrt
(mCamXpos*mCamXpos+mCamZpos*mCamZpos);
mSceneXAng = 90-(float)Math.atan2(AccelY,AccelZ)*(float)mRad2Deg;
if (mSceneXAng > 89.9) mSceneXAng = 89.9f;
if (mSceneXAng < -89.9) mSceneXAng = -89.9f;
float HypZLen = (float)Math.sqrt(mCamXpos*mCamXpos+
mCamYpos*mCamYpos+mCamZpos*mCamZpos);
mCamYpos = HypZLen*(float)Math.sin(mSceneXAng*mDeg2Rad);
float HypLenNew = HypZLen*
(float)Math.cos(mSceneXAng*mDeg2Rad);
mCamZpos *= HypLenNew/HypLen;
mCamXpos *= HypLenNew/HypLen;
}
else
{
mCamXang = (float)Math.atan2(AccelY,AccelZ)*(float)mRad2Deg - 90;
if (mCamXang > 89.9) mCamXang = 89.9f;
if (mCamXang < -89.9) mCamXang = -89.9f;
ChangeCameraAngle(0, 0);
}
}
Add the gluLookAt call. This tells the OpenGL system where the camera is and its view direction. The actual values of the target variables don't matter, only the direction from the camera (if the camera is at 0,0,0 then target 1,2,3 gives the same result as target 2,4,6). The last three parameters set the up vector. Positive Y is up in our scene, so we set Y=100; it can be any positive number because only the direction matters.
GLU.gluLookAt(gl, mCamXpos, mCamYpos, mCamZpos, mTargetX, mTargetY,
mTargetZ, 0f, 100.0f, 0.0f);
Add the code to calculate the elapsed time since the last frame was rendered. mAngCtr is advanced based on the time change. We do this because some frames take longer than others and we want to maintain a smooth animation. A larger time gap results in a larger angle jump, causing the animation to catch up. If the animation is paused, we skip the angle change. Note that onDrawFrame is still continuously called even when paused.
long now = SystemClock.elapsedRealtime();
long diff = now - mLastTime;
mLastTime = now;
if (!Paused)
{
mAngCtr += diff/100.0;
if (mAngCtr > 360) mAngCtr -= 360;
}
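As a worked example of this time step: at a steady 20 fps the frame gap is 50 ms, so each frame advances the angle by 0.5 degrees, while a 200 ms hiccup advances it by 2 degrees in one jump, letting the animation catch up. A standalone sketch of the update (AngleStep is a hypothetical helper, not part of the app):

```java
public class AngleStep {
    // Advance an animation angle by elapsed milliseconds / 100,
    // wrapping at 360, as onDrawFrame does.
    static float step(float angle, long elapsedMs) {
        angle += elapsedMs / 100.0f;
        if (angle > 360) angle -= 360;
        return angle;
    }

    public static void main(String[] args) {
        System.out.println(step(0f, 50));     // 0.5 degrees at a steady 20 fps
        System.out.println(step(0f, 200));    // 2.0 degrees after a slow frame
        System.out.println(step(359.9f, 50)); // wraps past 360
    }
}
```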
Add the call to DrawSceneObjects. This is where all the objects in our scene get drawn to the screen.
DrawSceneObjects(gl);
Finish the onDrawFrame method by adding the code to calculate and display the FPS (Frames Per Second). The mFrameTime array stores the frame times for the last 20 frames. To get the average frame time, we just take the time between this frame and the frame 20 frames ago and divide by 20. The FPS display is updated every 10 frames. We will discuss this calculation in more detail later.
if (ShowFPS)
{
int thisFrameTime = (int)(SystemClock.elapsedRealtime()-mStartTime);
mFPS = (mFrameTime.length)*1000f/(thisFrameTime-mFrameTime[mFramePos]);
mFrameTime[mFramePos] = (int)(SystemClock.elapsedRealtime()-mStartTime);
if (mFramePos < mFrameTime.length-1)
mFramePos++;
else
mFramePos=0;
if (++mFPSDispCtr == 10)
{
mFPSDispCtr=0;
SetStatusMsg(Math.round(mFPS*100)/100f+" fps");
}
}
}
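The rolling average above works because mFramePos always points at the oldest entry in the array, so the difference between the current time and that entry spans exactly 20 frames. A standalone sketch of the same ring-buffer technique (FpsMeter is a hypothetical class, not part of the app):

```java
public class FpsMeter {
    private final int[] frameTime = new int[20]; // timestamps of the last 20 frames, ms
    private int pos = 0; // always points at the oldest entry

    // Record a frame at the given elapsed time and return the average FPS
    // over the last 20 frames.
    float onFrame(int elapsedMs) {
        float fps = frameTime.length * 1000f / (elapsedMs - frameTime[pos]);
        frameTime[pos] = elapsedMs;
        pos = (pos + 1) % frameTime.length;
        return fps;
    }

    public static void main(String[] args) {
        FpsMeter meter = new FpsMeter();
        float fps = 0;
        // Simulate 40 frames at a steady 50 ms per frame (20 fps).
        for (int i = 1; i <= 40; i++)
            fps = meter.onFrame(i * 50);
        System.out.println(fps); // 20.0 once the window is full
    }
}
```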
Add the DrawSceneObjects method. All the scene objects are drawn from here. For each object (except the fountain), we set the color then call DrawObject to render the vertices for the object. For the ball, we use mAngCtr to set the current angle of rotation. We only stored vertices for half of the ball, so we rotate the ball by one slice and re-render the same vertices with a different color. For the splashes, the splash triangles were created at Y=0. We want to scale at Y=0 then move the scaled splash to the surface. The operations appear out of order here (translate, then scale) because OpenGL applies transformations in reverse order: the last one specified affects the vertices first. mRepeatLen is used so the splash cycles with the drop movement. No billboarding is used for the splash triangles since they surround the drop. The splashes are only shown if both the fountain and pool are shown.
void DrawSceneObjects(GL11 gl)
{
if (ShowBall)
{
gl.glPushMatrix();
gl.glColor4f(.5f, .5f, .5f, 1);
gl.glRotatef(mAngCtr, 0.0f, 1.0f, 0f);
DrawObject(gl, GL11.GL_TRIANGLES, mBALL);
gl.glPopMatrix();
gl.glPushMatrix();
gl.glColor4f(0.7f, 1f, 0.7f, 1f);
gl.glRotatef(mAngCtr+360f/mBallHSliceCnt, 0.0f, 1.0f, 0f);
DrawObject(gl, GL11.GL_TRIANGLES, mBALL);
gl.glPopMatrix();
}
if (ShowFountain)
DrawFountain(gl);
if (ShowPool)
{
gl.glColor4f(0.2f, 0.0f, 0.0f, 1f);
DrawObject(gl, GL11.GL_TRIANGLE_STRIP, mWALL);
gl.glColor4f(0.2f, 0.0f, 0.6f, 1f);
DrawObject(gl, GL11.GL_TRIANGLE_FAN, mPOOL);
}
if (ShowFountain && ShowPool)
{
gl.glPushMatrix();
gl.glColor4f(.9f, 0.9f, 0.9f, 1f);
gl.glTranslatef(0, 3, 0);
gl.glScalef(1f, Math.abs((
mRepeatLen/2f-mAngCtr%(mRepeatLen))*0.4f), 1f);
DrawObject(gl, GL11.GL_TRIANGLES, mSPLASH);
gl.glPopMatrix();
}
if (ShowFloor)
{
gl.glColor4f(0.0f, 0.0f, 0.4f, 1f);
DrawObject(gl, GL11.GL_TRIANGLES, mFLOOR);
}
}
Add the DrawObject method. This renders the vertices in the GPU buffer for the specified object index. The shape type passed in (GL_TRIANGLES/GL_TRIANGLE_STRIP/GL_TRIANGLE_FAN) tells OpenGL how the vertices are organized in memory.
void DrawObject(GL11 gl, int pShapeType, int pObjNum)
{
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, pObjNum);
gl.glVertexPointer(3, GL11.GL_FLOAT, 0, 0);
gl.glDrawArrays(pShapeType, 0, mBufferLen[pObjNum]);
gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0);
}
Add the SetStatusMsg method. This updates the TextView with new text. mTagStore is used to pass the new text to the Runnable; if we used a non-final local variable, the compiler would complain. We need to use a Runnable so the text update does not block the render thread.
public void SetStatusMsg(String pMsg)
{
mTagStore.setTag(pMsg);
mThreadHandler.post(new Runnable() {
public void run() { mTxtMsg.setText(mTagStore.getTag().toString()); }
});
}
Add the SetShowFPS method. This sets the ShowFPS flag and clears the TextView (in case ShowFPS is false).
public void SetShowFPS(boolean pShowFPS)
{
ShowFPS = pShowFPS;
SetStatusMsg("");
}
Add the SwapCenter method. This alternates the rotation center between camera and scene. If the rotation is set to scene, the camera always looks at the scene center (0,0,0) and the camera moves around the center (we don't actually rotate the scene). If the rotation is set to camera, the camera turns and we move the view target.
public void SwapCenter()
{
RotateScene = !RotateScene;
if (RotateScene)
{
float hypLen = (float)Math.sqrt(mCamXpos*mCamXpos+
mCamZpos*mCamZpos);
mSceneYAng = (float)Math.atan2(mCamXpos,mCamZpos)*(float)mRad2Deg;
mSceneXAng = (float)Math.atan2(mCamYpos,hypLen)*(float)mRad2Deg;
mTargetX = mTargetY = mTargetZ = 0;
}
else
{
mCamYang = mSceneYAng+180;
mCamXang = -mSceneXAng;
ChangeCameraAngle(0,0);
}
}
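For example, with the camera at its initial position (0, 60, 180), the recovered angles are mSceneYAng = atan2(0, 180) = 0 and mSceneXAng = atan2(60, 180), about 18.43 degrees. A quick standalone check of that math (AngleRecovery is a hypothetical class, not part of the app):

```java
public class AngleRecovery {
    static final double RAD2DEG = 180.0 / Math.PI;

    // Recover scene X (pitch) and Y (yaw) angles from a camera position,
    // as SwapCenter does when switching to scene rotation.
    static float[] recover(float camX, float camY, float camZ) {
        float hypLen = (float) Math.sqrt(camX * camX + camZ * camZ);
        float yAng = (float) (Math.atan2(camX, camZ) * RAD2DEG);
        float xAng = (float) (Math.atan2(camY, hypLen) * RAD2DEG);
        return new float[] { xAng, yAng };
    }

    public static void main(String[] args) {
        float[] angles = recover(0f, 60f, 180f);
        System.out.println(angles[1]); // yaw 0: camera sits on the +Z axis
        System.out.println(angles[0]); // pitch ~18.43: atan2(60, 180)
    }
}
```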
Add the ChangeSceneAngle method. This is called when the RotateScene flag is set and the user rotates the view. We move the camera around the center of the scene (0,0,0), keeping the same distance. We will discuss angle calculations later in this walkthrough.
void ChangeSceneAngle(float pChgXang, float pChgYang)
{
float hypLen = (float)Math.sqrt(mCamXpos*mCamXpos+
mCamZpos*mCamZpos);
if (pChgYang != 0)
{
mSceneYAng += pChgYang;
if (mSceneYAng < 0) mSceneYAng += 360;
if (mSceneYAng > 360) mSceneYAng -= 360;
mCamXpos = hypLen*(float)Math.sin(mSceneYAng*mDeg2Rad);
mCamZpos = hypLen*(float)Math.cos(mSceneYAng*mDeg2Rad);
}
if (pChgXang != 0)
{
float hypZLen = (float)Math.sqrt
(hypLen*hypLen+mCamYpos*mCamYpos);
mSceneXAng += pChgXang;
if (mSceneXAng > 89.9) mSceneXAng = 89.9f;
if (mSceneXAng < -89.9) mSceneXAng = -89.9f;
mCamYpos = hypZLen*(float)Math.sin(mSceneXAng*mDeg2Rad);
float HypLenNew =
hypZLen*(float)Math.cos(mSceneXAng*mDeg2Rad);
mCamZpos *= HypLenNew/hypLen;
mCamXpos *= HypLenNew/hypLen;
}
}
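The key property of the yaw branch is that the camera's horizontal distance from the origin is unchanged: only the angle moves. A standalone check (OrbitCheck is a hypothetical class, not part of the app):

```java
public class OrbitCheck {
    static final double DEG2RAD = Math.PI / 180.0;

    // Rotate the camera around the Y axis to a new yaw angle,
    // preserving its horizontal distance from the origin,
    // as the pChgYang branch of ChangeSceneAngle does.
    static float[] orbitY(float camX, float camZ, float newYawDeg) {
        float hypLen = (float) Math.sqrt(camX * camX + camZ * camZ);
        float x = hypLen * (float) Math.sin(newYawDeg * DEG2RAD);
        float z = hypLen * (float) Math.cos(newYawDeg * DEG2RAD);
        return new float[] { x, z };
    }

    public static void main(String[] args) {
        float[] p = orbitY(0f, 180f, 90f); // quarter turn around the scene
        System.out.println(p[0]); // ~180: camera now on the +X axis
        System.out.println(p[1]); // ~0
    }
}
```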
Add the ChangeCameraAngle method. This is called when the RotateScene flag is not set and the user rotates the view. We rotate the camera around its center point, then update the camera's target view point based on the updated angle. The distance between the camera and the target remains constant.
void ChangeCameraAngle(float pChgXang, float pChgYang)
{
mCamXang += pChgXang;
mCamYang += pChgYang;
if (mCamYang > 360) mCamYang -= 360;
if (mCamYang < 0) mCamYang += 360;
if (mCamXang > 89.9) mCamXang = 89.9f;
if (mCamXang < -89.9) mCamXang = -89.9f;
mTargetY = mCamYpos+mViewRad*(float)Math.sin(mCamXang*mDeg2Rad);
mTargetX = mCamXpos+mViewRad*(float)Math.cos(mCamXang*mDeg2Rad)*
(float)Math.sin(mCamYang*mDeg2Rad);
mTargetZ = mCamZpos+mViewRad*(float)Math.cos(mCamXang*mDeg2Rad)*
(float)Math.cos(mCamYang*mDeg2Rad);
}
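The spherical math here guarantees the target always sits exactly mViewRad units from the camera. A standalone check (ViewTarget is a hypothetical class, not part of the app):

```java
public class ViewTarget {
    static final double DEG2RAD = Math.PI / 180.0;

    // Place the view target viewRad units from the camera in the direction
    // given by the camera's X (pitch) and Y (yaw) angles, as ChangeCameraAngle does.
    static float[] target(float camX, float camY, float camZ,
                          float xAng, float yAng, float viewRad) {
        float ty = camY + viewRad * (float) Math.sin(xAng * DEG2RAD);
        float tx = camX + viewRad * (float) Math.cos(xAng * DEG2RAD)
                        * (float) Math.sin(yAng * DEG2RAD);
        float tz = camZ + viewRad * (float) Math.cos(xAng * DEG2RAD)
                        * (float) Math.cos(yAng * DEG2RAD);
        return new float[] { tx, ty, tz };
    }

    public static void main(String[] args) {
        // Initial camera pose: yaw 180 looks down -Z toward the fountain.
        float[] t = target(0f, 60f, 180f, 0f, 180f, 100f);
        System.out.println(t[0] + " " + t[1] + " " + t[2]); // ~0, 60, 80
        double dx = t[0], dy = t[1] - 60, dz = t[2] - 180;
        System.out.println(Math.sqrt(dx * dx + dy * dy + dz * dz)); // ~100: distance to target
    }
}
```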
Add the MoveCamera method. This is called when the camera moves forward or backward. If the RotateScene flag is set, the camera always moves toward\away from the scene center (0,0,0); it can never pass the center. If RotateScene is not set, the camera moves toward\away from the camera target and the target is moved to match (the distance to the target stays constant). We set mResetMatrix to true so the clip region is updated during the next frame render.
void MoveCamera(float pDist)
{
if (RotateScene)
{
float curdist = (float)Math.sqrt(
mCamXpos*mCamXpos +
mCamYpos*mCamYpos +
mCamZpos*mCamZpos);
if (pDist < 0 && curdist + pDist < 0.01)
pDist = 0.01f-curdist;
float ratio = pDist/curdist;
float chgCamX = (mCamXpos)*ratio;
float chgCamY = (mCamYpos)*ratio;
float chgCamZ = (mCamZpos)*ratio;
mCamXpos += chgCamX;
mCamYpos += chgCamY;
mCamZpos += chgCamZ;
}
else
{
float ratio = pDist/mViewRad;
float chgCamX = (mCamXpos-mTargetX)*ratio;
float chgCamY = (mCamYpos-mTargetY)*ratio;
float chgCamZ = (mCamZpos-mTargetZ)*ratio;
mCamXpos += chgCamX;
mCamYpos += chgCamY;
mCamZpos += chgCamZ;
mTargetX += chgCamX;
mTargetY += chgCamY;
mTargetZ += chgCamZ;
}
mResetMatrix = true;
}
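In the RotateScene branch, scaling the position by pDist/curdist changes the camera's distance from the origin by exactly pDist. A standalone check (MoveCheck is a hypothetical class, not part of the app):

```java
public class MoveCheck {
    // Move the camera toward (negative pDist) or away from (positive pDist)
    // the origin, as MoveCamera does when RotateScene is set.
    static float[] move(float x, float y, float z, float pDist) {
        float curdist = (float) Math.sqrt(x * x + y * y + z * z);
        if (pDist < 0 && curdist + pDist < 0.01f)
            pDist = 0.01f - curdist; // never pass through the scene center
        float ratio = pDist / curdist;
        return new float[] { x + x * ratio, y + y * ratio, z + z * ratio };
    }

    public static void main(String[] args) {
        float[] p = move(0f, 60f, 180f, -5f); // tap top of screen: move in 5 units
        double d = Math.sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
        System.out.println(d); // ~184.7: was sqrt(60^2 + 180^2) ~ 189.7
    }
}
```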
Add the onTouchEvent callback. This is called when the user touches the screen or drags across it. If the user touches and releases without dragging (a drag of 5 pixels or less), we assume it's a tap and move the camera forward\backward. If the user drags, we update the view angle based on the drag distance. Tapping at the top of the screen moves the camera forward; tapping at the bottom moves the camera back.
public boolean onTouchEvent(final MotionEvent pEvent)
{
if (pEvent.getAction() == MotionEvent.ACTION_DOWN)
{
mDragStartX = pEvent.getX();
mDragStartY = pEvent.getY();
mDownX = pEvent.getX();
mDownY = pEvent.getY();
return true;
}
else if (pEvent.getAction() == MotionEvent.ACTION_UP)
{
if ((Math.abs(mDownX - pEvent.getX()) <= 5) &&
(Math.abs(mDownY - pEvent.getY()) <= 5))
{
if (pEvent.getY() < mScrHeight/2.0)
MoveCamera(-5);
else if (pEvent.getY() >
mScrHeight/2.0)
MoveCamera(5);
}
return true;
}
else if (pEvent.getAction() == MotionEvent.ACTION_MOVE)
{
if (Math.abs(pEvent.getX() - mDragStartX) > 5)
{
if (RotateScene)
ChangeSceneAngle(0,
(mDragStartX - pEvent.getX())/3f);
else
ChangeCameraAngle(0,
(mDragStartX - pEvent.getX())/3f);
mDragStartX = pEvent.getX();
}
if (Math.abs(pEvent.getY() -
mDragStartY) > 5)
{
if (RotateScene)
ChangeSceneAngle(
(pEvent.getY() - mDragStartY)/3f, 0);
else
ChangeCameraAngle(
(mDragStartY - pEvent.getY())/3f, 0);
mDragStartY = pEvent.getY();
}
return true;
}
return super.onTouchEvent(pEvent);
}
Add the DrawFountain method. This calculates the billboard angle at (0,0,0) and the position of each drop. We assume each drop travels in an arc, so we just divide the arc (180 degrees) by the drop count and use that as the drop position. Each drop only travels a short distance (mRepeatLen) then repeats. mAngCtr (set in onDrawFrame) increases the angle offset each frame, creating the animation. You could add some randomness here so each drop has a slightly different path, but for now, all the drops follow the same arc.
void DrawFountain(GL11 gl)
{
float angY = 270-(float)Math.atan2(mCamZpos,mCamXpos)*
(float)mRad2Deg;
float hypLen = (float)Math.sqrt(mCamXpos*mCamXpos+
mCamZpos*mCamZpos);
float angX = (float)Math.atan2(mCamYpos,hypLen)*(float)mRad2Deg;
int dropCtr = 0;
for (float ftnAngY = 0;ftnAngY < 360;ftnAngY += 360/mStreamCnt)
{
float arcAng = mAngCtr%(mRepeatLen);
for (;arcAng < 180;arcAng += mRepeatLen)
{
float dropRad = 0.75f*(mArcRad-mArcRad*
(float)Math.cos(arcAng*mDeg2Rad));
dropCoords[dropCtr][1] = 1.5f*mArcRad*
(float)Math.sin(arcAng*mDeg2Rad);
dropCoords[dropCtr][0] = dropRad*
(float)Math.sin(ftnAngY*mDeg2Rad);
dropCoords[dropCtr][2] = dropRad*
(float)Math.cos(ftnAngY*mDeg2Rad);
dropCtr++;
}
}
gl.glColor4f(0.5f, 0.5f, 1f, 1f);
DrawDropTriangles(gl, angX, angY, dropCoords);
}
Add the DrawDropTriangles method. This renders each drop as a separate triangle. The pDropCoords array only holds the top vertex of each triangle. If the MultiBillboard flag is set, we recalculate the billboard angle for each drop so each drop appears as a perfect triangle facing the camera. If MultiBillboard is not set, we just use the billboard angle for (0,0,0). We will discuss billboarding later.
void DrawDropTriangles(GL11 gl, float pAngX, float pAngY, float[][] pDropCoords)
{
int TriCnt = pDropCoords.length;
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, mDROP);
gl.glVertexPointer(3, GL11.GL_FLOAT, 0, 0);
for (int ctr = 0;ctr < TriCnt;ctr++)
{
gl.glPushMatrix();
gl.glTranslatef(
pDropCoords[ctr][0], pDropCoords[ctr][1],pDropCoords[ctr][2]);
if (MultiBillboard)
{
float hypLen = 0;
float distX = mCamXpos-pDropCoords[ctr][0];
float distY = mCamYpos-pDropCoords[ctr][1];
float distZ = mCamZpos-pDropCoords[ctr][2];
hypLen =
(float)Math.sqrt(distX*distX+distZ*distZ);
pAngY = 270-(float)Math.atan2(distZ,distX)*(float)mRad2Deg;
pAngX = (float)Math.atan2(distY,hypLen)*(float)mRad2Deg;
}
gl.glRotatef(pAngY, 0, 1, 0);
gl.glRotatef(pAngX, 1, 0, 0);
gl.glDrawArrays(GL11.GL_TRIANGLES, 0, mBufferLen[mDROP]);
gl.glPopMatrix();
}
gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0);
}
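For example, with the camera at its starting position (0, 60, 180) and a drop at the origin, the per-drop angles come out to a yaw of 180 degrees (facing back up the +Z axis toward the camera) and a pitch of about 18.43 degrees. A standalone version of the per-drop calculation (BillboardAngles is a hypothetical class, not part of the app):

```java
public class BillboardAngles {
    static final double RAD2DEG = 180.0 / Math.PI;

    // Compute the yaw/pitch that turn a drop triangle toward the camera,
    // as the MultiBillboard branch of DrawDropTriangles does per drop.
    static float[] angles(float camX, float camY, float camZ,
                          float dropX, float dropY, float dropZ) {
        float distX = camX - dropX;
        float distY = camY - dropY;
        float distZ = camZ - dropZ;
        float hypLen = (float) Math.sqrt(distX * distX + distZ * distZ);
        float yaw = 270f - (float) (Math.atan2(distZ, distX) * RAD2DEG);
        float pitch = (float) (Math.atan2(distY, hypLen) * RAD2DEG);
        return new float[] { yaw, pitch };
    }

    public static void main(String[] args) {
        // Camera at its starting position, drop at the fountain center.
        float[] a = angles(0f, 60f, 180f, 0f, 0f, 0f);
        System.out.println(a[0]); // 180: face the camera on the +Z axis
        System.out.println(a[1]); // ~18.43: tilt up toward the raised camera
    }
}
```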
Add the ShowMaxDepthBits method. This determines the maximum size of the depth buffer supported by your device. It is not called in our application, but can be useful for testing. Note that it uses the EGL10, EGLContext and EGLDisplay classes from the javax.microedition.khronos.egl package, so those imports must be present.
void ShowMaxDepthBits()
{
EGL10 egl = (EGL10)EGLContext.getEGL();
EGLDisplay dpy = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
EGLConfig[] conf = new EGLConfig[100];
egl.eglGetConfigs(dpy, conf, 100, null);
int maxBits = 0;
int[] value = new int[1];
for(int i = 0; i < 100 && conf[i] != null; i++)
{
egl.eglGetConfigAttrib(dpy, conf[i], EGL10.EGL_DEPTH_SIZE, value);
maxBits = value[0]>maxBits ? value[0] : maxBits;
}
SetStatusMsg("DepthBits "+maxBits);
}
Finish the FountainGLRenderer
class with two test methods. These were used during testing but are no longer called by the application. They may be useful for debugging or adding additional objects to the scene. For maximum performance, it is better to use the StoreVertexData
\DrawObject
methods, though that requires a more complicated setup.
void DrawQuad(GL11 gl, float[] pX, float[] pY,
float[] pZ)
{
float[] vtx = new float[12];
int i = 0;
vtx[i++]=pX[0]; vtx[i++]=pY[0]; vtx[i++]=pZ[0];
vtx[i++]=pX[1]; vtx[i++]=pY[1]; vtx[i++]=pZ[1];
vtx[i++]=pX[3]; vtx[i++]=pY[3]; vtx[i++]=pZ[3];
vtx[i++]=pX[2]; vtx[i++]=pY[2]; vtx[i++]=pZ[2];
FloatBuffer buffer;
ByteBuffer vbb =
ByteBuffer.allocateDirect(vtx.length * 4);
vbb.order(ByteOrder.nativeOrder());
buffer = vbb.asFloatBuffer();
buffer.put(vtx);
buffer.position(0);
gl.glVertexPointer(3, GL11.GL_FLOAT, 0, buffer);
gl.glDrawArrays(GL11.GL_TRIANGLE_STRIP, 0, 4);
}
void DrawPoint(GL11 gl, float pVertexX, float pVertexY, float pVertexZ)
{
FloatBuffer buffer;
float[] vtx = new float[3];
int i=0;
vtx[i++]=pVertexX; vtx[i++]=pVertexY; vtx[i++]=pVertexZ;
ByteBuffer vbb =
ByteBuffer.allocateDirect(vtx.length * 4);
vbb.order(ByteOrder.nativeOrder());
buffer = vbb.asFloatBuffer();
buffer.put(vtx);
buffer.position(0);
gl.glVertexPointer(3, GL11.GL_FLOAT, 0, buffer);
gl.glDrawArrays(GL11.GL_POINTS, 0, 1);
}
}
Coding the FountainGLActivity Class
This is the class that gets used when the application first starts. For our application, it is used to create the FountainGLRenderer
class and process the options menu.
Open FountainGLActivity.java.
Remove all the existing code from this file.
Add the package
name and import
s needed for the Activity.
package droid.fgl;
import droid.fgl.FountainGLRenderer;
import android.app.Activity;
import android.content.SharedPreferences;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;
import android.view.Window;
import android.view.WindowManager.LayoutParams;
Begin the FountainGLActivity
class and add two variables. mRenderer
will hold a reference to the FountainGLRenderer
instance and mMenuList
will be used to store the items of the options menu.
public class FountainGLActivity extends Activity
{
FountainGLRenderer mRenderer = null;
MenuItem[] mMenuList = new MenuItem[10];
Add the onCreate
callback. This is called when the application first starts and when the phone changes orientation (Portrait\Landscape). First, we set the application to full screen and keep the screen awake, then call the parent implementation. We create the FountainGLRenderer
instance passing the instance of the Activity. We then load the user preferences. If the preferences are not available, the defaults are used. We then call SwapCenter
twice to ensure that the camera and scene angles are set properly.
@Override
public void onCreate(Bundle savedInstanceState) {
requestWindowFeature(Window.FEATURE_NO_TITLE);
getWindow().setFlags(0xFFFFFFFF,
LayoutParams.FLAG_FULLSCREEN|LayoutParams.FLAG_KEEP_SCREEN_ON);
super.onCreate(savedInstanceState);
if (mRenderer == null)
mRenderer = new FountainGLRenderer(this);
SharedPreferences sp = getSharedPreferences("FountainGL", 0);
mRenderer.ShowBall = sp.getBoolean("ShowBall", mRenderer.ShowBall);
mRenderer.ShowFountain = sp.getBoolean("ShowFountain", mRenderer.ShowFountain);
mRenderer.ShowFloor = sp.getBoolean("ShowFloor", mRenderer.ShowFloor);
mRenderer.ShowPool = sp.getBoolean("ShowPool", mRenderer.ShowPool);
mRenderer.ShowFPS = sp.getBoolean("ShowFPS", mRenderer.ShowFPS);
mRenderer.UseTiltAngle = sp.getBoolean("UseTiltAngle", mRenderer.UseTiltAngle);
mRenderer.RotateScene = sp.getBoolean("RotateScene", mRenderer.RotateScene);
mRenderer.SwapCenter();
mRenderer.SwapCenter();
}
Add the onPrepareOptionsMenu
callback. This is called each time the menu is shown so we can change the menu as needed. All of the user options are boolean toggles, so we just set each menu option based on the current toggle setting. Note that the menu can only hold 5 items, so the last five items will go to the overflow menu (user must click More). The first five items should be the most used.
@Override
public boolean onPrepareOptionsMenu(Menu menu)
{
menu.clear();
mMenuList[0] = menu.add((mRenderer.ShowBall?"Hide":"Show")+" Ball");
mMenuList[1] = menu.add((mRenderer.ShowFloor?"Hide":"Show")+" Floor");
mMenuList[2] = menu.add((mRenderer.ShowFountain?"Hide":"Show")+" Fountain");
mMenuList[3] = menu.add((mRenderer.ShowPool?"Hide":"Show")+" Pool");
mMenuList[4] = menu.add("Rotate "+(mRenderer.RotateScene?"Camera":"Scene"));
mMenuList[5] = menu.add("Use "+(mRenderer.UseTiltAngle?"Touch":"Tilt")+" Angle");
mMenuList[6] = menu.add((mRenderer.MultiBillboard?"Single":"Multi")+" Billboard");
mMenuList[7] = menu.add((mRenderer.ShowFPS?"Hide":"Show")+" FPS");
mMenuList[8] = menu.add(mRenderer.Paused?"Unpause":"Pause");
mMenuList[9] = menu.add("Exit");
return super.onPrepareOptionsMenu(menu);
}
Finish off the FountainGLActivity
class by adding the onOptionsItemSelected
callback. This is called when the user chooses a menu item. For the RotateScene
option, we call SwapCenter
because we need to recalculate the camera or scene angles when the center of rotation changes. For the other options, we just toggle the current setting. For Exit, finish is called to close the application. After changing the setting, the settings are persisted so they will be the same for the next application run.
@Override
public boolean onOptionsItemSelected(MenuItem item)
{
if (item == mMenuList[0])
mRenderer.ShowBall = !mRenderer.ShowBall;
else if (item == mMenuList[1])
mRenderer.ShowFloor = !mRenderer.ShowFloor;
else if (item == mMenuList[2])
mRenderer.ShowFountain = !mRenderer.ShowFountain;
else if (item == mMenuList[3])
mRenderer.ShowPool = !mRenderer.ShowPool;
else if (item == mMenuList[4])
mRenderer.SwapCenter();
else if (item == mMenuList[5])
mRenderer.UseTiltAngle = !mRenderer.UseTiltAngle;
else if (item == mMenuList[6])
mRenderer.MultiBillboard = !mRenderer.MultiBillboard;
else if (item == mMenuList[7])
mRenderer.SetShowFPS(!mRenderer.ShowFPS);
else if (item == mMenuList[8])
mRenderer.Paused = !mRenderer.Paused;
else if (item == mMenuList[9])
finish();
getSharedPreferences("FountainGL", 0).edit()
.putBoolean("ShowBall", mRenderer.ShowBall)
.putBoolean("ShowFountain", mRenderer.ShowFountain)
.putBoolean("ShowPool", mRenderer.ShowPool)
.putBoolean("ShowFloor", mRenderer.ShowFloor)
.putBoolean("ShowFPS", mRenderer.ShowFPS)
.putBoolean("UseTiltAngle", mRenderer.UseTiltAngle)
.putBoolean("RotateScene", mRenderer.RotateScene)
.commit();
return super.onOptionsItemSelected(item);
}
}
And that finishes off the application code. Now we're ready to run the application and see the scene we created.
Build the project (Project->Build All). If you have Build Automatically set, the project will rebuild each time you save a source file.
Running the App
In Eclipse, press Ctrl-F11 to start the application (or F11 to debug).
After a few seconds (if everything goes right), the application should start on the virtual device (or your phone if it's attached).
To change the orientation of the virtual device, use keypad 9 (NumLock must be off). To test the phone tilt functionality, you will need to use your actual phone. The virtual device does not tilt.
To exit the app, use the back button on your phone (or Exit) or choose Run->Terminate in Eclipse.
To install the application to your phone using an APK file:
On your phone, in Settings->Applications, enable Unknown sources to allow non-market apps on your phone.
In Eclipse, choose File->Export..->Android-> Export Android Application.
Click Next.
Enter FountainGL
as the project name.
Click Next.
If you already have a keystore, choose Use existing keystore. If not, here are the steps to create one.
Choose Create new keystore. Enter a file name (no extension is needed) and a password.
Click Next.
For Alias and Password, you can use the same values you entered into the previous screen. Set validity to 100 years. Enter any name in the Name field. If you plan to publish any apps using this keystore, you should probably use your real information.
Click Next.
Enter the file name for your apk file.
Click Finish.
To install the apk file onto your phone, use the adb tool in the android-sdk\platform-tools folder. If you don't know the folder, just search your computer for adb.exe.
To install the apk file, use this command line:
adb install C:\FountainGL.apk
You can also use one of the (free) installer apps from the Android market which lets you install apk files from the phone's SD card.
Once the install is complete, FountainGL
should be available in your phone's application list.
Congratulations on your new application. Be sure to test the options to see how the FPS is affected and the effect of billboarding.
The remainder of this walkthrough discusses some of the concepts used in this application.
Calculating Angles and Coordinates
For those of us that haven't touched geometry since high school, here's a quick refresher. I've abbreviated arccos\arcsin\arctan to acos\asin\atan to match the Java functions.
Given a right triangle:
h = √(x² + y²)
| x = h*cos(θ) | θ = acos(x/h) |
| y = h*sin(θ) | θ = asin(y/h) |
| y = x*tan(θ) | θ = atan(y/x) |
The atan2 function
The equations above that compute x and y are accurate for the full 360 degrees. The inverse functions (acos/asin/atan) are only accurate for 180 degrees; the other 180 degrees will produce the same angles.
Consider the diagram below:
Here we have 2 angles, 45 and 225 degrees. If we compute the coordinates from the angles, the results are correct:
h = √(5² + 5²) = 7.07
x = 7.07*cos(45) = 5    y = 7.07*sin(45) = 5
x = 7.07*cos(225) = -5    y = 7.07*sin(225) = -5
If we compute the angles from the coordinates, we run into a problem:
θ = acos(5/7.07) = 45    Correct
θ = acos(-5/7.07) = 135    Wrong! We want 225 (or -135).
This is because only one coordinate's sign is used in the formula. The other variable used is the hypotenuse (h), which is always positive. If we try using atan, the same issue occurs: atan(5/5) = atan(-5/-5)
We could solve this by adding a check in our code: if (y<0) Angle = -Angle;
Fortunately, most programming languages include the atan2 function to solve this exact issue. atan2 considers both coordinate signs when computing the angle:
θ = atan2(y,x)
θ = atan2(5,5) = 45    Correct
θ = atan2(-5,-5) = -135    Correct
Note that in Excel, the ATAN2 function has the parameters reversed (x,y).
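The quadrant problem can be demonstrated with a few lines of plain Java (a standalone sketch, not part of the project code):

```java
public class Atan2Demo
{
    // Angle from coordinates using acos only: the sign of y is lost
    // because only x and the (always positive) hypotenuse are used.
    static double angleAcos(double x, double y)
    {
        double h = Math.sqrt(x*x + y*y);
        return Math.toDegrees(Math.acos(x / h));
    }

    // Angle using atan2: both coordinate signs are considered.
    static double angleAtan2(double x, double y)
    {
        return Math.toDegrees(Math.atan2(y, x));
    }

    public static void main(String[] args)
    {
        System.out.println(angleAcos(5, 5));    // 45 - correct
        System.out.println(angleAcos(-5, -5));  // 135 - wrong, we want -135
        System.out.println(angleAtan2(5, 5));   // 45 - correct
        System.out.println(angleAtan2(-5, -5)); // -135 - correct
    }
}
```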
Working in 3D
The scene in our OpenGL program is based in 3D, so we need to compute angles and coordinates in 3 dimensions.
Here is the process to calculate the scene angles from the camera coordinates.
h = √(cx² + cz²)    hz = √(cx² + cy² + cz²)
β = atan2(cx, cz)    α = atan2(cy, h)
To calculate the camera coordinates from the scene angles (and hz), we just reverse the process.
h = hz * cos(α)    cy = hz * sin(α)    cx = h * sin(β)    cz = h * cos(β)
When we rotate the camera, the calculations are the same except that the camera is at the center and the camera target moves around the camera.
Note that in Java, these math functions compute the angle in radians where PI (3.141592) radians = 180 degrees.
Also note that in the diagram, the Z axis points along the floor. This is because the Android screen (the camera) is viewing the scene from the side and in OpenGL, the Z axis goes through the screen.
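The two conversions can be sketched as a pair of helpers. This is a standalone sketch using the convention that β is measured from the +Z axis (so cx = h·sin(β), cz = h·cos(β)); the project's renderer applies its own offsets on top of this math:

```java
public class SceneAngles
{
    // Camera coordinates -> scene angles (degrees) plus camera distance.
    static double[] anglesFromCamera(double cx, double cy, double cz)
    {
        double h  = Math.sqrt(cx*cx + cz*cz);          // distance along the floor plane
        double hz = Math.sqrt(cx*cx + cy*cy + cz*cz);  // true camera distance
        double beta  = Math.toDegrees(Math.atan2(cx, cz)); // yaw around the Y axis
        double alpha = Math.toDegrees(Math.atan2(cy, h));  // elevation above the floor
        return new double[] { alpha, beta, hz };
    }

    // Scene angles (degrees) and distance -> camera coordinates.
    static double[] cameraFromAngles(double alpha, double beta, double hz)
    {
        double h  = hz * Math.cos(Math.toRadians(alpha));
        double cy = hz * Math.sin(Math.toRadians(alpha));
        double cx = h  * Math.sin(Math.toRadians(beta));
        double cz = h  * Math.cos(Math.toRadians(beta));
        return new double[] { cx, cy, cz };
    }
}
```

Running the coordinates through one helper and back through the other returns the original position, which is a handy sanity check when debugging camera code.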
Vertex Sequencing
When coordinates (vertices) are stored in the GPU buffer, they can be organized in several ways to create different shapes. All the shapes consist of triangles, and some triangles can share vertices allowing for reduced storage and faster rendering. OpenGL will render the coordinates based on the constant passed in the glDrawArrays
call. In the FountainGL
project, we used three types of vertex sequences.
GL_TRIANGLE_STRIP | | GL_TRIANGLE_FAN | | GL_TRIANGLES |
GL_TRIANGLE_STRIP
is used when each triangle shares a side with the triangle next to it. This sequence was used to create the pool wall in our application.
GL_TRIANGLE_FAN
is used when each triangle shares a common central vertex. This sequence was used to create the pool water in our application.
GL_TRIANGLES
is used when creating triangles that are not attached to each other so nothing is shared. This requires the most storage and rendering time of the three sequence types we used. This sequence was used to create the floor, ball, and fountain drops in our application.
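The storage difference between the three sequence types can be made concrete with a quick vertex count (a standalone sketch, not project code):

```java
public class VertexCounts
{
    // Vertices needed to draw triCnt connected triangles with each sequence type.
    static int triangleStrip(int triCnt) { return triCnt + 2; } // each new triangle reuses 2 vertices
    static int triangleFan(int triCnt)   { return triCnt + 2; } // one shared center + rim vertices
    static int triangles(int triCnt)     { return triCnt * 3; } // nothing shared

    public static void main(String[] args)
    {
        // A pool wall ring built from 100 triangles:
        System.out.println(triangleStrip(100)); // 102
        System.out.println(triangles(100));     // 300 - roughly 3x the storage
    }
}
```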
Billboarding
Billboarding is a way to make 2D objects appear 3D. This increases performance because the OpenGL engine does not need to render a complete 3D object. For example, a ball looks just like a circle facing the camera and the circle is rendered much faster. The trick to billboarding is rotating the 2D object so it always faces the camera and appears the same as a 3D object.
In our program, we implemented billboarding two ways: Single billboard and Multi billboard.
Single Billboard
Here we calculate the billboard angle at the center of the fountain (0,0,0) to the camera then use that same angle for every fountain drop.
We can render the fountain faster because we only need to calculate the billboard angle once. From a distance, things look okay, but close up, our shortcut becomes obvious. The drops are rotated away from the camera and they no longer appear as triangles.
Multi Billboard
Here we calculate the billboard angle for every drop which increases render time. From a distance the scene looks nearly identical to the single billboard render, yet when close up it is noticeably better. The drops are facing the camera and appear as full triangles.
In a scene where the fountain is always in the background, the single billboard method would suffice and improve render time. Since our application allows the camera to get close to the fountain, we give the user the multi-billboard option.
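The angle math is the same in both modes; only how often it runs differs. Here is that math factored into a standalone helper for clarity (it mirrors the MultiBillboard branch of DrawDropTriangles shown earlier; in single-billboard mode it would be evaluated once with the fountain center (0,0,0) instead of per drop):

```java
public class Billboard
{
    // Billboard angles (degrees) that turn a flat object at (px,py,pz)
    // to face a camera at (camX,camY,camZ).
    static float[] billboardAngles(float camX, float camY, float camZ,
                                   float px, float py, float pz)
    {
        float distX = camX - px;
        float distY = camY - py;
        float distZ = camZ - pz;
        // Horizontal distance from object to camera in the XZ plane.
        float hypLen = (float)Math.sqrt(distX*distX + distZ*distZ);
        // Same offsets as the renderer code: yaw about Y, then pitch about X.
        float angY = 270f - (float)Math.toDegrees(Math.atan2(distZ, distX));
        float angX = (float)Math.toDegrees(Math.atan2(distY, hypLen));
        return new float[] { angX, angY };
    }
}
```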
Splashes
As requested by ErrolErrol, splashes were added to the scene. The splashes are created by using a ring of triangles around the splash point.
To create the triangle vertices, we just go around the drop point and calculate the coordinates of each triangle vertex. We are using 6 triangles, so we divide the circle by 12. For odd steps, we calculate the triangle edge vertex using a smaller radius. For even steps, we calculate the middle vertex of the triangle using a larger radius. The middle vertex is also higher (on Y axis) than the edge vertices, so the triangle points up from the pool surface.
By creating triangles that angle up, we can create the splash effect by scaling the triangles on the Y axis: gl.glScalef(1f, Math.abs((mRepeatLen/2f - mAngCtr%(mRepeatLen)) * 0.4f), 1f); If mRepeatLen is 10, the value inside abs goes from 5 ⇒ -5, so after taking the absolute value the factor goes from 5 ⇒ 0 ⇒ 5 (then the 0.4 multiplier damps it). We only scale on the Y axis so the splashes get taller, not wider. The mAngCtr is used so we stay in sync with the drop cycle.
All the splash triangles for the entire scene are stored together and drawn at the same time. No billboarding is used when drawing the splashes because the splashes look okay from most angles and we save on CPU time.
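The ring-building loop can be sketched as follows. This is a hypothetical helper, not the project's exact code: the two radii, the triangle height, and the GL_TRIANGLES layout (edge vertices duplicated between neighboring triangles) are illustrative assumptions:

```java
public class SplashRing
{
    // Builds the 18 vertices (6 triangles * 3 vertices, interleaved x,y,z)
    // of one splash ring centered on (cx, 0, cz) in the pool surface.
    static float[] splashVertices(float cx, float cz,
                                  float edgeRad, float midRad, float midY)
    {
        float[] vtx = new float[6 * 3 * 3];
        int i = 0;
        for (int tri = 0; tri < 6; tri++)
        {
            // 12 steps of 30 degrees around the circle: even steps are the
            // raised middle vertices (larger radius, higher Y), odd steps
            // are the lower edge vertices (smaller radius, on the surface).
            float a0 = (float)Math.toRadians((tri*2 - 1) * 30); // edge before
            float a1 = (float)Math.toRadians((tri*2    ) * 30); // raised middle
            float a2 = (float)Math.toRadians((tri*2 + 1) * 30); // edge after
            vtx[i++] = cx + edgeRad*(float)Math.sin(a0); vtx[i++] = 0;    vtx[i++] = cz + edgeRad*(float)Math.cos(a0);
            vtx[i++] = cx + midRad *(float)Math.sin(a1); vtx[i++] = midY; vtx[i++] = cz + midRad *(float)Math.cos(a1);
            vtx[i++] = cx + edgeRad*(float)Math.sin(a2); vtx[i++] = 0;    vtx[i++] = cz + edgeRad*(float)Math.cos(a2);
        }
        return vtx;
    }
}
```

Scaling this geometry on the Y axis, as described above, makes the raised middle vertices rise and fall while the edge vertices stay on the pool surface.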
Perspective and glFrustumf
Perspective
In our application, we used the glFrustumf
method to set up the perspective for the camera. The perspective is basically the field (or angle) of view for the camera. A larger FOV allows the camera to see more of the scene, but objects appear smaller and the size difference between close and far objects is more pronounced. You can think of it as putting a wide angle lens on your camera. A smaller FOV has the opposite effect; the camera can see less of the scene and the size change between near and far objects is less significant. This is the same effect produced by using a zoom lens on your camera.
In these two screen captures, the scene angles are the same, but the difference in FOV creates noticeably different views.
Frustum Length = 1 Large FOV | | Frustum Length = 2 Small FOV |
glFrustumf
The glFrustumf
call takes six parameters; the first five set up the perspective (we'll discuss zFar
in a moment). Together these parameters define the pyramid (frustum) of the perspective.
glFrustumf(left, right, bottom, top, zNear, zFar)
When creating the perspective, the shape of the pyramid is important, not the size. As long as the ratios are the same, the perspective is the same:
glFrustumf(-2, 2, -4, 4, 100, 500)
creates the same perspective as:
glFrustumf(-4, 4, -8, 8, 200, 500)
The difference between these two commands is the clipping region. zNear
helps determine the shape of the perspective, but it also indicates the near clipping region. Any pixels that are closer to the camera than this line are not shown. Any pixels that are farther than the zFar
clipping region are also not shown. zNear
cannot be zero or negative.
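The relationship between the frustum bounds and the field of view can be made explicit with a gluPerspective-style helper. This is a standalone sketch; FountainGL passes the bounds directly rather than computing them from an FOV:

```java
public class Perspective
{
    // Computes glFrustumf's left/right/bottom/top bounds from a vertical
    // field of view, an aspect ratio, and zNear. Returns {left, right,
    // bottom, top}; zNear and zFar are passed to glFrustumf unchanged.
    static float[] frustumFromFov(float fovYdeg, float aspect, float zNear)
    {
        // The top bound is where the upper edge of the view pyramid
        // crosses the zNear plane.
        float top = zNear * (float)Math.tan(Math.toRadians(fovYdeg / 2));
        float right = top * aspect;
        return new float[] { -right, right, -top, top };
    }
}
```

For example, a 90-degree vertical FOV at aspect ratio 1 with zNear = 1 gives bounds of (-1, 1, -1, 1), i.e. glFrustumf(-1, 1, -1, 1, 1, zFar).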
The Depth Buffer
When OpenGL renders a scene, it uses a depth buffer to sort the pixels according to distance from camera. Once the pixels are sorted, OpenGL will render them far to near so closer objects will hide far objects (OpenGL can also skip pixels if it knows they will be hidden).
The depth buffer consists of buckets from zNear
to zFar
and all the pixel regions in the scene will go into one of these buckets. The buckets are then rendered far to near. Pixels in the same bucket are considered to be an equal distance from the camera and will be rendered as a single plane. There are always the same number of buckets, and they are divided across the clipping region (zNear
to zFar
). A large clipping region will have the same number of buckets as a small clipping region, but the buckets will be bigger.
The precision of the depth buffer (number of buckets) can vary between devices. My Huawei has a 16 bit buffer, which indicates 65,536 buckets. Some devices will have a 24 or 32 bit buffer, which would provide more accuracy.
Z-Fighting
It's important to know that the buckets of the buffer are not equally sized. The buckets are much more dense (smaller buckets) at zNear
and spread out at zFar
. This is so objects close to the camera will have more precision and less risk of pixel overlap. The overlap problem is called z-fighting.
Here are two screen captures from the FountainGL
application running on the emulator. The camera is under the fountain looking up and the pool is 6 units above the floor.
Clip Region = 300 glFrustumf(-1, 1, -1, 1, 1, 300) | | Clip Region = 1000 glFrustumf(-1, 1, -1, 1, 1, 1000) |
As you can see, the image on the right looks incorrect. It looks like the pool is falling through the floor. The issue is that the clipping region is so large (1000), the buckets are larger and pixels which are close together are falling into the same bucket and being rendered on the same plane. The left image looks correct because the clipping region is much smaller (300) creating smaller buckets and better depth resolution.
Bucket sizes
As mentioned previously, the bucket size is quite small near the camera (zNear
) and quite large in the distance (zFar
). Bucket size grows rapidly (roughly with the square of the distance) as distance from the camera increases. If we set zNear
to 1
and zFar
to 100
, here are the relative bucket sizes at 10 unit increments.
The first bucket is so small it doesn't even show on the bar. The last bucket, covering .0015 units, is 10,000 times larger than the first, which covers a tiny .00000015 units. For a 16 bit depth buffer, there will be 65,536 (2^16) buckets.
As you can see from the graph, the scene will lose depth resolution quickly as objects move away from zNear
. When creating a scene, the goal is to keep objects close to zNear
and keep the clipping region (zFar
-zNear
) as small as possible.
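The bucket sizes can be estimated analytically. This sketch assumes the standard perspective depth mapping d(z) = (1/zNear - 1/z) / (1/zNear - 1/zFar); absolute sizes depend on the device's exact depth format, but the last-to-first ratio of (zFar/zNear)² = 10,000 matches the chart above:

```java
public class DepthBuckets
{
    // Approximate size (in scene units) of the depth bucket at eye
    // distance z. buckets is the depth resolution, e.g. 65536 (2^16)
    // for a 16 bit depth buffer.
    static double bucketSize(double z, double zNear, double zFar, long buckets)
    {
        // The depth mapping has slope d'(z) = (1/z^2) / (1/zNear - 1/zFar),
        // so one bucket spans 1/(buckets * d'(z)) scene units.
        return z * z * (1.0/zNear - 1.0/zFar) / buckets;
    }

    public static void main(String[] args)
    {
        double first = bucketSize(1, 1, 100, 65536);
        double last  = bucketSize(100, 1, 100, 65536);
        System.out.println(last / first); // 10,000 - the (zFar/zNear)^2 ratio
    }
}
```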
Shifting the clipping region
Unfortunately, our application allows the camera to move around the entire scene and view the fountain from any distance. If we use 300 for the clip region, the scene would begin to clip as soon as the camera moves back and using 1000 would cause excessive z-fighting. To get around this problem, we move the clipping region when the camera moves forward or back so the clipping region length (and depth resolution) remains constant.
Near Clipping Region
Far Clipping Region
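The shifting scheme can be sketched as a small helper. This is an illustration, not the project's exact code: the fixed clip length, the centering policy, and the 1-unit floor on zNear (which must stay positive) are all assumptions:

```java
public class ClipShift
{
    // Slides the clipping region with the camera distance so its length
    // (and therefore the depth resolution) stays constant.
    static float[] shiftedClipRegion(float camDist, float clipLen)
    {
        // Center the region on the camera's distance from the scene,
        // but never let zNear reach zero or go negative.
        float zNear = Math.max(1f, camDist - clipLen / 2);
        float zFar = zNear + clipLen; // constant length = constant resolution
        return new float[] { zNear, zFar };
    }
}
```

As the camera backs away, both planes slide out together; when the camera is close, zNear pins at its minimum and zFar stays a fixed distance beyond it.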
Multipass Rendering
In some cases, the scene being rendered is large and we don't want to sacrifice depth resolution to render it properly. This is where multipass rendering comes in: the scene is rendered in chunks, starting with distant objects and ending with nearby objects. Each chunk uses a separate depth buffer pass, so each chunk is rendered more accurately (less z-fighting). The cost is the additional processing time to render the full scene.
Render far objects using far clipping region.
Reset the depth buffer then render near objects using near clipping region.
Complete scene created with separate depth buffers.
If you want to test multipass rendering in the FountainGL
application, comment out the existing calls to glMatrixMode
(both of them) and DrawSceneObjects
then insert this code in onDrawFrame
right after the gluLookAt
call. If you want to see a gap between render regions, set the glFrustumf
far clip region to 98
in the bottom code block. In this scene, all the objects have the same center point so we're actually rendering the same objects twice (pixels will be clipped according to each clipping region).
gl.glPushMatrix();
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glClear(GL11.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity();
gl.glFrustumf(-mScrRatio*100, mScrRatio*100, -1f*100, 1f*100, 1f*100, 500);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
GLU.gluLookAt(gl, mCamXpos, mCamYpos, mCamZpos, mTargetX, mTargetY,
mTargetZ, 0f, 100.0f, 0.0f);
DrawSceneObjects(gl);
gl.glPopMatrix();
gl.glPushMatrix();
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glClear(GL11.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity();
gl.glFrustumf(-mScrRatio, mScrRatio, -1f, 1f, 1f, 100);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
GLU.gluLookAt(gl, mCamXpos, mCamYpos, mCamZpos, mTargetX, mTargetY,
mTargetZ, 0f, 100.0f, 0.0f);
DrawSceneObjects(gl);
gl.glPopMatrix();
Calculating FPS (Frames Per Second)
In the FountainGL
application, the FPS is the average frame rate over the last 20 frames. This is done by storing the end time of each frame in an array. After 20 frames, we take the end time of the current frame, subtract the end time of the frame 20 frames earlier, then divide 20 by that elapsed time. The FPS result will not be correct until the application has run for 20 frames.
For the sake of simplicity, let's assume we are calculating based on 10 frames. For this example, we'll assume every frame takes 5 seconds (it would go much faster in real life).
At application start, there is no frame data in the frame array and the frame pointer is pointing to slot 0.
Frame Ptr | ⇓ | | | | | | | | | |
Array Slot | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
Frame Time | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
After 5 frames, we have populated 5 frames of data and shifted the pointer at each frame. The first frame ended at boottime+100 seconds. Each frame takes 5 seconds. The FPS calculation is still wrong because of the zero entries.
Frame Ptr | | | | | | ⇓ | | | | |
Array Slot | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
Frame Time | 100 | 105 | 110 | 115 | 120 | 0 | 0 | 0 | 0 | 0 |
After 9 frames, we have populated 9 frames of data.
Frame Ptr | | | | | | | | | | ⇓ |
Array Slot | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
Frame Time | 100 | 105 | 110 | 115 | 120 | 125 | 130 | 135 | 140 | 0 |
After 10 frames, we have populated the entire array. The FPS calculation will be correct now. The current frame will be at 150 seconds, so the FPS average will be 10/(150-100) = .2 frames per second. After the FPS calculation, we set the value at the frame pointer to the current frame time so slot 0 will be set to 150.
Frame Ptr | ⇓ | | | | | | | | | |
Array Slot | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
Frame Time | 100 | 105 | 110 | 115 | 120 | 125 | 130 | 135 | 140 | 145 |
After 15 frames, we have wrapped around the array, but the frame pointer is still correctly pointing to 10 frames ago. The FPS average will be 10/(175-125) = .2 frames per second. After the FPS calculation, we set the value at the frame pointer to the current frame time so slot 5 will be set to 175.
Frame Ptr | | | | | | ⇓ | | | | |
Array Slot | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
Frame Time | 150 | 155 | 160 | 165 | 170 | 125 | 130 | 135 | 140 | 145 |
As noted previously, the actual code uses 20 frames, but we use 10 here to save some space. In the application, the FPS value is displayed every 10 frames. If you get a high FPS on your device, you may want to increase the frame count so the FPS display doesn't become a blur of digits.
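The circular-buffer scheme walked through above can be sketched as a small class. This is a standalone sketch using milliseconds; the project keeps the array and pointer inline in the renderer rather than in a separate class:

```java
public class FpsCounter
{
    // The slot at mFramePtr always holds the end time of the frame
    // frameCnt frames ago, exactly as in the tables above.
    private final long[] mFrameTimes;
    private int mFramePtr = 0;
    private long mFrameCount = 0;

    FpsCounter(int frameCnt) { mFrameTimes = new long[frameCnt]; }

    // Call once per frame with the frame's end time in milliseconds.
    // Returns the average FPS, or 0 until the array has filled once.
    float frameDone(long endTimeMs)
    {
        float fps = 0;
        if (mFrameCount >= mFrameTimes.length)
        {
            // frameCnt frames rendered over the elapsed span.
            long elapsedMs = endTimeMs - mFrameTimes[mFramePtr];
            fps = mFrameTimes.length * 1000f / elapsedMs;
        }
        mFrameTimes[mFramePtr] = endTimeMs; // overwrite the oldest entry
        mFramePtr = (mFramePtr + 1) % mFrameTimes.length;
        mFrameCount++;
        return fps;
    }
}
```

Feeding it the worked example (10-frame window, one frame every 5 seconds starting at 100 seconds) reproduces the 0.2 FPS result from the tables.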
Additional Thoughts
- The fountain drops dramatically increase render time. I don't see a way around this since all the drops move and rotate at every frame.
- There is probably a more efficient way to do the multipass render. This application does not really benefit from it since all the objects have the same Y axis.
- The emulator has terrible depth precision. There was always z-fighting. My phone did much better once the clip region shifting was implemented.
- Using the VBO (GPU memory) for storing vertices gave an impressive performance boost. If rendering just the floor, the FPS doubled when compared to using main memory buffers.
- The bucket size chart is accurate based on this site. I used Excel to calculate\create the bar chart.
- The 3D graphics were created using 3D Studio Max. The 2D graphics were created using Paint.Net (freeware).
- The animation at the top of the walkthrough was created using DropBox (screen captures) and UnFREEz (gif creator). Both are freeware.
- Please vote\comment. I appreciate any feedback you have.
Resources
"Share your knowledge. It's a way to achieve immortality." - Dalai Lama
And I think we're done. I hope you found this walkthrough useful. If you found any part confusing or if you think I missed something, please let me know so I can update this page.