Harold Serrano


How to render a game character using OpenGL ES

Introduction

In this post, you will render your first 3D model on an iOS device using OpenGL ES. The steps to render a 3D model on a screen are as follows:

  1. Initialize an OpenGL ES Context.
  2. Setup a rendering loop.
  3. Create OpenGL objects and load Character data.
  4. Create the transformation space.
  5. Update the framebuffer.

Objective

Our objective is to render a robot on an iOS device screen.

This post will be a hands-on project. You will be able to code along as you learn, so download this empty Xcode template project. The project contains the skeleton of the C++ methods which you will implement. It also contains the data for the character which will be rendered on the screen, found in the file Robot.h.

Things to know

Before we begin, I suggest you take a look at these posts. You do not have to, but they will give you a good idea of what is going on throughout the code.

Initialize an OpenGL Context

Rendering requires the initialization of a graphics context. On iOS, a graphics context is allocated and initialized by creating an EAGLContext object.

To initialize an OpenGL ES context on an iOS device, open up the file named ViewController.mm. Locate the viewDidLoad method and type what is shown in Listing 1:

Listing 1. Initialize an OpenGL Context
- (void)viewDidLoad
{
    [super viewDidLoad];

    //1. Allocate an EAGLContext object and initialize a context with a specific API version.
    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

    //2. Check if the context was successfully created
    if (!self.context) {
        NSLog(@"Failed to create ES context");
    }

    //3. Set the view's context to the newly created context
    GLKView *view = (GLKView *)self.view;
    view.context = self.context;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;

    //4. This will call the rendering method glkView at 60 frames per second
    self.preferredFramesPerSecond = 60;

    //5. Make the newly created context the current context.
    [EAGLContext setCurrentContext:self.context];

    //6. Create a Character class instance.
    //Note: since the iOS device will be rotated, the input parameters of the Character constructor
    //are swapped.
    character = new Character(self.view.bounds.size.height, self.view.bounds.size.width);

    //7. Begin the OpenGL setup for the character
    character->setupOpenGL();
}

Line 1 in Listing 1 allocates memory for the EAGLContext object and then initializes the context by calling the initWithAPI: method, which returns a newly allocated rendering context for the specified API, in this case OpenGL ES 2.0.

We then check whether the context was created successfully (line 2). The newly created context is assigned to the iOS view's context (line 3), the view is asked to update at 60 frames per second (line 4), and the context is made the current context with the method setCurrentContext: (line 5).

Our robot is implemented using C++ classes. The main class in our project is called Character. This class contains several methods that are in charge of loading data into the OpenGL buffers and updating the framebuffer.

Our first task is to create an instance of our Character class (line 6). The constructor receives the width and height of the screen; note that the two values are swapped in line 6 because the device will be rotated to landscape. Next, we call the method setupOpenGL() (line 7), which creates the OpenGL buffers and loads the 3D model data.
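For reference, the member names used throughout the listings below imply a class interface roughly like the following. This is only a sketch inferred from the listings; the exact declarations (and their types and parameter names) live in the Character.h file of the template project and may differ:

//Sketch of the Character interface implied by the listings in this post.
//The actual header in the template project may differ.
class Character {
public:
    Character(float uScreenWidth, float uScreenHeight);

    void setupOpenGL();        //Listing 3: creates the buffers and loads the model data
    void setTransformation();  //Listing 4: builds and uploads the transformation matrices
    void draw();               //Listing 5: renders the character every frame

private:
    void loadShaders(const char* vertexShader, const char* fragmentShader);

    float screenWidth;
    float screenHeight;

    GLuint programObject;        //compiled and linked shader program
    GLuint vertexArrayObject;    //VAO holding the rendering state
    GLuint vertexBufferObject;   //VBO holding the vertices and normals

    GLint positionLocation;      //attribute locations
    GLint normalLocation;
    GLint modelViewProjectionUniformLocation;  //uniform locations
    GLint normalMatrixUniformLocation;

    GLKMatrix4 modelSpace, worldSpace, modelWorldSpace;
    GLKMatrix4 cameraViewSpace, modelWorldViewSpace;
    GLKMatrix4 projectionSpace, modelWorldViewProjectionSpace;
    GLKMatrix3 normalMatrix;
};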

Setup a rendering loop

In order to update our framebuffer, we need a function that is constantly being called by our application.

In the file ViewController.mm, locate the glkView:drawInRect: method and type what is shown in Listing 2.

GLKit provides a method called glkView:drawInRect:. This method is called whenever the contents of the iOS view need to be updated; in our case, we have set it to be called at 60 frames per second. We will implement it so that it updates the framebuffer by calling the Character's draw() method (line 3).

Listing 2. Setting up the drawing routine
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    //1. Clear the color to black
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);

    //2. Clear the color buffer and depth buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    //3. Render the character
    character->draw();
}

Create OpenGL objects and load Character data

There are multiple ways to load data into OpenGL Buffers. In this tutorial, we will load data using the glBufferSubData function. For rendering efficiency, we will make use of Vertex Array Objects.

Let's review the 11 steps required for rendering. They are:

  1. Generate a VAO (glGenVertexArrays): Informs OpenGL to create a VAO.
  2. Bind the VAO (glBindVertexArray): Informs OpenGL to bind a VAO.
  3. Generate a VBO (glGenBuffers()): Informs OpenGL to create a Buffer.
  4. Bind the VBO (glBindBuffer()): Informs OpenGL to use this buffer for subsequent operations.
  5. Buffer Data (glBufferData() or glBufferSubData()): Informs OpenGL to allocate and initialize sufficient memory for the currently bound buffer.
  6. Get Location of Attributes (glGetAttribLocation()): Get location of attributes in current active shader.
  7. Get Location of Uniforms (glGetUniformLocation()): Get location of uniforms in the current active shader.
  8. Enable (glEnableVertexAttribArray()): Enable the attribute locations found in the shader.
  9. Set Pointers (glVertexAttribPointer()): Informs OpenGL about the types of data in bound buffers and any memory offsets needed to access the data.
  10. Draw (glDrawArrays() or glDrawElements()): Informs OpenGL to render a scene using data in currently bound and enabled buffers.
  11. Delete (glDeleteBuffers()): Tell OpenGL to delete previously generated buffers and free associated resources.

We will implement these steps in the method setupOpenGL(). This method will be in charge of creating a VAO and loading data into the OpenGL buffers.

Open up the file Character.mm. Locate the setupOpenGL() method and type what is shown in Listing 3.

The vertices of the robot geometry are found in the array robot_vertices[] in the Robot.h file.

Listing 3. Loading data into buffers
void Character::setupOpenGL(){

    //load the shaders, compile them and link them
    loadShaders("Shader.vsh", "Shader.fsh");

    glEnable(GL_DEPTH_TEST);

    //1. Generate a Vertex Array Object
    glGenVertexArraysOES(1, &vertexArrayObject);

    //2. Bind the Vertex Array Object
    glBindVertexArrayOES(vertexArrayObject);

    //3. Generate a Vertex Buffer Object
    glGenBuffers(1, &vertexBufferObject);

    //4. Bind the Vertex Buffer Object
    glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject);

    //5. Dump the data into the buffer
    /* Read "Loading data into OpenGL Buffers" if not familiar with loading data
    using glBufferSubData.
    http://www.haroldserrano.com/blog/loading-vertex-normal-and-uv-data-onto-opengl-buffers
    */
    glBufferData(GL_ARRAY_BUFFER, sizeof(robot_vertices)+sizeof(robot_normal), NULL, GL_STATIC_DRAW);

    //5a. Load data with glBufferSubData
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(robot_vertices), robot_vertices);
    glBufferSubData(GL_ARRAY_BUFFER, sizeof(robot_vertices), sizeof(robot_normal), robot_normal);

    //6. Get the location of the shader attribute called "position"
    positionLocation=glGetAttribLocation(programObject, "position");

    //7. Get the location of the shader attribute called "normal"
    normalLocation=glGetAttribLocation(programObject, "normal");

    //8. Get Location of uniforms
    modelViewProjectionUniformLocation = glGetUniformLocation(programObject,"modelViewProjectionMatrix");
    normalMatrixUniformLocation = glGetUniformLocation(programObject,"normalMatrix");

    //9. Enable both attribute locations
    glEnableVertexAttribArray(positionLocation);
    glEnableVertexAttribArray(normalLocation);

    //10. Link the buffer data to the shader attribute locations
    glVertexAttribPointer(positionLocation, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid *) 0);
    glVertexAttribPointer(normalLocation, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)sizeof(robot_vertices));

    /*Since we are going to start the rendering process by using glDrawElements,
    we are going to create a buffer for the indices. Read "Starting the rendering process in OpenGL"
    if not familiar. http://www.haroldserrano.com/blog/starting-the-primitive-rendering-process-in-opengl */

    //11. Create a new buffer for the indices
    GLuint elementBuffer;
    glGenBuffers(1, &elementBuffer);

    //12. Bind the new buffer to binding point GL_ELEMENT_ARRAY_BUFFER
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementBuffer);

    //13. Load the buffer with the indices found in the robot_index array
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(robot_index), robot_index, GL_STATIC_DRAW);

    //14. Unbind the VAO
    glBindVertexArrayOES(0);

    //Sets the transformation
    setTransformation();

}

We start by creating and binding a Vertex Array Object (lines 1 and 2). A buffer object is then created and bound to the GL_ARRAY_BUFFER binding target (lines 3 and 4). The character's data is then loaded with glBufferData and glBufferSubData (lines 5 and 5a).

The locations of the attributes and uniforms are then obtained (lines 6-8). The attribute locations are enabled (line 9) and the buffer data is linked to the attribute locations (line 10).

Since we are going to start the rendering process by using glDrawElements(), we need to create a new buffer for the indices of our robot geometry. Recall that rendering with glDrawElements is more efficient than rendering with glDrawArrays.

With glDrawElements, you provide a set of indices that guides OpenGL through the primitive-assembly stage. This set of indices lets shared vertices be stored once and reused, avoiding the redundant vertex data that glDrawArrays would require.
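As a quick illustration (this snippet is not part of the project code), consider a quad made of two triangles. With glDrawArrays you would have to store six vertices, repeating the two shared corners; with glDrawElements, four unique vertices plus six indices are enough:

//Illustrative only: a quad rendered as two triangles with an index buffer.
GLfloat quadVertices[] = {
    -0.5f, -0.5f, 0.0f,   //0: bottom-left
     0.5f, -0.5f, 0.0f,   //1: bottom-right
     0.5f,  0.5f, 0.0f,   //2: top-right
    -0.5f,  0.5f, 0.0f    //3: top-left
};

//Six indices reuse the two shared corners (0 and 2) instead of duplicating them.
GLuint quadIndices[] = {
    0, 1, 2,   //first triangle
    0, 2, 3    //second triangle
};

//With quadIndices loaded into a GL_ELEMENT_ARRAY_BUFFER, the draw call would be:
//glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, (void*)0);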

The indices of the robot geometry are found in the array robot_index[] in the Robot.h file.

Lines 11 and 12 show the creation and binding of the new index buffer. Line 13 loads the data from the robot_index array into the buffer.

Finally, the Vertex Array Object is unbound (line 14).

Set up transformation space

The data found in file Robot.h describes the geometry of our character. However, this data describes the character's geometry in its own unique coordinate space (also known as model space). In order for the character to be seen on the screen, it needs to be converted through multiple coordinate spaces. They are as follows:

  • World Space Transformation.
  • View Space Transformation.
  • Perspective Projection Space Transformation.
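Conceptually, these spaces are chained together through matrix multiplication: a vertex defined in model space ends up in clip space after passing through all of them. The following is only a conceptual sketch (the numbers and variable names are illustrative, not the project's); Listing 4 builds the same chain with the project's actual member variables:

//Conceptual sketch of the transformation chain using GLKit.
GLKMatrix4 model      = GLKMatrix4Identity;                              //model space
GLKMatrix4 world      = GLKMatrix4Identity;                              //world space
GLKMatrix4 view       = GLKMatrix4MakeTranslation(0.0f, -1.0f, -5.0f);   //view (camera) space
GLKMatrix4 projection = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(45.0f),
                                                  768.0f/1024.0f, 0.1f, 100.0f);  //projection space

//Note the right-to-left order: the model transform is applied first, the projection last.
GLKMatrix4 modelViewProjection = GLKMatrix4Multiply(projection,
                                 GLKMatrix4Multiply(view,
                                 GLKMatrix4Multiply(world, model)));

//Applying the combined matrix to a model-space vertex yields its clip-space position.
GLKVector4 clipPosition = GLKMatrix4MultiplyVector4(modelViewProjection,
                                                    GLKVector4Make(0.0f, 1.0f, 0.0f, 1.0f));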

Open up the file Character.mm. Locate the setTransformation() method and type what is shown in Listing 4.

Listing 4. Setting up the coordinate transformations
void Character::setTransformation(){

    //1. Set up the model space
    modelSpace=GLKMatrix4Identity;

    //Since we are importing the model from Blender, we need to change the model's axes,
    //else the model will not show properly. Blender's coordinate system has the z-axis pointing up,
    //whereas OpenGL's convention has the y-axis pointing up.
    GLKMatrix4 blenderSpace=GLKMatrix4MakeAndTranspose(1, 0, 0, 0,
                                                       0, 0, 1, 0,
                                                       0,-1, 0, 0,
                                                       0, 0, 0, 1);

    //2. Transform the model space by the Blender space
    modelSpace=GLKMatrix4Multiply(blenderSpace, modelSpace);

    //3. Set up the world space
    worldSpace=GLKMatrix4Identity;

    //4. Transform the model space to the world space
    modelWorldSpace=GLKMatrix4Multiply(worldSpace,modelSpace);

    //5. Set up the view space. We are translating the view space 1 unit down and 5 units out of the screen.
    cameraViewSpace = GLKMatrix4MakeTranslation(0.0f, -1.0f, -5.0f);

    //6. Transform the model-world space by the view space
    modelWorldViewSpace = GLKMatrix4Multiply(cameraViewSpace, modelWorldSpace);

    //7. Set the perspective-projection space with a 45 degree field of view and an aspect ratio
    //of width/height. The near and far clipping planes are set to 0.1 and 100.0 respectively.
    projectionSpace = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(45.0f), fabsf(screenWidth/screenHeight), 0.1f, 100.0f);

    //8. Transform the model-world-view space to the projection space
    modelWorldViewProjectionSpace = GLKMatrix4Multiply(projectionSpace, modelWorldViewSpace);

    //9. Extract the 3x3 normal matrix from the model-world-view space for shading (lighting) purposes
    normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelWorldViewSpace), NULL);

    //10. Assign the model-world-view-projection matrix data to the uniform location: modelViewProjectionUniformLocation
    glUniformMatrix4fv(modelViewProjectionUniformLocation, 1, 0, modelWorldViewProjectionSpace.m);

    //11. Assign the normalMatrix data to the uniform location: normalMatrixUniformLocation
    glUniformMatrix3fv(normalMatrixUniformLocation, 1, 0, normalMatrix.m);

}

Let's start by setting the character's model space to an identity matrix (line 1). This simply means that the model sits at the origin of its own coordinate system.

Normally, this would be enough to represent the model space of a character. However, the model was created in a modeling application called Blender. Unfortunately, the coordinate systems of Blender and OpenGL differ (Blender's z-axis points up, whereas OpenGL's convention is y up), so we need to transform the model space by a Blender-to-OpenGL transformation matrix (line 2).
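To see what this matrix actually does, you can apply it to a point by hand. The snippet below is only a check, not part of the project code; it uses the blenderSpace matrix from Listing 4:

//Illustrative check of the Blender-to-OpenGL matrix from Listing 4.
GLKVector4 blenderPoint = GLKVector4Make(1.0f, 2.0f, 3.0f, 1.0f);
GLKVector4 openglPoint  = GLKMatrix4MultiplyVector4(blenderSpace, blenderPoint);
//openglPoint is (1.0, 3.0, -2.0, 1.0): the point's z value ends up on the y-axis,
//and its y value ends up (negated) on the z-axis, which accounts for Blender's z-up
//convention versus OpenGL's y-up convention.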

Let's also set the world space to an identity matrix (line 3). We then transform the model space into the world space by multiplying the two matrices; the resulting space is called the Model-World Space (line 4).

The View Space, represented by the camera, is set without any rotation but translated by (0.0, -1.0, -5.0) units along the x, y, and z axes, respectively (line 5).

The Model-World Space is then transformed by the View Space. The resulting space is called the Model-World-View Space (line 6).

Our final transformation converts the Model-World-View Space into the Model-World-View-Projection Space. However, we must first construct a Perspective-Projection space with a field of view of 45 degrees and near and far clipping planes of 0.1 and 100.0, respectively (line 7).

Now we are able to transform the Model-World-View Space into the Model-World-View-Projection Space, as shown in line 8.

Our final task is to provide the Model-World-View-Projection matrix, along with the normal matrix extracted in line 9 for lighting purposes, to the uniform locations found in the shaders (lines 10 and 11). The shaders will use this data to transform the character's model-space vertices all the way to screen space.

Update the framebuffer

Finally, we are ready to implement the draw() method. This method updates the framebuffer and is called by the glkView method 60 times per second.

Open up the file Character.mm. Locate the draw() method and type what is shown in Listing 5.

The first task is to set the shader program which will be used (line 1). Next, we bind the Vertex Array Object (VAO) that contains our OpenGL rendering states, as specified in Listing 3 (line 2). The rendering process is then started by calling glDrawElements() (line 3). Finally, the VAO is unbound (line 4).

Run the project and you should see a robot on the screen of your iOS device.

Listing 5. Rendering routine
void Character::draw(){

    //1. Set the shader program
    glUseProgram(programObject);

    //2. Bind the VAO
    glBindVertexArrayOES(vertexArrayObject);

    //3. Start the rendering process
    glDrawElements(GL_TRIANGLES, sizeof(robot_index)/4, GL_UNSIGNED_INT, (void*)0);

    //4. Disable the VAO
    glBindVertexArrayOES(0);

}

Final Result

Once you run the project, you should see a cute little robot rendered on the screen of your iOS device.

Source code

The final source code can be found here.

Questions?

So, do you have any questions? Is there something you need me to clarify? Did this project help you? Please let me know. Add a comment below and subscribe to receive our latest game development projects.

Note:

In newer Xcode versions, you may get this error while running the project demos:

"No such file or directory: ...xxxx-Prefix.pch"

This error means that the project does not have a PCH file. The fix is very simple:

In Xcode, go to New -> File -> PCH File.

Name the PCH file 'NameOfProject-Prefix', where 'NameOfProject' is the name of the demo you are trying to open. In the OpenGL demo projects, I usually named the project "openglesinc."

So the name of the PCH file should be "openglesinc-Prefix"

Make sure the file is saved within the 'NameOfProject' folder, i.e., within the 'openglesinc' folder.

Click create and run the project.