The game characters that you see in a mobile game are constructed out of primitives. The most popular primitive in computer graphics is the triangle.
To render a character on the screen, the vertices making up each triangle are sent to the GPU (Graphics Processing Unit).
In the GPU, these vertices go through several coordinate transformations, tests, and rasterization. At the end, pixels approximating the shape of a triangle are stored in framebuffer memory.
The pixels stored in the framebuffer are the pixels that you see on your screen.
The basic element of a 3D character
In a small game studio composed of a graphics designer (Bill) and a programmer (Tom), the daily routine may go something like this:
Bill starts his day by opening up Blender, a 3D modeling application. His task is to model a shiny robot-like character. After two hours, the model is ready to be used by a game engine. However, Bill knows that Tom can’t do anything with the model unless it is broken down into triangles. So Bill breaks the model down into hundreds of triangles and hands Tom the most important element in computer graphics: the vertices of those triangles.
The vertices of a primitive are the basic elements of a 3D model. They are the data the Graphics Processing Unit (GPU) requires in order to assemble a geometry on the screen.
Figure 1. A 3D Model with primitives
Sending data to the GPU
The transfer of vertices to the GPU requires specialized containers. These containers are created and assigned a sole purpose for their existence: for example, transporting geometric or image data to the GPU. These containers are called OpenGL Objects.
If you want to render a three-dimensional cube on a screen, you need to send the cube’s eight vertices to the GPU through OpenGL Objects. However, loading these vertices into a newly created OpenGL object is not enough. Creating an object does not mean that OpenGL knows about it. The object needs to be attached, i.e. bound, to the OpenGL context for it to be useful.
Furthermore, it is in this binding stage that an object gets assigned its purpose in life, also known as its Target Point.
When an OpenGL object has been created and bound to an OpenGL context, it can transport vertices to the GPU, with the intent to be used by a Shader.
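Putting those three steps together, creating a buffer object, binding it to a target point, and loading vertex data into it might look like the sketch below. This assumes a valid OpenGL context already exists, and the `cubeVertices` array is a hypothetical vertex array like the one Bill produced:

```c
GLuint vertexBuffer;

/* 1. Create the OpenGL object. A name is reserved, but OpenGL
      does not yet know what the object is for. */
glGenBuffers(1, &vertexBuffer);

/* 2. Bind it to the context. The target point GL_ARRAY_BUFFER
      assigns its purpose in life: carrying vertex data. */
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);

/* 3. Load the vertices into the bound object, transferring
      them to GPU memory. */
glBufferData(GL_ARRAY_BUFFER, sizeof(cubeVertices), cubeVertices,
             GL_STATIC_DRAW);
```

Note that `glBindBuffer` is also the call that attaches the object to the context; until it runs, the object created by `glGenBuffers` is just a name.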
A Shader is basically a mini-application written by you. It does not run on the CPU. Instead, it runs on the GPU.
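To give a taste of what such a mini-application looks like, here is a minimal pair of shaders in GLSL (OpenGL ES 2.0 style). The attribute and uniform names are illustrative choices, not names required by OpenGL:

```glsl
// Vertex shader: runs once per vertex, on the GPU.
attribute vec4 aPosition;          // vertex position, fed from a buffer object
uniform mat4 uModelViewProjection; // transformation matrix set from the CPU side

void main() {
    gl_Position = uModelViewProjection * aPosition;
}
```

```glsl
// Fragment shader: runs once per pixel in a fragment.
precision mediump float;

void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // color every pixel red
}
```

These two shaders correspond to the Per-Vertex Operation and Fragment Processing stages of the pipeline described next.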
The OpenGL Rendering Pipeline
Once the vertices of the cube are in the GPU, they are ready to be rendered. Rendering is managed by the OpenGL Rendering Pipeline, which consists of six stages:
- Per-Vertex Operation
- Primitive Assembly
- Primitive Processing
- Rasterization
- Fragment Processing
- Per-Fragment Operation
In the first stage, called the Per-Vertex Operation, vertices are processed by a shader known as the Vertex Shader.
Each vertex is multiplied by a transformation matrix, effectively moving it from its 3D coordinate system into a new coordinate system. Just as a photographic camera transforms a 3D scene into a 2D photograph, the Vertex Shader transforms the 3D coordinates of the cube into a projective coordinate system.
Once three vertices have been processed by the vertex shader, they are taken to the next stage in the pipeline: the Primitive Assembly stage.
This is where a primitive is constructed by connecting the vertices in a specified order.
Before the primitive is taken to the next stage, Clipping occurs. Any primitive that falls outside the View-Volume, i.e. the visible region of the screen, is clipped and ignored in the next stage.
What you ultimately see on a screen are pixels approximating the shape of a primitive. This approximation occurs in the Rasterization stage, where pixels are tested to see whether they fall inside the primitive’s perimeter. Pixels outside the primitive are discarded; pixels within it are taken to the next stage. The set of pixels that passed the test is called a fragment.
A Fragment is a set of pixels approximating the shape of a primitive. When a fragment leaves the rasterization stage, it is taken to the Fragment Processing stage, where it is received by a shader.
This shader is called a Fragment Shader and its responsibility is to apply color or a texture to the pixels within the fragment.
Before the pixels in the fragment are sent to the framebuffer, the fragment is submitted to several tests, such as:
- Pixel Ownership test
- Scissor test
- Alpha test
- Stencil test
- Depth test
At the end of the pipeline, the pixels are saved in a Framebuffer, more specifically the Default-Framebuffer. These are the pixels that you see on your mobile screen.
Figure 3. 3D Model on an iOS device
Hope this helps.