How does the Game Engine Loop make a game possible?

Ever since I was a kid, I've been captivated by computer graphics effects. The day I decided to look deeper into computer graphics was when Angry Birds became popular. I was amazed by the slingshot effect and the collisions between the blocks. Honestly, I would play the game just to figure out how the collision worked.

So, I picked up a game development book and learned how to use the cocos2d game engine and the Box2D physics system. Creating my first game demo with collision detection was exciting. However, the more I learned, the more intrigued I became. I wanted to learn more; I wanted to dig deeper into computer graphics.

Eventually, I decided to develop my own 3D game engine, and it was then that I had the opportunity to dive deeper into Computer Graphics, OpenGL, C++, Design Patterns, Linear Algebra, and Computational Geometry.

Throughout the five years of the engine's development, I deciphered how a game engine truly works, what makes it tick, and how each component is linked together to make a game possible.

In this post, I'm going to demystify the purpose of the Game Engine Loop.

Game Engine Loop

The heart of a game engine is the Game Engine Loop. It is through this loop that the Math, Rendering, and Physics Engines interact.

 
gameengineloopflowpost1.png
 

During every game-tick, a character flows through these sub-engines, where it is rendered and subjected to simulated physical forces, such as gravity and collision responses.

At a minimum, a Game Engine Loop consists of a Rendering Engine and the Update stage.

 
gameengineloopflowpost2.png
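In code, the loop above can be sketched like this. It's a simplified illustration with hypothetical names, not the engine's actual API; the trace vector only exists to make the stage order visible:

```cpp
#include <string>
#include <vector>

// A minimal sketch of the engine loop. Each iteration is one game-tick:
// the Update stage runs first, then the Rendering Engine draws the frame.
struct MiniEngine {
    std::vector<std::string> trace;  // records the stage order for illustration

    void update(float dt) { (void)dt; trace.push_back("update"); }
    void render() { trace.push_back("render"); }

    void run(int ticks, float dt) {
        for (int i = 0; i < ticks; ++i) {
            update(dt);   // advance entity states and physics
            render();     // hand the entities to the Rendering Engine
        }
    }
};
```

A real loop would measure the elapsed time of each tick and pass it along as dt, instead of receiving a fixed value.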
 

Rendering Engine

The character's first stop is the Rendering Engine. The Rendering Engine's responsibility is to render the character according to the entity's properties. For example, if the entity is a 3D character, it enables the proper GPU Shaders (programs), which use the appropriate attributes to recreate the character on the screen. When rendering a 3D character, the GPU requires at least these attributes: Vertices, Normal Vectors, and UV Coordinates.

 
gameengineloopflowpost4.png
 

However, if the entity is a 2D entity, the GPU will require only the Vertices and UV-Coordinates of the entity.

 
gameengineloopflowpost5.png
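As a rough sketch, the two attribute layouts might look like this in C++ (the struct names are hypothetical, not the engine's actual types):

```cpp
#include <array>

// Illustrative per-vertex layouts. A 3D character needs Vertices, Normal
// Vectors, and UV Coordinates; a 2D entity needs only Vertices and UVs.
struct Vertex3D {
    std::array<float, 3> position;  // vertex position
    std::array<float, 3> normal;    // normal vector, used for lighting
    std::array<float, 2> uv;        // texture coordinates
};

struct Vertex2D {
    std::array<float, 3> position;  // vertex position
    std::array<float, 2> uv;        // texture coordinates; no normals required
};
```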
 

If you have ever played a video game, you know that a typical game contains more than just 3D or 2D characters. It contains skyboxes, explosion effects, etc. A Rendering Engine renders each of these entities by activating the correct GPU Shader during the rendering process.

 
gameengineloopflowpost3.png
 

Updating the Coordinate Space

As mentioned above, to render an entity properly, the GPU requires the entity's attributes. However, it also needs the entity's space coordinates.

A Space-Coordinate defines the position of an object. Since a game character is made up of hundreds or thousands of vertices, a space-coordinate is assigned to the character, which defines the position of its vertices.

gameengineloopflowpost7.png

The space coordinate contains the rotation and translation information of the character. Mathematically, it is represented as a 4x4 matrix: the upper 3x3 block contains the Rotation information, whereas the rightmost column contains the Position information.

gameengineloopflowpost6.png
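Here is one way to sketch that packing in C++ (an illustration; the helper name is made up):

```cpp
#include <array>

using Mat3 = std::array<std::array<float, 3>, 3>;
using Mat4 = std::array<std::array<float, 4>, 4>;

// Pack a space coordinate: the upper 3x3 block holds the rotation,
// and the rightmost column holds the position.
Mat4 makeSpaceCoordinate(const Mat3 &rotation, const std::array<float, 3> &position) {
    Mat4 m{};  // zero-initialized
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            m[r][c] = rotation[r][c];   // rotation block
    for (int r = 0; r < 3; ++r)
        m[r][3] = position[r];          // translation column
    m[3][3] = 1.0f;                     // homogeneous coordinate
    return m;
}
```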

The space coordinate of an entity is known as the Model Space.

If you were to send the Model Space to the GPU, the engine would render the entity at the wrong location on the screen; you might not even see it at all.

Why?

Because the GPU needs the Model-View-Projection (MVP) coordinate space to place the character on the screen correctly.

To produce the MVP space, the Model Space is first transformed into the World Space. The result is then transformed into the Camera (View) Space. Finally, the resulting space is transformed by the Perspective-Projection transformation, producing the correct MVP space required by the GPU.

 
gameengineloopflowpost8.png
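The chain of transformations boils down to a matrix product. Here is a simplified sketch; a real engine would build the view and projection matrices from the camera's parameters rather than receive them ready-made:

```cpp
#include <array>

using Mat4 = std::array<std::array<float, 4>, 4>;

// Standard 4x4 matrix product.
Mat4 multiply(const Mat4 &a, const Mat4 &b) {
    Mat4 out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            for (int k = 0; k < 4; ++k)
                out[r][c] += a[r][k] * b[k][c];
    return out;
}

// Compose the MVP space: Model -> World -> Camera (View) -> Perspective-Projection.
// (Here the Model-to-World transform is assumed to be folded into 'model'.)
Mat4 modelViewProjection(const Mat4 &model, const Mat4 &view, const Mat4 &projection) {
    return multiply(projection, multiply(view, model));
}
```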
 

The attributes of the entities are sent to the GPU during the initialization of the game, whereas the space coordinate is transmitted to the GPU by the engine loop during every game-tick.

With this set of information, the GPU can adequately render the 3D entity.

Updating the character state

The next stop in the engine loop is the Update stage. The engine calls each entity's update method and checks the current state of the character. Depending on the state, the game developer sets the appropriate actions.

For example, let's say that you move the joystick on the game controller, which makes the character walk. The moment you move the joystick, the character's state changes to Walk. When the engine calls the entity's update method, the walk animation is applied.

walkinganimation.gif
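A simplified version of that update method might look like this (the names are illustrative, not the engine's actual API):

```cpp
#include <string>

// A hypothetical entity whose update method reacts to its current state.
enum class State { Idle, Walk };

struct Character {
    State state = State::Idle;
    std::string currentAnimation = "idle";

    void update(float dt) {
        (void)dt;
        switch (state) {
        case State::Walk:
            currentAnimation = "walk";  // apply the walk animation
            break;
        case State::Idle:
            currentAnimation = "idle";  // fall back to the idle animation
            break;
        }
    }
};
```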

However, at this moment, the entity's space coordinate is also modified, specifically the rotation and translation components of the 4x4 matrix. The new values are transformed into the MVP space and sent to the GPU, thus creating the walking motion that you see in games.

tutorial101.gif

Physics Engine

Most game engines provide a Physics Engine (with Collision-Detection System). A game engine interacts with this system through the Engine Loop.

gameengineloopflowpost9.png

The primary purpose of the Physics Engine is to integrate the Equations of Motion, that is, to compute the velocity and position of an object from an applied force. From the newly calculated position, the model's space coordinate is modified, which creates the illusion of motion.

For example, let's say a game has Gravity enabled. Gravity is a force that acts downward.

During every game-tick, the physics engine computes the new velocity and position of the character, modifying the entity's space coordinate. Upon rendering, this creates the illusion that the character is falling due to gravity.

gravity.gif
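As a sketch, one common way to perform that integration per game-tick is semi-implicit Euler; the actual engine may use a different integrator:

```cpp
// Integrate the equation of motion for one game-tick (semi-implicit Euler).
struct Body {
    float position;  // vertical position
    float velocity;  // vertical velocity
};

void integrate(Body &body, float force, float mass, float dt) {
    float acceleration = force / mass;    // a = F / m
    body.velocity += acceleration * dt;   // v += a * dt
    body.position += body.velocity * dt;  // p += v * dt
}
```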

Collision-Detection System

The Collision-Detection System works hand in hand with the Physics Engine. Its purpose is to detect a collision, determine where the collision occurred, and compute the resultant impulse force. Like the other components, this system is called continuously by the Game Engine Loop.

gameengineloopflowpost10.png

Once the system detects a collision between two objects, it determines the exact location of the collision. It uses this information to compute the collision response correctly, that is, the impulse force that will separate the two colliding objects. Once again, the space coordinates are modified and sent to the GPU, creating the illusion of collision.

collisionlab6a.gif
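A bare-bones 1D version of that impulse computation might look like this. It's an illustration, not the engine's actual solver; velocities are measured along the contact normal:

```cpp
// 1D impulse response: j = -(1 + e) * vRel / (1/mA + 1/mB), applied with
// opposite signs so the two colliding bodies separate.
struct RigidBody {
    float velocity;     // velocity along the contact normal
    float inverseMass;  // 1 / mass
};

void resolveCollision(RigidBody &a, RigidBody &b, float restitution) {
    float relativeVelocity = b.velocity - a.velocity;
    if (relativeVelocity > 0.0f) return;  // already separating; nothing to do
    float j = -(1.0f + restitution) * relativeVelocity /
              (a.inverseMass + b.inverseMass);  // impulse magnitude
    a.velocity -= j * a.inverseMass;  // equal and opposite impulses
    b.velocity += j * b.inverseMass;
}
```

With equal masses and a restitution of 1 (a perfectly elastic collision), the two velocities are simply exchanged.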

Entity Manager

There is a fourth component that works hand in hand with the engine loop: the Entity Manager. Its purpose is to provide game entities to the engine loop as efficiently as possible.

 
gameengineloopflowpost11.png
 

Let's say that a game has over 100 game entities: 3D characters, 2D sprites, images, etc. The data structure you use to store these entities affects the speed of the game engine. If you were to store these entities in a C++ Vector Container, the engine would slow down, since it takes time to traverse the elements of a vector container. However, if these objects are stored in a data structure known as a Scenegraph, the engine's speed will not take a hit.

Why?

Because a scenegraph has a fast-traversal property.

The Entity Manager is in charge of managing the scenegraph, which provides the entities to the Engine Loop for rendering and update.
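A minimal sketch of such a scenegraph and its traversal (the names are illustrative):

```cpp
#include <functional>
#include <vector>

// A minimal scenegraph node. The Entity Manager walks the graph depth-first,
// handing each entity to the engine loop for rendering and update.
struct SceneNode {
    int entityID;
    std::vector<SceneNode> children;
};

void traverse(const SceneNode &node, const std::function<void(int)> &visit) {
    visit(node.entityID);          // process this entity
    for (const SceneNode &child : node.children)
        traverse(child, visit);    // then its children
}
```

A real scenegraph node would also carry the entity's space coordinate, so that a child inherits its parent's transformation during the walk.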

It is the Game Engine Loop that connects all the components of a game engine, making video games possible. In my opinion, it is the heart of a game engine.

Hope this helps.

Three tips before you start a technical blog


Back on Dec 24, 2014, I published my first post. I still recall sitting on my couch, making sure the article was perfect. After I hit Publish, I felt a bit of euphoria; I don't know why. I didn't expect anyone to be interested in reading articles about Game Engine Development, but it felt good to hit the Publish button.

About a month ago, I realized that it had been four years since I started blogging. I checked the blog's analytics and was amazed to see that I had 175K visitors, 133K of whom came to the site last year.

How do video game characters move?

When I was a kid, I saw an arcade machine near my house. Although I only got to play it a couple of times, I do remember how curious I was about it. My curiosity was not linked to any gameplay but to the science behind the games. I was curious how the characters followed the motion of the joystick; how the joystick was able to rotate or move the characters across the screen.

rotateByExample.gif

At the time, I was too young to comprehend the math involved. It took several decades until I was able to understand how game controllers rotate and move game characters.

If you are interested in how it works, keep on reading.

Space Coordinates

A Space-Coordinate defines the position of an object. Since a game character is made up of hundreds or thousands of vertices, a space-coordinate is assigned to the character, which defines the position of its vertices.

vertexofcharacterwithcoordsystem.png

Mathematically, a space coordinate starts out as a 3x3 Identity Matrix.

However, since a character also moves across the screen, the 3x3 matrix is extended to a 4x4 matrix that contains a translation vector.

spacecoordinatematrix.jpg

The upper 3x3 matrix controls the rotation of the character, whereas the rightmost column determines the position of the character on the screen.

Therefore, the rotation and translation of a game character are defined by the components of a 4x4 matrix.
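For example, a space coordinate that rotates a character about the vertical axis and places it on the screen could be built like this (an illustrative helper, not a real engine API):

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<float, 4>, 4>;

// Build a space coordinate that rotates about the y-axis and translates
// the character by (tx, ty, tz).
Mat4 rotationYWithTranslation(float angleRadians, float tx, float ty, float tz) {
    float c = std::cos(angleRadians);
    float s = std::sin(angleRadians);
    Mat4 m{};
    m[0][0] = c;   m[0][2] = s;   // upper 3x3: rotation about y
    m[1][1] = 1.0f;
    m[2][0] = -s;  m[2][2] = c;
    m[0][3] = tx;                 // rightmost column: translation
    m[1][3] = ty;
    m[2][3] = tz;
    m[3][3] = 1.0f;
    return m;
}
```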

Rendering

During the initialization of a game, the vertices of a game character are sent to and stored on the GPU, the Graphics Processing Unit.

Once a game starts, the space coordinate, i.e., the 4x4 matrix values, is continuously sent to the GPU. Each of the character's vertices is transformed by the 4x4 matrix right before the character is rendered on the screen.

If the space coordinate matrix is the Identity Matrix, then the character is rendered without any rotation or translation.

However, if the space coordinate is modified, the GPU renders the character as rotated or translated.

 
spacecoordmodified.png
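That per-vertex transformation can be sketched like this; the GPU performs the equivalent work in a shader, for every vertex in parallel:

```cpp
#include <array>

using Mat4 = std::array<std::array<float, 4>, 4>;
using Vec4 = std::array<float, 4>;

// Transform a single vertex (x, y, z, 1) by the 4x4 space coordinate.
Vec4 transformVertex(const Mat4 &m, const Vec4 &vertex) {
    Vec4 out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += m[r][c] * vertex[c];
    return out;
}
```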
 

Game Controller

The components of the space coordinate are modified when you move a joystick or press a button on a game controller. For example, when the joystick is moved left or right, the upper 3x3 matrix is modified, causing the character to rotate.

And when you press a button (or move a second joystick), the translation vector in the 4x4 matrix changes, causing the character to move across the screen.

 
controllerrotatecharacter.png
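A simplified sketch of those two input paths (the handler names are hypothetical):

```cpp
// The joystick's x-axis accumulates a rotation angle (which drives the upper
// 3x3 block), while a button press changes the translation vector (the
// rightmost column).
struct CharacterTransform {
    float yawRadians = 0.0f;                 // rotation about the vertical axis
    float position[3] = {0.0f, 0.0f, 0.0f};  // x, y, z translation
};

void onJoystickX(CharacterTransform &t, float axisValue, float turnRate, float dt) {
    t.yawRadians += axisValue * turnRate * dt;  // rotate the character
}

void onMoveButton(CharacterTransform &t, float step) {
    t.position[2] += step;                      // move the character forward
}
```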
 

This is how game controllers control a game character: they directly modify the character's Space-Coordinate. There is, of course, more to it, but this is the gist of how it works.

Hope it helps.

Untold Engine’s improvement goals for 2019

One of the weaknesses of the Untold Engine is its Collision Detection System. At times it fails to detect collisions or produces an incorrect collision response. Developing a stable Collision Detection System is tough, and I have realized that it can't be taken lightly. Thus, my primary goal for 2019 is to focus solely on improving the engine's Collision Detection System.

As you are aware, the GJK algorithm is in charge of detecting collisions among convex objects. The current GJK implementation has several ill-conditioned error bounds that I have yet to improve. For example, the engine fails to detect collisions between objects of disproportionate sizes. As another example, rotation causes the algorithm to miss detections.

Moreover, the BVH algorithm, which is in charge of partitioning the space between objects, is inefficient. It is currently implemented as a recursive algorithm instead of an iterative one, and it slows down after a few dozen objects.

Finally, the Sutherland-Hodgman algorithm also has flaws. Most of the time it produces the correct Collision Manifolds, but at times it provides incorrect manifolds, affecting the collision response of the engine.

Getting collision detection to work is extremely complicated, especially for 3D convex objects, but I'm looking forward to improving the Collision Detection System and having a better engine by the end of 2019.

Thank you for reading. 

What strategies do you use to finish a project?

I'm sure you have read many different strategies for successfully finishing a project. To be frank, most of the well-known procedures don't work for me, since they don't take behavioral aspects into account.

Over the years, I've kept track of what works and what doesn't. These are the strategies that have worked for me:

Manage Energy Level

We all have a limited amount of energy per day. And just as you would manage time, you must also manage your energy level.

In my case, my energy level is at its peak early in the morning. I have the least amount of energy around 2–7 pm. And for some reason, I'm more focused late at night.

Therefore, I use my energy level to my advantage whenever I'm working on a project. I try to complete critical tasks early in the morning. If I need to read, I do it at night, since I'm more focused then. And I try to do less critical tasks in the afternoon.

Avoid Perfection

Seeking perfection slows you down. And more importantly, it prevents you from quickly learning what works and what doesn't.

Moreover, you don't know what you don't know. So it is a waste of time to polish a task to perfection when you don't know how it will affect your project as a whole.

A better approach is to implement the task until it is Just Good Enough. Doing so allows you to quickly test, debug, and fix. Moreover, it will reveal weaknesses in your project that you were unaware of.

Make it a habit

The key to finishing a project is Consistency. And one way to be consistent is by making things a habit.

I suggest doing at least one thing a day related to your project, no matter how small. The key is to trick your mind into believing that you are moving forward with your goal.

Most of the time, we give up and abandon a project due to psychological factors more than technical ones. So manage your energy level and try to build momentum. Avoid wasting time trying to reach perfection, and do at least one task a day, no matter how small it is.