It has been two years since I started writing

Two years ago, on 12/25/2014, I wrote my first blog post. And after 170+ posts I'm still here, writing.

My first year was depressing. I hardly got any visitors; on average, about 5-10 a day. Like anyone, I wanted my traffic to increase, but I didn't take any marketing action. I avoided catchy headlines and popular keywords.

I didn't have a strategy. I simply started writing and sharing what I was learning. My only requirement to write a post was "Will this post help someone?"

After a year or so, my blog traffic started to increase. I remember being so happy when I started getting 40-50 visitors a day. By this time I had written over 100 articles, but I had never received a comment on any of my posts. The day I got my first comment, I was genuinely excited.

It makes me happy to see that after two years of writing and sharing my knowledge, my blog is getting more traffic: on average, about 1,600 visits a month. To some of you these numbers may be insignificant, but keep in mind that when I started, I was getting about ten visitors a day.

So why did I start writing?

Three and a half years ago I was developing a game engine in obscurity. I was learning a lot from this project, and I wanted a way to express myself and let people know about my project.

I am not a social media person. And the thought of posting a picture of myself working on the game engine on Twitter or Instagram didn't sit well with my personality.

Around this time, I was reading the book Show Your Work, which makes the following points, among others:

  • You don't have to be a genius
  • Think process, not product
  • Teach what you know

These three points encouraged me to show off my game engine. I realized that I didn't have to be a genius in game engine development to start writing about it. It made me understand that you can inspire others by showing the process of a project. And it made me realize that the whole point of social media is to educate and inform others, not to share what you ate this morning.

So I started writing about Computer Graphics, Programming and Game Engine Development.

These two years have not been easy. Many times I wanted to quit writing and shut down this blog. But I am so tired of quitting that I force myself to keep writing.

So if you want to start a blog, go for it. Don't be afraid to share your knowledge, even if you are not an expert in your field. I was not an expert when I started, but I have since written plenty of articles on Game Engine Development, C++ and OpenGL.

If your project is not ready to be shared, don't wait. Share it anyway. Write about the process. Someone out there will be interested in what you are doing. I began sharing the process of the game engine when the engine was a piece of nothingness.

Below is a timeline of the game engine process.

And finally, teach what you know. Writing, NOT coding, helped me become fluent in game engine development. If you can't teach it, then you don't understand it.

Thanks for reading.

Getting Started with Metal API

One of my goals for 2017 is to become an expert in the new graphics API from Apple known as Metal. Thus I started learning Metal and thought it would be nice to share with you what I have learned.

Prerequisite: Before using Metal: Computer Graphics Basics

Objects in Metal

Unlike OpenGL, Metal treats most rendering components as objects. For example, the rendering pipeline is represented by a Pipeline object. The shaders, known as functions in Metal, are encapsulated in Library objects. Vertex data is encapsulated in Buffer objects.

Metal requires a set of objects to be created before rendering can occur. The primary objects in Metal are:

  • Device Object
  • Command Queue Object
  • Library/Function Objects
  • Pipeline Objects
  • Buffer Objects
  • Render Pass Descriptor Object
  • Command Buffer Object
  • Render Command Encoder Object

Metal Rendering Process

The Metal Rendering Process consists of the initialization of these objects, which are created once and last for the lifetime of the application:

  • Device Object
  • Command Queue Object
  • Library/Function Objects
  • Pipeline Objects
  • Buffer Objects

And the creation of these objects during each render pass:

  • Render Pass Descriptor Object
  • Command Buffer Object
  • Render Command Encoder Object

The Metal rendering process consists of the following steps:

Initializing Objects

  1. Create a device object, i.e. a GPU
  2. Create a Command Queue object
  3. Create a CAMetalLayer
  4. Create a Library object
  5. Create a Pipeline object
  6. Create buffer objects

Render-Pass

  1. Retrieve the next drawable layer
  2. Create a Render Pass Descriptor object
  3. Create a Command Buffer object
  4. Create a Render Command Encoder object
  5. Present the drawable
  6. Commit the Command Buffer

Initializing Metal Objects

Create a Metal Device

The Metal rendering process starts with the creation of an MTLDevice object. An MTLDevice represents an abstraction of the GPU. A device object is used to create other kinds of objects such as buffers, textures and function libraries.

Create a Command Queue Object

An MTLCommandQueue object is created from an MTLDevice. The Command Queue object provides a way to submit commands/instructions to the GPU.

Create a CAMetalLayer

Next, we must provide a destination texture for the rendering pipeline.

The destination of a rendering pipeline is a Framebuffer. A framebuffer contains several attachments such as Color, Depth, and Stencil attachments. However, a framebuffer can display the rendering content to a screen ONLY if a texture has been attached to a framebuffer attachment. The CAMetalLayer object provides a texture that is used as a rendering destination.

Create a Library and Function Objects

Next, we must create the Vertex and Fragment functions (Shaders) that will be used by the rendering pipeline. The Vertex and Fragment functions are MTLFunction objects and are created through an MTLLibrary object.

Create a Rendering Pipeline Object

Now it is time to create the rendering pipeline object.

An MTLRenderPipelineState object represents a rendering pipeline. However, unlike other objects, you do not create a pipeline object directly. Instead, you create it indirectly through an object called a Rendering Pipeline Descriptor.

The MTLRenderPipelineDescriptor describes the attributes of a render pipeline state. For example, it defines the Vertex and Fragment functions used by the pipeline, as well as the color attachment properties.

Create Buffer Objects

The next step is to load MTLBuffer objects with vertex data, i.e., vertices, normals, UV coordinates, etc.

The interaction between these objects is illustrated below:

Metal Render-Pass

Whereas the objects mentioned in "Initializing Metal Objects" are created once and last for the lifetime of your application, the objects created during the render pass are short-lived and recreated on every frame.

The steps in the rendering-pass stage are as follows:

Retrieve the next drawable layer

As mentioned above, a framebuffer requires a texture before it can display the rendering results to the screen. Thus, you must ask the CAMetalLayer object for the next available texture.

Create a Render Pass Descriptor object

Next, we must describe the various actions that must occur before and after the render pass. For example, you may want to clear the rendering texture to a particular color.

These actions are described through the MTLRenderPassDescriptor object. Moreover, the MTLRenderPassDescriptor object links the texture provided by the CAMetalLayer as the pipeline's destination texture.

Create a Command Buffer object & Encoder object

We then create an MTLCommandBuffer object. An MTLCommandBuffer object stores drawing commands and rendering pipeline states until the buffer is committed for execution by the GPU.

However, these drawing commands and rendering pipeline states must be encoded by an MTLRenderCommandEncoder object before they are stored into the MTLCommandBuffer object. Essentially, the MTLRenderCommandEncoder translates the commands into a format the GPU can understand.

Present the drawable layer

I mentioned previously that the CAMetalLayer provides a texture that serves as the rendering destination. With our commands encoded, we must inform the command buffer to present this texture to the screen once rendering is complete.

Commit the Command Buffer

Finally, the Command Buffer is committed and placed into the Command Queue, where it waits to be executed by the GPU.

The render pass routine is illustrated below:

In summary, the Metal rendering process consists of these steps:

  1. Create a device object, i.e. a GPU
  2. Create a Command Queue object
  3. Create a CAMetalLayer
  4. Create a Library object
  5. Create a Pipeline object
  6. Create buffer objects
  7. Retrieve the next drawable layer
  8. Create a Render Pass Descriptor object
  9. Create a Command Buffer object
  10. Create a Render Command Encoder object
  11. Present the drawable
  12. Commit the Command Buffer

Your First Metal Application

Let's create a simple Metal application. We are going to render a simple red rectangle on the screen.

For your convenience, the project can be found here.

Download the project so you can follow along.

Note: The project is found in the "MetalBasics" git branch. The link should take you directly to that branch.

Open Xcode and create a new project. Select "Single View Application" as the template and give your project a name. Select "Objective-C" as the language.

Include the following Frameworks into your project through the "Build Phases" tab:

  • Metal
  • UIKit
  • QuartzCore

In the "ViewController.h" file, make sure to import the following headers:

#import <UIKit/UIKit.h>
#import <Metal/Metal.h>
#import <QuartzCore/CAMetalLayer.h>
#import <GLKit/GLKMath.h>
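
The snippets that follow reference several Metal objects as instance variables. Here is a minimal declaration sketch, assuming they live in a class extension in "ViewController.m"; the names match the snippets below, but the exact placement in the sample project may differ:

@interface ViewController ()
{
    id<MTLDevice> mtlDevice;                                  //GPU abstraction
    id<MTLCommandQueue> mtlCommandQueue;                      //submits work to the GPU
    CAMetalLayer *metalLayer;                                 //provides drawable textures
    MTLRenderPipelineDescriptor *mtlRenderPipelineDescriptor; //describes the pipeline
    id<MTLRenderPipelineState> renderPipelineState;           //the compiled pipeline
    id<MTLBuffer> vertexBuffer;                               //holds the rectangle vertices
    id<CAMetalDrawable> frameDrawable;                        //texture destination for the current frame
    CADisplayLink *displayLink;                               //per-frame callback timer
}
@end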

We are going to initialize Metal in the viewDidLoad method, following the 12 Metal rendering steps outlined above.

Step 1. Create a metal device:

mtlDevice=MTLCreateSystemDefaultDevice();

Step 2. Create a command queue object

mtlCommandQueue=[mtlDevice newCommandQueue];

Step 3. Create a CAMetalLayer

metalLayer=[CAMetalLayer layer];
metalLayer.device=mtlDevice;
metalLayer.pixelFormat=MTLPixelFormatBGRA8Unorm;
metalLayer.frame=self.view.bounds;
[self.view.layer addSublayer:metalLayer];

Step 4. Create a library object and function objects

//create a library object
id<MTLLibrary> mtlLibrary=[mtlDevice newDefaultLibrary];

//create a vertex and fragment function object
id<MTLFunction> vertexProgram=[mtlLibrary newFunctionWithName:@"vertexShader"]; 
id<MTLFunction> fragmentProgram=[mtlLibrary newFunctionWithName:@"fragmentShader"];

Step 5. Build the Rendering Pipeline

//build a Render Pipeline Descriptor Object
 mtlRenderPipelineDescriptor=[[MTLRenderPipelineDescriptor alloc] init];

//assign the vertex and fragment functions to the descriptor
[mtlRenderPipelineDescriptor setVertexFunction:vertexProgram];
[mtlRenderPipelineDescriptor setFragmentFunction:fragmentProgram];

//specify the target-texture pixel format
mtlRenderPipelineDescriptor.colorAttachments[0].pixelFormat=MTLPixelFormatBGRA8Unorm;

//Build the Rendering Pipeline Object, checking for errors instead of passing nil
NSError *error=nil;
renderPipelineState=[mtlDevice newRenderPipelineStateWithDescriptor:mtlRenderPipelineDescriptor error:&error];

if(!renderPipelineState){
    NSLog(@"Failed to create the pipeline state: %@",error);
}

Step 6. Create Buffer objects and load data into them

We will use the following data as the vertices of our rectangle (two triangles forming a quad):

static float quadVertexData[] =
{
    0.5, -0.5, 0.0, 1.0,
    -0.5, -0.5, 0.0, 1.0,
    -0.5,  0.5, 0.0, 1.0,

    0.5,  0.5, 0.0, 1.0,
    0.5, -0.5, 0.0, 1.0,
    -0.5,  0.5, 0.0, 1.0
};

The vertices are loaded into the buffer object:

//load the data QuadVertexData into the buffer
vertexBuffer=[mtlDevice newBufferWithBytes:quadVertexData length:sizeof(quadVertexData) options:MTLResourceOptionCPUCacheModeDefault];

At this point, the initialization of the Metal objects is complete. Next, we need a timer that repeatedly calls a render-pass function. The best way to do this is with a CADisplayLink object.

//Set the display link object to call the renderscene method continuously
displayLink=[CADisplayLink displayLinkWithTarget:self selector:@selector(renderScene)];

[displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];

The system will call the "renderScene" method repeatedly, once per screen refresh.

Create a method called "renderScene" and include the following:

Step 7. Get the next drawable texture

frameDrawable=[metalLayer nextDrawable];

Step 8. Create a Render Pass Descriptor object

//create a render pass descriptor
MTLRenderPassDescriptor *mtlRenderPassDescriptor =[MTLRenderPassDescriptor renderPassDescriptor];

//set the target texture for the rendering pipeline
mtlRenderPassDescriptor.colorAttachments[0].texture=frameDrawable.texture;

//set the following states for the pipeline. i.e., clear the texture before each render pass
mtlRenderPassDescriptor.colorAttachments[0].loadAction=MTLLoadActionClear;
mtlRenderPassDescriptor.colorAttachments[0].clearColor=MTLClearColorMake(1.0, 1.0, 1.0, 1.0); 
mtlRenderPassDescriptor.colorAttachments[0].storeAction=MTLStoreActionStore;

Step 9. Create a Command Buffer

id<MTLCommandBuffer> mtlCommandBuffer=[mtlCommandQueue commandBuffer];

Step 10. Create a Command Encoder object

//create a command encoder
id<MTLRenderCommandEncoder> renderEncoder=[mtlCommandBuffer renderCommandEncoderWithDescriptor:mtlRenderPassDescriptor];

//Configure the encoder with the pipeline state
[renderEncoder setRenderPipelineState:renderPipelineState];

//set the vertex buffer object and the index for the data
[renderEncoder setVertexBuffer:vertexBuffer offset:0 atIndex:0];

//Set the draw command
[renderEncoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:6];

//End encoding
[renderEncoder endEncoding];

Step 11. Present the drawable

[mtlCommandBuffer presentDrawable:frameDrawable];

Step 12. Commit the buffer

[mtlCommandBuffer commit];

The Metal API initialization and render-pass operations are complete. However, we need to set up our function shaders.

Setting up the Function Shaders

Go to File->New and create a new file. Select a "Metal File" and call it "MyShader."

I will not go into detail about how shaders work, but know this: a Vertex shader processes incoming geometric data, while a Fragment shader sets the color of the outgoing fragment.

In step 4, we created two function objects with the names "vertexShader" and "fragmentShader." We need to create two function shaders with the same names in the "MyShader" file.

#include <metal_stdlib>
using namespace metal;

//Vertex Function (Shader)
vertex float4 vertexShader(const device float4 *vertices [[buffer(0)]], uint vid [[vertex_id]]){

    return vertices[vid];

}

//Fragment Function (Shader)
fragment float4 fragmentShader(float4 pos [[position]]){

    //set the fragment color to red
    return float4(1.0,0.0,0.0,1.0);

}

The vertex function shader receives vertex data through the argument "vertices [[buffer(0)]]".

If you look at step 10, you told the render encoder to use the information in the vertexBuffer (which holds your rectangle vertices) and link it to the buffer at index 0. The vertex function shader receives this information one vertex at a time.

The fragment function shader sets the color of each fragment to red.

And that is it. If you have an iPhone or an iPad, connect it to your Mac and run the project. You should see a red rectangle on the screen. Note: the iOS simulator does not support Metal yet.

The complete project can be found on my GitHub page.

Algorithms in Game Engine Development

For a game character to experience physics, a game engine needs to compute the equations of motion, collisions, contact points, etc. A set of basic algorithms makes these effects possible. For example, the Runge-Kutta Method computes the Equations of Motion using numerical integration. The Gilbert-Johnson-Keerthi (GJK) algorithm detects collisions using the Minkowski Difference. The Sutherland-Hodgman algorithm identifies collision contact points by clipping a polygon.

Numerical Integration Methods

Calculating the equations of motion allows a character to fall as if gravity were acting on it. The Equations of Motion are Newton's Second Law:

F = ma

and its rotational analog:

τ = Iα

where F is the net force, m the mass, a the linear acceleration, τ the net torque, I the moment of inertia, and α the angular acceleration.

A game engine integrates the Equations of Motion to obtain the velocity and displacement of a character. The engine does this operation in a continuous loop which consists of the following steps:

  1. Identify all forces and moments acting on the object.
  2. Take the vector sum of all forces and moments.
  3. Solve the equation of motion for linear and angular acceleration.
  4. Integrate the acceleration with respect to time to find the linear and angular velocity.
  5. Integrate the velocity with respect to time to find the linear and angular displacement.

If a character experiences gravitational and torque forces, the continuous loop creates the illusion that an object falls and rotates.

The problem lies in integrating the acceleration and the velocity. Computers can only approximate the value of an integral by using Numerical Integration techniques.

Three numerical integration methods are commonly used in game engine development:

  • Euler Method
  • Verlet Method
  • Runge-Kutta Method

Euler's Method calculates the velocity at a time interval and predicts the next velocity at t+∆t. The method is simple to implement, yet it is the least accurate. The illustration below shows the shortcoming of this approach. You can argue that the smaller you make ∆t, the closer you get to the exact solution. However, there is a practical limit to how small a time step you can take.
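
As a rough illustration, here is a minimal Euler-step sketch in C++; the State type and the function names are mine, not the engine's:

struct State{
    float position;
    float velocity;
};

//Advance the state by one time step dt, given the acceleration a = F/m
void eulerStep(State &state, float acceleration, float dt){

    state.position += state.velocity * dt;  //integrate the current velocity to get the next position
    state.velocity += acceleration * dt;    //integrate the acceleration to get the next velocity
}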

The Runge-Kutta Method is a numerical integration technique that provides a better approximation of the equation of motion. Unlike Euler's Method, which calculates one slope per interval, the Runge-Kutta method calculates four different slopes and combines them in a weighted average.

These slopes are commonly referred to as k1, k2, k3 and k4, and an engine needs to compute them at every time step.

The Runge-Kutta method uses the weighted average of these slopes to better approximate the actual slope, i.e., the velocity of the object. The object's position is then calculated using this new slope.
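
Here is a minimal RK4 sketch in C++ for a single equation dv/dt = a(t, v); the function pointer a stands in for whatever computes the acceleration:

//One RK4 step: advance v from time t to time t + dt
float rk4Step(float t, float v, float dt, float (*a)(float t, float v)){

    float k1 = a(t, v);                         //slope at the start of the interval
    float k2 = a(t + dt/2.0f, v + k1*dt/2.0f);  //slope at the midpoint, using k1
    float k3 = a(t + dt/2.0f, v + k2*dt/2.0f);  //slope at the midpoint, using k2
    float k4 = a(t + dt, v + k3*dt);            //slope at the end of the interval

    //weighted average of the four slopes
    return v + (dt/6.0f)*(k1 + 2.0f*k2 + 2.0f*k3 + k4);
}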

Collision Detection

Detecting collisions involves trade-offs. A simple collision detection system is fast but imprecise, whereas a complex collision detection system is precise but computationally expensive.

During collision detection, an engine bounds a game character with a geometric volume. This volume is known as a Boundary Volume. The most common are:

  • Sphere
  • OBB (Oriented Bounding Box)
  • AABB (Axis-Aligned Bounding Box)
  • Convex Hull

A collision detection system works by detecting whether boundary volumes intersect. A system that uses Spherical Boundary Volumes is fast, but it returns many false detections.

Back in the 1980s, I'm sure a detection system using spherical boundary volumes was acceptable, but nowadays, gamers may not be so happy with many false detections.

A more precise detection system uses Convex Hulls as boundary volumes.

The intersection between Convex Hulls is computed using the Gilbert-Johnson-Keerthi (GJK) algorithm. Surprisingly, the mathematics behind this algorithm is quite simple: the algorithm reports an intersection if the Minkowski Difference of the two volumes contains the origin. The image below illustrates two convex hulls intersecting (left). Since their Minkowski Difference (right) includes the origin, the algorithm reports an intersection.
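
At the heart of GJK is a support function. Below is a minimal 2D sketch in C++, with point types of my own invention rather than any particular engine's:

#include <vector>

struct Vec2 { float x, y; };

static float dot(Vec2 a, Vec2 b){ return a.x*b.x + a.y*b.y; }

//Farthest vertex of a convex hull along direction d
static Vec2 support(const std::vector<Vec2> &hull, Vec2 d){
    Vec2 best = hull[0];
    for (const Vec2 &p : hull){
        if (dot(p, d) > dot(best, d)) best = p;
    }
    return best;
}

//Support point of the Minkowski Difference A - B along direction d.
//GJK builds a simplex from such points and tests whether the simplex
//encloses the origin.
static Vec2 supportAB(const std::vector<Vec2> &A, const std::vector<Vec2> &B, Vec2 d){
    Vec2 a = support(A, d);
    Vec2 b = support(B, {-d.x, -d.y});
    return {a.x - b.x, a.y - b.y};
}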

Even though the mathematics behind the GJK algorithm is simple, it is very computationally expensive.

A collision system avoids calling the GJK algorithm for every possible collision by using a Two-Phase Detection System. These phases are known as Broad-Phase and Narrow-Phase detection.

A Broad-Phase detection system detects collisions using spherical boundary volumes. This phase is fast and reports any possible collision to the Narrow Phase. The Narrow Phase then tests each reported collision using the more expensive but precise GJK algorithm.
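
The broad-phase test itself is cheap. A sketch, with a made-up Sphere type:

struct Sphere { float x, y, z, radius; };

//Two spheres overlap if the distance between their centers is less than
//the sum of their radii; comparing squared distances avoids a sqrt.
static bool spheresOverlap(const Sphere &a, const Sphere &b){
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    float r = a.radius + b.radius;
    return dx*dx + dy*dy + dz*dz < r*r;
}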

To further minimize calls to the GJK algorithm, a game engine parses the space of game characters and creates a tree-like structure known as a Boundary Volume Hierarchy (BVH).

The BVH algorithm parses the position of every object and assigns it to a particular node of the binary tree. The algorithm recursively analyzes each node until every leaf contains the pair of characters most likely to collide.
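
A node in such a tree might look like the sketch below, reusing the Sphere type from the previous snippet; the layout is illustrative, not the engine's actual structure:

#include <vector>

struct BVHNode {
    Sphere boundingVolume;    //sphere enclosing every object under this node
    BVHNode *left;            //child nodes; null at the leaves
    BVHNode *right;
    std::vector<int> objects; //indices of the game objects assigned to this node
};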

Collision Response

A collision detection system reports whether a collision has occurred. Unfortunately, it does not report the Contact Manifold (the contact points) of the collision. The contact manifold is essential for determining the Collision Response (the impulse and resulting velocities) of the collided characters.

A game engine uses the Sutherland-Hodgman Algorithm to compute the contact manifolds of two colliding characters. The algorithm starts off by identifying a Reference and an Incident polygon.

The segments of the reference polygon serve as Reference Planes for the algorithm.

Once a reference polygon is identified, the algorithm tests each vertex of the incident polygon against each reference plane by using a Clipping rule.

Upon termination, the algorithm generates a new polygon whose vertices are the contact points (Contact Manifold) of two collided polygons.
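
A minimal 2D sketch of one clipping pass against a single reference plane is shown below; the types are mine, and a full implementation repeats this pass for every reference plane:

#include <vector>

struct Vec2 { float x, y; };
struct Plane { Vec2 normal; float offset; }; //points p with dot(normal, p) - offset >= 0 are inside

static float signedDistance(const Plane &pl, Vec2 p){
    return pl.normal.x*p.x + pl.normal.y*p.y - pl.offset;
}

//Clip the incident polygon against one reference plane:
//keep the inside vertices, and split any edge that crosses the plane.
static std::vector<Vec2> clipAgainstPlane(const std::vector<Vec2> &poly, const Plane &pl){
    std::vector<Vec2> out;
    if (poly.empty()) return out;
    for (size_t i = 0; i < poly.size(); ++i){
        Vec2 a = poly[i];
        Vec2 b = poly[(i + 1) % poly.size()];
        float da = signedDistance(pl, a);
        float db = signedDistance(pl, b);
        if (da >= 0.0f) out.push_back(a);   //vertex a is inside: keep it
        if ((da >= 0.0f) != (db >= 0.0f)){  //the edge crosses the plane: add the intersection point
            float t = da / (da - db);
            out.push_back({a.x + t*(b.x - a.x), a.y + t*(b.y - a.y)});
        }
    }
    return out;
}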

Visualizing Game Engine Algorithms

Now that you know these game engine algorithms, you may want to know how to implement them. Several books jump straight into the implementation of these algorithms without giving you a visual depiction of how they work. I think that if you can see a visual representation of an algorithm, you can implement it more easily and quickly. Thus, I wrote several posts that provide a visual explanation of each algorithm. I hope they are helpful:

Enjoy.

PS. Sign up to my newsletter and get Game Engine development tips.

Game Engine Second Game Demo

To showcase the changes made in beta version v0.0.3, I decided to make a simple shooting game. Initially, the game was going to be a battle between tanks (see screenshots below). However, as I modeled the 3D game assets, it occurred to me to add an airplane and an anti-aircraft gun.

The final game demo ended up being a battle between the tank, the aircraft, and the anti-aircraft gun. Below is a video showcasing the game.

The game demo makes use of the camera rotation feature. The camera follows the anti-aircraft gun's view direction, thus creating the illusion that the player is controlling the gun.

The demo also makes use of the collision-detection system and the scenegraph. The tank, airplane, and anti-aircraft gun are composed of child game objects. For example, the tank is made of the tank head and the tank base. When a bullet hits the tank, the engine detects the collision, and the game disassociates the tank's children, causing the tank head to move up. The same idea is implemented for the airplane.

So what is next?

I found several issues with the engine. The main one is an occasional crash during gameplay, which seems to happen in the OpenGL Manager. I will also add several features to the engine, such as graphical effects showing an explosion.

Hope to showcase these changes next year :)

Understanding Encapsulation in C++

To understand what encapsulation is, I will start off with a brief explanation of setters/getters and class behavior.

Setters/Getters

C++ is an Object-Oriented Programming (OOP) language. One of the principles of OOP is to encapsulate data members as private members and access them through a public access interface.

Let's take for example the class shown below:

class Person{

private:

    int age; //1. Age is a private data member

public:

    Person();

    ~Person();

    void setAge(int uAge); //2. Sets the value of the age member

    int getAge(); //3. Gets the value of the age member

};

The data member 'age' is a private data member (line 1). The value of this data member is accessed either through the method 'setAge' or 'getAge.' The method 'setAge' sets the value of 'age' (line 2). The method 'getAge' retrieves the value of 'age' (line 3).

In Object-Oriented Programming languages, a method that sets the value of a private data member is known as a 'setter,' while a method that retrieves the value of a data member is known as a 'getter.' Therefore, the 'setAge' method is referred to as a setter, whereas the 'getAge' method is referred to as a getter.

Class Behavior

In a program, a Class represents a real-world object or entity. A class has data members and methods that account for the behavior of the object.

A class behavior is a common action performed by the object. For example, an artist has a set of common behaviors; before painting a masterpiece, an artist prepares the easel, cleans the brushes and starts painting.

A program can represent an artist's behaviors or actions through a class's methods. The snippet below illustrates the behaviors of an artist object in C++. See lines 1, 2 & 3.

class Artist{

private:

public:

    Artist();
    ~Artist();

    void prepareEasel(); //1. Prepares the easel

    void cleanBrushes(); //2. Cleans the brushes

    void paint(); //3. Artist's paint style  
};

Encapsulation

Encapsulation is the act of making data members private and accessing them through a public interface, i.e., setter and getter methods.

Object-Oriented Programming principles suggest always encapsulating data members. However, data members are not the only entities that should be encapsulated. A class's behaviors should also be encapsulated; in particular, the behaviors that vary.

Let's analyze the Artist class shown above. The class contains these actions:

  • Prepares Easel
  • Cleans Brushes
  • Paints

Of these behaviors, painting differs among painters. For example, an artist could paint in a modern, impressionist, or abstract style. The other two actions do not change among painters: every artist prepares an easel and cleans a brush the same way.

Therefore, we should encapsulate the 'paint' behavior since it varies from object to object.

In C++, abstract classes are used to encapsulate behaviors. The snippet below shows an abstract class, 'PaintStyle,' that encapsulates the 'paint' behavior (see line 1).

class PaintStyle{

public:

    virtual void paint()=0;  //1. Paint virtual method

    virtual ~PaintStyle(){}; //2. Virtual destructor
};

To make use of the abstract class, you need to create a subclass. The snippet below shows a subclass of 'PaintStyle.' Notice that its 'paint' behavior prints "I paint in a modern style" (line 4).

#include <iostream>

class ModernPaintStyle:public PaintStyle{

private:

public:

    ModernPaintStyle(); //1. Constructor

    ~ModernPaintStyle(); //2. Destructor

    void paint(); //3. Paint method

};

ModernPaintStyle::ModernPaintStyle(){}

ModernPaintStyle::~ModernPaintStyle(){}

void ModernPaintStyle::paint(){

    std::cout<<"I paint in a modern style"<<std::endl; //4. Prints "I paint in modern style"

}

Let's create a second subclass of 'PaintStyle.' But this time set the behavior to print "I paint in an impressionist style" (line 4).

class ImpressionistPaintStyle:public PaintStyle{

private:

public:

    ImpressionistPaintStyle(); //1. Constructor

    ~ImpressionistPaintStyle(); //2. Destructor

    void paint(); //3. Paint method

};

ImpressionistPaintStyle::ImpressionistPaintStyle(){}

ImpressionistPaintStyle::~ImpressionistPaintStyle(){}

void ImpressionistPaintStyle::paint(){

    std::cout<<"I paint in an Impressionist style"<<std::endl; //4. Prints "I paint in an Impressionist style"

}

Let's go back to the Artist class and apply the changes shown below.

The snippet shows a pointer member of type 'PaintStyle' (line 1). Line 4 illustrates the addition of the method 'setPaintStyle.' This method, along with the modification of the 'paint' method (line 5), injects polymorphism into the class. That is, the Artist class can change behaviors at runtime (see lines 6 & 7).

class Artist{

private:

    PaintStyle *artistStyle; //1. Pointer member to 'PaintStyle'

public:

    Artist();
    ~Artist();

    void prepareEasel(); //2. Prepares the easel

    void cleanBrushes(); //3. Cleans the brushes

    void setPaintStyle(PaintStyle *uStyle); //4. Sets the paint style

    void paint(); //5. Artist's paint style

};

Artist::Artist(){}

Artist::~Artist(){}

void Artist::setPaintStyle(PaintStyle *uStyle){

    artistStyle=uStyle; //6. Sets the style of the painter
}

void Artist::paint(){

    artistStyle->paint(); //7. Calls the paint method of the subclass

}

Polymorphism permits the Artist class to paint in a modern style one minute and, the next, behave like an artist who paints in an impressionist style.

The snippet below showcases these properties.

int main(){

    Artist *picasso=new Artist(); //1. Create an instance of an artist

    PaintStyle *modernStyle=new ModernPaintStyle(); //2. Create an instance of a modern painter style

    PaintStyle *impressionistStyle=new ImpressionistPaintStyle(); //3. Create an instance of an impressionist painter style

    picasso->setPaintStyle(modernStyle); //4. sets the painter style to a modern style

    picasso->paint(); //5. Calls paint method. It prints "I paint in a modern style"

    picasso->setPaintStyle(impressionistStyle); //6. Change the painter style to an impressionist style

    picasso->paint(); //7. Calls the paint method. It prints "I paint in an impressionist style"

    //8. Clean up the heap-allocated objects
    delete modernStyle;
    delete impressionistStyle;
    delete picasso;

    return 0;

}

Line 1 creates an instance of an artist object. Lines 2 & 3 create instances of different paint styles. The painter style is set in line 4, and when the method 'paint' is executed, it prints "I paint in a modern style" (line 5).

Because of polymorphism and encapsulation, the behavior of the class can change with a simple method call, as shown in line 6. When the 'paint' method is called again, it prints "I paint in an impressionist style" (line 7).

Polymorphism and encapsulation give flexibility and modularity to an application, enabling a program to be extended and modified with few changes. For example, years from now, a new paint style could be added without ever touching the Artist class.

Hope this helps.