Game Engine Beta v0.0.4

It has been a month and a half since I gave you an update on the engine. I have been busy implementing new features and fixing several bugs. Some of the new features are shown in the video below:

 
 

Improvements

One of the major features that I implemented in v0.0.4 is a particle system. To be honest, the particle system is very primitive. I am still learning how to create several particle effects, so expect more effects soon. As you can see from the video, I was able to implement a "kind of" explosion effect.

I also implemented collision filters. Collision filters are useful whenever you want certain types of objects to collide with one another but not with other types. For example, object A and object B can collide, and object A and object C can collide, but any collision between object B and object C is ignored.
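
To illustrate the idea, here is a minimal sketch of how collision filters are commonly implemented with category and mask bitfields. This is a generic example, not the engine's actual API; all names are made up.

#include <cstdint>

// Generic sketch of bitmask-based collision filtering (illustrative only).
struct CollisionFilter {
    uint32_t category; // the group this object belongs to
    uint32_t mask;     // the groups this object is allowed to collide with
};

// Two objects collide only if each one accepts the other's category.
bool shouldCollide(const CollisionFilter &a, const CollisionFilter &b) {
    return (a.mask & b.category) != 0 && (b.mask & a.category) != 0;
}

// Example matching the scenario above: A-B and A-C collide, B-C is ignored.
enum : uint32_t { GroupA = 1u << 0, GroupB = 1u << 1, GroupC = 1u << 2 };

CollisionFilter objectA{GroupA, GroupB | GroupC};
CollisionFilter objectB{GroupB, GroupA};
CollisionFilter objectC{GroupC, GroupA};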

A minor detail I had ignored all along was enabling multi-touch support in the engine.

Issues

While developing the second game demo, I started noticing glitches in the OpenGL manager. With a particular type of object, the OpenGL manager would spit out an error. This issue was hard to detect, and it took me quite a few weeks to find it. I thought I had fixed the bug, but as I was developing this beta version, the OpenGL manager complained again (once). The problem with this bug is that it is intermittent and very hard to reproduce.

I'm considering porting the engine to Apple's graphics API, Metal. However, I'm still weighing the pros and cons of OpenGL vs. Metal. One thing I have noticed is that Metal is a lot easier to work with than OpenGL, but that is just my opinion.

Thanks for reading

Components of a Game Engine

In 2013, I decided to develop a game engine from scratch. Why I decided to do so, I still do not know. What I do know is that I wanted to do something beyond my intellectual abilities.

When I started, I knew nothing about game engines, OpenGL, or computer graphics. My C++ skills were limited, and I remember having trouble grasping Linear Algebra in college.

Developing a Game Engine demanded that I wake up earlier than most people (5:00 am), so I could squeeze in about two hours of coding before heading to work. It forced me to code until the late hours of the night (approx 7:00 pm-1:00 am). And it made me say goodbye to my weekends. Weekends that I spent coding in my room or at Starbucks instead of enjoying life.

Then on July 21, 2016, around 2:00 am, I did it!!! I finally finished the basic framework of the game engine. It took three years, about 1,095 days, approximately 15,330 hours of work.

Throughout this journey, my math, coding, and engineering skills improved tenfold. However, it would be worth little if I didn't share what I've learned with you. Thus, I decided to share all my knowledge on this blog. As of today, I have written over 175 articles on this blog.

I have decided to compile my best articles into an ebook. In this ebook, Components of a Game Engine, I share all that I know about game engine development. I talk about computer graphics concepts such as the rendering pipeline, shaders, and lighting. I also cover computational geometry and its use in collision detection. Furthermore, I explain several algorithms used in game engines.

Components of a Game Engine will not make you a guru on game engines, but it will give you a solid understanding of the mechanics and elements of a game engine. The material in the ebook is freely available on my site. However, if you want all these articles in an organized format, I recommend getting a copy of the ebook.

I would appreciate it if you supported this site by buying my new ebook, Components of a Game Engine.

Thanks

Applying Light to a 3D model using Metal

In the previous post, you learned how to shade a 3D model. The shading effect was very simple; it merely provided the 3D model with depth perception. In this post, you will learn how to light an object by simulating Ambient-Diffuse-Specular (ADS) lighting.

Before I start, I recommend you read the prerequisite materials listed below:

Prerequisite:

How Light Works

When light rays hit an object, they are either reflected, absorbed, or transmitted. For our discussion, I will focus solely on the reflection of light rays.

When a light ray hits a smooth surface, the ray is reflected at an angle equal to its angle of incidence. This type of reflection is known as Specular Reflection.

Visually, specular reflection is the white highlight seen on smooth, shiny objects.

In contrast, when a light ray hits a rough surface, the ray is reflected at an angle different from that of its incident ray. This type of reflection is known as Diffuse Reflection.

Diffuse reflection enables our brain to make out the shape and the color of an object. Thus, diffuse reflection is more important to our vision than specular reflection.

Let's go through a simple animation to understand the difference between these reflections.

The animation below shows a 3D model with only specular reflection:

Whereas, the animation below shows a 3D model with only diffuse reflection:

As you can see, it is almost impossible for our brain to make out the shape of an object with only specular reflection.

There is another type of reflection, known as Ambient Reflection. Ambient reflection consists of light rays that enter a room and bounce around multiple times before reflecting off a particular object.

When we combine these three types of reflections, we get the following result:

Simulating Light Reflections

Now that you know how light works, the next question is: How can we model it mathematically?

Diffuse Reflection

In diffuse reflection, a light ray's reflection angle is not equal to its incident angle. From experience, we also know that a light ray's incident angle influences the brightness of an object. For example, an object will have a different brightness when a light ray hits the object's surface at a 90-degree angle than when it hits the surface at a 5-degree angle.

Mathematically, we can simulate this natural behavior by computing the Dot-Product between the light rays and a surface's Normal vector. When a light source vector S is parallel to and heading in the same direction as the normal vector n, the dot product is 1, meaning that the surface location is fully lit. Recall that the dot product of two unit vectors ranges between [-1.0, 1.0]; for example, when the two vectors are perpendicular, the dot product is 0 and the surface receives no diffuse light.

As the light source moves, the angle between vectors S and n changes, thus changing the dot product value. When this occurs, the brightness levels also change.

Taking into account the surface's Diffuse Reflection factor, the equation for Diffuse Reflection is:

Diffuse = Kd × Ld × max(0, n · s)

where Kd is the material's diffuse reflection factor, Ld is the light's diffuse color, n is the surface normal, and s is the normalized light-ray direction.

Specular Reflection

In Specular Reflection, the light ray's reflection angle is always equal to its incident angle. However, the specular reflection that reaches your eyes is dependent on the angle between the reflection ray (r) and the viewer's location (v).

This behavior implies that to model specular reflection, we need to compute a reflection vector from the normal vector and the light-ray vector. We then calculate the influence of the reflection vector on the view vector, i.e., we compute their dot product. The result provides the specular reflection of the object.

Taking into account the surface's Specular Reflection factor, the equation for Specular Reflection is:

Specular = Ks × Ls × max(0, r · v)^n

where Ks is the material's specular reflection factor, Ls is the light's specular color, r is the reflection vector, and v is the view vector. The exponent n determines the size of the highlight.

Ambient Reflection

There is not much to ambient reflection. The ambient reflection depends on the light's ambient color and the material's ambient reflection factor.

The equation for Ambient Reflection is:

Ambient = Ka × La

where Ka is the material's ambient reflection factor and La is the light's ambient color.

Simulating Light in the Rendering Pipeline

In Computer Graphics, Light is simulated in either the Vertex or Fragment Shaders. When lighting is simulated in the Vertex Shader, it is known as Gouraud Shading. If lighting is simulated in the Fragment Shader, it is known as Phong Shading.

Gouraud shading (also known as Smooth Shading) is a per-vertex color computation. This means that the vertex shader determines the lighting for each vertex and passes the result, i.e., a color, to the fragment shader. Since this color is passed down the pipeline, it is interpolated across the fragments, thus producing smooth shading.

Here is an illustration of the Gouraud Shading:

In contrast, Phong shading is a per-fragment color computation. The vertex shader passes the normal vectors and position data down to the fragment shader. These values are interpolated across each fragment by the rasterizer, and the fragment shader then calculates the lighting for each fragment.

Here is an illustration of the Phong Shading:

With either Gouraud or Phong Shading, the lighting computation is the same, although the results will differ.

In this project, we will implement a Phong Shading light effect. For your convenience, I provide a link to the Gouraud Shading version of the project at the end of the article.
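
For reference, the per-fragment data that the vertex shader hands off in a Phong setup looks roughly like the struct below. This is only a sketch: the field names follow the ones used later in this article, while the struct name and the [[position]] member are assumptions on my part.

#include <metal_stdlib>
using namespace metal;

// Sketch of the vertex-to-fragment structure for Phong shading.
// The rasterizer interpolates these values before the fragment shader runs.
struct VertexOutput{
    float4 position [[position]];   // clip-space position (assumed member)
    float4 verticesInMVSpace;       // vertex position in Model-View space
    float3 normalVectorInMVSpace;   // normal vector in Model-View space
    float2 uvcoords;                // texture coordinates used for sampling
};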

Setting up the project

Let's apply Lighting (Phong shading) to a 3D model.

By now, you should know how to set up a Metal Application and how to implement simple shading to a 3D object. If you do not, please read the prerequisite articles mentioned above.

For your convenience, the project can be found here. Download the project so you can follow along.

Note: The project is found in the "applyingLightFragmentShader" git branch. The link should take you directly to that branch. Let's start,

Open up file "MyShader.metal"

The only operation we do in the Vertex Shader is to pass the normal vectors and vertices (in Model-View Space) to the fragment shader as shown below:

//6. Pass the vertices in MV space
vertexOut.verticesInMVSpace=verticesInMVSpace;

//7. Pass the normal vector in MV space
vertexOut.normalVectorInMVSpace=normalVectorInMVSpace;

In the fragment shader, the first thing we do is to compute the light ray vector as shown below:

//2. Compute the direction of the light ray between the light position and the vertices of the surface
float3 lightRayDirection=normalize(lightPosition.xyz-vertexOut.verticesInMVSpace.xyz);

We then compute the reflection vector between the light ray vector and the normal vectors as shown below:

//4. Compute reflection vector
float3 reflectionVector=reflect(-lightRayDirection,vertexOut.normalVectorInMVSpace);

The diffuse reflection is computed by first calculating the diffuse intensity, i.e., the dot product between the normal vector and the light-ray vector. The diffuse intensity is then multiplied by the light's diffuse color and the material's diffuse reflection factor. The snippet below shows this calculation:

//6. compute diffuse intensity by computing the dot product. We obtain the maximum value between 0 and the dot product
float diffuseIntensity=max(0.0,dot(vertexOut.normalVectorInMVSpace,lightRayDirection));

//7. compute Diffuse Color
float3 diffuseLight=diffuseIntensity*light.diffuseColor*material.diffuseReflection;

To compute the specular reflection, we take the dot product between the reflection vector and the view vector and raise it to the material's specular power. The result is then multiplied by the specular light color and the material's specular reflection factor.

//8. compute specular lighting
float3 specularLight=float3(0.0,0.0,0.0);

if(diffuseIntensity>0.0){

    specularLight=light.specularColor*material.specularReflection*pow(max(dot(reflectionVector,viewVector),0.0),material.specularReflectionPower);

}

The total lighting reflection color is then added together and assigned to the fragment:

//9. Add total lights
float4 totalLights=float4(ambientLight+diffuseLight+specularLight,1.0);

//10. assign light color to fragment
return float4(totalLights);

You can now build and run the project. Since a texture is also applied to the 3D model, you can mix the lighting color with the sampled texture color, as shown below:

//10. set color fragment to the mix value of the shading and light
return float4(mix(sampledColor,totalLights,0.5));

And that is it. Build the project and swipe your finger across the screen; you should see the lighting change as you move your finger.

ADS Phong Shading


For your convenience, the Gouraud Shading project can be found here. The Phong Shading project can be found here.

Note: As of today, the iOS simulator does not support Metal. You need to connect your iPad or iPhone to run the project.

Hope this helps.

Applying textures to 3D objects in Metal

In computer graphics, a texture represents image data that wraps a 3D model. For example, the image below shows a model with a texture.

In this article, I will teach you how to apply a texture to a 3D model.

Before I start, I recommend you read the prerequisite materials listed below:

Prerequisite:

How does a GPU apply a Texture?

As you recall, a rendering pipeline consists of a Vertex and a Fragment shader. The fragment shader is responsible for attaching a texture to a 3D model.

To attach a texture to a model, you need to send the following information to a Fragment Shader:

  • UV Coordinates
  • Raw image data
  • Sampler information

The UV coordinates are sent to the GPU as attributes. Fragment Shaders cannot receive attributes directly. Thus, the UV coordinates are sent to the Vertex Shader and then passed down to the Fragment Shader.

The raw image data contains pixel information such as the image RGB color information. The raw image data is packed into a Texture Object and sent directly to the Fragment Shader. The Texture Object also contains image properties such as its width, height, and pixel format.


Many times, a texture does not fit a 3D model. In these instances, the GPU will be required to stretch or shrink the texture. A Sampler Object contains Filtering and Addressing parameters, which inform the GPU how to wrap or stretch a texture. The Sampler Object is sent directly to the Fragment Shader.

Once this information is available, the Fragment Shader samples the supplied raw data and returns a color depending on the UV-coordinates and the Sampler information. The color is applied fragment by fragment onto the model.

UV coordinates

One of the first steps in adding a texture to a model is to create a UV map. A UV map is created when you unwrap a 3D model into its 2D equivalent. The image below shows the unwrapping of a model into 2D.

By unwrapping the 3D character into a 2D entity, a texture can correctly be applied to the character. The image below shows a texture and the 3D model with texture applied.

During the unwrapping process, the vertices of the 3D model are mapped into a two-dimensional coordinate system. This new coordinate system is known as the UV Coordinate System.

The UV Coordinate System is composed of two axes, known as the U and V axes. These axes are equivalent to the familiar X-Y axes; the only difference is that the U and V axes range from 0 to 1. The U and V components are also referred to as the S and T components.

The new vertices produced by unwrapping the model are called UV Coordinates. These coordinates are loaded into the GPU and serve as reference points as the GPU attaches the texture to the model.

Luckily, you do not need to compute the UV coordinates. These coordinates are supplied by modeling software tools, such as Blender.
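
As a purely illustrative example (these values are made up and do not come from the project's model), the UV data for a simple quad might look like this:

// Hypothetical UV coordinates for a quad, one (u, v) pair per vertex.
// Every component stays within the [0, 1] range.
static const float quadUV[]={
    0.0f, 0.0f,   // bottom-left corner of the texture
    1.0f, 0.0f,   // bottom-right corner
    1.0f, 1.0f,   // top-right corner
    0.0f, 1.0f    // top-left corner
};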

Decompressing Image Data

For the fragment shader to apply a texture, it needs the RGBA color information of an image. To get the raw data, you have to decode the image. There is an excellent library for decoding ".png" images, known as LodePNG, created by Lode Vandevenne. We will use this library to decode a PNG image.
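
For context, decoding with LodePNG's standard C++ interface looks roughly like the sketch below. The file name is the one used later in this article, and the surrounding function is only an assumption about how the project's "decodeImage" method might wrap the call.

#include "lodepng.h"
#include <iostream>
#include <vector>

std::vector<unsigned char> rawImageData; // decoded pixels, 4 bytes (RGBA) per pixel
unsigned imageWidth=0;
unsigned imageHeight=0;

void decodeImage(){

    //decode the png file into raw RGBA data
    unsigned error=lodepng::decode(rawImageData, imageWidth, imageHeight, "small_house_01.png");

    if(error){
        std::cout<<"decoder error "<<error<<": "<<lodepng_error_text(error)<<std::endl;
    }
}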

Texture Filtering & Wrapping (Addressing)

Textures don't always align with a 3D model. Sometimes, a texture needs to be stretched or shrunk to fit a model. Other times, texture coordinates may fall out of range.

You can inform the GPU what to do in these instances by setting Filtering and Wrapping Modes. The Filtering Mode lets you decide what to do when pixels don't have a 1 to 1 ratio with texels. The Wrapping Mode allows you to choose what to do with texture coordinates that fall out of range.

Texture Filtering

As I mentioned earlier, there is rarely a one-to-one ratio between texels in a texture map and pixels on the screen. For example, there are times when you need to stretch or shrink a texture as you apply it to the model. This breaks any initial correspondence between a texel and a pixel. Because of this, the color of a pixel needs to be approximated from the closest texels. This process is called Texture Filtering.

Note: Stretching a texture is called Magnification. Shrinking a texture is known as Minification.

The two most common filtering settings are:

  • Nearest Neighbor Filtering
  • Linear Filtering

Nearest Neighbor Filtering

Nearest Neighbor Filtering is the fastest and simplest filtering method. The UV coordinate is plotted against the texture, and whichever texel the coordinate falls in, that texel's color is used for the pixel.

Linear Filtering

Linear Filtering requires more work than Nearest Neighbor Filtering. Linear Filtering works by applying the weighted average of the texels surrounding the UV coordinates. In other words, it does a linear interpolation of the surrounding texels.

Texture Wrapping

Most texture coordinates fall between 0 and 1, but there are instances when coordinates may fall outside this range. If this occurs, the GPU will handle them according to the Texture Wrapping mode specified.

You can set the wrapping mode for each (s, t) component to Repeat, Clamp-to-Edge, Mirror-Repeat, and so on:

  • If the mode is set to Repeat, it will force the texture to repeat in the direction in which the UV-coordinates exceeded 1.0.
  • If it is set to Mirror-Repeat, it will force the texture to behave as a Repeat mode but with a mirror behavior; thus flipping the texture.
  • If it is set to Clamp-to-Edge, it will force the texture to be sampled along the last row or column with valid texels.

Metal has a different terminology for Texture Wrapping. In Metal, it is referred to as "Texture Addressing."

Applying Textures using Metal

To apply a texture using Metal, you need to create two objects: a Texture object and a Sampler State object.

The Texture object contains the image data, format, width, and height. The Sampler State object defines the filtering and addressing (wrapping) modes.

Texture Object

To apply a texture using Metal, you need to create an MTLTexture object. However, you do not create an MTLTexture object directly. Instead, you create it through a Texture Descriptor object, MTLTextureDescriptor.

The Texture Descriptor defines the texture's properties, such as the image width, height, and pixel format.

Once you have created an MTLTexture object through an MTLTextureDescriptor, you need to copy the image raw data into the MTLTexture object.

Sampler State Object

As mentioned earlier, the GPU needs to know what to do when a texture does not fit a 3D model properly. A Sampler State Object, MTLSamplerState, defines the filtering and addressing modes.

Just like a Texture object, you do not create a Sampler State object directly. Instead, you create it through a Sampler Descriptor object, MTLSamplerDescriptor.

Setting up the project

Let's apply a texture to a 3D model.

By now, you should know how to set up a Metal Application and how to render a 3D object. If you do not, please read the prerequisite articles mentioned above.

For your convenience, the project can be found here. Download the project so you can follow along.

Note: The project is found in the "addingTextures" git branch. The link should take you directly to that branch.

Let's start,

Declaring attribute, texture and Sampler objects

We will use an MTLBuffer to represent UV-coordinate attribute data as shown below:

// UV coordinate attribute
id<MTLBuffer> uvAttribute;

To represent a texture object and a sampler state object, we are going to use MTLTexture and MTLSamplerState, respectively. This is shown below:

// Texture object
id<MTLTexture> texture;

//Sampler State object
id<MTLSamplerState> samplerState;

Loading attribute data into an MTLBuffer

To load data into an MTLBuffer, Metal provides a method called "newBufferWithBytes." We are going to load the UV-coordinate data into the uvAttribute buffer. This is shown in the "viewDidLoad" method, line 6c.

//6c. Load UV-Coordinate attribute data into the buffer
uvAttribute=[mtlDevice newBufferWithBytes:smallHouseUV length:sizeof(smallHouseUV) options:MTLResourceCPUCacheModeDefaultCache];

Decoding Image data

The next step is to obtain the raw data of our image. The image we will use is named "small_house_01.png" and is shown below:

The image will be decoded by the LodePNG library in the "decodeImage" method. The library will provide a pointer to the raw data and will compute the width and height of the image. This information will be stored in the variables: "rawImageData", "imageWidth" and "imageHeight."

Creating a Texture Object

Our next step is to create a Texture Object through a Texture Descriptor Object as shown below:

//1. create the texture descriptor
MTLTextureDescriptor *textureDescriptor=[MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatRGBA8Unorm width:imageWidth height:imageHeight mipmapped:NO];

//2. create the texture object
texture=[mtlDevice newTextureWithDescriptor:textureDescriptor];

The descriptor sets the pixel format, width, and height for the texture.

After the creation of a texture object, we need to copy the image color data into the texture object. We do this by creating a 2D region with the dimensions of the image and then calling the "replaceRegion" method of the texture, as shown below:

//3. copy the raw image data into the texture object

MTLRegion region=MTLRegionMake2D(0, 0, imageWidth, imageHeight);

[texture replaceRegion:region mipmapLevel:0 withBytes:&rawImageData[0] bytesPerRow:4*imageWidth];

The "replaceRegion" method receives a pointer to the raw image data.

Creating a Sampler Object

Next, we create a Sampler State object through a Sampler Descriptor object. The Sampler Descriptor filtering parameters are set to use Linear Filtering. The addressing parameters are set to "Clamp To Edge." See snippet below:

//1. create a Sampler Descriptor
MTLSamplerDescriptor *samplerDescriptor=[[MTLSamplerDescriptor alloc] init];

//2a. Set the filtering and addressing settings
samplerDescriptor.minFilter=MTLSamplerMinMagFilterLinear;
samplerDescriptor.magFilter=MTLSamplerMinMagFilterLinear;

//2b. set the addressing mode for the S component
samplerDescriptor.sAddressMode=MTLSamplerAddressModeClampToEdge;

//2c. set the addressing mode for the T component
samplerDescriptor.tAddressMode=MTLSamplerAddressModeClampToEdge;

//3. Create the Sampler State object
samplerState=[mtlDevice newSamplerStateWithDescriptor:samplerDescriptor];

Linking Resources to Shaders

In the "renderPass" method, we are going to bind the UV-Coordinates, texture object and sampler state object to the shaders.

Lines 10i and 10j show the methods used to bind the texture and sampler objects to the fragment shader.

//10i. Set the fragment texture
[renderEncoder setFragmentTexture:texture atIndex:0];

//10j. set the fragment sampler
[renderEncoder setFragmentSamplerState:samplerState atIndex:0];

The fragment shader, shown below, receives the texture and sampler data by specifying the [[texture()]] and [[sampler()]] attributes in its arguments. In this instance, since we want to bind the texture to index 0, the argument is set to [[texture(0)]]. The same logic applies to the sampler.

fragment float4 fragmentShader(VertexOutput vertexOut [[stage_in]], texture2d<float> texture [[texture(0)]], sampler sam [[sampler(0)]]){}

Setting up the Shaders

Open up the file "MyShader.metal."

Recall that the Fragment Shader is responsible for attaching a texture to a 3D object. However, to do so, the fragment shader requires a texture, a sampler object, and the UV-Coordinates.

The UV-Coordinates are passed down to the fragment shader from the vertex shader as shown in line 7b.

//7b. Pass the uv coordinates to the fragment shader
vertexOut.uvcoords=uv[vid];

The fragment shader receives the texture at texture(0) and the sampler at sampler(0). Textures have a "sample" function which returns a color. The color returned by the "sample" function depends on the UV-coordinates, and the Sampler parameters provided. This is shown below:

//sample the texture color
float4 sampledColor=texture.sample(sam, vertexOut.uvcoords);

Finally, we set the fragment color to the sampled color.

//set color fragment to the sampled color
return float4(sampledColor);

You can now build and run the project. You should see a texture attached to the 3D model.

If you want, you can combine the Shading Color returned by the Vertex Shader with the texture Sampled Color. (Shading was implemented in this project.)

//set color fragment to the mix value of the shading and sampled color
return float4(mix(sampledColor,vertexOut.color,0.2));

And that is it. Build the project; you should see the 3D model with a texture. Swipe your fingers across the screen to see the shading effect.

Note: As of today, the iOS simulator does not support Metal. You need to connect your iPad or iPhone to run the project.

Hope this helps.

Simple 3D Shading using Metal

In the previous article, you learned how to render a 3D object. However, the object lacked depth perception. In computer graphics, as in art, shading depicts depth perception by varying the level of darkness.

Before I start, I recommend you read the prerequisite materials listed below:

Prerequisite:

Shading

As mentioned in the introduction, shading depicts depth perception by varying the level of darkness on an object. For example, the image below shows a 3D model without any depth perception:

Whereas, the image below shows the 3D model with shading applied:

The shading levels depend on the incident angle between the surface and the light rays. For example, an object will have a different shading when a light ray hits the object's surface at a 90-degree angle than when it hits the surface at a 5-degree angle. The image below depicts this behavior; the shading effect varies depending on the angle between the light source and the surface.

We can simulate this natural behavior by computing the Dot-Product between the light rays and a surface's Normal vector. When a light source vector s is parallel to and heading in the same direction as the normal vector n, the dot product is 1, meaning that the surface location is fully lit. Recall that the dot product of two unit vectors ranges between [-1.0, 1.0].

As the light source moves, the angle between vectors s and n changes, thus changing the dot product value. When this occurs, the shading levels also change.

Shading is implemented either in the Vertex or Fragment Shaders (Functions), and it requires the following information:

  • Normal Vectors
  • Normal Matrix
  • Light Position
  • Model-View Space

Normal vectors are vectors perpendicular to the surface. Luckily, we don't have to compute these vectors. This information is supplied by any 3D modeling software such as Blender.

The Normal Matrix is extracted from the Model-View transformation and is used to transform the normal vectors. The Normal Matrix must be inverted and transposed before it can be used in shading.

The light position is the location of the light source in the scene. The light position must be transformed into View Space before you can shade the object. If you do not, you may see weird shading.

We will implement shading in the Vertex Shader (Function). In our project, the Normal Vectors will be passed down to the vertex shader as attributes. The Normal Matrix and light position will be passed down as uniforms.

Setting Up the Application

Let's apply shading to a 3D model.

By now, you should know how to set up a Metal Application and how to render a 3D object. If you do not, please read the prerequisite articles mentioned above. In this article, I will focus on setting up the Shaders.

For your convenience, the project can be found here. Download the project so you can follow along.

Note: The project is found in the "shading3D" git branch. The link should take you directly to that branch.

Let's start,

By now, you should know how to pass attribute and uniform data to the GPU using Metal, so I will not go into much detail.

Also, the project is interactive. As you swipe your fingers across the screen, the x and y coordinates of the light source will change. Thus a new shading value will be computed every time you touch the screen.

Open up the "ViewController.mm" file.

The project contains a method called "updateTransformation" which is called before each render pass. In the "updateTransformation" method, we are going to compute the Normal Matrix space and transform the new light position into the Model-View space. This information is then passed down to the vertex shader.

Updating Normal Matrix Space

The Normal Matrix space is extracted from the Model-View transformation. Before each render pass, the Normal Matrix must be transposed and inverted as shown in the snippet below:

//get normal matrix
matrix_float3x3 normalMatrix={modelViewTransformation.columns[0].xyz,modelViewTransformation.columns[1].xyz,modelViewTransformation.columns[2].xyz};

normalMatrix=matrix_transpose(matrix_invert(normalMatrix));

//load the NormalMatrix into the MTLBuffer
normalMatrixUniform=[mtlDevice newBufferWithBytes:(void*)&normalMatrix length:sizeof(normalMatrix) options:MTLResourceOptionCPUCacheModeDefault];

Updating Light Position

The position of the light source is transformed into the View space before each render pass. See the snippet below:

//light position
vector_float4 lightPosition={xPosition*5.0,yPosition*5.0+10.0,-5.0,1.0};

// transform the light position
lightPosition=matrix_multiply(viewMatrix, lightPosition);

// load the light position into the MTLBuffer
mvLightUniform=[mtlDevice newBufferWithBytes:(void*)&lightPosition length:sizeof(lightPosition) options:MTLResourceCPUCacheModeDefaultCache];

Setting up the Shaders

Open up the "MyShader.metal" file.

We will implement shading in the Vertex Shader. The Vertex Shader (Function) receives the following information in its argument:

  • Normal Vectors (as attributes)
  • Model-View space
  • Normal Matrix Space
  • Light Position

To apply shading, we transform the Normal Vectors into Normal Matrix space as shown below:

//2. transform the normal vectors by the normal matrix space
float3 normalVectorInMVSpace=normalize(normalMatrix*normal[vid].xyz);

Since we need to compute the light-ray direction, we transform the model's vertices into the same space as the light position, i.e., the Model-View space. Once this operation is complete, we can compute the light-ray direction by subtracting the vertex position from the light position. See the snippet below:

//3. transform the vertices of the surface into the Model-View Space
float4 verticesInMVSpace=mvMatrix*vertices[vid];

//4. Compute the direction of the light ray between the light position and the vertices of the surface
float3 lightRayDirection=normalize(lightPosition.xyz-verticesInMVSpace.xyz);

With the Normal Vectors and the light ray direction, we can compute the intensity of the shading. Since the dot product ranges from [-1,1], we get the maximum value between 0 and the dot product as shown below:

//5. compute shading intensity by computing the dot product. We obtain the maximum value between 0 and the dot product

float shadingIntensity=max(0.0,dot(normalVectorInMVSpace,lightRayDirection));

Next, we multiply the shading intensity value by a light color. The shading color is then passed down to the fragment shader (function).

//6. Multiply the shading intensity by a light color

float4 shadingColor=shadingIntensity*lightColor;

//7. Pass the shading color to the fragment shader

vertexOut.color=shadingColor;

The fragment shader is quite simple. It applies the shading color to the 3D model.
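
As a rough sketch (the struct and function names here are assumptions and may differ from the project's actual code), such a pass-through fragment shader can be as short as this:

#include <metal_stdlib>
using namespace metal;

struct VertexOutput{
    float4 position [[position]];
    float4 color;   // shading color computed in the vertex shader
};

// Return the interpolated shading color for each fragment.
fragment float4 fragmentShader(VertexOutput vertexOut [[stage_in]]){
    return float4(vertexOut.color);
}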

And that is it. Build the project and swipe your finger across the screen. The 3D object will be shaded differently depending on the position of the light source.

Note: As of today, the iOS simulator does not support Metal. You need to connect your iPad or iPhone to run the project.

Hope this helps.