Introduction
Igneel.Graphics is an API for rendering 3D graphics on .NET. It provides an abstraction for interacting with the graphics hardware from C# code. The API was developed by combining the expressiveness of C# with the power of C++. Furthermore, it combines concepts taken from the OpenGL and Direct3D specifications with unique features such as C# interface and dynamic mapping to shader uniform variables. Although Igneel.Graphics shares common definitions with Direct3D10, it is not just a simple wrapper around it; it is more of a platform or middleware you can use from managed code, one that can be implemented with Direct3D11, OpenGL or OpenGL ES. Also, shader management in Igneel.Graphics is closer to the OpenGL specification than to Direct3D.
In Igneel.Graphics every programmable stage of the graphics pipeline is represented by an IShaderStage&lt;TShader&gt; interface. This interface can be used to create shaders and to set resources such as textures, buffers or sampler states.
This article covers a sample application hosted in a Windows Forms environment in order to show how to use the API components to render geometry, apply textures, load and compile shader code, and supply application values to shader uniform variables.
Background
Igneel.Graphics was developed as the low-level graphics API of Igneel Engine. It is an abstraction on .NET that supports the high-level rendering system of Igneel Engine. The API was designed to support several shader models up to SM5.0. Therefore the design of Igneel.Graphics allows the API to be implemented on different native platforms.
The current Igneel.Graphics implementation uses Shader Model 4.0 (SM4.0). This shader model was introduced with Direct3D10 (OpenGL 3.x exposes a roughly equivalent feature level). It completely redefined the previous shader model architecture, allowing the customization of new stages of the graphics pipeline. Later, Shader Model 5.0 added new stages such as the hull and domain shader stages, which interact with the non-customizable tessellation stage. The vertex shader stage is the only required one; the others are optional. For more information take a look at the DirectX SDK or the OpenGL documentation.
Using the code
First of all we create a Windows Forms application. In the Form constructor we acquire a reference to the GraphicDevice; after that we are ready to start loading our shaders and creating the GraphicBuffer objects that hold the model's geometry. Once the GraphicDevice is created we can also load the model's textures, represented by Texture2D.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Runtime.InteropServices;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using Igneel;
using Igneel.Graphics;
using Igneel.Windows;
using Igneel.Windows.Forms;

namespace BasicEarth
{
    public partial class Form1 : Form
    {
        [StructLayout(LayoutKind.Sequential)]
        public struct SphereVertex
        {
            [VertexElement(IASemantic.Position)]
            public Vector3 Position;

            [VertexElement(IASemantic.Normal)]
            public Vector3 Normal;

            [VertexElement(IASemantic.Tangent)]
            public Vector3 Tangent;

            [VertexElement(IASemantic.TextureCoordinate, 0)]
            public Vector2 TexCoord;

            public SphereVertex(Vector3 position = default(Vector3),
                                Vector3 normal = default(Vector3),
                                Vector3 tangent = default(Vector3),
                                Vector2 texCoord = default(Vector2))
            {
                Position = position;
                Normal = normal;
                Tangent = tangent;
                TexCoord = texCoord;
            }
        }

        [StructLayout(LayoutKind.Sequential)]
        public struct DirectionalLight
        {
            public Vector3 Direction;
            private float pad0;
            public Color3 Color;
            private float pad1;
        }

        public interface ProgramMapping
        {
            float Time { get; set; }
            Matrix World { get; set; }
            Matrix View { get; set; }
            Matrix Projection { get; set; }
            DirectionalLight DirectionalLight { get; set; }
            Sampler<Texture2D> DiffuseTexture { get; set; }
            Sampler<Texture2D> NightTexture { get; set; }
            Sampler<Texture2D> NormalMapTexture { get; set; }
            Sampler<Texture2D> ReflectionMask { get; set; }
            Vector3 CameraPosition { get; set; }
            float ReflectionRatio { get; set; }
            float SpecularRatio { get; set; }
            float SpecularStyleLerp { get; set; }
            int SpecularPower { get; set; }
        }

        private GraphicDevice device;
        private GraphicBuffer vertexBuffer;
        private GraphicBuffer indexBuffer;
        ProgramMapping input;
        ShaderProgram shaderProgram;
        Matrix world;
        Matrix view;
        Matrix projection;
        SamplerState diffuseSampler;
        Texture2D diffuseTexture;
        Texture2D nightTexture;
        Texture2D normalMapTexture;
        Texture2D reflectionMask;
        private Vector3 cameraPosition = new Vector3(0, 10, -15);

        public Form1()
        {
            SetStyle(ControlStyles.Opaque, true);
            InitializeComponent();
            Init();

            // Render a frame whenever the message queue is empty.
            Application.Idle += (sender, args) =>
            {
                NativeMessage message;
                while (!Native.PeekMessage(out message, IntPtr.Zero, 0, 0, 0))
                {
                    RenderFrame();
                }
            };
        }

        protected override void OnResize(EventArgs e)
        {
            base.OnResize(e);
            if (device != null)
            {
                device.ResizeBackBuffer(Width, Height);
                device.ViewPort = new ViewPort(0, 0, Width, Height);
                projection = Matrix.PerspectiveFovLh((float)Width / (float)Height, Igneel.Numerics.PIover6, 1, 1000);
            }
        }

        private void Init()
        {
            ShaderRepository.SetupD3D10_SM40("Shaders");

            GraphicDeviceFactory devFactory = new IgneelD3D10.GraphicManager10();
            device = devFactory.CreateDevice(new WindowContext(Handle)
            {
                BackBufferWidth = Width,
                BackBufferHeight = Height,
                BackBufferFormat = Format.R8G8B8A8_UNORM,
                DepthStencilFormat = Format.D24_UNORM_S8_UINT,
                FullScreen = false,
                Sampling = new Multisampling(1, 0),
                Presentation = PresentionInterval.Default
            });

            shaderProgram = device.CreateProgram<SphereVertex>("VertexShaderVS", "PixelShaderPS");
            input = shaderProgram.Map<ProgramMapping>();

            device.Blend = device.CreateBlendState(new BlendDesc(
                blendEnable: true,
                srcBlend: Blend.SourceAlpha,
                destBlend: Blend.InverseSourceAlpha));
            device.DepthTest = device.CreateDepthStencilState(new DepthStencilStateDesc(
                depthFunc: Comparison.Less));
            device.Rasterizer = device.CreateRasterizerState(new RasterizerDesc(
                cull: CullMode.Back,
                fill: FillMode.Solid));
            diffuseSampler = device.CreateSamplerState(new SamplerDesc(
                addressU: TextureAddressMode.Wrap,
                addressV: TextureAddressMode.Wrap,
                filter: Filter.MinPointMagMipLinear));

            diffuseTexture = device.CreateTexture2DFromFile("Textures/Earth_Diffuse.dds");
            nightTexture = device.CreateTexture2DFromFile("Textures/Earth_Night.dds");
            normalMapTexture = device.CreateTexture2DFromFile("Textures/Earth_NormalMap.dds");
            reflectionMask = device.CreateTexture2DFromFile("Textures/Earth_ReflectionMask.dds");

            world = Matrix.Identity;
            view = Matrix.LookAt(cameraPosition, new Vector3(0, 0, 1), Vector3.UnitY);
            projection = Matrix.PerspectiveFovLh((float)Width / (float)Height, Igneel.Numerics.PIover6, 1, 1000);

            CreateSphere();
        }

        // ...
    }
}
Listing 1: Initialization.
In the previous code we hook up the Application.Idle event, so we can render a frame whenever there are no pending messages to process. The line SetStyle(ControlStyles.Opaque, true) avoids flickering when Windows tries to repaint the background. In addition, some structures were defined: the geometry vertex definition SphereVertex, and the light definition DirectionalLight used in the pixel shader to light the scene. An interface, ProgramMapping, was also defined; this interface is used to create a mapping between the application's code and the shader's uniforms. A shader uniform is a variable that receives its value from the application's code; uniforms are generally defined inside a constant buffer.
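For illustration, on the HLSL side such uniforms live in constant buffers and are matched to the mapping interface's properties by name. The layout below is only a minimal sketch (the buffer name cbPerFrame is hypothetical; the actual shaders used by the sample appear later in this article):

```hlsl
// Hypothetical constant buffer: each variable is matched by name
// to a property of the ProgramMapping interface.
cbuffer cbPerFrame
{
    float4x4 World;        // set via input.World
    float4x4 View;         // set via input.View
    float4x4 Projection;   // set via input.Projection
    float    Time;         // set via input.Time
};
```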
In the Init method, the line ShaderRepository.SetupD3D10_SM40("Shaders") tells the API the location of the shader files and the default shader compilation settings to use with Shader Model 4.0. Next a GraphicDeviceFactory is created; in this case an implementation based on Direct3D10 is used, so a GraphicManager10 instance is required. This is the only part of the code that is tied to a specific native implementation of the API. Then, using the factory, we can create the graphic device, passing as argument a WindowContext that contains presentation settings such as the width and height of the window as well as the back buffer format and multisampling.
After the device is created we can load and compile the shaders with just one line of code.
shaderProgram = device.CreateProgram<SphereVertex>("VertexShaderVS", "PixelShaderPS");
The previous line creates a shader program containing a vertex shader, whose input layout is specified by the SphereVertex struct, and a pixel shader. To simplify the code for creating shader objects, the API uses a set of conventions to identify the type of shader to create based on the shader's filename. It relies on filename suffixes: VS (vertex shader), PS (pixel shader), GS (geometry shader), HS (hull shader), DS (domain shader) and CS (compute shader).
Another unique feature of Igneel.Graphics, called shader interface mapping, is shown in the following statement:
input = shaderProgram.Map<ProgramMapping>();
After retrieving the interface instance, we can use it to set the shader uniform variables, such as transformation matrices, lighting data and textures, just by setting C# properties, with IntelliSense support as a bonus.
Textures are very important in computer graphics applications, so the graphic device supports several methods for loading textures from files or streams, or for just reserving GPU memory to fill later. You can also load different types of textures such as Texture1D, Texture2D or Texture3D. Cube textures are treated as an array of six Texture2D. The supported file formats are .DDS, .JPG, .PNG, .TGA and .BMP.
In computer graphics, matrices are used to transform vectors from one space to another. During the rendering process we therefore need matrices in the vertex shader to transform positions from the local mesh space to projection space, also called homogeneous clip space. The GPU then takes care of transforming these projection coordinates to screen coordinates through the perspective divide (dividing by the w component) and the viewport transformation.
world = Matrix.Identity;
view = Matrix.LookAt(cameraPosition, new Vector3(0, 0, 1), Vector3.UnitY);
projection = Matrix.PerspectiveFovLh((float)Width / (float)Height, Igneel.Numerics.PIover6, 1, 1000);
Listing 4: Create transforms.
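In symbols, and using the row-vector convention that matches the shader's mul(v, M) calls, the transform chain sketched above is:

$$p_{\text{clip}} = p_{\text{local}} \, W \, V \, P, \qquad p_{\text{ndc}} = \left( \frac{x_{\text{clip}}}{w_{\text{clip}}},\ \frac{y_{\text{clip}}}{w_{\text{clip}}},\ \frac{z_{\text{clip}}}{w_{\text{clip}}} \right)$$

where $W$, $V$ and $P$ are the world, view and projection matrices. For a perspective projection, $w_{\text{clip}}$ carries the view-space depth, which is what makes the divide produce perspective foreshortening.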
The code for generating the sphere mesh is shown here:
private void CreateSphere()
{
    var stacks = 128;
    var slices = 128;
    var radius = 10;

    var vertices = new SphereVertex[(stacks - 1) * (slices + 1) + 2];
    var indices = new ushort[(stacks - 2) * slices * 6 + slices * 6];

    float phiStep = Numerics.PI / stacks;
    float thetaStep = Numerics.TwoPI / slices;
    int numRings = stacks - 1;
    int k = 0;
    var v = new SphereVertex();

    // Generate the ring vertices between the two poles.
    for (int i = 1; i <= numRings; ++i)
    {
        float phi = i * phiStep;
        for (int j = 0; j <= slices; ++j)
        {
            float theta = j * thetaStep;
            v.Position = Vector3.SphericalToCartesian(phi, theta, radius);
            v.Normal = Vector3.Normalize(v.Position);
            v.TexCoord = new Vector2(theta / (-2.0f * (float)Math.PI), phi / (float)Math.PI);
            v.Tangent = new Vector3(-radius * (float)Math.Sin(phi) * (float)Math.Sin(theta), 0, radius * (float)Math.Sin(phi) * (float)Math.Cos(theta));
            vertices[k++] = v;
        }
    }

    // The south and north pole vertices.
    vertices[vertices.Length - 2] = new SphereVertex(new Vector3(0.0f, -radius, 0.0f), new Vector3(0.0f, -1.0f, 0.0f), Vector3.Zero, new Vector2(0.0f, 1.0f));
    vertices[vertices.Length - 1] = new SphereVertex(new Vector3(0.0f, radius, 0.0f), new Vector3(0.0f, 1.0f, 0.0f), Vector3.Zero, new Vector2(0.0f, 0.0f));

    int northPoleIndex = vertices.Length - 1;
    int southPoleIndex = vertices.Length - 2;
    int numRingVertices = slices + 1;
    k = 0;

    // Two triangles per quad between consecutive rings.
    for (int i = 0; i < stacks - 2; ++i)
    {
        for (int j = 0; j < slices; ++j)
        {
            indices[k++] = (ushort)((i + 1) * numRingVertices + j);
            indices[k++] = (ushort)(i * numRingVertices + j + 1);
            indices[k++] = (ushort)(i * numRingVertices + j);
            indices[k++] = (ushort)((i + 1) * numRingVertices + j + 1);
            indices[k++] = (ushort)(i * numRingVertices + j + 1);
            indices[k++] = (ushort)((i + 1) * numRingVertices + j);
        }
    }

    // Triangle fans connecting the first and last rings to the poles.
    for (int i = 0; i < slices; ++i)
    {
        indices[k++] = (ushort)i;
        indices[k++] = (ushort)(i + 1);
        indices[k++] = (ushort)northPoleIndex;
    }
    int baseIndex = (numRings - 1) * numRingVertices;
    for (int i = 0; i < slices; ++i)
    {
        indices[k++] = (ushort)(baseIndex + i + 1);
        indices[k++] = (ushort)(baseIndex + i);
        indices[k++] = (ushort)southPoleIndex;
    }

    vertexBuffer = device.CreateVertexBuffer(data: vertices);
    indexBuffer = device.CreateIndexBuffer(data: indices);
}
Listing 5: Create the vertex and index buffers.
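The vertex positions and tangents in the listing follow the usual spherical parameterization. Assuming Vector3.SphericalToCartesian uses a Y-up convention (which the pole and tangent code is consistent with), each ring vertex is:

$$p(\phi, \theta) = r \, \big(\sin\phi \cos\theta,\ \cos\phi,\ \sin\phi \sin\theta\big)$$

and the tangent stored in the code is the derivative of the position along the slice direction, $\partial p / \partial \theta = r\,(-\sin\phi \sin\theta,\ 0,\ \sin\phi \cos\theta)$, which matches the expression assigned to v.Tangent.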
In the CreateSphere method, the last two statements create the vertex buffer, which stores the vertices in GPU memory, and the index buffer, which contains the indices that define the mesh triangles.
vertexBuffer = device.CreateVertexBuffer(data: vertices);
indexBuffer = device.CreateIndexBuffer(data: indices);
This reserves memory on the graphic device for holding the arrays containing the vertex data and indices. The memory is reserved with default settings; the method also allows passing several parameters to control the resource memory behavior or the CPU access type, such as reading or writing.
The code for rendering the scene is located in the RenderFrame method.
private void RenderFrame()
{
    device.SetRenderTarget(device.BackBuffer, device.BackDepthBuffer);
    device.ViewPort = new ViewPort(0, 0, Width, Height);
    device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, new Color4(0, 0, 0, 0), 1, 0);

    device.PrimitiveTopology = IAPrimitive.TriangleList;
    device.SetVertexBuffer(0, vertexBuffer, 0);
    device.SetIndexBuffer(indexBuffer);

    input.World = Matrix.RotationY(-(float)Environment.TickCount / 5000.0f);
    input.View = view;
    input.Projection = projection;
    input.CameraPosition = cameraPosition;
    input.ReflectionRatio = 0.05f;
    input.SpecularRatio = 0.15f;
    input.SpecularStyleLerp = 0.15f;
    input.SpecularPower = 8;
    input.DirectionalLight = new DirectionalLight
    {
        Color = Color3.White,
        Direction = new Euler(45, 0, 0).ToDirection()
    };
    input.DiffuseTexture = diffuseTexture.ToSampler(diffuseSampler);
    input.NightTexture = nightTexture;
    input.NormalMapTexture = normalMapTexture;
    input.ReflectionMask = reflectionMask;

    device.Program = shaderProgram;
    device.DrawIndexed((int)indexBuffer.SizeInBytes / indexBuffer.Stride, 0, 0);
    device.Present();
}
Listing 6: Render frame.
After setting the render and depth-stencil buffers, the ViewPort is set and the rendering buffers are cleared. The primitive type is specified as a list of triangles, and the GraphicBuffer objects storing vertices and indices are bound to the pipeline. Then shader interface mapping is used to send the shader variable values, such as matrices and lighting info. Shader interface mapping can also be used to bind textures and sampler states, as in the statements:
input.DiffuseTexture = diffuseTexture.ToSampler(diffuseSampler);
.....
input.NightTexture = nightTexture;
Textures and sampler states can also be set through GPU registers using the IShaderStage interface, as in the following statements, where a texture and a sampler state are bound to texture register 0 and sampler register 0.
device.GetShaderStage<PixelShader>().SetResource(0, diffuseTexture);
device.GetShaderStage<PixelShader>().SetSampler(0, diffuseSampler);
An IShaderStage&lt;TShader&gt; can be obtained by calling device.GetShaderStage&lt;TShader&gt;(), where TShader is a type that inherits from Shader, such as VertexShader, PixelShader, GeometryShader, HullShader, DomainShader or ComputeShader. If a particular GraphicDevice implementation does not support the stage for a given shader type, it must return null when device.GetShaderStage&lt;[Shader Type]&gt;() is called.
On the other hand, another unique feature called dynamic shader mapping can be used instead of interface shader mapping, as in the following line of code:
shaderProgram.Input.World = Matrix.RotationY(-(float)Environment.TickCount/5000.0f);
The type of shaderProgram.Input is dynamic, so it is not necessary to declare an interface for mapping the shader constants. The drawback of dynamic shader mapping is that it can only map primitive types such as vectors, matrices or textures; it cannot map user-defined types like the DirectionalLight struct. It also cannot bind a SamplerState, so the SamplerState must be bound through an IShaderStage.
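Putting these observations together, a dynamic-mapping version of the per-frame update might look like the sketch below. It reuses only calls already shown in this article; treat it as an illustration rather than a verbatim excerpt from the sample:

```csharp
// Dynamic mapping: no interface declaration is needed, but only primitive
// types (scalars, vectors, matrices, textures) can be assigned this way.
shaderProgram.Input.World = Matrix.RotationY(-(float)Environment.TickCount / 5000.0f);

// Sampler states cannot be mapped dynamically; bind the texture and its
// sampler through the pixel shader stage instead.
device.GetShaderStage<PixelShader>().SetResource(0, diffuseTexture);
device.GetShaderStage<PixelShader>().SetSampler(0, diffuseSampler);
```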
The Vertex Shader
struct VSInput
{
    float4 Position  : POSITION;
    float3 Normal    : NORMAL;
    float3 Tangent   : TANGENT;
    float2 TexCoords : TEXCOORD0;
};

struct VSOutput
{
    float4 PositionVS : SV_POSITION;
    float2 TexCoords  : TEXCOORD0;
    float3 Normal     : TEXCOORD1;
    float3 Tangent    : TEXCOORD2;
    float3 Binormal   : TEXCOORD3;
    float3 Position   : TEXCOORD4;
};

cbuffer camera
{
    float4x4 View;
    float4x4 Projection;
};

cbuffer perObject
{
    float4x4 World;
};

VSOutput main(VSInput input)
{
    VSOutput output;

    // Transform the position to world space, then to projection space.
    float4 worldPosition = mul(input.Position, World);
    output.PositionVS = mul(worldPosition, mul(View, Projection));

    // Transform the tangent frame to world space (assumes no non-uniform scaling).
    output.Normal = mul(input.Normal, (float3x3)World);
    output.Tangent = mul(input.Tangent, (float3x3)World);
    output.Binormal = cross(output.Normal, output.Tangent);

    output.Position = worldPosition.xyz;
    output.TexCoords = input.TexCoords;
    return output;
}
The Pixel Shader
struct Light
{
    float3 Direction;
    float3 Color;
};

struct VSOutput
{
    float4 PositionVS : SV_POSITION;
    float2 TexCoords  : TEXCOORD0;
    float3 Normal     : TEXCOORD1;
    float3 Tangent    : TEXCOORD2;
    float3 Binormal   : TEXCOORD3;
    float3 Position   : TEXCOORD4;
};

cbuffer cbParams
{
    float ReflectionRatio;
    float SpecularRatio;
    float SpecularStyleLerp;
    int   SpecularPower;
};

cbuffer cbLight
{
    Light    DirectionalLight;
    float4x4 View;
    float3   CameraPosition;
};

Texture2D DiffuseTexture;
Texture2D NightTexture;
Texture2D NormalMapTexture;
Texture2D ReflectionMask;

SamplerState sDiffuseTexture;

float4 main(VSOutput input) : SV_TARGET
{
    float3 EyeVector = normalize(input.Position - CameraPosition);

    // Fetch the tangent-space normal and transform it to world space.
    float3 Normal = NormalMapTexture.Sample(sDiffuseTexture, input.TexCoords).rgb;
    Normal = (Normal * 2) - 1;
    float3x3 tangentFrame = { input.Tangent, input.Binormal, input.Normal };
    Normal = normalize(mul(Normal, tangentFrame));

    // Diffuse (Lambert) lighting modulated by the diffuse texture.
    float light = saturate(dot(Normal, -DirectionalLight.Direction));
    float3 color = DirectionalLight.Color * light;
    float4 diffuse = DiffuseTexture.Sample(sDiffuseTexture, input.TexCoords);
    color *= diffuse.rgb;

    // Blend in the night-side texture where the surface faces away from the light.
    float sunlitRatio = saturate(2 * light);
    float4 nightColor = NightTexture.Sample(sDiffuseTexture, input.TexCoords);
    color = lerp(nightColor.xyz, color, float3(sunlitRatio, sunlitRatio, sunlitRatio));

    // Blinn-Phong specular term masked by the reflection map.
    float reflectionMask = ReflectionMask.Sample(sDiffuseTexture, input.TexCoords).r;
    float3 vHalf = normalize(-EyeVector + -DirectionalLight.Direction);
    float PhongSpecular = saturate(dot(vHalf, Normal));
    color += DirectionalLight.Color * (pow(PhongSpecular, SpecularPower) * SpecularRatio * reflectionMask);

    // Simple rim-based atmosphere glow.
    float atmosphereRatio = 1 - saturate(dot(-EyeVector, input.Normal));
    color += 0.30f * float3(.3, .5, 1) * pow(atmosphereRatio, 2);

    return float4(color, 1.0);
}
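The lighting model in this pixel shader can be summarized as follows, with $N$ the world-space normal fetched from the normal map, $N_g$ the interpolated geometric normal, $L$ the direction toward the light ($-$DirectionalLight.Direction), $V$ the direction toward the camera ($-$EyeVector), and $H = \operatorname{normalize}(L + V)$ the Blinn-Phong half vector:

$$
\begin{aligned}
d &= \operatorname{saturate}(N \cdot L) \\
c_{\text{day}} &= d \, C_{\text{light}} \odot t_{\text{diffuse}} \\
c &= \operatorname{lerp}\big(t_{\text{night}},\ c_{\text{day}},\ \operatorname{saturate}(2d)\big) \\
c &\mathrel{+}= C_{\text{light}} \, (N \cdot H)^{p} \, k_{s} \, m \\
c &\mathrel{+}= 0.3 \, (0.3, 0.5, 1) \, \big(1 - \operatorname{saturate}(N_g \cdot V)\big)^{2}
\end{aligned}
$$

where $t_{\text{diffuse}}$ and $t_{\text{night}}$ are the day and night texture samples, $p$ is SpecularPower, $k_s$ is SpecularRatio, and $m$ is the reflection-mask sample.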
In both shaders, several constant buffers are declared. The interface and dynamic mapping mechanisms take care of managing those constant buffers efficiently: only buffers whose variables are actually accessed are used, and each one is opened and closed only once per rendered frame.
Application Screens
Points of Interest
In Igneel.Graphics it is interesting to note the simplicity of creating devices and resources such as shaders, buffers, textures and pipeline states. Also interesting are its particular approach to shader management and features like interface mapping and dynamic mapping. As a remark, you can write your base rendering code unaware of the native implementation of the API and test whether an IShaderStage is implemented for a given Shader type. I enjoyed developing this API a lot, and I learned a great deal about writing high-performance code and integrating managed .NET/MSIL code with native unmanaged code.
In order to run the sample you must first install the DirectX redistributable that comes with the SDK, which you can download at https://www.microsoft.com/en-us/download/details.aspx?id=6812. Note that after installing the SDK you must locate the SDK installation folder (by default in Program Files) and run the redistributable installer at [SDK Folder]/Redist/DXSETUP.exe.
Furthermore, Igneel Engine is now available on GitHub, so contributions are welcome.
About the Author
My name is Ansel Castro Cabrera. I have a bachelor's degree in Computer Science from the University of Havana, where I specialized in computer graphics, compilers and .NET development. I have also worked in other areas of computer science such as machine learning, computer vision, and web and Android programming. In addition, I have developed neural network and convolutional neural network models for pattern recognition on images, and I have worked with OpenCV on feature tracking and extraction. On the web development side I have worked with Django, PHP, ASP.NET WebForms, ASP.NET MVC and JavaScript.