Introduction
Hardware accelerated effects for WPF were first introduced in .NET 3.5 SP1. Very complex effects and graphically rich applications can be created with little impact on performance, thanks to the huge computing power of modern graphics cards. However, if you want to take advantage of this feature, you first need to learn a thing or two. The purpose of this article is to provide all the information you need to get started with Effects.
What is an Effect?
Effects are an easy-to-use API to create (surprisingly) graphical effects. For example, if you want a button to cast a shadow, there are several ways to accomplish the task, but the simplest and most efficient method is to assign the "Effect" property of the button, either from code or in XAML:
MyButton.Effect = new DropShadowEffect() { ... };
<Button Name="MyButton" ... >
    <Button.Effect>
        <DropShadowEffect ... />
    </Button.Effect>
</Button>
As you can see, effects are so easy to use that you don't need any further explanation. The fun starts when you decide to write your own effects...
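For instance, a concrete drop shadow could be configured like this (all of the properties below are real DropShadowEffect properties, but the specific values are just illustrative, not recommendations):

```csharp
MyButton.Effect = new DropShadowEffect()
{
    Color = Colors.Black,   // color of the shadow
    Direction = 315,        // angle in degrees (315 = lower right)
    ShadowDepth = 4,        // distance of the shadow from the control
    BlurRadius = 8,         // softness of the shadow edge
    Opacity = 0.6           // transparency of the shadow itself
};
```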
BitmapEffect, Effect, ShaderEffect... What?
First of all, there are several .NET classes that share the "Effect" suffix, and to make it even more confusing, they are all in the System.Windows.Media.Effects namespace. However, not all of those classes are useful when it comes to hardware acceleration; in fact, some of them are completely useless.
BitmapEffect
The BitmapEffect class and its subclasses were originally supposed to provide the functionality of effects. However, this API doesn't use any hardware acceleration and it has been marked obsolete in .NET 4.0. It's strongly recommended to avoid using the BitmapEffect class or any of its subclasses!
Effect and its Derived Classes
As stated above, you apply an effect to a control by assigning the control's Effect property (the property is actually inherited from UIElement, just in case you needed to know). Now the question is... What needs to be assigned to the Effect property? The answer is as simple as it can be - it's an object of type Effect.
The Effect class is the base class of all hardware accelerated effects. It has three subclasses: BlurEffect, DropShadowEffect and ShaderEffect. The first two are ready-to-use effects included directly in the .NET library. The ShaderEffect class is the base class of all custom effects.
Why BlurEffect and DropShadowEffect?
Why are there only 2 fully implemented effects in the library and why don't these 2 effects derive from ShaderEffect? I can't answer the first question, but I can tell you what makes BlurEffect and DropShadowEffect so special.
Both DropShadowEffect and BlurEffect use complex algorithms that require multiple passes, but multi-pass effects are not normally possible with the public Effect API. However, the guys at Microsoft probably did a few dirty hacks deep inside the unmanaged core of the WPF rendering engine and created these two effects.
Note: It is possible to create a single-pass blurring algorithm, but such an algorithm is terribly slow compared to multi-pass blurring. Anyway, there are probably more reasons why these 2 effects are implemented in a special way.
How Does It Work?
If you want to take advantage of hardware acceleration, you first need to know how the whole thing works.
A Few Words about the GPU Architecture
The architecture of Graphics Processing Units (GPUs) is different from the architecture of CPUs. GPUs are not general-purpose; they are designed to perform simple operations on large data sets. The operations are executed with a high degree of parallelism, which results in great performance.
Modern GPUs are becoming more and more programmable and the range of tasks that can be executed on GPUs is growing (although there are several restrictions described below). Small programs executed on the GPU are called shaders. There are several kinds of shaders - vertex shaders and geometry shaders are used when rendering 3D objects (not used by WPF Effects), and pixel shaders are used to perform simple operations on pixels.
There are even attempts to use the sheer computing power of GPUs for general purpose programming... Unfortunately, there are several restrictions, such as a limited number of instructions in one program, no ability to work with advanced data structures, limited memory management abilities, etc. Amazing speed comes with several trade-offs...
Pixel Shaders
A pixel shader is a short program that defines a simple operation executed on each pixel of the output image. That's pretty much all you need to create all kinds of interesting pixel-based effects.
Before You Write your First Effect...
WPF objects, including Effects, are rendered using the DirectX engine. DirectX shaders are written in High Level Shader Language (HLSL) and then compiled into bytecode. Therefore, HLSL is one of the things you need to learn to write your own Effects (more about HLSL further in this article).
Some people will tell you that you need to download and install the entire DirectX SDK in order to compile HLSL code. Fortunately, this is not true. All you need is to download a Visual Studio add-in written by Greg Schechter and Gerhard Schneider. It is called Shader Effects BuildTask and you can get it from the CodePlex WPF site. This add-in reportedly works with Visual Studio 2008 and 2010.
Once the add-in is installed, a new project template called "WPF Shader Effect Library" will appear in Visual Studio. The best thing about this add-in is that you can write HLSL code directly in Visual Studio (without IntelliSense support and syntax highlighting, though) and all your shaders will be compiled automatically when you build your project.
The First Simple Effect
Let's get started! If you have already downloaded and installed the Shader Effects BuildTask mentioned above, you can open the project attached to this article.
Each effect has 2 parts: a pixel shader written in the HLSL language (a file with the .fx extension) and a class derived from ShaderEffect (a .cs file), which serves as a managed wrapper of the pixel shader. When you build your project, all .fx files are compiled and the resulting pixel shaders (with the extension .ps) are included in the assembly.
If you select an .fx file and open the "Properties" window, you will see that the "Build Task" of this file is set to "Effect" (see the image on the right). This ensures that the effect will be properly compiled. Important: When you add a new effect to your project, its Build Task property is not set automatically and you have to change it manually!
The effect I am going to describe is simple - it is called "Transparency" and it has one parameter, "Opacity". It makes a control semi-transparent depending on this parameter. Please ignore the fact that such an effect is completely useless...
Creating the Pixel Shader
Let's start with the difficult part: the pixel shader. "Transparency.fx" contains the following code:
sampler2D implicitInputSampler : register(S0);
float opacity : register(C0);

float4 main(float2 uv : TEXCOORD) : COLOR {
    float4 color = tex2D(implicitInputSampler, uv);
    return color * opacity;
}
The first two lines contain pixel shader constants. The first constant is of type sampler2D and it refers to the image this effect is applied to. Yes, I know the effect is applied to a control (not necessarily an Image), but the word "image" refers to the visual representation of the target control...
The other constant is our custom input parameter (called "opacity") and it is of type float. Although the value of this parameter can change in time, it is considered to be a constant in the scope of the pixel shader. As I have mentioned above, the pixel shader is executed once for each pixel and all pixels in one frame need the same input parameters - that's why "opacity" is considered to be a constant.
The register keyword is used to associate each of the constants with the registers where input values are stored. There are several "image registers" that contain the input image data and these registers are named S0, S1, S2, etc. (most pixel shaders only use one such register). There are also "floating point registers" named C0, C1, C2, etc., and these registers store the values of other input parameters.
The rest of the shader is the algorithm itself. There is a method called main, the entry point of our shader program. This method accepts one parameter of type float2 and returns float4 (a method with this name and signature must be in every pixel shader). The return type of this method is a vector of 4 floating-point values that represents an RGBA color. The method argument is a 2-dimensional vector and you can think of it as "the x and y coordinates of the current pixel". In fact, the values are not pixel-based: the coordinates of the upper-left corner are (0, 0) and the lower-right corner is represented by (1, 1).
The body of the method is quite simple - the source color is obtained by a call to the tex2D method, multiplied by opacity and returned. When the "*" operator is used to multiply a scalar value with a vector, all components of the vector are multiplied by the scalar. You might think that only the alpha channel should be multiplied to get the correct result. However, DirectX shaders work with a premultiplied alpha channel, which means that the values of the RGB channels are always multiplied by the alpha channel.
The Effect Class
Let's take a look at the other part of the "Transparency" effect. "Transparency.cs" contains the following class:
public class Transparency : ShaderEffect {

    private static PixelShader _pixelShader = new PixelShader();

    static Transparency() {
        _pixelShader.UriSource = Global.MakePackUri("Transparency.ps");
    }

    public Transparency() {
        this.PixelShader = _pixelShader;
        UpdateShaderValue(InputProperty);
        UpdateShaderValue(OpacityProperty);
    }

    public Brush Input {
        get { return (Brush)GetValue(InputProperty); }
        set { SetValue(InputProperty, value); }
    }

    public static readonly DependencyProperty InputProperty =
        ShaderEffect.RegisterPixelShaderSamplerProperty("Input", typeof(Transparency), 0);

    public double Opacity {
        get { return (double)GetValue(OpacityProperty); }
        set { SetValue(OpacityProperty, value); }
    }

    public static readonly DependencyProperty OpacityProperty =
        DependencyProperty.Register("Opacity", typeof(double), typeof(Transparency),
            new UIPropertyMetadata(1.0d, PixelShaderConstantCallback(0)));
}
As you can see, it's pretty much an ordinary class with a few dependency properties. However, there are several important things I have to point out.
The pixel shader is stored in a private static field _pixelShader. This field is static, because one instance of the compiled shader code is enough for the whole class. There is a static constructor that initializes the UriSource property of _pixelShader - it basically lets _pixelShader know where to look for the compiled shader bytecode. The Global.MakePackUri() method is a helper method that converts the file name to a full pack URI, which looks approximately like this: "pack://application:,,,/[assemblyname];component/Transparency.ps". (I'm sure you understand why this needs a helper method).
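The helper itself is not part of the framework; it typically looks something like the following sketch (assuming the compiled .ps files sit in the project root - adjust the relative path otherwise):

```csharp
internal static class Global
{
    // Builds a pack URI pointing to a resource compiled into this assembly.
    public static Uri MakePackUri(string relativeFile)
    {
        string uriString = "pack://application:,,,/" + AssemblyShortName
                           + ";component/" + relativeFile;
        return new Uri(uriString);
    }

    private static string AssemblyShortName
    {
        get
        {
            // "MyAssembly, Version=..., Culture=..." -> "MyAssembly"
            return typeof(Global).Assembly.ToString().Split(',')[0];
        }
    }
}
```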
There has to be a property of type Brush called "Input". This property contains the input image and it is usually not set directly - it is set automatically when our effect is applied to a control. The corresponding dependency property is not initialized by calling DependencyProperty.Register(); the ShaderEffect.RegisterPixelShaderSamplerProperty() method must be used instead. Note the last parameter of this method: it is an integer and it corresponds to the S0 pixel shader register.
The other property is our custom parameter called "Opacity". It is declared like any other dependency property; the only difference is the value of PropertyChangedCallback in the UIPropertyMetadata constructor. The value must be PixelShaderConstantCallback() and the integer parameter must be the number of the corresponding floating point register (note that the value "0" corresponds to the register name C0).
The last important thing that needs explanation is the constructor of our class. It sets the PixelShader property (which is required) and forces the shader to update all input values.
A Few Notes about HLSL
As you can see, the HLSL language is a simple C-based language. Common C operators (+, -, *, / etc.) can be used, as well as many math functions. Code flow control statements (such as if, while or for) can be used as well; a complete list can be found here.
The most common types used in WPF pixel shaders are float and float-based vectors (float2, float3 and float4). A detailed description of HLSL vectors (and how to work with them) can be found here. There is no int or bool type in the current version of WPF pixel shaders (see the table below).
Accepted Parameter Types
The following table shows all allowed input types (as defined in the ShaderEffect
class) and the corresponding HLSL types (as defined in the pixel shader). Only floating-point values are currently allowed.
.NET type                             | HLSL type
--------------------------------------|--------------
System.Boolean (C# keyword bool)      | Not Available
System.Int32 (C# keyword int)         | Not Available
System.Double (C# keyword double)     | float
System.Single (C# keyword float)      | float
System.Windows.Size                   | float2
System.Windows.Point                  | float2
System.Windows.Vector                 | float2
System.Windows.Media.Media3D.Point3D  | float3
System.Windows.Media.Media3D.Vector3D | float3
System.Windows.Media.Media3D.Point4D  | float4
System.Windows.Media.Color            | float4
Taking It a Step Further
If you have read all the above, you understand the basics of effects. Here are a few important things you might want to know before you start creating your own effects:
Animations
Effects are DependencyProperty-based and they can be animated just like any other WPF element.
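As a sketch (assuming local is mapped to the namespace containing the Transparency effect from this article), a button could fade when clicked like this:

```xml
<Button Content="Click to fade">
  <Button.Effect>
    <local:Transparency x:Name="Fade" Opacity="1.0" />
  </Button.Effect>
  <Button.Triggers>
    <EventTrigger RoutedEvent="Button.Click">
      <BeginStoryboard>
        <Storyboard>
          <!-- Animates the effect's Opacity dependency property. -->
          <DoubleAnimation Storyboard.TargetName="Fade"
                           Storyboard.TargetProperty="Opacity"
                           To="0.2" Duration="0:0:1" AutoReverse="True" />
        </Storyboard>
      </BeginStoryboard>
    </EventTrigger>
  </Button.Triggers>
</Button>
```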
Displacements
Effects can do much more than change the color of a pixel. See the following example:
float4 main(float2 uv : TEXCOORD) : COLOR {
    uv = uv / 2;
    float4 color = tex2D(implicitInputSampler, uv);
    return color;
}
The uv value is divided by 2 before getting the source color, which will stretch the top-left quarter of the target control to the whole area of the control. Much more complicated transformations can be used to create interesting effects.
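Another tiny displacement sketch (again assuming the same implicitInputSampler constant declared at the top of the .fx file) mirrors the control horizontally by sampling from the opposite side of the image:

```hlsl
sampler2D implicitInputSampler : register(S0);

float4 main(float2 uv : TEXCOORD) : COLOR {
    // Flip the x coordinate: the pixel at u takes its color from 1 - u.
    return tex2D(implicitInputSampler, float2(1.0 - uv.x, uv.y));
}
```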
You can even create a displacement effect that will respond to user input correctly (such as mouse-over etc.) - all you need to do is override the EffectMapping property of your effect class.
Multi-input Effects
Effects can have several input images to execute advanced blending operations. This is beyond the scope of this article, but you can find a detailed description of this technique on Greg Schechter's blog.
Common Mistakes
The way pixel shaders are compiled from Visual Studio introduces several potential bugs. These bugs do not cause any compile-time errors and are sometimes difficult to find.
- First of all, when you add a new Shader Effect to your library, don't forget to set the "Build Task" of the new .fx file to "Effect". If you forget to do this, your shader will not be compiled and your application will crash as soon as it attempts to use the effect.
- Another possible problem is the way compiled effects are included in your managed assembly. If you change the file structure of your project, or if you rename an effect source file, don't forget to change the file path assigned to the UriSource property of the associated PixelShader, otherwise your application will crash when it attempts to create an instance of the effect.
- If you add a new parameter to your effect, don't forget to add a new UpdateShaderValue method call to the constructor of your effect class. Otherwise, your effect might use wrong default values.
- Be careful when defining default values of effect parameters, because the default value type must match the parameter type exactly. If a property is of type double, you can't simply use an integer literal (such as "1") as its default value; you have to use double values such as "1.0" or "1d".
Recommended Resources
Greg Schechter wrote a brilliant series of articles about Shader Effects, by far the best resource I have found. The series includes several examples, explains how to create multi-input effects and more.
Walt Ritscher created a great tool called Shazzam, an interactive development tool for Shader Effects. Shazzam lets you write an effect, apply it to any image and change all input parameters interactively. It even generates the associated C#/VB code for you. See the official Shazzam page to learn more. (Thanks a lot to Sacha Barber for bringing Shazzam to my attention.)
Nick Darnell wrote WPF ShaderEffect Generator, a Visual Studio plugin that lets you write and compile shaders directly in VS, a great alternative to BuildTask from Greg Schechter and Gerhard Schneider. The main difference is that Nick Darnell's plugin generates all C# classes automatically from the finished HLSL code (a functionality very similar to Shazzam). Thanks to U-P-G-R-A-Y-E-D-D for posting a link to this project!
Tamir Khason created a small program called HLSL Tester. This application is a great help if you are completely new to the HLSL language. It lets you load a bitmap image and then write simple pixel shaders, debug them and apply them to your image interactively. The only disadvantage is that this application requires the DirectX SDK to be installed.
WPF Pixel Shader Effects Library is an open-source library of high quality ready-to-use effects. The library (including source code) can be downloaded from CodePlex: http://wpffx.codeplex.com/
Sample Project
The Transparency effect with a simple WPF test application can be downloaded here.
The End
That's it. I hope this article gave you all you need to create your own amazing hardware-accelerated effects. I recommend that you pay attention to the links mentioned in the "Recommended Resources" section above.
I will be happy to answer all your questions and any feedback is highly appreciated.
Version History
- Edited 2010-07-24: Added links to the Recommended Resources section