Hi guys,
I am using Visual Studio 2010 and Shazzam Editor for a C#/WPF application.
I would like to map colors onto an image made of floats using a pixel shader, but I am stuck.
For example, say I want to map a blue-red gradient onto that image (the min/max will be set by the user).
In pixel shaders, processing RGB images is easy, but my image is not made of RGB values: I must pass the original float values. The only way I found to pass floats is to use PixelFormats.Gray32Float. One of the problems with this format is that the values must fit within the range 0-1.
So the furthest I could get is:
In C# code:
- Normalize the image so that its values fit within the range 0-1.
- Create a WriteableBitmap with PixelFormats.Gray32Float and store the normalized values in it.
- Pass this bitmap to the shader.
- Pass the original min/max values to the shader.
- Pass the user min/max to the shader.
int width = 2000;
int height = 2000;

// Build a test image: a horizontal ramp from 0 to width - 1.
float[] image = new float[width * height];
for (int x = 0; x < width; x++)
    for (int y = 0; y < height; y++)
        image[y * width + x] = (float)x;

// Normalize to the 0-1 range required by Gray32Float.
float min = image.Min();
float max = image.Max();
float[] normalizedImage = image.Select(v => (v - min) / (max - min)).ToArray();

WriteableBitmap bitmap = new WriteableBitmap(width, height, 96, 96, PixelFormats.Gray32Float, null);
bitmap.WritePixels(new Int32Rect(0, 0, width, height), normalizedImage, width * sizeof(float), 0);

// Pass the bitmap plus the original and user-selected ranges to the shader.
shaderLut.InputMin = min;
shaderLut.InputMax = max;
shaderLut.LutMin = 200;
shaderLut.LutMax = 1500;
shaderLut.Input = new ImageBrush(bitmap);
In the shader code:
- Get the normalized value as a gray level.
- Convert it back to its original value.
- Interpolate between the min/max provided by the user.
sampler2D Input : register(s0);
float InputMin : register(C0);
float InputMax : register(C1);
float LutMin : register(C2);
float LutMax : register(C3);

float4 main(float2 locationInSource : TEXCOORD) : COLOR
{
    // Read the normalized gray level from the source.
    float normalizedValue = tex2D(Input, locationInSource.xy).r;

    // Compensate for the gamma conversion WPF applies to the source.
    normalizedValue = pow(normalizedValue, 2.2);

    // Map back to the original value, then to the user's LUT window.
    float originalValue = lerp(InputMin, InputMax, normalizedValue);
    float scale = (originalValue - LutMin) / (LutMax - LutMin);

    float3 blue = { 0, 0, 1 };
    float3 red = { 1, 0, 0 };
    float3 color = lerp(blue, red, scale);
    return float4(color, 1);
}
All this works almost perfectly. The issue appears when LutMax - LutMin becomes very small compared to InputMax - InputMin: in that case the gradient quality gets poorer (in the example above, try LutMin = 1000 and LutMax = 1100).
I suppose this is because the float value read in the pixel shader is not as precise as the float value written to the WriteableBitmap (perhaps I cannot get more than 256 distinct values?).
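If that hypothesis is right, a quick back-of-the-envelope calculation (sketched in Python just for brevity) shows why the narrow window bands: 256 levels spread over the example's 0-1999 input range leave only about 13 distinct values inside a 100-unit LUT window:

```python
# Rough check of the 256-level hypothesis (an assumption on my part, not a
# confirmed property of Gray32Float): how many distinct gray levels would
# fall inside the user's narrow LUT window?
input_min, input_max = 0.0, 1999.0   # range of the example ramp image
lut_min, lut_max = 1000.0, 1100.0    # the narrow user-selected window

step = (input_max - input_min) / 255.0         # one quantization step, ~7.84
levels_in_window = (lut_max - lut_min) / step  # ~12.8 distinct values
print(step, levels_in_window)
```

Roughly 13 distinct colors across the whole gradient would indeed produce visible banding, which matches what I see.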
So does anyone know how to pass the original float values to the shader without losing precision?
Thanks.