|
|
I'm already using pointers + unsafe code to convert the input image to a pixel matrix, and to reconvert the processed matrix back to an image.
I don't see using pointers throughout my code providing much improvement, since most of my operations are basic mathematical operations performed on the integer matrix. Besides, a pixel map resembles the image more closely than the Bitmap data, which is a linear list of the pixel values in the input image. I often need to analyze two-dimensional "windows" around each pixel, which is simpler to do in a two-dimensional map than in a linear data structure.
For now, the methods suggested by Robert have given some improvement. I'll look into pointers again; I'm just not very familiar with their use at this point.
Thanks...
|
|
|
|
|
This is probably not a data structure problem but an algorithm problem.
It would help a lot if you could describe what you are actually doing or what result you are expecting. I always enjoy crunching performance issues, but for that some more input is needed.
Nevertheless, some suggestions:
1. Don't call Length or GetLength inside a loop. Assign the lengths of the two dimensions to variables and use those. Depending on what you are doing within the loops, this can speed things up by as much as 50% (only true in Release mode).
2. Test in Release mode. Depending on what operations you are doing, this can have a real impact on your performance.
3. Get a faster processor.
|
|
|
|
|
I've tried my best to make my algorithm as optimal as I can...
1.
Here's what I'm trying to do: the two-dimensional matrix I mentioned is a pixel map of an image. I need to process the image, updating the value of each pixel depending on the characteristics of a roughly 10x10 pixel window centered on the pixel I wish to update. A typical loop, for example, looks like:
/*
 * iht - height of array
 * iwd - width of array
 * iPix - two-dimensional array
 * n1, n2, n3 - integers
 * iWindowSize - half-width of the processing window
 */
for (int y = 0; y < iht; y++)
{
    for (int x = 0; x < iwd; x++)
    {
        // clamp the margins of the processing window to the image
        int xl = (x < iWindowSize) ? 0 : x - iWindowSize;
        int xr = (x > iwd - iWindowSize - 1) ? iwd : x + iWindowSize + 1;
        int yt = (y < iWindowSize) ? 0 : y - iWindowSize;
        int yb = (y > iht - iWindowSize - 1) ? iht : y + iWindowSize + 1;

        // get mean value
        n1 = 0; n2 = 0;
        for (int i = yt; i < yb; i++)
        {
            for (int j = xl; j < xr; j++)
            {
                n1 += iPix[i, j];
                n2++;
            }
        }
        n3 = n1 / n2; // n3 is the mean

        // get variance
        n1 = 0;
        for (int i = yt; i < yb; i++)
        {
            for (int j = xl; j < xr; j++)
            {
                n1 += (n3 - iPix[i, j]) * (n3 - iPix[i, j]);
            }
        }
        n2 = n1 / n2; // n2 is now the variance

        // update pixel
        if (n2 < (n3 * n3) / 4) iPix[y, x] = n3;
        // this is an example of the kind of processing I am using
    }
}
2.
Haven't tried Release mode so far... Guess I'll try that option...
3. Can't change the processor!
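A side note on the extract above: the two window passes can be merged into one by also accumulating the sum of squares, since Var = E[x^2] - mean^2. Integer division makes the result round slightly differently from the two-pass version, much as it already does in the original. A minimal sketch on a tiny made-up matrix:

```csharp
using System;

class OnePassStats
{
    static void Main()
    {
        int[,] pix = { { 1, 2 }, { 3, 6 } };
        int n1 = 0, nsq = 0, n2 = 0;
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
            {
                n1 += pix[i, j];
                nsq += pix[i, j] * pix[i, j];  // sum of squares in the same pass
                n2++;
            }
        int n3 = n1 / n2;                      // mean: 12/4 = 3
        int var = nsq / n2 - n3 * n3;          // E[x^2] - mean^2 = 50/4 - 9 = 3
        Console.WriteLine(n3 + " " + var);     // prints "3 3"
    }
}
```

With 8-bit pixel values (0..255) and small windows, the int accumulators here cannot overflow.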
|
|
|
|
|
Release mode doesn't provide any advantage.
In fact, while the average processing time for 14 pictures was 3.411 seconds in debug mode, it climbed to 3.540 seconds for the same set of pictures in release mode!
|
|
|
|
|
Hmmmm... I've tested exactly the code you posted, and on my machine the release-mode version cut the time needed nearly in half. Have you tested it within VS or standalone?
Check that the 'optimize' flag is set to true, 'check for arithmetic overflow' to false, and 'generate debug info' also to false.
But I think I have found something to increase the speed of the algorithm itself:
If I understand it correctly, you loop through all columns, and within that loop through all rows, and then sum up all values within the window. One big part of the work is that summing. If you picture all those windows in a chart, you can see that they overlap each other, so you are summing the same values dozens of times. My suggestion would be to sum up the values of the needed rows for all columns at once before processing a line. That way you don't have to sum all values within the window, only the prepared column sums. The same is probably possible for the variance.
I hope this was somewhat clear. If not, I'll probably implement it myself when I have a few minutes free.
Btw: Do you only work with integers? Are those matrix values calculated for only one color channel (red, green, blue), or is the complete RGB value stored? If the latter, summing those values up could easily overflow an integer variable.
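The column-sum idea above can be pushed to its standard conclusion, a summed-area table (integral image): after one pass over the image, the sum over any window costs four lookups. A minimal sketch, assuming the pixel map is an int[,] as in the earlier posts; all names here are made up for illustration:

```csharp
using System;

class BoxFilterDemo
{
    // Build a summed-area table: sat[y+1, x+1] = sum of pix[0..y, 0..x].
    // long accumulators rule out the integer overflow mentioned above.
    static long[,] BuildSat(int[,] pix)
    {
        int h = pix.GetLength(0), w = pix.GetLength(1);
        long[,] sat = new long[h + 1, w + 1];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                sat[y + 1, x + 1] = pix[y, x] + sat[y, x + 1] + sat[y + 1, x] - sat[y, x];
        return sat;
    }

    // Sum of pix over rows yt..yb-1 and columns xl..xr-1, in O(1),
    // matching the half-open window margins used in the posted loop.
    static long WindowSum(long[,] sat, int yt, int yb, int xl, int xr)
    {
        return sat[yb, xr] - sat[yt, xr] - sat[yb, xl] + sat[yt, xl];
    }

    static void Main()
    {
        int[,] pix = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
        long[,] sat = BuildSat(pix);
        Console.WriteLine(WindowSum(sat, 0, 3, 0, 3)); // whole image: 45
        Console.WriteLine(WindowSum(sat, 0, 2, 0, 2)); // top-left 2x2: 1+2+4+5 = 12
    }
}
```

A second table built over squared pixel values gives the windowed sum of squares, so the variance follows from Var = E[x^2] - mean^2 without revisiting the window at all.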
|
|
|
|
|
Great find!
I've made the changes you suggested to my algorithm (avoiding repeatedly summing up columns in overlapping windows), and have been able to shave off over a second!
Release mode, however, is still not giving much improvement. The 'optimize' flag is true, and 'check for arithmetic overflow' and 'generate debug info' are both set to false. I should mention that the code I posted is only a small section of my entire procedure. On my end, I'm processing the input image through a series of functions (filters), each of which updates pixel values depending on their neighborhoods, similar to the extract in my previous post.
Currently, the entire procedure takes an average of ~2.4 seconds for a set of 14 test pictures in both Debug and Release modes.
Also, I'm only working with integers. The input images are 8-bit grayscale images, so these integers are only between 0 and 255.
Thanks...
|
|
|
|
|
So you are now relatively close to your goal...
You say you are working on images. Where does the data come from? Do you convert a Bitmap into a matrix, do your calculations, and then recreate the image? If so, you could instead work directly (unsafe) on the bitmap data with the Bitmap's LockBits function. It's a bit complicated (I don't like pointer handling), but it's worth a try.
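For reference, the linear layout LockBits exposes (through BitmapData.Scan0 and BitmapData.Stride) is less awkward than it first appears: for an 8-bit grayscale image, pixel (x, y) lives at offset y * stride + x, so two-dimensional windows are still easy to address. A self-contained sketch using a plain byte[] to stand in for that buffer:

```csharp
using System;

class StrideDemo
{
    // Mean of a wy x wx window with top-left (yt, xl) in a linear 8-bit
    // buffer laid out row by row, `stride` bytes per row (stride >= width,
    // just like the buffer Bitmap.LockBits hands out).
    static int WindowMean(byte[] buf, int stride, int yt, int xl, int wy, int wx)
    {
        int sum = 0;
        for (int y = yt; y < yt + wy; y++)
            for (int x = xl; x < xl + wx; x++)
                sum += buf[y * stride + x];   // 2-D access over linear memory
        return sum / (wy * wx);
    }

    static void Main()
    {
        // a 2x4 "image" stored with stride 4
        byte[] buf = { 10, 20, 30, 40,
                       50, 60, 70, 80 };
        Console.WriteLine(WindowMean(buf, 4, 0, 0, 2, 2)); // (10+20+50+60)/4 = 35
    }
}
```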
|
|
|
|
|
I pick the data from the image using a BitmapData object, copy all pixel values to the two-dimensional array, process this array, and finally copy the values back to memory, recreating the image.
I don't have much experience with pointers either, and am avoiding using them extensively because working on the bitmap data directly seems more complicated: the pixels are laid out linearly in memory, whereas I need to analyze two-dimensional windows (hence the more suitable pixel map).
Besides, I'm not even sure that using unsafe code and pointers would provide a major breakthrough (although even a minor one would be acceptable in my case), since most of my processing consists of simple mathematical operations.
|
|
|
|
|
You are probably right. Optimization probably ends here...
Btw: What kind of machine is this running on? Multiprocessor? Hyperthreading? I have some multithreaded improvements in mind.
|
|
|
|
|
Single Processor, Single Thread!
Let's hear about the improvements...
|
|
|
|
|
sarabjs wrote:
Single Processor, Single Thread!
I think in that case threading would be counterproductive...
Sorry, but I think I have no further suggestions for improving the performance.
|
|
|
|
|
Nein problemo!
You've done great.. Thanks for the help...
Sarab.
|
|
|
|
|
|
No he doesn't. He can't.
Why?
Let's assume you have something like this:
for (int i = 0; i < myArray.Length; i++)
{
}
How can the compiler know what is done within the loop? The loop could, for instance, change the array:
for (int i = 0; i < myArray.Length; i++)
{
myArray = new int[myArray.Length + 10];
}
If the JIT had replaced the condition in the loop with a constant value, the program would no longer behave correctly. This simple example could probably be resolved, but there are more complex cases a compiler can never resolve... so it doesn't even try.
I've measured this in both debug and release mode, and it can have a real impact on your performance (about 50% if you, e.g., just sum up the values of an array).
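For completeness, the caching pattern under discussion looks like this; the summing is incidental, hoisting Length out of the loop condition is the point. Whether it actually helps depends on whether the array is a local or a field, as noted later in the thread, so measure before relying on it:

```csharp
using System;

class CachedLength
{
    static void Main()
    {
        int[] data = { 1, 2, 3, 4, 5 };
        int len = data.Length;        // read the property once, outside the loop
        int sum = 0;
        for (int i = 0; i < len; i++) // condition no longer touches data.Length
            sum += data[i];
        Console.WriteLine(sum);       // prints "15"
    }
}
```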
|
|
|
|
|
|
Hmmm... I've tested some constellations.
As far as GetLength goes, I'm totally right: caching the length makes it by far faster.
For the normal Length property it's a bit more complicated. Brad is right if the array is declared locally. If it's a field member, the cached version will still be a bit faster (about 15%).
It seems hard to give general hints on this topic. It's mostly as Brad stated:
"The lesson here is more along the lines of measure, measure, measure when you are doing perf work"
|
|
|
|
|
Robert Rohde wrote:
It seems hard to give general hints on this topic. It's mostly as Brad stated:
"The lesson here is more along the lines of measure, measure, measure when you are doing perf work"
I agree.
David
Never forget: "Stay kul and happy" (I.A.)
David's thoughts / dnhsoftware.org / MyHTMLTidy
|
|
|
|
|
I guess...
Though as far as my requirements are concerned, I don't need to use GetLength etc., since the dimensions of the pixel matrix I'm using remain the same throughout.
|
|
|
|
|
I am currently building a client/server application using .NET. It's a WinForms-based app, and for the communication I thought I would use sockets. But someone just asked, "why don't you use remoting?" I don't know how skilled this person is or how much experience he has with remoting and/or sockets, so I can't really tell what that comment was based on.
The application is nothing out of the ordinary, really. The server has a database, and the clients insert, update, delete, and request data to/from the server. All clients and the server are on a local network.
What I really need to know is: how do I tell when an application is a good candidate for using remoting for its network communication?
|
|
|
|
|
My take on this:
Use remoting if client and server are guaranteed to be .NET.
If a non-.NET client needs to connect to the server, use sockets.
Nish
|
|
|
|
|
I want to distribute an application to people who may or may not have the .NET Framework installed. Currently, I have created a setup project that includes both the application and the .NET Framework, and the framework is installed if it is not already present. However, this leads to a large distribution file (I would like to email my application). As the .NET Framework only needs to be installed once, shipping it with every subsequent release would be a waste.
I would prefer to detect whether the .NET Framework is installed, and if not, display a popup telling the user to download the framework from the Microsoft website.
Does anyone know how I could do this?
Thanks
Liam
|
|
|
|
|
You could examine the registry:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent\Post Platform
|
|
|
|
|
What I am struggling with is how to get code to execute that determines whether the .NET Framework exists: the code that I write will itself need the .NET Framework to run.
|
|
|
|
|