|
Thank you so much! I have so many different ideas and I am working on them. Thank you!
|
|
|
|
|
Hi,
I am attempting to create a simple region-growing image program. The idea is that these regions will become "nodes" for the next part of my project. I just need a bit of advice on a good, easy way to store the pixel data.
At the moment I have a class called PixelData that holds the x/y location, pixel color value, and a label ("F" = free, "A" = allocated, etc.) for every pixel in a grayscale image. These are all held in a List<PixelData>. I also have a node class that will hold the pixels it consumes.
The idea is that a random seed point will be selected, the corresponding pixel's label in the PixelData list will change to "A" for allocated, and the pixel will also be added to the node class. Over time many pixels in the PixelData list will become allocated by the node, and the node class's list of pixels will grow.
This is the first approach that came to me. Can anyone recommend a better or simpler way? Speed isn't a necessity at this stage; I just want to get something working.
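For reference, the scheme described above could be sketched roughly like this (the field and method names are my own guesses at the intended design, not taken from the post):

```csharp
using System;
using System.Collections.Generic;

// Sketch of the scheme described above; member names are illustrative.
class PixelData
{
    public int X, Y;
    public byte Gray;             // grayscale value
    public char Label = 'F';      // 'F' = free, 'A' = allocated
}

class Node
{
    public List<PixelData> Pixels = new List<PixelData>();

    // Claim a free pixel for this node and mark it allocated.
    public void Consume(PixelData p)
    {
        if (p.Label == 'F')
        {
            p.Label = 'A';
            Pixels.Add(p);
        }
    }
}
```

Region growing would then repeatedly call Consume on the unallocated neighbours of pixels already in the node.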
Regards.
|
|
|
|
|
Sorry, but I don't understand what you're trying to accomplish, so I can't really offer any ideas.
If you want to choose a pixel at random and change its attribute, then just do it -- that should be easy. The question for me would be: Why? Certainly that's not the entire object of the program or class?
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software
|
|
|
|
|
Hi,
if I understand you correctly you have an object per pixel, holding grayscale, x, y, and some state. That is a lot of data, and a sure way to spend lots of memory and time. .NET will need some 32 bytes per object, so a megapixel image of yours will consume some 40MB of memory of which 80% is irrelevant to your app (I'm assuming shorts for grayscale, x, y, state). These pixel objects will be spread around in your app's memory, and the garbage collector will eventually have to watch over them and collect them when no longer needed; that will take some doing.
So I am completely against the idea of turning every pixel into an object.
I also am not sure why a pixel needs to store its location, I've never done that.
I tend to keep monochrome pixels in a one-dimensional array, so I can easily move to its neighbours (by adding +1, -1, +WIDTH, or -WIDTH to the array index, or pointer). A second array could keep the state. That would result in only 4 bytes/pixel, one tenth of the pixel=object approach; and pixels would be adjacent in memory making your app take full advantage of the processor caches.
Another tip: since you have state, you could make the arrays a bit larger, effectively adding a border to the image, then store a special value in the state array for those border "pixels". That may be sufficient to avoid all boundary tests, as you can now move from a pixel to its neighbour, and from an edge pixel to a border pixel, without falling off the image.
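A minimal sketch of that layout might look as follows (the class name, field names, and the sentinel value 255 are my own illustrative choices, not from the post):

```csharp
using System;

// Sketch of the 1D-array layout described above, with a one-pixel
// sentinel border around the image so neighbour walks need no bounds tests.
class PixelGrid
{
    public const byte BorderState = 255;   // marks border "pixels"
    readonly int stride;                   // padded width = width + 2
    public readonly byte[] Gray;
    public readonly byte[] State;

    public PixelGrid(int width, int height)
    {
        stride = width + 2;
        int padded = stride * (height + 2);
        Gray = new byte[padded];
        State = new byte[padded];

        // Mark the border so a region can never grow off the image.
        for (int x = 0; x < stride; x++)
        {
            State[x] = BorderState;                         // top row
            State[stride * (height + 1) + x] = BorderState; // bottom row
        }
        for (int y = 0; y < height + 2; y++)
        {
            State[y * stride] = BorderState;                // left column
            State[y * stride + stride - 1] = BorderState;   // right column
        }
    }

    // Index of interior pixel (x, y); its neighbours are then at
    // i - 1, i + 1, i - Stride, and i + Stride.
    public int Index(int x, int y) { return (y + 1) * stride + (x + 1); }

    public int Stride { get { return stride; } }
}
```

Moving to a neighbour is a single index addition, and the sentinel state replaces every edge check.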
modified on Monday, December 14, 2009 6:45 PM
|
|
|
|
|
This is my code:
using System;
using System.IO;
using System.ServiceProcess;
using System.ComponentModel;
using System.Configuration;
using System.Configuration.Install;

namespace ServiceTest
{
    public class MainClass : ServiceBase
    {
        public static void Main(string[] args)
        {
            ServiceBase.Run(new MainClass());
        }

        protected override void OnStart(string[] args)
        {
            Console.WriteLine("Started OK");
            base.OnStart(args);
        }

        protected override void OnStop()
        {
            Console.WriteLine("Stopped OK");
            base.OnStop();
        }
    }

    [RunInstaller(true)]
    public class MainClassInstaller : Installer
    {
        public MainClassInstaller()
        {
            ServiceProcessInstaller process = new ServiceProcessInstaller();
            process.Account = ServiceAccount.LocalSystem;

            ServiceInstaller serviceAdmin = new ServiceInstaller();
            serviceAdmin.StartType = ServiceStartMode.Manual;
            serviceAdmin.ServiceName = "Service Test";
            serviceAdmin.DisplayName = "Service Test";

            Installers.Add(process);
            Installers.Add(serviceAdmin);
        }
    }
}
I'm writing a server program in MonoDevelop on Ubuntu 9.04, and I've got mono-service2 installed. When I try to run the exe via mono-service2 --debug ./ServiceTest.exe I don't get any output.
modified on Monday, December 14, 2009 4:51 PM
|
|
|
|
|
Try using Console.Out or Console.Error instead.
|
|
|
|
|
|
I replied too quickly....
In fact, stdout and stderr may be rerouted under Linux:
--> check /var/log/messages
--> you can try strace to see where the output goes
Otherwise, use --no-daemon in order to see the output going to the console.
I hope this is the right answer.
|
|
|
|
|
I've tested your code on my 9.10 installation and it works fine, but I was able to reproduce your problem. Whether or not it's the same problem I can't tell, but here's how I figured out what it was:
In addition to writing to the console, your program's output is sent to the user.log file; you can view it by going to [System -> Administration -> Log File Viewer]. In the list on the left-hand side you should be able to find user.log; select it and scroll to the bottom of the log file on the right to see the output from your program.
What I did to screw things up was run mono-service2, then open the System Monitor and kill the mono process. When I tried to run mono-service2 again it put an error message in the user.log file that helped me fix the problem; in my case it had created a lock file that wasn't properly disposed of. All I had to do was delete the lock file and all was back to normal.
Try checking your log file for any errors that might indicate what's wrong.
|
|
|
|
|
I changed it to Console.Error.WriteLine and I was able to see "Started OK" and "Stopped OK". I ran mono-service2 with the --no-daemon arg. I'll try it without.
*Edit*
I started it without the --no-daemon argument, there is no output, but mono does show up in the list of running processes.
*Edit #2*
There is no output in the user.log file coming from my "service".
*Edit #3*
Changing it from Console.Error to Console.Out made no change.
modified on Monday, December 14, 2009 11:22 PM
|
|
|
|
|
I'm surprised that it produced output at all; Windows services don't have a visible console attached, and running without the --debug flag seems to mimic that. What if you try writing to a file instead?
|
|
|
|
|
In my opening post I stated I was running Ubuntu 9.04
|
|
|
|
|
Yes, but a service is a service. I would expect mono to mimic the behavior of services on their native platform, which is what running without the --debug flag seems to do.
Natively they have no console or UI so if you want output, the easiest way is to write it to a file. Like I said, though, your code works on my 9.10 installation with the --debug flag.
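As a sketch, a minimal file-logging helper might look like this (the class name and log path are my own illustrative assumptions; pick a path the service account can write to):

```csharp
using System;
using System.IO;

// Hypothetical helper for logging from a service that has no console.
// The log path is an assumption, not anything mono-service prescribes.
static class ServiceLog
{
    const string LogPath = "/tmp/servicetest.log";

    public static void Write(string message)
    {
        // Append a timestamped line; AppendAllText creates the file if needed.
        File.AppendAllText(LogPath,
            DateTime.Now.ToString("s") + " " + message + Environment.NewLine);
    }
}
```

OnStart and OnStop could then call ServiceLog.Write("Started OK") instead of Console.WriteLine.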
|
|
|
|
|
I'll try writing to a file.
*Edit*
On Ubuntu 9.10 the output of Console.WriteLine, Console.Out.WriteLine and Console.Error.WriteLine goes to the user log. Why do I have to killall mono in order to stop the service? Why can't I just call mono-service2 ServiceTest.exe to stop it?
modified on Tuesday, December 15, 2009 12:02 PM
|
|
|
|
|
From what I saw when I tried it, killing the mono process didn't give it a chance to clean up the lock file it had created for the service it was running. Any further attempts to run the service would fail because it would see the dead lock file and think the service was still running.
The docs for mono-service tell you how to start a service so that you know where the lock file is, and then use that to stop the service.
|
|
|
|
|
|
I have two classes, my main class (in which a form resides) and a file copying class. (I have other classes they just aren't important).
I have a WndProc handler that detects when a USB drive is inserted; when the event is caught it fires a worker thread which runs the file-copying method, so that the form can still be updated while the file operations are running.
protected override void WndProc(ref Message m)
{
    if ((m.Msg == WM_DEVICECHANGE) && (m.WParam == DBT_DEVICEARRIVAL))
    {
        int deviceType = Marshal.ReadInt32(m.LParam, sizeof(int));
        if (deviceType == LOGICAL_VOLUME)
        {
            uint unitMask = (uint)Marshal.ReadInt32(m.LParam, sizeof(int) * 3);
            if (unitMask != 0)
            {
                char driveLetter = DriveLetterFromUnitMask(unitMask);
                backgroundWorker1.RunWorkerAsync(copy);
            }
        }
    }
}
The worker is not hand-coded but simply added via the form designer (a BackgroundWorker control).
Its event handlers were generated by double-clicking the control in the designer view.
After adding my code I had this:
public void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
{
    CopyObject a = e.Argument as CopyObject;
    RecursiveCopy.FullCopy(a.sConfigLocation, a.sourceFolder, a.destFolder, a.sDriveName, a.iniFile);
    e.Result = "1";
}

public void backgroundWorker1_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    string message = e.UserState as string;
    fileCopyingLbl.Text = message;
}
So the BackgroundWorker will call FullCopy, as shown here:
public class RecursiveCopy
{
    public static void FullCopy(string sConfigLocation, string sourceFolder, string destFolder, string sDriveName, object iniFile)
    {
        if (!Directory.Exists(destFolder))
            Directory.CreateDirectory(destFolder);

        int countFile = 0;
        string[] files = Directory.GetFiles(sourceFolder);
        foreach (string file in files)
        {
            string name = Path.GetFileName(file);
            string dest = Path.Combine(destFolder, name);
            DateTime WriteTime = File.GetLastWriteTime(dest);
            IniFunctions.WriteToIni(sConfigLocation, sDriveName, file, WriteTime.ToString(), iniFile);
            object fileCopying1 = file;
            PenDriveBackup myClass = new PenDriveBackup();
            myClass.backgroundWorker1.ReportProgress(0, file);
            File.Copy(file, dest, true);
        }

        string[] folders = Directory.GetDirectories(sourceFolder);
        foreach (string folder in folders)
        {
            string name = Path.GetFileName(folder);
            string dest = Path.Combine(destFolder, name);
            Console.WriteLine("Copying " + folder);
            FullCopy(sConfigLocation, folder, dest, sDriveName, iniFile);
        }
    }
}
It's been annoying me a lot; I've tried a lot of different approaches and none of them work.
I don't really understand threading all that much, so I'm shooting in the dark here a little.
Thanks in advance George.
|
|
|
|
|
Hi George,
while you clearly attempted to explain things well, you did manage to confuse me: what is the name of the class holding most of the code shown? and what is inside PenDriveBackup?
are you sure you want a new PenDriveBackup() instance created in your FullCopy() method, which is after all a recursive method?
you might answer these by providing a simple class schematic, or by editing your initial post.
|
|
|
|
|
Okay, sorry for not explaining this clearly. My goal is really quite simple. What I want to achieve is:
1. Detect a USB Insertion (this works fine)
2. Upon detection begin copying the contents from the USB drive onto the hard drive. (this works fine)
3. Update a label control on my form
Now, if we call the file-copying method on the same thread, the form cannot be updated (it just freezes while copying takes place).
To counteract this I went into the designer view and added a BackgroundWorker control, which allowed me to run the file-copying method on that worker's thread so that the form is able to be updated. (This works: the copying takes place and the form isn't frozen.)
Now, as point 3 on my list is to update the control, this is what I need to do.
Hopefully this can be explained better in code than in words, so:
public void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
{
    CopyObject a = e.Argument as CopyObject;
    RecursiveCopy.FullCopy(a.sConfigLocation, a.sourceFolder, a.destFolder, a.sDriveName, a.iniFile);
    e.Result = "1";
}
This is the BackgroundWorker handler as added before, and RecursiveCopy.FullCopy is called in this function (which runs on another thread).
This works fine; the file-copy operation is called successfully, as shown here:
public class RecursiveCopy
{
    public static void FullCopy(string sConfigLocation, string sourceFolder, string destFolder, string sDriveName, object iniFile)
    {
        if (!Directory.Exists(destFolder))
            Directory.CreateDirectory(destFolder);

        int countFile = 0;
        string[] files = Directory.GetFiles(sourceFolder);
        foreach (string file in files)
        {
            string name = Path.GetFileName(file);
            string dest = Path.Combine(destFolder, name);
            PenDriveBackup myClass = new PenDriveBackup();
            myClass.backgroundWorker1.ReportProgress(0, file);
            File.Copy(file, dest, true);
        }

        string[] folders = Directory.GetDirectories(sourceFolder);
        foreach (string folder in folders)
        {
            string name = Path.GetFileName(folder);
            string dest = Path.Combine(destFolder, name);
            Console.WriteLine("Copying " + folder);
            FullCopy(sConfigLocation, folder, dest, sDriveName, iniFile);
        }
    }
}

public void backgroundWorker1_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    string message = e.UserState as string;
    fileCopyingLbl.Text = message;
}
From what I know, the ProgressChanged method is there so you can update your form, so I believed that calling it would update the label as shown in the code.
If I put a MessageBox in the method, I can see that the file names are passed in properly.
For example:
public void backgroundWorker1_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    string message = e.UserState as string;
    MessageBox.Show(message);
    fileCopyingLbl.Text = message;
}
Every time a file gets copied in the RecursiveCopy.FullCopy method, it sends the data into that handler and I can see a MessageBox for each file.
So I know the data is getting passed between the methods correctly; it's just that the label control is not getting updated.
Perhaps I am tackling the problem the wrong way? Maybe with the way I'm calling it?
PenDriveBackup myClass = new PenDriveBackup();
myClass.backgroundWorker1.ReportProgress(0, file);
I hope the information this time is enough to help you get a grasp of the project I am undertaking. Thanks for replying, George.
|
|
|
|
|
Hi George,
you did not really answer the three questions I asked you, however you added a comment which probably points to the main problem:
George Quarton wrote: PenDriveBackup myClass = new PenDriveBackup(); // the PenDriveBackup class is where the form resides.
so you create a NEW form; this new form is not the one you already had and probably are looking at, and it is never shown; it has its own instance of everything, including a backgroundworker and a label; that label now gets updated, which will not do anything on the original form, the only one that is visible.
I was somehow expecting this. As I already said before, creating new things in a recursive method often is a mistake.
BTW: You should be more careful when choosing names; I'd suggest PenDriveBackupForm (normally class names are nouns and most often their name suggests what they really are or represent, here a form; methods on the other hand are best named using verbs as they are doing something).
|
|
|
|
|
Ah! I see. Silly mistake. How would I go about updating the form directly then?
I cannot call the backgroundWorker1_ProgressChanged method, as it's part of another class and cannot be made static. (I don't have much experience in C#; coming from a scripting language, things are a little different.)
Thanks again for your help and advice.
|
|
|
|
|
I would organize things a bit differently:
1) simple scheme
1A. have your PenDriveBackupForm as it is, however without the BackgroundWorker, as it does not belong to the form.
1B. have your RecursiveCopy class, with a constructor that creates a single (and private) BackgroundWorker (with the "new" keyword), to be used inside that class only; and give it a Label parameter, which you store in a private class member, so the BGW inside that class can access it.
2) better scheme (more object-oriented; keywords are: delegate, event)
2A. same as 1A.
2B. have your RecursiveCopy class, with a constructor that creates a single (and private) BackgroundWorker, to be used inside that class only; don't give it a Label parameter; give the class a public event of type Action<string> which almost means "function pointer to a function that takes a string and returns nothing". Now let your BGW fire that event.
2C. Add a "SetProgressLabel()" method to PenDriveBackupForm to accept a string and set the Label; then also add a delegate for that method to the public event you provided in RecursiveCopy.
The net result is: RecursiveCopy will execute a method of which it does not know much, and SetProgressLabel() will be called when necessary; the coupling between both classes is minimal, you can change the Label to something else without telling RecursiveCopy at all (which can't be done in the simple scheme).
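For what it's worth, scheme 2 could be sketched roughly like this (the class, event, and method names are illustrative assumptions, not taken from the original code):

```csharp
using System;
using System.ComponentModel;

// Sketch of scheme 2: the copy class owns its own private BackgroundWorker
// and raises an event instead of touching any form controls directly.
public class RecursiveCopy
{
    readonly BackgroundWorker worker = new BackgroundWorker();

    // "Function pointer to a function that takes a string and returns nothing".
    public event Action<string> Progress;

    public RecursiveCopy()
    {
        worker.WorkerReportsProgress = true;
        worker.DoWork += delegate(object s, DoWorkEventArgs e)
        {
            // ... walk the folders here, calling
            // worker.ReportProgress(0, fileName) once per file copied ...
        };
        // ProgressChanged is raised on the thread that called RunWorkerAsync
        // (the UI thread), so the subscribed handler may touch controls safely.
        worker.ProgressChanged += delegate(object s, ProgressChangedEventArgs e)
        {
            Action<string> handler = Progress;
            if (handler != null)
                handler((string)e.UserState);
        };
    }

    public void Start() { worker.RunWorkerAsync(); }
}

// In PenDriveBackupForm, the wiring would then look like:
//   RecursiveCopy copier = new RecursiveCopy();
//   copier.Progress += SetProgressLabel;  // SetProgressLabel(string) sets the Label
//   copier.Start();
```

The form subscribes to the event and can be swapped out or extended without RecursiveCopy knowing anything about labels.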
modified on Monday, December 14, 2009 7:35 PM
|
|
|
|
|
Using SMO with SQL Express 2005, I'm experiencing some frustrating behavior.
I'm using SMO Server and Database classes to script my database using Database.ExecuteNonQuery . The database objects are getting scripted correctly, but I'm having problems re-connecting to the database (using SqlConnection.Open ). I get a SqlException ("System.Data.SqlClient.SqlException: Cannot open database \"MyDatabase\" requested by the login. The login failed.\r\nLogin failed for user 'MyDomain\\MyUser'.\r\n at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)\r\n at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)\r\n at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)\r\n at System.Data.SqlClient.SqlConnection.Open()\r\n at MyCodePath\\DataAccessFactory.cs:line 57")
The problem "goes away" if I just wait before trying to create the second connection e.g. if I use a breakpoint and step through the code it works fine.
I am closing my previous connection (the one where I scripted the database) and everything is sequential on a single (the UI) thread.
I'd hate to have to code some delay or retry logic for something that seems pretty standard.
|
|
|
|
|
This is normal; you can adjust the connection string to work around connection problems, for example by giving the server more time with a Connect Timeout (the value is expressed in seconds):
myConnection.ConnectionString = "Persist Security Info=False;Integrated Security=SSPI;database=MyDatabase;server=MyServer;Connect Timeout=30";
|
|
|
|
|
That's kind of annoying. I've noticed these types of subtleties even when using standard MS tools like SSMS. My first reaction when something fails is to change some variable of the experiment before trying again, but with SQL Server, a lot of times, you just need to act like a crazy person and try the EXACT same thing expecting different results.
Thanks for your tip, I had managed to workaround the issue, but I might implement this suggestion as well.
|
|
|
|
|