|
I was about to ask a question related to this.
Right now my code looks like:
[assembly: AssemblyVersion("1.0.0.0")]
But I have versions like 1.0.1300.27939, anyway! That's fine with me, actually, since I wanted build numbers included too; it seemed to make sense to have some sort of subminor version. So I'm really confused about where it's getting "1.0.*" from...
The funny thing is, if I set the assembly version to "1.0.*", then I build a new installer (setup.msi), and it won't install! It only installs when assembly version is set to "1.0.0.0".
And then, it's a really minor thing, but when browsing C:\Windows\Assembly, I see that the System assemblies have different Versions vs. Assembly/Product versions (which seem to be the same in the Properties dialog). And I don't see how to set that -- perhaps it's set by an Installer?
What's the reasoning behind having it be major.minor.build.revision, anyway? Doesn't it make more sense as major.minor.revision.build?
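For what it's worth, the Version column in C:\Windows\Assembly is the assembly version, while the version shown in the file's Properties dialog comes from a separate attribute, so the two can differ. A sketch of setting them independently in AssemblyInfo.cs (the attribute values here are illustrative):

```csharp
using System.Reflection;

// The assembly version participates in strong-name binding;
// changing it breaks existing references to the assembly.
[assembly: AssemblyVersion("1.0.0.0")]

// The file version is purely informational (it is what Explorer's
// Properties dialog shows) and can change on every build without
// affecting binding.
[assembly: AssemblyFileVersion("1.0.1300.27939")]
```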
|
|
|
|
|
So I figured out the problem with my installer is that I'm using a custom installer, with included code:
Assembly VisITBarAssembly
{
    get
    {
        return AppDomain.CurrentDomain.Load("VisIT, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4d504ee06f99380a");
    }
}
And so that probably fails if I use "1.0.*" in AssemblyInfo. Is there a way to get the above code to work if I set AssemblyVersion to "1.0.*"? I've tried some different attacks, and they've all been failing. I'd like to not have to change version numbers in 10 different places all the time.
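One option to sketch (assembly name taken from the snippet above; whether it fits your installer scenario is an assumption) is to load by simple name, which ignores the version entirely:

```csharp
using System;
using System.Reflection;

// Loading by simple name skips version matching, so it keeps working
// when AssemblyVersion is "1.0.*". Note the caveat: simple-name loading
// resolves from the application base, not the GAC, so this may not
// apply if VisIT is only installed to the GAC.
Assembly visIt = AppDomain.CurrentDomain.Load("VisIT");
```

If the assembly must come from the GAC, another approach is to build the display name at run time from the calling assembly's own version, though with "1.0.*" two assemblies built separately can receive different auto-generated revisions, so that is only safe if both are versioned together.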
|
|
|
|
|
The question of "What's the reasoning behind having it be major.minor.build.revision, anyway? Doesn't it make more sense as major.minor.revision.build?" is one that we are puzzling over.
From various articles posted around, it seems that Microsoft itself has used schemes such as maj.min.d.s, where the build number is the number of days elapsed since a fixed date (for the "1.0.*" auto-increment, days since January 1, 2000) and the revision is derived from the seconds elapsed since midnight, along with other 'private' strategies that fix the build and/or revision numbers.
We favour maj.min.iter.build, where the last number is incremented on every build (1.2.3.4 goes to 1.2.3.5) and 'iter' is incremented when we move to the next iteration (Agile/Extreme). Whenever we move up the iteration, minor, or major number, we reset all the lower numbers to 0. E.g. if we have 1.2.3.4 and bump the major number, the next full version will be 2.0.0.0.
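The reset rule above can be sketched in a few lines (the `Bump` helper and its component indexing are illustrative, not part of any framework API):

```csharp
using System;

// Bumping any component zeroes everything below it.
// component: 0 = major, 1 = minor, 2 = iter (Build), 3 = build (Revision).
static Version Bump(Version v, int component)
{
    int[] parts = { v.Major, v.Minor, v.Build, v.Revision };
    parts[component]++;
    for (int i = component + 1; i < parts.Length; i++)
        parts[i] = 0;
    return new Version(parts[0], parts[1], parts[2], parts[3]);
}

// Bump(new Version(1, 2, 3, 4), 0) -> 2.0.0.0
// Bump(new Version(1, 2, 3, 4), 3) -> 1.2.3.5
```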
|
|
|
|
|
|
|
Here's my final question for tonight in my series of questions, and as always, they all pertain to my particular work project, which involves visualizing Web searches, so there is a fair amount of intense parsing, and a fair amount of GDI+ usage and custom controls.
In a rewrite of my work project (originally a Java application), we've embraced MSHTML, mainly for its parsing technology (using the DOM), but we also use it to download some Web pages because it can do so in a somewhat intelligent and controlled way. Also, perhaps stupidly, we figure that if it works for IE, it must be good enough for us.
Possibly there's high overhead in using MSHTML for our purposes, and perhaps other HTML parsing libraries would better suit our needs. For instance, we use MSHTML to retrieve Web pages (inherently building a DOM for each) so we can analyze them... but maybe we only need the first 10K of a document, not the whole thing. As far as I can tell, MSHTML doesn't support a partial download. We could do the partial download ourselves and feed the result to MSHTML, but that introduces other problems, such as the relative URLs it resolves being incorrect. Another problem is that we might want to access the document source via a stream rather than a single string, especially if we run into Large Object problems.
Are there any suggestions for this? We used to do the screen scraping with Java string manipulation, but it seemed fairly inefficient and not very maintainable. Then again, it's possibly just as inefficient to build an entire DOM for a page and then discard it after extracting a few bits of information. Still, writing our own HTML parser is probably not the best use of our time.
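The partial-download half of this can be done without MSHTML — a minimal sketch, assuming HTTP and a UTF-8/ASCII page (encoding detection is omitted, and the URL is a placeholder):

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

// Fetch only the first maxBytes of a page and stop, rather than
// letting MSHTML pull down the entire document.
static string DownloadHead(string url, int maxBytes)
{
    HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
    using (WebResponse resp = req.GetResponse())
    using (Stream s = resp.GetResponseStream())
    {
        byte[] buf = new byte[maxBytes];
        int total = 0, read;
        // Read in a loop: a single Read may return fewer bytes than requested.
        while (total < maxBytes &&
               (read = s.Read(buf, total, maxBytes - total)) > 0)
            total += read;
        return Encoding.UTF8.GetString(buf, 0, total);
    }
}
```

To work around the incorrect-URL problem when feeding such a fragment back into MSHTML, one option (untested assumption) is to inject a `<base href="...">` element pointing at the original URL before parsing.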
Thanks,
Arun
|
|
|
|
|
Continuing my short series of questions to ask, and as always, they all pertain to my particular work project, which involves visualizing Web searches, so there is a fair amount of intense parsing, and a fair amount of GDI+ usage and custom controls.
My understanding is that the .NET CLR puts Large Objects (> 85K) on a separate heap which, for performance reasons, is collected only during full collections and is never compacted. This is not preferable for my client application, where I may be parsing Web pages (and possibly storing them in memory for a while), and many Web pages are longer than 85K.
It seems like there should be some handy ways to deal with the Large Object problem. My understanding is that a collection of the Large Object Heap cannot be forced independently of a full collection. Two decent solutions come to mind: 1) breaking my large objects (mainly strings, in my case) down into chunks below the 85K threshold, or 2) using unmanaged memory which I manage myself. Presently my strings come from HTMLDocument objects, which is basically COM (MSHTML), so while I might prefer unmanaged memory, I don't really know how I would do that in C# while avoiding marshalling a long string through a .NET wrapper before putting it back in unmanaged memory. So two possible solutions would be either to write some unmanaged code (hopefully in C#, as I think I would be doomed in Win32 C/C++!), or to find some stream-based access to HTMLDocument that would let me store a document as a series of chunks. Then there are other problems, like finding ways to deal with those chunks...
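Option (1) can be sketched like this, assuming the document source is available as a TextReader (chunk size chosen so each char[] stays under the ~85,000-byte threshold, since chars are 2 bytes each):

```csharp
using System;
using System.Collections;
using System.IO;

// Store a large document as a list of char[] chunks, each small enough
// to stay off the Large Object Heap (40,000 chars ~= 80KB per chunk).
const int ChunkChars = 40000;

static ArrayList ToChunks(TextReader reader)
{
    ArrayList chunks = new ArrayList();
    char[] buf = new char[ChunkChars];
    int read;
    while ((read = reader.Read(buf, 0, ChunkChars)) > 0)
    {
        // Copy into a right-sized array so the last chunk isn't padded.
        char[] chunk = new char[read];
        Array.Copy(buf, 0, chunk, 0, read);
        chunks.Add(chunk);
    }
    return chunks;
}
```

The harder part, as noted above, is doing searching and parsing across chunk boundaries.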
Any advice would be appreciated.
Thanks,
Arun
|
|
|
|
|
|
Thanks! I'll give it a read.
I welcome any other ideas/articles, too. I suppose I'm overdue for giving Richter's Applied Microsoft .NET Framework Programming a close read.
|
|
|
|
|
Hmm, I actually read that a few months ago, and I didn't see any techniques for handling large objects, just a description of how they are handled in the CLR.
Also, I've heard 20K and 85K as two minimum sizes for Large Objects. Which is it?
|
|
|
|
|
I think it is 85K, but if I were you I would check MSDN.
|
|
|
|
|
I have a few questions to ask, and as always, they all pertain to my particular work project, which involves visualizing Web searches, so there is a fair amount of intense parsing, and a fair amount of GDI+ usage and custom controls.
I was profiling my project using the excellent AutomatedQA AQtime .NET profiler. In the GUI thread, one of the calls using the most CPU time was Panel.Invalidate(), which is actually Control.Invalidate(). The exact arguments were (RECT something, true), i.e. invalidate the child controls too. What's happening is that I was updating something on the panel, and the panel's child controls perform transparency, so the child controls within the bounds of the rectangle need to be invalidated as well so that their backgrounds are updated. So at least I think those arguments are correct. I assumed that Invalidate() just updated some sort of invalidated rectangle or region within a control, which would be fairly efficient. It appears that if the second argument to Invalidate() is true, it calls SafeNativeMethods.RedrawWindow() instead of SafeNativeMethods.InvalidateRect(). (Thanks to Reflector's decompiler for this information!) I don't really know what RedrawWindow() does, but it sounds like it's actually redrawing the window or performing some more complex task than just invalidating!
So it makes me wonder if Invalidate() with invalidateChildren = true is not quite what I expect out of Invalidate(), and also if it's better for me to maintain my own InvalidatedRectangle and call Invalidate() once instead of possibly 50 times, especially for the case where invalidateChildren = true.
I don't really know if this is something for me to worry about. The GUI thread doesn't seem to use a whole lot of CPU usage compared to the parsing threads, but I'm trying to reduce flicker on the GUI side, too.
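The "maintain my own invalidated rectangle" idea could look like this — a sketch assuming the fields and methods live inside the Panel-derived control (MarkDirty/FlushDirty are hypothetical names):

```csharp
using System.Drawing;
using System.Windows.Forms;

// Accumulate a dirty rectangle across many small updates,
// then flush it with a single Invalidate call.
private Rectangle dirty = Rectangle.Empty;

void MarkDirty(Rectangle r)
{
    dirty = dirty.IsEmpty ? r : Rectangle.Union(dirty, r);
}

void FlushDirty()
{
    if (!dirty.IsEmpty)
    {
        // One Invalidate (with child invalidation) instead of ~50.
        Invalidate(dirty, true);
        dirty = Rectangle.Empty;
    }
}
```

The trade-off is that the union of many small rectangles may cover area that didn't change, so whether this wins depends on how scattered the updates are.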
Thanks,
Arun
|
|
|
|
|
|
Huh? That seems like the opposite of what I want to do. Update() forces a synchronous repaint, and I would want to do that less often than Invalidate().
|
|
|
|
|
Does anyone know how to make the DataGrid select the whole row when the mouse clicks on it, just like a list box does?
Thanks.
eric feng
www.infospec.com
|
|
|
|
|
Override the mouse events and select the row AFTER calling the base method, e.g.:
protected override void OnMouseDown(MouseEventArgs e)
{
    base.OnMouseDown(e);
    DataGrid.HitTestInfo hit = HitTest(e.X, e.Y);
    if (hit.Type == DataGrid.HitTestType.Cell)
        Select(hit.Row);
}
Hi,
AW
|
|
|
|
|
This works, but when I scroll the grid, the cell goes into edit mode.
eric feng
www.infospec.com
|
|
|
|
|
I don't have this trouble; I use an overridden column style. Try overriding other events too, maybe OnScroll() (save the selected row numbers, call the base method, and restore the selection, I think). The next trouble is with sorting: the previous algorithm doesn't work there.
Hi,
AW
|
|
|
|
|
Hi,
I've run into a strange problem. I'm currently building an application that contains a user control, and for this control I want to trap presses of the ARROW keys. But...
The problem is that only the OnKeyUp event occurs for those keys, which is not what I need.
I tried overriding the ProcessKeyMessage method and found out that the control never actually receives the WM_KEYDOWN message for the ARROW keys!
Any idea how to solve this problem?
Thanks,
Georgi
|
|
|
|
|
I don't understand; on my machine both the KeyUp and KeyDown events are fired.
Bo Hunter
|
|
|
|
|
I had the same trouble; try checking the other methods with "key" in their names. I overrode ProcessKeyPreview and ProcessDialogKey in a similar case. Or use KeyUp()...
Hi,
AW
|
|
|
|
|
All you have to do is override IsInputKey, and then you'll be able to process the key in the OnKeyDown method.
"...hasn't really been well accepted ... as the ratings tell us so far " - Nishant S
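A sketch of that override, assuming a Control-derived user control:

```csharp
using System.Windows.Forms;

// Declare the arrow keys as input keys so the control receives
// WM_KEYDOWN for them (by default they are treated as navigation
// keys and consumed before OnKeyDown fires).
protected override bool IsInputKey(Keys keyData)
{
    switch (keyData & Keys.KeyCode)
    {
        case Keys.Up:
        case Keys.Down:
        case Keys.Left:
        case Keys.Right:
            return true;
    }
    return base.IsInputKey(keyData);
}
```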
|
|
|
|
|
I have this...
private void buttonOut_Click(object sender, System.EventArgs e)
{
    int nOutNum;
    int b = 1;
    int b2 = 1;
    string baseBytes;
    string basebyte1;
    string baseCell;
    string baseFULL;
    string byteFile = this.textBoxFileName.Text;
    try
    {
        for (b = 1; b <= 8; b++)
        {
            baseBytes = "//Bytes";
            basebyte1 = "/byte" + b2;
            baseCell = "/b" + b.ToString();
            baseFULL = baseBytes + basebyte1 + baseCell;
            XmlDocument xmlDoc = new XmlDataDocument();
            XmlNode bit;
            string NUM;
            xmlDoc.Load(byteFile);
            bit = xmlDoc.SelectSingleNode(baseFULL);
            NUM = bit.InnerText;
            nOutNum = short.Parse(NUM, NumberStyles.AllowHexSpecifier);
            NTPort.Outport(nAddress, (short)nOutNum);
            this.listPorts.Items.Add((short)nOutNum);
            if (b == 8)
                b2++;
            if (b == 8)
                b = 0;
        }
        MessageBox.Show("LOOP ENDED");
    }
    catch (System.NullReferenceException)
    {
        MessageBox.Show("End of " + byteFile);
        this.listPorts.Items.Add("End of File");
    }
    catch (System.IO.FileNotFoundException)
    {
        MessageBox.Show("This file does not exist");
    }
}
How can I make the timer wait 1 second before doing the loop over again?
/\ |_ E X E GG
|
|
|
|
|
If you want to put the thread to sleep, use:
Thread.Sleep(milliseconds);
Remember to include:
using System.Threading;
Rocky Moore <><
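One caution: Thread.Sleep on the GUI thread freezes the UI for that second. A sketch of an alternative using a Windows Forms Timer, which fires on the UI thread without blocking it (ProcessNextByte is a hypothetical name for the loop body from the post above, factored into its own method):

```csharp
using System;
using System.Windows.Forms;

Timer timer = new Timer();

void StartPolling()
{
    timer.Interval = 1000;                    // one second between iterations
    timer.Tick += new EventHandler(OnTick);
    timer.Start();
}

void OnTick(object sender, EventArgs e)
{
    ProcessNextByte();                        // hypothetical refactored loop body
}
```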
|
|
|
|
|