|
This doesn't concern you?
Wow.
Mark Salsbery
Microsoft MVP - Visual C++
|
|
|
|
|
foreach(Minute m in MyLife)
myExperience++;
|
|
|
|
|
Hi everyone!
I want to dynamically reduce the number of colors in images in ASP.NET. For that I use this library (Source code). I use the PalettQuantizer class to reduce to the specific colors specified in an ArrayList. But if I make the palette bigger than 256 entries, I get an IndexOutOfRangeException. I get the exception because the Bitmap.Palette.Entries array can only hold 256 entries. But if I want a bigger palette, how should I do it?
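One workaround I have been considering (an untested sketch; ReduceColors and Nearest are my own names): since GDI+ indexed pixel formats stop at 8bpp, i.e. 256 palette entries, a bigger "palette" cannot go into Bitmap.Palette at all. Instead, map each pixel to the nearest color in the larger list and write it into an ordinary 32bpp bitmap:

using System.Collections.Generic;
using System.Drawing;

static Bitmap ReduceColors(Bitmap source, IList<Color> palette)
{
    // A new Bitmap defaults to 32bppArgb, so any number of distinct colors fits.
    Bitmap result = new Bitmap(source.Width, source.Height);
    for (int y = 0; y < source.Height; y++)
        for (int x = 0; x < source.Width; x++)
            result.SetPixel(x, y, Nearest(source.GetPixel(x, y), palette));
    return result;
}

static Color Nearest(Color c, IList<Color> palette)
{
    Color best = palette[0];
    int bestDist = int.MaxValue;
    foreach (Color p in palette)
    {
        int dr = c.R - p.R, dg = c.G - p.G, db = c.B - p.B;
        int dist = dr * dr + dg * dg + db * db;
        if (dist < bestDist) { bestDist = dist; best = p; }
    }
    return best;
}

(GetPixel/SetPixel are slow; Bitmap.LockBits would be the faster route for real images.)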
Regards Tobias
|
|
|
|
|
Microsoft are retiring the MCAD/MCSD in March next year. I remember a poll on here about which version of the .NET Framework developers are targeting; the majority said 2.0.
I have an MCAD and wish to upgrade it to a higher qualification, and I feel that going for an MCSD would be a bit of a wasted effort as it targets 1.1 and will soon be obsolete. Many jobs I have seen require experience with 2.0 or higher.
If I go with the MCSD, I fear that version 4.0 of the .NET framework will come out by the time I've done it. So I'm thinking I will upgrade to an MCPD to catch up a bit. Any thoughts on this?
|
|
|
|
|
I would say go for the upgrade to MCPD, especially since the MCSD is also being retired.
Scott Dorman Microsoft® MVP - Visual C# | MCPD
President - Tampa Bay IASA
[ Blog][ Articles][ Forum Guidelines] Hey, hey, hey. Don't be mean. We don't have to be mean because, remember, no matter where you go, there you are. - Buckaroo Banzai
|
|
|
|
|
What's the upper limit on the number of projects per solution in VS2005?
|
|
|
|
|
Not sure if there is one. Hard disk space, available RAM, processor power, and human sanity while dealing with a slow load are probably the deciding factors. Have you seriously tried googling around? Maybe MS has the actual number somewhere in their docs?
--- modified
After a quick google around, I found nothing much. It probably depends on your hardware, the number of developers, and the resources available.
"The clue train passed his station without stopping." - John Simmons / outlaw programmer
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
|
|
|
|
I have one with fifty-six projects so far.
|
|
|
|
|
Hi all,
I am developing an application that plots time-series data.
Currently, I use SQL Server to store the data and ChartDirector to plot the chart.
But the application is slow and takes a long time (40-50 seconds) just to plot 1 million data points. What I do is fire a select query at the database and use the result to draw the chart using ChartDirector.
Please suggest how I can improve the speed of this application.
I am thinking of sampling (if the data is too large, e.g. covering 2000 - 2005).
Nevertheless, I should not miss the outliers when doing so.
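Here is the kind of sampling I have in mind (a rough sketch with hypothetical names): reduce each fixed-size bucket of points to its minimum and maximum, so spikes survive even though most points are dropped. One million points at 1,000 buckets plot as roughly 2,000 points.

using System;
using System.Collections.Generic;

static List<KeyValuePair<DateTime, double>> Downsample(
    IList<KeyValuePair<DateTime, double>> points, int buckets)
{
    List<KeyValuePair<DateTime, double>> result = new List<KeyValuePair<DateTime, double>>();
    int size = Math.Max(1, points.Count / buckets);
    for (int start = 0; start < points.Count; start += size)
    {
        int end = Math.Min(start + size, points.Count);
        int minIdx = start, maxIdx = start;
        for (int i = start; i < end; i++)
        {
            if (points[i].Value < points[minIdx].Value) minIdx = i;
            if (points[i].Value > points[maxIdx].Value) maxIdx = i;
        }
        // Emit the bucket's two survivors in chronological order.
        result.Add(points[Math.Min(minIdx, maxIdx)]);
        if (minIdx != maxIdx) result.Add(points[Math.Max(minIdx, maxIdx)]);
    }
    return result;
}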
I would be grateful for your help.
Thanking you!
|
|
|
|
|
Have you found any solution to your problem?
"The clue train passed his station without stopping." - John Simmons / outlaw programmer
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
|
|
|
|
I have code along the lines of:
try
{
    MarshalByRefObject obj = (MarshalByRefObject)RemotingServices.Connect(
        typeof(MyClass),
        "http://MyServer:8080/MyClass");
    MyClass myClass = (MyClass)obj;
    myClass.CallSomeMethod();
    return;
}
catch
{
    return;
}
In the above code, if for whatever reason I can't connect to my server, an exception is thrown, I eat it, and move on. That's all expected.
What I'd like to know is: Is it possible to set how long the timeout is before the remote request gives up? If the server is down, it can take a while before the exception is thrown - sometimes a minute, and I'd like to limit it to something low, like 10 seconds.
Thanks,
Jeff
|
|
|
|
|
|
Thanks! I wasn't even thinking about HttpClientChannel (and I wasn't aware of the configuration dictionary that's passed in), just MarshalByRefObject.
Thanks,
Jeff
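For anyone who finds this later, a minimal sketch of that approach (assuming the default HTTP channel, registered before the first remote call; the "timeout" channel property is in milliseconds):

using System.Collections;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Http;

// Register a client channel whose remote calls give up after 10 seconds.
IDictionary props = new Hashtable();
props["timeout"] = 10000;
ChannelServices.RegisterChannel(
    new HttpClientChannel(props, new BinaryClientFormatterSinkProvider()),
    false);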
|
|
|
|
|
If an object reference were stored adjacent to an integer in the same oct-word, or if two object references were stored likewise, is there any reasonable way to do a single CompareExchange that will update both if neither has changed, or leave both alone if either has changed? Many algorithms for non-blocking updates seem to want a 'CompareExchange-plus'. From a hardware perspective there should be no difficulty(*), but since .net needs to "know" about objects, storing two object references in a Long would seem problematic.
The closest approaches I can figure are these:
(1) When it's necessary to have one 'extra bit' with the object reference, create two objects that the reference can point to, each holding a pointer to the other. One of them is the real object, and one is a spare. If the object reference points directly at the "real" object, the "extra" bit is "zero"; if it points at the spare, it's "one". This avoids any need to create new objects while doing the update.
(2) Have the object reference point to an intermediate object which holds the real object reference and the other data; to do an update, copy the data from the old intermediate object to a new one and then CompareExchange the reference to the old one to point to the new one (see the sketch below). That approach allows an arbitrary amount of supplemental data, but is more apt to tax the garbage collector. Further, even if the rest of the algorithm is interference-free, the memory allocation isn't.
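A rough sketch of approach (2), with hypothetical types (the Extra field stands in for whatever data must change atomically with the reference):

using System.Threading;

sealed class Holder
{
    public readonly object Value;  // the real object reference
    public readonly int Extra;     // the supplemental data
    public Holder(object value, int extra) { Value = value; Extra = extra; }
}

static Holder _current = new Holder(null, 0);

static void SetExtra(int newExtra)
{
    while (true)
    {
        Holder old = _current;                          // latch the old intermediate object
        Holder copy = new Holder(old.Value, newExtra);  // copy, applying the change
        // Swap in the new holder only if no other thread got there first.
        if (Interlocked.CompareExchange(ref _current, copy, old) == old)
            return;
    }
}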
Anyone have any brilliant insights?
(*) I wouldn't be surprised if object references in IA64 fill up the largest available CompareExchange type, but I can't see any real reason they should. If an object reference stores an index into a system-maintained array of objects, then 32 bits should be plenty for that purpose. I don't want to sound like Bill Gates' "640K is enough for everyone", but I can't see much use for having four billion live objects. If an application would need anything near that number, I would think it would be more efficient to use fewer objects and store more information in each.
|
|
|
|
|
32-bit processors support 32-bit interlocked operations; 64-bit processors in 64-bit mode support 64-bit interlocked operations and that's it.
Lock-free programming is very, very hard and requires you to fully understand the memory model. It's much easier to use actual locks until you can prove that the lock is the bottleneck. I suggest you read Herb Sutter's DDJ article Use Critical Sections (Preferably Locks) to Eliminate Races[^].
The .NET Monitor is a very low cost lock: it spins on a condition variable for a while, before eventually waiting on a Windows event object (allocating one if there wasn't one). This is much the same as the Windows CRITICAL_SECTION structure, if initialized with the InitializeCriticalSectionAndSpinCount function.
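For illustration, the C# lock statement is just compiler shorthand for this Monitor pattern, so the two blocks below are equivalent:

using System.Threading;

object gate = new object();

lock (gate)
{
    // critical section
}

Monitor.Enter(gate);
try
{
    // critical section
}
finally
{
    Monitor.Exit(gate);
}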
DoEvents: Generating unexpected recursion since 1991
|
|
|
|
|
Mike Dimmick wrote: 32-bit processors support 32-bit interlocked operations; 64-bit processors in 64-bit mode support 64-bit interlocked operations and that's it.
The 32-bit processors support a CMPXCHG8B instruction for 64-bit compare-exchange. That's what's used when performing CompareExchange on a long integer. So there is no hardware impediment to providing an "Object-plus" compare-exchange in .net.
As for 64-bit architectures, the question would be whether they need to use a full 64 bits for an object reference. I can't really see any reason that should be necessary.
Lock-free programming in general is difficult, though some cases are pretty easy (e.g. a singly-linked list in which either insertions or deletions are restricted to the start of the list). Allowing a CompareExchange on an "Object-plus" would allow convenient handling of more data structures.
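For instance, a sketch of that easy case (hypothetical Node type): lock-free insertion at the start of a singly-linked list using only CompareExchange:

using System.Threading;

sealed class Node
{
    public readonly int Value;
    public Node Next;
    public Node(int value) { Value = value; }
}

static Node _head;

static void Push(int value)
{
    Node node = new Node(value);
    while (true)
    {
        Node oldHead = _head;  // latch the current head
        node.Next = oldHead;
        // Publish only if no other thread changed the head in the meantime.
        if (Interlocked.CompareExchange(ref _head, node, oldHead) == oldHead)
            return;
    }
}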
The problem with locks isn't just one of performance, but also one of stability. While it's certainly possible to use locks safely, the more widely they're used, the greater the likelihood of deadlock, priority inversion, or other such problems. Certainly there are places where locks are more practical than non-locking methods, but using locks within a class can create tricky behavioral dependencies which need to be documented and dealt with. If a non-blocking approach can yield the same result, such dependencies are eliminated.
|
|
|
|
|
Hi,
although you only just entered this, google has already picked it up. But their link is wrong, so I entered this message[^].
About your subject:
- I am in favor of lock-free stuff too, if the environment lets me.
- a Win64 system would need pointers larger than 32-bit; I expect them to use the full 64 bits, even though they already said the virtual address space would be limited to 48 bits IIRC.
- I don't think you can rely on the CMPXCHG8B instruction being present on every machine running .NET, hence you should use an API function that hides those hardware details. I don't know if one is available, and if it is, it should be offered in a .NET class!
|
|
|
|
|
Luc Pattyn wrote: a Win64 system would need pointers larger than 32-bit; I expect them to use the full 64 bits, even though they already said the virtual address space would be limited to 48 bits IIRC.
I don't see why object references would need to be as long as pointers. I would expect the system to have a table of object references; if each object reference is 32 bits, that would allow for four billion object references. If each entry in the object table is 16 bytes (more likely they'd be 32 bytes each) that would mean that the limit of four billion object references wouldn't be a factor until there were 64 gigs in use for the object table alone.
How could one practically use more than four billion objects in a process space? I ask that not as a "why would anyone ever need more than 640K" question, but rather "How could one have billions of objects in a process space and not have garbage collection become totally unmanageable."
Luc Pattyn wrote: I don't think you can rely on the CMPXCHG8B instruction being present on every machine running .NET, hence you should use an API function that hides those hardware details. I don't know if one is available, and if it is, it should be offered in a .NET class!
The System.Threading.Interlocked class offers methods to perform a compare-exchange on an integer, object reference, or long. According to Wikipedia, the 64-bit processors offer a 16-byte compare-exchange which could operate on two side-by-side 64-bit objects. The only difficulty with doing an object-pair compare-and-swap would be making sure that the .net memory manager knew what it was doing. Unfortunately, I don't know how to accomplish that, but I wouldn't think it should be overly difficult.
|
|
|
|
|
supercat9 wrote: How could one practically use more than four billion objects in a process space?
Indeed. However, if you believe developers in the field think practically, I suspect you have not spent much time reading posts in our Code Project forums. I have seen posts with people talking about having thousands of edit boxes (or something) on their Forms. Counting on today's software developers being practical is like counting on Santa Claus.
led mike
|
|
|
|
|
Hi,
supercat9 wrote: I don't see why object references would need to be as long as pointers.
You are probably right. I still think of references as pointers, but they could be something entirely different, and having an upper limit of 4G of them is fine by me. Although it *will* become another 640K limit at some point in time; don't ever think we have seen the last such mistake, things evolve a lot in 10 or 20 years time.
supercat9 wrote: The only difficulty with doing an object-pair compare-and-swap would be making sure that the .net memory manager knew what it was doing
I don't think the memory manager or the garbage collector cares much about what one does to references (replace or not, swap or not); the GC only needs to know which registers and memory locations are holding or could be holding references, but it does not care which objects get referenced, so I would say swap as much as you like.
I tried to look into the MSIL/CIL definition, but the ECMA-335 PDF file seems damaged somehow. Anyway, I was hoping to check the presence or absence of some relevant IL instructions there, since that seems the right way to go.
|
|
|
|
|
Luc Pattyn wrote: You are probably right. I still think of references as pointers, but they could be something entirely different, and having an upper limit of 4G of them is fine by me. Although it *will* become another 640K limit at some point in time; don't ever think we have seen the last such mistake, things evolve a lot in 10 or 20 years time.
Actually, since it turns out the IA64 supports a 16-byte version of CompareExchange, there probably wouldn't be much need to squish object references into 32 bits. Still, there are a number of useful things that could be done with at least some bits in an object reference (e.g. a bit to indicate whether the holder of the reference particularly wants the object to stick around; each object descriptor could then have a 'wanted' bit which would be cleared once the 'wanted' bits in all references were cleared). Classes that maintain weak references could decide to drop references that were no longer 'wanted'. Such a system would avoid the problem of weakly-referenced objects surviving garbage collections that occur while a temporary live reference exists: the garbage collector could discover that an object was not 'wanted' even while a temporary live reference existed, and the maintainer of the weak reference could then avoid creating any new live references in future.
|
|
|
|
|
Thanks for the info.
It reminds me of several microprocessor families that use, say, the lowest four bits of an address for such purposes when describing memory blocks, e.g. in paging tables, since they know those bits in real addresses would always be zero due to the granularity of things.
|
|
|
|
|
Mike Dimmick wrote: 32-bit processors support 32-bit interlocked operations; 64-bit processors in 64-bit mode support 64-bit interlocked operations and that's it.
According to Wikipedia, the IA64 architecture supports 128-bit interlocked operations, so code which assumed the existence of an interlocked CompareExchange twice as wide as an object reference should be workable on all .net machines.
The DDJ article was a bit disappointing, since it failed to mention a generally effective use of CompareExchange:
- Create a new object.
- Begin loop:
- Latch the old object reference.
- Copy the old object to the new one, making any desired changes.
- Use CompareExchange to store a reference to the new object in the old spot, if the old spot hasn't changed.
- Loop until the CompareExchange succeeds.
Note that unlike a 'normal' spinlock, provided the time between latching the old reference and attempting to write the new one was short, the CompareExchange spinlock will seldom fail more than once per active CPU trying to write the variable. It never has to wait for another thread to do anything--merely for a chance to do its own thing without other threads disrupting it.
That general approach to CompareExchange works very nicely in a wide variety of situations. The two biggest weaknesses: (1) It's only practical for objects that are small enough to be copied quickly; the longer it takes to copy an object, the more likely the CompareExchange will be to fail and force a re-loop. (2) Allowing the reuse of objects from a pool opens up major complications; each operation thus requires getting a new object from the garbage collector, creating a performance bottleneck.
Adding support for CompareExchange on an 'Object-plus' structure would make it easier to allow safe reuse of objects from a pool. If each object kept a counter of how many times it had been reused and a copy of the counter were stored along with the object reference, that would eliminate the danger that a thread could attempt to use CompareExchange on an object which another thread had discarded (put in its pool for reuse) and reallocated.
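A sketch of that idea (hypothetical types): the reuse counter travels with the reference inside one immutable holder, so a single CompareExchange covers both. Note this only emulates the semantics; it still allocates a holder per update, which is exactly the garbage-collector cost a true "Object-plus" CompareExchange would avoid.

using System.Threading;

sealed class Versioned
{
    public readonly object Value;
    public readonly long ReuseCount;  // bumped each time Value is recycled from the pool
    public Versioned(object value, long reuseCount) { Value = value; ReuseCount = reuseCount; }
}

static Versioned _slot = new Versioned(null, 0);

// Succeeds only if both the reference and its reuse count are unchanged,
// so an object that was returned to the pool and handed out again is rejected.
static bool TryReplace(Versioned expected, object newValue)
{
    Versioned replacement = new Versioned(newValue, expected.ReuseCount + 1);
    return Interlocked.CompareExchange(ref _slot, replacement, expected) == expected;
}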
|
|
|
|
|
I'll try to explain; look at this example:
Module Module1
    Sub Main()
        Dim num As Double = 1.25
        Console.WriteLine(Math.Round(num, 1, MidpointRounding.AwayFromZero))
        num = 1.225
        Console.WriteLine(Math.Round(num, 2, MidpointRounding.AwayFromZero))
        num = 1.2225
        Console.WriteLine(Math.Round(num, 3, MidpointRounding.AwayFromZero))
        num = 1.22225
        Console.WriteLine(Math.Round(num, 4, MidpointRounding.AwayFromZero))
        num = 1.222225
        Console.WriteLine(Math.Round(num, 5, MidpointRounding.AwayFromZero))
        num = 1.2222225
        Console.WriteLine(Math.Round(num, 6, MidpointRounding.AwayFromZero))
    End Sub
End Module
The output of this sample console application is:
1,3
1,23
1,223
1,2223
1,22222
1,222223
Is there something wrong?
In my application I must use five decimal digits and this rounding method.
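For comparison, here is a small check in C# (my assumption: 1.222225 cannot be represented exactly as a Double, and the value actually stored falls just below the midpoint, so AwayFromZero correctly rounds it down; a Decimal stores the digits exactly):

double num = 1.222225;
Console.WriteLine(num.ToString("G17"));  // more digits of the stored value (slightly below 1.222225)
Console.WriteLine(Math.Round(num, 5, MidpointRounding.AwayFromZero));  // 1.22222

decimal dec = 1.222225m;
Console.WriteLine(Math.Round(dec, 5, MidpointRounding.AwayFromZero));  // 1.22223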
Regards
Andrea
|
|
|
|
|
|