|
|
A .NET struct is not a .NET class. I didn't read that bit, although I did check that I was not in the managed C++ forum, because I did think of that.
Christian Graus - Microsoft MVP - C++
|
|
|
|
|
Hi,
When I use the MFC AppWizard (exe) to create a basic MFC application and select the regular (shared DLL) MFC option at the end, the program runs fine. But if I select "Use MFC in a Static Library", it shows an LNK4003 error.
I am using the Enterprise edition of VC++ 6.
Please help,
Allwyn Ds
|
|
|
|
|
Allwyn D`souza wrote:
...it shows LNK4003 error...
Which library is the linker complaining about?
"One must learn from the bite of the fire to leave it alone." - Native American Proverb
|
|
|
|
|
it says:
C:\Program Files\Microsoft Visual Studio\VC98\LIB\libcmtd.lib : warning LNK4003: invalid library format; library ignored
C:\Program Files\Microsoft Visual Studio\VC98\LIB\libcmtd.lib : warning LNK4003: invalid library format; library ignored
nafxcwd.lib(winfrm2.obj) : error LNK2001: unresolved external symbol ___CxxFrameHandler
....
and the list goes on...
Kindly help !!!
|
|
|
|
|
It sounds like your linker options are askew. I don't know if you need to add Nafxcwd.lib and Libcmtd.lib to the list of libraries to ignore, or change the order in which the linker processes them.
"One must learn from the bite of the fire to leave it alone." - Native American Proverb
|
|
|
|
|
I will try what you suggested and get back to you...
Actually, I just selected "Win32 Release" in the 'Settings For' option, and the code compiled and ran completely...
|
|
|
|
|
I just selected "Win32 Release" in the 'Settings For' option in Project/Settings, and the code compiled and ran completely...
|
|
|
|
|
I am sorry, I told you before that the code linked properly... actually the option had just got reset to the regular (shared DLL) setting, which is why the code ran fine...
Now I am back at the same point...
When I ignore the default libraries the options look like this:
/nologo /subsystem:windows /incremental:yes /pdb:"Debug/netmet1.pdb" /debug /machine:I386
/nodefaultlib /out:"Debug/netmet1.exe" /pdbtype:sept
and when I disable the ignore default libs..the options look like this....
/nologo /subsystem:windows /incremental:yes /pdb:"Debug/netmet1.pdb" /debug /machine:I386
/out:"Debug/netmet1.exe" /pdbtype:sept
What changes do I need to make to the above options?
|
|
|
|
|
I'm working on a project that involves quite a bit of graphical drawing using GDI, and thus I need to eliminate flicker using memory DCs. I've used them extensively in my program, but I think I have overused them, causing the drawing to be unnecessarily slow. (A listview, a listbox, a few buttons, and the background of the main window AFAIK)
Perhaps allocating the memory bitmap in WM_CREATE instead of WM_PAINT will speed up the painting some, keeping in mind I need to reallocate when the size changes too.
A couple of questions, because I've had some inconsistent results. (I use the memdc from atlgdix.h for you WTL devs)
1) In the WM_ERASEBKGND handler, I've found that if I do not use Invalidate(FALSE); (indicating to invalidate without erasing the background), sometimes the window does not get redrawn correctly. Is it proper practice to use a WM_ERASEBKGND handler that looks like this?
OnEraseBkgnd()
{
    Invalidate(false);
    return true;
}
2) When using a custom/ownerdraw for a listbox/listview/treeview, would it be best to use one memory DC for each item as it is being updated, or would it be faster to use a memdc for the entire window and redraw that?
What I use now works, but seems unnecessarily slow, given that progs such as iTunes and WMP are extensively skinned and require lots of drawing, and I notice no flicker there. Is there a trick that I'm not seeing?
Thanks,
Shutter
-- modified at 12:22 Monday 19th September, 2005
|
|
|
|
|
Shutter wrote:
OnEraseBkgnd() // (prototype)
{
Invalidate(false);
return true; // erased bkgnd
}
Invalidate forces a paint message. The point of WM_ERASEBKGND is to NOT generate a paint message when you want to just draw the background. You'd do better to either do nothing, call the base method, or erase the background on the window yourself.
Christian Graus - Microsoft MVP - C++
|
|
|
|
|
Allocate the back buffer (memory dc and associated bitmap) in create, and resize in WM_SIZE handler. When I need to take resizing into consideration, I never shrink the bitmap. That gives an extra performance boost speedwise, but is perhaps not optimal in terms of memory. Classic tradeoff.
Don't fire invalidations inside your WM_ERASEBKGND handler. Just return TRUE if your aim is to avoid redrawing the background.
Shutter wrote:
2) When using a custom/ownerdraw for a listbox/listview/treeview, would it be best to use one memory DC for each item as it is being updated, or would it be faster to use a memdc for the entire window and redraw that?
It depends a lot on what you are drawing. If you are drawing simple text and/or an icon, then no memory dc is needed. If it's flickering, it's probably the result of your Invalidate() inside the WM_ERASEBKGND handler.
Shutter wrote:
Is there a trick that I'm not seeing?
The best trick I know of is to draw everything in a bitmap, and then blit it to the screen on WM_PAINT.
Generally I do:
* allocate a bitmap used for drawing, and I do it once (may be resized if the control/window is to be resized)
* all operations which alter the appearance of the window, draw to the back buffer. Then I invalidate the corresponding window rectangles, where the changes occurred
* in the WM_PAINT handler, I just blit the bitmap to screen
To make it as fast as possible, make sure you only blit the portions which need to be repainted.
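A minimal sketch of that back-buffer scheme, in portable C++ so the idea is testable (all names here are mine, not from the post): the pixel vector stands in for the GDI memory DC plus bitmap, Resize never shrinks (the grow-only tradeoff described above), and BlitRect copies only the dirty rectangle, which in Win32 would be a BitBlt of the invalid rect inside the WM_PAINT handler.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Stand-in for a memory DC + compatible bitmap. In Win32 you would
// CreateCompatibleDC/CreateCompatibleBitmap once at creation, grow it in
// the WM_SIZE handler, and BitBlt the invalid rect in WM_PAINT.
class BackBuffer {
public:
    // Grow-only resize: never shrink, so repeated WM_SIZE events don't
    // reallocate. Trades memory for speed, as the post above notes.
    void Resize(int w, int h) {
        allocW_ = std::max(allocW_, w);
        allocH_ = std::max(allocH_, h);
        pixels_.resize(static_cast<std::size_t>(allocW_) * allocH_);
        w_ = w; h_ = h;
    }
    void SetPixel(int x, int y, std::uint32_t c) { pixels_[idx(x, y)] = c; }
    // Copy only the dirty rectangle to the destination (the screen DC in GDI).
    // dst is assumed to have the current logical width w_ as its stride.
    void BlitRect(std::vector<std::uint32_t>& dst,
                  int x0, int y0, int x1, int y1) const {
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x)
                dst[static_cast<std::size_t>(y) * w_ + x] = pixels_[idx(x, y)];
    }
    int AllocatedWidth() const  { return allocW_; }
    int AllocatedHeight() const { return allocH_; }
private:
    std::size_t idx(int x, int y) const {
        return static_cast<std::size_t>(y) * allocW_ + x;  // allocation stride
    }
    std::vector<std::uint32_t> pixels_;
    int w_ = 0, h_ = 0;           // current logical size
    int allocW_ = 0, allocH_ = 0; // grow-only allocated size
};
```

The key point the sketch captures: all drawing mutates the buffer, and the only screen operation is a single rectangle copy, so nothing ever flickers.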
Good music: In my rosary[^]
|
|
|
|
|
Thank you both; that had been bothering me for a while.
|
|
|
|
|
Having problems in my app working with large amounts of data.
I store a load of data in a std::vector of paired doubles.
The app works fine with small quantities of data, but crawls when loading large amounts of data and displays "out of memory" when dealing with my larger data sets (around 3,456,000,000 doubles).
I tried speeding things up by telling the vector the intended size at the start. This gave a marked speed improvement in the load routine, but it's still painfully slow and it still crashes with the larger sets.
What's the best way of handling large amounts of data like this?
--
The Obliterator
|
|
|
|
|
Obliterator wrote:
3,456,000,000 doubles
...is almost 26GB of data.
Obliterator wrote:
Whats the best way of handling large amounts of data like this?
how about a database ?
Cleek | Image Toolkits | Thumbnail maker
|
|
|
|
|
<reality check=""> hold on a minute! that can't be right!!!
Sorry about that, I copied the wrong value in!!
It's actually around 29,000,000 data points (and possibly, worst-case, as many as 115,000,000).
--
The Obliterator
|
|
|
|
|
For large datasets, you absolutely need a large amount of memory.
Check your system to see if you are short on memory (the system will swap memory out to disk).
Maximilien Lincourt
Your Head A Splode - Strong Bad
|
|
|
|
|
I've got 1GB of memory. I'm sure that's not the problem.
I'm processing what is essentially around 100MB of data. Even if it's swapped out, it shouldn't be as slow as it is!!!
I'm wondering if I need to drop the use of vectors and just use large malloc'd arrays.
--
The Obliterator
|
|
|
|
|
I thought a double was 8 bytes. With 29 million xy points, that's over 400 meg I think.
Anyway, another idea is to define an XYPoint class to encapsulate your pair of doubles, and then store pointers to XYPoint in your vector instead. When you close a dataset, don't delete all of the points - you could return some of them to a free pool (up to 50 meg, say) so that the point objects can be reused when loading the next dataset.
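A sketch of that free-pool idea (the class and method names are illustrative, not from the post): on dataset close, points go back to a capped pool instead of being deleted, so the next load avoids most of the allocator traffic.

```cpp
#include <cstddef>
#include <vector>

struct XYPoint { double x = 0.0, y = 0.0; };

// Capped free pool: Release() parks objects up to maxFree, Acquire() reuses
// them before falling back to new. Avoids churning the heap between datasets.
class PointPool {
public:
    explicit PointPool(std::size_t maxFree) : maxFree_(maxFree) {}
    ~PointPool() { for (XYPoint* p : free_) delete p; }

    XYPoint* Acquire(double x, double y) {
        XYPoint* p;
        if (!free_.empty()) { p = free_.back(); free_.pop_back(); }
        else                { p = new XYPoint; }
        p->x = x; p->y = y;
        return p;
    }
    // Keep the point for reuse if under the cap, otherwise really free it.
    void Release(XYPoint* p) {
        if (free_.size() < maxFree_) free_.push_back(p);
        else delete p;
    }
    std::size_t FreeCount() const { return free_.size(); }
private:
    std::vector<XYPoint*> free_;
    std::size_t maxFree_;
};
```

One caveat worth weighing: a pointer per point adds 4-8 bytes plus per-allocation heap overhead, so for tens of millions of points a plain vector of XYPoint by value can actually use less memory than pooled pointers; the pool mainly buys back the allocation/deallocation time between datasets.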
|
|
|
|
|
Interesting, I gave this method a try.
I wrapped my doubles into a class and allocated them using 'new' then storing pointers to the objects.
It had a marked improvement in that it processes more of the data, but it still falls over - just further along the dataset.
Thanks for the suggestion though.
--
The Obliterator
|
|
|
|
|
My guess is you are running out of virtual memory still.
Other members' suggestions such as using STL list or memory mapped files sound good to look into more as well.
But I would also consider whether your application really needs to work with the entire dataset in memory all at once. For example, is it possible to allocate a fixed cache of say 1 million points, and read in a million points, process them, write them back out, etc. The general idea is to see if your requirements allow you to load/unload just a portion of the dataset on demand, rather than all at once up front.
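The fixed-cache idea above can be sketched like this (portable C++, all names mine): one batch buffer is allocated once and reused, and a read callback refills it, so only `cacheSize` points are ever resident. In the real app the read function would pull from the data file.

```cpp
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// Stream the dataset through a fixed-size cache instead of loading it whole.
// `read(buf, cap)` fills up to cap points and returns how many it produced
// (0 means end of data); `process` works on just that window.
template <typename ReadFn, typename ProcessFn>
void ProcessInBatches(std::size_t cacheSize, ReadFn read, ProcessFn process) {
    std::vector<Point> cache(cacheSize);   // allocated once, reused per batch
    for (;;) {
        std::size_t n = read(cache.data(), cacheSize);
        if (n == 0) break;                 // end of dataset
        process(cache.data(), n);          // only this window is in memory
    }
}
```

Peak memory is then cacheSize * sizeof(Point) regardless of dataset size, which is the point of the suggestion: the requirements, not the container, decide whether this is viable.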
|
|
|
|
|
Hello,
I suppose that the data is stored in a file. Maybe you want to use Memory Mapped Files to map certain portions of the file into your process address space. This will not only save load time, but will also save you RAM.
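One practical detail if you go the Win32 route (CreateFileMapping + MapViewOfFile): a view offset must be a multiple of the system allocation granularity (GetSystemInfo reports it; commonly 64 KB). This small portable helper, with names of my own choosing, rounds a desired file offset down to a mappable base and tells you how far into the view your data actually starts.

```cpp
#include <cstdint>

// Where to place a mapped view so that a desired file offset is reachable.
// base  - offset you would pass to MapViewOfFile (granularity-aligned)
// delta - bytes to add to the returned view pointer to reach your data
struct ViewPlacement {
    std::uint64_t base;
    std::uint64_t delta;
};

ViewPlacement PlaceView(std::uint64_t desiredOffset, std::uint64_t granularity) {
    std::uint64_t base = desiredOffset - (desiredOffset % granularity);
    return ViewPlacement{ base, desiredOffset - base };
}
```

With that, reading a window of doubles at an arbitrary offset becomes: map a view at `base`, then index from `viewPtr + delta`, unmapping and remapping as you walk through the file.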
Behind every great black man...
... is the police. - Conspiracy brother
Blog[^]
|
|
|
|
|
This could be an excellent solution.
I'd forgotten about the existence of such things!
I'll look into this further.
I have a feeling though it will result in a major rewrite of this module given its current design!
Thanks
--
The Obliterator
|
|
|
|
|
Obliterator wrote:
Having problems in my app working with large amounts of data.
Whenever you start dealing with large memory systems you need to start thinking about how you access your data, how many times you access your data etc.
I deal with megs to gigs of data on occasion. A flat list of items is not always the most efficient use of memory, and certainly not the fastest: it has no knowledge of the contents and no optimized structure for accessing them. By using a vector you are in a dynamically allocating system, so if you store iterators into the data and then overrun your reserve() capacity, all those iterators are invalidated and the software can easily crash. So be very careful with your algorithms.
So the first thing I would do is verify you set it up right. Check the size() and capacity() as a diagnostic. If you notice the capacity is still increasing as you run the software, you didn't reserve enough items to hold the data, or your algorithm is using more data than you think (which amounts to the same as not reserving enough).
This could lead you to an iterator that is jumping outside your reserved size. I actually prefer a crash state in debugging because I can find what was happening at the crash and work backwards to find why. It is sometimes long and tedious, but it is at least straightforward.
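The size()/capacity() diagnostic suggested above can be as small as this (illustrative code, not from the post): record the capacity right after reserve(), then check it periodically; the moment capacity() exceeds that figure, the vector has reallocated, which costs time and invalidates every saved iterator and pointer into it.

```cpp
#include <cstddef>
#include <vector>

// True while the vector has not reallocated past its initial reserve().
// A false result means the algorithm stored more than was planned for,
// and any iterators saved before the growth are now invalid.
bool StillWithinReserve(const std::vector<double>& v, std::size_t reserved) {
    return v.capacity() <= reserved;
}
```

Dropping a check like this into the load loop (under a debug build) pinpoints exactly which pass blows through the planned size.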
Obliterator wrote:
Whats the best way of handling large amounts of data like this?
Like I said at the start, this is dataset dependent. Only you know the contents of the raw data inside the vector and how it is intended to be used. There are bintrees, quadtrees, octtrees, as many data structures as there are stars in the sky. But not all are suited to each type of data. Octtrees are well designed for spatially oriented data that fits in a 3-dimensional construct. If your data exceeds your core memory, you get into the larger problem of paging out-of-core datasets, and that is an art of its own.
So before you change your container, first make sure you understand the cause of the crash: what were your size() and capacity() at the crash? Was the capacity() still the same as what you reserved?
Then, if you truly feel a vector is not sufficient for your needs, which may be possible, you have to get into the guts of your data and choose something that is more efficient to access. I've done multi-dimensional lists for storage that gave my professor headaches; of course, at 5 dimensions you are probably trading understanding for access speed. Which is why you always start with the contents of the data. The right tool for the right job works for software as well as the woodshop.
_________________________
Asu no koto o ieba, tenjo de nezumi ga warau.
Talk about things of tomorrow and the mice in the ceiling laugh. (Japanese Proverb)
|
|
|
|
|
I'll do some investigations with size() and capacity(), though I suspect it's the way I'm allocating the objects with new that is causing me to run out of heap memory.
With regards to the data, think of it as a simple 2d graph. I don't need anything fancy. I need to be able to produce calculations based on each point in turn, to rapidly access sections of points within the list and to be able to move both forwards and backwards through the list.
At the time of design, I never considered I would be dealing with so much data. Hence the little thought I put into the design! I'm sure there's a lesson in there for me somewhere.
--
The Obliterator
|
|
|
|