|
I completely understand what you're trying to accomplish; however, you seem to underestimate what MMFs were designed for. They were designed for completely random access! They were designed to access large files! They were also integrated into the memory manager, so you get great flexibility and optimization!
That's fine if you want some type of front end to make it seem as if all the memory is available at the same time; however, the back end should definitely be using memory mapped files, because that is what you are really doing on the back end anyway, except in a very limited and unoptimized way.
|
|
|
|
|
If you do benchmark tests and find that copying a file from disk into memory all the time is faster than on-demand memory mapped files, I would send the code to Microsoft's Base OS team!
|
|
|
|
|
I'm not trying to be a prick; I just want to educate you on the operating system and what it has so you can take full advantage now and in the future.
The memory mapped file has the advantage that you can have the file itself act as the "page file" for those memory locations! This is great, because if you just read a file from disk into memory, you're actually putting it in two locations on the disk, and this could even be happening concurrently.
There is something called a "working set" for any particular process. This is the number of pages that the process can have in physical memory at the same time. This means that as you copy pages around, you could be swapping other pages back out whenever you cross 4k boundaries. You are constantly dirtying your cache, and when the memory manager deems a page dirty, the OS may copy that memory to the page file while you are also copying it to your file. So you are doing double paging! A memory mapped file can use itself as backing store, so no data is ever copied to the page file.
If you use a memory mapped file, mapping the view doesn't mean that any data is read into memory; a page is only pulled in once you access a memory location in it. Compare that to your scheme: if your user says "Allocate 10 meg", you copy 10 meg into a memory location, and if he then changes one byte and hits "done", you copy the whole 10 meg back out.
The memory mapped file has the advantage that it would probably never put all 10 meg into physical RAM. It would bring only the page containing the changed byte into memory and then write it back directly to the file. This is much faster.
The second optimization, which you would have to implement yourself, is keeping pages in RAM even after you've been told to swap them out. As an example, your user says "allocate 5 meg" and you do, but now you have to swap that out for a different 5 meg. In your scheme, you copy the old 5 meg out and recopy the new 5 meg into the same location. With a memory mapped file, just because the original 5 megabytes isn't viewable in your process anymore doesn't mean it's not still in RAM. The operating system will attempt to keep pages in memory even when they aren't being used, which is great: if you then remap that 5 meg, the OS simply needs to update your virtual addresses. In your scheme, you'd have to recopy all the memory into the process from disk every time.
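To make the mechanics concrete, here is a minimal, hedged sketch using the Win32 file-mapping API. The file name bigheap.dat is an invented example and error handling is omitted; this is an illustration of the idea, not a drop-in implementation:

```cpp
#include <windows.h>

void TouchOneByte()
{
    // The file itself becomes the backing store; no page-file copy is made.
    HANDLE hFile = CreateFile(TEXT("bigheap.dat"),
                              GENERIC_READ | GENERIC_WRITE, 0, NULL,
                              OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

    // Reserve a 10 meg section backed by the file.
    HANDLE hMap = CreateFileMapping(hFile, NULL, PAGE_READWRITE,
                                    0, 10 * 1024 * 1024, NULL);

    BYTE* p = (BYTE*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);

    p[5000000] = 42;     // faults in only the 4k page containing this byte

    UnmapViewOfFile(p);  // dirty pages are written straight back to the file
    CloseHandle(hMap);
    CloseHandle(hFile);
}
```

Note that the MapViewOfFile call reads nothing from disk by itself; only the single write actually brings a page into memory.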
Memory mapped files were designed for optimal flexibility, and you'd be surprised how many implementations use them. You can imagine they'd have to be flexible, since every file, executable and DLL in every process on the system is just a memory mapped file!
If you look at the article I posted, you'll notice the date is 1993. Over 10 years of optimizations to the memory mapped file implementation in Windows are also to your advantage.
I wouldn't dismiss this as something thrown together by some junior developer! Memory mapped files are available on other operating systems as well. The mechanism is very flexible, highly optimized, and was implemented for exactly this purpose.
|
|
|
|
|
Toby Opferman wrote:
I'm not trying to be a prick; I just want to educate you on the operating system and what it has so you can take full advantage now and in the future.
I know that you are not trying to piss me off, and I thank you for your efforts in trying to educate me on this subject, since I know very little about it.
Toby Opferman wrote:
If you looked at that article as well that I posted you will notice the date is 1993. Over 10 years of optimizations to the memory mapped file implementation in Windows is also at your advantage.
I just printed a few articles on memory mapped files (including the one you posted to me) and I will read them thoroughly during my long train ride back home.
After reading a little about the topic, I was convinced that I need more flexibility in my design. I also want to keep other options open, so I put the cache at a lower level in my design. I can now easily use different implementations of the core, even together. So for now, I'll just go ahead and start implementing the thing and wait for the benchmark tests to be conclusive.
Toby Opferman wrote:
They were designed for completely random access! They were designed to access large files!
I doubt this, since in all the articles I read about MMFs, the authors keep emphasizing that they are the best solution for accessing large sequential files and for IPC.
Toby Opferman wrote:
I wouldn't underestimate thinking that this was just implemented by some junior developer! Memory mapped files are even available on other operating systems. It is very flexible, highly optimized and it was implemented for this specific reason.
I know that MMFs have a relatively long past compared to other pieces of software, but I just want to keep some options open for now. Besides that, sceptic that I am, I'll have to see the hard numbers to be completely convinced. Having those numbers will have another advantage: it will convince people who would have doubts about the solution in the first place.
Behind every great black man...
... is the police. - Conspiracy brother
Blog[^]
|
|
|
|
|
Toby Opferman wrote:
They were designed for completely random access! They were designed to access large files!
I doubt this, since in all the articles that I read about MMF's; the authors keep emphasizing that they are the best solution for accessing large sequential files and IPC.
Just because authors don't mention this, or don't know about it, doesn't make it untrue. Think about it logically: why would you be required to access the file sequentially? Is there a limitation that would prevent random access?
Remember, there is no physical link between the file stored on the disk and RAM. There is only a logical link which is created in software. The hard disk itself can be accessed quite randomly; you simply move the read/write head to the cylinder and sector you want to read. Think about the paging file for Virtual Memory itself. It's stored on the disk, pagefile.sys. Would this be accessed sequentially?
Generally, the authors aren't talking about simulating more than 2 gigabytes of memory, and as such MMFs are more commonly discussed in the context of shared memory. Most general applications, aside from databases, don't need this functionality.
How Windows NT Provides 4 Gigabytes of Memory[^]
MAPPED FILE I/O
If an application attempts to load a file that is larger than both the system RAM and the paging file combined, the mapped file I/O services of the virtual memory manager are used. Mapped file I/O enables the virtual memory manager to map virtual memory addresses to a large file, inform the application that the file is available, and then load only the pieces of the file that the application actually intends to use. Because only portions of the large file are loaded into memory (RAM or page file), this greatly decreases file load time and system resource drainage. This is a very useful service for database applications that often require access to huge files.
There are a variety of applications that use MMFs on systems that support them for exactly this reason; here's an example:
MatLab - Accessing Files with Memory-Mapping
[^]
When Memory Mapping Is Most Useful. Memory-mapping works best with binary files, and in the following scenarios:
For large files that you want to access randomly one or more times.
|
|
|
|
|
OK, you convinced me! But I just don't understand one thing: why wouldn't the authors of those articles mention the purpose MMFs were designed for?
Anyway, I'll implement this MMF technique as the main technology for my very large heap. So my heap will basically be a thick wrapper around MMFs.
I'll implement the other technique as well, just for the benchmarks and for the learning experience. These benchmarks will also support the use of MMFs, and maybe they will convince other users as well.
I guess I've found someone to acknowledge for his expert insight into the issue.
Behind every great black man...
... is the police. - Conspiracy brother
Blog[^]
|
|
|
|
|
The funny thing is that it's really just the opposite: if you want to read through a very large file sequentially, and only do it once, file mapping is actually not recommended.
Good luck with the project.
|
|
|
|
|
Thanks.
Behind every great black man...
... is the police. - Conspiracy brother
Blog[^]
|
|
|
|
|
32-bit Windows maintains 2 gigabytes of address space for user mode processes and 2 gigabytes for kernel mode (which is segmented into session space and system space).
This makes memory below 80000000h a user mode address, and memory at or above that value a kernel mode address.
So, how much you can allocate varies, depending on how many DLLs you have loaded into your process and how much physical/virtual memory is on the system.
There are two methods that you can use to extend your user mode address space. The first is the /3GB switch. This will allow you to use 3 Gigabytes of User Mode space and restrict the kernel to 1 Gigabyte... So, what's wrong with this?
1. You're limited to 1 Gigabyte of kernel address space. So, don't plan on loading tons of heavy drivers or using this as a scalable multi-user system.
2. The drivers and applications you load should not depend on using "80000000h" as a check to determine if an address is user or kernel.
So, what's the other method to get more memory? Buy more! You can buy 8 gigabytes of memory! So, how do you use more than 32 bits' worth of memory on a 32-bit system? The answer is that the operating system can use 36-bit addressing on machines that support PAE (Physical Address Extension) (/PAE). This allows the OS to use those 8 gigabytes for user mode applications, since it can swap in a different set of page tables for each process. As an application programmer, you can also take advantage of this using AWE (Address Windowing Extensions).
Physical Address Extensions[^]
Address Windowing Extensions[^]
Alternatively you can use memory mapped files and create section views to various parts of the file to simulate large memory yourself as well.
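A minimal sketch of that last approach follows. The function name and parameters are illustrative and error handling is omitted; the key constraint is that the offset passed to MapViewOfFile must be a multiple of the system allocation granularity (typically 64k), so the helper rounds the requested offset down and reports the difference back to the caller:

```cpp
#include <windows.h>

// Map a window of `size` bytes at `offset` into an existing file mapping.
// Returns the view base; the caller's data starts at view + *delta.
BYTE* MapWindow(HANDLE hMap, ULONGLONG offset, SIZE_T size, SIZE_T* delta)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);   // dwAllocationGranularity is typically 64k

    // Round the offset down to an allocation-granularity boundary.
    ULONGLONG base = offset - (offset % si.dwAllocationGranularity);
    *delta = (SIZE_T)(offset - base);

    return (BYTE*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS,
                                (DWORD)(base >> 32),
                                (DWORD)(base & 0xFFFFFFFF),
                                size + *delta);
}
```

To slide the window, unmap the current view with UnmapViewOfFile() and call MapWindow() again with a new offset; only the section object itself stays open the whole time.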
|
|
|
|
|
Hi
I have to develop a client-server application where the client is an MFC SDI application and the server is a dialog-based application.
I am using Winsock2 as the communication mechanism between client and server.
The basic skeleton is developed: sockets at both ends are successfully created and bound, and the client has successfully connected to the server.
Now the next step is the client sending some data to the server.
My question is: where should I call recv() so that the server receives everything the client sends?
Also, does the server receive any notification that data is arriving?
|
|
|
|
|
Hi,
you can simply create a thread for each incoming connection, and receive data there in a loop:

do {
    ret = recv(sock, buf, len, 0);
} while (ret > 0);
Or you can create asynchronous sockets. That way, you get a window message when something happens on your socket (a new client wants to connect, data arrives, etc.), and you can then react to it and do your stuff...
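A hedged sketch of the asynchronous route, assuming an MFC/Win32 window: WM_SOCKET is a user-defined message you pick yourself, hWnd is assumed to be your dialog's window handle, and error handling is omitted:

```cpp
#include <winsock2.h>

#define WM_SOCKET (WM_USER + 1)   // user-defined notification message

// Once the socket is set up, register for notifications:
//   WSAAsyncSelect(sock, hWnd, WM_SOCKET, FD_READ | FD_CLOSE);

// Handler for WM_SOCKET in your window procedure:
LRESULT OnSocket(WPARAM wParam, LPARAM lParam)
{
    SOCKET s = (SOCKET)wParam;    // which socket fired

    if (WSAGETSELECTEVENT(lParam) == FD_READ)
    {
        char buf[4096];
        int n = recv(s, buf, sizeof(buf), 0);  // data is ready; won't block
        if (n > 0)
        {
            // ... process n bytes of buf ...
        }
    }
    return 0;
}
```

WSAAsyncSelect() also switches the socket into non-blocking mode, so this answers the notification question directly: Windows posts WM_SOCKET to your dialog whenever data arrives.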
EDIT: check out the following message from Jack Squirrel posted on another thread:
Try the Winsock Programmer's FAQ:
http://tangentsoft.net/wskfaq/[^]
http://tangentsoft.net/wskfaq/articles/io-strategies.html[^] - Which I/O Strategy Should I Use?
If I remember correctly, there should be sample C code of the various techniques somewhere on that site.
DKT
|
|
|
|
|
Dear all,
I'm a beginner. Can you help me distinguish between "class" and "struct" in C++?
Thanks!
Remis
|
|
|
|
|
Not much difference: class members are private by default, and struct members are public by default, unless you explicitly change their access level.
http://www.priyank.in/
|
|
|
|
|
Members of a class are private by default, whereas members of a struct are public by default. Apart from that, there is not much of a difference.
Usage-wise, structs are generally used to represent PODs (Plain Old Data types), whereas classes are used for more complicated data types (with constructors, virtual functions, ...).
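A minimal example of that default-access difference (the type and member names are just illustrative):

```cpp
#include <cassert>

struct S { int x; };            // x is public by default

class C
{
    int x;                      // x is private by default
public:
    void set(int v) { x = v; }
    int  get() const { return x; }
};

int demo()
{
    S s;
    s.x = 1;                    // fine: struct members are public
    C c;
    c.set(2);                   // must go through a public member function
    // c.x = 2;                 // would not compile: x is private
    return s.x + c.get();
}
```

The same rule applies to base classes: `struct D : B` inherits publicly by default, while `class D : B` inherits privately by default.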
Regards
Senthil
_____________________________
My Blog | My Articles | WinMacro
|
|
|
|
|
Hello,
There is no 'real' difference between these types. The only reason struct exists in C++ is that it was there in C; it is kept for backwards compatibility.
As other members stated, the little difference between the two is the default access of members.
I also got the blogging virus..[^]
|
|
|
|
|
There is a lot of difference between a struct in C and a struct in C++.
The first and foremost thing is that C++ structs are nearly equivalent to classes, with some exceptions; they are active data structures.
In C, a structure is just a grouping of data; it is as passive as any other data type.
Binding code and data together is not possible in C structures, and this is what keeps them passive.
|
|
|
|
|
Anonymous wrote:
There is a lot of difference b/w struct in C and in C++.
I believe that the original question was: the difference between "class" and "struct" in C++.
I know that you cannot have member functions and such in C, but in C++ you can, with either a struct or a class. There is really no behavioral difference between them in C++.
I also got the blogging virus..[^]
|
|
|
|
|
Hi All,
I am trying to specify the output tray (output bin) for a specific printer. It is quite easy to specify the input tray, but I couldn't find a way to specify the output tray. I can select the output tray on a PCL printer by sending the escape sequence ESC&l#L.
Is any of the Master/Guru can help me? Thank you in advance
Cheers...
|
|
|
|
|
Anyone??? Master/Guru, please help...
|
|
|
|
|
Hi All,
When I use CDC::StartPage() and CDC::EndPage(), the setting "Start printing immediately" is NOT honoured. This means that if I have a 10000-page job, I must wait until it has spooled completely before it starts printing on the printer.
I know that with StartPagePrinter() and EndPagePrinter(), pages do print even though the print job is not completely spooled. That means that for a 10000-page job, the printer starts printing as soon as the first page is completely spooled.
Is this a bug in CDC::StartPage() and CDC::EndPage()? I do not want to use StartPagePrinter() and EndPagePrinter(). Is there any way around it so that I can still use CDC and get immediate printing?
Thanks for any help in advance
|
|
|
|
|
You also need to call CDC::StartDoc(), which informs the device that a new job is starting. Then you call CDC::StartPage(), which informs the device that a new page is being printed.
When done, you need to call CDC::EndPage() for the end of a page and CDC::EndDoc() to inform the device that it has everything.
Take a look at the MSDN CDC::StartDoc documentation. There is an example in there.
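A minimal sketch of that sequence, assuming `dc` is already attached to a printer DC (the document name and drawing calls are placeholders, and error handling is trimmed):

```cpp
#include <afxwin.h>

void PrintReport(CDC& dc, int pageCount)
{
    DOCINFO di = { sizeof(DOCINFO) };   // zero-init, set the struct size
    di.lpszDocName = _T("Report");

    if (dc.StartDoc(&di) < 0)           // tell the device a job is starting
        return;

    for (int page = 1; page <= pageCount; ++page)
    {
        dc.StartPage();                 // begin one page
        // ... dc.TextOut(...), dc.Rectangle(...), etc. ...
        dc.EndPage();                   // finish the page
    }

    dc.EndDoc();                        // the device has everything
}
```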
Hope this helps.
Larry J. Siddens
|
|
|
|
|
Hi Larry,
Sorry if I didn't make my question clear. What happens is that the pages start printing only when the job reaches CDC::EndDoc(). What I want is for each page to start printing as soon as it reaches CDC::EndPage() (NOT CDC::EndDoc()). That way, with a 10000-page job, I do NOT have to wait until it has completely spooled before printing starts.
The only way I know to do this is to use StartPagePrinter() and EndPagePrinter(). Of course, I need to call OpenPrinter() to get the printer handle, then StartDocPrinter() before the StartPagePrinter()/EndPagePrinter() calls, and finally EndDocPrinter() to indicate the end of the print job.
This way, as soon as the job reaches EndPagePrinter(), that page starts printing. That means that with a 10000-page job, printing can start as soon as the first page is completely spooled. (Please note that this depends on the setting of your printer; set it to "Start printing immediately".)
Since my application is using a CDC to draw rectangle, etc... I do not want to change my application from using CDC.
Thanks for any help again in advanced...
Cheers
|
|
|
|
|
If I understand you correctly, you have a handle to a printer and want to use a CDC.
Look at CDC::Attach(). This takes the handle (HDC) and attaches it to the CDC, so you can use it just like any other CDC. When you are done, you can call CDC::Detach(), unless you want ~CDC() to close it for you.
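A sketch of that idea, assuming you create the printer DC yourself (the printer name is a placeholder and error handling is minimal):

```cpp
#include <afxwin.h>

void PrintWithAttachedDC(LPCTSTR printerName)
{
    // Create the printer device context ourselves.
    HDC hdc = ::CreateDC(_T("WINSPOOL"), printerName, NULL, NULL);
    if (hdc == NULL)
        return;

    CDC dc;
    dc.Attach(hdc);      // wrap the raw HDC in a CDC

    // ... existing CDC-based drawing code (TextOut, Rectangle, ...) works
    //     unchanged here ...

    dc.Detach();         // we still own the HDC, so release it ourselves
    ::DeleteDC(hdc);
}
```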
Hope this helps.
Larry J. Siddens
|
|
|
|
|
Hi Larry,
(No offence), but you have totally misunderstood my question. My CDC printing works perfectly. The only problem I have is when I need to print, say, a 100000-page report (a hundred thousand pages).
Currently, if I want to print a 100000-page report, I have to wait until all 100000 pages are COMPLETELY spooled before the first page starts to print.
What I really need is that as soon as the first page is spooled, it prints straight away, so I do NOT have to wait until the 100000th page is COMPLETELY spooled before printing the first page.
Now, this behaviour is supported by StartPagePrinter() and EndPagePrinter(). Unfortunately, I can't use these two functions, because I need a CDC to draw text, rectangles, etc.
I know that I could still draw the text, rectangles, etc. by passing the HDC around, but that would involve too many modifications to the application. Therefore, I must stick with CDC.
Do you know how to do this? Thanks again for any help in advance.
|
|
|
|
|
Friends,
AfxGetAppName() is an MFC function.
What is the API equivalent of this function? There is an API, GetModuleFileName(), but unlike AfxGetAppName() it returns the entire path.
Imtiaz
|
|
|
|