|
For "search" I meant the file enumeration with _findnext().
|
|
|
|
|
Maybe read the whole file at once as opposed to many fgets() calls, then do your string search in RAM?
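Something like this, for illustration (a minimal sketch; the helper name and the hard-coded "ABCD" are mine, not from your code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read an entire file into one NUL-terminated heap buffer, or return NULL. */
static char *read_whole_file(const char *path)
{
    FILE *fp = fopen(path, "rb");
    if (!fp) return NULL;
    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);
    fseek(fp, 0, SEEK_SET);
    char *buf = (char *)malloc((size_t)size + 1);
    if (buf && fread(buf, 1, (size_t)size, fp) == (size_t)size)
        buf[size] = '\0';                 /* one read instead of many fgets() calls */
    else { free(buf); buf = NULL; }
    fclose(fp);
    return buf;
}

/* usage: char *data = read_whole_file(fd.name);
          if (data && strstr(data, "ABCD")) { ...found... }
          free(data); */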
"the debugger doesn't tell me anything because this code compiles just fine" - random QA comment
"Facebook is where you tell lies to your friends. Twitter is where you tell the truth to strangers." - chriselst
"I don't drink any more... then again, I don't drink any less." - Mike Mullikins uncle
|
|
|
|
|
I tried that method, but nothing changes.
Any kind of file opening (including MapViewOfFile) slows the process down terribly.
|
|
|
|
|
Hi,
Some comments:
1.) Reading files may be slightly faster if you read using a multiple of the drive sector size or read it all at once.
2.) If you've ever wondered why file-backed cache implementations save files into a hierarchical folder structure, it's because enumerating 10,000 files in a single folder may cause a performance hit. If you plan on storing many thousands of files, you may want to design a folder structure: maybe something simple such as alphabetical A-C, D-F, ..., or something based on timestamp. This is not much of an issue on a modern SSD, but old spindle drives suffer noticeably.
3.) The code you have shown above is reading the file contents into a local buffer. You would get a huge performance boost by using the MapViewOfFile function to map the file directly into your process space.
Have a look at the Creating a View Within a File sample. That sample demonstrates how to take a large file and map only 1 KB at a time into your process. Don't do that.
You stated that your files are around 40 KB, so I'd recommend mapping the entire file into your process address space. I'd also recommend using two file mappings: while FileA is being processed, you can have the operating system map FileB into your process. This would mitigate any latency caused by the I/O subsystem.
The majority of your latency is in opening the files, so I highly recommend the second file mapping.
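A minimal sketch of the single-mapping case (error handling trimmed, file name illustrative; the FileA/FileB variant would simply keep two sets of these handles alive at once):

#include <windows.h>

/* Map a whole (small) file read-only and search it in place. */
HANDLE hFile = CreateFile(TEXT("example.states"), GENERIC_READ, FILE_SHARE_READ,
                          NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
if (hFile != INVALID_HANDLE_VALUE)
{
    HANDLE hMap = CreateFileMapping(hFile, NULL, PAGE_READONLY, 0, 0, NULL);
    if (hMap)
    {
        const char *view = (const char *)MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0);
        if (view)
        {
            DWORD size = GetFileSize(hFile, NULL);
            /* scan view[0 .. size) here; note the view is NOT NUL-terminated */
            UnmapViewOfFile(view);
        }
        CloseHandle(hMap);
    }
    CloseHandle(hFile);
}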
Best Wishes,
-David Delaune
|
|
|
|
|
I tried with one file mapping only; it works, but the speed is exactly the same.
Probably the only way is to merge all the files into one big file. That way I can also optimize the file format for my needs.
Thank you all.
|
|
|
|
|
The more obvious answer is to get whatever is saving the files to write the ones which contain your string ABCD under a special name. Then you don't have to search inside the files at all to find the ones you want. Another obvious choice is to keep the files on a ramdisk, as there isn't much data.
The whole process seems a bit backward to me: you are working on the reading code, not the writing code.
In vino veritas
|
|
|
|
|
That's a good approach, but as I understand it the main task is not to find the files that contain the string, but to find all lines within these files containing it. A specific kind of filename wouldn't be enough.
Your suggestion to include the writing side in the solution is a good idea. However, if we do that, we might as well write all the data to a database. Retrieving the correct lines would then only require a simple SQL query.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
|
|
|
|
|
You only need one occurrence to flag the file as special; who cares how many times the special sequence is written after that. The point is to eliminate the mass of files that aren't of any interest by using the name alone.
I am not changing anything other than the name of the file... hardly complex or rocket science, and much easier and much faster than a database connection.
In vino veritas
|
|
|
|
|
I did not see the OP mention how many of the files actually contain that string. If a significant fraction of the files are affected, your solution would not help a lot.
leon de boer wrote: much faster than a database connection
.. to implement, sure. But certainly not to execute.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
|
|
|
|
|
Stefan_Lang wrote: as I understand it the main task is not to find files that contain the string, but find all lines within these files containing it. Each file usually contains 190 to 220 lines, and a file may or may not contain the wanted line (but almost all the files do). If the wanted line is in the file, it appears only once.
|
|
|
|
|
leon de boer wrote: The more obvious answer is get whatever is saving the files to put the ones which contain your string ABCD out under a special name string. Then you don't have to search inside the file at all to find the files you want. What happens if I need to find "BCDE" or "S1 " or "01FA" or ...?
leon de boer wrote: Another obvious choice is have the files on a ramdisk as there isn't much data. If I put the folder on the SSD, the process is much faster: 60.9 s on the HD versus 2.9 s on the SSD. The SSD is an "unusual" location for that folder, because all the other files are on the HD, but it's the easiest solution.
Thank you
|
|
|
|
|
Quote: What happens if I need to find "BCDE" or "S1 " or "01FA" or ...?
Label them differently with a special name, obviously; all you are doing is coming up with a file-naming convention.
Hell, use the file extension you already have (*.states) and treat its trailing digit as a bitmask of which special strings are in the file:
*.states = file with no special tags
*.states1 = file with special tag 1 in it
*.states2 = file with special tag 2 in it
*.states3 = file with special tag 1 & 2 in it
*.states4 = file with special tag 3 in it
*.states5 = file with special tag 1 & 3 in it
*.states6 = file with special tag 2 & 3 in it
*.states7 = file with special tag 1, 2 & 3 in it
You can know what tags are in the file without ever opening it all you need to know is the filename.
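For instance, decoding that convention takes a couple of lines (a sketch; the scheme is the one above, bit 0 = tag 1, bit 1 = tag 2, bit 2 = tag 3):

#include <stdlib.h>
#include <string.h>

/* Return the tag bitmask encoded in "*.statesN", or -1 if it's not our file. */
int tags_from_name(const char *filename)
{
    const char *ext = strrchr(filename, '.');
    if (!ext || strncmp(ext, ".states", 7) != 0) return -1;
    return ext[7] ? atoi(ext + 7) : 0;    /* plain ".states" means no tags */
}

/* usage: if (tags_from_name(fd.name) & 1) { ...file contains tag 1... } */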
This is also obviously a Windows program, so why aren't you using the Windows API for the file open and read?
char buf[65536];                                    // large enough for a ~40 KB file
HANDLE Handle = CreateFile(fd.name, GENERIC_READ, FILE_SHARE_READ,
                           0, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
if (Handle != INVALID_HANDLE_VALUE)
{
    DWORD Actual = 0;
    if (ReadFile(Handle, &buf[0], sizeof(buf) - 1, &Actual, 0) && Actual > 0)
    {
        buf[Actual] = 0;                            // terminate so string functions work
        if (0 == _strnicmp(buf, "ABCD", 4))
            results.push_back(buf);                 // results: a std::vector<std::string>
    }
    CloseHandle(Handle);
}
In vino veritas
|
|
|
|
|
Since almost all the files contain the wanted string, I'll need to open almost all the files, so the speed up would be negligible.
I don't use the Win32 API because, AFAIK, there is no fgets() equivalent, and there is no speed-up if I read the whole file at once.
|
|
|
|
|
I gave you the fgets equivalent above (it's only a couple of lines of code). I am not convinced it isn't faster, because otherwise you are opening and reading through the standard library's buffered file handling.
Anyhow I will leave you to it
In vino veritas
|
|
|
|
|
I suspect that the reason for your program to run much faster on the second run is that modern drives cache a certain amount of data, and therefore don't need to rely on slow hardware for repeatedly reading the same files.
In your code, you read from each file line by line. Internally, these reads trigger a request to read some block (or multiple blocks) of data. While each of these blocks is probably cached to be used for consecutive reads, any read request requiring a new block will cause another slow access to the hard disk.
You could speed this up by reading the whole file in a single operation: query its size, allocate a sufficiently large buffer, open as binary, and read the whole file into that buffer. Then your internal while loop can request each line from that buffer, which should be considerably faster.
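A rough sketch of that inner loop, assuming the whole file is already in a NUL-terminated buffer called data (the name and the search string are just placeholders):

#include <string.h>

/* Walk the in-memory buffer line by line instead of calling fgets(). */
char *line = data;
while (line && *line)
{
    char *nl = strchr(line, '\n');
    if (nl) *nl = '\0';                  /* temporarily terminate this line */
    if (strstr(line, "ABCD"))
    {
        /* record this line */
    }
    line = nl ? nl + 1 : NULL;           /* advance to the next line */
}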
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
|
|
|
|
|
The main culprit is fgets. Once you call it, the fopen family of calls immediately loads, I believe, 32 KB of data; on top of that, fgets itself is relatively slow. For speed, you may be better off using fread, reading 4 KB (or the page size) at a time and parsing the block yourself by simply looking for ABCD. This could be sped up with a Boyer-Moore search, though since the string is short, simply scanning first for 'A' and then checking for the rest may be faster. That said, I believe some newer implementations of the standard library now include a Boyer-Moore searcher.
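A sketch of that block-reading approach, in case it helps (buffer size and needle are illustrative, and the carried-over tail keeps a match that straddles two blocks from being missed; C++17 also offers std::boyer_moore_searcher for use with std::search):

#include <stdio.h>
#include <string.h>

/* Return 1 if `needle` occurs anywhere in the file at `path`, else 0. */
static int file_contains(const char *path, const char *needle)
{
    char buf[4096 + 64];                  /* 4 KB blocks plus room for the overlap */
    const size_t nlen = strlen(needle);
    size_t carry = 0;                     /* bytes kept from the previous block */
    FILE *fp = fopen(path, "rb");
    if (!fp || nlen == 0 || nlen > 64) { if (fp) fclose(fp); return 0; }

    size_t got;
    while ((got = fread(buf + carry, 1, 4096, fp)) > 0)
    {
        size_t have = carry + got;
        buf[have] = '\0';                 /* fine for text files without NUL bytes */
        if (strstr(buf, needle)) { fclose(fp); return 1; }
        carry = nlen - 1;                 /* keep the tail in case a match is split */
        if (carry > have) carry = have;
        memmove(buf, buf + have - carry, carry);
    }
    fclose(fp);
    return 0;
}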
Do also note that caching plays a big part here. Just recursing folders will take significantly longer the first pass than the second. This can be deceptive, however, since in actual operation those caches may be flushed between program runs.
|
|
|
|
|
The definitive solution: Ctrl-X the folder from HDD to SSD, restart the PC (probably not needed), then Ctrl-X it back from SSD to HDD.
Now the process takes 8.2 s instead of 61 s, which seems reasonable to me.
|
|
|
|
|
How do I write an Excel .xlsx file using C++?
|
|
|
|
|
|
|
Do typedefs, and heavy use of typedefs, make code complicated, especially if the code has to be used by a lot of people?
I can understand cases where a typedef is useful for the person who created it, but if that person revisits the code 5 years later, or some other individual reviews it, they have to constantly look up the typedefs. With a couple of typedefs that might be OK, but if the code is millions of lines and there are 1000 typedefs defined across various projects, doesn't that defeat the purpose of typedef?
I would rather not use typedefs at all because of this, and just deal with the pain of typing out the complete syntax.
Any thoughts or revelations on this?
|
|
|
|
|
As with most "shortcuts", you should only use them where they add value or improve readability.
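A classic case where one earns its keep is a function-pointer type (illustrative names):

#include <stdio.h>

/* Without an alias, the declaration is hard to read at a glance: */
int (*handler_a)(int, double);

/* With one, the intent is obvious and the type is reusable: */
typedef int (*Handler)(int code, double value);
Handler handler_b;

int log_event(int code, double value) { return printf("%d %f\n", code, value); }

int main(void)
{
    handler_a = log_event;
    handler_b = log_event;
    return handler_b(1, 2.5) > 0 ? 0 : 1;
}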
|
|
|
|
|
nitrous_007 wrote: Does typedefs and a lot of use of typedefs make code complicated especially if code has to be used by a lot of people? How do they make code more complicated?
nitrous_007 wrote: ...but if the person revisits the code 5 years later or some other individual is reviewing the code, they have to constantly look up the typedefs. Not if they were named/implemented correctly.
nitrous_007 wrote: I would rather not use typedef's at all because of this and just deal with the pain of typing or using complete syntax. And what about that one place you forgot?
"One man's wage rise is another man's price increase." - Harold Wilson
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
"You can easily judge the character of a man by how he treats those who can do nothing for him." - James D. Miles
|
|
|
|
|
About the only place I use them is for function prototypes for lambdas.
With intellisense and auto, my typing doesn't increase much and seeing the full type makes the code more clear for me.
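For what it's worth, this is roughly what that looks like (illustrative names; the alias names the callable type once instead of spelling it out at every call site):

#include <functional>
#include <iostream>
#include <string>

using LineHandler = std::function<void(const std::string&)>;

void for_each_match(const LineHandler& handle)
{
    handle("ABCD example line");          // would normally come from the search
}

int main()
{
    for_each_match([](const std::string& line) { std::cout << line << '\n'; });
}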
|
|
|
|
|
Without typedefs (or using) code could be a mess.
Anyway, nothing prevents messy code from taking advantage of typedef to mess things up further.
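For example (an illustrative nested type):

#include <map>
#include <string>
#include <utility>
#include <vector>

// The raw type is noisy and error-prone to repeat:
std::map<std::string, std::vector<std::pair<int, int>>> index_a;

// One alias, declared once, documents the intent:
using MatchIndex = std::map<std::string, std::vector<std::pair<int, int>>>;
MatchIndex index_b;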
|
|
|
|