|
Are you saying that you want the recursive solution rewritten into a non-recursive one?
Then you have two main alternatives: Either find a completely different algorithm for solving the same task.
Or keep the same algorithm, but do it without recursion, by maintaining your own recursion stack, e.g. in an array. The only reason for doing it that way would be because your programming language doesn't allow recursion - but I haven't seen that limitation since the days of Fortran.
I'll grant you a secondary reason for being a little careful with recursion: if your estimate of the maximum recursion depth is way off, you may experience a stack overflow. In principle, you have the same situation with your array-as-recursion-stack, but since you explicitly move the stack pointer (i.e. calculate array indexes), it will be far more visible to you, so you can avoid a fatal crash. You may set up an exception handler for a "real" stack overflow as well, but usually you have a hard time getting back on your feet with no loss of data or code flow.
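To illustrate the "own recursion stack" idea, here is a minimal sketch in C++. The tree type and the traversal are hypothetical stand-ins, not your actual code - the point is only how the call stack becomes an ordinary container.

#include <cstdio>
#include <stack>

// Hypothetical example type - not from the original post.
struct Node {
    int value;
    Node* left = nullptr;
    Node* right = nullptr;
};

// Recursive version: the depth you can reach is limited by the call stack.
void VisitRecursive(const Node* n)
{
    if (n == nullptr) return;
    std::printf("%d\n", n->value);
    VisitRecursive(n->left);
    VisitRecursive(n->right);
}

// Same traversal with an explicit stack: its size is visible and under your
// control, so you can check it and fail gracefully instead of overflowing.
void VisitIterative(const Node* root)
{
    std::stack<const Node*> pending;
    if (root != nullptr) pending.push(root);
    while (!pending.empty()) {
        const Node* n = pending.top();
        pending.pop();
        std::printf("%d\n", n->value);
        // Push the right child first so the left subtree is processed
        // first, matching the recursive order.
        if (n->right != nullptr) pending.push(n->right);
        if (n->left != nullptr) pending.push(n->left);
    }
}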
|
|
|
|
|
I've read that any recursive program can be rewritten in a non-recursive manner unless there is dynamic allocation inside the recursive method, and I want to apply that to this particular code. I tried doing it by myself, but again, not having a good programming background made me stop, and I couldn't do it on my own.
|
|
|
|
|
iNoor72 wrote: not having a good programming background made me stop
So now would be a good time to learn programming properly, rather than trying to rewrite something that you do not understand.
|
|
|
|
|
Trying to evade the problem - that you don't understand recursion - by rewriting the code to be non-recursive is never going to work.
In my university days, a fellow student realized that he did not fully master recursion - termination in particular. So he defined a small programming problem for himself. After solving the task, he never again had any problems with how to terminate a recursion. I think he made an excellent "programming etude", and I have passed it on to a lot of people. It goes like this:
When you enter the recursive function, you write a line with the number of spaces given by the recursion depth, and then an asterisk. When you leave the recursive function, you do the same. The top level call gives parameters for (a) the maximum recursion depth, which you dive right into, (b) an intermediate recursion depth that you return to, before again recursing to the maximum depth, and (c) the number of times to recurse to the maximum depth and back to the intermediate level, before finally returning to the top level call (i.e. the number of "peaks").
For a call with arguments (5, 3, 3), the output should look something like this:
*
 *
  *
   *
    *
     *
     *
    *
    *
     *
     *
    *
    *
     *
     *
    *
   *
  *
 *
*
The problem statement seems very simple (and it is, for a seasoned recursionist). For an inexperienced programmer, you can usually hear a lot of cursing and re-cursing during the testing.
|
|
|
|
|
I don't think that message was meant for me.
|
|
|
|
|
Pass it up the recursion stack!
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
No, it was meant as a side-by-side answer to yours, supporting what you wrote.
So it was primarily meant for the original poster. But it is a good exercise in recursion. Lots of programmers (even experienced ones) touch recursion only occasionally, and then often in a quite simplistic way. So I propose it to any programmer.
If you think it so trivial that you can do it in ten minutes, then spend those ten minutes proving to yourself that you master recursion. I have met several experienced programmers who thought it would be simple and straightforward, but found it to be "somewhat more tricky" than they at first thought.
|
|
|
|
|
Very interesting, but the person that the message was meant for is unlikely ever to see it.
|
|
|
|
|
If only five other people read it, say "Hey, I can do that in ten minutes", go ahead and try it, and are still fiddling around with termination conditions after half an hour, I think it has been worth it.
You are probably right about the original poster, and nothing seems to indicate that he will be ready to take on the task for some time.
|
|
|
|
|
Member 7989122 wrote: say "Hey, that I can do in ten minutes"
Making such statements is guaranteed to cause failure. I have done plenty of basic recursion functions but never anything very complicated. So I understand the concept, but the practical application always requires considerable thought.
|
|
|
|
|
Member 7989122 wrote: When you enter the recursive function, you write a line with the number of spaces given by the recursion depth, and then an asterisk. When you leave the recursive function, you do the same.
If the first value is 5 then the first line should be 5 spaces and then the asterisk, then 4 spaces asterisk ... (ignoring the second two parameters).
     *
    *
   *
  *
 *
*
 *
  *
   *
    *
     *
|
|
|
|
|
I'd like to see the code.
|
|
|
|
|
I made my solution 35+ years ago, and the source might still exist on one of the approx. 100 eight-inch floppy disks, with a proprietary formatting and a proprietary file system, that I'm still keeping in a box in my basement. The manufacturer went bankrupt in 1992. I think we were programming in Pascal at that time, so I would have to obtain a Pascal compiler to verify the code.
The cost and effort of recovering the source code (if it is available on one of the floppies, and the floppy is still readable) would be much larger than the cost of developing it anew. I could do it; you could do it yourself. The solution is really tiny, but the devil is in the details. Once the details are right, you'll say: well, of course that's how it should be.
I might give it a try (maybe 35+ years of coding experience will make it appear simpler), but I haven't got the time right now. Go ahead yourself!
|
|
|
|
|
Member 7989122 wrote: I might give it a try (maybe 35+ years of coding experience will make it appear simpler), but I haven't got the time right now.
Same here, although coming up for 55 years. One day soon ...
|
|
|
|
|
Hi All,
I would like to ask if there is any open-source (encrypted) telnet client library available for Windows.
|
|
|
|
|
|
I have an MFC application, where I am creating a new feature.
The new feature is to write a structure of around 900 KB to a file on disk every second. Similarly, another of my applications will read the same file every second, after some delay.
My questions are:
1. How can I save the structure (900 KB) to the file in a reduced-size format, so that the file doesn't become too big?
2. Which is more efficient for reading and writing: saving each second's structure as an individual file (so there will be 60 files for 60 seconds), or saving the structures sequentially in the same file (only one file)?
Please note that I am going to write the structure to the file for a maximum period of 4 days.
|
|
|
|
|
Quote: 1. How can I save the structure (900 KB) to the file in a reduced-size format, so that the file doesn't become too big?
It depends on the data. A general compression algorithm is probably not recommended here, because you need fast access to the struct. So it is up to you (based on your knowledge of the data) to implement, if feasible, an ad hoc compression/decompression algorithm.
Quote: 2. Which is more efficient for reading and writing: saving each second's structure as an individual file (there will be 60 files for 60 seconds), or saving the structures sequentially in the same file (only one file)?
At first glance the individual-file option looks more promising. But, again, knowledge of the data could change such an estimate.
|
|
|
|
|
Unless you have a lot of software experience, consider posting your specification (i.e. the actual problem that you're trying to solve). It might be that there are better approaches than having to write, and later read, a 900KB file every second.
|
|
|
|
|
manoharbalu wrote: Please note that I am going to write the structure to the file for 4 days as a maximum period.
4 days is 345,600 seconds. Are you saying that you are going to write 311 gigabytes of data to disk in 4 days? (That is, if you don't find a good way to compress it.)
I certainly would not want to manage a single 311 GByte file. Nor would I want to manage 345,600 files ...
Writing a megabyte per second to a modern disk is nothing to worry about. A modern processor could easily handle compressing a megabyte a second, too. A disk to hold a third of a terabyte is no problem. You don't need any super-technology to get this to work. I would be more worried about either handling a 300 GByte file, or a third of a million tiny files (yes, by modern standards, files < 1 MByte are "tiny").
Compressing to save space is just a technical detail - at the application level, you relate to the data at its uncompressed size. That comes naturally if the data are, say, a video stream, but then you would never consider spreading it over a third of a million files. So I am really curious to hear about the nature of the data!
I am not sharing CPallini's scepticism towards general compression algorithms. They work by recognizing recurring patterns and representing them by a shorter codeword. In totally unprocessed data there is often a large share of repeated patterns, especially in logging data - and in all sorts of readable text. Either you'll come quite some way using a general method (such as pkzip), or it takes lots of detailed knowledge of the information structures to do much better.
Obviously, if your data is a known media type, such as video, photo or sound, then the methods have already been developed and you can get hold of a library to do it - I am thinking of when you have to handle your own data types for which no dedicated compression method is yet known. Try out general methods first. Only if the results are completely unsatisfactory, the files still way too large, should you consider making something of your own. Maybe you can reduce the data volume before you send it to compression, by factoring out information elements, omitting data that can be reconstructed from other data, etc.
Also consider whether you need lossless compression. If these 900 kB chunks are individual log entries, will they ever later be addressed as individual entries? Maybe you can reduce the data to statistics - averages, sums, whatever. (Like when you swipe your card at the grocery store, and the 49 individual entries on your checkout slip are reduced to a single total sum.)
Giving specific advice would be a lot easier with some details about the nature of the data you want to store.
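Just to show what "a general method" might look like in code, here is a hedged sketch using zlib (one widely available general-purpose compressor). The Record struct and its size are placeholders standing in for your actual data, and how well this compresses depends entirely on how repetitive that data is.

#include <cstdio>
#include <vector>
#include <zlib.h>   // link with zlib; any general-purpose compressor would do

// Hypothetical record - stands in for the poster's ~900 KB structure.
struct Record {
    char payload[900 * 1024];
};

// Compress one record into a byte buffer; returns the compressed bytes,
// or an empty vector on failure.
std::vector<unsigned char> CompressRecord(const Record& rec)
{
    uLongf destLen = compressBound(sizeof(rec));
    std::vector<unsigned char> out(destLen);
    const int rc = compress2(out.data(), &destLen,
                             reinterpret_cast<const Bytef*>(&rec), sizeof(rec),
                             Z_BEST_SPEED);     // favour speed over ratio
    if (rc != Z_OK)
        out.clear();                            // caller decides what to do
    else
        out.resize(destLen);
    return out;
}

Measuring the compressed size of a few real records would quickly tell you whether a general method is good enough or whether an ad hoc scheme is worth the effort.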
|
|
|
|
|
Putting aside the data storage constraints mentioned by others, you need to experiment.
1) Is the structure bitwise copyable?
2) If so, just write it to disk using CFile and measure the impact.
2a) Use LZ4 to compress the data first (My own guess is that this would make a difference with a hard drive, but may actually take more time with an SSD.)
3) If not, you will need to serialize it; I'd still keep all the data binary.
3a) You could serialize it to a block of memory and then write that in one go (very likely the fastest way)
3b) You could serialize to a file using CStdioFile (depending on the data, this could be very slow. Even slower if you used C++ iostreams.)
3c) Do a combination; Serialize the data in 32k chunks and write them, perhaps asynchronously. (That said, when doing anything comparable, I prefer just having a second thread write synchronously.)
If the data needs to be future proofed, consider serializing using RapidJSON (which is a very fast C++ JSON library), compressing the result with LZ4 and then writing that. However, this could easily take longer than a second, depending on what the data is.
Edit: If the data is fairly regular, you might be able to save the full thing every ten seconds and differences in between. This can be tricky; I once worked on an app which did this and I found that the differencing alone exceeded the time it took to transmit the data over TCP.
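To make option 2/2a concrete, here is a rough sketch of writing a bitwise-copyable struct with MFC's CFile after compressing it with LZ4. The Snapshot struct, the WriteSnapshot helper and the file name are placeholders of mine, not part of the original question; measuring the timing on your own hardware is the real point.

#include <afx.h>        // MFC CFile
#include <vector>
#include <lz4.h>

// Placeholder for the poster's ~900 KB bitwise-copyable structure.
struct Snapshot {
    char data[900 * 1024];
};

// Write one snapshot, LZ4-compressed, preceded by its compressed size so
// the reading application knows how much to read back and decompress.
bool WriteSnapshot(CFile& file, const Snapshot& snap)
{
    const int srcSize = static_cast<int>(sizeof(snap));
    const int maxSize = LZ4_compressBound(srcSize);
    std::vector<char> buf(maxSize);
    const int written = LZ4_compress_default(
        reinterpret_cast<const char*>(&snap), buf.data(), srcSize, maxSize);
    if (written <= 0)
        return false;                       // compression failed

    DWORD size = static_cast<DWORD>(written);
    file.Write(&size, sizeof(size));        // small header: compressed size
    file.Write(buf.data(), size);           // then the compressed block
    return true;
}

// Usage sketch: open once (placeholder path), then call once per second.
// CFile f;
// if (f.Open(_T("snapshots.bin"), CFile::modeCreate | CFile::modeWrite)) {
//     WriteSnapshot(f, snap);
// }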
|
|
|
|
|
Joe Woodbury wrote: If the data needs to be future proofed, consider serializing using RapidJSON (which is a very fast C++ JSON library), compressing the result with LZ4 and then writing that
Guaranteed to be a viable format forever, or five years, whichever comes first.
During my career, I have seen shockingly many "future proof" formats come and go. I have come to adopt the attitude I met when working on a large project with digital libraries: do not go for one standardized format, but let each library choose its own. When you, thirty years from now, need to recover some document, hopefully one of the different formats used is still recognized.
Don't forget that when you pick up that file thirty years from now, that ST506 disk needs a machine with the proper interface. And you need software to interpret the disk layout and directory structures of the file system. You may need to understand all the metadata associated with the data stream. The record structure in the file. The encoding of numerics and characters. The data schema. The semantics of the data.
Sure, JSON is one of the fourteen everlasting syntax standards for how to represent a hierarchical structure. Ten years ago, it wasn't - it didn't exist. Some of the formats used ten years ago are dead now; maybe JSON will be dead in ten years.
Bottom line: never choose a data representation because it will last forever. Or more than five years.
If you have a need for that, make a plan for regularly moving your data to a new disk (or other physical storage) format. Move it to a new machine with the proper interface for the physical unit. Move it to a new file system. A new character (or other low-level) encoding. A new schema. A new concrete grammar. Be prepared for some information loss during each move. While having format n as the primary one, always preserve format n-1 (along with all the hardware and software required to access it) until you switch to format n+1 - i.e. always have the data available in two operational formats. Preferably generate format n+1 from format n-1, to avoid the losses from n-1 to n and further from n to n+1.
But first of all: Don't trust DCT100 or Travan to be a format to last forever. Nor eSATA. Nor SGML. Nor EBCDIC. HFS. BER. YAML. ... For five years, it may be safe. Anything significantly beyond that is gambling.
|
|
|
|
|
JSON will be around for a while. At the very least, it's very readable and a step up from plain text or CSV. (And to be pedantic, JSON has been around for almost 20 years. XML has been around 24 years and its predecessor, SGML, for 34 years.)
Your rant is borderline senseless and unproductive. I use the slang "future proof" to mean it will last as long as the program, and you know that; you are arguing for argument's sake and preening while doing so. Moreover, your statement "Do not go for one standardized format, but let each library choose its own." is meaningless in this context - short-lived files - and in the broader sense, since it traps you back where you started, afraid to do anything lest it become obsolete.
You are also mixing hardware protocols with file formats. Even obscure formats, such as BSON, would be readable in a lossless way fifty years from now, as would a BMP. A plain text file of JSON or YAML is even more readable.
JSON is one step above key/value pairs - how would you lose information? And, if moving from one disk to another, what does "proper interface for the physical unit" have to do with anything? It's a file. I have 30 year old text files; should I be panicking? Perhaps I should have kept them on floppy disks and kept a floppy disk reader, and format n-1 (whatever that is for a text file).
|
|
|
|
|
I am happy to see that you have rock solid confidence in today's high fashions. Keep it up! We need enthusiasts.
Nevertheless: When you work on digital library projects with the aim to preserve information (as contrasted to "data" and "files" - the semantic contents!) for at least one hundred years, maybe five hundred, then things come in a different light.
To illustrate problems, I used to show the audience an XML file where everything was tagged in Northern Sami. It made little sense to anyone (except that Sami guy who had helped me make the file). So why couldn't the entire world speak English? My next example was one where a 'p' tag identified a 'person', a 'part' or a 'paragraph', depending on context. It makes little difference whether those are XML or JSON tags if they make no sense whatsoever to the reader, or are highly ambiguous unless you have extensive context information.
Of course you can lose information! Say that you want to preserve a document where page formatting is essential (it could be for legal reasons, or other): for this digital library project, it didn't take me long to dig up seven different strategies, in use in various text processing systems, for how to handle space between paragraphs in connection with page breaks. If you can "do an XKCD 927" and replace the 14 existing standards with a 15th that replaces them all, then good luck! Many have tried, none have succeeded. When you select a format, whether JSON, MS-Word, HTML, PDF or any other, for your document storage, and convert documents in other formats to the chosen one, you will lose information.
I could show you a pile of different physical media with my old files, totally inaccessible today. If you want to preserve data, you cannot simply say "forget about physical formats, interfaces, media - as long as we use JSON it is safe". No, it isn't. The Travan tape reader uses its own interface card, for the ISA bus. I no longer have a PC with an ISA bus. I've got one SCSI disk with a 50-pin connector; it is not the Centronics 50-pin but a "D"-connector with three rows of pins. I once had a 50-pin D-to-Centronics cable, but even if I had saved it, I have no SCSI interface where it fits. I have got the full source code of the OS I was using, on an 8-inch hard disk cartridge, but this millennium I haven't seen a reader for it. I still keep a Win98 PC alive: if I plug the USB floppy disk reader into a modern (XP or later) PC, it won't read floppies without a proper format code in the boot sector. Win98 could.
Sometime in the 1990s I had a user coming with 8" floppy disks from an old project, asking if we could still handle them - I was the one who still had a reader for those. The user had no idea about formats at any level, but they needed all the information they could get from these floppies, whatever it was. I copied each floppy to a disk file for analysis; at first it looked like digitized white noise. It was suspected to be text, so I ran a count of byte values: the most common values were the EBCDIC values for 'e' and 't'. EBCDIC comes in many variants ("code pages"), and we had to try a few before getting all the characters right. That made it possible to identify the block size, and I could start flipping unordered blocks around to make longer strands of coherent text.
Then I might have found some JSON (if it had existed back then) to start identifying the information - the semantics. Fortunately, the information was not in Northern Sami. Or in French - in another project, we obtained another piece of structured text, a C source library for record management that had a high reputation. When we got the source code, all variable names, all comments and all documentation were in French, a language none of us mastered. We were unable to make the extensions to the library that we had planned.
Long-term information preservation and recovery goes far beyond JSON-style "firstName" : "Peter", "age" : 51, ... JSON is no more than some dots of foam on top of the ocean. Cute enough for a demo, and you may even employ it for something useful! But dig down in history to see how ASCII was introduced as the final solution for all computers all over the world to interchange arbitrary data, the solution to all interconnect problems. The universal format for all purposes. Yeah, right.
|
|
|
|
|
The original poster isn't preserving information; he is saving it temporarily. This is all about temporary data.
Right now you are just repeating the obvious. You also shifted your argument; you went from "you will lose information" to "you can lose information". Then you introduce both complex documents and hardware into your argument. Nobody disagrees about the transience of hardware standards and physical media, so why preach on it? What does it have to do with a text or CSV file I have from 1993?
I have thirty year old source that was on floppy, tape backup, Zip drives, Jazz drives, various hard drives, CD-ROM, DVD-ROM and is now on an SSD and on OneDrive. It still compiles. Yet you argued that I was all but guaranteed to lose information in each transfer; that no file format lasts. It has. (Granted, I'm the only one who cares about that specific project, so when I die or get tired of keeping it around, it will vanish, but that's also true about almost everything I own - that's life.)
I'm also a bit baffled by the claim that "ASCII was introduced as the final solution for all computers all over the world to interchange arbitrary data". No it wasn't. Your straw man collection is now complete! Or is it?
|
|
|
|
|