|
jkirkerx wrote: I just didn't want to sound like I wanted someone to fix my code because I posted it. No insult intended.
Oh, no no no no no. None taken - I'd feared having been an irritation.
Fantastic! Glad to hear you've got it working. 'Twas quite the source of satisfaction for me when I finally had a stable, running program rather than just crash after crash [after crash, after crash].
Indeed, I used URLDownloadToFile or something similar in a VBA CRUD app at my last job, though I found it to be a pain for all sorts of reasons - not least that it's a blocking call - not so unusual in MS Office apps, where the whole UI freezes.. :grin:
I also hate the idea of having to continually allocate/reallocate memory to receive the download, hence the idea of retrieving the content size from the HTTP headers. Delving into the headers, I discovered the "Range:" request header. It allows you to specify which _part_ of a particular resource you want. This is the thing that gives you download-resume functionality. It's also a way to (a) download large files over a number of parallel connections, (b) download only part of a file - e.g. the first part of an image file, to determine its dimensions - or (c), thinking about one of your questions, checksum smaller portions of the file, minimizing the number of bytes that need to be re-downloaded in the event of a checksum failure. Think along the lines of torrent programs, where the download is split into pieces, each of which may be checksummed and repeated in the event of failure. A 700 MB file might be split into 700 pieces of 1 MB each - much nicer to re-download 1 MB than the whole 700.
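The byte-range mechanism above uses the "Range:" request header (e.g. "Range: bytes=0-1048575"), with "Content-Range:" on the response side. Splitting a download into checksummable pieces then comes down to computing byte boundaries. A minimal sketch, assuming nothing beyond the standard library - the function name and piece size are illustrative, not from any particular API:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Build the "Range" request-header values needed to fetch a resource
// of totalSize bytes in fixed-size pieces. HTTP/1.1 byte ranges are
// inclusive on both ends, hence the -1 on the end offset.
std::vector<std::string> BuildRangeHeaders(unsigned long long totalSize,
                                           unsigned long long pieceSize)
{
    std::vector<std::string> headers;
    for (unsigned long long start = 0; start < totalSize; start += pieceSize)
    {
        unsigned long long end = start + pieceSize - 1;
        if (end >= totalSize) end = totalSize - 1; // clamp the last piece
        char buf[64];
        std::snprintf(buf, sizeof(buf), "Range: bytes=%llu-%llu", start, end);
        headers.push_back(buf);
    }
    return headers;
}
```

Each resulting header can be sent with its own request (serially for resume, or in parallel for segmented download), and each received piece can be checksummed independently.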
Sony PlayStation Network, I hate you for this oversight. I am ever so tired of using 190 MB of my monthly 4000 MB plan to download a system update, necessary to continue playing online - only to find that it fails 3, 4, 5 times in a row. Brilliant! 20% of my internet gone because nobody thought to checksum pieces rather than the whole payload.
Oh, and in answer to your question, nope - not checksumming the download at this stage. Thanks for causing me to start thinking about it.
Given that it would appear you'd only ever have one instance of the above class at any given time, it looks like the global flag in the .h file works okay. Of course, if it were a member of the class itself, you could have multiple instances of the class co-existing - though I can see no reason for that in this particular implementation. A more generic threaded file-download class, whether inherited from or not, would be the place for it. In my implementation I've tested with about 15 simultaneous threads all downloading RSS feeds, and also with a test-bed app that grabs 10 files simultaneously and updates the download progress of each in a listview as each progress callback is received - and I can cancel any of them at will. Next comes resume!
|
|
|
|
|
Looks like the checksumming is a lot of work at this point.
I'm testing my code now: a 742 MB download, starting and stopping (Cancel). Strangely, it picks up where it left off. I added a file-size check to see whether I have a partial or complete download before starting. If it's complete, just install the file - that's what the checksum was for, to make sure it's legit.
I put that global in the callback .h file, and cleaned up the .h and .cpp files for a class, so I suspect I can call it several times for concurrent use. I made the callback class global in the window code, so I can reference it from several different functions and then release it at the end. All I need are more threads. I think I'll keep the downloads one at a time to get all the programs needed.
So far 554 megs, no stalls or freezes, progress bar looks accurate, percentage is correct.
No, you were not a pain; I just had too much written already, and in theory my code should work whether it was a dialog box or an MDI child window - I didn't see the difference. I understood the last sample; I just had to cherry-pick what I needed.
What's your program for? Commercial use?
I make web applications, and I've been waiting 8 years for a program that can install web applications with ease. Kind of like buying a barbecue already built and having it delivered - with a bag of coals and matches. I tried very good instructions as a PDF, but the one-click Windows app seems to be the way to go. I have a 99% failure rate on installation right now, so the goal is a single app packaged in InstallShield, which installs the Windows program, sets up everything you need, and launches the browser window with the web code ready to go. Then add data-import capability, like Yahoo Store, and an interface that sets your company info and preferences.
The download is almost over; I need to stop and save a copy of the partial download so I don't have to keep downloading it.
Window Code:
SQL_Servers_BindCallback *callback;
HANDLE hThread;
long thread_id;
Function in Window Code:
callback = new SQL_Servers_BindCallback;
callback->m_MDIChild = hSQL_Servers_Download;
callback->m_Progress_Text = lbl_SQL_Server_Download_Status;
callback->m_Progress_Bar = pb_SQL_Server_Download_Status;
Class H:
#if !defined SQL_Servers_BindCallback_H
#define SQL_Servers_BindCallback_H
class SQL_Servers_BindCallback : public IBindStatusCallback
{
public:
SQL_Servers_BindCallback();
~SQL_Servers_BindCallback();
// Pointer to the download progress dialog.
LONG g_fAbortDownload;
HWND m_MDIChild;
HWND m_Progress_Text;
HWND m_Progress_Bar;
int iLoopCount;
// The time when the download should timeout.
BOOL m_bUseTimeout;
CTime m_timeToStop;
STDMETHOD(OnProgress)(
/* [in] */ ULONG ulProgress,
/* [in] */ ULONG ulProgressMax,
/* [in] */ ULONG ulStatusCode,
/* [in] */ LPCWSTR wszStatusText);
// (remaining IUnknown / IBindStatusCallback methods omitted here)
};
#endif // SQL_Servers_BindCallback_H
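For what it's worth, the only fiddly part of an OnProgress implementation is the percentage math: ulProgressMax can legitimately be 0 early in a download (the size isn't known yet), so the division needs a guard before you post PBM_SETPOS to the progress bar. A sketch - ProgressPercent is a made-up helper name, not part of any API:

```cpp
// Percentage math an OnProgress implementation might use before
// updating the progress bar. ulProgressMax can be 0 early in a
// download (content length not yet known), so guard the division.
int ProgressPercent(unsigned long ulProgress, unsigned long ulProgressMax)
{
    if (ulProgressMax == 0)
        return 0;                         // size unknown; leave the bar alone
    unsigned long long scaled = 100ULL * ulProgress; // avoid 32-bit overflow
    int pct = static_cast<int>(scaled / ulProgressMax);
    return pct > 100 ? 100 : pct;         // some servers over-report progress
}
```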
|
|
|
|
|
Hi everyone, I am trying to successfully run the Recipe Preview Sample from MSDN and cannot get it to work. I have downloaded the sample and the SDK, tried to change target platforms, changed the AppID, and followed all the instructions. I can get the handler to register successfully, but the .recipe file they provide as an example does not show in the preview pane.
Here is the link to the MSDN site. The code builds fine....
http://msdn.microsoft.com/en-us/library/windows/desktop/dd940374(v=vs.85).aspx
Any help would be appreciated; the source code they provide for download is very small, if someone gets a chance to try it.
Thanks for the help,
AG
|
|
|
|
|
Sorry for non VC / MFC post.
Here is the basis of my code to open the COM port. It works as advertised.
The problem is that the DTR/RTS lines are used to key up a ham radio transmitter, and they are enabled when the file is first created.
Even with this code there is still a considerable "flash" on these lines. I am just monitoring the lines with test LEDs - I did not measure the flash length.
I could use a hardware delay circuit to stop this, but I'd prefer to do it in software.
Any suggestions?
Thanks for your time,
Vaclav
DCB dcb = { 0 };
HANDLE hFile = ::CreateFile( strCOM,
GENERIC_READ | GENERIC_WRITE, // access of 0 ("query only") won't allow SetCommState
0,                            // comm ports must be opened exclusively
0,
OPEN_EXISTING,                // communication file
0,
0);
if (hFile == INVALID_HANDLE_VALUE)
{
TRACE("\nCreateFile failed, error %lu", ::GetLastError());
}
else
{
dcb.DCBlength = sizeof(DCB);
if (::GetCommState(hFile, &dcb)) // start from the port's current settings
{
dcb.fDtrControl = DTR_CONTROL_DISABLE;
dcb.fRtsControl = RTS_CONTROL_DISABLE;
if (!::SetCommState(hFile, &dcb))
{
TRACE("\nSetCommState failed, error %lu", ::GetLastError());
}
}
}
|
|
|
|
|
Vaclav_Sal wrote: Any suggestions ?
It would help if you formatted your code within <pre> tags so it is easier to read. You have also not really explained what your problem is, e.g. what is meant by the expression "flash" on these lines?
Unrequited desire is character building. OriginalGriff
I'm sitting here giving you a standing ovation - Len Goodman
|
|
|
|
|
"The problem is that the DTR/RTS lines are used to key up a ham radio transmitter and <b>are enabled when the file is first created</b>."
Disabling DTR/RTS - setting them to 0 in the DCB - clears these lines, but it takes some time to do so (my test LED "flashes"), thus generating an unwanted "key up" of the transmitter.
|
|
|
|
|
I would suggest this is a hardware issue rather than a C/C++ one. Maybe you should try an alternate forum.
|
|
|
|
|
I'd like to talk about pointers. I am trying to stay neutral and thoughtful; please don't turn this thread into a language or belief war.
Pointers have several problems:
1 - they can be freed, after which they point to garbage
2 - they can be freed twice
3 - sometimes the programmer forgets to free some memory
4 - probably more...
The most common solution these days seems to be garbage collection.
Garbage collectors fix issues 1, 2, and 3 above.
Unfortunately, garbage collectors come with some new problems:
- you can't really release memory precisely when you want to
- they have to freeze the program occasionally to trace all live blocks
- more...
There have been a lot of middle-way attempts.
I know there are many flavours of smart pointers around; I'm not claiming I know all of them, and that is why I am writing here.
Let's imagine we have a language between C++ and C#.
In this language:
- every allocation requires an owner
(the root owner would be the application itself)
- free is available and works synchronously
- when an object gets freed, any child object is freed
- when an object is freed, any other pointer to it gets nullified
This would solve our 3 original problems:
1 - they can be freed and then point to garbage:
they would point to null instead
2 - they can be freed twice:
they would be freed once, and the pointer being null, they would not be freed again
3 - sometimes the programmer forgets to free some memory:
when the owner is freed, the object is freed
I was thinking of implementing these pointers and trying them on a free project like Apache or something like that.
But first I thought I would submit the idea to you all; there is a good chance it already exists...
Or perhaps the constraint of having an owner is too constraining.
It is hard to say until you try it on a real project.
Having an owner per object could have other advantages in a multi-threading environment.
I haven't finished thinking this through, but first: has anyone come across anything like it?
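Something close to the owner/auto-nullify semantics described above can already be prototyped in standard C++: std::shared_ptr plays the single owner, and every other reference is a std::weak_ptr, which observes null once the owner releases. A minimal sketch under that assumption - the Owner/Ref/Free names are mine, not from any library:

```cpp
#include <cassert>
#include <memory>

struct Resource { int value; };

// One owning pointer per object, as the proposal requires.
using Owner = std::shared_ptr<Resource>;
// Every other reference observes the owner; lock() yields null
// after the owner frees, instead of a dangling pointer.
using Ref = std::weak_ptr<Resource>;

Owner MakeResource(int v) { return std::make_shared<Resource>(Resource{v}); }

// "free" in the proposal: synchronous, and it nullifies observers.
// Calling it twice is harmless - resetting an empty owner is a no-op,
// which addresses the double-free problem.
void Free(Owner& o) { o.reset(); }
```

This doesn't give you the "freeing an owner frees its children" cascade for free, but member shared_ptrs inside the owned object achieve exactly that, since their destructors run when the owner is destroyed.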
|
|
|
|
|
Smart pointers are OK for lazy/bad programmers, but not needed otherwise. Which means they are needed a lot.
==============================
Nothing to say.
|
|
|
|
|
LOL.
There are good reasons to use them on occasion.
I've used them when keeping multiple tables of objects in memory that can be indexed and accessed from different points. This has come up in multi-threaded programs where several threads are processing data at different entry points.
In that case there is no clear owner of the object, so there's no clear way to determine when an object is out of scope and can safely be deleted.
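The "no clear owner" case is what reference counting handles: each table holds a std::shared_ptr, and the object is deleted exactly when the last table drops it, with no single module responsible. A sketch of that idea - the Record/Insert names and two-index layout are illustrative:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

struct Record { std::string payload; };

// Two independent indexes over the same records; neither is "the"
// owner. A record dies only when it leaves the last table.
using ById   = std::map<int, std::shared_ptr<Record>>;
using ByName = std::map<std::string, std::shared_ptr<Record>>;

std::shared_ptr<Record> Insert(ById& ids, ByName& names,
                               int id, const std::string& name,
                               const std::string& payload)
{
    auto rec = std::make_shared<Record>(Record{payload});
    ids[id] = rec;    // both tables share ownership;
    names[name] = rec; // erasing from one does not delete the record
    return rec;
}
```

(In a genuinely multi-threaded setting the tables themselves still need locking; shared_ptr only makes the reference count itself thread-safe.)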
|
|
|
|
|
JackDingler wrote: In this case there is no clear owner of the object,
Unfortunately, the OP pretty much requires singular ownership. Just one of several reasons why he needs to rethink his premises...
|
|
|
|
|
It would definitely be problematic for him to retrofit his approach onto a current codebase.
As you point out, it requires that there be a clear definition of ownership. You can't just willy-nilly delete the resource - you won't know when other modules are done with it. You'll end up with attempts to dereference a null pointer, and the exceptions that come with that.
So with his approach, you have to make sure that every user of the pointer is done with it before deleting it. And if you're doing that, then the smart pointer is unnecessary and just adds overhead.
|
|
|
|
|
JackDingler wrote: So with his approach, you'll have to make sure that every user of the pointer is done with it, before deleting it.
Agreed, that would work. However, it would require adherence to certain coding standards - but if you have a strict coding standard, ...
JackDingler wrote: then the smart pointer is unnecessary
|
|
|
|
|
In a decent design, smart pointers are needed when there's a chance of an exception being thrown.
|
|
|
|
|
This is a good example of dogma. You don't need smart pointers in exception-handled code, since you can easily deallocate an allocated pointer in any of the exception-handling cases.
|
|
|
|
|
Not dogma, just efficiency. The scope in which a pointer is allocated may not be the best scope for handling an exception. Without a smart pointer, you have to cover everything with a try-catch, deallocate in all catch clauses (and you'd better not miss one, or you'll be leaking), and then rethrow.
So okay, a smart pointer isn't strictly needed; it just makes for sturdier, faster, and more concise code.
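The point about not needing a try-catch purely for cleanup can be shown concretely: a unique_ptr's destructor runs during stack unwinding, so a throw anywhere after the allocation still frees the object. A sketch, with a live-object counter added purely so the behaviour is checkable (Widget/Risky/UseWidget are made-up names):

```cpp
#include <memory>
#include <stdexcept>

// Live-object counter, only here to make the no-leak claim testable.
int g_live = 0;
struct Widget {
    Widget()  { ++g_live; }
    ~Widget() { --g_live; }
};

// Something that may throw between allocation and use.
void Risky(bool fail)
{
    if (fail) throw std::runtime_error("mid-function failure");
}

// With a raw Widget*, the throwing path would need its own delete
// (or an enclosing try/catch existing purely for cleanup).
// unique_ptr instead frees the Widget during stack unwinding.
bool UseWidget(bool fail)
{
    auto w = std::make_unique<Widget>();
    Risky(fail);            // if this throws, ~Widget still runs
    return w != nullptr;
}
```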
|
|
|
|
|
It is perfectly OK to use normal pointers, provided you deallocate at every return from the function, exceptional exits included.
|
|
|
|
|
Yes, agreed. But I wouldn't necessarily call that a decent design, which was the caveat in my original comment.
|
|
|
|
|
I disagree - you don't need smart pointers to deal with exceptions. You just initialize all pointers to 0 and, in the catch block, delete all that are non-null. Of course, with smart pointers this is easier, but they are not strictly needed.
|
|
|
|
|
That's why I said they were needed in a "decent design". Not using smart pointers when they are available makes for clunkier, less safe code.
|
|
|
|
|
Do create an article when you are done.
I don't think modern garbage collector algorithms and implementations suffer a lot of performance degradation, or it would have been well publicized on the interweb.
In C#, according to the documentation, there are ways to tell the garbage collection to be less intrusive by setting latency mode (http://msdn.microsoft.com/en-us/library/bb384202.aspx[^]).
(mostly from http://stackoverflow.com/questions/147130/why-doesnt-c-have-a-garbage-collector[^] )
In C++, since there is no central garbage-collection mechanism approved by the language committee, and since the language is and should be platform-agnostic, no consensus on the design and implementation was reached, so it was not added in.
But the C++ language (C++0x, or whatever the revision number is now) offers shared pointers (std::shared_ptr) that can be used as a low-level garbage-collection mechanism.
---
In my experience:
- adding garbage collection to an existing code base is nearly impossible and can be very time-consuming.
- creating new software from scratch with zero memory issues (leaks, overruns, ...) is now easily feasible with modern compilers and debuggers in the hands of good (*) programmers with a good coding process and software practices.
(*) I know crappy programmers create crappy code, but with the tools available in VS2010 and VS2011, there's no excuse.
Watched code never compiles.
|
|
|
|
|
Issues with garbage collection are well documented.
For many applications, the advantages of garbage collection outweigh the disadvantages.
Applications can start up faster and see better performance if garbage collection is deferred.
The type of application where deferred garbage collection should not be used is one that requires long periods of sustained throughput. In these apps there is no good time to do the garbage collection, so the requests tend to build towards criticality. At best, the garbage collection will spike processing time while it satisfies requests. At worst, the app can crash because resources can't be made available in a timely manner. In these apps it's better to coalesce as you free, so the overhead is spread thinly over time rather than batched when the system is under stress.
The last time I had a chance to demonstrate this, I was tasked with datamining a corporation's internal web servers to populate the servers needed to complete an emergency billing cycle. Corporate politics were a major part of the technical considerations; it would've been easier to hit the Oracle SQL Server directly.
The internal web servers were Java-based, and I was assured they had been tuned by the vendor to maintain high data throughput. I could get high throughput out of them for short periods, but if I ran sustained requests at about 50,000 records/sec, it didn't take many minutes before the response time would suddenly slow and the server would crash.
The solution was to throttle the datamining down to a few thousand records/sec and to implement cool-off intervals so the GC could catch up.
|
|
|
|
|
Windows (and most modern OS's) will automatically free your allocated-but-unfreed memory when your process exits.
The real problem with not freeing memory is that, if you allocate 20 MB but don't free it, that's 20 MB less that every program running, including your own, can allocate later. By freeing it as soon as you're done, you make it available to be allocated again.
|
|
|
|
|
That is true only if _heapmin() is called periodically. If it is not called, the freed memory is still on the heap of the process that allocated it; _heapmin() must be called to release memory back to the OS.
|
|
|
|
|
Sorry, misread your comment.
On most modern desktop operating systems, when a process exits, its dynamically-allocated memory is freed.
_heapmin() is useful if you want the OS to get the memory back while your app is running, but I was talking about when it exits.
|
|
|
|
|