I disagree - you don't need smart pointers to deal with exceptions. You can initialize all pointers to 0 and, in the catch block, delete those that are not. Of course, smart pointers make this easier, but they are not strictly required.
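A minimal sketch of that manual pattern (function and type names are mine, purely for illustration): every pointer starts as null, and the catch block deletes whatever was actually allocated before rethrowing.

```cpp
#include <stdexcept>
#include <cassert>

struct Widget { int value = 42; };

// Hypothetical function: cleanup without smart pointers.
// Deleting a null pointer is a no-op, so the catch block is safe
// no matter how far the try block got.
int useWidgets(bool fail)
{
    Widget* a = nullptr;
    Widget* b = nullptr;
    try
    {
        a = new Widget;
        if (fail)
            throw std::runtime_error("allocation path failed");
        b = new Widget;
        int result = a->value + b->value;
        delete b;
        delete a;
        return result;
    }
    catch (...)
    {
        delete b;   // no-op if b was never allocated
        delete a;
        throw;      // rethrow after cleanup
    }
}
```

It works, but every early return and every throw path has to repeat the cleanup, which is exactly the clunkiness smart pointers remove.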
---
That's why I said they were needed in a "decent design". Not using smart pointers when they are available makes for clunkier, less safe code.
---
Do create an article when you are done.
I don't think modern garbage collector algorithms and implementations suffer much performance degradation, or it would have been well publicized on the interweb.
In C#, according to the documentation, there are ways to tell the garbage collector to be less intrusive by setting the latency mode (http://msdn.microsoft.com/en-us/library/bb384202.aspx[^]).
(mostly from http://stackoverflow.com/questions/147130/why-doesnt-c-have-a-garbage-collector[^] )
In C++, since there is no central garbage collection mechanism approved by the language committee, and since the language is and should be platform agnostic, no consensus on how to do it (design and implementation) was reached, so it was never added.
But the C++ language (C++0x, or whatever the revision number is now) offers shared pointers (std::shared_ptr) that can be used as a low-level garbage collection mechanism.
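A minimal sketch of that low-level use of std::shared_ptr (names are mine): the object is reference-counted and destroyed deterministically when its last owner goes away, with no collector pass.

```cpp
#include <memory>
#include <cassert>

struct Node { int payload = 7; };

// std::shared_ptr reference-counts its target; the Node is freed
// the instant the last shared_ptr owning it is destroyed.
int demo()
{
    std::shared_ptr<Node> first = std::make_shared<Node>();
    {
        std::shared_ptr<Node> second = first;   // count rises to 2
        assert(first.use_count() == 2);
    }                                           // second destroyed, count back to 1
    assert(first.use_count() == 1);
    return first->payload;
}                                               // last owner gone: Node deleted here
```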
---
In my experience:
- adding garbage collection to an existing code base is nearly impossible and can be very time consuming.
- creating new software from scratch with zero memory issues (leaks, overruns ...) is now easily feasible with modern compilers and debuggers in the hands of good (*) programmers with a good coding process and software practices.
(*) I know crappy programmers create crappy code, but with the tools available in VS2010 and VS2011, there's no excuse.
Watched code never compiles.
---
Issues with garbage collection are well documented.
For many applications, the advantages of garbage collection outweigh the disadvantages.
Applications can start up faster and see better performance if garbage collection is deferred.
The type of application where deferred garbage collection should not be used is one that requires long periods of sustained throughput. In these apps there is no good time to do the garbage collection, so the pending work tends to build towards a critical point. At best, the garbage collection will spike processing time while it satisfies requests. At worst, the app can crash because resources can't be made available in a timely manner. In these apps it's better to coalesce as you free, so the overhead is spread thinly over time rather than batched when the system is under stress.
The last time I had a chance to demonstrate this, I was tasked with datamining a corporation's internal web servers to populate the servers needed to complete an emergency billing cycle. Corporate politics were a major part of the technical considerations; it would've been easier to hit the Oracle SQL server directly.
The internal web servers were Java based, and I was assured that they were tuned by the vendor to maintain high data throughput. I was able to get high throughput out of them for short periods of time, but if I ran sustained requests at about 50,000 records / sec, it didn't take many minutes before the response time would suddenly slow and the server would crash.
The solution was to throttle the datamining down to a few thousand records / sec and to implement cool-off intervals so the GC could catch up.
---
Windows (and most modern OS's) will automatically free your allocated-but-unfreed memory when your process exits.
The real problem with not freeing memory is that if you allocate 20MB but don't free it, that's 20MB less that every program running, including your own, can allocate later. By freeing it as soon as you're done, you make it available to be allocated again.
---
That is true only if _heapmin() is called periodically. If it is not called, the freed memory is still on the heap of the process that allocated it; _heapmin must be called to release memory back to the OS.
---
Sorry, I misread your comment.
On most modern desktop operating systems, when a process exits, its dynamically allocated memory is freed.
_heapmin is useful if you want the OS to get the memory back while your app is running, but I was talking about when it exits.
---
I have a class that has been evolving for some time that satisfies these constraints.
It's a templatized class, so it is capable of calling the object's destructor.
It keeps a static member std::map<void*, int>, which serves as the owner of the object and as a location for reference counting.
When the 'int' member is decremented to zero, the last class instance out deletes the object and removes it from the map.
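A rough sketch of that scheme (class and member names are my own invention; copy assignment is omitted for brevity): a templatized wrapper with a static std::map<void*, int> holding one reference count per managed object, where the last wrapper out deletes the object and erases its entry.

```cpp
#include <map>
#include <cassert>

template <typename T>
class CountedPtr
{
    // One shared map for all instances: address -> reference count.
    static std::map<void*, int>& counts()
    {
        static std::map<void*, int> m;
        return m;
    }
    T* pData;
public:
    explicit CountedPtr(T* p) : pData(p) { if (p) ++counts()[p]; }
    CountedPtr(const CountedPtr& o) : pData(o.pData) { if (pData) ++counts()[pData]; }
    CountedPtr& operator=(const CountedPtr&) = delete;  // left out of the sketch
    ~CountedPtr()
    {
        if (pData && --counts()[pData] == 0)
        {
            counts().erase(pData);
            delete pData;          // last instance out deletes the object
        }
    }
    T* operator->() { return pData; }
    static int liveObjects() { return (int)counts().size(); }
};

struct Probe { int v = 5; };

int demo()
{
    int seen;
    {
        CountedPtr<Probe> a(new Probe);
        CountedPtr<Probe> b(a);        // count for this Probe is now 2
        seen = b->v;
    }                                  // both wrappers gone: Probe deleted
    return seen;
}
```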
---
As I understand it, you count the number of references that point to your object, and when it reaches zero you free it.
I was not clear; I am suggesting this:
My idea is that you would have a list of all the smart pointers that point to your object.
When the object gets deleted (by anyone), all the other pointers that point to it get cleared.
So free really frees the object, but it also clears whoever is still pointing to it.
---
You could do that by keeping a multimap of all of the smart pointers that you create, ordered by the pointer that they reference.
Then when you delete the pointer, you could access every smart pointer and set its reference pointer to NULL, then remove all instances of that pointer from your multimap.
Give it a try. I think you'll learn a lot by doing this.
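A hypothetical sketch of that multimap approach (all names are mine; copying of the tracker is left out): every tracker registers itself under the raw address it watches, and a destroy() function frees the object and walks the multimap to null out every tracker still referring to it.

```cpp
#include <map>
#include <cassert>

class TrackedPtr;

// Registry keyed by watched address: one entry per live TrackedPtr.
static std::multimap<void*, TrackedPtr*> g_registry;

class TrackedPtr
{
    int* pData;
public:
    explicit TrackedPtr(int* p) : pData(p)
    {
        if (p) g_registry.insert({p, this});
    }
    ~TrackedPtr()
    {
        if (!pData) return;                       // already nulled by destroy()
        auto range = g_registry.equal_range(pData);
        for (auto it = range.first; it != range.second; ++it)
            if (it->second == this) { g_registry.erase(it); break; }
    }
    int* get() const { return pData; }
    friend void destroy(int* p);
};

// Free the object and null out every tracker that watched it.
void destroy(int* p)
{
    auto range = g_registry.equal_range(p);
    for (auto it = range.first; it != range.second; ++it)
        it->second->pData = nullptr;
    g_registry.erase(p);
    delete p;
}

int demo()
{
    int* raw = new int(9);
    TrackedPtr a(raw), b(raw);
    int before = *a.get();
    destroy(raw);                                 // frees the int, nulls both trackers
    assert(a.get() == nullptr && b.get() == nullptr);
    return before;
}
```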
---
I think you got my idea pretty well.
So, the bad news now:
I haven't resolved the problem of ptr += something;
I assume I'll have to store a size for every object.
---
You might need to dereference your pointer to do pointer math.

template <typename T>
class CSPtr
{
private:
    T * pData;
public:
    operator T* () { return pData; }
public:
    CSPtr(T * pNewPtr) : pData(pNewPtr) { }
};

void func(void)
{
    CSPtr<int> IntPtr = new int[5];
    int * pInt = (int *) IntPtr;
    pInt += 2;
    *pInt = 3;
}
---
There is a problem with my strategy if I allow arithmetic on pointers:
if the object is freed, a pointer that points one byte further into it is not cleared.
I need to do a proof of concept.
Does anyone know a good 50,000-line C++ project I could try this on?
---
You could keep a pointer and an offset. Then when doing arithmetic, just change the offset.
When you dereference the pointer, add the offset to it.
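A small sketch of that base-plus-offset idea (names are mine): arithmetic only moves the offset, and the real address is computed at dereference time, so the dereference site is the single place that would need a null-base check after a free.

```cpp
#include <cassert>
#include <cstddef>

template <typename T>
class OffsetPtr
{
    T* base;                  // the pointer that a free could null out
    std::ptrdiff_t offset;    // arithmetic only ever touches this
public:
    explicit OffsetPtr(T* p) : base(p), offset(0) {}
    OffsetPtr& operator+=(std::ptrdiff_t n) { offset += n; return *this; }
    T& operator*() { return *(base + offset); }   // offset applied here
};

int demo()
{
    int* block = new int[5]{10, 20, 30, 40, 50};
    OffsetPtr<int> p(block);
    p += 2;                 // moves the offset, not the base pointer
    int value = *p;         // dereferences base + 2 -> third element
    delete[] block;
    return value;
}
```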
---
You would need a class that manages the allocation and manages the pointers pointing to that allocation, so that when the allocation is freed all the pointers are NULLed.
Since the allocations are to be shared, you would want some way of referring to those allocations, so that if some code needs an address it calls this class; if the allocation exists, the existing address is given back, and if not, it is created.
This class would need to manage access to the address so that only one caller at a time can modify the data.
Finally, you'd need a new class for the pointers themselves that can track when a pointer is copied, so that the new pointer can be added to the list of pointers being managed.
This is complex, though, and easily broken, but it could be interesting to try.
==============================
Nothing to say.
---
First, the new C++11 standard does contain smart pointers, and they avoid all of the above problems as well as some that you introduced.
Second, C++11 introduced move semantics, and thereby the concept of passing ownership to another object. Your core premise of a single owner directly interferes with that concept. The ANSI committee didn't introduce this on a whim; they had some very good reasons, so why would you want to counter their efforts?
Third, this...
Pascal Ganaye wrote: - when an object is freed any other pointer to it get nullified
is a very bad idea! What are the other parts of your application supposed to do when a pointer they rely on suddenly gets nullified? How can you guarantee that your whole application isn't left in an undefined state because an operation couldn't be completed? This is an obvious concern in a multi-threaded application, but even single-threaded it may cause big headaches. Sure, you can create a mutex for a pointer to make sure you can complete your operations, but that also means that the owner may not be able to nullify its own pointer for an unspecified time. That doesn't quite mix with the concept of ownership.
Fourth, you mention that you know there are smart pointer implementations. Why don't you have a look at them before implementing a concept of your own? They solve all the problems you listed, plus the ones I pointed out above.
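To illustrate the move-semantics point: std::unique_ptr has exactly one owner at a time, and std::move hands that ownership over, leaving the source empty. A minimal sketch (names are mine):

```cpp
#include <memory>
#include <utility>
#include <cassert>

// Sink function: becomes the sole owner of the int,
// which is freed automatically when p goes out of scope.
int takeOwnership(std::unique_ptr<int> p)
{
    return *p;
}

int demo()
{
    std::unique_ptr<int> owner(new int(11));
    int value = takeOwnership(std::move(owner));  // ownership moves into the call
    assert(owner == nullptr);                     // source is now empty, by design
    return value;
}
```

Note that the language itself empties the source pointer on a move; nobody else's pointers are touched, which is how C++11 reconciles transfer of ownership with safety.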
---
Thanks, Stefan, for your answer.
I think I got off on the wrong foot with you; I don't really suggest that what C++11 has done is bad.
I am only introducing something that is different, and I am interested to see where it goes.
When I wrote:
Stefan_Lang wrote:when an object is freed any other pointer to it get nullified
I knew it was quite a dangerous concept.
In my mind I have this idea, like a little snowball; I want to roll it and see where it goes.
what are other parts of your application supposed to do when the pointer they rely on suddenly got nullified?
They are not in a worse state than calling an object that has been freed.
The . or -> operators can check if the pointer is null or not.
How can you guarantee that your whole application isn't left in an undefined state because an operation couldn't be completed?
I am not sure I follow your thoughts.
the free function would basically do:
free(memory)
{
    foreach pointer in the entire system that points to memory
        *pointer = null
    real_free(memory);
}
If the pointers get reused later by the rest of the application, it is bound to crash anyway, as you're not supposed to use a block of memory that has been freed.
With this mechanism you can check very deterministically whether the block was freed, simply by testing the pointer value.
Again, I am not here to start a religious war; I am not even saying this is the way to go. I am merely asking: what if?
---
I did not want to start a religious war either, but you did mention using this in a multithreaded environment, and that is a very dangerous place to go! That is why my response may have seemed a bit radical, but my concerns are real enough.
Pascal Ganaye wrote: They are not in a worse state than calling an object that has been freed.
The . or -> operators can check if the pointer is null or not.
Ouch!
First, your 'owner' has no idea at all what state other objects are in. None! For all you know, they may have called a lengthy operation on the object that pointer points to and be awaiting a response from somewhere else. Now your 'owner' nullifies all pointers, then the response the aforementioned operation waited for arrives. At this point execution continues. However, the object it was called upon has been destroyed, all member variables are undefined, and the this pointer is invalid, causing any attempt to call another member function to crash! And you call this 'not in a worse state'?
Second, the . operator cannot be overloaded! You can overload ->, but apart from preventing an immediate crash, what should it do? And besides, you cannot even prevent a crash: operator -> does (at least) two things: it dereferences the pointer and then uses the resulting address to access the referenced member. If the nullifying happens just after the dereferencing but before accessing the member, this access will crash your application! (In truth, a lot more atomic operations happen upon dereferencing; I just picked the important two to make a point. The important thing to note is that C++ operations can - and will - be interrupted in mid-swing by multi-threading. Even a simple operation such as ++i can be interrupted between reading i, incrementing the value, and writing it back to storage.)
Third, even if you could overload both operators, and in a meaningful way, you would have to do that for every single class!
As I said, I'm looking through the multi-threading glasses at this, and what I see isn't pretty. You might be able to fix a few of these concerns, but not all. And some might even apply to single-threaded apps.
Again: do look at existing smart pointers. They're good. They're clever. A horde of extremely clever people created them, used them, improved them, and made them both easy to use and fool-proof. And yes, that includes multi-threading. There's really no reason to roll your own, certainly not when you're using a compiler that already supports C++11 (or at least that part of it).
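A small demonstration of the point about ++i (names are mine): a plain increment is a read-modify-write sequence that another thread can interleave with, while std::atomic makes the whole step indivisible, so the count below is always exact.

```cpp
#include <atomic>
#include <thread>
#include <cassert>

int demo()
{
    std::atomic<int> counter(0);
    auto work = [&counter] {
        for (int i = 0; i < 100000; ++i)
            ++counter;                 // atomic read-modify-write: no lost updates
    };
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    return counter.load();             // always 200000 with std::atomic
}
```

With a plain int counter, the same two threads would routinely lose increments, and the final count would vary from run to run.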
---
I am loving this conversation; you're certainly a lot more knowledgeable in C++ than I am.
I am not against smart pointers. As I said, I don't know them very well; it might well be that one of them, or a combination of them, covers perfectly what I am fuzzily thinking of.
I am taking your comments into consideration and will either give up or come back with a new proposition.
I don't really want to fight anyone on this; the idea is to accept some arbitrary and perhaps unorthodox concepts and see if it rolls.
Hopefully it does. If not, we can change our arbitrary and perhaps unorthodox concepts and see if that works.
You're right, the hypotheses I posted are too wild. I'll try to write something with less scope and see where it goes.
---
Well I don't 'fight' you, I just want to point out the problems. Consider this:
1. Dealing with pointers by yourself is tricky. But most people learn it soon enough.
2. Writing a class to automate this requires you to not only think of your code, but also the code of other users who will use that class. That is quite hard. Especially when you think of users who fail at 1.
3. Writing such a class to deal with multi-threading issues is mind-boggling!
It's not that I judge your abilities as poor, just that you might have set your goals too high. Me, I've tried my hand at level 2, and although I thought I did it reasonably well, I know it wasn't quite perfect. I might be willing to experiment with something at the level of 3, but I don't think I'd offer to write an article about it until after I know it works ...
On a sidenote, I did write an Object Pool implementation that hands out special smart pointers to make sure all pool items are released as soon as possible. In fact, the concept of releasing pool items ASAP runs a bit contrary to the idea of a pool, so in truth it's closer to a specialized garbage collector with 'immediate response'. That is about as close to garbage collection as I dare go. And I only did it because I realized I could implement each function at a complexity of amortised O(1). Maybe I should put that up as an article, but I first have to fix memory alignment and then think up a new name...
---
Hello,
I have a problem.
When I open a port with CreateFile(..) and then close it with CloseHandle(..), the next CreateFile fails. I tried doing a Sleep(2000) before reopening; it didn't help.
The first time I open the port, it succeeds; it fails only when I open it after closing it.
And it happens only on my laptop, which runs Windows 7; on the other computer, which has Windows XP, it works fine.
What could be the problem with reopening a port after closing it?
Thanks!!
---
What does GetLastError return after the second CreateFile call?
---
It returns ERROR_ACCESS_DENIED.
---
This[^] appears to be the same problem; you could try whether that works.
---
Thank you, it solved my problem.