|
Smart pointers are OK for lazy/bad programmers, but not needed otherwise. Which means they are needed a lot.
==============================
Nothing to say.
|
|
|
|
|
LOL.
There are good reasons to use them on occasion.
I've used them when I'm keeping multiple tables of objects in memory that can be indexed and accessed from different points. I've had this come up in multi-threaded programs where multiple threads are processing data at different entry points.
In this case there is no clear owner of the object, so there's no clear way to determine when an object is out of scope and can be safely deleted.
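A minimal sketch of that shared-ownership situation, assuming C++11 and using invented names (Record, table_by_id, table_by_name): two tables index the same object, neither is the sole owner, and std::shared_ptr deletes the object only when the last table releases it.

```cpp
#include <map>
#include <memory>
#include <string>

// Hypothetical record type; any object shared between tables would do.
struct Record {
    std::string name;
};

// Two tables index the same objects by different keys; neither table
// is "the" owner -- the shared_ptr control block tracks that for us.
std::map<int, std::shared_ptr<Record>>         table_by_id;
std::map<std::string, std::shared_ptr<Record>> table_by_name;

void insert_record(int id, const std::string& name) {
    auto rec = std::make_shared<Record>();
    rec->name = name;
    table_by_id[id]     = rec;   // both tables share ownership;
    table_by_name[name] = rec;   // use_count() is now 2
}

// The object dies only when the last table drops its reference.
void remove_by_name(const std::string& name) { table_by_name.erase(name); }
void remove_by_id(int id)                    { table_by_id.erase(id); }
```

With raw pointers, erasing from one table couldn't know whether the other table still needed the object; the reference count carries exactly that information.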
|
|
|
|
|
JackDingler wrote: In this case there is no clear owner of the object,
Unfortunately, the OP pretty much required a singular ownership. Just one of several reasons why he needs to rethink his premises...
|
|
|
|
|
It would definitely be problematic for him to implement his approach in a current codebase.
As you point out, it requires that there be a clear definition of ownership. You can't just delete the resource willy-nilly; you won't know when other modules are done with it. You'll end up with attempts to dereference a null pointer, and the exceptions that come with that.
So with his approach, you'll have to make sure that every user of the pointer is done with it before deleting it. If you're doing this, then the smart pointer is unnecessary and just adds overhead.
|
|
|
|
|
JackDingler wrote: So with his approach, you'll have to make sure that every user of the pointer is done with it, before deleting it.
Agreed, that would work. However, it would require everyone to adhere to certain coding standards - but if you have a strict coding standard, ...
JackDingler wrote: then the smart pointer is unnecessary
|
|
|
|
|
In a decent design, smart pointers are needed when there's a chance of an exception being thrown.
|
|
|
|
|
This is a good example of dogma. You don't need smart pointers in exception-handled code, since you can easily deallocate an allocated pointer in any of the exception handling cases.
|
|
|
|
|
Not dogma, just efficiency. The scope where a pointer is allocated may not be the best scope for handling an exception. Without a smart pointer, you have to cover everything with a try-catch, deallocate in all catch clauses (and you'd better not miss one, or you'll be leaking), and then rethrow.
So okay, a smart pointer isn't needed, it just makes for sturdier, faster, and more concise code.
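To make the trade-off concrete, here is a hedged sketch (the Widget type and function names are invented) contrasting the manual try/catch/delete/rethrow pattern with the RAII version, assuming C++11:

```cpp
#include <memory>
#include <stdexcept>

struct Widget { int value = 0; };

// Hypothetical processing step that may throw.
void process(Widget& w) {
    if (w.value < 0) throw std::runtime_error("bad widget");
}

// Manual style: every path out of the function must deallocate, or we leak.
void manual_version(int v) {
    Widget* w = new Widget;
    w->value = v;
    try {
        process(*w);
    } catch (...) {
        delete w;   // easy to miss one of these in a large function
        throw;      // rethrow for a caller better placed to handle it
    }
    delete w;       // the normal-exit path needs its own delete too
}

// RAII style: the unique_ptr frees the Widget no matter how we leave scope.
void raii_version(int v) {
    std::unique_ptr<Widget> w(new Widget);
    w->value = v;
    process(*w);    // if this throws, ~unique_ptr still runs during unwinding
}
```

The manual version needs a delete on both the exception path and the normal path; the RAII version gets both for free during stack unwinding.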
|
|
|
|
|
It is perfectly OK to use normal pointers, provided you deallocate at every return from the function, exceptions included.
|
|
|
|
|
Yes, agreed. But I wouldn't necessarily call that a decent design, which was the caveat in my original comment.
|
|
|
|
|
I disagree - you don't need smart pointers to deal with exceptions. You just initialize all pointers with 0 and, in the catch block, delete all that are not null. Of course, with smart pointers this is easier, but they are not strictly needed.
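That pattern can be sketched as follows (an illustration with invented names, not anyone's production code); it leans on the fact that delete on a null pointer is a well-defined no-op:

```cpp
#include <stdexcept>

struct A { int x = 1; };
struct B { int y = 2; };

// Stand-in for work that can fail partway through.
void may_throw(bool fail) {
    if (fail) throw std::runtime_error("boom");
}

// Returns true if it completed, false if it caught an exception.
// Every pointer starts as nullptr, so the catch block can delete
// them all unconditionally: delete on a null pointer does nothing.
bool run(bool fail) {
    A* a = nullptr;
    B* b = nullptr;
    try {
        a = new A;
        may_throw(fail);   // may throw before b is ever allocated
        b = new B;
        delete b;
        delete a;
        return true;
    } catch (...) {
        delete b;          // still nullptr if we threw early: safe
        delete a;
        return false;
    }
}
```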
|
|
|
|
|
That's why I said they were needed in a "decent design". Not using smart pointers when they are available makes for clunkier, less safe code.
|
|
|
|
|
Do create an article when you are done.
I don't think modern garbage-collector algorithms and implementations suffer much performance degradation, or it would have been well publicized on the interweb.
In C#, according to the documentation, there are ways to tell the garbage collection to be less intrusive by setting latency mode (http://msdn.microsoft.com/en-us/library/bb384202.aspx[^]).
(mostly from http://stackoverflow.com/questions/147130/why-doesnt-c-have-a-garbage-collector[^] )
In C++, since there is no central garbage-collection mechanism approved by the language committee, and since the language is and should be platform agnostic, no consensus on how to do it (design and implementation) was reached, so it was not added in.
But the C++ language (C++0x, or whatever the revision number is now) offers shared pointers (std::shared_ptr) that can be used as a low-level garbage-collection mechanism.
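One caveat worth a sketch (assuming C++11; the Node type is invented): shared_ptr is reference counting, not tracing garbage collection, so it cannot reclaim cycles of strong references. Breaking such cycles is what std::weak_ptr is for:

```cpp
#include <memory>

// If both links were shared_ptr, two linked nodes would keep each
// other alive forever: reference counting, unlike a tracing collector,
// cannot reclaim cycles of strong references.
struct Node {
    std::shared_ptr<Node> next;   // strong link: owns the next node
    std::weak_ptr<Node>   prev;   // weak back-link: observes, doesn't own
};

// Build a linked pair, drop the head, and report whether the
// back-link noticed that the head was actually destroyed.
bool head_collected_after_reset() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;   // a owns b
    b->prev = a;   // weak: does not bump a's reference count
    a.reset();     // a's count hits zero, so it is destroyed at once
    return b->prev.expired();   // true: no cycle kept the head alive
}
```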
---
In my experience:
- adding garbage collection to an existing code base is nearly impossible and can be very time consuming.
- creating new software from scratch with zero memory issues (leaks, overruns, ...) is now easily feasible with the use of modern compilers and debuggers in the hands of good (*) programmers with a good coding process and software practices.
(*) I know crappy programmers create crappy code, but with the tools available in VS2010 and VS2011, there's no excuse.
Watched code never compiles.
|
|
|
|
|
Issues with garbage collection are well documented.
For many applications, the advantages of garbage collection outweigh the disadvantages.
Applications can start up faster and see better performance if garbage collection is deferred.
The type of application where deferred garbage collection should not be implemented is one that requires long periods of sustained throughput. In these apps there is no good time to do the garbage collection, so the requests tend to build toward a criticality. At best, the garbage collection will spike processing time while it satisfies requests. At worst, the app can crash because resources can't be made available in a timely manner. In these apps it's better to coalesce as you free, so the overhead is spread thinly over time rather than batched when the system is under stress.
The last time I had a chance to demonstrate this, I was tasked with datamining a corporation's internal web servers to populate the servers needed to complete an emergency billing cycle. Corporate politics were a major part of the technical considerations; it would've been easier to hit the Oracle SQL server directly.
The internal web servers were Java based, and I was assured that they had been tuned by the vendor to maintain high data throughput. I was able to get high throughput out of them for short periods of time, but if I ran sustained requests at about 50,000 records/sec, it didn't take too many minutes before the response time would suddenly slow and the server would crash.
The solution was to throttle the datamining down to a few thousand records/sec, and to implement cool-off intervals so the GC could catch up.
|
|
|
|
|
Windows (and most modern OS's) will automatically free your allocated-but-unfreed memory when your process exits.
The real problem with not freeing memory is that if you allocate 20MB but don't free it, that's 20MB less that every program running, including your own, can allocate later. By freeing it as soon as you're done, you make it available to be allocated again.
|
|
|
|
|
That is true only if _heapmin() is called periodically. If it is not called, the freed memory is still on the heap of the process that allocated it; _heapmin must be called to release memory back to the OS.
|
|
|
|
|
Sorry, misread your comment.
On most modern desktop operating systems, when a process exits, its dynamically allocated memory is freed.
_heapmin is useful if you want the OS to get the memory back while your app is running, but I was talking about when it exits.
|
|
|
|
|
I have a class that has been evolving for some time that satisfies these constraints.
It's a templated class, so it is capable of calling the object's destructor.
It keeps a static member of type std::map<void*, int>, which serves as the owner of the object and a location for reference counting.
When the int member is decremented to zero, the last class instance out deletes the object and removes it from the map.
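A hedged, much-simplified sketch of that scheme, with invented names (CountedPtr) and no thread safety, just to show the shape of the static-map approach:

```cpp
#include <map>

// Invented name; NOT thread-safe -- a real version would lock the table.
template <typename T>
class CountedPtr {
    // Shared table: raw pointer -> number of CountedPtr instances holding it.
    static std::map<void*, int>& table() {
        static std::map<void*, int> t;
        return t;
    }
    T* p_;
public:
    explicit CountedPtr(T* p) : p_(p) { if (p_) ++table()[p_]; }
    CountedPtr(const CountedPtr& o) : p_(o.p_) { if (p_) ++table()[p_]; }
    CountedPtr& operator=(const CountedPtr&) = delete;  // kept minimal
    ~CountedPtr() {
        if (p_ && --table()[p_] == 0) {  // last instance out...
            table().erase(p_);
            delete p_;                   // ...deletes the object
        }
    }
    T* get() const { return p_; }
    static int count(T* p) {             // current reference count for p
        auto it = table().find(p);
        return it == table().end() ? 0 : it->second;
    }
};
```

A real version would also have to handle assignment and cope with the allocator recycling the same address for a new object.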
|
|
|
|
|
As I understand it, you count the number of records that point to your object and free it when the count reaches zero.
I was not clear; I am suggesting this:
My idea is that you would have a list of all the smart pointers that point to your object.
When the object gets deleted (by anyone), all the other pointers that point to it get cleared.
So free really frees the object, but it also clears whoever is still pointing to it.
|
|
|
|
|
You could do that by keeping a multimap of all of the smart pointers that you create, keyed by the pointer that they reference.
Then, when you delete the pointer, you could access every smart pointer and set its reference pointer to NULL, then remove all instances of that pointer from your multimap.
Give it a try. I think you'll learn a lot by doing this.
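Here is one hedged way that multimap idea might look (names like TrackedPtr are invented; no thread safety): a registry maps each object to every smart-pointer instance referring to it, so destroying the object can null them all.

```cpp
#include <map>

template <typename T>
class TrackedPtr {
    // Registry: object address -> every TrackedPtr currently referring to it.
    static std::multimap<void*, TrackedPtr*>& registry() {
        static std::multimap<void*, TrackedPtr*> r;
        return r;
    }
    T* p_;
    void track()   { if (p_) registry().insert({p_, this}); }
    void untrack() {
        if (!p_) return;
        auto range = registry().equal_range(p_);
        for (auto it = range.first; it != range.second; ++it)
            if (it->second == this) { registry().erase(it); break; }
    }
public:
    explicit TrackedPtr(T* p = nullptr) : p_(p) { track(); }
    TrackedPtr(const TrackedPtr& o) : p_(o.p_) { track(); }
    TrackedPtr& operator=(const TrackedPtr&) = delete;  // kept minimal
    ~TrackedPtr() { untrack(); }
    T* get() const { return p_; }

    // Delete the object and null out every TrackedPtr that referenced it.
    static void destroy(T* p) {
        auto range = registry().equal_range(p);
        for (auto it = range.first; it != range.second; ++it)
            it->second->p_ = nullptr;
        registry().erase(p);
        delete p;
    }
};
```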
|
|
|
|
|
I think you got my idea pretty well.
So the bad news now.
I haven't resolved the problem of ptr += something;
I assume I'll have to keep a size for every object.
|
|
|
|
|
You might need to dereference your pointer to do pointer math.
template <typename T>
class CSPtr
{
private:
    T * pData;
public:
    operator T* () { return pData; }           // hand out the raw pointer for arithmetic
    CSPtr(T * pNewPtr) : pData(pNewPtr) { }    // remember the pointer we were given
};
void func(void)
{
    CSPtr<int> IntPtr = new int[5];
    int * pInt = (int *) IntPtr;   // conversion operator yields the raw pointer
    pInt += 2;                     // do the math on the raw copy
    *pInt = 3;
}
|
|
|
|
|
There is a problem with my strategy if I allow arithmetic on pointers.
If the object is freed, a pointer that points one byte further into it is not cleared.
I need to do a proof of concept.
Anyone know a good 50,000-line C++ project I could try this on?
|
|
|
|
|
You could keep a pointer and an offset. Then when doing arithmetic, just change the offset.
When you dereference the pointer, add the offset to it.
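A sketch of that base-plus-offset idea (hedged, invented names; the offset is kept in elements rather than bytes since the element type is known):

```cpp
#include <cstddef>

// Invented name. Arithmetic changes only the offset; the base pointer
// stays intact, so it can still be found in (and freed from) whatever
// registry owns the allocation.
template <typename T>
class OffsetPtr {
    T*             base_;
    std::ptrdiff_t offset_;   // in elements, not bytes
public:
    explicit OffsetPtr(T* base) : base_(base), offset_(0) {}
    OffsetPtr& operator+=(std::ptrdiff_t n) { offset_ += n; return *this; }
    T& operator*() const { return *(base_ + offset_); }  // apply offset on use
    T* base() const { return base_; }                    // identity for the registry
};
```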
|
|
|
|
|
You would need a class that manages each allocation and the pointers pointing to that allocation, so that when the allocation is freed, all the pointers are NULLed.
Since the allocations are to be shared, you would want some way of referring to those allocations, so that when some code needs an address it calls this class: if the allocation exists, the existing address is given back; if not, it is created.
This class would also need to manage access to the address so that only one caller at a time can modify the data.
Finally, you'd need a new class for the pointers themselves that can track when a pointer is copied, so that the new pointer can be added to the list of managed pointers.
This is complex, though, and easily broken, but it could be interesting to try.
|
|
|
|