|
Joaquín M López Muñoz wrote:
The semantics of CWnds often force the programmer to go through the sequence construction-creation-destruction. This is IMHO an open hole for errors, because the possibility exists of having a "void" CWnd with no attached HWND. If the MFC guys had moved to RAII, CWnds would be created as part of their constructor, eliminating the possibility of the error described above. This would render the common ASSERT(IsWindow())s unnecessary. "Void" CWnds can still be modeled, should the need arise, by having unassigned CWnd pointers.
I think that you are right about RAII, but in that particular case I'm not sure what the solution should be, since allowing no attached HWND permits some simplification of the code and helps with RAD...
The C++ solution (and Delphi's too) is that the object hides whether it has been created yet by essentially duplicating all properties (e.g. Text, Position, ...) so that you can alter them even when the object is not yet created... It works relatively well. I think that .NET Forms is similar in that area.
Maybe a good solution would be to hide the user interface a bit more by accessing the data through a class that knows whether an interface object is connected to it (so that the UI is updated once created)... But IMHO, such a solution is probably too complicated for the benefit.
But OTOH, since MFC uses DDX for data exchange, maybe it would have been possible to use RAII exclusively... but then we would have needed the ability to transfer far more properties... and also MFC updates everything at once, which would cause a lot of overhead...
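The RAII idea being discussed can be sketched without any real Win32 at all. Everything below (`Window`, `Handle`, `create_native_window`) is a made-up stand-in, not MFC or the Win32 API; the point is only that construction either yields a usable window or throws, so a "void" wrapper with a detached handle can never exist.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Hypothetical stand-ins for ::CreateWindow / ::DestroyWindow and HWND.
using Handle = int;
Handle create_native_window(const std::string& title) {
    return title.empty() ? 0 : 42;  // 0 models creation failure
}
void destroy_native_window(Handle) {}

// RAII window: the constructor acquires the handle or throws, and the
// destructor releases it, so handle() is always valid on a live object.
class Window {
public:
    explicit Window(const std::string& title)
        : handle_(create_native_window(title)) {
        if (handle_ == 0)
            throw std::runtime_error("window creation failed");
    }
    ~Window() { destroy_native_window(handle_); }
    Window(const Window&) = delete;
    Window& operator=(const Window&) = delete;
    Handle handle() const { return handle_; }  // no ASSERT(IsWindow()) needed
private:
    Handle handle_;
};
```

An optional window, as the post says, is then modeled with a (possibly null) `Window*` or smart pointer rather than a hollow wrapper object.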
Philippe Mori
|
|
|
|
|
Chris Maunder wrote:
It's interesting to see a shift in focus from performance-based programming to security- and robustness-based programming.
I think there is a middle ground for C and C++ programmers, a way to have it both ways (almost). My approach, which I've just recently adopted, is to use assertions to validate preconditions. That way, when the code is finished, those tests do not affect the performance of the final product. Exceptions come into the picture when a function cannot perform its duty after the preconditions have been verified through the assertions. In those cases, an exception is thrown.
This is not a foolproof method, because it is possible to imagine situations in which the UI passes along invalid data from the user in production code. The assertions are no longer there at that point, so you have trouble. However, this is the UI's responsibility and not the library's; validating user input is up to the UI. As long as the preconditions are clearly documented, the library is no longer responsible for these kinds of bugs.
I think the use of assertions for validating preconditions is a good middle ground: defensive programming without driving yourself nuts.
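The assert-then-throw split described above can be sketched like this (the function and its names are mine, purely for illustration): `assert()` documents and checks the caller's contract in debug builds only, while the exception reports a failure that can occur even when the caller obeyed the contract.

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <string>

// Precondition: name must be non-empty (assert, compiled out under NDEBUG).
// Exception: the lookup can legitimately fail even for a well-behaved caller.
int lookup_port(const std::map<std::string, int>& services,
                const std::string& name) {
    assert(!name.empty() && "precondition: service name must be non-empty");
    auto it = services.find(name);
    if (it == services.end())  // valid call, but the duty cannot be performed
        throw std::out_of_range("unknown service: " + name);
    return it->second;
}
```

In a release build (NDEBUG defined) the precondition check costs nothing, which is exactly the middle ground being argued for.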
|
|
|
|
|
*wave arms* I'm here!
But I do draw a distinction between "ordinary code" and security-related code.
In Debug-builds I [ATL]ASSERT() anything that moves. ASSERT() is a friend you can trust.
--
Chatai. Yana ra Yakana ro futisha ta?
|
|
|
|
|
Robustness first, performance second.
I've always lived by that rule, unless the boss forces me to do otherwise.
Most performance optimizations are based on assumptions about the code. That's all well and good until you get some hotshot programmer who breaks all your assumptions and tears down that house of cards you call an application.
------- signature starts
"...the staggering layers of obscenity in your statement make it a work of art on so many levels." - Jason Jystad, 10/26/2001
Please review the Legal Disclaimer in my bio.
------- signature ends
|
|
|
|
|
Data needs to be checked whenever it comes in from a source you need to trust but can't.
A library should be made to trust the users of the library, except when this compromises security. If anyone can use the library, and invalid data causes a security problem, you should check the data in the library.
If the only consequence is that the calling app will die, there's no reason for the library to waste time checking stuff. A program determined to crash will crash.
Now, the debug build should check everything, probably at multiple layers.
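One way to get "the debug build checks everything, the release build trusts the caller" is the standard `assert`/NDEBUG mechanism; the function below is an illustrative example, not from the thread.

```cpp
#include <cassert>
#include <cstddef>

// Debug builds verify the caller's buffer; release builds (compiled with
// -DNDEBUG) strip the check and pay nothing for it.
int sum_range(const int* data, std::size_t count) {
    assert(data != nullptr && "caller must pass a valid buffer");
    int total = 0;
    for (std::size_t i = 0; i < count; ++i)
        total += data[i];
    return total;
}
```

The same `assert` can be repeated at each internal layer, since all copies vanish together in the release build.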
|
|
|
|
|
I'm another long-time C/C++ guy who strongly believes in robustness over performance. The average user, whether an end user or a programmer, is going to be far more forgiving of a few nanoseconds of processing time than of waiting weeks or months for a bug fix.
"Any clod can have the facts, but having opinions is an art."
Charles McCabe, San Francisco Chronicle
|
|
|
|
|
I think it comes down to adequate documentation and understanding thereof. If the user has been fully informed in how to use a function/class/API correctly, I have no problem leaving out the extra checking on the library side. OTOH, when the library makes further calls into itself or into the system API or wherever, it then has the onus of checking what it is passing on.
|
|
|
|
|
Chris Maunder wrote:
The thought that the function they were calling would do further checking was considered by some to be not just a waste of cycles but also something only of value to lame programmers who still slept with their teddy bears and used night-lights.
In my opinion, the question revolves around the source code license. If you provide the code to your library, and are willing to allow customers to change it for their own need, you can choose to do less error checking. Ideally, make it a compile-time switch to turn on/off data validation.
If you provide only compiled libraries, particularly DLLs, then you must do robust error checking. If you don't, the liability for the customer is too great.
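The compile-time switch suggested above might look like this sketch. `MYLIB_VALIDATE`, `MYLIB_CHECK`, and `clamp_percent` are all names I made up: customers with source build with `-DMYLIB_VALIDATE` for full argument checking, or without it for maximum speed.

```cpp
#include <stdexcept>

// Validation is compiled in only when the library is built with
// -DMYLIB_VALIDATE; otherwise the macro expands to nothing.
#ifdef MYLIB_VALIDATE
#  define MYLIB_CHECK(cond, msg) \
      do { if (!(cond)) throw std::invalid_argument(msg); } while (0)
#else
#  define MYLIB_CHECK(cond, msg) ((void)0)
#endif

int clamp_percent(int value) {
    MYLIB_CHECK(value >= -1000 && value <= 1000, "value wildly out of range");
    if (value < 0) return 0;
    if (value > 100) return 100;
    return value;
}
```

A shipped-binary-only library, by the argument above, would simply build with the switch permanently on.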
c
|
|
|
|
|
If you have to validate everything because you can't trust a library function, then you probably aren't saving much effort by using the library in the first place, and you'd probably be better off just implementing your own version of the function.
JMHO
|
|
|
|
|
Now, that depends on what the library call is. Even if you could implement your own version, would you want to spend the time working out how to actually implement it? Probably not; that's why you chose to use a library in the first place!
=)
|
|
|
|
|
You're right. I would probably look for an alternate library instead. I would not use a library that doesn't check for invalid parameters.
|
|
|
|
|
Well, not every check can be made by the library itself.
But I think most of the checks should be placed in the library.
|
|
|
|
|
Yes, if you want to enjoy the advantages of a library function, you must trust that library and save your time. If the library function's writer already validates all input data, why do your own validation?
CC
|
|
|
|
|
If the library function should validate the data, how thorough would you be? If a pointer is passed to the function, would you use IsBadReadPtr, IsBadWritePtr and IsBadStringPtr to validate it, or would you simply check that the pointer isn't NULL? If the pointer points to a struct, would you in turn validate all its pointer members with IsBadReadPtr, IsBadWritePtr and IsBadStringPtr, and all its function pointer members with IsBadCodePtr?
A lot of responsibility lies upon the caller; most of the time the callee must trust the caller with the data it sends.
If the documentation of the function states that a parameter must not be NULL (or that it must be valid), it is up to the caller to see to it that it really is (or else I will call abort).
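In practice, the depth of checking described above usually bottoms out at a null test plus a documented contract; the struct and function below are illustrative, and everything past the null checks stays the caller's responsibility, since portable C++ cannot reliably answer "is this pointer readable?" or "is this really a Config?".

```cpp
#include <cassert>
#include <cstddef>

struct Config {
    const char* name;
    int retries;
};

// The null tests are the only cheap, portable part of the contract a callee
// can actually verify; deeper validity is taken on trust per the docs.
int effective_retries(const Config* cfg) {
    assert(cfg != NULL && "documented precondition: cfg must not be null");
    assert(cfg->name != NULL && "documented precondition: cfg->name must not be null");
    return cfg->retries < 0 ? 0 : cfg->retries;
}
```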
|
|
|
|
|
You don't design robust operating systems by assuming the caller passes in good data. That would give you Windows 95/98.
Tim Smith
I'm going to patent thought. I have yet to see any prior art.
|
|
|
|
|
True, robustness doesn't come hand in hand with the assumption that the caller will always be passing good data.
Robustness is not always what you are looking for. In operating systems, yes. In games, not always, especially if it hurts the FPS and the rendering quality.
If the caller sends erroneous data to a function, it is the caller's fault if the program crashes. The callee can do everything in its power to prevent it, but still, it would be the caller's fault that the program crashed. Ever called delete on a pointer that you've already deleted? What happened then?
Open Source software can help with this by doing asserts (and similar things in debug builds) to notify the programmer that he/she did something wrong. But in release builds of a game, these checks would hurt the FPS.
It all depends on what you are developing.
|
|
|
|
|
Good documentation also helps. If I write something like "the results are undefined if the X parameter is NULL or if it does not point to valid data" in the documentation for a function, and the caller does so anyway, I'll gladly let the program crash. It is not my problem, not by a long shot.
If my library is used in a heart-lung machine and the patient dies, it is not my problem (unless I'm the patient), and not my fault.
If my library is used in a space shuttle and the shuttle crashes, it is not my problem (unless I'm one of the seven), and not my fault.
If my library is used in a nuclear warhead, and it somehow detonates and WW3 starts and all people die, it is my problem, but still not my fault.
|
|
|
|
|
ASSERTS are a BAD idea.
I'd rather write the code correctly than use ASSERTS. What if the person calling your library function doesn't have the source code to it, and your code ASSERTS for some reason?
Dalle wrote:
It all depends on what you are developing.
What a crock of horseshit. Crap design = crap performance.
------- signature starts
"...the staggering layers of obscenity in your statement make it a work of art on so many levels." - Jason Jystad, 10/26/2001
Please review the Legal Disclaimer in my bio.
------- signature ends
|
|
|
|
|
John Simmons / outlaw programmer wrote:
ASSERTS are a BAD idea.
Not really; they help eliminate erroneous code. If an assert fires, the caller knows that something is wrong; if the program just crashes, he/she will have much less of a clue about what happened. He/she should then read the documentation or look at the source code.
John Simmons / outlaw programmer wrote:
I'd rather write the code correctly than use ASSERTS.
Then stop calling my functions with invalid data, if you code so correctly.
John Simmons / outlaw programmer wrote:
What if the person calling your library function doesn't have the source code to it, and your code ASSERTS for some reason?
If it is Open Source he/she could download it, or he/she could look at the documentation, or he/she could actually read the assert message instead of just clicking ignore.
John Simmons / outlaw programmer wrote:
What a crock of horseshit. Crap design = crap performance.
Okay then, develop a game engine that is 100% crash proof whatever I send to it, look at the performance, remove all your guards, and compare the performance again.
Are you really using IsBadReadPtr, IsBadWritePtr, IsBadStringPtr and IsBadCodePtr on ALL pointers in your so-called safe code?
|
|
|
|
|
You're taking all this completely out of context. I'm talking about how the library function handles the parameter itself, not about the parameters you're going to pass to it.
If your library is well written, a programmer could pass in any pointer and the application using your library should be able to recover gracefully from a brain fart-induced pointer reference.
Fortunately, the compiler tries valiantly to keep you from passing in pointers of the wrong type, but no matter what the programmer passes in, if your function is eventually entered, it should handle whatever parameter is passed to it.
However, an ASSERT in a library function is a BAD DESIGN IDEA for the reasons I already stated (don't have the source code). If it's a bad idea in one instance, it's ALWAYS a bad idea. Your debug version should work the same way as your release version.
If your documentation states that a programmer can pass in any pointer type, then your function should be ready to handle that, but NOT through the ASSERT mechanism.
I stand by my claim that your views on library code are a crock of horseshit. No matter what you are writing, the code should be flawless first, and fast second. Writing a library for a programmer is exactly the same as writing a program for an end user. It's YOUR job to make sure the programmer can't get away with doing something stupid. Like I said before, the compiler keeps you from having to do a lot of the type checking.
Why are you hung up on the memory access functions? How many processes of yours allow themselves to be hijacked in such a manner as to have read/write access to memory changed by an external process?
------- signature starts
"...the staggering layers of obscenity in your statement make it a work of art on so many levels." - Jason Jystad, 10/26/2001
Please review the Legal Disclaimer in my bio.
------- signature ends
|
|
|
|
|
John Simmons / outlaw programmer wrote:
You're taking all this completely out of context.
- You're the one who started talking about horse manure.
I'm talking about how the library function handles the parameter itself, not about the parameters you're going to pass to it.
- How should this be done, then? I'm not sure that I'm following you...
If your library is well written, a programmer could pass in any pointer and the application using your library should be able to recover gracefully from a brain fart-induced pointer reference.
- How should it do that? If the signature of the function is void foo(bar* pbar) and the caller passes a pointer to an int to the function how would I know that (without using the memory access functions that I've been hung up on), and what should I do about it?
Fortunately, the compiler tries valiantly to keep you from passing in pointers of the wrong type
- The compiler is good at that, but there are always wicked programmers doing casts.
but no matter what the programmer passes in, if your function is eventually entered, it should handle whatever parameter is passed to it.
- If the programmer is doing casts to make it compile, how would the callee know that?
However, an ASSERT in a library function is a BAD DESIGN IDEA for the reasons I already stated (don't have the source code).
- If I do an assert(pbar != NULL) the message that the programmer will get is 'pbar != NULL' . It is somewhat helpful, don't you think? Or if I do an assert(!"foo() - pbar cannot point to a bar object") , would that hurt?
If it's a bad idea in one instance, it's ALWAYS a bad idea.
- Ok, ok, you don't have to use assert if you don't want to.
Your debug version should work the same way as your release version.
- The debug version is for debugging the release version, it should contain more debugging information.
If your documentation states that a programmer can pass in any pointer type, then your function should be ready to handle that, but NOT through the ASSERT mechanism.
- True, if the documentation states that it can take any type of pointer, yes. But if it states that it takes a pointer to a specific datatype and the programmer passes another type to the function (using casts), the function cannot possibly know that.
I stand by my claim that your views on library code are a crock of horseshit.
- There you go again, talking about horse manure.
No matter what you are writing, the code should be flawless first, and fast second.
- I agree, code should be flawless. The function I have written is flawless; it is the caller that is flawed.
Writing a library for a programmer is exactly the same as writing a program for an end user.
- Yes, the programmer may actually be the end user.
It's YOUR job to make sure the programmer can't get away with doing something stupid.
- No one can stop programmers from doing stupid things.
Like I said before, the compiler keeps you from having to do a lot of the type checking.
- Yes, but the wicked programmer from the above example does casts, so you cannot trust the compiler.
Why are you hung up on the memory access functions?
- How else should I know whether I can use the pointer that the wicked programmer has sent to my function?
How many processes of yours allow themselves to be hi-jacked in such a manner as to have read/write access to memory changed by an external process?
- Zero. But the wicked programmer will surely make the numbers increase.
Enough of this nonsense; all I wanted to point out is that the caller can always send data to functions that is invalid in some way. It is practically impossible to prevent this from happening.
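The two assert styles mentioned earlier in this exchange can be shown side by side. The `foo`/`bar` names come from the discussion, but the bodies are mine: the bare condition makes the runtime print the expression text, while the `assert(!"...")` idiom prints a human-readable message instead.

```cpp
#include <cassert>
#include <cstddef>

struct bar { int value; };

int foo(bar* pbar) {
    // On failure this reports the expression itself: pbar != NULL
    assert(pbar != NULL);
    // On failure this reports the string, since !"..." is always false:
    if (pbar->value < 0)
        assert(!"foo() - pbar->value must be non-negative");
    return pbar->value * 2;
}
```

Either way the message reaches the programmer only in debug builds; neither helps against a caller who casts an unrelated pointer to `bar*`.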
|
|
|
|
|
an ASSERT in a library function is a BAD DESIGN IDEA
An assert in the surface layer, yes. That is, any assert that checks inputs from the outside. Internal asserts are fine, even in a library.
Your debug version should work the same way as your release version.
Then why have a debug version?
Why are you hung up on the memory access functions?
Probably because, in the end, it's the only way to really validate a pointer (and even that's not foolproof) and it's slow. In the end, if someone passes you a pointer to the wrong thing, you can't really tell unless you put a signature on everything.
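The "signature on everything" approach mentioned above can be sketched like this; all names are illustrative. Each object carries a magic value set at construction and scribbled over at destruction, so a debug check can at least recognize pointers to the wrong type or to a destroyed object. It remains best effort: dereferencing an arbitrary bad pointer to read the magic can still crash.

```cpp
#include <cassert>
#include <cstdint>

struct Widget {
    static constexpr std::uint32_t kMagic = 0x57494447;  // 'WIDG'
    std::uint32_t magic;
    int size;
    Widget() : magic(kMagic), size(0) {}
    ~Widget() { magic = 0xDEADBEEF; }  // mark as destroyed
};

// Best-effort recognition: null, wrong type, and use-after-destroy all fail
// the magic test, but only if the pointer is at least readable.
bool looks_like_widget(const Widget* w) {
    return w != nullptr && w->magic == Widget::kMagic;
}
```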
|
|
|
|
|
Why are you hung up on the memory access functions?
Probably because, in the end, it's the only way to really validate a pointer (and even that's not foolproof) and it's slow. In the end, if someone passes you a pointer to the wrong thing, you can't really tell unless you put a signature on everything.
- Exactly my point. Thank you for writing it a bit clearer than myself.
|
|
|
|
|
Show me a function with the signature void foo(int* pint) that sets the value of the int that pint points at to 47, and does so in a secure manner.
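A best-effort answer to this challenge illustrates exactly how little the callee can verify. I've widened the requested `void` return to `bool` so the one detectable failure can be reported; a dangling or misaligned non-null pointer still sails straight through, which is the point being argued.

```cpp
#include <cstddef>

// The null test is the only portable check available; everything else
// about the pointer must be taken on trust from the caller.
bool foo(int* pint) {
    if (pint == NULL)
        return false;  // the one case we can reliably detect
    *pint = 47;        // a bad non-null pointer is undefined behavior here
    return true;
}
```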
|
|
|
|
|
That should be determined beforehand, depending on the type of data and the solution being written. In most cases I would say the library should validate the data and handle errors, because people tend to look at functions as black boxes. However, for a particular implementation, it may be more efficient for the caller to verify its data.
For example, take a 3D rendering program that must process millions of polygons. It would waste a lot of time to validate each polygon separately in a library function; if the caller can verify the data all at once before calling the processing function, that will probably be more efficient.
The bottom line is, it is difficult to always classify the solution in software engineering, as the solution usually depends on the domain of the problem.
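The batch idea above can be sketched as follows. The `Polygon` type and the at-least-3-vertices rule are mine: validation runs once over the whole batch, so the hot processing loop carries no per-polygon checks.

```cpp
#include <cstddef>
#include <vector>

struct Polygon {
    std::vector<float> vertices;  // flat x,y,z triples
};

// Up-front pass: every polygon needs at least 3 vertices (9 floats) and
// whole x,y,z triples. Run once, before any processing.
bool all_valid(const std::vector<Polygon>& polys) {
    for (const Polygon& p : polys)
        if (p.vertices.size() < 9 || p.vertices.size() % 3 != 0)
            return false;
    return true;
}

// Hot path. Caller contract: all_valid(polys) has already returned true,
// so no per-polygon validation is repeated here.
std::size_t process(const std::vector<Polygon>& polys) {
    std::size_t triangles = 0;
    for (const Polygon& p : polys)
        triangles += p.vertices.size() / 3 - 2;  // triangle-fan count
    return triangles;
}
```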
Build a man a fire, and he will be warm for a day. Light a man on fire, and he will be warm for the rest of his life!
|
|
|
|
|