|
Dave Kreskowiak wrote: what's the default for a newly created, but uninitialized pointer??
Whatever happens to be in that memory location at the time the pointer is defined.
|
|
|
|
|
I didn't see it in the spec and I don't have an up-to-date C++ compiler handy.
But I expect that if it isn't NULL already it soon will be.
|
|
|
|
|
PIEBALDconsult wrote: But I expect that if it isn't NULL already it soon will be.
No it won't. The new standard is ready to be adopted and there is nothing about it that would mandate such behavior. None of the compilers I used recently (MS and GNU) automatically initialize local variables.
|
|
|
|
|
|
I am sure you have some point you want to prove here, but it escapes me.
If you are saying that checking a pointer for NULL is going to make your programs more robust, I think I already demonstrated that you are wrong. You can check for NULL all you want and still have an access violation.
|
|
|
|
|
Nemanja Trifunovic wrote: I think I already demonstrated
No, while your point of view is valid, it carries little weight with us, as ours seem to with you. A program that checks for NULL pointers is (likely) more robust; we're not saying it will never crash, we're just saying it won't crash on something as simple to test as a NULL pointer, or if it does, it should at least give a clearer indication of what went wrong.
Corrie ten Boom[^] didn't save all the Jews in Holland, but she did what she could. Doing nothing because you can't do everything is not a way to go through life.
|
|
|
|
|
But in that example you didn't set the pointer to NULL and you know it.
A called method should not free something that was passed in, or if it's expected to, you'll need double indirection.
Find a better example, that one's a coding horror on its own.
|
|
|
|
|
PIEBALDconsult wrote: But in that example you didn't set the pointer to NULL and you know it.
Of course I did. Just after deleting it. I didn't set other pointers that point to the same object to NULL because it is impossible to do, and that was the point of my sample.
PIEBALDconsult wrote: A called method should not free something that was passed in, or if it's expected to, you'll need double indirection.
Find a better example, that one's a coding horror on its own.
Of course it is a horror - and you can't protect against such horrors by checking whether a pointer is NULL. That's all I am trying to point out here.
|
|
|
|
|
That really made my day, and I hope the project is not in C
|
|
|
|
|
Finding null-pointer risks is not easy at all, even with careful re-reading. This tool can help find (nearly!) all of them in C#, VB6 and Java. For this risk and lots of others, have a look at http://d.cr.free.fr/indexen.html
|
|
|
|
|
Pointers should be explicitly checked for null if there is a realistic scenario by which they could be null. For example, ptr = malloc(1024); will set ptr to null if the system can't allocate 1024 bytes for it. If the program isn't allocating much memory, such a scenario may be unlikely, but it's not unrealistic.
On the other hand, in something like:

    {
        int arr[5];
        int *p;
        int i;

        p = arr;
        for (i = 0; i < 5; i++)
            *p++ = i;
    }

there is no realistic way that p is ever going to be null. It simply can't happen.
|
|
|
|
|
I think it's more a question of, if you write a function (perhaps a library function) that takes one or more pointers, do you check them for null or let them blow up? And why?
|
|
|
|
|
PIEBALDconsult wrote: I think it's more a question of, if you write a function (perhaps a library function) that takes one or more pointers, do you check them for null or let them blow up? And why?
IMHO, the biggest questions would be:
- Is the operation in the null-pointer situation defined by the interface standard?
- Would the null-pointer situation have a logical meaning (e.g. it may be useful for a function that reads data from a stream to have an option to simply throw away some data; allowing the function to take a null pointer for such usage may be more elegant than requiring the use of a separate function)?
- Are there any circumstances that could cause a null pointer to be passed in accidentally?
- How would the probable consequence of passing in a null pointer compare with the best result one could achieve?
Incidentally, I found myself annoyed at the design of some TCP libraries which returned the same sort of failure code when a non-blocking write attempt was done on a port whose buffer was full as when such an attempt was performed on a port that was closed. The full-buffer case needs to be easily distinguishable from the closed-port case, since one will want to wait in the former case but not the latter. In my own libraries, I allow a write to a closed port to immediately return 'success', but then check whether the port is actually open. If the port closed unexpectedly, the data I'm sending will vanish into the aether, but the program won't crash. I may not know how much data vanished into the aether, but oftentimes (1) it won't matter, and (2) it may be impossible to know for certain if some packets get sent but never acked. A closed port isn't quite the same thing as a null pointer, but I think some of the philosophical arguments are similar.
|
|
|
|
|
Use assertions, as they are useful for checking for NULL pointers: Debug.Assert in C#.
Thanks and Rgds,
VamsiDhar.MBC
Software Engineer.
|
|
|
|
|
You are using pointers???
|
|
|
|
|
WTF!!!???? I think I might be close to speechless.
Has the man never had a blue screen of death and gotten annoyed? Never thought I'd hear of some genius campaigning for application failure...
|
|
|
|
|
Found in my own code:
while (t.IsAlive)
{
}
I laughed inside, that was quite a while back. t.Join() looks prettier.
|
|
|
|
|
Right, no one pointed Join out to me either until earlier this year.
|
|
|
|
|
Look at the bright side: if you have two processors (or a dual-core processor), the second one will be busy with something.
|
|
|
|
|
...and you will not lie if you say to a client: see that 100%? Our software fully utilizes CPU power.
Greetings - Gajatko
Portable.NET is part of DotGNU, a project to build a complete Free Software replacement for .NET - a system that truly belongs to the developers.
|
|
|
|
|
That was a funny comment for sure, but... unless he had raised his thread's priority above most other threads in the system, the o/s would preemptively schedule other threads to run, so both processors would still be doing some useful work.
|
|
|
|
|
Coding horror?
    private System.IO.Ports.StopBits stopBitsFromString(string stopBitsAsString)
    {
        System.IO.Ports.StopBits stopBits = StopBits.None;
        foreach (StopBits sb in Enum.GetValues(typeof(StopBits)))
        {
            if (sb.ToString() == stopBitsAsString)
            {
                return sb;
            }
        }
        return stopBits;
    }
Is this better?
    private System.IO.Ports.StopBits stopBitsFromString2(string stopBitsAsString)
    {
        try
        {
            return (StopBits)Enum.Parse(typeof(StopBits), stopBitsAsString, true);
        }
        catch { return StopBits.None; }
    }
JustAStupidGurl
|
|
|
|
|
Yes, and yes. But if you do that frequently, better to cache the names and values; may I suggest my EnumTransmogrifier[^]?
And in the spirit of the season I won't complain about the multiple return statements.
|
|
|
|
|
Well I've tried to stay out of that.
I think it's a mistake to become obsessive about 'programming rules' in cases where they are of marginal benefit. There is of course a difference between this (the two returns are pretty obvious) and yards of obscure code salted with multiple return statements.
JustAStupidGurl
|
|
|
|
|
justastupidgurl wrote: There is of course a difference between this (the two returns are pretty obvious) and yards of obscure code salted with multiple return statements.
Yes indeed. Definitely easier to take this way.
|
|
|
|