Developers should test their code at the unit level.
Developers should NOT test business functionality, or test at the functional level.
---
Could not agree more!!!
AlbertDadze
---
...are where it's at, IMO. It gets the developer directly involved with how the customer is going to use the software, and can be really eye opening. And frankly, that's where bugs that have to do with process come out, which is a lot more interesting than, gee, does the unit test for this function pass? While useful, a unit test is like checking that the tires are inflated, but you don't know if the car can handle 150 MPH.
Marc
---
There's no "Hell No" option.
I mean why? What's the point? Is the codez long enough? Is it copy-paste capable?
Is the codez following encapsulation? I mean is the codez encapsulated in easy to copy and paste #regions?
Is it inheritable, as in from one person to another? If yes, then it's all good. There's hope.
I'm right ain't I?
I used to think....
Finally I realized it's no good.
---
-Developers write code and test it.
-Testers test the finished, developer-tested code.
For each bug a tester finds: the tester gets +1 point and the developer gets -1 point.
-Developer is paid: base developer salary + (-points) * coefficient.
-Tester is paid: base tester salary + (+points) * coefficient.
This model works when: base tester salary < base developer salary.
-Tester salary can only go up, if they find bugs.
-Developer salary can only go down, if testers find bugs.
A third level to this model can be the user. For each bug a user finds in code the tester signed off as bug-free, the tester gets -1 point (these points are then deducted from the tester's salary with some other coefficient).
---
This will surely lead to WW III!
---
Scenario 1) Developer sets up deal with tester: Introduce bugs for a cut.
Developer salary cannot drop below 0, and cut would (obviously) not be declared for tax purposes. Everyone wins...
Scenario 2) Tester just reports bugs even if they don't exist. Intermittent ones are fun. Tester gets rich, everyone else is really pissed off.
Scenario 3) Developer quits as fast as possible and tester is made redundant when company goes out of business as no developer worth his salt will take the job.
Real men don't use instructions. They are only the manufacturer's opinion on how to put the thing together.
---
Yeah right!!
You can start that model yourself.
After the first pay cut you'll drop it.
---
I'm generally against the developer side of this because it is a negative incentive.
While still a negative incentive, this would be a little more reasonable:
-Developer is paid: Base developer salary + the greater of zero or (SomeNumber - points) * coefficient.
With that at least, the worst a developer could do is base salary. Of course the best course of action for the developer under a payment plan like that is to write as little code as possible. The more code you write the more bugs you're likely to produce. In other words your most productive developer and/or the developer tackling the toughest problems would get the lowest pay.
This might be a pay scheme that would work where life was at stake, but where I work the worst that happens is your bug inconveniences a few workers.
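The floored variant proposed above could be sketched like this (illustrative numbers throughout; `some_number` and the coefficient are assumptions for the example):

```python
# Hypothetical sketch of the floored pay variant:
#   base salary + max(0, some_number - points) * coefficient
# The worst a developer can do is base salary; bug points eat into a
# capped bonus instead of the salary itself.

def developer_pay_floored(base_salary, some_number, points, coefficient):
    return base_salary + max(0, some_number - points) * coefficient

# A bug-free month (0 points) pays the full bonus: 5000 + 10*100 = 6000.
good_month = developer_pay_floored(5000, 10, 0, 100)

# A disastrous month (25 points, more than some_number) still pays base: 5000.
bad_month = developer_pay_floored(5000, 10, 25, 100)
```

Note the perverse incentive mentioned above survives the floor: writing less code still means fewer points and a bigger bonus.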
---
Well, my model above is not only provocative. It is plain wrong.
Using (bug) metrics to affect payments is not a good option, as people will spend their time gaming and optimizing those metrics and fighting over them, rather than working.
However, most companies, including mine, use similar metrics to affect payments. Those models are not as obvious as what I wrote, but they are often similar (even though our testers' payment model was, for some time, exactly as above - a daily nightmare).
No one knows how much money this really costs. I am just surprised that, time after time, those who have the power to decide on such things think it is a nice idea to try it out in different forms.
---
I think, after some basic testing, it's the tester's task to examine the program carefully. The programmer often just won't find the bugs, because he doesn't press the buttons in random order, or in an order that was never specified.
However, what do you do if there are no dedicated testers?
---
I let the other programmers on the team test my code, and if necessary I give it to the one or two people in the department who are good at making commercial software break.
John
---
John M. Drescher wrote: I give it to the one or two people in the department that are good at making commercial software break..
Too bad there is no CG in my department ...
---
Answers are nothing but wishful thinking
---
No, really... besides, unless at gunpoint, which developer tests his own code WELL? I mean not just "yes, it compiled" or "oh yeah, I ran a (very simple) test case and it worked". Developers like to WRITE code, not test it. Testing is a waste of time... so much more code could be written in that time.
---
I take it you are speaking for yourself? Because you are certainly not speaking for me.
I (like many others here) take pride in my code, and strive to release it only when it is as fault-free as I can make it. It is then the tester's job to make sure I didn't miss anything and have interpreted the requirements correctly.
"Just bash the code out" is a piss-poor excuse for programming, and it annoys the hell out of me when I meet such software. At least now I will know one person to blame for it in future...
Real men don't use instructions. They are only the manufacturer's opinion on how to put the thing together.
---
As a manager and a programmer, this is the one way to get on my bad side. I mean, if any of my programmers submit code to me as done, but they obviously have not even run it in the debugger, I get very upset. I do not expect code to be bug-free, but the obvious bugs, the ones you would catch by running the program for 5 minutes in the debugger, should have been caught.
John
---
This is what I am talking about - 5 minutes of debugging to check the most obvious problems. No imagination to check anything weird, like filling in the form bottom-up, or in chaotic order.
No developer would admit to being lousy at testing, but this is the sad truth. Give a product to real customers (read: people who don't want to use it and will try to hack it in every possible way), I mean - give it to them after one week of "thorough testing", and in the next day or two you get a few pages of bugs and error reports.
The thing is that programmers who design a system know how it works, and expect it to be used "properly". Even simple stuff, like entering $123.45 as a price instead of 123.45, may go through undetected.
To everyone who claims to test well: do you spend more time testing your code than it takes to write it? Really?
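The price example above can be made concrete. This is a hypothetical illustration: a naive parser (`parse_price` is a made-up helper, not from any real product) handles exactly the input its author had in mind, and nothing else.

```python
# Hypothetical illustration: a naive price parser that only accepts the
# "proper" input the developer expected.

def parse_price(text):
    # Works for "123.45"; blows up on the "$123.45" a real user will type.
    return float(text)

# The developer's own 5-minute test passes...
assert parse_price("123.45") == 123.45

# ...while the first real user crashes it.
try:
    parse_price("$123.45")
    survived = True
except ValueError:
    survived = False   # float() rejects the currency symbol
```

A tester (or a hostile user) finds this in minutes, precisely because they don't know, or don't care, what the "proper" input is supposed to look like.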
---
I guess you don't get the irony in my post.
Tell me, how many developers do you know who test their code thoroughly? And how much time do they spend on testing? Anything close to the time it took them to write the code?
Most developers THINK they test their code well. Believe me, in 20 years I have seen so much code and dealt with so many developers, and I can responsibly say that testing isn't their strong side.
modified on Tuesday, December 14, 2010 8:36 PM
---
nsimeonov wrote: And how much time they spend on testing
I tend to spend about the same time testing as actually coding; I hate it when a tester comes back with an obvious error that should have been picked up during development.
Never underestimate the power of human stupidity
RAH
---
Developers should fully test their own code and work with testers to produce a test plan and a regression test plan. The testers should use the test plans plus some additional random tests they think of.
JR
---
Define "fully"
I believe that integration and system-level tests, aka "validation tests", should be performed independently.
But unit tests and other verification activities, such as code reviews, should be done by the developers.
I subscribe to Test Driven Development.
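For what it's worth, the TDD rhythm can be shown with a trivial, hypothetical example (`clamp` is invented for illustration): the test is written first and defines the behaviour, then just enough code is written to make it pass.

```python
# A minimal, hypothetical illustration of the TDD rhythm.

def test_clamp():
    # Written BEFORE clamp() existed; this test defines the behaviour we want.
    assert clamp(5, 0, 10) == 5     # in range: unchanged
    assert clamp(-3, 0, 10) == 0    # below range: pinned to the low bound
    assert clamp(42, 0, 10) == 10   # above range: pinned to the high bound

def clamp(value, low, high):
    # Just enough code to make the test pass.
    return max(low, min(value, high))

test_clamp()  # red -> green: the test now passes
```

The point is the ordering, not the example: the test fails first, and the production code exists only to satisfy it.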
---
fully = to the best of their limited abilities.
JR
---
I don't like the word "fully". I previously worked as a dedicated software tester for 6 years in a large international company. That company had both the time and the money to divide tasks so that a person doesn't end up with a conflict of interest. In smaller companies, one has to think a little smarter to achieve a similar goal.
My experience from that time is that a developer should write code and do some "simple" test activities, such as code reviews and unit testing. All other test activities must be left to the dedicated testers.
As a functional tester, I had many long talks with developers during construction, for example when I needed to construct sequence diagrams for my functional tests. At that point in the process, the developer was only a "consultant"; he/she didn't own the test process.
If a company can't afford dedicated testers, it should at least avoid letting a developer test his own code.
There's not a data type big enough to count all the bugs and corrections in a project that were found by a fresh pair of eyes