Over the years, I've seen many proposals for better ways to do programming. They all sounded great to me at first. Structured programming, abstract data types, waterfall life cycle, spiral model, rapid prototyping, formal inspections, test then code, 2167a, structured analysis & design, CASE tools, cleanroom, object oriented programming: when I read about each one, I was sold. At last, a method that would eliminate bugs! I'd read further, talk over the new method with my colleagues, and try to apply it to whatever the current project was.
What worked? It's hard to separate the effect of new techniques from the effects of increasing experience, extra attention because we were using new methods, and differences between projects and people. Every new idea helped; none was magic.
What didn't work? This, too, is hard to say. For example, the CASE tools were late, buggy, capacity limited, and slow. They helped, but not as much as we'd hoped, and their costs and drawbacks were greater than we'd anticipated. Almost all the new methods were invented by great programmers, who used their methods to do great work. When others tried the same methods, their results were less spectacular. Every method had its disappointments.
What have I learned so far?
People and programs are different. No method works for everybody or every program.
You can produce bad code with any method. To produce good code, you still have to work hard; no method substitutes for intelligence and focused attention.
After-the-fact methods aren't enough. Testing and inspections try to catch bugs soon after introduction. That is, the model is that the programmer puts a bug in, the bug is caught, and the programmer takes it out. All of these steps are waste motion, producing nothing that ships to the customer; and often bugs leak through anyway. Non-trivial programs have so many states and possible behaviors that no practical amount of testing will give confidence that a program is bug-free. Not putting the bug in in the first place is the least expensive way to ship a bug-free program.
Design methods and reviews aren't enough. These approaches try to stop bugs by specifying more before programming begins. Abstract data types and object oriented programming also operate before coding starts to improve the eventual quality of code. But when we get down to writing code, bugs still get into the product. Since bugs are introduced during programming, we should change the way we program.
Tools, languages, and reuse aren't enough. If you do any of these wrong, you can destroy quality, but we've tried all of these long enough to know that they don't create quality.
It is possible to write perfect, bug-free code. I've seen it done, with no tool except a pencil. The essential ingredient is a decision, by the individual programmer, to make the code perfect, and not to release it until it is perfect.
Suppose we asked every programmer, "For this project, how will you make your program perfect?" If one plans to use tool X or method Y, fine, we don't argue. If another claims to be so smart he or she can do it just by being very careful, fine. The only unacceptable answer is refusing to attempt bug-free code.
(One thing we might try is "dual programming." By this I mean that two programmers work together on a single program and write every line together. This worked for me and a colleague in the sixties. We should evaluate the costs and benefits systematically, by trying it. Extreme programming suggests this practice.)
As code is produced, we still inspect and test it. If we find bugs, we don't punish mistakes; we learn from them. So if being very careful isn't enough for some programmer, then that person seeks additional tools, techniques, buddies, or whatever is necessary. Inspection and testing become verification that our process is working and evolving in the right direction, instead of bug-removal refinery processes.
Copyright (c) 1995, 2000 by Tom Van Vleck