|#187056 - Why not a firm objective?|
Responding to: Andy Neil's previous message
Andy Neil said:
I have been thinking myself that the major stumbling block with much open-source code is the documentation (or, more likely, the lack thereof).
I guess the fundamental problem is that open-source is written by coders "for fun" (as opposed to, "for profit") - and most coders do not consider documentation to be "fun".
That's true, but it's also the difference between work and play.
Richard Erlacher said:
documentation should be complete, accepted, blessed, with Holy Water sprinkled upon it, before the first software weenie is allowed to sit down at his computer ... firm and "absolutely etched in stone" documentation
It is very seldom possible for the documentation to be "absolutely etched in stone" at the start of a project with any significant level of novelty or innovation - in other words, any development project.
There are too many things that just cannot be known at the outset.
I find that puzzling. The documentation represents 95% of the effort in the design and development of a piece of code. That means that it has to be studied, analyzed, decomposed into its absolute requirements. If that work isn't done, no coding effort is worth pursuing. Without that, there's nothing against which to verify that the code is what it should be. The key is to do the hard work, the documentation, up front.
But I thoroughly agree that there does need to be a clearly-defined set of requirements, constraints, etc.: what, exactly, the project's work product is supposed to do, how it's to do it, and how large and slow it's allowed to be.
And there needs to be an agreed change process: changes will inevitably need to be considered, so there must be an agreed way to assess their impact and to decide whether each one is avoidable, necessary, worthwhile, or essential...
Change of what? If the objective task is thoroughly analyzed and decomposed into its absolute requirements, and each requirement is addressed by a specific piece of code - that is, each module is justified by a specific requirement, so that it's known in advance how the code implements the functions specified in the docs - then there should be no need for changes between documentation and work product.
So, back to your original point, you do need something as a starting point - but it is usually not helpful to consider it "absolutely etched in stone".
Isn't that how we end up with code that not even the coder can understand? How can you allow a coder to decide, "Well, it doesn't really have to do that ..."?
Too often half the code is written before requirements analysis is performed.
If the requirements aren't "required," then perhaps the documentation task wasn't completed before coding began.
One of the good things about open-source projects is that the ones doing the work get to decide how it is to be done. If they don't decide what it has to do and how it has to go about it, their job isn't done. Too often I see people saying, "I can use this for that, and this other thing I've got for that ..." rather than simply deciding how things are supposed to work and how to make them do that.

It's no different in hardware. If you look at a task from the bottom up, all you can see is how to do what you already know. When you look at it from the top down, and thoroughly, things look quite different. I'd guess the key is to focus on the "big picture" rather than the parts. Yes, using "stuff" you already have is easy and can get you part-way there, but it often gets you somewhere you don't really need to go.