Recently I read an article on sizing software with testable requirements. The author cites a survey conducted by Mosaic, Inc. and the Quality Assurance Institute, which found that only 23 percent of project managers measure the progress, effectiveness, and productivity of their software development efforts. The author is amazed that, when the costs to develop, maintain, and fix software are so great, software developers ignore this important activity.
The author concludes that one important reason is that current software sizing measures are not flexible enough to meet the needs of today’s software developers.
The author does a nice job of stating the importance of software sizing. A software sizing measure is fundamental to any software measurement program. While estimating cost and schedule is probably the most common use of a sizing measure, there are many other potentially valuable applications, including progress measurement, earned value, risk identification, and change management.
According to the author, there are only two software sizing measures widely used today: lines of code (LOC) and function points (FP). And, although each is a sizing measure, the two actually measure different things and have very different characteristics.
LOC is a measure of the size of the system that is built. It is highly dependent on the technology used to build the system, the system design, and how the programs are coded. There are many well-documented problems and issues with LOC. In fact, Capers Jones has stated that anyone using LOC is “committing professional malpractice.” Despite these problems, LOC is still frequently used by very reputable and professional organizations.
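The technology dependence of LOC is easy to demonstrate. The sketch below is a minimal, hypothetical LOC counter (using one common convention: count non-blank, non-comment lines) applied to two functionally identical snippets. The convention itself is an assumption; real tools disagree on exactly what counts as a line.

```python
def count_loc(source: str) -> int:
    """Count non-blank, non-comment lines (one common LOC convention)."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

# The same logic, written two ways, yields different LOC counts:
verbose = "total = 0\nfor n in [1, 2, 3]:\n    total += n\n"
compact = "total = sum([1, 2, 3])\n"
print(count_loc(verbose))  # 3
print(count_loc(compact))  # 1
```

Two programs that deliver identical functionality can differ threefold in LOC purely because of coding style, which is exactly the kind of distortion Jones objects to.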
In contrast to LOC, function points (FP) are a measure of delivered functionality that is relatively independent of the technology used to develop the system. While FP addresses many of the problems inherent in LOC, and has developed a loyal following, it has its own set of issues.
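To make the FP idea concrete, here is a simplified sketch of an unadjusted function point count using the standard IFPUG average-complexity weights. The example system counts are hypothetical, and a real count would also classify each item as low, average, or high complexity and then apply a value adjustment factor.

```python
# Standard IFPUG average-complexity weights (simplified; a full count
# classifies each item by complexity rather than using averages).
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def unadjusted_fp(counts: dict) -> int:
    """Unadjusted function points: weighted sum of counted function types."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical small system:
system = {
    "external_inputs": 6,
    "external_outputs": 4,
    "external_inquiries": 3,
    "internal_files": 2,
    "external_interfaces": 1,
}
print(unadjusted_fp(system))  # 6*4 + 4*5 + 3*4 + 2*10 + 1*7 = 83
```

Note that nothing in the count refers to a programming language or design, which is why FP is relatively technology-independent; its issues lie instead in the subjectivity of identifying and classifying the items being counted.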
Because LOC and FP have been the only widely accepted ways to size a system, a software developer’s measurement choices have been very limited. Most have opted not to measure at all.
So the author recommends using the concept of testable requirements as a test of the quality and detail of system specifications, and states that testable requirements also offer a new paradigm for measuring the size of a system.
On the whole, testable requirements are surely better than LOC, but they look similar to FP to me. I’m not convinced that this will solve the key problem of measurement. In reality, I don’t think we can measure a software system, because we don’t understand or know all the requirements in a timely fashion. In the case of real estate or oil [examples used by the author], you have tangible things which can be measured.
To me, measuring a software system is like measuring an idea or measuring communication: it is not tangible. We can only have a vague sense of measurement; one cannot say for sure that x units of the system will always map to the same x units. It depends on who is interpreting those units and when. In fact, so far we don’t even have a decent first measure of the quality of a software system. We use proxies like the number of bugs as a measure, and everyone knows how misleading that mechanism is: add more testers with strong domain and technical knowledge, and you will easily find more bugs in the same software system.
I agree that the ability to measure can help us with estimation, risk management, and planning in general. But should we put arbitrary numbers out there just to achieve this? Whatever happened to the agility of software development?