Sunday, June 14, 2009

September 05, 2006 (2): My Life and Hard Times in Requirements Analysis

From 1978 to 1981 I worked at General Research Corporation on problems of requirements analysis for the Army. I had taught programming methodology at the University of Pennsylvania at a time when people were beginning to use the term “software engineering” instead of “programming.” The focus on methodology grew out of the recognition that the design and construction of computer programs should be as much an engineering discipline as the design and construction of bridges or electronic devices. Engineering was about building something for someone else’s benefit; and that third party needed some confidence that the resulting artifact would do what it was supposed to do, reliably and safely. How would a software engineer determine what that artifact was supposed to do? It was assumed that the party for whom the program was being written would initiate the engineering process with some set of requirements, and software engineering methodologies were concerned with how one could effectively and productively proceed from those requirements to a reliable program that would satisfy the beneficiary.
Unfortunately, the real world has never been particularly sympathetic to abstract engineering methodologies; and, where software is concerned, the problem tends to start right with that beneficiary. How does anyone, the beneficiary or any programmer, know whether or not the requirements provided by the beneficiary are an accurate representation of what that beneficiary really wants? The history of software engineering has been plagued with case studies that begin with the sad truth that requirements are actually a very poor representation, particularly if the beneficiary does not have a clear idea of what the program is supposed to do; and the lack of a clear idea is often a product of the unholy alliance of a beneficiary who does not appreciate what it is reasonable to expect and a software engineer eager to promise the world.
Thus, it is not enough for the Department of Defense to want to protect the United States from missiles carrying nuclear warheads. While that is an understandable request, it is not a requirement that can be translated into the design of a computer program. Knowledge engineering operated under a similar illusion: One cannot simply identify an expert, say in the area of medical diagnosis, and “engineer” a “representation” of the “knowledge” behind that expertise. When I worked at Schlumberger, I had the good fortune to work alongside some of the best minds who knew how to interpret the complex and obscure measurements taken in the boreholes of potential oil wells; but I quickly learned that they did not live in a world of requirements and specifications that could then be converted into computer software.
Fortunately, I discovered, essentially by accident, an alternative approach. Rather than working among my colleagues in the software systems group, I asked to have an office in the wing in which all these experts worked. Using a Xerox Lisp Machine, I would prototype programs to interpret test cases of these measurements, yielding displays of both the measurements and the interpretations. This became an excellent conversation-starter. An expert would wander past my office, see the display, and come in for a closer look. Examining both the data and the results, the expert would almost always have the same reaction: “Why did your computer do a damn fool thing like that?” My reply was also almost always the same: “Why is that a damn fool thing? I just did what the training manuals said!” This would start an extended lecture on why what you did in the real world had nothing to do with what the training manuals taught you; and, enlightened by that lecture, I could go back to work on the program. Essentially, I had discovered a user-centered, evolutionary approach to identifying and satisfying requirements that reflected what the beneficiary really wanted. Since those days I have learned that I am far from alone in appreciating this methodology, but my opinions are still very much in the minority.
In retrospect, however, I can appreciate why this methodology has not caught on in the software engineering community; and the reason can be traced back to that distinction, first raised in Plato’s “Republic,” between lexis (word) and praxis (act), which has become one of my favorite topics. The idea that requirements can be “represented” at all presumes the construction of an artifact that is basically a lexis structure, even if the “words” are elements of a formal language, rather than a natural one. At Schlumberger, however, I came to understand the nature of requirements by becoming familiar with the praxis of the experts who could come into my office and make fun of what my prototype programs were doing. Unfortunately, documentation is the bread and butter of engineering methods, whether they are based on natural language or some combination of formal representational systems (blueprints, flow charts, algebraic specifications, etc.). There is no place for praxis in such documentation. Indeed, as I have already observed, John Seely Brown and Paul Duguid have gone so far as to call those documents “Abstractions detached from practice [that] distort or obscure intricacies of that practice.”
The problem, which has been a recurring theme in this blog, is that lexis structures are static, which makes them conducive to representations, which, in turn, are conducive to analysis. Praxis, on the other hand, is, by its very nature, dynamic, making it particularly elusive to most methods of analysis. However, rather than ignoring praxis because it eludes our methods, we should be seeking out alternative methods that, to invoke the language of Richard Neustadt and Ernest May, facilitate our ability to “think in time.” This is a lesson that still continues to grow on me.
Labels: Brown, engineering, knowledge, lexis, logging, oil well, praxis, requirements, Schlumberger, time