Systems can be complex[1], needing their many pieces to conform in form, fit, and function to meet the user’s needs and intentions. The development of systems is often funded by non-cognoscenti who, in most cases, wish to be certain that their money is well spent. It seems reasonable to suspect that this “funding control” problem has existed for centuries.
In earlier times, it was often sufficient for members of the ruling class to acquire enough technical expertise to exercise direct control over their projects. As the breadth and implications of technology grew too large for any single person’s purview, specialization became necessary and a spending-control gap emerged. The gap widens with increasing specialization, because the sources of funding can neither understand the details of execution nor adequately predict the outcome of development[2].
Trust, in various forms, has often been the paramount control practice under these circumstances. The guild and apprentice systems that established Master Craftsmen of various specialties (an early version of independent certification) are good examples. Implicit forms of trust were also derived from decades of mutual experience with (and between) overlapping generations of skilled cadres.
Trust-based practices remain in place today in many domains. Trust can be efficient and effective, especially in an evolutionary development setting. Much of the potential benefit derives from the developers’ applicable experience and the customer’s prior experience with those individual developers.
“Efficiency” and “effectiveness” are highly desirable traits, but any trust-based practice is susceptible to cognitive biases (e.g., confirmation bias): the concept of trust doesn’t mandate formal verification. This is especially true of implicit trust, where the utility of audits typically goes unrecognized and, as with all process errors, mitigating cognitive bias requires deliberate effort.
Note that a distinction is made between “cognitive bias” (on the one hand) and nonfeasance, misfeasance, and malfeasance (on the other). Having a bias isn’t criminal, but ignoring one can be if somebody gets hurt.
Even though the concept of System Engineering (SE) is older than the term “cognitive bias”, I contend that the benefit of SE is most self-evident when it is construed as a standardized discipline for mitigating such biases.[3] If conducted under Configuration Control[4], it can also build a body of objective evidence explicitly showing the degree to which the user’s needs and intentions have been met. Such evidence lends itself to the rational acceptance of competent designs and the rejection of unsuitable ones.
This notion suggests that SE is not necessarily mandated under all developmental circumstances: it is best applied where the applicable technologies are new to a domain or to the developers, or where the domain itself is outside the detailed expertise of the developer or acquirer. Where none of these conditions hold, initial development costs may well be minimized by other practices[5]. This is not to say that such situations derive no benefit from SE techniques…only that developers might get by without them.
Like all practices, SE can either succeed or fail. At its best, it provides a cogent exposition of unbiased, objective evidence showing compliance with customer need and intent. At its worst, it provides a blizzard of irrelevant information. The motivation here is to define the characteristics and processes that lead to the first outcome while avoiding the second.
Footnotes

1. Whether modern or otherwise: see also rigging.
2. This usage of “development” is a rather modern notion. I’ve used the anachronism to more easily make my point.
3. This notion communicates the discipline’s utility far better than unverifiable claims of cost avoidance.
4. A dependent, but separate, subject.
5. Which may, or may not, be documented and institutionalized in a disciplined manner.