Mitigating Bias

As a broad generalization, a cognitive bias is a short-cut in a decision-making process: an imperfect execution of a thought process.  The “imperfection” can be in either the inputs or the logic.  Here, I’m using “imperfect” in the sense of Game Theory, meaning “incomplete”, not in the sense of “you forgot to dot that ‘i'”.

By way of example, assumptions are often artifacts of a cognitive bias in the process of evaluating inputs.  Subsequent failures to identify and validate those assumptions constitute a separate bias in the logic.  Certain long-standing SE practices regard the latter as a process error [1], but they typically do not recognize the side effects of the former, which can (and often do) result in “confirmation” and “affirmation” biases [2].

It is difficult to discuss types of bias without diving into concepts of detection, description and categorization.  I could spend all day wandering through various psychological theories.  Engineers, however, are neither Psychologists, Scientists nor Politicians: I am interested only in the practicalities of the issues at hand.  It is sufficient to my purpose if I can identify practices that are insensitive to such issues (detection, description, categorization) but still reliably solve the problem.

I assert [3] here that cognitive biases are best mitigated by exposing the algorithms and inputs of the decision-making process to make them amenable to review and debate [4].  This means that we need only to (1) identify a decision point, (2) define the algorithm we propose to use in making that decision, (3) identify the inputs that are necessary to feed the algorithm, (4) review the first three elements before execution, and (5) review the results after execution.
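To make the five steps concrete, here is a minimal sketch in Python (purely illustrative; the DecisionRecord structure and its field names are my own invention, not part of any established SE standard) of how a decision point might be captured as an auditable record:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """One decision point, captured so the algorithm and inputs are auditable."""
    decision: str                                # (1) the decision point
    algorithm: str                               # (2) how we propose to decide
    inputs: dict = field(default_factory=dict)   # (3) what feeds the algorithm
    pre_review_ok: bool = False                  # (4) reviewed before execution?
    result: str = ""                             # outcome of executing the decision
    post_review_ok: bool = False                 # (5) reviewed after execution?

    def ready_to_execute(self) -> bool:
        # Execution is premature until items (1)-(3) exist and have been reviewed.
        return bool(self.decision and self.algorithm and self.inputs) and self.pre_review_ok


# Usage: the record only becomes worthwhile once both reviews have happened.
rec = DecisionRecord(
    decision="Select attitude-control actuator",
    algorithm="Weighted trade study over mass, power, and heritage",
    inputs={"mass_budget_kg": 4.0, "power_budget_w": 25.0},
)
rec.pre_review_ok = True          # peer review of the algorithm and inputs
assert rec.ready_to_execute()
rec.result = "Reaction wheels"
rec.post_review_ok = True         # review of the result after execution
```

The point of the sketch is not the data structure itself but the fact that every field is visible to a reviewer: nothing about how the decision was made stays locked inside someone’s head.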

All five elements of the mitigation process are necessary in order to make any of them worthwhile.  Mere documentation of the decision-making algorithm and inputs is a waste of time and money without a competent review; failing to review the algorithms and origin of inputs prior to execution will usually result in a strong bias toward their acceptance [5].  Failing to review the results of execution is just goofy.  Here, the notion of “review” depends on the circumstances, and ranges from peer review to formal contractual reviews and audits.

It is important to observe at this point that “mitigate” does not mean “eliminate”.  As a practical matter, the best we can usually do is to make things better.  We usually can’t afford to make things “perfect” [6], and the formality of the review process (including documentation) will usually depend on the cost-benefit trade made by those who are chartered to watch the budget [7].  Domain-specific experience and judgment still matter at least as much as theories about how decisions should be made.

Furthermore, I will not claim here that this practice is always the most efficient one available: only that it prophylactically mitigates any biases that might be present [8].  Other practices might be as effective under the right set of circumstances (and/or more efficient in terms of cost or time) even if they effectively rely on cognitive biases rather than try to mitigate them.  Their adoption is a cost/risk/benefit trade that cannot be made “in the abstract”; I make no generic recommendation for or against their adoption.  Any such recommendation would not be amenable to generalizations that are completely independent of operational and technical domains.  However…experience seems to suggest that greater cognitive bias does not typically result in better-engineered products [9].

As with many algorithm implementations, it will often be necessary to develop values for some or all of the inputs in order to enable execution.  Some types of inputs will be amenable to generic creation processes that are identical from one project to the next, but many will require domain-specific [10] logic in order to be of any use.  It may, in fact, be necessary to override some of the generic processes for certain projects…and we won’t always know for sure which ones must be overridden until we finish planning the project in detail, which typically can’t occur until after we’ve really understood the problem by starting to execute the project [11].  This notion leads directly to concepts like “project-specific process tailoring”, “in-process planning” and “course correction” [12].
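A minimal sketch of that tailoring idea follows (again in Python; the generator names and the notion of an “input registry” are mine, invented for illustration, and the numeric values are placeholders):

```python
# Generic (project-independent) ways of producing input values.
def mass_margin_default(_project: dict) -> float:
    return 0.30                                  # a generic growth allowance

def cost_estimate_default(project: dict) -> float:
    return 1.2 * project["baseline_cost"]        # a generic wrap factor

GENERIC_INPUT_GENERATORS = {
    "mass_margin": mass_margin_default,
    "cost_estimate": cost_estimate_default,
}

def build_inputs(project: dict, overrides: dict | None = None) -> dict:
    """Start from the generic generators, then apply project-specific overrides."""
    generators = {**GENERIC_INPUT_GENERATORS, **(overrides or {})}
    return {name: gen(project) for name, gen in generators.items()}

# A project that has learned, mid-execution, that the generic margin is wrong for it.
flight_project = {"baseline_cost": 10.0e6}
inputs = build_inputs(
    flight_project,
    overrides={"mass_margin": lambda _p: 0.15},  # project-specific tailoring
)
print(inputs)
```

The override mechanism is the “course correction”: the generic process stays intact for the projects it fits, and a project replaces only the pieces it has learned it must.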

The above notional process (steps 1 through 5) is the only meta-process [13] required.  I’ll apply it to some heuristically-identified major decision points, but it is available for application at all other decision points.  I take this rather lenient approach because the exhaustive application of the method would be…exhausting…and we’d never get anything done.

 

Footnotes
  1. As an aside, I have observed some parts of NASA deal with this issue remarkably well (particularly the Flight Directors).  Others, not so much.
  2. I’ll let you look those up for yourself.
  3. That is, without offering proof.
  4. i.e., “auditable”.
  5. Sometimes referred to as the “sunk cost fallacy”, which is a form of cognitive bias.
  6. Without flaw.
  7. Who are, unfortunately, often members of the non-cognoscenti.
  8. It’s a condom for your subconscious mind.
  9. This is a lot like saying “bad decision-making usually results in bad outcomes”.  This seems straightforward to me, but I still get a lot of push-back on it!
  10. Or even project-specific.
  11. The truthiness of this notion is directly proportional to the project’s complexity, state-of-the-artiness, cost, schedule pressure, and political implications.
  12. We System Engineers are often tempted to create processes of such abstraction that the relevance is invisible to all the non-initiated.  We need to resist that temptation, every chance we get.
  13. An abstract process for creating concrete processes.