Topical Parameterization, Part III: Measures

This is the third of three pages on the topic of parameterization, dealing mostly with the relationship between parameters and the means by which values are calculated.  The first page described the properties of individual topical parameters.  The second page considered topical parameters as sets.

My Standard Caution About Abstractions applies to this page.

Introduction

System Engineering has little use for numbers that appear as if by magic.  The concept of traceability addresses part of that problem, but doesn’t impose enough rigor to reconstruct changes to the values during major development projects, even with detailed rationale as recommended here.  I use the concept of a “measure” to deal with the issues that are specific to producing parameter values[1].

I arrived at this posture by way of the legacy practices that referred to the quantification of Performance Requirements as “Measures of Performance”.  A comprehensive definition of each such measure typically incorporated[2] the technical relationships by which values were calculated.  That information was critical to the acceptance of the requirement as being valid[3].  It is formalized here to establish a data structure that is applicable to all forms of technical characterization.

A Structured Perspective

I find it more effective to approach this concept from a “structured” perspective.  This is particularly true because, when used in the context of technical characteristics, the concept rigorously associates the algorithm[4] used to calculate values[5] for topical parameters with the related inputs[6].  In this context, therefore, a measure constitutes a tiny wee model for one or more technically relevant, quantifiable aspects of the dingus.

See Tables 1 and 2 for the elements incorporated into the structured concept of a “measure” as used here.  Note that Table 1 is implicitly hierarchical rather than relationally normalized, and that additional data elements might also be useful in practical administration.

Table 1: Structured Data for a Measure

– Nomenclature: Common name of the measure.
– Description: Prosaic definition.
– Input Arguments: An array of parameters (see Table 2) operated on by the algorithm.  Inputs are “circumstantial” from the perspective of any given measure[7], even though they might be “intentional” in the wider context of the big-picture dingus itself.  Their values might be applicable to specific versions of the dingus’ design, such that the algorithm can be more generally applicable than the eventual specific design.  They might also be applicable to circumstances relevant to the design’s operation or state[8].
– Output Arguments: An array of parameters (see Table 2) for which values are produced by the algorithm[9].  They may be either (a) directly relevant to the subject topic or (b) circumstantial to other measures (and topics) in a context wider than the measure that produced them, but (in any case) consequential to or correlative with outputs of class (a).
– Meta-arguments: An array of configuring variables (see Table 2) used to tune the algorithm, distinct from the inputs that are directly relatable to the characteristics of the dingus itself; examples include gains and other constants for tuning and configuration.  Depending on implementation, they might (or might not) be structured as input arguments[10].  They are (however) intrinsic to the evaluating algorithm, and do not themselves quantify any dingus characteristics[11].
– Algorithm: The relationships used to specify points in the output domain based on the inputs.  Forms of expression include (but are not limited to) equations, executable code, object code, functional flow block diagrams, pseudo-code[12], SQL, state transition diagrams, and reference to configuration-controlled source code.  See the additional discussion in the body of the text for this page.


Table 2: Structured Data for a Measure Argument

– *Parameter: An array of one or more pointers to parameters.
– Units: Units as expected by the algorithm.  See what I think I know about units.
– Position: Position in the ordered list as specified for the algorithm.
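
For readers who prefer code to tables, here is a minimal sketch of Tables 1 and 2 as Python dataclasses.  All names are illustrative assumptions, not prescriptions, and the “algorithm” element is reduced to a callable for brevity even though the table admits many other forms of expression.

    from dataclasses import dataclass
    from typing import Callable, Dict, List, Sequence

    @dataclass
    class Argument:
        """Table 2: one argument of a measure."""
        parameters: Sequence[str]  # an array of one or more pointers (here, names) to parameters
        units: str                 # units as expected by the algorithm
        position: int              # position in the ordered list specified for the algorithm

    @dataclass
    class Measure:
        """Table 1: structured data for a measure."""
        nomenclature: str               # common name of the measure
        description: str                # prosaic definition
        inputs: List[Argument]          # "circumstantial" from this measure's perspective
        outputs: List[Argument]         # "consequential": values produced by the algorithm
        meta_arguments: List[Argument]  # intrinsic to the algorithm; quantify no dingus characteristic
        algorithm: Callable[..., Dict[str, float]]  # one of several admissible forms of expression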

Tables 1 and 2 essentially allow the System Engineer to operate as a “linker”[13]; that is, to “stitch” together project-specific chains of logic that systematically provide values for characteristics of the dingus[14].  In some contexts, the stitched-together sequences might be referred to as a Design Analysis Cycle[15] or a Verification Analysis Cycle[16].
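
As a hedged illustration of that “linker” role, the sketch below stitches a set of the hypothetical Measure objects sketched above into an evaluation order by matching outputs to inputs.  It is a plain topological sort, nothing more:

    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    def link_measures(measures):
        """Order measures so each runs after whatever produces its inputs."""
        producer = {}  # parameter name -> nomenclature of the measure that outputs it
        for m in measures:
            for arg in m.outputs:
                for p in arg.parameters:
                    producer[p] = m.nomenclature
        graph = {
            m.nomenclature: {producer[p]
                             for arg in m.inputs
                             for p in arg.parameters
                             if p in producer}  # inputs with no producer are asserted elsewhere
            for m in measures
        }
        return list(TopologicalSorter(graph).static_order())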

With suitable definition of the word “algorithm”, this concept applies to all types of technical characteristics, so not all measures produce “quantities” in the lay sense of the term[17].  Some measures address, for example, lists of published documentation, and so are evaluated as an accession identifier in a one-dimensional set of documents owned by some specific release authority, according to specific rules defined by a Regulation, Contract, or Statement of Work.  This flexibility extends to the notion of algorithms that operate as “sources”, asserting a value, or as “pass-throughs”, trivially extracting a value from some identified source of documentation[18].
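
Both flavors fit the same structure.  A minimal sketch, with every name hypothetical:

    def assert_max_gross_weight():
        """A 'source': no inputs; it simply asserts a value."""
        return {"max_gross_weight_kg": 19500.0}

    def cite_safety_case(document_index, accession_id):
        """A 'pass-through': trivially extracts a value from an identified document source."""
        return {"safety_case_reference": document_index[accession_id]}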

The utility of this notion lies in the effort of creating the network of measures for a dingus[19], specifically by virtue of the insights gained into the options and sensitivities of the various elements.  Such insights are crucial when visited by the Vicissitudes of Developmental Fortune, which happens pretty much every time I turn around.  I have found no comparable, systematic way to acquire and organize those insights while simultaneously spanning multiple technical disciplines and operational domains[20].

Merely having a network of connected Measures helps a little bit when handing off an assignment from one person to another, but it is of little use otherwise.  For example, it is usually at far too low a level of detail for formal schedule or budgetary tracking[21].  I have never published such a network as a deliverable data item, and would object if anyone tried to make me do that.[22]

Observations

(1)  This concept does not prevent the outputs produced by any one measure from serving as input parameters to some other measure[23].  If the various topics, measures, and parameters of a single dingus[24] are rigorously coordinated, they can be aggregated to simultaneously model multiple aspects of the subject dingus[25].  The aggregated set of measures constitutes the coordinated explanation for why specific[26] values characterize the dingus[27].
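
Continuing the earlier hypothetical sketches, and assuming for simplicity that each algorithm accepts its input parameters as keyword arguments, chaining falls out naturally: the shared pool of values grows as each measure in the linked order deposits its outputs, which later measures then consume.

    values = {}                                        # the shared pool of parameter values
    by_name = {m.nomenclature: m for m in measures}    # 'measures' as in the linker sketch
    for name in link_measures(measures):               # evaluation order from the linker
        m = by_name[name]
        args = {p: values[p] for arg in m.inputs for p in arg.parameters}
        values.update(m.algorithm(**args))             # outputs become inputs downstream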

(2)  This concept of “measure” places no restriction on the existence of a (sub-) set of parameters held in common by more than a single measure for more than a single topic, provided that any given parameter is evaluated as an output by a single measure under any unique region in the space formed by the circumstances[28], for any given purpose.  Three observations follow:

(a)  A connected collection of a dingus’ measures[29] can operate in more than just one mode, with different measures[30] being applicable during different circumstances.  This should be obvious by inspection, since the definitions would still be met if we were to create an aggregating algorithm that switches between distinct sub-algorithms based on the values of the input arguments[31]; a sketch of exactly that follows this list.

(b)  “… any given parameter is evaluated as an output by a single measure under any unique region in the space formed by the circumstances, for any given purpose…” is important because ambiguities are bad.  Trouble arises when algorithms or sets of circumstances overlap in their parametric space(s), and when purposes overlap in time; the sketch following this list guards against the first of those overlaps.

(c) Use of the phrase “…for any given purpose” suggests several subordinate observations:

(i)  This concept is general in the sense that we can (for example) use one structured measure to calculate values for a topical parameter when developing requirements, a distinct measure to provide values when verifying that requirement, and a third measure to estimate the value under operational conditions not directly covered by the specified circumstances.  All of these purposes are distinct from one another.

(ii)  Table 1 imposes no restrictions with regard to combinations of arguments (either input or output) being the same for all uses.

(iii)  Table 1 imposes no restrictions with regard to using the same algorithm for any combination of purposes.  It also imposes no restriction with regard to using the same executable model.  In both cases (however) it facilitates explication of such re-use, along with the potential for introducing bias into related decision-making processes.

(iv)  Table 1 will require augmentation to administer the different purposes.  That administration is outside the present subject[32] and is, therefore, not addressed here.
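
To make observations (a) and (b) above concrete, here is a deliberately simplified sketch: an aggregating algorithm that switches between sub-algorithms over disjoint regions of a single circumstance axis, with a guard that rejects the overlap ambiguity warned about in (b).  Real circumstance spaces are, of course, rarely one-dimensional, and every name here is hypothetical.

    def make_switching_algorithm(regions):
        """regions: list of ((low, high), sub_algorithm) pairs on one circumstance axis."""
        spans = sorted(span for span, _ in regions)
        for (_, high1), (low2, _) in zip(spans, spans[1:]):
            if low2 < high1:
                raise ValueError("ambiguous: circumstance regions overlap")

        def algorithm(circumstance, **inputs):
            for (low, high), sub in regions:
                if low <= circumstance < high:   # each region claims its outputs exclusively
                    return sub(**inputs)
            raise ValueError("circumstance not covered by any region")

        return algorithm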

(3) No restriction is placed on the number of measures that are supported by any single algorithm.

– Note, however, that there are few good reasons to have parameters that appear to be independent outputs of the same algorithm showing up in different measures[33].

– If two or more parameters must, for technically valid reasons[34], be evaluated by a single algorithm using the same set of input values, it is usually a good sign that they are actually elements of a single (non-scalar) parameter.

– Note that this notion makes no statement with regard to the packaging of multiple algorithms into a single executable, which is an implementation issue.  The concept simply requires that each independent algorithm be identifiable therein.

Aside (Form and Format)

Because of the observations addressed above, I do not require the information of Table 1 to be stored in tabulated form.  It is, for example, perfectly adequate for the source code of a sub-model to encapsulate the information of Tables 1 & 2 in the form of comments packaged with a function call to a more broadly-applicable sub-routine.  It will (however) typically be easier to keep track of the issues if some standardized method is used, where that method can be accessed to summarize large clusters of related topics, parameters, and purposes.
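
For instance, a thin wrapper might carry the information of Tables 1 and 2 as a docstring around the call to the shared sub-routine.  Everything below, including the sub-routine itself, is a hypothetical stand-in:

    def shared_range_model(fuel_kg, sfc_kg_per_kwh, speed_kmh):
        """Stand-in for a broadly-applicable, configuration-controlled sub-routine."""
        ...

    def cruise_range_km(fuel_kg, sfc_kg_per_kwh, speed_kmh):
        """Measure: Cruise Range.

        Description: Still-air range at the cruise condition.
        Inputs:  fuel_kg [kg, position 1]; sfc_kg_per_kwh [kg/kWh, position 2];
                 speed_kmh [km/h, position 3]
        Outputs: cruise_range_km [km]
        Meta-arguments: none
        Algorithm: delegates to shared_range_model() under configuration control.
        """
        return {"cruise_range_km": shared_range_model(fuel_kg, sfc_kg_per_kwh, speed_kmh)}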

Counter-observations

It is difficult (perhaps impossible) to prove that all topics can be parameterized as described in this series of pages.  It is, in fact, more realistic to suggest that not all topics are worth the time and effort required to be so thorough.  At least two general observations from personal experience support that countervailing suggestion:

– The End User, Acquisition Customer, and Engineering Staff are sometimes[35] unwilling to accept the concept, being convinced of their own prior success with tried-and-true text-based approaches.  They are not always wrong…but “prior success” sometimes exists only in the eye of the beholder.

– Some topics are entirely in-scope to a single Engineering discipline, wherein shared insight amongst the initiates is accepted as sufficient to the purpose.  Carried to excess, this can degenerate into Verification by Testimonial.  Such SMEs are also not always wrong…but I have seen that attitude result in near-catastrophic arrogance.

The upshot is that, although I will almost always use this general approach to inform and organize my own understanding, I won’t always formally publish and disseminate it.

Recap

This subject has covered several concepts at some length, and a brief recapitulation might help the reader to integrate the various threads into a single narrative:

– Topical Analysis focused on identifying the qualities of the dingus[36], to the (temporary) exclusion of quantities.  This introduced a qualitative feature (that is, the “topic”) early in the process, which is often when a System Engineer is first establishing technical communication with the various SMEs assigned to a new project.

– Topical Parameterization discussed quantifying those topics with individual, meaningful variables for which readily producible values provide good discrimination between concrete designs.

– Consideration of parameters as sets introduced the notion that parameters are not necessarily free of constraints between them, and should therefore be considered in a collective context.

– This page recognized that we should explicitly identify the algorithms used to provide values for topical parameters, including (but not limited to) the relationships between the algorithms used in various parts of the project at hand.  It acknowledged the effort required to do so, and the reality that Engineering judgment might be needed in order to decide which topics and parameters merit the extra effort.

Painting with a broad brush, we can also observe some theoretical underpinnings for this material.  There is not much practical importance in doing so, but it might be of interest if noodle-baking can be avoided:

– The overall process started with semantic analysis, which allows “topic” to subsequently function as a rhetorical device.  “Semantics” addresses the concept of the “meaning” embedded in symbolic representations[37], while “rhetoric” formalizes abstract concepts of verbal communication[38].  These are disciplines of logic.  Both have been around for over 2000 years, although neither is typically taught in modern Engineering curricula.  In effect, these two notions are the tools we use to begin imposing order on the chaos of our source material[39].

– Parameterization began the transition of discipline from semantics and rhetoric to the topological concepts that follow.  It was the first step on the way from verbal description to technical description of the dingus, mostly taking the technical concepts just one at a time.

– The discussion of parameters in their mutual context (as sets) dealt with, but intentionally did not dwell on[40], topological concepts like dimensionality and orthogonality.  “Topology” is an abstract mathematical discipline having to do with sets and rules that govern relationships between elements thereof[41].  The concepts are often expressed as being analogous to geometry.

– This page tiptoed to the edge of, but did not venture into[42], the abstract mathematics of “Functional Analysis”.  That discipline discusses the properties of “functions”, “functionals”[43], and tensors[44] to draw broad conclusions about things like the applicability and portability of methodologies and results.  This page also warned of, but did not attempt to resolve, the need to deal with ambiguities resulting from overlapping algorithms, which might be construed as an issue of so-called “non-Euclidean” manifolds.

←Back to Part II: Sets of Parameters

Footnotes
1. I have sometimes used other naming conventions.
2. By reference.
3. In particular, where arcane “Figures of Merit” are used.
4. Including any algorithm configuration variables.
5. Which are outputs from the algorithm.
6. More abstractly: the abstract concept (a) identifies, but does not itself necessarily implement, the evaluation process, and (b) admits to the existence of uni-directional data interfaces exposed to other processes, including any meta-processes that configure the evaluation process itself.
7. That is, the algorithm itself doesn’t care where the input numbers came from.  That’s why we need a data structure to keep track of it.  Here, the concept of “measure” does that.
8. Suggesting that we could refine this category into parameters of the design, on the one hand, and of the circumstances on the other.
9. That is, they are “consequential” rather than “circumstantial”.
10. They might also be structured as “environment” variables.
11. Meaning “can disappear if you change the algorithm, but the parameterization of the dingus won’t be impacted”.
12. But never, ever “virtual code”!
13. But not as a compiler, which is the province of those who build the executable models.
14. Including, but not limited to, requirements.
15. Dealing with what the values are desired, or expected, to be.
16. Dealing with how we know, or insist on discovering, what the real values are.
17. They do, however, all produce bounded (or semi-bounded) regions in some kind of abstract space, which is defined by the parameters when considered as sets.  The space, however, is not necessarily a metric one.
18. It does NOT extend to the notion of algorithms that operate as “sinks”, providing no outputs related to one or more of the indicated inputs.  We’re only interested in producing values that are relevant to the inputs.  We’re not interested in making things look as if we used something that we did not, in fact, use.
19. Or, set of related dingae.
20. In other words, this is all about “learning curve”.
21. …in fact, as a practical matter, you might need to be careful about showing it to people who like to help you track your status.
22. Seriously…the value is in the making, not in the having.
23. In fact, it encourages that as a practice.
24. Or group of interfacing dingae.
25. Or group of interfacing dingae.
26. But not necessarily exact.
27. Or group of interfacing dingae.
28. For example, a common set of environmental conditions can be identified as relevant to more than just a single performance characteristic.
29. That is, a network.
30. And, therefore, different topics.
31. This does not necessarily require the dingus being modeled to operate in modes that map to those of the aggregated model.  We’re talking about modes of analysis, not modes of operation.
32. Which focuses on representation of technical material, not administrative material.
33. Fear of revealing how much work really has to be done is a common reason for doing so, but is NOT one of the good reasons!
34. The convenience of the analyst not being one of those reasons!
35. Perhaps often!
36. Whether concrete or abstract, whether existing or merely postulated.
37. E.g., text, equations, and tabulations.
38. E.g., discussion and persuasion.
39. “Chaos” referring to the fact that no single form or structure governs the inputs from which we begin work.
40. You’re welcome!
41. E.g., orthogonality.
42. Again: you’re welcome!
43. Which are functions that use other functions as inputs, which is one way to construe a network of measures.
44. Arrays where the elements are functions rather than parameters.