Validate

Unfortunately, this is one of those “overloaded” terms: its meaning depends on the context in which it is used.

In the context of Requirements Management, the concept of validation generally stems from Software Systems Engineering usage[1]. In that context, “technical requirements” were more or less hypothetical until the “stakeholders” had a chance to concur that they accurately captured the End User’s need and intent. In my experience, this has been evident in the way we develop software for the control of Moving Mechanical Assemblies: the hardware engineers know what they want their hardware to do, but don’t speak enough software to properly articulate their requirements, so the software engineers end up playing a bit of a guessing game until full duplex communications are established throughout the team. This usage is generally in line with the normal meaning of valid: to establish the truth or correctness of an assertion[2].

In the context of verification, “validate” refers to the issue of whether or not a technique or model is acceptable for use in formal verification activities. This context itself has a certain flavor of verification, except that formal, comprehensive development requirements for models and test setups are not always written in practice. If we can accept that the concept of “model” is intrinsic to the concept of verification, and that all verification is based on some form of “inference”[3], then one of the standard uses of “valid” precisely applies: to validate is to ensure that an inference is correctly derived from its premises; specifically: true in terms of the logical principles of the logistic system to which the inference belongs[4].
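As a toy illustration of this logical sense of “valid” (my own sketch, not part of the original text): in propositional logic, an inference is valid exactly when every truth assignment that makes all the premises true also makes the conclusion true. For a small number of variables this can be checked by brute force:

```python
from itertools import product

def implies(a, b):
    # Material implication: "a implies b" is false only when a is true and b is false.
    return (not a) or b

def is_valid(premises, conclusion, n_vars):
    """An inference is valid iff no truth assignment satisfies all the
    premises while falsifying the conclusion."""
    for assignment in product([False, True], repeat=n_vars):
        if all(p(*assignment) for p in premises) and not conclusion(*assignment):
            return False  # counterexample: premises true, conclusion false
    return True

# Modus ponens: from "p" and "p implies q", infer "q" -- a valid form.
print(is_valid([lambda p, q: p, lambda p, q: implies(p, q)],
               lambda p, q: q, 2))   # True

# Affirming the consequent: from "q" and "p implies q", infer "p" -- invalid.
print(is_valid([lambda p, q: q, lambda p, q: implies(p, q)],
               lambda p, q: p, 2))   # False
```

The invalid form fails on the assignment p=False, q=True, which is precisely the kind of counterexample that validation of a model or technique is meant to rule out.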

I usually pile on by stipulating that the aforementioned “premises” must also be accepted (“sanctioned”) by widely acknowledged authorities in order for a model to be valid. Here, of course, I have sometimes deviated from the “objectivity” notion: it is possible for even the widely acknowledged authorities to be dead wrong.

As a practical matter, a validation argument (like a verification argument) needs to be replicable, a notion that traces back to the repeatability criterion for acceptance of principles according to the Scientific Method: an “experiment” that cannot be replicated by a disinterested third party is not generally accepted by the community of technical peers.
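The software analogue of a replicable experiment is a computation that a disinterested third party can rerun and get bit-for-bit identical results. A minimal sketch (my example, assuming nothing beyond the Python standard library) is a Monte Carlo estimate with an explicitly published seed:

```python
import random

def monte_carlo_pi(n_samples, seed):
    """Estimate pi by sampling points in the unit square. The explicit
    seed makes the 'experiment' exactly replicable by anyone."""
    rng = random.Random(seed)  # private generator; no hidden global state
    inside = sum(1 for _ in range(n_samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n_samples

# Two independent runs with the same seed and sample count agree exactly,
# so a third party can reproduce the result rather than take it on faith.
run_a = monte_carlo_pi(100_000, seed=2024)
run_b = monte_carlo_pi(100_000, seed=2024)
print(run_a == run_b)  # True
```

Publishing the seed and sample count alongside the result is the computational equivalent of publishing an experimental protocol: it lets peers check the claim rather than merely trust it.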

Footnotes
  1. It was not normally used in this sense in legacy System Engineering practice; the Operational Requirements concept supplied all of the functionality now ascribed to the Requirements Validation processes.
  2. “…to grant official sanction…”; Merriam-Webster’s Unabridged Dictionary.
  3. In the case of analysis, and even inspection, the concept of “inference” is not too difficult to see. But the concept extends to test as well. For example, we make inferences from a test setup that does not precisely mimic the real world; we infer from a small number of qualification articles that all S/N built to that set of drawings will also pass; we infer, from data that are observable (can be instrumented), things that cannot otherwise be seen. I could go on.
  4. This could be as simple as “we all agree that we started with the right physics, and did all the algebra correctly”. It can also be much, much more difficult.