As we build our architecture projects using a federated set of models, we need to be clear on how the architecture for a system of interest changes over time. I will be introducing a set of terms that I've consolidated into an effective ontology, which I hope you can all review and perhaps start adopting. These should be familiar terms, but now used with very specific definitions and relationships; they haven't all necessarily been linked or formalized together before.
There are two interrelated concepts that affect how architectures change over time.
The maturity of an architecture evolves over time as engineers define and constrain the engineering trade space, starting with a vague notion of the missions the customers need to achieve and ending with production-quality training and logistics.
The evolution of an architecture proceeds independently of, but concurrently with, architecture maturity. As stakeholder mission requirements change, as technology evolves, as parts go obsolete or funding levels change, the architecture needs to evolve to adapt to all the ways changes in reality interact with the engineering of systems. When architecture evolution is required, it effectively creates a new architecture maturity project starting from the point at which the change was introduced. (Please don't confuse this with an architecture "branch"; I am deliberately avoiding the term "branch" here.)
This discussion focuses primarily on architecture maturity, not architecture evolution, although there are a few areas where evolution is addressed.
Architectures mature over time and are documented/encoded in models. However, in a federated modeling strategy, the models themselves are really just collections of highly coupled model elements, and model maturity and evolution will be only loosely correlated with architecture maturity and evolution. Only subsets of model elements within an individual model directly impact an assessment of architecture maturity.
Models are used in an architecture in such a way that the architecture can reference and use a subset of the model elements. So architectures actually mature by the inclusion of subsets of models, which I think we should call "data packages". An architecture matures as its data package matures, regardless of how that data package is federated across models or how many data packages are all put into one model.
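To make that concrete, here is a minimal sketch in Python of how I picture a data package. It is purely an illustration; the class names and ids are hypothetical and not tied to any particular modeling tool or schema.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class ModelElement:
        model_id: str    # which federated model owns this element
        element_id: str  # unique id within that model

    @dataclass
    class DataPackage:
        # A named subset of model elements that an architecture references.
        # The elements may be federated across many models, or several data
        # packages may draw from a single model; the architecture only cares
        # about the subset itself.
        name: str
        elements: set[ModelElement] = field(default_factory=set)

    mission_dp = DataPackage("2024 Mission Data Package")
    mission_dp.elements.add(ModelElement("mission_model", "MSN-001"))
    mission_dp.elements.add(ModelElement("threat_model", "THR-042"))

The point is just that the data package is the unit that matures, independent of how its elements are spread across models.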
At certain points in time, this data package reaches key levels of validity and completeness: all data that is relevant for a particular viewpoint (or a highly correlated set of viewpoints) is in the architecture data package. Let's call this point a "baseline": Mission Technical Baseline, Integrated Capability Technical Baseline, Design Reference Mission Baseline, Functional Baseline, Performance Baseline, etc.
In between these baselines, the architecture data package is maturing at different rates depending on which viewpoints are required to be completely represented in a baseline. Thus the architecture data package is actually built up of smaller data packages, each related to the delta elements between two baselines. A Mission Data Package is all the data elements required for a Mission Technical Baseline; an Integrated Capability Data Package is all the data elements, above and beyond the Mission Data Package, that are required for an Integrated Capability Technical Baseline.
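Continuing the sketch above (same hypothetical DataPackage and ModelElement classes), the full content covered at a baseline would be the union of its own delta data package and the packages of the baselines it matured from:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Baseline:
        name: str
        delta_package: DataPackage          # elements added since the prior baseline
        prior: Optional["Baseline"] = None  # baseline this one matured from

        def full_content(self) -> set[ModelElement]:
            # All elements covered at this baseline: this delta plus every prior delta.
            elements = set(self.delta_package.elements)
            if self.prior is not None:
                elements |= self.prior.full_content()
            return elements

    mission_tb = Baseline("2024 Mission Technical Baseline", mission_dp)
    ic_dp = DataPackage("2024 Integrated Capability Data Package")
    ic_tb = Baseline("2024 Integrated Capability Technical Baseline", ic_dp, prior=mission_tb)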
Thus each Baseline "tagged version" will have an associated Data Package "tagged version", as well as all prior Baseline "tagged versions" from which it matured. Versions are only required to be tagged when they are reviewed or included in baselines; as data packages change, they can have many non-tagged versions in between the tagged ones. Once a data package has been baselined, any change that impacts the completeness of that data package forces an architecture evolution. As long as a change does not impact completeness, it is really just administrative and does not impact downstream data package maturation.
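One lightweight way to picture the tagged vs. non-tagged distinction, again only as a sketch: versions accumulate continuously, and a tag is just a named pointer recorded when a version is reviewed or baselined. Element ids here are plain strings for simplicity.

    from dataclasses import dataclass, field

    @dataclass
    class PackageHistory:
        # Version history for one data package.
        versions: list[set[str]] = field(default_factory=list)  # every saved element set
        tags: dict[str, int] = field(default_factory=dict)      # tag name -> version index

        def save_version(self, elements: set[str]) -> int:
            # Record the current element set; most of these never get tagged.
            self.versions.append(set(elements))
            return len(self.versions) - 1

        def tag(self, name: str, version_index: int) -> None:
            # Tag only when the version is reviewed or included in a baseline.
            self.tags[name] = version_index

    history = PackageHistory()
    v0 = history.save_version({"MSN-001"})
    v1 = history.save_version({"MSN-001", "MSN-002"})  # untagged working version
    v2 = history.save_version({"MSN-001", "MSN-002", "THR-042"})
    history.tag("2024 Mission Technical Baseline", v2)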
While a data package is being matured or evolved, the work in progress should be called a "change set". Saying "I am working on the Integrated Capability change set for the 2024 architecture" means that you are maturing an Integrated Capability Data Package to support a 2024 Integrated Capability Technical Baseline, and potentially evolving the prior 2022 Integrated Capability Data Package from the 2022 Integrated Capability Technical Baseline. That 2024 work also includes a 2024 Mission Data Package that was part of the 2024 Mission Technical Baseline.
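In the same hypothetical spirit, a change set could be recorded as the in-progress work toward a target data package and baseline; the field names below are just illustrative, not a proposed tool design.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ChangeSet:
        name: str
        target_package: str                 # the data package being matured
        target_baseline: str                # the baseline the work is intended to support
        evolves_from: Optional[str] = None  # prior baseline being evolved, if any
        depends_on: list[str] = field(default_factory=list)  # baselines whose packages it builds on

    ic_2024 = ChangeSet(
        name="Integrated Capability change set, 2024 architecture",
        target_package="2024 Integrated Capability Data Package",
        target_baseline="2024 Integrated Capability Technical Baseline",
        evolves_from="2022 Integrated Capability Technical Baseline",
        depends_on=["2024 Mission Technical Baseline"],
    )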
Data Packages, Baselines, Change Sets, Versions, and Tagged Versions, used in this way, are going to be critical to a clear conversation about assessing architecture maturity and evolution. They will be critical in supporting reviews and sign-offs.
People will always be reviewing "change sets". Reviews will be able to assess how close a Change Set is to being a complete Data Package and how close all required Data Packages are to a Technical Baseline. Sign-off can occur at different levels of Change Set maturity based on the needs of the organization, but you can't call a Change Set a Data Package until it is part of a Technical Baseline. This also supports analysis of alternatives, as there could be multiple competing change sets that all get reviewed, with only one picked as part of a Technical Baseline after analysis of which change set best represents the way the project wants to mature the architecture. Supporting multiple concurrent change sets should also not be confused with "branching".
As we move further away from Mission Engineering processes and into Systems Engineering, Domain Engineering, and Test Engineering processes, the discussion of which Technical Baselines should be part of an architecture, and which Data Packages they depend on, will become more nuanced, so this commonality of terms will be extremely important. Each engineering daughter domain will be tracking its own data package dependencies, change set maturity, sign-off points, and baseline points for each architecture it is maturing. All of this depends on a clear definition of when a change set is valid and complete. Schemas go a long way toward making that determination possible, as they help scope what needs to be included in a data package. It will still take human review at certain points as change sets become data packages, especially for validity accreditation and certain completeness analyses, but some final completeness and maturation steps could be analyzed automatically.
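As a rough illustration of the kind of automated completeness check a schema makes possible (the element kinds and the viewpoint schema below are entirely made up):

    # Hypothetical: the element kinds a viewpoint schema says must be present.
    IC_VIEWPOINT_SCHEMA = {"capability", "capability_dependency", "measure_of_effectiveness"}

    def missing_kinds(change_set_kinds: set[str], schema: set[str]) -> set[str]:
        # Element kinds the schema requires that the change set does not yet contain.
        # An empty result means the change set is structurally complete for the
        # viewpoint; validity still needs human review and accreditation.
        return schema - change_set_kinds

    print(missing_kinds({"capability", "capability_dependency"}, IC_VIEWPOINT_SCHEMA))
    # -> {'measure_of_effectiveness'}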
So where does "branching" fit in? It's secondary to the main thrust of this e-mail, which is architecture change, not model change. Models can branch; architectures don't. One model can support many different architectures. But since different architecture projects may require models to change at different rates and for disjoint reasons, and since model administrative change should not force architecture change, branching of models can help in those cases. Branches should always be made with the intent of merging back into the trunk down the line. If there is no plan to merge the branch back into the trunk, then it is not "branching" for the right reason; it's doing something else.
Let me know what you like and don't like in the above treatise. Thanks!