I’d love to hear thoughts about the limits you see for a Cameo model. How would you describe the vision for the use of a system model built in Cameo at a program office? The software is designed for a specific purpose and, like any software, has an optimal set of use cases, but it also has capabilities that let users push it outside those use cases (or to the limits of the hardware on which it runs).

For example, Excel has a lot of capabilities that most people never touch, while others use it for very complicated jobs because they know how to write formulas and how to integrate it with other applications. But just because you can do something with Excel doesn’t mean it’s a good idea; sometimes the best solution in the long run is a different piece of software.

Some of us who have been helping program offices with MBSE by building models in Cameo are discovering the many ways we can use the tool. It’s tempting to put everything we can find into the model, make customizations to keep it working, and hope it provides some utility in the end. (It’s a challenge for modelers, program managers, and chief engineers to identify a clear vision for the use of these models.) It would be helpful for us to be perfectly honest and clear about the best use of models built in Cameo and where we should probably draw the line and rely on a different piece of software.

One example is configuration management of an aircraft system across multiple tail numbers. There is a lot of logistics and maintenance data, and many different systems and processes involved. All of it could theoretically be put into Cameo, including using parametric diagrams to analyze and help optimize processes, but in the long run it might become too cumbersome and overwhelm the software and the hardware on which the model is running.
We have yet to see a demo of using a PLM tool with a Cameo model here in the DoD so, again, it is difficult to clearly understand where best to draw the line with a Cameo model.
Thank you.
Mike Martines
System Architect, PMA-274/271
Cameo/MD/SysML should be used to define design intent (the architecture). We also should do appropriate analysis (and that could mean feeding parametrics/values from the model to something like Phoenix ModelCenter). It may also mean things like using a state machine to control a simulation (Dassault showed that at the No Magic conference this year...clicking on the state machine made an elevator go up/down and CATIA et al computed power draw, moved the elevator, etc.).
I think we should allow PLM tools to manage the downstream data for as-built, etc. That's what they do. As long as we can connect/synch/pull data from the architecture, we should be fine.
Michael J. Vinarcik, ESEP-Acq, P.E., FESD, OCSMP-Model Builder Advanced
Chief Solutions Architect
As Mike Martines notes, it would be nice to explore connecting a Cameo model to other tools, like a PLM tool. The capability exists, but we haven’t gotten our hands on the PLM tool or the Cameo plug-in yet.
Greg Sauter
Presidential Helicopters Sustaining Engineering, In-Service VH Class Desk (APMSE)
Presidential Helicopters Program Office (PMA-274)
I agree with Mike Vinarcik that the original and best use cases are requirements traceability to the logical and physical design of the system. For existing systems, I would focus first on the physical design and digital-twin datasets for fielded-system maintenance measures. As you go into change proposals, interface management is typically a very good use case for communicating and coordinating interface changes, whether in the system-of-systems context or the subsystem-integration context. Within the DoD, it looks like cyber and navigating the Risk Management Framework process is going to be a key initial high-value area. Test engineering can apply the same use cases to test-infrastructure design and management. If program management needs to see the results of simulations in tools such as MATLAB or software design tools, there is also a valuable use case in reverse and round-trip engineering of code into diagrams, which supports software-team collaboration as well as management review and validation.
Each project and program has its own risk areas and reasons for architecture evolution, so focusing on the high-risk, high-visibility areas of the physical design is also a good place to concentrate your modelling efforts. Whether it’s tracking the power budget, FMECA, cost, weight, SNR, range… figure out your high-risk allocations and measures and start using the tool to track the maturity and certainty of those technical performance attributes and measures.
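As an illustration of the kind of roll-up a parametric constraint automates, here is a minimal sketch in plain Python. All component values, the budget, and the reserve fraction are invented for the example; in Cameo these would be value properties rolled up by a constraint block.

```python
# Hypothetical component power draws in watts (invented values).
components = {
    "radio": 45.0,
    "mission_computer": 120.0,
    "display": 60.0,
    "nav_sensor": 30.0,
}

BUDGET_W = 350.0          # assumed allocated power budget
RESERVE_FRACTION = 0.15   # assumed growth margin held in reserve

# The roll-up a parametric diagram would compute automatically:
total_draw = sum(components.values())
usable_budget = BUDGET_W * (1 - RESERVE_FRACTION)
margin = usable_budget - total_draw

print(f"Total draw: {total_draw:.1f} W")
print(f"Usable budget (after {RESERVE_FRACTION:.0%} reserve): {usable_budget:.1f} W")
print(f"Margin: {margin:.1f} W ({'OK' if margin >= 0 else 'OVER BUDGET'})")
```

The value of doing this in the model rather than in a spreadsheet is that the margin recomputes whenever any component's value property changes, and the result stays traceable to the components that drive it.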
A lot of times it comes down to deeply embedding the architect into the rest of the systems engineering team until you understand the risk areas of the system of interest as well as the specifics of the current engineering organization. Which engineering documents cause the most PowerPoint slides? A large amount of time is lost in both document engineering and its translation into PowerPoint engineering, and that time can be saved by converting both to SysML modelling. Also, look for the people on the team doing Excel engineering and convert their Excel into SysML models. They are almost certainly spending a lot of time pulling data from one document or Excel file into their own spreadsheet, and then creating PowerPoints or documents from the results. They should stop doing this and move to a model-based approach: they can export their briefing material straight from the latest model updates, and if two teams share data across two different Excel files, they can instead manage different views of the same model and no longer worry about data-transfer logistics.
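To make the "different views of the same model" point concrete, here is a hedged sketch in plain Python with invented data: two teams derive their own views from a single shared component list, rather than each maintaining (and reconciling) a separate spreadsheet.

```python
# One shared source of truth (in practice the SysML model; here an
# invented list of component records).
model = [
    {"name": "radio",    "subsystem": "comms",    "mass_kg": 4.2, "owner": "Team A"},
    {"name": "antenna",  "subsystem": "comms",    "mass_kg": 1.1, "owner": "Team A"},
    {"name": "computer", "subsystem": "avionics", "mass_kg": 6.5, "owner": "Team B"},
]

# Team A's view: mass roll-up per subsystem.
mass_by_subsystem = {}
for c in model:
    mass_by_subsystem[c["subsystem"]] = (
        mass_by_subsystem.get(c["subsystem"], 0.0) + c["mass_kg"]
    )

# Team B's view: only the components they own, ready for a briefing table.
team_b_rows = [(c["name"], c["mass_kg"]) for c in model if c["owner"] == "Team B"]

print(mass_by_subsystem)  # the two views can never disagree:
print(team_b_rows)        # both are derived from one dataset
```

Because both views are derived, there is no file-to-file data transfer to manage and no chance of the two teams' numbers drifting apart.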
The architecture should never be an isolated product or shelfware; it is a living document for collaboration. If the systems architect does not regularly engage with the SMEs, and the SMEs don’t start to engage with the architecture (for whatever scope you decide to move the authoritative data into models), that is a sign of failed adoption and wasted effort. As the architect gets the initial SMEs for the initial use cases more and more comfortable with collaboration in the tool(s), the scope of the architecture being modelled can expand as the architect works with additional stakeholders and SMEs to address additional use cases.
Coda: While SysML has some very powerful capabilities as a lingua franca between simulation tools, I would treat that as a more advanced use case that should not be attempted initially; establish the system architecture as a human-to-human collaboration tool first, with some more basic integrations like Excel, MATLAB, basic software visualization, and DOORS. These tools also can be, and have been, used for mission analysis such as pre-Milestone B CDD, AoA, operational test, and other capability-gap analysis, but that too is a more advanced use case that I would not introduce without an extremely experienced architect with a depth of knowledge of the system of interest, the operational context, and advanced modelling techniques.
Jim Ciarcia
Interdisciplinary Engineer - Avionics
NAWC WOLF - Air Traffic Control (ATC) - Modeling & Simulation (M&S)
James makes a number of very good points.
First, I agree that for physical systems, getting the current interfaces (with flows) modeled is a high priority. You'll not only find and correct all of the errors, you'll also have a better foundation for cybersecurity analysis (see my paper from last year's NDIA SE conference). I also agree that getting the software team comfortable with consuming models as design intent is a good step.
Second, I concur with absorbing any Excel or other databases lying around. On MAPS, we brought in the hazard database and tied functional hazards to our functions. The safety rep was happy: less work for him, and it was always in sync with the functions. It also enabled us to use rules for the criticality of messages. We likewise absorbed a vendor's Excel-based (!!!!) wiring-harness attempt (one of the most wretched engineering artifacts I've ever seen). It was rife with errors... but not once I got it into SysML.
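The "rules for criticality of messages" idea can be sketched as a simple propagation rule: each message inherits the worst criticality of the functions it supports. The following Python sketch is a hypothetical reconstruction with invented names and levels, not the actual MAPS rule set; in the model, the function-to-message traceability would come from allocations and flows.

```python
# Hypothetical hazard criticality per function (invented; lower number =
# more critical, e.g. 1 = catastrophic ... 5 = no safety effect).
function_criticality = {
    "compute_position": 1,
    "display_map": 3,
    "log_telemetry": 5,
}

# Invented traceability: which functions each message supports.
message_functions = {
    "position_report": ["compute_position", "display_map"],
    "telemetry_frame": ["log_telemetry"],
}

# Rule: a message is as critical as the most critical function it serves
# (min, because lower numbers are more critical in this scheme).
message_criticality = {
    msg: min(function_criticality[f] for f in funcs)
    for msg, funcs in message_functions.items()
}

print(message_criticality)  # position_report inherits level 1
```

Keeping the hazard data in the model means this derivation reruns whenever a function's criticality or an allocation changes, which is exactly the "always in sync" benefit described above.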
1000% agree with Jim that the team needs to see the model as alive, not dead shelfware. I call my metachains and queries "wrestling the genie," and I meant that pretty literally...that I believed MagicDraw could do almost anything I wished. At first I had to guess what queries/artifacts would be useful...eventually SMEs started asking.
I also feel strongly that the architect/modeler needs to be embedded with the team; the SMEs own the content, and the modeler owns translating it into the modeling tool. One cannot architect in a vacuum.