If you don’t know why something is the way it is, then how can you fix it, upgrade it, repurpose it, or reuse it—in totality or in parts?

There is a story of a daughter asking her mom why she always cut both ends off a roast before putting it in the roasting pan. Mom explained that she wasn't sure; she was just following what grandma had always done, and the meat always came out great. The next time they visited grandma, the girl asked her the same question, to which she replied: “Oh, I used to do that because my pan was too small.” Now, this is a simple example of a “product configuration” based on the size of the roast and the pan, but it is also highly illustrative of the variability challenge faced by OEMs, given today’s exploding product complexity.

It used to be that variability was primarily focused on a Bill of Materials (mBOM for manufacturing or eBOM for engineering), with the configuration functionality exposed via tools called configurators. These configurators are, of course, very important and are designed to accommodate the needs of assemble-to-order, make-to-order, and in some instances, engineer-to-order. If you have ever configured a new car on an automaker’s website, this is exactly what these x-to-order capabilities represent. Configurators start with a so-called 150% BOM structure that comprises all common parts (those that are in every configuration) and all variant parts (those that exist only in specific configurations), an overloaded structure that will never be manufactured as a specific product. The configuration process reduces it to a specific 100% BOM structure that comprises all common parts and only the qualifying variant parts, a structure that will be manufactured as a serialized or lot product. Configurators use a set of rules, features, and options that define what is selectable, and in what combinations, from the 150% BOM structure.
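To make the mechanics concrete, here is a minimal sketch of that reduction, assuming a very simple rule model in which each variant line carries the option values it requires. All of the names (BomLine, resolve_bom, the option keys) are hypothetical and not tied to any particular configurator.

```python
# Minimal sketch: reducing an overloaded 150% BOM to a 100% BOM for one configuration.
# All names and option keys here are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class BomLine:
    part_number: str
    # An empty condition set means a common part (present in every configuration);
    # otherwise the part qualifies only when all listed option values are selected.
    conditions: dict = field(default_factory=dict)

def resolve_bom(bom_150, selected_options):
    """Keep common parts plus the variant parts whose conditions match the selections."""
    resolved = []
    for line in bom_150:
        if all(selected_options.get(k) == v for k, v in line.conditions.items()):
            resolved.append(line)
    return resolved

bom_150 = [
    BomLine("CHASSIS-01"),                                # common part
    BomLine("ENGINE-EV", {"powertrain": "electric"}),     # variant part
    BomLine("ENGINE-ICE", {"powertrain": "combustion"}),  # variant part
    BomLine("TOW-HITCH", {"towing_package": True}),       # variant part
]

# One buyer's feature/option selections yield one 100% BOM.
selections = {"powertrain": "electric", "towing_package": True}
print([line.part_number for line in resolve_bom(bom_150, selections)])
# -> ['CHASSIS-01', 'ENGINE-EV', 'TOW-HITCH']
```

Real configurators also enforce which option combinations are valid in the first place, but the principle of filtering an overloaded structure down to one buildable configuration is the same.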

Today, exploding product complexity is forcing OEMs to consider variability on a more abstract level than the BOM by shifting from a BOM-centric view to a system-centric view. While configuration of a 100% BOM remains critical, we now also have a 150% MBSE structure to configure as a reflection of the design intent behind that final 100% BOM configuration. Now, consider my opening question. If all of my 100% BOM configurations (and therefore all related digital twins) can only trace back to the same general 150% MBSE structure, then how will I know what a specific digital twin is, or is not, capable of, since I can’t trace back to its specific design intent? That would be a big problem: it could result in the wrong over-the-air software update, misapplied preventive maintenance steps based on IoT feedback, improper analysis of failures reported by service teams, or an inability to properly change the asset's mission.

You may have noticed that I used the term “structure” for the thing that needs to be resolved to a 100% instance. That is because there are many more structures than a BOM or an MBSE model that represent a “view” or an “abstraction” of a product. There could also be structures of software, documentation, environmental requirements, industry or geographical regulations, simulation studies, test plans, and so on. So really, we are talking about having a set of rules, features, and options that can be uniformly applied to a variety of structures: structures that start with their own 150% content and resolve to a specific 100% instance, an instance that directly correlates with a specific serialized digital twin. The old way of copy/edit no longer holds up to the amount of complexity involved across many different domains and representations of the product. If structures are replicated, then even a simple change may require individual updates in each of those multiplied/copied structures.
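As a rough illustration of that uniformity, the sketch below applies one set of selections to several 150% structures at once; the structure names and contents are invented for the example and do not reflect any vendor's data model.

```python
# Hypothetical sketch: one set of selected options resolves several 150% structures
# (BOM, software, documentation) to the 100% instances that correlate with one
# serialized digital twin. Contents are illustrative only.
def resolve(structure_150, options):
    """Keep items whose applicability conditions all match the selected options."""
    return [item for item, cond in structure_150
            if all(options.get(k) == v for k, v in cond.items())]

structures_150 = {
    "eBOM":          [("CHASSIS-01", {}), ("ENGINE-EV", {"powertrain": "electric"}),
                      ("ENGINE-ICE", {"powertrain": "combustion"})],
    "software":      [("FW-BASE-2.3", {}), ("FW-BMS-MODULE", {"powertrain": "electric"})],
    "documentation": [("OWNER-MANUAL-EN", {}), ("TOWING-GUIDE", {"towing_package": True})],
}

selections = {"powertrain": "electric", "towing_package": True}
# The same rules applied uniformly to every structure, no copy/edit required.
twin_views = {name: resolve(s, selections) for name, s in structures_150.items()}
for name, view in twin_views.items():
    print(name, view)
```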

This is the challenge of legacy configuration tools vis-à-vis today’s product complexity. In addition to being centered on the physical BOM, they are limited in their ability to persist resolved 100% instances as “views” of the 150% structures without replication, that is, to resolve variability without the data replication that results from copy/edit.

That challenge is being addressed by modern PLM architectures such as the Aras Platform. The platform separates the definition of variability rules (including features and options) from the definition of 150% structures and their unique variability points. Whenever a set of rules is applied to a structure, a persistent set of semantically rich relationships is created that allows you to trace the specific resolved 100% structure back to the rules used to create it. This means that the appropriate “views” are automatically created without replication of data and as part of that design’s digital thread. Of course, this is in addition to the ability to explicitly instantiate a 100% structure when appropriate.
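Conceptually, you can think of such a resolution as a small record of relationships rather than a copy of the structure. The sketch below illustrates the idea only; it is not the Aras Platform data model or API.

```python
# Conceptual sketch of persisting a resolution as relationships rather than copies:
# the 150% structure remains the single source, and a small record links it to the
# rule set applied and the line IDs that qualified. Not a real PLM API.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ResolutionRecord:
    structure_id: str           # the 150% structure being viewed
    rule_set_id: str            # the variability rules (features/options) applied
    selections: tuple           # the option values chosen for this configuration
    qualifying_line_ids: tuple  # references into the 150% structure, not copies of it
    resolved_at: str

record = ResolutionRecord(
    structure_id="eBOM-150-rev-C",
    rule_set_id="variability-rules-rev-7",
    selections=(("powertrain", "electric"), ("towing_package", True)),
    qualifying_line_ids=("line-001", "line-014", "line-027"),
    resolved_at=datetime.now(timezone.utc).isoformat(),
)
# The 100% "view" can always be re-derived from this record, so a change to the
# 150% structure or to the rules is immediately visible to every dependent view.
print(record)
```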


Fig. 1: Aras Platform view of variability management in complex products

This allows for full traceability from the digital twin to the design intent, in context, across all design representations. And because that traceability is under the PLM platform's configuration management, any proposed change to the rules or to a structure can be quickly related to the appropriate impact matrix within that structure as well as across all related structures. If you know why things are the way they are, then you can fix them, upgrade them, repurpose them, or reuse them, in totality or in parts!
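To illustrate why that matters, here is a hypothetical sketch of the impact-analysis idea: because each resolved view is stored as relationships to the structure and the rules, a proposed change can be traced to every affected view with a simple query. The record layout and names are assumptions for the example, not any product's schema.

```python
# Illustrative sketch of impact analysis over persisted resolution records.
# Record fields and IDs are hypothetical.
def impacted_views(resolution_records, changed_line_id=None, changed_rule_set_id=None):
    """Return the resolved views touched by a change to a 150% line or a rule set."""
    hits = []
    for rec in resolution_records:
        if changed_line_id and changed_line_id in rec["qualifying_line_ids"]:
            hits.append(rec)
        elif changed_rule_set_id and rec["rule_set_id"] == changed_rule_set_id:
            hits.append(rec)
    return hits

records = [
    {"view": "eBOM-100-SN1001", "rule_set_id": "rules-rev-7", "qualifying_line_ids": {"line-014"}},
    {"view": "docs-100-SN1001", "rule_set_id": "rules-rev-7", "qualifying_line_ids": {"doc-003"}},
    {"view": "eBOM-100-SN1002", "rule_set_id": "rules-rev-6", "qualifying_line_ids": {"line-014"}},
]

# Which serialized views does a proposed change to 150% line "line-014" impact?
print([r["view"] for r in impacted_views(records, changed_line_id="line-014")])
# -> ['eBOM-100-SN1001', 'eBOM-100-SN1002']
```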

Circling back to the subject of the roast and “why,” Steven Vettermann posted an entertaining video called “Complexity will eat traceability.” In a lighthearted way, he points out the challenges of today’s product complexity, even though he is not focusing on the management of variability as such. But it’s all about roasting that roast to perfection by efficiently focusing on what really matters. Click here to learn more about managing variability using the Aras Platform.