Recently, I had the pleasure of chatting with Rik Rasor, Head of CoE Artificial Intelligence at the Fraunhofer Institute for Mechatronic Systems Design, and Ruslan Bernijazov, Co-CEO at AI Marketplace and Systems Engineering Lead at the Heinz Nixdorf Institute, about the future of artificial intelligence (AI) and its impact on modern product lifecycle management (PLM). Our discussion had four distinct tracks:

  1. Broad approaches to apply AI to PLM and Digital Engineering
  2. Opportunities for Generative AI in PLM
  3. Building AI-augmented natural language search in Aras Innovator
  4. What is the future of GenAI in PLM and Digital Engineering?

Let’s take these one by one.

Applying AI to PLM for Digital Engineering

We can all agree that the hype and excitement surrounding advances in AI have so far outpaced their practical application in traditional digital engineering tools and business processes. According to Gartner’s 2023 Market Guide:

The challenge is to move forward confidently, implementing AI in your PLM tech stack in a way that provides real value to your company and your customers.

According to Rik, the current uses of AI in PLM break down into machine learning, generative AI (GenAI), Large Language Models (LLMs), and conversational agents such as ChatGPT and GPT-4 (sometimes called copilots). Machine learning has already been integrated into thousands of engineering applications, so from a research perspective AI is not new. What is new, Rik says, is that one year after the launch of ChatGPT, generative AI is capable of work that was previously expected only of humans.

Opportunities for Generative AI in PLM

So, where are the opportunities for using GenAI in PLM? For Rik, they begin with product marketing, extend all the way to variant management (VM), and cover everything in between!

The first step in crafting a strategy is to organize the broad set of applications into a framework that diverse personas can use to understand, evaluate, and prioritize where to start. From there, a roadmap can be built to identify more “low-hanging fruit” that quickly demonstrates additional benefits of the technology.

AI in PLM: Three use cases

During the demonstration section of the discussion, three “low-hanging fruit” examples of AI use were shared, starting with Systems Engineering. When applied to technical documentation, AI can check whether a product’s new requirements are consistent with its existing requirements, related documentation, or previous versions.
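As a rough illustration of this requirements check, the sketch below asks an LLM to flag conflicts between a newly proposed requirement and the requirements already on file. It is a minimal sketch only: the OpenAI client, the model name, the requirement texts, and the prompt wording are assumptions for the example, not part of any implementation shown in the webinar.

```python
# Minimal sketch: flag conflicts between a new requirement and existing ones.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name, requirements, and prompt are illustrative choices.
from openai import OpenAI

client = OpenAI()

existing_requirements = [
    "REQ-101: The housing shall withstand operating temperatures up to 85 °C.",
    "REQ-102: The assembled unit shall weigh no more than 1.2 kg.",
]
new_requirement = "REQ-201: The housing shall be rated for continuous operation at 95 °C."

prompt = (
    "You are reviewing engineering requirements.\n"
    "Existing requirements:\n" + "\n".join(existing_requirements) + "\n\n"
    f"New requirement:\n{new_requirement}\n\n"
    "List any conflicts or inconsistencies with the existing requirements, "
    "citing requirement IDs. If there are none, reply 'No conflicts found.'"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```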

The second example was the use of AI to draft technical documentation. AI can generate basic user guides grounded in product data and supplemented with a chat user interface. In addition, we see the benefit of leveraging the structured data in Aras Innovator to create additional, unstructured documentation.
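The same pattern, illustrated below, uses structured item data as the grounding for an unstructured draft. The item fields and the prompt are hypothetical placeholders; a real implementation would pull this data from the PLM system rather than hard-coding it.

```python
# Minimal sketch: draft a user-guide section from structured item data.
# The item fields, prompt, and model are illustrative; real data would
# come from the PLM system rather than being hard-coded here.
from openai import OpenAI

client = OpenAI()

item = {  # hypothetical structured data for one product item
    "name": "Compact Gear Pump GP-20",
    "classification": "Hydraulic Component",
    "key_specs": {"flow_rate": "20 L/min", "max_pressure": "180 bar"},
    "safety_notes": ["Depressurize the circuit before servicing."],
}

prompt = (
    "Write a short 'Installation and Safety' section of a user guide for the "
    f"following product, using only the data provided:\n{item}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```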

Last, we talked about Enterprise Search and how chat can be used to search your PLM system. To succeed, this ultimately requires a fluid user experience and a seamless chat interface. The system must recognize the intent of a user’s prompt and translate it into the appropriate query against the system.
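One way to picture this is to have the model translate a free-text prompt into a structured filter that can then be run against the PLM system’s API. Everything below, including the JSON shape, the property names, and the search_items stub, is a hypothetical sketch rather than the search capability shown in the webinar.

```python
# Minimal sketch: turn a natural-language prompt into a structured PLM query.
# The JSON schema, property names, and search_items stub are hypothetical.
import json

from openai import OpenAI

client = OpenAI()

def prompt_to_query(user_prompt: str) -> dict:
    """Ask the LLM to map a free-text request onto a simple filter structure."""
    instruction = (
        "Convert the user's request into JSON with keys 'item_type' and "
        "'filters' (a list of {'property', 'operator', 'value'} objects). "
        "Return only JSON.\n\nRequest: " + user_prompt
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": instruction}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

def search_items(query: dict) -> list:
    """Placeholder: run the structured query against the PLM system's search API."""
    raise NotImplementedError("Wire this to your PLM system's query endpoint.")

query = prompt_to_query("Show me all released parts heavier than 2 kg")
print(query)  # e.g. {"item_type": "Part", "filters": [...]}
```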

What data infrastructure is needed to apply AI to PLM successfully?

After the demonstrations, we shared our thoughts on what it takes to be successful when developing your AI integration strategy. Rik’s concluding remarks revolved around engineering data: the importance of a semantically rich database with sufficient data, an open system that offers the right APIs, and semantics clear enough that AI can distinguish and extract the right data.

Ruslan concurred with Rik’s view that although good data quality is important, data access is crucial. With much of the data in a PLM system entered by engineers, quality seemed less of a concern than availability.

Where do we go from here?

It comes down to this: you need to start with data, and the more the better, including more integrated and connected data across the product lifecycle. Then, you need to focus on the semantics of that data. Is it machine-readable? Does it capture how pieces of information relate to one another? Is it understandable?
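To make the semantics point concrete, the illustrative structure below stores a requirement as linked, typed data rather than as a sentence buried in a document; the explicit relationships are what let software, including AI, follow the connections across the lifecycle. The field names and IDs are invented for this example.

```python
# Illustrative only: a requirement as linked, typed data rather than free text.
# Explicit relationships let an AI (or any software) traverse from a
# requirement to the parts and tests that realize and verify it.
requirement = {
    "id": "REQ-101",
    "text": "The housing shall withstand operating temperatures up to 85 °C.",
    "realized_by": ["PART-HSG-0042"],  # link to the part that fulfills it
    "verified_by": ["TEST-3401"],      # link to the test case that checks it
    "status": "Released",
}
```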

Important things to keep in mind:

  • Understand what risks should be considered and managed
  • Make sure you have governance policies around managing AI-augmented business processes
  • Become comfortable with AI modeling

The most challenging bit may simply be getting used to AI modeling. While it is a mistake to assume AI cannot detect inconsistent requirements, it is equally dangerous to trust AI too much. AI often produces what are called “hallucinations,” in which it creates an answer to a question that sounds good but is, in fact, incorrect. Hallucination detection and mitigation is a hot topic in all LLM circles, and organizations need to carefully monitor the current state of the art and ensure their GenAI implementations have proper guardrails in place to manage the risk.
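Guardrails do not have to be elaborate. One low-cost check, sketched below under the assumption that generated answers are asked to cite PLM item IDs, is to verify that every ID the model mentions actually exists in the data it was given and to route anything unknown to a human reviewer. The ID pattern and review handling are illustrative choices, not a complete hallucination defense.

```python
# Minimal sketch of one guardrail: flag answers that cite unknown item IDs.
# Assumes answers are prompted to reference item IDs such as 'REQ-101';
# the ID pattern and the review step are illustrative choices.
import re

KNOWN_IDS = {"REQ-101", "REQ-102", "PART-HSG-0042"}  # IDs supplied to the model

def unknown_citations(answer: str) -> list[str]:
    """Return any cited item IDs that do not exist in the source data."""
    cited = set(re.findall(r"\b(?:REQ|PART|TEST)-[A-Z0-9-]+\b", answer))
    return sorted(cited - KNOWN_IDS)

answer = "Per REQ-105, the housing must survive 95 °C."
missing = unknown_citations(answer)
if missing:
    print(f"Flag for human review; unknown references: {missing}")
```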

Training the people in your PLM ecosystem will be a necessary step toward making sure the workforce has the new skills it needs, and that opens the door to wider implementation and greater benefits from AI.

Check out the on-demand webinar here.

Interested in learning more about the role of AI in PLM? Read my earlier post, Exploring Practical AI Use Cases in Product Lifecycle Management.