Do we need to fix PLM basics?

The weekend normally puts me into a much deeper thinking mode about what to discuss on PLMtwine. Since the post about Top Five Disappointing PLM Technologies, I’ve been thinking more about fundamental PLM elements, rather than about specific pieces of PLM. In addition, it was very interesting to see how many thoughts and opinions appeared in the PLM space after the Google Wave announcement ten days ago. When new technology comes, it always sounds like the new techie stuff can fix old problems magically. But this is not always true, and sometimes dressing an “old body” in new technology “clothes” does not create a magical change. So today, I’d like to share my thoughts about the ‘basics’ of Product Lifecycle Management – the things that, in my view, provide the fundamental definitions and tool-sets for the rest of our PLM activity.

PLM Model

This is the most important piece of a PLM system. Since PLM is about product lifecycles, it’s essential to be able to create a product model and its surrounding world in the PLM system. In the current PLM landscape, I see three PLM model lines:

(1) CAD / Product Structure – these models evolved from design and product data management systems. Their core advantages emerged from a very mature background: the history of the CAD industry and its ability to create design and engineering models. In my view, these systems are perfect for representing product design in a static view. However, they lack the capabilities to manage the product model’s relationships with the business world. The core reason lies in the roots of these models, which can present only snapshots of various product views (a small sketch contrasting this with the time-oriented ERP approach follows this list).

(2) ERP-based – these models came out of business systems. Going back to the beginnings of MRP/MRP II, their fundamentals are in manufacturing and business planning. These systems are much better suited to representing time-oriented business, and much more appropriate for lifecycle management (from a time management standpoint) – but since their core is business-oriented, most of them lack the ability to keep comprehensive definitions of product design, engineering and the other elements of product models.

(3) EDM/PDM – you can find many different product models created as parts of different applications in the document and product data management domains. All of these models are normally very well suited to their original applications. The core problem is that most of them are fragmented and not extensible to the level needed to keep a system running and growing.
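
To make the contrast between the first two model lines concrete, here is a minimal sketch in Python (my own illustration with hypothetical names, not any vendor’s schema). It puts a “static” CAD-style product structure next to a time-aware, effectivity-based structure of the kind ERP-rooted models favor:

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    # A minimal "static" product structure, typical of CAD/PDM-rooted models:
    # it captures one snapshot of the design, with no notion of time.
    @dataclass
    class Part:
        number: str
        revision: str
        children: List["Part"] = field(default_factory=list)

    # A lifecycle-aware usage link, closer to what ERP-rooted models offer:
    # each parent-child relationship carries effectivity dates, so the
    # structure can answer "what did this product look like on date X?"
    @dataclass
    class UsageLink:
        parent: str
        child: str
        effective_from: date
        effective_to: Optional[date] = None  # None = still effective

    def structure_on(links: List[UsageLink], parent: str, on: date) -> List[str]:
        """Return the children of `parent` that are effective on a given date."""
        return [l.child for l in links
                if l.parent == parent
                and l.effective_from <= on
                and (l.effective_to is None or on < l.effective_to)]

    links = [
        UsageLink("BIKE-100", "WHEEL-A", date(2008, 1, 1), date(2009, 6, 1)),
        UsageLink("BIKE-100", "WHEEL-B", date(2009, 6, 1)),  # a change swapped the wheel
    ]
    print(structure_on(links, "BIKE-100", date(2009, 1, 1)))  # ['WHEEL-A']
    print(structure_on(links, "BIKE-100", date(2009, 7, 1)))  # ['WHEEL-B']

The static model can only answer “what does the product look like now?”; the effectivity-based one can answer that question for any point in the lifecycle.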

So, my intermediate conclusion is that Product Models for PLM are still in a very immature phase. Most probably, new technologies need to be applied in this space in order to be more efficient, and in order to scale to the tasks we have today in Product Lifecycle Management.

Change Management

Since PLM is about lifecycles, “change” is another fundamental piece of the PLM space. Unfortunately, in my view, most PLM systems are not created with ‘change’ in mind. Applying changes in these systems is a very expensive and time-consuming process. A lot of business logic and specific techniques create complex dependencies in how PLM is implemented and in what is needed to add specific new characteristics. At the same time, today’s business is very dynamic. These mismatched behaviors create a basic conflict between a PLM implementation and its surrounding business environment.
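
As a toy illustration of why change is so expensive – my own sketch, not taken from any PLM product – compare a rigid, hard-coded schema, where a single new attribute ripples through every layer written against it, with a metadata-driven one, where one declaration is the only change:

    # Rigid model: the attribute list is hard-coded, so adding "material"
    # to parts means editing this export, plus validation, ERP mappings,
    # reports, and any other code written against the old schema.
    RIGID_EXPORT_COLUMNS = ["number", "revision", "weight_kg"]

    def rigid_export(part: dict) -> str:
        return ",".join(str(part[c]) for c in RIGID_EXPORT_COLUMNS)

    # Metadata-driven alternative: attributes are declared once, and
    # generic code picks them up; the one-line schema change is the
    # whole implementation of the new characteristic.
    SCHEMA = {"part": ["number", "revision", "weight_kg", "material"]}

    def generic_export(type_name: str, obj: dict) -> str:
        return ",".join(str(obj.get(a, "")) for a in SCHEMA[type_name])

    part = {"number": "BIKE-100", "revision": "B", "weight_kg": 9.5, "material": "Al"}
    print(rigid_export(part))            # BIKE-100,B,9.5  (material silently lost)
    print(generic_export("part", part))  # BIKE-100,B,9.5,Al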

Staged Assumption

This approach is intended to resolve the complexity of PLM implementation in the organization. Since all PLM expectations cannot be met in a single implementation shot, most implementations are done in stages. This is a very practical and efficient mechanism for separating a PLM implementation into domains. In this way, each domain is treated separately, with additional ones being added year after year. The problem with this approach is the issue of “Change Management” that I discussed earlier in this post. From stage to stage, the complexity of the system increases and is multiplied by inefficient change management, creating more and more expensive implementations. (I have to say that this characteristic is not unique to PLM and is probably the same for all enterprise systems.)

Final conclusion for today: I don’t want to propose possible solutions or point to “magic” or “instant” technologies. However, I did want to talk about three fundamental behaviors of Product Lifecycle Management. Understanding these behaviors and their alignment with new technological achievements can change what we’ve been doing in PLM until now.

As usual, I’m looking forward to an open discussion, and will continue blogging about this topic.



14 Responses to Do we need to fix PLM basics?

  1. “Staged Assumption” and “Change Management”…
    I wonder how many PDM implementations (this might also apply to CAD or ERP) start from a blank sheet and can be done in stages. I would hazard a guess that for most people it’s a case of migrating from an old tool (or from Excel) to the new tool. So there’s a big-bang data migration issue: where the master data lives from one day to the next, and the processes to maintain it.
    It’s VERY difficult to identify subsets of the data to migrate in waves, since reuse inevitably means parts turn up in multiple products. And user functions don’t like using the old tool for this product and the new tool for that one. And as PDM is the heart of where info is created, it always has a lot of downstream interfaces that also want to switch from old to new in one step.
    So if anyone has found a way to carve up the problem I’d be interested!
    Jez

  2. Gerard Schouten says:

    Hi Jez. Being busy deploying a change management workflow tool, I can tell you that a big-bang approach – implementing a new system AND migrating data – is costly and time consuming. Therefore we decided to leave the old system as-is and to implement the new one. Our engineers may choose which one to use, but not both. If the new system helps an engineer speed up the decision process, he/she will be keen on using it… Interfaces with other systems: none. Once the new system has proven itself, we will consider interfaces with other systems.

  3. Jez, Thank you for your observation! I think you are right: almost all implementations start from data migration. And for many organizations this first big bang is very hard to do. Therefore I see people introducing their implementations in steps. Sometimes I see people starting new projects / product lines with new tools (you can see a lot of such examples in aerospace, where most programs are long). I believe, with today’s level of tooling, the ability to go in steps mostly depends on the customer’s ability to see clusters of implementation and how they fit the organization. The biggest problem is “change management”, so the “waves” you mention are very expensive and not simple. -Oleg.

  4. Interesting input Gerard, but we found it couldn’t work for us. We have >30 operational interfaces that want (almost) real-time downloads of new/changed data. We considered back-feeding the old system(s) and progressively migrating the interfaces, but that would mean a lot of work on the old system(s) to match the ‘improved’ data model in the new system. In effect, all the forward data migration scripts would have to be duplicated for back-feeding. cheers Jez.

  5. Jez and Gerard, Thanks for your comments! Both decisions are costly in my view. Not that I can propose something better today :(. Back-feeding the old system is expensive. If you don’t have many operational dependencies, your chances with a silo approach are better. But afterwards, consolidation will be yet another challenging project, in my view. The core of these problems is “change management” of implementations. In your approach you are trying to start at a particular operational level, and this initial level is related to the complexity of the data you manage in your organization. -Regards, Oleg

  6. Alec Gil says:

    Oleg, I like what you said about implementation change management. I believe that PLM implementations, for better or worse, are a never-ending process, in the same way continuous improvement is. The idea of continuously evolving systems, however, is inherently at odds with big-bang implementations, which I do not believe work well. In addition to the organization’s ability (or rather lack thereof) to absorb all of the changes at once, I do not believe we can anticipate and predict everything upfront.

    What a company must have is a vision, a PLM roadmap of sorts, of what it is trying to accomplish short and long term. This roadmap must be continually communicated and reinforced with upper management and other interested parties, so no one is surprised by what is coming or changing. The key is, with every new implementation stage, we must re-evaluate what already is in place and, if needed, optimize it.

    Another point that caught my attention is the dynamics of business versus “static views” of product data. I think what you are really referring to here is the disconnect between CAD/PLM systems and the more “traditional” business systems such as ERP. Business drivers that affect changes to product structure do not easily make their way into the PLM system – certainly not on a timely basis – and vice versa. At best, if the systems are somewhat integrated, BOMs get loaded from PLM to ERP on some periodic basis.

    What we implemented in our company is a kind of dynamic interaction between the PLM and ERP systems, where business-related changes affect engineering work and vice versa. The PLM/ERP interactions are not limited to “batch loads” or the like, but can happen at any time, on user or system demand, often repetitively on the same objects, to communicate information that may have changed since the last update. These on-demand communications capture change dynamics that would be impossible with systems that are not deeply integrated. Only when the communicated data (ECOs, for example) reaches a certain maturity state does the system prevent users from making further changes.
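
    A rough sketch of the kind of on-demand interaction Alec describes, under my own assumptions (all names hypothetical; the real integration is certainly richer): an ECO can be re-synced to ERP any number of times while in work, and is frozen once it reaches the released maturity state.

        MATURITY = ["in_work", "in_review", "released"]

        class ECO:
            def __init__(self, eco_id: str):
                self.eco_id = eco_id
                self.state = "in_work"
                self.payload = {}

            def update(self, **changes):
                if self.state == "released":
                    raise ValueError(f"{self.eco_id} is released; no further changes")
                self.payload.update(changes)

            def promote(self):
                self.state = MATURITY[MATURITY.index(self.state) + 1]

        def sync_to_erp(eco: ECO):
            # Stand-in for the real interface call: it can run on demand,
            # repeatedly, for the same object, sending the current values.
            print(f"ERP <- {eco.eco_id} [{eco.state}]: {eco.payload}")

        eco = ECO("ECO-0042")
        eco.update(part="BIKE-100", new_revision="C")
        sync_to_erp(eco)              # first on-demand push
        eco.update(reason="supplier change")
        sync_to_erp(eco)              # re-sync the same object with new data
        eco.promote(); eco.promote()  # in_review, then released
        # eco.update(cost=12.0)       # would now raise: frozen at maturity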

  7. Alec, thanks for your insight! What I like is how you see PLM implementation as a non-stop process, not only involving additional segments of the company’s business, but also reacting to a changing business and surrounding environment. This is a significant value a company can gain from Product Lifecycle Management. I believe the “dynamic integration” between ERP and PLM is a sort of business-rule layer that allows you to manage the process between the two systems. In my view, this is something that can hardly be achieved (or requires significant resources). Best. Oleg.

  8. Oleg, I like your perspective on “PLM Models”. Thank you for your comment on my related blog post on “PDM Engines”. I think these are distinct but related topics, in the respect that PDM Engines are about functionality, while PLM Models are about the behavior of the system. I believe both are important to consider when choosing a PLM strategy, or even a PLM point solution.

    To your discussion with Alec above, I think that a tightly coupled PDM “model” and ERP “model” offer a best-of-breed approach (a flexible model that suits different audiences’ needs). I agree that such an integration is a significant investment, although it may be one of the most valuable elements of PLM (connecting the planning and execution sides of an organization).

    Regards, Jonathan.

  9. Jonathan, Frankly speaking, I see “Models” and “Functionality” as very much dependent on each other; they can hardly live separately. In the end it is all semantics what to call them. My point is that the “Model” defines the capability of the system to manage data, and PLM data is pretty complex. So, without being able to manage it properly, a PLM/PDM system becomes useless… The PDM Engine, in my view, plays a similar role; this is why I caught your blog post on this topic… Best. Oleg

  10. Oleg, I agree that “Models” and “Functionality” are tightly connected. I believe an example expresses this best: an ERP “model” of things which is not revision-based offers very limited “functionality” with respect to things like product structure management or CAD file management. So the “model” is certainly tied to the “functionality”. I mean to say that the “model” and “functionality” are different because the “model” is a more theoretical construct (which will not be understood by many people) whereas the “functionality” can be more practically applied and understood. To be sure though, a good PDM “Engine” isn’t worth much if it is part of an overall PLM “Model” that is inaccurate or incomplete.
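
    Jonathan’s revision example is easy to make concrete. A minimal sketch with hypothetical classes (not any actual ERP or PDM schema): a model without revisions simply cannot express revision-level functionality, while a revision-based model gets it almost for free.

        class ErpItem:
            """Non-revision-based: one record per part number.
            Updating an attribute overwrites history, so 'compare rev A
            to rev B' is not even expressible against this model."""
            def __init__(self, number: str, weight_kg: float):
                self.number = number
                self.weight_kg = weight_kg

        class PdmItem:
            """Revision-based: each change creates a new revision record."""
            def __init__(self, number: str):
                self.number = number
                self.revisions = {}

            def revise(self, rev: str, **attrs):
                self.revisions[rev] = dict(attrs)

        erp = ErpItem("BIKE-100", 10.0)
        erp.weight_kg = 9.5                 # the old value is gone for good

        pdm = PdmItem("BIKE-100")
        pdm.revise("A", weight_kg=10.0)
        pdm.revise("B", weight_kg=9.5)
        # Revision-level functionality falls out of the model directly:
        print(pdm.revisions["B"]["weight_kg"] - pdm.revisions["A"]["weight_kg"])  # -0.5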

  11. Jonathan, very good point. Model and Functionality always come together. My point in this post was that it is next to impossible to change them without a huge investment called an “upgrade”. -Oleg

  12. Martijn Dullaart says:

    Hi Oleg,

    This post also relates to http://plmtwine.com/2009/07/20/what-is-difference-between-plm-and-content-management-system/

    But this might be a better place for it…

    1/Functions and attributes
    I think we should start to think in contexts. Generally we tend to develop software in a way where all functions and attributes are managed on the class/object. For instance, the class person can have a function “change pay grade” in the HR system. The pay grade is stored on the person object, but that attribute is only valid in the context of being an employee at a certain company. So this is actually polluting the class person with respect to other contexts – like a workflow management system, where the class person might have a function “assign task” and your pay grade is not relevant. This makes exchange of data and reuse of objects by other functions a lot more complex.

    2/Identification
    For instance, for a person in relation to the government (in the US), your social security number identifies you in that context. But in relation to the whole world, the social security number is not enough: you need to know at least the country plus the social security number, or, even more specifically, DNA is what identifies you uniquely. So based on context you are identified differently and might have different functions.

    3/Conclusion
    Therefore we should introduce contexts to manage the specifics. That means we should only manage core attributes and functions on the class/object itself (making them more exchangeable), while all context-specific functions and attributes should be managed on the context. In that way each context/relation becomes an interface for communication between objects (see the sketch after the note below).

    Note: a context is not necessarily one relation; it can be a set of relations and a set of objects.
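
    A sketch of Martijn’s proposal as I read it, with my own (hypothetical) naming: core attributes live on the object, while context-specific attributes and functions live on context objects that link it to other parties, so the core class stays exchangeable.

        class Person:
            """Core object: only attributes valid in every context."""
            def __init__(self, name: str):
                self.name = name

        class EmploymentContext:
            """Person in relation to an employer: pay grade belongs here,
            not on Person, because it is meaningless outside this context."""
            def __init__(self, person: Person, company: str, pay_grade: int):
                self.person, self.company, self.pay_grade = person, company, pay_grade

            def change_pay_grade(self, new_grade: int):
                self.pay_grade = new_grade  # an HR-context function

        class WorkflowContext:
            """Person in relation to a process: task assignment lives here,
            and pay grade is simply not part of this context."""
            def __init__(self, person: Person):
                self.person, self.tasks = person, []

            def assign_task(self, task: str):
                self.tasks.append(task)

        p = Person("Jane")
        hr = EmploymentContext(p, "Acme", pay_grade=7)
        wf = WorkflowContext(p)
        hr.change_pay_grade(8)
        wf.assign_task("Review ECO-0042")
        # Person itself stays clean and reusable across both contexts.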

  13. Martijn,
    The idea of contexts is good. Actually, since PLM systems support flexible modeling, you can model it as a taxonomy of classes and put the relevant properties in the right places (i.e. SSN only for US employees). The problem I see is in “change management”. Our PLM systems (and I believe not only PLM) have very limited capabilities to manage change… you change the schema and you need to make lots of other changes… Regards, Oleg
