PLM Model: Granularity, Bottom-Up and Change

A few weeks ago, I had a chance to post about the PLM Data Model. I think the PLM space has a real lack of discussion about data modeling. It seems to me that PLM vendors and developers are too focused on process management, user experience, and other catchy trends. At the same time, everybody forgets that the data model is the bread and butter of every PDM/PLM implementation. I want to open a debate about what I see missing in current PLM data models.

Granularity
I’m very happy this word has started to catch people’s attention. It came up in multiple discussions I had recently with colleagues in the CAD/PDM/PLM software domain. Chris mentioned it in his Vuuch blog (www.blog.vuuch.com). Al Dean also had a chance to talk about it on Develop3D (www.develop3d.com). One of the problems in PLM is the diversity of implementations and needs. PLM tools have implemented lots of functional goodies over the past decade. However, customization has become a mess. It looks to me like the data model organization in most PLM systems is outdated these days. The last revolution PDM/PLM made was about 15 years ago, when the notion of “a flexible data model” was introduced. Today, the next step should be taken.

Bottom-up
How do you build an efficient data model for a PLM implementation? How do you build a model that answers specific customer needs? The current vendors’ proposal is to make a selection from a list of all possible “modules”. It comes in the form of “best practices”. In my view, these are really “bad” practices. Selecting big data model chunks puts too many constraints on the model and creates compatibility problems. The idea of bottom-up data modeling relies on the capability to define very granular pieces of data and grow bottom-up, building a model that reflects customer needs.
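
As a thought experiment, here is a minimal sketch in Python (hypothetical names, not any vendor’s API) of what bottom-up modeling could look like: granular attribute definitions come first, and customer-specific types are composed from them and grown additively, instead of activating a big prebuilt “module”.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Attribute:
        # The smallest granular piece: one named, typed property.
        name: str
        type: type

    @dataclass
    class ItemType:
        # A customer-specific type composed from granular attributes.
        name: str
        attributes: list = field(default_factory=list)

        def extend(self, attribute):
            # Growth is additive: no big "module" gets swapped in.
            self.attributes.append(attribute)
            return self

    # Start tiny and grow only as this customer's needs demand.
    part = ItemType("Part")
    part.extend(Attribute("number", str)).extend(Attribute("revision", str))
    part.extend(Attribute("weight_kg", float))  # added later, at low cost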

Cost of Change
What is the most killing factor in today’s PDM/PLM software? In my view, it is the cost of change. PLM models are not flexible and carry lots of dependencies on the PLM system implementation. The future, in my view, is building very granular functional services alongside a bottom-up data model schema. This will reduce dependencies between components and, in the end, reduce the cost of change.
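
To illustrate (again a hypothetical sketch, not a real product API), granular functional services would each depend only on the few attributes they actually use, so a model change touches one small service instead of rippling through a monolith:

    def release_service(item):
        # Depends only on "revision" and "state"; other schema
        # changes never touch this service.
        item["revision"] = item.get("revision", "A")
        item["state"] = "released"
        return item

    def weight_rollup_service(parts):
        # Depends only on "weight_kg"; new attributes cost nothing here.
        return sum(p.get("weight_kg", 0.0) for p in parts)

    bolt = {"number": "P-100", "weight_kg": 0.02}
    print(release_service(bolt))
    print(weight_rollup_service([bolt, {"number": "P-101", "weight_kg": 1.5}]))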

What is my conclusion? I think technology matters. Without thinking about technology, PLM won’t be able to make the next leapfrog. This is becoming urgent. The PLM model is a natural starting point for improving PLM implementations.

Just my thoughts…
Best, Oleg

9 Responses to PLM Model: Granularity, Bottom-Up and Change

  1. yml says:

    Hello Oleg,
    This is a very interesting topic, and it is related to another trend: cloud computing.
    Let me try to summarize why. Cloud computing is the ability to programmatically start and stop servers, anywhere on earth, whenever you need to. I agree this definition is plain, simple, and boring, and marketing people might not like it, but after a couple of years of working in this environment, this is my personal takeaway. The only beast that sits a bit outside this definition is App Engine (Google), since you do not have “full access” to the servers, only to the services: web server, datastore, …

    As you noticed, the lack of flexibility in the data model is a big pain in PLM evolution. If you track down the major technical root cause of this “rigidity”, you will notice that the database is the culprit. More precisely, schema migration is a complex, difficult operation, so expensive and critical on systems already in operation that very few people dare to do it. A new kind of datastore is emerging and starting to get a lot of traction: the NoSQL databases. There are several contenders: Bigtable (Google), Cassandra (Facebook), CouchDB, MongoDB… Each of them has some very interesting features, and a large part of these new databases are schema-less: you can change the data model at runtime. IMHO this is a very interesting feature, and I am ready to trade some features of my SQL database to gain this flexibility.
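
    For instance, here is a toy sketch of the schema-less idea (standard-library Python only, not a real NoSQL client): documents go in as JSON blobs, so fields can be added at runtime with no migration step.

        # Toy sketch: documents stored as JSON blobs, so the "schema"
        # can change at runtime with no migration step.
        import json, sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE docs (id TEXT PRIMARY KEY, body TEXT)")

        def put(doc_id, doc):
            db.execute("INSERT OR REPLACE INTO docs VALUES (?, ?)",
                       (doc_id, json.dumps(doc)))

        # Same "type", different fields: legal here, but an ALTER TABLE
        # plus data migration in a classic SQL schema.
        put("p1", {"number": "P-100", "revision": "A"})
        put("p2", {"number": "P-101", "revision": "B", "weight_kg": 1.5})

        for doc_id, body in db.execute("SELECT id, body FROM docs"):
            print(doc_id, json.loads(body))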
    Regards,
    –yml

  2. Yann, thank you for the comment! I’m familiar with NoSQL and related trends. I think the biggest problem is granularity and data dependencies. In theory, you can put your data set in Bigtable… However, if any change requires you to change all the related pieces, it becomes a mess. The SQL database engine is only one part of the deal, in my view. The logical data organization, dependencies, and relations (this is what granularity means to me) matter, not the way the data is stored on the server. The reason people like Excel is that they think they can change it. Actually, they can. However, for a big organization it doesn’t work… Best, Oleg

  3. yml says:

    Oleg,
    The fact that Bigtable and friends allow you to add a column at any time, without having to go through a schema and data migration, lets you start with a much simpler data model, focused on the core required elements. Then you can slowly add columns and enrich the business logic supported by the app. This allows you to start with a simple application that you can carefully grow, rather than starting with a data model that can virtually support anything.
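
    As a tiny sketch of that growth path (a plain Python dict standing in for a Bigtable-style wide row, illustration only):

        # A plain dict standing in for a Bigtable-style wide row:
        # columns can appear per row, at any time, with no migration.
        row = {"number": "P-100"}      # day 1: only the core element
        row["revision"] = "A"          # later: a new "column", added live
        row["supplier"] = "ACME"       # later still: richer business logic

        # Old code keeps working; it simply ignores unknown columns.
        print(row.get("number"), row.get("weight_kg", "n/a"))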
    Regards,
    –yml

  4. Yann, I think I got your point. It is an interesting observation. The ability to add a column is a very powerful feature of data stores like Bigtable, Cassandra, etc. However, the problem, in my view, is not only a flexible schema. Today there are proprietary PDM/PLM systems that can do the same. The problem I see is that overall changes in these systems are very expensive. The root cause of the cost of change lies not only in the data model change, but also in the overall system architecture. A top-down system architecture makes systems less flexible and prevents them from growing… Just my thought. Best, Oleg.

  5. yml says:

    Oleg,
    I got your point: data models and their schemas are not the only point of friction slowing down the evolution of these systems.
    Regards,
    –yml

  6. Yann, absolutely. Most of the PLM systems I know allow you to customize/extend the schema. However, when it comes to implementation, it appears that this doesn’t bring value at the application level and requires lots of customization and hand-wiring. Best, Oleg

  7. Cecil New says:

    The most advanced model for PLM I have seen is STEP AP239 (PLCS). It is an “information model”, which can be used to create a data model. I believe that Jotne EPM’s product ExpressWay does this.

  8. Cecil, thanks for your comment! I think you are right. This is the most extensive model that is open and shared so far. Best, Oleg

  9. [...] It corresponds to some of my previous blogs: PLM out-of-the-box: Misleading or Focusing? and PLM Model: Granularity, Bottom-Up and Change. The ability to deploy pre-configured solution and make an easy change by manipulating multiple [...]

