PLM: configuration vs. customization. Let's sort it out.

October 6, 2015


Enterprise software customizations are painful. Remember my old post – Is PLM customization a data management Titanic? Nobody likes to customize PLM software, but to some degree every company does it during implementations. You can catch up on my previous articles about that – How to eliminate PLM customization problem and How to de-customize PLM. There is demand to eliminate the need to customize systems, but how feasible is that?

My earlier conclusion was that PLM vendors need to think about how to make implementations cost-effective and how to support the flexibility of PLM products and tools. This is especially important in the era of cloud computing and the growing number of cloud deployments. PLM vendors will have to invest in technologies and methods that simplify deployment, flexibility and speed of implementations.

Jim Brown and Stan Przybylinski, both well-known analysts in the PLM industry, just released a funny video and a serious interview on PLM customization. Navigate here to read more. Watch the video:

It brings up the topic of the difference between customization and configuration, which can be confusing. Where is the border between the two? So I thought it would be useful to clarify things a bit and put them in the perspective of modern technological trends and development. Both configuration and customization aim to alter a software product's behavior. At the same time, the two approaches differ.


In the old days of enterprise software, customization meant altering software code. The customized product was then deployed at the customer's site. It took time and was expensive. In addition, future releases of the product could become incompatible with the customized version.

Over the last 10-15 years, enterprise software (PLM software included) developed ways to customize software using APIs and data-model changes. For most PLM products, the trick was to use only approved APIs and not to hack the data model by injecting direct SQL commands. That last one was a grey area – many customers did it, but not everyone will admit the guilt.


The term configuration means that system behavior is altered using vendor-supplied configuration tools. Some systems provide a more user-friendly administration UI, which became important, especially for software integrators running PLM implementations for their clients. Because configuration tools are provided by the vendor, the vendor takes care of future compatibility between releases.

So, "configuration" assumes that you don't need to write "code" to configure the system. But it can be a bit complicated, especially when it comes to APIs. What if the API is provided by the vendor?
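To make the distinction concrete, here is a minimal sketch in Python of a hypothetical system whose behavior is altered first by configuration data the vendor interprets, then by customer-written code plugged into an API hook. All names and data are invented for illustration:

```python
# Configuration: behavior is altered by vendor-interpreted data, no code written.
config = {
    "part_number_prefix": "ACME-",
    "require_approval": True,
}

def create_part(name, config):
    """Vendor code interprets configuration to alter system behavior."""
    part = {"name": name, "number": config["part_number_prefix"] + name.upper()}
    part["state"] = "Pending Approval" if config["require_approval"] else "Released"
    return part

# Customization: the customer writes code that the system calls via an API hook.
def custom_numbering_hook(part):
    """Customer-written code plugged into a (hypothetical) vendor API."""
    part["number"] = "CUSTOM-" + part["name"][:3].upper()
    return part

part = create_part("bracket", config)
print(part["number"])   # ACME-BRACKET
part = custom_numbering_hook(part)
print(part["number"])   # CUSTOM-BRA
```

The point of the sketch: configuration changes data the vendor's code reads, while customization injects code of your own – which is exactly why the vendor can guarantee compatibility for the former but not the latter.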

APIs – the devil is in details

Application Programming Interfaces (APIs) have become popular over the last two decades. The demand for openness, integration and broader platform development made vendors invest more in API development. Many of these APIs are used by vendors and partners for application development and… customization.

Here is the thing. APIs are getting more popular and easier to use. Over the last decade, scripting languages like JavaScript have made APIs a very effective way to configure and customize system behavior. Many of them are used for automation and integration.
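As a sketch of what such scripted automation looks like, here is a small Python example that composes a REST call to a hypothetical PLM endpoint; the URL, fields and "release" operation are made up for illustration, not taken from any vendor's API:

```python
import json

def build_release_request(part_id, revision):
    """Compose an HTTP request a script could send to release a part
    through a (hypothetical) PLM web API."""
    return {
        "method": "POST",
        "url": f"https://plm.example.com/api/v1/parts/{part_id}/release",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"revision": revision, "comment": "Released by script"}),
    }

req = build_release_request("P-100", "B")
print(req["url"])  # https://plm.example.com/api/v1/parts/P-100/release
```

A few lines of script like this, calling only approved web endpoints, can replace what used to be done with fragile SQL hacks.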

Web APIs and cloud technologies

Cloud brings many challenges to enterprise software configuration and customization. Many well-known techniques (especially those related to SQL and database-level customization) can no longer be used. Databases are hidden behind web and application servers. Multi-tenant cloud systems add even more complexity to supporting database-level customization.

As a result of web and cloud technology development, there is increased demand for two things – 1/ robust configuration tools provided by vendors; 2/ web-based APIs. Together, APIs and configuration tools need to support the demand for PLM system flexibility.

What is my conclusion? It is important to understand what is behind the "configuration vs. customization" semantics. Even more, it is important to align customer requirements with the level of flexibility a PLM product and technology can support. The demand is for open, flexible systems that can be configured using tools and a wide range of APIs, and all these options should be supported by the vendor. The development of web APIs and cloud-based automation tools makes both configuration tools and APIs important for successful PLM implementations. Just my thoughts…

Best, Oleg

Image courtesy of blackzheep at

Onshape, API and cloud architectures

April 23, 2015


I assume you are aware of Onshape, the new CAD software outfit founded by Jon Hirschtick and part of his old SolidWorks founding team. The software is in public beta at the moment. You can refer to my previous posts – PLM thoughts after Onshape public beta and Develop3D live set a stage for cloud CAD competition. I also recommend the Develop3D article – Onshape Beta goes live – start your engines. Navigate to the following link to discover more Onshape resources.

Integration is a very important thing in the engineering application domain. Engineers and manufacturing companies use multiple applications for design, simulation, product data and lifecycle management. Therefore, system architecture and the availability of APIs are absolutely important in order to develop integrations and more specific, complex engineering data flows.

Earlier today, my attention was caught by an Onshape blog post by Steve Lewin-Berlin, which gives some perspective on Onshape APIs. Onshape is using its own APIs to create its first integration, with Graebert drawings. Here is the passage from the blog explaining that:

I’ve been leading the development of the Onshape API for the past year. COFES was our team’s coming out party, marking the first public discussion of the API. The introduction of Onshape Drawings and our partnership with Graebert GmbH is an important part of the story.

We decided to build Onshape Drawings on top of the same API that will be available to partners. In a classic case of “eating our own dog food,” we believe that using the API for a significant internal component validates the capability and performance of the API. This also provided a clean interface between Onshape and Graebert technology, allowing us to leverage the extensive technology available in Graebert’s core drawing engine.

As you can see in the screenshot below, Onshape Drawings run in a tab just like Parts and Assemblies, and use a native Onshape style toolbar and command structure.


Last week at COFES 2015 I spent some time learning about what Onshape is doing with APIs and integrations. You will be able to integrate Onshape using three different approaches – file exchange, a live link using the REST API, and more complex integrated cloud workflows. A few pictures below give you some idea of Onshape integrations (apologies for the quality of the pictures – I took them in a standing-room-only crowd during the Onshape presentation at COFES).
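To give a flavor of the three approaches, here is an illustrative sketch; since the API documentation is not public yet, the endpoint paths and fields below are my assumptions, not Onshape's actual API:

```python
BASE = "https://cad.example.com/api"  # hypothetical base URL

def export_step(doc_id):
    """1. File exchange: request a neutral-format (STEP) export of a document."""
    return {"method": "GET", "url": f"{BASE}/documents/{doc_id}/export?format=step"}

def get_assembly(doc_id):
    """2. Live link: read the assembly structure over a REST API."""
    return {"method": "GET", "url": f"{BASE}/documents/{doc_id}/assembly"}

def register_webhook(doc_id, callback_url):
    """3. Cloud workflow: subscribe to model-change notifications so another
    cloud service can react automatically."""
    return {"method": "POST", "url": f"{BASE}/webhooks",
            "body": {"document": doc_id, "callback": callback_url}}

print(export_step("d1")["url"])  # https://cad.example.com/api/documents/d1/export?format=step
```

The interesting difference is in coupling: file exchange is loose and batch-oriented, the live link is synchronous, and webhooks enable event-driven workflows between cloud services.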





Onshape promises to make the APIs and documentation available to a broader audience later in May.

What is my conclusion? Hybrid architecture and APIs. For the next decade we will live in a world of new cloud apps and existing desktop tools. I can see people starting to use new cloud services in parallel with existing design applications. Openness will be even more important than before. It is critical to follow an open integration architecture and REST APIs to support the mix of required integrations. Just my thoughts…

Best, Oleg

Image courtesy of Danilo Rizzuti at


Dogfooding and PLM APIs random thoughts

November 14, 2012

If you have been in the software business long enough, you should be familiar with the term "dogfooding" (or "eat your own dog food"). The term describes a situation in which a company uses its own products to demonstrate their capabilities and quality. If you are not familiar with the practice, navigate to the following Wikipedia article to read more. I liked some of the examples there, specifically the Apple one, which I wasn't aware of –

Apple Computer president Michael Scott in 1980 wrote a memo announcing that "EFFECTIVE IMMEDIATELY!! NO MORE TYPEWRITERS ARE TO BE PURCHASED, LEASED, etc., etc." by the computer company, with a goal to eliminate typewriters by 1 January 1981.[9]

The following passage brings a few more examples:

One perceived advantage beyond marketing is that dogfooding allows employees to test their company’s products in real-life scenarios,[3][5] and gives management a sense of how the product will be used, all before launch to consumers.[5] In software development, the practice of dogfooding with build branches, private (or buddy) builds, and private testing can allow several validation passes before the code is integrated with the normal daily builds. The practice leads to more stable builds[citation needed], and proactive resolution of potential inconsistency and dependency issues, especially when several developers or teams work on the same product. For example, Microsoft and Google emphasize the internal use of their own software products[citation needed]. For Microsoft, especially during the development stage, all employees across the corporation have access to daily Software builds of most products in development, including the Windows operating system.[citation needed]

Today, I want to speak about a specific kind of "dogfooding" related to PDM/PLM APIs (Application Programming Interfaces). In the world of PLM implementations, the role of open APIs becomes very important. Usually, when I work with customer requirements, I see notes like this: external programming or customization as a way to compensate for features or functions absent from the product. Yesterday, I had a chance to read the following TechCrunch article – 5 Rules for API Management. Even if you are not a programmer or software engineer, have a read and form your own opinion.

The article made me think about the complexity of API delivery in PDM/PLM, as well as about "lifecycle". The latter is important – PDM/PLM products live for a very long time, and the development of stable APIs is a separate, almost "must-have" prerequisite. The five rules – design, documentation, analytics, universal access and uptime – made perfect sense to me. I found an interesting note about the relationship between IT and business groups (which is also very typical for many PDM/PLM implementations):

Enterprise API Management must include the entire Enterprise, not just the techies in IT. The SOA solution, and the other gateways as well, is focused on the IT person and not the business owner of the API program. This is reflected in the UI that they present in their free version as well as their language that includes things like “policies”; too much of the business rules are codified in complex policies that require a technical expert to really use.

However, I found the notion of analytics most interesting, since it can address the idea and requirements of API management through the lifecycle of the product. Here is a passage to think about:

[how to] think about the collection and processing of all the statistics associated with the use of the API, with an eye toward supporting and encouraging effective usage and discouraging/limiting usage that is counter to your business or technology goals.
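The idea above – collecting per-API statistics and discouraging usage that works against your goals – could be sketched as follows (a hypothetical example of mine, not from the TechCrunch article):

```python
from collections import Counter

class ApiAnalytics:
    """Hypothetical sketch: collect per-endpoint call statistics and flag
    usage that exceeds a limit (a stand-in for 'discouraging' misuse)."""

    def __init__(self, limit):
        self.limit = limit
        self.calls = Counter()

    def record(self, endpoint):
        """Count a call; return False when the endpoint exceeds its limit."""
        self.calls[endpoint] += 1
        return self.calls[endpoint] <= self.limit

    def top(self, n=1):
        """Most-used endpoints – the 'eye toward effective usage' part."""
        return self.calls.most_common(n)

stats = ApiAnalytics(limit=2)
for _ in range(3):
    allowed = stats.record("/parts")
print(allowed)      # False – the third call exceeds the limit
print(stats.top())  # [('/parts', 3)]
```

Even a trivial counter like this gives a vendor data about which API functions customers actually depend on – exactly the knowledge needed to keep APIs stable across product releases.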

What is my conclusion? The days of single PLM platforms are almost gone. The future belongs to networks – data networks, product networks and cloud service networks. The ability to adapt a product to customer needs, to continue product development in a fast-changing customer environment, and the strategic goal of cloud deployment set new goals for PDM/PLM developers. The importance of having an agile, flexible API that can sustain many product releases and development cycles has never been greater. Just my thoughts…

Best, Oleg

Image is courtesy of TechCrunch article (Feature image courtesy of XPlane – under Creative Commons.)

PLM Implementations and Open APIs

May 8, 2012

Let's talk nuts and bolts today: APIs. If you think about any PDM/PLM implementation, the question about APIs is one of the most important. Why so? Because, as you know, it is near impossible to get everything done out of the box and via configuration. Whatever marketing advertised and sales promised, you will have to get something done behind the scenes using this magic word – API.

PLM Openness

The topic of openness comes up very often these days. I posted about openness about a year ago – PLM and New Openness. Notable news around PLM openness these days concerns the so-called "Codex of PLM Openness" introduced by ProSTEP iViP. Navigate to the following link and you will discover that the majority of PLM vendors, including the big three (Dassault, PTC and Siemens PLM), are committed. Yesterday, during the opening session of the annual Siemens PLM user conference – PLM World 2012 in Las Vegas – the topic of PLM openness came up in many conversations and was even captured by the Siemens PLM blog.

Enterprise Systems and APIs

Enterprise systems have a long history of API development. If you have spent enough time with databases and enterprise business software, you probably remember horror stories of proprietary databases, the move to SQL, the hope of XML, the belief in SOA / web services and the latest dreams about REST APIs. Last week, I came across a very interesting blog trilogy from the CloudAve blog about enterprise architecture, APIs and more, called Simple Service Enterprise (part 1, part 2, part 3). It is a bit long, but I recommend you have a read. The following picture resonated with my thoughts about PLM implementations and APIs:

Here is my favorite passage that I’d apply to product lifecycle management and many other enterprise implementations:

…the fundamentals of information interchange: exposing business functionality, currently encapsulated in the back-end, to the outside world via services. These services are a one-to-one translation to back-end functions, which are one-to-one translations to business process steps themselves: the smallest level of business transaction.

Implementations, API and Open Data

Here is how I see the future of open APIs. PLM systems hold data hostage and are responsible for a set of processes and transactions. Since a PLM system cannot live in a vacuum, its interaction with other systems in the enterprise (including various B2B and B2C services) is driven by processes. In order to have a productive API, you need to expose these processes using an appropriate level of granularity, including the semantics of the data (in this context, thinking in terms of resources seems to me the appropriate way). Such semantically-resource-oriented APIs can provide an easy and open way to interact with a PLM system and build the most effective services.
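As an illustration of that granularity argument, here is a minimal sketch (the resource names are mine, not any vendor's) of how a PLM process – an engineering change – could be exposed as semantically named resources rather than low-level database operations:

```python
# Each route maps a process step to a resource with business meaning.
routes = {
    ("POST", "/changes"): "open a new engineering change request",
    ("GET", "/changes/{id}/affected-items"): "list parts affected by the change",
    ("POST", "/changes/{id}/approve"): "move the change to the approved state",
}

def describe(method, path):
    """Look up the business meaning of a (method, path) pair."""
    return routes.get((method, path), "unknown resource")

print(describe("POST", "/changes/{id}/approve"))  # move the change to the approved state
```

Notice that a consumer of such an API reasons in terms of change requests and approvals – the "smallest level of business transaction" from the quote above – without knowing anything about tables or internal object models.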

What is my conclusion? Building a good API is a very complicated task. Making an open API is even harder. I can see potential in exposing both the semantics of data and related system functions in a way that lets me use them and accomplish processes automatically. I think the web and REST give us a bit of promise. The responsibility of vendors is to develop an appropriate level of granularity to make it usable. Just my thoughts…

Best, Oleg

Picture is courtesy of CloudAve.

Top 5 API Mistakes in PLM Software

October 13, 2010

In any enterprise software project, there comes a point when we ask the following question – "what APIs are supported?" We anticipate the need for APIs in order to configure, customize, tailor, change and adapt the software. Whatever words marketing uses, in the end you'll need an API to make the software finally work the way you want. The APIs a product provides define its level of flexibility, and this is one of the important characteristics you need to take into account.

The "10 common mistakes by API providers" article on ReadWriteCloud made me think about the mistakes vendors make when designing and providing APIs. I decided to put together my top-5 list of typical problems I've seen in APIs for CAD, PDM and PLM software. I believe this list is not unique to CAD/PLM and may represent some general trends as well.

1. Complexity of data and API structures
Simplicity is one of the most important characteristics of an API. When it comes to complex products like CAD or PLM, a very typical problem is a high level of complication in API functions, dependencies and data structures. The level of knowledge required to work with the API grows enormously, and the potential for bugs and mistakes grows too.
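To illustrate, here is a hypothetical contrast between a structure-heavy call and a simple one for the same task; both functions and data shapes are invented for the example:

```python
# Complex style: the caller must understand session objects, object references
# and option flags just to read one attribute.
def get_attribute_complex(session, object_ref, attribute_descriptor, options):
    store = session["store"]
    obj = store[object_ref["type_id"]][object_ref["instance_id"]]
    value = obj[attribute_descriptor["name"]]
    return value.upper() if options.get("uppercase") else value

# Simple style: one function, plain arguments, obvious intent.
def get_description(store, part_id):
    return store["part"][part_id]["description"]

store = {"part": {"P-1": {"description": "steel bracket"}}}
session = {"store": store}
print(get_description(store, "P-1"))  # steel bracket
```

Both calls return the same value, but the complex variant forces every API consumer to learn three auxiliary structures first – which is exactly where the bugs come from.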

2. Late availability compared to software release
This one represents the painful situation when APIs become available only in the "next version" of the product. Normally, it reflects a lack of maturity in product functions. It often happens when a vendor releases a new version while anticipating functional changes in the version after that.

3. Inconsistency with core product functions
Another very painful situation is when the product, from a user standpoint, behaves differently from the API. It normally happens when the development organization is inconsistent with regard to planned functions and the API. The involvement of different teams of developers, and the separation of user-function development from API development, can cause these situations.

4. Unstable APIs and lack of backward compatibility
When you customize software for a customer or develop your own application, you want to minimize potential problems when a new version of the software comes to market. The ability of software to support a stable set of APIs between releases is a very important characteristic. An unstable API can jeopardize a new version deployment or a new release of your software. This is a very complicated issue that causes a lot of problems.
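One common way to keep an API stable across releases is to preserve the old entry point as a thin wrapper over the new one. A minimal sketch, with invented function names and data:

```python
def get_bom_v2(part_id):
    """New API: richer BOM structure added in a later release."""
    return {"part": part_id,
            "children": [{"id": f"{part_id}-1", "qty": 2},
                         {"id": f"{part_id}-2", "qty": 1}]}

def get_bom_v1(part_id):
    """Original API, kept stable: still returns the flat list of child
    part ids that customer code written against v1 expects."""
    return [child["id"] for child in get_bom_v2(part_id)["children"]]

print(get_bom_v1("P-100"))  # ['P-100-1', 'P-100-2']
```

Customer integrations built on `get_bom_v1` keep working after the upgrade, while new code can adopt the richer `get_bom_v2` at its own pace.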

5. Lack of tests
Some software vendors consider developers a cheap workforce for debugging new software. I see a higher probability of hitting a bug in an API than in a user function of the same product. In my view, this can be prevented by increased use of the software's own APIs for internal development. The problem often arises when the API is treated as an "addition to the software" and developed separately from core functions.

There is one more thing: availability of APIs. I didn't put it on the list for a reason. Sometimes, CAD/PLM vendors decide to protect some aspects of their software by restricting APIs. This practice is more a matter of business strategy and cannot be considered an "API mistake". Still, it is a problem that often happens and needs to be taken into account.

What is my conclusion? The amount of revenue in the engineering software market related to services and customization is very high. For some segments, it approaches 50% or even more. APIs are the core fuel behind these revenues. A consistent, stable and simple API is an important characteristic of every successful product in this market. Just my opinion.

Best, Oleg

PLM Content and Facebook Open Graph

April 28, 2010

Facebook's F8 conference this week was the place for some very interesting announcements that, in my view, may have an impact on PLM too. I recommend reading a good summary of the news introduced by Facebook here. In my view, it presents a very interesting dimension in the future development of Facebook. The question I'm asking is – should PLM care? Is there something new presented by Facebook that can catch the focus of current and future PLM development?

Web Content Creation

Facebook is taking the next steps in the development of content on the web. FB introduced a new way to build social sites, where the site itself is converted into a meaningful content item able to accumulate links to other elements of web content. It presents another dimension in the way web content can be structured.

Social Graph

The notion of connections between social sites is presented in the Open Graph Protocol. The Facebook OGP API will provide a way to browse dependencies and connections between content elements based on the social dimension. This is a very interesting approach to building rich metadata about web content.

PLM Content

I see a comparison between the way Facebook builds social web content and PLM content. For the last few years, PLM has presented a very hardwired way to create structured content, which creates a lot of complexity in user interaction and tool integration. The fundamentals of this content lie in the ability to structure product information along various dimensions – design, bill of materials, projects, etc. The similarity between Facebook's social content and PLM's structured content is obvious to me. However, Facebook presented an interesting approach to building it. There are several developments on the PLM horizon that could fit such concepts – DS/blueKiwi, PTC Windchill/SharePoint, Vuuch. It will be interesting to see the future development of these products.

What is my conclusion? Facebook is going to change the way web content is created. Current PDM/PLM software seems very cumbersome from the standpoint of how it interacts with users on content creation. PLM can learn a few lessons about how to create content in the organization and across the value chain. Some concepts and ideas can be replicated, in my view. What is your opinion?

Best, Oleg


Next Level of PLM Social Tools Development

January 8, 2010

I want to discuss what the next step in the development of the social aspects of PLM tools can be, as well as what can become a major driver for future social PLM. Last year, we had the chance to see multiple examples of how mature PLM vendors and small companies moved toward establishing products in the social domain. Social Innovation, Social Product Development, Social Design… All these buzzwords were used, but I want to dig in and discuss how, practically, these tools can get some level of social acceptance in the enterprise.

Social vs. Siloed?

This is one of the questions that comes to my mind when I think about the multiple-vendor race toward social tools. The major barrier is user adoption. How many social networks can you be a member of? How can you track your participation in multiple forums, social groups, communities, etc.? What if tomorrow's products come as social software from multiple vendors? SharePoint communities vs. Salesforce Chatter? How many other social networks and communities can practically exist in an organization? My conclusion is that the social experience cannot be siloed – some mechanism is needed to allow people to communicate across business application boundaries.

Social API

How do you organize a cross-application social experience? This problem is not new and exists today across multiple social networks – Facebook, LinkedIn, MySpace, Ning, etc. One option for integrating communication across these networks is the development of a social API that allows communication in a uniform way. An example of such an API is OpenSocial:

OpenSocial helps these sites share their social data with the web. Applications that use the OpenSocial APIs can be embedded within a social network itself, or access a site’s social data from anywhere on the web.

I'd recommend you take a look at the OpenSocial for Enterprise white paper. Some interesting concepts are defined there about how an API can be used to allow cross-application social tools to co-exist and not be siloed into specific application niches.

OpenSocial Architectural Concepts

Broadly speaking, OpenSocial defines two concepts. The first is gadgets, a component model based upon HTML, Cascading Style Sheets (CSS), and JavaScriptTM, that describes how content gets managed and rendered inside a Web browser. If you use sites like iGoogle, then you are already familiar with gadgets. The second is a set of APIs for accessing and working with social data. These APIs define how you access information about a person, their relationships, and their activities. These APIs are made available for use in gadgets, via a set of JavaScript APIs, as well as programmatically via REST. OpenSocial applications can take the form of gadgets that can be embedded into any container that supports the OpenSocial specification or traditional SOA services for integration. Taken together, the gadget component model, social APIs, and REST interfaces, provide a programming model that enables the creation of standards-based social applications.
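As a small illustration of the REST side described in the passage above, here is a sketch of an OpenSocial-style request for a person's connections; the path shape follows the OpenSocial people API, while the host and user id are made up:

```python
def people_request(user_id, group="@friends"):
    """Build an OpenSocial-style REST request for a person's social data.
    The host is hypothetical; the /people/{id}/{group} path shape and the
    '@friends' / '@self' selectors come from the OpenSocial convention."""
    return {"method": "GET",
            "url": f"https://social.example.com/rest/people/{user_id}/{group}"}

print(people_request("u42")["url"])  # https://social.example.com/rest/people/u42/@friends
```

The appeal for the enterprise is that any container implementing the spec can answer the same request, which is exactly the cross-application, non-siloed behavior discussed above.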

You can see adoption of social APIs, and OpenSocial specifically, by existing social networks from MySpace, Ning, LinkedIn, Google and some others.

What is my conclusion today? PLM is interested in jumping on the social bandwagon. However, PLM will be able to do so only by adopting some of the "open behaviors" that are considered must-have attributes of the social world. This will be even more important in the enterprise than in the consumer world.

Just my thoughts.
Best, Oleg


