The Complexity of Product Lifecycle and Google’s Blindspot

September 6, 2013

The power of search giants like Google is enormous these days. Think about the amount of information Google, Twitter and Facebook are processing and you will be stunned by the numbers. Consumerization is a significant trend, and everybody is thinking about how to apply well-proven web and open-source technologies in the enterprise field. Think about product designs, engineering documents, Bills of Materials – the things we commonly consider product data. Eventually, the dream could be to see Google’s engineers recommending the best parts to use or cracking Bills of Materials with hundreds of levels of data. Not so fast…

When it comes to product data, you discover that this type of information processing is different from what we got to know on the web. It starts with the diminished importance of ranking mechanisms based on other people’s discoveries. For example, if you happen to be searching for “Part CHI-93939-STD”, it may not come up on the first pages of a search. But it may be found more directly via a connection to an existing assembly that references it. In this case, data semantics is more important than data ranking.
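To make the difference concrete, here is a minimal sketch (the part numbers and structures are made up for illustration) of a “where-used” lookup – the part is found not by popularity ranking, but by walking the assembly relationships that reference it:

```python
# A toy product structure: each assembly lists the parts it references.
# "CHI-93939-STD" has no web popularity, but it is discoverable through
# its semantic relationship to an assembly.
bom = {
    "ASM-PUMP-100": ["CHI-93939-STD", "SEAL-20", "HOUSING-7"],
    "ASM-MOTOR-200": ["SEAL-20", "SHAFT-3"],
}

def where_used(part_number):
    """Return every assembly that references a part (semantics, not ranking)."""
    return [asm for asm, parts in bom.items() if part_number in parts]

print(where_used("CHI-93939-STD"))  # ['ASM-PUMP-100']
```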

I recently came across the following study – Top Google Result Gets 36.4% of Clicks. Have a look at the charts and you’ll get my point quickly: if you are out of the first five (5) results, you essentially don’t exist. So if you’re “Lady Gaga”, you are certain to appear, ranked at the top. These days, “social ranking” adds some additional flavor to the overall search results. Nevertheless, if you are “Part CHI-93939-STD”, then chances are you don’t exist!

Another interesting blindspot of Google search is lifecycle data. A few days ago, I caught an interesting study – Filling a Search Engine’s Blindspots. Here is the passage describing the lifecycle blindspot:

Today, Christian von der Weth and Manfred Hauswirth at the National University of Ireland in Galway identify one blind spot in Google’s coverage and describe their vision for how to fill it. This information blackspot consists of location-specific information that is only useful for people for short periods of time. An example would be a question such as whether an advertised bargain is still available at a particular shop. Another is to ask whether parking spaces are available at a public event such as an air show, music concert or such like. There is no way that a search engine like Google can index that kind of information that is specific to a particular location for just a short period of time.

What is my conclusion? Product data is extremely complex. It contains lots of relationships, dependencies and semantics. However, that is not everything. The most important element of product data is lifecycle information. Since product data changes as a result of product development, use, maintenance, etc., systems need to capture this product lifecycle data in real time to provide a correct data representation for people in manufacturing companies and the extended ecosystem. It is not a trivial task, and it is a very interesting problem to crack. PLM software architects and other techies – be aware of the complexity of product data lifecycle management. Just my thoughts…

Best, Oleg


Product Lifecycle Future: a 60-year point of view from a BBC film

September 5, 2013

Speak to people about PLM and they will tell you about CAD design, BOMs, processes, ECOs, collaboration and other similar topics. I want to change that and speak about the “lifecycle” part of PLM. The core function of lifecycle is the ability of a PLM system to maintain changes and manage snapshots of product design and organization at a specific period of time. Some products have a very short lifecycle. However, many products, such as airplanes, power plants and even cars, have a very long one. The information about a specific aircraft can easily live for 50-60 years and more.

A few years ago, I discussed the problem of long-term product data retention. Navigate here to refresh your memory. The main part of that conversation was about how to create a logical model and physical data storage for data that must be preserved for a long period of time.

Earlier today, I was watching a very interesting video created by the BBC. Take a look below. Here is what they did.

Sixty years ago, the BBC filmed a train journey from London to Brighton, squeezed into just four minutes. Thirty years ago, we did it again. Now we are bringing it up to date, to see how much has changed – and how much is still the same. Here’s all three journeys side by side

Here is the challenge I see in front of design, simulation and PLM systems. How can we store and replay design and other information about products, to help us recreate a product virtually at different periods of time? How can I create the product experience of a specific version of a car a company designed ten, twenty or more years ago? How can we store all the information about the product environment to recreate what we had on the street and in the city 50 years ago? Some of this information is available in GIS and other mapping services. Some is available in PDM/PLM product designs and other data sources. Combining all these data sources together is an interesting and very real challenge for a PLM system.
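As a thought experiment, here is a minimal sketch (the data model is entirely hypothetical) of the kind of “as-of” primitive a PLM system would need – returning the design revision that was effective at a given date:

```python
from datetime import date

# Hypothetical revision history for one design: (effective_date, revision).
revisions = [
    (date(1990, 5, 1), "A"),
    (date(2001, 3, 15), "B"),
    (date(2012, 9, 30), "C"),
]

def revision_as_of(history, when):
    """Return the revision that was effective on a given date."""
    effective = None
    for eff_date, rev in history:  # history is sorted by effective date
        if eff_date <= when:
            effective = rev
        else:
            break
    return effective

print(revision_as_of(revisions, date(1995, 1, 1)))  # 'A'
```

Recreating a full product experience would mean running queries like this across design, environment and mapping data at once.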

What is my conclusion? I think the PLM industry is underestimating the value of lifecycle data and the ability of this data to influence analytics and decision processes. The ability to ‘recreate’ the product experience at different stages of the lifecycle and periods of time can bring tremendous changes to the way we design, manufacture and support products in the future. Just my thoughts…

Best, Oleg


PLM Data vs. Process: A Turn Towards Linked Data

September 4, 2013

Data vs. Process. The chicken or egg of the PLM industry. This topic is near and dear to many people in the PLM ecosystem. What comes first and why? My attention was caught by Jos Voskuil’s blog post – Mixing past and future generations with a PLM sauce. Have a read and form your own opinion. I liked the following passage:

This culture change and a different business approach to my opinion are about modern PLM. For me, modern PLM focuses on connecting the data, instead of building automated processes with a lot of structured data. Modern PLM combines the structured and unstructured data and provides the user the right information in context.

Link is a powerful word. I appreciate the power of data connection. It reminded me of one of the write-ups I did on the Inforbix blog more than a year ago – Product Data: The Power is in the Link. It goes back to Richard Wallis’ presentation at Semantic Tech and Business 2013 in Berlin.

…the power of the links in Linked Data – of the globally unique identifiers of things and relationships described by URIs (Uniform Resource Identifier) in RDF – for more seamlessly interconnecting data within users’ own domains and with other data in other domains, too…
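For illustration only, here is a minimal sketch using the Python rdflib library (the namespace and identifiers are invented) of how product data could be expressed as linked triples with globally unique URIs:

```python
from rdflib import Graph, Namespace, Literal

PLM = Namespace("http://example.com/plm/")  # hypothetical namespace

g = Graph()
# Each triple links globally identified resources to each other.
g.add((PLM["part-123"], PLM.usedIn, PLM["assembly-9"]))
g.add((PLM["part-123"], PLM.hasRevision, Literal("B")))
g.add((PLM["assembly-9"], PLM.designedBy, PLM["engineer-7"]))

# Follow the links: which assemblies use part-123?
for _, _, asm in g.triples((PLM["part-123"], PLM.usedIn, None)):
    print(asm)  # http://example.com/plm/assembly-9
```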

Jos’ commentary made me think about process vs. data again. I have addressed this topic a few times in my past blogging. My first attempt was PLM: Controversy about Process and Data Management. I wanted to emphasize my strong belief in the need to solve the problem of product data access in an organization.

…the failure to design data access in organizations, was a recipe for disaster for many PLM implementations. PLM programs were focused on “how to improve processes” and forgot about how to put a solid data foundation to support cross-departmental process implementations. So, I’d like to put a quote from Bell Helicopter’s presentation during DSCC 2011 as something PLM vendors and customers need to remember – “to get the core data right first”. Just my opinion, of course.

My next attempt to talk about data vs. process came earlier this year. The discussion was triggered by a Tech4PD dialog between Jim Brown and Chad Jackson, precisely named PLM’s Chicken or Egg Scenario. In a somewhat confusing (to me) vote between "going beyond file control data" and "data beyond engineering has to be centralized, secure and accessible to PLM", I decided process was more important. I explained myself in the post – PLM: Data vs. Process – Wrong Dilemma? My conclusion was to focus on product lifecycle – a data set combining information about the product (data) with information about the process (lifecycle).

The debate made me think about why Data vs. Process is probably the wrong dilemma in the context of PLM. In my view, the right focus should be on “lifecycle” as the core value proposition of PLM and its ability to support product development. In a nutshell, product development is about how to move the product definition (in the broad sense of the word) from initial requirements and design to engineering and manufacturing. If I go further, the next stages of the product definition relate to maintenance and disposal. Defining what represents the product at every stage, together with what is required to move the product from one stage to the next, is the core value of product lifecycle and PLM.
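A minimal sketch of that idea follows (the stage names and transitions are simplified assumptions, not a standard): the lifecycle as a set of product definition stages plus the allowed moves between them.

```python
# Simplified lifecycle stages and the allowed transitions between them.
TRANSITIONS = {
    "requirements": ["design"],
    "design": ["engineering"],
    "engineering": ["manufacturing"],
    "manufacturing": ["maintenance"],
    "maintenance": ["disposal"],
}

def advance(current, target):
    """Move the product definition to the next stage if the move is allowed."""
    if target not in TRANSITIONS.get(current, []):
        raise ValueError(f"cannot move from {current} to {target}")
    return target

stage = advance("design", "engineering")  # ok
```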

What is my conclusion? I agree with Jos. Business is getting more data-sensitive these days. Some time ago, the value of data wasn’t as predominant as it is today, in our Google era. It is clear to everybody that "data matters", and the best thing you can do to prove your point is to bring "data points". This is why the ability to bring "linked data points" about a product becomes so valuable. This is the next PLM turn. Just my thoughts…

Best, Oleg


How will PLM embed information in products?

July 24, 2013

Experience is the new modern hype. You can see it everywhere these days. User experience, selling experience, learning experience, total experience, etc. You can continue the list… I want to talk about "product experience" today. It is obvious and new at the same time. Manufacturers are interested in knowing more about their products. This relates to sales, usage, problem reports and defects, maintenance, etc. It is becoming almost obvious – the more information you get about the usage of your product during the whole product lifecycle, the better you can do. I like the old but famous quote attributed to W. Edwards Deming – "You can’t manage what you can’t measure."

An interesting SolidSmack article caught my attention earlier today – InfraStructs: Embedded ID Tags in 3D Printed Objects Eliminate Need for RFID and Barcodes. The article speaks about how to embed information in 3D-printed objects:

…California event is the announcement from Microsoft Research that they are developing embedded ID tags for 3D printed objects. Titled InfraStructs, the internal tags are created from the same 3D printing process already used to create the intended, printed object; effectively generating an internal, invisible tag that can be read with a terahertz (THz) imaging scanner.

How will manufacturers use it, and why will PLM vendors benefit? The premise of existing RFID technology is a specific tag that must be attached to the product, allowing manufacturing items and spare parts to be tagged during the whole lifecycle. The idea was good, but the implementation is a bit complicated and still costly. By directly embedding additional information, manufacturers can achieve the next level of efficiency. Here is another passage from the article.

Ultimately, the benefit of this approach for manufacturers is that they can embed unique information such as serial numbers or simple programs in coded tags by integrating the design into a pre-determined 3D printed design. In turn, this eliminates the potential need for other (and oftentimes more expensive) identification systems such a RFID tags and electronic chips that can add cost and complexity to the manufacturing, as well as the need for bar codes which can be cumbersome to work with and are vulnerable to tampering.

What is my conclusion? We are moving towards a connected world, where design and engineering parts will be more connected to their physical implementations. That will allow better measurement of the product experience and, as a result, better product lifecycle management. Just my thoughts…

Best, Oleg


Do We Need a Delete Button in PLM?

May 14, 2013

Delete is a special function. In systems dealing with live data, the meaning of delete is interesting. My first lesson about the <delete> function in PDM came 25 years ago. In one of the very first data management systems I implemented, we used a special flag to mark deleted parts. Later on, I discussed delete functionality with the engineering managers of one firm. Think about parts used in production. How can you delete them? They can be ineffective for usage, out of stock, discontinued, etc. However, you cannot literally delete them. Twenty years ago the technology was different. We marked parts and revisions as "obsolete", but we didn’t keep them forever.
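Here is a minimal sketch of that flag-based approach (the field names are illustrative): instead of removing a record, the part is marked obsolete and filtered out of active views, so its history survives.

```python
from dataclasses import dataclass

@dataclass
class Part:
    number: str
    status: str = "active"  # e.g. 'active', 'obsolete', 'discontinued'

def soft_delete(part):
    """Never remove the record; mark it obsolete so its history is preserved."""
    part.status = "obsolete"

def active_parts(parts):
    return [p for p in parts if p.status == "active"]

inventory = [Part("PRT-100"), Part("PRT-200")]
soft_delete(inventory[0])
print([p.number for p in active_parts(inventory)])  # ['PRT-200']
```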

Yesterday, at the airport, the following CNET article caught my attention – Google’s Schmidt: The Internet needs a delete button. Schmidt discusses the tendency of the internet to absorb data and information that cannot be deleted. Here is an interesting passage:

Actions someone takes when young can haunt the person forever, Schmidt said, because the information will always be on the Internet. He used the example of a young person who committed a crime that could be expunged from his record when he’s an adult. But information about that crime could remain online, preventing the person from finding a job. "In America, there’s a sense of fairness that’s culturally true for all of us," Schmidt said. "The lack of a delete button on the Internet is a significant issue. There is a time when erasure is a right thing."

Well, privacy has a different angle, of course. People are not part numbers. However, think about the technology behind the internet these days. Think about Gmail. You can do email forever without deleting anything. I’ve heard rumors that the first version of Gmail had no delete functionality. Storage is cheap these days. You literally can keep all the information created by design, engineering and manufacturing without ever deleting it. Isn’t that fascinating? It can change the way people design and manufacture things.

What is my conclusion? Delete is a very special function when it comes to systems dealing with a lifecycle. The internet is very much changing our horizons in understanding what "total lifecycle" management can potentially include. It also changes the perspective of how to manage the lifecycle for a particular ecosystem such as PLM. The increasing lifespan covered by PLM systems can improve decision making and provide additional insight in the areas of product development, quality management and others. Just my thoughts…

Best, Oleg


PLM, Lifecycle and Google Timelapse ideas

May 10, 2013

Manufacturing businesses are getting more dynamic these days. It is all about how to change, and change fast. The days when manufacturing companies allowed themselves months and even years to respond to business changes are gone. Competition is getting more aggressive. Cost pressure is getting tighter. Companies need a way to analyze what they do from the perspective of time. This is actually one of the places where PLM technology can provide a bigger value in the future. Think about design changes and problems reported in your products over a time lapse of the last 10 years. Think about the quality and cost of suppliers over the last 5 years. How can PLM provide this type of insight and information?
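As a sketch of what such insight could look like (the record format is invented), here is a “timelapse”-style aggregation counting engineering changes per year – the raw material for a trend view:

```python
from collections import Counter
from datetime import date

# Hypothetical engineering change records: (date, affected part).
changes = [
    (date(2004, 2, 1), "SEAL-20"),
    (date(2004, 7, 9), "HOUSING-7"),
    (date(2011, 3, 4), "SEAL-20"),
]

changes_per_year = Counter(d.year for d, _ in changes)
print(sorted(changes_per_year.items()))  # [(2004, 2), (2011, 1)]
```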

Very often, when we speak about PLM, we want to emphasize the middle "L" of what PLM is accomplishing for our customers. It is about Lifecycle. This is where every PLM solution wants to excite and provide value. However, this part of PLM is not well developed.

I’ve been reading about the Google Timelapse project earlier this week. Navigate here to read the ABC News article – Google Timelapse: A Quarter Century of Earth’s Change. Working with the U.S. Geological Survey, NASA and Time magazine, search giant Google has unveiled a project that shows how planet Earth has changed over the course of a quarter century.

Another article, Time’s Timelapse story, provides a bit more detail about the project.

With the help of massive amounts of computer muscle, they have scrubbed away cloud cover, filled in missing pixels, digitally stitched puzzle-piece pictures together, until the growing, thriving, sometimes dying planet is revealed in all its dynamic churn. The images are striking not just because of their vast sweep of geography and time but also because of their staggering detail. Consider: a standard TV image uses about one-third of a million pixels per frame, while a high-definition image uses 2 million. The Landsat images, by contrast, weigh in at 1.8 trillion pixels per frame, the equivalent of 900,000 high-def TVs assembled into a single mosaic.

The technology is available online. You can try it yourself. Read Google’s blog and navigate to the following link to experiment with Google Earth Engine. Here is a search for changes in the Haifa district of Israel that I captured.

[Image: Haifa district timelapse capture, 1984–2011]

What is my conclusion? Thinking about time exploration in the context of manufacturing and engineering information is very inspiring. Data visualization can potentially be very cool and provide a lot of insight to manufacturing organizations about how to improve their businesses. What is your take? Do you have an idea for an engineering timelapse visualization? Speak up.

Best, Oleg


Product Lifecycle and Social Timeline

January 8, 2013

I want to continue the conversation about the intersection of social software and PLM. Yesterday’s blog, Why Social PLM 1.0 Failed?, made me think about how to find a single-user utility in that context. As Jim Brown mentioned in his comment earlier, the social hype calmed down and the PLM ecosystem didn’t change overnight. Altogether, it brings me to the point of finding additional value-added services.

The following blog post caught my attention – 5 Fun Ways to Travel Back in Time with Your Social Data. The article speaks about tools helping us to discover, analyze and go back in the history of our social channels – LinkedIn, Twitter, Facebook, etc. One of the tools, Vizify, drew my special interest. The application, with a very easy user interface, helped me access and analyze my Twitter behavior as well as connections to photos, LinkedIn and Facebook. Take a look at a few screenshots I made. You can also see the results by navigating to this link.

I found an interesting intersection between social, timeline and big data. The PLM lifecycle function is something that can be improved by a similar experience. The ability to access historical product data, run analyses and merge information can be extremely interesting. Think about the ability to merge product releases with a customer social stream and a defect database. I can bring more examples. The utility was able to analyze my data quickly, easily and without heavy IT involvement. I understand that getting access to public services like Twitter and Facebook is easier than getting access to your corporate ERP system. At the same time, the trend towards simplification and value-added cloud services is clear to me.
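Here is a minimal sketch of that merge (the event sources are invented): product releases, defect reports and social mentions combined into one chronological product timeline.

```python
from heapq import merge
from datetime import date

# Hypothetical event streams, each already sorted by date.
releases = [(date(2012, 1, 10), "release", "v1.0"),
            (date(2012, 6, 5), "release", "v1.1")]
defects = [(date(2012, 2, 2), "defect", "D-101")]
social = [(date(2012, 1, 11), "tweet", "great product!")]

# heapq.merge lazily combines the sorted streams into one sorted timeline.
for when, kind, detail in merge(releases, defects, social):
    print(when, kind, detail)
```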

What is my conclusion? In my view, Social PLM 1.0 was completely focused on how to create and collaborate. To me it was an obvious attempt – collaborative design and engineering was always a focus of CAD/PLM vendors. However, think about the single-user utility and the value of additional services. A social timeline (or lifecycle timeline, in the context of PLM) can be an interesting feature and a value-added utility. Just my thoughts…

Best, Oleg


Why does PLM need to learn the Open World Assumption?

December 6, 2012

Have you heard about OWA (the Open World Assumption)? If you completed Math 101 and Mathematical Logic some time ago, refresh your memory by navigating to the following Wikipedia article. Here is the definition:

In formal logic, the open world assumption is the assumption that the truth-value of a statement is independent of whether or not it is known by any single observer or agent to be true. It is the opposite of the closed world assumption, which holds that any statement that is not known to be true is false. The open world assumption (OWA) is used in knowledge representation to codify the informal notion that in general no single agent or observer has complete knowledge, and therefore cannot make the closed world assumption. The OWA limits the kinds of inference and deductions an agent can make to those that follow from statements that are known to the agent to be true. In contrast, the closed world assumption allows an agent to infer, from its lack of knowledge of a statement being true, anything that follows from that statement being false.

The OWA approach is the opposite of the CWA (Closed World Assumption) used by programming languages and databases.

The closed world assumption typically applies when a system has complete control over information; this is the case with many database applications where the database transaction system acts as a central broker and arbiter of concurrent requests by multiple independent clients (e.g., airline booking agents). There are however many databases with incomplete information: one cannot assume that because there is no mention on a patient’s history of a particular allergy, that the patient does not suffer from that allergy.

PDM and PLM, as typical database-driven applications, follow the CWA approach. In many situations this makes a lot of sense. When you release a BOM to production, you want to be sure all line items in the Bill of Materials are secured and released. However, it made me think that CWA might impose some limitations on PLM application development today, and even more so in the future. I’ve been reading a Semanticweb.com blog post – Introduction to Open World Assumption. Navigate to the link to read more. The article provides a good explanation of systems with complete and incomplete information. Here is a snippet of this definition.

The CWA applies when a system has complete information. This is the case for many database applications. For example, consider a database application for airline reservations. If you are looking for a direct flight between Austin and Madrid, and it doesn’t exist in the database, then the result is “There is no direct flight between Austin and Madrid.” For this type of application, this is the expected and correct answer. On the other hand, OWA applies when a system has incomplete information. This is the case when we want to represent knowledge (a.k.a Ontologies) and want to discover new information. For example, consider a patient’s clinical history system. If the patient’s clinical history does not include a particular allergy, it would be incorrect to state that the patient does not suffer from that allergy. It is unknown if the patient suffers from that allergy, unless more information is given to disprove the assumption.
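To make the contrast concrete, here is a minimal sketch in Python (a toy model, not how any real database implements it) of the article’s allergy example answered under both assumptions:

```python
UNKNOWN = "unknown"  # the third truth value the open world assumption requires

recorded_allergies = {"patient-1": {"penicillin"}}

def has_allergy_cwa(patient, allergy):
    # Closed world: the absence of a record means the statement is false.
    return allergy in recorded_allergies.get(patient, set())

def has_allergy_owa(patient, allergy):
    # Open world: the absence of a record means we simply do not know.
    if allergy in recorded_allergies.get(patient, set()):
        return True
    return UNKNOWN

print(has_allergy_cwa("patient-1", "latex"))  # False
print(has_allergy_owa("patient-1", "latex"))  # 'unknown'
```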

Lifecycle and Incomplete information

I came to the conclusion that an incomplete-information modeling approach (supported by OWA) can provide advantages to systems focused intensively on product lifecycle modeling and lifecycle information modeling. Think about lifecycle as information discovery. Modern PLM business problems face situations of information incompleteness almost every day. New regulations, changed business requirements, new product configurations, etc. All these situations require applying changes to existing PLM systems. Flexibility is one of the key requirements. The OWA approach can improve the ability of a PLM system to support change and decrease the cost of that change.

What is my conclusion? Flexibility and cost of change are two major requirements of PLM systems today. The time when PLM development was focused on the OTB (out of the box) approach is over. The ability to apply changes or to connect a new source of information without modifying the system code is an interesting opportunity. PLM developers can look at how to apply OWA principles to make PLM systems more robust and reliable. Just my thoughts…

Best, Oleg

Picture credit semanticweb.com article.


Are We Ready For PLM Art?

October 1, 2012

I’m in the UK these days. Everything is on the wrong side :)… So I decided to start with an unusual topic. What do you think about PLM art? No, I’m not crazy. I think we are about to discover new places where 3D and lifecycle technology can take us. The idea came to my mind when I was reading a TNW article about the C4 creative coding framework. Take a read. Here is an interesting part:

Following in the footsteps of artistic-focused coding frameworks like Processing and openFrameworks, C4 is a new kid on the block with an interesting twist: it’s entirely iOS-focused. The decision to solely target a single, closed platform may seem like a limitation at first, but this hyper-focus on iDevices could prove to be a benefit, enabling C4 to do one thing and do it right. C4 emphasizes a media focused approach, and hopes to be the easiest way for developers to jump right into animation and multi-touch interaction for their iOS apps.

I found the following video somewhat amazing:

Watch this one as well:

The idea of popularizing technologies via tablets and mobile phones is gaining traction these days. A few days ago, I found a quite interesting announcement from Dassault Systèmes about a 3D experience of Paris via a new iPad app. Navigate here to learn more about Paris’s heritage.

Of course, I have to mention one of my favorite games from Autodesk – TinkerBox.

What is my conclusion? Don’t be afraid to start on the wrong side of the street these days. Technology is taking us in different directions. Sometimes it sounds and looks unusual. However, nobody knows where it will take us. People tend to appreciate games and fun more these days. What if the next problem in the lifecycle of a machine or airplane were presented as a game on the iPad, helping your customers understand the experience? I think it is possible. Just my thoughts…

Best, Oleg


Electric Design and PLM Roadmap

June 29, 2011

In the beginning, solutions for manufacturing focused primarily on machinery and mechanical design. The historical reason is simple – mechanical design was a key element of manufacturing for many years. However, the era of ‘mechanical design only’ is ending. We hear more and more about various aspects of combined solutions – Siemens PLM came out with mechatronics a couple of years ago already. Earlier this month, at PlanetPTC, I heard many stories about software-related aspects of product design.

I was reading a Design News article yesterday – Mentor Takes a Lifecycle Approach to Electrical Design. It talks about the latest Mentor announcement related to the expansion of their Capital electrical design platform. This is my favorite passage (actually a quote from Martin O’Brien):

The new Capital suite delivers on all of its traditional capabilities in addition to new functionality for designing the architecture and aiding service technicians supporting the finished product in the field. It also encompasses enterprise data management and compliance functionality, serving as a single repository to help manage and support the highly specialized materials and workflows associated with seeing a complex electrical system through each phase of its lifecycle.

Does it mean electrical design is going the PLM route now? This is an interesting question. In my view, the PLM approach is very successful when we deal with complex product development issues. Remember aircraft design, product configuration, etc. These are examples where product lifecycle management delivered significant improvements and good results. Electrical design stood separate for a long time. The same was true for electronics and software. Is that going to change now?

The picture is courtesy of Design News blog.

The complexity of products is the real issue we need to discuss and mention in this context. Everything is becoming more complex now. The Ford Model T was a simple car. Nowadays, products have become really complex. The integration of various elements is the key problem manufacturers are facing these days.

What is my conclusion? I can see Mentor going down the road of implementing many features and functions we’ve seen in traditional PLM products: lifecycle, technical documentation, multiple functional representations. The words “single repository” mentioned by Mr. O’Brien made me worry a bit. In my view, traditional PLMs found themselves in the “single repository” mousetrap by trying to integrate everything in a single database. The cost and complexity of implementations keep growing. Is it something vendors like Mentor can avoid? Can they learn from others’ mistakes? Is it possible in the software world?

Just my thoughts…
Best, Oleg

