How to visualize future PLM data?

August 12, 2014

[Image: collective experience of empathetic data systems]

I have a special passion for data and data visualization. We work with data every day in our lives. Simple data, complex data, fast data, contextual data… These days, we are surrounded by data as never before. Think about a typical engineer 50-60 years ago. Blueprints, some physical models… Not much information. Nowadays the situation is completely different. Multiple sources of design and engineering data, historical data about product use, the history of design revisions, social information, real-time data from sensors about how the product is performing, etc. Our ability to discover and use data is becoming very important.

The way we present data for decision making can strongly influence and change our ability to design in the context of the right data. Presenting data to engineers and designers these days can become as important as presenting the right information to airplane pilots in the past. Five years ago, I posted about Visual Search Engines on the 3D Perspective blog. I found the article is still alive. Navigate your browser here to have a read. What I liked in the idea of visual search is presenting information in a way people can easily understand.

A few days ago, my attention was caught by a TechCrunch article about the Collective Experience of Empathetic Data Systems (CEEDS) project developed in Europe.

[The project ]… involves a consortium of 16 different research partners across nine European countries: Finland, France, Germany, Greece, Hungary, Italy, Spain, the Netherlands and the UK. The “immersive multi-modal environment” where the data sets are displayed, as pictured above — called an eXperience Induction Machine (XIM) — is located at Pompeu Fabra University, Barcelona.

Read the article, watch the video and draw your own conclusions. It made me think about the potential of data visualization for design. Here is my favorite passage from the article explaining the approach:

“We are integrating virtual reality and mixed reality platforms to allow us to screen information in an immersive way. We also have systems to help us extract information from these platforms. We use tracking systems to understand how a person moves within a given space. We also have various physiological sensors (heart rate, breathing etc.) that capture signals produced by the user – both conscious and subconscious. Our main challenge is how to integrate all this information coherently.”

Here is the thing. The challenge is how to integrate all the information coherently. Different data can be presented differently – 3D geometry, 2D schemas, 2D drawings, graphics, tables, graphs, lists. In many situations we can get this information presented separately using different design and visualization tools. However, the efficiency is questionable. A lot of data can be lost during visualization. However, what I learned from the CEEDS project materials is that data can also be lost during the process of understanding. Blindspotting. Our brain will miss data even when we think we present it in the best way.

What is my conclusion? Visualization of data for better understanding will play an increased role in the future. We are just at the beginning of the process of data collection. We understand the power of data and therefore collect an increasing amount of data every day. However, how to process and visualize data for better design can be an interesting topic to work on in the coming years. Just my thoughts…

Best, Oleg


Will public clouds help enterprises to crunch engineering data?

August 6, 2014

[Image: google-data-center-crunches-engineering-data]

The scale and complexity of data is growing tremendously these days. If you go back 20 years, the challenge for PDM / PLM companies was how to manage revisions of CAD files. Now we have much more data coming into the engineering department. Data about simulations and analysis, information about the supply chain, online catalog parts and lots of other things. Product requirements are transformed from a simple Word file into complex data with information about customers and their needs. Companies are starting to capture information about how customers are using products. Sensors and other monitoring systems are everywhere. The ability to monitor products in real life creates additional opportunities – how to fix problems and optimize design and manufacturing.

Here is the problem… Despite a strong trend towards cheaper computing resources, when it comes to the need to apply brute computing force, it still doesn’t come for free. Services like Amazon S3 are relatively cheap. However, if you want to crunch, analyze and/or process large sets of data, you will need to pay. Another aspect is related to performance. People expect software to work at the speed of the user’s thinking. Imagine you want to produce design alternatives for your future product. In many situations, waiting a few hours won’t be acceptable. It will distract users and they won’t use such a system at all.

The Manufacturing Leadership article Google’s Big Data IoT Play For Manufacturing speaks exactly about that. What if the power of web giants like Google could be used to process engineering and manufacturing data? I found the explanation provided by Tom Howe, Google’s senior enterprise consultant for manufacturing, quite interesting. Here is the passage explaining Google’s approach.

Google’s approach, said Howe, is to focus on three key enabling platforms for the future: 1/ Cloud networks that are global, scalable and pervasive; 2/ Analytics and collection tools that allow companies to get answers to big data questions in 10 minutes, not 10 days; 3/ And a team of experts that understands what questions to ask and how to extract meaningful results from a deluge of data. At Google, he explained, there are analytics teams assigned to every functional area of the company. “There’s no such thing as a gut decision at Google,” said Howe.

It sounds to me like a viable approach. However, it made me think about what would make Google and similar holders of computing power sell it to enterprise companies. Google’s biggest value is not in selling computing resources. Google’s business is selling ads… based on data. My hunch is that there are two potential reasons for Google to support manufacturing data initiatives – the potential to develop a Google platform for manufacturing apps and the value of data. The first one is straightforward – Google wants more companies in their eco-system. I find the second one more interesting. What if manufacturing companies and Google find a way to get insight from engineering data that is useful for their business? Or even more – improving their core business.

What is my conclusion? I’m sure in the future data will become the next oil. The value of getting access to the data can be huge. The challenge of getting that access is significant. Companies won’t allow Google, or PLM companies, to simply use the data. Companies are very concerned about IP protection and security. Balancing between accessing data, providing a value proposition and gleaning insight and additional information from data can be an interesting play. For all parties involved… Just my thoughts…

Best, Oleg

Photo courtesy of Google Inc.


The end of single PLM database architecture is coming

August 5, 2014

[Image: PLM-distributed-cloud-database-architecture]

The complexity of PLM implementations is growing. We have more data to manage. We need to process information faster. In addition to that, cloud solutions are changing the underlying technological landscape. PLM vendors are not building software to be distributed on CD-ROMs and installed by IT on corporate servers anymore. Vendors are moving towards different types of cloud (private and public) and selling subscriptions (not perpetual licenses). For vendors it means operating data centers and optimizing data flow, cost and maintenance.

How to implement future cloud architecture? This question is coming into focus and, obviously, raising lots of debate. The Infoworld cloud computing article The right cloud for the job: multi-cloud database processing speaks about how cloud computing is influencing what is at the core of every PDM and PLM system – database technology. The main message is to move towards a distributed database architecture. What does it mean? I’m sure you are familiar with the MapReduce approach. So, simply put, the opportunity of cloud infrastructure to bring multiple servers together and run parallel queries is real these days. The following passage speaks about the idea of how to optimize data processing workloads by leveraging cloud infrastructure:

In the emerging multicloud approach, the data-processing workloads run on the cloud services that best match the needs of the workload. That current push toward multicloud architectures provides the ability to place workloads on the public or private cloud services that best fit the needs of the workloads. This also provides the ability to run the workload on the cloud service that is most cost-efficient.

For example, when processing a query, the client that launches the database query may reside on a managed service provider. However, it may make the request to many server instances on the Amazon Web Services public cloud service. It could also manage a transactional database on the Microsoft Azure cloud. Moreover, it could store the results of the database request on a local OpenStack private cloud. You get the idea.
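
To make the multi-cloud idea a bit more concrete, here is a minimal sketch in Python of the MapReduce-style fan-out the passage describes. The endpoints are entirely hypothetical: the same query is sent in parallel to several servers that could live on different cloud services, and the client merges the partial results.

from concurrent.futures import ThreadPoolExecutor

# Hypothetical endpoints: each could live on a different cloud service
# (e.g. public cloud worker instances, a private cloud store).
SHARD_ENDPOINTS = [
    "https://aws-worker-1.example.com/parts",
    "https://aws-worker-2.example.com/parts",
    "https://private-cloud.example.local/parts",
]

def query_shard(endpoint, where):
    """Map step: run the same query against one shard (stubbed here)."""
    # In a real system this would be an HTTP or database call to the shard.
    return [{"endpoint": endpoint, "where": where, "rows": []}]

def run_distributed_query(where):
    """Fan the query out in parallel, then reduce the partial results."""
    with ThreadPoolExecutor(max_workers=len(SHARD_ENDPOINTS)) as pool:
        partials = pool.map(lambda ep: query_shard(ep, where), SHARD_ENDPOINTS)
    results = []
    for partial in partials:  # reduce step: merge partial results
        results.extend(partial)
    return results

if __name__ == "__main__":
    print(run_distributed_query("status = 'RELEASED'"))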

However, not so fast and not so simple. What works for web giants might not work for enterprise data management solutions. The absolute majority of PLM systems leverage a single RDBMS architecture. This is the fundamental underlying architectural approach. Most of these solutions use a "scale up" architecture to achieve data capacity and performance levels. Horizontal scaling of PLM solutions today is mostly limited to leveraging database replication technology. PLM implementations are mission critical for many companies. To change that would not be simple.
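
For contrast, the horizontal scaling available to such systems today typically looks like the read-replica pattern sketched below (Python, placeholder connection strings): reads can be spread across replicas, but every write still funnels through the single primary database, so write capacity does not scale out.

import random

# A minimal sketch of the read-replica pattern: all writes go to the single
# primary RDBMS, reads are spread across replicas. Connection strings are
# placeholders, not a real deployment.
PRIMARY = "postgresql://primary.plm.example/plmdb"
REPLICAS = [
    "postgresql://replica-1.plm.example/plmdb",
    "postgresql://replica-2.plm.example/plmdb",
]

def route(statement: str) -> str:
    """Pick a connection for the statement: writes -> primary, reads -> replica."""
    is_write = statement.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE"))
    return PRIMARY if is_write else random.choice(REPLICAS)

print(route("SELECT * FROM items WHERE rev = 'B'"))   # goes to a replica
print(route("UPDATE items SET state = 'RELEASED'"))   # goes to the primary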

So, why might PLM vendors consider making a change and thinking about new database architectures? I can see a few reasons – the amount of data is growing; companies are getting even more distributed; the design anywhere, build anywhere philosophy is coming into real life. The cost of infrastructure and data services becomes very important. At the same time, for all companies performance is an absolute imperative – slow enterprise data management solutions are a thing of the past. Optimizing workload and data processing is an opportunity for large PLM vendors as well as small startups.

What is my conclusion? Today, large PLM implementations are signaling that they are reaching technological and product limits. It means existing platforms are approaching a possible peak of complexity, scale and cost. To make the next leap, PLM vendors will have to re-think the underlying architecture, manage data differently and optimize the cost of infrastructure. Data management architecture is the first to be considered. Which means the end of existing "single database" architectures. Just my thoughts…

Best, Oleg


Importance of data curation for PLM implementations

August 4, 2014

[Image: curate-data-mess]

The speed of data creation is amazing these days. According to recent IBM research, 90% of the data in the world today has been created in the last two years alone. I’m not sure if IBM is counting all enterprise data, but it doesn’t change much – we have lots of data. In a manufacturing company, data is created inside the company as well as outside. Design information, catalogs, manufacturing data, business process data, information from the supply chain – this is only the beginning. Nowadays we speak about information made by customers as well as machines (the so-called Internet of Things).

One of the critical problems for product lifecycle management was always how to feed a PLM system with the right data. To have the right data is important – this is a fundamental thing when you implement any enterprise system. In the past I have posted about PLM legacy data and the importance of data cleanup.

I’ve been reading The PLM State: Getting PLM Fit article over the weekend. The following passage caught my special attention since it speaks exactly about the problem of getting the right data into a PLM system.

[...] if your data is bad there is not much you can do to fix your software. The author suggested focusing on fixing the data first and then worrying about the configurations of the PLM. [...] today’s world viewing the PLM as a substitute for a filing cabinet is a path to lost productivity. Linear process is no longer a competitive way to do business and in order to concurrently develop products, all information needs to be digital and it needs to be managed in PLM. [...] Companies are no longer just collecting data and vaulting it. They are designing systems to get the right data. What this means on a practical level is that they are designing their PLM systems to enforce standards for data collection that ensure the right meta data is attached and that meaningful reports can be generated from this information.

PLM implementations are facing two critical problems: 1/ how to process a large amount of structured and unstructured information prior to PLM implementation; 2/ how to constantly curate data in the PLM system to bring the right data to people at the right time. So, it made me think about the importance of data curation. Initially, the term data curation was used mostly by librarians and researchers in the context of classification and organization of scientific data for future reuse. The growing amount and complexity of data in the enterprise can raise the value of digital data curation for the implementation and maintenance of enterprise information systems. PLM is a very good example here. Data must be curated before it gets into the PLM system. In addition to that, data produced by the PLM system must be curated for future re-use and decision making.
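
As an illustration only, here is a minimal sketch of what "curation before load" could look like. The field names and rules are hypothetical and not taken from any specific PLM product: each legacy record is normalized and rejected if the metadata the PLM system expects is missing.

# A simple sketch of "curation before load": every legacy record must carry
# the metadata the PLM system expects before it is allowed in.
# Field names and rules are hypothetical, not from any specific PLM product.
REQUIRED_FIELDS = {"part_number", "revision", "title", "owner"}

def curate(record: dict) -> dict:
    """Normalize a legacy record and reject it if required metadata is missing."""
    cleaned = {k.strip().lower(): v.strip() if isinstance(v, str) else v
               for k, v in record.items()}
    missing = REQUIRED_FIELDS - cleaned.keys()
    if missing:
        raise ValueError(f"record rejected, missing metadata: {sorted(missing)}")
    cleaned["part_number"] = cleaned["part_number"].upper()
    return cleaned

legacy = {"Part_Number ": "abc-100", "Revision": "A", "Title": "Bracket", "Owner": "oleg"}
print(curate(legacy))  # normalized record, ready to load into the PLM system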

What is my conclusion? The complexity of PLM solutions is growing. Existing data is messy and requires special curation and aggregation in order to be used for decision and process management. The potential problem of a PLM solution is to be focused on a very narrow scope of new information in design and engineering. Lots of historical records as well as additional information are either lost or disconnected from PLM solutions. In my view, solving these problems can change the quality of PLM implementations and bring additional value to customers. Just my thoughts…

Best, Oleg


PLM security: data and classification complexity

July 30, 2014

[Image: security-plm]

Security. It is hard to overestimate the importance of the topic. Information is one of the biggest assets companies have. Data and information are the lifeblood of every engineering and manufacturing organization. This is a key element of company IP. It combines 3D models, Bills of Materials, manufacturing instructions, supplier quotes, regulatory data and zillions of other pieces of information.

My attention was caught by the Forrester TechRadar™: Data Security, Q2 2014 publication. Navigate to the following link to download the publication. The number of data security points is huge and overwhelming. There are different aspects of security. One of the interesting facts I learned from the report is the growing focus on data security. Data security budgets stood at 17% in 2013, and Forrester predicts an increase of 5% in 2014.

[Image: forrester-data-security-plm]

The report made me think about some specific characteristics of PLM solutions – data and information classification. The specific characteristic of every PLM system is a high level of data complexity, data richness and dependencies. The information about products, materials, BOMs, suppliers, etc. is significantly intertwined. We can speak a lot about PLM system security and data access layers. Simply put, it depends a lot on the specifics of the product, company, business processes and vendor relationships. As company business becomes global, the security model and data access get very complicated. Here is an interesting passage from the report related to data classification:

Data classification tools parse structured and unstructured data, looking for sensitive data that matches predefined patterns or custom policies established by customers. Classifiers generally look for data that can be matched deterministically, such as credit card numbers or social security numbers. Some data classifiers also use fuzzy logic, syntactic analysis, and other techniques to classify less-structured information. Many data classification tools also support user-driven classification, so that users can add, change, or confirm classification based on their knowledge and the context of a given activity. Automated classification works well when you’re trying to classify specific content such as credit card numbers but becomes more challenging for other types of content.
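
To illustrate the deterministic, pattern-based part of what the report describes, here is a minimal sketch in Python. The patterns are simplified examples (not production-grade detectors), and the "export_control" label is my own hypothetical addition to show how domain-specific rules could be added for PLM content.

import re

# Minimal sketch of deterministic, pattern-based classification: scan text for
# content matching predefined patterns. Patterns are simplified examples.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "export_control": re.compile(r"\bITAR\b", re.IGNORECASE),  # hypothetical domain rule
}

def classify(text: str) -> set:
    """Return the set of sensitivity labels found in a piece of content."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(text)}

doc = "Supplier quote, card 4111 1111 1111 1111, subject to ITAR restrictions."
print(classify(doc))  # e.g. {'credit_card', 'export_control'}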

In my view, PLM content is one of the best examples of data that is hard to classify and secure. It takes a long time to specify which pieces of information should be protected and how. A complex role-based security model, sensitive IP, regulation, business relations and many other factors come into play when providing a classification model to secure PLM data.

What is my conclusion? I can see a growing concern about securing data access in complex IT solutions. PLM is one of them. To protect complex content is not simple – in many situations out-of-the-box solutions won’t work. PLM architects and developers should consider how to provide easier ways to classify and secure product information and at the same time be compliant with multiple business and technical requirements. An important topic for the coming years. Just my thoughts…

Best, Oleg


PLM implementations: nuts and bolts of data silos

July 22, 2014

[Image: data-silos-architecture]

Data is an essential part of every PLM implementation. It all starts with data – design, engineering, manufacturing, supply chain, support, etc. Enterprise systems are fragmented and represent individual silos of the enterprise organization. Managing product data located in multiple enterprise data silos is a challenge for every PLM implementation.

To "demolish enterprise data silos" is a popular topic in PLM strategies and deployments. The idea of having one single point of truth is always in mind of PLM developers. Some of my latest notes about that here – PLM One Big Silo.

The MCADCafe article – Developing Better Products is a “Piece of Cake” by Scott Reedy – also speaks about how a PLM implementation can help to aggregate all product development information scattered across multiple places into a single PLM system. The picture from the article presents the problem:

[Image: product-data-silos]

The following passage is the most important, in my view:

Without a PLM system, companies often end up with disconnected silos of information. These silos inhibit the ability to control the entire product record and employees waste unnecessary time searching for the correct revision of the product design. As companies outsource design or manufacturing, it becomes even harder to ensure the right configuration of the product is leveraged by external partners.

Whether your company makes medical devices, industrial equipment, laptops, cell phones or other consumer products – PLM provides a secure, centralized database to manage the entire product record into a “Single Record of the Truth”… With a centralized product record, it is easy to propose and submit changes to the product design, track quality issues and collaborate with your internal teams and supply-chain partners.

The strategy of "single record of truth" is a centerpiece of each PLM implementation. However, here is the thing… if you look on the picture above you can certainly see some key enterprise systems – ERP, CRM, MES, Project and program management, etc. PLM system can contain scattered data about product design, CAD files, Part data, ECO records, Bill of Materials. However, some of the data will still remain in other systems. Some of the data gets duplicated. This is what happens in real world.

It made me think about 3 important data architecture aspects of every PLM implementation: data management, data reporting and data consistency.

The data management layer focuses on which system controls the data and provides the master source of information. Data cannot be mastered in multiple places. The implementation needs to organize a logical split of information as well as the ability to control the "data truth". This is the most fundamental part of the data architecture.
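
One way to make that split explicit is a simple master-source map, sketched below. The system names and data classes are examples only; every implementation will differ. Each class of data has exactly one mastering system, and writes from anywhere else are rejected.

# A sketch of declaring, up front, which system masters which class of data.
# System names and data classes are examples only; every implementation differs.
MASTER_SOURCE = {
    "item_master": "PLM",
    "cad_files": "PLM",
    "engineering_bom": "PLM",
    "manufacturing_bom": "ERP",
    "supplier_quotes": "ERP",
    "customer_accounts": "CRM",
}

def is_write_allowed(system: str, data_class: str) -> bool:
    """Only the declared master system may create or change a data class."""
    return MASTER_SOURCE.get(data_class) == system

print(is_write_allowed("ERP", "engineering_bom"))  # False: PLM masters the EBOM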

Data reporting focuses on how PLM can extract data from multiple sources and present it in a seamless way to the end user. Imagine you need to provide an "open ECO" report. The information can reside in PLM, ERP and maybe some other sources. Getting the right data at the right moment in time can be another problem to resolve.
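
A minimal sketch of such an "open ECO" report could look like the Python below. The fetch functions are stubs standing in for PLM and ERP queries; the report is assembled by joining the two partial views on the ECO number.

# Sketch of the "open ECO" report: pull partial records from each system and
# merge them by ECO number. The fetch functions are stubs; in practice they
# would call the PLM and ERP APIs or databases.
def fetch_open_ecos_from_plm():
    return [{"eco": "ECO-1001", "state": "In Review", "affected_items": 4}]

def fetch_eco_costs_from_erp():
    return [{"eco": "ECO-1001", "estimated_cost": 1250.0}]

def open_eco_report():
    """Join the PLM view and the ERP view of the same ECOs into one report."""
    report = {row["eco"]: dict(row) for row in fetch_open_ecos_from_plm()}
    for row in fetch_eco_costs_from_erp():
        report.setdefault(row["eco"], {"eco": row["eco"]}).update(row)
    return list(report.values())

print(open_eco_report())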

Last, but not least – data consistency. When data is located in multiple places, the system will rely on so-called "eventual consistency" of information. A system of events and related transactions keeps the data in sync. This is not a trivial process, but many systems operate this way. What is important is to have a coordinated data flow between the systems supporting eventual consistency, and data management and reporting tools.
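
Here is a minimal sketch of that event-based synchronization, with in-memory stand-ins for real integration middleware: the mastering system publishes change events, and the subscribing system catches up when it processes the queue.

# Sketch of event-based synchronization behind "eventual consistency": the
# mastering system publishes change events, the subscribing system applies them
# later. The queue and cache are in-memory stand-ins for real middleware.
from collections import deque

event_queue = deque()
erp_item_cache = {}  # ERP's eventually consistent copy of PLM item data

def publish(event):
    """Master system (PLM) records a change as an event."""
    event_queue.append(event)

def sync_once():
    """Subscriber (ERP) drains the queue and catches up with the master."""
    while event_queue:
        event = event_queue.popleft()
        erp_item_cache[event["item"]] = event["revision"]

publish({"item": "ABC-100", "revision": "B"})
print(erp_item_cache)   # {} - not yet consistent
sync_once()
print(erp_item_cache)   # {'ABC-100': 'B'} - consistent after sync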

What is my conclusion? To demolish silos and manage a single point of truth is a very good and important strategic message. However, when it comes to the nuts and bolts of implementation, an appropriate data architecture must be in place to ensure you will have the right data at the right time. Many PLM implementations underestimate the complexity of data architecture. It leaves them with marketing slogans, burned budgets and wrong data. Just my thoughts…

Best, Oleg

Picture credit: MCADCafe article.


Will PLM Vendors Jump into Microsoft Cloud Window in Europe?

April 11, 2014

[Image: european-plm-cloud]

Cloud is raising lots of controversy in Europe. While manufacturing companies in the U.S. are generally more open towards new tech, their European rivals are much more conservative. Many of my industry colleagues in Germany, France, Switzerland and other EU countries can probably confirm that. Europe is coming to cloud systems, but much more slowly. I’ve been posting about cloud implications and constraints in Europe. Catch up on my thoughts here – Will Europe adopt cloud PLM? and here – PLM cloud and European data protection reforms. The main cloud concerns raised by European customers are data, privacy and country-specific regulation. With companies located in different places in the EU, it can be a challenge.

Earlier today, I heard some good news about cloud proliferation in Europe coming from Microsoft. The TechCrunch article – Microsoft’s Enterprise Cloud Services Get A Privacy Thumbs Up From Europe’s Data Protection Authorities – speaks about the fact that Microsoft’s enterprise cloud services meet European data privacy standards. Here is a passage that sheds some light on the details and what it means:

But today comes a piece of good news for Redmond: the data protection authorities (DPAs) of all 28 European member states have decided that Microsoft’s enterprise cloud services meet its standards for privacy. This makes Microsoft Azure, Office 365, Microsoft Dynamics CRM and Windows Intune the first services to get such approval. The privacy decision was made by the “Article 29 Data Protection Working Party,” which notes that this will mean that Microsoft will not have to seek approval of individual DPAs on enterprise cloud contracts. In its letter to Microsoft (embedded below), chair Isabelle Falque-Pierrotin writes, “The MS Agreement, as it will be modified by Microsoft, will be in line with Standard Contractual Clause 2010/87/EU… In practice, this will reduce the number of national authorizations required to allow the international transfer of data (depending on the national legislation).”

The majority of PDM / PLM providers are friendly with the Microsoft tech stack. Some of them rely completely on MS SQL Server and other Microsoft technologies. Most of them support SharePoint. Now, these PLM vendors have an additional incentive to stay with Microsoft technologies for the cloud. It can also be good news for manufacturing companies that have already deployed PDM/PLM solutions on top of Microsoft technologies and developed custom solutions.

What is my conclusion? The technological landscape these days is very dynamic. The time when one platform worked for everybody is over. In light of technological disruption and future challenges, tech giants will be using different strategies in order to stay relevant for customers. Will European cloud regulation keep PDM/PLM players with MS Azure and other Microsoft technologies rather than alternative cloud technology stacks? How long will it take other players to reach the same level of compliance? These are good questions to ask vendors and service providers. Just my thoughts…

Best, Oleg

