Will public clouds help enterprises to crunch engineering data?

August 6, 2014

[Image: Google data center]

The scale and complexity of engineering data are growing tremendously these days. If you go back 20 years, the challenge for PDM / PLM companies was how to manage revisions of CAD files. Now much more data is coming into the engineering department: data about simulations and analyses, information about the supply chain, online catalog parts, and lots of other things. Product requirements have been transformed from a simple Word file into complex data sets with information about customers and their needs. Companies are starting to capture information about how customers use their products. Sensors and other monitoring systems are everywhere. The ability to monitor products in real life creates additional opportunities – how to fix problems and how to optimize design and manufacturing.

Here is the problem… Despite the strong trend towards cheaper computing resources, applying brute computing force still doesn't come for free. Services like Amazon S3 are relatively cheap. However, if you want to crunch, analyze, and process large data sets, you will need to pay. Another aspect is performance. People expect software to work at the speed of thought. Imagine you want to produce design alternatives for your future product. In many situations, waiting a few hours won't be acceptable. It will distract users, and in the end they won't use such a system at all.
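To make the cost point concrete, here is a back-of-the-envelope sketch in Python. All prices and sizes are hypothetical placeholders, not actual cloud rates; the point is only that cheap storage and paid-for crunching are two different line items:

```python
# Back-of-the-envelope cloud cost model. All numbers are hypothetical
# placeholders, not actual cloud prices.
data_tb = 10                    # size of the engineering data set, TB
storage_per_gb_month = 0.03     # $/GB-month, assumed object-storage rate
compute_per_node_hour = 0.50    # $/hour, assumed price of one worker node
nodes, hours = 100, 5           # cluster size and length of one analysis run

storage_cost = data_tb * 1000 * storage_per_gb_month   # monthly storage bill
crunch_cost = nodes * hours * compute_per_node_hour    # bill for one run

print(f"storing {data_tb} TB: ${storage_cost:,.0f}/month")
print(f"one analysis run: ${crunch_cost:,.0f}")
```

Every analysis run adds to the bill, so an interactive workload that triggers many runs quickly dominates the cost of simply storing the data.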

The Manufacturing Leadership article Google's Big Data IoT Play For Manufacturing speaks exactly about that. What if the power of web giants like Google could be used to process engineering and manufacturing data? I found the explanation provided by Tom Howe, Google's senior enterprise consultant for manufacturing, quite interesting. Here is the passage explaining Google's approach.

Google’s approach, said Howe, is to focus on three key enabling platforms for the future: 1/ Cloud networks that are global, scalable and pervasive; 2/ Analytics and collection tools that allow companies to get answers to big data questions in 10 minutes, not 10 days; 3/ And a team of experts that understands what questions to ask and how to extract meaningful results from a deluge of data. At Google, he explained, there are analytics teams assigned to every functional area of the company. “There’s no such thing as a gut decision at Google,” said Howe.

It sounds to me like a viable approach. However, it made me think about what would make Google and similar holders of computing power sell it to enterprise companies. Google's biggest value is not in selling computing resources. Google's business is selling ads… based on data. My hunch is that there are two potential reasons for Google to support manufacturing data initiatives – the potential to develop a Google platform for manufacturing apps, and the value of the data itself. The first one is straightforward – Google wants more companies in its ecosystem. I find the second one more interesting. What if manufacturing companies and Google find a way to extract insight from engineering data that is useful for their businesses? Or even more – that improves their core business?

What is my conclusion? I'm sure that in the future data will become the new oil. The value of getting access to the data can be huge. The challenge of getting that access is significant. Companies won't allow Google, or PLM companies, to simply use their data. Companies are very concerned about IP protection and security. Balancing access to data, the value proposition, and the insight gleaned from that data can be an interesting play. For all parties involved… Just my thoughts…

Best, Oleg

Photo courtesy of Google Inc.


Will GE give birth to a new PLM company?

July 9, 2014

[Image: GE data management initiative]

Navigate back through the histories of CAD and PLM companies. You can find significant involvement of large aerospace, automotive, and industrial companies. Here are a few examples – Dassault Systemes with Dassault Aviation, SDRC with US Steel, UGS with McDonnell Douglas. In addition, the involvement of large corporations as strategic customers has made a significant impact on the development of many CAD/PLM systems over the past two decades. Do you think we can see something similar in the future?

The Inc. article GE's Grand Plan: Build the Next Generation of Data Startups made me think about potential strategic involvement of large industrial companies in the PLM software business. The following passage can give you an idea of how the startups will be organized.

A team from GE Software and GE Ventures has launched an incubator program in partnership with venture capital firm Frost Data Capital to build 30 in-house startups during the next three years that will advance the "Industrial Internet," a term GE coined. The companies will be housed in Frost’s incubator facility in Southern California.

By nurturing startups that build analytical software for machines from jet engines to wind turbines, the program, called Frost I3, aims to dramatically improve the performance of industrial products in sectors from aviation to healthcare to oil and gas. Unlike most incubator programs, GE and Frost Data are creating the companies from scratch, providing funding and access to GE’s network of 5,000 research assistants and 8,000 software professionals. The program has already launched five startups in the past 60 days.

This story connects very well to GE's vision and strategy for the so-called Industrial Internet. The following picture provides some explanation of GE's vision of the industrial cloud.

[Image: Industrial Internet applications]

What is my conclusion? Industrial companies are looking for new solutions and are probably ready to invest in ideas and innovative development. Money is not a problem for these companies, but time is very important. Startups are a good way to accelerate development and come up with fresh ideas for new PLM systems. A strategic partnership with a large company can provide the resources and data to make it happen. Just my thoughts…

Best, Oleg

Picture credit: GE report.


PLM: Manufacturing Big Data Ngram Dream?

February 3, 2014

[Image: PLM and data]

My attention was caught this weekend by The Daily Beast article with the funny title – Why Big Data Doesn't Live up to the Hype. I read the article, and during my long travel over the weekend I skimmed the book Uncharted: Big Data as a Lens on Human Culture by Erez Aiden and Jean-Baptiste Michel, mentioned in the article. The authors were instrumental in creating the Google Ngram Viewer.

The Google Ngram Viewer is a phrase-usage graphing tool developed by Jon Orwant and Will Brockman of Google, and charts the yearly count of selected n-grams (letter combinations)[n] or words and phrases,[1][2] as found in over 5.2 million books digitized by Google Inc (up to 2008).[3][4] The words or phrases (or ngrams) are matched by case-sensitive spelling, comparing exact uppercase letters,[2] and plotted on the graph if found in 40 or more books during each year (of the requested year-range).[5] The Ngram tool was released in mid-December 2010.[1][3]

The word-search database was created by Google Labs, based originally on 5.2 million books, published between 1500 and 2008, containing 500 billion words[6] in American English, British English, French, German, Spanish, Russian, Hebrew, and Chinese.[1] Italian words are counted by their use in other languages. A user of the Ngram tool has the option to select among the source languages for the word-search operations.[7]

Researchers have analysed the Google Ngram database of books written in American or British English discovering interesting results. Amongst them, they found correlations between the emotional output and significant events in the 20th century such as the World War II.[8]

If you have never tried the Ngram Viewer, you should. Navigate here and try it out. You can find some interesting trends. Here is my funny example – "data" is eclipsing "love". Does it mean something? I'm not sure, but it is funny…

[Image: Google Ngram Viewer chart comparing "data" and "love"]
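Under the hood, the idea is simple: count how often each n-gram appears in the books published each year, then plot the yearly relative frequency. Here is a minimal sketch of that counting step, using a toy two-year corpus (the tokenized sentences are made up for illustration):

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count n-grams (sequences of n consecutive words) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

# Toy corpus: publication year -> word tokens from all books of that year.
corpus = {
    2007: "data is the new love of engineers".split(),
    2008: "love data and data will love you back".split(),
}

# Yearly share of the unigram "data", normalized by the yearly token count,
# mirroring how the Ngram Viewer plots relative frequency per year.
for year, tokens in sorted(corpus.items()):
    share = ngram_counts(tokens, 1)[("data",)] / len(tokens)
    print(year, f"{share:.3f}")
```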

Google certainly has the power to deal with such large projects. Everybody is trying to collect data these days, and you can see some very interesting examples. The ambitions of CAD and PLM companies don't go that far… yet. Here is an idea for somebody with a budget and free time – collect product lifecycle information related to the manufacturing industry, suppliers, material trends, and consumer behavior. More and more data is becoming publicly available on the web. Collecting and classifying this information can help us explore future demands and opportunities.

What is my conclusion? In data we trust. Data is a very powerful argument, and we use it frequently. With the globalization of the manufacturing industry and the ambition to discover future trends and opportunities in manufacturing and supply chains, I can see the collection of publicly available manufacturing data as a key to the unknown unknowns. Just a crazy idea and my thoughts… Happy Monday!

Best, Oleg


Why can't PLM adopt Big Data now?

January 30, 2014

[Image: PLM and big data reuse]

The buzz about Big Data is everywhere these days. From 2011 until today, we can clearly see skyrocketing interest in Big Data, as well as in what is behind this buzzword. Companies around the world are trying to figure out what Big Data means for them and how they can leverage it now. Engineering and manufacturing software vendors are doing the same. Last year, I speculated about the opportunity for PLM vendors to dig into Big Data. So far, I've heard lots of talk, but I haven't seen many practical results showing how Big Data can improve PLM products or influence product development processes.

I stumbled on the AllThingsD article – Big Data and the Soles of Your Shoes. Take 10 minutes and read it. It speaks very nicely about modern e-commerce-driven interaction between customers and manufacturers. The overall flow of information is interesting – product configuration, the ordering system, material supply, financial transactions, transportation, and many other aspects. You can only imagine how many pieces of data have to move behind this scenario – product information, bills of materials, manufacturing orders, shipment tracking, manufacturing processes, delivery shipments. I especially liked the following passage, which comes as the conclusion of the article:

Big Data always comes across as “Big” first and “Data” second. What I urge you to do is think about the “small data.” This type of data is what happens every moment of every day. The humble pair of shoes represents small data. It’s a pair of shoes. It doesn’t pretend to be a space shuttle. But that pair of shoes has generated a massive quantity of data in its journey to you.

Small data represents the constant dripping faucet of information you generate every day. From ordering food at a restaurant to visiting a Web page to buying a pair of shoes, this faucet never stops. The amount of small data out there trumps the amount of Big Data.

The article made me think about an interesting term coined by social scientists – ambient awareness. It refers to the information surrounding us online – social networks, e-commerce, and other websites producing so-called activity streams. These streams create business-specific contextual information. The problem is that despite the wide adoption of social networks in the consumer space, organizations are still at a very early stage of understanding how to use and leverage this information and how it might be relevant.

The challenge for most manufacturing organizations is how to use the right snippets of Big Data. Let's take the product design and cost assessment process. In my view, the opportunity is to see how product configurations and various design options impact product cost. The pieces of data needed for this analysis are in the flow between vendors, suppliers, shipments, material costs, etc. Now think about an engineer living in the ambient awareness of information that drives him toward the right design-for-cost decisions. The main question that comes to my mind is the 'relevance' of every bit of big data coming from outside. What is relevant to cost? What impact does every bit of information have on the overall cost? How do we calculate it, and how do we put it in front of the engineer at the right time?
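To make this concrete, here is a minimal sketch of the idea: a product cost rollup over a bill of materials that is recomputed whenever a small external data snippet – say, a material price update from a supplier feed – arrives. All parts, materials, and prices are made up for illustration:

```python
# Hypothetical illustration: rolling up product cost from a BOM while
# folding in "small data" snippets such as material price updates.

bom = {  # part -> (quantity, material, unit weight in kg)
    "bracket": (4, "aluminum", 0.2),
    "housing": (1, "steel", 1.5),
}

material_price = {"aluminum": 2.10, "steel": 0.80}  # $/kg, assumed baseline

def product_cost(bom, prices):
    """Sum material cost over all BOM lines."""
    return sum(qty * weight * prices[mat] for qty, mat, weight in bom.values())

print("baseline cost:", round(product_cost(bom, material_price), 2))

# A "small data" snippet arrives from a supplier feed: aluminum price moved.
material_price["aluminum"] = 2.45
print("updated cost:", round(product_cost(bom, material_price), 2))
```

In a real design-for-cost scenario, the hard part is exactly what the questions above point to: deciding which incoming snippets are relevant enough to trigger the recalculation and to be shown to the engineer.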

What is my conclusion? Big data is a big opportunity for many companies. However, "big data" is too big and too abstract for companies to understand and use. Companies need to develop a way to use small bits of data coming from different sources to drive decision processes and choose between options. This is not simple and will not happen overnight for most manufacturing companies using PLM systems. PLM vendors need to come up with an approach for injecting small chunks of Big Data into the product development process. A task for PLM strategists and product managers. Just my thoughts…

Best, Oleg


Why do PLM vendors need to hire data scientists?

December 4, 2013

[Image: PLM data knowledge]

The importance of data is growing tremendously. Web, social networks, and mobile started this trend just a few years ago. However, these days companies are starting to see that without a deep understanding of the data about their activities, the future of their business is uncertain. For manufacturing companies, this speaks to fundamental business processes and decisions related to product portfolios, manufacturing, and supply chains.

It sounds like PLM vendors are potentially the best fit for this job. PLM portfolios are getting broader and cover lots of applications, modules, and experience related to the optimization of business activities. In one of my earlier blogs this month, I talked about the new role of Chief Data Officer in companies. Navigate here to read it and form your own opinion. However, making this job successful is mission impossible without a deep understanding of company data on both sides – the company and the vendors / implementers.

A few days ago, I was reading the InformationWeek article – Data Scientist: The Sexiest Job No One Has. The idea of the data scientist job is very interesting if you apply it beyond just storing data on file servers. Think about an advanced data analyst job focused on how a company can best leverage its data assets. The amount of data companies generate is doubling every few months. Applying the right technology and combining it with human skills can be an interesting opportunity. Pay attention to the following passage:

The role of data scientist has changed dramatically. Data used to reside on the fringes of the operation. It was usually important but seldom vital — a dreary task reserved for the geekiest of the geeks. It supported every function but never seemed to lead them. Even the executives who respected it never quite absorbed it.

But not anymore. Today the role set aside for collating, sorting, and annotating data — a secondary responsibility in most environments — has moved to the forefront. In industries ranging from marketing to financial services to telecommunications, the data scientists of today don’t just crunch numbers. They view the universe as one large data set, and they decipher relationships in that mass of information. The analytics they develop are then used to guide decisions, predict outcomes, and develop a quantitative ROI.

So, who can become a data scientist in a manufacturing company? Actually, this major is still not defined in American colleges. Anybody with a good skill set in math, computer science, and manufacturing domain knowledge can consider this work. I can clearly see it as an opportunity for retiring CAD and PLM IT managers who have spent their careers installing on-premise PLM software, once that software moves to the cloud.

What is my conclusion? In the past, the installation and configuration skill set was one of the most important in the PDM/PLM business. The time vendors spent on system implementation was very significant. The PLM cloud switch is going to create a new trend – understanding company data and business processes will become the #1 skill. So, PLM vendors had better start thinking about a new job description – people capable of understanding how to crunch manufacturing data to create value for customers. Just my thoughts…

Best, Oleg


Will PLM Data Size Reach Yottabytes?

October 14, 2013

[Image: Big data sizing]

Everybody speaks about big data today. It is probably one of the most overhyped and confusing terms. It goes everywhere and means different things depending on who you are talking to. It can be data gathered from mobile devices, traffic data, or social media and social networking activity data. The expectation is that the size of big data will go through the roof. Read the Forbes article Extreme Big Data: Beyond Zettabytes And Yottabytes. The main point of the article – we produce data faster than we can invent names for it. Here is the scale we are more or less familiar with – TB (terabyte), PB (petabyte), EB (exabyte), ZB (zettabyte), YB (yottabyte)…
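Each step on that scale is a factor of 1,000 in the decimal (SI) system. A quick sketch to keep the magnitudes straight, including how many 1 TB drives a single yottabyte would fill:

```python
# Decimal (SI) byte-scale prefixes: each step is a factor of 1000.
prefixes = ["KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]

for i, p in enumerate(prefixes, start=1):
    print(f"1 {p} = 10^{3 * i} bytes")

# How many 1 TB USB drives would one yottabyte fill?
yb, tb = 10**24, 10**12
print(f"1 YB = {yb // tb:,} one-terabyte drives")  # a trillion drives
```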

However, the article also brings up an interesting lingo of data sizes. Here are some examples: Hellabytes (a hell of a lot of bytes), Ninabytes, Tenabytes, etc. Wikipedia provides a different option for extending the prefix system – zetta, yotta, xona, weka, vunda, uda, treda, sorta, rinta, quexa, pepta, ocha, nena, minga, luma, … Another interesting comparison came from an itknowledgeexchange article. Navigate here to read more. Here is my favorite passage. The last comparison, to Facebook, is the most impressive.

Beyond what-do-we-call-it, we also have the obligatory how-to-put-it-in-terms-we-puny-humans-can-understand discussion, aka the Flurry of Analogies that came up when IBM announced a 120-petabyte hard drive a year ago. Depending on where you read about it, that drive was: 2.4 million Blu-ray disks; 24 million HD movies; 24 billion MP3s; 6,000 Libraries of Congress (a standard unit of data measure); Almost as much data as Google processes every week; Or, four Facebooks.

The Forbes article made me think about the sizes of PLM data, engineering data, and design data. It is not unusual to speak about CAD data and/or design data as something very big. Talk to any engineering IT manager, and he will tell you about oversized CAD files in libraries. Large enterprise companies (especially in regulated industries) are concerned about how to store data for 40-50 years: what format to use, how much space it will take, and how to keep it accessible. At the same time, I've seen complete libraries of CAD components, together with all the design data of a mid-size company, backed up on a simple 1TB USB drive. I believe software like simulation tools can produce lots of data, but today this data is not controlled and is simply lost on desktops. One of the most popular requirements engineers had for PDM was the ability to delete old revisions. PLM repositories for items and bills of materials can reach a certain size, but I can hardly see how they compete with Google and Facebook media libraries. At the same time, engineering is just about to explore the richness of online data and the Internet of Things. So, the size of engineering repositories will only grow.

What is my conclusion? Compared to Google, Twitter, and Facebook scale, the majority of engineering repositories today are modestly sized. After all, even very large CAD files can hardly compete with the photo and video streams uploaded by a billion people on social networks. Likewise, the tracking data captured from mobile devices dwarfs every possible Engineering Change Order (ECO) record. However, engineering data has the potential to become big. Increased interest in simulation and analysis, as well as design options, can bump up the size of engineering data significantly. Another potential source of information is the increased ability to capture customer interests and requirements, as well as product behavior, online. Just my thoughts. So, how fast will PLM grow to yottabytes? What is your take?

Best, Oleg


PLM, BigData and the Importance of Information Lifecycle

September 25, 2013

BigData is trending these days. It goes everywhere. Marketing people are in love with the name. It carries such a good taste of "big something". It might be $$ or the number of problems it is supposed to solve. It can potentially be tied to a big value proposition. Net-net, the number of people and articles around you referring to the opportunity of big data is probably skyrocketing. If you want to read more about big data, navigate to the following Wikipedia article – it is a good starting point.

CIMdata, the well-known PLM advisory firm, recently published an interesting paper about PLM and BigData. Navigate to this link, download the research paper (it requires registration), and have a read. I'd say this is the best reference on the intersection of the PLM and Big Data worlds. Here is what the document is about:

This paper focuses on the intersection of PLM and what has come to be known as “Big Data.” The increasing volume and growth rate of data applicable to PLM is requiring companies to seek new methods to turn that data into actionable intelligence that can enhance business performance. The paper describes methods, including search-based techniques, that show promise to help address this problem.

Search and analytics are one way to dig into the big data problem. Last year, I wrote about why PLM vendors need to dig into Big Data. Here is the link to my post – Will PLM vendors dig into Big Data? I believe BigData can provide huge value to an organization. Unlocking this value is extremely important. However, looking at the BigData hype these days, I get a feeling of wrong priorities and of gaps between the vision of BigData and the reality of PLM implementations.

I've been reading an ITBusinessEdge article – Three Reasons Why Life Cycle Management Matters More with Big Data. The main thing I learned from this article – even though big data is going to change a lot, it won't change some fundamental laws of data management. Data lifecycle is one of them. Here is my favorite passage:

"With Big Data, which can be unpredictable and come in many different sizes and formats, the process isn't so easy," writes Mary Shacklett, president of technology research and market development firm Transworld Data. "Yet if we don't start thinking about how we are going to manage this incoming mass of unstructured and semi-structured data in our data centers.

This means a lot in the context of PLM systems, and this is where I can see the biggest gap between BigData and PLM. It is easy to collect data from multiple sources. That's what everybody talks about. However, big data needs to be managed too, together with the other information managed by PLM. Big data goes through a lifecycle of processing, classification, indexing, and annotation. Connecting the pieces, and relating big data to the information in the PLM system, is a significant problem to think about. Engineers and other people in the company probably won't be interested in accessing the data itself, but in analytics, insights, and recommendations.

What is my conclusion? The value behind big data is huge. It can improve decision making, quality of service, supplier bids, and lots of other things. However, it puts huge pressure on IT and the organization in terms of resources, data organization, and data infrastructure. PLM systems won't be able to start with big data overnight. Whoever tells you "now we support big data" is probably too marketing-oriented. PLM will have to focus on the data lifecycle to bring realistic big data implementation plans to organizations. Just my thoughts…

Best, Oleg


What will come to PLM after cloud?

August 8, 2013

Cloud is going mainstream these days. It happens everywhere. It is hard to find a company or business today that is not interested in what cloud computing and mobile technology can do. PLM vendors have come to the point where the accent of the cloud question has moved from "why" to "how". The same is happening today with customers. I posted about it a few days ago in my blog – Dassault IFWE and PLM Cloud Switch.

The PLM cloud switch made me think about the beginning of the cloud trend in PLM. It was a bit funny to read again my first posts about cloud and PLM, going back to 2008 and 2009 – Where is PLM on industry cloud map? and PLM Architecture – Get off my cloud. However, the most interesting one I found was – How PLM applications will change when they move to the cloud? Here is a Google Trends snapshot I captured for "cloud computing" back in 2009.

It was interesting to see how the same trend looks today. I also want to speculate about "big data" as a potential next big thing that will change PLM as we know it today. Take a look at the following screenshot I made yesterday.

What is my conclusion? Cloud is not a differentiator for PLM anymore. Everybody is doing cloud, or at least thinks they can do it. The devil is now in the details of how to do it right. This is where the next cloud PLM battle will be fought. However, now is a good time to look into the crystal ball and think about what could become the next significant differentiator in the development of PLM solutions. Cloud solves many problems, such as the speed of IT deployment and implementation efficiency. It brings many new technological options to PLM development that were out of reach before. However, PLM is still a complicated journey. It is a good time to think about how to make it different. Just my thoughts…

Best, Oleg


Why should PLM care about the Web Data Commons project?

June 10, 2013

Big data is one of the most hyped buzzwords of the last two years. With all the hype around, it is very hard to find a good definition when it comes to the simple question of what "big data" means for your specific industry and applications. Here is how Wikipedia describes big data:

Big data[1][2] is a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. The challenges include capture, curation, storage, search, sharing, transfer, analysis,[4] and visualization.

The web is an interesting place to dig for new sources of information. These days, the web goes far beyond just web pages and database-driven websites. It contains lots of structured information that businesses can use. Manufacturing companies are among them. Information about products, customers, interests, priorities – this is a new goldmine for web researchers.

I've been skimming the semanticweb.com website. A publication about the Web Data Commons project caught my attention. Web Data Commons is about structured data on the internet. Here is an interesting snippet about what it does:

More and more websites embed structured data describing for instance products, people, organizations, places, events, resumes, and cooking recipes into their HTML pages using markup formats such as RDFa, Microdata and Microformats. The Web Data Commons project extracts all Microformat, Microdata and RDFa data from the Common Crawl web corpus, the largest and most up-to-date web corpus that is currently available to the public, and provide the extracted data for download in the form of RDF-quads and also in the form of CSV-tables for common entity types (e.g. product, organization, location, …). In addition, we calculate and publish statistics about the deployment of the different formats as well as the vocabularies that are used together with each format.

Dig a bit deeper to learn about the statistics of structured data. You can see some of the information here – Additional Statistics and Analysis of the Web Data Commons August 2012 Corpus. According to these statistics, product-related information is the most popular in the researched data corpus. Look at the following passage:

Products in RDFa. We identified three RDFa classes, og:”product”, dv:Product, and gr:Offering, that are used each on at least 500 different websites for describing products. og:”product” is the most popular class, being used by more than 19,000 websites.

In addition to that, product data was found on websites using Microdata and Microformats.

Reviewing all Microdata classes that are used in more than 100 different websites, we could identify four classes, schema:Product, schema:Offer, datavoc:Product, and datavoc:Offer, that are frequently used to describe products or offers. The following table shows the co-occurrences of these classes with other product-related classes on the same website. For instance, 4,308 websites provide product data together with aggregate ratings for these products. In addition to the class co-occurrences, we analyzed which properties are frequently used to describe schema:Products. The table below shows that schema:Product/name, schema:Product/description, schema:Product/image, and schema:Product/offers are the most frequently used properties.
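To give a feeling for what this markup looks like in practice, here is a minimal sketch that extracts schema.org Product properties from an HTML snippet – roughly the kind of extraction Web Data Commons runs at web scale. The HTML is made up for illustration, and the sketch uses the third-party BeautifulSoup library:

```python
# Minimal Microdata extraction sketch. The HTML snippet is hypothetical;
# Web Data Commons does this over an entire web crawl.
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

html = """
<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Ball bearing 6204</span>
  <span itemprop="description">Deep groove, 20mm bore</span>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <span itemprop="price">4.50</span>
  </div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
for scope in soup.find_all(itemscope=True):          # each Microdata item
    props = {p["itemprop"]: p.get_text(strip=True)   # its direct properties
             for p in scope.find_all(itemprop=True, recursive=False)}
    print(scope.get("itemtype"), props)
```

Aggregated over millions of pages, exactly these name / description / offers properties are what the statistics quoted above count.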

What is my conclusion? Manufacturing companies are looking at how to improve decision processes related to products. Potential leverage can come from the analysis of web data about products and services. PLM vendors can think about non-traditional approaches to getting information about products and customers. Important. Just my thoughts…

Best, Oleg


PLM and Unknown Unknowns Use Cases

April 30, 2013

The recent tragic events in Boston raised again the question of the critical role of real-time information integration. You may think it is not something related to engineering and manufacturing software. Until recently, I saw it exactly the same way. However, with the latest trends in the development of data and information systems, I can see how big data and data analytics can be used by enterprise business software too. Going back to the events of 9/11, Donald Rumsfeld, US Secretary of Defense, stated at a briefing: 'There are known knowns. There are things we know that we know. There are known unknowns. That is to say, there are things that we now know we don't know. But there are also unknown unknowns. There are things we do not know we don't know.' Originally, the "unknown unknowns" statement was dismissed as nonsense. However, if we think twice, the concept of unknown unknowns is relevant to many companies in manufacturing.

One of the key roles of PLM these days is to help companies innovate. There are so many definitions of "innovation". You can think about innovative organizations and innovative processes. Here is the thing: most companies these days are afraid of being "surprised" by innovation coming from unknown innovators, competitors, and other factors – new economic conditions, financial impacts, new product segments, cross-domain innovation, etc.

In my view, the key element in preventing the impact of "unknown unknowns" is better analysis of the data inside and outside your company. Companies own a lot of business data, stored in databases and mainframes behind the firewall. These are the "known knowns": in this area, business decisions are generally made based on historical data. This is where PLM/PDM operates today. There is also lots of data, mostly unstructured, that resides in emails, blogs, the internet, websites, etc. This is the place of the "known unknowns", which companies dealing with big data are trying to address today. The biggest danger comes from the unknown unknowns. We need a solution for that.

What is my conclusion? There are many things that can influence manufacturing organizations. We live in a very dynamic world. Market conditions change, new competitors enter the market in very disruptive ways, financial markets exert influence, employees turn over. These are the "unknown unknowns" of PLM – and the target for future innovative solutions that software vendors can bring to market. Just my thoughts…

Best, Oleg

