How can PLM join the semantic enterprise graph?

April 10, 2014

plm-and-enterprise-graph

Connectivity is key these days, and graphs are playing a key role in its development. It doesn’t matter what we connect – people, information, devices. Graphs are fascinating things. Actually, I have come to the conclusion that we live in an era of rapid graph development. More and more things around us are getting “connected”.

It is almost two years since I first posted about Why PLM need to learn about Google Knowledge Graph. The Knowledge Graph story is gaining momentum every day. GKG is growing. It represents “things” in a knowledge base covering many topics – music, books, media, films, locations, businesses and more. Part of the Google Knowledge Graph is fueled by Freebase – a large collaborative database of structured data. Freebase was originally developed by Metaweb, which Google acquired in 2010. It is still not completely clear how the Google Knowledge Graph is built. You can read some investigations here. Nevertheless, it is hard to overstate the power of the Knowledge Graph.

Another well known and publicly developed graph is the Facebook social graph. Last year I posted – Why PLM should pay attention to Facebook Graph Search. The Facebook graph represents structured information captured from Facebook accounts. It allows you to run quite interesting and powerful queries (also known as Facebook Graph Search).

In my opinion, we are just at the beginning of graph discovery and expanded information connectivity. It won’t stop at social networks and the public web. I can see graphs proliferating into the enterprise, creating lots of valuable opportunities related to information connectivity and efficient query processing. The Semanticweb.com article Let Enterprise Graph Tell You A Story describes the enterprise as a set of Facebook-like pages. It explains how we can build a graph story of enterprise communication, collaboration, people’s activities, related data and other things. Here is my favorite passage from the article:

Wallace relies on Hadoop and graph database technology, with network data represented as a property graph. “Property graphs are utterly, totally extensible and flexible,” she said, and “the system gets smarter as you add more data into it.” The enterprise social network data generates triple sets (that John Smith created X Document that was downloaded by Jane Doe) that get pocketed into the graph, for example, as is metadata extracted from relational databases. A general set of algorithms can find a user within the graph and calculate his or her engagement level – activities, reactions, eminence and so on. “We now have a Big Data service with a set of APIs so people can query the enterprise graph,” she said, and then run analytics on those results that can drive applications.

I found this aspect of graph development very inspiring. Collecting enterprise information into a graph database and running a diverse set of queries can be an interesting thing. If I think about PLM as a technological and business approach, the value of a graph connecting different parts of information about products and activities located in different enterprise systems can be huge. These days, PLM vendors and manufacturing companies use diverse strategies to manage this information – centralized databases, master data management, enterprise search and others. The graph data approach can be an interesting option, one that would make the enterprise look like the web we all know today.
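
To make the “triple sets” idea from the passage above concrete, here is a toy sketch in Python. All people, documents and events are invented for illustration, and a real deployment would sit on a graph database rather than an in-memory list – this only shows the shape of the data and the queries.

```python
# Toy sketch: enterprise activity as (subject, predicate, object) triples,
# queried by pattern matching. Data is invented for illustration.
from collections import Counter

triples = [
    ("John Smith", "created",    "Doc-X"),
    ("Jane Doe",   "downloaded", "Doc-X"),
    ("Jane Doe",   "commented",  "Doc-Y"),
    ("John Smith", "created",    "Doc-Y"),
]

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the pattern; None acts as a wildcard."""
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]

# Who touched Doc-X in any way?
print(query(obj="Doc-X"))

# A crude "engagement level": count activities per person.
engagement = Counter(s for s, _, _ in triples)
print(engagement.most_common())
```

The same pattern-matching query, pointed at a real property graph, is essentially what a graph query language (Cypher, SPARQL, Gremlin) does at scale.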

What is my conclusion? The growing amount of information in enterprise organizations will change existing information approaches. It doesn’t mean all existing technologies will change overnight. However, new complementary techniques will be developed to discover and use information in new ways. Graphs are clearly going to play a big role. PLM strategists, developers and managers should take note. Just my thoughts…

Best, Oleg

picture credit semanticweb.com


Who will create Google Sheets BOM (Bill Of Materials) Add-On?

March 11, 2014

bom-google-add-on

For the last few years, I’ve been chatting about the opportunity to use Google infrastructure and tools to innovate in PLM, engineering and manufacturing. Google enterprise apps’ influence on the PDM/PLM market is still minor these days. However, I believe Google cloud infrastructure and tools consistently inspire established vendors and new companies to develop better solutions.

Earlier last week, I was discussing how PLM can take over Excel spreadsheets. For a long time, PLM tools have had a love-hate relationship with Excel. MS Office applications are very popular in every organization for collaboration. Think about SharePoint, Word, Excel. The Excel spreadsheet especially is the king tool in everything related to BOM management. My old article “My Excel Spreadsheets: From Odes to Woes” speaks about the pains related to using Excel for collaboration.

Online tools can solve many problems people face when using standalone Excel spreadsheets. Earlier today, Google announced the launch of a so-called “add-on store” for Google Docs and Sheets. Read more here. One of the killer aspects of Google Sheets Add-ons is the transparent way application user experience is integrated within the spreadsheet. Watch this video to see more.

Several applications were announced together with Google Sheets Add-ons. I selected a few of them that can make a lot of sense for engineering collaboration – Project Sheet (from forscale.project) and Workflows (from letterfeed.com). The following passage from the TechCrunch article is my favorite:

With the help of add-ons, Google is clearly hoping to create a developer ecosystem around Docs. But maybe more importantly, these integrations will also make it more competitive in a landscape where Microsoft is now finally taking the online versions of its Office productivity suite seriously. For many desktop Office users, the ability to bring add-ons to the desktop versions of Word or Excel remains an important selling point.

What is my conclusion? An ecosystem, or as it is now called, a “community,” is an important element of future success. Microsoft relied on the openness of Office and the ability to develop add-ins for a very long time. In the modern world, Google Apps is a good infrastructure foundation for collaboration. It is still not clear if manufacturing companies are ready to trust Google as an IT provider for their needs. I believe a critical mass of applications can be one of the factors that influence the future decisions of CIOs and engineering IT managers. Another obvious alternative is Office 365. Just my thoughts…

Best, Oleg


Engineers and Contextual Search Experience

February 28, 2014

engineers-context-search-plm

Web search has become part of our life. We don’t search anymore; we "google" everything. The visible simplicity of Google created a feeling that the magic of search can transform and simplify any software product’s behavior. CAD, PLM and other enterprise software companies liked the idea as well. Search is certainly going mainstream. Open source search libraries such as Lucene and Solr created an environment for easy implementation and distribution of search products across multiple software solutions.

At the same time, not every search solution leads to simplicity. The so-called "laundry list" of results can be very disappointing for customers and lead to many questions about relevance. Data matters, and data can be nasty – especially when it comes to complex engineering design and enterprise data management solutions. Indexing data located in enterprise software packages can be a tricky problem.

Even the web is not a search paradise these days. Google is still the web search king. Even so, the relevance of some Google results is questionable. The complexity of web search, multiplied by social networks and mobile, and combined with the commercial interests of web giants, has created complexity comparable to that of enterprise software. In parallel, there is a clear trend in enterprise software to adopt successful ideas from social software and social collaboration.

The recent Mashable article Yahoo’s New Long Game: Contextual Search sheds some light on the innovation and possible ways to solve the problem of relevance in web search results. This is my favorite passage:

“When I look at things like contextual search, I get really excited," Mayer said at the conference. Contextual search seeks to take in a variety of factors aside from a simple input to generate results that are tailored to a person’s time, place and patterns. For instance, a normal search for sushi might turn up a Wikipedia page or various websites about sushi. If one were to look up sushi from a phone through a contextualized mobile search, it could conceivably return nearby sushi restaurants with reviews, advertisements and coupons. The reason for Mayer to get excited is twofold: Nobody has yet mastered contextual search and it has the possibility of generating a ton of revenue.

Yahoo’s contextual search made me think about the potential of this type of advanced search option for engineers. The engineering environment is characterized by a number of data dependencies, connected information and the complexity of calculating the relevance of search results. Engineering data can generate a large volume of matches that can hardly be filtered with simple filtering mechanisms. Think about document numbers, material names, design element names. A search for "shaft", "tube" or "aluminum" can generate thousands of results that can hardly be distinguished, sorted or ordered.

This is a place where I think "contextual search" fits in a perfect way. What can be used as context for the search (query) mechanism? Actually, quite a few elements of easily available data can be re-used – date, time, organization, project name, team, location, previously used assemblies, etc. Some of these elements can be captured from the environment (computer, browser, application) and some can be captured directly from users via a specific user interface (capturing semantics). The result – a significant decrease in the number of search results and better relevance.
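
Here is a minimal sketch of how such context signals might boost relevance. Everything in it – the documents, the context fields and the boost weights – is invented for illustration; a real PLM search stack would learn or tune these empirically on top of an index like Lucene or Solr.

```python
# Hypothetical sketch: re-rank plain keyword matches using context signals
# (current project, recently used parts). All data and weights are invented.

RESULTS = [
    {"id": "D-1001", "title": "aluminum shaft rev B", "project": "gearbox"},
    {"id": "D-1002", "title": "steel shaft spec", "project": "pump"},
    {"id": "D-1003", "title": "aluminum tube drawing", "project": "gearbox"},
]

CONTEXT = {
    "project": "gearbox",        # captured from the user's open workspace
    "recent_parts": {"D-1003"},  # items the engineer touched lately
}

def score(result, query_terms, context):
    """Base keyword score plus context boosts."""
    s = sum(term in result["title"] for term in query_terms)
    if result["project"] == context["project"]:
        s += 2  # same-project documents are more likely relevant
    if result["id"] in context["recent_parts"]:
        s += 1  # recently used items get a small boost
    return s

def contextual_search(query, results, context):
    terms = query.lower().split()
    return sorted(results, key=lambda r: score(r, terms, context), reverse=True)

for r in contextual_search("aluminum shaft", RESULTS, CONTEXT):
    print(r["id"], r["title"])
```

Note how the same keyword query gets different orderings for different users: the context, not the query text, carries the disambiguating information.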

What is my conclusion? Search is not simple. Even Google simplicity is questionable when it comes to the reality of engineering and enterprise data. New algorithms and additional data analysis must be applied in order to improve the relevance of results. Contextual search is not completely new idea, but it can become the next big deal in improving of search and overall user experience. Just my thoughts…

Best, Oleg


Social PLM and domain-restricted Google+ communities

January 7, 2014

plm-google-private-communities

The social topic might take another interesting turn. During a few days off last week, I was reviewing unread social feeds and found an interesting article about a new Google+ feature – an additional security layer. The new configuration of domain-restricted communities can ensure that only people from a specific organization are able to join. Google is clearly striking against the Microsoft Yammer features that are going to be embedded into SharePoint.

Here is TheNextWeb’s article. It provides a glimpse of what Google+ does. Here is the passage I found very important when thinking about PLM, engineering and manufacturing:

Yet it goes further than administrators merely being able to set restricted communities as the default for the whole organization. Employees can also choose to create communities open to people outside of their domain, so clients, agencies, or business partners can join in the discussion. Once a community is created, an employee can share files, videos, photos, and events from Google Drive. Community owners can change settings, manage membership, or invite other team members to join.

The ability to add people from outside the domain can make the new feature applicable to supply chain communities. Work connected to Google Drive, with the ability to share large files, can help to share CAD files and other information.

The Google Enterprise blog article Private conversations with restricted Google+ communities provides more information and screen captures. The Google article helps you understand the types of communities and access layers that can be created – open, private and public. I found a short video which demonstrates Google+ private communities.

What is my conclusion? The way Google+ develops communities can potentially fit very well with expansion into engineering and manufacturing organizations. The Google+ user experience is well known, and the adoption level can be high. In my view, the absence of a security layer was a showstopper. Google Drive can help to share large files. Specialized CAD sharing networks like GrabCAD and Autodesk 360 will still have the advantage of CAD viewers and special design tool integration. How long will it take to integrate the same tools into Google+? A good question to ask. Just my thoughts…

Best, Oleg


Why do engineers need exploratory search?

December 13, 2013

engineers-exploratory-search

Search. One of the most powerful changes in experience we’ve seen in the last 10-15 years. Interestingly, search has been with us for many years. Search (or find) was a function in every enterprise application 20-25 years ago. However, before Google made us believe in and experience the power of web search, the importance of this function was clearly underestimated. Because of Google’s wide adoption, we associate almost everything in search innovation with Google. It is not unusual to hear a vendor or developer comparing what they do in search with Google.

Interestingly enough, search innovation happens outside of Google headquarters too. An old article from 2007 – Top 17 Search Innovations Outside Of Google – speaks about it. Have a read, compare your notes from 2007 and draw your own conclusions. Several innovations mentioned in the article resonate strongly with PLM, engineering and manufacturing. The following two are my top choices – result refinement and parametric search. Here are the passages I captured:

Results refinement and Filters: Often a natural next step after a search is to drill down into the results, by further refining the search. This is different from the "keyword-tweaking" that we’ve all gotten used to with Google; it’s not just experimenting with keyword combinations to submit a new query, but rather, an attempt to actually refine the results set [akin to adding more conditions to the "where" clause of a SQL query] – this would allow users to narrow the results and converge on their desired solution.

Parametric search: This type of search is closer to a Database query than a Text search; it answers an inherently different type of question. A parametric search helps to find problem solutions rather than text documents. For example, Shopping.com allows you to qualify clothing search with a material, brand, style or price change; job search sites like indeed let you constrain the matches to a given zip code; and GlobalSpec lets you specify a variety of parameters when searching for Engineering components (e.g. check out the parameters when searching for an industrial pipe ). Parametric search is a natural feature for Vertical Search engines.
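
The parametric search described in the quote above can be sketched in a few lines. The catalog, attribute names and constraints below are all invented for illustration – the point is only that the query constrains attributes, like a SQL WHERE clause, rather than matching text.

```python
# Hypothetical sketch of parametric search over engineering components:
# constrain attributes instead of matching keywords. Data is invented.

CATALOG = [
    {"part": "P-100", "type": "pipe", "material": "steel",    "diameter_mm": 50},
    {"part": "P-101", "type": "pipe", "material": "aluminum", "diameter_mm": 25},
    {"part": "P-102", "type": "pipe", "material": "aluminum", "diameter_mm": 50},
    {"part": "P-200", "type": "rod",  "material": "aluminum", "diameter_mm": 50},
]

def parametric_search(catalog, **constraints):
    """Keep items whose attributes satisfy every constraint.

    A constraint value may be a plain value (equality test) or a
    callable (an arbitrary predicate, e.g. a range check)."""
    def ok(item):
        for key, want in constraints.items():
            have = item.get(key)
            if callable(want):
                if not want(have):
                    return False
            elif have != want:
                return False
        return True
    return [item for item in catalog if ok(item)]

hits = parametric_search(CATALOG, type="pipe", material="aluminum",
                         diameter_mm=lambda d: d >= 40)
print([h["part"] for h in hits])  # aluminum pipes, 40 mm or wider
```

This is exactly why parametric search fits vertical engines: it presumes a shared attribute schema, which a parts catalog has and the open web does not.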

Another interesting writeup that drew my attention last week was a LinkedIn post – The Changing Face of Exploratory Search. Daniel Tunkelang speaks about modern trends in search, such as entity-oriented search, knowledge graphs and search assistance. The conclusion Daniel makes in his article is that the future of search lies in the combination of faceted search and search assistance. He calls it exploratory search. I found the following quotes very insightful:

Exploratory searcher has a set of search criteria in mind, but does not know how many results will match those criteria — or if there even are any matching results to be found.

Combining entity-oriented search and knowledge graphs has led to the use of faceted search interfaces that expose entities to searchers and encourage searchers to combine entities into precise search queries.

Search assistance offers the promise of making faceted search more accessible to the average searcher, enabling searchers to compose faceted search queries as they type. Indeed, search assistance makes it possible to expose untrained searchers to a richer set of relationships than typical faceted search interfaces, approaching the expressiveness of a database query language like SQL. Facebook’s Graph Search offers a taste of what is possible by combining faceted search with search assistance. It encourages people to create structured queries inside the search box, using suggestions along the way to guide the process of query construction.
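
The faceted refinement loop Daniel describes can be sketched as a tiny example. The documents and facet names below are invented for illustration; the essential mechanic is that after each refinement the system recounts the remaining facet values to guide the searcher’s next step.

```python
# Minimal sketch of faceted search: filter by the facets picked so far,
# then count remaining values per facet to drive the refinement UI.
# All data and facet names are invented.
from collections import Counter

DOCS = [
    {"type": "drawing", "material": "steel",    "project": "gearbox"},
    {"type": "drawing", "material": "aluminum", "project": "gearbox"},
    {"type": "spec",    "material": "aluminum", "project": "pump"},
    {"type": "drawing", "material": "aluminum", "project": "pump"},
]

def refine(docs, **selected):
    """Apply the facet values the searcher has picked so far."""
    return [d for d in docs if all(d[k] == v for k, v in selected.items())]

def facet_counts(docs, facets):
    """For each facet, how many matching docs carry each value."""
    return {f: Counter(d[f] for d in docs) for f in facets}

remaining = refine(DOCS, material="aluminum")
print(len(remaining))                                # docs left after one refinement
print(facet_counts(remaining, ["type", "project"]))  # guidance for the next step
```

Search assistance, in this picture, is just surfacing those counts (and entity suggestions) inside the search box while the query is being typed.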

PLM vendors are looking at how to provide search as part of the user experience. For most users today, search is a natural part of any application. At the same time, engineering and manufacturing data is semantically rich and interconnected. The complexity of products is growing. Product configurations, bills of materials, suppliers, manufacturers and many other data islands – all together they create a complex data access problem.

What is my conclusion? The customer demand is to have the simplicity of Google combined with the complexity of product configurations, multiple bills of materials, a variety of document configurations, and manufacturing and supply data. The idea of "exploratory search" can be very compelling for engineering and manufacturing data. Just my thoughts…

Best, Oleg


The future of Part Numbers and Unique Identification?

December 12, 2013

product-thing-unique-id

Identification. When it comes to data management, it is a very important thing. In product data management and PLM, it usually comes down to Part Numbers. Companies can spend days and months debating what to include in Part Numbers and how to do it. Smart Part Numbers vs. Dumb Part Numbers. OEM Part Numbers, Manufacturing Part Numbers, Supplier Part Numbers – and this is only one slice of the identification aspects in manufacturing and engineering. I want to reference a few of my previous posts – PDM, Part Numbers and the Future of Identification and Part numbers and External Classification Schemas – to give you some background on the potential problems or dilemmas you may face in decisions about numbering schemas and identification.

These days, product information goes beyond the borders of your company and even beyond your stable supply chain. The diversity of manufacturers, suppliers and individual makers, combined with the increased amount of e-commerce, is creating the need to use product identification more broadly, and maybe in a more synchronized and standard way.

My attention was caught by the SearchEngineLand article – How Online Retailers Can Leverage Unique Identifiers & Structured Data. Read it and draw your own conclusions. The article speaks about the usage of unique product identification in e-commerce – the GTIN.

In e-commerce, there is a unique global identifier that is leveraged across all major comparison shopping engines and search engines: namely, a GTIN or Global Trade Item Number (better known in the U.S. as a UPC). These global unique product identifiers take the guessing game out of comparing two products to determine if they are the same item, eliminating the problems typically associated with entity resolution and big data — all you have to do is compare the GTINs.

The most interesting fact is that the various GTIN properties are now part of the schema.org product definition. Schema.org is an initiative supported by Google, Yahoo, Bing and Yandex for representing structured information on web pages. Google can aggregate products based on the comparison of identical GTINs. You can see an interesting patent filed by Google – Aggregating product review information for electronic product catalogs. Here is an interesting description:

An analysis module collects product reviews and determines whether each product review includes a product identifier, such as a Global Trade Item Number (“GTIN”). For product reviews having a product identifier, the module adds the product review to the product catalog and associates the product review with the product identifier. For product reviews lacking a product identifier, the module initiates an Internet search using information from the product review and analyzes search results to identify a product identifier for the product review.

You can ask how this applies to PLM and Part Numbers. In my view, the initiative to have a standard for structured data representation demonstrates a technique that can be used by manufacturing companies and software vendors. The web shows how to do it in an open way and increase the value of data access and analysis. Finding similar parts inside your company’s product catalogs and across the supply chain, with further optimization, can be an interesting solution. Manufacturing companies have been trying to solve this problem for many years. It can also lead to significant cost benefits.
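
The entity-resolution shortcut the article describes can be sketched very simply: records from different sources are treated as the same product whenever their GTINs match, with no fuzzy matching needed. The records below loosely mimic schema.org/Product fields (`name`, `gtin13`); all values are invented for illustration.

```python
# Hedged sketch: cluster product records from different sources by GTIN.
# Record values are invented; field names echo schema.org/Product.

records = [
    {"name": "Widget Pro 2",  "gtin13": "0012345678905", "source": "supplier-a"},
    {"name": "WIDGET PRO II", "gtin13": "0012345678905", "source": "supplier-b"},
    {"name": "Gadget Mini",   "gtin13": "0098765432105", "source": "supplier-a"},
]

def group_by_gtin(records):
    """Cluster records by their global identifier."""
    groups = {}
    for rec in records:
        groups.setdefault(rec["gtin13"], []).append(rec)
    return groups

groups = group_by_gtin(records)
for gtin, recs in groups.items():
    names = {r["name"] for r in recs}
    print(gtin, "->", len(recs), "record(s):", sorted(names))
```

The same grouping, run over internal part catalogs and supplier feeds keyed by a shared identifier, is the “finding similar parts” opportunity mentioned above.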

What is my conclusion? The adoption of web technologies and practices is becoming an interesting trend these days. In my view, enterprise software (and PLM is part of it) suffers from protectiveness in everything related to data. Keep data close, hold on to formats and data management practices – this is a very short list of examples you can see in real life. It was a well-known business practice for many years. However, the opportunity from openness can be bigger. Schema.org is a perfect example of how an agreement between competing web giants can solve many problems in user experience and benefit e-commerce. Just my thoughts…

Best, Oleg


Will Google Set Future PLM Information Standards?

December 5, 2013

GKS-car-info

One of the core capabilities of Product Lifecycle Management is the ability to define and manage a variety of information about a product – requirements, design, engineering, manufacturing, support, supply, etc. In order to do so, PLM vendors developed data management technologies and flexible frameworks that can handle product data. You may find some of my previous posts about PLM data modeling interesting – PLM and Data Model Pyramid and What is the right data model for PLM?

Nowadays, data management is going through a Cambrian explosion of database options. I touched on it a bit in my writeup PLM and Data Modeling Flux. If you have a few free minutes over the weekend, take a look at the presentation – PLM and Data Management in the 21st century – which I shared at TechSoft TechTalk a few weeks ago in Boston.

The biggest problem in handling product data and information is the growing complexity of data and dependencies. In addition to the original complexity of multidisciplinary data representations (design, engineering, manufacturing), we can see a growing need to manage customer data, supply chains, social networks, etc. I’ve been pointing to Google Knowledge Graph as one of the interesting technologies for maintaining a complex set of interlinked information. Read my post – Why PLM need to learn about Google Knowledge Graph – to learn more. Google is not the only vendor interested in how to maintain product data. My post – 3 things PLM can learn from UCO (Used Car Ontology) – speaks about the variety of technologies used to model car product information.

The following article caught my attention this morning (thanks to one of my blog readers) – Google adds car facts, prices to its Knowledge Graph. I found it very interesting and connected to my previous thoughts about how product information can be managed in the future. Here is a passage from the article:

Starting today, users can search the make, model, and year of a car to find out a variety of information, directly from the Google search page. For instance, if you search “Tesla Model S”, the Knowledge Graph will now show up and present you with the MSRP, horsepower, miles-per-gallon, make, and available trims. Different cars show a different set of information, as well. Should you search “Ford Focus”, you will be presented with the MSRP, MPG, and horsepower, as well as the engine size, body styles, and other years.

I made a few searches and captured the following screenshots. You can see how the Google Knowledge Graph semantically recognizes the right information and snippets of data about each vehicle.

MB-GL-GKS-info

The following set of information about the Mazda6 also shows that GKG keeps information about multiple models and model years (revisions?).

mazda6-2014-GKS

Mazda6-2012-GKS-info

What is my conclusion? Most CAD/PLM companies are very protective about their data models and the ways they store, manage and share data. This can become a problem in the future, which will probably be more open and transparent. We clearly need to think about what standards can support future product information modeling. Here is the thing – consumerization can come to this space exactly the same way it came to other domains. Future product information management standards might be developed by web giants and other companies outside of the CAD/PLM space. Data architects and technologists should take note. Just my thoughts…

Best, Oleg

*picture is courtesy of http://9to5google.com blog.


Cloud PLM Scaling Factors

November 26, 2013

plm-cloud-scale

Scale is one of the fanciest words that comes up when different technologies are discussed or debated. So, speaking about cloud CAD, PDM and PLM, the discussion must go toward the "scaling factor" too. Since the PLM cloud switch finally happened, it became clear that vendors and customers will have to come down from buzzwordy statements about the "cloud" to more specific discussions about the particular cloud technologies they are using. Very often, cloud deployment relies on so-called IaaS (infrastructure as a service) used by vendors to deploy solutions. PLM vendors are using IaaS as well. I’ve been talking about it a bit in my post – Cloud PLM and IaaS options. In my view, Siemens PLM made the boldest statements about following an IaaS strategy in the delivery of cloud PLM solutions. At the same time, I believe all other vendors, without exception, are using the variety of IaaS options available on the market from Amazon, Microsoft and IBM.

An interesting article caught my attention earlier today – Google nixes DNS load balancing to get its numbers up. The article speaks about Google demoing its cloud platform’s scaling capabilities. The Google blog article provides a lot of details about the specific setup used for tests and measurement:

This setup demonstrated a couple of features, including scaling of the Compute Engine Load Balancing, use of different machine types and rapid provisioning. For generating the load we used 64 n1-standard-4’s running curl_loader with 16 threads and 1000 connections. Each curl_loader ran the same config to generate roughly the same number of requests to the LB. The load was directed at a single IP address, which then fanned out to the web servers.

It is not surprising that Google made some competitive statements trying to differentiate itself from its major competitor – Amazon. Here is an interesting passage from the Gigaom writeup:

“Within 5 seconds after the setup and without any pre-warming, our load balancer was able to serve 1 million requests per second and sustain that level.”… this as a challenge to Amazon Web Service’s Elastic Load Balancing. ”ELBs must be pre-warmed or linearly scaled to that level while GCE’s ELBs come out of the box to handle it, supposedly,” he said via email. Given that Google wants to position GCE as a competitor to AWS for business workloads, I’d say that’s a pretty good summation.

The discussion about cloud platforms and scalability made me think about the specific requirements for cloud PLM to scale and how they relate to platform capabilities. Unfortunately, you cannot find much information about that from PLM vendors. Most of them limit the information to simple statements about compatibility with specific platform(s). However, the discussion about scaling can be interesting and important. Thinking about it, I came to three main groups of scaling scenarios in the context of PLM: 1/ computational scale (e.g. when a PLM system is supposed to find design alternatives or resolve a product configuration); 2/ business process scale (e.g. to support a specific process management scale in transactions or data integration scenarios); 3/ data processing scale (e.g. required to process significant data imports or analyses). Analysis of these scenarios can be interesting work, which of course goes beyond a short blog article.

What is my conclusion? The coming years will bring an increased number of platform-related questions and differentiation factors in the PLM space and the enterprise in general. This will come as a result of solution maturity, use cases and delivery scenarios. The cost of the platforms will matter too. Both customers and vendors will be learning about delivery priorities and how future technological deployments will match business terms and expectations on both sides. Just my thoughts…

Best, Oleg


How CAD/PLM can capture design and engineering intent

November 8, 2013

design-eng-intent

It was a big Twitter day. The Twitter IPO generated an overflow of news, articles and memorable stories. For me, Twitter has become part of my working ecosystem – the place I use to capture news, exchange information and communicate with people. If you are on Twitter, try Vizify to visualize your Twitter account. I did it here. The most insightful finding for me was the fact that I tweet 24 hours a day… (well, I don’t know how Vizify deals with my time zone changes). It made me think about what impact a Twitter-like ecosystem could have on engineers and designers. It came as a continuation of my thoughts about the failure of social PLM – Why Social PLM 1.0 failed and What PLM can learn from public social data?

I’ve been reading an article and interview with Biz Stone, Twitter co-founder and entrepreneur – Be emotionally invested. It is a good story. Read it, especially if you are involved in startup activity. One of the interesting pieces that caught my attention was a story about the Google working environment. Here is the passage:

“I used to just walk around. I don’t know if I was supposed to, but I’d just open doors and see what people were doing.” One led to a guy surrounded by DVRs. Stone asked what he was doing. “I’m recording everything being transmitted on TV all over the world.” Another led to “a sea of people operating illuminated foot-pedal scanning devices. “We’re scanning every book ever published.”

Another interesting article that caught my attention was about an interesting behavior – deleted tweets. Navigate to read – Why do people delete their tweets? University of Edinburgh researchers have been looking into the motives behind deleted Twitter missives. You can read more about this study here. The funny part is that this mechanism implements the old idiom – a word spoken is past recalling. Here is a passage explaining the research and how it works.

Right now there’s no way to tell whether you’ll be proud of your rousing 140 character defense of James Franco in a few years, or deeply, deeply ashamed. But hiding the evidence isn’t hard. Deleting a tweet is not a complicated process. If you don’t like what you wrote, you can trash it in a few clicks. And there are services like Tweet Delete that help you mass-delete older tweets.

These two examples – capturing information streams from global and personal perspectives – made me think about how we could potentially capture engineering activities and discover design intent and decision-making factors, similar to the techniques used to identify deleted tweets and other Twitter user behaviors. The challenge of the CAD/PLM environment compared to Twitter is obviously security and open APIs. It is hard to capture information from design and engineering systems. In most cases, the information is secured and access is restricted.

What is my conclusion? There is huge potential in analyzing design and engineering activity by capturing information about people's behavior. My hunch is that it can become one of the areas CAD/PLM companies and startups might crack to discover the future potential of design optimization and decision making. Just my thoughts…

Best, Oleg


What is the future of PLM data analytics?

October 31, 2013


The amount of data we produce is skyrocketing these days. The social web, mobile devices, the internet of things – these are only a few examples of data sources that are massively changing our lives. The situation in the business space is not much different. Companies are more and more involved in connected business. Design, supply chain, manufacturing – all these activities produce a significant amount of digital information every minute. In design, the complexity of products is growing significantly. Personalization, customer configurations and many other factors are creating a significant data flow. Simulation is another space that can potentially bring a significant amount of data. I was watching the presentation "Workspace 2020" made by Dion Hinchcliffe last week at the forum "Transform the way work gets done". Navigate to the slide share slides and take a look. One of the numbers struck me – the speed of data growth. Now (in 2013) we double our data every 2 years. By 2020, we are going to double the data every 3 months.
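It is easy to underestimate what that change in doubling period means. A quick back-of-the-envelope calculation, using only the numbers from the presentation, shows the annual growth factor each doubling period implies:

```python
# Annual growth factor implied by a doubling period (in years):
# factor = 2 ** (1 / period_years)
def annual_growth(doubling_period_years):
    return 2 ** (1 / doubling_period_years)

# 2013: doubling every 2 years -> about 1.41x more data per year.
print(round(annual_growth(2.0), 2))   # 1.41

# 2020: doubling every 3 months (0.25 years) -> 16x more data per year.
print(round(annual_growth(0.25), 2))  # 16.0
```

Going from roughly 1.4x per year to 16x per year is not an incremental change; it is the difference that makes today's analytics approaches break.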


The massive amount of data raises the question of how engineering, design and manufacturing systems can handle this data and produce meaningful insight and decision support for engineers and other people involved in the development process. The question of data analytics came to my mind. In the past, data analytics was usually associated with a long, complicated and expensive process involving IT resources, time and programming. The cloud and the new computing ecosystem are changing this space. I was reading the announcement made by Tableau (a company focused on analytic dashboards and tools) – Tableau Software partners with Google to Visualize Big Data at Gartner IT Symposium.

The partnership mixes Tableau’s analytics with the Google Cloud Platform. The technology was presented at the Gartner conference in Orlando recently. Here is an interesting passage explaining what Tableau did with Google:

“Tableau and Google created a series of dashboards to visualize enormous volumes of real-time sensory data gathered at Google I/O 2013, Google’s developers’ conference. Data measuring multiple environmental variables, such as room temperature and volume, was analyzed in Tableau and presented to attendees at the Gartner event. With Tableau’s visual analytics, Gartner attendees could see that from the data created, I/O conference managers could adjust the experience and gain insights in real time, like re-routing air-conditioning to optimize power and cooling when rooms got too warm.”

I found the following video interesting – it explains how easily you can build analytics with Tableau and Google BigQuery.

It made me think about the future potential of analytics we can bring into the design and engineering process by analyzing huge amounts of data – simulation, customer, operational. Just think about combining data collected from products in the field with simulation analyses that can be used to improve design decisions. Does it sound crazy and futuristic? Yes, I think so. However, many other things that we considered crazy just a few years ago are reality now.
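As a thought experiment, the "field data mixed with simulation" loop could look like the sketch below: compare measured telemetry from deployed products against a simulated prediction and flag the units whose behavior deviates enough to warrant a design review. All the numbers, names, and the threshold are invented for illustration only.

```python
# Hypothetical sketch: compare field telemetry against a simulated baseline
# and flag units that deviate enough to warrant a design review.
# All values, unit ids, and the threshold are made up for illustration.

simulated_max_temp_c = 70.0   # peak temperature predicted by simulation
deviation_threshold = 0.10    # flag units running >10% above prediction

field_telemetry = {           # unit id -> observed peak temperature (C)
    "unit-001": 68.5,
    "unit-002": 79.1,
    "unit-003": 71.2,
}

def flag_units(telemetry, predicted, threshold):
    """Return unit ids whose observed value exceeds prediction * (1 + threshold)."""
    limit = predicted * (1 + threshold)
    return sorted(u for u, temp in telemetry.items() if temp > limit)

print(flag_units(field_telemetry, simulated_max_temp_c, deviation_threshold))
# ['unit-002']
```

The interesting part is not the comparison itself but the feedback loop: flagged units point engineers back at the simulation assumptions that real-world data contradicted.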

What is my conclusion? Data analytics is one of the fields with the potential to improve the design and engineering process by analyzing significant amounts of data. Think about leveraging cloud infrastructure, network and graph databases combined with visual and intuitive analytic tools. It is not where we are today, but this is how the future will look. Just my thoughts…

Best, Oleg

