I want to talk about hardware today. You are probably surprised, but I hope not too much. During the last 10-15 years, most of the work PDM/PLM systems were doing was focused on commodity low-end x86 servers. There is nothing wrong with that. Nevertheless, I can see some new trends coming in this space. They come with web development, large-scale data, mobile, data analytics and more. I can clearly see two patterns in how vendors are using hardware. One of them is an attempt to build proprietary data centers from commodity-level servers (e.g. Google, etc.). Another one is a focus on delivering solutions bundled with specific, highly profiled hardware platforms (IBM PureData, Oracle Exadata, Cisco, etc.). Data centers are an ideal place for such types of boxes.
I was reading a GigaOM article earlier today – Does Big Data really need custom hardware? The article itself is not about PLM. At the same time, it made me think about some of the examples the author is using. Think about large computational tasks related to design, rendering, simulation, data analysis, or just the check-out of a very sizeable assembly for a configured order. All of these use cases require data at scale. To get this information, you need a very efficient data back-end with a significant ability to scale in different dimensions. Here is an interesting passage from the article:
Where the generic server market has been commodified with low-end x86 servers companies like Teradata and EMC are doing their best to hold onto their hardware margins with specially designed systems. And it looks like IBM and Cisco have decided this is an opportunity not to be missed, and are taking it further. Cisco has released a unified computing system specifically designed to run SAP’s HANA database. Oracle is also heading down this path.
The question the author is asking is actually a good one. Do we need high-scale performance data boxes, or can we live with data centers built on top of commodity hardware? Here is another quote:
Instead of these two boxes representing a new hardware for big data these really represent that capitulation by the major hardware vendors to a services model. Technically these boxes may have different chips when compared with commodity servers, but what these guys are actually selling is the plug and play aspect. Sure a customer can buy cheaper boxes and download a Hadoop or other open source software (or pay a licensing fee and have someone like Cloudera manage it for them) but they want something that works with little or no effort.
So, what happens with CAD/PDM/PLM vendors? Companies' expectations are moving beyond simple engineering document management and check-in/check-out processes, towards data analytics, social software and other heavily data-oriented tasks. I can hear the voices of "big data" discussions. However, there is not much clarity in these discussions yet. Vendors are going to re-think many data-driven paradigms. What path will vendors follow? Some vendors will follow cloud data centers and commodity hardware. Another group (e.g. SAP HANA) is planning to develop proprietary server boxes.
What is my conclusion? The awareness of data-driven back-end systems is growing in manufacturing and other enterprise companies. In my opinion, PLM vendors are not there yet. Delivering a scalable, performance-oriented back-end is a nontrivial task, and solving it can allow PLM software to scale. Cloud PLM opens new, untapped classes of applications that we have never seen before. What path will PLM vendors take? Will PLM follow some of the paths toward custom hardware, continue to use standard hardware and database software, or move to open source and develop separate bundles? Time will tell. What is your opinion?