Sunday, May 17, 2009

A Better Historical Parallel for Clouds

Cloud computing is often associated with “commodity” pricing and economies of scale, which is why the transition to a world with cloud resources harks back to the emergence of public utilities and mass manufacturing. Cloud networks may provide a utility of sorts, but there was never an industry-wide transition from in-house electricity generation to the public utility of the kind that is likely to happen with cloud computing. Furthermore, while standardization and interoperability have created a layer of abstraction between applications and their underlying data center infrastructure, clouds are far from offering any service as fungible as commodities like water and electricity. History should provide some insight into what to expect from the emergence of cloud computing, but this industry parallel from 100 years ago is not ideal.

In reading and speaking to people in the industry, I have come to the conclusion that the transition to cloud computing will most resemble the transition in the semiconductor industry as it went from fully owned fabrication facilities (fabs) to third-party foundries. At first glance this might seem silly, but let me explain.

When foundries first emerged in the 1980s, it was in response to both the rising cost of fabs and the emergence of many small, independent silicon design companies that had difficulty financing their own manufacturing facilities. At the time, integrated device manufacturers (IDMs), like TI and Intel, developed all of their own process technologies and manufactured at their own fabs. But as the global market for semiconductors grew and process technology advanced, many of the smaller IDMs were unable to keep up with the innovation necessary to remain competitive. Each new process node raised the cost of a fab, from $100m in 1985 to $1bn in 1994 (when the Fabless Semiconductor Association was created) to an estimated $10bn today, making new fabs prohibitive for all but the very largest companies. Over time, even those that could afford to build a new fab found that the ebb and flow of semiconductor demand, combined with the difficulty of capacity forecasting, rendered these operations highly inefficient over the long run, and a real distraction from their core business in the short run.

The emergence of foundries like TSMC offered an alternative for large silicon vendors that had difficulty managing their growth and costs. For small vendors, the impact was just as great, as the foundry model became essential to the emergence of the fabless industry and of companies like Nvidia, Qualcomm, Xilinx and Broadcom. As with any outsourcing transition, the resistance was led by arguments about IP leakage and the strategic importance of owning and controlling manufacturing as a competitive advantage. But such myopic views became a costly gamble, with some chip companies ultimately buckling under the weight of their fab operations, and others simply ceding market leadership to their more nimble fabless competitors.

The foundries that succeeded did so by building massive, highly efficient facilities and working closely with process technology partners and leading customers. Today, the industry has only five or six leading foundries, with a handful of niche players using specialization to eke out an existence. While quality and the maturity of the process technology are critical, price is still the key determinant in choosing one foundry over another.

In thinking about the analogy, I first want to point out the ever-growing investment required to launch a large enterprise data center, which has risen to $500 million from $150 million over the past five years[1]. Additionally, the cost of running these facilities is rising by as much as 20 percent a year, partly due to the price of electricity but also due to general opex. Secondly, because of the lead time required to design and build a large data center (roughly two years), capacity is added well in advance of actual business needs, leaving enterprises with the difficult task of forecasting data center requirements. For some enterprises, data center spending is rapidly getting out of control as new data-intensive applications for internal and external purposes consume valuable resources with an uncertain economic return.

While foundries and clouds provide very different services, both solve a similar build-versus-outsource decision for their customer base. Both provide their customers with access to cutting-edge technology, with minimal up-front commitment and the ability to scale infinitely at variable cost. In the case of the foundry business, switching costs are relatively low, and most large chip vendors give their business to multiple foundries to reduce risk and improve pricing. Thus far, there appears to be a similar dynamic between cloud services and their customers, but it's still early days.

So what can we learn from the history of the fabless/foundry industry? First, due to the economies of scale required to stay competitive on quality, technology and price, there are likely to be only a few big winners (aside from niche and local players). This matters because most large hosting and managed service providers will attempt to morph into cloud service providers, with consolidation the likely result. Second, it will become harder and harder to justify continued spending on internal data centers as costs spiral out of control with an unclear ROI.

Finally, if there are only a few large players ten years from now, building a business selling hardware and software to those players may be harder than we think. I am reminded that despite Google's enormous annual capex, few companies can claim to have built their success on selling to Google. On the contrary, Google has been building its own servers, and is now rumored to be building its own switches, in both cases buying off-the-shelf chips. Whether this will continue is a big question, and I am still undecided. After all, in the foundry business there is a clear distinction between companies like TSMC and Applied Materials, but this could be where the analogy fails.

In any case, one clear opportunity is to be the "fabless start-up" of the cloud era, taking advantage of cutting-edge, third-party data center technology and infrastructure to build new businesses with a superior cost structure.

[1] William Forrest, James M. Kaplan, and Noah Kindler, “Data centers: How to cut carbon emissions and costs,” McKinsey Quarterly, November 2008.


  1. Adam,

    A good analogy, but one area where I would beg to differ is the definition of ‘few’ when you refer to the number of clouds that will emerge as winners. In my opinion there will be many specialized cloud service providers, each focusing on a niche service, far more numerous than in the fab industry.

    Unlike the semiconductor industry, the cloud computing paradigm addresses a larger user base. Apart from technology and price point, there are also other factors, like compliance (for the medical and financial industries), security, open vs. proprietary standards, and portability, that will determine the adoption of the cloud by different industry verticals.

    As an example, the storage, computation and compliance requirements of an oil and gas company running simulations over terabytes of data are very different from those of a financial company analyzing data for trading.

    Also, unlike the fab industry, clouds can be made to work with each other through APIs, with data in one cloud, computation in a second and display in a third.

    Rajeev Kutty

  2. Rajeev,
    Thanks for your thoughtful comments. I think you have a point, although I tend to believe many of the specialized cloud providers you anticipate may in fact be intermediaries, running a SaaS application on someone else's underlying data center infrastructure. So whether we define these as cloud providers or application providers (virtual clouds) may be a distinction worth making. This is also what I referred to in my previous post about identifying premium services running on a commodity cloud base.