Why Companies Still Want In-House Data Centers
Sometimes it seems as if the cloud is swallowing corporate computing. Last year businesses spent nearly $230bn globally on external (or “public”) cloud services, up from less than $100bn in 2019. Revenues of the industry’s three so-called “hyperscalers”, Amazon Web Services (AWS), Google Cloud Platform and Microsoft Azure, are growing by over 30% a year. The trio are beginning to offer clients newfangled artificial-intelligence (AI) tools, which big tech has the most resources to develop. The days of the humble in-house data center are, surely, numbered.
Or are they? Though cloud budgets overtook in-house spending on data centers a few years ago, firms continue to invest in their own hardware and software. Last year these expenditures passed $100bn for the first time, reckons Synergy Research Group, a firm of analysts. Many industrial companies, in particular, are finding that on-premises computing has its advantages. A slug of the data generated by their increasingly connected factories and products, which Bain, a consultancy, expects soon to outgrow data from broadcast media or internet services, will stay on premises.
The public cloud’s convenience and its cost savings, a product of economies of scale, come with downsides. The hyperscalers’ data centers are often far away from the source of their customers’ data. Transferring these data from that source to where they are crunched, sometimes half a world away, and back again takes time. Often that does not matter; not all business information is time-sensitive to the millisecond. But sometimes it does.
Even so, believes Arun Shenoy of Serverfarm, which works with both hyperscalers and data users, many large firms will think twice before they stick their heads completely in the cloud.