How to operate your physical data center like a virtual cloud
Too often in the history of the data center industry, overcapacity and underutilization have combined to deliver the worst of both worlds for a business: poor capital expenditure and wasteful ongoing operational costs.
As any CIO knows, the history of the modern data center began in the 1990s, when Wintel emerged as a credible server-side compute engine and client-server was the dominant architecture. This spurred the creation of the first data centers in the form we know them today. Cabinets, racks and rows of industry-standard servers were rolled into buildings and rooms in rapid response to ever-increasing demand for more applications. It catalyzed the creation of a new real estate category – that of digital infrastructure. New disciplines and skills were needed, and data center architects, designers, and mechanical and electrical engineers developed methods and solutions specifically for this new and exciting industry.
“Yet despite the extraordinary advances made in engineering and service delivery…some of the problems of today’s data centers were baked in from the start.”
As these physical assets were designed, built and commissioned, it was sometimes forgotten that enterprise IT is always provided in response to a request from the business. IT’s role was to forecast the need and invest in server, storage, network and systems management software; facilities (I&O) provided the infrastructure. Capacity was often based on an overestimation of the requirement, which itself was based on peak-load forecasts. Once installed and commissioned, it was rare that anyone ever checked what was actually being used. Tales of CPUs running at barely 10% capacity, and of direct-attached or network-attached storage bought in huge capacities but never fully used, were common. Processing, memory, storage and even networking infrastructure were wastefully underutilized.
As the engineering discipline evolved and these facilities became specialized, we truly entered the distributed computing data center era. Yet despite the extraordinary advances made in engineering and service delivery, such as spinning up a server or VM in minutes rather than months, or improved operations driving energy efficiency, some of the problems of today’s data centers were baked in from the start.
“By gaining insight into available capacity across on-premises, colocation and cloud data centers, managers can operate and deliver cloud-like services to their business through fast, informed decision making.”
When the enterprise IT needle shifted away from running on relatively small numbers of very big, very similar boxes, the era of heterogeneous IT had arrived. Today’s environments are likely to host a huge range of different technologies, including industry-standard servers, direct- and network-attached storage, switches and routers, standalone converged systems, high-density servers and low-density tape drives, some standalone and some virtualized. Each data center will have its own range of diverse management systems and methods.
Thankfully, solutions exist to address historic management inefficiencies.
ServerFarm’s InCommand™ combines people, process and a portal to provide a service approach to IT, facilities, and DC physical infrastructure management. It addresses the realities of the very complicated world of mechanical, electrical, building and IT, and provides the solution to making it flexible, agile and cost effective.
The case for intelligent investment to rapidly address the complexity of the DC
Data center infrastructure was never simple and is likely to become more complex.
The days of separate management of IT, facilities and physical data center infrastructure have gone. Yet thinking this is a problem that can be solved by adding another software layer is misguided. An integrated approach that virtualizes management into a service is now a credible path forward.
ServerFarm’s unique capability is to take all physical assets in IT, facility and DC environments and return them as a virtualized service. By combining software, people and processes, ServerFarm’s InCommand can reveal a unique view of infrastructure.
“Predictive analytics is the key to helping create an efficient operational environment at a component level, across a site or even over multiple sites and cloud platforms. It is key to maximizing asset utilization through accurate forecasting by exposing the following: What happened? What is happening? What happens next?”
For example, by gaining insight into available capacity across on-premises, colocation and cloud data centers, managers can operate and deliver cloud-like experiences to their business through fast, informed decision making.
Building intelligence into management
A data center is, by definition, an interconnected ecosystem.
True, detailed visibility brings transparency to what is actually happening in the data center, including identifying stranded capacity. InCommand’s features make it possible to track every data center asset from commissioning to disposal. This maximizes utilization and allows for accurate planning.
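As an illustration of the idea, here is a minimal sketch of lifecycle-tracked assets and a stranded-capacity calculation. All names, fields and figures are hypothetical, not InCommand’s actual data model; "stranded capacity" is modeled simply as power provisioned to in-service assets but not actually drawn.

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    COMMISSIONED = "commissioned"
    IN_SERVICE = "in_service"
    DECOMMISSIONED = "decommissioned"

@dataclass
class Asset:
    asset_id: str
    rack: str
    power_provisioned_kw: float   # power allocated to the asset
    power_drawn_kw: float         # measured average draw
    stage: Stage

def stranded_capacity_kw(assets):
    """Provisioned-but-unused power across in-service assets."""
    return sum(a.power_provisioned_kw - a.power_drawn_kw
               for a in assets if a.stage is Stage.IN_SERVICE)

# Illustrative fleet: one server nearly idle, one well utilized,
# one already decommissioned (excluded from the calculation).
fleet = [
    Asset("srv-001", "R1", 5.0, 0.8, Stage.IN_SERVICE),
    Asset("srv-002", "R1", 5.0, 4.2, Stage.IN_SERVICE),
    Asset("tape-01", "R2", 2.0, 0.0, Stage.DECOMMISSIONED),
]
print(round(stranded_capacity_kw(fleet), 2))  # 5.0 kW sitting idle
```

Tracking each asset’s stage explicitly is what lets a calculation like this stay accurate from commissioning through to disposal.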
InCommand is an integrated management service. It is a software platform with proven workflows to inform decision making using accurate infrastructure data. Accurate information provides power, cooling and cable asset optimization by revealing real metrics about available capacity and utilization.
But it is not simply another data center management software layer. Integration extends to third-party monitoring systems through APIs and rich JSON functions to provide a comprehensive management service. Designed by people with years of deep data center expertise and experience of the day-to-day challenges of efficient operations, the software-enabled service is flexible enough to integrate with user workflows.
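The kind of JSON integration described above can be sketched as follows. The payload shape, sensor names and field names are invented for illustration; they are not InCommand’s actual API schema. The point is that vendor-specific JSON from a monitoring system is normalized into one common view.

```python
import json

# Sample payload in the shape a third-party monitoring API might
# return (illustrative only, not a real vendor schema).
raw = """
{
  "readings": [
    {"sensor": "pdu-3-branch-2", "metric": "power_kw", "value": 3.7,
     "ts": "2024-05-01T12:00:00Z"},
    {"sensor": "crah-1-supply", "metric": "temp_c", "value": 18.4,
     "ts": "2024-05-01T12:00:00Z"}
  ]
}
"""

def normalize(payload: str) -> dict:
    """Map vendor-specific JSON into a flat {sensor.metric: value} view."""
    doc = json.loads(payload)
    return {f'{r["sensor"]}.{r["metric"]}': r["value"]
            for r in doc["readings"]}

metrics = normalize(raw)
print(metrics["pdu-3-branch-2.power_kw"])  # 3.7
```

A thin normalization layer like this is what allows many diverse monitoring systems to feed one consolidated management service.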
The people and process service context is customer specific. Because it is a service-based solution built on an open-source platform, InCommand integrates with customer workflows and processes.
Becoming data driven: steps to enabling predictive analytics
Being data driven within InCommand goes beyond understanding what you have and what is available.
When it comes to planning, forecasting, utilization and life cycles, InCommand provides insight into ‘what if’ scenarios. Predictive analytics is the key to helping create an efficient operational environment at a component level, across a site, or even over multiple sites and cloud platforms. It is key to maximizing asset utilization through accurate forecasting by exposing the following: What happened? What is happening? What happens next? Real-time metrics are analyzed, and InCommand ensures data is validated and accurate over time.
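The "what happened / what happens next" idea can be illustrated with the simplest possible forecast: fitting a least-squares trend line to historical utilization and extrapolating it. The figures are invented, and real predictive analytics would use far richer models; this only sketches the principle.

```python
def linear_forecast(history, periods_ahead):
    """Fit a least-squares trend line over historical utilization (%)
    and extrapolate it to answer 'what happens next'."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = (sum((x - x_mean) * (y - y_mean)
                 for x, y in zip(xs, history))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + periods_ahead)

# Hypothetical monthly rack power utilization (%): what happened...
history = [52, 55, 57, 60, 62, 66]
# ...and what happens next, projected three months out.
print(round(linear_forecast(history, 3), 1))  # 73.4
```

Even this toy model shows how historic data turns into a forward-looking capacity signal: a steady upward trend warns, months in advance, when utilization will approach provisioned limits.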
No other service combines professional data center expertise with a constantly evolving software solution behind a cloud-hosted portal. InCommand goes beyond harvesting and analyzing historic data to maintain current capacity and utilization rates: it runs simulation forecasts that apply financial metrics in order to break the cycle of overcapacity and underutilization.
For users, it provides a clear path to a new digital data center operating model: one that saves costs, maximizes physical asset utilization and, by being virtual, creates workflow and service efficiencies, with full KPIs and reporting for the management of everything from cables to clouds.
Just as the availability of cloud is making enterprise IT increasingly service based, so the management of physical data center, facilities and IT assets must move beyond DCIM software to become a constantly evolving, responsive, service-based solution.