5 Strategies for Converged Infrastructure Efficiency White Paper
Imagine buying a new sports car with the intent to drive it only one day per year. The driver would either have to be wastefully wealthy or insane, right? When you acquire something of value like that, you want to take it out, let it run, enjoy it — put it to the full use you envisioned before the purchase. You want to use it, but not abuse it. Wherever the line between those two states lies, you want to stay just on the prudent side of it. That's the way to get the most value from your investment.
The same holds true for converged IT infrastructure. When an organization pays thousands upon thousands of dollars for IT processing capabilities, it makes intuitive sense that management would want shared compute, network and storage resources running on just this side of abuse. Keep them hot, but don't burn them out; maximize the investment, but don't blow it up. After all, isn't that part of the strategy behind virtualization? When one application can't use a server's full capacity, running multiple virtual machines on the same hardware can.
However, this IT infrastructure efficiency best practice is not always followed. According to a June 2015 study by sustainability consultancy Anthesis Group and Stanford research fellow Jonathan Koomey, business and enterprise data center IT equipment utilization "rarely exceeds six percent." Adding insult to injury, current data from the Uptime Institute reveal that up to 30 percent of the country's 12 million servers are "actually 'comatose' – abandoned by application owners and users but still racked and running, wasting energy and placing ongoing demands on data center facility power and capacity." The Anthesis study used data from TSO Logic spanning an install base of 4,000 physical servers. Thirty percent of these servers proved to be "using energy while doing nothing."