The reduction of energy consumption in data centres has been a focus area for almost everyone concerned with engineering and technology for nearly a decade.
Emphasis on improving cooling systems, switched-mode power supply efficiency and virtualization, amongst other measures, has yielded impressive energy benefits. I’m not saying everything that can be achieved has been, but there is little doubt that solid progress in these areas has been made globally.
The same cannot be said for fixed-platform software, and there are several reasons for this. Back in 2013, when we were conducting research for the Singapore Green Data Centre Technology Roadmap, we wanted to look at every aspect of engineering and technology that consumes energy. What we found was baffling at first. You might think that the hypervisor and application developer community would be pretty hot on energy reduction techniques, since software for mobile platforms is highly tuned to balance energy against performance: retaining battery charge extends platform usage time, so it is a major consideration in application development, along with low-energy processors.
In a data centre, however, battery power is not a concern, so energy-aware language structures, low-energy coding techniques and the use of hardware sleep states have largely been overlooked by software developers. That seemed pretty odd to us, but the research supported the assertion that fixed-platform developers generally aren’t cognizant of the energy their software consumes.
You might well ask: what is the significance of software energy inefficiency? The answer is that it is as important as, if not more important than, the inefficient cooling systems that afflicted data centres a decade ago.
A key factor causing the problem is that energy proportional computing is not yet a reality. To understand why, we need to look at typical enterprise active components, particularly processors, and their idle energy consumption. At zero percent utilization a typical processor averages 50 percent of its maximum energy consumption. So the question then is: how much time do processors spend idling? Many people have asked this, but the fact is we don’t really know, because utilization varies so much amongst users. What we do know is that processors generally spend the majority of their time idling. So even if we conservatively assume an idle period of just 50 percent, these devices are burning 50 percent of their maximum energy whilst doing nothing at least half of the time.
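The arithmetic above can be made explicit. This is a minimal sketch using the article’s illustrative figures (50 percent idle power, 50 percent idle time), not measurements of any particular processor:

```python
def wasted_energy_fraction(idle_power_ratio: float, idle_time_ratio: float) -> float:
    """Fraction of a device's peak-rated energy burned while doing no useful work.

    idle_power_ratio: power at 0% utilization as a fraction of maximum power.
    idle_time_ratio: fraction of time the device spends idle.
    """
    return idle_power_ratio * idle_time_ratio

# The article's conservative assumption: 50% idle power, 50% idle time.
print(wasted_energy_fraction(0.5, 0.5))  # 0.25 -> a quarter of peak energy wasted
```

In other words, under these assumptions a quarter of the processor’s peak-rated energy buys no computation at all, and the figure grows if the idle fraction is higher, as it generally is.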
It gets worse! Now consider the upstream impact of being idle 50 percent of the time. We still have to power the IT devices, so energy is wasted in each section of the power chain, from the distribution transformer all the way through to the final DC-to-DC converters. Furthermore, the heat from this idle energy has to be removed, so there’s even more energy wasted. In the roadmap we termed this the Energy Cascade Effect.
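The cascade can be sketched numerically: every watt delivered to an idle server pulls extra watts through each lossy stage of the power chain, and the cooling plant then spends more energy removing the resulting heat. The stage efficiencies and cooling overhead below are illustrative assumptions, not figures from the roadmap:

```python
def facility_watts_per_it_watt(stage_efficiencies, cooling_overhead):
    """Watts drawn at the utility meter for each watt delivered to IT load.

    stage_efficiencies: efficiency (0..1) of each power-chain stage, from the
        distribution transformer through to the final DC-to-DC converters.
    cooling_overhead: cooling energy as a fraction of delivered IT power.
    """
    power_chain = 1.0
    for eff in stage_efficiencies:
        power_chain /= eff  # each stage inflates the upstream draw
    return power_chain * (1.0 + cooling_overhead)

# Illustrative chain: transformer 98%, UPS 94%, PDU 98%, PSU 92%, DC-DC 90%,
# with cooling consuming an extra 40% on top of the power delivered.
print(round(facility_watts_per_it_watt([0.98, 0.94, 0.98, 0.92, 0.90], 0.40), 2))
```

Under these assumed numbers, each watt of idle IT load costs nearly two watts at the meter, which is why idle energy is so much more expensive than it first appears.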
The need for action by the software community to deal with this issue is obvious. We thought initially that not much could be done about it. Surely you can’t just go around switching IT hardware off, can you? Well, it depends. You can’t if the software is required to react instantly to a time-critical event, but there’s more to it than that, because business processes, i.e. interconnected applications, fall into three categories:
The drawback to using sleep states is a latency penalty: additional time is required to wake in response to an event. The penalty depends on the level of sleep used; the deeper the sleep, the less energy is used, but the longer it takes to wake.
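There is also a break-even effect implicit in this trade-off: entering and leaving a sleep state costs a fixed amount of transition energy, so sleeping only pays off when the idle interval is long enough to amortize that cost. A minimal sketch, with illustrative numbers rather than vendor specifications:

```python
def sleep_saves_energy(idle_s, idle_power_w, sleep_power_w, transition_energy_j):
    """True if entering a sleep state beats staying idle for a gap of idle_s seconds.

    idle_power_w: power drawn while idling awake.
    sleep_power_w: power drawn in the sleep state.
    transition_energy_j: fixed energy cost of entering and waking from the state.
    """
    energy_idling = idle_power_w * idle_s
    energy_sleeping = sleep_power_w * idle_s + transition_energy_j
    return energy_sleeping < energy_idling

# A 50 W idle device, a 5 W deep-sleep state, a 90 J transition cost:
print(sleep_saves_energy(1.0, 50.0, 5.0, 90.0))   # short gap: False, stay awake
print(sleep_saves_energy(10.0, 50.0, 5.0, 90.0))  # long gap: True, sleep
```

The same structure is why deeper states suit longer gaps: their lower sleep power widens the saving, but their larger transition cost pushes the break-even interval out.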
The argument for routinely using sleep states stands up well in Categories 2 and 3. We don’t need an instantaneous response to many organizational processes, which means pretty much anything in Category 2 that is time independent and has an adequate interval between events could benefit from sleep states. One of the complexities, and a key consideration in implementing sleep states, is the nature of IT workloads. There will be instances where the technique cannot be justified, but there are without doubt many opportunities where it can be employed.
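One way to picture the implementation decision is as a governor-style policy: given the expected gap until the next event and the latency the process can tolerate, pick the deepest state that still wakes in time. The state table below is a hypothetical illustration, not real hardware data:

```python
# (name, power in watts, wake latency in seconds), ordered shallowest to deepest.
SLEEP_STATES = [
    ("shallow", 25.0, 0.00001),
    ("medium",  10.0, 0.001),
    ("deep",     2.0, 0.05),
]

def pick_state(expected_gap_s, latency_budget_s):
    """Deepest state whose wake latency fits both the expected gap and the budget.

    Returns None when no state fits, i.e. the device should stay awake --
    the Category of workload that demands an instantaneous response.
    """
    chosen = None
    for name, power_w, wake_latency_s in SLEEP_STATES:
        if wake_latency_s <= latency_budget_s and wake_latency_s < expected_gap_s:
            chosen = name  # deeper states overwrite shallower ones
    return chosen

print(pick_state(10.0, 0.1))       # long gap, relaxed budget -> "deep"
print(pick_state(10.0, 0.0005))    # tight latency budget     -> "shallow"
print(pick_state(0.000001, 1.0))   # events too frequent      -> None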
It simply isn’t good enough to go on ignoring the issue: the energy saving opportunity is immense, and the financial savings are too big to ignore. This was borne out by the Net Present Value calculations we did for the Singapore Green Data Centre Technology Roadmap.
In a situation like this, where most if not all organizations stand to benefit significantly in financial terms, without added risk, while significantly lowering carbon emissions, it is surely only a matter of time before the issue is addressed.
Sleep states are one important way of conserving energy. Others include energy-cognizant coding of applications, application services and operating systems, as well as rate adaptation, energy-aware mapping of virtual machines and energy-optimised compilers.
There is much work to do, so now over to you, hypervisor and application developers. Save the planet!