Does this actually make sense?

Worldwide, the digital warehouses use about 30 billion watts of electricity, roughly equivalent to the output of 30 nuclear power plants, according to estimates industry experts compiled for The Times. Data centers in the United States account for one-quarter to one-third of that load, the estimates show.

And:

A few companies say they are using extensively re-engineered software and cooling systems to decrease wasted power. Among them are Facebook and Google, which also have redesigned their hardware. Still, according to recent disclosures, Google’s data centers consume nearly 300 million watts and Facebook’s about 60 million watts.

Now I get very confused by sciencey things. But a watt is a rate, isn’t it?

So a watt isn’t what you use, it’s the rate at which you use it? What you use is some number of watt-hours, isn’t it?

Help me out here, I’m confused. Have the NYT got their units wrong or not?

OK, thanks for the explanations. It does make sense now. They say that the US uses 1/3 or so of this 30 billion watts, then later they say that total usage in the US is something like 76 billion kWh over a year.

Which, working back, indicates that they’re, on average, working at about 70% of that 30 GW. Which seems to make some sort of sense to me, at least.
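
For anyone who wants to check the arithmetic themselves, here’s a quick back-of-the-envelope sketch in Python. The 76 billion kWh figure and the one-quarter-to-one-third share are the NYT’s; the rest is just unit conversion, so treat it as a sanity check rather than anything definitive.

    # Sanity check of the NYT figures: turn annual energy use into an average power draw.
    HOURS_PER_YEAR = 365 * 24  # 8,760

    us_annual_energy_kwh = 76e9  # NYT: ~76 billion kWh used by US data centres over a year
    us_average_power_gw = us_annual_energy_kwh / HOURS_PER_YEAR / 1e6  # kW -> GW

    world_load_gw = 30  # NYT: "about 30 billion watts" worldwide
    us_share_gw = (world_load_gw / 4, world_load_gw / 3)  # "one-quarter to one-third of that load"

    print(f"Average US draw implied by 76bn kWh/yr: {us_average_power_gw:.1f} GW")
    print(f"US share of the 30 GW figure: {us_share_gw[0]:.1f} to {us_share_gw[1]:.1f} GW")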

14 comments on “Does this actually make sense?”

  1. Think we’ve been there before on this one. Someone’s confusing supply with usage. For instance: your house may have a 10 kW supply. 40 amps odd. So that’s what you could pull before you blew the main fuse or burnt the supply cable. Most of the time, you’re pulling a fraction of that. The headroom is there because a lot of equipment imposes a high load to start.
    If someone wanted to take a data center up from nothing, all that stuff coming on line at once would pull a lot of juice. Hence power management. The higher the supply rating, the quicker they can get up. Once they’re up & running they’d be down to a fraction.

  2. > But a watt is a rate, isn’t it?

    Yes. Which is why it’s the measure of interest. The data centers (assuming the article is correct) use that many watts. I suppose you could multiply it up by seconds-in-a-day, and say it uses that many joules-per-day, errm, but that would still be a rate. No?

  3. If you’re not specifying a time period, watts are the correct units. Power is an instantaneous measurement, energy (joules or watt-hours) is over a certain time period. The NYT have the right units.

  4. The NYT have their units right (but I don’t know about the magnitude). If the data centers consume 30 billion watts, or 30 gigawatts, then that’s a measure of their (average) rate of energy use. So in one hour, they consume 30 GWh. That is, they use up 30 GWh/h = 30 GW continuous usage. Or, as WMC says, 30 GW = 30 GJ/s.

    This, dear Tim, is extremely akin to the difference between income and wealth, the taxation of which confuses other commentators… (watts being a measure of income = a flow; and watt-hours or joules a measure of wealth = a stock).

  5. Agree, jorb. But introducing kWh/MWh into it is where the NYT is cocking up. They’ve taken the supply potential & said that’s running 24/7. Even those illustrative 30 nuclear power stations wouldn’t be producing 30 nukes’ worth 24/7. What they should have done is ring Google & ask them for their consumption figures.
    If I remember rightly from the last time this nonsense surfaced, the entire IT industry – that’s from making the hardware right down to running your desktop – uses less than 2% of world electricity supply.

    The watt is the derived SI unit of power and so measures the rate of energy conversion or transfer in joules per second.

    To say that datacentres use about 30 billion watts (30 GW) implies a usage of 30 GJ every second, equivalent to about 108 TJ per hour.

    US electricity consumption is something of the order of 4,000 TWh per annum, so I suspect that what the NYT might actually mean is that datacentre consumption is around 30 GWh and not 30 GW.

    Where the article is also wrong is in suggesting that 30 GW is the equivalent of the output of 30 nuclear power plants – it isn’t.

    A typical nuclear reactor has a peak output of 1 GW, so 30 reactors is correct, but as most plants have more than one reactor – the largest in the world, in Japan, has seven – it wouldn’t be correct to say it’s the equivalent of 30 power plants.

  7. The premise of the article is bollocks, either way. Fundamentally, not doing processing uses a hell of a lot less power than doing processing. Hardware is cheap – incredibly cheap compared to the software normally run on it. Of course there’s lots of under-used hardware, because it’s practically free in this context. When it’s not doing much, though, it uses next to no power.

    What does use a lot of power is, not surprisingly, doing lots of calculations. That’s where Google and the like are using all their power. Of course, since power consumption is a major expense for them, they’re looking at investments they can make in hardware and software development which will save power, but that’s completely different to merely cutting out waste.

    The sole extent to which the article approaches truth is that many IT departments are still very badly run by non-professionals who lucked into the post. They waste resources in every way, though.

  8. A data centre will generally pull the majority of its rated power for two reasons. Firstly, even though the average load on servers is, say, 20% of the server’s computing capacity, the power draw is still well above this because not all the load is down to the CPU.

    Secondly, even snazzy DCs like Google/Facebook/Yahoo utilise natural circulation for cooling to reduce non-computing power draw from HVAC… which has the unfortunate consequence that the majority of the DC’s computers must be producing heat or such circulation stops and all the servers fry. I know of one DC where they actually brought in industrial space heaters…

  9. One of the reasons data centers will be using less power is they’re continually upgrading the processors. Wasn’t there something in the Reg recently about yet another reduction in chip feature size? Smaller=faster=less power consumption=cooler. That’s the way it goes. The reason a laptop on a battery can outperform a warehouse full of Spectrums using 10 kW.

  10. “even though the average load on servers is, say, 20% of the server’s computing capacity, the power draw is still well above this because not all the load is down to the CPU.”

    Nope, power draw is pretty much proportional to the amount of computation done. Sure, a lot of it goes to cooling and so on, but that’s needed in proportion to the amount of work done by the processors.

  11. While CPU power draw is proportional to the amount of calculation it’s doing, that’s not all that’s in servers – I’d guess the HDDs in use draw a fair bit of juice, and that load is probably fairly static regardless of the number of data requests being processed…

  12. I’d agree that data centre power use is roughly baseload plus a factor of current computation. There’s also a diurnal factor in that computation (and therefore power use), since most data centres running people-facing services – Facebook, Apple Maps (haha!), Google, Amazon, Microsoft Bing etc. – serve people in their own geographic location to avoid the expense and delay of sending data on a round trip across an ocean.
    A measure of how effectively a data centre uses its input power is PUE (http://en.wikipedia.org/wiki/Power_usage_effectiveness), although its measurement isn’t really standardised. Suffice it to say that given X input power, no more than about 90% of X, and usually much less, is actually used to drive CPU, RAM and disk, and the laws of physics say you can’t do much better than that.
    The main way data centres can practically improve efficiency is to reduce unnecessary computation – put servers to sleep when they’re not needed and spin them back up in anticipation of predicted load. Various sources indicate that this turns out to be quite hard, but we should definitely let the NYT journalists have a crack at it and see what they come up with.
    I suspect, however, that the energy used by data centres in serving your average journo is dwarfed by the power their iMac uses playing their favourite music on iTunes; a 27-inch iMac eats 200-300 W in normal use.
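
A couple of sketches to go with the thread, for anyone who would rather run the numbers than argue about them. First, the unit conversions batted around in comments 4 and 6, taking the NYT’s 30 GW at face value; this is plain arithmetic in Python, nothing more:

    # A watt is a rate: 1 W = 1 J/s. Multiplying by a length of time gives an energy.
    power_w = 30e9  # "about 30 billion watts"

    per_second_gj = power_w / 1e9         # 30 GJ delivered every second
    per_hour_tj = power_w * 3600 / 1e12   # 108 TJ every hour
    per_hour_gwh = power_w * 1 / 1e9      # one hour at 30 GW uses 30 GWh
    per_year_twh = power_w * 8760 / 1e12  # flat out for a year: ~263 TWh

    print(f"{per_second_gj:.0f} GJ/s = {per_hour_tj:.0f} TJ/h = {per_hour_gwh:.0f} GWh per hour")
    print(f"A year at 30 GW: {per_year_twh:.0f} TWh (US total consumption is ~4,000 TWh/yr)")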
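
Second, a toy version of the “baseload plus computation” model discussed in comments 8 to 12. Every number in it (idle draw, peak draw, PUE) is an illustrative guess rather than a measurement; the point is the shape of the curve, not the values:

    # Toy server-power model: a fixed idle floor plus a component that scales with utilisation,
    # then multiplied by PUE (total facility power / IT power) to cover cooling, distribution, etc.
    # All parameters are illustrative guesses.
    def facility_power_kw(utilisation, n_servers=1000, idle_w=100.0, peak_w=300.0, pue=1.5):
        """Rough facility draw in kW at a given average CPU utilisation (0..1)."""
        per_server_w = idle_w + (peak_w - idle_w) * utilisation
        return n_servers * per_server_w * pue / 1000.0

    for u in (0.0, 0.2, 0.5, 1.0):
        print(f"utilisation {u:.0%}: ~{facility_power_kw(u):.0f} kW")
    # At 20% utilisation this still draws roughly half of the full-load figure, which is
    # comment 8's point; the "proportional" claim in comments 10 and 11 only applies to the
    # part above the idle floor.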
