Last weekend I went to Street Smart San Diego where, among many interesting booths, they offered test rides of various hybrid electric bicycles. I really liked the Eneloop from Sanyo, which is coming to the U.S. this fall. It’s not just an electric-assisted bicycle (in the spirit of "mild hybrid" automobiles) but a hybrid integrated drive (in the spirit of Toyota’s hybrid synergy drive). You don’t have to think about controlling the electric motor. The way you ask for power is to pedal, and the bike matches your effort 2-to-1 at low speeds and 1-to-1 at high speeds. Coast on a slight downhill and it reclaims some energy to recharge the battery. Brake and it reclaims more.
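To make that behavior concrete, here is a toy sketch of a proportional pedal-assist controller. The speed thresholds and the linear blend between the two ratios are my own guesses for illustration, not Sanyo's actual control law:

```python
def assist_power(rider_watts, speed_kmh, low_speed=10.0, high_speed=24.0):
    """Toy model of a proportional pedal-assist controller.

    Below low_speed the motor matches rider effort 2-to-1; above
    high_speed it matches 1-to-1; in between the ratio blends
    linearly. All thresholds are illustrative guesses, not
    Sanyo's actual parameters.
    """
    if speed_kmh <= low_speed:
        ratio = 2.0
    elif speed_kmh >= high_speed:
        ratio = 1.0
    else:
        frac = (speed_kmh - low_speed) / (high_speed - low_speed)
        ratio = 2.0 - frac  # linear blend from 2.0 down to 1.0
    return rider_watts * ratio

print(assist_power(100, 5))   # low speed: 200 W of assist
print(assist_power(100, 30))  # high speed: 100 W of assist
```

The point of the proportional design is exactly what the post describes: the rider's pedaling effort is the only control input.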
The last of the 2009 SPECtacular
awards. SPECweb2005 is the
industry standard performance metric for web servers, and today it is
joined by SPECweb2009, the
industry standard performance and energy metric for web servers. The
benchmark includes a banking workload (all SSL), a support workload
(no SSL), and an ecommerce workload (mixed). This is the first
application of the SPECpower
methodology to potentially large system-under-test
configurations. In the initial
benchmark results you can see one system with and one without
external storage, and the test report lets you see the power
consumption of just the server, of the storage, and of the entire
configuration at various utilization levels. The entire committee did
a fantastic job with this benchmark. As always, I won’t list anyone’s
name without permission. (But give me the okay and I’ll update this
posting!) SPEC recognizes:
Gary Frost (AMD), who
stepped in to fill a key developer role in an emergency with the
release clock ticking. He took over the control code after a sudden
reassignment, and frankly we handed him quite an undocumented mess.
Gary was up to the challenge and produced the finished code.
Another engineer from AMD
had primary responsibility for the reporting page generator. You
often can’t know exactly what information ought to go into a full
disclosure report (FDR) until you see it. Nor how you want it
organized and arranged. Nor what data integrity cross checks need be
present to avoid errors. So the committee changed requirements often
during development. But no matter how many requirements were placed
on him, he turned around with the needed code within a week!
An engineer from Fujitsu
Technology Solutions became the de facto quality assurance
officer because of his thorough and methodical testing practices. If
there are a hundred ways software in general can go wrong, then there
are a thousand ways benchmark software can go wrong, as by its nature
it runs on systems stressed to the limit. When SPEC benchmark
software just works, that is largely due to people like this engineer,
who foresee, test, and diagnose every possible failure unanticipated
by the authors.
And, if you’d like to see all of the
SPECtacular awards, then follow the SPECtacular awards link.
The SPECpower committee has been busy. They
released version 1.10 of
the SPECpower_ssj2008 benchmark as a no-cost upgrade to existing
licensees. It adds support for measurement of multi-node (blade)
servers, improves usability, and adds a graphical display of power
data during benchmark execution. Review and publication of benchmark
results continues apace, with a spirited competition for first place,
and with ever more power
analyzers accepted for testing, and more test labs qualified for
independent publication. They have also been assisting several other
benchmark committees inside SPEC, and other industry
standard benchmark organizations, to implement energy measurement for
their benchmarks. SPECpower is more than just a benchmark; it is a
methodology, modified and expanded as necessary over time
to accommodate energy measurements for all the different workloads
that are relevant to the real world in those market segments. In
alphabetical order SPEC recognizes:
An engineer (Sun Microsystems) – As release manager he coordinated and
integrated development activities to keep the deliverables on schedule.
– He created stand-alone and network integrated tools for
automated results checking to help ensure that results submissions
are correct and complete.
Greg Darnell (Dell)
– Author of the PTDaemon, he helped many other groups get started
measuring power for their benchmarks. He helps out with whatever
needs to be done, technical or organizational.
An engineer (Fujitsu Technology Solutions) – He automated the process
of determining power analyzer precision, handled the acceptance of
several new power analyzers, and was instrumental in getting
multi-channel power analyzers accepted as well.
– He was primary developer of the Visual Activity Monitor, giving
a unique view of the system’s activity.
– If I tried to recount all the accomplishments Jeremy was cited
for I’d probably run into some internal blog size limit. Suffice it
to say he is a primary developer on many parts of the code, who
never turns down a plea for help, and who is never satisfied until
the entire benchmark package is right.
– As primary author/editor of the Power and Performance
Methodology, he organized the document to capture deep technical
consensus in the committee, and made it readable and understandable
for people new to the field.
– He designed the control software to drive multiple JVMs,
enabling multi node (blade) testing.
An engineer (AMD)
– He created and maintained much of the web content explaining
the benchmark and methodology to the public.
Not NASDAQ – solar power. The ASES annual solar energy conference is in San Diego this week. The top question I get about my solar panels is how long the return on investment takes. I did calculate it before we installed them. At current electricity prices and the time value of money they will just break even over their useful life. (And we live near the ocean, where morning fog obscures the panels on many summer mornings.) Still, if we had another price shock equivalent to the 1973 oil embargo, they would pay back about twice the initial investment, in current dollars. And if we had another price shock equivalent to Kenny-Boy Lay’s market manipulation, they would pay back about five times the initial investment. Economically, call the panels zero-cost insurance.
Now what’s the ROI on an SUV? Our solar panels cost about a quarter to a half the price of a big SUV. Will that Escalade have a productive life of 20 years? And over that time how many dollars will it return to your pocket? Or will it perhaps take more money out of your pocket? For the price of the SUV you could instead buy solar panels, zero out your electric bill, buy a Chevy Malibu (which sits in clogged traffic just as well as the SUV), and have enough money left over to pay for more than 200,000 miles’ worth of gas for it, at $4/gallon.
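You can check that arithmetic with a back-of-envelope calculation. Every dollar figure below is an illustrative assumption of mine (rounded prices, 30 mpg), not a number from the post:

```python
# Back-of-envelope version of the SUV-vs-solar comparison above.
# Every dollar figure here is an illustrative assumption, not a quote.
suv_price = 65_000      # assumed price of a big luxury SUV
sedan_price = 22_000    # assumed price of a Chevy Malibu-class sedan
panel_price = 16_000    # assumed rooftop solar install (~1/4 of the SUV)

left_over = suv_price - sedan_price - panel_price
gas_price = 4.0         # dollars per gallon, as in the text
mpg = 30                # assumed sedan fuel economy

miles_of_gas = left_over / gas_price * mpg
print(f"${left_over:,} left over buys {miles_of_gas:,.0f} miles of gas")
```

With these particular assumptions the leftover cash covers roughly 200,000 miles of fuel, consistent with the claim above; different sticker prices move the number, but not the shape of the argument.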
So why aren’t there more solar panels in sunny Southern California? Why is Germany, in the cloudy wintery north, so far ahead of the U.S.? Two reasons: (1) money, and (2) money.
(1) Lots of people don’t have the luxury of deciding whether to spend discretionary money on a new SUV or on solar panels; they’re deciding whether to pay the mortgage, pay the electric bill, or fill up the gas tank. Ditto businesses hard pressed to show a profitable bottom line. Increasingly, solar energy entrepreneurs are in effect buying energy "drilling rights" on rooftops. LA’s electric utility Edison is building the equivalent of a new generating plant by putting panels on the roofs of commercial and industrial buildings. The building owners pay nothing, and get a good long-term, locked-in electricity rate. Here in San Diego, Hewlett-Packard is converting its campus to solar power. HP stockholders will pay nothing for it, and HP will get substantial energy cost savings in the future. While they’re at it, HP is matching the rebate for employees who want to put solar panels on their homes.
(2) The recent earth-shaking discovery that people are more willing to give goods and services in exchange for money than to give them with nothing in return. (See capitalism.) The biggest barrier to local development of solar energy in San Diego has been a convoluted rate structure that in many cases actually made businesses that installed solar generators pay more money to use less electricity than before they installed them. Small wonder that northern California is far ahead of sunnier southern California in solar power installations. Now that crazy rate structure is changing, which could bring a boom in locally generated solar power.
For homeowners in San Diego no change is forthcoming. Germany has all those solar installations because of a rate structure that pays for solar electricity at much higher than market rates. In San Diego you see many solar panel installations like ours covering a small portion of the roof. The rate structure here is fair up to the point that you replace your total annual electricity usage with solar power. Produce more than you use, however, and all the excess is just "donated" to the utility without compensation. So you’re okay if your solar installation is a bit smaller than you need, but it’s economic madness to make it any larger than you need. If not for this rate structure, our solar panel installation could have produced enough electricity for one or two of our neighbors in addition to our own needs.
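The sizing incentive described above falls out of a one-line calculation. The retail rate below is an assumed figure for illustration, not the actual San Diego tariff:

```python
def annual_credit(production_kwh, usage_kwh, retail_rate=0.25):
    """Annual dollar value of rooftop solar under the rate structure
    described above: production offsets usage at the retail rate, but
    anything produced beyond annual usage is donated uncompensated.
    The 25-cent retail rate is an assumed figure, not the real tariff."""
    return min(production_kwh, usage_kwh) * retail_rate

# A system sized at 100% of annual usage earns full value...
print(annual_credit(6000, 6000))   # 1500.0
# ...but a system twice that size earns not a penny more:
print(annual_credit(12000, 6000))  # 1500.0
```

That `min()` is the whole story: every kilowatt-hour beyond your own usage is worth exactly zero to you, so nobody rationally oversizes.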
More SPECtacular awards given at SPEC’s 2008 annual meeting in San Francisco, to members of the power committee who produced the SPECpower_ssj2008 benchmark. This wasn’t an easy benchmark to do, taking us into areas of engineering not so familiar to performance analysts. Along the way we picked up some new contributors, and some of us picked up some new knowledge and skills. Energy efficiency is increasingly important, and eventually I expect to see power measurements as part of every performance benchmark. But for now, SPECpower_ssj2008 is a great start that establishes a fair and practical methodology for consistent measurement.
As before I won’t cite names without permission, but will add them later if given the okay. SPEC thanks:
- Paul Muehr from AMD
- Greg Darnell, and another engineer from Dell
- Karin Wulf, and another engineer from Fujitsu-Siemens
- Klaus Lange, and another engineer from HP
- Jeremy Arnold, Alan Adamson, and another engineer from IBM
- Anil Kumar, and two other engineers from Intel
- an engineer from Sun
- Michael Armbrust from UC Berkeley RAD Lab
If 30% recycling is the good news then what’s the bad news? I coincidentally browsed two magazines the other day. Time had a short article on eWaste – old computers, monitors, and other electronic gear – and how it was recycled. Besides hazardous materials, it also contains many valuable reusable materials. The problem is that only 30% of eWaste is recycled and the other 70% piles up in landfills.
Then I read a long article in National Geographic that cast a dark shadow on the 30% of recycled eWaste. It showed Monitex, a Grand Prairie, Texas recycler that breaks down and safely recycles all the components. But more often recycling is outsourced. A foreign broker bids on the eWaste. It showed a lagoon in Ghana choked with monitors – the end point for so-called recycling. A boy carries cables to the fire fields where in thick clouds of dioxin and heavy metal laden fumes, the insulation is burned off so the copper can be sold. A man in India melts printed circuit boards to recover the lead – in the same pots where later the family meal will be cooked.
Perhaps it’s too self-righteous for us in the west to say that rather than have a job we consider bad, poor third-world people ought to have no job at all. I don’t know how to judge what work is acceptable and unacceptable, and I suspect I ought not to be the one to judge. But the point at which child laborers work in toxic conditions that drastically shorten their lives is definitely too much for me.
So I wouldn’t say I’m glad for the 70% of eWaste that is dumped instead of recycled, that at least it isn’t killing children in Ghana. But I might favor laws in the west against outsourcing recycling to countries that lack basic environmental and labor safeguards. And more than anything, I’m proud of Sun’s efforts to keep the hazardous materials out of the computer gear in the first place. Cheers for the EU’s "green design" directive! I hope all manufacturers apply those standards worldwide. That’s the only truly humane long term solution to the problem.
Electrical, not political. A DOE study found that – duh – if you give
consumers information about time varying cost of electricity they will
save money by shifting some power usage from peak to off-peak times.
Consumers in the study lowered their electric bills by 10% and lowered
their peak demand by 15%. This is a big deal because although the
operating cost component of electricity (fuel) depends on the total
energy consumed, the capital cost component (generating plants) depends
on the peak power generation.
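A toy version of the time-of-use arithmetic shows how shifting a slice of peak load trims the bill. The two rates here are made up for illustration, not the tariff from the DOE study:

```python
def bill(peak_kwh, offpeak_kwh, peak_rate=0.40, offpeak_rate=0.10):
    """Two-tier time-of-use bill. The rates are illustrative only,
    not the tariff from the DOE study."""
    return peak_kwh * peak_rate + offpeak_kwh * offpeak_rate

before = bill(300, 300)                      # flat habits
shifted = 300 * 0.15                         # move 15% of peak load off-peak
after = bill(300 - shifted, 300 + shifted)   # same total energy, lower bill
print(before, after)
```

Notice that the total energy consumed is unchanged; only its timing moves. That is exactly why the utility wins too: fuel cost is roughly the same, but the peak the generating plants must be built for is 15% lower.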
Solar power is particularly valuable to a utility because its peak
production occurs in the middle of the day when summer demand from air
conditioning is highest. But there’s another peak around 5-6 p.m., when people
come home from work and turn on appliances, and by then solar power
production has fallen off. Thus adding photovoltaic power alone may not
drastically reduce peak requirements for fossil fuel power plants.
Wind power along the California coast has an almost complementary
generation curve to that of solar power, because of the onshore and
offshore breezes in the mornings and evenings. Adding wind power alone
may not drastically reduce peak fossil demand because the wind often
dies down mid-day when the air conditioning load is highest.
But adding solar and wind power together could greatly reduce peak
fossil demand, though perhaps not economically eliminate it entirely.
Then if you added time of day metering to allow consumers to voluntarily
shift their load, that would level even more peaks. Ditto various energy
storage systems like the plan to use night time wind power to pump water
back up a hydroelectric dam for use the next day, super capacitors, and
plug-in hybrid cars. The key to effective and economical use of
renewable energy is a balance of power supply with demand.
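Here is a toy model of those complementary curves. The shapes are invented for illustration, not measured data, but they show why the combined output is flatter (lower peak-to-mean ratio) than either source alone:

```python
# Toy hourly generation curves (arbitrary units) illustrating the
# complementarity described above: solar peaks mid-day, coastal wind
# peaks in the morning and evening. Shapes are invented, not data.
import math

def bump(hour, center, width=3.0):
    """Smooth bump centered on a given hour."""
    return math.exp(-((hour - center) / width) ** 2)

hours = range(24)
solar = [max(0.0, math.sin(math.pi * (h - 6) / 12)) for h in hours]  # 6am-6pm
wind = [max(bump(h, 7), bump(h, 18)) for h in hours]  # morning/evening breezes
combined = [s + w for s, w in zip(solar, wind)]

def peak_to_mean(curve):
    """Lower ratio means flatter, easier-to-serve output."""
    return max(curve) / (sum(curve) / len(curve))

for name, curve in [("solar", solar), ("wind", wind), ("combined", combined)]:
    print(f"{name:8s} peak/mean = {peak_to_mean(curve):.2f}")
```

In this sketch the combined curve's peak-to-mean ratio comes out well below either source's alone, which is the whole argument for pairing them: a flatter renewable supply leaves a smaller residual peak for fossil plants to cover.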
The computer industry tries to do the same thing with servers. Demand
for computing services typically follows daily, weekly, and monthly
cycles. When the data center is provisioned for the highest possible
demand, there is a lot of wasteful excess capacity. Even with the most
efficient hardware and the best power management software, running
servers at low utilization is extremely wasteful compared to moderate
utilization. So we try to balance computing supply with demand by
virtualization and workload consolidation, especially if we can find
workloads that are complementary (like wind and solar) in their resource
requirements and/or their load versus time of day.
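The consolidation test can be sketched in a few lines. The workload profiles and the capacity figure below are made-up examples, not real data center measurements:

```python
# Sketch of the consolidation idea above: two workloads can share one
# server if their hourly load profiles never jointly exceed capacity.
# The profiles and capacity figure are made-up examples.

daytime_app = [80 if 8 <= h < 18 else 10 for h in range(24)]    # interactive
nightly_batch = [10 if 8 <= h < 18 else 70 for h in range(24)]  # batch jobs

def can_consolidate(a, b, capacity=100):
    """True if workloads a and b fit together on a server of the
    given capacity at every hour of the day."""
    return all(x + y <= capacity for x, y in zip(a, b))

print(can_consolidate(daytime_app, nightly_batch))  # complementary profiles
print(can_consolidate(daytime_app, daytime_app))    # peaks collide
```

The interactive and batch workloads fit together precisely because, like wind and solar, their peaks fall at different hours.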
As network capacities increase and software becomes more sophisticated, you can imagine systems configuring computing resources worldwide to
maximize computing power to the customer at minimum electric cost. Think
of a customer connected from California in the middle of a hot day with
time-of-day electric meters set to the highest price. Of course he might
be routed to servers in Europe or India where the computing demand is
off peak. He might also be routed to servers in Colorado where the
computing demand might still be high, but the electricity demand and
price might be lower. Or to Oregon, where a heavy rainfall and cold wave
might mean cheap renewable hydro-power even at peak electric demand,
and lower than usual data center cooling costs thanks to mixing filtered outside air.
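That routing decision can be sketched as a simple cost-with-constraints lookup. Every region, price, and capacity figure here is invented for illustration:

```python
# Sketch of the global-scheduling idea above: send work to the
# cheapest region that still has spare capacity. All regions, prices,
# and capacity figures are invented for illustration.

regions = {
    "california": {"price_kwh": 0.45, "spare": 0.10},  # peak-priced, busy
    "europe":     {"price_kwh": 0.20, "spare": 0.60},  # off-peak overnight
    "colorado":   {"price_kwh": 0.12, "spare": 0.30},  # cheap off-peak power
    "oregon":     {"price_kwh": 0.06, "spare": 0.05},  # cheap, nearly full
}

def route(regions, min_spare=0.20):
    """Cheapest region with at least min_spare fraction of capacity free."""
    eligible = {name: r for name, r in regions.items()
                if r["spare"] >= min_spare}
    return min(eligible, key=lambda name: eligible[name]["price_kwh"])

print(route(regions))  # oregon is cheapest but too close to full
```

A real scheduler would also weigh network latency and data locality, but the core trade-off is this one: electricity price versus available headroom, sampled across time zones.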