Saturday, July 16, 2011

Green Data Center Showcases Techniques to Reduce Computer Energy Use

Orange Lead the Way

The Syracuse University Green Data Center uses novel techniques such as trigeneration with microturbines and absorption chillers to reduce energy use, creating a model its designers hope to replicate with other data centers as computer energy consumption soars.

Cooling towers on the roof give a hint of the operations that take place within the nondescript data center building.

On an overcast February day with snow on the ground and slush on the roads, I turn left and make my way through the South Campus at Syracuse University in upstate New York, about a mile from the main campus. I could’ve turned right for a tour of the main campus and a peek inside the famous Carrier Dome, where the Syracuse Orangemen play football and basketball, but that would have to wait until later. I come to a nondescript, gray, nearly windowless building, and I know I’m at the right place because I see cooling towers on the roof.

This is the new Green Data Center (GDC) at Syracuse, completed in December 2009 and used by the university as its primary computing facility. Buildings like this are designed to blend in with their surroundings and sited in innocuous places, but the plain exterior belies the work that goes on inside and the unique engineering project behind this groundbreaking building.

Mark Weldon, executive director of corporate relations at Syracuse, greets me at the door and escorts me inside. “This is the greenest data center in the world,” he proclaims. He explains that the university’s previous data center was housed in a 100-year-old building that had become too outdated to keep using.

In explaining how the project came about, Weldon says the university partnered with IBM. “We wanted to start something big.” IBM responded by challenging them to design and build a data center that would cut energy use in half, giving them two years to do it. “With that timeframe, we couldn’t invent anything new. We put existing technology together in a unique way.” Kevin Noble, manager of engineering for campus design, planning, and construction at Syracuse University, joins us and comments, “This project has been one of the most interesting and complex ones I’ve ever done.”

As the fruit of this effort, the $12.4 million, 12,000-square-foot facility contains specially configured infrastructure space for a power plant, including the mechanical and electrical equipment that runs the building, plus 6,000 square feet of raised-floor data center space for computers and servers.

Data centers such as this have taken on added importance with our society’s ever-growing computer use. Roger Schmidt, chief engineer for data center energy efficiency in the Server Group at IBM, states, “Storage has increased by about 69 times over the last decade, and servers have increased by about 10 times. It’s a huge explosion of IT equipment in data centers, and that contributes to a big power increase.” Compared to a typical commercial building, data centers consume 30 times the energy per square foot on average.

The GDC came about through a collaboration between Syracuse, IBM, and the New York State Energy Research and Development Authority (NYSERDA). Schmidt says IBM had worked with Syracuse for many years, holding meetings with the provost, the engineering school, and the data center operators. At first the talks were just about enhancing the old data center with better equipment and best practices. When building a new one entered the picture, IBM donated $5 million in design services and computer equipment, and Syracuse received $2 million from NYSERDA.

Noble and his staff of five engineers guided the project, picking the design team and contractors and helping evaluate different options. One staff engineer, Jim Blum, served as project manager, and another one, Alex Medvedev, a mechanical engineer, served as the commissioning agent.

Fast-Track Design-Build Effort
The project consisted of two parallel design-build efforts that eventually merged. BHP Energy and GEM, Inc. handled design and construction of the power plant portion of the project, which included the trigeneration system and the incoming electrical distribution. GEM, headquartered in Toledo, Ohio, is a large mechanical-electrical construction firm, and BHP Energy is a design firm owned by GEM, headquartered in Hudson, Ohio, with offices in Toledo and Saratoga Springs, New York. The data center building itself and its architectural design fell to VIP Structures of Syracuse, which retained an MEP (mechanical-electrical-plumbing) engineering firm, Towne Engineering of Utica, New York. Taking this approach, the team built the facility in 188 days to meet the deadline.

Dave Blair of BHP Energy explains the operation of the microturbines during a tour of the facility.

In reflecting on that, David Blair, president of BHP Energy and an electrical engineer, says, “It was probably the high point of my career. It was one of the most exciting projects I’ve ever been part of. I’m not a big fan of meetings, but the meetings at Syracuse were something I looked forward to. It was always an exciting experience because you had synergy when you bring a group of people together and you give them a goal of going beyond what’s been done before.”

Venturing into the power plant section of the building, Weldon takes me into a room containing the backbone of BHP’s integrated power system: 12 Capstone microturbines arranged in two rows of six for electric power generation. He explains that most data centers operate from the electrical grid and have diesel generators for backup power. “We can operate off the grid and use the grid as a backup.”

Gas-powered microturbines generate electrical power and heat for hot water and cooling.

A microturbine is a combustion turbine engine that has come into vogue over the last 10 years for stationary applications as a form of distributed generation. Fueled by natural gas, the 12 microturbines here can generate all the power needed, enabling the data center to operate completely off-grid.

Capstone manufactures microturbines at two facilities in the Los Angeles area, in Chatsworth and Van Nuys, Calif., and offers them in 30 kW, 65 kW, and 200 kW sizes. The company designs and manufactures the electronic equipment, including the generators and PLCs (programmable logic controllers) that control its machines. Its microturbines operate on a variety of fuels, including natural gas, biogas, flare gas, diesel, propane, and kerosene.

For this project, Capstone developed a new turbine product in six months, the Hybrid UPS (uninterruptible power supply) based on the C65, which produces 65 kilowatts of electricity. According to Steve Gillette, VP, business development at Capstone, “We can simply run the microturbines when the electric rates are high. It’s really a good match for a data center. We can now save money every day compared to the traditional UPS and backup diesel genset, which only adds value in the case of an infrequent outage.”

One component of Capstone’s microturbine design that makes it viable is an air bearing, which enables the turbine to spin at 96,000 rpm. The bearing has a foil shaped like an airplane wing; as the shaft starts to rotate, it draws ambient air in to form a thin film that pushes the foil out slightly, so the shaft floats on air, minimizing friction and eliminating the need for lubrication. (Other turbines, such as those in jet engines, use traditional oil-lubricated bearings because they have to support large mechanical loads.)

But even with this, Weldon points out what he considers the greatest source of energy savings in the data center. “When you get power from a utility, there are transmission losses.” Normally, high-voltage AC power from the grid has to be converted to low-voltage DC power for the computers. The GDC has its own DC sub-distribution system, with grid power routed through the electronics in the microturbines. “Generating our own DC power saves about 10 percent of our energy use.”
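To see where a figure like that can come from, here is a back-of-the-envelope sketch in Python. The stage efficiencies are assumed values for illustration, not measurements from the GDC; only the roughly 10 percent figure comes from Weldon.

```python
# Back-of-the-envelope sketch of conversion losses, with assumed (not
# measured) stage efficiencies; the ~10 percent figure is Weldon's.

def delivered_fraction(stage_efficiencies):
    """Fraction of input power that survives a chain of conversion stages."""
    fraction = 1.0
    for eff in stage_efficiencies:
        fraction *= eff
    return fraction

# Conventional path: grid AC -> UPS double conversion -> server power supply.
conventional = delivered_fraction([0.94, 0.95, 0.92])   # assumed efficiencies

# GDC-style path: DC taken from the microturbine electronics feeds a
# DC sub-distribution, skipping one conversion stage.
dc_distribution = delivered_fraction([0.96, 0.94])      # assumed efficiencies

print(f"Conventional path delivers {conventional:.1%} of input power")
print(f"DC distribution delivers   {dc_distribution:.1%} of input power")
print(f"Relative saving: {(1 - conventional / dc_distribution):.1%}")
```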

Multiple Outputs Boost Efficiency
As good as they sound, microturbines convert only about 30 percent of the fuel energy to electricity, which is why engineers like to capture the waste heat they generate for cogeneration to improve efficiency. In this case, the team went a step further and employed trigeneration, or combined cooling, heating, and power (CCHP). As a distributor of Capstone turbines, BHP Energy has developed its ReliaFlex Power System, and this project marked the first use of CCHP with uninterruptible power. As Gillette remarks, “We can get up to 80 percent total energy conversion efficiency compared to the electric utility grid that’s only 33 percent. You get two or three outputs from one fuel input.”
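As a rough illustration of that energy balance, the sketch below applies the quoted efficiencies to the twelve 65 kW units; the fuel input and recoverable heat are derived numbers, not figures reported for the GDC.

```python
# Rough trigeneration energy balance for the 12 C65 microturbines, using the
# efficiencies quoted in the article; fuel input and recoverable heat are
# derived values, not GDC measurements.

ELECTRICAL_EFFICIENCY = 0.30   # ~30% of fuel energy becomes electricity
TOTAL_CCHP_EFFICIENCY = 0.80   # up to 80% with heat recovery (per Gillette)

electrical_output_kw = 12 * 65                     # twelve 65 kW units
fuel_input_kw = electrical_output_kw / ELECTRICAL_EFFICIENCY
recoverable_heat_kw = fuel_input_kw * (TOTAL_CCHP_EFFICIENCY - ELECTRICAL_EFFICIENCY)

print(f"Fuel input:       {fuel_input_kw:,.0f} kW")
print(f"Electricity:      {electrical_output_kw:,.0f} kW")
print(f"Recoverable heat: {recoverable_heat_kw:,.0f} kW (for chillers and hot water)")
```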

Driven by waste heat from the microturbines, absorption chillers chill water to cool the servers in the data center.

The 585F exhaust stream from each microturbine is collected in a common duct and flows to two heat-recovery modules, one for hot water and the other serving the absorption chillers that make chilled water. These modules use conventional shell-and-tube heat exchangers.

I get to see this as we proceed into a room with the chillers and heat exchangers, where I am treated to a mechanical engineer’s dream of brightly color-coded pipes and pumps. Two chillers generate 300 tons of cooling, 100 for the data center and 200 for the building next door, a 100,000-square-foot research and office facility known simply as 621 Skytop (its address). The system generates enough cooling that the design could be used in warmer climates as well. Data centers need air conditioning most of the time to cool their computers and servers. The chillers can chill water to as low as 45F, but currently 67F water is used to cool both the servers in the data center and the space in the building next door.
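For readers more comfortable in kilowatts, a quick conversion of the quoted capacity (one refrigeration ton is 12,000 BTU/h, about 3.517 kW of cooling):

```python
# Quick unit check on the quoted chiller capacity.
KW_PER_TON = 3.517        # 1 refrigeration ton = 12,000 BTU/h ~ 3.517 kW

data_center_tons = 100    # cooling reserved for the data center
skytop_tons = 200         # cooling sent to the 621 Skytop building

total_kw = (data_center_tons + skytop_tons) * KW_PER_TON
print(f"Total chiller capacity: {total_kw:,.0f} kW ({data_center_tons + skytop_tons} tons)")
print(f"Data center share:      {data_center_tons * KW_PER_TON:,.0f} kW")
```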

Absorption refrigerators are a popular alternative to the standard four-stage (compressor, condenser, expansion valve, evaporator) vapor-compression variety where a source of waste heat is available to drive the cooling. The underlying technology is old, dating back to the 19th century. BHP Energy chose Thermax USA double-effect absorption chillers based on favorable experience with them on past projects.

Kevin Noble joins us again and explains just how you get cooling from heat in an absorption chiller. “It’s all magic,” he jokes. I would later pull my old thermodynamics textbook from the shelf to brush up on phase diagrams and refrigeration cycles so I could understand what he said. In essence, an absorber, a generator, and a heat exchanger replace the compressor found in a vapor-compression cycle. The chillers use water as the refrigerant, operating on the principle that water in a vacuum evaporates at low temperature. The vacuum is maintained by circulating a lithium bromide solution that absorbs the vapor given off by the evaporating water. Waste heat from the microturbine exhaust re-concentrates the solution by driving off the water vapor, which is then re-condensed, with the heat rejected to the cooling towers on the roof, before passing through the expansion valve and on to the evaporator. With no moving parts other than pumps, these chillers are reliable and quiet.
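As a minimal model of that cycle, the sketch below traces the stages in comments and estimates the cooling delivered from a given heat input. The coefficient of performance and the heat figure are assumptions for illustration; the article reports neither for the Thermax units.

```python
# Minimal sketch of the absorption cycle described above.
#
# evaporator: water refrigerant flashes in a vacuum, chilling the loop water
# absorber:   lithium bromide solution absorbs the vapor, maintaining the vacuum
# generator:  microturbine exhaust heat drives the vapor back out of the solution
# condenser:  heat is rejected to the rooftop cooling towers and the vapor
#             condenses before the expansion valve

ASSUMED_COP = 1.2                     # typical double-effect value, assumed here
exhaust_heat_to_chiller_kw = 1000.0   # hypothetical heat input, not a GDC figure

cooling_delivered_kw = ASSUMED_COP * exhaust_heat_to_chiller_kw
print(f"~{cooling_delivered_kw:.0f} kW of cooling from "
      f"{exhaust_heat_to_chiller_kw:.0f} kW of recovered heat")
```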

Chilled water from the chillers is piped under the floor to refrigerator-sized racks of servers in the data center. Weldon shows me a rear door on a server rack containing a heat exchanger that looks like a typical finned radiator coil. Fans in the servers blow air horizontally out through the doors, and the cooled air then recirculates to cool the room and, ultimately, the servers.

Doug Hague, communications technician, peers inside a server rack cooled by an IBM rear-door heat exchanger.

This is IBM’s Rear Door Heat eXchanger, manufactured by Coolcentric. These doors remove heat more efficiently than conventional room air conditioning. Sensors monitor server temperatures to determine how much cooling each door should provide, so the environment can be controlled in each rack of servers.
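A hypothetical per-rack control loop might look like the sketch below; the setpoint, gain, and valve model are illustrative assumptions, not Coolcentric’s or IBM’s actual control scheme.

```python
# Hypothetical per-rack control sketch for a rear-door heat exchanger:
# valve opening scales with how far the rack exhaust runs above a setpoint.

SETPOINT_F = 80.0      # assumed target rack-exhaust temperature
GAIN = 0.10            # assumed valve fraction per degree F of error

def door_valve_position(rack_exhaust_temp_f: float) -> float:
    """Return the chilled-water valve opening (0.0 closed to 1.0 fully open)."""
    error = rack_exhaust_temp_f - SETPOINT_F
    return min(1.0, max(0.0, GAIN * error))

for temp in (78.0, 84.0, 95.0):
    print(f"rack exhaust {temp:.0f}F -> valve {door_valve_position(temp):.0%} open")
```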

Exhaust from the microturbines also flows through two Cain heat exchangers in the room with the absorption chillers to produce hot water. Noble says, “Depending on season and load, we can use that hot water to run the perimeter heat in the adjacent building, preheat the outside air used for ventilation, and produce domestic hot water. There are very few heat loads in the data center.”

Mark Weldon shows off batteries that start the microturbines and provide backup power.

Next, we go into a room containing 44 tons of sealed batteries that augment the turbines. They start the turbines and provide emergency backup power in the unlikely event that all 12 turbines and the utility grid fail to supply enough electricity to maintain operations. The 300-volt battery banks provide at least 17 minutes of full data center power, permitting an orderly shutdown of the computers in the event of a calamity.
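A simple sizing sketch shows how ride-through time, bus voltage, and load relate; the data center load used here is a hypothetical figure, since the article does not state one.

```python
# Sizing sketch for the battery ride-through. The 300 V bus and 17-minute
# figure come from the article; the load is a hypothetical assumption.

BUS_VOLTAGE_V = 300.0
RIDE_THROUGH_MIN = 17.0
assumed_load_kw = 500.0        # hypothetical full data center load

energy_required_kwh = assumed_load_kw * (RIDE_THROUGH_MIN / 60.0)
amp_hours_required = energy_required_kwh * 1000.0 / BUS_VOLTAGE_V

print(f"Energy to ride through {RIDE_THROUGH_MIN:.0f} minutes: {energy_required_kwh:.0f} kWh")
print(f"Equivalent capacity at {BUS_VOLTAGE_V:.0f} V: {amp_hours_required:.0f} Ah")
```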

Automatic Control System Does the Thinking
An automated control system complete with computers and PLCs decides which form of power to use in the GDC. In normal operation, power comes from the electrical grid, and the microturbines act as a current source, with their output set to match the thermal requirement imposed by cooling the servers. If grid power is lost, the microturbines take over as a voltage source, with the load setting the current. According to Noble, “With the utility rate structure in our area, it doesn’t make economic or environmental sense to operate the microturbines purely to generate power. You have to be able to use at least a portion of the thermal energy from their exhaust.”
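The decision logic Noble describes can be summarized in a short sketch like the one below; it is an illustration of the priorities described above, not the GDC’s actual PLC code.

```python
# Illustrative sketch of the supervisory decision described above:
# thermal-following on the grid, island mode when the grid drops,
# battery ride-through as the last resort.

def select_power_mode(grid_available: bool, turbines_available: bool,
                      cooling_demand_kw: float) -> str:
    if grid_available:
        # Turbines run as a current source, sized to the thermal (cooling) demand.
        return f"grid + turbines thermal-following ({cooling_demand_kw:.0f} kW cooling load)"
    if turbines_available:
        # Turbines become the voltage source; the electrical load sets the current.
        return "island mode: microturbines carry the full load"
    # Neither source is available: batteries buy ~17 minutes for an orderly shutdown.
    return "battery ride-through: begin orderly server shutdown"

print(select_power_mode(grid_available=True, turbines_available=True, cooling_demand_kw=350))
print(select_power_mode(grid_available=False, turbines_available=True, cooling_demand_kw=350))
print(select_power_mode(grid_available=False, turbines_available=False, cooling_demand_kw=350))
```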

In walking around the data center, Noble notes, “This is a lights-out data center. It has no staff and is typically controlled remotely from someone’s laptop computer.” He adds, “We have extensively instrumented this facility. The ultimate vision is to have it fully automated.”

Indeed, Weldon shows me sensors in power strips along the doorway of a server rack, and the servers themselves have sensors. He estimates there are about 30,000 sensors in all, measuring temperature, amperage, voltage, and computing capacity (chip load), among other things.

But with all this technology employed in a quest to save energy and increase the efficiency of data centers, one question remains: did they consider using renewable energy? When I pose the question to Noble, he replies, “We are actually considering supplementing our DC power system with solar panels. The adjacent building has a flat roof that’s over 75,000 square feet.”

The GDC is gradually coming online as equipment is being moved into it. Meanwhile, IBM uses the GDC as a showcase and research center for trying new technologies. According to Schmidt, “The idea is to deploy some of these technologies in our clients around the world.” He adds, “We’re working with the mechanical and electrical engineering departments at Syracuse University on software tools that will help our clients design better data centers and help their legacy data centers improve on energy efficiency.”

Hopefully, the creative thinking at the start of the project and the hustle to meet a tight deadline will pay off in many ways for years to come. Syracuse University will benefit from reduced energy use in its computing operations, and as time goes on, other data centers should benefit as well.

And now for that tour of the main campus and the Carrier Dome...