Data Centre Design – basic considerations


The key areas to consider when designing a data centre are airflow and cooling, humidification, power supply, and cabling pathways.

Airflow and Cooling

Heat is the principal destructive force acting on electronic equipment. This is especially true in data centres, where large numbers of server units are packed close together. Arranging the servers and cooling units so that heat is removed efficiently and effectively is therefore a primary design concern.

The most common method of cooling uses rack-mounted fans to move air away from the servers. A “hot-aisle/cold-aisle” layout is the most efficient way to remove heat via the air. Server racks are arranged in long rows with space between them. Conditioned air is supplied along the “cold aisle” on one side of each row, and the rack-mounted fans draw this cooled air through the servers. On the opposite side of the row, the fans expel heated air into the “hot aisle”, from which it is ducted out of the room.
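As a rough illustration of the airflow this implies, the sketch below estimates the air volume one rack needs from a simple energy balance. The rack load, air properties and aisle temperature rise are assumed figures for the example, not design values.

    # Rough airflow sizing for a hot-aisle/cold-aisle layout (illustrative only).
    # Assumed figures: 10 kW rack load, 1.2 kg/m^3 air density,
    # 1005 J/(kg.K) specific heat and a 12 K cold-to-hot aisle temperature rise.

    RACK_LOAD_W = 10_000      # heat rejected by one rack (assumed)
    AIR_DENSITY = 1.2         # kg/m^3 at around 20 C
    CP_AIR = 1005.0           # J/(kg.K)
    DELTA_T = 12.0            # temperature rise across the servers, K (assumed)

    # Energy balance: Q = density * volumetric_flow * cp * delta_T
    volumetric_flow = RACK_LOAD_W / (AIR_DENSITY * CP_AIR * DELTA_T)  # m^3/s
    print(f"Required airflow: {volumetric_flow:.2f} m^3/s "
          f"({volumetric_flow * 3600:.0f} m^3/h) per rack")

The larger the temperature difference between the aisles, the less air has to be moved for the same heat load.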

There are several other methods of cooling besides airflow. Liquid cooling systems are common and efficient; they are somewhat more expensive to install, but they can handle much higher heat densities than air-cooled systems. Commonly, liquid-filled coils are installed in the rack and chilled water is pumped through them in a closed loop. The heated water carries the waste heat to a chiller, which returns cooled water to the server rack.
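For comparison, the same energy balance applied to water shows why liquid cooling copes with higher heat densities: water’s much greater heat capacity means a modest flow rate removes a large load. The figures below are assumptions for illustration only.

    # Rough chilled-water flow for an in-rack cooling coil (illustrative only).
    # Assumed figures: 30 kW rack load, 4186 J/(kg.K) specific heat of water
    # and a 6 K temperature rise across the coil.

    RACK_LOAD_W = 30_000      # heat absorbed by the coil (assumed)
    CP_WATER = 4186.0         # J/(kg.K)
    DELTA_T = 6.0             # water temperature rise across the coil, K (assumed)

    # Energy balance: Q = mass_flow * cp * delta_T
    mass_flow = RACK_LOAD_W / (CP_WATER * DELTA_T)   # kg/s, roughly litres per second
    print(f"Required water flow: {mass_flow:.2f} kg/s (about {mass_flow:.1f} L/s)")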

Humidification

Humidification is commonly the most wasteful component of a data centre design. The practice in older “close control” computer rooms of using individual humidification control units with a very narrow control band of +/- 5% can create a situation where one unit is dehumidifying while the unit next to it is humidifying. The power cost of both humidifying and dehumidifying is enormous, and such tight control of humidity is no longer necessary. Modern servers can tolerate relative humidity anywhere between 20% and 80%, so a control range of 30%-70% is safe. The most sensitive equipment tends to be magnetic tape rather than the servers themselves, but it is now unusual to find anything requiring a control range tighter than 40%-55%.

An efficient humidification plan uses a central unit and allows for seasonal shift. That is, the unit is controlled with a “dead band” that holds the humidity at the lower end of the range when conditions are dry, to minimise the humidifier load, and at the upper end when conditions are humid, to minimise the dehumidification load. If the humidification system is of the evaporative cooling type, waste heat from the data centre will provide the energy required and power consumption will be minimal.
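A minimal sketch of the dead-band approach is shown below, assuming the 30%-70% safe range mentioned above; the set-points and the plant interface are illustrative, not taken from any particular control system.

    # Dead-band ("seasonal shift") humidity control sketch (illustrative only).
    # The plant humidifies only below the lower limit and dehumidifies only
    # above the upper limit; between the two it spends no energy on humidity.

    LOW_SETPOINT = 30.0    # % RH - hold this level when ambient conditions are dry
    HIGH_SETPOINT = 70.0   # % RH - hold this level when ambient conditions are humid

    def control_action(rh: float) -> str:
        """Return the plant action for a measured relative humidity (%)."""
        if rh < LOW_SETPOINT:
            return "humidify"      # dry conditions: minimise the humidifier load
        if rh > HIGH_SETPOINT:
            return "dehumidify"    # humid conditions: minimise the dehumidification load
        return "idle"              # inside the dead band: no energy spent on humidity

    for rh in (25, 45, 65, 75):
        print(f"{rh}% RH -> {control_action(rh)}")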

Power Supply

Data centres are intense consumers of electrical power. Power is delivered from the grid as AC at 400 volts (or higher), but the electronic circuits within the IT equipment run on DC at around 6-12 volts. The voltage therefore has to be stepped down by a transformer and the supply converted from AC to DC by a rectifier. This is usually done by a power supply unit (PSU) within each piece of IT equipment, which is inefficient and adds to the waste heat that the cooling system must remove from the rack. Furthermore, the power will probably already have been converted from AC to DC and back again within a UPS (see below), so the losses are doubled.

Converting the power to DC only once and then delivering low-voltage DC to the racks is common in telecoms; telephone exchanges typically run at 50 V DC. While this works well for low-power comms equipment, it is problematic for more power-hungry items such as servers: the lower the voltage, the higher the current, so cable sizes (and losses) increase. There are also potential safety issues with high-power DC distribution, and although some data centres use this approach, it has never really caught on.

Provision for backup power is also vital to data centre operations. An emergency generator and an Uninterruptible Power Supply (UPS) with backup batteries should be installed to cover any interruption in the power grid. The batteries must be sufficient to carry the full load during the transition from grid power to emergency power. The latest UPSs have an energy saver mode (ESM) which passes the power straight through the unit without the usual “double conversion” that is needed when the system runs on battery. This greatly increases the efficiency of the UPS at all loads. Switching from ESM to double conversion takes less than half of one AC cycle and will not even register with the IT equipment downstream.
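To illustrate why the double-conversion penalty matters, the sketch below compares the grid power drawn for 100 kW of IT load under double conversion and under ESM. The efficiency figures are assumptions for the example, not vendor data.

    # Cascaded conversion losses for 100 kW of IT load (illustrative only).
    # Efficiency figures are assumed for the example, not taken from any vendor.

    IT_LOAD_KW = 100.0

    UPS_DOUBLE_CONVERSION = 0.94   # AC->DC->AC inside the UPS (assumed)
    UPS_ENERGY_SAVER = 0.99        # ESM: power passed straight through (assumed)
    SERVER_PSU = 0.90              # AC->DC conversion in each server PSU (assumed)

    def grid_power(load_kw: float, *efficiencies: float) -> float:
        """Grid power needed to deliver load_kw through a chain of conversions."""
        power = load_kw
        for eff in efficiencies:
            power /= eff
        return power

    double = grid_power(IT_LOAD_KW, UPS_DOUBLE_CONVERSION, SERVER_PSU)
    esm = grid_power(IT_LOAD_KW, UPS_ENERGY_SAVER, SERVER_PSU)

    print(f"Double conversion: {double:.1f} kW drawn, {double - IT_LOAD_KW:.1f} kW of waste heat")
    print(f"Energy saver mode: {esm:.1f} kW drawn, {esm - IT_LOAD_KW:.1f} kW of waste heat")

Every kilowatt of conversion loss also has to be removed by the cooling plant, which compounds the saving.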

Due to the high energy demands of large data centres, self-generation may be an option for such facilities. In this case, gas-fired generators provide the power for the site, and the waste heat from the engines can drive absorption chillers, reducing cooling costs. Consideration needs to be given to how back-up will be provided if the gas supply is disrupted, and conventional diesel standby generators may still be required. Alternatively, instead of making the self-generation scheme totally independent of the power grid, the two can be linked so that the grid itself becomes the “emergency power”. Whichever route is taken, any mission-critical facility should have at least N+1 redundancy on its UPSs and generators.
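A minimal N+1 sizing check is sketched below; the design load and module rating are assumed figures used only to show the arithmetic.

    # Simple N+1 sizing check for generators or UPS modules (illustrative only).
    # Assumed figures: 1.6 MW design load served by 500 kW modules.

    import math

    DESIGN_LOAD_KW = 1600.0
    MODULE_RATING_KW = 500.0

    n = math.ceil(DESIGN_LOAD_KW / MODULE_RATING_KW)  # modules needed to carry the load
    installed = n + 1                                 # one spare module gives N+1

    print(f"N = {n} modules carry the load; install {installed} for N+1 redundancy")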

Cabling Pathways

Server organisation and cabling are very important in a data centre. It must be possible to remove components or racks from the system, or add new ones, without interrupting operations. Cabling should therefore be organised so that individual racks can be disconnected easily and additional racks can be added just as easily. Separation of power and data cables is essential to prevent interference.

Easy replacement is important. IT equipment such as servers, storage devices and switches is typically replaced every 3-5 years. Cabling, on the other hand, is expected to last through several generations of rack equipment and be replaced only every 15-20 years.