When should your data centre upgrade its cooling system infrastructure?
It’s easy to push to the back of your mind, but like every piece of technical equipment, air conditioning systems need replacing, upgrading and re-thinking.
Preventive maintenance and regular ‘health checks’ are essential, but as technology leaps forward (especially in the field of AI), it’s critical to optimise the lifespan of your infrastructure. After all, in data centres especially, even a minor temperature malfunction can have a significant operational and financial impact on the public and businesses that rely on you.
In our business, it’s all about 100% uptime and ensuring customers have a cooling system that always operates – is yours giving you and your customers the optimal experience?
Let’s look at some key areas:
- State-of-the-art component technology is developing. Fast. This means continual improvements in equipment design and efficiency. Simply put, new components allow equipment to be optimally designed, providing lower operational costs, reduced noise output and improved reliability. Fewer breakdowns and fewer maintenance hours mean obvious benefits for you, your customers and your systems.
- When much of the cooling equipment used today was first designed and installed, it worked perfectly with the environmental conditions of the time, both room and ambient temperature. However, over the last decade we have seen higher ambient temperatures, with more days above 40–45 degrees. Most of the older cooling systems developed 15 years ago and still in use today, especially in Sydney, were designed to operate at a maximum ambient of 35 degrees.
As a result of these higher temperatures, we’ve seen frequent equipment failures during the summer season, affecting networks and customers.
There are inherent problems with using outdated equipment. Aside from not working efficiently or at its optimal capacity, it often fails to meet the Australian government’s Minimum Energy Performance Standards (MEPS) and ASHRAE design standards. Many of the components within these dated systems become obsolete and are simply no longer available as replacements. With no obvious, easy or reliable options, the entire system can be rendered useless.
In addition, refrigerant gas regulations are continually changing, and some older refrigerants are simply no longer viable. If they can be sourced at all, they can be extremely expensive. Stulz always aims to be at the cutting edge of technology development and new practices; that’s why it’s been fascinating to be part of the ongoing work and conversations around the future of new server technologies and cloud-based data storage.
- Essentially, a data centre, whether in-house or outsourced, houses servers, mainframes, storage devices, routers, switches and other computing and communication equipment in IT racks, and it needs to be supported by critical infrastructure. And with the IoT now part of our daily lives, our job is clear.
Yes, room heat loads may have been reduced by outsourcing to the cloud, or there might be fewer servers as a result of virtualisation and new server technologies, but the need for robust, effective and reliable cooling equipment cannot and will never change. At the moment, we are seeing a higher number of system failures because the cooling equipment is oversized for the available heat load in the data centre. The golden rule? Cooling must always be closely matched to the heat load.
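To make the golden rule concrete, here is a minimal sketch of a sizing check in Python. The function names and the thresholds (20% headroom target, 50% oversize limit) are illustrative assumptions for this example only, not Stulz design guidance — real sizing depends on redundancy requirements, ambient conditions and equipment staging.

```python
def sizing_check(heat_load_kw, cooling_capacity_kw,
                 target_margin=0.2, oversize_limit=0.5):
    """Flag cooling capacity that is poorly matched to the IT heat load.

    target_margin: desired headroom above the heat load (20% assumed here).
    oversize_limit: capacity far above the load risks short cycling and
    poor humidity control (illustrative 50% threshold).
    """
    if cooling_capacity_kw < heat_load_kw * (1 + target_margin):
        return "undersized: add capacity or reduce load"
    if cooling_capacity_kw > heat_load_kw * (1 + oversize_limit):
        return "oversized: risk of short cycling; consider staging or downsizing"
    return "well matched"

# Example: a room whose heat load has dropped to 100 kW after
# virtualisation, still served by 180 kW of cooling.
print(sizing_check(100, 180))  # flags the system as oversized
```

The point of the sketch is simply that both directions are failure modes: too little capacity overheats the room, while too much leaves units short cycling against a load they rarely see.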
We all know there are costs attached to cutting-edge equipment and ongoing maintenance by field-based service technicians. But our commitment to your changing needs, along with the changing needs of the technology and the industry, means Stulz is poised and ready to match your requirements to state-of-the-art equipment that protects and future-proofs your business.
The future’s here, and we’re ready. Are you?