Unusual data centers

Bahnhof Data Centre, Sweden
Cristina De Luca

June 03, 2021

In March 2021, a fire destroyed one of OVH’s data centres in Strasbourg, France, and partially damaged another, paralysing the services of more than 3 million websites, including government agencies, banks, shops, news outlets and gaming services. The shutdown at one of Europe’s largest data centre companies, a direct competitor to Amazon Web Services, Microsoft Azure and Google Cloud, caused losses of millions of euros for the company’s customers.

At a time when data is already said to be more valuable than oil, it is not hard to imagine the damage this also caused to the company’s image and reputation. The OVH case, however, serves as a double example: on the one hand it showed what can happen when disaster strikes a large operational centre; on the other, it illustrated how management should behave in the face of such a crisis.

What customers expect is full transparency about the causes, the damage and the actions the data centre is taking to return their services to normal. And that is apparently what OVH has done. But it is well known that this is not always the case. When deciding to outsource information processing and storage to a cloud provider, one has to weigh the risks and costs. And what history shows is that the returns are generally higher, even when events like these are taken into account.

Aside from “natural” disasters, there are human-made ones. In recent weeks, a cyber attack on the Colonial Pipeline on the US east coast disrupted supply across the country, pushing fuel prices up and causing a major national upheaval. Hackers broke into the company’s servers, took possession of the pipeline’s digital controls and demanded a ransom. In February, a hacker gained access to the water system of a town in Florida and tried to inject a dangerous amount of a chemical compound into the water supply. And a few years ago, a group hacked into power station systems in Ukraine, tripping electronic switches and causing a blackout that affected thousands of people.

These are risks that any company runs when its systems are connected to the internet in any way. And the human motives are as varied as can be: financial gain, protest, revenge, competition, or anything else that inflicts damage on the attacked company (or its customers) and brings some benefit to the attacker.

So it is no surprise that many data centres are investing in more effective ways to protect against and recover from disasters such as earthquakes, tornadoes, fires, explosions, power outages, overloads, intrusions and network unavailability, among many others that can – and will – happen. Among these investments, two call our attention: the construction of underground data centres and the installation of underwater facilities.

The search for more “hidden” locations has already led dozens of companies to build underground data centres, many taking advantage of existing structures: abandoned limestone mines, such as the Bluebird Data Centre in Missouri; former mineral mines, such as the Lefdal Mine in Norway; nuclear shelters, such as the Bahnhof Data Centre in Sweden; and even a deconsecrated chapel, in the case of the Barcelona Supercomputing Centre’s MareNostrum. Being underground offers great advantages in temperature control and construction costs, and keeps the facilities out of public view.

MareNostrum, Barcelona Supercomputing Centre

Since energy consumption accounts for a large part of data centre costs, companies choosing an underground location have also been looking for access to cheaper, renewable energy, such as wind or solar power. And given that nearly 50% of a data centre’s energy costs go to keeping temperatures low and stable, the smaller the thermal variation of the environment the better: no direct sunlight during the day and thick stone walls for insulation. Ventilation, in this case, can be done mostly naturally, through vertical or horizontal ducts, or in some cases by circulating glacier water.
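As a rough illustration of that 50% figure (using entirely hypothetical load, efficiency and tariff values, not numbers from any specific facility), the sketch below estimates how much of a facility’s annual energy bill goes to cooling for a given power usage effectiveness (PUE); a cooler, more stable underground site lowers the PUE and, with it, this share.

```python
# Rough estimate of cooling's share of a data centre's energy bill.
# All input values below are hypothetical, for illustration only.

IT_LOAD_KW = 1_000               # average IT (server) load
PUE = 1.9                        # power usage effectiveness (total / IT power)
COOLING_SHARE_OF_OVERHEAD = 0.9  # assume most non-IT power goes to cooling
PRICE_PER_KWH = 0.12             # energy price in EUR/kWh (hypothetical)
HOURS_PER_YEAR = 24 * 365

total_kw = IT_LOAD_KW * PUE
overhead_kw = total_kw - IT_LOAD_KW
cooling_kw = overhead_kw * COOLING_SHARE_OF_OVERHEAD

annual_cost = total_kw * HOURS_PER_YEAR * PRICE_PER_KWH
cooling_cost = cooling_kw * HOURS_PER_YEAR * PRICE_PER_KWH

print(f"Cooling share of energy bill: {cooling_cost / annual_cost:.0%}")
# With a PUE of 1.9, cooling ends up close to ~43% of the bill; a site that
# needs less active cooling pushes the PUE, and this share, down.
```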

Among the challenges every data centre has to address is redundancy in power and network connections. While this is easy to handle in a land-based data centre, whether above or below ground level, underwater it is more complicated. Microsoft pulled an experimental data centre out of the ocean in July 2020 to assess the condition of its machines and structures, both internal and external, after just over two years of submersion. The Natick project used solar power and mechanical energy harvested from ocean waves to keep running. On its network connection to the surface, it even used post-quantum cryptography to secure the data processed there.

Microsoft’s Natick Project

Installed in a metal cylinder no more than 12 metres long and sunk under about 36 metres (117 feet) of seawater off the coast of Scotland, near the Orkney Islands, the data centre housed 12 racks with 864 servers and 27.6 petabytes of storage, consuming close to 240 kW. It was filled with nitrogen gas to avoid the wear that oxygen causes to metallic parts. Throughout the experiment, the company compared its performance with that of an equivalent facility on the surface, always running the same processes over the same connections. It concluded that the underwater servers were eight times more reliable than their land-based counterparts, with far fewer failures.
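A quick back-of-the-envelope calculation on the figures quoted above (12 racks, 864 servers, 27.6 PB, roughly 240 kW) gives a feel for the density involved; the script below simply divides the published numbers.

```python
# Quick arithmetic on the Natick figures quoted in the text:
# 12 racks, 864 servers, 27.6 PB of storage, ~240 kW of power draw.
racks, servers = 12, 864
storage_pb, power_kw = 27.6, 240

print(f"servers per rack:   {servers // racks}")                     # 72
print(f"storage per server: {storage_pb * 1000 / servers:.1f} TB")   # ~31.9 TB
print(f"power per server:   {power_kw * 1000 / servers:.0f} W")      # ~278 W
```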

Although experimental, the project shows that underwater data centres are not far from becoming mainstream: they are feasible and, in this trial, delivered lower costs and greater reliability than a terrestrial equivalent. They are also insulated from human error and other disasters that can happen on the surface. On the other hand, their components cannot be physically serviced, and their connection cables remain at least partly exposed.

Whatever the type of data centre, it is worth remembering that, besides being cost-effective, the service needs to include redundancy, security, local support, varied connection options, flexible configurations and validated disaster recovery procedures. If it holds certifications specific to your application and publishes uptime statistics, so much the better.

As the OVH case shows, having redundancy and a local backup is not enough. Your data and algorithms also need to be stored in another location, one that can be brought up immediately if the original installation breaks down. And this process must be set up and reviewed by your company when contracting the service, to ensure that it actually works. It is not unusual to see elaborate backup routines built, only for the company to discover, on the day of a real failure, that the backups are corrupted and cannot be recovered. The restore process must therefore be tested cyclically, as sketched below.
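Below is a minimal sketch of what such a cyclic restore test could look like, assuming a very simple setup in which each backup ships with a checksum manifest; the paths and manifest format are hypothetical, and a real environment would call the provider’s restore tooling instead of a plain copy.

```python
# Minimal sketch of a cyclic restore test: restore the latest backup into a
# staging directory and verify file checksums against a manifest written at
# backup time. Paths, layout and the manifest format are hypothetical.
import hashlib
import json
import pathlib
import shutil

BACKUP_DIR = pathlib.Path("/backups/latest")     # hypothetical backup location
STAGING_DIR = pathlib.Path("/tmp/restore-test")  # scratch area for the test
MANIFEST = BACKUP_DIR / "manifest.json"          # {"relative/path": "sha256", ...}

def sha256(path: pathlib.Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def restore_and_verify() -> bool:
    # "Restore" here is a plain copy; in practice this step would invoke
    # your backup tooling or the provider's restore API.
    if STAGING_DIR.exists():
        shutil.rmtree(STAGING_DIR)
    shutil.copytree(BACKUP_DIR, STAGING_DIR)

    expected = json.loads(MANIFEST.read_text())
    bad = [name for name, digest in expected.items()
           if sha256(STAGING_DIR / name) != digest]
    if bad:
        print(f"Restore test FAILED for {len(bad)} file(s): {bad[:5]}")
        return False
    print(f"Restore test passed: {len(expected)} files verified.")
    return True

if __name__ == "__main__":
    restore_and_verify()
```

Running a test like this on a schedule is what turns “we have backups” into “we know we can recover”.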

Increasingly frequent cyber attacks also call for reinforced security, both logical and physical, with encryption for data at rest, in processing and in transit, and prevention systems such as firewalls, routing controls, alerts and access blocking. The damage from a ransomware attack can be small compared with the legal costs of leaking customer data.
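As one concrete, deliberately simplified example of logical protection, the sketch below encrypts a record on the client side before it is ever handed to a provider, using the Python cryptography package’s Fernet recipe for authenticated symmetric encryption; the environment variable used for the key is only a stand-in for proper key management with a KMS or HSM.

```python
# Minimal sketch of encrypting data client-side before it is stored with a
# provider, using the `cryptography` package's Fernet recipe (authenticated
# symmetric encryption). Key handling is deliberately simplified; a real
# system would use a KMS/HSM, not an environment variable.
import os
from cryptography.fernet import Fernet

def get_cipher() -> Fernet:
    key = os.environ.get("STORAGE_KEY")        # hypothetical variable name
    if key is None:
        key = Fernet.generate_key().decode()   # demo only: ad-hoc key
        print(f"Generated demo key: {key}")
    return Fernet(key.encode())

cipher = get_cipher()

record = b'{"customer": "ACME", "iban": "DE00 0000 0000"}'  # dummy data
token = cipher.encrypt(record)    # this ciphertext is what gets uploaded
restored = cipher.decrypt(token)  # only the key holder can read it back

assert restored == record
print(f"ciphertext length: {len(token)} bytes")
```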

Many data centres have passed inspections by certification bodies and carry specific seals, such as HIPAA compliance for healthcare services, ISO 20000-1 for IT service management, ISO 27001 for information security, and SSAE 18 and PCI DSS for financial services, among others. Depending on the service your company provides to its customers and the region of the world they are in, one or more of these certifications will be required.

And depending on the distance between your employees and the data centre chosen to run your systems and store your data, it helps if the contracted company also offers 24×7 remote-hands support, every day of the year: technical experts who can physically access your server and perform local operations that cannot be done remotely via SSH, for example. In co-location services it is sometimes necessary to replace a network card, a memory module or even a disk, and the cost of sending your own employee may not be worth it.
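For the routine checks that do not require remote hands, something as simple as the SSH sketch below, using the paramiko library, is often enough to inspect a co-located server from anywhere; the hostname, user and commands are placeholders.

```python
# Minimal sketch of a remote health check over SSH using paramiko, the kind
# of routine task that does not need remote hands. Hostname, user, key path
# and the commands run are hypothetical placeholders.
import os
import paramiko

HOST = "colo-server-01.example.com"   # hypothetical server in the data centre
USER = "ops"
KEY_FILE = os.path.expanduser("~/.ssh/id_ed25519")

CHECKS = {
    "uptime": "uptime",
    "disk usage": "df -h /",
    "failed services": "systemctl --failed --no-legend",
}

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, key_filename=KEY_FILE)

for label, command in CHECKS.items():
    _, stdout, stderr = client.exec_command(command)
    print(f"--- {label} ---")
    print(stdout.read().decode().strip() or stderr.read().decode().strip())

client.close()
```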

The geographical coverage of the communication network and the last-mile provider that will connect your offices, your employees and your data centres is another critical point. In general, data centres offer a variety of connection options, which may or may not be redundant (more than one network entry point, in case one of them fails). More important than speed, variety and redundancy, however, is latency. In financial applications connected to stock exchanges, for example, a few nanoseconds can make the difference in the price obtained when buying and selling shares, which can mean millions in lost opportunities. And in the management of an oil pipeline or power grid, a split-second difference can lead to a leak or an explosion.
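When comparing candidate facilities, it is worth measuring latency yourself rather than relying on headline figures. A crude way to do that is to time TCP handshakes against an endpoint inside the facility, as in the sketch below; the hostname is a placeholder, and a serious evaluation would use far more samples and the provider’s own measurement tools.

```python
# Crude round-trip latency check against a candidate data centre endpoint,
# timing TCP handshakes. The endpoint is a hypothetical placeholder.
import socket
import statistics
import time

HOST, PORT = "dc-edge.example.net", 443   # hypothetical endpoint
SAMPLES = 20

rtts_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=2):
        pass  # connection established means the handshake completed
    rtts_ms.append((time.perf_counter() - start) * 1000)
    time.sleep(0.1)

print(f"min {min(rtts_ms):.2f} ms | median {statistics.median(rtts_ms):.2f} ms "
      f"| p95 {sorted(rtts_ms)[int(0.95 * SAMPLES)]:.2f} ms")
```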

Therefore, the choice of cloud architecture among the options the data centre offers should be considered carefully. High-frequency financial applications, such as buying and selling stocks, may benefit from co-location close to the stock exchange’s servers, while systems that require high security in banking transactions may be better served by a hybrid architecture. In that sense, the more flexible the data centre’s options, the better.

And, of course, never fail to check uptime statistics, load volumes and the number of clients, and get to know some customer success stories of your future contractor. In particular, knowing how the data centre handles potential crises helps predict the kind of relationship your company will have if something goes off track. Always seek to understand what the recovery processes are in the event of natural – and human-caused – disasters, and try to simulate how your services will behave once installed and running remotely. Ask what happens in the event of an earthquake, a tsunami, a power failure, a provider network outage, a denial-of-service attack, a hacker intrusion, or even a rat gnawing on the sprinkler pipes.
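Uptime percentages, in particular, are easier to compare once translated into the downtime they actually allow; the small helper below does that conversion for a few illustrative SLA figures.

```python
# Translate an advertised uptime percentage into the downtime it allows per
# year and per month, a quick sanity check when comparing SLAs.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def allowed_downtime(uptime_pct: float) -> tuple[float, float]:
    """Return (minutes per year, minutes per month) of permitted downtime."""
    down_fraction = 1 - uptime_pct / 100
    per_year = MINUTES_PER_YEAR * down_fraction
    return per_year, per_year / 12

for pct in (99.0, 99.9, 99.99, 99.999):
    per_year, per_month = allowed_downtime(pct)
    print(f"{pct:>7}% uptime -> {per_year:8.1f} min/year ({per_month:6.2f} min/month)")
```

A “99.9%” contract, for example, still permits roughly eight and three-quarter hours of downtime a year.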

Underground or underwater, your next data centre may not even be here on Earth. If NASA and some partners have their way, in a few years’ time it will be possible to process your data in orbit, although, of course, with latency somewhat higher than terrestrial networks, which greatly reduces the range of feasible operations. Taking processing into space, however, opens up new fronts for materials and genetic research, in conditions very different from those that can be simulated on the planet’s surface. Until then, the local options are still plentiful and far more reliable.