Nscale launches AMD GPU-based cloud services

Sheila Zabeu

May 21, 2024

Nscale, in collaboration with AMD, is launching high-performance GPU-based cloud services. The company is a spin-off of Arkon Energy, a datacenter infrastructure company with 300 MW of capacity, and will provide services from Arkon Energy’s N1 Glomfjord plant in Norway, which is powered by hydroelectric energy and naturally cooled.

According to Nscale, it is also the first Artificial Intelligence (AI) GPU service provider to utilise AMD’s Instinct MI300X accelerators, which provide sustainable, high-performance, low-cost computing resources. With them, Nscale aims to set a new standard for cloud systems based on AMD technology as an alternative to NVIDIA’s solutions.

Another advantage highlighted by Nscale is the strategic location of its N1 Glomfjord unit, which the company says makes the centre one of the most cost-efficient in the world for large language model (LLM) and AI training while guaranteeing high levels of performance and sustainability.

‘In the coming years, Artificial Intelligence will be present in all companies, products and services, driving unprecedented demand for infrastructure. Nscale’s mission is to democratise high-performance computing so that the potential of AI can be explored,’ says Josh Payne, co-founder and CEO of Nscale.

According to the executive, Nscale’s AI cloud services platform is 100 per cent renewable and vertically integrated, from its connection to energy sources through to the cloud. ‘By combining AMD’s cutting-edge technology with abundant, low-cost energy from Norway, we are offering an AI platform in the cloud that is not only powerful, but also sustainable and economical,’ adds Payne.

‘At AMD, we believe in the transformative power of Artificial Intelligence and are excited to be working with Nscale to deliver instances of the AMD Instinct MI300X accelerator in the cloud. Through Nscale’s AI Cloud Services platform, the MI300X will deliver highly competitive Generative AI inference performance, relying on the AMD ROCm open software ecosystem to offer a compelling total cost of ownership (TCO) advantage over market alternatives,’ says Andrew Dieckmann, corporate vice president and general manager of AMD Instinct Business.
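For readers unfamiliar with ROCm, the practical point behind the quote is that AMD’s open software stack lets mainstream frameworks target Instinct GPUs without CUDA-specific code. The following is a minimal, hypothetical sketch (not Nscale’s or AMD’s published code) showing how a ROCm build of PyTorch exposes an AMD accelerator through the familiar torch.cuda interface:

    # Illustrative only: assumes a ROCm-enabled build of PyTorch on a machine
    # with an AMD Instinct GPU (e.g. an MI300X instance).
    import torch

    if torch.cuda.is_available():                       # True on ROCm builds with an AMD GPU present
        print("ROCm/HIP version:", torch.version.hip)   # None on CUDA-only builds
        print("Device:", torch.cuda.get_device_name(0))
        x = torch.randn(1024, 1024, device="cuda")      # "cuda" maps to the AMD GPU via HIP
        y = x @ x                                       # matrix multiply runs on the accelerator
        print("Result shape:", tuple(y.shape))
    else:
        print("No ROCm-capable GPU detected; falling back to CPU.")

On a ROCm-enabled machine the same script runs without CUDA-specific changes, which is the kind of portability the ROCm ecosystem claim refers to.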

The main features of Nscale’s AMD-based cloud services are:

  • Vertical AI cloud: Nscale operates the entire AI stack, from modular datacenters to high-performance computing clusters, allowing each layer to be optimised for greater performance and cost efficiency.
  • Sustainability: Located in northern Norway, Nscale’s solution is powered by 100 per cent renewable energy and uses natural cooling solutions to ensure sustainable high-performance computing services.
  • Cost-effective solution: Nscale’s vertical integration and low-cost renewable energy sources make it possible to provide GPU-based computing solutions that, according to the company, are the most affordable on the market.
  • Unrivalled expertise: Developed specifically for AI use, Nscale’s solutions enable the installation, configuration and management of fast supercomputing clusters to accelerate Artificial Intelligence R&D activities.

Nscale aims to cater for a wide range of user profiles, from start-ups to large companies that want to take advantage of AI, deep learning and Big Data analysis but face the barriers of high cost and the environmental concerns associated with traditional GPU-based cloud services.

Where the energy comes from

Arkon Energy operates mainly in the datacenter infrastructure and renewable energy sectors, focusing on the development and operation of bitcoin mining solutions powered by renewable energy. It also acquires distressed datacenter assets around the world, mainly in the US market.

At the end of 2023, the company announced that it would invest $30 million in an Artificial Intelligence cloud services project at its datacenters in Norway to help meet the demand generated by Generative AI and LLM training. The intention is to address the shortage of specialised infrastructure for these underlying AI technologies.