
Edge Computing in space: faster actions without ground interference


How do you assess the health of a sick astronaut? How do you know whether atmospheric and surface conditions on a planet pose any danger to a crew about to explore it? How do you find the ideal spot on the lunar surface to refuel a small rover carrying supplies? Or how do you test the equipment and spacesuits of astronauts who need to leave a space station to repair its outer structure? Space presents big challenges, and they require critical decisions to be made quickly. In seconds, not minutes or hours.

Transmitting even a few bytes from Mars to a ground receiving station can take 5 to 20 minutes, depending on where the two planets are in their orbits. The first panoramic image the Perseverance rover took after landing on Mars in February this year took almost two days to reach Earth. The landing confirmation itself took 11 minutes: the time the radio signal needed to cross the roughly 200 million km that separated the two planets at that moment.
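For scale, here is a quick back-of-the-envelope check of those delays (a minimal sketch, assuming straight-line propagation at light speed; the Mars figure uses the approximate Earth-Mars distance at the time of the landing):

```python
# Back-of-the-envelope one-way radio delay, assuming straight-line
# propagation at the speed of light in vacuum.
SPEED_OF_LIGHT_KM_S = 299_792  # km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """One-way signal delay in minutes for a given distance in km."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

# Moon at ~384,400 km and Mars at ~200 million km (its approximate
# distance from Earth at the time of the Perseverance landing).
print(f"Moon: {one_way_delay_minutes(384_400) * 60:.1f} seconds")   # ~1.3 s
print(f"Mars: {one_way_delay_minutes(200_000_000):.1f} minutes")    # ~11 min
```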

Bottlenecks like this hinder any quick decision that has to be made after processing the received information, which already arrives late on Earth. Concentrating more computing power at the far end of the link, away from Earth, can therefore solve many of the challenges faced by researchers, scientists and astronauts working in orbit around our planet. But other problems compound the transmission distance, such as limited physical space, the power required to make things work, and equipment cooling.

Edge Computing is the name given to the technology that processes data locally, where it is collected, speeding up decision making and the triggering of specific actions in the shortest possible time. Applied everywhere from agriculture to autonomous cars, the technology has evolved to bring more and more computing power to the edge, freeing up the communication paths to cloud processing centres. The applications become even more interesting in space missions, where communication channels are long and high-latency, and large local computing power would depend on vast power supplies.

The big challenge of Edge Computing is to balance local computing power, in a small physical space, consuming little energy and minimal bandwidth for communication with command centres, and still manage to process vast amounts of data locally into an optimised decision-making solution that can positively affect the business or the mission. This also increases the challenge of monitoring the infrastructure: monitoring is vital to protect critical assets, ensure network performance, identify systemic anomalies and detect internal and external cyber threats.
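As a rough illustration of that trade-off, the sketch below shows the basic edge pattern: raw readings are reduced on board to a compact, decision-ready summary, and only that summary competes for the downlink. The sensor window, sample size and alert threshold are hypothetical, not taken from any real mission.

```python
# Minimal sketch of the edge pattern: reduce a raw sensor stream to a
# compact, decision-ready summary before anything touches the downlink.
# Sensor values and thresholds are illustrative only.
from statistics import mean

RAW_SAMPLE_BYTES = 512          # size of one raw reading (assumed)
ALERT_THRESHOLD = 42.0          # trigger value for a local decision (assumed)

def summarise(readings: list[float]) -> dict:
    """Collapse a window of raw readings into a few bytes of downlink payload."""
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": round(max(readings), 2),
        "alert": max(readings) > ALERT_THRESHOLD,   # decision made at the edge
    }

window = [40.1, 41.7, 43.2, 39.8]          # raw data stays on board
payload = summarise(window)                 # only this goes to the ground
saved = len(window) * RAW_SAMPLE_BYTES - len(str(payload))
print(payload, f"~{saved} bytes kept off the downlink")
```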

According to MarketsandMarkets, the Covid-19 pandemic has accelerated the global Edge Computing market, which is expected to grow from $3.6 billion in 2020 to $15.7 billion by 2025. The need for technology focused on the edge, where data is collected, has increased and will continue to grow for several reasons: the evolution of autonomous cars, the space race and the rise in remote working are just a few examples. All of them overload communication network bandwidth, increasing processing latency and demanding more computing power and local storage.

The launch of ever more satellites and rockets, and the expansion of space stations such as the ISS (International Space Station), has demanded enormous advances in energy capture and storage systems, computing power in nanometre-scale circuits, and communication and storage protocols and media.

NASA, IBM, Red Hat and HPE initiated an Edge Computing DNA sequencing project on the ISS this year, connecting the space station’s systems to datacentres on Earth. Using Hewlett Packard Enterprise’s Spaceborne Computer-2 (SBC-2), astronauts and researchers on the ISS will be able to run experiments on information collected locally and from other satellites without having to send the data back to Earth for processing. The computer was sent to the ISS aboard the Cygnus spacecraft, attached to the Antares rocket, on the NG-15 mission (the 15th Northrop Grumman resupply mission to the space station), launched on 20 February from Wallops Island, Virginia, USA. The mission was named after Katherine Johnson, the Black mathematician whose work was instrumental in the first crewed American spaceflights.

The aim of taking Edge Computing into space is to speed up the results of experiments conducted on space stations, so decisions to change the direction of research can be made more quickly, making the technology a key factor for future missions to the Moon, Mars and beyond. It can also be used to improve the calculation of trajectories and routes for other spacecraft and satellites, among other basic needs of navigation and daily life in space.

In the Genes in Space-3 DNA sequencing project alone, the collected data would take weeks to reach the hands of scientists on Earth. By running containers with analytical code aboard the ISS, that dependency on Earth is no longer a barrier to quick results. The solution uses Red Hat CodeReady Containers on OpenShift clusters, which package everything the machine-learning code needs to run, along with operating system tools and libraries, and will allow the research to be replicated in terrestrial datacentres, where scientists will develop, test and create new analytics code to be sent back to the space station.
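In outline, the pattern looks something like the sketch below: the analysis code is packaged so it can run identically on the station and in a terrestrial datacentre, and only the results are queued for downlink. The scoring function and file names here are hypothetical stand-ins, not the actual Spaceborne Computer-2 or OpenShift workload.

```python
# Sketch of the "analyse on board, send back only results" pattern.
# The scoring logic and paths are placeholders, not the real project code.
import json
from pathlib import Path

def analyse_reads(raw_reads: list[str]) -> dict:
    """Stand-in for the containerised analytics: score sequences locally."""
    gc_content = [(r.count("G") + r.count("C")) / len(r) for r in raw_reads if r]
    return {
        "reads": len(raw_reads),
        "mean_gc": round(sum(gc_content) / len(gc_content), 3),
    }

# Raw sequencing output stays in the station's local storage...
raw_reads = ["ATGCGC", "GGCATA", "TTGCCA"]

# ...and only the compact summary is queued for the next downlink window.
Path("downlink_queue").mkdir(exist_ok=True)
Path("downlink_queue/summary.json").write_text(json.dumps(analyse_reads(raw_reads)))
print(Path("downlink_queue/summary.json").read_text())
```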

Another important Edge Computing operation is organising nanosatellites, in orbit 400 to 600 km above the Earth’s surface, to perform a common sequence of tasks, such as studying changes in the weather, monitoring natural disasters and warning of potential national security breaches. A nanosatellite is, by definition, a spacecraft weighing less than 10 kg, and more than 3,000 have been launched since 1998. Organised in constellations of tens or hundreds, with solar panels mounted on their surfaces, they are capable of operating at a peak of up to 7.1 W.

Chart: constellations launched and in planning, with their respective numbers of nanosatellites (source: https://www.nanosats.eu)

But power capacity is not the only limiting factor of these devices. Their small volume does not allow much technology to be packed inside, constraining the on-board circuitry and even the possible focal length of their camera lenses. Nanosatellites store information until they pass over a terrestrial receiving station, downloading all the data they have collected at that moment. This creates a window of up to 5.5 hours between capture and transmission to Earth. A ground station with a 200 Mbps link, for example, can receive up to 15 GB of data in each 10-minute session. Even in ideal weather, this would limit reception to 9 nanosatellites per revolution, meaning 112 strategically placed stations would be needed to cover a constellation of 1,000 nodes.
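The arithmetic behind those figures can be reproduced in a few lines, using the numbers quoted above plus an assumed orbital period of roughly 95 minutes for these low orbits (the revolution time is not stated in the text):

```python
# Reproducing the downlink budget described above.
LINK_MBPS = 200                 # ground station link speed (from the text)
SESSION_MIN = 10                # downlink session per satellite (from the text)
ORBIT_MIN = 95                  # typical low-orbit period in minutes (assumption)
CONSTELLATION = 1000            # nodes to cover (from the text)

gb_per_session = LINK_MBPS * SESSION_MIN * 60 / 8 / 1000       # megabits -> GB
sats_per_station_per_orbit = ORBIT_MIN // SESSION_MIN
stations_needed = -(-CONSTELLATION // sats_per_station_per_orbit)  # ceiling division

print(f"{gb_per_session:.0f} GB per 10-minute session")                        # 15 GB
print(f"{sats_per_station_per_orbit} satellites per station per revolution")   # 9
print(f"{stations_needed} ground stations for {CONSTELLATION} nodes")          # 112
```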

Even with improvements in ground station capabilities, the nanosatellites themselves have limits on their communication links, making the task of taking “orders” from the ground laborious and inefficient. So giving that control to the satellites or space stations themselves is the only effective way to organise hundreds of nanosatellites on special missions. And getting them to act collaboratively can dramatically decrease the amount of data each needs to collect and transmit, making it easier for a network of receiving ground stations to handle the growing number of constellations planned for launch in the coming years.

And many of these nanosatellites may come equipped with new processors designed specifically for use in space. Companies such as Loft Orbital, for example, are developing edge processors that can work in mesh networks and perform more complex computations in parallel, making them a solution for heavy image-processing and analysis projects, or weather simulations, for example.

There are no barriers to the applications of Edge Computing. In fact, Gartner indicates that in 2018 only 10% of the data generated by companies was processed outside centralised datacentres, but that by 2025 this figure will reach 75%. The sky is no longer the limit for this technology.
