Keeping Astronauts Safe with Cloud-based AI in Space

Currently, entities ranging from national defense departments to oil companies are pushing computing to the cloud. They believe that storing and using data in the cloud makes it more accessible from different locations and minimizes data loss in the event of a disaster. Others are focused on computing at the edge, because they rely on IoT and other connected devices to collect data where it is generated. For example, a robot on the factory floor may use a camera to capture an image of the car it assembled.

“The path forward is to understand the value and strength of each. There’s a balance between them,” says Mark Fernandez, Americas HPC Technology Officer at Hewlett Packard Enterprise (HPE) and principal investigator for the HPE Spaceborne Computer-2 (SBC2) system on the International Space Station (ISS).

“The mental shifts we’re making will help edge operations, from manufacturing to satellite operations, to take in data from edge devices, regenerate an inference model using AI, and redistribute the data to edge devices,” says Fernandez. Eventually he hopes that this will “also help doctors and first responders craft personalized medical treatment at the edge. Here, the edge includes [medical activities in] remote areas, areas impacted by natural disasters and the battlefield.” This also includes the most remote places of all: space.

Currently, the ISS employs artificial intelligence (AI) algorithms on SBC2, enabling ISS scientists to find new ways to maximize safety and efficiency in remote environments. Recent breakthroughs include using AI to identify damage to gear from video, process more of the astronauts’ health-related data onboard and test the quality of 3D printed metal parts.

Fernandez adds that one of the perks of doing his job is seeing mental light bulbs “go off” in the heads of people working in other fields. “When they see how things work on the ISS, they consider changes in how they collect and apply data [on Earth].”

Even a Small Spacesuit Tear is a Big Deal

Detecting damage to spacesuit gloves is one of the most successful uses of AI algorithms that rely on cloud and edge technologies. Astronauts on spacewalks, also known as extravehicular activity (EVA), can tear their gloves on sharp corners of equipment or the spacecraft itself. Historically, astronauts performed several visual checks of the gloves before and after spacewalks. They would take multiple photos of their gloves after each EVA; the photos were then transmitted to a team of experts on Earth, who reviewed each one individually.

An AI app reviews an astronaut’s inspection of wear on a glove following a spacewalk. (Image courtesy of Microsoft Cloud.)

However, because communications delays grow the farther a spacecraft travels from Earth, having ground-based teams check equipment for damage will become significantly more difficult. Finding ways to decrease a crew’s dependence on ground networks is also critical because power, cooling and network access are not always stable. For example, there are multiple times a day when the ISS’s position in orbit leaves the crew without connectivity. Therefore, NASA wanted onboard systems to handle more of the damage checks, maximizing the use of equipment and the astronauts’ time.

The shift in procedure came in 2022 when Microsoft, HPE and NASA collaborated on an AI workload test to run on SBC2. NASA and Microsoft developed a computer vision application that identifies the condition of the spacesuit gloves. The application runs on Microsoft’s cloud computing platform.

NASA and Microsoft then deployed the app on SBC2 and the ISS’s AI-enabled software and hardware platform. The app allows both local and remote analysis of the glove’s condition. Pushing safety validation to the edge, and proving it feasible in a place as remote as the ISS, should assist in designing future missions to the moon and Mars.

One of the keys to success was switching from photos to video.

“Instead of processing individual photos, we now process a video of the astronaut looking for the damage,” says Fernandez. “This allows us to process 60 pictures a second at the edge. We were able to reduce the download of images to Earth by 97 percent. That will save bandwidth, time and potentially lives.”

He adds that the AI app may someday monitor an astronaut in real time as they perform an EVA. In the near future, an astronaut will hopefully not have to spend time taking photos after returning to the ISS. The AI could then reduce the risk of having to cut an EVA short and let ground support easily compare glove conditions across different EVAs.
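To make the shift from photos to video more concrete, the sketch below shows one way an edge video-inspection loop could be structured: every frame is scored locally, and only suspicious frames are compressed for downlink. It is a minimal illustration under stated assumptions, not the NASA/Microsoft application; the `damage_score` function, the video filename and the 0.8 threshold are placeholders invented for the example.

```python
# Minimal sketch of an edge video-inspection loop (illustrative only).
# The real NASA/Microsoft glove app is not public; damage_score() is a
# placeholder standing in for whatever computer-vision model is used.
import cv2  # OpenCV for video capture and image encoding


def damage_score(frame) -> float:
    """Placeholder for an onboard damage-detection model.

    A real system would run an ML inference here; this stub returns a
    constant so the sketch stays runnable.
    """
    return 0.0


def inspect_glove_video(source: str, threshold: float = 0.8) -> list:
    """Process every frame locally; keep only frames that look damaged."""
    flagged = []  # frames worth downlinking to ground support
    capture = cv2.VideoCapture(source)
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video
            break
        if damage_score(frame) >= threshold:
            # Compress just this frame for downlink instead of the full video.
            _, jpeg = cv2.imencode(".jpg", frame)
            flagged.append(jpeg.tobytes())
    capture.release()
    return flagged


if __name__ == "__main__":
    suspect_frames = inspect_glove_video("post_eva_glove_inspection.mp4")
    print(f"{len(suspect_frames)} frames flagged for downlink")
```

In a loop like this, only the flagged frames ever leave the station, which is the kind of bandwidth reduction Fernandez describes when he cites the 97 percent drop in images sent to Earth.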

Individualized Medical Care, Starting with the Genome

NASA regularly performs DNA analysis on astronauts to determine how individuals are being affected by radiation in space. Using onboard AI algorithms together with Microsoft’s cloud computing platform decreases processing and sequencing time from months to minutes.

“We can also get a much finer resolution of the data,” says Fernandez. “We can examine how radiation is affecting men versus women. We can [also] compare how radiation is affecting people from different ethnic backgrounds, like people of Asian descent versus people of European descent.”

Searching for new gene mutations quickly will help NASA see if mutations are benign or linked to cancers that would require immediate care. The same tools used for gene processing and sequencing can also be applied to ultrasounds and X-rays. The overall improvement in operations was made possible by a mental shift in how potential concerns are considered.

“We said, why not compare Joe’s genome today to Joe’s genome yesterday? That’ll be faster than comparing it with his genome when it was assessed by ground support,” says Fernandez. “If there are no changes, we’ll know the mutation has not gotten worse. This change also enables personalized, customized medicine at the very farthest edge.” By using this comparison method, the astronauts don’t have to send the whole gene sequence to Earth, only the parts that differ.
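A simplified way to picture that “send only what changed” approach is a position-by-position comparison of two sequences. The sketch below is a toy built on assumptions, treating a genome as a plain string rather than real sequencing output, but it shows why the downlink shrinks to almost nothing when little or nothing has mutated.

```python
# Toy illustration of diff-based genome reporting (not NASA's pipeline).
# Real sequencing data is far more complex; here a genome is just a string.
def genome_deltas(yesterday: str, today: str) -> list[tuple[int, str, str]]:
    """Return (position, old_base, new_base) for every base that changed."""
    return [
        (i, old, new)
        for i, (old, new) in enumerate(zip(yesterday, today))
        if old != new
    ]


if __name__ == "__main__":
    yesterday = "ACGTACGTACGT"
    today     = "ACGTACGAACGT"  # one simulated mutation at position 7
    deltas = genome_deltas(yesterday, today)
    if not deltas:
        print("No changes -- nothing to downlink.")
    else:
        # Only these few bytes go to Earth, not the whole sequence.
        print("Downlink payload:", deltas)
```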

The same concept can be applied to other scenarios at the edge, like doctors treating patients who require constant monitoring, as well as robots and manufactured products on a factory floor.

“Say you have a reasonably sized data center that’s taking in data from devices at the edge, like assembly robots,” says Fernandez. “You can regenerate an inference model with AI and redistribute it to the edge devices to perform their job more autonomously. Then all of the edge devices perform their job better.”
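As a rough sketch of that loop, the example below retrains a small scikit-learn classifier on data gathered from edge devices and serializes the updated model so it could be pushed back out to them. The dataset, the model choice and the “distribution” step are all assumptions made for illustration, not details of any HPE or SBC2 workload.

```python
# Sketch of the "retrain centrally, redistribute to the edge" loop described
# above. The data, model and transport below are stand-ins for illustration.
import pickle

import numpy as np
from sklearn.linear_model import SGDClassifier

# Pretend these arrived from edge devices (sensor features plus labels).
features = np.random.rand(200, 4)
labels = (features[:, 0] > 0.5).astype(int)

# Regenerate the inference model in the data center.
model = SGDClassifier(loss="log_loss")
model.fit(features, labels)

# Serialize the updated model; in practice this blob would be pushed to
# each edge device over whatever link is available.
model_blob = pickle.dumps(model)
print(f"Updated model ready for redistribution ({len(model_blob)} bytes)")

# On the edge device: load the new model and run inference locally.
edge_model = pickle.loads(model_blob)
print("Sample prediction at the edge:", edge_model.predict(features[:1]))
```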

Future Applications Include 3D Printing

One project in progress for HPE is the use of AI algorithms to perform 3D printing in space. Building parts onboard the ISS will reduce the need for, and cost of, resupply ships, which typically cost $10,000 per pound to get off the ground.

HPE is working on the concept with the Cornell Fracture Group (CFG), a group of researchers based at Cornell University who study the deformation and failure of structures. Members of the CFG have worked on numerous projects on topics ranging from plastic deformation to aluminum crack growth for the U.S. Army and the U.S. Navy.

The first step of testing was to develop modeling software that could simulate 3D printed metal parts. The CFG then worked with HPE to test this software on SBC2 with edge computing. The goal was to determine the performance and likelihood of failure when printing metal parts in space.

“Houston, We Don’t Have a Problem,” Says Future Cloud-based AI

NASA, HPE and Microsoft are also partnering on more advanced cloud services.

“Microsoft is considering an instance in Azure that’s Spaceborne Computer-like,” says Fernandez. “That means the operation will perform work more autonomously, without as much input and communication from a central server. Companies that now just hope their code will work will soon have greater confidence that it will work. The code will have already been tested through Azure.”

HPE is also working on AI apps to help edge-based cloud computers like SBC2 know what “normal” conditions look like. This is a hard objective to achieve, because the definition of “normal” can shift.

The basic concept involves an AI app checking gauges and filters on a person’s behalf. The app would pair with SBC2, or a similar on-premises data center at a company, to send an “everything is fine” message back to Houston or headquarters, saving effort for a human team. If something is wrong, SBC2 or the data center could raise a red flag, and the AI app could then alert ground support or headquarters more quickly and with more data than a person could.
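A bare-bones version of that “everything is fine” check might look like the sketch below, where gauge readings are compared against nominal ranges and only an exception triggers a detailed report. The gauge names, ranges and message format are invented for the example and are not drawn from any actual SBC2 software.

```python
# Minimal sketch of an automated "all nominal" heartbeat (illustrative only).
# Gauge names and nominal ranges below are invented for the example.
NOMINAL_RANGES = {
    "cabin_pressure_kpa": (97.0, 103.0),
    "co2_scrubber_flow": (0.8, 1.2),
    "coolant_temp_c": (15.0, 25.0),
}


def build_status_message(readings: dict) -> dict:
    """Summarize gauge readings: short heartbeat if nominal, details if not."""
    out_of_range = {
        name: value
        for name, value in readings.items()
        if not (NOMINAL_RANGES[name][0] <= value <= NOMINAL_RANGES[name][1])
    }
    if not out_of_range:
        return {"status": "nominal"}  # only a few bytes back to Houston
    # Send full context so ground support can act immediately.
    return {"status": "alert", "out_of_range": out_of_range, "all_readings": readings}


if __name__ == "__main__":
    todays_readings = {
        "cabin_pressure_kpa": 101.2,
        "co2_scrubber_flow": 1.4,  # simulated fault
        "coolant_temp_c": 19.8,
    }
    print(build_status_message(todays_readings))
```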

“When we get a crew that is headed to Mars, it could take seven to ten months to get there,” says Fernandez. “It’s not good for their mental health to get up every day and check the same gauges, filters and readouts. Yet it has to be done because there will be limited or no bandwidth back to Earth. Bringing together AI, edge and the cloud has the potential to make everyday life aboard a spacecraft easier and more enjoyable for people.”