20,000 Terabytes Under The Sea

Northern Isles, Microsoft’s experimental subsea datacenter, is retrieved after two years in the ocean. (Image courtesy of Microsoft.)

The ocean, normally used for storing sushi, may soon be home to something new: datacenters.

Undersea datacenters are Microsoft’s wettest moonshot, and the company recently concluded phase two of its research project on the concept, Project Natick. On June 7, after two years below the surface, Microsoft dredged up its experimental subsea datacenter off the coast of Scotland’s Orkney Islands. Christened Northern Isles, the once-pristine submarine cylinder was encrusted with barnacles and other sea scum obscuring the Microsoft logo.

Northern Isles, before and after. (Images courtesy of Microsoft.)

Why Does Microsoft Want to Put Datacenters in the Ocean?

Northern Isles was deployed on June 21, 2018, at the European Marine Energy Center in Northern Scotland. It was retrieved two years later in June 2020. (Source: Google Maps.)

There are three main reasons:
  1. Energy. Up to 40 percent of a land-based datacenter’s energy use goes towards cooling the electronics. By putting a datacenter on the sea floor, you can take advantage of more efficient water cooling.

  2. Latency. Half of the people on the planet live within 100 km of a coast. By putting datacenters just offshore, you gain latency advantages over remote datacenters in the middle of nowhere.

  3. Scalability. Land-based datacenters must be individually designed for a specific location. With the sea’s uniform environment, you can deploy identical datacenter modules wherever they’re needed.

The Origins of Project Natick

Sean James, a former U.S. Navy submarine sailor turned Microsoft employee, first floated the concept of an undersea datacenter in 2013. Microsoft’s Special Projects team was tasked with evaluating the idea, and Project Natick was born (named after the town of Natick, Massachusetts, for no particular reason).

In 2015, Microsoft completed phase one of Project Natick, meant to demonstrate the feasibility of subsea datacenters. The prototype was a vessel called the Leona Philpot (named for a Halo character), which spent 105 days submerged in the Pacific Ocean off the coast of San Luis Obispo, California. The Leona Philpot was a success, proving that datacenters could indeed function beneath the waves.

The Leona Philpot was deployed in August 2015 off the coast of California. (Image courtesy of Microsoft.)

Phase two of Project Natick sought to evaluate the logistics and economics of commercializing an undersea datacenter. Northern Isles was a full-sized vessel and a fully functional datacenter, whereas the smaller Leona Philpot consisted mostly of dummy loads.

Now, after two years under the sea, the Northern Isles datacenter appears to have been just as much of a success as its predecessor.

How Does an Undersea Datacenter Work?

An undersea datacenter is fundamentally the same as its land-based counterpart: a container filled with computing and storage equipment that lets us all stream shows on Netflix. Northern Isles packs 864 servers and 27,600 terabytes of storage into a 40-foot-long pressure vessel (roughly the size of a standard shipping container).

Because Microsoft has more experience building software than submarines, it enlisted the help of France-based Naval Group to design and manufacture the Northern Isles vessel.

Construction of Northern Isles at a Naval Group facility in Best, France. (Image courtesy of Microsoft.)

Like other datacenters, undersea datacenters must include cooling infrastructure. Most datacenters use fans to circulate cold air over the servers, then compressors to re-cool that air when it gets too hot. The compressors consume a lot of energy, and these cooling systems can account for up to 40 percent of a datacenter’s total energy use.

Many newer datacenters use an alternative technique called free-air cooling. Rather than re-cooling the same air over and over, free-air cooling brings in outside air to cool the servers, avoiding much of the compressors’ energy draw and reducing cooling overhead to only 10 to 30 percent.

In contrast, Northern Isles kept its electronics cool with a heat exchanger system of the same type found on submarines. Internal heat exchangers transferred heat between the servers and a liquid coolant, which was then pumped to heat exchangers on the outside of the vessel and transferred to the ocean. The colder the ocean, the better this solution works. At Northern Isles’ depth of 36 meters (118 feet), the ocean temperature is consistently below 15 °C.

The Leona Philpot, which used this same heat exchanger method, achieved an energy overhead of just 3 percent—the lowest of any datacenter known to the Natick team. Microsoft hasn’t revealed the cooling overhead of Northern Isles, but it’s safe to assume a similarly low value.
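To make those overhead percentages concrete, cooling overhead is simply the cooling load’s share of the datacenter’s total energy draw. A minimal sketch (the wattages here are illustrative assumptions, not Microsoft’s published figures):

```python
def cooling_overhead_pct(cooling_kw: float, it_kw: float) -> float:
    """Cooling energy as a percentage of total datacenter energy draw."""
    return 100 * cooling_kw / (cooling_kw + it_kw)

# Illustrative cooling loads alongside a 240 kW IT load (assumed figures):
print(round(cooling_overhead_pct(160.0, 240.0)))  # compressor-based cooling: 40
print(round(cooling_overhead_pct(7.4, 240.0)))    # seawater heat exchangers: 3
```

The same IT load with a far smaller cooling load is what drives the overhead from around 40 percent down to single digits.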

In order for an underwater heat exchanger system to work properly, it has to be protected from what’s called biofouling, the natural process of sea life claiming eminent domain on anything left in the ocean for longer than an hour. If the surfaces of the heat exchanger were to be overgrown by sea life, the system would become less effective.

Naval Group’s Stephane Gouret holding a sea urchin that took up residence on Northern Isles. (Image courtesy of Microsoft.)

Microsoft tested the use of antifouling coatings and active deterrents like sound and light to prevent biofouling on the heat exchanger surfaces. These techniques proved effective on the Natick prototypes, but the team is continuing to research biofouling prevention.

Other than on the heat exchangers, Microsoft is happy to provide a home for sea life—in fact, the Natick team has filed a patent application for using underwater datacenters as artificial reefs.

Will Project Natick Turn the Sea Into a Hot Tub?

Most of the energy pumped into a datacenter’s IT equipment winds up as heat, so you may be worried that undersea datacenters like Northern Isles will cause ocean temperatures to rise. And they will, the same way putting a drop of Tabasco in the ocean will make it spicier.

“The water just meters downstream of a Natick vessel would get a few thousandths of a degree warmer at most,” explained the Natick team in a 2017 article. Any heat coming off a Natick vessel is quickly dissipated by ocean currents, the team claims.

OK, but there’s one question left to ask: how many undersea datacenters would you have to put in the ocean to heat it to a boil?

Northern Isles consumes 240 kW of power, so let’s take that as its heat output for the sake of simplicity. In fact, let’s take a lot of things for the sake of simplicity and pretend the ocean is a uniform body of water with a mass of 1.36 × 10²¹ kg, a temperature of 3.52 °C, a specific heat capacity of 3993 J/kg·°C, and a boiling point of 406.05 °C (it’s under a lot of pressure—an average of 367 atm—hence the high boiling point).

Now we plug these numbers into the classic thermodynamics equation:

Q = mcΔT

Where Q is the energy needed to change the temperature T from one value to another, given a mass m and specific heat capacity c. In our case:

Q = (1.36 × 10²¹ kg) × (3993 J/kg·°C) × (406.05 °C − 3.52 °C)

The energy works out to 2.2 × 10²⁷ J, or 6.1 × 10¹¹ TWh.

To put that in perspective, last year the entire planet burned through about 1.6 × 10⁵ TWh. If we kept that up, it would take us 3.8 million years before we could muster enough energy to do the job.

But let’s assume we have the energy at our disposal. One Northern Isles datacenter can heat the ocean at a rate of 240 kW, which amounts to about 0.0021 TWh of heat energy per year. At that rate, bringing the oceans to a boil within a single year would take roughly 290 trillion Northern Isles vessels working together. And a ton of spaghetti to make it worth our while.
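The arithmetic above is easy to check in a few lines of Python. All of the figures below are the article’s own simplifying assumptions about a uniform ocean:

```python
# Simplified ocean model from the text above.
mass_kg = 1.36e21   # total ocean mass, kg
c = 3993.0          # specific heat capacity, J/(kg*degC)
t_start = 3.52      # mean ocean temperature, degC
t_boil = 406.05     # boiling point at ~367 atm of pressure, degC

# Q = m * c * delta-T
Q_joules = mass_kg * c * (t_boil - t_start)   # ~2.2e27 J

J_PER_TWH = 3.6e15                            # 1 TWh expressed in joules
Q_twh = Q_joules / J_PER_TWH                  # ~6.1e11 TWh

# Years of global energy production (~1.6e5 TWh/yr) needed to supply it:
years = Q_twh / 1.6e5                         # ~3.8 million years

# Heat one 240 kW vessel dissipates per year, in TWh
# (kW * hours gives kWh; 1e9 kWh = 1 TWh):
vessel_twh_per_year = 240 * 8760 / 1e9        # ~0.0021 TWh

# Vessels needed to boil the ocean in a single year:
vessels = Q_twh / vessel_twh_per_year         # ~2.9e14, i.e. ~290 trillion
```

Running the numbers reproduces the article’s figures to the stated precision.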

It’s All About the Environment

You don’t have to worry about undersea datacenters turning the oceans into a hot tub, and the Natick team is confident that any local heating effects will be insignificant. However, that doesn’t mean we should stop investigating.

“Marine spatial planning protocols and the licensing of activities would still have to be followed with any associated Environmental Impact Assessment having to be performed,” cautions marine biologist Dr. Gordon Watson.

Despite these potential concerns, Microsoft is positioning Project Natick as a green solution (or should that be blue?). Besides using less energy than a traditional datacenter, Northern Isles was also 100 percent powered by renewable energy from wind, solar, tidal, and wave sources. Microsoft anticipates that future undersea datacenters could be co-located with offshore wind farms or other local sources of renewable energy.

Is the Ocean Really a Good Place for a Datacenter?

The ocean may actually be the best place for a datacenter. Though Microsoft is still analyzing the data on Northern Isles, there are many advantages to placing datacenters in the deep:

It’s spacious. Land is expensive, especially close to major population centers. The sea floor near the coast provides virtually unlimited space for datacenters that can remain close to major cities.

It’s secure. Traditional datacenters aren’t easy to break into, but you know what’s even harder to break into? A datacenter on the bottom of the ocean. At 36 meters deep, Northern Isles is just within the range of human divers with normal equipment (that’s why that depth was chosen—just in case someone needed to go down there). Commercial deployments would be even deeper.

It’s reliable. Northern Isles was a lights-out datacenter, i.e., it wasn’t designed for anyone to go inside. The sealed, pitch-black vessel was filled with dry nitrogen at 1 atm of pressure. With no corrosive oxygen and no technicians interfering with components, the computing equipment in Northern Isles experienced a failure rate just one-eighth that of an identical replica on land. The Natick team is still studying the reasons for the low failure rate, but the colder operating temperature in Northern Isles may also have been a contributing factor.

The servers on the Northern Isles vessel were eight times more reliable than the same equipment on land. (Image courtesy of Microsoft.)

What’s Next for Undersea Datacenters?

Microsoft hasn’t revealed any details on Natick phase three. We don’t know if the project is set to continue, or if it’s dead in the water. But unless there are serious setbacks Microsoft is clamming up about, Project Natick looks to be a continuing success story.

“If there is a next phase, it would be a pilot,” explained Ben Cutler, Natick’s Project Manager, in a 2018 episode of the Microsoft Research podcast. “And now we’re talking to build something that’s larger scale. So, it might be multiple vessels. There might be a different deployment technology than what we used this time, to get greater efficiency. Those are things that we’re starting to think about, but right now, we’ve got this great thing in the water [Northern Isles] and we’re starting to learn.”

A commercial Natick vessel would probably look a lot like Northern Isles, which was designed for easy transportation and deployment (less than 90 days from factory to operation, according to Microsoft). Northern Isles was designed for up to five years of maintenance-free operation, after which a commercial vessel would be hauled up and fitted with new servers before being submerged once more. After a few iterations of this, the vessel would be recycled and the seabed restored to its natural state.

For now, we’re still swimming in the shallow end of undersea datacenters. But as datacenter usage continues to go up, there may be no better place to go than down.

For more on how Microsoft is experimenting with datacenters, read The Pros and Cons of Hydrogen Fuel Cells as Backup Generators.