Mercury Systems Plans to Improve Life-Cycle and Network Management of U.S. Navy Control Systems

A recent whitepaper by Rick Studley, chief technologist at Mercury Systems, outlines how hyper-composable IT systems can lower costs in U.S. Navy combat, weapon and C4ISR (command, control, communications, computers, intelligence, surveillance and reconnaissance) systems. The proposed information technology system draws on a combination of the open computing strategies used by social media giants, micro container strategies inspired by the shipping container industry and common extensible platforms used by the automotive industry. Mercury Systems argues that this solution would lower the cost of updating what the whitepaper describes as a disconnected environment full of institutional inefficiencies.

Key topics of discussion include:

  • Integrating previously siloed systems with a common, open-architecture IT infrastructure enables the use of shared computing and storage assets, which lowers costs
  • Composable systems and an IT design that abstracts the hardware component infrastructure allow system components to be adjusted rapidly to meet changing needs
  • A modular and scalable architecture facilitates continuous capability updates and lowers overall total cost of ownership

There are a few terms worth explaining if you don’t come from a defense background: hyper-composable IT architecture, rugged servers and the Modular Open Systems Approach (MOSA).

Hyper-composable IT Architecture

Hyper-composable architecture systems are built with smart infrastructure capabilities: they mold the underlying storage to what each mission-critical application actually intends to do, optimizing performance and protection levels accordingly. By decoupling storage provisioning from the effort spent managing the infrastructure, administrators can focus on optimizing the performance and price of every application. That includes the cost of updating and maintaining each application individually and collectively as a group, to sustain the highest level of performance and the lowest rate of failure.
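
In concrete terms, "composing" means treating compute and storage as shared pools that software carves up per application, rather than hard-wiring hardware to one system. A minimal sketch of that idea (all class names, pool names and figures here are hypothetical illustrations, not Mercury's actual API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResourcePool:
    """A disaggregated pool of hardware resources (e.g. CPU cores, TB of storage)."""
    name: str
    capacity: int
    allocated: int = 0

    def allocate(self, amount: int) -> bool:
        if self.allocated + amount > self.capacity:
            return False
        self.allocated += amount
        return True

    def release(self, amount: int) -> None:
        self.allocated = max(0, self.allocated - amount)

@dataclass
class ComposedSystem:
    """A logical system assembled from shared pools to match application intent."""
    app: str
    demands: dict  # pool name -> amount reserved

def compose(app: str, demands: dict, pools: dict) -> Optional[ComposedSystem]:
    """Reserve resources across pools; roll back if any pool can't meet the demand."""
    taken = []
    for name, amount in demands.items():
        if not pools[name].allocate(amount):
            for done_name, done_amount in taken:  # undo partial allocation
                pools[done_name].release(done_amount)
            return None
        taken.append((name, amount))
    return ComposedSystem(app, demands)

pools = {"cpu": ResourcePool("cpu", 64), "storage": ResourcePool("storage", 100)}
radar = compose("radar-tracking", {"cpu": 32, "storage": 20}, pools)
```

The rollback step is the point: because nothing is physically dedicated, a failed or retired application simply returns its resources to the shared pools for the next system to use.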

Rugged Servers

As you can imagine, for military IT systems, failure is not an option. To prevent failures that could cost people their lives, defense programs turn to rugged server providers with defense domain expertise. As the term implies, rugged server providers design and manufacture servers that are “hardened”: they integrate advanced security features for a more secure operating environment and protect critical IT systems from damage in extreme environments. That means protection from the shock and vibration of attacks or extreme natural events, minimized size, weight and power (SWaP), reduced acoustic noise, and protection from extreme pressure and electromagnetic interference.

Modular Open Systems Approach (MOSA)

To understand the essence of the whitepaper, it helps to know the Department of Defense’s (DoD) MOSA initiative, which frames how Mercury’s proposed solution would solve the interoperability challenges created by the Navy’s disparate systems.

The DoD underscores the importance of its modular open systems approach across the lifecycle of its major defense acquisition programs (MDAP) and major automated information systems (MAIS) to keep up with rapidly evolving technology and advancing threats, which demand faster cycle times for modifying warfighting capabilities. MOSA system designs share several important characteristics: system interfaces follow verifiable common standards; the system is highly cohesive yet loosely coupled; and it contains several modules that independent vendors can compete to design and build to the DoD’s specifications.

MOSA also benefits military-grade IT systems by letting the DoD choose among defense companies that develop innovative systems, accelerating and simplifying the integration of new capabilities into existing systems and improving interoperability, while reusing components, modules and technology from any supplier over the system’s life cycle.

The Navy’s Disparate Control Systems Are a Problem

If every control system used by the Navy is independent of all others, yet each requires costly certification for similar if not identical extreme environments, maintaining them separately is far costlier than leveraging a common processing or display system. From a security standpoint, wouldn’t a high-value target like the Navy benefit from a decentralized control system infrastructure rather than a centralized one? Or does a decentralized network of control systems invite attack or degraded performance, because each system is not updated, refreshed or installed through a centralized network and infrastructure?

Interestingly, the commercial technology, modules and components that ensure infrastructure stability at social media giants (Facebook, LinkedIn, Twitter, etc.) change rapidly to balance performance and reliability and to prevent systemic failures or catastrophic drops in performance. To help the Navy radically improve its approach, Mercury Systems has integrated various commercial elements from the private sector into its proposed solution.

Mercury Systems Proposes a Solution

Mercury Systems proposes a centralized system architecture based on a common modular equipment rack for the Navy, which it calls a “Navy standard Hyper Infrastructure.” There are several elements to this proposal, including HyperComposable compute, storage, networking, graphics and special function modules. It also includes a HyperScale Leaf and Spine Interconnect Fabric that would serve several mission-critical elements, including machinery control systems, C4ISR, combat and weapons systems.

HyperScale Leaf and Spine Interconnect Fabric features equidistant endpoints, zero jitter, low latency, nonblocking operation, zero packet loss and wire-rate performance for all packet sizes and port combinations, dividing traffic fairly in all scenarios, with higher-quality buffering and predictable buffer allocation. (Image courtesy of Mercury Systems.)

These HyperScale Leaf and Spine Interconnect Fabric systems are made from a small set of building blocks that include common storage, I/O and computing modules. According to Mercury Systems, these building blocks can scale up to build systems large enough to handle the vast infrastructure of the Navy.

For the Navy’s existing legacy network topologies, Mercury’s HyperComposable systems could theoretically be deployed in a way similar to modern blade server or rack-mount systems. InfiniBand or Ethernet fabrics would serve as the core network, adopted to avoid pain points during new construction and numerous technology insertion cycles. The Ethernet- or InfiniBand-based core network would last for at least half of the lifecycle of the platform or ship itself.

A problem that Mercury Systems plans to avoid is one that commonly occurs at the outset of any implementation or technology insertion cycle. When total cost of ownership calculations are applied to installing and implementing new technology platforms, the cost of support, training, software certifications and shipboard industrial work is sometimes skipped over or underestimated. The cost of taking a system or systems offline for technology refreshes is sometimes missed as well. Some costs, like unexpected maintenance issues, are simply difficult to factor in, and financial forecasting for periods when systems must be down is harder still. It is therefore challenging to fold such costs into total ownership calculations.
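
As a rough illustration, a fuller total cost of ownership estimate sums not just the visible acquisition costs but also the categories the article says are often skipped. All figures and category names below are hypothetical, purely to show the shape of the calculation:

```python
# Hypothetical cost categories (in $K) for one technology insertion cycle.
visible_costs = {
    "hardware": 1200,
    "software_licenses": 300,
}
often_omitted_costs = {
    "support": 250,
    "training": 120,
    "software_certification": 400,
    "shipboard_industrial_work": 800,
    "system_downtime": 350,  # the hardest to forecast, per the article
}

def total_cost_of_ownership(*cost_groups):
    """Sum every category across all groups into a single TCO figure."""
    return sum(v for group in cost_groups for v in group.values())

naive_tco = total_cost_of_ownership(visible_costs)
full_tco = total_cost_of_ownership(visible_costs, often_omitted_costs)
print(full_tco - naive_tco)  # the gap a naive estimate misses: 1920
```

With these made-up numbers, the omitted categories exceed the visible ones, which is exactly why an estimate built only on hardware and licensing can miss more than half the real cost.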

The Compatibility of Leveraging Successful Strategies from Big Tech, Shipping and Automotive Manufacturing Industries

According to Mercury Systems, strategies employed by massive tech companies such as Microsoft (Azure Cloud), Amazon (Amazon Web Services), Google and Facebook include lasting and flexible approaches based on open-computing structures modified for reuse and synchronous technology insertion among centralized control systems. Defining and adapting the open, scalable and highly modular network architectures used by tech giants to build and deliver shipboard combat systems for the Navy would create more uniform data centers and networks that could dramatically lower the cost of maintaining and adapting future systems as universal changes become necessary.

As noted in the Studley white paper, the Navy has higher security requirements than any commercial entity, including Google, Facebook and LinkedIn. In fact, Mercury Systems proposes leveraging commercially successful strategies from three industries. The firm argues that the system it has created is based on a unique combination of successful strategies from the social media, shipping and automotive manufacturing industries. The company contends that its unique system will work not only for the Navy but also for the entire DoD.

From Social Media: Data Center Optimization

From social media, Mercury Systems is borrowing elements from the Hyperscale Cloud, the Open Compute Project (OCP) and the Open19 Foundation. Hyperscale refers to the capability of an IT infrastructure to scale up seamlessly as demand increases at a rapid pace. The Open Compute Project was created and is maintained by major tech companies like IBM, Cisco, Alibaba, Seagate Technology, Google and Facebook to optimize scalable computing by openly sharing hardware designs that can deliver the best performing servers, storage and data centers.

The elements of a common modular infrastructure for the Navy include a disaggregation of storage, servers and networks through open, modular, software-defined everything (SDx), where common software controls numerous connected machines while simultaneously directing different types of user activity. (Image courtesy of Mercury Systems.)

Consider the logistics: with over 2 billion Facebook users, loading a single user’s home page is not as simple as the individual experience suggests. When anyone clicks on their home page, thousands of pieces of data are processed across hundreds of servers, and the result is delivered in seconds. Perhaps even more incredible, over 80 percent of daily users are located outside North America, Facebook’s home continent.

This real-time accomplishment would be more difficult using the architecture of traditional data centers. Data centers were once architected on a three-tiered topology consisting of core routers, aggregation routers and access switches. Though widely deployed, the legacy three-tiered architecture is no longer suitable for the workload and latency demands of hyperscale data center campus environments like those of Facebook. Because of this inadequacy, hyperscale data centers have migrated to a leaf-and-spine architecture, where the network is divided into two stages. The spine stage aggregates and routes packets to their destination, and the leaf stage connects end hosts and load-balances connections across the spine.
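
The two-stage design is easy to model: when every leaf switch links to every spine switch, any two leaves are exactly two switch hops apart (leaf to spine to leaf) via any shared spine, which is what makes endpoints "equidistant." A minimal sketch of that property:

```python
import itertools

def build_leaf_spine(num_spines: int, num_leaves: int):
    """Return the link set of a full-mesh leaf-spine fabric."""
    links = set()
    for s in range(num_spines):
        for leaf in range(num_leaves):
            links.add((f"spine{s}", f"leaf{leaf}"))
    return links

def hops_between_leaves(links, a: str, b: str) -> int:
    """Two distinct leaves are two hops apart via any spine they share."""
    if a == b:
        return 0
    spines_a = {s for s, leaf in links if leaf == a}
    spines_b = {s for s, leaf in links if leaf == b}
    return 2 if spines_a & spines_b else -1  # -1: no path in this fabric

links = build_leaf_spine(num_spines=4, num_leaves=8)
pairs = itertools.combinations((f"leaf{n}" for n in range(8)), 2)
# Every leaf pair is equidistant: exactly two hops, regardless of which pair.
assert all(hops_between_leaves(links, a, b) == 2 for a, b in pairs)
```

The uniform hop count is also why latency is predictable: capacity scales by adding spines (more parallel paths) without changing the distance between any two endpoints.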

Facebook and most commercial data centers thus use HyperScale Spine and Leaf Interconnect fabrics to ensure low-latency, high multipath-bandwidth experiences on an infrastructure that is extraordinarily resistant to hardware failures of any kind. The ability to provide continuous, consistent service-level agreements (SLAs) for each customer, regardless of the millions of other simultaneous users, fuels these organizations’ success. Virtually every contemporary data center relies on such fabrics. Open compute designs used by Facebook to optimize its data centers saved the company more than $1 billion between 2011 and 2014.

From the Shipping Industry: The Intermodal Shipping Container

In shipping, products were once handled manually in ways that seem incredibly inefficient compared to the International Organization for Standardization (ISO) shipping container.

By standardizing the industry through the introduction of the intermodal shipping container (ISO shipping container), many previous issues with end-to-end shipping, storage and transportation were addressed and optimized for transcontinental shipping. (Image courtesy of indiamart.com.)

Mercury’s standard Hyper Infrastructure with HyperComposable module configurations, or micro containers, is designed to benefit data centers in the same way that ISO containers did transcontinental shipping.

The new XR6 RES HD computing modules feature the latest Intel Xeon Scalable processors plus three additional storage, PCI Express expansion and managed switch modules for mission-critical applications. The XR6 RES HD modules use commercial off-the-shelf (COTS) components and connect to three scalable and extendible RES HD chassis. (Image courtesy of Mercury Systems.)

The HyperComposable micro containers standardize the mechanical mounting, cooling, power, size, volume and environmental resilience of data centers by abstracting physical technologies from the infrastructure. This makes it easier to replace older technology modules with newer technology as it becomes available, avoiding shipboard industrial work altogether. Older modules can then serve as spares for emergencies or even be deployed on non-mission-critical platforms.

Mercury Systems introduced the RES-HD and HDversa to supercharge hyperscale data centers like those used by Google, Facebook, Microsoft and Cisco, streamlining serviceability, scalability to the cloud and extensibility to mission-critical systems. (Image courtesy of Mercury Systems.)

From Automotive Manufacturing: Customizable Common Extensible Platforms

The automotive industry has adopted a group of common extensible platforms that can be altered to manufacture different vehicle types by adding modular assemblies such as cockpits, drivetrains, engines and bodies. In all, 95 percent of the estimated 33 million vehicle units manufactured by the leading 12 original equipment manufacturers (OEMs) will likely be based on just three OEM core platforms. There is plenty of data supporting the cost-effectiveness and reduced time to market of these common extensible platforms.

Analogously, Mercury Systems proposes that a Hyper Infrastructure Common Modular Equipment Rack (CMER) will act in a similar way to the common extensible platforms used by leading OEMs in the automotive industry. If the Navy deployed CMERs at different end points on the HyperScale fabric platform, applications could be developed on a combination of HyperComposable Modules built with Common Modular Subracks (CMSR), virtual machines and Linux containers.
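
One way to picture the common module library idea is a registry of pre-certified micro containers from which deployable rack configurations are composed; anything not in the certified library is rejected before it ever reaches a ship. A sketch of that gating logic (module names and certification IDs are invented for illustration, not drawn from NAVSEA's actual library):

```python
# Hypothetical registry mapping pre-certified modules to their certification IDs.
CERTIFIED_MODULES = {
    "compute-xeon-gen2": "cert-2023-017",
    "storage-nvme-8tb": "cert-2023-041",
    "switch-100gbe": "cert-2022-090",
}

def validate_configuration(modules):
    """A rack configuration is deployable only if every module is pre-certified.

    Returns the certification IDs covering the configuration, or raises if
    any module is missing from the library.
    """
    missing = [m for m in modules if m not in CERTIFIED_MODULES]
    if missing:
        raise ValueError(f"uncertified modules: {missing}")
    return [CERTIFIED_MODULES[m] for m in modules]

certs = validate_configuration(["compute-xeon-gen2", "storage-nvme-8tb"])
```

Because certification attaches to the module rather than to each shipboard installation, the expensive lab work is paid once per module and reused across every platform that composes it.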

Long-Term Cost-Effectiveness and Performance Optimization

Mercury Systems is proposing that the Navy create a common module library, like the NAVSEA Common Source Library, made up of pre-certified micro containers that can last for the duration of an operational environment’s life cycle across all control systems in complete synchrony. (Image courtesy of Mercury Systems.)

The flexibility and high level of customization that characterize both the CMER and CMSR will accommodate multiple technology insertion cycles through the replacement of existing modules aboard a ship. Anything integrated, tested and certified in the lab is intended to run exactly as well on any of the Navy’s target platforms.

These platforms should, in theory, improve performance through centralization and significantly reduce the cost of naval combat and C4ISR systems by implementing open computing strategies with existing technologies and methodologies that have proven extremely successful for leading companies in the social media, automotive manufacturing and shipping industries.

Implications of Mercury Systems’ Proposal Beyond the U.S. Navy for Private Sector Industries

As technology advances in the private sector, solutions similar to the one proposed by Mercury Systems for the Navy are needed to secure new and vulnerable control systems underlying critical systems.

Autonomous Vehicles

If autonomous vehicles are to become as popular as human-driven vehicles, government entities like the U.S. Department of Transportation (DoT) are going to have to adopt DoD industry standards and systems engineering practices. The latest official guidance from both the National Highway Traffic Safety Administration (NHTSA) and the DoT, “Automated Driving Systems 2.0: A Vision for Safety,” actively encourages emulating military control systems to maximize safety. Waymo, Alphabet’s driverless car company, acknowledges that autonomous cars connected to the internet are vulnerable to hackers; the company hasn’t yet connected its system to the internet for fear of suffering a devastating and humiliating hack.

What about critical systems already in play in the private sector? After all, the solution proposed by Mercury Systems is for the control systems of the Navy, which is active 24/7.

The U.S. Power Grid Systems

The U.S. power grid is one such critical system. It is a massive, interconnected machine with a lot of moving parts and vulnerabilities. Could a solution similar to the one proposed by Mercury Systems work for it? There are over 160,000 miles of high-voltage transmission lines and millions of miles of low-voltage distribution lines; energy is generated at over 7,300 power plants and transferred through 55,000 substations. This massive energy network is organized into three major interconnections and operated by 3,000 utility entities and 70 balancing authorities.

Like most societal functions in the last few decades, the grid has become increasingly dependent on information technology, which has improved its overall responsiveness to changing power demand and its ability to integrate new energy sources like solar and wind. This increased dependency on information technology brings new risks, like cyber attacks. Until the devastating Russian attacks on the Ukrainian power grid in 2015 (the first confirmed hack of a power station) and again in 2016 (which cut power for a fifth of Kiev’s residents), cyber attacks on power grids were not taken as seriously, in terms of allocated resources, as they are now.

Bottom Line

As the need for more secure IT infrastructure to protect the critical systems of public sector entities like the Navy continues to grow, it will grow in parallel for private and mixed sector systems like autonomous vehicles and the U.S. power grid. As new threats emerge, critical IT infrastructure solutions like the one proposed by Mercury Systems, based on MOSA guidelines and rugged servers whose modules can be replaced quickly for tighter integration and seamless improvements built on secure, commercially proven strategies, will increasingly become the order of the day.

Mercury Systems has sponsored this post. They have no editorial input to this post. All opinions are mine. —Andrew Wheeler