New Algorithm Provides 10x Speed-Up for Combinatorial Optimization

Visualizing the new simulated bifurcation algorithm, which is up to 10 times faster than existing methods. (Image courtesy of Goto et al.)

Many problems are easy to verify but difficult to solve. If I give you two sets of numbers, A and B, and ask whether the numbers in A sum to the same total as the numbers in B, you could pull out a calculator and tell me the answer in moments. But if I give you a single set C and ask you to split it into two subsets with equal sums, you would have a much harder problem on your hands.
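
To make that asymmetry concrete, here's a minimal Python sketch (the function names and numbers are purely illustrative): checking a proposed split takes one pass over the numbers, while finding a split from scratch by brute force means trying up to 2^n subsets.

```python
from itertools import combinations

def verify_split(a, b):
    """Quick to check: do the two sets sum to the same value?"""
    return sum(a) == sum(b)

def find_split(c):
    """Hard to solve: brute-force search over all subsets of c."""
    total = sum(c)
    for r in range(len(c) + 1):
        for subset in combinations(c, r):      # up to 2^n candidates
            if 2 * sum(subset) == total:       # subset sums to half the total
                rest = list(c)
                for x in subset:
                    rest.remove(x)
                return list(subset), rest
    return None  # no equal-sum partition exists

print(verify_split([3, 1, 4], [2, 6]))   # True (both sum to 8)
print(find_split([3, 1, 4, 2, 6]))       # ([2, 6], [3, 1, 4])
```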

It’s a famous open question in mathematics whether every problem whose solution can be quickly verified (a class called NP) can also be quickly solved (a class called P).1 It’s strongly suspected that these are distinct problem classes, that is, that P ≠ NP, but this has never been proven. If you can settle the question one way or the other, there’s a million dollars in it for you.

Until then, NP-hard problems like the number partitioning problem above remain difficult to solve exactly. But engineers are often more interested in practical results than theoretical perfection, and fortunately there are many heuristics that can find approximate solutions to these problems. Better solutions may be out there, but we don’t have infinite time to look, so we settle for the approximation.

Researchers from Toshiba have recently developed a new heuristic for a class of hard problems known as combinatorial optimization problems. These problems all involve selecting the best option from a vast array of possibilities. Number partitioning is one such problem. Another classic example is the traveling salesman problem, which asks for the shortest route a salesman can take to visit every city on his list and return to where he started.
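
For a sense of how quickly the space of possibilities explodes, here's a hypothetical brute-force solver for a tiny traveling salesman instance: with n cities there are (n-1)! routes to check, which becomes hopeless long before n reaches real-world sizes. The distance matrix below is made up for illustration.

```python
from itertools import permutations

# Symmetric distance matrix for 4 cities (illustrative numbers)
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def route_length(route):
    """Total length of a closed tour that starts and ends at city 0."""
    tour = (0,) + route + (0,)
    return sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))

# Try every ordering of the remaining cities: (n-1)! possibilities
best = min(permutations(range(1, 4)), key=route_length)
print(best, route_length(best))   # (1, 3, 2) with length 80
```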

The new heuristic is called simulated bifurcation, and it’s exciting in two ways: it significantly outperforms existing heuristics for combinatorial optimization, and it can do so using conventional computing systems (i.e., it doesn’t rely on specialized hardware like quantum computers). In this article, we’ll take a closer look at simulated bifurcation and its potential applications.

Why Care About Combinatorial Optimization?

The traveling salesman problem isn’t just a game mathematicians play for fun; it’s got clear real-world implications. Imagine you’re the salesman trying to plan your route—every extra mile you travel is an extra cost in time and money. It’s a no-brainer to look for the shortest route. The better your heuristic for doing so, the more time and money you can save.

It’s not just salesmen who can benefit from better heuristics. Combinatorial optimization problems crop up all over the place, and being able to solve them—or rather, approximate a solution—has tremendous utility in finance, logistics, electronics, transportation, molecular biology and many more diverse domains.

It turns out that many combinatorial optimization problems can be mapped onto a single problem called the Ising problem. This is a problem from statistical physics: finding the ground state, the lowest-energy arrangement of spins, in the Ising model of ferromagnetism. Let’s not get bogged down in the details here; the crux of the matter is that the Ising problem is itself a combinatorial optimization problem, and many other combinatorial optimization problems can be reformulated as instances of it.

In other words, if you can solve the Ising problem, you can solve the traveling salesman problem, the number partitioning problem, and many other practical yet difficult problems.
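
As a concrete example of such a mapping, here is a short Python sketch (the variable names are ours, and the energy formula uses one common sign convention) showing how number partitioning becomes an Ising problem: set the coupling between spins i and j to J_ij = -a_i * a_j, and the spin configuration with the lowest Ising energy corresponds to the most balanced split.

```python
import numpy as np

def ising_energy(J, s):
    """Ising energy E = -1/2 * sum_ij J_ij * s_i * s_j for spins s_i in {-1, +1}."""
    return -0.5 * s @ J @ s

# Number partitioning: split the numbers a into two subsets with equal sums.
a = np.array([3, 1, 4, 2, 6])

# Couplings for the equivalent Ising problem (no self-coupling on the diagonal)
J = -np.outer(a, a)
np.fill_diagonal(J, 0)

# Spin s_i = +1 puts a_i in subset A; s_i = -1 puts it in subset B.
s = np.array([+1, +1, +1, -1, -1])   # A = {3, 1, 4}, B = {2, 6}
print(ising_energy(J, s))            # lowest possible energy here, because...
print(abs(a @ s))                    # ...the two subset sums differ by 0
```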

The Ising Problem

It happens to be true of many hard problems, the NP-complete ones, that if you can solve one of them, you can solve all of them. The Ising problem isn’t special in this regard. What makes it so useful is that it can be implemented, or simulated, on many different kinds of physical systems.

Quantum computers, for example, can solve the Ising problem via techniques called quantum annealing and quantum adiabatic optimization. Alternatively, special-purpose optical systems called Coherent Ising Machines (CIMs) can simulate the Ising problem using laser pulses. Of course, conventional digital computers can also be used. A popular digital algorithm for the Ising problem is called simulated annealing.
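
To give a sense of what the digital baseline looks like, here is a minimal, generic simulated annealing sketch for the Ising problem in Python. The cooling schedule and parameter values are illustrative choices of ours, not those of any particular published solver.

```python
import numpy as np

def simulated_annealing(J, steps=10_000, T_start=5.0, T_end=0.01, seed=0):
    """Anneal random +/-1 spins toward a low-energy Ising configuration.

    J is a symmetric coupling matrix with zeros on the diagonal.
    """
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    for T in np.geomspace(T_start, T_end, steps):    # geometric cooling schedule
        i = rng.integers(n)
        # Energy change from flipping spin i: dE = 2 * s_i * sum_j J_ij * s_j
        dE = 2 * s[i] * (J[i] @ s)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]                             # accept the flip
    return s
```

With the couplings J built as in the number partitioning example above, the returned spins describe an approximate partition.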

Toshiba’s new heuristic, simulated bifurcation, is an approach to the Ising problem that falls into that last category: it can be implemented on digital circuits like FPGAs or GPUs. Better yet, simulated bifurcation outperforms simulated annealing, the current top digital algorithm. But it doesn’t stop there. Simulated bifurcation also outperforms state-of-the-art CIMs, which are expensive purpose-built systems. The researchers didn’t test the new heuristic against quantum approaches, though quantum computing still has a long way to go before being practical (despite some contrary claims).

Simulated Bifurcation

Researchers Hayato Goto, Kosuke Tatsumura and Alexander R. Dixon presented simulated bifurcation in a paper published last year: Combinatorial optimization by simulating adiabatic bifurcations in nonlinear Hamiltonian systems. The new algorithm is based on Goto’s previous work describing how quantum nonlinear oscillators can be used to solve combinatorial optimization problems. Simulated bifurcation dispenses with the need for quantum hardware and instead numerically simulates classical nonlinear oscillator systems whose dynamics map onto the Ising problem.

We won’t go into the mathematics of simulated bifurcation, but there are several features of the algorithm worth pointing out. We’ve mentioned that, like simulated annealing, simulated bifurcation can be implemented on conventional computers. Unlike simulated annealing, simulated bifurcation can take full advantage of parallel processing, giving it good potential for acceleration.
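
To see why it parallelizes so well, note that the algorithm represents each spin as an oscillator whose update depends on all the others only through a single matrix-vector product. Below is a rough, unofficial NumPy sketch of the adiabatic simulated bifurcation dynamics as we read them from the paper; the time step, coupling strength and ramp schedule here are illustrative assumptions, not the authors' tuned values.

```python
import numpy as np

def simulated_bifurcation(J, steps=1_000, dt=0.01, a0=1.0, c0=0.2, seed=0):
    """Rough sketch of adiabatic simulated bifurcation for the Ising problem.

    Each spin is modeled as a nonlinear oscillator with position x_i and
    momentum y_i. A pump amplitude a(t) is slowly ramped up, each oscillator
    bifurcates toward a positive or negative position, and the sign of x_i
    at the end is read out as the spin s_i. All parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 0.01 * rng.standard_normal(n)      # oscillator positions
    y = 0.01 * rng.standard_normal(n)      # oscillator momenta
    for t in range(steps):
        a = a0 * t / steps                 # slowly increasing pump amplitude
        # One matrix-vector product couples all oscillators; everything else
        # is elementwise, so the whole state can be updated in parallel.
        y += (-(x**3 + (a0 - a) * x) + c0 * (J @ x)) * dt
        x += a0 * y * dt
    return np.sign(x)
```

That structure, one big matrix-vector product plus independent per-oscillator arithmetic each time step, is exactly the kind of workload that FPGAs and GPUs accelerate well.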

The researchers implemented simulated bifurcation on a single FPGA to solve a 2,000-node Max-Cut problem, a combinatorial optimization problem reducible to the Ising problem. Compared to both simulated annealing and CIM techniques, simulated bifurcation was not only quicker but found better approximate solutions. The graph below shows how each approach performed on the problem (the goal was to minimize the Ising energy as quickly as possible).

(Image courtesy of Goto et al.)

Simulated bifurcation (red) reaches the lowest energy in the shortest time (bold lines are the averages; dashed and dotted lines are the best and worst cases, respectively). Translating these results into Max-Cut terms makes the comparison even clearer (below, the goal was to maximize the cut value in the least amount of time).

(Image courtesy of Goto et al.)

In 0.5 ms, simulated bifurcation delivered a higher average cut value than a CIM did in 5 ms or simulated annealing did in 50 ms. The researchers concluded that simulated bifurcation is about 10 times faster than CIMs. In another comparison, the researchers implemented simulated bifurcation on an 8-GPU cluster to solve a 100,000-node Max-Cut problem. There was no CIM comparison, but simulated bifurcation was about 10 times faster than the best simulated annealing algorithm.
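
For readers wondering how an Ising solver's output turns into a cut value, the conversion is direct: each spin says which side of the cut its node sits on, and the cut value is the total weight of the edges that cross. A small illustrative sketch (the graph and names are hypothetical):

```python
import numpy as np

def cut_value(weights, spins):
    """Weight of the cut defined by a +/-1 spin configuration.

    An edge (i, j) is cut when spins[i] != spins[j]; with +/-1 spins that is
    exactly when (1 - s_i * s_j) / 2 equals 1.
    """
    return 0.25 * np.sum(weights * (1 - np.outer(spins, spins)))

# Tiny example: a 4-node cycle with unit edge weights
W = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
])
s = np.array([+1, -1, +1, -1])   # alternate sides around the cycle
print(cut_value(W, s))           # 4.0: every edge is cut
```

Maximizing the cut is equivalent to minimizing the Ising energy with couplings J_ij = -w_ij under the sign convention used earlier, which is why a good Ising heuristic is automatically a good Max-Cut heuristic.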

The Simulated Bifurcation Machine

According to the researchers, simulated bifurcation could be made to perform even better on custom hardware like ASICs or multi-FPGA systems. In the meantime, even with conventional digital computers, the heuristic seems to be a significant step up from existing techniques for combinatorial optimization.

Toshiba, which sponsored the research, is betting that simulated bifurcation will prove its value. The company has commercialized the heuristic with a cloud offering they call the simulated bifurcation machine, which gives customers access to a set of solvers for combinatorial optimization problems expressed as the Ising problem. If you’re a salesman scratching your head about your upcoming travel plans, the simulated bifurcation machine may be just what you need.



1. Here, “quickly” means that the problems are solvable in polynomial time.