Autonomous Robots in the Operating Room

Smart Tissue Autonomous Robot, or STAR, during a supervised operation. (Image courtesy of Dr. Axel Krieger.)

We have always been fascinated by robots—from evil robots out to destroy humanity and take over the world (Terminator) to alien robots that can turn into everyday vehicles (Transformers). Famous science fiction author Isaac Asimov even devised the “Three Laws of Robotics,” which are now a basis for the study of ethics in robotics. What we had previously thought to be science fiction is fast becoming reality, as shown by Boston Dynamics.

Robots are already quite prevalent in the medical field and poised to revolutionize it further. A report by Allied Market Research estimated that the global surgical robots market could reach $98.7 million by 2024, growing at a compound annual rate of 8.5 percent from 2017. The rise of artificial intelligence (AI), miniaturization and high-speed processors are among the factors driving these advancements.

The most famous example today is the da Vinci system by Intuitive Surgical. The machine has performed more than 7 million procedures since the company was launched in 1995, and it was one of the first systems to be cleared by the FDA for general laparoscopic surgery. It is essentially a set of tiny, highly dexterous robotic arms used for minimally invasive procedures. Surgeons control the arms from a nearby console while viewing the patient's target anatomy in high-definition 3D, giving them a precision and range of motion beyond normal human capabilities.

Nevertheless, the robots themselves are not actually doing the surgery but helping surgeons perform what is known as robot-assisted surgery. These kinds of robots are known as second-generation surgical robots.

Remarkably, first-generation robots were autonomous and could perform certain procedures without the help of a surgeon. The first, Probot, was developed at Imperial College London and used in a urological procedure in 1991. Another early system, built around a Selective Compliance Assembly Robot Arm (SCARA), was used for a total hip arthroplasty (THA) in 1992. Surgeons, however, wanted robots that would assist them in performing their duties, not replace them. Consequently, the first generation of robots gave way to the second.

According to the Wall Street Journal, research is now underway to devise new automated technologies that can take over repetitive tasks, such as suturing. This will allow surgeons to concentrate on more complicated tasks and prevent mental and physical fatigue, especially during procedures that can go on for many hours.

Let’s take a look at the technologies that are currently in service or development.

Your ROBODOC Will See You Now

The ROBODOC by Think Surgical is a first-generation autonomous robot that is still in use today. It was used for a THA in 1992 and received FDA clearance in 1998. ROBODOC can perform complicated hip and knee operations by converting CT scans of the affected joint into a 3D virtual bone model, which the surgeon uses for preoperative planning. Because the procedure is planned on the 3D model, it can be tailored to each patient's unique anatomy, and an implant can be chosen accordingly.

ROBODOC procedure. (Image courtesy of Biomedhealthtech.com.)

During surgery, the surgeon uses a “digitizer” to locate the patient’s anatomy by selecting points on the bone surface. A monitor displays the general locations of the points to be registered on the bone. These are matched to the bone surface model generated preoperatively by the computer. The surgeon verifies the accuracy of the registration by touching bone surfaces with the digitizer. If the selected location is within the target on the bone surface as shown on the monitor—red crosshair targets on the blue bone surfaces—the surgeon accepts the registration.
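
For intuition, the core of this registration step, finding the rigid transform that best maps the digitized points onto the CT-derived bone model and then checking the residual error, can be sketched in a few lines of Python. This is a generic illustration using the standard Kabsch/SVD method with known point correspondences, not ROBODOC's actual algorithm, and the 0.5 mm tolerance is purely illustrative.

```python
import numpy as np

def register_points(model_pts, probe_pts):
    """Best-fit rigid transform (R, t) mapping digitized probe points
    onto the CT-derived bone model (Kabsch/SVD method, assuming
    known point correspondences)."""
    mc, pc = model_pts.mean(axis=0), probe_pts.mean(axis=0)
    H = (probe_pts - pc).T @ (model_pts - mc)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mc - R @ pc
    return R, t

def registration_error(model_pts, probe_pts, R, t):
    """RMS distance between the transformed probe points and the model;
    this is the quantity the surgeon is effectively verifying by
    touching known bone surfaces with the digitizer."""
    mapped = probe_pts @ R.T + t
    return np.sqrt(((mapped - model_pts) ** 2).sum(axis=1).mean())

# Hypothetical acceptance check (tolerance in mm, illustrative only):
# R, t = register_points(model, probed)
# accept = registration_error(model, probed, R, t) < 0.5
```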

This plan is then imported into the computer-assisted tool, which cuts the specified regions of the bone according to the preoperative plan, essentially acting as a CAD/CAM machine. Meanwhile, the surgeon supervises the procedure by watching the monitor and the cutting tool to ensure that the system is operating properly. The surgeon then places the implant and finishes the procedure.
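
One way to picture the cutting stage is as a constrained-motion problem: the cutter may only remove material inside the planned volume. The sketch below approximates that idea with a voxel-based safety gate. It is a simplified illustration of the concept, not Think Surgical's implementation, and the class name, resolution and usage are invented for the example.

```python
import numpy as np

class CutVolumeGate:
    """Approximate the preoperatively planned cut region as a set of
    voxels and refuse any tool position that falls outside it."""
    def __init__(self, planned_points, voxel_mm=1.0):
        self.voxel = voxel_mm
        pts = np.asarray(planned_points, dtype=float)
        self.allowed = {tuple(v) for v in
                        np.floor(pts / voxel_mm).astype(int)}

    def permits(self, tool_tip_xyz):
        """True if the tool tip lies inside the planned cut volume."""
        key = tuple(np.floor(np.asarray(tool_tip_xyz, dtype=float)
                             / self.voxel).astype(int))
        return key in self.allowed

# Illustrative use: halt the cutter the moment it strays from the plan.
# gate = CutVolumeGate(planned_points)
# if not gate.permits(tool.position()):
#     tool.stop()
```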

A study by Song et al. followed 30 patients who underwent bilateral sequential total knee replacements, with one knee replaced by robotic implantation and the other by conventional implantation. Radiographic results showed that even though the robotic sides had longer operation times and skin incisions, they demonstrated a decreased incidence of blood clots and postoperative bleeding. In terms of long-term benefits, the more precise fit, fill and alignment of the implant resulted in less stress, decreased bone loss and reduced leg-length discrepancies.

STAR Performer

In another step toward autonomous surgery, a set of experiments by researchers from Children's National Hospital and Johns Hopkins University showed off the capabilities of the Smart Tissue Autonomous Robot (STAR). With only minimal guidance, STAR was able to stitch together pieces of intestinal tubing from a pig, both in a lab setting and in a live operation. The researchers claim that the robot can match, or even improve upon, the safety and precision of a human surgeon while damaging less of the surrounding flesh. This is particularly impressive because soft-tissue surgery is difficult to perform, unlike ROBODOC's domain of stiff, nondeformable bone. Irregular soft tissue, such as skin, fat and muscle, can resist a cutting tool and then suddenly give way, causing the tool to make inaccurate cuts. STAR compensated by visually tracking both its intended cutting path and its cutting tool, constantly fine-tuning its plan to accommodate such movement.

The STAR’s vision system relied on near-infrared fluorescent (NIRF) tags placed in the intestinal tissue by the researchers. A specialized NIRF camera tracked the markers while a 3D camera recorded images of the entire surgical field. Combining all this data allowed STAR to make its own plan for the suturing job and to adjust that plan as tissues moved during the operation. Surgeons found that the robot's stitch placement needed to be corrected less often than with either the da Vinci system or manual keyhole surgery.
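
To make that sense-plan-adjust loop concrete, here is a heavily simplified Python sketch. The planning function is runnable as written, while `sense_markers` and `arm` are hypothetical interfaces standing in for STAR's fused NIRF/3D vision and its robot arm; the real controller is far more sophisticated.

```python
import numpy as np

def plan_sutures(markers, spacing=3.0):
    """Space suture targets evenly along the seam traced by the NIRF
    markers (simple linear interpolation between markers, for
    illustration only)."""
    targets = []
    for a, b in zip(markers[:-1], markers[1:]):
        n = max(1, int(round(np.linalg.norm(b - a) / spacing)))
        targets.extend(a + (b - a) * (k + 0.5) / n for k in range(n))
    return targets

def run_suturing(sense_markers, arm, tolerance=1.0):
    """Sense-plan-act loop: marker positions are re-read before every
    stitch so the plan tracks the tissue as it deforms.
    `sense_markers()` and `arm` are hypothetical device interfaces."""
    placed = 0
    while True:
        targets = plan_sutures(sense_markers())  # re-plan on fresh data
        if placed >= len(targets):
            return placed
        target = targets[placed]
        arm.move_to(target)
        # Only count the stitch if the tool verifiably reached the
        # (possibly moved) target within tolerance.
        if np.linalg.norm(arm.tool_tip() - target) < tolerance:
            arm.place_stitch()
            placed += 1
```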

In the next stage of the experiment, the robot went head-to-head with expert surgeons. Both were required to cut a straight 5 cm line and were judged on deviation from the ideal cut line and on the amount of char (burned, damaged flesh) surrounding the incision. STAR made cuts that were closer to the desired length, deviated less from the ideal line and caused less char. Finally, the researchers used STAR to cut a fake tumor made of clay out of a piece of pig fat. STAR performed the requisite cuts with precision, despite a thin layer of tissue placed over the fake tumor to make it harder for the robot to discern its target. Again, STAR was able to accomplish its tasks thanks to markers placed by the researchers beforehand.

These results remain proof of concept, though, as the experiments were limited in scope and performed under tightly controlled conditions.

Skin-to-Skin

One of the demands of surgeons using robots is haptic feedback during surgery to reduce tissue damage. Researchers at the National University of Singapore (NUS) and Intel Corp. are developing an ultra-sensitive robotic silicon finger meant to mimic the sense of touch that surgeons rely on to identify organs, cut tissue and apply the correct amount of force. Called the Asynchronous Coded Electronic Skin (ACES), the device packs 100 small sensors into about 1 square centimeter and can detect touch more than 1,000 times faster than the human sensory nervous system. It can identify the shape, texture and hardness of objects within 10 milliseconds, about 10 times faster than the blink of an eye.

The device is made up of 100 small sensors and is about 1 square centimeter in size. (Image courtesy of National University of Singapore.)

The team drew inspiration from the human sensory nervous system. Unlike the nerve bundles in the human skin, ACES comprises a network of sensors connected through a single electrical conductor. This also differentiates it from existing technologies that have interlinked wiring systems, which can make them prone to damage and difficult to scale up. The technology could potentially be employed in the form of a haptic glove to give surgeons the ability to remotely feel what the robot feels.
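
How can 100 sensors share one conductor without garbling each other? In the published ACES concept, each sensor announces itself with its own asynchronous pulse signature, which the receiver can separate even when events overlap. The toy model below illustrates that principle with random binary codes and correlation decoding; the code length, threshold and encoding scheme are invented for the example and are not NUS's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS, CODE_LEN = 100, 256
# Each sensor gets a unique pseudo-random +/-1 pulse signature.
codes = rng.choice([-1.0, 1.0], size=(N_SENSORS, CODE_LEN))

def transmit(active_sensors):
    """All firing sensors drive the SAME conductor at once: the line
    simply carries the sum of their signatures."""
    line = np.zeros(CODE_LEN)
    for s in active_sensors:
        line += codes[s]
    return line

def decode(line, threshold=0.5):
    """Correlate the shared line against every known signature; a high
    normalized correlation means that sensor fired."""
    scores = codes @ line / CODE_LEN
    return np.flatnonzero(scores > threshold)

# Sensors 3, 42 and 77 touch something at the same instant.
print(decode(transmit([3, 42, 77])))  # expected: [ 3 42 77]
```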

Cruise Control

Minimally invasive procedures require navigating from a small incision to the site that needs to be operated on. This has previously been accomplished through robots controlled by joysticks or guided through the body by external forces such as magnetic fields. A potentially disruptive innovation is a self-navigating robotic catheter, created by bioengineers at Boston Children’s Hospital. The device was inserted into the base of the heart of a pig, from where it propelled itself using a motorized drive system. Using a haptic vision sensor, it navigated along the beating ventricular wall to a leaky valve near the top of the ventricle without a surgeon’s involvement.

The robot in operation. (Image courtesy of Science Robotics.)

Using a navigational technique known as “wall following,” the catheter’s sensor sampled its environment at regular intervals, much like insect antennae or rodent whiskers, and was able to discern whether it was in contact with blood, the heart wall or a valve. Using a tiny camera, it could also judge how hard it was pressing, preventing damage to the heart. This was supplemented by data from preoperative scans and machine-learning algorithms, essentially creating a map of the cardiac anatomy. Automating the navigation allowed the surgeon to concentrate on using the occluder, a small metal plug, to optimize the valve repair.
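
As a rough illustration of wall following, the loop below alternates between sampling contact and steering: hug the wall when touching tissue, turn back toward it when drifting into open blood, and ease off when pressing too hard. The contact classifier and catheter interface are hypothetical stand-ins; the actual system fuses camera images with preoperative maps and machine learning.

```python
from enum import Enum

class Contact(Enum):
    BLOOD = 0   # free-floating, no wall contact
    WALL = 1    # touching the ventricular wall
    VALVE = 2   # reached the target tissue

def wall_follow(catheter, max_steps=10_000):
    """Sample-then-steer loop, in the spirit of insect antennae:
    keep light contact with the wall and creep forward until the
    sensor classifies the tissue ahead as the target valve.
    `catheter` is a hypothetical interface with classify_contact(),
    contact_force(), steer_toward_wall(), steer_away(), advance()
    and a SAFE_FORCE limit."""
    for _ in range(max_steps):
        contact = catheter.classify_contact()
        if contact is Contact.VALVE:
            return True                   # arrived; hand off to surgeon
        if contact is Contact.BLOOD:
            catheter.steer_toward_wall()  # lost the wall; find it again
        elif catheter.contact_force() > catheter.SAFE_FORCE:
            catheter.steer_away()         # pressing too hard on tissue
        catheter.advance()                # take a small step forward
    return False
```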

Over repeated trials, the robot successfully reached the heart valve in roughly the same amount of time as a surgeon using either a hand tool or a joystick-controlled robot. Additionally, the approach could eliminate the need for fluoroscopic imaging, which exposes patients to ionizing radiation.

Are Robots Cut Out for the OR?

While these systems demonstrate that supervised execution of surgical plans is already practical, the next step in enhancing autonomy in surgery is perfecting and implementing more complex and involved tasks.

There are several prohibitive factors that could slow the uptake of robot surgeons. First is cost: the da Vinci system can cost up to $2.2 million with all its bells and whistles. Second, studies have raised doubts about efficacy: robot-assisted surgeries can cost one-third more than other minimally invasive surgeries while suffering a similar complication rate. There have also been cases in which patients sued the manufacturer for allegedly failing to properly train doctors to operate the system.

Beyond the technical challenges, ethical issues may pose another obstacle to bringing automation into the operating room. Some countries have already started to build frameworks for ethics and law in robotics, but much work remains to be done, and there is a real risk of the technology outpacing the legal framework as systems become more autonomous.

These issues need to be addressed, and more detailed studies will surely follow as robots are inevitably introduced into the operating room. When the time comes, would you opt for an autonomous robot to operate on you?