Communications of the ACM

ACM News

Robots Augment Surgeons


The da Vinci system enables the surgeon to operate through several small incisions with enhanced capabilities including high-definition 3D vision.

The da Vinci robot, introduced in 2000, has been used to record surgeries as the basis for developing a "language of surgery," according to Gregory Hager of Johns Hopkins University.

Credit: Intuitive Surgical Inc.

"See one, do one, teach one." That is how the pioneering American surgeon William Halsted described the art of surgery back in 1889. Almost 130 years later, these six words still describe the essence of how surgeons develop their operating skills, says Gregory Hager, Mandell Bellmore Professor of Computer Science at Johns Hopkins University.

Hager, who specializes in computer vision and robotics, works with a team of researchers to develop technologies that can fundamentally change how the surgeon of tomorrow is trained: not solely qualitatively, but using quantitative performance measurements recorded by robots that, at the same time, can augment the capabilities of surgeons. Hager made time in his busy schedule to speak with writer Bennie Mols.

In which ways can robots augment human surgeons?

Surgery is a unique combination of physical and cognitive skills, but human surgeons have their limits. The surgeon's eyes are limited in discerning details, such as different layers in tissue or differences between tissues. The surgeon's hands can manipulate tissue with a precision of 100 micrometers (millionths of a meter) at best. The surgeon's brain can handle only a certain amount of information at a time.

Robots allow us to surmount these limits. With a robot, the surgeon can operate down to a precision of 10 microns. The robot can also magnify and augment images. Intuitive Surgical, the company that first introduced the da Vinci operating robot in the year 2000, now has a system in which you can inject a fluorescent dye into the body and discern different types of tissue by color under near-infrared illumination.

Apart from amplifying the skills of the surgeon, you are using robots to train surgeons. How does that work?

The main idea is that we use the robot to record data from an operation: all the video data, and all the hand movement data that a surgeon performs. These are phenomenally valuable data that can be used to evaluate the surgeon's performance, to monitor the progress of skill acquisition, to detect errors and deficits, to recommend better practices, and to demonstrate best practices.

What have been the results so far?

We started recording with the da Vinci robot back in 2000, when the robot was introduced. In 2006 we made our first major recordings. After that, we developed a so-called "language of surgery." Just as ordinary language can be broken down into a hierarchy of grammatical structures, surgery can also be broken down into a hierarchy of structures. A surgical procedure can be broken down into phases, characterized by different classes of maneuvers, such as dissecting or suturing. And maneuvers can be broken down into gestures, such as where you want a needle to enter and exit the tissue. Using advances in machine learning, we are now able to automatically identify every level of the surgery hierarchy.
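As a rough illustration of the hierarchy Hager describes, a procedure decomposes into phases, phases into maneuvers, and maneuvers into gestures; flattening that tree yields the gesture sequence a recognition model would label. All names and structure below are illustrative sketches, not drawn from the actual Johns Hopkins datasets.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Gesture:
    name: str  # e.g. "insert_needle" (hypothetical label)

@dataclass
class Maneuver:
    name: str  # e.g. "suturing" or "dissecting"
    gestures: List[Gesture] = field(default_factory=list)

@dataclass
class Phase:
    name: str
    maneuvers: List[Maneuver] = field(default_factory=list)

@dataclass
class Procedure:
    name: str
    phases: List[Phase] = field(default_factory=list)

    def gesture_sequence(self) -> List[str]:
        """Flatten the hierarchy into the ordered list of gestures --
        the finest level a machine-learning model would identify."""
        return [g.name
                for p in self.phases
                for m in p.maneuvers
                for g in m.gestures]

# A toy procedure with one phase, one maneuver, two gestures.
proc = Procedure("wound_closure", [
    Phase("closure", [
        Maneuver("suturing",
                 [Gesture("insert_needle"), Gesture("pull_suture")]),
    ]),
])
print(proc.gesture_sequence())  # ['insert_needle', 'pull_suture']
```

The point of the tree structure is that each level can be analyzed separately: performance can be graded at the maneuver level while errors are detected at the gesture level.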

What can you do with this language of surgery?

On the basis of our datasets, we can now distinguish an expert from a novice and an intermediate. We see that experts move much more efficiently and are much more organized. We can also grade skills. And we have early results of detecting errors or abnormal performances. Furthermore, we have started to use all this information to actively train surgeons.

I imagine surgeons get a bit worried about having their performance quantitatively compared to that of their colleagues.

We are interested in making somebody a better surgeon, not in penalizing somebody. Athletes seek out coaching and analytics to improve themselves; why shouldn't we do this in surgery?

A study in bariatric surgery, which is used to help people lose weight, quantified the differences between the top and bottom quartiles of surgeons. In the top quartile, patient mortality was 0.05%, compared to 0.26% in the bottom quartile; that is a five-fold difference. For complications, the difference was 4.2% versus 14.5%.

Training can significantly reduce the number of readmissions, reoperations, complications, and even the number of deaths.

How do you see the future of robots training surgeons?

For the first time in history, we will be able to quantify the dose-response relationship of surgery. Think about taking out lymph nodes; you can take out more, or you can take out fewer. What is the relationship between the number of nodes you take out and the outcome for the patient? Quantitative data will allow us to determine the right dose of surgery, not just for taking out lymph nodes, but for many types of surgery.

What are your thoughts about fully autonomous surgical robots?

In my lifetime, I will not see any interesting level of autonomy in surgical robots. There is a lot of variability in surgery: the dynamics of tissues, the anatomy of every individual patient. A good surgeon constantly adapts to that. We still don't have a robot hand with the same capabilities as a human hand.

Having said that, some components of surgery will become automated, but human judgment is still needed for the many small variations that arise. Robots can learn from human surgeons, not to replace them, but to help make surgeons better at what they do.

Bennie Mols is a science and technology writer based in Amsterdam, the Netherlands.
