From Art to Science: How Machine Learning Is Changing the Surgical Landscape
Robotic surgery has already proven its value in improving surgical outcomes. The next frontier is using data collection and artificial intelligence to help guide surgeons in the OR and to develop surgical simulators that can expand and enhance training opportunities. We spoke with Anthony Fernando, CEO and president of Asensus Surgical, maker of the Senhance Surgical System, to learn more about the future of robotics and machine learning in surgical care.
Could you tell us a little bit about the Senhance Surgical System and how it is being used by healthcare professionals?
Fernando: There are two components to the device: The Senhance system is the robotic component, which handles the robotic arm and instrument manipulation. Then there is a complementary part, which we call the Intelligent Surgical Unit. That is the augmented intelligence machine learning component. The two work together, but they are distinct components of the system.
Our focus is on laparoscopic surgeries. In the U.S., we are approved for general surgery and gynecology. In Europe, we are approved for general surgery, gynecology, urology and non-cardiac thoracic surgery.
How does the Intelligent Surgical Unit machine learning enhance or augment the performance of the robotic system?
Fernando: There is a camera used to perform the keyhole surgeries. We take those images from the camera, digitize them, harvest that information in real time and provide it back to the surgeon. Similar to a GPS in your car that gives you directions as you travel, we have the capability to guide the surgeon and be a helping hand while they are performing surgery.
This information can also be used in training. There is a phrase, ‘see one, do one, teach one.’ That is the methodology currently employed all over the world in training surgeons. With our system, it’s all about data. We collect data and then we know what good looks like. We have the visual data from the camera and then we also have the robotic manipulation data to say, for instance, how many moves did one have to make in order to accomplish a task? And was that task accomplished well or was there room for improvement? And we can do that in the context of the human anatomy, because anatomically we are all slightly different. With data and machine learning, the system recognizes differences in anatomy and differences in the surgeons’ techniques.
We are in the process of building digital twins based on that data, so the system will know what a good outcome looks like. This will allow us to totally change that training paradigm. Instead of ‘see one, do one, teach one,’ we can put the knowledge of many thousands into the hands of one. The learning curve will be significantly shortened because you essentially have someone watching over your shoulder, guiding you through the process. That’s how we envision changing the landscape of surgical training.
Does this mean that someone who is in training could learn how to perform a procedure without actually working on a patient?
Fernando: Today, a doctor who graduates from medical school spends approximately 5,000 to 6,000 hours in training before they become a surgeon and perform their first solo surgery. If you think about the digital twin environment, you can accelerate that significantly, because now they can practice on a realistic simulator, similar to flight training for pilots.
The anatomy can be digitized, so you can do simulated procedures and you can also look at procedures that other surgeons have done and then follow their techniques. This means that individuals wouldn’t need to spend time watching other surgeons and how they perform surgery, and they wouldn’t have to go to an animal facility or a cadaver lab. All of this shortens the training period. Residents or surgeons learning a new procedure can perform the surgery in a simulated environment and gain a lot of experience before going into the OR.
What is the ideal setting for use of the Senhance system?
Fernando: The surgical simulation piece obviously can be done remotely. But if you want to use the digital twin concept while you’re performing the surgery on a patient, then that would be done inside of an operating room.
Fernando: The primary goal with augmented intelligence machine learning is to reduce variability in surgery so that the outcomes can be more predictable. We want to change surgery from being an art to a science. With the digital tools, we can be very precise, we can be very objective and we can deliver the same outcome every time. That is what we are after and what we are working towards.
Original post: https://www.medtechintelligence.com/feature_article/from-art-to-science-how-machine-learning-is-changing-the-surgical-landscape/