The robotic surgeon will see you now




Sitting on a stool a few feet from a robot with long arms, Dr. Danyal Fer wrapped his fingers around two metal handles near his chest.

As he moved the handles – up and down, left to right – the robot mimicked each small movement with its own two arms. Then, when he pinched his thumb and forefinger together, one of the robot’s tiny claws did much the same. This is how surgeons like Dr. Fer have long used robots when operating on patients: sitting at a computer console across the room, they can, for example, remove a prostate from a patient.

But after this brief demonstration, Dr Fer and his fellow researchers at the University of California at Berkeley showed how they hope to advance the state of the art. Dr. Fer let go of the handles and a new kind of computer software took over. As he and the other researchers watched, the robot began to move entirely on its own.

With one claw, the machine lifted a small plastic ring from an equally tiny peg on the table, passed the ring from claw to claw, moved it across the table and hooked it carefully onto a new peg. Then the robot did the same with several other rings, completing the task as quickly as it had when guided by Dr. Fer.

The training exercise was originally designed for humans; moving the rings from one peg to another is how surgeons learn to operate robots like the one at Berkeley. Now, an automated robot performing the task can match or even surpass a human in dexterity, accuracy and speed, according to a new research paper from the Berkeley team.

The project is part of a much larger effort to bring artificial intelligence into the operating room. Using many of the same technologies that underpin self-driving cars, autonomous drones, and warehouse robots, researchers are also working to automate surgical robots. These methods are still far from being used on a daily basis, but progress is accelerating.

“These are exciting times,” said Russell Taylor, a professor at Johns Hopkins University and a former IBM researcher known in academia as the father of robotic surgery. “This is where I hoped we would be 20 years ago.”

The goal is not to remove surgeons from the operating room but to lighten their workload and perhaps even raise success rates – where there is room for improvement – by automating certain phases of surgery.

Robots can already exceed human precision on some surgical tasks, such as placing a pin in a bone (a particularly risky task in knee and hip replacements). The hope is that automated robots can bring greater precision to other tasks, like incisions or sutures, and reduce the risks associated with overworked surgeons.

In a recent phone call, Greg Hager, a computer scientist at Johns Hopkins, said surgical automation would progress much like the Autopilot software that was guiding his Tesla along the New Jersey Turnpike as he spoke. The car was driving itself, he said, but his wife still had her hands on the wheel in case anything went wrong, and she would take over as they left the highway.

“We can’t automate the whole process, at least not without human oversight,” he said. “But we can start to create automation tools that make a surgeon’s life a little bit easier.”

Five years ago, researchers at the Children’s National Health System in Washington, DC, designed a robot capable of automatically suturing a pig’s intestines during surgery. It was a notable step towards the kind of future Dr. Hager envisioned. But it came with an asterisk: The researchers had implanted tiny markers in the pig’s intestines that emitted near-infrared light and helped guide the robot’s movements.

The method is far from practical, as the markers are not easily implanted or removed. But in recent years, artificial intelligence researchers have dramatically improved the power of computer vision, which could allow robots to perform surgical tasks on their own, without such markers.

The change is driven by so-called neural networks, mathematical systems that can learn skills by analyzing large amounts of data. By analyzing thousands of photos of cats, for example, a neural network can learn to recognize a cat. Likewise, a neural network can learn from images captured by surgical robots.
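To make the idea concrete, here is a minimal, illustrative sketch (not the researchers’ code) of how such a network is trained: it repeatedly views labeled examples, measures how wrong its guesses are, and nudges its internal weights until it can label new images on its own. The images, labels and model below are synthetic stand-ins.

```python
# Minimal sketch of training a neural network classifier on labeled images.
# All data here is synthetic; a real system would use thousands of real photos.
import torch
import torch.nn as nn

# Tiny convolutional network: image in, class score ("cat" vs. "not cat") out.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
)

# Stand-in for a labeled photo collection: random pixels with random labels.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 2, (64,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong the current weights are
    loss.backward()                        # compute how to nudge each weight
    optimizer.step()                       # nudge the weights

print("final training loss:", loss.item())
```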

Surgical robots are equipped with cameras that record three-dimensional video of each operation. The video is displayed in a viewfinder that surgeons peer into as they guide the operation, seeing the procedure from the robot’s point of view.

But afterward, these images also provide a detailed road map of how the surgeries were performed. They can help new surgeons understand how to use these robots, and they can help train the robots themselves to handle the tasks. By analyzing images that show how a surgeon guides the robot, a neural network can learn the same skills.

This is how the researchers at Berkeley worked to automate their robot, which is based on the da Vinci Surgical System, a two-armed machine that helps surgeons perform more than a million procedures a year. Dr. Fer and his colleagues collected images of the robot moving the plastic rings under human control. Then their system learned from these images, identifying the best ways to grab the rings, pass them between the claws and move them to new pegs.
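The general recipe is often called learning from demonstration, or behavior cloning: pair each recorded camera frame with the command the human operator issued at that moment, then train a network to predict the command from the frame alone. The sketch below shows that recipe only in outline; the frames, commands and network are invented stand-ins, not the Berkeley team’s actual system.

```python
# Rough sketch of behavior cloning: learn to map camera frames to the arm
# commands a human issued while demonstrating the task. Data is synthetic.
import torch
import torch.nn as nn

# Recorded demonstrations: camera frames paired with the 6-D gripper motions
# (translation + rotation) the operator commanded at the same instant.
frames   = torch.randn(256, 3, 64, 64)   # stand-in for video frames
commands = torch.randn(256, 6)           # stand-in for logged arm commands

policy = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 6),                    # predicted motion for the next step
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
for step in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(policy(frames), commands)
    loss.backward()
    optimizer.step()

# At run time, the trained policy would be queried frame by frame to drive the arm.
```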

But this process came with its own asterisk. When the system told the robot where to move, the robot often missed the spot by millimeters. Over months and years of use, the many metal cables inside the robot’s twin arms had stretched and bent in small ways, so its movements were not as precise as they needed to be.

Human operators could compensate for this drift, unconsciously. But the automated system could not. This is often the problem with automated technology: it struggles to cope with change and uncertainty. Autonomous vehicles are still far from widespread use because they are not yet agile enough to handle all the chaos of the everyday world.

The Berkeley team set out to build a new neural network that analyzed the robot’s errors and learned how much precision it was losing with each passing day. “It learns how the robot’s joints change over time,” said Brijen Thananjeyan, a doctoral student on the team. Once the automated system could account for this change, the robot could grab and move the plastic rings, matching the performance of human operators.
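One simple way to picture that kind of compensation (a simplified illustration, not the team’s published method) is a small model trained on calibration data: given where the arm was told to go and where it actually ended up, it learns to predict the drift, and each new command is shifted by the predicted error before it is sent.

```python
# Hedged sketch of drift compensation: learn how far the arm misses a commanded
# position, then pre-correct each command. All numbers here are invented.
import torch
import torch.nn as nn

# Calibration data: commanded 3-D positions and where the arm actually ended up.
commanded = torch.rand(500, 3)
actual    = commanded + 0.002 * torch.randn(500, 3) + 0.001  # millimetre-scale drift

# Small network that predicts the drift for any commanded position.
drift_model = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.Adam(drift_model.parameters(), lr=1e-2)

for step in range(500):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(drift_model(commanded), actual - commanded)
    loss.backward()
    optimizer.step()

def corrected(target):
    """Shift a target position so that, after the drift, the arm lands on it."""
    with torch.no_grad():
        return target - drift_model(target)

print(corrected(torch.tensor([[0.1, 0.2, 0.3]])))
```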

Other labs are trying different approaches. Axel Krieger, a Johns Hopkins researcher who was part of the pig-suturing project in 2016, is working to automate a new kind of robotic arm, one with fewer moving parts that behaves more consistently than the kind of robot used by the Berkeley team. Researchers at the Worcester Polytechnic Institute are developing ways for machines to carefully guide surgeons’ hands as they perform particular tasks, such as inserting a needle for a cancer biopsy or burning into the brain to remove a tumor.

“It’s like a car where the lane keeping is autonomous but you still control the gas and the brake,” said Greg Fischer, one of the Worcester researchers.

Many obstacles lie ahead, the scientists note. Moving plastic rings is one thing; cutting, moving and suturing flesh is another. “What happens when the angle of the camera changes?” said Ann Majewicz Fey, an associate professor at the University of Texas at Austin. “What happens when smoke gets in the way?”

For the foreseeable future, automation will work alongside surgeons rather than replace them. But even that could have profound effects, Dr. Fer said. For example, doctors could perform surgery across distances far greater than the width of the operating room – from miles away or more, perhaps, helping wounded soldiers on distant battlefields.

The delay in the signal is too great to make this possible today. But if a robot could handle at least some of the tasks on its own, long-distance surgery could become viable, Dr. Fer said: “You could send a high-level plan, and then the robot could execute it.”

The same technology would be essential for remote surgery over even longer distances. “When we start operating on people on the moon,” he said, “surgeons will need entirely new tools.”
