Feeling Through Seeing: Vision-based Force Estimation in Robot-assisted Surgery by Humans and Machines

9:30am - 10:30am
Zoom: https://hkust.zoom.us/j/94218980263 (Meeting ID: 942 1898 0263, Passcode: 883549)

Tissue handling is an important skill for surgeons to perform safe and effective surgery. In robot-assisted minimally invasive surgery (RMIS), this skill is difficult to acquire because of the lack of haptic feedback. RMIS surgeons learn to estimate tissue interaction forces through visual feedback, often over many hours of in vivo practice. Tissue handling skills are notoriously difficult for surgical educators to evaluate quantitatively and provide feedback on, because human raters cannot directly observe the forces applied by surgical instruments. As a result, gold-standard expert video review can have poor inter-rater consistency, while also being time-consuming to conduct and lacking actionable feedback. My research leverages the RMIS telesurgical robotic platform as both a sensor and actuation suite to (a) develop automated, data-driven, vision-based force estimates that can provide objective measures of tissue handling skill or serve as input data to facilitate robot autonomy, and (b) provide multimodal, robot-mediated, real-time feedback to the RMIS surgeon to improve tissue handling skill during actual surgery and in training.

In this talk, I will present models and algorithms for vision-based force estimation in RMIS from both human and machine perspectives. From the human perspective, I evaluate the effect of haptic training on human teleoperators’ ability to visually estimate forces through a telesurgical robot. From the machine perspective, I design and characterize multimodal deep learning-based methods that estimate interaction forces during tissue manipulation, both for automated performance evaluation and for delivering haptics-based training stimuli that accelerate tissue handling skill acquisition. The results demonstrate that human teleoperators can learn visual force estimation from haptic training and that machines can learn it from multimodal manipulation data, setting the stage for future work on improved methods for human-machine skills development and autonomous robot-assisted surgery.
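For readers curious about what a multimodal vision-based force estimator might look like in practice, the minimal PyTorch sketch below fuses a downsampled endoscope frame with an instrument kinematics vector to regress a Cartesian force. It is purely illustrative: the layer sizes, the 64x64 input resolution, and the 14-dimensional kinematic state are assumptions, not the speaker's actual architecture.

```python
# Illustrative sketch only: fuse an endoscope image with robot kinematics to
# regress a 3-axis tissue interaction force. All shapes and layer sizes are
# assumptions for illustration, not the architecture presented in the talk.
import torch
import torch.nn as nn

class VisionForceEstimator(nn.Module):
    def __init__(self, kin_dim: int = 14, force_dim: int = 3):
        super().__init__()
        # Small CNN encoder for a downsampled RGB endoscope frame (3 x 64 x 64).
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (B, 64)
        )
        # MLP encoder for instrument kinematics (e.g., joint positions/velocities).
        self.kinematics = nn.Sequential(
            nn.Linear(kin_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        # Fusion head regresses the Cartesian force vector.
        self.head = nn.Sequential(
            nn.Linear(64 + 64, 64), nn.ReLU(),
            nn.Linear(64, force_dim),
        )

    def forward(self, image: torch.Tensor, kin: torch.Tensor) -> torch.Tensor:
        features = torch.cat([self.vision(image), self.kinematics(kin)], dim=1)
        return self.head(features)

if __name__ == "__main__":
    model = VisionForceEstimator()
    frame = torch.randn(8, 3, 64, 64)   # batch of endoscope frames
    kin = torch.randn(8, 14)            # batch of kinematic state vectors
    force = model(frame, kin)           # predicted (Fx, Fy, Fz) per sample
    print(force.shape)                  # torch.Size([8, 3])
```

Such a model would typically be trained by regressing against ground-truth forces from an instrumented testbed, which is what makes the robot-as-sensor framing above possible.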

Speaker:
Mr. Zonghe Chua
PhD Candidate, Collaborative Haptics and Robotics in Medicine Lab, Department of Mechanical Engineering, Stanford University

Zonghe Chua is a Ph.D. candidate in the Collaborative Haptics and Robotics in Medicine Lab in the Department of Mechanical Engineering at Stanford University. Originally from Singapore, he received his B.S. in mechanical engineering from the University of Illinois at Urbana-Champaign in 2015 and his M.S. in mechanical engineering from Stanford University in 2020, and he will complete his Ph.D. in 2022. Before coming to Stanford, he was a mechanical designer at Yaskawa Electric America. His research interests include human-in-the-loop robotic systems, with a specific focus on robotic telesurgery, data-driven methods for automated user performance evaluation, and haptic feedback. He has collaborated with Intuitive Surgical, Inc. to use the da Vinci Surgical System as a platform for surgical skills learning and automated performance evaluation. He is currently a Young National University of Singapore Fellow and the Lubert Stryer Bio-X Stanford Interdisciplinary Graduate Fellow, co-advised by Prof. Allison Okamura, Ph.D., of the Department of Mechanical Engineering and Prof. Sherry Wren, M.D., of the Department of Surgery.

Language
English
Organizer
Department of Electronic and Computer Engineering