Tissue handling is a critical skill for performing safe and effective surgery. In robot-assisted minimally invasive surgery (RMIS), this skill is difficult to acquire due to the lack of haptic feedback: RMIS surgeons learn to estimate tissue interaction forces through visual feedback, often over many hours of in-vivo practice. Tissue handling skills are also notoriously difficult for surgical educators to evaluate quantitatively and provide feedback on, because human raters cannot directly observe the forces applied by the surgical instruments. As a result, the gold-standard expert video review can have poor inter-rater consistency, while also being time-consuming to conduct and lacking actionable feedback. My research leverages the RMIS telesurgical robotic platform as both a sensor and an actuation suite to (a) develop automated, data-driven, vision-based force estimates that can provide objective measures of tissue handling skill or serve as input data to facilitate robot autonomy, and (b) provide multimodal, robot-mediated, real-time feedback to the RMIS surgeon to improve tissue handling skill both in training and during actual surgery.
In this talk, I will present models and algorithms for vision-based force estimation in RMIS from both human and machine perspectives. From the human perspective, I evaluate the effect of haptic training on human teleoperators’ ability to visually estimate forces through a telesurgical robot. From the machine perspective, I design and characterize multimodal deep-learning-based methods that estimate interaction forces during tissue manipulation, for both automated performance evaluation and the delivery of haptics-based training stimuli that accelerate tissue handling skill acquisition. The results demonstrate that human teleoperators and machines can learn visual force estimation from haptic training and multimodal manipulation data, respectively, setting the stage for future work on improved methods for human-machine skill development and for autonomous robot-assisted surgery.