28 Mar 2022

Seven million people suffer surgical complications each year, but with sufficient surgical training and review, 50% of these complications could be prevented. To improve surgical performance, existing research applies various deep learning (DL) technologies, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), to automate surgical tool and workflow detection. However, accuracy remains limited, and real-time analysis is rarely achieved because of the computational complexity of CNNs. In this research, a novel DL architecture is proposed that integrates visual simultaneous localization and mapping (vSLAM) into Mask R-CNN. This architecture, vSLAM-CNN (vCNN), for the first time combines the best of both worlds: (1) vSLAM for object detection, focusing on geometric information for region proposals, and (2) CNN for object recognition, focusing on semantic information for image classification, unified in a single joint end-to-end training process. The method, which uses spatio-temporal information in addition to visual features, is evaluated on the M2CAI 2016 challenge datasets, achieving state-of-the-art results of 96.8 mAP for tool detection and a 97.5 mean Jaccard score for workflow detection, surpassing all previous work, while running at 50 FPS, 10x faster than region-based CNNs. A region proposal module (RPM) replaces the region proposal network (RPN) in Mask R-CNN, accurately placing bounding boxes and reducing the annotation requirement. Furthermore, a Microsoft HoloLens 2 application is developed to provide an augmented reality (AR)-based solution for surgical training and assistance.
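
The abstract only outlines the architecture, so the sketch below (PyTorch, not the authors' implementation) illustrates the general idea of swapping a learned region proposal network for a geometry-driven region proposal module: 2D projections of tracked vSLAM landmarks supply candidate boxes, and a CNN head classifies the pooled regions. All class names, layer sizes, and the fixed-size-box heuristic are hypothetical assumptions made purely for illustration.

```python
# Minimal sketch of a vCNN-style pipeline: vSLAM landmarks -> region proposals
# (the "RPM"), CNN features + ROI pooling -> semantic classification.
# Hypothetical names and shapes; not the paper's actual code.

import torch
import torch.nn as nn
from torchvision.ops import roi_align


class RegionProposalModule(nn.Module):
    """Hypothetical RPM: expands projected vSLAM landmarks into box proposals."""

    def __init__(self, box_size: float = 64.0):
        super().__init__()
        self.box_size = box_size

    def forward(self, landmarks_2d: torch.Tensor) -> torch.Tensor:
        # landmarks_2d: (K, 2) image-plane points from tracked vSLAM landmarks.
        # Each point becomes a fixed-size candidate box (x1, y1, x2, y2).
        half = self.box_size / 2
        x, y = landmarks_2d[:, 0], landmarks_2d[:, 1]
        return torch.stack([x - half, y - half, x + half, y + half], dim=1)


class VCNNSketch(nn.Module):
    """Toy detector: CNN backbone + RPM proposals + ROI classification head."""

    def __init__(self, num_classes: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for a real backbone
            nn.Conv2d(3, 32, 3, stride=4, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=4, padding=1), nn.ReLU(),
        )                                        # total stride: 16
        self.rpm = RegionProposalModule()
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 7 * 7, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, image: torch.Tensor, landmarks_2d: torch.Tensor):
        feats = self.backbone(image)                       # (1, 64, H/16, W/16)
        boxes = self.rpm(landmarks_2d)                     # (K, 4) in image coords
        rois = roi_align(feats, [boxes], output_size=(7, 7),
                         spatial_scale=1.0 / 16)           # (K, 64, 7, 7)
        return self.classifier(rois)                       # (K, num_classes) logits


if __name__ == "__main__":
    model = VCNNSketch()
    frame = torch.randn(1, 3, 256, 256)           # dummy laparoscopic frame
    landmarks = torch.tensor([[128.0, 128.0],     # dummy vSLAM landmark projections
                              [64.0, 200.0]])
    print(model(frame, landmarks).shape)          # torch.Size([2, 8])
```

In the paper's described setup, the classification head would be the Mask R-CNN recognition branch and the proposals would come from vSLAM geometry rather than a fixed-size heuristic; the sketch only shows how a proposal module can be decoupled from the CNN and still trained jointly with it end to end.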
