Image-matching-based navigation system for robotic ureteroscopy in kidney exploration


Abstract

Kidney stone disease has become the most common disease of the urinary tract. Flexible ureteroscopy (fURS) is one method of diagnosing and treating stones inside the kidney. However, because of the complex multi-calyx structure of the kidney and the limitations of the ureteroscope, surgeons find it difficult to tell from the camera view alone which calyx is being observed and where the ureteroscope tip actually is during the procedure. To address this problem, this thesis proposes an image-matching-based navigation system for robotic ureteroscopy in kidney exploration. The system consists of two phases: pre-operative and post-operative. In the pre-operative phase, a virtual ureteroscopy (VURS) environment is rendered from a 3D kidney model generated from computed tomography (CT) scan data. VURS images of the calyxes inside the kidney are collected, and edge features are extracted from them to build a virtual image database (VID) for matching. In the post-operative phase, real calyx images (RCIs) are collected; each RCI is preprocessed and its edge features are extracted as the matching input. A matching algorithm compares the edge features of the RCI against the VID to find the best-matching VID image, from which the location of the ureteroscope can be shown in the virtual ureteroscopy environment. Due to resource limitations, this thesis uses an open-source kidney model in place of CT scan data. The VURS environment is generated with the Unity software, and the VID is collected manually. A robotic ureteroscope prototype and a 3D-printed kidney phantom based on the open-source kidney model are used to simulate the fURS procedure and acquire RCIs. After applying image processing techniques to the RCIs, the shape context (SC) matching method is used to find the best-matching image in the VID. The results are validated with an electromagnetic tracking (EMT) system.
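The matching step described above can be illustrated with a highly simplified, pure-NumPy sketch: edge points of a query shape are summarised by a shape-context-style histogram (log-radius by angle bins around the centroid) and compared against a small template database with a chi-square distance. The circle/square "database", the function names, and the bin counts are illustrative assumptions, not the thesis implementation, which operates on edge features extracted from VURS and real calyx images.

```python
import numpy as np

def shape_context_hist(points, n_r=5, n_theta=12):
    """Crude global shape-context-style histogram of 2D edge points.

    Offsets from the centroid are binned into log-radius x angle bins
    and normalised, giving a rough translation/scale-invariant signature.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    r = np.linalg.norm(centered, axis=1)
    theta = np.arctan2(centered[:, 1], centered[:, 0])     # in [-pi, pi]
    r = r / (r.mean() + 1e-9)                              # scale normalisation
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1)
    r_bin = np.clip(np.digitize(r, r_edges) - 1, 0, n_r - 1)
    t_bin = np.clip(((theta + np.pi) / (2 * np.pi) * n_theta).astype(int),
                    0, n_theta - 1)
    hist = np.zeros((n_r, n_theta))
    for rb, tb in zip(r_bin, t_bin):
        hist[rb, tb] += 1
    return hist.ravel() / hist.sum()

def chi2(h1, h2):
    """Chi-square distance between two normalised histograms."""
    denom = h1 + h2
    denom[denom == 0] = 1.0
    return 0.5 * np.sum((h1 - h2) ** 2 / denom)

def best_match(query_pts, database):
    """Return the key of the database shape closest to the query."""
    q = shape_context_hist(query_pts)
    return min(database, key=lambda k: chi2(q, shape_context_hist(database[k])))

# Toy stand-in for the VID: edge points of a circle and a square.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
s = np.linspace(-1, 1, 25)
square = np.concatenate([np.c_[s, np.full_like(s, -1.0)],
                         np.c_[s, np.full_like(s, 1.0)],
                         np.c_[np.full_like(s, -1.0), s],
                         np.c_[np.full_like(s, 1.0), s]])
vid = {"circle": circle, "square": square}

# A noisy "real" circle stands in for an RCI edge map.
rng = np.random.default_rng(0)
noisy_circle = circle + rng.normal(scale=0.03, size=circle.shape)
print(best_match(noisy_circle, vid))
```

In the actual pipeline the point sets would come from edge detection on VURS renderings and RCIs rather than synthetic shapes, and the full shape context method also solves a point-to-point correspondence, which this global-histogram sketch omits.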
The whole system is validated with both simulation images and a physical experiment. The matching success rate is 92% for simulation images but drops to 62.5% for RCIs. This performance gap is attributed to the poor image quality of the RCIs, which could be improved by selecting a more suitable light source and digital camera.