What is it about?
Most mixed reality models used in surgical telepresence suffer from discrepancies in the boundary area and spatial–temporal inconsistency caused by illumination variation across video frames. This work proposes a new solution that produces a composite video by merging the augmented video of the surgical site with the virtual hand of the remote expert surgeon. The proposed solution aims to decrease processing time and improve the accuracy of the merged video by reducing overlay and visualization error and removing occlusion and artefacts.
Why is it important?
The proposed system enhances the mean-value cloning algorithm to maintain the spatial–temporal consistency of the final composite video. The enhanced algorithm incorporates three-dimensional mean-value coordinates and an improved mean-value interpolant into the image cloning process, which reduces the sawtooth, smudging and discolouration artefacts around the blending region.
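To give a flavour of how mean-value cloning works, here is a minimal 2D sketch: mean-value coordinates express a point inside a polygon as a weighted average of the boundary vertices, and cloning then uses those weights to smoothly spread the source/target mismatch measured on the boundary into the interior. This is a simplified illustration only — the paper's enhanced algorithm works with three-dimensional mean-value coordinates and an improved interpolant, and the function names below are purely illustrative.

```python
import math

def mean_value_coords(x, y, poly):
    """Mean-value coordinates of an interior point (x, y)
    with respect to the polygon vertices in `poly`."""
    n = len(poly)

    def angle_at_point(a, b):
        # unsigned angle at (x, y) between directions to vertices a and b
        vax, vay = a[0] - x, a[1] - y
        vbx, vby = b[0] - x, b[1] - y
        dot = vax * vbx + vay * vby
        cross = vax * vby - vay * vbx
        return math.atan2(abs(cross), dot)

    weights = []
    for i in range(n):
        prev_v, cur_v, next_v = poly[i - 1], poly[i], poly[(i + 1) % n]
        a_prev = angle_at_point(prev_v, cur_v)
        a_next = angle_at_point(cur_v, next_v)
        r = math.hypot(cur_v[0] - x, cur_v[1] - y)
        # classic mean-value weight: (tan(a_{i-1}/2) + tan(a_i/2)) / |p_i - x|
        weights.append((math.tan(a_prev / 2) + math.tan(a_next / 2)) / r)

    total = sum(weights)
    return [w / total for w in weights]

def mvc_blend(source_val, boundary_diff, lambdas):
    """Clone step: add the smoothly interpolated boundary
    difference (target - source on the boundary) to a source pixel."""
    return source_val + sum(l * d for l, d in zip(lambdas, boundary_diff))
```

Because the coordinates sum to one and vary smoothly across the region, the interpolated correction matches the boundary mismatch exactly at the edge and blends it gradually inside, which is what suppresses visible seams around the blending region.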
Read the Original
This page is a summary of: A novel solution of using mixed reality in bowel and oral and maxillofacial surgical telepresence: 3D mean value cloning algorithm, International Journal of Medical Robotics and Computer Assisted Surgery, March 2021, Wiley. DOI: 10.1002/rcs.2224.