Mixed reality and augmented reality in the operating room
One of the main challenges for the surgeon during a minimally invasive intervention is to process the data coming from various image sources displayed on several screens scattered around the operating room table, and to extract the most relevant information for the current stage of the surgery.
Several data sources are available, such as the image from the optical endoscope or microscope, the navigation system, pre- and intraoperative volumes (CT, MR, or X-ray), and segmented surfaces of anatomical/pathological structures. As of today, all this information is presented on separate displays of varying resolution, size, and position/orientation around the operating room table. Due to lack of space in the sterile field, these displays are often placed at a significant distance from the surgeon; the information is therefore not necessarily in the surgeon's line of sight, and eye contact with the surgical field may be lost when the surgeon looks at the various displays.

The main aim of this work is to test whether augmented reality glasses can be used in the operating room to display imaging data from different sources, either fixed in one selected field (the surgical field view) or following the surgeon's head movement. The idea is to study whether augmented reality glasses are of benefit and can be used to fuse and display various image sources. Second, we want to test the feasibility of displaying several live streams in one scene, merging these with the navigation scene (i.e. CustusX) and the image of the surgical field, and displaying the fused result on an off-the-shelf holographic device (i.e. the Microsoft HoloLens).
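The idea of combining several live image sources into a single scene can be illustrated with a minimal sketch. The code below is a hypothetical, dependency-free example of tiling frames from multiple sources side by side into one composite frame; it is not the actual HoloLens or CustusX pipeline, and frames are represented as plain 2D lists of pixel values rather than real video streams.

```python
# Illustrative sketch only: tile frames from several mock image sources
# (e.g. endoscope, navigation view, preoperative slice) into one composite
# scene, as one might do before streaming a single fused view to a headset.
# A real pipeline would use live video and a library such as OpenCV/numpy.

def pad_to_height(frame, height, fill=0):
    """Pad a frame with blank rows so all sources share one height."""
    width = len(frame[0]) if frame else 0
    padded = [row[:] for row in frame]
    while len(padded) < height:
        padded.append([fill] * width)
    return padded

def compose_scene(frames):
    """Place frames side by side in a single composite frame."""
    height = max(len(f) for f in frames)
    padded = [pad_to_height(f, height) for f in frames]
    # Concatenate each row across all padded frames.
    return [sum((f[r] for f in padded), []) for r in range(height)]

# Three mock sources with different sizes (pixel values 1, 2, 3 for clarity).
endoscope = [[1, 1], [1, 1]]
navigation = [[2, 2, 2]]
ct_slice = [[3], [3], [3]]

scene = compose_scene([endoscope, navigation, ct_slice])
# scene[0] → [1, 1, 2, 2, 2, 3]
```

In practice the challenge is not the tiling itself but synchronizing streams of different frame rates and resolutions and rendering the result with low latency in the headset's field of view.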