Article Category: Research Article
Online Publication Date: 01 Jul 2011

A Novel Technique for Three-Dimensional Reconstruction for Surgical Simulation Around the Craniocervical Junction Region

Page Range: 274 – 280
DOI: 10.9738/CC14.1

Abstract

Performing surgery on the craniocervical junction presents a technical challenge for the operating surgeon. Three-dimensional (3D) reconstruction and surgical simulation have improved the efficacy and success rate of such surgery. The aim of this study was to create a 3D, digitized, visible model of the craniocervical junction region to enable accurate simulation of craniocervical surgery on a graphic workstation. Transverse sectional anatomy data for the study were taken from the first Chinese visible human. Manual axial segmentation of the skull base, cervical spine, cerebellum, vertebral artery, internal carotid artery, sigmoid sinus, internal jugular vein, brain stem, and spinal cord was carried out with Photoshop software. The segmented structures were reconstructed in 3 dimensions with surface and volume rendering to display the 3D models accurately in space. In contrast to conventional 3D reconstruction techniques, which are based on computed tomography and magnetic resonance imaging Digital Imaging and Communications in Medicine (DICOM) inputs and provide mostly osseous detail, this technique can also depict the surrounding soft tissue structures and provide a realistic surgical simulation. The reconstructed 3D model was used successfully to simulate complex procedures in the virtual environment, including the transoral approach, bone drilling, and clivus resection.

The craniocervical junction region consists of the occipital bone surrounding the foramen magnum, the temporal bone, the atlas and axis vertebrae, and the lowest one third of the clivus.1 There are many important structures located within this region, such as the medulla oblongata, glossopharyngeal nerve, vagus nerve, accessory nerve, hypoglossal nerve, vertebral artery, internal carotid artery, and internal jugular vein. Performing surgical procedures in this region requires a great deal of precision and surgical training. Training in the virtual environment by means of preoperative surgical simulation may help improve the success rate of these types of surgery.

The application of 3-dimensional (3D) visualization technology in medicine has made it possible to display complex anatomic structures on computers with reasonable technical accuracy. The technology has already been employed to reconstruct a prototype digitized model of the human body for 3D radiologic diagnosis and for surgical simulation and training. It also has potential application to surgical simulation around the craniovertebral junction. Currently, assessment of the craniocervical junction region relies on reconstructing the skull base from 2-dimensional (2D) computed tomography (CT) or magnetic resonance imaging (MRI) images. Although these techniques achieve good visualization of the osseous structures,2 the images themselves are gray level with low resolution, and it is difficult to identify the surrounding soft tissues, such as nerves, blood vessels, ligaments, and fascia. Conversely, the images created by the Chinese visible human (CVH) project are composed of colored serial cross sections with high resolution3; thus, even minuscule structures are identifiable. Unfortunately, fully automatic segmentation of these structures from the original CVH images is not yet possible, because the soft tissues lack clear outlines and boundaries.

In order to distinguish and outline the contours of the craniocervical junction structures in this study, we used manual segmentation followed by 3D reconstruction to create a more robust 3D model and to establish a surgical computer simulation system on this foundation. We also evaluated the feasibility and utility of creating a 3D model of the craniocervical junction region from the CVH to simulate skull base operations and to reach the craniocervical junction region directly without injuring the adjacent functional structures.

Materials and Methods

Segmentation on 2D sections

Thin serial transverse sectional images of the craniocervical junction region, selected from the CVH data set with a slice thickness of 0.5 mm, were used as input data. Each cross section was matched and registered accurately by means of the reserved fiducial rods. Manual segmentation was used to outline the structures on each of the 2D sections with Photoshop software (Adobe, San Jose, CA, USA). During segmentation, each image was magnified to 200%; the skull base, atlas, axis, cerebellum, brain stem, vertebral artery, internal carotid artery, and internal jugular vein of the craniocervical junction region were outlined on the 2D sections with the magnetic lasso tool. Each structure was then placed on a new layer and filled with a different gray value (Fig. 1). The gray level and red, green, blue (RGB) color values of the segmented structures are shown in Table 1. The final segmented image was saved in PSD format.
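
To make the gray-level encoding of Table 1 concrete, the following Python sketch is illustrative only: it is not part of the authors' Photoshop workflow, and the structure-to-gray-value mapping shown is a hypothetical placeholder rather than the published values. It shows how per-structure binary masks could be composited into a single grayscale label image of the kind described above.

```python
import numpy as np
from PIL import Image

# Hypothetical structure-to-gray-level mapping in the spirit of Table 1;
# the actual published values may differ.
GRAY_LEVELS = {
    "skull_base": 40,
    "atlas": 60,
    "axis": 80,
    "cerebellum": 100,
    "brain_stem": 120,
    "vertebral_artery": 140,
    "internal_carotid_artery": 160,
    "internal_jugular_vein": 180,
}

def compose_label_image(masks: dict, shape: tuple) -> Image.Image:
    """Merge per-structure binary masks (2D boolean arrays) into one
    grayscale label image; later structures overwrite earlier ones."""
    label = np.zeros(shape, dtype=np.uint8)
    for name, mask in masks.items():
        label[mask] = GRAY_LEVELS[name]
    return Image.fromarray(label, mode="L")

# Toy usage with synthetic masks on a small canvas.
h, w = 256, 256
toy_masks = {
    "skull_base": np.zeros((h, w), bool),
    "brain_stem": np.zeros((h, w), bool),
}
toy_masks["skull_base"][50:120, 40:220] = True
toy_masks["brain_stem"][130:200, 100:160] = True
compose_label_image(toy_masks, (h, w)).save("slice_0001_labels.bmp")
```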

Figure 1. The interface of image segmentation in Photoshop software.

Table 1. The gray level and RGB color values of the segmented structures.

3D reconstruction and display

Segmented images were converted to BMP files in Photoshop and transferred to the graphic workstation. The HP xw9300 graphic workstation (Hewlett-Packard, Palo Alto, CA, USA) was equipped with two AMD HyperTransport processors (AMD, Sunnyvale, CA, USA), 16 GB of internal memory, and two Quadro FX 4500 graphics cards (Nvidia, Santa Clara, CA, USA); the operating system was Microsoft Windows XP. Additional image processing was performed with Amira 4.1.1 software (TGS, France), which provides a large number of module types for visualizing various kinds of scientific data and for creating polygonal models from 3D images. With this software, all visualization techniques could be combined arbitrarily to produce a single scene, and multiple data sets could be visualized simultaneously, either in several viewer windows or in a common window. Amira offers 6 particular capabilities: direct volume rendering, isosurfaces, segmentation, surface reconstruction, surface simplification, and generation of tetrahedral grids. After segmentation, the craniocervical junction structures were reconstructed by surface rendering, and the serial transverse section images were loaded for volume rendering reconstruction on the graphic workstation. By combining surface rendering with volume rendering, the segmented images were ultimately reconstructed.
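
As an illustration of the surface rendering step, the following Python sketch loads a stack of segmented grayscale slices into a volume and extracts a triangular surface mesh for one labeled structure with marching cubes. This is a generic sketch, not Amira's implementation; the file pattern, gray value, and in-plane voxel spacing are assumptions (only the 0.5-mm slice thickness comes from the Methods).

```python
import glob
import numpy as np
from PIL import Image
from skimage import measure

def load_label_volume(pattern: str) -> np.ndarray:
    """Stack serial grayscale label slices (sorted by file name) into a 3D array."""
    files = sorted(glob.glob(pattern))
    slices = [np.asarray(Image.open(f).convert("L")) for f in files]
    return np.stack(slices, axis=0)  # shape: (z, y, x)

def extract_surface(volume: np.ndarray, gray_value: int,
                    spacing=(0.5, 0.167, 0.167)):
    """Extract a surface mesh for one structure identified by its gray value.
    spacing = (slice thickness, pixel size y, pixel size x) in mm; the in-plane
    values here are assumed, not taken from the article."""
    binary = (volume == gray_value).astype(np.float32)
    verts, faces, normals, _ = measure.marching_cubes(binary, level=0.5,
                                                      spacing=spacing)
    return verts, faces, normals

if __name__ == "__main__":
    vol = load_label_volume("slices/slice_*.bmp")        # hypothetical file layout
    verts, faces, _ = extract_surface(vol, gray_value=120)  # placeholder label
    print(f"Surface mesh: {len(verts)} vertices, {len(faces)} triangles")
```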

Results

A 3D model of the craniocervical junction region was created on the graphic workstation (Figs. 2 and 3); it effectively displayed the 3D positions of the proximal structures, including the skull base, superior cervical spine, brain stem, cerebellum, internal carotid artery, and vertebral artery. By using a combination of surface rendering and volume rendering, models of the craniocervical junction region were successfully recreated (Fig. 4). These helped to simulate the approaches to the craniocervical junction region and pontocerebellar triangle surgery by orthogonal dissection of the volume data set (Figs. 5 and 6). Because the 3D model was generated through volume rendering reconstruction, it could be sectioned in any orientation; moreover, the spatial locations and adjacent relationships of the main structures of the craniocervical junction region could be displayed in true color. By combining surface rendering with volume rendering, the adjacent relationships between the segmented structures, shown in false color, and the structures that were not segmented (e.g., bone, brain stem, cerebellum, vessels, and nerves), shown in true color, could be displayed clearly. With the 3D virtual reality (VR) simulation system, actual procedures could be simulated effectively, as demonstrated by practicing osteotomy during skull base surgery (Figs. 7 and 8). During the simulation, the system also provided stereographic images of bone, nerve, and vessel surfaces to precisely predict the outcome of every step and of the whole procedure.

Figure 2. The interface of 3D surface reconstruction in Amira software. This image is an axial view of the 3D model.

Figure 3. Anterior view of the 3D model.

Figure 4. The reconstructed structures of the craniocervical junction region, combining surface rendering with volume rendering.

Figure 5. Analog display of the structures in a transoral approach to the superior cervical spine.

Figure 6. Analog display of the structures in a posterior-lateral approach to the foramen magnum and superior cervical spine.

Figure 7. Virtual resection of the clivus to display the ventral view of the brain stem.

Figure 8. Virtual resection of the petrous bone to display the sigmoid sinus and pontocerebellar trigone.

Discussion

Typically, the first steps in creating a 3D reconstruction are to segment the raw data and to identify the structures of interest, such as internal organs, skeletal structures, or vasculature. Accurate segmentation, therefore, is the foundation of effective 3D reconstruction. Enhancing the precision and speed of segmentation of human body images is at present the bottleneck in creating a comprehensive virtual environment. Dr. Spitzer from the Center for Human Simulation quipped that achieving success in the complicated virtual human anatomic world depends on 3 things: segmentation, segmentation, and segmentation alone.4 To date, several technologies have been introduced for medical image segmentation, but none has yet been established as an ideal, fully automatic method. Most systems rely on an interactive segmentation method, dependent on the user's domain knowledge, that combines automatic and manual segmentation. Other well-known approaches exist, such as thresholding, region growing, and pattern recognition; however, most of these are semi-automatic and require significant intervention from the user.5,6 CT and MRI DICOM images are gray scale and can be processed by automatic or semi-automatic segmentation to separate anatomic structures rapidly and accurately. In contrast, the visible human images are real color and show the dissected boundaries between structures; however, they lack obvious color discrimination, and at present there is no good method for segmenting them automatically. This segmentation task therefore requires an anatomy expert to distinguish the structures and segment them manually.

Manual segmentation is time consuming and needs access to good computational resources; at the same time, it remains essential and valuable for its potential benefits. In this study, we used manual segmentation to distinguish the important structures on the 2D real-color images, specifically to outline the contours of the bone, brain stem, cerebellum, blood vessels, and nerves of the craniocervical junction region. This segmentation is the foundation of our 3D reconstruction. Photoshop was used to outline the contour of each structure, which was then filled with a fixed gray level on its own layer. After this processing, each structure of interest could be distinguished by its specific gray level. Once segmentation was completed, the background of the real-color image was deleted and the remaining layers were merged to obtain a complete grayscale image. By loading these serial images into the Amira software suite, 3D surface models of the structures of interest were reconstructed with threshold value segmentation. The surface-rendered models were morphologically faithful, and each structure could be reconstructed and displayed in 3D space; it was therefore important that we verified the accuracy of this segmentation method.

The 2 most common approaches used to create 3D visualizations of (usually segmented) medical data are surface extraction and volume rendering. Surface-based rendering methods extract an intermediate surface description of the relevant objects from the volume data, and only this information is then used for rendering. In volume rendering, by contrast, images are created directly from the volume data, and no intermediate geometry is extracted. This is generally preferable to surface rendering, because all of the gray level information originally acquired during scanning is maintained, making it an ideal technique for interactive data exploration: threshold values and other parameters that are not clear from the beginning can be changed interactively. Furthermore, volume-based rendering allows a combined display of different aspects, such as opaque and semitransparent surfaces, cutting, and maximum intensity projections. However, volume rendering is computationally intensive and is traditionally far slower than surface rendering.7 In this study, we reconstructed the important structures by surface rendering after segmentation; then, using the same sequence of registered real-color images, we carried out volume rendering. Combining surface rendering with volume rendering overcomes the deficiencies of surface rendering alone: the images are not overly artificial, and no anatomic data are lost for structures that were not segmented. Through this interface, we could observe each structure's spatial position and its relationships to adjacent structures.
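
To make the contrast concrete, the sketch below computes two minimal volume-rendering-style displays directly from voxel data, with no intermediate surface: a maximum intensity projection (mentioned above) and a simple constant-opacity compositing pass. It is a generic illustration with assumed array shapes, not the Amira pipeline used in this study.

```python
import numpy as np

def max_intensity_projection(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Project a 3D grayscale volume to 2D by keeping the brightest voxel
    along the chosen viewing axis (0 = slice stacking direction)."""
    return volume.max(axis=axis)

def opacity_composite(volume: np.ndarray, axis: int = 0,
                      opacity: float = 0.05) -> np.ndarray:
    """Minimal front-to-back alpha compositing along one axis: each voxel
    contributes its intensity weighted by a constant opacity."""
    vol = np.moveaxis(volume.astype(np.float32) / 255.0, axis, 0)
    image = np.zeros(vol.shape[1:], dtype=np.float32)
    remaining = np.ones_like(image)          # transmittance still available
    for slab in vol:                         # march from front to back
        image += remaining * opacity * slab
        remaining *= (1.0 - opacity)
    return image

# Toy volume: a bright sphere inside a dark cube.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
toy = ((z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 < 20 ** 2).astype(np.uint8) * 200
print(max_intensity_projection(toy).shape)   # (64, 64)
print(opacity_composite(toy).max())
```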

Computer-based anatomy models are perfectly suited to interactive examination of an anatomic situation prior to surgical intervention. Moreover, recently developed simulation systems have allowed for a realistic rehearsal of medical interventions to be carried out on a completely virtual basis. The objective of skull base surgery is most often to remove tumors in the posterior cranial fossa. For this purpose, it is necessary to drill through the occipital bone without damaging the highly delicate organs proximal to that area, including the brain stem, vertebral artery, sigmoid sinus, and dura. Surgeons today learn the techniques to access this complex anatomy by practicing on cadaveric material. Because cadaver availability is limited, a virtual reality simulator that enables unrestricted practice of the different laterobasal surgical approaches is of high value.8 The anatomy of the craniocervical junction region is complex, because there are extensive nerve and vasculature networks running through the osseous structure. As such, the risk of injury and possibly fatal damage is high when the surgeon has inadequate training and/or lacks familiarity with the anatomy of this region.9 Therefore, we reconstructed the main structures of the craniocervical junction by surface rendering and displayed the relationship in space by combining it with volume rendering. Interactive orthogonal sectioning of the 3D volume image helped simulate the operation of the transoral approach to the superior cervical spine and the posterior-lateral approach to the foramen magnum and superior cervical spine. In this environment, it became possible to perform virtual skull bone drilling, such as virtual resection of the clivus to display the ventral view of brain stem or virtual resection of the petrous bone to display the sigmoid sinus and pontocerebellar trigone.
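
At the data level, such a virtual resection amounts to removing or hiding the voxels of one labeled structure and re-rendering the volume. The short Python sketch below illustrates the idea on a label volume; the structure label is the same kind of hypothetical placeholder used earlier, not a published value, and the routine stands in for, rather than reproduces, the interactive drilling in the simulation system.

```python
import numpy as np

CLIVUS_LABEL = 40          # hypothetical gray value for the bone containing the clivus

def virtual_resection(label_volume: np.ndarray, label: int,
                      z_range: slice = slice(None)) -> np.ndarray:
    """Return a copy of the label volume with one structure removed,
    optionally only within a range of slices (the 'drilled' region)."""
    resected = label_volume.copy()
    region = resected[z_range]
    region[region == label] = 0   # 0 = background / empty space
    return resected

def orthogonal_section(label_volume: np.ndarray, axis: int, index: int) -> np.ndarray:
    """Cut the volume along one of the three orthogonal planes."""
    return np.take(label_volume, index, axis=axis)

# Toy usage: drill away the 'clivus' label in the upper third of the stack,
# then inspect a mid-sagittal section of the result.
toy = np.zeros((90, 128, 128), dtype=np.uint8)
toy[10:60, 40:80, 50:90] = CLIVUS_LABEL
after = virtual_resection(toy, CLIVUS_LABEL, z_range=slice(0, 30))
print(orthogonal_section(after, axis=2, index=64).shape)   # (90, 128)
```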

Parallel development in this field is ongoing in hopes of improving the applicability and effectiveness of this technology. Although techniques such as rapid prototyping and 3D solid free-form fabrication that use surgical simulation software, such as MIMICS (Materialise, Leuven, Belgium), have given surgeons the chance to practice on a physical model, they are limited to training on osseous structures with bone drilling and osteotomies.10–12 Conversely, the technique presented here provides a real-time environment and is especially useful for understanding and simulating the surgical field, given the presence of several important soft tissue structures, such as arteries, veins, and nerves. Still, there is room for progress, because a comprehensive virtual surgery system for the craniocervical region has yet to be established; improvements in hardware and software will undoubtedly enhance our ability to make such a virtual reality environment a reality.

Conclusion

Simulation applications are increasingly being used in clinical and health care education, with a particular focus on improving surgical outcomes. In this study, we segmented images of the main structures in the craniocervical junction region from the CVH data set and used a combination of volume and surface rendering to ensure that all the important structures were visualized. The resulting images were not overly artificial, and surgical simulation was carried out successfully. Surgical procedures around the craniocervical junction, such as the transoral approach, clivus resection, and occipital drilling, can be simulated successfully, which should improve training standards and surgical outcomes.

Acknowledgments

Sponsored by grant No. 60771025 from the National Natural Science Foundation of China.

References

1. Yang SY, Gao YZ. Clinical results of the transoral operation for lesions of the craniovertebral junction and its abnormalities. Surg Neurol 1999;51(1):16–20.
2. Page C, Taha F, Le Gars D. Three-dimensional imaging of the petrous bone for the middle fossa approach to the internal acoustic meatus: an experimental study. Surg Radiol Anat 2002;24(3):388–392.
3. Zhang SX, Heng PA, Liu ZJ. The Chinese visible human project. Clin Anat 2006;19(2):204–215.
4. Spitzer VM, Whitlock DG. The visible human dataset: the anatomical platform for human simulation. Anat Rec 1998;253(1):49–57.
5. John NW. The impact of Web3D technologies on medical education and training. Computers & Education 2007;49(1):19–31.
6. Brenton H, Hernandez J, Bello F, Strutton P, Purkayastha S, Firth T, et al. Using multimedia and Web3D to enhance anatomy teaching. Computers & Education 2007;49(1):32–53.
7. Robb RA. Visualization in biomedical computing. Parallel Computing 1999;25(13–14):2067–2110.
8. Pommert A, Hohne KH, Burmester E, Gehrmann S, Leuwer R, Petersik A, et al. Computer-based anatomy: a prerequisite for computer-assisted radiology and surgery. Acad Radiol 2006;13(1):104–112.
9. Cokkeser Y, Naguib MB, Kizilay A. Management of the vertebral artery at the craniocervical junction. Otolaryngol Head Neck Surg 2005;133(1):84–88.
10. Bibb R, Winder J. A review of the issues surrounding three-dimensional computed tomography for medical modelling using rapid prototyping techniques. Radiography 2010;16(1):78–83.
11. Olszewski R, Tranduy K, Reychler H. Innovative procedure for computer-assisted genioplasty: three-dimensional cephalometry, rapid-prototyping model and surgical splint. Int J Oral Maxillofac Surg 2010;39(7):721–724.
12. Mazzoli A, Germani M, Raffaeli R. Direct fabrication through electron beam melting technology of custom cranial implants designed in a phantom-based haptic environment. Mater Des 2009;30(8):3186–3192.
Copyright: International College of Surgeons

Contributor Notes

Reprint requests: Shao-Xiang Zhang, PhD, Department of Anatomy, College of Medicine, Third Military Medical University, Chongqing 400038, People's Republic of China. Tel.: +86 23 68752005; Fax: +86 23 68818745; E-mail: zhangsx@mail.tmmu.com.cn