The MRI of the Human Brain

Structural image segmentation of MRI of the human brain provides an image map in which the various structures and substructures of the brain can be visualized. Segmentation of these subcortical structures from MRI head scans is therefore essential for understanding brain anatomy. Cohort sizes continue to grow, while sensitivity remains a requirement in segmenting subcortical structures, so an analysis of the merits and demerits of existing segmentation tools is needed. Segmentation methods can be broadly classified into three types: manual, semi-automatic, and fully automatic.

In earlier days, segmentation of the hippocampus (HC) was done manually by physicians. This involves going through an MRI head scan slice by slice and carefully marking the boundaries of the required subcortical structure. Manual segmentation takes considerable time and is biased by the person performing it. Semi-automatic techniques followed; they are popular because they can substantially reduce segmentation time. Their main demerit is that human intervention is needed during the process, for example to manually edit the delineations of the structure being segmented. The third category is fully automatic segmentation, in which the whole procedure progresses without human intervention. Automated procedures have not become as popular as semi-automated ones, owing to the complexity and uncertainty of segmenting an MR image. Many developers also hold the strong view that accurate and meaningful results require human operator input, and there is a natural human tendency not to trust the machine completely.

Manual segmentation is considered the gold standard and serves as a reference for all other segmentation methods. Many works report segmentation results based on manual segmentation of the corresponding subcortical brain structures, such as Pruessner et al. [11], Pantel et al. [12], Barnes et al. [13], Frazier et al. [14] and Strasser et al. [15].

Changes in hippocampal volume in aged subjects and Alzheimer's disease (AD) patients were assessed by Giovanni B. Frisoni et al. [16]. The study used high-resolution MRI at 3 Tesla; T1-weighted images were acquired for 19 AD patients. The hippocampal formation was then isolated by manual tracing (with the help of additional software) in about 40 minutes per subject. Group differences and correlations were assessed using radial atrophy mapping, averaging hippocampal shapes across subjects with 3D parametric surface mesh models, and percentage-difference and significance maps were produced. The Cornu Ammonis (CA) is a substructure of the hippocampal formation defined on the basis of internal structure; it is composed of three fields, CA1, CA2 and CA3. The aim of the study was to compare hippocampal structural changes in AD patients with those of healthy aged persons. Regions corresponding to the CA2 and CA3 fields were relatively spared in both ageing and AD. Hippocampal atrophy in AD maps to areas in the body and tail that partly overlap those affected by normal ageing, whereas specific areas in the anterior and dorsal CA1 subfield involved in AD were not involved in normal ageing. These patterns might relate to the differential neural systems involved in AD and ageing.

The demerits of manual segmentation are that it is time consuming and subject to inter-observer and intra-observer variability. It has become obvious that studies on small numbers of subjects do not provide definite answers to certain questions in the analysis of the manually segmented hippocampus. Although manual segmentation provides accurate measures of hippocampal volume, automated processes would reduce the subjectivity of the segmentation process and, of course, provide a substantial saving of time. Using computer techniques to facilitate segmentation has therefore become a crucial part of the analysis of structural scans.

To overcome the difficulties of manual segmentation, researchers sought alternatives, and a number of semi-automated segmentation techniques have appeared in recent years. Although manual segmentation remains the gold standard, semi-automated techniques are by far the most frequently used methods. They have gained popularity because they reduce segmentation time, although they still involve human intervention.

Chupin et al. [17][18] used a Markovian deformation process with a deformable constraint based on prior knowledge, which is calculated with manual intervention. A user must manually define bounding boxes and create seed points for the algorithm to work, making the entire process semi-automated.

A study by Ashton et al. [19] used an elastic deformable model with seed points and constraints to segment the hippocampus. The seed points and constraints were produced from boundaries that had previously been traced manually. These methodologies speed up segmentation considerably compared to manual tracing, allowing a large number of scans to be processed. However, the necessary human involvement still increases processing time and introduces a subjective element, as in manual segmentation.

Wei Wei Lee [20] proposed a semi-automatic hybrid approach for hippocampus segmentation. The method combines low-level image processing techniques (thresholding, hole filling based on adjacent-voxel connectivity, and distance transformation) with high-level techniques, termed a Geometric Deformable Model (GDM), in a sequential pipeline. The GDM operates by constraint modelling and cost-function minimization: it incorporates five constraints that are integrated into a local cost (potential) function associated with each vertex in a 3D model. The geometric model is iteratively deformed to a position that minimizes its local cost function, proceeding from coarse (large time step) to fine (small time step).
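The low-level half of such a pipeline can be sketched with standard tools; the threshold value, toy volume, and function name below are illustrative assumptions, not details of [20]:

```python
import numpy as np
from scipy import ndimage

def coarse_mask(volume, threshold):
    """Illustrative low-level pipeline (assumed parameters): threshold the
    intensity volume, fill interior holes using voxel connectivity, and
    compute a distance transform that could initialise a deformable model."""
    mask = volume > threshold                    # intensity thresholding
    mask = ndimage.binary_fill_holes(mask)       # hole filling via connectivity
    dist = ndimage.distance_transform_edt(mask)  # distance from background
    return mask, dist

# toy 3D volume: a bright cube containing an internal cavity
vol = np.zeros((20, 20, 20))
vol[5:15, 5:15, 5:15] = 1.0
vol[9:11, 9:11, 9:11] = 0.0                      # hole to be filled
mask, dist = coarse_mask(vol, 0.5)
```

The distance map gives each interior voxel its depth inside the structure, which is the kind of geometric cue a deformable model can then refine.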

Automated image analysis methods provide a promising alternative for extracting quantitative information from neuroimaging data. They do not suffer from inter- and intra-rater biases, and they can process large amounts of data without human intervention.

The application of voxel-level three-dimensional fluid registration to serial MRI is described in [21]. Fluid registration determines deformation fields that model brain change, consistent with a model describing a viscous fluid. Suitable values for the viscosity-body-force ratio, α, and the number of iterations were first established, and the convergence, repeatability, linearity, and accuracy were investigated by comparing the results with expert manual segmentation. The mean absolute volume difference between fluid and manual segmentation was 0.7%. Fluid registration is potentially important for tracking longitudinal structural brain changes, particularly in clinical trials, where large numbers of subjects may have multiple MR scans.

A comparative study by Rajendra A. Morey et al. [22] analyzed the performance of two popular fully automated tools, FSL/FIRST and FreeSurfer, against the gold standard of manual tracing; the study also helps in understanding the shape of the hippocampus. They computed volume overlap and volume difference, as well as across-sample correlation and group-level 3D shape analysis. In addition, they computed sample-size estimates over a range of effect sizes for conducting between-group studies.

An assessment of the performance of standard image registration techniques for MRI-based automated segmentation of the hippocampus was presented by O. T. Carmichael et al. [22]. The study was based on elderly subjects with Alzheimer's disease (AD) and mild cognitive impairment (MCI), using structural MR images collected from the Alzheimer's Disease Research Center [23] at the University of Pittsburgh. The subjects were 54 years of age, comprising gender-matched healthy individuals, a group with probable AD, and another set with MCI. The hippocampi in the subject images were segmented by automated methods using cohort atlases with AIR, SPM, FLIRT, and the fully deformable method of Chen [24]. The segmented results were aligned to the Harvard atlas [25], the MNI atlas [26], and manually labelled subject images [27].

A study of three different automated methods for calculating rates of atrophy in the hippocampal region was reported in [28]. The three methods computed change via the Jacobian of a fluid registration, region propagation, or boundary shift, and the boundary-shift results were found to be in good agreement with the manual results. The method, however, can only be used in longitudinal studies, to assess the change in a hippocampus from one scan to another.

Carmichael et al. [29] considered spoiled gradient-recalled (SPGR) volumetric T1-weighted MR images acquired in the coronal plane for 20 subjects with AD, 19 subjects with MCI, and 15 controls, corresponding to patients enrolled in the University of Pittsburgh Alzheimer's Disease Research Center between 1999 and 2004. Hippocampi were manually traced on all subject images by an expert rater who was highly consistent with two other trained raters. Widely disseminated registration software packages (AIR, SPM, FSL, and Chen's method) aligned the subject images to the standard MNI and Harvard atlas images, and subject hippocampi were estimated by transferring traced hippocampi from the atlases to the subject image space. Additionally, subject hippocampi were estimated by aligning subject images to cohort "atlases", that is, randomly selected subject images augmented with manual tracings.

A comprehensive survey of clustering techniques used for image segmentation is given by S. Thilagamani et al. [30]. The basic principles of clustering, clustering concepts, and image segmentation are analyzed. The authors conclude that clustering algorithms enable effective image segmentation, and argue that spectral clustering is well suited to image clustering because unseen images can be placed into clusters more easily than with other traditional methods. They note that clustering is, in general, a hard problem, and that clustering techniques help increase the efficiency of image retrieval.

The segmentation of the hippocampus in five patients with mesial temporal sclerosis is described by Robert E. Hogan et al. [31]. Using magnetic resonance (MR) imaging, they verified the precision and reproducibility of hippocampal segmentations obtained with a deformation-based segmentation method. Their results had a 92.8% overall percentage overlap with automated segmentations and a 74.8% overlap with manual segmentations. They concluded that deformation-based hippocampal segmentation can be the best method for measuring hippocampal volume in this patient population.

Gang Lin et al. [32] presented a hybrid watershed method combining two approaches: a distance transform and a mathematical model. They first combined geometric distance with intensity gradients as the initial step of the watershed algorithm, and then applied an explicit mathematical model of the anatomic characteristics of cell nuclei (size and shape measures), constructed automatically from the data. The image is first over-segmented, followed by statistical model-based merging. A confidence score is calculated for each detected nucleus, measuring how well the nucleus fits the model.
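A minimal marker-controlled watershed of this kind, seeded from peaks of the geometric distance transform, can be sketched as follows (the disc geometry, the 0.6 core threshold, and the helper name are assumptions for illustration, not the authors' parameters):

```python
import numpy as np
from scipy import ndimage

def split_touching_nuclei(binary):
    """Marker-controlled watershed sketch: seed the watershed with peaks
    of the distance transform so two touching round objects are split
    along their narrow neck."""
    dist = ndimage.distance_transform_edt(binary)
    # markers: connected components of the high-distance "cores"
    markers, _ = ndimage.label(dist > 0.6 * dist.max())
    markers[~binary] = markers.max() + 1            # background marker
    # watershed_ift floods an 8-bit elevation image from the markers
    elevation = ((dist.max() - dist) / dist.max() * 255).astype(np.uint8)
    labels = ndimage.watershed_ift(elevation, markers.astype(np.int16))
    labels[~binary] = 0
    return labels

# two overlapping discs standing in for touching nuclei
yy, xx = np.mgrid[0:60, 0:60]
binary = (((yy - 30) ** 2 + (xx - 20) ** 2 < 144)
          | ((yy - 30) ** 2 + (xx - 40) ** 2 < 144))
labels = split_touching_nuclei(binary)
```

Over-segmented fragments produced this way would then be merged under a statistical shape model, as in [32].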

A new registration method to segment the hippocampus from brain MRI automatically is proposed by Xiangbo Lin et al. [33]. Their work combines a local affine transformation with optical-flow-based non-rigid registration, which has the advantage of handling large geometric deformations and intensity differences simultaneously. Quantitative evaluation against manual segmentation was performed on 10 subjects, using spatial overlap (the Kappa Index, KI) as the evaluation criterion. The average KI value for their method is 0.7749, compared with 0.5811 for ITK-SNAP, a semi-automatic segmentation tool.
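The Kappa Index used as a spatial-overlap measure here is commonly computed as the Dice coefficient, 2|A ∩ B| / (|A| + |B|); a minimal sketch (the toy masks are illustrative):

```python
import numpy as np

def kappa_index(auto_mask, manual_mask):
    """Spatial overlap (Kappa Index / Dice): 2|A ∩ B| / (|A| + |B|).
    Both inputs are boolean arrays of the same shape."""
    a = np.asarray(auto_mask, bool)
    b = np.asarray(manual_mask, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

a = np.zeros((10, 10), bool); a[2:8, 2:8] = True   # 36 voxels
b = np.zeros((10, 10), bool); b[4:10, 2:8] = True  # 36 voxels, 24 shared
print(kappa_index(a, b))                           # 2*24 / 72 ≈ 0.667
```

A KI of 1 means perfect agreement with the manual trace, 0 means no overlap, which makes values such as 0.7749 vs 0.5811 directly comparable.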

Pierric Coupe et al. [34] presented a nonlocal patch-based method that uses expert segmentation priors to segment the hippocampus. Inspired by recent work in image denoising, the proposed nonlocal patch-based label fusion produces accurate and robust segmentations. All images in the database were first denoised with the three-dimensional (3D) block-wise nonlocal means filter proposed for MR images. To ensure that each tissue type has the same intensity within a single image, the well-known N3 intensity non-uniformity correction was used, and each subject was linearly registered into stereotaxic space on the MNI 152 template using ANIMAL. Finally, the image intensities were normalized to the range [0, 100]; contrast and luminance information are preserved by performing this normalization globally over the entire 3D image. An initialization mask is constructed around the structure of interest, patches are selected and compared, and the matched areas are segmented.
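A drastically simplified 2D sketch conveys the core idea of nonlocal patch-based label fusion: atlas patches inside a small search window vote for each target pixel's label, weighted by patch similarity. The patch and search sizes, the bandwidth h, and the toy data are assumptions; [34] works on 3D volumes with preselection and other speed-ups not shown here.

```python
import numpy as np

def patch_label_fusion(target, atlases, labels, patch=1, search=2, h=1.0):
    """For each target pixel, every atlas patch within the search window
    votes for its label with weight exp(-||patch difference||^2 / h^2);
    the weighted votes are averaged over atlases and thresholded."""
    H, W = target.shape
    pad = patch + search
    t = np.pad(target, pad, mode='edge')
    out = np.zeros((H, W))
    for atlas, lab in zip(atlases, labels):
        a = np.pad(atlas, pad, mode='edge')
        l = np.pad(np.asarray(lab, float), pad, mode='edge')
        for i in range(H):
            for j in range(W):
                ti, tj = i + pad, j + pad
                tp = t[ti - patch:ti + patch + 1, tj - patch:tj + patch + 1]
                wsum = vote = 0.0
                for di in range(-search, search + 1):
                    for dj in range(-search, search + 1):
                        ai, aj = ti + di, tj + dj
                        ap = a[ai - patch:ai + patch + 1,
                               aj - patch:aj + patch + 1]
                        w = np.exp(-np.sum((tp - ap) ** 2) / h ** 2)
                        wsum += w
                        vote += w * l[ai, aj]
                out[i, j] += vote / wsum
    return out / len(atlases) > 0.5   # majority of weighted votes

# toy target and two slightly misaligned atlases with known labels
target = np.zeros((12, 12)); target[4:8, 4:8] = 1.0
a1 = target.copy()
a2 = np.zeros((12, 12)); a2[5:9, 5:9] = 1.0
seg = patch_label_fusion(target, [a1, a2], [a1 > 0, a2 > 0])
```

Because the weights depend only on local intensity similarity, the fusion tolerates the small registration errors that a single-atlas transfer would inherit directly.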

Sandeep S. Koushik et al. presented an unsupervised, image-driven hybrid algorithm for segmenting the hippocampus [37]. The method combines a coarse segmentation with surface evolution: a coarse solution is derived using region growing and then refined by applying a modified version of the physics-based water flow model of Liu and Nixon (2007) [38].
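The coarse region-growing step can be illustrated with a minimal 6-connected, intensity-tolerance grower (the seed, tolerance, and toy volume are assumptions, not the authors' settings):

```python
from collections import deque

import numpy as np

def region_grow(volume, seed, tol):
    """Grow a region from a seed voxel, accepting 6-connected neighbours
    whose intensity is within `tol` of the seed intensity."""
    ref = volume[seed]
    mask = np.zeros(volume.shape, bool)
    mask[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[k] < volume.shape[k] for k in range(3))
                    and not mask[n] and abs(volume[n] - ref) <= tol):
                mask[n] = True
                queue.append(n)
    return mask

# toy volume: a uniform bright blob on a dark background
vol = np.zeros((10, 10, 10))
vol[3:7, 3:7, 3:7] = 1.0
mask = region_grow(vol, (5, 5, 5), 0.5)
```

Such a mask is only a rough starting point; in [37] it is the surface-evolution stage that recovers the fine hippocampal boundary.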

A new method for segmenting the hippocampus is presented in [39], based on prior knowledge obtained from another technique. A local map is constructed from the training data set using graph cuts and is used to balance the current image against the prior information. The maps of the training set are combined to create a multi-atlas suitable for producing a map of the target image.

A knowledge-based method for segmenting the corpus callosum is reported by Maxime Taron et al. [40]. In this work, higher-order implicit polynomials are used to represent shapes, and the uncertainties of the registered shapes are estimated and used to model the prior knowledge of the target object. This non-linear model is integrated with the data term to separate the target region to be segmented from the background.

A computational method to segment the subfields of the hippocampus from ultra-high-resolution MRI is presented in [41]. The authors developed a mesh-based probabilistic atlas from prior segmentation results, used an image registration model to roughly outline the hippocampus, and then used the atlas meshes to segment the actual region.

An appearance-based segmentation of the hippocampus and amygdala was proposed by Shiyan Hu et al. [42], working in two stages. Stage one is the training phase, whose preprocessing includes three steps: first, image intensity non-uniformity correction (Sled et al., 1998) and image registration using three techniques, namely linear, local linear, and non-linear registration; second, intensity normalization based on a common reference volume; and third, manual labelling and construction of the shape and appearance models. Stage two is the segmentation phase, which applies a search algorithm iteratively to optimize the model parameters.

Hongzhi Wang et al. performed hippocampus segmentation using a stable maximum-likelihood classifier ensemble algorithm [43]. In this work a multi-atlas label fusion technique is used to segment the hippocampus from MRI: deformable registration establishes a one-to-one correspondence between an unknown image and each reference atlas image, segmentation is carried out via this correspondence, and the results are produced by combining the segmentations obtained from the different atlases. Segmentation accuracy is then improved by an error correction technique: the multi-atlas segmentation is taken as input and refined through a learning-based error correction step, performed by a new classifier ensemble algorithm.
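The simplest form of the label-fusion step is majority voting over the propagated atlas labels; the sketch below shows only that step (the weighted fusion and learned error correction of [43] are not reproduced, and the toy masks are illustrative):

```python
import numpy as np

def majority_vote(warped_labels):
    """Multi-atlas label fusion by majority vote: after each atlas has
    been registered to the target, a voxel is foreground when more than
    half of the propagated atlas labels say so."""
    stack = np.stack([np.asarray(l, bool) for l in warped_labels])
    return stack.sum(axis=0) > len(warped_labels) / 2.0

# three toy propagated label maps, two of which agree
l1 = np.zeros((4, 4), bool); l1[1:3, 1:3] = True
l2 = l1.copy()
l3 = np.zeros((4, 4), bool); l3[0, 0] = True
fused = majority_vote([l1, l2, l3])
```

More sophisticated fusion schemes, like the weighted one in [43], replace the flat vote with per-atlas reliability estimates.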

The literature review reveals that each method has its own merits and demerits. In this work we propose to develop seven methods to segment the hippocampus.
