Data Recovery Techniques in Digital Forensics


Computer misuse has become a major issue worldwide. With the development of modern technologies, people use computers for various criminal activities. To investigate those activities, computer scientists have extended their research into the digital forensics area, and as a result new research areas have arisen. {\bf Data Recovery Techniques in Digital Forensics} holds a prominent place among them. This literature review is intended to discuss the data recovery techniques currently used in digital forensics. Furthermore, future plans and directions for this research area will be discussed.


{\bf Data recovery} is the process of salvaging data from damaged, failed, corrupted, or inaccessible secondary storage media when it cannot be accessed normally. Often the data are salvaged from storage media such as hard disk drives, storage tapes, CDs, DVDs, RAID arrays, and other electronics. Recovery may be required due to physical damage to the storage device or logical damage to the file system that prevents it from being mounted by the host operating system.\\


{\bf Computer forensics} is a branch of forensic science pertaining to legal evidence found in computers and digital storage media. Computer forensics is also known as {\it digital forensics}.\\


The goal of computer forensics is to explain the current state of a {\it digital artifact}. The term digital artifact can include a computer system, storage medium (such as a hard disk or CD-ROM), an electronic document (e.g. an email message or JPEG image) or even a sequence of packets moving over a computer network.

\section*{Outline }

The survey is divided into six parts. The contents of the chapters are as follows.\\


{\bf Chapter 2:} describes {\bf Image Forensics} and the current techniques available for identifying and recovering JPEG files with missing fragments, and for identifying image-processing software using JPEG quantization tables.\\


{\bf Chapter 3:} describes {\bf Operating Systems Forensics.} This chapter focuses mainly on detecting file fragmentation points using sequential hypothesis testing, forensic analysis of the Windows registry in memory, Windows memory dumps, extraction of forensically sensitive information from Windows physical memory, and recovering deleted data from the Windows registry.\\


{\bf Chapter 4:} describes {\bf Data and File Forensics.} This section focuses on forensic data recovery and examination of magnetic swipe card cloning devices, forensic memory analysis of files mapped in memory, and predicting the types of file fragments.\\


{\bf Chapter 5:} describes {\bf Legal Frameworks for Data Recovery.} This section covers the DIALOG framework, digital evidence provenance supporting reproducibility, and FACE, an automated digital evidence discovery and correlation framework.\\


{\bf Chapter 6: Conclusion} contains my opinions and ideas regarding the survey, including a summarizing discussion of all the technologies and their current status.

% End of chapter 1


% Start of chapter 2

\section*{\Huge Chapter 2}

\section{Image Forensics}

\subsection{Recovery of JPEG files with missing fragments}

In image forensics, the recovery of fragmented files has become a challenging task for encoded file formats such as JPEG. There are two issues related to fragmented JPEG file recovery. The first is the efficient identification of the next fragment of the file being recovered; the second is the recovery of file fragments that cannot be linked to an existing image header, or for which no image header exists.

\subsubsection{Review of existing approaches }

Three techniques have been proposed for the recovery of fragmented JPEG files. A JPEG file consists of three things: an ordered sequence of markers, parameters, and entropy-encoded segments that are spread over multiple blocks. In the first stage of recovering fragmented JPEG files, all the data blocks of the storage device are scanned for known file markers. Each marker is two bytes in length; the first byte always contains the value 0xFF \cite{Missing}, and the second byte contains a code that specifies the marker type. The second stage starts after the marker type has been obtained: data is extracted from the first file block, which contains the image marker, and each new data block is merged with the previously merged file blocks. Using this approach, the recovered file is obtained in the last stage.\\
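The first stage described above can be sketched as follows. This is a minimal illustration, not a production carver: the 512-byte block size and the marker subset are assumptions, markers straddling a block boundary and the 0xFF00 byte-stuffing rule are ignored.

```python
# Sketch of the first recovery stage: scan raw disk blocks for
# two-byte JPEG markers (0xFF followed by a known type code).

BLOCK_SIZE = 512  # illustrative sector/block size

# A few well-known JPEG marker type bytes (the byte after 0xFF).
KNOWN_MARKERS = {
    0xD8: "SOI",   # start of image
    0xD9: "EOI",   # end of image
    0xC0: "SOF0",  # baseline start of frame
    0xC4: "DHT",   # Huffman table
    0xDB: "DQT",   # quantization table
    0xDA: "SOS",   # start of scan
}

def scan_blocks_for_markers(data: bytes):
    """Return (block_number, offset_in_block, marker_name) per hit."""
    hits = []
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        for i in range(len(block) - 1):
            if block[i] == 0xFF and block[i + 1] in KNOWN_MARKERS:
                hits.append((off // BLOCK_SIZE, i, KNOWN_MARKERS[block[i + 1]]))
    return hits

# Tiny synthetic "disk": an SOI marker in block 0, an EOI in block 1.
disk = bytearray(2 * BLOCK_SIZE)
disk[10:12] = b"\xff\xd8"
disk[BLOCK_SIZE + 20:BLOCK_SIZE + 22] = b"\xff\xd9"
print(scan_blocks_for_markers(bytes(disk)))
# → [(0, 10, 'SOI'), (1, 20, 'EOI')]
```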


In this existing approach, two types of file fragments cannot be recovered. The first type is stand-alone fragments, whose headers are not available; the second type is disrupted fragments, which cannot be linked to the header due to loss of data.\\


The existing approach therefore has two drawbacks, since two types of file fragments cannot be recovered. The new approach solves those problems: it is mainly concerned with file fragments whose headers are not available and with disrupted fragments.\\

\subsubsection{Recovery of disrupted fragments}

In the JPEG standard there is a marker called the restart marker. In every JPEG image there are eight restart markers, each represented by a two-byte code (0xFFD0 - 0xFFD7). These markers appear only in the entropy-coded segment of JPEG files; therefore they can be searched for directly in the file data \cite{Missing}.\\


In a JPEG file the DC coefficients of all color components are encoded as difference values rather than absolute values. When a restart marker is reached, the DC difference is reset to zero and the bit stream is synchronized to a byte boundary. This property of the restart marker is used for recovering the disrupted fragments of a JPEG file. These fragments can be identified directly by the restart markers, because the restart marker codes appear periodically and in a fixed cycle. Using this technique, after identifying any of the restart markers, the original file can be successfully recovered.\\
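The search for restart markers in raw fragment data can be sketched directly. Because the eight codes (0xFFD0-0xFFD7) cycle, the marker found also reveals the fragment's position within the restart-interval sequence; the sample bytes below are synthetic.

```python
# Sketch: locate restart markers (0xFFD0-0xFFD7) in a raw fragment.

def find_restart_markers(fragment: bytes):
    """Return (offset, marker_index) pairs; the index cycles 0..7."""
    hits = []
    for i in range(len(fragment) - 1):
        if fragment[i] == 0xFF and 0xD0 <= fragment[i + 1] <= 0xD7:
            hits.append((i, fragment[i + 1] - 0xD0))
    return hits

data = b"\x12\x34\xff\xd0\xab\xcd\xff\xd1\xee"
print(find_restart_markers(data))   # → [(2, 0), (6, 1)]
```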

\subsubsection{Recovery of stand-alone fragments by using pseudo headers}

Without a valid header a JPEG file cannot be decoded. Therefore this technique introduces a pseudo header, which is used to recover stand-alone fragments. When constructing the pseudo header, the information that can be gathered by analysis of the encoded file is not sufficient; other data are needed, such as the camera model that produced the image, the name of the software used to edit the image, or the web page from which the image was downloaded. All these factors are used when constructing the pseudo header. After reconstructing the pseudo header and using the {\bf Huffman} table of the image, the recovery process can recover stand-alone fragments.\\

\subsection{Identify imagery processed software by using JPEG quantization table}

Using JPEG image quantization tables, forensic examiners can detect whether an image was processed by computer software or not. The use of the quantization table as a digital ballistics technique was first introduced by {\bf Farid (2006)}. In that report he used 204 images, one per camera, at each device's highest quality setting. {\bf Chandra and Ellis (1999)} had shown how to compute the scaled tables found in an existing JPEG image from the base quantization tables \cite{Kornblum2008}.




\begin{figure}[h]
\centering
% Figure image not included in this version of the document.
\caption{Standard JPEG quantization tables scaled with Q=80 \cite{Kornblum2008}}
\end{figure}


There are four types of quantization tables:

\begin{itemize}
\item Standard tables
\item Extended standard tables
\item Custom fixed tables
\item Custom adaptive tables
\end{itemize}


\subsubsection*{How to use quantization tables for ballistics }

JPEG quantization tables can be used for digital ballistics to identify whether an image has been edited by computer software. There is a software library called {\bf Calvin} which helps programmers use quantization tables for digital ballistics. The goal of the Calvin library is to identify images created by a real camera. Calvin can extract the quantization table from an existing image, compare it against its quantization table database, and display the result. By default the database contains the standard tables, the extended standard tables, and Adobe Photoshop tables. Users can add more quantization tables to the Calvin library through the tool's display mode; through the comparison mode they can compare an image against the quantization table database and obtain the result.
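The comparison idea can be sketched without the real Calvin API (which is not reproduced here). The database below is an assumption for illustration: table rows are shortened to the first row of an 8x8 luminance table, and the Photoshop row is a made-up example, not a real Adobe table.

```python
# Illustrative sketch (not the Calvin API): match an extracted
# luminance quantization table row against a small database of known
# tables to suggest the producing software or camera.

KNOWN_TABLES = {
    "IJG standard, Q=50": (16, 11, 10, 16, 24, 40, 51, 61),
    "Photoshop (hypothetical example)": (2, 2, 2, 2, 3, 4, 5, 6),
}

def identify_table(extracted_row):
    for name, row in KNOWN_TABLES.items():
        if row == tuple(extracted_row):
            return name
    return "unknown (possibly a custom adaptive table)"

print(identify_table([16, 11, 10, 16, 24, 40, 51, 61]))
# → IJG standard, Q=50
```

A real tool would compare all 64 coefficients of both the luminance and chrominance tables, since different encoders can share individual rows.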

% End of chapter 2


% Start of chapter 3

\section*{\Huge Chapter 3}

\section{Operating Systems Forensics }

\subsection{Detecting file fragmentation point using sequential hypothesis testing}

Several techniques are available today for detecting file fragmentation points in operating systems forensics. {\bf Sequential hypothesis testing} is one of the techniques discussed here. Using sequential hypothesis testing, we can identify the fragmentation point of a file by sequentially comparing adjacent pairs of blocks, starting from the first block of the file, until the fragmentation point is reached.\\


{\bf File carving} is another technique, used to extract data files from a digital device without the assistance of the file table or other disk metadata. One of the challenges in file carving is recovering files that are fragmented.\\


We can improve the performance of identifying a file's fragmentation point by using {\bf Garfinkel's (2007) bifragment gap carving} technique and {\bf Pal and Memon's (2006) Parallel Unique Path (PUP)} technique for recovering fragmented files.

\subsubsection{File Fragmentation}

File fragmentation happens when a file is not stored in the correct sequence on consecutive blocks of the disk. Fragmentation typically occurs under one of the following scenarios:

\begin{itemize}
\item Low disk space
\item Appending/editing files
\item Wear-leveling algorithms in next-generation devices
\item File system behavior
\end{itemize}


If a file is fragmented, traditional file carving techniques fail to recover it.

\subsubsection{Fragmented file carving}

When a file gets fragmented, there are three steps to recover it:

\begin{itemize}
\item Identify the starting point of the file
\item Identify the blocks belonging to the file
\item Order the blocks correctly to reconstruct the file
\end{itemize}


Two techniques using the above three steps have been introduced to recover files: {\bf bifragment gap carving} and {\bf graph theoretic carving}. Bifragment gap carving introduced the {\bf fast object validation} technique for the recovery of fragmented files; it can recover files whose headers and footers are fragmented into two fragments, and the results can be validated using {\bf cyclic redundancy} checking. Graph theoretic carving performs the recovery process by finding the optimal ordering of blocks.
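Bifragment gap carving can be sketched on a toy scale: with the header and footer blocks known, try every contiguous gap between them and keep the first candidate that validates. The validator below is a stand-in for fast object validation (in practice, a decoder or CRC check); blocks and the validator are synthetic.

```python
# Toy sketch of bifragment gap carving: enumerate every possible
# contiguous gap between the header and footer blocks, and return the
# first candidate reassembly that passes validation.

def bifragment_carve(blocks, header_idx, footer_idx, validate):
    for gap_start in range(header_idx + 1, footer_idx + 1):
        for gap_end in range(gap_start, footer_idx + 1):
            candidate = b"".join(
                blocks[header_idx:gap_start] + blocks[gap_end:footer_idx + 1])
            if validate(candidate):
                return candidate
    return None

# Synthetic example: blocks 2-3 belong to another file.
blocks = [b"HDR", b"aa", b"XX", b"YY", b"bb", b"FTR"]
ok = lambda c: c == b"HDRaabbFTR"   # stand-in for object validation
print(bifragment_carve(blocks, 0, 5, ok))   # → b'HDRaabbFTR'
```

The quadratic number of gap candidates is why the technique is limited to files split into exactly two fragments.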

\subsubsection{Sequential hypothesis testing}

When using sequential hypothesis testing to recover fragmented files, the test is done in three stages: {\bf problem formulation, forward fragmentation point detection, and reverse fragmentation point detection}. The steps of sequential hypothesis testing for fragmented file recovery are as follows \cite{Pal2008}:

\begin{itemize}
\item Identify the starting block of the file
\item Sequentially check each block after the first and determine the fragmentation point or file end
\item If a fragmentation point is detected, find the starting point of the next fragment
\item Continue with step two from the starting point of the next fragment
\end{itemize}
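The steps above can be sketched as a heavily simplified sequential test: walk the blocks from the file's start, accumulate a log-likelihood-style score for "this block still belongs to the file", and declare a fragmentation point once the score crosses a threshold. The per-block score function below is a toy stand-in for the real statistical model, and the threshold is an arbitrary assumption.

```python
# Simplified sequential-hypothesis sketch for fragmentation point
# detection: accumulate evidence block by block; a strong run of
# "belongs" evidence resets the test, strong "does not belong"
# evidence triggers a decision.

def find_fragmentation_point(blocks, belongs_score, threshold=-2.0):
    """Return index of the first block judged NOT to belong, or None."""
    score = 0.0
    for i in range(1, len(blocks)):
        score += belongs_score(blocks[i - 1], blocks[i])
        if score < threshold:
            return i          # fragmentation point detected here
        if score > 0:
            score = 0.0       # accept and restart the test
    return None

# Toy model: blocks "match" when their first byte agrees.
score = lambda a, b: 1.0 if a[:1] == b[:1] else -3.0
blocks = [b"jpeg1", b"jpeg2", b"jpeg3", b"text1", b"text2", b"text3"]
print(find_fragmentation_point(blocks, score))   # → 3
```

A real implementation would base the score on content statistics of the candidate blocks rather than a byte comparison.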


\subsection{Extraction of forensically sensitive information from Windows physical \\ memory }

To extract forensically sensitive information from Windows physical memory, existing techniques usually rely on string matching. But there are new techniques based on analyzing the call stack and security-sensitive APIs. This new approach allows extracting sensitive information that cannot be extracted by string matching.

\subsubsection{Review of existing techniques}

In the past few years much research has been done in this category. {\bf Sarmoria and Chapin (2005)} presented a runtime monitor to log read and write operations on memory-mapped files. There is also a tool called {\bf BodySnatcher} (Schatz, 2007), which can inject an independent acquisition operating system into a potentially compromised host operating system kernel. Various software-based tools have been developed in recent years: {\bf WinEn, EnCase, MemParse, KnTList, PTFinder} and the {\bf DVD toolkit} are some of those that can be used to extract sensitive information from Windows physical memory.\\


Application/protocol fingerprint analysis and call stack analysis are the newest techniques used to extract sensitive information from Windows memory.

\subsubsection{Call stack analysis}

A call stack is the structure used by the operating system to store information about the active subroutines of each program. Execution stack, control stack, or simply stack are synonyms for the call stack. Using call stack analysis, sensitive information can be extracted from Windows physical memory. The analysis phase can be done using the following steps \cite{Hejazi2009}:

\begin{itemize}
\item Locating the stack memory associated with each thread
\item Locating stack frames for each function call on the stack
\item Understanding the function that has been called
\item Reconstructing the Import Address Table (IAT) of the process image
\item Comparing each called function with the list of forensically sensitive functions
\item Extracting the parameters that are present in the stack
\end{itemize}


Using the above steps of call stack analysis, investigators can extract forensically sensitive information from Windows physical memory.
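The comparison step (matching called functions against a list of forensically sensitive functions) can be sketched in isolation. The API list and the resolved stack below are assumptions for illustration; real analysis would first resolve raw return addresses via the reconstructed IAT.

```python
# Toy sketch of the sensitive-function comparison step: given function
# names already resolved from stack frames, flag those that appear in
# a list of forensically sensitive APIs.

SENSITIVE_APIS = {"CryptEncrypt", "LsaLogonUser", "InternetConnectA"}

def flag_sensitive_calls(resolved_stack):
    return [f for f in resolved_stack if f in SENSITIVE_APIS]

stack = ["ntdll!RtlUserThreadStart", "CryptEncrypt", "memcpy"]
print(flag_sensitive_calls(stack))   # → ['CryptEncrypt']
```

For each flagged call, the investigator would then extract the parameters still present in that stack frame (for example, a key or credential passed to the API).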


\subsection{Forensic analysis of the Windows registry in memory}

Windows family operating systems use a hierarchical registry database to store the information necessary to configure the system. Sensitive data such as passwords and encryption keys are stored in the registry. According to research over the past several years, the registry can contain a great deal of information useful to forensic examiners.\\


To analyze the Windows registry it is necessary to find out where in memory the hives have been loaded and to know how to translate cell indexes to memory addresses. A good understanding of the Windows configuration manager helps in analyzing the registry. For example, the configuration manager in Windows XP references each hive loaded in memory using the {\bf \_CMHIVE} data structure \cite{Dolan-gavitt2008}. The \_CMHIVE structure contains several pieces of metadata about the hive, such as its full path, the number of open handles to it, and pointers to the other loaded hives on the system. A Windows XP Service Pack 2 memory image contains thirteen hives. Data for the currently logged-on users, local service users, and network service users are kept in the {\bf NTUSER} and {\bf UsrClass} hives; the others include the security accounts manager hive, the system hive, the security hive, the software hive, and two volatile hives.\\


By using the cell indexes and the hives, forensic examiners can analyze the Windows registry and obtain data from memory. The experimental results are shown in the research \cite{Dolan-gavitt2008}.

\subsection{Recovering deleted data from the Windows registry}

The Windows registry keeps a wide variety of system data, such as core system configuration, user-specific configuration, information on installed applications, and user credentials. Analysis of the Windows registry is therefore a critical task for forensic analysts. {\bf Timothy D. Morgan} \cite{Morgan2008} has provided an algorithm for recovering deleted keys, values, and other structures in the context of the registry as a whole.

\subsubsection{Review of existing techniques }

Researchers {\bf Harlan Carvey} and {\bf Derrick Farmer} have shown how to recover registry hives and other data structures from a system memory image. The research of {\bf Russinovich (1999)} and {\bf Probert} concerns the registry's internal structure.

\subsubsection{Data recovery process}

{\bf Timothy D. Morgan} introduced a new algorithm for recovering deleted data from the Windows registry, implemented in the {\bf reglookup-recover} tool. His experimental results are shown below.




\begin{figure}[h]
\centering
% Figure image not included in this version of the document.
\caption{New User Entries (log values truncated) \cite{Morgan2008}}
\end{figure}


In his experiment he used the reglookup-recover command-line tool and the SAM registry. Figure 2 shows a snapshot of the SAM registry. In the experiment he created a new user under administrative privileges, and the above snapshot was taken. After that he deleted the user and took another snapshot of the {\bf SAM} registry using the reglookup-recover tool \cite{Morgan2008}.




\begin{figure}[h]
\centering
% Figure image not included in this version of the document.
\caption{Deleted User Entries \cite{Morgan2008}}
\end{figure}


According to figure 2, the new user was created as {\bf "Kobayashi"}, which added three keys and four values to the SAM registry. After that user was deleted, the newly created keys were deleted as well. But the figure 3 results show that the user "Kobayashi" could still be found: for the "Kobayashi" and "000003EC" keys and their subkeys, the {\bf MTIME}s are set to when they were last modified before deletion. The recovered value has no path because the Kobayashi key had been overwritten. In this way, data deleted from the Windows registry can be recovered; using the reglookup-recover tool, deleted data can be recovered from the {\bf system, software, SAM} and {\bf security} registries.

\subsection{Windows memory dumps}

Several tools and techniques are currently available for examining Windows memory dumps. {\bf Mariusz Burdach's Windows memory forensics toolkit} can analyze full memory dumps of systems running Microsoft Windows. {\bf Chris Betz's MemParser} tool can analyze dumps of active processes and their process memory. {\bf George M. Garner Jr.} and {\bf Robert-Jan Mora} created {\bf KnTList}, which can be used to evaluate several kernel-internal lists and tables to produce extensive lists of processes, threads, handles and other objects. All of these tools rely on the same fact: the Microsoft Windows operating system maintains tables and doubly linked lists in order to keep track of its resources \cite{Schuster2006}. These structures can, however, be subverted by {\bf Direct Kernel Object Manipulation (DKOM)}, which unlinks objects from the lists to hide them from such enumeration.

% End of chapter 3


% Start of chapter 4

\section*{\Huge Chapter 4}

\section{Data and File forensics}

\subsection{Forensics data recovery and examination of magnetic swipe card cloning devices}

Nowadays magnetic swipe card technology is used in many areas; credit cards, debit cards, mobile phone top-ups and security identification cards are some examples. In magnetic swipe cards, data is typically stored in tracks. There are three tracks on the magnetic stripe, and credit cards normally use only tracks one and two. Track one stores the account number, the card holder's name and the card's expiration date. Track two was developed for the banking industry; it contains a condensed copy of the track one data, without the card holder's name.\\


Using the {\bf Mini-123} communication protocol, the data stored on magnetic swipe cards can be extracted. The Mini-123 protocol is implemented using the networking library {\bf GNET}, which is written in C and built on {\bf GLIB}. The GNET library has the following features:

\begin{itemize}
\item Support for TTY operation
\item Simple handshaking
\item Multi-link capability
\item Expandability
\item Simple format
\end{itemize}





\begin{figure}[h]
\centering
% Figure image not included in this version of the document.
\caption{GNET handshaking \cite{Masters2007}}
\end{figure}


By using the GNET library and the Mini-123 communication protocol, data stored on magnetic swipe cards can be extracted. The following example shows sample data extracted from a swipe card \cite{Masters2007}:

\begin{itemize}
\item Record: 000
\item Timestamp 09:35:39, 09/11/2006
\item No Track 1 data
\item No Track 3 data
\item Account code: 8944129990123456789
\item Validfrom:12/99
\end{itemize}
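Decoding a raw track-2 string like the one behind the record above can be sketched as follows. This is not part of the Mini-123 protocol; the field layout (start sentinel `;`, account number, `=` separator, a four-digit YYMM date, end sentinel `?`) follows the common track-2 convention, and the sample value is synthetic, constructed to echo the record shown.

```python
# Sketch of decoding an ISO 7813-style track-2 string.

def parse_track2(raw: str):
    assert raw[0] == ";" and raw.endswith("?"), "bad sentinels"
    body = raw[1:-1]
    pan, rest = body.split("=", 1)           # account number, then dates
    return {"account": pan,
            "date": rest[2:4] + "/" + rest[0:2]}   # YYMM -> MM/YY

print(parse_track2(";8944129990123456789=9912101?"))
# → {'account': '8944129990123456789', 'date': '12/99'}
```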


\subsection{Forensic memory analysis: Files mapped in memory}

The most popular technique for recovering files from memory is {\bf carving}; {\bf Scalpel} and {\bf SmartCarving} are some of the carving tools and techniques available in the industry. But these techniques use linear carving algorithms, and linear carving algorithms fail to recover files that are fragmented. Therefore carving techniques are less effective for recovering files from memory dumps.\\


Due to the above problem, {\bf R.B. van Baar, W. Alink and A.R. van Ballegooij} \cite{Baar2008} introduced a new approach to recovering mapped files in memory. The process uses three methods: allocated file-mapping structures, unallocated file-mapping structures, and unidentified file pages.

\subsubsection{Allocated file-mapping structures}

This method was implemented using {\bf Schuster's PTFinder} carving algorithm, with which hidden and exited process structures can be identified. Those process structures contain pointers to the VAD root and the object table. {\bf Dolan-Gavitt (2007)} described how to traverse the VAD tree; using that method and going through the object table, it is possible to reconstruct private files.

\subsubsection{Unallocated file-mapping structure}

This method concerns recovering files that have been closed by processes. When file handles are closed by a process, the file data may still be retained in memory. These files can be recovered by carving for {\bf Control Area} and {\bf Page Table} structures.

\subsubsection{Unidentified file pages}

To recover unidentified file pages, MD5 hashes are used: the file data is still present in the pages, and the MD5 hash is used to compare those pages with files on the hard drive. The authors note the technique's limitations: matching hashes requires access to the file system, the technique does not link pages to information about processes, and pages that have been altered in memory will not be recognized.\\


Using the above three methods, the authors implemented prototype tools for recovering files. The first tool writes reconstructed files into a folder; the second tool creates an XML file containing information per page.
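The page-hash matching idea behind the unidentified-file-pages method can be sketched directly: hash each page-sized chunk of a memory dump and look it up against hashes of page-sized chunks of files on disk. The data below is synthetic; as the authors note, pages modified in memory will not match.

```python
# Sketch of page-hash matching: hash 4 KiB pages of a memory dump and
# compare against MD5 hashes of page-sized chunks of a disk file.
import hashlib

PAGE = 4096

def page_hashes(data: bytes):
    return [hashlib.md5(data[i:i + PAGE]).hexdigest()
            for i in range(0, len(data), PAGE)]

def match_pages(memory: bytes, disk_file: bytes):
    """Return indices of memory pages whose hash matches a file page."""
    disk = set(page_hashes(disk_file))
    return [i for i, h in enumerate(page_hashes(memory)) if h in disk]

file_data = b"A" * PAGE + b"B" * PAGE
memory = b"B" * PAGE + b"C" * PAGE        # one page matches the file
print(match_pages(memory, file_data))      # → [0]
```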

\subsection{Predicting the types of file fragments}

Determining the type of a file fragment is a problem in computer forensics. Two algorithms have been introduced for predicting the type of a fragment \cite{Calhoun2008}: one based on {\bf Fisher's linear discriminant} and the other based on the {\bf longest common subsequence} of the fragment with various sets of test files.\\


By looking at the file header we can easily identify the type of a file. But if that metadata is lost, current computer forensic software may not be able to correctly identify the type of the fragment.\\


Using the {\bf centroid fileprints} approach, {\bf Karresand} and {\bf Shahmehri} developed the {\bf Oscar} method for identifying the types of file fragments. This algorithm is optimized for JPEG files.

\subsubsection{Type prediction with Fisher's linear discriminant }

In this method, {\bf William C. Calhoun} and {\bf Drue Coles} predict file types using a mathematical model based on ASCII frequency, entropy and other statistics, applying Fisher's linear discriminant function for the classification. Known data and known classification groups are used in this approach: an unknown fragment is assigned to the known type whose discriminant function returns the highest value. To make that decision the software uses several different statistics and combinations of statistics, including the frequency of ASCII codes, entropy, modes, means, standard deviations and the correlation between adjacent bytes.
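The statistics fed to such a classifier can be sketched as a feature extractor; the discriminant itself would then be trained on these vectors. The feature choice below (byte entropy, printable-ASCII ratio, mean byte value) is a simplified subset of the statistics named above.

```python
# Sketch of per-fragment statistics for type classification: byte
# entropy, fraction of printable-ASCII bytes, and mean byte value.
import math
from collections import Counter

def fragment_features(frag: bytes):
    counts = Counter(frag)
    n = len(frag)
    entropy = -sum(c / n * math.log2(c / n) for c in counts.values())
    ascii_ratio = sum(1 for b in frag if 32 <= b < 127) / n
    mean = sum(frag) / n
    return entropy, ascii_ratio, mean

text_like = b"hello world, plain text " * 40
print(fragment_features(text_like)[1])   # → 1.0 (all printable ASCII)
```

Text fragments typically show low entropy and a high ASCII ratio, while compressed formats such as JPEG approach the maximum entropy of 8 bits per byte, which is what makes these features discriminative.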

\subsubsection{Type prediction with longest common substrings and subsequences }

If two files are of the same type, they will probably have a longer substring in common than files of different types. The authors developed the second algorithm based on this idea.\\
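The underlying measure can be sketched as a standard longest-common-substring computation over fragment bytes; the GIF-header example below is synthetic, and a real classifier would compare an unknown fragment against many sample files per type.

```python
# Sketch: length of the longest common substring of two byte strings,
# via the usual dynamic-programming recurrence (row-by-row to save
# memory).

def longest_common_substring(a: bytes, b: bytes) -> int:
    best = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

# Two GIF-like fragments share long runs despite differing versions.
print(longest_common_substring(b"GIF89a....", b"GIF87a...."))   # → 5
```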


Using the first and second algorithms, the authors present their experimental results in the paper \cite{Calhoun2008}. According to those results, JPGs can be distinguished from GIFs with 100\% accuracy, JPGs from BMPs with 92\% accuracy, and PDFs from BMPs with 87\% accuracy. The longest common subsequence technique works better without the ASCII and entropy statistics when distinguishing PDFs from GIFs, while the linear discriminant technique is better for distinguishing BMPs from JPGs and GIFs. Finally, the authors note that the linear discriminant is better suited to large fragments or large numbers of fragments, but the longest common subsequence technique gives better accuracy.

% End of chapter 4


% Start of chapter 5

\section*{\Huge Chapter 5}

\section{Legal Frameworks for Data Recovery}

\subsection{DIALOG framework}

In the digital forensics investigation world, the {\bf Digital Investigation Ontology} is known as the {\bf DIALOG} framework. DIALOG is a framework for the representation, reuse and analysis of digital investigation knowledge, and it plays a number of roles in the digital investigation field:

\begin{itemize}
\item DIALOG serves as a knowledge repository
\item DIALOG serves as a case manager
\item DIALOG serves as an evidence unification mechanism
\item DIALOG serves as an investigation guide
\end{itemize}


Currently DIALOG models the digital forensics field along four main dimensions:

\begin{itemize}
\item Crime case
\item Evidence location
\item Information
\item Forensic resource
\end{itemize}


The following figure illustrates the top-level ontology of the DIALOG framework \cite{Kechadi2009}.




\begin{figure}[h]
\centering
% Figure image not included in this version of the document.
\caption{Top level ontology of DIALOG \cite{Kechadi2009}}
\end{figure}


Using the DIALOG framework, investigators can model the Windows registry and then carry out their investigation from that model. A particular strength of this technique is that DIALOG helps investigators maintain the knowledge repository.\\


As mentioned in Chapter 3, there is a good technique for recovering data from the Windows registry. By combining it with the DIALOG framework, that investigation can be done with great accuracy.


\subsection{Digital evidence provenance supporting reproducibility and comparison }

Authors {\bf Brian Neil Levine} and {\bf Marc Liberatore} introduced a new format called {\bf Digital Evidence Exchange (DEX)} \cite{Levine2009} to record digital evidence provenance. What is special about this new format is that it is independent of the forensic tool that discovered the evidence. The DEX approach has a number of advantages. First, two investigators can exchange, compare, and reproduce the results of an investigation. Second, investigators can use DEX's standard output to verify and validate tools against known test data, a process called {\bf N-version programming (NVP)}. Third, output can be generated in a commonly agreed format. The technique is efficient because it describes digital evidence generically.

\subsubsection*{Design and Implementation of DEX}

DEX is an open-source implementation designed to achieve two goals. The first goal is that a DEX description together with the raw image file should be sufficient for reproducing the evidence. The second goal is that differences between two independent investigations of the same raw evidence should be identifiable.\\


The authors implemented the tool as a Java library with creation, extension and comparison capabilities, plus a set of wrappers over command-line forensic tools. The tool can capture abstractions of forensic objects, their attributes, and the relationships between them, and it includes context-sensitive comparison functionality.\\


Using this tool, two examiners can exchange and compare the results of their investigations. There is one problem in DEX comparison: the comparison function for each type of element has to be customized to some extent.

\subsection{FACE: Automated digital evidence discovery and correlation }

Authors {\bf Andrew Case, Andrew Cristina, Lodovico Marziale, Golden G. Richard} and {\bf Vassil Roussev} presented a framework for automatic evidence discovery and correlation from a variety of forensic targets, known as {\bf FACE}. Their prototype demonstrates the analysis and correlation of a disk image, memory image, network capture, and configuration log files. They also presented an advanced memory analysis tool called {\bf ramparser} for the automated analysis of Linux systems.

\subsubsection{Review of existing techniques }

{\bf FTK} and {\bf EnCase} are digital forensics suites that offer a point-and-click interface for analyzing captured disk images. {\bf Scalpel} and {\bf Foremost} can be used to carve sequences of bytes into recovered files. The above are disk analysis tools.\\


{\bf EnCase Enterprise Edition} belongs to the live forensics tool category, and also to the offline memory analysis and log analysis tool categories; {\bf OnlineDFS} also belongs to this category. To discover hidden processes, {\bf Kornblum}'s technique, which uses advanced address translation, can be applied.

\subsubsection{FACE }

FACE is the integrated forensics framework built around ramparser. Ramparser is a tool that provides deep analysis of Linux memory dumps; the current version can handle a range of 2.6 kernel variants. The FACE framework provides automated parsing of multiple forensically interesting objects and correlation between the results. FACE uses five main object types for its data: memory dumps, network traces, disk images, log files, and user accounting and configuration files. The authors state that ramparser and the FACE framework improve the forensic investigation process in at least two ways \cite{Case2008}:

\begin{itemize}
\item Automatically performing routine correlation tasks
\item Presenting a logical view of the entire target computer system
\end{itemize}


% End of Chapter 5


% Start the chapter 6

\section*{\Huge Chapter 6}

\section{Conclusion and future works }

Data recovery techniques in digital forensics have become an interesting research area. The area can be classified into several categories, such as image forensics, Windows forensics, data and file forensics, and web forensics. Within these subcategories, various lines of research have been explored.\\


In the image forensics category, using JPEG quantization tables for digital ballistics is a huge step forward for examiners, improving their productivity and success. As future research, researchers could examine image reflections and so be able to identify the person who took the image.\\


Detecting the file fragmentation point using sequential hypothesis testing is the foremost technique for data recovery in the Windows forensics area. Future work in this area could create models for other file formats such as Microsoft Office documents and email. For Windows registry analysis, current techniques target only Windows XP Service Pack 2, so future researchers can focus on other platforms as well.\\


Using data recovery techniques, we can extract data from magnetic swipe card cloning devices. Smart cards give more security to the data that would otherwise be stored on swipe cards, and in future contactless smart card technology will become more popular.\\


In forensic memory analysis, the files-mapped-in-memory approach currently supports the MmCi structure. This structure is similar to the MmCa structure, but it is not clear what its function is. New research could determine the exact function of the MmCi structure.\\


Regarding data recovery techniques using legal frameworks, the DIALOG framework is foremost, because it is application independent and can be used for a variety of purposes. The framework can also be used in distributed environments.\\


For the FACE framework integrated with ramparser, the following improvements can be made in future:

\begin{itemize}
\item Improved correlation
\item Improved visualization
\item Improved interoperation
\end{itemize}


% End of chapter 6