Problem Statement And Motivation Computer Science Essay


The main goal of our project is to design and implement a complete solution that enables medical professionals to view medical images on handheld devices such as tablets and smartphones. With the increase in data speeds and the integration of high-resolution displays on mobile devices, it is now possible to implement a project like ours and provide medical professionals with mobility. We aim to achieve the following goals:

Mobility for medical professionals.

Backend processing of medical images.

User friendly interface

The following objectives support these goals:

To set up a private cloud with a CPU-GPU infrastructure that will store the images and run the application.

To implement an OpenGL program for image processing.

To implement one-dimensional and two-dimensional FFT computations, which will be an important tool for data analysis.

To connect remotely to the server from anywhere, from an embedded device, over a secure connection such as SSH.

1.2 Problem statement and motivation

Biomedical imaging is today a very important part of the medical field. It is the technique of producing images of the human body or its organs for medical purposes, using state-of-the-art technologies to provide 2- and 3-dimensional images. Clinicians and doctors use these images for diagnosis, for example to decide whether surgery or another invasive diagnostic technique is needed. Techniques such as radiology, magnetic resonance imaging (MRI) and ultrasound are used for medical imaging.


Figure 1. CAT scan of human brain [1]

Medical images such as CAT and MRI scans provide a 3-dimensional view of the internal organs by generating a series of images of the organ from different angles and positions. These are very high-resolution images and require a lot of storage space. Making them available for viewing, and projecting a 3-dimensional view from them, requires a lot of processing power and data transfer. As a result, even though many tablets and smartphones with high processing power are available on the market, they still cannot be used directly in the medical field to view biomedical images. This poses a real problem for the medical fraternity, as it denies doctors mobility.

As mentioned in the problem statement, since viewing biomedical images is not possible on handheld and portable devices, doctors who wish to view images at a remote location would traditionally print the images and carry them along. This creates problems of accuracy, due to low-quality prints, and of transportability. Moreover, printed medical images cannot be altered by law, so doctors cannot make any notes on them.

The main motivation of this project is to provide medical professionals with mobility. Medical images need to be processed before they are displayed on a screen. This is a very processor-intensive task that cannot be performed even by the best tablets available today. The processing power needed to process and display a small set of 2048x2048 24-bit images at 30 fps (frames per second) is in the range of around 4 teraFLOPS. This kind of processing power can only be provided by a high-end CPU or CPU-GPU setup.
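The throughput figure above can be sanity-checked with simple arithmetic. The sketch below uses the resolution and frame rate quoted in the text to show what a 4 TFLOPS budget implies per displayed pixel; the ops-per-pixel figure is derived, not from the report:

```python
# Back-of-envelope check of the processing-power claim (illustrative only).
width, height = 2048, 2048                 # resolution quoted in the text
fps = 30
pixel_rate = width * height * fps          # pixels rendered per second
bytes_per_pixel = 3                        # 24-bit colour
data_rate = pixel_rate * bytes_per_pixel   # raw bytes per second

target_flops = 4e12                        # ~4 TFLOPS quoted in the text
ops_per_pixel = target_flops / pixel_rate  # implied arithmetic per displayed pixel

print(f"pixel rate : {pixel_rate:,} px/s")
print(f"data rate  : {data_rate / 1e6:.1f} MB/s")
print(f"implied ops: {ops_per_pixel:,.0f} FLOPs per pixel")
```

The budget works out to roughly 30,000 floating-point operations per displayed pixel, which is plausible for volumetric reconstruction and filtering rather than mere display.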

The main need of this project is to remotely provide the processing power and data-transfer capability that biomedical images require. In this project we will set up a private cloud with a CPU-GPU infrastructure; the selected GPU will have processing power in the teraFLOP range. The main focus of the project will be the graphics processing needed to display images as a three-dimensional model, which we will implement entirely in OpenGL. All images will be stored in this cloud and processed on demand. The whole OpenGL task will be performed by the server; the only task on the client is presenting the processed image. To achieve this, the client device needs to support and run OpenGL. Mobile devices running Android or iOS have OpenGL support and can be used by medical professionals. The application will be implemented using OpenGL for image processing and CUDA for parallel processing on an Nvidia graphics card. For this project, we will use the development board provided by Dr. Harry Li as the mobile device to present the images and to set up a secure connection to the server.

1.3 Project application and impact

The application and impact of our project in the medical field is very large. Our prime aim is to give medical professionals mobility for viewing biomedical images. This would make it easy for doctors to access reports and images on the go and send diagnoses to patients. Another application is for paramedics: in case of an accident, paramedics can easily access a patient's previously recorded reports and images and start treatment while the patient is being transported to the hospital.

1.4 Project results and expected deliverables

We are delivering a software package that performs graphics processing on the server and computes one-dimensional and two-dimensional FFTs on a set of inputs. The ARM11-based Samsung S3C6410 development board will be used as the mobile device, connected to the server over a Wi-Fi connection. The mobile device will remotely invoke the programs that generate the graphics output. The ARM11 board will run Ubuntu with OpenGL ES support for the graphics programs.


ARM11 based S3C6410 board with OpenGL ES support


OpenGL programs for texture mapping and triangulation

One Dimensional FFT-convolution

Two Dimensional FFT program

CUDA program for two dimensional convolution on image

1.5 Market Research

The Medical Device Manufacturing industry in the United States has enjoyed strong revenue growth recently and is forecast to continue expanding in the years ahead. Since 2007, revenue has increased at an average annual rate of 12.8%, and sales are expected to grow 7.4% in 2012 alone, to $64.7 billion [2]. Globally, the United States, the European Union and Japan are the major manufacturers and consumers of medical devices; together they account for almost 90% of the market.

Figure 2. Medical Device Market Distribution [2]

Recent market research and surveys predict that the consumer-electronics trend is shifting from desktops and laptops towards tablets and handheld devices. With the advancement of handheld devices and the better displays now available, it makes more practical sense to carry a tablet or smartphone instead of a laptop. Tablet sales are predicted to keep increasing while desktop sales decline; by 2015, tablet sales are expected to surpass desktop PC sales, which essentially means the world is moving towards mobile devices. With the Retina display in the latest iPad and an even higher-resolution screen on the Nexus tablet, it is clear that tablet displays will keep improving. This will benefit our product, as it allows a better and clearer display of images on handheld mobile devices.


Figure 3. PC Sales by Form Factor [3]

Over the years, data speeds on mobile devices have grown very rapidly. The typical data speed has gone up from around 200 kbps on 3G technology to more than 30 Mbps on the latest 4G LTE-Advanced technology. This makes it possible to perform tasks on mobile devices that require high internet speeds.
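To see what these link speeds mean for image transfer, here is a quick back-of-envelope calculation, assuming an uncompressed image at the project's quoted resolution, nominal link rates and no protocol overhead:

```python
# Rough transfer-time estimate for one uncompressed 2048x2048 24-bit image
# over the link speeds mentioned in the text (illustrative numbers only).
image_bytes = 2048 * 2048 * 3            # 24-bit colour -> 3 bytes per pixel
image_bits = image_bytes * 8

for name, mbps in [("3G (~200 kbps)", 0.2), ("4G LTE (~30 Mbps)", 30.0)]:
    seconds = image_bits / (mbps * 1e6)
    print(f"{name:20s}: {seconds:8.1f} s per image")
```

At 3G speeds a single slice would take minutes; at LTE speeds it takes seconds, which is what makes server-side rendering with thin-client display practical.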


Figure 4. Mobile Data Speed

Considering the current scenario, with tablet PC and smartphone sales increasing and data speeds over mobile networks rising, now is the best time to develop a project that provides the medical field with much-needed mobility.

1.6 Project report Structure

This section discusses the structure of the report. The document structure is based on the template report given by Dr. Jerry Gao, which gave us a proper structure to follow and helped us document the information in a logical manner.

This report consists of seven chapters that discuss different aspects of the project. The first chapter gives a brief introduction to the project, its objectives and needs, and the basic motivation behind it. Chapter two discusses the background and related work, the technologies that will be used, and existing similar projects.

Chapter 3 gives information on the project plan and schedule. Here we give a detailed schedule for the different tasks needed to complete the project and the task allocation among the team members. In chapter 4 we discuss the software system requirements, which briefly cover the customer, business and application requirements.

The next chapter focuses on the software technologies and techniques used in this project. With the project at a very early stage, we expect the software to change as implementation proceeds, but the core design and functionality will remain intact. The system architecture is explained with the aid of diagrams, along with the API logic design and the user interface.

Chapter 6 lists the tools and software that we will use in the project. The last chapter deals with the testing plan. Testing is a very important phase of the product lifecycle; it will help us make the product effective by making it bug-free. The report closes with an Appendix containing any information that was missed or not included in the body of the report.

Chapter 2 Background and Related Work

2.1 Background and used technologies

The focus of the project is on embedded multimedia computing. We are familiar with ARM processors through courses such as Embedded Hardware, Wireless Architecture and Embedded Software, and we intend to extend that work by incorporating embedded graphics for transfer-intensive multimedia computation. The main idea is to process images with a resolution of 2048x2048x24. The goals are to create an embedded platform with graphics and video display from a CPU-GPU desktop (private cloud) over a Wi-Fi link. We intend to emphasize convolution with 1D and 2D FFTs, using a Hamming filter for the convolution process. Handheld devices and smartphones do not provide the computation power required for such images, which typically lies in the teraflop range; hence we plan to use the CPU-GPU private cloud for the computation tasks. For this purpose, we are investigating NVIDIA graphics cards with teraflop processing capabilities. The computation can be sped up with parallel computing using the CUDA architecture. For graphics rendering, OpenGL will be used. Benchmarking will be done to demonstrate the speed-up achieved, possibly on the Amazon cloud as well.
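The Hamming filter mentioned above has a standard closed form. As a minimal sketch, assuming the conventional 0.54/0.46 coefficients, the window can be generated in a few lines of plain Python:

```python
import math

def hamming(N):
    """Standard Hamming window: w[n] = 0.54 - 0.46 * cos(2*pi*n / (N - 1))."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

# The window tapers symmetrically from ~0.08 at the ends to 1.0 in the middle;
# multiplying a signal segment by it reduces spectral leakage before an FFT.
print([round(w, 3) for w in hamming(9)])
```

In an FFT-based convolution pipeline, each input segment is multiplied by this window before the transform is applied.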

The images should be readily available through the web server; hence Java can be used to execute the scripts by means of the Common Gateway Interface. The ARM11 board (S3C6410) will be used for control and visualization purposes.

We had a discussion with our advisor about the advantages of using the Android platform in the project. We decided that it would be a good academic learning experience and would also help us in the long run in the industry. Android is one of the preferred choices for mobile devices: it uses a standard Linux kernel with new features such as a shared-memory driver and power management. We ported Android to the ARM11 board on the assumption that it would help if the project were packaged as an app for handheld devices. Looking at the trend, however, Android did not seem the best open-source option to go ahead with. Hence we used Linux kernel 2.6.38 with remote-communication capabilities that can get data from a NAS server or a CPU-GPU machine. Remote access is achieved using secure ssh, scp and ftp services.
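As a sketch of the remote-invocation step, the snippet below composes an ssh command that the client could run to trigger a program on the GPU server. The host name and program path are hypothetical placeholders, not values from the report:

```python
import shlex

def build_remote_command(host, program, args):
    """Compose an ssh invocation that runs `program` with `args` on the server.
    Host name and program path are hypothetical placeholders."""
    remote = " ".join(shlex.quote(part) for part in [program] + list(args))
    return ["ssh", host, remote]

cmd = build_remote_command("gpu-server.local", "/opt/imaging/render",
                           ["--image", "scan01.raw"])
print(cmd)
# The client would then hand this list to subprocess.run(cmd) over the Wi-Fi link.
```

Quoting each argument with `shlex.quote` keeps file names with spaces or shell metacharacters from being misinterpreted on the remote side.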

2.1.1 ARM 11 System Specifications

The S3C6410 is a 16/32-bit RISC processor designed to provide a cost-effective, high-performance solution for embedded applications. It has powerful hardware accelerators for motion-video processing, 2D graphics, scaling and display manipulation. A 3D graphics engine accelerates OpenGL ES 1.1 and 2.0 rendering, and includes a vertex shader and a pixel shader.

The ARM subsystem is based on the ARM1176JZF-S core and adopts the de facto AMBA bus architecture.


Figure . ARM 11 mini 6410 Board [4]

The summary of features of the S3C6410 RISC microprocessor:

Java acceleration engine and 16KB/16KB I/D cache.

2D graphics acceleration with BitBlt and rotation.

3D graphics acceleration with 4M triangles @ 133 MHz.

I2S and I2C support, flexibly configurable GPIOs.

Multimedia codec for encoding and decoding of MPEG-4/H.263/H.264.

533 MHz at 1.1 V and 667 MHz at 1.2 V.

USB 2.0 OTG and USB 1.1 host ports.

NAND flash interface with x8 data bus.

Mobile DDR interface with x32 data bus.

Advanced power management for mobile applications.

Figure . Samsung S3C6410 Block Diagram [4]

2.1.2 Introduction to parallel programming using CUDA

CUDA (Compute Unified Device Architecture) is a scalable parallel programming model with minimal extensions to the C environment. It is a parallel computing architecture developed by NVIDIA and used for graphics processing. In recent times, GPUs have evolved into highly parallel multicore systems allowing very efficient manipulation of large blocks of data.

The design of a CPU is optimized for sequential code performance. It uses complex control logic to allow instructions from a single thread of execution to run in parallel, or out of order, while maintaining the appearance of sequential execution. GPUs, on the other hand, are built with many computing cores, each a heavily multithreaded, in-order, single-instruction-issue processor that shares its control logic and instruction cache with other cores. GPUs are optimized to increase the throughput of parallel applications. Via CUDA, one executes the sequential segments on the CPU while using the GPU for the numerically intensive segments.

Graphics Card name          No. of Cores    Memory Bandwidth (GB/s)

GeForce GTX 590 (2-die GPU)

GeForce GTX 580

GeForce GTX 570

GeForce GTX 560 Ti

GeForce GTX 560

GeForce GTX 480

GeForce GTX 470

GeForce GTX 465

GeForce GTX 460

Table . Graphics card comparison

Figure . Price estimation analysis (approx. $200 per TFLOP)

The key parallel abstractions are:

A hierarchy of concurrent threads.

Lightweight synchronization primitives.

A shared-memory model for cooperating threads.
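As a rough illustration of these abstractions, the following plain-Python sketch mimics CUDA's grid/block/thread indexing for a vector addition. The loop variables mirror CUDA's blockIdx.x and threadIdx.x, but no GPU is involved; this is only a model of how work is partitioned:

```python
# Pure-Python simulation of the CUDA thread hierarchy: a grid of blocks,
# each block a group of threads, each thread handling one element
# (the classic CUDA vector-add pattern, sketched without a GPU).

def vector_add(a, b, threads_per_block=4):
    n = len(a)
    out = [0.0] * n
    num_blocks = (n + threads_per_block - 1) // threads_per_block  # ceil division
    for block_idx in range(num_blocks):              # blockIdx.x
        for thread_idx in range(threads_per_block):  # threadIdx.x
            i = block_idx * threads_per_block + thread_idx  # global index
            if i < n:                                # bounds guard, as in a real kernel
                out[i] = a[i] + b[i]
    return out

print(vector_add([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]))  # -> [11, 22, 33, 44, 55]
```

On a real GPU the two loops disappear: every (block, thread) pair runs concurrently, which is why the bounds guard on the last partial block is essential.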

Figure . Graphics frame buffer [5]

As can be seen from the pipeline above, the GPU performs a host of computing operations (vertex, shader, raster, etc.). Though the NVIDIA GeForce 3, GeForce 6800, 7800 and 8800 series added a lot of support for graphics programming, it wasn't until the NVIDIA Tesla architecture that the GPU was treated as a processor. NVIDIA selected a programming approach that would declare the data-parallel parts of the workload.

In the Tesla GPU, the shader processors became fully programmable processors with large instruction memory, instruction caches and instruction-sequencing logic. The Tesla GPU introduced a more generic parallel programming model with a hierarchy of parallel threads, barrier synchronization and atomic operations to dispatch and manage the computing work. Programmers no longer needed to use the graphics API to access the GPU's parallel computing capabilities.

CUDA finds varied uses in accelerated rendering of 3D graphics, in medical analysis and simulation of virtual reality based on CT and MRI scan images, and in physical simulations such as fluid dynamics. GPU computing plays a major role in this project because the algorithms used are computationally intensive; when implementing an algorithm, it helps if the processing time can be reduced from hours to minutes.
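As a reference for the planned CUDA work, here is a minimal direct 2D convolution in plain Python. Every output pixel is computed independently, which is exactly what makes the operation a good fit for GPU parallelization; the toy image and kernel are illustrative, not project data:

```python
def convolve2d(image, kernel):
    """Direct 2D convolution with zero padding: the operation the planned
    CUDA program parallelizes, written here as a plain-Python reference."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    out = [[0.0] * iw for _ in range(ih)]
    for y in range(ih):               # on a GPU, each (y, x) would be one thread
        for x in range(iw):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    sy, sx = y + ky - ph, x + kx - pw
                    if 0 <= sy < ih and 0 <= sx < iw:  # zero padding at borders
                        acc += image[sy][sx] * kernel[kh - 1 - ky][kw - 1 - kx]
            out[y][x] = acc
    return out

# 3x3 box blur on a tiny "image" with a single bright pixel:
img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
box = [[1 / 9] * 3 for _ in range(3)]
print(convolve2d(img, box))  # the energy spreads evenly over the neighbourhood
```

The two outer loops map directly onto a CUDA thread grid, with each thread computing one output pixel's accumulation.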

2.1.3 Graphics rendering using OpenGL

Open Graphics Library, or OpenGL as it is more popularly called, is a standard graphics specification defining a cross-language, cross-platform API for writing applications that produce 2D/3D graphics. It was developed by Silicon Graphics in 1992 and is currently managed by the Khronos Group. The current version of the specification is 4.2, released in August 2011.

OpenGL accepts primitives such as lines and polygons and converts them to pixels using the OpenGL state machine (shown in Fig 5). Most OpenGL commands either issue primitives or configure how the pipeline processes them.


Figure . Simplified view of the graphics pipeline [6]

The evaluator stage evaluates the polynomial functions that describe certain inputs, such as NURBS surfaces. Rasterisation converts the information into pixels, and the polygons are given the appropriate colors by means of interpolation algorithms. Lastly, the fragments are written into the frame buffer.

The model for interpretation of OpenGL commands is client-server: an application issues commands that are processed by the OpenGL server. The model is network transparent; a server can maintain several GL contexts, and a client can connect to any of them.

The OpenGL Utility Library (GLU) contains several groups of commands that complement the core interface by providing support for auxiliary features. We primarily use GLUT, which provides a limited but straightforward, portable interface to the window system. To implement higher-level OpenGL functions such as linear decoration we use GLFW (GL FrameWork), a platform-independent library. Its main aids are threading, input handling, and setting up polygons and windows by establishing an OpenGL context. It is written mainly in C and supports Windows, Linux, Mac OS and FreeBSD, with support for shared and universal binaries and MinGW targets. Full-screen support on Windows and X11 is also available.

The GLFW library does not do much drawing on its own; instead it uses the OpenGL library functions. Texture loading from TGA files is supported by GLFW.

Figure . OpenGLPipeline [7]

2.2 State-of-the-art technology

Massively parallel processing is a quiet revolution. There will always be more data than cores, so it becomes necessary to build the computation around the data.

The graphics processing units that render rich graphics for games are now used in various imaging technologies. The core capability of GPUs to break complex computations into smaller tasks is driving dramatic improvements across the medical-imaging spectrum.

There have been advancements in cancer detection at the University of California, San Diego, where GPU-based technologies make reconstruction 100 times faster [11]. Breast cancer can be detected in minutes with GPU-powered ultrasound from TechniScan, a medical device company.

GPU-based medical imaging provides faster, safer, high-quality diagnostics in today's GPU-revolutionized world.

2.3 Literature survey

1. BlackBerry medical app "eUnity" on the PlayBook [10]


The BlackBerry PlayBook has a medical app that displays patient data for orthopedic surgeons, helping them design knee replacements. Image files can be attached to a patient's medical record, with facilities to preview and open them. The app uses Adobe Flash and is not a native app. It is called the "Client Outlook eUnity imaging application" and provides secure access to healthcare professionals.

2. Chen Tiejian, Wang Yaonan, Zhang Hui, Xiao Changyan (2010), "An Embedded 3D Medical Image Processing and Visualization Platform Based on Dual-Core Processor", Intelligent Control and Automation 2010, 8th World Congress, 7-9 July 2010, IEEE. DOI: 10.1109/WCICA.2010.5554388. Retrieved from the IEEE Xplore database.


This paper explains the implementation of parallelization using an Intel dual-core processor for 3D image processing. The visualization-algorithm parallelization method and the medical data processing are described. The system proposed by the authors uses hardware resources to the maximum extent and meets space-efficient imaging requirements in the medical industry.

3. GPU-accelerated medical image technology [8]


This research, by Anders Eklund from Linköping University in Sweden, is based on algorithms such as image registration and image denoising for medical image processing. It concentrates on functional magnetic resonance imaging (fMRI), where brain activity is detected from the MRI; such a brain interface could help paralyzed people communicate.

Figure 11. Brain Computer interface example [8]

The emerging trend in medical imaging is to make the processed data available remotely so as to get real-time input. Such growing demand places a high value on computational performance.

In the context of our project

Our project aims to serve medical doctors by processing medical images in real time. Unlike the works surveyed above, it uses the principles of convolution and de-convolution for image processing. For teraflop computation, the parallel programming architecture is implemented in CUDA, and OpenGL is used for graphics rendering. Secure communication is facilitated using OpenSSH.

Chapter 3 Software System Requirements and Analysis

3.1 Domain and Business Requirements

3.1.1 Activity Diagram:

Figure . Activity Diagram

3.1.2 Domain State Diagram:

Application Domain Diagram:

Figure . Application Domain Diagram

3.2 Customer-Oriented Requirements

In this prototype, the user interacts with the system from a handheld device and accesses the service through a web browser. After the user passes through the authentication server, an interactive page provides the following options.

Zoom In: zooms in the viewpoint of the current image.

Zoom Out: zooms out the viewpoint of the current image.

Rotate Clockwise: rotates the viewpoint of the current image clockwise.

Rotate Anti-Clockwise: rotates the viewpoint of the current image anti-clockwise.

3D View: presents the image in a 3-dimensional view.

2D View: presents the image in a 2-dimensional view.
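A minimal sketch of the client-side state behind these controls might look as follows. The step sizes (1.25x zoom, 15-degree rotation) and method names are our own illustrative choices, not values specified in the report:

```python
# Hypothetical view state tracking the UI options above; the resulting
# (zoom, angle, mode) triple is what the client would send to the server.

class ViewState:
    def __init__(self):
        self.zoom = 1.0    # scale factor sent to the server
        self.angle = 0     # rotation in degrees, clockwise positive
        self.mode = "2D"   # "2D" or "3D" view

    def zoom_in(self):
        self.zoom *= 1.25

    def zoom_out(self):
        self.zoom /= 1.25

    def rotate_cw(self):
        self.angle = (self.angle + 15) % 360

    def rotate_ccw(self):
        self.angle = (self.angle - 15) % 360

    def set_mode(self, mode):
        self.mode = mode

v = ViewState()
v.zoom_in(); v.rotate_cw(); v.set_mode("3D")
print(v.zoom, v.angle, v.mode)  # -> 1.25 15 3D
```

Keeping the state on the client and sending only the triple keeps each request small, while the heavy re-rendering stays on the server.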

Use Case Diagram:

Figure . Use Case Diagram

3.3 System Function Requirements

The project is a combination of hardware and software. The functional requirements for the hardware and software parts are listed below.


Functional Requirements

Server Machine

Provide Connection Port

Provide Network Gateway

Provide Database Connectivity

Provide GPU Cloud Connectivity

GPU Graphics Card

Provide GPU Processing Environment

Client Handheld Device

Provide Platform For Web Browser

Provide IP Gateway

Provide User Interface


Provide Connectivity between Client and Server

Table . Hardware Functional Requirements

Software Functional Requirements:


Functional Requirements

Authentication Server

Provide user with authentication page

Get username and password

Authenticate user and redirect to application page

Application Server

Provide with display panel to display result image

Provide Control Panel to operate on current image

Provide list of services

Provide list of Images

Gather information and perform appropriate task

Cloud Server

Provide requested Service to Application Server

Perform 3D & 2D Convolution

Perform 3D & 2D De-Convolution

Provide GPU Environment for computation

Provide OpenGL Platform for operation

Table . Software Functional Requirements

3.4 System Behavior Requirements

In this prototype, the user interacts with the system from a handheld device through a web browser. After passing through the authentication server, the user is provided with an interactive page offering the following options; the expected result of each action is listed below.

Action: Expected result

Zoom In: zooms in the viewpoint of the current image.

Zoom Out: zooms out the viewpoint of the current image.

Rotate Clockwise: rotates the viewpoint of the current image clockwise.

Rotate Anti-Clockwise: rotates the viewpoint of the current image anti-clockwise.

3D View: presents the image in a 3-dimensional view.

2D View: presents the image in a 2-dimensional view.

3.5 System Performance and Non-Function Requirements

Non-functional requirements are equally important. Parameters such as availability, security and flexibility are non-functional requirements of a system; they assure its quality.




Availability

The client should be able to access services regardless of the geographical location of the server.

Services should be available at any time.

Quick response

Service responses should have minimal response time.

It should not take a long time to retrieve an image.

When multiple users are using the same application, it should still provide the output in average time.

Usability

It should have a proper and simple user interface.

It should be user friendly.

Security

A secure environment is necessary; the data of the 3D image should be transferred securely.

Image quality

The quality of the image should be good enough that a researcher can find its small details.

Table . Performance and Non-Functional Requirements

3.6 System Context and Interface Requirements

Our system simulates an activity in which a user connects to a remote image-processing service and can maneuver a view object in three dimensions. As feedback, the user sees an image sent back by the Application Server imager. The user will not be asked for a username or password, because our focus is not on verifying the authenticity of the user.

The interfaces of the elements will not depend on the development platform used for our implementation. Android, which is based on a Linux file system, will be ported to the ARM development kit. This node acts as the client side and receives the services provided by the application server and the GPU cloud. The concept of cloud instrumentation, still a niche industry, is used in our implementation: it allows users to remotely connect to and control an instrument and obtain all diagnostic or feedback data from the controlling location on the other side of the IP cloud. Cloud computing can be categorized into three basic categories: software as a service (SaaS), platform as a service (PaaS) and infrastructure as a service (IaaS). Our implementation is based on the SaaS architecture, where the user remotely accesses the instrument situated at another location. This concludes our discussion of the context and interface requirements, including the category of cloud service we are using and how the different elements come together as one system. The next section discusses the technology and resource requirements.

3.7 Technology and Resource Requirements

The following are the technologies we are using for design and implementation in this project.

ARM11 Development Platform

We are using the ARM11-architecture-based FriendlyARM mini6410 development board, with the Samsung S3C6410 processor at its heart. The development board will be used as the handheld mobile client device to display the OpenGL output. It will also be used to connect remotely to the server and send commands to execute the programs there.

802.11 Communication Protocol

This communication protocol will be used to set up wireless connectivity on the board. 802.11 is an IEEE standard for wireless LAN communication with any Wi-Fi-enabled device; it operates in the 2.4, 3.6 and less-used 5 GHz frequency bands. The protocol sets up wireless communication between the ARM11 development board, acting as the client device, and the server. Parameters are passed from the client device to the server through HTTP; they are read on the server side and used to trigger the OpenGL programs there.
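The parameter-passing step can be sketched with Python's standard library. The parameter names below (action, image) are hypothetical placeholders for whatever the server-side scripts actually expect:

```python
from urllib.parse import urlparse, parse_qs

def parse_render_request(url):
    """Extract the command parameters the client embeds in an HTTP request.
    The parameter names (action, image) are illustrative placeholders."""
    query = parse_qs(urlparse(url).query)
    return {key: values[0] for key, values in query.items()}

params = parse_render_request("http://server/render?action=rotate_cw&image=scan01")
print(params)  # -> {'action': 'rotate_cw', 'image': 'scan01'}
```

On the server, a small CGI or HTTP handler would apply this parsing and then dispatch to the matching OpenGL program with the extracted arguments.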

Ubuntu 11.04 and above (development environment)

Ubuntu is used in our project as the development environment for the OpenGL programs. The main reason is that it is a free OS requiring no licenses, so anyone can use it. We started with Ubuntu 11.04 and later upgraded to Ubuntu 12.04. The Ubuntu environment is mainly used to design and develop the OpenGL programs for graphics rendering.


OpenGL

Open Graphics Library, or OpenGL, is a standard graphics specification defining a cross-language, cross-platform API for writing applications that produce 2D/3D graphics. It accepts primitives such as lines and polygons and converts them to pixels using the OpenGL state machine. In this project, OpenGL is used for graphics rendering: we have rendered a unit-dimension wireframe cube, mapped an image onto the cube by texture mapping, and rendered the cube with triangles using the triangulation technique.

Chapter 4 System Design

4.1 System Architecture Design

Figure System Architecture

The diagram above shows the system architecture that we followed. Each element of the project will be designed and developed separately, and our main aim will be to bring together and integrate all the different elements. The main focus of the project is image processing and computation, which will be carried out in the private cloud environment on the CPU-GPU infrastructure; the processed image will be rendered using OpenGL.

We performed functions like 1D FFT, 1D IFFT, 1D convolution and Hamming filtering on the image to render the images for display in a 3D environment. Some functions, such as edge detection and feature extraction, are also implemented. The wireframe model, texture mapping (linear decoration) and triangulation with different images on different faces of the polygon were done.

These image functions are performed in the CPU-GPU environment, which has higher performance. This CPU-GPU environment acts as a server providing services to the handheld device. Because the images and computations stay on the server, the handheld device is not loaded with heavy computation and can therefore perform better.

When the data is transferred to the handheld device, the device runs an application with OpenGL support that renders the processed image sent by the server. This application also takes care of interpreting the user's inputs and converting them into server commands.

The Java development environment used in this project is Eclipse. It is very simple to use and has a very friendly GUI. Eclipse is based on the Java platform itself, hence it is platform independent, has no complicated installation procedure and is open source; these are some of the reasons we chose Eclipse as our Java development environment. Eclipse is an open-source community, organized as a non-profit member-supported corporation, that focuses on building an open development platform comprising extensible frameworks, tools and runtimes for building, deploying and managing software across the lifecycle. Eclipse was originally created by IBM in November 2001 and supported by a consortium of software vendors. The Eclipse Foundation was created in January 2004 as an independent not-for-profit corporation to act as the steward of the Eclipse community [1], allowing a vendor-neutral, open and transparent community to be established around Eclipse. Today, the Eclipse community consists of individuals and organizations from a cross-section of the software industry [1].

A one-dimensional FFT decomposes a sequence of values into components of different frequencies. This operation is useful in many fields (see the discrete Fourier transform for properties and applications of the transform), but computing it directly from the definition is often too slow to be practical. An FFT is a way to compute the same result more quickly: computing a DFT of N points in the naive way, using the definition, takes O(N²) arithmetical operations, while an FFT can compute the same result in only O(N log N) operations [3]. The difference in speed can be substantial, especially for long data sets where N may be in the thousands or millions. In practice, the computation time can be reduced by several orders of magnitude in such cases, and the improvement is roughly proportional to N / log(N). This huge improvement made many DFT-based algorithms practical; FFTs are of great importance to a wide variety of applications, from digital signal processing and solving partial differential equations to algorithms for quick multiplication of large integers [3].
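The O(N log N) cost described above is typically achieved with the radix-2 Cooley-Tukey algorithm. The following Java sketch (class and method names are ours, not part of the project code) computes an in-place 1D FFT for power-of-two lengths using a bit-reversal permutation followed by butterfly stages:

```java
public class FFT {
    // In-place iterative radix-2 Cooley-Tukey FFT.
    // re/im hold the real and imaginary parts; length must be a power of two.
    static void fft(double[] re, double[] im) {
        int n = re.length;
        // Bit-reversal permutation: reorder inputs so the butterflies
        // can be applied in place, stage by stage.
        for (int i = 1, j = 0; i < n; i++) {
            int bit = n >> 1;
            for (; (j & bit) != 0; bit >>= 1) j ^= bit;
            j ^= bit;
            if (i < j) {
                double t = re[i]; re[i] = re[j]; re[j] = t;
                t = im[i]; im[i] = im[j]; im[j] = t;
            }
        }
        // Butterfly stages: combine pairs of half-size DFTs, doubling
        // the transform length each pass (this is the O(N log N) part).
        for (int len = 2; len <= n; len <<= 1) {
            double ang = -2 * Math.PI / len;
            double wr = Math.cos(ang), wi = Math.sin(ang); // principal root of unity
            for (int i = 0; i < n; i += len) {
                double cwr = 1, cwi = 0; // current twiddle factor
                for (int k = 0; k < len / 2; k++) {
                    int lo = i + k, hi = i + k + len / 2;
                    double vr = re[hi] * cwr - im[hi] * cwi;
                    double vi = re[hi] * cwi + im[hi] * cwr;
                    re[hi] = re[lo] - vr; im[hi] = im[lo] - vi;
                    re[lo] += vr;         im[lo] += vi;
                    double nwr = cwr * wr - cwi * wi;
                    cwi = cwr * wi + cwi * wr;
                    cwr = nwr;
                }
            }
        }
    }

    public static void main(String[] args) {
        double[] re = {1, 1, 1, 1, 0, 0, 0, 0};
        double[] im = new double[8];
        fft(re, im);
        // The DC bin (index 0) equals the sum of the input samples.
        System.out.println(re[0]); // prints 4.0
    }
}
```

The inverse FFT can be obtained from the same routine by conjugating the twiddle factors and scaling by 1/N.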

A wireframe model is a visual presentation of a three-dimensional or physical object used in 3D computer graphics. It is created by specifying each edge of the physical object where two mathematically continuous smooth surfaces meet, or by connecting an object's constituent vertices using straight lines or curves. The object is projected onto the computer screen by drawing lines at the location of each edge. The term wireframe comes from designers using metal wire to represent the three-dimensional shape of solid objects. 3D wireframe modeling allows the user to construct and manipulate solids and solid surfaces. The 3D solid modeling technique produces high-quality representations of solids more efficiently than conventional line drawing. [6]
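At its core, a wireframe is just a list of edges connecting an object's vertices. The short Java sketch below (names are illustrative) builds the edge list of a unit cube by encoding each vertex as a 3-bit code and connecting vertices whose codes differ in exactly one coordinate; an OpenGL renderer would then draw one line per edge.

```java
import java.util.ArrayList;
import java.util.List;

public class CubeWireframe {
    // Build the wireframe (edge list) of a unit cube. Each of the 8
    // vertices is a 3-bit code, one bit per axis (x, y, z in {0, 1});
    // an edge exists where two vertex codes differ in exactly one bit.
    static List<int[]> cubeEdges() {
        List<int[]> edges = new ArrayList<>();
        for (int a = 0; a < 8; a++) {
            for (int b = a + 1; b < 8; b++) {
                if (Integer.bitCount(a ^ b) == 1) {
                    edges.add(new int[]{a, b});
                }
            }
        }
        return edges;
    }

    public static void main(String[] args) {
        // A cube has 12 edges: 8 vertices * 3 neighbors / 2.
        System.out.println(cubeEdges().size()); // prints 12
    }
}
```

Rendering then reduces to projecting the 8 vertex positions and drawing a line segment for each of the 12 edge pairs.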

A texture map is applied (mapped) to the surface of a shape or polygon.[1] This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2D case is also known as a UV coordinate), either via explicit assignment or by procedural definition. Image sampling locations are then interpolated across the face of a polygon to produce a visual result that seems to have more richness than could otherwise be achieved with a limited number of polygons.

Multitexturing is the use of more than one texture at a time on a polygon.[2] For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface, such as tree bark or rough concrete, that takes on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in recent video games as graphics hardware has become powerful enough to accommodate it in real time.[7]
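The interpolation of sampling locations across a polygon's face can be sketched with barycentric weights, as a rasterizer would do per pixel. This is a minimal illustration with hypothetical names, not the project's renderer: each vertex carries a UV coordinate, and a point inside the triangle gets the weighted average.

```java
public class UvInterpolation {
    // Interpolate a per-vertex UV coordinate across a triangle using
    // barycentric weights (w0 + w1 + w2 == 1). Each uvN is {u, v}.
    static double[] interpolateUv(double[] uv0, double[] uv1, double[] uv2,
                                  double w0, double w1, double w2) {
        return new double[]{
            w0 * uv0[0] + w1 * uv1[0] + w2 * uv2[0],
            w0 * uv0[1] + w1 * uv1[1] + w2 * uv2[1]
        };
    }

    public static void main(String[] args) {
        // The triangle's centroid (equal weights) maps to the centroid
        // of the UV triangle.
        double[] uv = interpolateUv(new double[]{0, 0}, new double[]{1, 0},
                                    new double[]{0, 1},
                                    1.0 / 3, 1.0 / 3, 1.0 / 3);
        System.out.println(uv[0] + ", " + uv[1]);
    }
}
```

The GPU performs this interpolation (usually with perspective correction) for every fragment, then samples the texture image at the resulting UV location.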

4.2 System Data and Database Design

For our project, the database consists of sets of high-resolution medical images that serve as the input to our application. Since the images can be given to the application program directly, there is no need for database management of the images. User input is captured by the web server, and the corresponding application program is run. The image set may be stored on the private cloud itself and does not require a separate data server. This section is therefore largely redundant for our project.

4.3 System Interface and Connectivity Design

Figure. System Interface Diagram

For our project, the system is divided into various subcomponents. One half is the server side, where the image is processed in the CPU-GPU environment. The other half is the client side, that is, the embedded device. The interface between these two halves is bridged by 3G or Wi-Fi technology.

The connectivity is provided by a 3G or Wi-Fi module on either half. This interface also helps divide the computations, which is the main aim of the project.

The server used is a NAS running Turnkey Linux 11.3. There is currently no facility for secure access on the ARM11 S3C6410 board, so an SSH server is cross-compiled with arm-linux-gcc so that devices or files attached to a remote machine can be accessed from the board. A host of services such as scp, ssh-agent, sftp, ssh-keygen, sftp-server, ssh-add, and ssh-keyscan are also enabled on the embedded device. Wi-Fi is also enabled on the ARM11 board. To facilitate streaming audio and video files, CGI is implemented on the BOA web server. The graphics card can also be accessed remotely from another machine to compile CUDA programs.

Turnkey Linux is an Ubuntu-based distribution. As such, it includes all the ease of use and stability offered by Ubuntu. However, the advantage of Turnkey Linux lies in its distribution model: its developers strip Ubuntu of various features and build specialized appliances. These appliances span multiple domains; some examples are the LAMP appliance, the Drupal 6/Joomla! appliance, the Torrent appliance, and the Ruby on Rails appliance. The NAS appliance is known as the File Server Appliance.

Figure 4. eXtplorer Access from Browser

Installation of the File Server Appliance is very straightforward: download the ISO from the Turnkey Linux website, burn it onto a CD/DVD using one of the many available programs, and boot from the CD. The File Server setup automatically detects all available drives and asks the user to choose the installation drive. Though our File Server Appliance supports CIFS out of the box, it is advantageous to use NFS; the main reason is that SSH tunneling is much easier to set up via NFS.

The ARM11 S3C6410 Mini6410 board supports a variety of USB Wi-Fi cards. The USB device is first plugged into the USB host slot and the wireless utility is started. The available service set identifiers (SSIDs) and their signal strengths are listed. One can also use the scan facility in the wireless application to manually scan all available networks. Select the desired network and enter its password to connect; the status reads as connected once the connection is successful.

OpenGL libraries such as freeglut and GLFW are used to perform various tasks, such as building a wireframe model and performing linear decoration with single and multiple images.

We have included a detailed report in the appendix detailing the remote communication interface.