An Introduction to Neural Networks


This thesis provides an introduction to Artificial Neural Networks. A variety of neural networks are described and demonstrated; topics such as the artificial neuron, the perceptron, the main types of neural network, and backpropagation are explained; and a detailed historical background is provided. The relationship between artificial neural networks and their biological counterparts is also investigated and explained. Finally, the mathematical models involved are presented, together with the implementation and evaluation of a neural network for the classification of bacterial data. The system is developed and demonstrated using MATLAB.

INTRODUCTION TO NEURAL NETWORKS:

Artificial neural networks are a branch of artificial intelligence intended to imitate the biological neural networks found in the brain: they are formed from many interconnected artificial neurones, in much the same way as the brain is formed from interconnected biological neurones. Artificial neural networks offer advantages over conventional computing in areas such as trend prediction, pattern recognition, and generalisation.

The earliest work on neural networks was considered phenomenal for its era. In 1943, Warren McCulloch and Walter Pitts jointly published a model abstracting a simple neural network using electronic circuits. Subsequently, in the 1950s, when computers appeared, researchers used the new technology to model more advanced neural networks. Physiologists, psychologists, and computer engineers all contributed to the development of artificial neural networks over the following decade. The neurobiologist Frank Rosenblatt, intrigued by the neural processing involved in vision, created the "Perceptron" neural network. The perceptron and many related models showed great promise and enjoyed much initial success.

Despite this early success, the great hype around artificial neural networks turned to disappointment, owing to unfulfilled claims and the limited computing power available at the time. There were also genuine limitations to further development. In 1969, Marvin Minsky and Seymour Papert released a book describing the limitations of the perceptron model. As a consequence, funding for artificial neural network research was severely restricted.

Several scientists continued to develop neural network models in spite of the minimal funding. Paul Werbos extended the perceptron model and introduced the back-propagation network. Researchers such as Steve Grossberg, Teuvo Kohonen, and Harry Klopf also developed new models. Then, in 1982, John Hopfield of Caltech presented a paper to the National Academy of Sciences that revived the field; his appeal was to develop usable technologies for real-life problems, not simply to create models. In 1985 the American Institute of Physics organized a conference on Neural Networks for Computing, and by 1987 a conference organized by the Institute of Electrical and Electronics Engineers attracted more than a thousand attendees. This interest has continued to the present day, as neural networks have found use in everything from medical diagnosis tools to speech recognition software.

What is a Neural Network

The study of neural networks is an innovative multidisciplinary research field that has developed in response to the challenge of understanding the brain, and it is currently enjoying a great surge of activity. The motivation for developing neural network technology is to create artificial systems that can carry out "intelligent" tasks similar to those performed by the human brain. In neural networks, the brain's major computational components are treated as mathematical objects, and these are used for three broad research purposes:

As a model of computation,

As a model of brain structure, and

As a model of cognitive processes.

The field of cognitive modelling involves the physical or mathematical modelling of the behaviour of neural systems.

Conventionally, the term neural network referred to a network or circuit of biological neurons. In current usage, the term usually refers to artificial neural networks, which are composed of artificial neurons.

Biological neural networks are built from biological neurons that are physically connected or functionally linked in the peripheral nervous system or the central nervous system. In neuroscience, they are frequently identified as groups of neurons that carry out a specific physiological function in laboratory analysis.

[Figure: Biological Neuron]

[Figure: Schematic Diagram of a Neural Network]

Artificial Neural Network:

An artificial neural network is an abstraction of mathematical models of biological nervous systems. The first wave of interest in neural networks emerged after the introduction of simplified neuron models by McCulloch and Pitts.

Artificial neurons, often simply called neurons, are the fundamental processing elements of a neural network. The properties of the synapses are represented by connection weights that modulate the effect of the associated input signals, and the nonlinear characteristic exhibited by neurons is represented by a transfer function. The neuron impulse is computed as the weighted sum of the input signals, transformed by the transfer function. The learning capability of an artificial neuron is achieved by adjusting the weights in accordance with the chosen learning algorithm.
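As a minimal MATLAB sketch of this computation (the weights, bias, and inputs below are arbitrary illustrative values, not taken from any particular network), a single neuron forms the weighted sum of its inputs and passes it through a sigmoid transfer function:

x = [0.5; 0.8; 0.2];          % input signals
w = [0.4; -0.7; 0.9];         % connection weights, one per input
b = 0.1;                      % bias term

s = w' * x + b;               % weighted sum of the input signals
y = 1 / (1 + exp(-s));        % sigmoid transfer function gives the neuron output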

[Figure: Artificial Neural Network]

A simple neuron:

An artificial neuron has several inputs and one output, and two modes of operation: training mode and using mode. In training mode, the neuron is trained to fire for particular input patterns; in using mode, when a trained input pattern is detected at the input, its associated output becomes the current output.

[Figure: A Simple Neuron]

Why use Neural Networks:

Neural networks can solve problems that have no algorithmic solution, or for which the available solution is too complex to be found. Problems that people are good at solving, such as prediction and pattern recognition, can be tackled with neural networks. Within medicine, neural networks have been applied to clinical diagnosis, signal analysis and interpretation, image analysis and interpretation, and drug development.

Neural networks are well suited to multivariate non-linear problems. They can detect all possible interactions between predictor variables, and several training algorithms are readily available. Unlike conventional computers, neural networks take a distinctive approach to solving problems. A conventional computer solves a problem by following a fixed set of instructions, which limits its problem-solving capability; a neural network, by contrast, processes information in much the same manner as the human brain. Neural networks can be trained to carry out statistical modelling and offer an innovative alternative to logistic regression; they solve a particular problem through a collection of interconnected neurons. Their disadvantages include the large computational effort required to reduce overfitting, their "black box" nature, a greater computational burden, the empirical nature of model development, and the need for large samples.

Parallel distributed computing can overcome the difficulty serial computation has in processing massive amounts of data. Evidence for this is that biological systems achieve very high processing rates, even in contrast with digital computers whose components operate at greatly higher speeds.

A typical neural network is composed of input units X1, X2, ... corresponding to independent variables; a hidden layer of units H1, H2, ... (known as the first layer); and an output layer (the second layer) whose output units Y1, ... correspond to dependent variables,

where X1, X2 are the input units,

H1, H2 are the hidden units, and

Y1 is the output unit.

Weight matrices W(1) (input to hidden) and W(2) (hidden to output) hold the adjustable weights. The values of the hidden units and outputs are obtained from the formulas

Hj = f(∑i W(1)ij Xi),   Yk = f(∑j W(2)jk Hj)

The activation function f is usually of sigmoid form and may be a logistic function, a hyperbolic tangent, etc.:

f(s) = 1/(1 + e^(−s))   or   f(s) = tanh(s)

The approximate output is improved with an iterative learning method in which the outputs for a variety of input vectors are compared with their targets and an average error term E is computed, for example E = (1/N) ∑n (Yn − Tn)², where Tn denotes the target for input vector n.
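The following MATLAB sketch illustrates these formulas for a toy network (the matrices W1 and W2, the input X, and the target T are invented for illustration):

f  = @(s) 1 ./ (1 + exp(-s));     % logistic activation function
X  = [0.2; 0.9];                  % input units X1, X2
W1 = [0.5 -0.3; 0.8 0.1];         % weights W(1) from inputs to hidden units
W2 = [0.7 -0.6];                  % weights W(2) from hidden units to output
T  = 0.4;                         % target value for the output Y1

H = f(W1 * X);                    % hidden units: Hj = f(sum_i W(1)ij * Xi)
Y = f(W2 * H);                    % output:       Y1 = f(sum_j W(2)j * Hj)
E = mean((Y - T).^2);             % average error term E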

Pattern Recognition:

Pattern recognition is an important application of neural networks, and it can be carried out using a trained feed-forward neural network, as shown in the figure. To associate outputs with input patterns, the network must first be trained; it then recognises the input pattern and attempts to produce the associated output pattern. The power of neural networks becomes apparent when a pattern that has no associated output is given as an input: in that situation, the network delivers the output corresponding to the trained input pattern that is least different from the given one.
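A minimal MATLAB sketch of this behaviour, using the same Neural Network Toolbox functions (newff, train, sim) employed later in this work; the patterns, targets, and layer sizes are invented for illustration:

P = [0 0 1 1; 0 1 0 1];                          % four 2-element input patterns
T = eye(4);                                      % one one-hot target per pattern
net = newff(minmax(P), [5 4], {'tansig','purelin'}, 'trainlm');
net = train(net, P, T);                          % associate outputs with inputs

Pnoisy = [0.9; 0.1];                             % unseen pattern close to [1;0]
Y = sim(net, Pnoisy)                             % largest output marks the trained
                                                 % pattern least altered from the input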

Types of Neural Network:

Neural networks can be classified on the basis of:

Feedforward / Feedback

Global /Local Learning

Supervised / Unsupervised learning

Feedforward networks:

Feed-forward ANNs (figure 1) allow signals to travel one way only: from input to output. There is no feedback (no loops), i.e. the output of any layer does not affect that same layer. They are widely used in pattern recognition. This kind of organisation is also referred to as bottom-up or top-down.

[Figure: Simple Feedforward Network]

Feedback Networks

These networks allow signals to travel in both directions by introducing loops into the network. They are very powerful and can become extremely complicated. They have connections from outputs back to inputs, so a neuron can use its own output as an input.

Global/Local Learning

In global learning, an adjustment to a single weight affects all outputs of the network; the backpropagation network is an example.

In local learning, a weight adjustment affects only one or a few outputs.

Global learning is used when the training examples are available in advance.

Local learning is useful for real-time applications.

Supervised/Unsupervised:

Supervised learning requires the learning process to be supervised: training examples, with their target outputs, are supplied.

Unsupervised learning (competitive, self-organising): neurons compete to respond to input patterns, and no explicit guidance is provided from the external environment.

Perceptron:

The perceptron was introduced in the late 1950s by Frank Rosenblatt, who coined the term and used it in an attempt to understand human memory, learning, and cognitive processes. With some additional, fixed pre-processing, the perceptron becomes an MCP (McCulloch-Pitts) model, a neuron with weighted inputs. Components labelled A1, A2, ..., Aj, ..., Ap are known as association units; their function is to extract specific, localised features from the input images. Perceptrons imitate the essential idea behind the mammalian visual system. They were mainly used in pattern recognition, even though their capabilities extended much further. In 1969 Minsky and Papert published a book describing the limitations of single-layer perceptrons; it showed precisely that a single-layer perceptron cannot perform certain pattern recognition operations, such as determining whether a shape is connected. In the 1980s it was realised that, given suitable training, multilayer perceptrons can perform these operations.

Model of a Perceptron:

            +---------------+
A[1] o------|W[1]   T       |
A[2] o------|W[2]           |     +----------------------+
  .         |  .            |  S  |      Squarewave      |------> Output
  .         |  .    Sum     |-----|      Generator       |
  .         |  .            |     +----------------------+
A[n] o------|W[n]           |
            +---------------+

S = T + Sum( W[i]*A[i] ) as i goes from 1 -> n

Output = 1 if S > 0; else -1

where A[i] are the perceptron's inputs,

W[i] are the weights applied to the corresponding inputs, and

T is the threshold.

The squarewave generator simply converts S into a positive or negative output. Whenever an input is presented to the perceptron, we obtain a positive or negative output depending on the inputs, the weights, and the threshold.
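A minimal MATLAB sketch of this output rule (the inputs, weights, and threshold are arbitrary illustrative values):

A = [1 0 1];                 % inputs A[1]..A[n]
W = [0.5 -0.4 0.3];          % weights applied to the corresponding inputs
T = -0.6;                    % threshold

S = T + sum(W .* A);         % S = T + Sum( W[i]*A[i] )
if S > 0
    output = 1;              % squarewave generator: positive output
else
    output = -1;             % negative output
end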

Perceptrons are of two types:

Single layer network

Multilayer network

Single layer network:

A single-layer perceptron network is composed of one or more artificial neurons in parallel. Each neuron provides one network output and is usually connected to all of the external inputs.

Multilayer network:

A multilayer perceptron is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs. Simple perceptrons are limited in their representational capabilities; for example, a single layer cannot represent the XOR function. However, XOR can easily be represented by a multilayer perceptron, since it can be written in terms of the basic functions AND, OR, and NOT.
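As a minimal MATLAB sketch (with hand-picked weights and thresholds, for illustration only), XOR can be built from threshold units: one hidden unit computes OR, another computes NOT-AND, and the output unit ANDs them together:

step = @(s) double(s > 0);                        % hard threshold unit

% h1 = OR(a,b), h2 = NAND(a,b), output = AND(h1,h2) = XOR(a,b)
xor_mlp = @(a,b) step( step(a + b - 0.5) + step(1.5 - a - b) - 1.5 );

for a = 0:1
    for b = 0:1
        fprintf('%d XOR %d = %d\n', a, b, xor_mlp(a,b));
    end
end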

Characteristics of Neural Networks:

Autonomy

Modelling capabilities

Understandability

Adaptability

Robustness

Complexity

Back Propagation:

Backpropagation is a general method for training artificial neural networks to perform a task. It is a supervised learning method derived from the delta rule, and it requires a teacher that can compute the desired output for any given input. It is most useful for networks with no looping connections, of which feed-forward networks are the standard example. The term is an abbreviation of "backward propagation of errors", and the method requires that the activation function used by the artificial neurons be differentiable.

As the algorithm's name implies, the errors propagate backwards from the output nodes to the internal nodes. Backpropagation is therefore used to compute the gradient of the network's error with respect to its modifiable weights. This gradient is almost always used in a simple stochastic gradient descent algorithm to find weights that minimise the error. Backpropagation usually permits quick convergence to acceptable local minima of the error in the kinds of network to which it is suited.

Training:

By changing the weights on the connections between layers, the perceptron's output can be "trained" to match a desired output. Training is accomplished by sending a given set of inputs through the network and comparing the results with a set of target outputs. If there is a difference between the actual and the target outputs, the weights on the adaptive layer are adjusted to produce a set of outputs closer to the target values.

Delta rule:

The output vector is compared with the desired response. If the difference is zero, no adjustments are made; otherwise the weights are adjusted to reduce the difference.

The correction or change in weight from layer i to j is given by:

Δwij = η · oi · δj, where

η is the learning rate,

oi represents the activation of layer i and

δj is the error, or difference between expected and actual output

All of the network's weights can be modified by these amounts at each iteration

according to:

wij ← wij + Δwij

Patterns are repeatedly presented to the network, and the errors are used to adjust the pattern of connectivity so that the network's responses become more accurate.

The generalised delta rule requires that the activation function be monotonic (and differentiable). This is why most ANNs are based on a sigmoid-shaped activation function.
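A minimal MATLAB sketch of one delta-rule iteration for a single layer (the learning rate, activations, targets, and starting weights are arbitrary illustrative values):

eta   = 0.1;                    % learning rate
oi    = [0.2; 0.7];             % activations of layer i
y     = [0.6; 0.3];             % actual outputs of layer j
t     = [1.0; 0.0];             % expected (target) outputs
delta = t - y;                  % error terms for layer j

dW = eta * oi * delta';         % weight corrections dWij = eta * oi * deltaj
W  = zeros(2,2);                % current weights (zeros, for the example)
W  = W + dW;                    % apply the update: wij <- wij + dWij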

Limitation of Delta Rule:

It operates only on simple single-layer input-output networks.

If there are hidden layers, however, the delta rule does not guarantee convergence, as local minima may exist.

Back Propagation Example:

Backpropagation's most commonly used activation function is the sigmoid:

oj = 1/(1 + e^(−netj))

netj = ∑i wij·oi + θj

where θj is the weight from a bias unit (= 1, always on)

Some significant properties:

If netj = 0, then oj = 0.5 ("undecided")

netj must approach ±∞ to raise or lower oj to 1 or 0
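These properties can be checked numerically with a short MATLAB sketch (the sample net input 0.3 is arbitrary); the last line shows the convenient derivative o·(1 − o) that backpropagation exploits:

sigmoid = @(net) 1 ./ (1 + exp(-net));

sigmoid(0)                        % = 0.5, the "undecided" value
sigmoid([-10 10])                 % only large |net| pushes oj towards 0 or 1

o       = sigmoid(0.3);
do_dnet = o * (1 - o);            % slope of the sigmoid at netj = 0.3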

Requirement Analysis:

We are using MATLAB, an interactive system whose basic data element is an array that does not require dimensioning. The language makes it possible to solve many technical problems, particularly those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar non-interactive language such as C.

The MATLAB functions used in this work are as follows:

1. Matlab 'Function' reference

Declares M-file function

Syntax:

function [out1, out2, ...] = funname(in1, in2, ...)

function [out1, out2, ...] = funname(in1, in2, ...) defines a function funname that accepts inputs in1, in2, etc., and returns outputs out1, out2, etc.
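For example, a small illustrative M-file function (its name and behaviour are invented for demonstration):

function [s, d] = addsub(a, b)
% ADDSUB returns the sum and the difference of its two inputs.
s = a + b;
d = a - b;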

2. Newdata Function:

For many applications, the most suitable data structure is a structure array. However, if you routinely access only the first two fields of information, then a cell array can be more convenient for indexing purposes.

This example shows how to access the first and second elements of the cell array TEST:

[newdata, name] = deal(TEST{1:2})

3. max function:

Largest elements in array

Syntax:

C = max(A)

C = max(A,B)

C = max(A,[],dim)

[C,I] = max(...)

Description:

C = max(A) returns the largest elements along different dimensions of an array.

If A is a vector, max(A) returns the largest element in A.

If A is a matrix, max(A) treats the columns of A as vectors, returning a row vector containing the maximum element from each column.

4. Sort function:

Sort array elements in ascending or descending order

Syntax:

B = sort(A)

B = sort(A,dim)

B = sort(..., mode)

[B,IX] = sort(...)

B = sort (A) sorts the elements along different dimensions of an array, and arranges those elements in ascending order.

5. For function:

Execute block of code specified number of times

Syntax:

for variable = expression

statements

end

Description:

The general format is

for variable = expression

statement

...

statement

end

6. Size Function:

Array dimensions

Syntax:

d = size(X)

[m,n] = size(X)

m = size(X,dim)

[d1,d2,d3,...,dn] = size(X)

7. Polyfit Function:

Polynomial curve fitting

Syntax:

p = polyfit(x,y,n)

[p,S] = polyfit(x,y,n)

[p,S,mu] = polyfit(x,y,n)
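For instance, fitting a straight line (degree 1) to roughly linear illustrative data, as the Preprocess function below does for each row of the bacterial data:

x = 1:5;
y = [2.1 4.2 5.9 8.1 9.8];       % illustrative, roughly linear data
p = polyfit(x, y, 1)             % p(1) is the slope, p(2) the intercept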

8. End Function:

Terminate block of code, or indicate last array index

Syntax:

while expression % (or if, for, or try)

statements

end

B = A(index:end,index)

9. Whos Function:

Lists the names and types of Simulink data logging objects contained in a Simulink.ModelDataLogs or Simulink.SubsysDataLogs object.

Syntax:

log.whos

tsarray.whos

log.whos('systems')

log.whos('all')

Design Analysis:

The aim of this assignment is to collect bacteria by charging the microelectrodes with suitable frequencies. In the given setup, water flows from the source, passes through valve 1 and valve 2 under signal control, and reaches the image analyser. The image analyser collects and holds the bacteria, pumps the purified water out through the pump, and releases the bacteria to be washed out through valve 3 and valve 4. The bacterial data comprise 37 rows and 61 columns for each of the five input data sets sa0907, sa1704, ec1404, ec1104, and sm1310. Plotting the given data as normal and mesh graphs yields a raw, sloped curve, so six new preprocessed data sets are created to obtain an approximately flat curve; the 61st column is deleted first to obtain the exact flat curve without bacteria. The program is as follows:

Implementation and Testing:

function newdata = Preprocess(data)
% Remove the linear trend from each row of the data matrix,
% so that the plotted curves come out approximately flat.
newdata = data;
[nrows, ncols] = size(newdata);
Grad = zeros(nrows, 2);                        % fitted line coefficients per row
for i = 1:nrows
    % Fit a first-order polynomial (straight line) to row i
    Grad(i,:) = polyfit(1:ncols, newdata(i,:), 1);
end
for i = 1:nrows
    for j = 1:ncols
        % Subtract the fitted slope component from each element
        newdata(i,j) = newdata(i,j) - Grad(i,1)*j;
    end
end
end

mesh(sa0907);                        % surface plot of the raw sa0907 data
figure, plot(Preprocess(sa0907))     % detrended sa0907 data

figure, plot(sm1310)
figure, plot(ec1104)
figure, plot(ec1404)
figure, plot(sa1704)

load bacteria;                                 % load the preprocessed bacterial data sets

% Use the first 40 columns of each data set as input patterns
patterns = [newsa1704(:,1:40) newec1104(:,1:40) newec1404(:,1:40) newsa0907(:,1:40) newsm1310(:,1:40)];
patterns = premnmx(patterns);                  % normalise the patterns to [-1, 1]
%p = patterns';

% Binary class codes, padded with zeros to match the 37-row output layer
A = [0;0;0;zeros(34,1)]; B = [0;0;1;zeros(34,1)]; C = [0;1;0;zeros(34,1)]; D = [0;1;1;zeros(34,1)]; E = [1;1;1;zeros(34,1)];
%A = [0;0;0]; B = [0;0;1]; C = [0;1;0]; D = [0;1;1]; E = [1;1;1];
%targets = [A B C D E]

targets = [repmat(A,[1 40]) repmat(B,[1 40]) repmat(C,[1 40]) repmat(D,[1 40]) repmat(E,[1 40])];
%t = targets';

% Split the data: 20% for validation, 20% for testing, the rest for training
[trainV,valV,testV] = dividevec(patterns,targets,0.20,0.20);
%net = newff(minmax(p),[10 size(t,1)]);
%net = train(net,trainV.P,trainV.T,[],[],valV,testV);

% Create a two-layer network: 5 hidden units, 37 outputs, Levenberg-Marquardt training
mynnet = newff(minmax(premnmx(patterns)),[5 37],{'purelin','logsig'},'trainlm');
mynnet = init(mynnet);                         % (re)initialise weights and biases
mynnet.trainParam.epochs = 1000000;
mynnet.trainParam.goal = 0.01;
%net1 = train(mynnet,patterns,targets)

net1 = train(mynnet,trainV.P,trainV.T,[],[],valV,testV)
%unseenpattern = [0 0 0.9 0.9; 0 0.9 0 0.9];
%figure

result = sim(net1,patterns)                    % simulate the trained network
%result2 = sim(net2,patterns)

figure
plot(patterns,targets,'o',patterns,result,'x') % compare targets with network output

The implementation and evaluation of the neural network for classifying the bacterial data can also be described in terms of the following functions.

Newff:

This command creates the network object and also sets the initial weights and biases of the network; as a result, the network is ready for training. There are times when we may want to reinitialise the weights, or to perform a custom initialisation. The following section describes the details of the initialisation process.

net = newff([input vector], [size], {'tansig','purelin'}, 'traingd');

Initializing weights (init)

The weights and biases must be initialised before a feedforward network is trained. The newff command sets them automatically, but we can reinitialise them using the init command. It takes a network object as input and returns the network object with all weights and biases reset. A network is (re)initialised as follows:

net = init(net);

Simulation (sim):

The function sim simulates a network. sim takes the network input p and the network object net, and returns the network outputs. We can use sim to simulate the network for a single input vector:

A = sim(net, p)

sim can also compute the outputs for a concurrent set of input vectors; in this batch-mode form of simulation, all the input vectors are placed in one matrix. This is more efficient than presenting the vectors one at a time.
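For example, three input vectors can be presented at once as the columns of a matrix (the values are illustrative):

p = [1 3 2; 2 4 1];              % three 2-element input vectors as columns
A = sim(net, p)                  % one output column per input vector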

Conclusion:

The ease of use of neural networks makes them an excellent option for solving real-world pattern recognition problems in which ideal data is not always available. Training a network can be a difficult process: tasks such as speech transcription and natural language processing need huge quantities of training data, and even once a system is implemented it may not converge regardless of the amount of training, since the weights do not always settle at useful values. In these cases, building neural networks becomes more of an art than a science. Nevertheless, their power as a computational tool and their sophisticated mathematical machinery position them to solve a broad variety of problems.
