Neural Networks for SP Routing in WSNs


CHAPTER 6

6.1 INTRODUCTION

From the literature, most researchers have concentrated only on recurrent Hopfield-type NNs (HNNs) for routing in networks; the use of feedforward NNs for routing problems is very limited, almost nil. Therefore, this chapter discusses the scope of feedforward NNs for routing in WSNs. In WSNs the execution time of an algorithm is also a critical factor in saving the energy of the sensor nodes. GAs and NNs are generally iteration-based algorithms and therefore consume a considerable amount of execution time and battery energy of the nodes. Conventional recurrent and feedforward NNs are tuning based: the neuron weights and biases have to be adjusted based on the input and output values during training as well as testing. The focus here is on finding a feedforward NN algorithm for SP routing in WSNs with fast convergence. The Extreme Learning Machine (ELM) is one such new algorithm for Single hidden Layer Feedforward Networks (SLFNs); the time-consuming tuning concept is not used in ELM. An ELM-based SLFN for SP routing in WSNs is proposed.

6.2 FEEDFORWARD NETWORKS FOR ROUTING IN WSNs

The problem can be formulated as explained in the previous chapter. Feedforward NNs have been used extensively in many fields due to their ability (1) to approximate complex nonlinear mappings directly from the input samples, and (2) to provide models for a large class of natural and artificial phenomena that are difficult to handle using classical parametric techniques. On the other hand, fast learning algorithms for these networks are lacking; the traditional learning algorithms are usually far slower than required (Huang et al, 2006).

6.2.1 Learning Methods

Many learning methods, mainly gradient based, have been developed over the past thirty years; Back-Propagation (BP) and its variants are the most popular. The learning speed of feedforward neural networks is in general far slower than required, for two key reasons: 1) slow gradient-based learning algorithms are used to train the networks, and 2) all the parameters of the networks are tuned iteratively. These algorithms therefore consume more time.

6.2.1.1 Training algorithm for back propagation NNs

The training algorithm of back propagation involves four stages (Sivanandam et al, 2010), viz.,

Initialization of weights

Feedforward

Back propagation of errors

Updating of weights and biases.

During the first stage, the weights are initialized with small random values. During the feedforward stage, each input unit xi receives an input signal and transmits it to each of the hidden units z1, ..., zp; each hidden unit then computes its activation function and sends its signal to each output unit. Each output unit computes its activation to form the net response for the given input pattern.

During back propagation of errors, each output unit compares its computed activation yk with its target value tk to determine the associated error for that pattern at that unit. Based on this error, the factor δk (k = 1, 2, ..., m) is computed and is used to distribute the error at the output unit back to all units in the previous layer. Similarly, δj (j = 1, 2, ..., p) is computed for each hidden unit zj. During the final stage, the weights and biases are updated using the δ factors and the activations. Usually, different learning algorithms are used in different SLFN architectures. Some of the limitations of existing SLFN training methods are: 1) some parameters have to be tuned manually, 2) local minima, and 3) time-consuming training.
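As a concrete illustration of these four stages, a minimal sketch of one BP update step for a single-hidden-layer network is given below (NumPy, sigmoid activations, squared-error loss; the layer sizes, learning rate and data are hypothetical and chosen only to make the example runnable):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Stage 1: initialization of weights and biases with small random values
n, p, m = 4, 5, 1                                   # input, hidden, output sizes (hypothetical)
V, b_v = rng.normal(0, 0.1, (n, p)), np.zeros(p)    # input  -> hidden weights and biases
W, b_w = rng.normal(0, 0.1, (p, m)), np.zeros(m)    # hidden -> output weights and biases
eta = 0.1                                           # learning rate (hypothetical)

x = rng.random(n)                                   # one input pattern
t = np.array([1.0])                                 # its target

# Stage 2: feedforward
z = sigmoid(x @ V + b_v)                            # hidden activations z_1..z_p
y = sigmoid(z @ W + b_w)                            # output activations y_1..y_m

# Stage 3: back propagation of errors
delta_k = (t - y) * y * (1 - y)                     # error factor at the output units
delta_j = (delta_k @ W.T) * z * (1 - z)             # error factor at the hidden units

# Stage 4: updating of weights and biases
W += eta * np.outer(z, delta_k); b_w += eta * delta_k
V += eta * np.outer(x, delta_j); b_v += eta * delta_j
```

Every weight of every layer is adjusted iteratively in this manner, epoch after epoch, which is precisely the tuning cost that ELM avoids.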

6.2.2 Extreme Learning Machines

Traditionally, all the parameters of a feedforward network need to be tuned, so there is a dependency between the parameters (weights and biases) of different layers. For the past decades, gradient descent-based methods have mainly been used in the various learning algorithms of feedforward NNs. However, gradient descent-based learning methods are generally very slow due to improper learning steps, or may easily converge to local minima, and many iterative learning steps may be required to obtain good learning performance (Huang et al, 2006).

It has been shown that SLFNs with N hidden nodes and randomly chosen input weights and hidden layer biases (such hidden nodes can thus be called random hidden nodes) can exactly learn N distinct observations. Contrary to the common belief, and to most practical implementations in which all the parameters of the feedforward network are tuned, the input weights and hidden layer biases need not be adjusted in applications. The input weights and hidden layer biases of SLFNs can be randomly assigned if the activation functions in the hidden layer are infinitely differentiable. Once the input weights and hidden layer biases are chosen randomly, the SLFN can simply be considered a linear system, and the output weights (linking the hidden layer to the output layer) can be analytically determined through a simple generalized inverse operation on the hidden layer output matrix.

Based on this concept, Huang et al (2006) proposed a simple learning algorithm for SLFNs called the extreme learning machine, whose learning speed can be much faster than that of traditional feedforward network learning algorithms such as back-propagation while obtaining better generalization performance. Figure 6.1 shows the architecture of the feedforward network. xj represents the input provided to input nodes 1 to n. The hidden layer contains L nodes; ai represents the input weight vector and bi the bias of hidden node i. β1 to βL are the output weights of the hidden nodes, and oj is the output of the NN.

Given a type of piecewise continuous computational hidden nodes, if SLFNs can work as universal approximators with adjustable hidden parameters from a function approximation point of view, then the hidden node parameters of SLFNs can actually be randomly generated according to any continuous sampling distribution. All the hidden node weights are independent of the target functions and the training datasets. All the parameters of an ELM can be analytically determined instead of being tuned; ELM is a simple tuning-free algorithm that can generate the hidden node weights without seeing the training data.

[Figure: an SLFN with input nodes 1 to n receiving input xj, hidden nodes 1 to L with parameters (ai, bi), output weights β1, ..., βi, ..., βL, and a single output node producing oj.]

Figure 6.1 Feedforward Network Structure

6.2.3 Mathematical Model

Huang et al (2006) provided a mathematical model for ELM as follows. Given any bounded nonconstant piecewise continuous function g (integrable for Radial Basis Function (RBF) nodes), for any continuous target function f and any randomly generated hidden node parameter sequence {(ai, bi)},

lim (L→∞) ‖ f(x) − Σ (i=1..L) βi g(ai, bi, x) ‖ = 0        (6.1)

holds with probability one if βi is chosen to minimize

‖ f(x) − Σ (j=1..i) βj g(aj, bj, x) ‖,  i = 1, ..., L.        (6.2)

For N arbitrary distinct samples (xi, ti) ∈ R^n × R^m, standard SLFNs with L hidden nodes and activation function g(x) are mathematically modeled as

Σ (i=1..L) βi g(ai · xj + bi) = oj,  j = 1, ..., N        (6.3)

where, ai : the input weight vector connecting the ith hidden node and the input nodes, or the center of the ith hidden node.

βi : the weight vector connecting the ith hidden node and the output node.

bi : the threshold or impact factor of the ith hidden node.

g(ai · xj + bi) or g(bi ‖xj − ai‖) : the hidden node output function (additive or RBF hidden node, respectively).

The N equations of (6.3) are equivalent to Hβ = T, where

H = [ g(a1 · x1 + b1)  ...  g(aL · x1 + bL)
      ...
      g(a1 · xN + b1)  ...  g(aL · xN + bL) ] (N × L),

β = [β1^T, ..., βL^T]^T (L × m)  and  T = [t1^T, ..., tN^T]^T (N × m).        (6.4)

H is called the hidden layer output matrix of the neural network.

From the above mathematical formulation, a three-step learning model can be used as follows:

Given a training set {(xi, ti) | xi ∈ R^n, ti ∈ R^m, i = 1, ..., N}, activation function g, and the number of hidden nodes L,

Assign random input weight vectors ai and hidden node biases bi, i = 1, ..., L.

Calculate the hidden layer output matrix H.

Calculate the output weight β: β = H† T, where T = [t1, ..., tN]^T.

where H† is the Moore-Penrose generalized inverse of the hidden layer output matrix H. Given any nonconstant piecewise continuous function g, if a continuous target function f can be approximated by SLFNs with adjustable hidden nodes, then the hidden node parameters of such SLFNs need not be tuned. Instead, all these hidden node parameters can be randomly generated without knowledge of the training data.
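A minimal sketch of these three steps is given below (NumPy, sigmoid hidden nodes). The toy regression dataset, network size and random seed are hypothetical and serve only to illustrate the mechanics of ELM training, not the routing application itself:

```python
import numpy as np

def elm_train(X, T, L, rng=np.random.default_rng(0)):
    """Train an ELM-based SLFN: random hidden parameters, analytic output weights."""
    n = X.shape[1]
    A = rng.uniform(-1.0, 1.0, (n, L))       # step 1: random input weights a_i
    b = rng.uniform(-1.0, 1.0, L)            #         random hidden biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))   # step 2: hidden layer output matrix H (N x L)
    beta = np.linalg.pinv(H) @ T             # step 3: output weights via Moore-Penrose inverse
    return A, b, beta

def elm_predict(X, A, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))
    return H @ beta

# Hypothetical toy regression, just to exercise the three steps
rng = np.random.default_rng(0)
X = rng.random((200, 2))                      # N = 200 samples, n = 2 inputs in [0, 1]
T = np.sin(2 * np.pi * X[:, :1]) + X[:, 1:]   # targets
A, b, beta = elm_train(X, T, L=20)
print(np.mean((elm_predict(X, A, b, beta) - T) ** 2))   # training MSE
```

Note that no iteration over the training data is involved: the only non-trivial cost is the single pseudoinverse computation, which is why ELM training is so much faster than gradient-based tuning.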

6.3 SIMULATION AND RESULTS

The link list of the example network in figure 6.2 (each line: source node, destination node, link cost):

0 1 5
0 2 7
0 3 9
1 2 10
1 5 7
2 4 13
2 5 8
2 6 7
3 4 12
4 6 8
5 7 6
5 8 15
6 7 9
7 8 3


Figure 6.2 Example Network with 9 Nodes

After the deployment of a large number of sensors in the field, they self-organize and collect their location details. To group the sensors into small clusters, the k-means clustering algorithm can be used. After clustering, the SP route has to be calculated using the proposed NN. The network topology is formed based on the location details of the sensors within each cluster. For simulation purposes, cluster sizes of 10, 15, 20, 25, 30, 35, 40, 45 and 50 nodes are assumed. All simulations were run in MATLAB 7.0 on a Pentium IV machine. The sensor network topologies are created randomly, the Cartesian distance between two nodes is taken as the link weight (cost), and the source and destination nodes are also selected randomly. A total of 100 random network topologies in each category, with varying size (10 to 50 nodes) and randomly assigned (normalized) link costs, were investigated. The network field size is assumed to be 100x100 units and the maximum segmentation size 40 units. The link cost matrix and link state matrix are used as inputs. The activation function used in the proposed algorithm is the simple sigmoidal function g(x) = 1/(1+exp(-x)). In the experiments, all inputs were normalized into the range [0, 1] and the outputs (targets) into [-1, 1]. As seen from the ELM algorithm, the learning time of ELM is mainly spent on calculating the Moore-Penrose generalized inverse H† of the hidden layer output matrix H. Each line in the WSN input defines a bi-directional link between two nodes and its respective cost. For example:

0 9 4

defines the bi-directional link between nodes 0 and 9 with cost 4. The outputs of the neural network are the number of NN iterations and the path cost. A simple example network with nine nodes is given in figure 6.2 together with its links and costs. Figure 6.3 shows the calculated shortest path between nodes 0 and 8, with a path cost of 21.
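As an illustration, a minimal sketch of how such a link list can be parsed into a symmetric cost matrix, together with an independent check of the example path cost, is given below (Python; Dijkstra's algorithm is used here only as a classical reference for the shortest-path cost, not as part of the proposed NN):

```python
import heapq
import numpy as np

def load_cost_matrix(lines, n_nodes):
    """Parse 'src dst cost' lines into a symmetric link cost matrix (inf = no link)."""
    C = np.full((n_nodes, n_nodes), np.inf)
    np.fill_diagonal(C, 0.0)
    for line in lines:
        i, j, w = line.split()
        i, j, w = int(i), int(j), float(w)
        C[i, j] = C[j, i] = w              # bi-directional link
    return C

def dijkstra(C, src, dst):
    """Reference shortest-path cost and node sequence from src to dst."""
    n = C.shape[0]
    dist, prev = [np.inf] * n, [None] * n
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v in range(n):
            if np.isfinite(C[u, v]) and d + C[u, v] < dist[v]:
                dist[v] = d + C[u, v]
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return dist[dst], path[::-1]

# Link list of the 9-node example network (figure 6.2)
links = ["0 1 5", "0 2 7", "0 3 9", "1 2 10", "1 5 7", "2 4 13", "2 5 8",
         "2 6 7", "3 4 12", "4 6 8", "5 7 6", "5 8 15", "6 7 9", "7 8 3"]
C = load_cost_matrix(links, 9)
print(dijkstra(C, 0, 8))   # -> (21.0, [0, 1, 5, 7, 8]), matching figure 6.3
```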


Figure 6.3 Example Graph with Shortest Path

Figure 6.4 A Randomly Created Network with Shortest Path (50 nodes)

The performance of the proposed algorithm is compared with the Hopfield, Ali & Kamoun, and Park & Choi algorithms through computer simulations. The results show that the proposed algorithm exhibits the fastest rate of convergence as well as a solution quality on par with the other algorithms. Table 6.1 compares the algorithms on convergence time. The proposed algorithm converges to a stable state in 0.1240 seconds with about 400 iterations for 20 nodes, and in 0.1457 seconds for 30 nodes, a large improvement over the other algorithms; the time required is much lower than that of the other NNs. In addition, the proposed algorithm retains its robustness amid changing network topologies. Table 6.2 shows that the proposed algorithm performs almost on par with the other algorithms, although its success rate is slightly lower in all cases. Even in non-successful convergences the algorithm produces valid sub-optimal paths, which is an advantage in WSNs for distributing energy consumption, since the same nodes will not be used repeatedly. The shortest path obtained by the algorithm for a randomly generated network with 50 nodes is given in figure 6.4; the shortest distance between nodes 1 and 50 is 96.3284 units.

Table 6.1 Performance Comparison of Algorithms - Convergence Time (sec.)

Algorithm        10 nodes  15 nodes  20 nodes  25 nodes  30 nodes  35 nodes  40 nodes  45 nodes  50 nodes
Proposed SLFN    0.0617    0.0622    0.1240    0.1284    0.1457    0.1862    0.1937    0.2245    0.2637
Hopfield         0.1029    0.228     0.4739    0.6012    0.7306    0.8226    0.885     0.9236    1.1231
Park & Choi      0.099     0.2824    0.4241    0.5347    0.6178    0.7431    0.8218    0.9012    1.0288
Ali & Kamoun     0.0604    0.1863    0.3954    0.4832    0.6172    0.6985    0.8122    0.8997    0.9876

Table 6.2 Performance Comparison of Algorithms - Success Rate (%)

Algorithm        10 nodes  15 nodes  20 nodes  25 nodes  30 nodes  35 nodes  40 nodes  45 nodes  50 nodes
Proposed SLFN    84.7      83.49     79.54     77.58     74.93     72.56     71.86     70.78     70.12
Hopfield         89.27     88.35     84.26     84.06     81.51     80.26     80.11     78.34     78.17
Park & Choi      92.32     91.87     88.47     85.26     83.19     82.87     81.33     80.16     80.01
Ali & Kamoun     98.54     92.39     89.43     89.28     85.71     84.29     83.98     82.53     81.37

6.4 CONCLUSION

A neural network based near-optimal routing algorithm employing a single hidden layer feedforward neural network with highly efficient computation has been proposed in this chapter. It is suitable for multi-hop wireless sensor networks due to its speed. Compared with Hopfield and other NNs, ELM can be implemented easily, since there is no parameter to be tuned except the insensitive parameter L (the number of hidden nodes). The accuracy of ELM for the SPP is comparable to that of Hopfield and other NNs, while convergence to a stable state is much faster, and an optimal route is obtained in most cases. Simulation studies show that the proposed routing algorithm is insensitive to variations in network topologies. Due to its speed, this algorithm is suitable for routing in WSNs.
