# Signal Processing Based Communication Computer Science Essay


In any system that relies on signal-processing-based communication between two entities, some data loss is bound to occur. To overcome such problems, communication systems use algorithms that predict future values of a signal from its past samples; these are called prediction algorithms, and when the prediction is carried out outside the current frame it is called external prediction. Similarly, diagnosing a system continuously while it is running is done by on-line tracking of the signal through internal prediction methods, which is essentially a system identification problem.

Within the scope of this study, several methods were used for both system identification and external prediction of the signal: the Normal Equation, the Levinson-Durbin method, and the Leroux-Gueguen method. Beyond studying these methods, the project also shows how various system parameters are interrelated, for example choosing the best linear predictive filter (LPF) order, which here coincides with the frame size, based on the error obtained for each frame size. We also determine the prediction gain of the signal, which helps in assessing the performance of the system. Throughout the project, the reported values of both the error and the prediction gain are means. The overall aim is to analyse all of the above approaches for both cases and to find the best linear predictive filter order.

Keywords: Linear Predictive Filter Order, System Identification, External Prediction, Levinson-Durbin Method, Leroux-Gueguen Method, Normal Equation

## AIM OF THE PROJECT

The main objective of the project is to study various linear prediction methods and to analyse them for both internal and external prediction of the signal. A further objective is to determine how the various signal-processing parameters are interrelated; the parameters studied are the error, the linear predictive filter order (frame size), the prediction gain, and the computational delay. The main limitation is the constraint that, for a given frame size "n", the number of samples that can be predicted is at most half of it.

## CHAPTER 1

## INTRODUCTION


Linear prediction plays a vital role in modern speech coding algorithms. The fundamental idea of linear prediction is to estimate samples as a linear combination of past values of the input signal within a signal frame; the weights used to compute the linear combination are found by minimizing the mean square prediction error. Linear prediction analysis is thus an estimation procedure that calculates the autoregressive parameters from sample values of the signal. In doing so we arrive at two main prediction concepts: internal prediction, which is essentially a system identification problem, and external prediction, where the actual prediction of future values takes place.

Consider first the system identification problem. In this type of estimation we always "predict" the same signal that is given to us: the task is mainly to estimate the coefficients corresponding to the given sample values, and all the data within the frame are available, since no data loss is assumed inside the frame. In the second case we predict future values beyond the frame, where data are not available because of the data-loss condition. The main differences between the two cases are when the prediction of the samples is done and in which direction: in internal prediction the window moves backwards, whereas in external prediction it moves forwards. A further condition for successful prediction is that the number of predicted samples "n" must always be less than or equal to half of the linear predictive filter order.

## WHAT IS IT USED FOR

The applications of linear prediction span a vast range of fields, including control engineering, communication, digital signal processing, and many others.

Linear prediction involves the estimation of data samples either within the given data frame, where the data are available, or outside the data frame, where they are not.

The main applications of linear prediction are as follows:

- Predictive coding algorithms are implemented in Kalman filters used as predictive estimators.
- Predictive algorithms are implemented in low-bit-rate speech encoding for data compression.
- These algorithms are used in stock market prediction and analysis, where the main goal is to predict the future status of a stock based on its previous values.
- Formant estimation is a natural application of linear prediction because of its tendency to model the peaks of the spectral envelope.
- Fundamental frequency estimation can be carried out using the linear prediction coefficients; this is done with the help of the same autocorrelation function used for finding the coefficients.

## MATLAB

## GENERAL INFORMATION

Linear prediction is the concept of finding the coefficients of an autoregressive model from the input signal itself, using the methods described above, and then recreating the signal from these coefficients; the recreated signal is called the predicted signal. The predicted signal is computed as

S^[n] = Σ_{i=1}^{M} a_i · S[n−i]

where "M" and S^[n] represent the linear predictive order and the predicted signal respectively, and the a_i are the linear predictive coefficients to be determined. To find these coefficients we derive the mathematics from the condition that the mean square error should be as small as possible. The error signal is

e[n] = S[n] − S^[n]

and the figure below describes the whole picture.

FIGURE 1: LINEAR PREDICTION
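As a concrete illustration of the predictor and error equations, here is a small Python/NumPy sketch (the coefficient and sample values are invented for the example; this is not code from the project):

```python
import numpy as np

# Predictor equation: S^[n] = sum_{i=1..M} a_i * S[n-i]
def predict_sample(s, n, a):
    """Predict s[n] as a linear combination of the previous M = len(a) samples."""
    return sum(a[i] * s[n - 1 - i] for i in range(len(a)))

s = np.array([1.0, 0.9, 0.7, 0.4, 0.1])   # a short example signal
a = [0.5, 0.3, 0.2]                        # hypothetical LP coefficients (M = 3)
s_hat = predict_sample(s, 4, a)            # predict s[4] from s[3], s[2], s[1]
e = s[4] - s_hat                           # error sample e[n] = s[n] - s_hat[n]
```

In a real run the coefficients would of course come from the minimisation described next, not be chosen by hand.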

## NOMENCLATURE

LPF : Linear predictive filter order

S[n] : Original input signal

S^[n] : Predicted signal samples

ai : Linear predictive coefficients

M : Frame size

PG : Prediction gain

e[n] : Error signal samples

R[n] : Autocorrelation array

Rs : Autocorrelation matrix

rs : Right hand side autocorrelation array

Apart from the nomenclature above, various auxiliary variables such as "k" (the reflection coefficients) and ε (the Leroux-Gueguen parameters) are created in order to find the LPF coefficients. These are not of any parametrical importance in themselves, but they are required for computational purposes.

## CHAPTER 2

## INTERNAL PREDICTION


Internal prediction is basically a system identification problem: instead of predicting unseen values, the input signal is tracked in real time. The LPC coefficients derived in internal prediction from the autocorrelation values estimated from a frame's data are applied to process that same frame's data. In the present implementation the concept of a "sliding window" is applied instead of the "stationary window" given in the book; the basic difference is that the sliding window changes the input data at every step by shifting the window by one sample. In internal prediction the window is shifted backwards: the last sample of the frame is calculated first, using all the data available in the frame, and the window then moves backwards, taking the previous sample into the window and releasing the sample just calculated. This is illustrated in the figure given below.

FIGURE: Showing the sliding window concept in internal prediction

## DERIVATION OF NORMAL EQUATION ALGORITHM

The algorithm used for the internal prediction of the signal is derived from the concept of minimising the mean square error between the original signal and the predicted signal, so we first need to know how the predicted signal is obtained from the coefficients:

S^[n] = Σ_{i=1}^{M} a_i · S[n−i]

From this equation the predicted signal of the internal prediction can be computed. We then minimise the mean square error between the predicted and original signals:

J = E{e²[n]} = E{ ( S[n] − Σ_{i=1}^{M} a_i · S[n−i] )² }

Here "J" is called the cost function; it is a second-order function of the LPCs, and the dependence of the cost function on the estimates a1, a2, ..., aM can be visualised as a bowl-shaped (M+1)-dimensional surface with M degrees of freedom. To minimise J we differentiate it with respect to each coefficient and equate the result to zero:

∂J/∂a_k = 0

Rearranging, and introducing the autocorrelation R[k] = E{ S[n] · S[n−k] }, this yields the normal equations:

Σ_{i=1}^{M} a_i · R[k−i] = R[k]

For k = 1, 2, 3, 4, ..., M

Of the equations above, the last one is used for finding the autocorrelation array, because it is easy to evaluate both in MATLAB and on practical values. The result is an autocorrelation array of size "M" (the frame size), from which an autocorrelation matrix is formed using the first (M−1) lags. The autocorrelation matrix has some unique properties, using which the matrix is formed.

Properties of the autocorrelation matrix:

- The elements on the principal diagonal are all the same.
- The autocorrelation matrix is symmetric.
- It is a square matrix whose inverse definitely exists (it is nonsingular).

The general structure of the autocorrelation matrix is shown below.

From equations 2, 3 and 4 we obtain the normal equations in matrix form:

Rs · a = rs

where Rs is the autocorrelation matrix, "a" (from the introduction) is the vector of linear predictive coefficients, and rs is the transpose of the array formed from the elements R[1] to R[M] of the autocorrelation array:

rs = [ R[1], R[2], ..., R[M] ]^T

From the above equations the coefficient values are obtained as

a = Rs⁻¹ · rs      (eq. 5)

This gives the coefficient values corresponding to the frame. In MATLAB, the methodology followed for finding the linear predictive coefficients is as follows.

## GENERAL MATLAB PROCEDURE

1. All the input samples required for the calculation are copied into an array; the number of samples taken is defined by the chosen frame size.
2. The array is flipped left to right, which makes the computation of the autocorrelation array easier.
3. The first sample is multiplied with itself, and then with each of the corresponding later samples, building up the autocorrelation array.
4. From the autocorrelation array, the autocorrelation matrix and the right-hand-side column vector (its transpose) are generated.
5. The coefficients are extracted using eq. 5.
6. With these coefficients the last sample in the prediction frame is predicted.
7. The above steps are repeated, sliding the window by one sample towards the left as explained in Figure 2, until all the samples of the frame have been predicted.

The above is the procedure implemented for the normal equation; the same procedure is followed for all the remaining internal prediction algorithms up to the step of finding the autocorrelation array.
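The steps above can be sketched in Python/NumPy as follows (a hedged illustration of the described procedure, not the project's MATLAB code; the function names, the Gaussian noise source, and the order M = 8 are our own choices for the example):

```python
import numpy as np

def autocorr(x, M):
    """Autocorrelation array R[0..M-1] of the frame x (unnormalised estimate)."""
    N = len(x)
    return np.array([np.dot(x[:N - k], x[k:]) for k in range(M)])

def lpc_normal_equation(x, M):
    """Solve Rs * a = rs for the LP coefficients a[1..M] of frame x."""
    R = autocorr(x, M + 1)
    # Rs is symmetric Toeplitz: element (i, j) is R[|i - j|]
    Rs = np.array([[R[abs(i - j)] for j in range(M)] for i in range(M)])
    rs = R[1:M + 1]                     # right-hand-side autocorrelation array
    return np.linalg.solve(Rs, rs)      # a = Rs^{-1} rs, without an explicit inverse

# Internal prediction of the last sample of a frame:
rng = np.random.default_rng(0)          # seeded, mirroring the "seed" step in the text
frame = rng.standard_normal(64)
M = 8                                   # illustrative predictor order
a = lpc_normal_equation(frame, M)
s_hat = np.dot(a, frame[-2:-2 - M:-1])  # S^[n] = sum_i a_i * S[n-i]
```

Sliding the window by one sample and repeating, as in step 7, would produce the full predicted frame.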

## NORMAL EQUATION

For internal prediction, the input original signal is white noise. The white noise is generated in the MATLAB code using the function "rand", and it needs to be seeded (using the "seed" command), since otherwise the signal would change at every execution of the program. Depending on the frame size, the required amount of data is copied from the white noise signal into a working array. The autocorrelation array is found from these input samples, and the corresponding linear predictive coefficients are then derived.

With these two signals, the original and the predicted, we determine the best frame size, the least error, and the prediction gain, which are all interrelated parameters of the signal.

MATLAB CODE FOR INTERNAL PREDICTION BY NORMAL EQUATION

The program performs the computation described above for internal prediction and, in addition, calculates and saves the data for the best linear predictive filter order ("linear predictive filter order" is the technical term used here for the frame size of the prediction).

The MATLAB code for internal prediction of the frame by the normal equation is given at this link:

Normal equation code for finding best LPF order

The MATLAB code for internal prediction run independently for a chosen frame size, together with the other related parameters, is given at this link:

Normal Equation code for independent frame size

The table below shows, for each frame size, the prediction gain, the error in the predicted signal, and the computation time.

| Frame size | Prediction gain | Error | Computation time |
| --- | --- | --- | --- |
| 25 | -3.5498 | 1.6392 | 0.0339 |
| 26 | -3.3834 | 0.9788 | 0.0149 |
| 27 | -5.2507 | 3.0297 | 0.0159 |
| 28 | -2.1058 | 0.4957 | 0.0172 |
| 29 | -10.2498 | 2.7267 | 0.0187 |
| 30 | 7.8378 | 0.4245 | 0.0198 |
| 31 | -5.9393 | 0.6672 | 0.0214 |
| 32 | -4.3842 | 0.7522 | 0.0231 |
| 33 | -5.9677 | 1.0041 | 0.0245 |
| 34 | -5.1695 | 0.5366 | 0.0259 |
| 35 | -4.3889 | 1.4076 | 0.0278 |
| 36 | -9.5406 | 1.9901 | 0.0299 |
| 37 | -3.6686 | 1.3371 | 0.0322 |
| 38 | -5.0363 | 1.6748 | 0.0346 |
| 39 | 5.3421 | 0.4676 | 0.0366 |
| 40 | -11.5633 | 1.1952 | 0.0392 |
| 41 | -6.9455 | 1.0228 | 0.0410 |
| 42 | -12.8013 | 2.9438 | 0.0434 |
| 43 | -6.0093 | 0.8420 | 0.0446 |
| 44 | -2.0225 | 0.5894 | 0.0487 |
| 45 | -3.2351 | 0.9342 | 0.0505 |

Table showing the output of the Normal Equation for internal prediction

From the computation table we can draw some conclusions about how the frame size influences the output of the linear prediction:

- As the LPF order increases, the computation time also increases gradually, which needs to be addressed.
- The error, computed as the absolute mean over the predicted and original signals, is always positive.
- The prediction gain gives a good picture of the performance of the system: the higher its value, the better.
- The best frame size found is 30; its parameter values are optimal compared to the other values.
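For reference, the two figures of merit reported in the tables can be computed as below (a Python sketch under the assumption, consistent with the text, that the error is the absolute mean of the difference signal and that the prediction gain is the usual signal-to-error power ratio in dB; the exact formulas of the project's MATLAB code are not reproduced in this document):

```python
import numpy as np

def mean_abs_error(s, s_hat):
    """Mean absolute error between original and predicted frames (always >= 0)."""
    return np.mean(np.abs(s - s_hat))

def prediction_gain_db(s, s_hat):
    """PG = 10*log10(power of s / power of e); higher means better prediction."""
    e = s - s_hat
    return 10.0 * np.log10(np.sum(s ** 2) / np.sum(e ** 2))

s = np.array([1.0, -2.0, 3.0, -4.0])       # toy original frame
s_hat = np.array([0.9, -1.8, 2.9, -3.7])   # toy predicted frame
err = mean_abs_error(s, s_hat)             # = (0.1 + 0.2 + 0.1 + 0.3) / 4
pg = prediction_gain_db(s, s_hat)          # positive: error power << signal power
```

A negative prediction gain, as in many rows of the table, means the error power exceeds the signal power.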

## INTERNAL PREDICTION BY LEVINSON-DURBIN METHOD

The main reason for moving from a method giving very pleasant results to one whose results are less so is computation time: the normal-equation method involves the inversion of a matrix, which is very time consuming. The Levinson-Durbin method instead exploits the property that the autocorrelation matrix is a Toeplitz matrix (all the elements of its principal diagonal are equal, and so are those of every diagonal parallel to the main one). A further reason for the shift from the normal equation to Levinson-Durbin is that matrix inversion is computationally demanding in hardware.

In the Levinson-Durbin method we follow the same procedure up to the finding of the autocorrelation array; thereafter the procedure differs in how the linear predictive coefficients are found. The method carries out a number of iterations equal to the frame size to find all the sample values in the frame of the predicted signal. The algorithm used for the computation is stated below.

ALGORITHM FOR LEVINSON-DURBIN METHOD

The same steps as in the normal equation method are applied up to the finding of the autocorrelation array, after which the solution procedure differs, so the algorithm below starts from the autocorrelation array. Since MATLAB does not support arrays with negative indices, or zero as an index, precautions were taken while implementing the algorithm by shifting the whole logic to positive indices. The variable "l" denotes the iteration number, with maximum value "M" (the size of the frame).

Initialization, first iteration (l = 0):

J_0 = R[0]

Recursion: for l = 1, 2, 3, ..., M

Step 1: compute the l-th RC (reflection coefficient):

k_l = ( R[l] − Σ_{i=1}^{l−1} a_i^(l−1) · R[l−i] ) / J_{l−1}

Step 2: calculate the LPCs for the l-th order predictor:

a_l^(l) = k_l ;   a_i^(l) = a_i^(l−1) − k_l · a_{l−i}^(l−1) , for i = 1, ..., l−1

Stop if l = M.

Step 3: compute the minimum mean square prediction error:

J_l = J_{l−1} · (1 − k_l²)
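The recursion above can be sketched in Python as follows (an illustration under the sign convention used in this document, where the predictor is S^[n] = Σ a_i·S[n−i]; the cross-check against a direct solve of the normal equations is our own addition, not part of the project's MATLAB code):

```python
import numpy as np

def levinson_durbin(R, M):
    """Levinson-Durbin recursion for the normal equations
    sum_i a_i R[|k-i|] = R[k]; R holds R[0..M], returns a[1..M]."""
    a = np.zeros(M)
    J = R[0]                                    # initialization: J_0 = R[0]
    for l in range(1, M + 1):
        # Step 1: l-th reflection coefficient
        k = (R[l] - np.dot(a[:l - 1], R[l - 1:0:-1])) / J
        # Step 2: LPC update for the l-th order predictor
        a_prev = a[:l - 1].copy()
        a[:l - 1] = a_prev - k * a_prev[::-1]
        a[l - 1] = k
        # Step 3: minimum mean square prediction error
        J = J * (1.0 - k * k)
    return a

# Cross-check against solving Rs * a = rs directly
rng = np.random.default_rng(1)
x = rng.standard_normal(128)
M = 6
R = np.array([np.dot(x[:len(x) - i], x[i:]) for i in range(M + 1)])
a_ld = levinson_durbin(R, M)
Rs = np.array([[R[abs(i - j)] for j in range(M)] for i in range(M)])
a_ne = np.linalg.solve(Rs, R[1:])
```

Both routes give the same coefficients; the recursion simply avoids the O(M³) matrix inversion.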

MATLAB CODE FOR INTERNAL PREDICTION BY LEVINSON-DURBIN METHOD

This MATLAB code includes the complete autocorrelation-array part from the normal equation, after which the Levinson-Durbin recursion is implemented; the remaining procedure for finding the linear predictive coefficients is the same as for the normal equation. Here too there are two codes: one used for finding the parametric dependencies, and a general one that works for any frame size individually.

The MATLAB code for finding the parametric dependencies is at the following link:

Levinson Durbin code for finding best LPF order

The MATLAB code for any frame size individually is given at the following link:

Levinson Durbin Code for independent frame size

| Frame size | Prediction gain | Error | Computation time |
| --- | --- | --- | --- |
| 25 | -2.7505 | 1.8611 | 0.0507 |
| 26 | 33.9656 | 0.2182 | 0.0304 |
| 27 | 45.4930 | 0.0571 | 0.0310 |
| 28 | 30.1631 | 0.01446 | 0.0374 |
| 29 | 9.7875 | 0.5883 | 0.0341 |
| 30 | 50.9400 | 0.0690 | 0.0359 |
| 31 | 40.4751 | 0.0844 | 0.0416 |
| 32 | 6.9151 | 0.8693 | 0.0409 |
| 33 | 1.2654 | 0.5974 | 0.0456 |
| 34 | -2.8053 | 0.7356 | 0.0515 |
| 35 | -6.6353 | 0.6805 | 0.0535 |
| 36 | -7.0012 | 1.5499 | 0.0559 |
| 37 | 12.7679 | 2.5515 | 0.0591 |
| 38 | 11.1993 | 0.3062 | 0.0653 |
| 39 | -11.4513 | 1.5360 | 0.0863 |
| 40 | 60.5063 | 0.0405 | 0.0699 |
| 41 | 10.0912 | 0.6005 | 0.0735 |
| 42 | 23.4340 | 0.2243 | 0.0799 |
| 43 | -1.0739 | 1.0544 | 0.0855 |
| 44 | -14.9640 | 1.8611 | 0.0966 |
| 45 | -6.1271 | 1.7270 | 0.0947 |

Table showing the output results of the Levinson-Durbin method

The execution of the program shows that the best frame size for the computation is 40; its values are optimal compared with the parameter values of the other frame sizes. It is also observed that the dynamic range of the output (predicted signal) tends to be greater than that of the normal-equation output. The error and the prediction gain are calculated as means, as in the normal equation. The two methods yield different optimal frame sizes because they are different computational methods, exploiting different properties of the autocorrelation matrix and the right-hand side to obtain the linear predictive coefficients. The deviation in computation time between the two methods is not great here, but it becomes significant for large frame sizes, as in external prediction, where large frames are used (the reason is explained at the relevant location).

## LEROUX GUEGUEN ALGORITHM

Leroux and Gueguen proposed this method in 1979 to compute linear predictive coefficients from the auto correlation values derived from the normal equation method. This method is also an extensional concatenated part of normal same as like that of the Levinson Durbin method. The main reason for developing this method of computation of LPC coefficients as it eliminates the large dynamic range obtained by the levinson Durbin method when computations are done in fixed point environment. The main reason for getting elimination of this problem is by taking the application of Schwarz inequality in computation of this method. The basic algorithm of Leroux Gueguen also starts from the autocorrelation array found out from the normal equation.

One of the basic problem in this is that it can be implemented on the odd number frame sizes because as its algorithm implies the division of the autocorrelation array from the normal equation into two equal half of the and one element for the zeroth location.

ALGORITHM OF LEROUX-GUEGUEN

The logic up to the finding of the autocorrelation array is the same as for the normal equation; as MATLAB is used for the computation, precautionary index shifts are again applied to avoid zero and negative indices by moving the same logic to positive indices.

Initialization: l = 0, set

ε^(0)(j) = R[j] , for j = −M+1, ..., M

Recursion: for l = 1, 2, 3, ..., M

Step 1: compute the l-th RC:

k_l = ε^(l−1)(l) / ε^(l−1)(0)

Stop if l = M.

Step 2: calculate the parameters:

ε^(l)(j) = ε^(l−1)(j) − k_l · ε^(l−1)(l−j)

The epsilon parameters calculated above yield the coefficients used for prediction in this particular method.
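The epsilon recursion can be sketched in Python as follows (an illustration; what the recursion delivers directly are the reflection coefficients k_l, and the intermediate epsilon values are bounded via the Schwarz inequality, which is what makes the method attractive in fixed point; the example autocorrelation values are our own):

```python
import numpy as np

def leroux_gueguen(R, M):
    """Reflection coefficients k_1..k_M from the autocorrelation values
    R[0..M] via the Leroux-Gueguen epsilon recursion."""
    eps = {j: R[abs(j)] for j in range(-M + 1, M + 1)}   # eps^(0)(j) = R[j]
    k = []
    for l in range(1, M + 1):
        kl = eps[l] / eps[0]                             # Step 1: l-th RC
        k.append(kl)
        if l == M:
            break                                        # stop if l = M
        eps = {j: eps[j] - kl * eps[l - j]               # Step 2: epsilon update
               for j in eps if (l - j) in eps}
    return k

# Example on the autocorrelation of a random frame
rng = np.random.default_rng(2)
x = rng.standard_normal(200)
M = 5
R = [float(np.dot(x[:len(x) - i], x[i:])) for i in range(M + 1)]
k = leroux_gueguen(R, M)
```

For a valid autocorrelation sequence every reflection coefficient lies strictly inside the unit interval, which is what keeps fixed-point arithmetic safe.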

MATLAB CODE FOR LEROUX-GUEGUEN METHOD

As this is an extension of the normal equation from the autocorrelation array onwards, the basic difference in the code lies between the autocorrelation array and the finding of the coefficients.

The MATLAB code implemented for finding the best linear predictive filter order is given at the link:

Leroux Gueguen code for finding best LPF order

The MATLAB code implemented independently for finding the parameters for a given frame size is given at the link:

Leroux Gueguen for independent frame size

| Frame size | Prediction gain | Error | Computation time |
| --- | --- | --- | --- |
| 25 | 4.9666 | 0.4.32 | 0.0235 |
| 27 | 4.1984 | 0.2922 | 0.0072 |
| 29 | 5.1718 | 0.4204 | 0.0081 |
| 31 | 0.7011 | 0.4932 | 0.0090 |
| 33 | -3.0359 | 0.5001 | 0.0095 |
| 35 | -13.7850 | 3.0123 | 0.0106 |
| 37 | -13.6948 | 4.1451 | 0.0119 |
| 39 | -9.8877 | 0.6910 | 0.0128 |
| 41 | -15.0956 | 1.0029 | 0.0137 |
| 43 | -8.8094 | 0.8091 | 0.0150 |
| 45 | -10.0672 | 0.7329 | 0.0161 |

Table showing the output results of the Leroux-Gueguen method

The results above clearly show that the best frame size is 27, as it has the least error and all its other values are optimal. Note that the three methods applied for internal prediction gave different best frame sizes, depending on the computational method chosen.

## COMPARISON OF THE METHODS

The best way to compare the three methods is by comparing their performance at their respective best frame sizes, as shown in the table below.

| Method used | Prediction gain | Error | Computation time |
| --- | --- | --- | --- |
| Normal Equation (30) | 7.8378 | 0.4245 | 0.0198 |
| Levinson-Durbin method (40) | -60.5063 | 0.6005 | 0.0699 |
| Leroux-Gueguen method (27) | 4.1984 | 0.2922 | 0.0072 |

Table showing the comparison of all three methods

## CHAPTER 3

## EXTERNAL PREDICTION


External prediction is the case of pure prediction of samples. In external prediction we predict the value of a sample based on the autocorrelation of the past M samples in the frame. The predicted sample is purely predicted in nature, since we predict its value for the situation in which fetching the original signal has failed. In this prediction the sliding window moves forward, towards the right, and as it moves it picks up more and more predicted samples, which are not true values. Since in this study both the original signal and the predicted signal are available, it is assumed that for some period of time no data are available, replicating the data-loss condition; and because both signals are actually in hand, everything computed in the case of internal prediction can be computed here too, and the best LPF order (frame size) can be chosen.

The other important thing to keep in mind while selecting the frame size for external prediction is that the frame should be considerably bigger, so that the maximum amount of data loss can be covered in case of a problem. This condition must be taken seriously, because prediction is possible only up to half of the frame size.

Figure showing the sliding window nature of the external prediction

## EXTERNAL PREDICTION BY NORMAL EQUATION

All the implications and conditions taken into consideration for this method in internal prediction hold good in external prediction as well. The main and basic differences between internal and external prediction are the data availability and the direction in which the window slides. In external prediction we find all the coefficients from the samples of the frame, and with these the first sample can be predicted; this predicted sample joins the moving window of original signal samples, and the next sample is then found in the same way. This process continues for as many samples as need to be predicted in the frame, so that the impact of the data loss is minimised.

The other main factor to keep in mind while selecting the frame size is that the number of externally predicted samples should be at most half of the frame size; if this condition is not respected, the performance of the output deteriorates very rapidly.
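The forward prediction loop described above can be sketched in Python/NumPy (an illustrative sketch, not the project's MATLAB code; the predictor order of half the frame size and all names are our own assumptions):

```python
import numpy as np

def lpc_fit(x, order):
    """LP coefficients of frame x by solving the normal equations."""
    R = np.array([np.dot(x[:len(x) - i], x[i:]) for i in range(order + 1)])
    Rs = np.array([[R[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(Rs, R[1:])

def external_predict(frame, n_lost):
    """Forward ("external") prediction of n_lost samples after `frame`.
    Each new sample is fed back into the sliding window; per the text,
    n_lost must not exceed half the frame size."""
    frame = np.asarray(frame, dtype=float)
    M = len(frame)
    assert n_lost <= M // 2, "can only predict up to half the frame size"
    order = M // 2                        # illustrative order choice
    a = lpc_fit(frame, order)
    buf = list(frame)
    for _ in range(n_lost):
        past = buf[-1:-1 - order:-1]          # most recent samples, newest first
        buf.append(float(np.dot(a, past)))    # S^[n] = sum_i a_i * S[n-i]
    return buf[M:]

rng = np.random.default_rng(3)
frame = np.sin(0.3 * np.arange(40)) + 0.05 * rng.standard_normal(40)
predicted = external_predict(frame, 10)       # replicate a 10-sample data loss
```

With a frame of 40 samples, at most 20 samples can be predicted, matching the half-frame rule stated above.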

MATLAB CODE FOR NORMAL EQUATION EXTERNAL PREDICTION

The MATLAB code implemented for finding the best value of the frame size is given at the link below.

External prediction by Normal Equation

| Frame size | Prediction gain | Error | Computation time |
| --- | --- | --- | --- |
| 60 | -16.8324 | 1.1147 | 0.0714 |
| 62 | -9.1065 | 1.4197 | 0.0558 |
| 64 | -6.7607 | 0.5900 | 0.0595 |
| 66 | -10.2341 | 0.6914 | 0.0635 |
| 68 | -9.8660 | 0.6489 | 0.0668 |
| 70 | -14.2619 | 1.0570 | 0.0707 |
| 72 | -8.6143 | 0.8018 | 0.0746 |
| 74 | -14.8672 | 1.2543 | 0.0799 |
| 76 | -13.6357 | 1.5302 | 0.0824 |
| 78 | -8.7552 | 1.5739 | 0.0844 |
| 80 | -9.1849 | 1.5314 | 0.0898 |
| 82 | -11.1392 | 0.7446 | 0.0937 |
| 84 | -13.1317 | 0.8157 | 0.0983 |
| 86 | -8.2524 | 0.6740 | 0.1039 |
| 88 | -11.5712 | 0.6627 | 0.1071 |
| 90 | -16.7442 | 1.1665 | 0.1119 |

Table showing the output for external prediction by Normal Equation

From the computation table we can draw some conclusions about how the frame size influences the output of the linear prediction:

- As the LPF order increases, the computation time also increases gradually, which needs to be addressed.
- The error, computed as the absolute mean over the predicted and original signals, is always positive.
- The prediction gain gives a good picture of the performance of the system: the higher its value, the better.
- The best frame size found is 64; its parameter values are optimal compared to the other values.

## EXTERNAL PREDICTION BY LEVINSON-DURBIN METHOD

External prediction by the Levinson-Durbin method is likewise an extension of the normal equation, since the method takes over only at the point where the LPC coefficients are found from the autocorrelation array. The same procedure as explained for the normal equation is followed, except that the Levinson-Durbin code is used for determining the linear predictive coefficients. Here too the sliding-window concept is followed for the determination of external samples, moving the window towards the right each time a single predicted sample is calculated.

MATLAB is used for the computational part of the algorithm, and as it does not support negative or zero values for array indices, the program is manipulated to work with positive index values by shifting.

MATLAB CODE FOR LEVINSON DURBIN EXTERNAL PREDICTION

The MATLAB code implemented for finding the best value of the frame size is given at the link below.

External prediction by Levinson Durbin

| Frame size | Prediction gain | Error | Computation time |
| --- | --- | --- | --- |
| 60 | -6.5087 | 0.6885 | 0.0281 |
| 62 | -11.0452 | 1.2399 | 0.0080 |
| 64 | -5.1587 | 0.6250 | 0.0083 |
| 66 | -10.0668 | 0.6758 | 0.0087 |
| 68 | -8.7199 | 1.1707 | 0.0091 |
| 70 | -10.1204 | 1.6853 | 0.0097 |
| 72 | -8.3033 | 0.6913 | 0.0099 |
| 74 | -7.3087 | 0.7524 | 0.0105 |
| 76 | -7.6191 | 1.4810 | 0.0110 |
| 78 | -12.5817 | 0.7354 | 0.0114 |
| 80 | -5.5954 | 1.0114 | 0.0119 |
| 82 | -11.3132 | 1.0240 | 0.0124 |
| 84 | -5.9275 | 1.9989 | 0.0130 |
| 86 | -2.8949 | 0.8516 | 0.0138 |
| 88 | 0.0141 | 3.3348 | 0.0143 |
| 90 | -6.1593 | 1.4227 | 0.0158 |

Table showing the output for external prediction by the Levinson-Durbin method

Comparing the tables for the normal equation and the Levinson-Durbin method, the time taken for computation by the Levinson-Durbin method is much less than that of the normal equation. The optimal frame size obtained for this method is 64, and its prediction gain is optimal compared with all other values in the table.

## EXTERNAL PREDICTION BY LEROUX-GUEGUEN METHOD

In external prediction using the Leroux-Gueguen method, care is needed when selecting the frame size, because Leroux-Gueguen supports only odd frame sizes; the reason for this limitation is clearly explained in the internal prediction section for the method. All other limitations and implications considered under internal prediction hold good for external prediction too. The sliding-window concept is again applied in Leroux-Gueguen for the determination of the external prediction.

MATLAB CODE FOR EXTERNAL PREDICTION BY LEROUX GUEGUEN

The MATLAB code for external prediction by the Leroux-Gueguen method is given at the following link:

External prediction by Leroux Gueguen

| Frame size | Prediction gain | Error | Computation time |
| --- | --- | --- | --- |
| 61 | -15.5512 | 1.3953 | 0.0918 |
| 63 | -19.4328 | 2.1821 | 0.0647 |
| 65 | -20.0579 | 2.4559 | 0.0597 |
| 67 | -20.5689 | 3.7797 | 0.0634 |
| 69 | -18.3390 | 3.0770 | 0.0698 |
| 71 | -10.2531 | 1.5026 | 0.0722 |
| 73 | -21.7538 | 2.7585 | 0.0745 |
| 75 | -26.6728 | 7.8200 | 0.0815 |
| 77 | -24.9938 | 2.6182 | 0.0849 |
| 79 | -25.9951 | 6.3316 | 0.1036 |
| 81 | -25.5749 | 5.9343 | 0.1181 |
| 83 | -21.0748 | 2.8724 | 1.1059 |
| 85 | -20.8676 | 2.9036 | 0.1064 |
| 87 | -21.8938 | 2.6832 | 0.1124 |
| 89 | -21.7357 | 2.5898 | 0.1113 |

Table showing the output for external prediction by Leroux Gueguen

The execution of the MATLAB code shows that the best frame size for the computation of the parameters above is 61. The values computed for the error are absolute means, and the prediction gain is calculated from the original signal and the error signal; the prediction gain gives an insight into the performance of the system for that particular frame size.

## COMPARISON OF ALL THREE METHODS FOR EXTERNAL PREDICTION

The best way to compare the three methods is by comparing their performance at their respective best frame sizes, as shown in the table below.

| Method used | Prediction gain | Error | Computation time |
| --- | --- | --- | --- |
| Normal Equation (64) | -6.7607 | 0.5900 | 0.0595 |
| Levinson-Durbin method (64) | -5.1587 | 0.6250 | 0.0083 |
| Leroux-Gueguen method (61) | -15.5512 | 1.3953 | 0.0918 |

Table showing the comparison of all three methods

From the above table we can clearly see that the performance depicted by the prediction gain gradually decreases from the first method to the last, and the same applies to the error. Considering the computation time, the Levinson Durbin method is the fastest of all the methods, but its limitations regarding dynamic range and fixed-point computation make it suitable only for limited applications. From these values it is concluded that the Levinson Durbin method is the best method, giving optimal values over all three parameters for external prediction.

## CHAPTER 5

## REAL TIME MULTIPLE FAULT SCENARIO

## REAL TIME FAULT SCENARIO

In this section of the project, the concept of faults in a signal is depicted by generating multiple faults of different lengths in the input signal. This scenario takes the closest approach to real-time processing, where faults occur at distinct locations in the input signal and persist for different amounts of time. One assumption made to keep the code simple and fast is that the locations of the faults in the system are known from the start. The method used to develop the code is the normal equation written for external prediction; this method was chosen because it is easy to modify and we are concentrating extensively on external prediction in this project.

To develop this code it is required to write two different matlab files: one containing the modified version of the normal equation for this purpose, and the other containing the code for initialization of variables, calling of the function, and plotting of the results. As discussed earlier, the locations where the faults will be generated during execution of the program are known in advance and are as follows.

1st fault: start of fault = 80; end of fault = 110

2nd fault: start of fault = 150; end of fault = 190

3rd fault: start of fault = 500; end of fault = 550

MATLAB CODE FOR REAL TIME MULTIPLE FAULTS

The modified Normal Equation matlab code is given at the following link:

Modified normal equation matlab code

The matlab file containing the calling code is given at the following link:

Matlab calling code

From the evaluation of the above scenario it is clear that, with the help of these codes, this fault handling can be implemented in a real-time scenario. This also shows that these codes hold good in a real-time scenario where multiple faults of variable sizes are generated at random.

## CHAPTER 6

## ADVANCEMENTS AND CONCLUSION

## FUTURE ADVANCEMENTS

This work forms a general base for the prediction of a signal, whether by internal or external prediction. As the concept has major applications in various fields, its possible future advancements are vast in nature. Some of the major future advancements of this concept are as follows.

All of these methods can be implemented at the predictive stage of a Kalman filter to enhance its predictor performance.

## CONCLUSION

In conclusion, it can be said that all the objectives set out in the project have been achieved. Our main focus in this project was on the external prediction of the signal, and because of this the real-time multiple-fault condition was also considered; this multiple-fault condition mainly depicts the real-time scenario. In the present project we have studied all the prediction methods for both internal and external prediction of signals. The main aspects studied other than prediction itself were the dependencies of the parameters on the frame size and the interdependencies of the parameters in prediction. We also successfully developed a method which applies logic to the absolute mean square error obtained for each and every frame size to decide the best linear predictive filter order. At last, concluding the documentation of the project, it can be said that it was a great success and all objectives were accomplished.