# Dual Fuzzy Neural Network Adaptive Control Computer Science Essay



The dual fuzzy neural network presented in this paper is capable of implementing fuzzy inference in general and a neural network mechanism in particular. A systematic method for mapping an existing rule base into a set of dual fuzzy neural network weights is also presented. To utilize this method for initializing the dual fuzzy neural network weights, a rule base must first be obtained from domain experts or from experimental data through systematic knowledge acquisition methods. In this paper, a new learning algorithm for the dual fuzzy neural network is developed. The characteristics of this learning algorithm include learning new rules, fine-tuning initial rules, and eliminating erroneous rules from the rule base. Finally, examples illustrate how the dual fuzzy neural network can be used for the implementation of knowledge acquisition.

## Introduction

In many cases, the plant to be controlled is too complex for the exact system dynamics to be found, and the operating conditions in dynamic environments may be unexpected. Therefore, adaptive control techniques have been combined with multiple models. The current trend in intelligent systems and dynamic control research is concerned with the integration of artificial intelligence tools [1, 2, 3, 4] in a complementary hybrid framework for solving complex problems.

The transfer of knowledge from some source into the knowledge base is called knowledge acquisition [5]. Knowledge acquisition involves applying a technique for eliciting data [6] from the expert, interpreting the data to recognize the underlying knowledge, and constructing a knowledge representation model. The sources of knowledge include books, personal experience, experimental data and so on. Knowledge acquisition can be conducted manually from domain experts or automatically as a result of machine learning [7, 8]. In the case of manual knowledge acquisition, several computer programs have been developed to preprocess the acquired knowledge before adding it to the knowledge base of the target expert system [9, 10, 11, 12].

In this paper, we propose a novel dynamic tracking fuzzy neural network (DTFN) control system using two fuzzy neural networks (dual FNN) [13]. Two fuzzy neural networks with the same structure are considered. First, one FNN is designed as the controller; its uncertainty compensation control uses the ANFIS technique. Second, the other FNN serves as a learning model: training data from the plant allow the learning part to acquire a suitable control strategy. Third, when the external environment changes, the learning part, which has already been trained, is switched online to act as the controller, while the network previously acting as the controller becomes the learning part.

We also propose a systematic method for mapping an existing rule base into a set of DTFN system weights. In order to utilize this method to initialize the DTFN weights, a body of knowledge in the form of a fuzzy rule base must be readily available. Such a rule base can be obtained from domain experts or from experimental data through systematic knowledge acquisition methods. In this paper, the process of knowledge acquisition is reviewed, and a new learning algorithm for the DTFN system is developed. The characteristics of this learning algorithm include learning new rules, fine-tuning initial rules, and eliminating erroneous rules from the rule base.

The paper is structured as follows: we start with a discussion of the dual fuzzy neural network. System adaptation with a single fuzzy neural network and with the dual fuzzy neural network is introduced in Section 2. In Section 3, our concept of the dynamic tracking fuzzy neural network (DTFN) control system learning algorithm is presented; we mainly focus on the learning model and the adaptive control in the DTFN control system. The real-time control actions using the DTFN in the magnetic levitation system are discussed in Section 4. Finally, we conclude with a discussion and an outlook in Section 5.

## Discussion of the Dual Fuzzy Neural Network

### System Adaptation with Single Fuzzy Neural Network

Consider a nonlinear process given by:

$$\dot{x} = f(x, u) \tag{1}$$

where $x \in \mathbb{R}^{n}$ is the state and $u \in \mathbb{R}^{m}$ is the input vector; $f(\cdot)$ is a bounded, locally Lipschitz and smooth function.

Let us consider the following dynamic neural network to identify the nonlinear process Eq. (1):

$$\dot{\hat{x}} = A\hat{x} + W_{1}\,\sigma(W_{2}\hat{x}) \tag{2}$$

where $\hat{x}$ is the state of the neural network, and $A$ is a known stable matrix which will be specified. The matrix $W_{1}$ is the weight of the output layer, and $W_{2}$ is the weight of the hidden layer; $\sigma(\cdot)$ is the neural network activation function. The elements of $\sigma(\cdot)$ can be any stationary, bounded and monotone increasing functions; in this section we use sigmoid functions. Generally, the multilayer dynamic neural network of Eq. (2) cannot exactly match the given nonlinear system of Eq. (1), so the nonlinear system of Eq. (1) can be represented as:

$$\dot{x} = A x + W_{1}^{*}\,\sigma(W_{2}^{*} x) + \tilde{f} \tag{3}$$

where $\tilde{f}$ is defined as the modeling error, and $W_{1}^{*}$ and $W_{2}^{*}$ are the set of unknown weights which minimize the modeling error $\tilde{f}$. The identified nonlinear system of Eq. (1) can also be written as:

$$\dot{x} = A x + W_{1}^{0}\,\sigma(W_{2}^{0} x) + \tilde{f}_{0} \tag{4}$$

where $\tilde{f}_{0}$ is the modeling error, and $W_{1}^{0}$ and $W_{2}^{0}$ are the set of weights chosen by the system identification agency.
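As a rough illustration (not the authors' implementation), a dynamic-network identifier of the form x_hat' = A·x_hat + W1·σ(W2·x_hat) can be simulated with simple Euler integration. The matrices `A`, `W1`, `W2`, the step size, and the sigmoid activation below are placeholder choices:

```python
import numpy as np

def sigmoid(z):
    # bounded, monotone increasing activation, as required in the text
    return 1.0 / (1.0 + np.exp(-z))

def simulate_identifier(A, W1, W2, x0, dt=0.01, steps=500):
    """Euler-integrate x_hat' = A x_hat + W1 * sigmoid(W2 x_hat)."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        dx = A @ x + W1 @ sigmoid(W2 @ x)
        x = x + dt * dx
        traj.append(x.copy())
    return np.array(traj)

# placeholder stable matrix (negative eigenvalues) and small random weights
A = np.array([[-2.0, 0.0], [0.0, -3.0]])
rng = np.random.default_rng(0)
W1 = 0.1 * rng.standard_normal((2, 3))
W2 = 0.1 * rng.standard_normal((3, 2))
traj = simulate_identifier(A, W1, W2, x0=[1.0, -1.0])
```

Because `A` is stable and the weights are small, the simulated state stays bounded, which is what the identification scheme relies on before weight adaptation begins.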

### System Adaptation with Dual Fuzzy Neural Network

From Eq. (4) we know that a neural network cannot match a nonlinear system exactly; the modeling error depends on the structure of the network.

Although the single neural network of Eq. (2) can identify any nonlinear process, the identification error is large if the network structure does not suit the steady-state data after an environment change. In general we cannot find the optimal network structure, but we can use two candidate networks and select the better one by a proper switching algorithm.

The structure of the two dynamic neural networks is shown in Figure 1. Here, $N_{1}$ and $N_{2}$ are neural identifiers, whose outputs are $\hat{x}_{1}$ and $\hat{x}_{2}$. At the beginning of the system control, we use a selector to choose the best identifier such that the identification error is minimized. Let $\Theta$ be a closed and bounded set that represents a parameter space of finite dimension. Assume that the plant parameter vector $\theta$ and the model parameter vectors $\theta_{i}$ (the weights of the neural networks) belong to $\Theta$. We assume that the plant and all of the models can be parametrized in the same way as in Eq. (4). Each parameter vector $\theta_{i}$ is associated with one neural identifier $N_{i}$. The two dynamic neural networks are presented as:

$$\dot{\hat{x}}_{i} = A_{i}\hat{x}_{i} + W_{1i}\,\sigma(W_{2i}\hat{x}_{i}) \tag{5}$$

where $i = 1, 2$. The objective of the two neural networks is to improve the performance of the identification using a finite number of models; the model $\hat{x}_{i}$ is selected according to the plant parameter vector $\theta$.

Figure 1. The general view of DTFN control system

At each instant the identification error $e_{i} = \hat{x}_{i} - x$ corresponding to each neural identifier is calculated. The goal of the two-network identification is to select a suitable identifier from all possible switching inputs such that the performance index is minimized. We can define the identification error performance index for each neural identifier as:

$$J_{i}(t) = a\, e_{i}^{2}(t) + b \int_{0}^{t} e_{i}^{2}(\tau)\, d\tau, \qquad i = 1, 2 \tag{6}$$

where $a$ and $b$ are design parameters; $a$ and $b$ define the weights given to the instantaneous and long-term errors, respectively.
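As a discrete-time stand-in for this performance index (the weights `a`, `b` and the error histories below are illustrative, and the integral is replaced by a running sum), a switching selector might look like:

```python
import numpy as np

def performance_index(errors, a=0.5, b=0.5):
    """J_i = a * e_i(t)^2 + b * (running sum of past squared errors).
    a weights the instantaneous error, b the long-term error."""
    errors = np.asarray(errors, dtype=float)
    return a * errors[-1] ** 2 + b * np.sum(errors ** 2)

def select_identifier(error_histories, a=0.5, b=0.5):
    """Return the index of the neural identifier with the smallest index J_i."""
    indexes = [performance_index(e, a, b) for e in error_histories]
    return int(np.argmin(indexes))

# identifier 0 had a large early error that has decayed; identifier 1 stays moderate
e0 = [2.0, 1.0, 0.1, 0.05]
e1 = [0.5, 0.5, 0.5, 0.5]
best = select_identifier([e0, e1])  # identifier 1 wins: its long-term error is smaller
```

The long-term term penalizes identifier 0 for its large transient, so the selector prefers identifier 1 even though identifier 0 has the smaller instantaneous error.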

## The DTFN Learning Algorithm

The objective of the neural network learning algorithm is to enhance the performance of the neural network by improving a performance measure. This is typically done by decreasing a measure of error derived from the performance measure. If the neural network can be trained in the presence of a set of target examples (correct outputs), or of a mechanism that can provide corrections (a teacher), then the measure of error can be the sum of the squared errors, as in the following:

$$E = \sum_{k=1}^{N} \left(t_{k} - y_{k}\right)^{2} \tag{7}$$

where $t_{k}$ is the target or correct value for the $k$-th output, $N$ is the number of outputs, and $y_{k}$ is the $k$-th output of the neural network.
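For concreteness, the sum-of-squared-errors measure can be computed as follows (whether a 1/2 factor is included is a convention; none is used here):

```python
def sum_squared_error(targets, outputs):
    """E = sum over outputs k of (t_k - y_k)^2."""
    return sum((t_k - y_k) ** 2 for t_k, y_k in zip(targets, outputs))

e = sum_squared_error([1.0, 0.0], [0.5, 0.5])  # (0.5)^2 + (-0.5)^2 = 0.5
```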

### Updating the Sensitivity Weights

In light of the implications of the DTFN neuronal model, it can be concluded that the neuron is tuned to be sensitive to a particular input pattern, and that the neuron expresses the degree of similarity between the input pattern and the one it is tuned for by means of its measure of sensitivity. It is conceivable that the process of tuning the neuron's sensitivity to a particular input pattern is independent of its activation value. In other words, the process of tuning the sensitivity of the neuron to a particular pattern may depend only on the input patterns and their frequency of occurrence. This observation motivates the following conjecture:

The above conjecture implies a mechanism for updating the sensitivity pattern of the neuron. In light of this conjecture, the following recursive update mechanism for the sensitivity weights $s_{ij}$ is proposed:

$$\delta_{i} = \eta\, E\, \phi_{i} \tag{8}$$

$$s_{ij}^{\text{new}} = s_{ij} + \delta_{i}\,\left(x_{j} - s_{ij}\right) \tag{9}$$

where $i$ is the index of the neuron in the DTFN neuron model, $s_{ij}$ is the sensitivity weight associated with the $j$-th element of the input vector of the neuron before updating, $s_{ij}^{\text{new}}$ is the sensitivity weight after updating it (for the next iteration), $\delta_{i}$ is a temporary variable, $x_{j}$ is the $j$-th element of the input vector, $\eta$ is a learning rate that can be used to control the weight update resolution, $E$ is a measure of error, and $\phi_{i}$ is the value of the measure of sensitivity for the $i$-th neuron corresponding to its inputs. Clearly, if the measure of error $E$ or the neuron's measure of sensitivity $\phi_{i}$ is zero, the sensitivity weights will not be changed by Eq. (9). In contrast, if the measure of error $E$ is relatively large (close to 1), and the neuron's measure of sensitivity $\phi_{i}$ is also relatively large, then the sensitivity weights will asymptotically approach the input vector being repeated. In other words, the neuron will become more sensitive to the repeating input pattern.

Although updating the sensitivity weights will affect the sensitivity measure, and therefore the neuron's activation and the output of the DTFN neuron model, Eq. (9) is not intended to decrease the measure of error. The sensitivity weight update is intended to tune the neurons to become more sensitive to the more frequently applied input patterns.
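A minimal sketch of this sensitivity update, under the assumption that each weight moves toward the corresponding input element by a step proportional to the error and the sensitivity (the symbol names and step form are illustrative):

```python
def update_sensitivity(s, x, error, phi, eta=0.5):
    """Recursive sensitivity-weight update sketched from the description:
    no change when the error or the sensitivity phi is zero; the weights
    move toward the input pattern x when both are large."""
    delta = eta * error * phi                       # temporary variable
    return [s_j + delta * (x_j - s_j) for s_j, x_j in zip(s, x)]

s = [0.0, 0.0]
x = [1.0, -1.0]
for _ in range(50):                                 # repeated presentation of one pattern
    s = update_sensitivity(s, x, error=1.0, phi=1.0)
# s has asymptotically approached the repeated input pattern [1.0, -1.0]
```

Note the two limiting behaviors described in the text: with `error=0.0` or `phi=0.0` the weights are returned unchanged, while repeated presentation of the same pattern with large error and sensitivity drives the weights geometrically toward that pattern.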

### Updating the Activation Weights

A generalization of the error back-propagation learning algorithm is proposed here for updating the activation weights of the DTFN neuron model. Let us first consider the case of DTFN neurons representing output fuzzy variables. In this case the target values (or correct responses) are available during a training session. Recall that the measure of error with respect to the $k$-th neuron is given by:

$$E_{k} = \left(t_{k} - y_{k}\right)^{2} \tag{10}$$

where $k$ is an index for the neurons in the DTFN neuron model. Recall that the DTFN neuron model is given by the following equations:

(11)

(12)

(13)

(14)

Then the elements of the error gradient vector, with respect to the activation weights, are derived as in the following equations:

(15)

The dependency of the elements of the error gradient in the last equation on the input pattern is implied in the measure of sensitivity, $\phi_{k}$. The activation weight in the $k$-th neuron, $w_{kj}$, can be readily updated using the corresponding element of the error gradient, as in the following equation:

$$w_{kj}^{\text{new}} = w_{kj} - \eta_{a}\, \frac{\partial E_{k}}{\partial w_{kj}} \tag{16}$$

where $\eta_{a}$ is a learning rate.

Eq. (16) is applicable to output neurons only, because it requires knowledge of the target (or correct) values $t_{k}$ for all the neurons. To update the activation weights of hidden DTFN neurons, the output error of all output neurons must be back-propagated to the outputs of the hidden neurons. The same technique used in classical error back-propagation can be extended to the case of the DTFN neuron model. In classical error back-propagation, the output error is first propagated to the inputs, which are the outputs of some hidden neurons, and the process is repeated backward through the layers of the network. Therefore, the output error of the DTFN neuron is propagated to each element of the input vector, $x_{j}$, by the following equation:

$$\frac{\partial E_{k}}{\partial x_{j}} = \frac{\partial E_{k}}{\partial y_{k}}\, \frac{\partial y_{k}}{\partial \phi_{k}}\, \frac{\partial \phi_{k}}{\partial x_{j}} \tag{17}$$

In a straightforward way, the first derivative term on the right-hand side of the above equation evaluates to:

$$\frac{\partial E_{k}}{\partial y_{k}} = -2\left(t_{k} - y_{k}\right) \tag{18}$$

Recall the DTFN neuron output is given by:

(19)

Then, the derivative of the output with respect to the sensitivity measure, $\phi_{k}$, is given by:

(20)

The third derivative term in Eq. (17) evaluates to:

(21)

Eqs. (18), (19), (20) and (21) evaluate all the derivative terms on the right-hand side of Eq. (17), and the following equation is used to update the activation weights of the $k$-th neuron:

(22)
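As an illustration only (the DTFN neuron's actual gradient terms are the ones derived above), an Eq. (16)-style gradient-descent update can be sketched on a simple linear neuron with squared error; the learning rate, neuron form and data are placeholder choices:

```python
def gradient_step(w, grad, lr=0.05):
    """Move each activation weight against its error-gradient element."""
    return [w_j - lr * g_j for w_j, g_j in zip(w, grad)]

# illustrative stand-in: a linear neuron y = w . x with error E = (t - y)^2,
# so dE/dw_j = -2 * (t - y) * x_j; the DTFN neuron's gradient differs.
x, t = [1.0, 2.0], 1.0
w = [0.0, 0.0]
for _ in range(200):
    y = sum(w_j * x_j for w_j, x_j in zip(w, x))
    grad = [-2.0 * (t - y) * x_j for x_j in x]
    w = gradient_step(w, grad)
y = sum(w_j * x_j for w_j, x_j in zip(w, x))  # converges toward the target t
```

The point of the sketch is the update shape: each weight moves against its own gradient element, driving the output toward the target over repeated iterations.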

## The Real-time Control Actions

In order to assess the performance of the proposed scheme in controlling real-time actions for the DTFN control application and automatic knowledge acquisition, magnetic levitation experiments were performed and are presented in this section. All simulation examples are performed over 30 samples to track a reference signal representing the target changes. Once the simulation results were considered satisfactory, the DTFN system controllers were tested directly on the magnetic levitation system, shown in Figure 2.

The requirement for the controller is that it be able to position the ball at any arbitrary location in the magnetic field and move the ball from one position to another. These requirements are captured by placing step-response bounds on the position measurement voltage. Specifically, we require the following constraints on the ball:

- Position constraint: within 20% of the desired position in less than 0.5 seconds
- Settling time constraint: within 2% of the desired position within 1.5 seconds

To meet the control requirements, we implement the DTFN system controllers. For convenience, the controller uses a normalized position measurement with a range from 0 to 1, representing the bottom-most and top-most positions of the ball, respectively.
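The two step-response requirements above can be checked programmatically; a sketch, where the first-order response curve and the sampling grid are illustrative and the tolerances are taken relative to the target position:

```python
import math

def meets_constraints(times, positions, target, t_rise=0.5, rise_tol=0.20,
                      t_settle=1.5, settle_tol=0.02):
    """Check the step-response requirements on the normalized ball position:
    within 20% of the target before 0.5 s, and within 2% from 1.5 s onward."""
    rise_ok = any(t < t_rise and abs(p - target) <= rise_tol * abs(target)
                  for t, p in zip(times, positions))
    settle_ok = all(abs(p - target) <= settle_tol * abs(target)
                    for t, p in zip(times, positions) if t >= t_settle)
    return rise_ok and settle_ok

# toy first-order response toward a normalized target position of 0.8
times = [0.1 * k for k in range(31)]                      # 0.0 .. 3.0 s
positions = [0.8 * (1 - math.exp(-6 * t)) for t in times]
ok = meets_constraints(times, positions, target=0.8)
```

A sluggish or biased response fails the check, which is how a test harness could reject a controller before running it on the physical rig.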

Figure 2. The magnetic levitation control system

Figure 3. The magnetic levitation control system using DTFN in Matlab

Figure 4. The control input signal

Figure 5. The switching scheme, nonlinearity disturbance and fine-tuning results

Figure 3 shows the schematic diagram of the magnetic levitation control system using the DTFN in Matlab. Figures 4 and 5 show the control input signals of the dual-FNN and ANFIS internal model controllers for the magnetic levitation system. The selector of the DTFN system is also presented as the switching scheme of the dual FNN. The controller is optimized using the learning algorithm for the knowledge acquisition method. The nonlinearity disturbance and the fine-tuning results are also shown in Figure 5.

## Conclusion and Future Work

The DTFN learning algorithm as an automation of the knowledge acquisition process has been studied in this paper. Issues related to knowledge acquisition from domain experts have been described, and a systematic method for rule induction from the results of data acquisition experiments has been proposed. DTFN learning has been addressed, and a generalized error back-propagation learning algorithm has been developed for the DTFN neuron model.

Considering the great potential of the DTFN model and its ability to be easily extended, our future work will consider extensions of this model. We will also continue to improve our learning algorithm, which could help improve the overall results of the DTFN. Finally, we will investigate the possibility of extending the DTFN model with intelligent sensors and reinforcement learning applied to real control systems.

## Acknowledgments

This work is supported by the National Natural Science Foundation of P.R. China (Grant No. 60875034) and by the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20060613007).