Introduction Bifurcation Theory And The Basic Setup Engineering Essay



Bifurcation theory has its mathematical origin in the work of Euler; further developments were made by Poincaré, Andronov and Hopf alongside advances in the qualitative theory of differential equations, dynamical systems theory, group theory, singularity theory, etc. A bifurcation in a dynamical system is said to have occurred when a small smooth change in the parameter values (the bifurcation parameters) of the system causes a 'qualitative' or topological change in its dynamics or behaviour. In essence, a bifurcation occurs when a solution's behaviour changes in a radical way. These qualitative changes can sometimes lead to complete chaos. The methods and results of bifurcation theory are fundamental to the understanding of nonlinear dynamical systems, and the theory can potentially be applied to many areas.

There are many real-world examples where a formerly stable system begins to oscillate with larger and larger amplitude when a parameter of the system is changed in one direction. Initially stable civil engineering structures, electrical networks or ecological communities could begin to oscillate and then, as the amplitude grows larger, collapse, break down, or remain oscillatory when some parameter is varied. The mathematical modelling of such phenomena leads to differential equations depending on parameters, which can then be studied to assess the stability of the system.

Such studies have led to the classification of bifurcations into two categories based on where they occur:

Global Bifurcation: These often occur when large invariant sets of the system collide with each other, or with equilibria of the system. They cannot be detected purely by a stability analysis of equilibria. A few examples of global bifurcations include the blue sky catastrophe, homoclinic bifurcation, heteroclinic bifurcation and infinite-period bifurcation. [1]


Local Bifurcation: These bifurcations are characterized by changes in the local stability properties of equilibria or periodic orbits as parameters cross critical thresholds. In continuous systems this is equivalent to the real part of an eigenvalue passing through zero.


Consider a continuous system described by the following ODE:

dx/dt = f(x, α),  x ∈ R^n,  α ∈ R, (1.1)

here α is the bifurcation parameter.

The classification of local bifurcations is based on the eigenvalues of the Jacobian matrix J at the fixed point of the system.

With the saddle-node, transcritical, and pitchfork bifurcations, the stable fixed point has p = trace(J) < 0 and q = det(J) > 0; its eigenvalues are real and negative (they lie in the left half-plane). As α varies, for the transcritical and pitchfork bifurcations one eigenvalue becomes positive, turning the fixed point into a saddle.

With Andronov-Hopf bifurcations, the fixed point's eigenvalues are complex conjugates µ1 and µ2, whose real part passes through zero at the bifurcation. We will study the detection of Hopf bifurcations in further detail in the following chapters.
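As an illustration of this eigenvalue-based classification, the following sketch (a minimal example, not part of the algorithm developed later in this report) classifies the fixed point of a planar system from the trace and determinant of its Jacobian. The family J(α) below is hypothetical, chosen so that its eigenvalues are α ± i and therefore cross the imaginary axis at α = 0, a Hopf-type crossing:

```python
def classify_2x2(J):
    """Classify the fixed point of a planar system from its Jacobian J,
    using p = trace(J) and q = det(J): the eigenvalues are
    (p +/- sqrt(p^2 - 4q)) / 2."""
    p = J[0][0] + J[1][1]                      # p = trace(J)
    q = J[0][0] * J[1][1] - J[0][1] * J[1][0]  # q = det(J)
    disc = p * p - 4.0 * q
    if q < 0:
        return "saddle"                        # real eigenvalues of opposite sign
    if disc >= 0:
        return "stable node" if p < 0 else "unstable node"
    return "stable focus" if p < 0 else "unstable focus"

# Hypothetical family J(alpha) with eigenvalues alpha +/- i: the real
# part equals alpha, so stability is lost as alpha crosses zero.
for alpha in (-0.5, 0.5):
    print(alpha, classify_2x2([[alpha, -1.0], [1.0, alpha]]))
```

For α = −0.5 the point is a stable focus; for α = 0.5 it is an unstable focus, exactly the change of stability that accompanies a Hopf bifurcation.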

A useful definition in the study of bifurcations is the codimension of a bifurcation: the minimum number of parameters that must be varied for the bifurcation to occur. Hopf and saddle-node bifurcations are generically of codimension 1, meaning they depend on only one parameter. Most other bifurcations have higher codimensions, but techniques such as centre manifold reduction often bring their normal forms down to problems in a single parameter.

Brief outline of prior work

To detect bifurcations, several approaches such as centre manifold reduction and normal form theory are available. These involve the following steps:

Reduction: Identification of the neutral mode at the critical parameter value for a system as defined in (1.1), so as to restrict the system to the appropriate centre manifold.

Normalization: Application of coordinate changes to the system to make it simpler and easier to work with. This cannot be generalized and needs to be done for specific cases.

Unfolding: Introduction of small, possibly nonlinear, terms to describe the effects of varying the parameter away from its critical value. These terms are introduced into the normal form of the equation.

Truncation: The unfolded system is truncated at some order to aid the understanding of the dynamics of the system. Once the system is understood to an extent, the effect of restoring higher-order terms is discussed. This effect is important in bifurcations like Hopf, as the higher-order terms are the ones responsible for qualitative changes in the system.

The above steps are difficult to implement numerically, and systems with many variables are often not easily amenable to centre manifold reduction. An alternative approach was therefore described by Yu. A. Kuznetsov in [2]. This theory involves defining test functions which have regular zeros at singularities such as bifurcation points. Sometimes a family of test functions is defined to identify a particular kind of bifurcation. This approach was used to develop a GUI called MATCONT, which is used in the study of dynamical systems and detects bifurcations up to codimension 2. The zeros of the test functions are monitored as the phase plot of the system is computed, in order to determine the type of bifurcation.
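The test-function idea can be sketched in a few lines: monitor the sign of a scalar function along the computed branch and flag the intervals where it changes sign. The branch and the test function below are hypothetical stand-ins chosen for illustration, not MATCONT's actual internals:

```python
def detect_sign_changes(points, phi):
    """Scan consecutive points along a computed branch and report the
    intervals on which the test function phi changes sign -- the
    monitoring idea used to flag candidate bifurcation points."""
    hits = []
    for a, b in zip(points, points[1:]):
        if phi(a) * phi(b) < 0:   # a regular zero lies between a and b
            hits.append((a, b))
    return hits

# Hypothetical branch parametrised by alpha, with a test function that
# vanishes at alpha = 0.15 (e.g. a trace-like quantity crossing zero).
branch = [i * 0.1 - 1.0 for i in range(21)]
phi = lambda alpha: 2.0 * alpha - 0.3
print(detect_sign_changes(branch, phi))
```

One interval is flagged, the one containing α = 0.15; the zero itself is then located more precisely by a one-dimensional search, as described in Section 2.3.2.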

Outline of the report

In this report the 'objects in question' are dynamical systems in the form of differential equations represented in the standard state space format. The systems typically studied can be expressed in the form of equation (1.1) after linearization around an equilibrium point.

This report begins with a description of the prior work done on this topic and the shortcomings of the methodologies presently available. It then explains the theory on which the new approach to detect the Andronov-Hopf bifurcation is based. We explore in detail the Hurwitz stability test and its importance in the proposed method. The basics of Semi Dual minimization and Sum of Squares are studied in depth before proceeding to utilize the tools available to develop the new detection algorithm. We also analyse and compare various descent schemes, such as Newton, Polak-Ribière, BFGS and gradient descent.

The report ends with the application of the algorithm to two very commonly seen examples of bifurcation, and to a synthetic example which exploits the algorithm's ability to deal with a large number of states easily.

Contributions of the project

The contributions made to the detection of Hopf bifurcations are threefold:

A new algorithm is developed which is simple and does not require the computation of phase portraits or of regular zeros of test functions.

The well-established theory of sums of squares is exploited to address the non-negativity of polynomials, a problem that is NP-hard in general but tractable in the univariate case.

Small modifications to the algorithm and the structure of the system let us look for stable or unstable Hopf bifurcations in particular. This feature is not available in any of the commercially available toolboxes for the study of continuous systems.

Chapter 2: Background

2.1 Introduction

The recent developments in the study of dynamical systems can be attributed to a shift of emphasis from an analytical approach to one focused on the topology of the orbit structure of differential equations in the phase plane. This initial impetus triggered a real explosion of interest in dynamical systems and led to valuable contributions by Smale and Arnold in the 1960s.

The global approach of the sixties by the Smale school was important for recognizing the special types of dynamics which have degenerate structure. The global approach further led to the classification of a generic set of systems, which could now be described quite simply.

This in turn gave a new impetus to the work of Andronov et al. [3], Neimark, Sacker, and Hopf on families of dynamical systems. The prediction of global phenomena using local calculations was now possible. For instance, Hopf bifurcations predict the growth of a limit cycle or invariant circle around an equilibrium point which undergoes a change of stability. There are nondegeneracy conditions associated with the creation of this circle, but they are all calculated at the fixed point.

It is almost impossible to know beforehand which of the parameters is critical to unfolding the degenerate behaviour of the defining equations. Of course, some parameters may have no structural effect on the system whatsoever. Often a dynamical system modelling a real-life phenomenon comes prepackaged with too many parameters to handle, and it is very daunting to search in large-dimensional parameter spaces. The development and classification of bifurcations has been extremely valuable in allowing modellers to focus their search for key dynamical behaviour in the parameter space.

Gaining insight into the bifurcation behaviour of a system and obtaining a complete set of topological types of phase portrait that occur by varying parameters is an immensely useful guide to the behaviour of the system. This knowledge can be used to control the system using the emerging bifurcation control techniques.

Most of the work mentioned above has been well documented in [2], which helped us study the background in detail.

This chapter begins with the basic Hopf theorem in the time domain, followed by an application based on it. It then briefly explores the Graphical Hopf Theorem, without going into rigorous mathematical proofs, as we do not require it further.

2.2 Hopf theorem

Let us begin our study of prior work with the basic Hopf theorem, which is as follows:

Theorem 2.1 (Hopf[4])

Consider the system

dx/dt = f(x, µ),  x ∈ R^n,  µ ∈ R, (2.1)

where f is sufficiently smooth; suppose that for small |µ| the Jacobian matrix evaluated at the origin has a pair of complex conjugate eigenvalues α(µ) ± iω(µ) with α(0) = 0, ω(0) > 0 and α'(0) > 0 (the derivative of the real part with respect to the parameter µ is positive), and the other n-2 eigenvalues have negative real part; then

there exist a δ > 0 and a function µ(ε), defined for 0 ≤ ε < δ with µ(0) = 0, such that for each such ε the system (2.1) with µ = µ(ε) has a periodic solution with period T(ε), and the amplitude of this periodic solution (the approximate distance of the corresponding periodic orbit from the origin) is proportional to ε;

the origin of the (x, µ)-space has a neighbourhood that does not contain any periodic orbit of (2.1) except those of the family above;

if the origin x = 0 is a 3-asymptotically stable (resp. 3-unstable) equilibrium of the system at the critical parameter value, then µ(ε) > 0 (resp. µ(ε) < 0) for small ε > 0, and the periodic solution is asymptotically orbitally stable (resp. unstable).

The above theorem, with its statements (i), (ii) and (iii), establishes the existence, the uniqueness and the stability of the bifurcating periodic solution, respectively. The uniqueness is to be understood as it is expressed in (ii); observe that it does not mean that to each sufficiently small µ there belongs exactly one periodic orbit. The point is that the function µ(ε) may assume the same value at different values of ε arbitrarily near zero (cf. Negrini-Salvadori [1979]), resulting in several periodic orbits corresponding to the same system (to the same µ) but to different ε's.

INSERT FIGURE 2.2.1 (scan and paste)

Two typical situations arise when the condition in (iii) is satisfied; these are shown in Figure 2.2.1. The first case, when x = 0 is 3-asymptotically stable, is called a supercritical bifurcation, since the periodic orbits then appear above the critical value of the bifurcation parameter. The second case, when x = 0 is 3-unstable, is called a subcritical bifurcation: in this case the periodic orbits appear below the critical value.

Sometimes the respective terms "soft" and "hard" bifurcations are used: "soft" because the system then begins to oscillate stably with small amplitudes; "hard" because the behaviour of the system is then unforeseeable; usually it starts to oscillate with wilder and wilder amplitudes, often leading to breakdown, collapse or chaos.

Theorem 2.1 forms the basis of the Hopf bifurcation detection process with its enormously important existence criteria.

2.3 Detection methodologies

One of the most popular techniques that has been exploited to study dynamical systems is that of 'test functions'. We will attempt to understand how this is used in the context of MATCONT [5], a GUI developed to study dynamical systems. The following explanation is based on the work done in [2] and other sources.

2.3.1 Computation of solution curve

Let us consider a smooth function F : R^(n+1) → R^n.

A solution curve of the equation F(x) = 0 is computed with the help of a technique called numerical continuation. This technique obtains a series of points which approximate a given branch. Most continuation algorithms implement a predictor-corrector method. The idea behind this method is to generate a sequence of points xi, i = 1, 2, ..., n along the curve, satisfying a chosen tolerance criterion ||F(xi)|| ≤ ε for some ε > 0, and an additional accuracy condition ||dxi|| ≤ ε′, where ε′ > 0 and dxi is the last Newton correction.

The generation of new points consists of two important steps:

Prediction of new points is made with the commonly used tangent prediction technique

Correction of the generated points

We assume that the predicted point is close to the curve. To find the next point xi+1 on the curve we use a Newton-like procedure. Since standard Newton iterations can only be applied to systems with as many equations as unknowns, an extra scalar condition is appended. This condition is based on 'Moore-Penrose' arclength prediction.
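A minimal predictor-corrector continuation can be sketched as follows. The scalar equation F(x, α) = x² + α² − 1 = 0, whose solution branch is the unit circle, is a toy stand-in; the appended scalar condition here keeps the Newton correction orthogonal to the predictor tangent, a pseudo-arclength flavour of the idea rather than MATCONT's actual Moore-Penrose implementation:

```python
import math

def F(x, a):                 # toy branch: the unit circle x^2 + a^2 = 1
    return x * x + a * a - 1.0

def grad(x, a):              # row Jacobian of F: [dF/dx, dF/da]
    return (2.0 * x, 2.0 * a)

def continue_branch(x, a, h=0.1, steps=30, tol=1e-10):
    """Tangent predictor + Newton corrector: each corrector step solves
    F = 0 together with the appended scalar condition that the
    correction stays orthogonal to the tangent."""
    pts = [(x, a)]
    tx, ta = 0.0, 1.0                       # initial tangent guess
    for _ in range(steps):
        gx, ga = grad(x, a)
        # tangent: unit null vector of grad, oriented consistently
        nx, na = ga, -gx
        norm = math.hypot(nx, na)
        nx, na = nx / norm, na / norm
        if nx * tx + na * ta < 0:
            nx, na = -nx, -na
        tx, ta = nx, na
        x, a = x + h * tx, a + h * ta       # predictor step
        for _ in range(20):                 # Newton corrector
            gx, ga = grad(x, a)
            # solve [gx ga; tx ta] [dx da]^T = [-F; 0]
            det = gx * ta - ga * tx
            r = F(x, a)
            dx, da = (-r * ta) / det, (r * tx) / det
            x, a = x + dx, a + da
            if abs(F(x, a)) < tol:
                break
        pts.append((x, a))
    return pts

pts = continue_branch(1.0, 0.0)
# every computed point should lie (numerically) on the circle
print(max(abs(F(x, a)) for x, a in pts))
```

Because the appended condition is transversal to the branch, the corrector remains well-posed even at folds, where continuation in α alone would fail.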

Therefore, we now have a methodology available for computing the solution curve. It remains to discuss the test functions which need to be monitored as the curve is computed, to locate bifurcations in the phase plot. The following subsections give details of the test functions and how they are combined into a singularity matrix to monitor different kinds of bifurcations.

2.3.2 Test functions

The main idea is to detect singularities by defining smooth scalar functions which have regular zeros at the singularity points. These functions are called test functions. Suppose we have a singularity S which is detectable by a test function φ.

Let us further assume that we find two consecutive points xi and xi+1 on the curve. The singularity S will be detected only if φ(xi) φ(xi+1) < 0.

Having found two such points, we now locate the point x* at which φ vanishes. An obvious way of doing this is to solve the system F(x) = 0, φ(x) = 0 using Newton iterations starting at the point xi.

However, this method requires the derivatives of F(x), which are not always easy to compute. So a one-dimensional secant method for locating φ(x) = 0 along the curve was implemented by the programmers as the default.
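The derivative-free secant location step can be sketched as follows; the test function φ here is a hypothetical linear stand-in, parametrised by arclength s along the curve, so that the bracketing interval and the zero are easy to see:

```python
def secant_zero(phi, s0, s1, tol=1e-12, itmax=50):
    """One-dimensional secant iteration for phi(s) = 0, as used to
    locate a test-function zero between two points that bracket it,
    without needing derivatives of F."""
    f0, f1 = phi(s0), phi(s1)
    for _ in range(itmax):
        if f1 == f0:              # flat secant: cannot improve further
            break
        s2 = s1 - f1 * (s1 - s0) / (f1 - f0)
        s0, f0, s1, f1 = s1, f1, s2, phi(s2)
        if abs(f1) < tol:
            break
    return s1

# Hypothetical test function along the branch, vanishing at s = 0.15.
phi = lambda s: 2.0 * s - 0.3
print(secant_zero(phi, 0.1, 0.2))   # -> 0.15 (up to rounding)
```

Only evaluations of φ at points on the already-computed curve are needed, which is why this is the default in preference to the Newton system above.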

The above method is a general way to detect and locate singularities depending on one test function; the proof is given in greater detail in [2]. However, it may happen that it is not possible to represent a singularity with only one test function.

Let us now suppose there is a singularity S which depends on nt test functions, all of which change sign between some consecutive points xi and xi+1.

A one-dimensional secant search gives all the zeros of the test functions, which in the ideal case should coincide. Since the continuation is numerical and not exact, we tend to find zeros which are clustered around a centre point.

A cluster is then detected by examining the zeros of the test functions, and the centre x* is defined as the mean of all these zeros.

2.3.3 Singularity matrix

Until now we have dealt with singularities detected by a single test function. When a large number of singularities are monitored, we need a consolidated representation combining the test functions and the points where they vanish. The representation that follows has been adopted from [6].

Say we have two singularities, S1 and S2, depending respectively on test functions φ1 and φ2. Further, assume that φ1 vanishes at both S1 and S2, while φ2 vanishes only at S2. Then, to distinguish S1 from S2, we must additionally require that φ2 does not vanish there; that is, we may need to represent singularities by non-vanishing test functions as well.

To represent all singularities we introduce a singularity matrix (as in [6]). This matrix is a compact way to describe the relations between the singularities and all test functions. Suppose we are interested in ns singularities and the nt test functions needed to detect and locate them; then S is a matrix of dimension ns × nt.

There is also the possibility of the default location algorithm running into problems, for which an option is provided for the user to specify their own test function to be monitored.

2.4 Graphical Hopf Theorem-a perspective (GHT)

Hopf [4] showed that oscillations near an equilibrium point can be understood by looking at the eigenvalues of the linearized equations for perturbations from equilibrium, and at certain crucial derivatives of the equations. The graphical criterion checks the Hopf conditions for the existence of stable or unstable periodic oscillations. Since it is reminiscent of the generalized Nyquist criterion for linear systems, the graphical procedure can be interpreted as the frequency domain version of the Hopf bifurcation theorem.

Without going into rigorous mathematical proofs, let us try to understand the GHT with its similarity to the time domain version. [7]

Consider a general n-dimensional linear system with a nonlinear feedback u, of the form

dx/dt = Ax + Bu,  y = Cx,  u = −g(y; µ), (2.13)

with an initial condition x(0) = 0. In (2.13), µ is a (variable) real parameter, A is the system matrix, B and C are the input and output matrices respectively, and g is a nonlinear function used as feedback. After a few steps, which involve the Laplace transform of the output vector y and some manipulation, we can derive the equation

where Y(s) represents the Laplace transform of the signal y and G(s) = C(sI − A)⁻¹B is the transfer matrix of the linear part.

It then follows from equation (2.14) that we may deal only with the output y (through Y(s), etc.) in the frequency domain, without directly considering the states of the system.

The frequency domain version of the Hopf bifurcation theorem has a lower dimension, since m < n in feedback systems with m outputs and n states. This is one of its advantages over the time-domain analogue.

If we proceed to linearize the system by evaluating the Jacobian J of the feedback function about the equilibrium output, we obtain an equivalent of the time domain version of the theorem as follows:

Lemma 1. If an eigenvalue of the corresponding Jacobian of the nonlinear system (2.13), in the time domain, assumes a purely imaginary value iω₀ at a particular parameter value µ = µ₀, then the corresponding eigenvalue of the constant matrix G(iω₀)J in the frequency domain must assume the value −1 + i0 at µ = µ₀.

The above Lemma, along with a generalized form of the Nyquist stability criterion, is the basis of the GHT, which was used by Allwright, Mees and Chua (Allwright [1977], Mees & Chua [1979], and Mees [1981]) to develop the graphical version. The theorem itself is not stated here as we will not be using it further.

The equivalence of both theorems with the pros and cons of both has been discussed extensively in [7].

2.5 Centre Manifold Theorem

We now formulate, without proof, the main theorem that allows us to reduce the dimension of a given system near a local bifurcation. Let us begin with the critical case: we assume that the parameter values are fixed at their bifurcation values, i.e. those for which there is a nonhyperbolic equilibrium. The theorem stated below is widely used to restrict the system to the centre manifold, thus reducing its complexity; it also makes the computation easier when test functions are used.

Consider a continuous time dynamical system [2]

dx/dt = f(x),  x ∈ R^n, (2.15)

where f is sufficiently smooth and f(0) = 0. Let the eigenvalues of the Jacobian matrix A evaluated at the equilibrium point x₀ = 0 be λ1, λ2, ..., λn. Suppose the equilibrium is nonhyperbolic, i.e. there are eigenvalues with zero real part. Assume that there are n₊ eigenvalues with Re λ > 0, n₀ eigenvalues with Re λ = 0, and n₋ eigenvalues with Re λ < 0. Let Tᶜ denote the linear eigenspace of A corresponding to the union of the n₀ eigenvalues on the imaginary axis. The eigenvalues with Re λ = 0 are often called critical, as is the eigenspace Tᶜ. Let φᵗ denote the flow associated with (2.15). Under these assumptions the following theorem holds.

Theorem 2.2

There is a locally defined, smooth, n₀-dimensional invariant manifold Wᶜ of (2.15) that is tangent to Tᶜ at x = 0. Moreover, there is a neighbourhood U of x₀ = 0 such that if φᵗx ∈ U for all t ≥ 0 (t ≤ 0), then φᵗx → Wᶜ as t → +∞ (t → −∞).

This manifold is called the centre manifold.

There are certain useful deductions that can be made from Theorem 2.2. From the second statement of the theorem we can conclude that the centre manifold is attracting. A very important fact is the non-uniqueness of the centre manifold; it does, however, have the same finite smoothness as f in some neighbourhood U of x₀. This theorem also has an equivalent statement in the discrete time case.

The theorem stated above can thus be used to reduce the system to the centre manifold and study it there. There are other theorems, by Shoshitaishvili [1972], which prove the topological equivalence of the system when it is restricted to the centre manifold.
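The reduction can be made concrete with a standard textbook illustration (not taken from this report). Consider the planar system

$$\dot{x} = xy, \qquad \dot{y} = -y - x^2.$$

At the origin the Jacobian has eigenvalues $0$ and $-1$, so $n_0 = 1$ and $n_- = 1$. Seeking the centre manifold as a graph $y = h(x) = a x^2 + O(x^4)$, the invariance condition $h'(x)\,\dot{x} = \dot{y}\big|_{y = h(x)}$ gives

$$2ax \cdot x\,h(x) = -h(x) - x^2,$$

and matching the $x^2$ terms yields $a = -1$, so $h(x) = -x^2 + O(x^4)$. The dynamics restricted to the centre manifold are

$$\dot{x} = x\,h(x) = -x^3 + O(x^5),$$

so the origin is asymptotically stable even though linearization alone is inconclusive, which is precisely the kind of conclusion the reduction is designed to deliver.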

In parameter-dependent systems like (2.1), centre manifolds are of equal importance. The manifold is tangent at the origin to the (generalized) eigenspace of J corresponding to the eigenvalues with zero real part. Since the parameter itself does not evolve, the manifold is foliated by invariant hyperplanes on which the parameter is constant.


2.6 Shortcomings of prior work

There are a few shortcomings in the test function approach as described above and in [2], [5] and [6]. Firstly, a lot of computation is involved, because the whole solution curve is calculated in order to detect a bifurcation. In many cases it should be possible to determine the occurrence of a bifurcation without needing the information of how the solution evolves. The characterizing or defining features of a particular type of bifurcation are not exploited in this scheme.

Secondly, the knowledge of 'when and where' the bifurcation occurs is of great value, as many control schemes can be developed around it. It gives the control system designer enough information to sketch a control scheme.

Thirdly, it does not allow us to determine the nature of the bifurcation, i.e. whether it is stable or unstable. This information is very important if we are to study the system further and make modifications when stability is required.

Fourthly, we do not learn at exactly what value of the parameter the bifurcation occurs. We get information about the type of bifurcation and at what point on the solution curve it occurs, but it is not straightforward to find the corresponding parameter value.

Moreover, from the GHT point of view, the analysis of the system is not straightforward, and not much theory has been developed to aid us. In particular, many useful methods and significant results have been developed in the time domain for analysing the dynamical behaviour of nonlinear ODEs arising from singular (especially multiple) bifurcations as well as period-doubling sequences, yet their corresponding frequency domain graphical versions have not been completed. Also, even though the GHT reduces the order of the system, the characteristic equation of the lower-order polynomial is rather complicated: it appears to be an algebraic function which has the frequency variable ω as a parameter of the characteristic loci, adding to the complications. With all these disadvantages in mind, it is quite evident why this approach to the problem was not pursued.

2.7 Concluding remarks

Theorem 2.1, stated at the beginning of the chapter, is of utmost importance in this report. It not only discusses the circumstances conducive to the occurrence of a Hopf bifurcation, but also provides conditions for stability and uniqueness.

The shortcomings stated in Section 2.6 indicate the need for a simpler, more efficient detection methodology. This report fulfils that need by exploiting the features of the Hopf bifurcation and using a simple technique to detect its occurrence. Hence, most of the drawbacks are overcome in the newly proposed algorithm. A similar procedure, taking advantage of the characteristics of a bifurcation, can be employed for other bifurcations as well.

Chapter 3: New Approach to the problem

3.1 Introduction

The previous chapter discussed in some detail the prior work done on the problem of bifurcation detection and its shortcomings. We begin this chapter by reviewing the basic Hopf theorem and catching a glimpse of the newly devised algorithm. The chapter then explores the Hurwitz polynomial and SOS in detail before discussing the pseudocode for the algorithm.

3.2 Basis of the new approach

Most of the previous work, concerned with bialternate products of matrices, Nyquist theorems and test functions, did not exploit the simple stability issue that arises when there is a bifurcation in a dynamical system. A Hurwitz determinant, which is a polynomial in the bifurcation parameter, is obtained as the determinant of the Hurwitz matrix. This matrix is formed from the coefficients of the characteristic polynomial of the system matrix of a system as defined in (2.1). The change in stability, i.e. the movement of eigenvalues, can be characterized by the change in sign of this determinant from positive to negative or vice versa.

In the case of a Hopf bifurcation we observe the Hurwitz determinant becoming zero at the critical value of the bifurcation parameter.

The sign change in the determinant can easily be monitored by taking advantage of the relation between the positivity of a polynomial and its expressibility as a sum of squares. The famous Hilbert's 17th problem concerns itself with the question: given a multivariate polynomial that takes only non-negative values over the reals, can it be represented as a sum of squares of rational functions?

The question was answered in the affirmative by Emil Artin in 1927. An algorithm for finding the rational functions was later formulated and popularised. Over the years, however, counterexamples were found showing that a nonnegative polynomial need not be a sum of squares of polynomials. Explicit sufficient conditions for a polynomial to be a sum of squares of other polynomials were found by Powers and Wörmann. We explore this relation in the new approach to the problem.

The positivity of the Hurwitz determinant over the reals is a very good way of ensuring the non-occurrence of a bifurcation. For univariate polynomials, such as the ones we are concerned with, the relation between positivity and sums of squares is firmly established. Therefore, we can be sure that if there exists a sum of squares decomposition for the Hurwitz determinant, then the possibility of a bifurcation is removed. This conclusion was used to develop the algorithm, also taking into account that the bifurcation happens from an equilibrium point of the dynamical system.

3.3 Characteristic polynomial of a matrix

3.4 Hurwitz-stability test

Hurwitz related the stability of an n-th order polynomial

p(s) = a0 s^n + a1 s^(n-1) + ... + an,  a0 > 0, (3.1)

to a set of determinants Δ1, Δ2, ..., Δn. These determinants come from the following set of n Hurwitz matrices:

This pattern continues until an n×n matrix is obtained. For n even, this last matrix Hn has the form

for n odd, it has the form

As an example, if the polynomial (3.1) is of order 4, the largest Hurwitz matrix is

while for n=5, the largest Hurwitz matrix is

The relation of these matrices to stability is given in the following theorem. No proof shall be provided for this well-known result of Hurwitz.

Theorem 3.1

An n-th order polynomial (3.1) is stable if and only if Δi > 0 for i = 1, 2, ..., n. (3.5)

Note that, for both n even and n odd, it follows from expanding Hn by its last column that Δn = an Δn−1.

Thus, the stability condition (3.5) is equivalent to Δi > 0 for i = 1, ..., n−1, together with an > 0 and a0 > 0.

From the above theorem we can conclude that the crucial determinant to monitor for positivity is Δn−1. Given the characteristic polynomial of the system matrix for (2.1), we can construct the Hurwitz matrix and calculate this determinant quite easily in any programming language; MATLAB was used for this operation.
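The construction can be sketched in a few lines of code. The sketch below (an illustration, not the MATLAB code used in this project) builds the Hurwitz matrix for p(s) = a0 s^n + a1 s^(n-1) + ... + an and computes its leading principal minors Δ1, ..., Δn exactly, using rational arithmetic:

```python
from fractions import Fraction

def hurwitz_matrix(coeffs):
    """Hurwitz matrix H for p(s) = a0*s^n + a1*s^(n-1) + ... + an,
    coeffs = [a0, a1, ..., an]; entry (i, j) (1-indexed) is a_{2i-j},
    with out-of-range coefficients set to zero."""
    n = len(coeffs) - 1
    H = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            k = 2 * (i + 1) - (j + 1)
            if 0 <= k <= n:
                H[i][j] = Fraction(coeffs[k])
    return H

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    n = len(M)
    sign = 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)       # singular: a zero column remains
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n):
                M[r][j] -= f * M[c][j]
    result = Fraction(sign)
    for i in range(n):
        result *= M[i][i]
    return result

def hurwitz_determinants(coeffs):
    """Leading principal minors Delta_1, ..., Delta_n of H."""
    H = hurwitz_matrix(coeffs)
    return [det([row[:k] for row in H[:k]]) for k in range(1, len(H) + 1)]

# p(s) = s^3 + 3s^2 + 3s + 1 = (s+1)^3: all roots at -1, hence stable.
print(hurwitz_determinants([1, 3, 3, 1]))
```

For (s+1)³ all minors are positive, consistent with stability; for s² + 1, whose roots ±i lie on the imaginary axis, the determinants vanish, which is exactly the degeneracy the proposed method monitors at a Hopf crossing.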

The next section describes the underlying principles of testing the positivity of the Hurwitz determinant using tools and techniques which are well established in theory and in practice, but have not yet been applied to bifurcation analysis.

3.5 Nonnegativity and sums of squares (SOS)

A univariate polynomial p(x) (in one variable, here the bifurcation parameter) is said to be a sum of squares (SOS for brevity) if there exist polynomials q1(x), ..., qm(x) such that [8]

p(x) = q1(x)² + q2(x)² + ... + qm(x)².

If a polynomial p(x) is a sum of squares, then it obviously satisfies p(x) ≥ 0 for all real x.

Thus, this condition is sufficient for global nonnegativity. It can also be shown that the converse holds in the univariate case.

Theorem 3.2

A univariate polynomial is nonnegative if and only if it is a sum of squares.

Theorem 3.2 is again of utmost importance to us because it deals with the nonnegativity of a polynomial. The nonnegativity of the Hurwitz determinant can be determined relatively easily using this technique, since the univariate problem is tractable; in general, however, deciding polynomial nonnegativity is NP-hard. NP-hard problems are hard to solve, but it is easy to verify in polynomial time whether a proposed solution is a true one. Intuitively, the difficulty is that, without a certificate such as an SOS decomposition, establishing nonnegativity would seem to require checking the polynomial over the whole of real space. Also, note that the theorem applies only to the univariate case. When the polynomials under consideration have more than one variable, the relaxations which we discuss later may not be justified, and we may not obtain a direct relation between the nonnegativity of the polynomial and expressing it as a sum of squares.
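The sufficiency direction of Theorem 3.2 can be demonstrated directly: expanding a candidate decomposition Σ qi(x)² and comparing coefficients certifies nonnegativity without scanning the real line. (Finding a decomposition in the first place is done in practice by semidefinite programming; the polynomials below are illustrative.)

```python
def poly_mul(p, q):
    """Multiply two coefficient lists (lowest degree first)."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0)
            for i in range(n)]

def sos_expand(squares):
    """Expand sum_i q_i(x)^2 into a single coefficient list; any
    polynomial of this form is nonnegative for all real x."""
    total = [0.0]
    for q in squares:
        total = poly_add(total, poly_mul(q, q))
    return total

# The certificate [x^2, x, 1] expands to x^4 + x^2 + 1, proving that
# polynomial nonnegative without evaluating it anywhere.
print(sos_expand([[0, 0, 1], [0, 1], [1]]))
```

Here the list of squares plays the role of the SOS certificate: once it is exhibited, verifying nonnegativity reduces to the polynomial-time task of expanding and comparing coefficients.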

Counterexamples are known of polynomials which are nonnegative but cannot be expressed as sums of squares of polynomials. In theory, we do know that a nonnegative polynomial of any degree and any number of variables can be expressed as an SOS of rational functions; the widely accepted proof of this, Hilbert's seventeenth problem, was given by Artin quite a long time ago. However, it is difficult to program the procedure of finding an SOS expression in the generic case, due to computational limitations and relaxations. These considerations and exceptions will be discussed later in this report.

3.5.1 Sums of Squares polynomials and programs