CGM Time Series: State of the Art


The accuracy of CGM sensor data can be affected by the time lag between blood glucose and interstitial fluid glucose, calibration error, motion artifacts, and measurement error due to sensor drift and random noise. An exhaustive analysis of the various types of errors has not been carried out so far. The filtering techniques applied by CGM manufacturers deal only with random noise. The lack of thorough noise analysis and of advanced, specialized filtering methods could be the reason for rates of false and missed alarms as high as 50%.

2.1 Issues in CGM Sensor Data

CGM systems assess blood glucose fluctuations indirectly by measuring the concentration of interstitial glucose, and are calibrated via self-monitoring measurements to approximate the blood glucose. There is an average time lag of 12 minutes between arterial blood glucose and ISF glucose [19] (1). The number and timing of calibrations is still a research issue (2). Sensor performance is poor in situations such as rapid excursions (sudden rises or falls), motion artifacts and local inflammatory complications (3). These factors have a direct impact on sensor accuracy, resulting in noisy data. Therefore, optimal filtering or preprocessing of the CGM sensor signal is required before it is used for further processes such as the generation of a predictive alert or a control signal for an insulin pump. The following sections outline the various issues in CGM sensor data.

Time Lag


CGM systems use subcutaneous sensors which measure the glucose concentration in the surrounding interstitial fluid rather than in blood plasma. There is an average time lag of 5 to 12 minutes between changes in plasma and interstitial glucose levels. The time required for glucose to diffuse from the capillary to the tissue plays an important role in this lag. A two-compartment model has been used in the literature to characterize the interstitial glucose dynamics (4).

Figure 2.1 Compartmental model of BG-to-IG kinetics

The gradient and delay between plasma and interstitial glucose were estimated with a mass balance equation:

V2 · dC2(t)/dt = k12 V1 C1(t) − (k21 + k02) V2 C2(t) (2.1)

where C1 and C2 are the plasma and interstitial glucose concentrations respectively, k12 is the forward flux rate and k21 the reverse flux rate for glucose transport across the capillary, k02 is the rate of glucose uptake into the subcutaneous tissue, V1 is the plasma volume and V2 is the ISF volume. In a simpler form, the interstitial glucose sensor signal dynamics has been represented as a first-order differential equation,

τ · dC2(t)/dt + C2(t) = g · C1(t) (2.2)

where 'g' is the plasma-to-ISF glucose gradient, given by

g = k12 V1 / ((k21 + k02) V2) (2.3)

and 'τ' is the ISF equilibration time constant (delay), given by

τ = 1 / (k21 + k02) (2.4)
'τ' reflects the delay in ISF glucose relative to changes in plasma glucose. The gradient and delay are time-varying factors: their values change with changes in insulin concentration, insulin sensitivity, insulin resistance, glucose uptake, etc. The lag between BG and IG increases at times of rising plasma glucose; during times of decreasing glucose, the IG may fall in advance of the plasma glucose. The relation between BG and IG acts as a first-order low-pass filter and introduces distortion, i.e., attenuation in amplitude and distortion in phase.
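The first-order low-pass behavior of the BG-to-IG relation can be illustrated numerically. The sketch below uses illustrative parameter values (g = 1, τ = 10 min) and a simple implicit-Euler discretization, both our own assumptions, to show how a step rise in plasma glucose appears delayed and smoothed in the interstitial signal:

```python
import numpy as np

def isf_from_plasma(c1, g=1.0, tau=10.0, dt=1.0):
    """Discretize tau*dC2/dt + C2 = g*C1 with an implicit Euler step.

    c1 : plasma glucose samples (mg/dL), one per dt minutes
    g  : plasma-to-ISF gradient; tau : equilibration time constant (min)
    """
    c2 = np.empty_like(c1, dtype=float)
    c2[0] = g * c1[0]                 # assume equilibrium at the start
    alpha = dt / (tau + dt)           # smoothing coefficient of the low-pass
    for k in range(1, len(c1)):
        c2[k] = c2[k - 1] + alpha * (g * c1[k] - c2[k - 1])
    return c2

t = np.arange(0, 180.0, 1.0)          # 3 hours, 1-min samples
bg = 100.0 + 60.0 * (t >= 60)         # plasma glucose steps up at t = 60 min
ig = isf_from_plasma(bg)              # delayed, smoothed interstitial signal
```

With a larger τ the interstitial trace takes proportionally longer to approach the new plasma level, which is exactly the attenuation-plus-delay distortion described above.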


CGM sensors, as said earlier, are minimally invasive and placed subcutaneously. These sensors are of the amperometric type: they estimate interstitial glucose values by measuring the electrical current generated by the reaction of glucose either with oxygen or with an immobilized reduction-oxidation (redox) enzyme (normally glucose oxidase). During this redox reaction the enzyme accepts or donates electrons, and this movement of electrons is measured as a concentration-dependent current using electrodes.

Transforming this current (in nA) into glucose levels (in mg/dL) requires one or more blood glucose samples (SMBG); this procedure is normally referred to as calibration (5). Calibration is essential since CGM sensors measure glucose indirectly, from the interstitial fluid (ISF) rather than from blood plasma. A calibration algorithm is used to convert the raw sensor signal into blood glucose estimates. Normally, a simple linear equation is used for calibration:

y = m·x + b (2.5)

where 'x' is the reference blood glucose (the independent variable) and 'y' is the dependent variable, the sensor current. For one-point calibration, the y-intercept is assumed to be known (usually b = 0), and the sensor sensitivity (i.e., the slope 'm') is obtained from


m = (y − b) / x (2.6)

Two-point calibration can also be used. Likewise, when multiple data points are available, linear regression techniques can be used. Once the sensor is calibrated, the estimated glucose concentration x̂ is obtained from the sensor current with

x̂ = (y − b) / m (2.7)
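Equations (2.5)-(2.7) can be sketched as follows. The function names and the synthetic sensor sensitivity (0.1 nA per mg/dL plus a 2 nA background) are purely illustrative, not taken from any actual device:

```python
import numpy as np

def one_point_slope(y, x, b=0.0):
    """Eq (2.6): sensitivity from a single (current, reference BG) pair."""
    return (y - b) / x

def fit_calibration(bg_refs, currents):
    """Multi-point calibration: least-squares fit of y = m*x + b, plus the
    correlation coefficient used as a calibration-quality check."""
    m, b = np.polyfit(bg_refs, currents, 1)
    r = np.corrcoef(bg_refs, currents)[0, 1]
    return m, b, r

def current_to_glucose(y, m, b):
    """Eq (2.7): invert the calibration line to estimate glucose."""
    return (y - b) / m

# synthetic sensor: 0.1 nA per mg/dL plus a 2 nA background current
bg_refs = np.array([70.0, 120.0, 180.0, 250.0])
currents = 0.1 * bg_refs + 2.0
m, b, r = fit_calibration(bg_refs, currents)
```

A low correlation coefficient r would flag the calibration as unacceptable, prompting additional reference measurements, as discussed next.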

It has been specified that the correlation coefficient can be used as a measure of calibration quality (6). If the correlation coefficient is too low, the calibration may be deemed unacceptable, and additional reference glucose measurements are required. The patent by Feldman and McGarraugh (7) gives many criteria for calibration acceptance. Knobbe and Buckingham (8) made a descriptive study of recalibration: the BG-to-IG kinetics model was taken into account in order to reconstruct BG levels at constant time from CGM measurements, and an Extended Kalman filter was used to estimate the unknown variables. Facchinetti et al. (9) proposed a new enhanced calibration method that can potentially work in cascade with the standard calibration of any CGM device. Their method was also based on the Extended Kalman filter which, by taking into account the BG-to-IG kinetics, four SMBG samples per day, and a model of the time behavior of sensor accuracy, significantly enhances the quality of CGM data in real-time applications. Vast literature is available on calibration procedures and their accuracy (10-14). It has been pointed out in the literature that errors in glucometer readings, in addition to the lags between blood and ISF glucose, make it necessary that the two reference BG values differ significantly when using a two-point calibration. King et al. showed that sensor performance is affected by the variation of the reference BG values used for calibration (15), and that the deviations can exceed 30 mg/dL.

Figure 2.2 Representative real data taken from Kovatchev et al.: BG reference samples (stars) versus CGM data (line) profiles.

This figure shows a comparison, performed in a clinical study (16) on a type 1 diabetic subject, between a FreeStyle Navigator® CGM profile and BG references collected every 15 minutes and measured in the laboratory with a Yellow Springs Instruments analyzer (Yellow Springs, OH, USA). The figure shows a distortion in the 5 to 10 hour interval and a discrepancy in the 17 to 24 hour interval. This distortion is the result of the change in sensor performance after the initial calibration. Several studies have assessed the influence of the number, accuracy and temporal position of the reference SMBG samples, as well as the trend of glucose concentration at their pickup times (17,18).

The Medtronic, Dexcom and Abbott continuous glucose monitoring systems use the linear regression principle as their real-time calibration algorithm. The DirecNet study group (19) evaluated factors affecting calibration of the Medtronic CGMS. They found that sensor accuracy improved with the number and timing of calibration points; the results confirm that the timing of calibration is more important than the number. It is more important to perform calibrations during periods of relative glucose stability, i.e., when the point-to-point difference between BG and ISF glucose is minimal. The fact that CGM profiles can be affected by calibration issues can be critical in several applications, such as alert generation systems and closed-loop insulin pumps.

Sensor Drift

After sensor insertion, the measurement medium is not normal ISF; rather, the ISF is enriched with inflammatory cells, cytokines and mediators. Due to this biofouling, the surface of the electrode becomes covered with cells which obstruct fluid exchange and lead to passivation of the electrodes, i.e., weakening of the signal through reduced conductivity. This degeneration changes the sensor function and results in a drift of the sensor signal over time. Recalibration at fixed intervals is currently required to deal with problems related to signal drift (20).

Non-linear Rise / Fall

Since CGM systems are calibrated during periods of relative stability, when BG rises suddenly due to a high-carbohydrate meal, or falls suddenly due to severe exercise or a high insulin dosage, the sensor might not be able to cope with such non-linear responses [21]. Body movements, jerks and other motion artifacts can also cause sudden changes in the CGM sensor signal.

Random Noise

Another source of error is related to the sensor physics, chemistry and electronics (22). The CGM signal is also corrupted by a random noise component which dominates at high frequency (23).

2.2 State of the Art in Noise Modeling of CGM Sensor Data


It is evident from the literature that an implanted subcutaneous sensor can malfunction due to calibration drift, the lag between arterial blood glucose and ISF glucose concentrations during rapid fluctuations, sensor fouling and local inflammatory complications (24). This malfunctioning has a direct impact on the accuracy of the sensor measurements and results in a noisy CGM time series. However, the exact distribution of CGM noise profiles has not been reported in the literature to date.

Chase et al. (23) pioneered the modeling of CGM errors. From the literature they observed that 78% of measurements were within 20% of the actual value, and that the correlation coefficient between the measured and actual values was 0.88. Chase et al. approximated the error model using a normal distribution with a 17% (0.17) standard deviation (SD). This standard deviation and distribution place about 78% of measurements within 20% of the actual value, and a limit of 40% (~2.5 SD) was used to match the reported values. They simply modeled the error as normally distributed random noise added to a simulated glucose profile.

Figure 2.3 Example of approximated CGMS error added to a simulated glucose profile. Dashed lines show the 20% and 40% bounds used to estimate the magnitude of any error (23).
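This error model is easy to reproduce in simulation. The sketch below draws multiplicative normal errors with SD 0.17 and checks what fraction of samples lands within the 20% and 40% bounds; for this SD the 20% figure comes out near, though slightly below, the reported 78%:

```python
import numpy as np

rng = np.random.default_rng(0)
true_bg = np.full(100_000, 120.0)                  # flat true profile, mg/dL
# normally distributed error with SD equal to 17% of the true value
noisy = true_bg * (1.0 + rng.normal(0.0, 0.17, true_bg.size))
rel_err = np.abs(noisy - true_bg) / true_bg
frac_within_20 = np.mean(rel_err <= 0.20)          # ~0.76 for SD = 0.17
frac_within_40 = np.mean(rel_err <= 0.40)          # ~0.98 (the ~2.5 SD limit)
```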

Breton et al. modeled the sensor error with a diffusion model of blood-to-interstitial-fluid glucose transport, which accounts for the time delay, combined with a time series approach that includes autoregressive moving average noise to account for the interdependence of consecutive sensor errors. They presented a histogram of the sensor error with a fitted normal distribution (green) and a fitted Johnson distribution (red), which is reproduced here in figure 2.4. They pointed out that the discrepancy between sensor and reference glucose differs from random noise by having substantial time-lag dependence and other non-iid characteristics (iid meaning that each error is independent of previous errors and drawn from the same time-independent probability distribution). However, they omitted high-frequency errors (periods of 1 to 15 minutes) from their modeling, owing to the need for finely sampled data (24). Breton et al. proposed a sophisticated model in which the sensor error is neither white nor Gaussian. They concluded that the time series of the reconstructed CGM sensor errors could be described as the output of an AR filter of order 1 driven by white noise. Their work had two steps. In the first step, the CGM data were recalibrated by fitting a linear regression model against all the available BG references. In the second step, to account for the distortion due to the BG-to-IG dynamics, the data were fitted to an LTI model of the BG-to-IG kinetics as proposed in (4).

Figure 2.4 Histogram of sensor error with fitted normal distribution (green) and fitted Johnson distribution (red)

The value of the time constant τ was fixed at the population value. The average autocorrelation (ACF) and partial autocorrelation (PACF) functions were then employed to assess the estimated sensor error time series and its statistical properties. The contribution of Breton and Kovatchev is very important because, for the first time, the roles of calibration and BG-to-IG kinetics were explicitly considered. However, Facchinetti et al. have shown that the assumptions made by Breton et al. (perfect recalibration and perfect knowledge of an LTI model of the BG-to-IG kinetics) have a serious influence on the quantitative results (25). Even small errors, either in the CGM data recalibration or in the description of the BG-to-IG dynamics, can severely affect the possibility of correctly reconstructing the statistical properties of the CGM sensor error time series. According to Facchinetti, the first-order AR model obtained by Breton et al. could describe spurious low-frequency components introduced into the reconstructed sensor error time series by a deficient recalibration; in other words, the first-order AR model was due to modeling error, not to a randomly generated error within the sensor. They suggested that a reliable model of the sensor error time series needs to start with a sophisticated recalibration algorithm, in order to deal with the possible time variance of the calibration factor during multiple-day monitoring. In addition to the major sources of sensor error, i.e., calibration and BG-to-IG kinetics, other sporadic events such as motion artifacts, loss of sensor sensitivity, or the inflammatory response should also be considered and possibly integrated into a model of sensor error. As pointed out by Breton and Kovatchev, the difficulty in sensor error time series modeling is the need to collect, in addition to the CGM data, several BG references at a high sampling frequency. Methodologically, both the distortions introduced by the BG-to-IG dynamics and the problems of CGM data recalibration must be taken into account. These methodological issues are, however, still open.

2.3 CGM Sensor Data - Preprocessing

Very few denoising approaches for removing errors in CGM sensor data have been presented in the literature. Given the expected spectral characteristics of the noise, low-pass filtering represents the most natural candidate for denoising CGM signals. One major problem with low-pass filtering is that, since the signal and noise spectra normally overlap, removal of the noise introduces distortion in the true signal. This distortion results in a delay affecting the estimate of the true signal. Digital filtering techniques can be used to improve the quality of the signal by reducing the random noise components (26). Let

y(t) = g(t) + n(t) (2.8)

where g(t) is the actual glucose signal to be measured, n(t) is the additive noise, and y(t) is the signal received from the CGM sensor. A brief overview of the basic filters used in noise removal is given below.

A median filter takes the median value of a window of the N most recent glucose values:

Ŷk = median( Yk, Yk−1, …, Yk−N+1 ) (2.9)

The advantage of this filter is that it discards the effect of sudden spikes in the signal. A finite impulse response (FIR) filter has the form

Ŷk = a0Yk + a1Yk−1 + … + amYk−m (2.10)

where Y represents the measured glucose value and Ŷk is the filtered value, which depends on the past 'm' measurements. An infinite impulse response (IIR) filter has the form

Ŷk = −a1Ŷk−1 − a2Ŷk−2 − … − aNŶk−N + b0Yk + b1Yk−1 + … + bmYk−m (2.11)

where the current filtered output is a function of 'N' previously filtered values and 'm' previous measurements. Understanding how denoising is done inside commercial CGM devices is often difficult, but some information can be obtained from the registered patents, as summarized below. Dexcom patented the use of IIR filters for raw signal filtering with N = 3 and m = 3 (6). Medtronic patents show that the raw signals are obtained at 10-second intervals; at the end of each 1-minute interval the lowest and highest values are removed and the remaining four are averaged to obtain a 1-minute average (27). From the patents of Steil and Rebrin, it is found that the filters can also be applied to the derivative of the glucose data, which is useful as part of closed-loop algorithms based on the rate of change of glucose (28). Chase et al. developed an integral-based fitting and filtering algorithm for the CGM signal, but it requires knowledge of the insulin dosages (23). The optimal estimation theory of Knobbe and Buckingham showed that the use of an extended Kalman filter accounts for both sensor noise and calibration errors (8). Facchinetti et al. adopted real-time estimation of the Kalman filter parameters for online denoising of random noise in CGM data (29), and the same group tried the extended Kalman filter algorithm for calibration errors (9). Facchinetti et al. also arrived at an online denoising method to handle the intraindividual variability of the signal-to-noise ratio (SNR) in continuous glucose monitoring (30); earlier, they had tuned the filter parameters only in the burn-in interval, to address interindividual SNR variation. Since CGM time series observed with different sampling rates must be processed by filters with different parameters, it is clear that an optimization of the orders and weights of the filters cannot be directly transferred from one sensor to another.
Moreover, the filter parameters should be tuned according to the SNR of the time series: e.g., the higher the SNR, the flatter the filtering. Precise automatic tuning of the filter parameters is a difficult problem for the basic filters. So far, the filtering approaches have been tested considering only white Gaussian noise in the CGM sensor data. In spite of this extensive work by various research groups, achieving 100% accurate prediction is still a tough task. This shows the need for smarter filtering algorithms.
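The median, FIR and IIR forms of (2.9)-(2.11) can be sketched as below. These are direct, unoptimized loops, and the coefficient values in the example are illustrative only:

```python
import numpy as np

def median_filter(y, N=5):
    """Eq (2.9): median of the last N samples (causal; shorter at the start)."""
    return np.array([np.median(y[max(0, k - N + 1): k + 1])
                     for k in range(len(y))])

def fir_filter(y, a):
    """Eq (2.10): weighted sum of the current and past measurements."""
    out = np.zeros(len(y))
    for k in range(len(y)):
        out[k] = sum(ai * y[k - i] for i, ai in enumerate(a) if k - i >= 0)
    return out

def iir_filter(y, a, b):
    """Eq (2.11): feedback from previously *filtered* values plus an FIR part."""
    out = np.zeros(len(y))
    for k in range(len(y)):
        fb = sum(aj * out[k - j] for j, aj in enumerate(a, 1) if k - j >= 0)
        ff = sum(bj * y[k - j] for j, bj in enumerate(b) if k - j >= 0)
        out[k] = ff - fb
    return out

sig = np.array([100.0, 102.0, 300.0, 104.0, 106.0, 108.0])  # one spike
med = median_filter(sig, N=3)    # the 300 spike is discarded entirely
```

Note how the median filter removes the spike outright, while a two-tap FIR average merely halves it, which is exactly the spike-rejection property claimed for (2.9).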

Commonly used filters are the median filter and the moving average filter. Present-day filtering algorithms are not adaptive to all intensities of noise. The Wiener filter and the Kalman filter are the choices for adaptive filtering. The Wiener filter requires the process to be stationary; the Kalman filter does better for the time-varying CGM signal. Hence, in our work we have compared our proposed hybrid filtering technique with the moving average filter and the Kalman filter. The following sections give the necessary descriptions of these filters.

2.3.1 Moving Average Filter

The Moving Average (MA) concept helps to remove spikes in the signal of interest and produces a smooth output. It is a linear, causal filter which gives the average of the last 'k' samples. MA filters can be defined in different forms. A simple MA filter is given by

ŷ(t) = (1/k) [ y(t) + y(t−1) + … + y(t−k+1) ] (2.12)

where 't' is the instant at which the sample is taken (t = k, …, N, with 'N' the total number of samples) and 'k' is the window size, also called the memory length. The smoothness of the output signal increases with the window size; however, this results in signal distortion due to an increase in delay.

A weighted MA filter assigns a weight to each of the past 'k' samples under consideration:

ŷ(t) = [ w0 y(t) + w1 y(t−1) + … + wk−1 y(t−k+1) ] / [ w0 + w1 + … + wk−1 ] (2.13)

Usually the weights are defined as exponential weights, wi = μ^i, where 'μ' is the forgetting factor, whose value ranges between 0 and 1. The drawback of these MA filters is that the weights, i.e., the parameters, are fixed and not adaptive to the user. The MA filter for CGM data was investigated in depth by Sparacino et al. (2008) (31) and was found to be inadequate for smoothing CGM data. Facchinetti et al. (2010) (29) also studied the ability of MA filters in their work on an online self-tunable method to denoise CGM data. It was observed that, once the weights have been chosen, the MA filter treats all time series in the same way: the MA filter is not able to cope with SNR variation from one person to another, or with the SNR variation of a single person over time. Hence the smoothing of CGM data with the MA filter was found to be suboptimal. For the purpose of performance analysis, we compared our proposed denoising work with an MA filter of order 6 and forgetting factor 0.65 (as fixed in the trial of Facchinetti et al.).
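The exponentially weighted MA of (2.13), with the order-6, μ = 0.65 setting used for comparison above, can be sketched as follows (our own straightforward implementation; the window simply shrinks near the start of the series):

```python
import numpy as np

def weighted_ma(y, k=6, mu=0.65):
    """Eq (2.13) with exponential weights w_i = mu**i; w_0 = 1 weights the
    newest sample most, and weights decay geometrically into the past."""
    y = np.asarray(y, dtype=float)
    w = mu ** np.arange(k)
    out = np.empty(len(y))
    for t in range(len(y)):
        window = y[max(0, t - k + 1): t + 1][::-1]   # newest -> oldest
        ww = w[:len(window)]
        out[t] = np.dot(ww, window) / ww.sum()
    return out

step = np.array([0.0] * 5 + [10.0] * 5)
smoothed = weighted_ma(step)          # trails the step, never overshoots
```

A constant input passes through unchanged, while a step input is tracked with a lag: the fixed weights smooth every series identically regardless of its SNR, which is precisely the limitation noted above.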

The moving average has been applied to the two data sets obtained through SMBG values and the Medtronic CGMS. The performance metrics were analyzed; the details are given in chapter 3. Though the MA is the most common filter embedded in CGM systems to enhance the signal-to-noise ratio of CGM profiles, the enhancement produced is expected to be suboptimal. Facchinetti (2010) adds that it does not use any kind of a priori statistical information on either the regularity of the glycemic profile or the intensity of the noise components affecting it. Facchinetti implemented an online self-tunable method for denoising CGM sensor data with a Kalman filter (KF) approach, through a maximum likelihood (ML) based parameter estimation procedure.

Figure 2.5 Comparison of MA output with actual CGM signal

2.3.2 Kalman Filter

The Kalman filter is essentially a set of mathematical equations implementing a predictor-corrector type estimator which optimally minimizes the estimated error covariance when certain presumed conditions are met. It is a tool for stochastic estimation from noisy sensor measurements. Each type of sensor has fundamental limitations related to the associated physical medium, and when pushing the envelope of these limitations the signals are typically degraded. In addition, some amount of random electrical noise is added to the signal by the sensor and the electrical circuits. The time-varying ratio of "pure" signal to electrical noise continuously affects the quantity and quality of the information.

A brief overview given by Facchinetti is reproduced here for ease of understanding. In discrete time, the KF is implemented by first-order difference equations that recursively estimate the unknown state vector x(t) of a dynamic system from noisy measurements y(t). The process update equation is given by

x(t + 1) = Fx(t) + w(t) (2.14)

where x(t) is of size n, w(t) is usually a zero-mean Gaussian noise vector (size n) with unknown covariance matrix Q (size n × n), and F is a suitable matrix (size n × n). The state vector x(t) is linked to the measurement vector y(t) (size m) by the equation

y(t) = Hx(t) + v(t) (2.15)

where v(t) is the zero-mean Gaussian measurement noise vector of size m, with unknown covariance matrix R, uncorrelated with w(t), and H is a suitable matrix (size m × n). The linear minimum variance estimate of the state vector obtainable from the measurements y(t) collected up to time t is denoted x̂(t|t), and can be computed using the following linear equations:

Kt = (F Pt−1|t−1 F^T + Q) H^T [ H (F Pt−1|t−1 F^T + Q) H^T + R ]^−1

x̂(t|t) = F x̂(t−1|t−1) + Kt [ y(t) − H F x̂(t−1|t−1) ] (2.16)

Pt|t = (I − Kt H)(F Pt−1|t−1 F^T + Q)


where Pt|t (size n × n) is the covariance matrix of the estimation error affecting x̂(t|t), Kt (size n × m) is the Kalman gain matrix, and P0|0 and x̂(0|0) are the initial conditions. The Q and R matrices, i.e., the process and measurement noise covariance matrices respectively, are key parameters in determining the performance of the KF. However, the major problem of the KF is the determination of Q and R, i.e., of the Q/R ratio (26)(32). Usually this Q/R ratio is tuned off-line and retrospectively over all the data; hence it might not be useful in real-time applications. Facchinetti's argument is that the Q/R ratio has to be individualized in order to cope with the SNR variability from subject to subject.
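The recursion (2.16) translates directly into code. The generic one-step update below is a sketch (matrix sizes as in the text), exercised on a trivial scalar random-walk example of our own choosing:

```python
import numpy as np

def kf_step(x, P, y, F, H, Q, R):
    """One predict-correct recursion of the Kalman filter, eq (2.16)."""
    Pp = F @ P @ F.T + Q                              # predicted covariance
    K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)    # Kalman gain
    xp = F @ x                                        # predicted state
    x_new = xp + K @ (y - H @ xp)                     # innovation correction
    P_new = (np.eye(len(x)) - K @ H) @ Pp
    return x_new, P_new

# scalar random-walk example: estimate a constant level from noisy readings
F = H = np.eye(1)
Q, R = np.array([[0.01]]), np.array([[1.0]])
x, P = np.array([0.0]), np.array([[1.0]])
for _ in range(50):
    x, P = kf_step(x, P, np.array([5.0]), F, H, Q, R)
```

The ratio Q/R fixed here (0.01) determines how aggressively the estimate follows the measurements, which is exactly the tuning problem discussed above.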

Facchinetti used a double integration model for describing the glycemic profile u(t), i.e.,

u(t) = 2u(t−1) − u(t−2) + w(t) (2.17)

where w(t) is a zero-mean Gaussian noise with unknown variance λ². The state variables are defined as x1(t) = u(t) and x2(t) = u(t−1), with state vector

x(t) = [ x1(t) x2(t) ]^T and

F = [ 2 −1 ; 1 0 ] (2.18)

The output, i.e., the CGM measurement given by the measurement vector y(t), becomes scalar, with H = [1 0]. Applying the KF equations to estimate x̂(t|t), Pt|t becomes a 2 × 2 matrix with P0|0 = I2 and x̂(0|0) = [y(0) y(−1)]^T, Kt becomes a 2 × 1 vector, and Q and R are

Q = [ λ² 0 ; 0 0 ],  R = σ² (2.19)

In order to arrive at an estimate of glucose û(t), both λ² and σ² are required.

Since the SNR varies from sensor to sensor and from individual to individual, Facchinetti et al. adopted a two-step procedure for real-time tuning of λ² and σ², i.e., for estimation of the Q/R ratio. In step 1, the first 6-hour period is considered a tuning interval, in which the unknown parameters λ² and σ² are estimated using a stochastically based smoothing criterion based on maximum likelihood (ML): a cost function comprising two terms, the first measuring fidelity to the data and the second weighting the roughness of the estimate, is minimized to estimate the Q/R ratio. In step 2, the values of λ² and σ² so obtained are used in the KF equations for the rest of the data, thereby allowing both real-time application of the KF and individualization of the KF parameters.
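As a sketch of this approach (with λ² and σ² simply fixed by hand rather than estimated by the ML tuning step, and a synthetic sinusoidal profile standing in for real CGM data), the double-integration KF of (2.17)-(2.19) can be implemented as:

```python
import numpy as np

F = np.array([[2.0, -1.0],        # double-integration prior, eq (2.17)
              [1.0,  0.0]])
H = np.array([[1.0, 0.0]])

def denoise_cgm(y, lam2, sig2):
    """Causal KF smoothing of a CGM trace for a given Q/R ratio lam2/sig2."""
    Q = np.array([[lam2, 0.0], [0.0, 0.0]])
    R = np.array([[sig2]])
    x = np.array([[y[0]], [y[0]]])            # initialize from first sample
    P = np.eye(2)
    out = np.empty(len(y))
    for t, yt in enumerate(y):
        Pp = F @ P @ F.T + Q                  # predict
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
        xp = F @ x
        x = xp + K @ (np.array([[yt]]) - H @ xp)   # correct
        P = (np.eye(2) - K @ H) @ Pp
        out[t] = x[0, 0]
    return out

rng = np.random.default_rng(1)
t = np.arange(288)                                    # 24 h at 5-min sampling
true = 120.0 + 40.0 * np.sin(2 * np.pi * t / 288)     # smooth synthetic profile
noisy = true + rng.normal(0.0, np.sqrt(24.0), t.size) # WGN, 24 mg^2/dL^2
smooth = denoise_cgm(noisy, lam2=0.05, sig2=24.0)
```

With the double-integration prior, the filter assumes the profile is locally linear, so slow glycemic trends are tracked with little lag while the high-frequency noise is suppressed.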

Facchinetti applied Gaussian noise with variance values ranging from σ² = 2 to 50 mg²/dL². The methodology was able to identify the variance value in the 6-hour tuning period accurately, with a correlation coefficient of R² = 0.986. The filtering efficiency was evaluated with RMSE, time lag and SRG, and compared with the MA filter. We applied our data to the same procedure and confirmed the superiority of the KF; the details are given in chapter 5.

Figure 2.6 Comparison of MA and Kalman filter outputs with actual CGM signal (WGN variance = 24 mg²/dL²)


Figure 2.7 Comparison of MA and Kalman filter outputs with actual CGM signal (WGN variance = 40 mg²/dL²)

We have also tried denoising the CGM sensor data with an artificial intelligence modeling technique, in order to track the uncertainties in the physiological signal.

2.4 Blood Glucose Prediction

Estimation of future glucose values is essential in the daily management of diabetes. The prediction of blood glucose helps in identifying impending hypo-/hyperglycemic events, so that preventive actions can be taken to avoid complications. An accurate prediction model is also needed for implementation in an artificial pancreas. Physiological signals are not strictly periodic but fluctuate irregularly over time. The physiological system is a complex one, containing both deterministic and stochastic components. Analysis of physiological time series is complicated by their non-linear and non-stationary characteristics. Various models have been proposed in the literature for forecasting the glucose concentration in blood; however, none of them has reached the goal of 100% accuracy.

2.4.1 State of the Art

Bremer and Gough pioneered the prediction of glucose concentration in blood (33). According to them, if the recent blood glucose history is not random but has an exploitable structure, it might be possible to anticipate BG values in the near future based only on previous values. They approached the prediction mathematically with an autoregressive moving average (ARMA) process for prediction horizons (PH) of 10, 20 and 30 minutes. Palerm et al. demonstrated the effect of sampling frequency, threshold selection and prediction horizon on the sensitivity and specificity of predicting hypoglycemia (34)(35). They approached the problem with estimation and prediction using a Kalman filter and obtained 90% sensitivity and 79% specificity, i.e., to obtain 79 correct predictions one has to bear 21 false alerts. Sparacino et al. used two prediction strategies based on the description of past glucose data: a first-order polynomial (Poly(1)) and a first-order autoregressive AR(1) model (36). Both methods have time-varying parameters estimated by weighted least squares: at each sampling time, a new set of model parameters is first identified by the weighted least squares technique, and the model is then used to forecast the glucose level for a given prediction horizon. Mean square error (MSE) and the energy of the second-order differences (ESOD) were taken as performance metrics. The analysis was done with forgetting factors of μ = 0.2, 0.5 and 0.8, and prediction horizons of 30 and 45 minutes. They obtained an MSE of 318 for the AR(1) model and 336 for the Poly(1) model for a PH of 30 minutes; for a PH of 45 minutes, the MSE was in the range of thousands for both models. Sparacino et al. also used the time lag as a parameter to assess the predictive capability of the models.
Reifman et al. investigated the capability of data-driven AR models to capture the correlations in glucose time series data (37). For PHs of 30 and 60 minutes, the root mean square errors (RMSE) were 26 and 36 mg/dL respectively. Pappada et al. (2008) designed various neural network models, using the NeuroSolutions software and electronic diary information, for the prediction of blood glucose at PHs of 50, 75, 100, 120, 150 and 180 minutes (38). For a PH of 100 minutes, they obtained a mean absolute difference (MAD) of 43 mg/dL. Predictions in the hypoglycemic range were of lower accuracy, which may be due to the smaller amount of training data in that range. Gani et al. combined predictive data-driven models with frequent blood glucose measurements (39). By simulation they showed that stable and accurate models for near-future glycemic prediction with clinically acceptable time lags can be obtained by smoothing the raw glucose data and regularizing the model coefficients; this has yet to be validated in real-time implementation. This group worked with AR models of higher order (AR(30)). For PHs of 30, 60 and 90 minutes, the RMSE were 1.8, 12.6 and 28.8 mg/dL respectively. Perez-Gandia et al. implemented an artificial neural network algorithm for online glucose prediction from continuous glucose monitoring (40). The inputs of the neural network are the values provided by the CGM sensor during the last 20 minutes, and the output is the prediction at the next time step. The performance of the neural network prediction model was compared with the AR model. The results (RMSE) given by the neural model are 10, 18 and 27 mg/dL for PHs of 15, 30 and 45 minutes respectively. Robertson et al. reviewed the various neural network approaches to blood glucose prediction and arrived at an artificial neural network (ANN) architecture, the Elman recurrent structure (41).
They analyzed the predictive capability of the model for various longer PHs, at night and over 5 days, by feeding additional information such as food and insulin dosages for a maximum of 1 hour. The RMSE between the actual and predicted blood glucose levels was calculated, and an average RMSE of 0.15 ± 0.04 (SD) mmol/L over the five days was obtained.

Though a myriad of prediction methodologies are available, the AR model is the basis for all of them, and artificial neural network modeling is the latest trend, with several recent works on prediction. Hence our proposed prediction models are compared with these two methodologies, and a short description of each is given in the following sections.

Description of AR Modeling

The AR model described here is the one followed by Sparacino et al. They considered a time-domain difference equation,

u(i) = a u(i-1) + w(i) (2.20)

where i = 1, 2, …, n indexes the glucose samples collected up to the n-th sampling time t(n), and {w(i)} is a random white-noise process with zero mean and variance σ². The parameter vector of the prediction model is θ = (a, σ²). At each sampling time t(n), a new value of θ is determined by fitting the model against the past glucose data u(n), u(n-1), u(n-2), … by Weighted Least Squares (WLS). Once θ is determined, the model is used to predict the glucose level Q steps ahead, i.e., û(n+Q), where Q·Ts = PH, Ts is the sensor sampling period, and Q is the number of time slots ahead for which the prediction is made.

All the past data u(n), u(n-1), …, u(1) contribute, with different relative weights, to the determination of θ. A weight μ^k is assigned to the sample taken k instants before the current sampling time; μ is the forgetting factor. The forgetting factor is used in the modeling of non-stationary processes to improve the fit of the most recent data; hence μ regulates the length of the memory that participates in parameter estimation.

This methodology was tested with our data sets. As done by Perez-Gandia, we also optimized the μ value for PH = 30 and 60 minutes separately for both data sets using the WLS procedure. For the SMBG data set the estimated values are μ = 0.715 and 0.886 for PH = 30 and 60 minutes respectively; for the Medtronic CGMS data set, μ = 0.775 and 0.895 respectively. The detailed results and comparisons are provided in Chapter 5.

Description of NNM

Perez-Gandia used a neural network with three layers: a first layer with 10 neurons, a second layer with 5 neurons, and an output layer with 1 neuron. The input and hidden layers use sigmoidal transfer functions and the output layer a linear transfer function. The glucose measurements from the CGM at the current instant and over the past 20 minutes are given as inputs to the NNM, so the number of inputs varies with the sampling time of the CGM system; with a 5-minute sampling interval the model has 5 inputs. The network parameters (weights and biases) were randomly initialized and updated by the Levenberg-Marquardt optimization algorithm within back-propagation. Three NNMs were trained to predict glucose values 15, 30 and 45 minutes ahead. Perez-Gandia evaluated this methodology with two data sets, obtained from the Guardian CGMS and the FreeStyle Navigator CGMS, using RMSE and time lag as performance metrics. The algorithm for time-lag calculation is clearly explained in Perez-Gandia et al. and is elaborated in Chapter 5 for the assessment of the proposed work.
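The forward pass of such a 5-10-5-1 network can be sketched as follows. This is an illustrative NumPy sketch with randomly initialized, untrained parameters; the original work trains them with Levenberg-Marquardt, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Layer sizes follow the description: 5 inputs (current sample plus the
# past 20 minutes at a 5-minute sampling interval), hidden layers of 10
# and 5 sigmoidal neurons, and 1 linear output neuron.
sizes = [5, 10, 5, 1]
weights = [rng.standard_normal((m, n)) * 0.1
           for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

def forward(glucose_window):
    """One forward pass: the last 5 CGM readings in, one prediction out."""
    h = np.asarray(glucose_window, dtype=float)
    h = sigmoid(weights[0] @ h + biases[0])   # first hidden layer
    h = sigmoid(weights[1] @ h + biases[1])   # second hidden layer
    return (weights[2] @ h + biases[2])[0]    # linear output neuron
```

One such network is trained per prediction horizon, so separate parameter sets are fitted for the 15-, 30- and 45-minute predictors.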

2.7 Blood Glucose Variability Analysis

Glycemic variability is a possible risk factor for the development of complications from diabetes, and it affects the improvement of blood glucose regulation and control. In the early days, the Standard Deviation (SD), the percentage Coefficient of Variation (%CV) and graphical displays such as frequency histograms were used. Service et al. (1970) introduced domain-dependent variability measures for blood glucose dynamics. The Mean Amplitude of Glycemic Excursions (MAGE), the Mean Of Daily Differences (MODD) and the Continuous Overall Net Glycemic Action (CONGAn) are some of the parameters used for the retrospective analysis of the blood glucose variations of a person with diabetes (42).

MAGE is the average amplitude of the upstrokes or downstrokes that exceed a threshold equal to the SD of the measurements over an entire 24-hour period. MAGE is computed as follows. First, the SD of the daily plot of CGM data is computed. Next, each blood glucose excursion that exceeds this standard deviation is detected. The heights of the detected excursions are then averaged; depending on which type of excursion occurs first, only peak-to-nadir or only nadir-to-peak excursions are included in the calculation. The MODD is a measure of between-day variability, calculated as the mean of the absolute differences between glucose values obtained at exactly the same time of day on 2 consecutive days. The more recently introduced CONGAn criterion is defined as the SD of the signed differences between glucose values separated by exactly n hours, where n is 1 to 4 in normal procedures; sometimes n is varied between 1 and 24 and the average is taken.

MAGE, MODD and CONGAn are all directly proportional to the total SD. MAGE correlates with both the total SD and within-day variability but less with measures of between-day variability, whereas MODD gives more information about between-day than within-day variability. CONGAn and MODD both measure the variability of glucose values obtained exactly n hours apart. These domain-dependent parameters of glycemic variability are used for retrospective analysis and would not be of much use for real-time online analysis of BG dynamics.
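The MODD and CONGAn definitions above translate directly into code. The following is a minimal sketch; the function names and the 5-minute default sampling interval are our assumptions, and MAGE is omitted because its excursion-detection step is more involved:

```python
import numpy as np

def modd(day1, day2):
    """Mean Of Daily Differences: mean absolute difference between
    glucose values at the same time of day on 2 consecutive days."""
    d1, d2 = np.asarray(day1, float), np.asarray(day2, float)
    return float(np.mean(np.abs(d1 - d2)))

def conga(glucose, n_hours, ts_minutes=5):
    """CONGAn: SD of the signed differences between readings exactly
    n hours apart, for a series sampled every ts_minutes."""
    g = np.asarray(glucose, float)
    lag = int(n_hours * 60 / ts_minutes)   # samples per n hours
    diffs = g[lag:] - g[:-lag]
    return float(np.std(diffs))
```

Both functions assume uniformly sampled CGM traces; missing samples would need interpolation before the lagged differences are formed.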

The ability to dispose of carbohydrate depends on the responsiveness of the pancreatic beta cells to glucose and on the sensitivity of the glucose-utilizing tissues to the secreted insulin. Rahagi et al. characterized blood glucose dynamics by four distinct frequency ranges [ ], each reflecting a different physiological mechanism. The highest frequency range, with periods between 5 and 15 minutes, is generated by the pulsatile secretion of insulin. The second frequency range, with periods between 60 and 120 minutes, corresponds to ultradian glucose oscillations. The third frequency band, with periods between 150 and 500 minutes, is due to exogenous insulin and nutrition. The fourth frequency range, with periods greater than 700 minutes, is due to circadian oscillations. A prediction system should consider various factors such as patient characteristics (e.g., insulin sensitivity and insulin resistance) and patient condition (fasting, with meals, at rest, in activity, or under stress) (43). Since frequency band I is too short and band III is too long for training samples, we used a time frame of 30 minutes, which is sufficient to capture the frequency content of the glucose time series.
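As a rough illustration of how the energy in these period bands can be quantified from a sampled glucose series, the following FFT-based sketch computes the fraction of (non-DC) spectral power whose period falls in a given range. The function name and details are our own and not taken from the cited work:

```python
import numpy as np

def band_power(glucose, period_min, period_max, ts_minutes=5.0):
    """Fraction of non-DC spectral power whose period (in minutes)
    lies between period_min and period_max, for a uniformly sampled
    glucose series with sampling interval ts_minutes."""
    g = np.asarray(glucose, float)
    g = g - g.mean()                              # remove the DC component
    spectrum = np.abs(np.fft.rfft(g)) ** 2        # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(g), d=ts_minutes)  # cycles per minute
    # period of each bin; the zero-frequency bin maps to infinity
    periods = np.where(freqs > 0, 1.0 / np.maximum(freqs, 1e-12), np.inf)
    mask = (periods >= period_min) & (periods <= period_max)
    total = spectrum[1:].sum()
    return float(spectrum[mask].sum() / total) if total > 0 else 0.0
```

For example, a pure 90-minute oscillation sampled every 5 minutes places essentially all of its power in the ultradian 60-120 minute band (band II) and none in the 5-15 minute band (band I).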