The Factors Affecting Exchange Rate Fluctuation Accounting Essay


CHAPTER 1

In order to design an ANN based model to predict Financial Exchange rate, it is first necessary to understand the following concepts:

What is an exchange rate

What are the various types of exchange rate regimes

Factors affecting the fluctuation of exchange rates

Financial Exchange Rate

In finance, the exchange rate between the currencies of two countries is defined as the value of one country's currency in terms of another.

Exchange Rate Regimes

a. Free-Floating Exchange Rate:

Such an exchange rate regime exists when a country's exchange rate is allowed to vary against other currencies, and its value depends upon the market forces of supply and demand. The value of a currency in such a regime is influenced by the financial sector and banks.

b. Pegged float

In this type of exchange rate regime, currencies are constrained within a certain band; they are either fixed or periodically adjusted.

Types of Pegged floats are:

Crawling bands: in this type of band the exchange rate value is allowed to fluctuate in a band around a central value, and that band is adjusted periodically.

Crawling pegs: in this type of peg the exchange rate value is fixed, and adjusted as above.

Pegged within horizontal bands: the rate is allowed to fluctuate within a fixed band (greater than ±1%) around a central rate.

c. Fixed exchange-rate system

Fixed rates are those that have managed convertibility against one or more currencies.

Factors Affecting Exchange Rate Fluctuation

A market-based exchange rate will fluctuate depending upon many factors; these factors can be classified into the following categories:

Economic

Political

Market psychology

The above-mentioned categories include the following factors:

Differentials in Inflation:

A country with a consistently low inflation rate exhibits a rising currency value, as its purchasing power increases relative to other countries' currencies.

Differentials in Interest Rates:

Manipulation of interest rates by the central bank affects inflation, which in turn affects the exchange rate value of a country's currency.

Public Debt.

Current-Account Deficits:

The current account is the balance of trade between a country and its trading partners. A deficit in the current account reflects that a country is spending more on foreign trade than it is earning.

Terms of Trade

Political Stability and Economic Performance
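The inflation-differential factor listed above can be illustrated with a small numeric sketch (hypothetical figures, using the relative purchasing power parity approximation; Python is used here purely for illustration):

```python
def expected_rate(spot, domestic_inflation, foreign_inflation):
    """Relative PPP approximation: expected domestic-currency price of
    one unit of foreign currency after one period."""
    return spot * (1 + domestic_inflation) / (1 + foreign_inflation)

# Hypothetical figures: INR per USD at 53.0, Indian inflation 8%, US inflation 2%.
# The lower-inflation country's currency (here the USD) is expected to appreciate.
print(round(expected_rate(53.0, 0.08, 0.02), 2))
```

Under these assumed figures the expected rate rises from 53.0 to about 56.12 INR per USD, consistent with the low-inflation currency gaining value.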

In the project at hand, the prime objective is to predict the exchange rate values of the USD and the Euro in terms of the Indian Rupee. We will therefore mainly focus on the Reserve Bank of India, as it is the sole manager and regulator of the Indian foreign exchange market (Foreign Exchange Management Act, 1999).

Following are the participants of the Indian foreign exchange market:

All scheduled commercial banks (Authorized Dealers only).

Reserve Bank of India (RBI).

Treasuries of Corporate sector.

Public Sector and Government.

Inter Bank Brokerage Houses.

Indian Residents

Non Resident Indians

RBI's role in the foreign exchange market:

To manage the exchange rate mechanism.

Regulate inter-bank foreign exchange dealings and monitor the foreign exchange risk of the banks.

Keep the exchange rate stable.

Manage and maintain the country's foreign exchange reserves.

The RBI has imposed foreign exchange exposure limits on banks (FE 12 of 1999).

These limits are linked to the paid-up capital of the bank.

Previously, banks had a Net Open Position (NOP) limit, which was based on the foreign exchange volume held by the bank.

NEED FOR FOREIGN EXCHANGE RATE PREDICTION

International transactions are normally settled in the near future. Exchange rate forecasts are vital for valuing the foreign-denominated cash flows involved in international transactions. Thus, exchange rate forecasting is extremely important for assessing the benefits and risks attached to the international business environment.

CHAPTER 2

HISTORICAL BACKGROUND OF EXCHANGE RATE PREDICTION

In the years of research since Meese and Rogoff's seminal work on the viability of exchange rate prediction, noteworthy technical advances have been made on the question of currency exchange rate predictability. In the last quarter of the 20th century, momentous work was done in this area by researchers such as Bekaert and Hodrick (1992), Fong and Ouliaris (1995), LeBaron (1999), Levich and Thomas (1993), Liu and He (1991) and Sweeney (1986), providing substantial evidence on the predictability of exchange rates.

Because exchange rate forecasting is of theoretical as well as practical importance, a large number of methods and techniques (both linear and non-linear) have been introduced to beat the "Random Walk Model" (a martingale).

On a broader basis exchange rate forecasting models developed so far could be classified into two categories:

Models based on fundamental approach:

These models are based on a broad collection of data known as fundamental economic variables. Examples of these variables include: Gross National Product, trade balance, inflation rates, interest rates, the employment-to-unemployment ratio, productivity indexes, etc.

Models based on technical approach:

The technical approach (TA) relies on a smaller subset of the available data: it is based on price information. The analysis is "technical" in the sense that it does not rely on a fundamental analysis of the underlying economic determinants of exchange rates or asset prices, but merely on extrapolations of past price trends. Technical analysis looks for the repetition of specific patterns.

Models based on the technical approach are simple; they rely on numerical methods such as moving averages (MA), momentum indicators or filters.

Some of the popular technical approach models are:

Moving Average Models

The aim of an MA model is to smooth out erratic daily swings in asset prices in order to signal major trends. A moving average is simply an average of past prices. We will use the simple moving average, which is the un-weighted mean of the preceding Q data points:

Simple moving average = (S_t + S_{t-1} + S_{t-2} + ... + S_{t-(Q-1)}) / Q

If we include only the most recent past prices, we compute a short-run MA (SRMA). If we include a longer sequence of past prices, we compute a long-run MA (LRMA). The double-MA system uses two moving averages: a LRMA and a SRMA. The LRMA will always lag the SRMA because it gives a smaller weight to recent movements of exchange rates. In MA models, buy and sell signals are normally triggered when a SRMA of past rates crosses the LRMA.

For example, if the exchange rate of a currency is moving downward, its SRMA will be below its LRMA. As soon as the rate starts rising again, the SRMA crosses above the LRMA, generating a buy signal for the foreign currency.
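The double moving-average trading rule described above can be sketched as follows (an illustrative Python sketch; the function names are hypothetical):

```python
def sma(prices, q):
    """Simple moving average: un-weighted mean of the preceding q prices."""
    return [sum(prices[i - q + 1:i + 1]) / q for i in range(q - 1, len(prices))]

def crossover_signal(prices, short_q, long_q):
    """Emit 'buy' when the short-run MA crosses above the long-run MA,
    'sell' when it crosses below, else 'hold' (evaluated at the last point)."""
    srma, lrma = sma(prices, short_q), sma(prices, long_q)
    s_prev, s_now = srma[-2], srma[-1]
    l_prev, l_now = lrma[-2], lrma[-1]
    if s_prev <= l_prev and s_now > l_now:
        return "buy"
    if s_prev >= l_prev and s_now < l_now:
        return "sell"
    return "hold"

# Falling rates followed by a rebound: the SRMA crosses above the LRMA.
rates = [55, 54, 53, 52, 51, 50, 52, 54]
print(crossover_signal(rates, short_q=2, long_q=5))
```

On the rebound in the toy series the short-run average overtakes the long-run one, producing the buy signal described in the text.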

Momentum Models

Filter models

ARIMA model

ARIMA processes are mathematical models used for forecasting. ARIMA is an acronym for Autoregressive, Integrated, and Moving Average. Each of these phrases describes a different part of the mathematical model.
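To illustrate the autoregressive component, the following sketch fits a first-order autoregression, AR(1), by ordinary least squares. This is a hedged toy example in Python, not a full ARIMA implementation:

```python
def fit_ar1(series):
    """Fit y_t = a + b * y_{t-1} by ordinary least squares (closed form)."""
    x = series[:-1]   # lagged values y_{t-1}
    y = series[1:]    # current values y_t
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

def forecast_ar1(series, steps):
    """Iterate the fitted recurrence to forecast the next `steps` values."""
    a, b = fit_ar1(series)
    out, last = [], series[-1]
    for _ in range(steps):
        last = a + b * last
        out.append(last)
    return out

# A noiseless AR(1) series y_t = 1 + 0.5 * y_{t-1} is recovered exactly.
series = [4.0]
for _ in range(9):
    series.append(1 + 0.5 * series[-1])
print(forecast_ar1(series, steps=2))
```

The "I" (integrated) part of ARIMA would difference the series before fitting, and the "MA" part would also regress on past forecast errors; both are omitted here for brevity.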

The main reason for selecting ANN as an instrument for exchange rate forecasting is that several distinguishing features of ANNs make them valuable and attractive for forecasting. First of all, in contrast to many models, ANN models are data-driven and self-adaptive, with only a few restrictive assumptions built into the model. This feature is highly desirable in many financial forecasting situations where the data under consideration is plentiful but the underlying data-generating mechanism is unknown (Qi and Zhang, 2001). Second, ANNs can generalize. Third, ANNs are universal function approximators (Hornik et al., 1989). Finally, ANNs are a class of non-linear models (Zhang et al., 1998). Ever since Lapedes and Farber first proposed multi-layer feed-forward neural networks for non-linear signal prediction, exchange rate prediction algorithms have made momentous advances (Shin and Han, 2000).

However, no single method has succeeded in consistently beating every other method in foreign exchange rate prediction. The performance of different models and methods changes considerably when the variables involved in the procedure are varied. For example, ANNs outperform ARIMA models when the forecasting horizon is restricted to a week, whereas when the performance of the two methods is compared over a forecasting horizon of about 2-3 months there is no significant difference between the results (Han and Steurer, 1996).

CHAPTER 3

LITERATURE SURVEY

The criterion for selecting research works for this survey is that they should contain a methodical discussion of the process of developing exchange rate forecasts using ANNs, and that they must shed some light on the technical aspects of ANNs. In view of this criterion, the literature collection process for this project was carried out in two steps:

A reference search on textbooks of neural networks and their applications was carried out. A total of four textbooks were considered, by the following authors: Masters (1995) [11], S. Sumathi and Surekha P. (2010) [12], Raj Kumar Bansal, Ashok Kumar Goel and Manoj Kumar Sharma (2009) [13], and Lean Yu, Shouyang Wang and Kin Keung (2007) [14].

Furthermore, a search for articles and research papers in this field was carried out. A total of 10 articles were read and analysed, by the following authors: Nikola Gradojevic and Jing Yang [10], Hamadu Dallah and Adeleke Ismaila [8], Doc. Dr. Cem KADILAR, Prof. Dr. Muammer ŞİMŞEK and Araş. Gör. Çağdaş Hakan ALADAĞ [9], Mohamad Alamili [7], Vincenzo Pacelli, Vitoantonio Bevilacqua and Michelle Azzoleni [6], Lean Yu, Shouyang Wang, Wei Huang, and Kin Keung Lai [5], Adewole Adetunji Philip, Akinwale Adio Taofiki and Akintomide Ayo Bidemi [4], Suresh Kumar Sharma and Vinod Sharma [3], Yaser S. Abu-Mostafa and Amir F. Atiya [2], Joarder Kamruzzaman and Ruhul A. Sarker [1].

3.1 LITERATURE ANALYSIS

3.1.1 Input output Nodes and Hidden layer

In the survey conducted, it was found that the majority of the models use one output node, while the number of input nodes varies from article to article. The majority of ANN models in the survey employed a single hidden layer.

3.1.2 Data division

Generally, data division in the surveyed models was done arbitrarily, and the statistical properties of the data sets were seldom considered. Broadly, the majority of articles used three data sets, namely a training set, a testing set and a validation set. After analysing and comparing the results from the various articles, it was found that data set size is a vital factor affecting the efficiency of an ANN.

3.1.3 Forecasting Horizon

Forecasting results given by ANN are more accurate under the circumstance in which the forecasting horizon under consideration is short term or medium term. On the contrary, foreign exchange rates are unpredictable when forecasting horizon is long (i.e. ANN loses its accuracy).

3.1.4 Network Types and Model Types

In this survey, the Multiple Layer Feed Forward Neural Network (MLFNN) was used in the majority of articles. Other types of neural networks used in the research papers surveyed include the general regression neural network (GRNN), the radial basis function network (RBFN) and the recurrent neural network (RNN).

3.1.5 Control Strategy

The survey indicates that the feed forward strategy is the most widely used control strategy, but hybrid and recurrent strategies are also potential candidates.

3.1.6 Training Algorithm

In this survey, the back propagation (BP) algorithm was found to be the most widely used; other popular algorithms are the Levenberg-Marquardt algorithm, the genetic algorithm (GA) and the conjugate gradient algorithm. BP is by far the most widely used algorithm for optimizing feed-forward neural networks; it is based on the method of steepest descent. BP is, however, prone to getting trapped in local optima, and GA and simulated annealing are therefore used to overcome this drawback; but these algorithms are extremely time-consuming and offer weak convergence, and hence are less favoured than BP.

One important factor associated with the learning algorithm is the learning rate. In most of the articles surveyed a fixed learning rate was employed. However, a fixed learning rate does not always lead to an optimal convergence rate; hence an optimally adaptive learning rate is extremely important in ANN based applications.
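The point about fixed versus adaptive learning rates can be illustrated with a toy sketch (hypothetical constants; gradient descent on f(w) = w², not an actual ANN training run):

```python
def descend(lr_schedule, steps=50, w=5.0):
    """Minimise f(w) = w^2 (gradient 2w) under a given learning-rate rule."""
    for t in range(steps):
        w -= lr_schedule(t) * 2 * w
    return w

# A fixed rate that is too large makes the iterate oscillate and diverge,
# while the same initial rate under a 1/t decay still converges toward 0.
fixed = descend(lambda t: 1.1)               # constant, too large
decayed = descend(lambda t: 1.1 / (1 + t))   # decaying schedule
print(abs(fixed), abs(decayed))
```

The fixed rate multiplies the error by -1.2 at every step and blows up, whereas the decaying schedule takes large steps early and small steps late, which is the behaviour an adaptive learning rate aims for.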

3.1.7 Transfer Function

The survey indicates that the sigmoidal type transfer function is the most commonly used. It has two variants: the hyperbolic tangent and the logistic function.

In general, any differentiable transfer function can be used in an ANN model, but only the sigmoidal type optimizes the functioning of the ANN. As interpreted from the survey, it is generally advantageous to use sigmoidal transfer functions in the hidden layer as well as the input and output layers, because the foreign exchange market exhibits high noise, large volatility and complexity.
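The two sigmoidal variants can be written out directly (an illustrative Python sketch):

```python
import math

def logsig(x):
    """Logistic function: squashes any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tansig(x):
    """Hyperbolic tangent: squashes any input into (-1, 1)."""
    return math.tanh(x)

# Both are smooth and differentiable (as back propagation requires);
# they differ only in output range, and tanh is zero-centred.
print(logsig(0.0), tansig(0.0))
```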

CHAPTER 4

DATA COLLECTION

Data collection is an important and critical step in building an ANN. The main reason is that the quality of the input data strongly affects the results given by the neural network model.

For this purpose, all the data for the problem formulation has been collected from the Reserve Bank of India's official website.

The data collected comprises the past 3,426 days of exchange rate values of the USD and the Euro in terms of the Indian Rupee.

The collected data has been tabulated date wise for the sake of making the data analysis step easier.

The data collected could be found in ANNEXURE A.

CHAPTER 5

ANN

An Artificial Neural Network (ANN) is a mathematical model inspired by biological neural networks. A neural network consists of an interconnected group of artificial neurons, and it processes data using a connectionist approach to computation. In most cases a neural network is an adaptive system that adjusts its structure during a learning phase. Neural networks are used to model complex relationships between inputs and outputs or to find patterns in data.

The motivation for artificial neural networks came from seminal work on the central nervous system and its working. In an artificial neural network, simple artificial nodes, termed neurons, neurodes, processing elements or units, are connected together to form a network that resembles a biological neural network.

An ANN comprises a network of simple processing elements that exhibit complex global behaviour, determined by the connections between the processing elements and the element parameters. Artificial neural networks are used together with algorithms designed to alter the strength of the connections in the network to produce a desired signal flow.

NETWORK FUNCTION

The word network in the term 'artificial neural network' refers to the inter-connections between the neurons in the different layers of the system. An example system has three layers. The first layer has input neurons, which send data via synapses to the second layer of neurons, and then via more synapses to the third layer of output neurons. More complex systems have more layers of neurons, some with additional input and output neurons. The synapses store parameters called "weights" that manipulate the data in the calculations.

An ANN is typically defined by three types of parameters:

The interconnection pattern between different layers of neurons

The learning process for updating the weights of the interconnections

The activation function that converts a neuron's weighted input to its output activation.

Mathematically, a neuron's network function f(x) is defined as a composition of other functions g_i(x), which can in turn be defined as compositions of further functions. This can be conveniently represented as a network structure, with arrows depicting the dependencies between variables.


Fig 5.1 ANN dependency graph

The first view is the functional view: the input x is transformed into a 3-dimensional vector h, which is then transformed into a 2-dimensional vector g, which is finally transformed into f. This view is most commonly encountered in the context of optimization.
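The composition described above can be sketched with hypothetical fixed weights (a toy illustration, not a trained network): the scalar input x becomes a 3-vector h, then a 2-vector g, then the scalar output f.

```python
import math

def h(x):
    """First layer: scalar input -> 3-vector (hypothetical weights)."""
    return [math.tanh(w * x) for w in (0.5, -1.0, 2.0)]

def g(hv):
    """Second layer: 3-vector -> 2-vector (hypothetical weight rows)."""
    w2 = [(1.0, 0.5, -0.5), (0.2, -0.3, 1.0)]
    return [math.tanh(sum(w * v for w, v in zip(row, hv))) for row in w2]

def f(x):
    """Output layer: weighted sum of the 2-vector g(h(x))."""
    gv = g(h(x))
    return 0.7 * gv[0] - 0.4 * gv[1]

print(f(1.0))
```

Training would adjust the fixed weights shown here; the dependency graph of the figure is exactly this chain of function calls.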


Fig 5.2 Two separate depictions of the recurrent ANN dependency graph

Networks such as the previous one are commonly called feed forward, because their graph is a directed acyclic graph. Networks with cycles are commonly called recurrent. Such networks are commonly depicted in the manner shown at the top of the figure, where f is shown as being dependent upon itself. However, an implied temporal dependence is not shown.

USE OF ANN FOR FINANCIAL EXCHANGE RATE PREDICTION

With the development of ANNs, researchers and investors hope to solve the mystery of exchange rate prediction. The ANN model, being non-linear, is a strong alternative for predicting exchange rates. ANN is a very suitable method for finding correct solutions, especially in situations involving complex, noisy, irrelevant or partial information. The main reason for choosing ANN as a vehicle for exchange rate prediction is that it possesses some important and attractive characteristics. First, ANN, as opposed to prediction methods based on classical models, has few limiting hypotheses and can easily be adapted to the type of data at hand. This characteristic is highly desirable in financial predictions where the underlying data-generating process is unknown. Secondly, ANN can generalize. Thirdly, ANN is a universal function approximator. Furthermore, ANN is a non-linear model. Nevertheless, it is still too early to state whether ANN will solve the problem of exchange rate prediction, which no method has hitherto solved with sufficient success.

CHAPTER 6

PROPOSED PLAN OF WORK

Post mid-semester, I will start working on the practical aspects of the project: MATLAB will be used for problem formulation, and various types of ANN models for predicting financial exchange rates will be tested.

For the purpose of problem formulation a total of three currencies have been selected which include: USD, Euro and INR. The exchange rate values of USD and Euro in terms of INR will be predicted.

To begin with, the ANN model that will be used for prediction will have the following specifications:

Multi-layer feed forward neural network (MLFNN)

The control strategy employed would be feed-forward strategy

Training algorithm used will be the Levenberg-Marquardt algorithm.

Sigmoidal transfer functions, namely the logistic and hyperbolic tangent functions, will be employed.

One input and one output node.

Dataset division into three sets namely training, testing and validating sets.

MSE to be used as performance criteria.

Single hidden layer.

Number of hidden layer nodes will be determined by trial and error.

Single day prediction by data predictor.

CHAPTER 7

TECHNICAL ANALYSIS

7.1 FORECASTING MODEL

Prediction is a kind of dynamic filtering, in which past values of one or more time series are used to predict future values. Dynamic neural networks, which include tapped delay lines, are used for such nonlinear filtering and prediction.
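The idea of a tapped delay line can be sketched as a sliding window over the series (an illustrative Python sketch; the helper name is hypothetical):

```python
def tapped_delay_pairs(series, delays):
    """Turn a time series into (input window, next value) training pairs,
    mimicking a tapped delay line of length `delays`."""
    pairs = []
    for t in range(delays, len(series)):
        window = series[t - delays:t]   # the d most recent past values
        target = series[t]              # the value the network must predict
        pairs.append((window, target))
    return pairs

print(tapped_delay_pairs([1, 2, 3, 4, 5], delays=3))
```

Each window plays the role of the delayed inputs fed to the network, and the paired value is the training target.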

There are many applications for prediction. For example, a financial analyst might want to predict the future value of a stock, bond or other financial instrument.

The NAR network chosen for this project was subjected to a number of variations in order to determine the network parameters that best improve its performance.

Three of the network parameters have been varied, namely:

Transfer function

Number of hidden layers

Number of nodes in hidden layers

Initially a comparison was made on the performance of network considering only the transfer functions. The transfer functions considered are:

Tansig

Logsig

After this comparison, the transfer function with the better performance was fixed for the remaining iterations. The next variations involve varying the number of nodes and the number of hidden layers while keeping the transfer function fixed. This was carried out in two steps:

Hidden layer 1: initially only one hidden layer was used, and the number of nodes in this layer was varied from 5 to 15.

Hidden layer 2: after the number of nodes in the first hidden layer reached 15, a second hidden layer was added, in which the number of nodes was likewise varied from 5 to 15. The number of nodes in the first layer was fixed according to the network's performance in the previous iteration.

The results of each iteration were tabulated, graphs were plotted and the performance criteria were calculated. The performance criteria are:

MSE: mean square error.

RMSE: Root mean square error.

Percentage error for the data predicted on first day.
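The three performance criteria above can be computed as in the following sketch (illustrative Python with hypothetical figures):

```python
def performance(actual, predicted):
    """Return MSE, RMSE and the percentage error on the first day."""
    errors = [a - p for a, p in zip(actual, predicted)]
    mse = sum(e * e for e in errors) / len(errors)
    rmse = mse ** 0.5
    pct_first_day = abs(errors[0]) / actual[0] * 100
    return mse, rmse, pct_first_day

# Two rows of a hypothetical forecast run (actual vs. predicted rates):
mse, rmse, pct = performance([52.70, 53.12], [52.91, 52.52])
print(round(mse, 5), round(rmse, 5), round(pct, 3))
```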

7.1.1 Single input - single output NAR network

A nonlinear autoregressive time-series (NAR) network is used in this project. Initially the number of nodes in the hidden layer was set to 15; on this basis the training function was run twice, once with the tansig transfer function and once with the logsig transfer function. The results of the two iterations were tabulated and compared in order to decide which transfer function to use in subsequent iterations.

Subsequently, after selecting the transfer function, the number of nodes in the hidden layers and the number of hidden layers will be varied, and the results plotted and tabulated.

7.1.2 Multiple input - multiple output NAR network

In this network the input layer consists of two input nodes, one for historic USD values and the other for historic Euro values; the output layer likewise consists of two output nodes, which return the values of the USD and the Euro in terms of INR.

The main reason for selecting this network is to couple the two currencies, so that there is no need to train a separate network to predict USD in terms of EURO or vice versa.

7.2 Network Specifications

Name: 'NAR Neural Network'

Dimensions:

Number of Inputs: 1

Number of Layers: 2

Number of Outputs: 1

Number of Input Delays: 199

Number of Layer Delays: 0

Number of Feedback Delays: 0

Number of Weight Elements: 3031

Connections:

Bias Connect: [1; 1]

Input Connect: [1; 0]

Layer Connect: [0 0; 1 0]

Output Connect: [0 1]

Functions:

Learning algorithm: Levenberg Marquardt algorithm

Performance Function: 'mse'

Performance Parameters: .regularization, .normalization

Data division sets:

Training set ratio = 80/100

Test set ratio = 10/100

Validation set ratio = 10/100
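As a sketch, the 80/10/10 division could be realised chronologically as follows (illustrative Python; note that the training code later in this chapter uses MATLAB's 'dividerand', which divides randomly):

```python
def split_dataset(data, train=0.8, val=0.1):
    """Split a series chronologically into training, validation and test sets."""
    n = len(data)
    n_train = int(n * train)
    n_val = int(n * val)
    return (data[:n_train],
            data[n_train:n_train + n_val],
            data[n_train + n_val:])

tr, va, te = split_dataset(list(range(100)))
print(len(tr), len(va), len(te))
```

A chronological split respects the time ordering of the series; a random split, as used by the toolbox, mixes time periods across the three sets.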

7.2 CODING

Separate functions were written to perform the various operations of the neural network.

Following are the functions:

trainExRatePredictor.m: This function is responsible for training, testing and validating the network. Following are the instructions to use this function:

The "trainExRatePredictor" function simultaneously supports multiple currencies, and also allows a user-specified network structure as an input parameter. The syntax is as follows:

net = trainExRatePredictor(tr, delays, netStructure, 'tansig')

- tr: the input training data; each currency should have its own column. For example, if we are working with USD and EURO, historic EURO rates should be in the first column of tr, and historic USD rates in the second column.

- delays: the number of historic data points the network considers when making a prediction.

- netStructure: The structure of the hidden layers. Should be a row vector where each element corresponds to the number of nodes in a hidden layer. For example, [5 5] would mean that we want two hidden layers, each layer having 5 nodes.

- transfer: The transfer function we want the hidden nodes to have. Can be 'tansig' or 'logsig'.

testNetOnExRateData.m:

"testNetOnExRateData" function was added to compute RMSE values and make actual vs. predicted plots. It will produce the plot given a network and some testing data. In the plots, red represents actual data, and green represents prediction. Its usage is as follows:

[error ys] = testNetOnExRateData(net, x);

- net: the trained neural net.

- x: the input data, same format as "tr" in the previous function.

- error: the RMSE for each currency.

- ys: the network's prediction based on the input "x"

testNextNDays.m:

Syntax: [rmse ys] = testNextNDays(net, x, n, plotting)

- same as testNetOnExRateData, except "n" is the number of days we want to predict ahead, and "plotting" is true/false indicating whether we want to plot the resulting prediction against the actual value.

predictTomorrow.m:

Syntax: y= predictTomorrow(net, x)

predictNextNDays.m:

Syntax: y = predictNextNDays(net, x, n)

- same as predictTomorrow, except "n" is the number of days we want to predict ahead.


exRateCoupledRmseDemo

7.3 trainExRatePredictor (Training function code)

% Solve an Autoregression Time-Series Problem with a NAR Neural Network

% This script assumes this variable is defined:

% tr - feedback time series.

function [netPredict, netCloseloop, net] = trainExRatePredictor(trainingData, delays, netStructure, transfer)

targetSeries = tonndata(trainingData,false,false);

% Create a Nonlinear Autoregressive Network

feedbackDelays = 1:delays;

hiddenLayerSize = netStructure;

net = narnet(feedbackDelays,hiddenLayerSize);

for i = 1:length(net.layers)-1

net.layers{i}.transferFcn = transfer;

end

% Choose Feedback Pre/Post-Processing Functions

% Settings for feedback input are automatically applied to feedback output

% For a list of all processing functions type: help nnprocess

net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};

% Prepare the Data for Training and Simulation

% The function PREPARETS prepares timeseries data for a particular network,

% shifting time by the minimum amount to fill input states and layer states.

% Using PREPARETS allows you to keep your original time series data unchanged, while

% easily customizing it for networks with differing numbers of delays, with

% open loop or closed loop feedback modes.

[inputs,inputStates,layerStates,targets] = preparets(net,{},{},targetSeries);

% Setup Division of Data for Training, Validation, Testing

% For a list of all data division functions type: help nndivide

net.divideFcn = 'dividerand'; % Divide data randomly

net.divideMode = 'time'; % Divide up every value

net.divideParam.trainRatio = 80/100;

net.divideParam.valRatio = 10/100;

net.divideParam.testRatio = 10/100;

% Choose a Training Function

% For a list of all training functions type: help nntrain

net.trainFcn = 'trainlm'; % Levenberg-Marquardt

% Choose a Performance Function

% For a list of all performance functions type: help nnperformance

net.performFcn = 'mse'; % Mean squared error

% Choose Plot Functions

% For a list of all plot functions type: help nnplot

net.plotFcns = {'plotperform','plottrainstate','plotresponse', ...

'ploterrcorr', 'plotinerrcorr'};

% Train the Network

[net,trainingData] = train(net,inputs,targets,inputStates,layerStates);

% Test the Network

outputs = net(inputs,inputStates,layerStates);

errors = gsubtract(targets,outputs);

performance = perform(net,targets,outputs)

% Recalculate Training, Validation and Test Performance

trainTargets = gmultiply(targets,trainingData.trainMask);

valTargets = gmultiply(targets,trainingData.valMask);

testTargets = gmultiply(targets,trainingData.testMask);

trainPerformance = perform(net,trainTargets,outputs)

valPerformance = perform(net,valTargets,outputs)

testPerformance = perform(net,testTargets,outputs)

% View the Network

view(net)

% Plots

% Uncomment these lines to enable various plots.

%figure, plotperform(tr)

%figure, plottrainstate(tr)

%figure, plotresponse(targets,outputs)

%figure, ploterrcorr(errors)

%figure, plotinerrcorr(inputs,errors)

% Early Prediction Network

% For some applications it helps to get the prediction a timestep early.

% The original network returns predicted y(t+1) at the same time it is given y(t+1).

% For some applications such as decision making, it would help to have predicted

% y(t+1) once y(t) is available, but before the actual y(t+1) occurs.

% The network can be made to return its output a timestep early by removing one delay

% so that its minimal tap delay is now 0 instead of 1. The new network returns the

% same outputs as the original network, but outputs are shifted left one timestep.

netPredict = removedelay(net);

[xs,xis,ais,ts] = preparets(netPredict,{},{},targetSeries);

ys = netPredict(xs,xis,ais);

predictionPerformance = perform(net,ts,ys)

% Closed Loop Network

% Use this network to do multi-step prediction.

% The function CLOSELOOP replaces the feedback input with a direct

% connection from the output layer.

netCloseloop = closeloop(net);

[xc,xic,aic,tc] = preparets(netCloseloop,{},{},targetSeries);

yc = netCloseloop(xc,xic,aic);

closeloopPerformance = perform(net,tc,yc)

7.4 testNetOnExRateData

function [rmse ys] = testNetOnExRateData(net, x)

targetSeries = tonndata(x,false,false);

[xs,xis,ais,ts] = preparets(net,{},{},targetSeries);

ys = net(xs,xis,ais);

% error = perform(net,ts,ys);

ys = cell2mat(ys);

ts = cell2mat(ts);

error = ys-ts;

error(isnan(error)) = 0;

rmse = sqrt(mean(error.^2, 2));

figure;

for i = 1:size(ys, 1)

subplot(size(ys, 1),1,i);

plot(ts(i,:), '*--r');

hold on;

plot(ys(i,:), '+--g');

hold off;

end

7.5 testNextNDays

function [rmse ys] = testNextNDays(net, x, n, plotting)

ts = x(end-n+1:end, :);

tempx = x(end-n-net.numInputDelays:end-n, :);

ys = predictNextNDays(net, tempx, n);

error = ys-ts;

rmse = sqrt(mean(error.^2));

if plotting

figure;

for i = 1:size(ys, 2)

subplot(size(ys, 2),1,i);

plot(ts(:,i), '*--r');

hold on;

plot(ys(:,i), '+--g');

hold off;

end

end

7.6 predictTomorrow

function y = predictTomorrow(net, x)

xp = tonndata(x, false, false);

[~, xi, ~, ~] = preparets(net,{},{},xp);

y = net(xp, xi);

y = y{end};

end

7.7 predictNextNDays

function y = predictNextNDays(net, x, n)

shortX = x(end-net.numInputDelays:end, :);

shortX = [shortX; zeros(n, size(shortX, 2))];

netc = closeloop(net);

xp = tonndata(shortX, false, false);

[xc,xic,aic,~] = preparets(netc,{},{},xp);

y = netc(xc, xic, aic);

y = cell2mat(y);

y = y';

end

7.8 MODEL 1: Tansig vs Logsig

7.8.1 Results for network with TANSIG transfer function:

The network was trained, tested and validated on 2000 days of data.

Last 200 days of data was used to predict next ten days of closing rates of USD in terms of INR.

Number of hidden layers: 1

Number of nodes in hidden layer: 15

Transfer function: Tansig

All other parameters are the same as specified in the Network Specifications above.

Fig 7.1: NAR neural network (tansig)

Date         Actual    Predicted  Error (Et)  (Et)^2
12-Oct-2012  52.7000   52.9089    -0.2089     0.04363921
15-Oct-2012  53.1198   52.5210     0.5988     0.35856144
16-Oct-2012  52.8193   52.8409    -0.0216     0.00046656
17-Oct-2012  52.7510   53.0668    -0.3158     0.09972964
18-Oct-2012  52.9690   53.4706    -0.5016     0.25160256
19-Oct-2012  53.7175   53.2542     0.4633     0.21464689
22-Oct-2012  53.6735   53.1388     0.5347     0.28590409
23-Oct-2012  53.5895   53.3910     0.1985     0.03940225
25-Oct-2012  53.6300   53.4493     0.1807     0.03265249
29-Oct-2012  53.8065   53.4369     0.3696     0.13660416

MSE  = 0.12193411
RMSE = 0.34919065

TABLE 7.1 Forecasting results of neural network model for US Dollar.

%error for first day of prediction= .312%

Fig 7.2 GRAPH OF PREDICTED AND ACTUAL DATA
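The error columns in the tables follow the standard definitions: Et = actual - predicted, MSE is the mean of the squared errors, and RMSE is the square root of the MSE. A minimal Python sketch of these metrics (illustrative only; the numbers below are hypothetical and not taken from the tables):

```python
# Illustrative sketch of the error metrics used in the result tables:
#   E_t  = actual - predicted
#   MSE  = mean(E_t^2)
#   RMSE = sqrt(MSE)
import math

def error_metrics(actual, predicted):
    errors = [a - p for a, p in zip(actual, predicted)]
    mse = sum(e * e for e in errors) / len(errors)
    return mse, math.sqrt(mse)

# Hypothetical actual/predicted values, for illustration only.
actual    = [52.70, 53.12, 52.82]
predicted = [52.90, 52.52, 52.84]
mse, rmse = error_metrics(actual, predicted)
print(round(mse, 6), round(rmse, 6))
```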

7.8.2 Results for network with LOGSIG transfer function:

The network was trained, tested and validated on 2000 days of data.

The last 200 days of data were used to predict the next ten days of closing rates of USD in terms of INR.

Number of hidden layers: 1

Number of nodes in hidden layer: 15

Transfer function: Logsig

All other parameters are the same as specified above in the Network Specifications.

Fig 7.3: NAR neural network (logsig)

Date         Actual    Predicted  Error (Et)  (Et)^2
12-Oct-2012  52.7000   52.8815    -0.1815     0.03294225
15-Oct-2012  53.1198   53.0086     0.1112     0.01236544
16-Oct-2012  52.8193   52.8998    -0.0805     0.00648025
17-Oct-2012  52.7510   52.6049     0.1461     0.02134521
18-Oct-2012  52.9690   52.2705     0.6985     0.48790225
19-Oct-2012  53.7175   52.081      1.6365     2.67813225
22-Oct-2012  53.6735   51.9217     1.7518     3.06880324
23-Oct-2012  53.5895   52.0673     1.5222     2.31709284
25-Oct-2012  53.6300   52.0395     1.5905     2.52969025
29-Oct-2012  53.8065   51.8505     1.956      3.825936

MSE  = 1.24839083
RMSE = 1.11731412

Table 7.2: Forecasting results of neural network model for US Dollar.

% error for first day of prediction = 0.343%

Fig 7.4 GRAPH OF PREDICTED AND ACTUAL DATA

7.8.3 CONCLUSION

Based on the above data it can be concluded that the tansig transfer function has better accuracy in predicting future values than the logsig function. Thus, for all the remaining iterations, the tansig function will be used in the training network.
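The two transfer functions compared above differ mainly in their output range: tansig is zero-centred on (-1, 1) while logsig maps to (0, 1), which is one commonly cited reason tanh-type activations often train better. A Python re-implementation of MATLAB's definitions, for illustration:

```python
# MATLAB's Neural Network Toolbox transfer functions, re-implemented in
# Python for illustration:
#   tansig(n) = 2 / (1 + exp(-2n)) - 1   (equivalent to tanh; range (-1, 1))
#   logsig(n) = 1 / (1 + exp(-n))        (range (0, 1))
import math

def tansig(n):
    return 2.0 / (1.0 + math.exp(-2.0 * n)) - 1.0

def logsig(n):
    return 1.0 / (1.0 + math.exp(-n))

for n in (-2.0, 0.0, 2.0):
    print(n, round(tansig(n), 4), round(logsig(n), 4))
```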

7.9 MODEL 2: VARIATION OF HIDDEN LAYERS AND HIDDEN NODES

In this model the number of hidden layers and hidden nodes is varied as mentioned above. In all, there are 22 variations.

For all the variations, the transfer function ('tansig') and the parameters mentioned in the network specifications remain fixed; only the number of hidden layers and hidden nodes change.

7.9.1 Variations in the first hidden layer

1. One hidden layer with 5 nodes:

Fig 7.5: NAR neural network (1 hidden layer with 5 nodes)

The network was trained, tested and validated on 2000 days of data.

The last 200 days of data were used to predict the next ten days of closing rates of USD in terms of INR.

Number of hidden layers: 1

Number of nodes in hidden layer 1: 5

Transfer function: Tansig

All other parameters are the same as specified above in the Network Specifications.

Date         Actual    Predicted  Error (Et)  (Et)^2
12-Oct-2012  52.7000   52.1361     0.5639     0.31798321
15-Oct-2012  53.1198   51.5288     1.591      2.531281
16-Oct-2012  52.8193   51.2022     1.6171     2.61501241
17-Oct-2012  52.7510   50.8015     1.9495     3.80055025
18-Oct-2012  52.9690   50.2921     2.6769     7.16579361
19-Oct-2012  53.7175   49.8991     3.8184     14.5801786
22-Oct-2012  53.6735   49.6559     4.0176     16.1411098
23-Oct-2012  53.5895   49.2898     4.2997     18.4874201
25-Oct-2012  53.6300   48.7919     4.8381     23.4072116
29-Oct-2012  53.8065   48.6534     5.1531     26.5544396

MSE  = 9.63341501
RMSE = 3.10377432

Table 7.3: Forecasting results (hidden layer 1 - 5 nodes)

% error for first day of prediction = 1.07%

Fig 7.6: training, testing and validation performance (hidden layer 1 - 5 nodes)

Fig 7.7: Graph of predicted and actual data (hidden layer 1 - 5 nodes)

The graph above shows that the accuracy of prediction deteriorates as we advance further, because errors keep on accumulating.
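This compounding can be illustrated with a toy sketch (hypothetical numbers, not the project's MATLAB code): if each closed-loop step inherits the previous step's output and under-predicts by a small fixed amount, the accumulated error grows linearly with the forecast horizon, much like the error growth visible in the table above.

```python
# Illustrative sketch: in closed-loop prediction each step's output becomes
# the next step's input, so even a small constant one-step bias compounds
# over the forecast horizon.

def closed_loop_errors(true_value, one_step_bias, horizon):
    """Accumulated error if every step under-predicts by `one_step_bias`."""
    prediction = true_value
    errors = []
    for _ in range(horizon):
        prediction -= one_step_bias  # each step inherits the previous error
        errors.append(true_value - prediction)
    return errors

errs = closed_loop_errors(52.7, 0.5, 10)
print(errs[0], errs[-1])  # error grows from 0.5 on day 1 to 5.0 on day 10
```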

2. One hidden layer with 7 nodes:

Fig 7.8: NAR neural network (1 hidden layer with 7 nodes)

The network was trained, tested and validated on 2000 days of data.

The last 200 days of data were used to predict the next ten days of closing rates of USD in terms of INR.

Number of hidden layers: 1

Number of nodes in hidden layer 1: 7

Transfer function: Tansig

All other parameters are the same as specified above in the Network Specifications.

Date         Actual    Predicted  Error (Et)  (Et)^2
12-Oct-2012  52.7000   51.0344     1.6656     2.77422336
15-Oct-2012  53.1198   51.26       1.8598     3.45885604
16-Oct-2012  52.8193   51.1562     1.6631     2.76590161
17-Oct-2012  52.7510   49.8377     2.9133     8.48731689
18-Oct-2012  52.9690   50.3907     2.5783     6.64763089
19-Oct-2012  53.7175   49.2945     4.423      19.562929
22-Oct-2012  53.6735   47.5968     6.0767     36.9262829
23-Oct-2012  53.5895   46.8994     6.6901     44.757438
25-Oct-2012  53.6300   46.8249     6.8051     46.309386
29-Oct-2012  53.8065   45.4379     8.3686     70.033466

MSE  = 20.1436192
RMSE = 4.48816435

Table 7.4: Forecasting results (hidden layer 1 - 7 nodes)

% error for first day of prediction = 3.16053131%

Fig 7.9: training, testing and validation performance (hidden layer 1 - 7 nodes)

Fig 7.10: Graph of predicted and actual data (hidden layer 1 - 7 nodes)

In this iteration the performance further degrades as reflected in the above graph.

3. One hidden layer with 9 nodes:

Fig 7.11: NAR neural network (1 hidden layer with 9 nodes)

The network was trained, tested and validated on 2000 days of data.

The last 200 days of data were used to predict the next ten days of closing rates of USD in terms of INR.

Number of hidden layers: 1

Number of nodes in hidden layer 1: 9

Transfer function: Tansig

All other parameters are the same as specified above in the Network Specifications.

Date         Actual    Predicted  Error (Et)  (Et)^2
12-Oct-2012  52.7000   53.0268    -0.3268     0.10679824
15-Oct-2012  53.1198   53.2398    -0.12       0.0144
16-Oct-2012  52.8193   53.8509    -1.0316     1.06419856
17-Oct-2012  52.7510   53.9415    -1.1905     1.41729025
18-Oct-2012  52.9690   53.6569    -0.6879     0.47320641
19-Oct-2012  53.7175   53.4881     0.2294     0.05262436
22-Oct-2012  53.6735   54.0693    -0.3958     0.15665764
23-Oct-2012  53.5895   54.2137    -0.6242     0.38962564
25-Oct-2012  53.6300   55.1443    -1.5143     2.29310449
29-Oct-2012  53.8065   55.7152    -1.9087     3.64313569

MSE  = 0.80092011
RMSE = 0.8949414

Table 7.5: Forecasting results (hidden layer 1 - 9 nodes)

% error for first day of prediction = 0.62011385%

Fig 7.12: Graph of training, testing and validation performance (hidden layer 1 - 9 nodes)

Fig 7.13: Graph of predicted and actual data (hidden layer 1 - 9 nodes)

The graph above shows that the accuracy of prediction deteriorates as we advance further, because errors keep on accumulating. However, the degradation is far less severe here than in the 5-node and 7-node networks.

4. One hidden layer with 11 nodes:

Fig 7.14: NAR neural network (1 hidden layer with 11 nodes)

The network was trained, tested and validated on 2000 days of data.

The last 200 days of data were used to predict the next ten days of closing rates of USD in terms of INR.

Number of hidden layers: 1

Number of nodes in hidden layer 1: 11

Transfer function: Tansig

All other parameters are the same as specified above in the Network Specifications.

Date         Actual    Predicted  Error (Et)  (Et)^2
12-Oct-2012  52.7000   53.1583    -0.4583     0.21003889
15-Oct-2012  53.1198   53.0916     0.0282     0.00079524
16-Oct-2012  52.8193   53.0263    -0.207      0.042849
17-Oct-2012  52.7510   53.1062    -0.3552     0.12616704
18-Oct-2012  52.9690   53.0344    -0.0654     0.00427716
19-Oct-2012  53.7175   53.0825     0.635      0.403225
22-Oct-2012  53.6735   53.465      0.2085     0.04347225
23-Oct-2012  53.5895   53.7365    -0.147      0.021609
25-Oct-2012  53.6300   54.2032    -0.5732     0.32855824
29-Oct-2012  53.8065   54.3784    -0.5719     0.32706961

MSE  = 0.12567179
RMSE = 0.35450217

Table 7.6: Forecasting results (hidden layer 1 - 11 nodes)

% error for first day of prediction = 0.86963947%

Fig 7.15: Graph of training, testing and validation performance (hidden layer 1 - 11 nodes)

Fig 7.16: Graph of predicted and actual data (hidden layer 1 - 11 nodes)

5. One hidden layer with 13 nodes:

Fig 7.17: NAR neural network (1 hidden layer with 13 nodes)

The network was trained, tested and validated on 2000 days of data.

The last 200 days of data were used to predict the next ten days of closing rates of USD in terms of INR.

Number of hidden layers: 1

Number of nodes in hidden layer 1: 13

Transfer function: Tansig

All other parameters are the same as specified above in the Network Specifications.

Date         Actual    Predicted  Error (Et)  (Et)^2
12-Oct-2012  52.7000   52.7962    -0.0962     0.00925444
15-Oct-2012  53.1198   53.1009     0.0189     0.00035721
16-Oct-2012  52.8193   53.4033    -0.584      0.341056
17-Oct-2012  52.7510   53.6194    -0.8684     0.75411856
18-Oct-2012  52.9690   52.8934     0.0756     0.00571536
19-Oct-2012  53.7175   53.2016     0.5159     0.26615281
22-Oct-2012  53.6735   53.5526     0.1209     0.01461681
23-Oct-2012  53.5895   54.612     -1.0225     1.04550625
25-Oct-2012  53.6300   54.5773    -0.9473     0.89737729
29-Oct-2012  53.8065   54.3264    -0.5199     0.27029601

MSE  = 0.30037089
RMSE = 0.54806103

Table 7.7: Forecasting results (hidden layer 1 - 13 nodes)

% error for first day of prediction = 0.18254269%

Fig 7.18: Graph of training, testing and validation performance (hidden layer 1 - 13 nodes)

Fig 7.19: Graph of predicted and actual data (hidden layer 1 - 13 nodes)

6. One hidden layer with 15 nodes:

This iteration has already been performed in section 7.8.1; refer to that section for details.

7.9.1.1 Conclusion

After performing all the above iterations it was found that the network performed best when the number of nodes in the first hidden layer was 13.

Thus, in the next set of iterations the number of nodes in the first hidden layer will be fixed at 13.

7.9.2 Variations in the second hidden layer

1. Second hidden layer with 5 nodes:

Fig 7.20: NAR neural network (2 hidden layers with 13 and 5 nodes)

The network was trained, tested and validated on 2000 days of data.

The last 200 days of data were used to predict the next ten days of closing rates of USD in terms of INR.

Number of hidden layers: 2

Number of nodes in hidden layer 1: 13

Number of nodes in hidden layer 2: 5

Transfer function: Tansig

All other parameters are the same as specified above in the Network Specifications.

Date         Actual    Predicted  Error (Et)  (Et)^2
12-Oct-2012  52.7000   53.3353    -0.6353     0.40360609
15-Oct-2012  53.1198   52.9892     0.1306     0.01705636
16-Oct-2012  52.8193   52.6783     0.141      0.019881
17-Oct-2012  52.7510   51.9271     0.8239     0.67881121
18-Oct-2012  52.9690   50.9971     1.9719     3.88838961
19-Oct-2012  53.7175   50.4669     3.2506     10.5664004
22-Oct-2012  53.6735   50.2186     3.4549     11.936334
23-Oct-2012  53.5895   50.3717     3.2178     10.3542368
25-Oct-2012  53.6300   50.7242     2.9058     8.44367364
29-Oct-2012  53.8065   51.383      2.4235     5.87335225

MSE  = 4.34847845
RMSE = 2.08530057

Table 7.8: Forecasting results (hidden layer 1 - 13 nodes, hidden layer 2 - 5 nodes)

% error for first day of prediction = 1.20550285%

Fig 7.21: Graph of predicted and actual data (hidden layer 1 - 13 nodes, hidden layer 2 - 5 nodes)

2. Second hidden layer with 7 nodes:

Fig 7.22: NAR neural network (2 hidden layers with 13 and 7 nodes)

The network was trained, tested and validated on 2000 days of data.

The last 200 days of data were used to predict the next ten days of closing rates of USD in terms of INR.

Number of hidden layers: 2

Number of nodes in hidden layer 1: 13

Number of nodes in hidden layer 2: 7

Transfer function: Tansig

All other parameters are the same as specified above in the Network Specifications.

Date         Actual    Predicted  Error (Et)  (Et)^2
12-Oct-2012  52.7000   52.0131     0.6869     0.47183161
15-Oct-2012  53.1198   51.9221     1.1977     1.43448529
16-Oct-2012  52.8193   51.4426     1.3767     1.89530289
17-Oct-2012  52.7510   51.4051     1.3459     1.81144681
18-Oct-2012  52.9690   50.8614     2.1076     4.44197776
19-Oct-2012  53.7175   50.2728     3.4447     11.8659581
22-Oct-2012  53.6735   51.6224     2.0511     4.20701121
23-Oct-2012  53.5895   51.1653     2.4242     5.87674564
25-Oct-2012  53.6300   48.9714     4.6586     21.702554
29-Oct-2012  53.8065   49.4631     4.3434     18.8651236

MSE  = 6.04770307
RMSE = 2.45920781

Table 7.9: Forecasting results (hidden layer 1 - 13 nodes, hidden layer 2 - 7 nodes)

% error for first day of prediction = 1.30341556%

Fig 7.23: Graph of predicted and actual data (hidden layer 1 - 13 nodes, hidden layer 2 - 7 nodes)

3. Second hidden layer with 9 nodes:

Fig 7.24: NAR neural network (2 hidden layers with 13 and 9 nodes)

The network was trained, tested and validated on 2000 days of data.

The last 200 days of data were used to predict the next ten days of closing rates of USD in terms of INR.

Number of hidden layers: 2

Number of nodes in hidden layer 1: 13

Number of nodes in hidden layer 2: 9

Transfer function: Tansig

All other parameters are the same as specified above in the Network Specifications.

Date         Actual    Predicted  Error (Et)  (Et)^2
12-Oct-2012  52.7000   53.4863    -0.7863     0.61826769
15-Oct-2012  53.1198   53.1401    -0.0203     0.00041209
16-Oct-2012  52.8193   53.8472    -1.0279     1.05657841
17-Oct-2012  52.7510   54.7489    -1.9979     3.99160441
18-Oct-2012  52.9690   54.7306    -1.7616     3.10323456
19-Oct-2012  53.7175   54.8142    -1.0967     1.20275089
22-Oct-2012  53.6735   54.2166    -0.5431     0.29495761
23-Oct-2012  53.5895   54.5308    -0.9413     0.88604569
25-Oct-2012  53.6300   53.7434    -0.1134     0.01285956
29-Oct-2012  53.8065   52.5413     1.2652     1.60073104

MSE  = 1.0639535
RMSE = 1.03148121

Table 7.10: Forecasting results (hidden layer 1 - 13 nodes, hidden layer 2 - 9 nodes)

% error for first day of prediction = 1.49203036%

Fig 7.25: Graph of predicted and actual data (hidden layer 1 - 13 nodes, hidden layer 2 - 9 nodes)

4. Second hidden layer with 11 nodes:

Fig 7.26: NAR neural network (2 hidden layers with 13 and 11 nodes)

The network was trained, tested and validated on 2000 days of data.

The last 200 days of data were used to predict the next ten days of closing rates of USD in terms of INR.

Number of hidden layers: 2

Number of nodes in hidden layer 1: 13

Number of nodes in hidden layer 2: 11

Transfer function: Tansig

All other parameters are the same as specified above in the Network Specifications.

Date         Actual    Predicted  Error (Et)  (Et)^2
12-Oct-2012  52.7000   54.5198    -1.8198     3.31167204
15-Oct-2012  53.1198   54.9735    -1.8537     3.43620369
16-Oct-2012  52.8193   54.9967    -2.1774     4.74107076
17-Oct-2012  52.7510   55.0175    -2.2665     5.13702225
18-Oct-2012  52.9690   54.7183    -1.7493     3.06005049
19-Oct-2012  53.7175   55.0263    -1.3088     1.71295744
22-Oct-2012  53.6735   55.4017    -1.7282     2.98667524
23-Oct-2012  53.5895   55.9859    -2.3964     5.74273296
25-Oct-2012  53.6300   56.1982    -2.5682     6.59565124
29-Oct-2012  53.8065   56.235     -2.4285     5.89761225

MSE  = 3.55180403
RMSE = 1.88462305

Table 7.11: Forecasting results (hidden layer 1 - 13 nodes, hidden layer 2 - 11 nodes)

% error for first day of prediction = 3.45313093%

Fig 7.27: Graph of predicted and actual data (hidden layer 1 - 13 nodes, hidden layer 2 - 11 nodes)

5. Second hidden layer with 13 nodes:

Fig 7.28: NAR neural network (2 hidden layers with 13 and 13 nodes)

The network was trained, tested and validated on 2000 days of data.

The last 200 days of data were used to predict the next ten days of closing rates of USD in terms of INR.

Number of hidden layers: 2

Number of nodes in hidden layer 1: 13

Number of nodes in hidden layer 2: 13

Transfer function: Tansig

All other parameters are the same as specified above in the Network Specifications.

Date         Actual    Predicted  Error (Et)  (Et)^2
12-Oct-2012  52.7000   53.1766    -0.4766     0.22714756
15-Oct-2012  53.1198   53.1297    -0.0099     9.801E-05
16-Oct-2012  52.8193   53.4875    -0.6682     0.44649124
17-Oct-2012  52.7510   52.841     -0.09       0.0081
18-Oct-2012  52.9690   51.5629     1.4061     1.97711721
19-Oct-2012  53.7175   51.861      1.8565     3.44659225
22-Oct-2012  53.6735   51.0193     2.6542     7.04477764
23-Oct-2012  53.5895   50.9696     2.6199     6.86387601
25-Oct-2012  53.6300   50.5588     3.0712     9.43226944
29-Oct-2012  53.8065   50.2859     3.5206     12.3946244

MSE  = 3.48675781
RMSE = 1.86728622

Table 7.12: Forecasting results (hidden layer 1 - 13 nodes, hidden layer 2 - 13 nodes)

% error for first day of prediction = 0.90436433%

Fig 7.29: Graph of predicted and actual data (hidden layer 1 - 13 nodes, hidden layer 2 - 13 nodes)

6. Second hidden layer with 15 nodes:

Fig 7.30: NAR neural network (2 hidden layers with 13 and 15 nodes)

The network was trained, tested and validated on 2000 days of data.

The last 200 days of data were used to predict the next ten days of closing rates of USD in terms of INR.

Number of hidden layers: 2

Number of nodes in hidden layer 1: 13

Number of nodes in hidden layer 2: 15

Transfer function: Tansig

All other parameters are the same as specified above in the Network Specifications.

Date         Actual    Predicted  Error (Et)  (Et)^2
12-Oct-2012  52.7000   54.8508    -2.1508     4.62594064
15-Oct-2012  53.1198   55.2277    -2.1079     4.44324241
16-Oct-2012  52.8193   54.7237    -1.9044     3.62673936
17-Oct-2012  52.7510   54.2315    -1.4805     2.19188025
18-Oct-2012  52.9690   54.5895    -1.6205     2.62602025
19-Oct-2012  53.7175   54.9985    -1.281      1.640961
22-Oct-2012  53.6735   54.1768    -0.5033     0.25331089
23-Oct-2012  53.5895   53.5131     0.0764     0.00583696
25-Oct-2012  53.6300   53.07       0.56       0.3136
29-Oct-2012  53.8065   53.2425     0.564      0.318096

MSE  = 1.67046898
RMSE = 1.29246624

Table 7.13: Forecasting results (hidden layer 1 - 13 nodes, hidden layer 2 - 15 nodes)

% error for first day of prediction = 4.08121442%

Fig 7.31: Graph of predicted and actual data (hidden layer 1 - 13 nodes, hidden layer 2 - 15 nodes)

7.9.2.1 Conclusion

Since the inclusion of a second hidden layer did not enhance the performance of the network in any way, it can be stated that this network does not require a second layer at all.

Although the network performed best when the second layer had 13 nodes, the performance criterion was still not satisfied.

Thus the idea of adding a second hidden layer has been dropped, and the remaining iterations use only a single hidden layer.

7.10 Multiple input - multiple output NAR network

Fig 7.32: Multiple input - multiple output NAR network

RESULTS

For USD:

Actual    Predicted  Error (Et)  (Et)^2
55.7630   55.5908     0.1722     0.02965284
55.5990   55.6548    -0.0558     0.00311364
56.0143   55.4352     0.5791     0.33535681
55.8585   55.2008     0.6577     0.43256929
56.4178   54.4866     1.9312     3.72953344

MSE  = 0.9060452
RMSE = 0.95186407

Table 7.14: Forecasting results (Multiple input - multiple output NAR network)

% error for first day of prediction = 0.30880691%

Fig 7.33: Graph of predicted and actual data (Multiple input - multiple output NAR network) for USD

For Euro:

Actual    Predicted  Error (Et)  (Et)^2
68.1585   69.0348    -0.8763     0.76790169
68.602    69.7774    -1.1754     1.38156516
68.6125   70.0564    -1.4439     2.08484721
69.0385   70.4136    -1.3751     1.89090001
69.456    70.0695    -0.6135     0.37638225

MSE  = 1.30031926
RMSE = 1.14031542

Table 7.15: Forecasting results (Multiple input - multiple output NAR network)

% error for first day of prediction = -1.2856797%

Fig 7.34: Graph of predicted and actual data (Multiple input - multiple output NAR network) for Euro

7.11 CONCLUSION

It may be concluded from the results that the network used is valid for short-term predictions only, and that in order to make the network viable for long-term predictions other network parameters must be considered.

In order to make the network feasible for future prediction, a number of other variables can be considered, for example: gold reserves, daily traded volume, net current-account balance, etc.

Apart from this, the network could be trained with different algorithms, and some network parameters could be altered to improve its predictions.

REFERENCES:

[1] Joarder Kamruzzaman and Ruhul A. Sarker, "Forecasting of Currency Exchange Rates using ANN: A Case Study," in Proceedings of the 2003 International Conference on Neural Networks and Signal Processing, 2003.

[2] Yaser S. Abu-Mostafa and Amir F. Atiya, "Introduction to Financial Forecasting," Applied Intelligence, vol. 6, pp. 205-213, 1996.

[3] Suresh Kumar Sharma and Vinod Sharma, "Proficient Prophecy of Foreign Exchange Rate using Artificial Neural Network: A Case of USD to INR," International Journal of Computer Applications (0975-8887), vol. 43, no. 1, April 2012.

[4] Adewole Adetunji Philip, Akinwale Adio Taofiki and Akintomide Ayo Bidemi, "Artificial Neural Network Model for Forecasting Foreign Exchange Rate," World of Computer Science and Information Technology Journal (WCSIT), ISSN: 2221-0741, vol. 1, no. 3, pp. 110-118, 2011.

[5] Lean Yu, Shouyang Wang, Wei Huang, and Kin Keung Lai, "Are Foreign Exchange Rates Predictable? A Survey from Artificial Neural Networks Perspective."
