Technical Report on Adaptive Business Intelligence (ABI) and Its Business Applications


This report discusses Adaptive Business Intelligence (ABI) and how it supports decision making across fields of business and science by applying technology to real-world problems. It focuses on Neural Networks as one of these technologies in particular, detailing their theory, implementation, applications, and future developments, along with the models used to solve problems.


Introduction to Adaptive Business Intelligence

Computer technology has advanced over the years, creating new sources of information for businesses and organizations. However, with the introduction of new raw sources and an abundance of information, an increasing number of complex challenges have arisen that were nonexistent earlier, making information-based decisions more difficult. Conventional computing plays a limited role and cannot keep up with the fast decision pace required in today's competitive market.

Businesses are moving towards a more decision-driven approach by integrating their technologies and methodologies. In order to create a rich source of knowledge, businesses are now looking to capitalize on technology and explore its use in decision making by tapping into their existing and new sources of information.

The goals of Business Intelligence systems are as below:

Retrieve data from various sources

Transform the data into information

Use the information to produce knowledge

Display the knowledge in a user-friendly graphical interface
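The four steps above can be sketched as a toy pipeline in Python; the raw records, field names, and totals are illustrative assumptions, not a real data source.

```python
# Toy sketch of the four BI steps: retrieve -> transform -> derive knowledge -> display.
# The raw records and field names are illustrative, not a real source.

raw_rows = [
    {"region": "North", "sales": "120"},
    {"region": "North", "sales": "80"},
    {"region": "South", "sales": None},      # invalid record to be cleaned out
    {"region": "South", "sales": "200"},
]

# Steps 1-2: transform raw data into information (typed, cleaned rows)
information = [
    {"region": r["region"], "sales": int(r["sales"])}
    for r in raw_rows if r["sales"] is not None
]

# Step 3: produce knowledge (an aggregated fact not visible in any single row)
knowledge = {}
for row in information:
    knowledge[row["region"]] = knowledge.get(row["region"], 0) + row["sales"]

# Step 4: display the knowledge in a (text-based) user-friendly form
for region, total in sorted(knowledge.items()):
    print(f"{region}: {total}")
```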

Business Intelligence is defined as "a broad category of application programs and technology for gathering, storing, analyzing, and providing access to data" (Michalewicz & Schmidt)

Adaptive Business Intelligence has gained further market importance because it is used not only to transform information into knowledge, but also to facilitate the decision-making process.

The diagram below shows the process of creating knowledge.

Figure (Michalewicz & Schmidt)

Data - Collection of values in the forms of bits, numbers, symbols and objects

Information - Data that is preprocessed, structured, cleaned, and free of redundant values

Knowledge - The facts obtained from information that is newly discovered or learned

The knowledge obtained from the data can further be used to make business decisions, as shown in figure 2. Many organizations consider knowledge to be the end point of the process. However, in today's dynamic and competitive market, management may require further assistance to make decisions.

Figure (Michalewicz & Schmidt)

Adaptive Business Intelligence includes the following modules:




Prediction - Prediction is a core aspect of ABI and provides strong support to decision making: given an input, the prediction module produces a forecast of the future as its output. The module can train on and learn from historical data, which enables it to predict outputs.

Figure (Michalewicz & Schmidt)

Optimization - The optimization module tries different combinations of input data and refines them, using the predicted output of each combination to guide the next inputs it feeds back into the prediction module.

Figure (Michalewicz & Schmidt)
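A toy version of that loop in Python, assuming a stand-in prediction function and a brute-force search over candidate input combinations (all names and numbers here are hypothetical):

```python
# Optimization module sketch: try candidate inputs, score each through the
# prediction module, and keep the best. The prediction function is a stand-in.

def predict(price, ad_spend):
    # hypothetical prediction module: forecast profit from two decision inputs
    return -(price - 7) ** 2 - (ad_spend - 3) ** 2 + 100

best_inputs, best_output = None, float("-inf")
for price in range(1, 11):           # candidate input combinations
    for ad_spend in range(1, 6):
        output = predict(price, ad_spend)
        if output > best_output:     # predicted output guides the search
            best_inputs, best_output = (price, ad_spend), output

print(best_inputs, best_output)  # (7, 3) 100
```

In a real ABI system the exhaustive loop would be replaced by a smarter search (e.g. an evolutionary algorithm), but the predict-score-refine cycle is the same.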

Adaptability - Prediction inaccuracies may occur for varying reasons in an ever-changing environment, so it is important to adapt as the environment changes. A system is truly adaptive if it can learn from its own errors and adjust itself to make more accurate predictions.

Figure (Michalewicz & Schmidt)

The adaptability module in the above figure is responsible for comparing the predicted output with the actual output. The error between the two is used to tune the prediction module and reduce the prediction error.
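A minimal sketch of that feedback loop in Python, assuming a one-parameter linear prediction module tuned from its observed error (the data and learning rate are made up):

```python
# Sketch: the adaptability step compares predicted vs. actual output and
# tunes the prediction module's parameter to shrink the error.
# The history data and learning rate are illustrative assumptions.

weight = 0.0            # parameter of the prediction module
learning_rate = 0.1

history = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, actual output) pairs

for _ in range(200):                # repeated predict -> compare -> adapt cycles
    for x, actual in history:
        predicted = weight * x      # prediction module
        error = actual - predicted  # adaptability: compare predicted and actual
        weight += learning_rate * error * x   # tune the prediction module

print(round(weight, 3))  # converges toward 2.0, the true input/output ratio
```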

To understand the interactions between modules, the figure below displays the structure of Adaptive Business Intelligence.

Figure (Michalewicz & Schmidt)

As the figure shows, each module interacts with the others and supports ABI. The interactions can be broken down into the elements below that support decision making:

Data mining

Predictive modeling




Predictive and adaptive models are a crucial part of decision making and thereby form the core of Adaptive Business Intelligence.

The use of ABI to achieve business success is evident in industry, where it has also been applied to business process alignment and strategy. For example, the fundamentals of ABI are evident in Six Sigma, whose methodologies have adopted ABI's core fundamentals to achieve the following:

Defect elimination

Waste control

Quality control

The steps in the methodology are as below

Figure (Michalewicz & Schmidt)

Objective of the use of ABI

The objective of systems based on ABI is to solve real-world problems that have the characteristics below:

Problems with complex constraints

Time changing environments

Multiple Objectives

Large number of possible solutions

Popular ABI Technologies

Organizations' continuous drive for improved decision making has put further focus on Adaptive Business Intelligence and fueled research and development in the following ABI technologies:

1. Evolutionary computing

2. Support vector machine

3. Swarm intelligence

4. Data mining

5. Fuzzy logic

6. Neural network

7. Agent-based modeling

8. Machine learning

9. Expert systems

Neural Network

Introduction to Neural Networks

Artificial neural networks (ANNs), as the name suggests, are designed to resemble the neurons in the brain; they are also simply called neural networks. They can be described as "An interconnected assembly of simple processing elements, units or nodes, whose functionality is loosely based on the animal neuron. The processing ability of the network is stored in the inter unit connection strengths, or weights, obtained by a process of adaptation to, or learning from, a set of training patterns" (Gurney, 1997)

As stated earlier, a neural network is similar in functionality to the human brain. The brain contains billions of nerve cells called neurons, which communicate with each other via electrical signals. Synapses (electrochemical junctions) on the branches of each cell control the connections between neurons. A neuron has thousands of connections to other neurons and continuously receives incoming signals, which reach the cell body and are summed (integrated). When the combined signal reaches a certain threshold, the neuron generates a voltage impulse in response. A branching fiber known as the axon transmits this impulse to other neurons.

Figure (Gurney, 1997)

Below is an example of an artificial neuron.

Figure (Gurney, 1997)

Below is an example of a neural network.

Figure (Gurney, 1997)

The threshold logic unit (TLU) is a simple example: when the weighted sum of the inputs exceeds a threshold, the output is 1; otherwise it is 0.
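That TLU rule is small enough to write directly; the weights and threshold below are illustrative choices:

```python
# Threshold logic unit: output 1 when the weighted input sum exceeds the
# threshold, otherwise 0.

def tlu(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# With two equal weights of 1 and a threshold of 1.5, the unit behaves as a
# two-input AND gate (in the spirit of the McCulloch-Pitts model described below).
print(tlu([1, 1], [1, 1], 1.5))  # 1
print(tlu([1, 0], [1, 1], 1.5))  # 0
```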

Comparing the human brain with neural networks, both can learn and become experts in a particular area. The crucial difference between the two is that humans can forget whereas neural networks cannot: everything a neural network learns is hard coded. (Jesan, 2003)


One of the first attempts to model a neuron dates back to 1943, when McCulloch and Pitts experimented with a model that used two inputs of equal weight to produce a binary result. The experiment is known today as the logic circuit.

Neural computers were in use as early as 1950, built to solve problem-specific tasks using an elemental base of AND, OR, NOT, and similar gates; these formed the basis of threshold logic elements.

In 1958, Rosenblatt developed a model called the perceptron, changing the weights of the inputs randomly through trial and error to achieve "learning".

Back propagation, a concept developed by Werbos in 1974, enabled the training of perceptrons by adjusting the weights based on known desired outputs.

Rosenblatt's and Werbos's experiments spurred a number of researchers to produce the more intricate and complex models used within the business realm to date.

Why use Neural Networks?

Neural networks display the characteristics below, which help solve real-world problems and support predictions.

Learn from historical knowledge and forecast

Ability to perform their functions on their own as the algorithm can be self-trained.

Ability to determine their function based on sample input

Ability to produce output for inputs they have not been explicitly taught to handle

Where are Neural Networks used?

Neural networks are widely used in the areas below:

Classification - feature extraction, image matching, pattern matching

Noise Reduction - Recognition of patterns in noise and suppression of noise to produce noiseless outputs

Prediction and forecasting - prediction and forecasting for sales, stock, financial applications

An Explanation of Neural network model

The below sections will discuss neural network models and analyze the structure and formation

Neuron - A Mathematical Model

Earlier we discussed the biological model of how information is processed in the brain. Here we depict a simple neuron as a mathematical model so it can be used in artificial neural networks.

Figure (He & Xu, 2010)

The above figure depicts inputs x1, x2, ..., xn to neuron j, with weights w1j, w2j, ..., wnj respectively. θj is the threshold of neuron j, f is the activation function, and yj is the output of the neuron.

The relationship can be denoted as yj = f(Σi wij·xi − θj): the weighted sum of the inputs, offset by the threshold, is passed through the activation function.
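A direct reading of that relationship in Python, using a sigmoid as the activation function f (the choice of sigmoid and the sample values are assumptions):

```python
import math

# Mathematical neuron: weighted sum of inputs, offset by the threshold theta,
# passed through an activation function f.

def neuron(xs, ws, theta, f):
    return f(sum(x * w for x, w in zip(xs, ws)) - theta)

def sigmoid(a):
    # a common smooth choice for f; the TLU's step function would also work
    return 1.0 / (1.0 + math.exp(-a))

y = neuron([0.5, 0.8], [0.4, 0.6], theta=0.2, f=sigmoid)
print(round(y, 4))
```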

Feedforward/Feedback Neural Networks

A number of neural network models have been developed. In this section we divide them into the two categories below:

Feedforward Neural Network

Feedback Neural Network

Feedforward Neural Network

Feedforward neural networks have an input layer, several hidden middle layers, and one output layer.

Figure (He & Xu, 2010) displays a network with a single hidden layer.

Feedback Neural Network

In a feedback neural network, any two neurons can be connected, including self-feedback of a neuron.

Figure (He & Xu, 2010) displays a simple feedback neural network.

Solid lines - the connection weights for the forward-transferring network nodes.

Dashed lines - the connection weights for the feedback-transferring nodes.

The input signals are repeatedly transferred between neurons; after being transformed several times, the network tends toward a particular state.
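A minimal illustration in Python: two neurons with feedback connections exchange signals repeatedly until the state settles (the weights and biases are arbitrary illustrative values, chosen small enough for the loop to converge):

```python
# Two neurons with feedback connections; each update transfers signals
# between them, and the state tends toward a fixed point.
# Weights and biases are illustrative assumptions.

w12, w21 = 0.5, 0.3     # feedback connection weights
bias1, bias2 = 1.0, 0.5 # external input to each neuron
s1, s2 = 0.0, 0.0       # initial neuron states

for _ in range(100):    # repeated signal transfer between the neurons
    s1, s2 = bias1 + w12 * s2, bias2 + w21 * s1

print(round(s1, 4), round(s2, 4))  # settles at the fixed point of the loop
```

Because the feedback weights are small (their product is well below 1), the repeated updates contract toward a single stable state rather than oscillating.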

Back Propagation Neural Network (BPNN)

"Back propagation BP algorithm is a systematic method of training multi-layer artificial neural networks. It acquires its name from the fact that during training, information is propagated back through the network to adjust the connection weights" (Nagalakshmi, 2013)

A BP algorithm has two phases: forward propagation of the input, and back propagation of the error. The following steps achieve back propagation:

The input is transferred to the hidden layer

The hidden layer processes the input and transfers it to the output layer

The neurons in each layer can only affect the state of the neurons in the next layer

If the expected output is not obtained at the output layer, the algorithm shifts to back propagation: the output error signal is returned along the original connection pathway

As the error signal travels back, the weights of each layer are modified

After a number of iterations, the error between the expected output and the actual output decreases
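The forward-then-backward steps above can be sketched on a tiny one-hidden-layer network in pure Python; the OR-style training set, learning rate, and network size are illustrative assumptions, and bias terms are omitted for brevity:

```python
import math
import random

random.seed(0)

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

# Tiny 2-input, 2-hidden, 1-output network trained with back propagation.
# Weights start random; the logical-OR training set is an illustrative choice.
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input -> hidden
w_o = [random.uniform(-1, 1) for _ in range(2)]                      # hidden -> output
lr = 0.5

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def forward(x):
    # forward propagation: input -> hidden -> output
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_h]
    y = sigmoid(sum(w * hi for w, hi in zip(w_o, h)))
    return h, y

def total_error():
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

err_before = total_error()
for _ in range(2000):
    for x, t in data:
        h, y = forward(x)
        # back propagation: error signal returns along the connections
        delta_o = (t - y) * y * (1 - y)
        delta_h = [delta_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
        # the weights of each layer are modified on the way back
        for j in range(2):
            w_o[j] += lr * delta_o * h[j]
            for i in range(2):
                w_h[j][i] += lr * delta_h[j] * x[i]
err_after = total_error()

print(err_after < err_before)  # the error shrinks over the iterations
```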

Solving problem with neural network

Neural networks can be used to solve a large number of real-world problems, with the help of neuromathematics, a branch of computer mathematics that deals with neural network algorithms.

A neural network has multiple connections among the neurons in its hidden middle layers; however, the number of layers must be kept small to provide an optimal solution.

The properties of neural networks that help solve problems are as below:

The large scale synchronous operations over operands that the algorithm uses

Flexible and complex functional transformation of input into output

Allows for the analytical description to be transformed from the input to output space

Previous property allows adjustment for the control of the algorithm functioning

The complexity of the neural network solution reflects the problem itself

Neural networks correspond to modern technology and microelectronics

(Galushkin, 2007)

Selection for problems to solve using Neural Network

A neural network should be adopted to solve a problem only when the data and the expected results are understood, with the help of calculations.

The neural network algorithm developer should be objective in solving the problem and should validate the data while taking input from the requestor or originator of the problem.

Physical problems are usually divided into two types:

Unformalized - cannot be expressed using mathematical formulas, graphs, structures, etc.

Formalized - can be formulated using mathematical equations

Unformalized problems are usually more complex and need careful analysis as they grow in complexity; neural network frameworks can provide valuable support in solving them.

Neural networks can work efficiently with problems in the input information space to generate valuable output. They are also used to build flexible nonlinear approximation models for pattern recognition.

Neural network algorithms are continuously being modified and developed to solve dynamic system control problems, reduce solution time, and help optimize results.

They also broaden the scope of linear algebraic equations by utilizing the hidden middle layers and the feedback algorithms present in neural networks.

Figure (Galushkin, 2007) displays the logical structure of the selection procedure.

It is sometimes efficient to use neural networks when the solution demands optimization across processors and cluster computers.

The stages involved in solving a problem with a neural network are as below:

Create and describe the initial data

Decide on the input signal x(n) to be used

Creation of optimization function of the neural network

Establish the output signal y(n)

Find the desired output signal of the neural network

Find the error signal vector

Establish the optimization function

Find the transformation performed by the network

Search the analytical expression for the gradient of the secondary optimization function

Generate the adjustment algorithm

Establish the ideal input for the verification of the solution procedure

Establish plans for the experiments

If neural networks are chosen for the design of a system, it generally follows two structures as below

Flexible structure - Multiple layers and number of neurons in each layer to facilitate adjustment

Fixed Structure

Applications of Neural Networks

The following are a few application domains that have gained recognition for neural networks.

Character recognition - Character recognition has gained a high amount of attention and found various uses in portable mobile devices and the reading of scanned images. Neural networks can be trained to recognize such patterns.

Image compression - Neural networks can be used for image compression, as they can process large amounts of information relatively quickly.

Stock Market prediction - Companies have claimed to have gained a high return using stock market predictions. Using neural networks, trends can be analyzed and further used for predictions.

Travelling salesman's problem - The problem can be solved by using neural network to determine the shortest path.

Medicine - neural networks has gained a high amount of attention in cardiopulmonary diagnostics.

Electronic nose - Patterns of odor can be sensed using neural networks

Security - it can be used for identifying pattern in human criminal offences

Loan applications - Banks are now making use of neural networks to reduce the failure rate of repayment. Neural networks are used to decide whether to grant an application for a loan or not.

Network Optimization - To predict network utilization and manage resource usage.

Software estimation - Neural networks can be used to estimate software attributes, including cost, resources, etc.

Miscellaneous applications - Other notable uses of neural networks include self-driving cars, wireless networks, sensor monitoring, and motion detection.

Popular fields that have utilized Neural networks can be found in the appendix

Implementation of Neural Networks

To show how neural networks can be used and implemented, the section below discusses a journal article on applying neural networks to a stock exchange.

Below are some of the tasks needed to complete the model:

Define the scope and problem statement

Establish the preliminary model

Train the model

Simulate Results

Verify the results

Journal - Modeling Stock Market Exchange Prices Using Artificial Neural Network: A Study of Amman Stock Exchange

Authors - S. M. Alhaj Ali, A. A. Abu Hammad, M. S. Samhouri, and A. Al-Ghandoor

Define the problem

The stock market presents ample opportunity for high investment returns; however, shareholders may find the market difficult to understand because many factors cause fluctuations, e.g. government regulations, insider trading, and commodity price movements. The ability to make an approximate estimate of a future stock price could therefore hold great value, letting shareholders plan and accurately select the best trading opportunity in advance.

Establishing the model

A neural network is used to model and forecast stock market prices.

The neural network is represented by the mathematical model below.

Figure (Alhaj Ali et al., 2011): Neuron mathematical model

x1(t), ..., xn(t) represent the input cells

y(t) represents the output cell

v1, ..., vn represent the weights

v0 is the firing threshold

f is the activation function

As a result, the output can be represented in the mathematical form

y(t) = f( v1·x1(t) + ... + vn·xn(t) − v0 )    (1)
The model can further be generalized to several layers, where L denotes the number of layers (equation 2).

There can, however, be more than one layer; the first layer's output then serves as the second layer's input, and so on. For a three-layer network this composition can be represented in mathematical form as

y(t) = f3( Σk vk(3) · f2( Σj vj(2) · f1( Σi vi(1)·xi(t) − v0(1) ) − v0(2) ) − v0(3) )    (3)
f1, f2, f3 represent the activation functions for layers 1, 2, and 3, respectively.

n1, n2, n3 represent the number of input signals for layers 1, 2, and 3, respectively.

To forecast, the model was built from two or three layers, with 13 inputs and a single output.

The variables were set as follows for equation (3):

y: the 14th-day price

x1 - x13: the stock prices for the first 13 days of the month
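Preparing those input/output pairs from a price series is a simple sliding-window step; the prices below are fabricated for illustration, not Amman Stock Exchange data:

```python
# Build (13-day window -> 14th-day target) training pairs from a price series.
# The prices here are made-up illustrative values.

prices = [float(p) for p in range(100, 130)]  # 30 days of dummy prices

window = 13
pairs = [
    (prices[i:i + window], prices[i + window])  # x1..x13 -> y (the 14th day)
    for i in range(len(prices) - window)
]

x, y = pairs[0]
print(len(pairs), len(x), y)  # 17 13 113.0
```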

Train the model

After structuring the model, the network is ready for training (learning). The initial weights are chosen randomly.

There are three approaches to learning

Supervised learning

Unsupervised learning

Reinforcement learning

Supervised Learning

The network is supplied with a training data set containing inputs and their corresponding outputs. The weights are adjusted one observation at a time; learning is achieved by minimizing the least-squares error.

Reinforcement learning

Learning by trial and error to maximize the expected value of a criterion function.

Unsupervised learning

There is no target information or performance judge in the training data; the features are inherent in the training data itself.

Simulating Results

MATLAB was used to simulate the results. The weights were initialized automatically using random numbers. A full year of data was used as training data for each company, and validation was done using the next price.

Case 1: Arab Engineering Industry

Back propagation with a one-step secant method was used for training; the network followed a two-layer approach:

Hyperbolic tangent sigmoid activation function (First Layer) - the first layer consists of 14 neurons

Hard limit activation function (Second Layer) - The second layer consists of 1 neuron
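The two-layer structure described above can be sketched as a forward pass in Python; the weights are random placeholders, since the paper's trained values are not reproduced here:

```python
import math
import random

random.seed(1)

# Forward pass through the two-layer structure described above:
# layer 1: 14 neurons with hyperbolic tangent sigmoid activation,
# layer 2: 1 neuron with a hard-limit activation.
# The weights are random stand-ins, not the paper's trained values.

n_inputs, n_hidden = 13, 14
w1 = [[random.uniform(-1, 1) for _ in range(n_inputs)] for _ in range(n_hidden)]
w2 = [random.uniform(-1, 1) for _ in range(n_hidden)]

def hardlim(a):
    return 1 if a >= 0 else 0

def forward(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return hardlim(sum(w * hi for w, hi in zip(w2, h)))

out = forward([0.1] * n_inputs)  # 13 normalized daily prices in, 0 or 1 out
print(out)
```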

The stock market data for the year, as shown in the figure, is entered into the simulation software.

Figure (Alhaj Ali et al., 2011)

The training error generated by the simulation software is shown in the figure below.

Figure (Alhaj Ali et al., 2011)

The Training output against the target is as shown below

Figure (Alhaj Ali et al., 2011)

The forecasted price against actual price is as shown below

Figure (Alhaj Ali et al., 2011)

In the case above, the model came very close to the actual data. This model can therefore help predict stocks and enable shareholders to take calculated risks while investing.

Future Developments

There has been continuous research and development in the neural networks space. While the main objective of neural networks is to simulate the human brain, they have far surpassed traditional computing in domains like image recognition and forecasting.

Neural Networks and fuzzy Logic

Advances in fuzzy logic have proved very useful in the ABI space. Fuzzy logic is a form of logic that reasons with degrees of truth, and it can be used to build controllers that produce sensible results from imprecise inputs.

When integrated with fuzzy logic to mimic the human mind, neural networks could be used in aerospace engineering projects, to understand distorted speech, decipher sloppy handwriting, and recognize and classify images. (Zadeh)

Neuro-fuzzy products are likely to be used extensively in the years to come.

Pulsed neural networks

Pulsed neural networks are networks of spiking neurons and draw inspiration from neurophysiology. The benefit of pulsed neurons is the amount of data that can be transferred in a relatively timely manner with only a few spikes; they integrate and rely on temporal information, making them efficient at simulating complex real-world problems. (Kunkle & Merrigan, 2002)
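A minimal leaky integrate-and-fire sketch of a pulsed (spiking) neuron; the leak, threshold, and input constants are illustrative assumptions:

```python
# Leaky integrate-and-fire neuron: the membrane potential integrates input
# current, leaks over time, and emits a spike when it crosses the threshold.
# The leak, threshold, and input values are illustrative assumptions.

leak = 0.9          # fraction of potential retained each time step
threshold = 1.0
potential = 0.0
spike_times = []

inputs = [0.3] * 50  # constant input current over 50 time steps

for t, current in enumerate(inputs):
    potential = potential * leak + current
    if potential >= threshold:
        spike_times.append(t)   # emit a pulse
        potential = 0.0         # reset after spiking

print(spike_times[:3])  # [3, 7, 11]
```

Notice that the spike times, not the spike amplitudes, carry the information: a stronger input would simply make the neuron fire earlier and more often.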

Hardware specialized for neural networks

Specialized neural network hardware is a developing trend, mainly intended to increase speed; it is mostly used in areas with high performance requirements, e.g. high-energy physics. As the number of neurons involved grows in the near future, dedicated neural network hardware will be essential to keep up with the pace.

Using neural networks, the following may be seamlessly possible in the near future:

Better forecasting and prediction of stocks

Cars that can be self-driven

Auto composition of music pieces

Robots that have the ability to see, hear and predict

Automatic conversion of handwritten documents into formatted word-processor documents

Diagnosis of medical problems

Dynamic Computer resource allocation and prediction

Defense against denial of service attacks for networks

Meta learning algorithms

Drawbacks of using Neural Networks

The drawbacks of using neural networks are as below:

Takes more computation to process

May require considerable parameter tweaking and retraining to fit well

New data may cause loss of learning from old data


Conclusion

This report describes how neural networks can be used in various fields to solve real-world problems. For these tasks, neural networks are clearly more efficient than traditional computer architectures. With the increasing number of neural network architectures and learning rules, applications can be improved and built to be more intelligent and make decisions effectively.

To summarize the report, the following observations were made:

Neural networks model the neurons of the human brain

They are flexible

They can integrate with other algorithms, e.g. fuzzy logic

They can self-adjust their weights

They can be trained easily

They are uniform in structure and can be used across domains

Many established products use neural networks to analyze business data and make decisions; e.g., banks use them to decide whether a loan should be granted to a client

They can run processes in parallel

They degrade gracefully


References

Zadeh, L. A. (n.d.). Fuzzy logic, neural networks and soft computing. Soft Computing.

McCoy, A., et al. (2007). Multistep-ahead neural-network predictors for network traffic reduction in distributed interactive applications. ACM Transactions on Modeling and Computer Simulation (TOMACS).

Sarkar, D. (1996). Methods to speed up error back-propagation learning algorithm. ACM Computing Surveys (CSUR).

Jiménez, D. A., & Lin, C. (2002). Neural methods for dynamic branch prediction. ACM Transactions on Computer Systems (TOCS).

Dayhoff, J. (1991). Pattern recognition with a pulsed neural network. In ANNA '91: Proceedings of the Conference on Analysis of Neural Network Applications. New York, NY: ACM.

Galushkin, A. I. (2007). Neural Networks Theory. Berlin: Springer.

Gurney, K. (1997). An Introduction to Neural Networks. London: Routledge.

He, X., & Xu, S. (2010). Process Neural Networks: Theory and Applications. Dordrecht: Springer.

Hecht-Nielsen, R. (1989). Theory of the backpropagation neural network. In Proceedings of the International Joint Conference on Neural Networks (IJCNN). IEEE.

Imberman, S. P. (2004). An intelligent agent approach for teaching neural networks using LEGO handy board robots. Journal on Educational Resources in Computing (JERIC).

Hoey, J., et al. (2012). People, sensors, decisions: Customizable and adaptive technologies for assistance in healthcare. ACM Transactions on Interactive Intelligent Systems (TiiS).

Yan, J., et al. (2011). An FPGA-based accelerator for LambdaRank in Web search engines. ACM Transactions on Reconfigurable Technology and Systems (TRETS).

Ren, J., et al. (2009). A neural network based model for VoIP speech quality prediction. In ICIS '09. New York, NY: ACM.

Kunkle, D. R., & Merrigan, C. (2002). Pulsed neural networks and their application.

Jordan, M. I., & Bishop, C. M. (1996). Neural networks. ACM Computing Surveys (CSUR).

Nagalakshmi, S. (2013). On-line evaluation of loadability limit for pool model with TCSC using back propagation neural network. International Journal of Electrical Power & Energy Systems.

Xiong, N., et al. (2011). An adaptive and predictive approach for autonomic multirate multicast networks. ACM Transactions on Autonomous and Adaptive Systems (TAAS).

Alon, N., et al. (1991). Efficient simulation of finite automata by neural nets. Journal of the ACM (JACM).

Rubio, J. (2008). An approach towards the integration of adaptive business intelligence and constraint programming. In Information Processing (ISIP), 2008 International Symposiums. IEEE.

Shukla, R., et al. (2008). Estimating software maintenance effort: a neural network approach. In ISEC '08. New York, NY: ACM.

Alhaj Ali, S. M., Abu Hammad, A. A., Samhouri, M. S., & Al-Ghandoor, A. (2011). Modeling stock market exchange prices using artificial neural network: A study of Amman Stock Exchange. JJMIE.

Smith-Miles, K. A. (2008). Cross-disciplinary perspectives on meta-learning for algorithm selection. ACM Computing Surveys (CSUR).

Peng, T., et al. (2007). Survey of network-based defense mechanisms countering the DoS and DDoS problems. ACM Computing Surveys (CSUR).

Lin, T., et al. (2008). A neural-network-based context-aware handoff algorithm for multimedia computing. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP).

Michalewicz, Z., & Schmidt, M. (n.d.). Adaptive Business Intelligence. Springer.