ANNs and their biological inspiration


1.1 Introduction

Picton (1994) defines a neural network as an interconnected system that produces an output pattern when presented with an input pattern. In other words, ANNs are an attempt to recreate the functionality of a biological neuron on a machine. The brain is very successful at solving complex problems such as pattern recognition and is highly efficient at processing information due to its vastly parallel topology, and for this reason it makes great sense to mimic these processes on a machine.

In the early 1900s Ramón y Cajal was credited with being the first to note that the brain was constructed from single cells known as neurons (Rieke et al., 1999), and since then a vast amount of research has been carried out to discover how these neurons function. This chapter presents a brief history of the first two generations of ANNs that have driven this research from their conception by McCulloch and Pitts in 1943. Following this, the biological functions of real neurons are described, and it is shown how the desire to implement a more biologically plausible ANN, which uses time as a resource, has given rise to a third generation of networks known as Spiking Neural Networks (SNNs). Since Hodgkin and Huxley produced the first mathematical model of a neuron in 1952, numerous models of differing degrees of complexity have been presented. The advantages and disadvantages of these models for the construction of ANNs are discussed. An exploration of the different network configurations that have helped realise and stimulate state of the art research in the field is also presented.

1.2 A Brief Review of First and Second Generation ANNs

There is a strong focus within the Computational Intelligence community on building machines which possess "intelligent" capabilities, and these can be broken into three general groups: Fuzzy Logic (FL), Genetic Algorithms (GAs), and Artificial Neural Networks. Although it can be argued that all three groups have a grounding in nature and biology, it is the field of ANNs that truly reflects the capabilities of a biological brain. ANNs are comprised of large arrays of interconnected computational elements, or neurons, and mimic the hugely dense and parallel processing ability of the brain (Wasserman, 1989), which consists of approximately $10^{11}$ neurons (Jain et al., 1996). Each neuron may be connected to 1000 - 10000 other neurons, which makes approximately $10^{14}$ to $10^{15}$ synaptic interconnections (Schalkof, 1997). Early research into the functionality of the brain is accredited to Ramón y Cajal in 1911 (Rieke et al., 1999), but it was McCulloch and Pitts who were the first to produce an ANN that was loosely inspired by neuroscience. In this network each input is multiplied by a weight, and if the sum of the weighted inputs crosses a certain threshold then the neuron is said to fire (Goles and Palacios, 2007). The threshold of the neuron is controlled by a firing rule, which is a step function, and therefore the resulting output is a binary 1 if the neuron is triggered, otherwise the output is 0 (McCulloch and Pitts, 1943). Figure [fig:CH2-McCulloch-and-Pitts-Neuron] provides a simple representation of the McCulloch and Pitts neuron, where the neuron sums the multiplication of the inputs with their respective weights and the output is governed by an activation threshold.
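
A minimal Python sketch of the McCulloch and Pitts threshold unit just described is given below; the weights and threshold chosen here (making the unit behave as a logical AND of two binary inputs) are purely illustrative.

```python
import numpy as np

def mcculloch_pitts(inputs, weights, threshold):
    """Fire (output 1) if the weighted sum of the inputs reaches the threshold, else output 0."""
    return 1 if np.dot(inputs, weights) >= threshold else 0

# Illustrative example: two equally weighted excitatory inputs behaving as a logical AND.
weights = np.array([1.0, 1.0])
threshold = 2.0
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", mcculloch_pitts(np.array(x, dtype=float), weights, threshold))
```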

Following on from this work, Donald Hebb (1949) suggested that it was possible to train an ANN to give a certain output when presented with an input by means of learning. In this case the individual weights on the inputs to the neuron were updated according to the activity of the neuron. In 1958, Frank Rosenblatt devised the Single Layer Perceptron (SLP), which had modifiable synaptic weights and an output threshold function (Rosenblatt, 1958). The SLP used supervised learning to update the weights and was implemented to solve pattern recognition problems. Rosenblatt's introduction of adjustable weights undoubtedly triggered a new era in ANNs which saw a multitude of learning algorithms introduced. Following Rosenblatt's work, Widrow and Hoff (1960) introduced the Least Mean Square (LMS) learning rule, or Delta rule. This rule continuously updates the synaptic weight strengths between input and output by reducing the difference between the desired output value and the actual output value.
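
A brief Python sketch of the Widrow-Hoff (LMS/Delta) weight update described above follows; the learning rate, number of epochs and training patterns are illustrative, and a constant bias input is added as an assumption.

```python
import numpy as np

def delta_rule_train(X, d, eta=0.1, epochs=100):
    """LMS / Delta rule: w <- w + eta * (d - y) * x, reducing the output error per pattern."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            y = np.dot(w, x)               # linear (pre-threshold) output
            w += eta * (target - y) * x    # move the weights to reduce the error
    return w

# Illustrative linearly separable task; the final column is a constant bias input.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
d = np.array([0, 0, 0, 1], dtype=float)    # AND-like target outputs
print(np.round(delta_rule_train(X, d), 3))
```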

In 1969, Minsky and Papert published mathematical evidence that the SLP was severely limited in that it could only solve linearly separable problems and was incapable of solving the simple XOR problem (Minsky and Papert, 1969). Minsky and Papert did, however, surmise that the introduction of a hidden layer would possibly solve this problem, but that in doing so the simplicity of the perceptron would be destroyed. This revelation led to research into ANNs coming to a virtual standstill until 1986, when Rumelhart and McClelland introduced a learning algorithm for the training of Multi Layer Perceptrons (MLPs) (Rumelhart and McClelland, 1986). This work was based on previous work that had, at the time, gone unnoticed (Werbos, 1974; Parker, 1985). The algorithm utilises the error between the current output and the desired output, which is back-propagated through the network so that all the synaptic weights can be trained until the error at the output is minimised; it has become known as error back-propagation. It also showed that Minsky and Papert's conjecture was correct, as the MLP was capable of solving the Exclusive OR (XOR) problem. However, the error back-propagation algorithm is not without limitations (Basheer and Hajmeer, 2000), namely a slow learning rate, the need to optimise the starting weights, and the choice of the number of hidden layer neurons. If the learning rate is too small, the convergence time of the algorithm is greatly increased; if it is too large, the algorithm suffers from oscillatory effects on the output. The introduction of a momentum term can help reduce the sensitivity to the learning rate and also helps the algorithm avoid getting stuck in a local minimum. The error back-propagation algorithm marked the birth of the second generation of neural networks, which employed a sigmoid activation function giving a continuous set of values at the output (Maass, 1997). As well as computing Boolean functions, these networks could compute functions with analogue inputs and outputs, which gave them a greater ability to solve real world problems. In the 1980s Kohonen proposed the Self Organising Map (SOM), which is the best known unsupervised learning algorithm. This method updates the weights using a form of competitive learning with a winner-take-all regime, where all the weights in the winning neuron's neighbourhood, for any given input, are updated (Kohonen, 1982).
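
To make the mechanics of error back-propagation concrete, the short Python sketch below trains a single-hidden-layer sigmoid MLP on the XOR problem that the SLP could not solve. The layer size, learning rate, initialisation and epoch count are arbitrary illustrative choices, not values taken from the works cited above.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# The XOR problem: not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of (assumed) four sigmoid units, small random starting weights.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
eta = 0.5                                       # learning rate

for _ in range(20000):
    # Forward pass through the network.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Backward pass: propagate the output error towards the input layer.
    d_out = (Y - T) * Y * (1 - Y)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    # Gradient-descent weight updates.
    W2 -= eta * H.T @ d_out;  b2 -= eta * d_out.sum(axis=0)
    W1 -= eta * X.T @ d_hid;  b1 -= eta * d_hid.sum(axis=0)

print(np.round(Y.ravel(), 2))   # should approach [0, 1, 1, 0]
```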

Even though the previous two generations of ANNs have been loosely based on the biophysics of the brain, they have not used a vital resource which is readily available, and that is time. The third generation of neural networks, SNNs, implements a more biologically plausible interpretation of the connectionist system of neurons in the brain and makes use of time as a resource by coding information in the timing of spikes. In doing so, a better understanding of how the human brain functions can be achieved (Izhikevich, 2003). Despite the fact that research into SNNs is still only at the embryonic stage, it has been suggested that they have at least the same computational power as previous generations, and require fewer neurons to carry out the same level of computation (Sohn et al., 1999).

1.3 The Biological Neuron

The neuron is the most basic component of the brain and Central Nervous System (CNS), and there are approximately $10^{11}$ neurons making up a dense, parallel, data processing architecture (Maass and Bishop, 1999; Swanson, 2003). Information processing in the brain is relatively slow when compared to modern day computers, yet, because of the brain's parallel nature, it is many times faster overall (Hopfield, 1988). Figure [fig:CH2-Two-Biological-Neurons] shows two connecting neurons; although the size and function of neurons may vary depending on their location, the component parts are thought to be similar.

Neurons are made up of a soma, an axon, dendrites, and synapses. When a neuron generates a spike, the spike travels along the axon and eventually branches out to many other neurons via connections called synapses. These synapses connect to the dendrites of the other neurons, and therefore the dendrites can be thought of as the inputs. The soma fires a spike train whose frequency and phase are a measure of the coincidence of all inputs (Swanson, 2003).

1.3.1 The Soma

The cytoplasm within the soma contains positively charged sodium (Na+) and potassium (K+) ions. The outer membrane of the soma is permeable to these ions and, since the extracellular fluid contains a much higher concentration of these ions, the soma is held at a negative voltage of approximately -75 mV while at rest (Thompson, 1993). When the soma is stimulated via inputs from the dendrites, voltage dependent ion gates are opened. This allows a further influx of sodium ions which depolarises the soma. If the soma is further stimulated with other dendrite inputs, it will continue to become further depolarised until it reaches a set voltage known as the threshold voltage. Once this voltage has been reached, all the ion gates open and allow a massive influx of sodium. This causes the soma to emit a spike which is known as an Action Potential (AP). The AP is then used to carry information to other neurons via the axon (Swanson, 2003). Once the gates are activated they cannot be reactivated for a period known as the refractory period. Figure [fig:CH2-A-Typical-Action-Potential] depicts a typical action potential.

1.3.2 The Axon

The axon is a thin tube-like structure which carries signals from the soma (cell body) of the neuron to the synapses of other neurons, and ranges anywhere from a few micrometres to a few metres in length before terminating at the synapses. As can be seen from Figure [fig:CH2-Two-Biological-Neurons], the axon originates at a cone-like thickening on the soma called the Axon Hillock. There is generally only one axon per neuron, which branches out to many synaptic terminations (Levitan and Kaczmarek, 2002). As the axon is a poor and slow conductor, it is covered in a fatty sheath called Myelin which insulates the axon and speeds up the rate of conduction. The Myelin is constructed from two types of glial cells, Schwann cells (in the peripheral nervous system) and oligodendrocytes (in the CNS) (Bacci et al., 1999), and is broken at short intervals called Nodes of Ranvier which play a vital role in the propagation of the AP of the neuron. As the spike travels along the axon it deteriorates in strength, but if the spike is large enough when it reaches such a node, it opens sodium (Na+) ion channel gates and lets in an influx of Na+ ions which causes a sharp rise in the electrical spike, thus refreshing it to its original level. If the Myelin sheath becomes damaged, the AP from the neuron can become very slow or lost (Swanson, 2003).

1.3.3 The Dendrite

All spikes are passed to the neuron via a dendrite; dendrites therefore carry information from other neural axons via the synapse to the soma (Swanson, 2003). Dendrites are extensively branched, giving rise to dense neural networks (Levitan and Kaczmarek, 2002), and while they are commonly used for the receipt of input signals alone, they can sometimes also be used for the transmission of electrical signals. They can also exhibit dynamic changes in shape and size which can give rise to plasticity within the neuron.

1.3.4 The Synapse

A neuron is connected to another neuron via the synapse. The synapse consists of a presynaptic input terminal that connects to a postsynaptic neuron at the dendritic spine. The space between the presynaptic and postsynaptic terminals is called the synaptic cleft (Figure [fig:CH2-Synapse]) and usually has a width of about 20nm. When an AP reaches the presynaptic terminal it causes a bio-chemical chain of events (Gerstner and Kistler, 2002). The presynaptic bulb contains calcium ion channels that are voltage dependent and are shut in the absence of APs. When an AP reaches the terminal it causes the gates to open and allows an influx of calcium (Ca2+) ions, which causes the vesicles of the bulb to migrate towards the bulb's outer membrane and subsequently become attached to it. The neurotransmitters contained within the vesicles are then released into the synaptic cleft where they are detected by specialised receptors on the postsynaptic terminal. These receptors then cause more ion channels to open and more charged ions enter the soma from the extracellular fluid. In doing so, information transfer from one neuron to another has taken place (Swanson, 2003).

The voltage response of the postsynaptic neuron to the presynaptic neuron is called a Post Synaptic Potential (PSP). A PSP that increases the likelihood of postsynaptic firing originates from an excitatory synapse and is called an Excitatory Post Synaptic Potential (EPSP); the opposite of this is an inhibitory synapse, which generates an Inhibitory Post Synaptic Potential (IPSP) (see Figure [fig:CH2-PSP]).

1.4 Spiking Neuron Models

In the realisation of SNNs, much research has focused on developing models of biological neurons. These models have helped utilise the information contained in the temporal aspect of neuronal firing as well as that contained in the spatial distribution which was effectively utilised by the previous generations of neural networks. Numerous SNN models have been developed that can potentially implement a biologically plausible artificial neuron. In this section, these models are discussed.

1.4.1 The Hodgkin and Huxley Model

In 1952, Hodgkin and Huxley carried out experiments on a nerve cell from a giant squid (Hodgkin and Huxley, 1952). From the results of these experiments, the authors went on to develop a mathematical model which reproduced the AP found in the nerves of the giant squid. The model was based on the fact that there were three types of ion current active in the cell: a potassium current ($I_K$), a sodium current ($I_{Na}$) and a leak current ($I_L$). Voltage dependent ion channels control the flow of ions through the cell membrane, one type for sodium and one for potassium. The other channel types are taken care of by the leak current and are not explicitly described (Gerstner and Kistler, 2002). Figure [fig:CH2-HH-Block-Diagram] shows a block diagram of the Hodgkin and Huxley (HH) model, which is a four dimensional model as it requires the solution of four differential equations (Gerstner and Kistler, 2002).

If a current is injected into the circuit, charge is added to the capacitor and leaks away through the other channels; the total current is therefore represented by:

$I(t) = C_m\dfrac{dV}{dt} + I_{Na} + I_K + I_L$

where $I(t)$ is the total current, $C_m$ is the capacitance of the cell membrane, $V$ is the voltage across the membrane, $I_{Na}$ is the sodium ionic current, $I_K$ is the potassium ionic current and $I_L$ is the leakage current. The currents flowing in the channels are given by the following equations:

$I_{Na} = g_{Na}\,m^{3}h\,(V - E_{Na})$
$I_{K} = g_{K}\,n^{4}\,(V - E_{K})$
$I_{L} = g_{L}\,(V - E_{L})$

where $m$ and $h$ are the activation and inactivation coefficients for the sodium channel and $n$ is the potassium channel activation coefficient. $V$ is the cell membrane potential; $E_K$, $E_{Na}$ and $E_L$ are the potassium, sodium, and leakage reverse potentials respectively; and $g_K$, $g_{Na}$ and $g_L$ are the potassium, sodium and leakage channel conductances. A list of the original empirical potentials and conductances taken from the model can be found in Table [tab:CH2-HH-Model-Parameters].

The channel currents change with time according to the $m$, $h$ and $n$ gates, which are calculated by the following equations:

$\dfrac{dm}{dt} = \alpha_m(V)(1 - m) - \beta_m(V)\,m$
$\dfrac{dh}{dt} = \alpha_h(V)(1 - h) - \beta_h(V)\,h$
$\dfrac{dn}{dt} = \alpha_n(V)(1 - n) - \beta_n(V)\,n$

where $\alpha$ and $\beta$ are rate values that were experimentally adjusted by Hodgkin and Huxley to fit the data. Table [tab:CH2-HH-Model-a-and-b-Parameters] provides a list of these adjusted parameters.
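
To make the four coupled equations above concrete, the following minimal Python sketch integrates them with the forward Euler method. The rate functions and constants used here are the commonly quoted textbook values (with the membrane potential expressed in mV around a resting level near -65 mV); they are standard illustrative values rather than the exact entries of the tables referenced above.

```python
import numpy as np

# Standard textbook HH constants (mS/cm^2, mV, uF/cm^2); illustrative, not the thesis tables.
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

# Empirical alpha/beta rate functions for the m, h and n gates.
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                       # time step and duration in ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32      # approximate resting-state values
I_ext = 10.0                             # constant stimulus current (uA/cm^2)
trace = []

for _ in range(int(T / dt)):
    I_Na = g_Na * m**3 * h * (V - E_Na)  # sodium channel current
    I_K = g_K * n**4 * (V - E_K)         # potassium channel current
    I_L = g_L * (V - E_L)                # leak current
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    trace.append(V)

print("peak membrane potential (mV):", round(max(trace), 1))
```

With a sustained stimulus of this size the voltage trace shows the repetitive action potentials the full model is known for, at the cost of updating four state variables at every time step.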

Although this model is one of the most detailed at describing spike generation, thresholding and bursting, it is very computationally intensive because of the large number of variables to be solved. To combat this, several more abstract models have been developed which are less computationally intensive. These are discussed in the following sections.

1.4.2 Integrate and Fire Model

The Integrate and Fire (IF) model is a very basic abstraction of a biological neuron. It is the simplest form of all the spiking neuron models, where the neuron and cell membrane are represented by a single capacitor and a threshold device (Gerstner and Kistler, 2002), as shown in Figure [fig:CH2-Integrate-and-Fire]. When a current is injected into the capacitor, it will charge until the threshold value is reached. Once surpassed, the capacitor is allowed to discharge and a spike is released. Although this model is very simple, it still exhibits two of the main characteristics of a neuron, i.e. the ability to integrate inputs over time and subsequently produce a spike once a threshold level has been reached (Trappenberg, 2002). However, the output of the model does not describe the form of a biological spike, and this is usually implemented in software. As all APs from any given neuron are of the same shape and size, it is the general consensus in the field of neuroscience that information is carried in the timing and frequency of the spikes as a form of temporal coding and not in their size and shape (Tam, 1990; Kandel et al., 2000; Trappenberg, 2002).

The most basic version of an IF neuron is called the Perfect Integrate and Fire (PIF) model, where the firing frequency of the neuron is proportional to the current injected into the model. The behaviour of the cell is governed by the charging of the membrane and is given by:

$C_m\dfrac{dV}{dt} = I(t)$

where $C_m$ is the capacitance of the cell membrane and $I(t)$ is the stimulus current. The PIF model is an inaccurate biological model as it does not include the refractory period exhibited by a real neuron. The refractory period is a period, subsequent to a firing event, during which the neuron cannot fire again regardless of the stimulus (Koch, 1999). This can easily be modelled by clamping the membrane of the cell to the resting potential for a fixed duration after firing has taken place. By introducing this factor the current/frequency (I/f) relationship is modified from $f = I/(C_m\vartheta)$ to $f = I/(C_m\vartheta + I\,t_{ref})$, where $\vartheta$ is the firing threshold and $t_{ref}$ is the refractory period. The output of this model is a spike train of constant frequency, which is unrealistic as real neurons produce variable frequency spike trains; one possible solution would be to use a variable threshold voltage. Figure [fig:CH2-PIF-Sim] shows a Matlab simulation of a single input PIF neuron model; from Figure [fig:CH2-PIF-Sim] (B) it can be seen that the PIF model does not take into account the leakage current exhibited by a real neuron. The next generation of IF model, the Leaky Integrate and Fire (LIF) model, was modified to take this into account.
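
As a quick illustration of the I/f relationship above, the short Python snippet below evaluates the firing frequency with and without the refractory term; the capacitance, threshold and refractory period values are purely illustrative.

```python
# Firing frequency of a perfect integrate-and-fire unit driven by a constant current I.
# Without a refractory period: f = I / (C * theta); with one: f = I / (C * theta + I * t_ref).
C, theta, t_ref = 1e-9, 0.02, 2e-3          # farads, volts, seconds (illustrative values)
for I in (0.5e-9, 1e-9, 2e-9):              # amperes
    f_plain = I / (C * theta)
    f_ref = I / (C * theta + I * t_ref)
    print(f"I = {I:.1e} A   f = {f_plain:5.1f} Hz   f with refractory period = {f_ref:5.1f} Hz")
```

Note how the refractory term caps the achievable firing rate at $1/t_{ref}$ (500 Hz with these example values), however large the input current becomes.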

1.4.3 Leaky Integrate and Fire Model

The Leaky Integrate and Fire model, shown in Figure [fig:CH2-LIF-Model], models only three neuronal dynamics: the membrane time constant, the threshold and the refractory period (Maass and Bishop, 1999). The model provides a means by which the neuron can "forget" about an input by allowing the charge on the plates of the capacitor to leak away over time, so that the neuron returns to its resting potential in the absence of stimuli. This is achieved by placing a resistor, $R_m$, in parallel with the capacitor, providing a pathway for current to flow so that the capacitor discharges to its resting potential (Gerstner and Kistler, 2002).

The equation for this model is of the form:

$C_m\dfrac{dV}{dt} = -\dfrac{V(t)}{R_m} + I(t)$

where $C_m$ is the capacitance of the membrane, $V$ is the voltage across the membrane, $R_m$ is the membrane resistance and $I(t)$ is the total stimulus current of the membrane. Rearranging equation ([eq:CH2-LIF]) and introducing $\tau_m = R_m C_m$, which is the time constant of the leakage current, gives:

$\tau_m\dfrac{dV}{dt} = -V(t) + R_m I(t)$

If the membrane is stimulated with a constant current $I_0$ and the resting potential is at 0, then the membrane potential is modelled by:

$V(t) = R_m I_0\left(1 - \exp\left(-\dfrac{t - t_0}{\tau_m}\right)\right)$

where $t_0$ is the time of the previous spike. If the value of $V(t)$ is less than the threshold value, $\vartheta$, no spike is produced, and when $V(t)$ exceeds $\vartheta$ a spike is produced at time $t^{(f)}$, given by:

$\vartheta = R_m I_0\left(1 - \exp\left(-\dfrac{t^{(f)} - t_0}{\tau_m}\right)\right)$

Finally, solving for the time interval $T = t^{(f)} - t_0$, which is the Inter Spike Interval (ISI), gives:

$T = \tau_m \ln\left(\dfrac{R_m I_0}{R_m I_0 - \vartheta}\right)$

A Matlab simulation of a dual input LIF is shown in Figure [fig:CH2-LIF-Sim]. It can be seen that the voltage on the cell's membrane increases with each subsequent input spike but, when there is no input, the voltage decays back to the resting potential. Although this is still a very abstract model of a neuron, it exhibits the most important features of a neuron: integration, thresholding, and refractoriness (Gerstner and Kistler, 2002; Abou-Donia, 1992). Also, its simplicity is desirable for a better understanding of how neurons interact (Abbott, 1991).
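
For comparison with the Matlab figure described above, here is a minimal Python sketch of the same idea: a single LIF unit driven by two assumed input spike trains, where each incoming spike steps the membrane up, the leak pulls it back towards rest, and the unit fires and resets once the threshold is crossed. All of the constants are illustrative.

```python
import numpy as np

# Leaky integrate-and-fire: tau_m * dV/dt = -V(t) + R*I(t); here each input spike is
# treated as a brief impulse that steps the membrane potential up by a fixed amount.
dt, T = 0.1e-3, 0.2                          # time step and duration (s)
tau_m, theta, t_ref = 20e-3, 15e-3, 2e-3     # membrane time constant, threshold, refractory period
jump = 4e-3                                  # assumed 4 mV step per input spike

# Two illustrative presynaptic spike trains (s).
inputs = np.sort(np.concatenate([np.arange(0.005, T, 0.008),
                                 np.arange(0.007, T, 0.011)]))

V, last_spike, out = 0.0, -np.inf, []
for i in range(int(T / dt)):
    t = i * dt
    if t - last_spike < t_ref:
        V = 0.0                                          # clamped during the refractory period
        continue
    V -= dt / tau_m * V                                  # leak back towards the resting potential
    V += jump * np.sum(np.abs(inputs - t) < dt / 2)      # integrate coincident input spikes
    if V >= theta:                                       # threshold crossed: emit a spike and reset
        out.append(t)
        V, last_spike = 0.0, t

print("output spike times (ms):", [round(s * 1e3, 1) for s in out])
```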

There have been several other abstractions of the LIF model such as a probabilistic model where the synapses have probabilistic transmission properties and where each input spike no longer creates a step-like increase in the membrane potential but models a biological PSP (Swiercz et al., 2006). Variable firing threshold models have also been used, where the voltage is no longer reset fully to zero once a spike has been emitted (Koch, 1999). There are also conductance based models where the conductances of the synapses are calculated (Gerstner and Kistler, 2002; Wu et al., 2005).

1.4.4 Spike Response Model

The Spike Response Model (SRM) is a variation on the LIF model. The LIF model parameters are dependent on voltages, whereas the SRM is dependent on the time of the last spike. Another difference is that the LIF model is defined by differential equations, whereas the SRM expresses the membrane potential at time $t$ as a sum over the contributions of all previous spikes (Gerstner, 2001; Gerstner and Kistler, 2002; Bohte et al., 2002; Kasabov and Benuskova, 2004). The model is described by:

$u_i(t) = \eta\left(t - \hat{t}_i\right) + \sum_{j \in \Gamma_i} w_{ij} \sum_{t_j^{(f)} \in F_j} \varepsilon_{ij}\left(t - t_j^{(f)}\right)$

where $\eta$ models the AP generation and is used to reset the neuron, $\hat{t}_i$ is the last firing time of neuron $i$, $t_j^{(f)}$ is a presynaptic input spike from neuron $j$, $\varepsilon_{ij}$ represents the PSP of neuron $i$ caused by a spike from neuron $j$, $w_{ij}$ models the synaptic strength, $F_i$ represents the set of all firing times of neuron $i$, $F_j$ represents the set of all firing times of neuron $j$, and $\Gamma_i$ represents the set of all presynaptic neurons to $i$.

In the SRM, the state of the membrane potential of neuron $i$ is represented by the variable $u_i$. If there is an absence of spikes at the input then the membrane will be at its resting potential. When a spike is presented to the neuron it will add to $u_i$, after which it will take some time for $u_i$ to decay to the resting potential. $\varepsilon_{ij}$ represents the time course of the response to incoming spikes and if, after the summation over several spikes, $u_i$ reaches the threshold voltage, a spike is generated. The form of the spike is then modelled by the function $\eta$. Since the form of the spike contains no information, then as long as the firing times are recorded it can be ignored in most cases by simply clamping $u_i$ to a reset level for a fixed time period (Gerstner and Kistler, 2002).
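
The following Python sketch evaluates the SRM membrane potential directly from the equation above. The kernel shapes are assumptions: a simple decaying exponential stands in for the PSP kernel $\varepsilon$, and a negative exponential after-potential stands in for the reset kernel $\eta$; the weights and firing times are illustrative.

```python
import numpy as np

def eps(s, tau=10e-3):
    """Assumed PSP kernel: decaying exponential for s >= 0, zero for s < 0."""
    s = np.asarray(s, dtype=float)
    return np.where(s >= 0, np.exp(-s / tau), 0.0)

def eta(s, reset=-5e-3, tau=5e-3):
    """Assumed reset kernel: negative after-potential following the neuron's own last spike."""
    return reset * np.exp(-s / tau) if s >= 0 else 0.0

def srm_potential(t, last_own_spike, presyn_spike_sets, weights):
    """u_i(t) = eta(t - t_hat_i) + sum_j w_ij * sum over F_j of eps(t - t_j^(f))."""
    u = eta(t - last_own_spike)
    for w_ij, F_j in zip(weights, presyn_spike_sets):
        u += w_ij * np.sum(eps(t - np.asarray(F_j)))
    return u

# Illustrative case: two presynaptic neurons; the neuron itself last fired at t = 0 s.
F = [[0.010, 0.020, 0.025], [0.015, 0.022]]   # presynaptic firing-time sets F_j (s)
w = [4e-3, 2e-3]                              # synaptic strengths w_ij (assumed)
print("u_i at t = 30 ms:", srm_potential(0.030, 0.0, F, w))
```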

1.4.5 Izhikevich Model

The Izhikevich model is a simple two dimensional mathematical model but is more biologically plausible than the LIF or SRM models. It can exhibit the firing patterns of all known cortical neurons and is defined by the following two differential equations (Izhikevich, 2003):

$\dfrac{dv}{dt} = 0.04v^{2} + 5v + 140 - u + I$
$\dfrac{du}{dt} = a(bv - u)$

Resetting of the membrane voltage after spiking is performed with the following equation:

$\text{if } v \geq 30\ \text{mV, then } v \leftarrow c \text{ and } u \leftarrow u + d$

In the above equations, $v$ is the membrane voltage, $30$ mV acts as the firing threshold at which a spike is registered, $u$ is a recovery variable, and $a$, $b$, $c$ and $d$ are dimensionless parameters that can be adjusted to create the different spiking activities. This model is particularly useful for understanding how cortical neurons interact with one another since it has the capability of reproducing their different firing patterns (see Figure [fig:CH2-Izhikevich2004]).
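
A direct Euler implementation of the two equations and the reset rule is shown below, using Izhikevich's published regular-spiking parameter set (a = 0.02, b = 0.2, c = -65, d = 8); the constant input current and time step are illustrative choices.

```python
# Izhikevich (2003): dv/dt = 0.04*v^2 + 5*v + 140 - u + I,  du/dt = a*(b*v - u);
# when v reaches 30 mV the spike is registered and v <- c, u <- u + d.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking cortical neuron parameters
v, u = -65.0, b * -65.0              # start at rest
dt, T, I = 0.25, 1000.0, 10.0        # time step (ms), duration (ms), illustrative input
spikes = []

for i in range(int(T / dt)):
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                    # spike peak reached: record and reset
        spikes.append(i * dt)
        v, u = c, u + d

print(len(spikes), "spikes in", int(T), "ms")
```

Changing only a, b, c and d reproduces the other cortical firing patterns (fast spiking, bursting, and so on) catalogued by Izhikevich.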

1.4.6 FitzHugh-Nagumo Model

The FitzHugh-Nagumo model is based on the realisation that the membrane potential and the sodium activation channels of the HH model evolve on the same time scale. This is also the case for the sodium inactivation and the potassium activation channels, albeit on a much slower time scale. Therefore, the channels which are on similar time scales can be grouped together (FitzHugh, 1961; Nagumo et al., 1962; Koch, 1999). The following equations describe the model:

$\dfrac{dv}{dt} = v - \dfrac{v^{3}}{3} - w + I$
$\dfrac{dw}{dt} = \phi(v + a - bw)$

where $v$ represents the fast activation channels, $w$ represents the slower channels, and $a$, $b$ and $\phi$ are positive parameters, with a number of different versions having been used over the years (Koch, 1999).
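
A short Euler simulation of the two FitzHugh-Nagumo equations is given below. The parameter set (a = 0.7, b = 0.8, phi = 0.08) and the stimulus value are commonly used illustrative choices from the literature, not values taken from this chapter.

```python
# FitzHugh-Nagumo: dv/dt = v - v^3/3 - w + I,  dw/dt = phi*(v + a - b*w).
a, b, phi = 0.7, 0.8, 0.08       # commonly quoted illustrative parameters
I = 0.5                          # constant stimulus strong enough to sustain oscillation
v, w = -1.0, 1.0
dt, steps = 0.01, 20000
v_min, v_max = v, v

for _ in range(steps):
    v += dt * (v - v**3 / 3 - w + I)   # fast, membrane-like variable
    w += dt * phi * (v + a - b * w)    # slow recovery variable
    v_min, v_max = min(v_min, v), max(v_max, v)

print("v oscillates between", round(v_min, 2), "and", round(v_max, 2))
```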

1.4.7 Morris-Lecar Model

The Morris-Lecar model, like the Izhikevich model, is a simple two dimensional mathematical model which can be used to describe the oscillations found in the barnacle giant muscle fibre. Barnacle muscle fibres respond with a variety of oscillatory firing sequences when subjected to a constant input current. To model these oscillations (Morris and Lecar, 1981; Koch, 1999; Izhikevich, 2004), Morris and Lecar described a set of coupled differential equations which consist of a membrane voltage and potassium and calcium ionic currents, described by:

$C\dfrac{dV}{dt} = I - I_{ion}(V, w)$
$\dfrac{dw}{dt} = \phi\,\dfrac{w_{\infty}(V) - w}{\tau_w(V)}$

where $V$ is the membrane potential and $w$ is the potassium activation variable.

The ionic current $I_{ion}$ has three components and is as follows:

$I_{ion}(V, w) = g_{Ca}\,m_{\infty}(V)(V - V_{Ca}) + g_K\,w(V - V_K) + g_L(V - V_L)$

Since the calcium current responds much faster than the potassium current, it is regarded as being at equilibrium for the time step used to calculate the results, and its activation curve is given by:

$m_{\infty}(V) = \tfrac{1}{2}\left[1 + \tanh\left(\dfrac{V - V_1}{V_2}\right)\right]$

The activation curve for the potassium channel is given by $w_{\infty}(V) = \tfrac{1}{2}\left[1 + \tanh\left(\dfrac{V - V_3}{V_4}\right)\right]$ and the time constant is $\tau_w(V) = 1/\cosh\left(\dfrac{V - V_3}{2V_4}\right)$. The rest of the parameters are the channel conductances $g_{Ca}$, $g_K$ and $g_L$, the reversal potentials $V_{Ca}$, $V_K$ and $V_L$, the membrane capacitance $C$, the rate constant $\phi$, and the fitting constants $V_1$ to $V_4$. This model has become very popular in the neuroscience community as it has meaningful and measurable parameters (Izhikevich, 2004).
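
The sketch below integrates the two Morris-Lecar equations with forward Euler. The parameter values are a widely used illustrative set from later tutorial treatments, not the measured values of Morris and Lecar (1981), and the stimulus current is an assumption chosen to sit inside the oscillatory regime.

```python
import numpy as np

# Morris-Lecar: C*dV/dt = I - I_ion(V, w),  dw/dt = phi*(w_inf(V) - w)/tau_w(V)
C, phi = 20.0, 0.04
g_Ca, g_K, g_L = 4.4, 8.0, 2.0          # channel conductances (illustrative)
V_Ca, V_K, V_L = 120.0, -84.0, -60.0    # reversal potentials (illustrative)
V1, V2, V3, V4 = -1.2, 18.0, 2.0, 30.0  # fitting constants for the activation curves
I = 100.0                               # constant stimulus (assumed, oscillatory regime)

m_inf = lambda V: 0.5 * (1 + np.tanh((V - V1) / V2))   # calcium activation (at equilibrium)
w_inf = lambda V: 0.5 * (1 + np.tanh((V - V3) / V4))   # potassium activation curve
tau_w = lambda V: 1.0 / np.cosh((V - V3) / (2 * V4))   # potassium time constant

V, w = -60.0, 0.0
dt, steps = 0.05, 40000                 # 0.05 ms steps, 2 s of simulated time
trace = np.empty(steps)
for i in range(steps):
    I_ion = g_Ca * m_inf(V) * (V - V_Ca) + g_K * w * (V - V_K) + g_L * (V - V_L)
    V += dt * (I - I_ion) / C
    w += dt * phi * (w_inf(V) - w) / tau_w(V)
    trace[i] = V

print("V range after settling (mV):", round(trace[steps // 2:].min(), 1),
      "to", round(trace[steps // 2:].max(), 1))
```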

1.4.8 Discussion

There are a wide range of models available which simulate the behaviour of a biological neuron, and the choice of model depends greatly on the purpose for which it is to be used. For example, if a phenomenological study of how neurons behave and interact is to be carried out, for which a great deal of neural dynamics must be included, then a conductance based model such as the HH model is an ideal choice as it allows for the modelling of detailed neural dynamics such as the membrane potential and spike generation. The Izhikevich model is also a good model for these purposes as a number of different firing patterns can be simulated with just a small change in parameters. On the other hand, simplified models allow the user to observe network interactions with a much broader overview (Swiercz et al., 2006), and when large networks are simulated it is desirable to use more abstract models that are less computationally intensive (Bugmann, 1997). The LIF and SRM models are easily the most commonly used models for simulating large networks. Even though the SRM may give a good approximation of a neuron's synapse response, this does not give it an advantage over the less complex LIF model, since relatively little importance lies in the shape of the PSP but rather in how the neurons communicate with one another (Johnston et al., 2005).

1.5 Neural Network Architectures

To attempt to model brain functionality, it is necessary to construct networks of neurons. There are two types of network topologies which help achieve this: feed-forward networks and recurrent networks. Feed-forward networks pass all the information forward in one direction only, whereas, recurrent networks allow some of the information to filter back through the network as a type of feedback. This section discusses these network topologies.

1.5.1 Feed-forward Networks

A Feed-forward Network (FFN) is so called because the input neurons can only transmit information in a forward direction, i.e., to neurons on the next layer. This class of network can be divided into two types: the Single Layer Feed-forward Network (SLFFN) and the Multi Layer Feed-forward Network (MLFFN).

1.5.1.1 Single Layer Feed-forward Network

SLFFNs are comprised of a set of input neurons which are connected to other output neurons as shown in Figure [fig:CH2-SLFFN]. It is considered to be a single layer architecture as the input neurons do not perform any calculations; all processing is carried out by the output layer (Picton, 1994; Zurada, 1992).

1.5.1.2 Multi Layer Feed-forward Networks

The difference between a MLFFN and a SLFFN is that the former has one or more hidden neuron layers (as shown in Figure [fig:CH2-MLFN]), the purpose of which is to mediate between the input and the output of the network. This facilitates more powerful computations (Haykin, 1999). In a MLFFN the input nodes supply information to the first hidden layer, which processes this information and subsequently passes the outcome to the second hidden layer and so on. The result generated by the output layer represents the overall response of the network. If every neuron in each layer is connected to every neuron in the next layer the network is said to be fully connected, and if some connections are missing then it is said to be partially connected.

1.5.2 Recurrent Networks

A Recurrent Network (RN), shown in Figure [fig:CH2-Recurrent-Network], is in many respects the same as a FFN except it has at least one feedback loop connecting the output of a neuron back to the input of neurons in the same layer.

The feedback loop uses unit time delays, the purpose of which is to allow the output to be finely tuned according to the previous output of the network (Wasserman, 1989). As the outputs of the system are fed back, the change to the output becomes smaller until eventually the output becomes constant. Thus, by introducing feedback, the system stability can be greatly increased. One commonly known system that uses recurrence is the Hopfield network (Hopfield, 1982; Hopfield, 1984). These networks can act as an associative memory by storing a number of target vectors and recalling the most closely related stored memory when a noisy or incomplete pattern is presented, as sketched below.
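
The following Python sketch illustrates that associative-memory behaviour in the simplest possible form: patterns are stored with a Hebbian outer-product rule and a corrupted cue is cleaned up by iterating the recurrent update (a synchronous simplification of the classical asynchronous Hopfield update). The stored patterns are illustrative.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule; the diagonal is zeroed so units do not drive themselves."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Feed the output back as input until the state settles on a stored memory."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1          # break ties towards +1
    return state

patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                     [ 1,  1,  1,  1, -1, -1, -1, -1]])
W = train_hopfield(patterns)

noisy = np.array([1, -1, 1, 1, 1, -1, 1, -1])   # first pattern with one bit flipped
print(recall(W, noisy))                         # recovers the first stored pattern
```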

1.6 Summary

The brain is highly adept at recognising complex patterns and, even though its computational speed is slow in comparison with modern day computers, it is able to process information in highly efficient parallel networks. Since the seminal work of McCulloch and Pitts in 1943 there has been a great deal of interest and research in the emulation of brain-like function using machines, resulting in a wide range of training algorithms for first and second generation ANNs. These two generations have been loosely inspired by biological events which occur in the brain and, furthermore, they have been relatively successful in solving complex tasks. SNNs are considered to be the third generation of ANNs and strive to implement a more biologically inspired type of network. They do this by modelling the behaviour of a real neuron more closely and also by utilising time as a resource, which had not been used previously.

There have been a number of neuron models developed over the years which have advantages and disadvantages based on their level of abstraction. If a phenomenological model, which accurately reflects the chemical external and internal processes of a neuron, is required, then undoubtedly the HH model is an appropriate choice. However, due to the large number of calculations and parameters it is very restrictive for the construction of large networks. The LIF or SRM models are much more suited to these purposes with the LIF model being one of the most utilised in the field of neuroscience.

To construct networks of neurons it is important to define an architecture which reflects biological reality. The FFN and RN topologies both have their roots based in biological neuronal circuits and in the next chapter, an investigation into SNN training algorithms that utilise these types of networks is presented. Furthermore, how time can be utilised as a resource to encode and transmit information is explored, along with the various mechanisms used by the brain to implement learning.
