Seminar Report On
“Brain Computer Interface”
Name: Sachin Kumar Roll No: 1214310301
A Brain Computer Interface (BCI) allows users to communicate using brain activity alone, without involving any peripheral nerves or muscles of the human body. In BCI research, the electroencephalogram (EEG) is used to record electrical activity along the scalp; it measures the voltage fluctuations that result from ionic current flows within the neurons of the brain. In 1924, the German neuroscientist Hans Berger discovered the electrical activity of the human brain using EEG, and he was the first to record an alpha wave from a human brain.
In the 1970s, the Defense Advanced Research Projects Agency (DARPA) of the USA initiated a program to explore brain communication using EEG. The papers published after this research also mark the first appearance of the expression "brain–computer interface" in the scientific literature. Since then, the field of BCI research and development has focused primarily on neuroprosthetic applications that aim at restoring damaged hearing, sight and movement.
Nowadays BCI research is in full swing using non-invasive neuroimaging techniques, mostly EEG. Future BCI research is expected to rely heavily on nanotechnology.
Research on BCI has increased dramatically over the last decade. A decade ago the maximum information transfer rate of a BCI was 5–25 bits/min; at present the maximum reported rate is 84.7 bits/min.
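The bits-per-minute figures above can be understood through the information transfer rate formula commonly attributed to Wolpaw et al. The following is a minimal sketch; the class count, accuracy and selection rate in the example are illustrative assumptions, not values taken from the studies mentioned above.

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Information transfer rate in bits/min via the standard Wolpaw formula."""
    n, p = n_classes, accuracy
    if p <= 0 or p >= 1:
        # degenerate cases: perfect accuracy carries log2(n) bits, chance carries none
        bits = math.log2(n) if p == 1 else 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# e.g. a hypothetical 4-class BCI at 90% accuracy, 20 selections per minute
print(round(wolpaw_itr(4, 0.90, 20), 1))  # ≈ 27.5 bits/min
```

The formula shows why raising accuracy matters more than raising speed: below chance-plus-margin accuracy, extra selections per minute add almost no usable information.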
A brain–computer interface (BCI) is a direct link between a brain and a device that enables signals from the brain to direct some external activity, such as control of a cursor or a prosthetic limb.
A BCI thus provides a direct communication pathway between the brain and the object to be controlled. For example, the signal is transmitted directly from the brain to the mechanism directing the cursor, rather than taking the normal route through the body's neuromuscular system from the brain to a finger on a mouse that then directs the cursor.
BCI research began in the 1970s at the University of California, Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract from DARPA.
Thanks to the remarkable cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be handled by the brain like natural sensor or effector channels. After years of animal experimentation, the first neuroprosthetic devices implanted in humans appeared in the mid-1990s.
Current BCI devices require deliberate conscious thought; some future applications, such as prosthetic control, are likely to work effortlessly. Developing electrode devices and/or surgical methods that are minimally invasive is one of the biggest challenges in BCI technology.
Because a Brain Computer Interface (BCI) facilitates direct communication between the brain and a computer or other device, it is now widely used to enhance communication for people with severe neuromuscular disorders or spinal cord injury. Beyond medical applications, BCI is also used for multimedia applications, made possible by decoding information directly from the user's brain, as reflected in electroencephalographic (EEG) signals recorded non-invasively from the user's scalp.
Current Trends in Graz Brain–Computer Interface (BCI)
G. Pfurtscheller, C. Neuper, C. Guger, W. Harkam, H. Ramoser, A. Schlögl, B. Obermaier, and M. Pregenzer
The “Graz Brain–Computer Interface” (BCI) project is aimed at developing a technical system that can support communication possibilities for patients with severe neuromuscular disabilities, who are in particular need of gaining reliable control via non-muscular devices.
This BCI system uses oscillatory electroencephalogram (EEG) signals, recorded during specific mental activity, as input and provides a control option through its output. The output signals are presently evaluated for different purposes, such as cursor control, selection of letters or words, or control of a prosthesis.
Between 1991 and 2000, the Graz BCI project moved through various prototype stages. In the first years, mainly EEG patterns recorded during willful limb movement were used for classification of single EEG trials. In these experiments, a cursor was moved, e.g., to the left, right or downwards, depending on the planning of left-hand, right-hand or foot movement. Extensive off-line analyses showed that classification accuracy improved when the input features, such as electrode positions and frequency bands, were optimized for each subject. Apart from studies in healthy volunteers, BCI experiments were also performed in patients, e.g., with an amputated upper limb.
The main parts of any BCI system are:
Signal acquisition system: involves the electrodes, which pick up the electrical activity of the brain, together with the amplifier and analog filters.
The feature extractor: converts the brain signals into relevant feature components. At first, the EEG raw signals are filtered by a digital band pass filter. Then, the amplitude samples are squared to obtain the power samples. The power samples are averaged for all trials. Finally, the signal is smoothed by averaging over time samples.
The feature translator: classifies the feature components into logical controls.
The control interface: converts the logical controls into semantic controls.
The device controller: changes the semantic controls to physical device commands, which differ from one device to another depending on the application.
Finally, the device commands are executed by the device.
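As a rough illustration, the feature-extraction and translation stages described above might be sketched as follows. The sampling rate, band edges, threshold and synthetic trial data are assumptions made for the example, not values from any particular BCI system, and the FFT-based band-pass and threshold translator stand in for the more sophisticated filters and classifiers used in practice.

```python
import numpy as np

FS = 250  # assumed sampling rate in Hz

def bandpass_fft(x, low, high, fs=FS):
    """Crude band-pass: zero out FFT bins outside [low, high] Hz."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec[(freqs < low) | (freqs > high)] = 0
    return np.fft.irfft(spec, n=len(x))

def band_power_feature(trials, low, high, smooth=25):
    """Feature extractor as described above: filter each trial, square the
    amplitude samples to get power, average over trials, smooth over time."""
    filtered = np.array([bandpass_fft(t, low, high) for t in trials])
    power = filtered ** 2                 # amplitude samples squared
    avg = power.mean(axis=0)              # average over all trials
    kernel = np.ones(smooth) / smooth
    return np.convolve(avg, kernel, mode="same")  # smooth over time samples

def translate(feature, threshold):
    """Toy feature translator: map mean band power to a logical control."""
    return "LEFT" if feature.mean() > threshold else "RIGHT"

# synthetic trials: 1 s of a 10 Hz (alpha-band) rhythm plus noise
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
trials = [np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(FS)
          for _ in range(20)]
alpha = band_power_feature(trials, 8, 12)
print(translate(alpha, threshold=0.1))
```

In a real system the logical control ("LEFT") would then pass through the control interface and device controller stages before reaching the physical device.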
The early work of BCI was done by invasive methods with electrodes inserted into the brain tissue to read the signals of a single neuron. Although the spatio-temporal resolution was high and the results were highly accurate, there were complications in the long term. These were mostly attributable to the scar tissue formation, which leads to a gradual weakening of the signal and even complete signal loss within months because of the brain tissue reaction towards the foreign objects.
A proof of concept experiment was done by Nicolelis and Chapin on monkeys to control a robotic arm in real time using the invasive method.
Recently, less invasive methods have been used by applying an array of electrodes in the subdural space over the cortex to record electrocorticogram (ECoG) signals. Ordinary EEG electrodes pick up signals averaged over several square inches of cortex, whereas ECoG electrodes measure the electrical activity of brain cells over a much smaller area, providing much higher spatial resolution and a higher signal-to-noise ratio because of the thinner barrier tissue between the electrodes and the brain cells. The superior ability to record the gamma-band signals of the brain tissue is another important advantage of this type of BCI system. Gamma rhythms (30–200 Hz) are produced by cells oscillating at higher frequencies, which are not easy to record with ordinary EEG. The human skull acts as a thick filter that blurs EEG signals, especially in the higher frequency bands (i.e., the gamma band).
Noninvasive techniques have been demonstrated mostly with electroencephalography (EEG). Others have used functional Magnetic Resonance Imaging (fMRI), Positron Emission Tomography (PET), Magnetoencephalography (MEG) and Single Photon Emission Computed Tomography (SPECT). EEG has the advantage of high temporal resolution, reaching a few milliseconds, and is relatively low cost.
Recent EEG systems have better spatiotemporal resolution, with up to 256 electrodes over the total area of the scalp. Nevertheless, EEG cannot record from the deep parts of the brain, which is the main reason why multimillion-dollar fMRI systems are still the preferred method for functional study of the brain. However, EEG systems remain the best candidate for BCI systems, as they are easy to use, portable and cheap.
The main problems that reduce the reliability and accuracy of BCI, and which prevent this technology from being clinically useful, are the sensory interfacing problems and the translation algorithm problems. To make a clinically useful BCI, the accuracy of detecting intention needs to be very high, certainly much higher than the accuracy currently achieved with different types of BCI.
The intermediate compromise between accuracy and safety is the ECoG-based BCI, which has shown considerable promise. Its sensory electrode arrays are less invasive than the implanted type while providing comparable accuracy and high spatial resolution. The ECoG-based BCI also needs much less training than the EEG-based BCI, and researchers have demonstrated highly accurate and fast responses.
REASON BEHIND WORKING:
The reason a BCI works at all is because of the way our brains function. Our brains are filled with neurons, individual nerve cells connected to one another by dendrites and axons. Every time we think, move, feel or remember something, our neurons are at work. That work is carried out by small electric signals that zip from neuron to neuron as fast as 250 mph. The signals are generated by differences in electric potential carried by ions on the membrane of each neuron.
Although the paths the signals take are insulated by something called myelin, some of the electric signal escapes. Scientists can detect those signals, interpret what they mean and use them to direct a device of some kind. It can also work the other way around.
For example, researchers could figure out what signals are sent to the brain by the optic nerve when someone sees the color red. They could rig a camera that would send those exact signals into someone’s brain whenever the camera saw red, allowing a blind person to “see” without eyes.
BCI INPUT AND OUTPUT:
One of the biggest challenges facing brain-computer interface researchers today is the basic mechanics of the interface itself.
The easiest and least invasive method is a set of electrodes — a device known as an electroencephalograph (EEG) — attached to the scalp. The electrodes can read brain signals. However, the skull blocks a lot of the electrical signal, and it distorts what does get through.
To get a higher-resolution signal, scientists can implant electrodes directly into the gray matter of the brain itself, or on the surface of the brain, beneath the skull. This allows for much more direct reception of electric signals and allows electrode placement in the specific area of the brain where the appropriate signals are generated. This approach has many problems, however. It requires invasive surgery to implant the electrodes, and devices left in the brain long-term tend to cause the formation of scar tissue in the gray matter. This scar tissue ultimately blocks signals.
Regardless of the location of the electrodes, the basic mechanism is the same: The electrodes measure minute differences in the voltage between neurons. The signal is then amplified and filtered. In current BCI systems, it is then interpreted by a computer program, although you might be familiar with older analogue encephalographs, which displayed the signals via pens that automatically wrote out the patterns on a continuous sheet of paper.
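The measure, amplify and filter steps described above can be illustrated with a toy example. The gain, sampling rate and moving-average window below are assumed values chosen only for the sketch; real EEG amplifiers and filters are considerably more sophisticated.

```python
import numpy as np

FS = 250       # assumed sampling rate in Hz
GAIN = 10_000  # EEG amplifiers typically apply gains in the thousands (assumed)

def condition_signal(raw_volts, window=5):
    """Amplify the microvolt-level scalp potentials, then apply a short
    moving average as a crude low-pass filter to suppress noise."""
    amplified = raw_volts * GAIN
    kernel = np.ones(window) / window
    return np.convolve(amplified, kernel, mode="same")

# simulated raw scalp signal: a 20 µV, 10 Hz rhythm plus measurement noise
rng = np.random.default_rng(1)
t = np.arange(2 * FS) / FS
raw = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(2 * FS)
clean = condition_signal(raw)
```

Only after this conditioning does the signal reach the interpretation stage, whether that is a modern computer program or the pens of an older analogue encephalograph.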
In the case of a sensory input BCI, the function happens in reverse. A computer converts a signal, such as one from a video camera, into the voltages necessary to trigger neurons. The signals are sent to an implant in the proper area of the brain, and if everything works correctly, the neurons fire and the subject receives a visual image corresponding to what the camera sees.
The most common and oldest way to use a BCI is a cochlear implant. For the average person, sound waves enter the ear and pass through several tiny organs that eventually pass the vibrations on to the auditory nerves in the form of electric signals. If the mechanism of the ear is severely damaged, that person will be unable to hear anything. However, the auditory nerves may be functioning perfectly well. They just aren’t receiving any signals.
A cochlear implant bypasses the non-functioning part of the ear, processes the sound waves into electric signals and passes them via electrodes right to the auditory nerves. The result: a previously deaf person can now hear. He might not hear perfectly, but it allows him to understand conversations.
The processing of visual information by the brain is much more complex than that of audio information, so artificial eye development isn't as advanced. Still, the principle is the same. Electrodes are implanted in or near the visual cortex, the area of the brain that processes visual information from the retinas. A pair of glasses holding small cameras is connected to a computer and, in turn, to the implants. After a training period similar to the one used for remote thought-controlled movement, the subject can see. Again, the vision isn't perfect, but refinements in technology have improved it tremendously since it was first attempted in the 1970s. Jens Naumann was the recipient of a second-generation implant. He was completely blind, but now he can navigate New York City's subways by himself and even drive a car around a parking lot. In terms of science fiction becoming reality, this process gets very close. The terminals that connect the camera glasses to the electrodes in Naumann's brain are similar to those used to connect the VISOR (Visual Instrument and Sensory Organ) worn by blind engineering officer Geordi La Forge in the "Star Trek: The Next Generation" TV show and films, and they're both essentially the same technology. However, Naumann isn't able to "see" invisible portions of the electromagnetic spectrum.
Applications of BCI are described as follows:
Currently, there is a new field of gaming called Neurogaming, which uses non-invasive BCI to improve gameplay so that users can interact with a console without a traditional controller. Some Neurogaming software uses a player's brain waves, heart rate, expressions, pupil dilation, and even emotions to complete tasks or affect the mood of the game. For example, game developers at Emotiv have created a non-invasive BCI that determines the mood of a player and adjusts music or scenery accordingly.
This will bring a real-time dimension to the gaming experience and introduce the ability to control a video game by thought.
Non-invasive BCIs have also been applied to enable brain-control of prosthetic upper and lower extremity devices in people with paralysis. For example, Gert Pfurtscheller of Graz University of Technology and colleagues demonstrated a BCI-controlled functional electrical stimulation system to restore upper extremity movements in a person with tetraplegia due to spinal cord injury. Between 2012 and 2013, researchers at the University of California, Irvine demonstrated for the first time that it is possible to use BCI technology to restore brain-controlled walking after spinal cord injury.
Synthetic telepathy/silent communication:
In a $6.3 million Army initiative to develop devices for telepathic communication, Gerwin Schalk, supported by a $2.2 million grant, found that it is possible to use ECoG signals to discriminate the vowels and consonants embedded in spoken and imagined words. The results shed light on the distinct mechanisms associated with the production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech. On February 27, 2013, Duke University researchers successfully connected the brains of two rats with electronic interfaces that allowed them to directly share information, in the first-ever direct brain-to-brain interface.
MEG and MRI:
Magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) have both been used successfully as non-invasive BCIs. In a widely reported experiment, fMRI allowed two users being scanned to play Pong in real time by altering their haemodynamic response, or brain blood flow, through biofeedback techniques.
fMRI measurements of haemodynamic responses in real time have also been used to control robot arms with a seven second delay between thought and movement.
Access to the internet opens a myriad of opportunities for those with severe disabilities, including shopping, entertainment, education, and possibly even employment. Users of neural control cannot position a cursor with great precision, so the challenge of adapting a web browser for neural control lies in making links, which are spatially organized, accessible. The University of Tuebingen developed a web browser controller to be used with their thought translation device, but it requires the user to select from an alphabetized list of links, causing problems if link names are identical. They have since developed a neurally controlled web browser that serializes the spatial internet interface and allows logical control of a web application.
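A minimal sketch of such link serialization, assuming the user can supply only a binary yes/no neural selection signal: the link names and the decision callback below are hypothetical, and real systems layer this on top of an actual classifier output.

```python
def select_link(links, decide):
    """Serialize a set of spatially organized links into an ordered list and
    repeatedly halve it. `decide(first_half, second_half)` models one binary
    (yes/no) neural decision and returns True to keep the first half.
    Needs about ceil(log2(n)) decisions to reach one link."""
    items = sorted(links)
    while len(items) > 1:
        mid = len(items) // 2
        first, second = items[:mid], items[mid:]
        items = first if decide(first, second) else second
    return items[0]

# a user who wants "news" answers each binary question accordingly
target = "news"
chosen = select_link(["about", "contact", "news", "shop"],
                     lambda a, b: target in a)
print(chosen)  # → news
```

Sorting the links (here alphabetically, echoing the alphabetized list mentioned above) gives the user a predictable order, which is what makes logical rather than spatial control possible.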
The BrainTrainer project researches the most effective ways of teaching a person the brain-signal control needed to interact with a device. The BrainTrainer toolset allows researchers to compose trials by providing simple tasks, such as targeting, navigation, selection, and timing, that can be combined to produce an appropriate-level task for a particular subject.