Messaging UNERA Students



Research on the attitude of NTU undergraduates towards feasibility and risks of Brain Computer Interface technology


An increase in the number of software applications running on a computer causes users to have numerous windows open simultaneously. This “desktop clutter” severely reduces the usable workspace. University students use many applications and are hence among those most affected by the problem. They are also confronted by the slow speeds of instant messengers on campus networks. This project explores the development and testing of a new locally hosted, personalized embedded messaging service called UNERA (University Easy Resource Access) as a solution to these problems. The first version of UNERA has been targeted at students of Nanyang Technological University (NTU).

Software has been designed to embed webmail, edventure and Online Public Access Catalogue (OPAC) search, amongst others, in UNERA. A client-server architecture was found to be superior and was hence used. The project explores in detail the emerging technologies of session distribution, data transfer based on the Transmission Control Protocol, and RSS feeds.

Surveys conducted in NTU and in overseas universities to assess the scale of the problem revealed that UNERA would be welcomed by students. It was also established that desktop clutter, inefficient access to e-resources and flaws in instant messaging server technologies were problems in many universities overseas. UNERA provided faster online communication and more efficient e-resource access. Its benefits included improved security, faster connectivity, a greater degree of personalization, enhanced convenience, and a highly user-friendly interface.

To improve on this design, more e-resources may be embedded into UNERA in future.


This section gives an introduction to the research. It consists of the research background, research objectives and organisation of the report.

Background

In recent years, there has been growing interest in research on the Brain-Computer Interface (BCI). It is an emerging technology that allows the brain to communicate with a computer directly. Through BCI, the brain can send commands to a computer just by “thinking”, and can even receive signals from it. [1]

In the early days of research, scientists conducted experiments on animals to find the relationship between the electrical responses of neurons and the animals' behavior. [2, 3] In the 1990s, several groups performed experiments that captured complex brain signals from humans and used these signals to control external devices. [1] Nowadays, a human can dial a call, move a cursor or control a simple robot through BCI. In 1999, an experiment decoded neuronal signals to reproduce images seen by cats. [4] With future development, more and more functions will be implemented through BCI.

At present, the majority of research on this topic focuses on one-way communication from brain to computer. A BCI reads electrical signals or other manifestations of the brain, processes them, and converts them into appropriate actions. [5] BCI is often used to aid people with severe physical disabilities: paralyzed patients can use a BCI to move a computer cursor in several directions. [6] Healthy people can also benefit from BCI. It brings a convenient new way of communicating and can greatly improve the quality of life.

On the other hand, a number of problems might arise during the development of BCI. One of the main issues is safety: for invasive BCI, the device must be implanted into the brain through surgery, which might cause physical brain damage. Violation of human rights and privacy might be another issue; BCI may be exploited for mind control, and brain pacemakers are already successful in treating depression. Lastly, the reliability of the technology might be a problem: computers may misinterpret people's thoughts and cause contrary, undesired outcomes. These issues may limit the development of BCI. Previous research focused mainly on the implementation of the technology, and there has been very little research on people's attitudes towards it. In our research, we will find out NTU students' acceptance of BCI and their attitudes towards these concerns.


The objectives of this research project are:

To evaluate the attitude of NTU undergraduates towards the feasibility and risk of BCI.

To correct any misconceptions of the respondents towards the technology by providing solid evidence.


We will begin the research by reviewing different articles to understand the current development of BCI. This research will focus on the attitudes and perceptions of NTU undergraduates towards BCI in terms of the following factors:

Safety and Health Concerns



Other forms of human-computer interface, such as the traditional mouse and keyboard, will not be covered in this research.

Organisation of the report

This report will systematically describe how UNERA was developed. The Literature Review section has been organized thematically, and all concepts used in development have been suitably organized under headings. The Survey section consists of the detailed analysis of survey data. The Method section explains technical features of UNERA, taking one feature at a time.

Chapter 2


2.1 Possibilities of Brain Computer Interface

For the past 15 years, researchers have been trying to develop BCIs to tap into the brain waves of individuals who are unable to communicate with the outside world. The goal of all BCI research is to create a direct link between computers and the electrical signals in the brain of these so-called "locked in" individuals so they can operate devices like wheelchairs or use simple word processing programs to express their wishes.

Because the EEG monitors brain activity, a person's intent could in theory be read from it; however, the issues that have to be resolved before anything resembling thought recognition can happen are monumental. The skull muffles much of the brain activity, and since everything a person is thinking, doing, seeing and hearing (from eye blinks to muscle movements) is encoded into the EEG signals, the number of variables researchers have to cope with is considerable.

Scientists hope to isolate task-related brain patterns with a good degree of accuracy. A computer then could translate the patterns into commands.

Chuck Anderson, a professor at Colorado State University, is studying five separate mental tasks, including writing a letter, performing complex multiplication problems and visualizing numbers being written on a board. While he is able to detect which of these tasks a subject performs with up to 70 percent accuracy by analyzing brain waves, that's just the start of what researchers need to understand in order to engineer tasks like these, let alone more complex ones like driving a car.

Invasive BCIs have not been extensively tested on humans yet. At present it is even questionable whether electrodes should be implanted in people's brains.

Though BCIs cannot use EEG signals to communicate at even half the speed of a person talking at a normal rate, there are many potential applications for the technology. It could allow disabled people to control prosthetic devices. BCIs also could lead to the development of an entirely new class of video games, or "mental typewriters" which translate thoughts into cursor movements. The military is interested in using BCIs to make faster responses possible for fighter pilots.

2.2 Development of Brain-Computer Interface (BCI)

Brain-computer interfaces bypass the traditional neuromuscular communication channels to extend the human capacity for communication and control. Current multimedia technologies use only a subset of humans' I/O channels, e.g. the motor, visual and acoustic senses. Since all this information converges on or emerges from the brain, the investigation of a direct communication channel between an application and the human brain is of high interest.


There are several non-invasive methods of monitoring brain activity, including Positron Emission Tomography (PET), functional Magnetic Resonance Imaging (fMRI), Magnetoencephalography (MEG) and Electroencephalography (EEG), all of which have advantages and shortcomings. Notably, EEG alone yields data that is easily recorded with comparatively inexpensive equipment, is rather well studied and provides high temporal resolution. It thus outperforms the remaining techniques as a candidate for BCI.

Slow Cortical Potentials (SCPs) are voltage shifts generated in the cortex lasting 0.5-10 seconds. A slow negative shift is usually associated with cortical activation, used to implement a movement or to accomplish a task, whereas positive shifts indicate cortical relaxation [17]. Further studies showed that it is possible to learn SCP control. Consequently, SCPs were used to control movements of an object on a computer screen in a BCI referred to as the Thought Translation Device (TTD) [3]. After repeated training sessions over months, through which patients achieve accuracies over 75%, they are switched to a letter-support program, which allows selection of up to 3 letters/min. Using information recorded invasively from an animal brain, Nicolelis reports in [18] a BCI able to control a robot. Four arrays of fine microwires penetrate the animal's skull and connect to different regions inside the motor cortex. A robotic arm, remotely connected over the Internet, implements roughly the same trajectory as the owl monkey gripping for food. Granted, this invasive technology allows the extraction of signals with fine spatial and temporal resolution, since each microelectrode integrates the firing rates of a few dozen neurons. However, to make a BCI attractive to an everyday user it should be non-invasive, quickly mounted, and leave no marks.

2.3 Non-invasive BCI

The first feature that distinguishes Brain-Computer Interfaces (BCIs) is whether they utilize invasive (i.e. intra-cranial) or non-invasive methods of electrophysiological recording. Non-invasive systems primarily exploit electroencephalograms (EEGs) to control computer cursors or other devices. This approach has proved useful for helping paralyzed or ‘locked-in' patients develop ways of communicating with the external world.

However, despite having the great advantage of not exposing the patient to the risks of brain surgery, EEG-based techniques provide communication channels of limited capacity. Their typical transfer rate is currently 5-25 bits per second. Although such a transfer rate might not be sufficient to control the movements of an arm or leg prosthesis that has multiple degrees of freedom, past and recent research in this field seems to indicate that EEG-based BCIs are likely to continue to offer some practical solutions (e.g. cursor control, communication, computer operation and wheelchair control) for patients in the future.
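To put transfer rates of this order in context, BCI studies commonly estimate them from the number of possible targets and the selection accuracy. The sketch below uses the standard Wolpaw information-transfer-rate formula; the example numbers (2 targets, 90% accuracy, 20 selections per minute) are illustrative assumptions, not figures taken from the studies cited here.

```python
import math

def bits_per_trial(n_targets: int, accuracy: float) -> float:
    """Wolpaw information transfer rate per selection.

    n_targets: number of possible selections per trial
    accuracy:  probability of selecting the intended target
    """
    if accuracy <= 1.0 / n_targets:
        return 0.0  # no better than chance: no information transferred
    bits = math.log2(n_targets)
    if accuracy < 1.0:
        bits += accuracy * math.log2(accuracy)
        bits += (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1))
    return bits

# Hypothetical example: a 2-target BCI at 90% accuracy, 20 selections/minute
rate = bits_per_trial(2, 0.90) * 20 / 60  # bits per second
```

At these illustrative settings the rate comes out well below one bit per second, which shows why EEG-based control of a multi-degree-of-freedom prosthesis is considered demanding.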

Both slow cortical potentials, recorded over several cortical areas [23], and faster mu (8-12 Hz) and beta (18-26 Hz) rhythms, recorded over sensorimotor cortex, have been exploited in such BCIs. For example, one such system relies on event-related synchronization and desynchronization of the EEGs associated with motor imagery. Training to operate EEG-based BCIs can take many days [2]. Visual feedback is the essential part of such training. Some BCI designs rely on the subjects' ability to develop control of their own brain activity using biofeedback, whereas others utilize classifier algorithms that recognize EEG patterns related to particular voluntary intentions.
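As a rough illustration of how a mu- or beta-rhythm feature might be extracted, the sketch below computes average power in a frequency band with a plain discrete Fourier transform. The synthetic one-channel "EEG" (a pure 10 Hz sinusoid sampled at 128 Hz) is an assumption for demonstration only; real pipelines add windowing, artifact rejection and multi-channel spatial filtering.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Average power of `signal` (sampled at fs Hz) in the band [f_lo, f_hi] Hz,
    computed with a naive discrete Fourier transform (stdlib only)."""
    n = len(signal)
    power = 0.0
    count = 0
    for k in range(1, n // 2):
        f = k * fs / n  # frequency of DFT bin k
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
            count += 1
    return power / count if count else 0.0

# Synthetic "EEG": a 10 Hz mu rhythm sampled at 128 Hz for 1 second
fs = 128
x = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
mu = band_power(x, fs, 8, 12)     # strong: signal lies in the mu band
beta = band_power(x, fs, 18, 26)  # near zero: no beta activity present
```

Event-related desynchronization would appear as a drop in such a band-power feature during motor imagery.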

Several strategies have also been proposed to provide feedback to users of EEG-based BCIs. For instance, virtual-reality systems can provide a realistic feedback that can be efficient for BCI training. In a recent demonstration of this approach, subjects navigated through a virtual environment by imagining themselves walking. In an effort to improve the resolution of brain potentials monitored by the BCIs, more invasive recording methods, such as electrocorticograms (ECoGs) recorded by subdural electrodes, have been introduced. ECoGs sample neuronal activity from smaller cortical areas than conventional EEGs. These BCIs (in the case of patients with advanced ALS) enable control of computer cursors, which the patients use to communicate with the external world or to indicate their intentions.

Severely and partially paralyzed patients can reacquire basic forms of communication and motor control using EEG-based systems. Yet motor recovery obtained using these systems has been limited, and no clear breakthrough that could significantly enhance the power of EEG-based BCIs in the near future has been reported in the literature. This by no means reduces the clinical utility of such systems. Some of them have improved the quality of life of patients, such as the BCI for spelling. But if the goal of a BCI is to restore movements with multiple degrees of freedom through the control of an artificial prosthesis, the message from published evidence is clear: this task will require recording of high-resolution signals from the brain, and this can be done using invasive approaches.

2.4 Invasive BCIs

Invasive BCI approaches are based on recordings from ensembles of single brain cells (also known as single units) or on the activity of multiple neurons (also known as multi-units). In early experiments, monkeys learned to control the activity of their cortical neurons voluntarily, aided by biofeedback indicating the firing rate of single neurons. A few years after these experiments, Edward Schmidt raised the possibility that voluntary motor commands could be extracted from raw cortical neural activity and used to control a prosthetic device designed to restore motor functions in severely paralyzed patients [46]. The technical bottlenecks were passed thanks to a series of experimental and technological breakthroughs that led to a new electrophysiological methodology for chronic, multi-site, multi-electrode recordings. The BCI approach that relies on long-term recordings from large populations of neurons (100-400 units) evolved from experiments carried out in 1995.

These developments paved the way for the first experiment in which neuronal population activity recorded in behaving rats enacted movements of a robotic device that had a single degree of freedom [1]. Soon after this first demonstration, a similar BCI approach was shown to work in New World [54] and rhesus monkeys [55-58]. As a result of these experimental efforts, in less than six years several laboratories reported BCIs that reproduced primate arm reaching [1,54-58] and the combination of reaching and grasping movements [57], using either computer cursors or robotic manipulators as actuators. There are several important differences that distinguish these BCIs (Figure 1). These include: the number of cortical implants (e.g. uni-site or multi-site recordings); the cortical location of implants (e.g. frontal or parietal cortex, or both); the type of neural signal recorded (local field potentials versus single-unit or multi-unit signals); and the size of the neural sample. At Duke University, a BCI strategy has recently been implemented based on single-unit recordings made during intra-operative placement of deep-brain stimulators in Parkinsonian patients [65].

2.5 Real World Application

2.5.1 Communication

Restoring communication is a top priority for people with severe disabilities such as locked-in syndrome, in which the person is completely paralyzed and unable to speak. Consequently, BCI researchers have experimented with several methods of assistive communication, ranging from simple binary (yes/no) capabilities [2] and iconic selection applications to virtual keyboards that support spelling. Several approaches to spelling have been developed. Birbaumer et al. [4], [5] describe a binary speller, dividing the alphabet into successive halves until the desired letter is selected. This speller has been used by a locked-in person to compose letters in a real-world home environment. Several of these spellers have also been used for free-spelling, although measuring the accuracy of BCI output is difficult and relies on user self-reporting.
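The halving strategy described by Birbaumer et al. amounts to a binary search over the alphabet. In the sketch below, the `answer_yes` callback stands in for the user's yes/no brain response and is purely illustrative.

```python
import string

def binary_spell(alphabet, answer_yes):
    """Select one symbol by repeatedly halving the candidate set.

    alphabet:   ordered sequence of symbols
    answer_yes: callback simulating the user's binary (yes/no) response;
                called with the left half, True means "my letter is in there".
    """
    candidates = list(alphabet)
    while len(candidates) > 1:
        left = candidates[: len(candidates) // 2]
        right = candidates[len(candidates) // 2:]
        candidates = left if answer_yes(left) else right
    return candidates[0]

# Simulated user who wants the letter 'G'
target = "G"
picked = binary_spell(string.ascii_uppercase, lambda half: target in half)
# picked == "G" after about log2(26) ≈ 5 yes/no responses
```

Each letter costs roughly five binary selections, which, at the low transfer rates of EEG-based BCIs, explains spelling speeds of only a few letters per minute.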

2.5.2 Environmental Control and Virtual Worlds

Virtual reality has been employed in BCI training systems because of its relative safety and motivational factors. Bayless [7] describes a virtual driving environment that tested P300 responses when subjects encountered a stoplight. Virtual reality can provide a safe environment for training and tuning neurally controlled interfaces to real-world devices, such as a power wheelchair. More experimentation is necessary to determine if skills learned in a virtual-reality setting transfer to real-world scenarios.

2.5.3. Neural Prosthetics

Another key application for BCI technology is restoring movement for people with motor disabilities. Cortical signals have been used to control a hand orthosis [9], essentially restoring the connection from the brain to a paralyzed arm. A locked-in subject has also used neural signals to control a virtual hand [3] in the hopes that simulation would provide clues to potentially incorporating functional electrical stimulation (FES) into a BCI system to restore movement.

A proposed roadmap for the future of BCI research

To achieve the ambitious goal of creating a clinically useful BCI for restoring upper-limb mobility, one has to pass the following key bottlenecks:

1. Developing computationally efficient algorithms, which can be incorporated into the BCI software, for translating neuronal activity into high-precision command signals capable of controlling an artificial actuator that has multiple degrees of freedom.

2. Implementing a new generation of upper-limb prosthetics capable of accepting brain-derived control signals to perform movements with multiple degrees of freedom.

3. Learning how to use brain plasticity to incorporate prosthetic devices into the body representation. This will make the prosthetic feel like the subject's own limb.

2.5.4 Selection Methods

[Review on article: Brain-computer interfaces (BCIs): Detection instead of classification [4]]

In this article the author notes that the selection capability of a BCI is often realized using a computer cursor [1], and also in other ways, such as controlling an arrow on a dial, a moving robot [2] [3], or other external devices. All of these selections, however, are driven by commands extracted from the brain waves of the user.

The performance of a BCI depends largely on the reliability of its signal identification. Current BCI systems are all based on experimental observations that particular mental tasks, such as imagining hand movements, produce identifiable signal features. Beyond this initial identification, continual work is necessary because these features and locations are usually subject-specific and may also change over time.

Approaches to translating BCI signal features into device control signals typically use classification or regression procedures. For example, studies reported in the literature have used linear discriminant analysis, neural networks, support vector machines, and linear regression.
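As a minimal illustration of the regression option, the sketch below fits an ordinary least-squares line mapping a single hypothetical signal feature (say, a band power) to an intended cursor velocity, then uses the fit to decode new samples. The calibration numbers are invented for the example; real decoders combine many features across many channels.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y ≈ a*x + b for one feature."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Hypothetical calibration data: band-power feature -> intended cursor velocity
powers = [0.2, 0.4, 0.6, 0.8]
velocities = [-1.0, -0.3, 0.3, 1.0]
a, b = fit_linear(powers, velocities)

def decode(power):
    """Translate a new feature sample into a cursor velocity command."""
    return a * power + b
```

A classifier (e.g. linear discriminant analysis) would instead map the same features to a discrete choice, such as "left" versus "right".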

2.6 Brain waves as a replacement for DNA identification

Some researchers believe the same interface could form the basis of a mind-controlled password system; they are exploring the possibility of a biometric security device that will use a person's thoughts to authenticate her or his identity.

Brain waves are acknowledged as being unique to each individual. Even when thinking of the same thing, the brain's measurable electrical impulses vary slightly from person to person. A pass-thought could be anything from a snatch of song, the memory of your last birthday or even the image of your favorite painting.

The system has the potential to become a new kind of biometric security tool that would allow users to change their pass codes periodically.

A chief characteristic of brain-wave signatures is that they are unique: the EEG signals of one person differ from those of another, even when both perform the same thought or task. A system trained to recognize a particular user can therefore be quite difficult for another person to manipulate.

A security device wouldn't need to interpret or understand the thought, but simply extract the repeatable features of the pattern and recognize a match. A brain-based biometric can be as strong as DNA-based biometric.
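A minimal sketch of such template matching, assuming the "repeatable features" have been reduced to a fixed-length vector (for instance, band powers per channel): enrollment stores a template, and authentication accepts an attempt whose Pearson correlation with the template exceeds a threshold. All vectors and the threshold are hypothetical.

```python
import math

def correlation(a, b):
    """Pearson correlation between two equal-length feature vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def authenticate(enrolled, attempt, threshold=0.9):
    """Accept if the attempt's feature pattern matches the enrolled template."""
    return correlation(enrolled, attempt) >= threshold

# Hypothetical EEG feature templates (e.g. band powers on five channels)
template = [0.8, 0.1, 0.5, 0.9, 0.3]
same_user = [0.82, 0.12, 0.48, 0.88, 0.31]  # small within-user variation
impostor = [0.2, 0.9, 0.1, 0.3, 0.8]        # different brain-wave signature
```

The system never needs to interpret the thought itself, only to decide whether the new pattern is close enough to the enrolled one.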

However, some researchers are skeptical that a computer will ever be able to passively recognize a particular mental image in a person's head. Because signals from an uncountable number of nerve cells are smeared and lumped together by the time the brain-wave patterns are recorded, authentication is akin to recognizing speakers from muffled voices heard, for example, from some distance away.

2.7 Privacy

Privacy is another rising concern about BCI. Stephen Fairclough, a psychophysiologist at Liverpool John Moores University, said that BCI games might give marketers and government entities the inside track on a user's emotional and brain states, potentially turning your computer into a polygraph.

Chapter 3




[1] M. A. Lebedev and M. A. L. Nicolelis, "Brain-machine interfaces: past, present and future," Trends Neurosci., vol. 29, pp. 536-546, 2006.

[2] A. P. Georgopoulos, J. T. Lurito, M. Petrides, A. B. Schwartz, and J. T. Massey, "Mental rotation of the neuronal population vector," Science, vol. 243, pp. 234-236, 1989.

[3] E. M. Schmidt et al., "Fine control of operantly conditioned firing patterns of cortical neurons," Exp. Neurol., vol. 61, p. 349, 1978.

[4] G. B. Stanley, F. F. Li, and Y. Dan, "Reconstruction of natural scenes from ensemble responses in the LGN," J. Neurosci., vol. 19, pp. 8036-8042, 1999.

[5] S. Ortiz Jr., "Brain-computer interface: where human and machine meet," Computer, vol. 40, pp. 17-21, 2007.

[6] N. Birbaumer, "Brain-computer interface research: coming of age," Clin. Neurophysiol., vol. 117, pp. 479-483, 2006.