There are two major trends in computing that will impact augmented cognition. The first is the shift in computing platform from the desktop to mobile computing (e.g., smartphones and tablets), driven by users who want to be able to do computing tasks wherever they are. The second trend is the gamification of computer applications to keep the user engaged and motivated. Compared to a workstation, the mobile computing environment is a challenge because of its limited computing power, storage capacity, and battery capacity. This paper discusses the issues involved in implementing augmented cognition activities on a mobile platform and the tradeoffs of gamifying augmented cognition activities. These issues are discussed in terms of two example mobile platform applications that use sensors.
There are two major trends in computing that will impact augmented cognition. The first is the shift in computing platform from the desktop to mobile computing (e.g., smartphones and tablets). "Mobile internet usage is predicted to overtake desktop usage as early as 2014". Users enjoy the portability of mobile computing, and with 4G and Wi-Fi hot spots, internet connectivity is almost ubiquitous. The second trend is gamification, the incorporation of game elements into non-game applications to keep the user engaged and motivated.
With the availability of high-speed wireless internet (e.g., 4G or Wi-Fi) and cloud computing, the computing capacity once reserved for workstations has become readily available to the mobile user. Organizations with traditional web services are becoming increasingly aware of the need to migrate or redesign their applications for the mobile computing platform. One of the primary concerns with the shift to mobile will be how to maintain and increase user productivity.
Maintaining and increasing user productivity aligns with a strategic goal of augmented cognition, which is to increase task performance capacity. This increased capacity could be manifested by increasing the learning rate, increasing the ability to do a task, or maintaining continued task competence. Increased task performance capacity is achieved by using physiological sensor feedback to adjust or modify the activity the user is performing.
Implementing a real-time augmented cognition system even on a workstation can be a challenge, because streaming physiological sensor data must be stored and processed while the application is simultaneously running and being modified. Current mobile platforms, in comparison, are more limited in computing and storage capacity than the workstation and must also contend with limited battery life. Regardless of these limitations, the computing power and storage of mobile systems seem to be increasing in accordance with Moore's law (Chang, Lee, & Jung, 2012). Also, with the advent of higher wireless communication rates, both computation and storage could be off-loaded to the cloud. Preliminary research and methods developed now can be applied to future mobile devices with greater computing power, storage capacity, wireless connection speed, and battery life.
At the core of augmented cognition research to increase task performance capacity are physiological sensor selection, data collection, data analysis, and the modification of the activity guided by the analysis of the sensor data. Currently, mobile devices are equipped with a set of sensors that could be repurposed for augmented cognition (see Table 1). An eye-tracking application using the forward facing camera on an Android based tablet will be described in this paper. Other sensors found on mobile devices are listed in Table 1 along with the potential cognitive measures that could be used with augmented cognition applications.
Table 1. Sensor measures and their potential cognitive measures.

Sensor Measure | Potential Cognitive Measure
Eyetracking: Gaze Position, Fixation Number, Fixation Duration, Repeat Fixations, Search Patterns, Pupil Size, Blink Rate, Blink Duration | Difficulty, Attention, Stress, Relaxation; Problem Solving, Successful Learner, Higher Level of Reading Skill (Andreassi; Sheldon); Arousal (Andreassi)
Button Pressure: Pressures Applied to the Button | Stress, Certainty of Response, Cognitive Load (Ikehara & Crosby, 2005; Ikehara, Crosby, & Chin, 2005)
Almost all mobile devices have Bluetooth, Wi-Fi, and 4G communications. Assuming Wi-Fi or 4G may already be in use for accessing cloud computing resources, Bluetooth would be the preferred interface for connecting sensors in close proximity. In this paper, an Android smartphone application is described to demonstrate how Bluetooth would be implemented on a mobile device to acquire sensor data.
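The sensor-acquisition side of such an application can be sketched in Java. The 4-byte frame layout below (0xA5 header byte, sensor id, 16-bit big-endian value) is a hypothetical protocol invented for illustration, not a standard; on Android the stream would come from BluetoothSocket.getInputStream(), but any InputStream works, so the parser can be exercised off-device:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

// Sketch of reading framed sensor samples from a Bluetooth serial stream.
// Frame layout (hypothetical): 0xA5 header, sensor id byte, 16-bit value.
public class SensorStreamReader {
    static final int HEADER = 0xA5;

    public static class Sample {
        public final int sensorId;
        public final int value;
        public Sample(int sensorId, int value) { this.sensorId = sensorId; this.value = value; }
    }

    public static List<Sample> readSamples(InputStream stream, int count) {
        DataInputStream in = new DataInputStream(stream);
        List<Sample> samples = new ArrayList<>();
        try {
            while (samples.size() < count) {
                if (in.readUnsignedByte() != HEADER) continue; // resynchronize on header byte
                int id = in.readUnsignedByte();
                int value = in.readUnsignedShort();            // big-endian 16-bit sample
                samples.add(new Sample(id, value));
            }
        } catch (IOException e) {
            // stream ended or connection dropped; return what was collected
        }
        return samples;
    }
}
```

Resynchronizing on the header byte lets the reader recover from dropped bytes, which matters on a wireless link feeding a real-time augmented cognition loop.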
Gamification is a process of applying game elements to non-game applications to maintain a high level of user engagement and motivation.
Benefit of gamification:
- High level of user engagement and motivation.

Costs of gamification:
- Adding gamification requires more computing capacity, since game elements are added to the application.
- Too many game elements could be a distraction from the primary task.
- It is unclear which combination of game elements produces the best results, and individual differences may play a significant role in determining the optimum combination. The combination of game elements may need to change dynamically with the individual's predisposition.

Gamification benefitting from physiological sensors:
- Reward for a cognitive state, and not only for performance or an action.
- Increase challenge for a cognitive state, and not only for performance or an action.
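The last two points can be illustrated with a minimal Java sketch, in which game difficulty and reward respond to a sensed cognitive state rather than to performance alone. The normalized cognitive-load estimate in [0, 1] and the band thresholds (0.3, 0.7) are illustrative assumptions, not values from this paper:

```java
// Sketch of adapting game elements to a sensed cognitive state.
// Assumed input: a cognitive-load estimate normalized to [0, 1], e.g.
// derived from pupil size or button-pressure sensors.
public class AdaptiveChallenge {
    // Adjust difficulty toward the engaged band rather than by score alone.
    public static int adjustDifficulty(int level, double cognitiveLoad) {
        if (cognitiveLoad < 0.3) return level + 1;              // under-challenged: raise difficulty
        if (cognitiveLoad > 0.7) return Math.max(1, level - 1); // overloaded: lower difficulty
        return level;                                           // engaged: hold steady
    }

    // Reward the state, not only the action: bonus for completing a task
    // while staying inside the engaged band.
    public static int reward(int baseScore, double cognitiveLoad) {
        boolean engaged = cognitiveLoad >= 0.3 && cognitiveLoad <= 0.7;
        return engaged ? baseScore * 2 : baseScore;
    }
}
```

Because the optimum combination of game elements varies between individuals, the thresholds themselves could be tuned per user over time.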
Examples of Sensors for Mobile Applications
Using the Front Facing Camera of a Mobile Device for Eye Tracking
Many smartphones and tablets have forward facing cameras (i.e., cameras that face the user) for video conferencing. Although these cameras have lower resolution than the rear facing camera, they are of sufficient quality for eye-tracking. The pupil-center/corneal-reflection eye-tracking technique, which uses both the pupil location and a reflected glint from the eye, is more accurate, but the less accurate pupil-center-only approach is possible with a mobile device.
Locating the pupils starts with capturing an image of the user facing the display. The second step is to locate the face, the third is to locate the eyes, and the fourth is to locate the pupils of both eyes. A calibration procedure is required for each user, in which the user looks at specific locations on the screen and the pupil location is recorded. Once that calibration information is available, an algorithm can determine the rough location of where the user is looking on the screen. The following gives more detail on the eye-tracking process.
An Android tablet (Eee Pad Transformer TF101) was programmed in Java based on the OpenCV class for face detection. "OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library" (http://opencv.org/about.html). A Local Binary Patterns (LBP) cascade classifier is used to do the face detection. "LBP features are integer in contrast to Haar features, so both training and detection with LBP are several times faster then with Haar features" (https://github.com/alexmac/alcexamples/blob/master/OpenCV-2.4.2/doc/user_guide/ug_traincascade.rst). Algorithm efficiency is critical on a mobile device, since an efficient algorithm increases computational speed and reduces power consumption.
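The integer feature that makes LBP cheap can be sketched directly. The following is the basic 8-neighbor LBP operator, shown for illustration; it is not OpenCV's trained cascade, which combines many such codes over a detection window:

```java
// Basic Local Binary Pattern operator: encode a pixel by comparing its
// 8 neighbors against the center, producing an 8-bit integer code.
// Only integer comparisons are used, which is why LBP is fast on mobile.
public class LocalBinaryPattern {
    // Neighbor offsets, clockwise from the top-left corner.
    private static final int[] DX = {-1, 0, 1, 1, 1, 0, -1, -1};
    private static final int[] DY = {-1, -1, -1, 0, 1, 1, 1, 0};

    // gray is a grayscale image as int[row][col]; (x, y) must be interior.
    public static int lbpCode(int[][] gray, int x, int y) {
        int center = gray[y][x];
        int code = 0;
        for (int i = 0; i < 8; i++) {
            int neighbor = gray[y + DY[i]][x + DX[i]];
            code = (code << 1) | (neighbor >= center ? 1 : 0); // 1 bit per neighbor
        }
        return code;
    }
}
```

A flat region yields the code 255 (all neighbors at least equal to the center), while a bright isolated pixel yields 0, so the code captures local texture with no floating-point work.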
To locate the eyes within the face, a Haar cascade classifier is used: ". . . detectors based on these Haar-like features work well with 'blocky' features such as eyes, mouth, face, and hairline . . ." (Gary Bradski and Adrian Kaehler, p. 510).
There are many issues, including eyelids, eyelashes, corneal reflections, shadows, and blinking. An algorithm that deals with all these factors is beyond the scope of this paper, but researchers have been working on these problems. Even with a less robust pupil detection method, the pupil is difficult to locate and requires several image processing steps:
1. Turn the color image containing the eye into gray scale.
2. Eliminate unnecessary pixels above the eye area.
3. Use histogram equalization to increase black-and-white contrast, making the black pupil dominant in terms of pixel intensity.
4. Invert pixel intensity to make the black pupil white.
5. Use erosion to "eat away" the distracting white areas.
6. Threshold the picture into binary so the image is only black and white.
7. Take the center of the bounding box of the contour as the center of the pupil.
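The later stages of these steps can be illustrated with a toy, array-based version (invert, erode, threshold, bounding-box center); cropping and histogram equalization are omitted here, and an Android implementation would use the corresponding OpenCV functions rather than raw arrays:

```java
// Toy version of the final pupil-location steps on a grayscale int[][].
public class PupilLocator {
    // Step 4: invert intensities so the dark pupil becomes the brightest region.
    public static int[][] invert(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                out[y][x] = 255 - img[y][x];
        return out;
    }

    // Step 5: one pass of grayscale erosion; each interior pixel becomes the
    // minimum of itself and its 4 neighbors, eating away small bright areas.
    public static int[][] erode(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++)
                out[y][x] = Math.min(img[y][x],
                        Math.min(Math.min(img[y - 1][x], img[y + 1][x]),
                                 Math.min(img[y][x - 1], img[y][x + 1])));
        return out;
    }

    // Step 6: binarize; pixels at or above the threshold become white (1).
    public static int[][] threshold(int[][] img, int t) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                out[y][x] = img[y][x] >= t ? 1 : 0;
        return out;
    }

    // Step 7: center {x, y} of the bounding box of all white pixels,
    // or {-1, -1} if no white pixel remains.
    public static int[] center(int[][] bin) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE, maxX = -1, maxY = -1;
        for (int y = 0; y < bin.length; y++)
            for (int x = 0; x < bin[0].length; x++)
                if (bin[y][x] == 1) {
                    minX = Math.min(minX, x); maxX = Math.max(maxX, x);
                    minY = Math.min(minY, y); maxY = Math.max(maxY, y);
                }
        if (maxX < 0) return new int[]{-1, -1};
        return new int[]{(minX + maxX) / 2, (minY + maxY) / 2};
    }
}
```

Running invert → erode → threshold → center on a small frame with a dark blob recovers the blob's center, mirroring how the full pipeline isolates the pupil.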
Figure 1. The location of the pupil looking left, center, and right.
The horizontal location of the pupil can be determined more accurately than the vertical location, because there is a clearer image change when moving the eyes from left to right on the display than when moving them up and down. Figure 2 shows the face location, eye location, and pupil location. Note that the pupil position in the left eye is not centered, since the algorithm used becomes less accurate when there is a corneal reflection on the pupil.
Figure 2. Eyes and face are detected. Note that the pupil position in the left eye is not centered, since the algorithm used becomes inaccurate when there is a corneal reflection on the pupil.
With the pupil location accurately determined, calibration of the user can be performed. Both the calibration data and the real-time pupil locations can then be used to determine where the user is looking on the screen.
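One simple form of this mapping is a two-point linear calibration along the horizontal axis, the axis this approach tracks best. The sketch below assumes the user fixates the left and right screen edges during calibration; a real calibration would use more points and both axes:

```java
// Sketch of a two-point gaze calibration: record the pupil x-coordinate
// while the user fixates the left and right screen edges, then linearly
// interpolate live pupil positions to screen coordinates.
public class GazeCalibration {
    private final double pupilLeft;   // pupil x when fixating the left edge
    private final double pupilRight;  // pupil x when fixating the right edge
    private final double screenWidth; // display width in pixels

    public GazeCalibration(double pupilLeft, double pupilRight, double screenWidth) {
        this.pupilLeft = pupilLeft;
        this.pupilRight = pupilRight;
        this.screenWidth = screenWidth;
    }

    // Map a live pupil x-coordinate to an estimated screen x-coordinate,
    // clamped to the display bounds.
    public double screenX(double pupilX) {
        double t = (pupilX - pupilLeft) / (pupilRight - pupilLeft);
        return Math.max(0, Math.min(screenWidth, t * screenWidth));
    }
}
```

Clamping keeps noisy pupil estimates from producing off-screen gaze points, which matters when the estimate drives an augmented cognition adaptation.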
A Gamified Smartphone Application Using External Sensors Connected via Bluetooth
As mentioned before, an augmented cognition system using sensors places additional computation and storage requirements on the system being used. Gamification also adds computation and storage demands on the mobile system, and generally reduces the battery life of the device.
Described below is a simple application demonstrating the potential of a gamified learning task that can access external sensors connected via Bluetooth.
Variations of Bluetooth