
Automatic Human Task and Behavior Analysis Using Wearable Sensors


 

1. Statement of the research problem and its significance

Automatic human task and behavior analysis plays a significant role in a wide variety of applications. Continuously identifying and tracking activities is essential for providing healthcare and support to elderly people living on their own, as well as to people with physical or mental disabilities [1]. Smart glasses can automatically record photos and video of the surroundings when high cognitive load is detected, offloading items that would otherwise occupy human memory and improving working efficiency. With wearable automatic task analysis, a computing system can detect whether the user is highly engaged in a task and hence manage the interruptions posed by notifications [2].


The recent development of smart wearable sensors offering a wide range of sensing and input modalities provides an innovative way to perform automatic human task and behavior analysis from signals such as eye activity, speech and head movement. Eye features such as pupil diameter, blinks and saccades/fixations, extracted through eye-directed infrared cameras, contain rich information about the user's activity and cognitive processes [3]. For example, the open-source PUPIL head-mounted eye tracker from Pupil Labs provides an affordable and fully customizable way to gather eye activity data. Moreover, the recorded high-resolution video can be used for corneal imaging, which analyses the scene reflected by the user's cornea and provides contextual information. Lightweight sensors such as microphones can easily be integrated into a headset for speech analysis and for capturing contextual information such as ambient sound. Physical activity recognition can be realized through inertial motion sensors such as accelerometers, gyroscopes and magnetometers, providing information about the wearer's social activities, cognitive tasks and attention. Unlike existing methods in the domain of human factors and human-computer interaction, which are manual and subjective [2], automatic human task and behavior analysis via wearable sensors has significant potential for long-term sensing, modelling and analysis. With the widespread adoption of smartphones, it also becomes more feasible to scale up to a large number of users for collective human behavior analysis.

In the proposed research project, the key aim is to develop an automatic system that takes sensing signals from a wearable device and uses the extracted features to assess the current task in terms of load type (perceptual, cognitive, communicative or motor), load level (low, medium or high) and task transition state. Supervised machine learning with a variety of classifiers will be used to classify the load type, estimate the load level and detect transition points from the annotated data, as sketched below.
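As a concrete illustration, a minimal Python sketch of how the annotated data could be organised around these three targets is given below; the class names and the AnnotatedWindow structure are assumptions made for illustration rather than a fixed design.

from dataclasses import dataclass

LOAD_TYPES  = ["perceptual", "cognitive", "communicative", "motor"]
LOAD_LEVELS = ["low", "medium", "high"]

@dataclass
class AnnotatedWindow:
    features: list        # fused eye / speech / head-movement features for one time window
    load_type: str        # one of LOAD_TYPES
    load_level: str       # one of LOAD_LEVELS
    is_transition: bool   # True if a task boundary falls inside this window

Each window then provides one training example for three separate supervised problems: a four-class load type classifier, a three-class load level classifier and a binary transition detector.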

2. Outline of plans to address the problem

Our research method is illustrated in Figure 1 and elaborated in the following steps.

 

2.1 Data collection

Four types of task will be designed to represent the four types of load. Tasks such as searching for and receiving information or identifying objects will be designed for perceptual load. Tasks such as information processing and problem solving will be designed for cognitive load. Tasks involving communication will be designed for communicative load. Tasks involving physical activity will be designed for motor load. Each participant will be assigned the tasks at three difficulty levels representing different load levels, and the time stamps of each task will be recorded automatically. Participants will be asked to wear a pair of safety glasses equipped with two lightweight infrared web cameras and a microphone to collect eye activity and speech information. Head movements will be recorded via an IMU attached to the head with a head strap.
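To illustrate the automatic time-stamping, the following minimal Python sketch logs the start and end time of each task block to a CSV file; the task names, difficulty labels and file layout are assumptions made here for illustration only.

import csv, time

def run_session(tasks, log_path="session_log.csv"):
    """Record start/end timestamps for each (task, difficulty) block."""
    with open(log_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["task", "difficulty", "start_unix", "end_unix"])
        for task, difficulty in tasks:
            start = time.time()
            input(f"Press Enter when the {difficulty} {task} task is finished...")
            writer.writerow([task, difficulty, start, time.time()])

# Example: each load type at three difficulty levels
tasks = [(t, d) for t in ["perceptual", "cognitive", "communicative", "motor"]
                for d in ["low", "medium", "high"]]
run_session(tasks)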

2.2 Data pre-processing

Video processing will be carried out to obtain features such as blink rate, pupil size and pupil centre using self-tuning and dual ellipse fitting algorithms [4]. Fixations and saccades can be separated by dispersion-based algorithms [5], using a dispersion threshold of one degree of visual angle and a minimum duration of 200 ms. Speech processing will be carried out to extract features such as speech rate, intervals between pauses, fundamental frequency and MFCCs. Time-series signals from the accelerometer and gyroscope inside the IMU can be transformed into an activity image that encodes hidden relations between every pair of signals [6].
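For the fixation/saccade separation step, a minimal sketch of the dispersion-based I-DT procedure [5] is given below, assuming gaze coordinates have already been converted to degrees of visual angle and timestamps are in seconds; the function and parameter names are illustrative.

import numpy as np

def idt_fixations(x, y, t, max_dispersion_deg=1.0, min_duration_s=0.2):
    """x, y, t are 1-D numpy arrays; returns (start, end) sample indices of fixations.
    Samples outside the returned windows are treated as saccades."""
    fixations, i, n = [], 0, len(t)
    while i < n:
        # Grow an initial window spanning at least the minimum fixation duration.
        j = i
        while j < n and t[j] - t[i] < min_duration_s:
            j += 1
        if j >= n:
            break
        disp = (x[i:j+1].max() - x[i:j+1].min()) + (y[i:j+1].max() - y[i:j+1].min())
        if disp <= max_dispersion_deg:
            # Extend the window while the dispersion stays below the threshold.
            while j + 1 < n:
                ext = (x[i:j+2].max() - x[i:j+2].min()) + (y[i:j+2].max() - y[i:j+2].min())
                if ext > max_dispersion_deg:
                    break
                j += 1
            fixations.append((i, j))
            i = j + 1
        else:
            i += 1
    return fixations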

2.3 Data classification


The activity image enables a Deep Convolutional Neural Network (DCNN) to automatically learn optimal features for activity recognition [6]. A variety of classification techniques, such as K-Nearest Neighbors (K-NN), Support Vector Machines (SVM), Gaussian Mixture Models (GMM), Random Forests (RF) and Hidden Markov Models (HMM), will be compared in terms of accuracy and computational cost, and the algorithm with the best performance will be adopted.
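The sketch below illustrates how such a comparison could be run with cross-validation in scikit-learn for the discriminative classifiers (K-NN, SVM, RF); GMM- and HMM-based classifiers would require per-class generative models (e.g. via hmmlearn) and are omitted from this sketch. The feature matrix X and label vector y are assumed to come from the pre-processing stage in Section 2.2.

from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

candidates = {
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=200),
}

def compare_classifiers(X, y, cv=5):
    """Report mean cross-validated accuracy for each candidate classifier."""
    for name, clf in candidates.items():
        pipe = make_pipeline(StandardScaler(), clf)   # scale features before fitting
        scores = cross_val_score(pipe, X, y, cv=cv)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")

Computational cost can be compared in the same loop by timing the fit and predict calls.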

Figure 1 Overall design diagram for automatic task analysis using wearable devices [2]

3. Details of previous research experience in my area of interest

In the fourth year of my undergraduate study, I carried out a real-time speech recognition project that detects sequences of spoken digits recorded by several users with 98% accuracy. I first implemented an end-point detection algorithm that segments each spoken digit based on the energy spectrum and zero-crossing rate. I then computed the MFCC feature vector of each digit and compared it with reference features using dynamic time warping. The whole process was implemented in Matlab, with all the code written from scratch by myself. Through this project I gained experience in audio signal processing and speech recognition, which is a key part of the proposed research project.
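A condensed Python sketch of the MFCC plus dynamic time warping matching used in that project is shown below; the original implementation was in Matlab, so librosa is assumed here for feature extraction and the helper names are illustrative.

import numpy as np
import librosa

def mfcc_features(path, n_mfcc=13):
    """Load a recording and return its MFCC sequence (frames x coefficients)."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def dtw_distance(a, b):
    """Classic dynamic time warping cost between two MFCC sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def recognise_digit(test_path, references):
    """references maps a digit label to the path of its reference recording."""
    test = mfcc_features(test_path)
    return min(references, key=lambda d: dtw_distance(test, mfcc_features(references[d])))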

References:

[1] Attal, F., Mohammed, S., Dedabrishvili, M., Chamroukhi, F., Oukhellou, L. and Amirat, Y. (2015). Physical Human Activity Recognition Using Wearable Sensors. Sensors, 15(12), pp.31314-31338.

[2] Epps, J. and Chen, S. (2018). Automatic Task Analysis: Toward Wearable Behaviometrics. IEEE Systems, Man, and Cybernetics Magazine, 4(4), pp.15-20.

[3] Bulling, A. and Kunze, K. (2016). Eyewear computers for human-computer interaction. interactions, 23(3), pp.70-73.

[4] Chen, S. and Epps, J. (2014). Efficient and Robust Pupil Size and Blink Estimation From Near-Field Video Sequences for Human–Machine Interaction. IEEE Transactions on Cybernetics, 44(12), pp.2356-2367.

[5] Salvucci, D. and Goldberg, J. (2000). Identifying Fixations and Saccades in Eye-tracking Protocols. Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA), pp.71-78.

[6] Jiang, W. and Yin, Z. (2015). Human Activity Recognition Using Wearable Sensors by Deep Convolutional Neural Networks. Proceedings of the 23rd ACM International Conference on Multimedia, pp.1307-1310.

 
