EEG Correlates of Task Engagement and Mental Workload in Vigilance, Learning, and Memory Tasks

Review

In 2007, Berka et al. published their article, EEG Correlates of Task Engagement and Mental Workload in Vigilance, Learning, and Memory Tasks. Aiming to improve our ‘capability to continuously monitor an individual’s level of fatigue, attention, task engagement, and mental workload in operational environments using physiological parameters’, they present the following:

  • A new EEG metric for task engagement
  • A new EEG metric for mental workload
  • A hardware and software solution for real-time acquisition and analysis of EEG using these metrics
  • The results of a study of these systems and metrics in use

The article focuses primarily on two related concepts: task engagement and mental workload. As they put it:

Both measures increase as a function of increasing task demands but the engagement measure tracks demands for sensory processing and attention resources while the mental workload index was developed as a measure of the level of cognitive processes generally considered more the domain of executive function.

Using features derived from signals acquired with a wireless, twelve-channel EEG headset, Berka et al. trained a model using linear and quadratic discriminant function analysis to identify and quantify cognitive state changes. For engagement, the model gives probabilities for each of high engagement, low engagement, relaxed wakefulness, and sleep onset; for workload, it gives probabilities for low and high mental workload. (They appear to treat cognitive states as unlabeled combinations of the probabilities of these classes.) The aim of this simplified model was generalizability across subjects and scenarios, as well as suitability for implementation in wireless, real-time systems.
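
To make the modeling approach concrete, here is a minimal sketch of how a discriminant-function classifier can emit per-class probabilities of the kind described above. It uses scikit-learn's LDA/QDA implementations rather than the authors' own code, and the feature matrix, labels, and epoch counts are hypothetical stand-ins for their second-by-second EEG features.

```python
# Sketch only: hypothetical features and labels, not the authors' data or pipeline.
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

rng = np.random.default_rng(0)

# One row per epoch, one column per channel/band power feature (placeholder values).
X_train = rng.normal(size=(400, 12))
classes = ["high_engagement", "low_engagement", "relaxed_wakefulness", "sleep_onset"]
y_train = rng.choice(classes, size=400)

# LDA assumes a covariance shared across classes; QDA fits one covariance per class.
lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
qda = QuadraticDiscriminantAnalysis().fit(X_train, y_train)

# For a new epoch, predict_proba returns one probability per engagement class,
# analogous to the per-second probability vectors the paper reports.
x_new = rng.normal(size=(1, 12))
print(dict(zip(lda.classes_, lda.predict_proba(x_new)[0])))
print(dict(zip(qda.classes_, qda.predict_proba(x_new)[0])))
```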

They trained the model on 13 subjects performing a battery of tasks and cross-validated it with 67 additional subjects performing a similar battery. Task order was not randomized in either training or cross-validation. The batteries encompass a range of task types and difficulties; unfortunately, the authors struggle to present them as a cohesive whole or to argue for a relationship between the tasks.
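
As a rough illustration of this design (and emphatically not the authors' pipeline), the sketch below trains on epochs from one cohort of subjects and scores on a completely separate cohort; the cohort sizes mirror the paper, but the data and labels are synthetic placeholders.

```python
# Sketch only: synthetic data standing in for cross-subject validation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_cohort(n_subjects, epochs_per_subject=50, n_features=12):
    """Generate placeholder epochs and workload labels for a cohort of subjects."""
    n = n_subjects * epochs_per_subject
    X = rng.normal(size=(n, n_features))
    y = rng.choice(["low_workload", "high_workload"], size=n)
    return X, y

X_train, y_train = make_cohort(13)   # training cohort
X_test, y_test = make_cohort(67)     # entirely separate validation cohort

model = LinearDiscriminantAnalysis().fit(X_train, y_train)
print("held-out-subject accuracy:", accuracy_score(y_test, model.predict(X_test)))
```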

In general, Berka et al found that for the indexes they developed:

[T]he EEG engagement index is related to processes involving information-gathering, visual scanning, and sustained attention. The EEG-workload index increases with working memory load and with increasing difficulty level of mental arithmetic and other problem-solving tasks.

My primary issue with this article revolves around the authors’ statement:

During [some] multi-level tasks, EEG-engagement showed a pattern of change that was variable across tasks, levels, and participants.

Indeed, these tasks represented a large portion of the task battery. The authors argue for the effectiveness of their engagement index, but never thoroughly address why this index is inconsistent across tasks, levels, and participants. At the very least, this might have been included in the authors’ suggestions for future work.

Open Questions

  • The authors gave very few details on the specifics of their wireless EEG system. Many recent products in this area have been of questionable usefulness, at best…
  • Why did the authors not control for ordering effects?
  • Why the different protocols for training and cross-validation? More than this, why modify tasks that were common across both protocols? Finally, if the authors were going to modify common tasks, why not modify those that seemed particularly problematic, at least as presented in the paper (e.g., “Trails”)?

I thought we were over ‘synergy’…

Hey Matthew Bietz, Toni Ferro, and Charlotte Lee: 2004 called, and it wants its terrible buzzwords back. No, really, people have been vocal about their hatred for ‘synergy’ for over a decade now; find a less grating way to describe cooperative interaction. Here’s a brilliant suggestion: ‘cooperative interaction’.

Now that that’s out of the way: I’ve just finished reading Sustaining the Development of Cyberinfrastructure: An Organization Adapting to Change by Bietz et al. (yes, at least they left it out of the title). This is a 2012 study of how to make cyberinfrastructure sustainable through ‘synergizing’ (an unholy, Frankensteinian abomination of a made-up word).

Paper Mindmap

Cyberinfrastructures

According to the authors, a ‘cyberinfrastructure’ (CI) is a virtual organization composed of people working with large-scale scientific computational and networking infrastructures. This seems an overly limiting definition of a CI, but a suitable one for the purposes of the paper. Within this definition, the authors consider how the people who work on and within CIs grapple with growing amounts of data and with computational problems of increasing size and complexity. In particular, the authors are interested in exploring the sustainability of CIs. They do so through a large case study of one particular CI, the Community Cyberinfrastructure for Advanced Microbial Ecology Research and Analysis (CAMERA) at UCSD. The authors spent an extended period of time, across two observation periods separated by two years, interviewing participants on the project, working among the participants, and observing general trends in the microbial research community.

Relationships

Overall, the sustainability of a CI boils down to how well relationships are managed and how open the developers of a CI are to change. The authors present several observations from their work with CAMERA that demonstrate how innate constant change is to the environments in which CIs are situated, and how CIs are, fundamentally, an intricate set of relationships between people, organizations, and technologies. Over the course of their observation of the CAMERA project, the authors noted a number of changes in the structure of the project. These changes, because of the multi-layered relationships comprising the CI, had far-reaching effects across many different pieces of the CI. The only successful way to navigate such changes is to understand their potential impact throughout the CI.

Reactions

At the risk of sounding overly reductionist, it seems to me that the overwhelming majority of what the authors present in this paper is basic common sense. Take any business, stir the pot, and watch how the business responds. I assume that most intelligent people could surmise that any significant change would have far-reaching effects within the business, and that sensitivity to such changes and their effects on relationships would be important in determining how well the organization copes. Certainly, the situation becomes more complex given a more complex relationship structure, but the principle remains the same. Furthermore, the paper does not pair well with my general cynicism toward practice-based research. While it is well structured and written, I find it hard to identify any genuine contribution beyond a decent articulation of what most people should already know.