Perceptive Pixel

I’ve had the opportunity to think a bit more about this NUI-based tool for MongoDB data exploration and visualization. In addition, I’ve been able to discuss the project with Doug Bowman. I now have a bit more clarity about what I’d like to see from this interface and what first steps I should take.

First, Chris North introduced Virginia Tech’s new Microsoft Perceptive Pixel at the ICAT Community Playdate last Friday.

From Microsoft:

The Perceptive Pixel (PPI) by Microsoft 55″ Touch Device is a touch-sensitive computer monitor capable of detecting and processing a virtually unlimited number of simultaneous on-screen touches. It has 1920 x 1080 resolution, adjustable brightness of up to 400 nits, a contrast ratio of up to 1000:1, and a display area of 47.6 x 26.8 inches. An advanced sensor distinguishes true touch from proximal motions of palms and arms, eliminating mistriggering and false starts. With optical bonding, the PPI by Microsoft 55” Touch Device virtually eliminates parallax issues and exhibits superior brightness and contrast. And it has built-in color temperature settings to accommodate various environments and user preference.


While the unit is quite impressive, I’m most interested in how this interface might enable something truly unique for this project. Aside from the physical space around the unit, there’s no limiting factor on the number of users who might view and interact with on-screen content, and there’s plenty of room for multiple users to carve out their own visualizations. So, I’ll be working with the Perceptive Pixel instead of the iPad. The learning curve will be steeper for me than sticking with the iPad, as I’m already a competent iOS developer, but I think it will be worth the additional effort.

Second, I’m concerned about biting off more than I can chew in this project. Both data exploration and visualization (in particular, of the dataset with which I’m always working) are important to me. However, given the duration of the project, trying to go very deep into both might be too ambitious. Instead, I’ll focus on developing an interface for collaborative visualization of NoSQL data; data exploration can come later. This likely means that the first several iterations will use only canned data from the dataset.
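To make the canned-data idea concrete, here’s a minimal sketch of the kind of data access I have in mind. It’s written in Python with pymongo purely for illustration (the real implementation will live on whatever stack the Perceptive Pixel demands), and the database and collection names are placeholders rather than my actual dataset.

```python
# Minimal sketch: pull a small, fixed set of "canned" documents from MongoDB
# to drive early visualization iterations. "research_db" and "observations"
# are placeholder names, not my real database or collection.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["research_db"]["observations"]

# Grab a small sample once and treat it as static, canned data.
canned = list(collection.find().limit(50))

for doc in canned:
    print(doc["_id"], {k: v for k, v in doc.items() if k != "_id"})
```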

So, the first step is to jump into C#. I’m not particularly excited to work on a Microsoft stack, but if this is what working with the Perceptive Pixel requires, so be it. The next step is to begin to brainstorm design ideas–more to come on that this week.

Houston, we have a problem.

Here’s the punch line from Mark Ackerman’s The Intellectual Challenge of CSCW: The Gap Between Social Requirements and Technical Feasibility:

If CSCW (or HCI) merely contributes “cool toys” to the world, it will have failed its intellectual mission. Our understanding of the gap is driven by technological exploration through artifact creation and deployment, but HCI and CSCW systems need to have at their core a fundamental understanding of how people really work and live in groups, organizations, communities, and other forms of collective life. Otherwise, we will produce unusable systems, badly mechanizing and distorting collaboration and other social activity.

This “social-technical” gap is the space between how human behavior and activity actually work and our ability to understand, model/represent, and design for that behavior and activity in human-computer interactions. Coming to grips with this gap is, for Ackerman, the primary challenge for computer-supported cooperative work as a field.

Ackerman borrows Simon’s idea of the sciences of the artificial to build a case for an approach toward better studying, understanding, and addressing the social-technical gap in CSCW. Simon differentiates between the artificial (those things that exist as the products of “human design and agency”) and the natural (those things that exist apart from human intervention). For Simon, the existing sciences focused on understanding the natural, and engineering focused on synthesizing the artificial. Between these two, Simon proposed a space for new sciences–those that seek to understand the nature of design and engineering. Ackerman places CSCW squarely in the realm of these new sciences:

CSCW is at once an engineering discipline attempting to construct suitable systems for groups, organizations, and other collectivities, and at the same time, CSCW is a social science attempting to understand the basis for that construction in the social world (or everyday experience).
CSCW’s science, however, must centralize the necessary gap between what we would prefer to construct and what we can construct. To do this as a practical program of action requires several steps—palliatives to ameliorate the current social conditions, first-order approximations to explore the design space, and fundamental lines of inquiry to create the science.

I’m most interested in Ackerman’s call for fundamental lines of inquiry to create this new science of the artificial, primarily because I believe this approach holds implications not only for CSCW, but for the broader field of human-computer interaction. The lack of focus we tend to have in HCI (exhibited by the never-ending stream of “cool toys” presented at conference after conference) desperately needs to be addressed, and identifying and carefully examining fundamental lines of inquiry could go a long way toward bringing that focus.

I’m just as guilty of this lack of focus as anyone else. I’ve got what I think are “cool” ideas, and I’ve built up my own research around what I’m afraid are thrown-together, not-so-fundamental lines of questioning. It’s difficult for me to backtrack, as I know it would be for anyone else. However, in order to genuinely contribute to the progress of HCI as a field, I must take the time to establish my work in such a way that it is both prompted by the work that has gone before and situated to inform that which may follow. If I and others don’t, then fourteen years after Ackerman wrote his article, we’re still failing our mission.

EEG Correlates of Task Engagement and Mental Workload in Vigilance, Learning, and Memory Tasks

Review

In 2007, Berka et al. published their article, EEG Correlates of Task Engagement and Mental Workload in Vigilance, Learning, and Memory Tasks. With the aim of improving our ‘capability to continuously monitor an individual’s level of fatigue, attention, task engagement, and mental workload in operational environments using physiological parameters’, they present the following:

  • A new EEG metric for task engagement
  • A new EEG metric for mental workload
  • A hardware and software solution for real-time acquisition and analysis of EEG using these metrics
  • The results of a study of these systems and metrics in use

The article focuses primarily on two related concepts: task engagement and mental workload. As they put it:

Both measures increase as a function of increasing task demands but the engagement measure tracks demands for sensory processing and attention resources while the mental workload index was developed as a measure of the level of cognitive processes generally considered more the domain of executive function.

Using features derived from signals acquired with a wireless, twelve-channel EEG headset, Berka et al. trained a model using linear and quadratic discriminant function analysis to identify and quantify cognitive state changes. For engagement, the model gives probabilities for each of high engagement, low engagement, relaxed wakefulness, and sleep onset. For workload, the model gives probabilities for both low and high mental workload. (They appear to consider cognitive states as unlabeled combinations of the probabilities of each of these classes.) The aim of their simplified model was generalizability across subjects and scenarios, as well as the ability to implement the model in wireless, real-time systems.
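To illustrate the general shape of this kind of classifier (this is not the authors’ actual pipeline, feature set, or training data), here is a toy sketch using scikit-learn’s discriminant analysis on synthetic per-epoch features:

```python
# Rough illustration of the general technique (not Berka et al.'s actual
# model): discriminant function analysis over per-epoch EEG features,
# emitting class probabilities for each epoch. Features and labels here
# are synthetic stand-ins.
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

rng = np.random.default_rng(0)

# Pretend each epoch is summarized by 20 features (e.g., band powers
# across channels); labels are the four engagement classes.
X_train = rng.normal(size=(400, 20))
y_train = rng.integers(0, 4, size=400)  # 0: sleep onset, 1: relaxed, 2: low eng., 3: high eng.

engagement_model = QuadraticDiscriminantAnalysis()
engagement_model.fit(X_train, y_train)

# For a new epoch, the model yields a probability for each class rather
# than a single hard label.
new_epoch = rng.normal(size=(1, 20))
print(engagement_model.predict_proba(new_epoch))

# Workload is framed as a separate two-class problem (low vs. high).
workload_labels = rng.integers(0, 2, size=400)
workload_model = LinearDiscriminantAnalysis()
workload_model.fit(X_train, workload_labels)
print(workload_model.predict_proba(new_epoch))
```

The point is simply that every epoch of features yields a probability for each class, which is what allows the engagement and workload indexes to be tracked continuously over time.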

They trained the model using 13 subjects performing a battery of tasks, and cross-validated it with 67 additional subjects performing a similar battery. Task order was not randomized in either training or cross-validation. The batteries encompass a range of task types and difficulties. Unfortunately, the authors struggle to present these batteries as a cohesive whole and to argue for a relationship between the tasks.

In general, Berka et al. found that, for the indexes they developed:

[T]he EEG engagement index is related to processes involving information-gathering, visual scanning, and sustained attention. The EEG-workload index increases with working memory load and with increasing difficulty level [of] mental arithmetic and other problem-solving tasks.

My primary issue with this article revolves around the authors’ statement:

During [some] multi-level tasks, EEG-engagement showed a pattern of change that was variable across tasks, levels, and participants.

Indeed, these tasks represented a large portion of the task battery. The authors argue for the effectiveness of their engagement index, but never thoroughly address why this index is inconsistent across tasks, levels, and participants. At the very least, this might have been included in the authors’ suggestions for future work.

Open Questions

  • The authors gave very few details on the specifics of their wireless EEG system. Many recent products in this area have been of questionable usefulness, at best…
  • Why did the authors not control for ordering effects?
  • Why the different protocols for training and cross-validation? More than this, why modify tasks that were common across both protocols? Finally, if the authors were going to modify common tasks, why not modify those that seemed particularly problematic–at least as they presented them in the paper (e.g., “Trails”)?

I thought we were over ‘synergy’…

Hey Matthew Bietz, Toni Ferro, and Charlotte Lee, 2004 called–it wants its terrible buzzwords back. No really, people have been vocal about their hate for ‘synergy’ for over a decade now–find a less grating way to describe cooperative interaction. Here’s a brilliant suggestion: ‘cooperative interaction’.

Now that that’s out of the way, I’ve just finished reading Sustaining the Development of Cyberinfrastructure: An Organization Adapting to Change by Bietz et al. (yes, at least they left it out of the title). This was a 2012 study of how to create cyberinfrastructure sustainability through ‘synergizing’ (an unholy, Frankensteinian abomination of a made-up word).

Paper Mindmap

Cyberinfrastructures

According to the authors, a ‘cyberinfrastructure’ (CI) is a virtual organization composed of people working with large-scale scientific computational and networking infrastructures. This seems an overly limiting definition of a CI, but a suitable one for the purposes of the paper. Within this definition, the authors consider how the people who work on and within CIs grapple with growing amounts of data and with the increasing size and complexity of computational problems. In particular, the authors are interested in exploring the sustainability of CIs. They do so through a large case study of one particular CI: the Community Cyberinfrastructure for Advanced Microbial Ecology Research and Analysis (CAMERA) out of UCSD. Across two observation periods separated by two years, the authors spent extended time interviewing project participants, working amongst them, and observing general trends in the microbial research community.

Relationships

Overall, the sustainability of a CI boils down to how well relationships are managed and how open the developers of CIs are to change. The authors present several observations from their work with CAMERA that demonstrate how innate constant change is to the environments in which CIs are situated, and how CIs are, fundamentally, an intricate set of relationships between people, organizations, and technologies. Over the course of the study’s observation of the CAMERA project, the authors observed a number of changes in the structure of the project. These changes, owing to the multi-layered relationships comprising the CI, had far-reaching effects across many different pieces of the CI. The only successful way to navigate such changes is to understand their potential impact throughout the CI.

Reactions

At the risk of sounding overly reductionistic, it seems to me that the overwhelming majority of what the authors present in this paper is basic common sense. Take any business, stir the pot, and watch how the business responds. I assume most intelligent people would surmise that any significant change would have far-reaching effects within the business, and that sensitivity to such changes and their effects on relationships would be important in determining how well the organization copes. Certainly, the situation becomes more complex given a more complex relationship structure, but the principle remains the same. Furthermore, the paper does nothing to ease my general cynicism toward practice-based research. While it is well structured and written, I find it hard to identify any genuine contribution beyond a decent articulation of what most people should already know.