First Crit

A few days ago, we had our first crits of the initial design ideas for our course project NUIs. As I’ve previously described, I’m designing a NUI for the Perceptive Pixel for exploring a MongoDB dataset.

The biggest question around my current design is how exactly collaboration is accomplished with the NUI. Really, this question is about more than the NUI itself: how do multiple parties collaborate in data exploration at all? To begin to consider this question, it’s important to understand what the data are in my first target use case.

This use case considers data exploration with the Emotion in Motion dataset. Emotion in Motion is a large-scale experiment that measures subjects’ physiology while they listen to different selections of music. ‘Documents’ in the dataset come primarily in three flavors: trials, signals, and media. An abbreviated trial document looks something like the following:
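(The field names and structure in this sketch are illustrative rather than exact.)

```js
{
    "_id" : ObjectId("..."),
    "answers" : {
        // demographic answers (fields illustrative)
        "age" : "...",
        "sex" : "...",
        // post-listening answers for each song, keyed here by the media ObjectId
        "537e601bdf872bb71e4df26d" : {
            "liking" : 4
            // ... other ratings
        }
        // ... answers for the other two songs
    },
    "media" : [
        ObjectId("537e601bdf872bb71e4df26d"),
        ObjectId("..."),
        ObjectId("...")
    ]
}
```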

Those entries in the media property correspond to the three different songs to which this subject listened. The answers property contains both demographic information and answers to questions that this subject was asked after listening to each song. For instance, after listening to the song with label ObjectId("537e601bdf872bb71e4df26d") (from the media property), the subject rated their ‘liking’ of the song as 4 on a scale of 1-5. The media ObjectIds point to media documents that look something like this (also abbreviated):
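(Again, the fields shown here are purely illustrative.)

```js
{
    "_id" : ObjectId("537e601bdf872bb71e4df26d"),
    "title" : "...",
    "artist" : "...",
    "file" : "..." // path to the audio file, for instance
    // ...
}
```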

And finally, each media in a trial is associated with a signal document. Here’s an abbreviated example:
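(As above, the names other than eda_status and hr_status are illustrative.)

```js
{
    "_id" : ObjectId("..."),
    "trial" : ObjectId("..."),
    "media" : ObjectId("537e601bdf872bb71e4df26d"),
    "signals" : {
        // each of these is a very long array of samples
        "eda_raw" : [ ... ],
        "eda_filtered" : [ ... ],
        "hr" : [ ... ],
        // binary, per-sample indicators of signal quality
        "eda_status" : [ ... ],
        "hr_status" : [ ... ]
    }
}
```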

Each of the properties under the signals property is a very long array, with each entry representing the instantaneous value of a given signal measured at a specific point in time while the subject listened to the associated media file. The entries in eda_status and hr_status are binary indicators of the acceptability of the other EDA and HR signals at that moment in time. In addition, we work with a far greater number of features that are derived from the raw and filtered physiological signals.

Looking at one of these combined media/signal/trial records in any detail takes a considerable amount of screen space. The problem is, we are approaching 40,000 ‘song listens’, and this number continues to grow daily. Within the next two years, we expect to be well beyond 100,000 listens. So, for a given song, an interface to explore signals from the, say, 2,000 subjects who have listened to that song needs to be carefully considered. And how do we go about creating an interface with which multiple people can work together to explore such a dataset?
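Finding the relevant documents is the easy part. Assuming a collection named something like trials (the collection name is a guess), pulling the trials for a single song, or counting listens per song, looks roughly like this in the mongo shell:

```js
// Collection and field names here are assumptions based on the abbreviated documents above.

// All trials in which one particular song was heard:
db.trials.find({ media: ObjectId("537e601bdf872bb71e4df26d") }).count()

// Listen counts per song, across the whole collection:
db.trials.aggregate([
    { $unwind: "$media" },
    { $group: { _id: "$media", listens: { $sum: 1 } } },
    { $sort: { listens: -1 } }
])
```

The hard part is everything after that: presenting the corresponding signal documents so that several people can actually explore them together.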

The most obvious way to visualize data like this is to create individual plots for each type of signal/feature (tonic EDA and heart rate variability, for instance). These plots are naturally aligned vertically, as they all correspond to a common timebase. How, though, do multiple people easily manipulate and view this visualization? I’ve imagined the scenario for this project to be one in which the Perceptive Pixel is used as a tabletop interface. Thus, the most obvious arrangement of users is on either side of the table. Is each user shown their own separate visualization/interface in the orientation that is correct for them? Is the separation of displays used only during the exploration process and later combined for a larger visualization? If the exploration is to be tightly linked (each party works closely together during the exploration), how is the interface oriented? Or, does a less tightly linked interaction better suit this scenario?

These are the kinds of questions that came up during my first crit. Many of them would be easily addressed by mounting the Perceptive Pixel vertically, and in the end, this may be the best solution. I’m still enjoying the challenge of exploring ways to create a collaborative NUI using a tabletop interface that deals with content that is highly sensitive to orientation, though.
