One of our ongoing studies is Emotion in Motion, a large-scale experiment that collects physiological data from people while they listen to selections of music. Emotion in Motion began in 2010, while we were working as Ph.D. researchers at Queen’s University Belfast. It first ran for several months in the Science Gallery in Dublin, Ireland. There, we went through several iterations of the experiment: the questions we asked the participants changed, the music selections changed, and so on. Since Dublin, Emotion in Motion has been staged in New York City, Bergen (Norway), and Manila (the Philippines). We are currently preparing to deploy Emotion in Motion in Taiwan for the entirety of 2015.
The data generated by Emotion in Motion were originally written to formatted text files. We wrote parsers for these files in whatever environments we happened to be working in. As Emotion in Motion’s life has continued, however, we’ve recognized that we need a better method for storing and accessing these data. Across all of these iterations, while we’ve made a number of changes to the content of the experiment, its overall structure has remained relatively stable: participants are always watching or listening to some form of media; we are recording their physiology and asking them questions about their experiences. We decided that a NoSQL database would allow us to store huge numbers of data entities that share a common core structure but may vary wildly in their details. For instance, while we record the same physiological signals from all participants during each media session, the lengths of the media selections differ. Likewise, while we ask for the same demographic information from all participants, we may ask different questions in response to each media selection. The difficulty of mapping these varying data schemas onto an RDBMS’s tables made a NoSQL solution the obvious alternative.
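To make the shared-core/varying-details point concrete, here is a minimal sketch of what two trial documents in the same collection might look like. All field names and values here are hypothetical, invented for illustration; they are not the actual Emotion in Motion schema.

```python
# Two hypothetical trial documents. Both share a common core
# (a media selection, recorded signals, participant answers),
# but signal lengths and question sets differ per selection.
trial_dublin = {
    "location": "dublin",
    "media": {"label": "excerpt_07", "duration_s": 93.4},
    "signals": {"eda": [0.41, 0.40, 0.43], "pox": [72, 74, 71]},
    "answers": {"engagement": 4, "like_dislike": 5},
}

trial_nyc = {
    "location": "new_york",
    "media": {"label": "excerpt_12", "duration_s": 140.2},
    "signals": {"eda": [0.35, 0.37], "pox": [68, 69]},
    # A different question set was asked for this selection:
    "answers": {"engagement": 3, "chills": True, "familiarity": 2},
}

# Despite the differing "answers" schemas and signal lengths,
# both documents can live side by side in one MongoDB collection,
# which is exactly what a rigid relational table would resist.
shared_keys = set(trial_dublin) & set(trial_nyc)
```

In a relational design, the per-selection questions would force either a sparse table full of NULLs or an entity–attribute–value workaround; as documents, each trial simply carries whatever fields it needs.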
So, I now find myself doing a great deal of work in MongoDB. The learning curve has been surprisingly gentle, and I’m very comfortable querying through the scripting interface. One thing I have found myself wanting, though, is an easy means of quick-and-dirty visualization for data exploration and high-level analysis. Currently, my workflow is to refine queries using the scripting interface, pull the data I need from MongoDB, and then use an external tool (MATLAB, R, etc.) to visualize the data. It would be very useful to be able to visualize queries on the fly, instead of hopping through this piecemeal workflow. In addition, the modularity of MongoDB queries and aggregations would lend itself well to construction and refinement through a graphical interface.
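The modularity I mean is that a MongoDB aggregation pipeline is just an ordered list of independent stages, which a graphical tool could add, remove, and reorder interactively. A minimal sketch, using the hypothetical field names from above (the `$match`, `$group`, and `$avg` operators are real MongoDB aggregation operators; the collection and fields are invented):

```python
# A hypothetical aggregation pipeline: each stage is a self-contained
# dict, so refining a query is just list surgery -- the property that
# would map naturally onto a drag-and-drop graphical builder.
pipeline = [
    {"$match": {"location": "dublin"}},  # stage 1: filter trials
    {"$group": {                         # stage 2: summarize per media selection
        "_id": "$media.label",
        "mean_engagement": {"$avg": "$answers.engagement"},
    }},
]

# Refinement example: tighten the filter without touching other stages.
pipeline[0]["$match"]["answers.engagement"] = {"$gte": 3}

# With a live connection this would run as (pymongo; not executed here):
#   results = db.trials.aggregate(pipeline)
```

Because each stage is independent data rather than a monolithic query string, a visualization front end could re-run the pipeline after every edit and redraw the result immediately.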
It’s this real, personal need for such a tool that has led me to build one, with a tablet interface, as a semester-long project in Doug Bowman’s class on natural user interfaces. Some of the other ideas I was toying with were:
- Tabletop audio editing tool
- Gestural music improvisation tool
- Live music performance looping tool
- Gestural musical score following tool
The musician in me would love to build any of those tools. Certainly, it would make the project more enjoyable and motivating for me. The researcher in me (who just needs to finish this ****ing dissertation) needs what I’ve described in order to do his work. Practicality and necessity beat out fun and excitement in this case. I’ll post more as the project progresses.