How CS6724 has changed my life…

A hyperbolic statement? Possibly. Really though, I have learned a great deal from my time in this course (Applied Theories in Human-Computer Interaction). When I first chose to register for the course, I wasn’t entirely sure how to situate my own research under the umbrella of human-computer interaction. My research spans a number of disciplinary boundaries–primarily computer science, music, and psychology. As I’ve said elsewhere, I currently deal with the interactions of music and human emotion, using affective computing as a tool for exploring those interactions. Certainly, then, I believe that this research has a place in human-computer interaction, but I was at a loss when it came to describing the theories, models, or frameworks developed by other HCI researchers that might bear on my own work. In all truthfulness, I can’t say I’m in a very different position today than I was at the beginning of this course. What I can say, however, is that I’m in a much better position to begin looking for answers to these questions, both because of my own time presenting for this class and because of my experiences participating in the presentations that others have given. From each side of the lectern, I’ve read and discussed scholarship that has all but slapped me across the face and screamed, “Don’t let your own work end up looking like this!”, as well as scholarship that has very clearly demonstrated the right way to go about my work.

Bad research. I came into the course incredibly biased against what I thought (and often still think) is the status quo for rigor in HCI research. I’ll admit that there are plenty of exceptions to the rule, but it seemed that after reading any arbitrary piece of HCI literature I was more often than not left wondering what reviewer in their right mind would recommend the paper or article for acceptance. Part of my reason for taking this course–one that I articulated in one of the first course meetings–was that I hoped it would provide me with the opportunity to have this assumption proven wrong. To be honest, a number of things we’ve read over the course of the semester aren’t worth the paper on which they’re printed. Reading these, combined with discussions with others in the course and the gripes of other well-respected scholars, has convinced me that our field does have a problem with ‘research’ that plays fast and loose with academic rigor.

Good research. On the other hand, I have been heartened both by calls for more disciplined approaches to research and by exemplars of the same. For example, Ackerman’s argument for creating a new science of the artificial out of CSCW (and HCI as a whole), coupled with a call for carefully planned, fundamentally sound inquiry, gives me hope that there are researchers in our field who do give a damn about the respectability of their work. At the same time, several pieces of literature that we’ve read serve as wonderful examples of clearly thought-through and well-executed research. What does this mean for me, as a young(ish) scholar, in the end? It means that I take issue with the level at which we’ve set the bar for acceptable research in our field. It means that I see the fingerprints of the allure of quick and easy work in my own research. And finally, it means that in recognizing my own shortcomings and these larger problems in our field, I am responsible for bringing what I can to the table in my own work to be part of a change for the better in our collective work as a group of academics. More to the point (and at the risk of sounding grandiose), I see the problem both in our field as a whole and in my own work, and it’s up to me to make a difference where I can by letting my own work serve as an example of what quality scholarship should look like. This is how CS6724 has changed my life.

Computer Science(?)

The more literature I read from the HCI corpus of scholarship (if it can be called that), the more convinced I become that HCI–as an academic field–is plagued by pseudoscholars. Whether I am reading papers for my own research or reviewing conference submissions, I find myself repeatedly shocked at the lack of academic rigor, the poor writing, the misappropriation or misunderstanding of concepts (especially those borrowed from other fields), and most shockingly, the general blasé attitude with which we as an academic community sweep these issues under the rug.

My initial training in academia was in music. I first studied music theory and composition and later, musicology. When I began a Ph.D. program in music, it was perplexing to me how work in music composition would fit the mold of a research degree. Plenty of people were doing it though, and as I continued to steep in the scholarship, I came to understand that scholars in the arts granted each other a certain degree of latitude (for instance, when expressing opinions with regard to a composer’s motivations in structuring a work in a certain fashion). In other respects, though, if a scholar’s work didn’t hold water with regard to researchable, irrefutable facts, it would be torn to shreds by other scholars like a pack of starved dogs laying into their first meal in days. I have witnessed Master’s students and well-respected scholars alike brutalized in shouting matches after paper presentations at conferences, and read ruthless, vicious responses to articles. While I might have chosen a different approach or tone in correcting the work of another, I now recognize that their intentions were–in many cases, though certainly not all–rooted in a general endeavor for an academic rigor able to weather the attacks of skeptics.

When we started to dig into the critical theory literature around music theory and musicology, things got a little strange for me. If I’m being completely honest, there was a great deal of work that I dismissed outright, thinking that these writers just had an axe to grind and found musicology a venue just hippy enough to do so. It wasn’t until I took a course, Homosexuality in Music, offered by a now good friend of mine, Byron Adams, that I began to see things differently. Over the course of that class, I came to realize that, for the most part, the work of these scholars was grounded in rigorous research. I began to develop an appreciation for two different strains of scholarship, the border between which fell roughly along the line dividing the arts from the sciences.

Even so, while my opinion of scholarship in the arts had begun to change, my understanding of research in the sciences was the same as it always had been: the incremental accumulation of knowledge, drawing from and building on previous advances and discoveries through methodical, measured scientific inquiry. This was, after all, what we’d always learned in school–the scientific method. We come to an understanding of the world based on the discoveries of those who have gone before us, form new ideas about how things may work based on our own observations, and build environments in which to test those ideas. So, when I moved into computer science–specifically, human-computer interaction (or whatever you’d like to call it these days)–I came with the battery of expectations I had held all along about just what scientific inquiry was, and how I should expect to see it exercised in a field that called itself a science.

In my reading of Yvonne Rogers’s HCI Theory this week, I came across this quote: “HCI has emerged as an eclectic interdiscipline rather than a well-defined science.” That’s all well and good, but in further exploring the explosion of theories that Rogers describes coming to bear on scholars’ work in HCI, coupled with all of my own reading of the literature, I’ve grown very skeptical of statements such as these, and very cynical about the quality of a great deal of work produced by fellow academics in computer science–specifically HCI. Time after time, I read articles or papers that fill space with a great new idea, throw in a dash of cognitive theory, fold in a p-value here and there, frost with a delicious acceptance to CHI, and voilà–scholarship is born.

I don’t buy it.

The problem as I see it is that, more and more often, academics take license in interdisciplinarity to skimp on rigor. Don’t really know what’s behind the statistics you’re throwing around? Only read a summary or two of a theory that seems to fit the bill for your work and would give it a little extra spice? No problem, because hey, creativity and innovation in the name of interdisciplinarity and interdepartmental collaboration trump all. Even better, find a post in an interdisciplinary research center, and you can get by without having to worry yourself over rigor ever again.

Let me be clear: I’m not saying this to wag a finger at others. The reason I find this terrifyingly uncomfortable is that lately I find myself falling into the same trap more and more often. My own research crosses the boundaries of computer science/HCI, electrical engineering, psychology, and (still) music. When push comes to shove, there aren’t enough hours in the day to absorb and synthesize all of the literature with which I’m working. I often find myself writing an article or paper and skimping on the details because I just don’t know enough, or I’m simply not confident enough in a particular area to open myself up to the embarrassment of being shown that I’ve made a mistake. Nonetheless, at the end of the day I’m expected to produce, and in order to do so, I sweep it all under the rug, too.

Are we really okay with this? Or am I simply mistaken? There’s a big part of me that wants to be convinced that I am mistaken. I want to see the abundance of recent scholarship that proves me wrong. More than that, I want to know about and read the work of creative, innovative scholars bearing the banner of interdisciplinarity while nevertheless producing unquestionably rigorous scholarship. Until then, I’m tired of being surrounded by people who unashamedly label themselves scholars in spite of the schlock they serve up regularly, and more importantly, I’m tired of being one of those people.

Building and using the MongoDB C++ driver/library

I’ve blown some serious time over the last couple of days getting the MongoDB C++ driver/library built and linking against it. First, I had problems compiling the MongoDB driver when trying to link against the Boost library. Then there were problems compiling a simple program (the tutorial program from the driver documentation) that leverages the MongoDB library. The exact problems really were too numerous to document here, and to be honest, I was pretty tunnel-visioned on getting everything to work correctly. So, I’m afraid I don’t have all the errors I fought with here for you to search against.

With that in mind, I hope this might help some people with general issues getting these libraries off the ground and working together. In particular, I found the steps listed in one StackOverflow answer very helpful. Nothing I tried with gcc/g++ was of any use; once I started throwing clang at the problem, I started to see results. Here’s a rundown of my process:

  1. Download Boost (the latest version at the time of writing is 1.55.0).
  2. Decompress your Boost download and drop into a terminal in the decompressed folder. I used the instructions from the Boost documentation with a few modifications; in short, it boils down to a bootstrap and a b2 build with clang (first block in the sketch after this list).
  3. Download and decompress the MongoDB C++ driver.
  4. Back in the terminal, in the MongoDB C++ driver decompressed folder, build the driver with scons (second block below).
  5. Save tutorial.cpp from the MongoDB C++ driver documentation and compile it in the same directory (third block below).
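
Since I didn’t keep my exact terminal history, the commands below are a sketch of those three steps rather than a verbatim transcript. The scons flags in particular are my best recollection and vary between driver versions (check scons --help against your checkout), and the Boost libraries listed in the tutorial compile are an assumption based on what the driver typically pulls in.

    # Step 2: build and install Boost with clang and libc++
    ./bootstrap.sh --with-toolset=clang
    sudo ./b2 toolset=clang cxxflags="-stdlib=libc++" linkflags="-stdlib=libc++" install

    # Step 4: build and install the driver, forcing clang as the compiler
    # (flag names may differ by driver version; see scons --help)
    sudo scons --cc=clang --cxx=clang++ --prefix=/usr/local install

    # Step 5: compile the tutorial against the freshly built libraries
    # (the exact list of -lboost_* libraries depends on your driver version)
    clang++ tutorial.cpp -o tutorial \
        -stdlib=libc++ \
        -I/usr/local/include -L/usr/local/lib \
        -lmongoclient \
        -lboost_thread -lboost_system -lboost_filesystem -lboost_program_options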

In general, I had to pay particularly close attention to these requirements (thanks again to that SO answer!):

  • Make sure the MongoDB driver is built against exactly the same version of the Boost libraries and headers that you are using.
  • Make sure both builds are looking at exactly the same headers.
  • Make sure libmongoclient and the Boost libraries are both compiled with the clang compiler.
  • Make sure you link against the correct versions of the Boost libraries.
  • Make sure you use the same C++ flags for all compilations (i.e., stuff like C++0x).
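
A sanity check that would have saved me some time (assuming you’re on OS X, as I was; ldd does the same job on Linux): inspect which Boost libraries your binary actually resolved, and confirm that the driver build and your program agree.

    # Every libboost_* entry should point at the Boost you just built,
    # not at some other copy lying around on the system
    otool -L ./tutorial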

Trouble installing the JSON gem

I just got a replacement machine from Apple (literally the only time I’ve ever been pleased with Apple’s customer service). The recent iOS upgrade for my iPhone and iPad required me to upgrade to Xcode 4.3. Needless to say, all of this gave me quite a headache when it came to reinstalling Ruby, RVM, and various gems. In any case, I was able to get Ruby 1.9.3-p125 installed using RVM, but the issues with gem installation have persisted (with a handful of gems, at least).
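
For reference, the interpreter install itself was the painless part; a minimal sketch of what that looked like with RVM:

    # Install the specific patch level and make it the default
    rvm install 1.9.3-p125
    rvm use 1.9.3-p125 --default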

One of those gem problems came yesterday when trying to install the JSON gem: gem install json fell over while compiling the gem’s native extension. (I’ve since lost the full error output, but the line that mattered comes up again below.)

Several StackOverflow questions (like this and this) that I’d found said this was an issue with my Xcode installation. One guy said this “just works” with the OS X 10.7.3 update. Not true at all, guy.

Lots of discussions pointed me to this GitHub repo. I’m not interested in installing just the command-line tools with which Xcode ships–I already use Xcode. Other places said to install the command-line tools from the Downloads tab in Xcode preferences. I’d already done that. This post seemed to have the answer. It gave me two options:

  • Use the separate standalone gcc installer
  • Uninstall Xcode 4.2 and install Xcode 4.1

Well, I have to use Xcode 4.3, so rolling back wasn’t an option. The discussion of a separate standalone gcc compiler didn’t apply either–gcc --version told me I had GCC 4.2.1 installed. Ah-ha! Back in my error output, I saw that the build wasn’t invoking gcc at all: it was looking for a compiler literally named gcc-4.2, a binary that Xcode 4.3 no longer installs (even though plain gcc reports version 4.2.1).

That’s really all it was: the build wanted a compiler named gcc-4.2, and that name no longer existed on my machine. So, the fix was simply to give it one.
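
I’ve since lost the exact command I ran, but it amounted to one of the two standard remedies sketched below (the paths are the usual OS X locations, not a transcript of my session):

    # Option 1: give the build the gcc-4.2 name it's looking for
    sudo ln -s /usr/bin/gcc /usr/bin/gcc-4.2

    # Option 2: point the gem build at the gcc that does exist
    CC=/usr/bin/gcc gem install json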

Hope that helps someone…