Here are a few thoughts on a couple of database-related interfaces (both of them also touch-based) that are similar to the query-building NUI that I’m currently building. I’ll look specifically at areas where my tool can improve, and at problems these other projects have dealt with that I should keep in mind as I continue to work on this project.
First, Stefan Flöring and Tobias Hesselmann created TaP,
a visual analytics system for visualization and gesture based exploration of multi-dimensional data on an interactive tabletop…[U]sers are able to control the entire analysis process by means of hand gestures on the tabletop’s surface.
This doesn’t seem to be entirely true, as many tasks are accomplished through half-circle radial menus called stacked half-pie menus. I’ll also note here that while the authors claim this to be a collaborative interface, all collaborators must be grouped around a single edge of the tabletop. Nor were Flöring and Hesselmann able to solve the problem of coherently presenting orientation-sensitive entities to multiple users at different positions around the table. They do acknowledge this, but give no advice on how to address the problem.
While TaP isn’t a query construction tool, there are several issues that Flöring and Hesselmann have addressed from which it may be useful for me to learn. While their half-pie menus may not make sense as direct replacements for the menus I have currently designed, it is possible that their layered approach to the radial menu may be useful. I also like the ability to call the menu forth from any location on the screen with the heel of the palm. TaP’s dropzones are also in line with my thinking, and seem to be intuitive from watching the video.
The only real gestures beyond the obvious ones for moving and scaling objects are the tracing of rectangles to create new charts, and the tracing of circles to open the help menu. These seem contrived to me; they seem like gestures created just for the hell of it. For better or worse, this reinforces my aversion to designing gestural interactions in my tool unless they seem specifically useful or called for.
GestureDB is a tool very similar to the tool I am currently designing. The designers of GestureDB describe it as
a novel gesture recognition system that uses both the interaction and the state of the database to classify gestural input into relational database queries.
The primary difference between GestureDB and the tool I am developing is that GestureDB addresses problems in designing queries against relational databases. My tool, on the other hand, targets NoSQL (non-relational) databases. While there are similar problems in designing queries for both relational and NoSQL databases, building queries for NoSQL databases does present its own unique set of challenges. Nevertheless, there are a number of things to learn from the experiences of the designers of GestureDB.
First, simple gesture recognition may not satisfactorily capture the range of a user’s intent when designing a query. To address this issue, the designers of GestureDB use an entropy-based classifier that draws on two sources of features. The classifier first narrows the set of potential gestures using the spatial information contained in the gesture itself. Second, it prunes the space of possible user intents by examining which actions are more likely than others given the constraints imposed by the underlying database structure. From these, the classifier automatically selects the most likely intent among all possible intents. Building such a classifier may not be within the scope of this project, but the approach is worth keeping in mind as I continue development.
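To make the two-source idea concrete, here is a minimal sketch, not GestureDB’s actual classifier, of combining a per-intent spatial score with a schema-derived prior to rank candidate intents. All names, scores, and the naive product combination are illustrative assumptions.

```python
def rank_intents(spatial_scores, schema_priors):
    """Combine per-intent spatial likelihoods with schema-based priors.

    spatial_scores: dict mapping intent name -> likelihood from the
        gesture's geometry (e.g. from a shape recognizer).
    schema_priors: dict mapping intent name -> prior probability given
        the database structure (e.g. a join is more plausible between
        tables that share a key).
    """
    combined = {}
    for intent, score in spatial_scores.items():
        prior = schema_priors.get(intent, 0.0)
        combined[intent] = score * prior  # naive product of the two sources
    total = sum(combined.values()) or 1.0  # normalize into probabilities
    return sorted(
        ((intent, score / total) for intent, score in combined.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Example: a drag whose geometry is ambiguous between two operations,
# disambiguated by the schema, which makes one far more plausible.
ranking = rank_intents(
    {"join": 0.6, "union": 0.4},
    {"join": 0.9, "union": 0.1},
)
```

The payoff is in the example: geometry alone leaves the gesture ambiguous, but the schema prior resolves it, which is exactly the kind of pruning described above.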
Second, GestureDB provides means for just-in-time access to the underlying data in order to more efficiently design queries. For instance, simple preview gestures allow the user to see the data they are querying against, so they can adjust their gestures before completing them.
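The just-in-time idea could be as small as the sketch below, which is my own illustration rather than anything from the GestureDB paper: before a gesture is committed, fetch a handful of rows from the table it would touch so the UI can render a peek. The table and schema are made up for the example.

```python
import sqlite3

def preview(conn, table, limit=5):
    """Return up to `limit` rows so the UI can render a data peek.

    Note: interpolating `table` into SQL is acceptable only because this
    is an illustration; real code must validate identifiers.
    """
    cur = conn.execute(f"SELECT * FROM {table} LIMIT ?", (limit,))
    return cur.fetchall()

# Hypothetical in-memory table standing in for the data being queried.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [(i, f"user{i}") for i in range(20)],
)
rows = preview(conn, "users")
```

A capped `LIMIT` query like this is cheap enough to run on every hover or half-completed gesture, which is what makes the preview feel instantaneous.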
Finally, the ability to undo an operation adds to GestureDB’s flexibility. While I had considered this a nice-to-have feature, I now see it as more important than that. While some aspects of the interactions I am designing allow for implicit undo, at some point it will be necessary to explicitly undo any operation, as well as to undo many successive operations.
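Explicit, multi-step undo of the kind described above is commonly built as a command stack; here is a minimal sketch under the assumption that each query-building operation can describe how to reverse itself. The `QueryBuilder` name and string clauses are placeholders of my own.

```python
class QueryBuilder:
    def __init__(self):
        self.clauses = []      # the query under construction
        self._undo_stack = []  # one reversal closure per applied operation

    def apply(self, clause):
        self.clauses.append(clause)
        # Record how to reverse this exact operation.
        self._undo_stack.append(lambda: self.clauses.remove(clause))

    def undo(self, steps=1):
        """Undo the last `steps` operations, most recent first."""
        for _ in range(min(steps, len(self._undo_stack))):
            self._undo_stack.pop()()

qb = QueryBuilder()
qb.apply("match: status = 'active'")
qb.apply("sort: created_at desc")
qb.undo()   # removes the sort clause
qb.undo(5)  # undoing past the beginning is a safe no-op
```

Because every operation carries its own reversal, undoing many successive operations is just popping the stack repeatedly, and an empty stack makes over-undoing harmless.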
There are also numerous ways in which GestureDB seems to be successful that reinforce the design I am considering. Representation of ‘tables’ as real objects that the user can manipulate seems effective. In addition, separating the interaction space into a ‘well’ where tables are selected and a ‘sandbox’ where tables are dropped in order to be shaped into a portion of the query also seems to be effective.
As Nandi and Mandel state, precious few tools for graphical construction of database queries exist for touch interfaces. This leaves me in the exciting position of working in an area where little progress has yet been made, but at the same time having little in the way of the experiences of other researchers from which to draw. These examples of similar work that I have been able to find do, fortunately, provide helpful advice on common pitfalls that I might avoid, as well as reinforcement of not only the utility of such a tool, but the appropriateness of a number of design decisions that I have already made.