Perceptive Pixel

I’ve had the opportunity to think a bit more about this NUI-based tool for MongoDB data exploration and visualization. In addition, I’ve been able to discuss the project with Doug Bowman. I now have a bit more clarity about what I’d like to see from this interface, and what first steps I should take.

First, Chris North introduced Virginia Tech’s new Microsoft Perceptive Pixel at the ICAT Community Playdate last Friday.

From Microsoft:

The Perceptive Pixel (PPI) by Microsoft 55″ Touch Device is a touch-sensitive computer monitor capable of detecting and processing a virtually unlimited number of simultaneous on-screen touches. It has 1920 x 1080 resolution, adjustable brightness of up to 400 nits, a contrast ratio of up to 1000:1, and a display area of 47.6 x 26.8 inches. An advanced sensor distinguishes true touch from proximal motions of palms and arms, eliminating mistriggering and false starts. With optical bonding, the PPI by Microsoft 55” Touch Device virtually eliminates parallax issues and exhibits superior brightness and contrast. And it has built-in color temperature settings to accommodate various environments and user preference.


While the unit is quite impressive, I’m most interested in how this interface might enable something truly unique for this project. Aside from physical space around the unit, there’s no limiting factor on the number of users who might view and interact with on-screen content. There is plenty of room for multiple users to carve out their own visualizations, as well. So, I’ll be working with the Perceptive Pixel instead of the iPad. The learning curve will be steeper for me, since I’m already a competent iOS developer, but I think it will be worth the additional effort.
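As a way of thinking about multiple users carving out their own visualizations, here is a minimal sketch of routing simultaneous touch points to per-user screen regions. Everything here is an assumption for illustration: the `Touch` type, the region names, and the coordinates are hypothetical, and none of this reflects the actual Perceptive Pixel SDK.

```python
# Hypothetical sketch: partition simultaneous touches among users who
# have "carved out" rectangular regions of a large shared display.
# The Touch type, region names, and coordinates are all illustrative;
# this is not the Perceptive Pixel API.

from dataclasses import dataclass

@dataclass(frozen=True)
class Touch:
    x: float  # pixels from the left edge
    y: float  # pixels from the top edge

# Each user's region: (left, top, right, bottom) in display pixels,
# here splitting a 1920x1080 display in half.
REGIONS = {
    "alice": (0, 0, 960, 1080),
    "bob":   (960, 0, 1920, 1080),
}

def route_touches(touches):
    """Group touches by the user region that contains them."""
    routed = {name: [] for name in REGIONS}
    for t in touches:
        for name, (left, top, right, bottom) in REGIONS.items():
            if left <= t.x < right and top <= t.y < bottom:
                routed[name].append(t)
                break
    return routed
```

The point of the sketch is just that the display itself imposes no touch limit; any per-user separation would be a software convention like the one above.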

Second, I’m concerned about biting off more than I can chew in this project. Both data exploration and visualization (in particular, of the dataset with which I’m always working) are important to me. However, given the duration of the project, trying to go deep on both might be too ambitious. Instead, I’ll focus on developing an interface for collaborative visualization of NoSQL data; data exploration can come later. This likely means that the first several iterations will use only canned data from the dataset.

So, the first step is to jump into C#. I’m not particularly excited to work on a Microsoft stack, but if that’s what working with the Perceptive Pixel requires, so be it. The next step is to begin brainstorming design ideas; more to come on that this week.
