Monday, June 29, 2009

More on the Schematic Multitouch Wall at Cannes

Last week I blogged about the Schematic Multitouch Wall at Cannes, which has received a lot of well-deserved attention.

This week a video appeared on WebKitchen that provides a lot more detail on the device. It uses a new multi-touch technology called Laser Light Plane (LLP) illumination, which is starting to become popular.

It's interesting that the application was developed in Flex 4, with custom skinning for a non-Flex look and feel.

Another interesting thing is that the device is described as "single touch multi-user," which is a different interaction style than we normally see with multitouch surfaces. It makes sense when building a device that might be used by so many people, and I think it also lends itself to LLP, which uses lasers and is possibly easier to implement over very large surfaces. Very cool stuff, guys!

iDisplay - Gesture and Tangible Computing on the Cheap

Found this video of a tangible computing and gesture recognition system. According to the post:

"Lahiru, a 22-year-old Software Engineering student from the Informatics Institute of Sri Lanka, brings us the iDisplay, showcasing some intriguing in-air gesturing (pinch the future between your thumb and your forefinger).

An inexpensive solution to retrofit existing flat displays into interactive surfaces. No expensive sensors, no infrared-based techniques, no soldering, no compliant surface, no closed windows, no patent infringements, no broken money box. Only a regular off-the-shelf webcam and a regular LCD monitor."

The circles are an example of tangible computing; the manipulations of the circle made by the thumb and forefinger are the gestures. This actually isn't multi-touch, because the hands do not touch the surface of the display.

Although the applications here are fairly limited, the video does illustrate how smart people can make Natural User Interfaces with very little money. In my opinion, it's an excellent illustration of how quickly NUI technologies will become inexpensive and mainstream.
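The heart of the pinch interaction becomes simple once fingertip positions have been extracted from the webcam image (the computer vision tracking is the hard part, and I'm not privy to Lahiru's implementation). A minimal sketch, with an invented threshold and function name of my own:

```python
import math

# Hypothetical fingertip positions in pixels: (x, y) for thumb and forefinger.
# A pinch "grabs" when the two tips close within a small distance of each other.
PINCH_THRESHOLD = 40  # pixels; purely illustrative

def classify_pinch(thumb, forefinger, threshold=PINCH_THRESHOLD):
    """Return 'pinch' when the thumb and forefinger tips are close, else 'open'."""
    dist = math.hypot(thumb[0] - forefinger[0], thumb[1] - forefinger[1])
    return "pinch" if dist < threshold else "open"
```

In a real system the threshold would need tuning for the camera resolution and the user's distance from the screen.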

Tuesday, June 23, 2009

Natural User Interfaces

I'm still wrestling with what to call the umbrella term for all the advanced user interface technologies now coming into vogue (as noted in my last post).

NUI is perhaps the best label, and it's the one I used in this presentation that I gave last week to the International Association of Software Architects (IASA) in Minneapolis, MN. This is an updated version of the deck I used during my presentation.

Any and all feedback is welcome.

Monday, June 22, 2009

The 'X' User Interface

I've been studying the assortment of alternative user interface technologies that go beyond Command Line Interface (CLI) and Windows, Icons, Menus and Pointers (WIMP). These include the following:

  • Touch User Interfaces
  • Voice User Interfaces
  • Gestural User Interfaces
  • Organic User Interfaces
  • Natural User Interfaces
  • Augmented Reality
  • Automatic Identification

The biggest similarity among these alternative user interface technologies seems to be direct interaction with information, but that doesn't always apply. Another possibility is "invisibility," a term that is used a lot with NUI but seems like an ill fit to me. What's so invisible about touch screens? In fact, the only similarity I can find is that these user interface technologies are more technically advanced than CLI or WIMP.

I did try to apply the Tangible User Interface (TUI), Natural User Interface (NUI), and Organic User Interface (OUI) definitions, such as they exist, but these are less than appropriate labels. For one thing, there is nothing natural or organic about these alternative user interface technologies. Second, including speech as one of the UI technologies makes the group less than tangible. Also, aren't the keyboard and mouse used in CLI and WIMP tangible?

So what label do I apply? I don't know exactly, but for now I'm calling them the "X User Interfaces," or XUI - something I blogged about a while back. I could refer to them collectively as the "Advanced User Interfaces" because they are more advanced than CLI and WIMP. I suppose I could also call them the 5th generation UIs, where mechanical, command-line, text, and WIMP make up the first four generations.

Another label I've considered seriously is Magical User Interface, in reference to the famous quote from Arthur C. Clarke:

Any sufficiently advanced technology is indistinguishable from magic.

I've posted about magic as a metaphor for user interface design before. Does that make sense, or is it too whimsical? Is Advanced User Interfaces too academic? Should I adopt Natural User Interfaces because that's what the industry trend seems to be? Should we stick with XUI? What's your opinion?

Update June 23, 2009

I've wrestled with this long enough, I think (far longer than the one day since this post was originally published), and I've decided to use Natural User Interface as the umbrella term for all these non-traditional UI technologies. I don't think there's much that is natural about them, but NUI seems to be the term gaining the most traction when it comes to describing these things, and so I see no compelling reason to advance a different term - any label you give this new type of UI would probably be just as arbitrary. At least NUI has some traction.

Friday, June 19, 2009

Resources for Programming the iPhone 3.0 OS

With the iPhone 3G S officially going on sale today and the release of the iPhone 3.0 OS for all iPhones, I thought it would be a good idea to share some resources specific to developing applications for the iPhone 3.0 OS.

The first one that comes to mind, of course, is Apple's own developer forum, which you can become a part of starting at $99.00 a year - I'm a member.

Another great resource is a new beta version of the book "iPhone SDK Development" which is available from Pragmatic Programmer as a downloadable PDF for $24.00. That book includes coverage of the new iPhone 3.0 OS.

There are, of course, plenty of resources specifically for the 2.0 versions of the iPhone operating system, and they can be an excellent start. For example, I posted before about the great Apress book, "Beginning iPhone Development," back in January. There are also a number of blogs dedicated to iPhone development. Another great resource for learning iPhone 2.x is a free MIT course on iPhone SDK development that you can get from iTunes U. It was taught by Apple engineers and is a really good - if a bit long - introduction to iPhone programming.

I ordered my iPhone from AT&T yesterday and look forward to getting it in the next couple of weeks. I also downloaded and installed the iPhone 3.0 OS on my first-generation iPhone, and it seems to be working great. Of course, it will be even better on my new iPhone 3G S!

If you are serious about multi-touch development then I recommend learning how to do iPhone development now rather than later. You can make the time. Just cut out a couple of TV shows from your daily routine and have fun programming again.

Wednesday, June 17, 2009

Schematic Multitouch Wall for Cannes

Boing Boing Gadgets has a great video and story on Schematic's large multi-touch wall in development for the 2009 Cannes Film Festival. The multi-touch wall appears, from the video, to be about 5 feet high and at least 10 feet long - the final installation may be much longer. The touch wall is probably the best implementation of this form factor yet. The visuals are well designed, aesthetically pleasing and have real utility.

The UI is really well designed. The height of the wall is perfect for people standing, and each person interacting with the device has access to all functionality which emanates from their initial contact. That's critical in a large installation so that users don't have to move around the wall to use its features.

The aesthetics are excellent, with an all-black background, blue highlights, and muted secondary colors - it's actually really beautiful.

In terms of utility you can't do much better. For self-identification, the wall uses RFID tags in the lanyards of the attendees, which must be held about 2 inches away from the screen in order to activate personal information. That's important: you don't want onlookers accidentally activating their personal schedules and networks (more on that in a bit), but you also don't want people to have to enter a name and password to use the system. The software itself includes maps of the venue and the city, a personal festival schedule application, and a social networking application that allows two attendees to exchange contacts right at the wall.
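To make the idea concrete, here is a minimal sketch of how such tag-activated personal sessions might work; the class and method names are my own illustration, not Schematic's actual design. The short read range of the RFID reader is what enforces the "hold it close" rule, so the software only needs to track reads and expire idle sessions:

```python
import time

class WallSessionManager:
    """Toy sketch: personal content appears only after a badge's RFID tag
    is read, and the session expires after a period of inactivity."""

    def __init__(self, timeout_seconds=60):
        self.timeout = timeout_seconds
        self.active = {}  # tag_id -> timestamp of the last read

    def on_tag_read(self, tag_id, now=None):
        # Called by the RFID reader driver each time a badge is scanned.
        self.active[tag_id] = now if now is not None else time.time()

    def is_active(self, tag_id, now=None):
        # The wall shows this attendee's schedule only while this is True.
        now = now if now is not None else time.time()
        seen = self.active.get(tag_id)
        return seen is not None and (now - seen) < self.timeout
```

The `now` parameter is there just to make the behavior easy to exercise without waiting on a real clock.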

While this implementation by Schematic is specifically for the Cannes Film Festival, the software and hardware could be deployed at any event and would make an excellent addition to any large modern building foyer. As one of the interviewees points out, this type of form factor will probably become a part of the architecture of large buildings in the future.

Check out the video below which can be activated using the passcode "cannes" - Boing Boing Gadget's Joel Johnson promises to remove the pass code requirement when he "wakes up". Cool video, Joel! Great work Schematic!!

Monday, June 15, 2009

Multitouch for Commercial Art and CAD

The idea of using a drafting style form factor for multitouch is not new, but when would that form factor make sense and what would you need to create a compelling single-user productivity device?

One of the best applications for a drafting-style form factor would be for commercial artists working with products like the Adobe Creative Suite and for architectural/mechanical draftsmen who use CAD systems. In fact, I believe that the use of multitouch in these professions will become commonplace within ten years.

I spent three years of high school studying drafting - this was back in the late '70s and early '80s - and I always found the drafting table to be a delightful form factor. It had been perfected over the course of at least 100 years, so the working posture associated with drafting is extremely comfortable. In my opinion, and from direct experience, the drafting table form factor is perfect for working long hours. In addition, unlike a regular desk, the drafting table is focused on one task - drawing - allowing laser-like focus and enabling flow.

Multitouch tables, tipped at the angle of a standard drafting table with leg room underneath, would be an outstanding form factor. In addition to supporting multitouch with fingers, such a table would also need to support pen-based input and potentially tangible objects like the tagged objects used on Surface. That last bit would require tagged objects with high-friction surfaces or magnets, so they don't slide off the tilted top.

So, having thought about this a lot, I was absolutely delighted when I discovered, via SolidSmack, that a team of researchers from the Media Interaction Lab of the Upper Austria University of Applied Sciences had been thinking about many of the same things and had in fact implemented the ideas in a project called Flux. Flux is a flexible multitouch drafting table that supports both multitouch- and pen-based interactions. They go a step further than my musings by integrating peripherals for setting the line weight and color of pen devices and by making the form factor adjustable so it can be placed in drafting, horizontal, and vertical positions. Check out the video below to see how it works. Great work, guys!

Sunday, June 14, 2009

Using Multitouch Tables and Tablets Together

If you've watched the Microsoft future videos that I have blogged about a few times (here, here, and here) you'll notice the use of multitouch tablets in conjunction with multitouch tables and walls.

In thinking about how to implement multi-user productivity applications in multitouch, it's becoming clear to me that you need touch tablets to augment the multi-touch table or wall experience so that every participant can work both in concert and independently with the same view. This concept is explained pretty well in the video below, which shows research on multi-display technologies used in concert with a multitouch table (fast-forward to 2 minutes and 24 seconds to see the tablet part).

This is not only true of productivity applications; multi-player games would also benefit.

I've also been thinking about how multi-touch tables might find a place in the home gaming and leisure computer market (see my post on the R.U.S.E. game). It seems to me that many games would benefit if each participant could use a peripheral touch tablet in addition to a shared multi-touch table. For example, imagine a role-playing game (e.g., Dungeons & Dragons) implemented on a multi-touch table. Each player could have a shared view of the dungeon map on the table, but they could also manage their own characters on a personal tablet. The same is true of many table games such as Scrabble, Monopoly, poker, and so on.

To make a really compelling multi-user, multi-touch table or wall for the home or office, you will need to include lightweight tablets as peripherals - one for each participant. These lightweight tablets (I'll call them "touch paddles") don't need to be that powerful; they just need to be able to sync up with the table. In fact, the table could do all the processing for the touch paddles, which would then be nothing more than touch-sensitive screens. This is probably the most likely scenario in the short term, as it's the least expensive.

As tablet computers, eBook readers, and touch-screen mobile phones become more common, it will be possible to support users' personal devices as peripherals rather than requiring everyone to use the touch paddles shipped with the table.

No matter how you slice it, collaborative multitouch table and wall applications are going to need peripherals to allow everyone to be productive. Sharing the same surface is great in some cases, but in many others it's better to have your own personal multitouch surface as well. Just look at any meeting at work where everyone has their own notepad but ideas are shared on a whiteboard.
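The "table does all the processing" arrangement is essentially a hub-and-spoke model: the table owns the shared state and pushes changes to every paddle, while each paddle keeps its own private state (a character sheet, a poker hand). A toy sketch of that split, with names of my own invention:

```python
class Table:
    """The table owns the shared state and pushes updates to attached paddles."""
    def __init__(self):
        self.shared = {}
        self.paddles = []

    def attach(self, paddle):
        # A newly attached paddle receives a copy of the current shared state.
        self.paddles.append(paddle)
        paddle.shared_view = dict(self.shared)

    def update_shared(self, key, value):
        # Every paddle sees shared changes, e.g. a new dungeon map.
        self.shared[key] = value
        for paddle in self.paddles:
            paddle.shared_view[key] = value

class Paddle:
    """A thin touch paddle: a mirror of the shared state plus private state."""
    def __init__(self, owner):
        self.owner = owner
        self.shared_view = {}
        self.private = {}  # e.g. a player's own character sheet
```

In a real product the "push" would travel over a wireless link rather than a method call, but the ownership split is the important part.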

Friday, June 12, 2009

ARCHOS 9: A Nice (Multitouch?) Tablet

Engadget has a great scoop on a new tablet computer, the ARCHOS 9. According to Engadget, fans of ARCHOS devices claim that the ARCHOS 9 has multitouch support, but that claim is not supported by the product information on the ARCHOS site or by the video demonstration below.

Whether or not the ARCHOS 9 has multi-touch is important, but so is the fact that it runs Windows 7 (a full operating system) and has a nice form factor. With a cabinet that is 9" long and (by appearances) about 6" wide, it looks like it would be perfect for videos, games, and web surfing.

I like the ARCHOS 9 because it is not a clamshell notebook with a swivel screen - a form factor that, in my opinion, is too bulky to be practical. I would much prefer to use this thinner device with a FrogPad or some other portable keyboard.

The ARCHOS 9 probably will not be available until this October.

Razorfish Da Vinci Surface Application

Thanks to DigitalDesignBlog for pointing out this video of a new demo application created by Razorfish for Microsoft Surface.

Like other physics applications covered in this blog, Razorfish's Da Vinci application demonstrates how drawn objects react to forces like gravity and collisions with each other.

It's a really cool demonstration and hints at the power of the Surface, not just for graphics but also for the use of real-world objects to affect the physics of drawn objects. Nice work, guys!

(reprinted from March 2009 to cut Razorfish some slack :)

Native Multi-Touch Support on Linux

Thanks to Phoronix for the heads-up on the Interactive Computing Laboratory's work adding multi-touch to Linux kernel 2.6.30. It's pretty cool and can be used today with N-Trig, Stantum, and DiamondTouch touch screens and with computers such as the HP TouchSmart tx2.

A while back I asked if multi-touch on Linux was dead; well, I'm happy to report that it's alive and kicking. Nice work!
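For the curious, the kernel's multi-touch extensions report each contact as a burst of ABS_MT_* events terminated by SYN_MT_REPORT, with a final SYN_REPORT closing out the whole frame. Here is a rough sketch of a user-space parser for that stream; the struct layout assumes a 64-bit machine, and on real hardware the bytes would be read from a /dev/input/event* node:

```python
import struct

# struct input_event on a 64-bit machine: time (two longs), type, code, value.
EVENT_FORMAT = "qqHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

# Constants from the kernel's input event interface.
EV_SYN, EV_ABS = 0x00, 0x03
ABS_MT_POSITION_X, ABS_MT_POSITION_Y = 0x35, 0x36
SYN_REPORT, SYN_MT_REPORT = 0, 2

def parse_contacts(data):
    """Turn a raw byte stream of input_events into a list of frames,
    where each frame is a list of (x, y) contact points."""
    frames, contacts, point = [], [], {}
    for offset in range(0, len(data), EVENT_SIZE):
        _, _, etype, code, value = struct.unpack_from(EVENT_FORMAT, data, offset)
        if etype == EV_ABS and code == ABS_MT_POSITION_X:
            point["x"] = value
        elif etype == EV_ABS and code == ABS_MT_POSITION_Y:
            point["y"] = value
        elif etype == EV_SYN and code == SYN_MT_REPORT:
            # One contact is complete; several can appear per frame.
            if point:
                contacts.append((point.get("x"), point.get("y")))
            point = {}
        elif etype == EV_SYN and code == SYN_REPORT:
            # The whole multi-touch frame is complete.
            frames.append(contacts)
            contacts = []
    return frames
```

A real reader would loop on the device file, pulling EVENT_SIZE bytes at a time, but the framing logic is the same.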

Thursday, June 11, 2009

Lonely Planet Microsoft Surface Application

A story in the Sydney Morning Herald reports that Lonely Planet, a travel guide publisher, is opening its first brick-and-mortar store at the Sydney Airport, and it will include a Microsoft Surface with a Lonely Planet application. The application was built by Razorfish. You can see a video demo of it here.

With all due respect to Razorfish, I find the application rather uninspiring and little more than a gimmick for Lonely Planet. I guess that's a good thing for the debut of a new store, but if you watch the video it quickly becomes apparent (to me anyway) that people will not be able to utilize the application in a meaningful way. I also think the design and aesthetics leave a lot to be desired. What do you think of the application?

FashionBuddha Brings Beauty to Multitouch

What first attracted me to this story produced by Digital Trends was the fact that FashionBuddha, a design boutique in Portland, Oregon, has built its own multi-touch table for about $600.00. Now that's affordable!

While FashionBuddha's DIY multitouch table looks great and seems very responsive, it's their sense of aesthetics that won me over. Most multitouch applications are aesthetically dark (if that makes sense) - something you are more likely to see in Blade Runner than anywhere else.

FashionBuddha's creations are just the opposite. Bright, with vivid colors and a real sense of beauty, they raise the bar for other multitouch designers to move out of the stiff, dull, me-too design and into new territory (I know that sounds ridiculously obscure, but I can't express it better than that).

The photos and video provide some insight into what FashionBuddha is doing with their multitouch table, but only a hint. Judging by their other works, FashionBuddha is going to make a real splash in the multitouch design industry.

There is a frustrating audio interview (frustrating because you end up really wanting to see what they are demonstrating but can't), which tells about a gallery show for artist Yellena James that they are working on. In the show, their multitouch table will be front and center, featuring Yellena James' many sea creatures. The multitouch application will allow attendees to generate sea creatures on the fly. I would love to see that.

FashionBuddha is injecting a breath of fresh air into multi-touch design that I believe is desperately needed. I'm grateful for what little I've seen of their work. In multitouch, utility, design, and aesthetics are everything. You can create a really useful application, but if it's not visually beautiful, it's not making the most of the large multitouch medium. What we want is both utility and beauty. FashionBuddha is the first design firm I've seen to really deliver on the beauty aspects of multitouch.

Update June 19, 2009

FashionBuddha has posted a couple more videos of their Aquarium and Orchestra applications, which are still in development. Nice work guys!

Wednesday, June 10, 2009

XUI: The Next Paradigm in HCI

When you look at the illustration on the left, what do you see? A young woman or an old lady?

The purpose of this blog post is to introduce you to XUI, which is so weird and so out-there that it's hard to believe or even understand. Before reading on, have some fun and take a look at these two videos prepared by Microsoft as they attempt to capture the concept of XUI, the next human-computer interface.

XUI Concept Video: Home
XUI Concept Video: Work

When I first saw these videos and blogged about the concept of XUI, I was, to be perfectly honest, skeptical. It seemed like so much wishful thinking. How on earth would the experiences shown in the videos ever be realized in the real world? How can you make real-world objects fly about and materialize out of thin air? It was, in my opinion, interesting but far-fetched. Recently, however, I've come around to a new perspective on XUI: that it is possible XUI will be the future of HCI, and that I've been limited by the paradigm I'm currently living in.

If NUI can be designed with magic as the metaphor, then XUI takes that metaphor to an extreme. We are not wizards playing with spells and wands, as is the case with NUI; we are gods creating something from nothing.

So let me try to convince you that XUI is not only possible, but that if we allow ourselves to think outside our paradigm's box, that XUI is the next step in our symbiosis with the digital. That XUI represents the convergence of what can be imagined and what is real.

The way we perceive things depends almost entirely on what rules have been established for the world. These rules, always artificial but accepted as absolute truths, are called paradigms. Paradigms represent entire frameworks of perception and guide our understanding of the world.

Examples of old paradigms include the steadfast belief that the world was flat, the belief that the Sun revolved around the Earth, and the period when Newtonian physics was the only theory of physics. Each of these paradigms was accepted as absolute truth, and as a result they guided and simultaneously retarded our perception of the world around us. It's not until an existing paradigm is shattered - shown not to be absolute - that we can move on to bigger and better things.

So what does this have to do with XUI? Everything. In an interview with Robert X. Cringely, Douglas Engelbart - the guy who invented the computer mouse and was key to the development of the graphical user interface, word processing, and hypertext - reveals how his quest to create the computer systems of today was met with scorn from leading computer scientists throughout the '50s and most of the '60s. Engelbart, the guy who is rightfully thought of as the father of personal computing, was ignored and even ostracized for the first 20 years of his career. Why? Because he was proposing a new paradigm in computing: personal computing.

In the interview with Cringely, Engelbart talks about the problem of overcoming paradigms of thinking and how personal computing really represented a completely new paradigm. It's a thrilling interview for Engelbart groupies like myself, but it may prove way too boring for anyone else. At any rate, that interview got me thinking about NUI and especially about XUI.

What is XUI? XUI is a term that was invented (I think) by Dennis Wixon of Microsoft. It's a big step beyond NUI; a big step beyond what we think of as human-computer interaction today. It is, in effect, a new paradigm.

The movements from Command Line Interface (CLI) to Graphical User Interface (GUI) to Natural User Interface (NUI) are sometimes described as paradigm shifts, but I think the real paradigm shift was Engelbart's vision of augmented intelligence as realized in the personal computer. The progression from CLI to GUI to NUI is an evolution, not a paradigm shift. XUI, on the other hand, is a paradigm shift. It's something so alien to our way of thinking that it seems absurd.

So if you are able to suspend disbelief rooted in our current paradigm, does that mean you can realize XUI today? No, probably not. The reason is that our technological capabilities have yet to catch up with the XUI paradigm.

When Engelbart imagined the possibility of personal computing, imagining was all he could do at that point. There were, by his recollection, only three computers in the entire United States at the time, all of them huge. It took time for technology to catch up with our current paradigm, but today we have computers that fit in your hand and are more powerful than all three of those huge computers of the '50s. If you had told the guys who put together ENIAC in the mid '40s that the computational power they were developing would easily be contained in a device the size of a marble 60 years later, they would have laughed you out of the room. Not only was that simply unimaginable, it was technically impossible at the time.

XUI is not technically possible today. The best we can do is NUI, but in designing NUI we should always strive for the next paradigm. We should not retard our own potential and ignore XUI researchers just because their ideas don't fit into the paradigm in which our minds are captive. Think beyond what seems possible. XUI is a bit out of reach today, but it's not unreachable.

iPhone Winners of the Apple Design Awards 2009

The 2009 winners of the Apple Design Awards have been announced. Covet the Cube!

These applications provide some excellent insight into what Apple considers good iPhone application design, including not only graphics but also ease of use.

Personally, I think it must be very hard to choose the best designs considering there are over 50,000 iPhone applications to choose from. How does one person, or even a group of people, review 50,000 applications? Having been involved in these kinds of awards before (as a judge), I can say it's mostly luck of the draw. For every recognized design there are probably 100 equally good or better designs that the judges never knew about.

Monday, June 8, 2009

Magic as a Metaphor for NUI Design: Part 3, Enchanted Objects

In Part 1 of this article I explained how magic can be used as a metaphor for designing NUI applications. In Part 2, I explained how in-the-air gesture systems and speech can be combined to create rich interactions that seem nothing short of spell casting.

In this article I'm going to discuss the parallels between NUI devices and enchanted artifacts such as wands, crystal balls, magic mirrors, amulets, and other tools as described in modern fantasy fiction.

While casting spells out of thin air is cool, no fictional account of magic is complete without some type of enchanted weapon or tool. Magic rings, wands, crystal balls, magic pools, and magic mirrors are all props of fantasy fiction, and they have their counterparts in NUI design as well.

Crystal balls, magic mirrors, and magic pools provide their users with insights into the greater world. This is very similar in purpose and presentation to what we see in large multitouch surfaces. For example, Microsoft Surface allows us to view and interact with information that can be accessed through the ether of the Internet. How different is the Microsoft Surface from the Pensieve of Harry Potter? How different is the crystal ball from the multi-touch globe, or the magic mirror of Snow White from the augmented reality mirrors in development today? These multitouch surfaces, in all of their incarnations, bear a striking resemblance to magical surfaces in fiction, and it's that parallel that can be leveraged to great effect when designing NUI.

Magic rings, amulets, and talismans will have their corresponding devices in NUI as well. And of course, the wand. What magical fantasy would be complete without a magic wand or staff of some kind? While that may seem far-fetched, the wand is actually one of the enchanted artifacts whose comparable NUI devices are most widely used.

Take mobile phones, for example. They are not just for communication; they can also double as remotes, take photos and video, run applications, tell us where we are, and provide us with access to information. They are, in fact, modern stubby versions of the magic wand. Another example is the Wii Remote, which is very much like a wand. Wave it around and, abracadabra, stuff happens.

While devices like mobile phones and Wii Remotes don't look like the spindly wands of Harry Potter or the staff wielded by Gandalf, they are nevertheless very similar to wands in that they are held in the hand and used to accomplish what seems like magic.

In case you think I've gone completely batty, here is some food for thought. On May 14th, 2009, David Wilson of Microsoft applied for a patent on a device called "MAGIC WAND" which allows the user to control devices in their surroundings, communicate, take commands through in-the-air gestures, and even provide an avatar that can talk back to the user. Does that sound like a magic wand, a NUI device, or both?

In closing, I leave you with a quote I discovered as I was wrapping up this series of articles. It's from the Background section of David Wilson's MAGIC WAND patent application.

"There has long been an imaginative current flowing in popular culture relating to magic, which has recently culminated in the Harry Potter phenomenon. Given the widespread commercial success of Harry Potter books and feature films, as well the many predecessors in the fantasy genre such as The Lord of the Rings, Dungeons and Dragons, etc., it is readily apparent that a number of communities or demographic segments are enamored with the idea of magic. Discounting the aforementioned communities, even the most pragmatic individual would have trouble arguing against the merits or utility of, say, a magic wand that actually worked to control or communicate with objects or components in an associated nearby environment."

Series Links
  1. Magic as a Metaphor for NUI Design: Part 1
  2. Magic as a Metaphor for NUI Design: Part 2, Casting Spells
  3. Magic as a Metaphor for NUI Design: Part 3, Enchanted Artifacts

Apple Keynote at WWDC is Disappointing

Just got done reading the blow-by-blow account of the Apple World Wide Developer Conference from Engadget (Great coverage guys). The big announcement is the iPhone 3GS.

The "S" stands for "speech," since the iPhone 3GS offers voice control in addition to multitouch. As I've said before, voice control and multitouch are really great when used together, and the iPhone 3GS is certainly a very cool phone. I plan to upgrade on June 19th, the day it becomes available.

While iPhone 3.0 itself will support movie downloads directly to the phone (I still predict that movies and games are the next big market for Apple), overall I'm pretty disappointed in the keynote.

A tablet computer was never announced, vindicating Gene Munster but leaving me a little sad. I was also disappointed that there was nothing mentioned about new Mac machines having touch screens with Snow Leopard. I thought that was a given.

The fact that nothing was said about multitouch screens tells me that Windows 7 is going to gain back and retain a lot of the Windows OS market share. The consumer is ready for multitouch on larger devices but Apple has dropped the ball.

Of course, rumors of an Apple 10" multitouch tablet persist even today but I wanted one now, not at the end of this year or a year from now.

UPDATE June 9th, 2009

I was pretty skimpy on details about the new iPhone 3GS, so I thought I would point you to a pretty good review of its new features from CNET.

R.U.S.E. Video Game on Intuiface Multi-Touch Table

You might remember back in March of this year a post about Ubisoft's promotional video of a Real Time Strategy Game on a Multi-touch Table, called R.U.S.E.

While that was simply a promotional video, last week Ubisoft announced and demonstrated a real implementation of their new game R.U.S.E. on Intuiface, a multitouch table developed by Intuilab (I blogged about Intuiface back in January). A lot of the mainstream press is saying that R.U.S.E. is implemented on a Microsoft Surface table, which is wrong but easy to understand: the Intuiface table demonstrated at E3 looks a lot like a Microsoft Surface table.

There is a pretty cool video (see above) showing someone playing R.U.S.E. on the Intuiface table, and there are introductory videos (Part 1 and Part 2) to the game itself. It's great to see gaming come to large multitouch tables, even if the device is too expensive for the home market.

I firmly believe that the future of multitouch tables in the home is intimately connected to gaming, not to photo viewing or countertop kitchen cookbooks. Walking through a huge home entertainment store the other day, I took note of the prices for pool tables, foosball tables, and arcade-sized video machines, which averaged around $3,000.00 to $4,000.00 and went as high as $10,000.00 for a really beautiful pool table.

I think it's likely that this luxury home game and entertainment market will be the first retail market for large multi-touch tables in the $3,000.00 range. I suspect that if Microsoft Surface had been priced around $3,000.00, then a lot more people would have one in their home and a small secondary market for Surface games would have emerged by now.

100" Multitouch Table

Ideum, which I blogged about back in February, has announced a commission to build a 100", 7:2 aspect ratio table for Sudekum Planetarium in Nashville, Tennessee.

Ideum's multitouch tables are specifically designed for museum settings and are pretty expensive (~$18,000.00 for their mt2 table), but of the highest quality.

HP is Compiling the Rosetta Stone of Multitouch Gestures

There's an interesting story about HP compiling a "Touch Dictionary" of gestures according to culture. The article quotes Phil McKinney, the vice president and chief technical officer of HP's Personal Systems Group, as saying:

"What we also uncovered is that there literally are touch dialects," McKinney said. "Just as I can be walking down the street and hear someone and say, 'oh, they're from Texas,' or 'they're from the Bronx'. It's the same in touch, really. In the U.S. we touch with a single finger. In certain parts of the world, it's impolite to point with a single finger. They won't even point at a device with a single finger. They'll point with two fingers. People will point with their thumbs."

Obviously he's right. Unfortunately, HP doesn't have any plans to release their multitouch Rosetta Stone to the community at large - I suspect they see it as a competitive advantage. I think it would probably be a good idea if the multitouch community also started to catalog locale-based gestures so that everyone can benefit. I'll add that to my long list of projects.

Thursday, June 4, 2009

Multitouch Technologies, The Book!

The NUI Group, a community of over 5,000 NUI professionals and enthusiasts, has published a free book describing, in detail, how to build multi-touch tables using the same technologies used to build Microsoft Surface or Jeff Han's multi-touch surface used by CNN.

Even if you are not interested in building your own table, the book is a wealth of information about the various kinds of optical multi-touch tables and how they work, including:

  • Frustrated Total Internal Reflection (FTIR)
  • Rear Diffused Illumination (Rear DI)
  • Laser Light Plane (LLP)
  • LED-Light Plane (LED-LP)
  • Diffused Surface Illumination

If you've always wanted to understand how large multi-touch tables work, this is the best single source of information I've seen other than the forums of the NUI Group itself.

In addition to learning how multi-touch tables work and how to build them, you'll learn how software systems are designed to translate touch input into gesture events and how to develop multitouch applications in Python, ActionScript 3 & Flash, and .NET/C# using software libraries built around the TUIO protocol.
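To give a feel for what those TUIO-based libraries do under the hood, here is a minimal Python sketch of tracker-side message handling. The message tuples mirror the /tuio/2Dcur profile from the TUIO specification ("set", "alive", and "fseq" messages), but the class and method names are my own invention, not from any particular library:

```python
# Illustrative sketch: turning TUIO /tuio/2Dcur protocol messages into
# tracked touch points. Class and method names are invented for this example.

class TuioCursorTracker:
    """Tracks 2D cursors (touch points) from /tuio/2Dcur messages."""

    def __init__(self):
        self.cursors = {}  # session_id -> (x, y)

    def handle(self, message):
        kind = message[0]
        if kind == "set":
            # "set" carries a session id and a normalized x/y position
            # (velocity and acceleration arguments omitted in this sketch)
            session_id, x, y = message[1], message[2], message[3]
            self.cursors[session_id] = (x, y)
        elif kind == "alive":
            # "alive" lists the session ids still on the surface; prune the rest
            alive = set(message[1:])
            self.cursors = {sid: pos for sid, pos in self.cursors.items()
                            if sid in alive}
        # "fseq" frame-sequence messages are ignored in this sketch

tracker = TuioCursorTracker()
tracker.handle(("set", 12, 0.25, 0.75))
tracker.handle(("set", 13, 0.50, 0.50))
tracker.handle(("alive", 13))   # cursor 12 was lifted off the surface
print(tracker.cursors)          # {13: (0.5, 0.5)}
```

A real library would receive these messages over OSC/UDP and raise gesture events on top of the tracked cursors, but the bookkeeping is essentially this.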

This is a super treasure trove of information that only promises to get better in subsequent versions. With contributions from 16 multitouch practitioners, the book is only in its first "version," and I expect it will be updated fairly frequently since it's electronic. If you are extremely picky about copy editing, then be prepared for a few typos here and there - a small price to pay for such a wonderful book. Nice work, guys!!

Wednesday, June 3, 2009

Multi-touch in WPF 4 Beta 1

Microsoft has made the Visual Studio 2010 beta available, which includes WPF 4 beta support for multi-touch development. The multi-touch support is not yet fully baked, but there seems to be more than enough there to experiment with. Check out the video interview with the WPF multi-touch product manager to learn exactly which multi-touch features the WPF 4 beta supports.

Considering that Windows 7 and WPF 4 are probably going to be one of the most common platforms for developing multi-touch applications, it makes sense to get a jump on your friends by downloading the Visual Studio 2010 beta and playing around with WPF 4 now.

3M Announces Multi-touch Developer Kit

3M Touch Systems announced yesterday the availability of their Multi-touch Development Kit, which includes a 19" (16" x 8") screen that supports 10 points of simultaneous contact. It can display 16.2 million colors at a maximum resolution of 1440 x 900, which is decent for a desktop monitor.

The display is designed to be used with Windows 7 and would make a good tablet except for the fact that it's 2.6 inches thick. What makes it interesting is that it supports 10 points of contact, something we haven't seen with projected capacitance screens this large (projected capacitance is the same technology used by the iPhone).

While this is good news, I'm not as impressed as I should be. I'm pretty disappointed that the industry has not yet made available multi-touch LCD or plasma screens or overlays that are larger than this and support more than two points of contact. The only way to get a larger display that supports more than two points of contact is rear projection, which is cool but also really bulky (just look at any multi-touch table).

The 3M developer kit costs $1,499.00, but from now until June 12th you can get yours for $999.00 - a $500.00 savings. What you might use it for I'm not sure, but it's probably the largest non-rear-projection solution available today that supports more than two points of contact.

Tuesday, June 2, 2009

Magic as a Metaphor for NUI Design: Part 2, Casting Spells

In Part 1 of this article I explained how magic can be used as a metaphor for designing NUI applications. In this article I'll hang a little meat on those bones by providing examples of how in-the-air gesture systems and speech can be combined to create even richer interactions that seem nothing short of spell casting as described in modern fantasy fiction.

Spells in popular fantasy fiction involve the use of magic words and gestures. A mage, wizard, or witch waves her hands around and mutters words in Latin, and things happen. How different is this from using speech or gesture recognition to control appliances, computer programs, or mobile phones?

Gestural interfaces are now coming into vogue, and the extension of these interfaces into our cars, work, and homes will eventually become commonplace. As an example, just look at this G-Speak video by the same person who designed the gestural interface for Minority Report. The video is not fiction; it's real. Another example is the recently famous SixthSense demonstration at TED 2009 - don't the gestures used by postdoctoral student Pranav Mistry look like incantations from some fantasy fiction story?

These types of in-the-air gesture interfaces are likely to be in millions of homes within the next 5 years as video game makers, impressed with the success of the Nintendo Wii, adopt gesture recognition technology. Just this week Microsoft announced a new research project for the Xbox 360 called Natal (see this video). And in-the-air gestures will not be limited to controlled environments such as your home or office. As a matter of fact, we've been using gestures to dispense paper towels and turn on the water in public restrooms for over a decade. Some gestures will be specific to personal space and others will be generalized and available to anyone in public settings.

Now let's take a look at speech recognition, something that's been under research for decades and is just now starting to see some practical applications. A mobile phone I had a few years back could dial any number in my address book without having to be trained to my voice. All I had to do was say "Dial, Mom" and it would call my mother. It was extremely accurate. Today voice-activated dialing is in many mobile phones. Speech recognition will also be applied to home automation. Check out this video of the One Voice home entertainment system. You can even purchase voice control software for Windows or use the built-in voice control on your Mac to launch programs and navigate file folders.

Admittedly, voice control is pretty crude, as are in-the-air gestures. With voice you usually give a verb followed by a noun (e.g. "Start, Microsoft Word"). Gestures are also crude, with people having to extend their arms out in front of them - the problem of "gorilla arms" is often mentioned as a drawback of in-the-air gesture systems. In both cases you also have to be careful of what I call "accidental activation," where the gesture or speech recognition system picks up commands when you don't want it to.

If, however, we combine in-the-air gesture systems with voice, we bring a much richer experience to the user. For example, imagine being able to point at a lamp or your TV and simply say "On". In this case the gesture replaces the noun by identifying the object to which the command (verb) applies. That's a simple example, but the possible combinations of speech with gestures are pretty impressive.
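The "gesture supplies the noun, speech supplies the verb" idea can be sketched in a few lines of Python. Everything here - the device names, the command function, the spoken vocabulary - is invented for illustration:

```python
# Toy sketch: a pointing gesture identifies the target object (the noun),
# and a spoken word supplies the action (the verb). All names are made up.

DEVICES = {"lamp": {"power": False}, "tv": {"power": False}}

def command(pointed_at, spoken_word):
    """Apply a spoken verb to whatever object the user is pointing at."""
    device = DEVICES[pointed_at]
    if spoken_word == "on":
        device["power"] = True
    elif spoken_word == "off":
        device["power"] = False
    return device

# The user points at the lamp and says "On".
command("lamp", "on")
print(DEVICES["lamp"])  # {'power': True}
```

The interesting part is what the code *doesn't* need: no device name has to be spoken, because the gesture recognizer already resolved the target.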

You can also avoid "accidental activation" by using made-up words or words in a foreign language (e.g. Russian), something that dog trainers have been doing for years. Now imagine using the Latin word Incedo ("to awake") in combination with pointing at your lamp or TV. Now it's really starting to look like magic, isn't it? Want all the lights on? Spread your arms out and say Lumen. It's like you've been educated at Hogwarts.

What's interesting is when you start to assign speech and gesture commands to small individual programs and then link those programs together in macros. Have you ever seen a really good Unix guy hook together three or four Unix utilities using a pipeline? A pipe, in Unix, is where the output of one utility becomes the input to another. It's pretty amazing what can be accomplished using the Unix pipeline. Unix shell commands and utilities are even more powerful when combined with a scripting language of some sort, which adds branching logic and looping.
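For readers who haven't seen one, here is what that kind of pipeline looks like, wired up from Python so each stage's output feeds the next stage's input (the choice of utilities - printf, sort, uniq - is arbitrary; any coreutils commands would do):

```python
import subprocess

# Equivalent of the shell pipeline:
#   printf 'lumen\nincedo\nlumen\n' | sort | uniq -c
# Each process's stdout becomes the next process's stdin.
p1 = subprocess.Popen(["printf", "lumen\\nincedo\\nlumen\\n"],
                      stdout=subprocess.PIPE)
p2 = subprocess.Popen(["sort"], stdin=p1.stdout, stdout=subprocess.PIPE)
p3 = subprocess.Popen(["uniq", "-c"], stdin=p2.stdout, stdout=subprocess.PIPE)

out = p3.communicate()[0].decode()
print(out)  # each distinct word, prefixed with its count
```

Three tiny programs, none of which knows about the others, compose into a word counter. That composability is exactly what a speech-and-gesture macro system would want to borrow.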

Now imagine having a speech and gesture scripting language and thousands of small commands to choose from which you can stitch together into macros or programs. You can think of these macros or programs as spells. For example, let's say you want to turn on the TV and watch LOST while recording Amazing Race, and you do this every Thursday night. You might point at the TV while rotating your finger in a circle (a gesture for record) and speak command words in Latin.

Incedo fenestra; utor LOST; {Rotate finger} Amazing Race.
{Turn on the TV, find LOST, record Amazing Race}

Looks like a spell to me. Even if you don't use Latin, which I use just to make it a bit more fun, it still looks like a spell, doesn't it?

To take this a step further, imagine if you could record long spells and assign them power words. For example, let's say you want to check your stove, lock your windows and doors, turn down the heat, and switch on the bedroom TV every night at bedtime. This could be accomplished with a long spell:

cuspis occludo; domus obfirmo; cubitum ire fenestra Incedo
{turn off the stove, lock up the house, and turn on the bedroom TV}

Or a single power word:

recedo
In this case recedo becomes a power word that activates a chain of commands. You could make up spells and power words on the fly or, even better, get them off the Internet or from a book. Imagine a book titled "Spells and Power Words for the Home" or something like that. Now you not only have spells, you also have books of magic: instruction manuals that contain lots of NUI spells that you can learn to recite or record with associated power words.
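The power-word idea above is really just macro expansion, and it can be sketched in Python. The Latin phrases, the primitive actions, and the function names are all invented for this example:

```python
# Sketch of "spells" as named macros over small primitive commands,
# in the spirit of Unix pipelines. All command names are invented.

PRIMITIVES = {
    "cuspis occludo": "turn off the stove",
    "domus obfirmo": "lock up the house",
    "incedo fenestra": "turn on the bedroom TV",
}

SPELLS = {}  # power word -> list of primitive command phrases

def record_spell(power_word, phrases):
    """Bind a power word to a sequence of primitive commands."""
    SPELLS[power_word] = list(phrases)

def cast(power_word):
    """Expand a power word into the actions it was recorded to perform."""
    return [PRIMITIVES[phrase] for phrase in SPELLS[power_word]]

record_spell("recedo",
             ["cuspis occludo", "domus obfirmo", "incedo fenestra"])
print(cast("recedo"))
# ['turn off the stove', 'lock up the house', 'turn on the bedroom TV']
```

A real system would dispatch each expanded action to a device controller instead of returning strings, but the spell-as-macro structure would be the same.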

As I said, you don't have to use Latin any more than a dog trainer has to use Russian, but it does avoid "accidental activation" and it makes interacting with systems a lot more fun. That said, the language used is immaterial; it's the combination of speech and in-the-air gestures, along with the possibility of creating more complex macros (i.e. spells), that makes NUI so much like magic and is really quite exciting.

So do you have to do it this way? Of course not, but one of the fundamental tenets of NUI is that the interaction should be enjoyable. Designing NUI to closely resemble Magic is one way to accomplish that. Magic as a design metaphor also provides us with a conceptual framework around which we can develop a full ecosystem of NUI controlled devices.

In the next installment of this series I'll talk about how display technology and multitouch can be modeled around magical artifacts.

Series Links
  1. Magic as a Metaphor for NUI Design: Part 1
  2. Magic as a Metaphor for NUI Design: Part 2, Casting Spells
  3. Magic as a Metaphor for NUI Design: Part 3, Enchanted Artifacts

1. This article is subject to editing and will probably change over time.
2. My education in Latin is based on Googling up words while writing this story.

Monday, June 1, 2009

Magic as a Metaphor for NUI Design: Part 1

Windows, Icons, Menus, and Pointers (WIMP), the de facto Human-Computer Interface (HCI) of today, is just starting to give way to other disciplines in HCI. These include Natural User Interfaces, Tangible User Interfaces, and Speech Recognition - all of which have many things in common including the concept that they give ordinary gestures, speech, and objects behavior that is super-real.

Natural User Interfaces (NUI), Tangible User Interfaces (TUI), and Speech Recognition (SR) have been treated as different concepts, but as the term NUI becomes mainstream it seems to be subsuming TUI and SR. Soon NUI will stand for any human-computer interaction that emphasizes gestures, touch, objects-as-information, or speech.

What is fascinating to me is how similar NUI technologies are to magic and how magic can be used as a design metaphor for NUI. How is magic a design metaphor for NUI? It's in the method in which we elicit effects and accomplish tasks. Magic is supernatural, beyond nature. NUI is technological but it's also super-real.

The term super-real describes the idea that NUI design borrows from the observable qualities of our interactions with real-world objects (i.e. physics), but not precisely. NUI favors usability over precisely modeling the physical world. Good NUI design makes visuals, and even devices, respond and act like natural objects up to a point, but then modifies those visuals to provide affordances and user feedback that are most convenient for the user. For example, flicking an object across a multi-touch table shouldn't cause the object to fly off the visible screen and onto the floor. It should bounce off the edge and land an inch or so away from that edge regardless of how hard it's flicked. That's not natural, but it's not exactly magic either. NUI doesn't precisely map to the natural world and, in fact, we don't want it to. We want NUI to be super-real, to go beyond what is natural, and to me that sets up the possibility of using magic as a metaphor for designing (not explaining) NUI.

As Arthur C. Clarke said:

Any sufficiently advanced technology is indistinguishable from magic.

The kind of magic I'm talking about is not folk magic or the art of illusion, but the modern fantasy fiction concept of magic. Magic in fantasy fiction has captured our imaginations and has shaped the Western concept of magic over the past 100 years. Our understanding of magic in fantasy fiction comes from original works by H. P. Lovecraft and J.R.R. Tolkien, and all the derivatives of those works such as the role-playing game Dungeons and Dragons, video games like World of Warcraft, and wildly popular fantasy books like the Harry Potter series.

Where magic has spells and charms, wands, crystal balls, magic mirrors, and books of magic; NUI has speech and gesture recognition (spells), remotes with accelerometers (wands), multitouch surfaces (crystal balls and magic mirrors), how-to books (books of magic), and so on.

In the next installment in this series I'll talk about how speech and gestures can be modeled on the interactions a practitioner of (fictional) magic would have while casting spells.
Series Links
  1. Magic as a Metaphor for NUI Design: Part 1
  2. Magic as a Metaphor for NUI Design: Part 2, Casting Spells
  3. Magic as a Metaphor for NUI Design: Part 3, Enchanted Artifacts

1: This post is subject to constant editing.
2: I don't believe that magic, ghosts, or anything else that is supernatural is real. I'm just using the fantasy concept of magic as a metaphor for designing NUI.

Update June 19, 2009

This set of posts on magic and design were inspired by a conversation (via email) that I had with Tim High back in February of this year. In an email Tim said the following, which has had a major impact on my thinking when it comes to software development and design:

"You might find this amusing: like all computer geeks, I've spent my share of time drooling over Lord of the Rings and other fantasy books imagining I could cast spells by sneezing, but of course realizing that that's as close as I'll ever get to real magic. But in the last few years, I've changed my mind - magic is whatever you can do in the real world, especially with the new technology that's popping up everywhere. I think we're all studying at Hogwarts these days. Just like at Hogwarts, everyone is a magician (as opposed to other fantasies, where magicians are a mysterious breed). But in Harry Potter's world, your power is related to how hard you study and how creative you can be.

This really hit home for me when I got my iPod Touch. Suddenly, I realized that I had the modern equivalent of a magic wand in my hands. I learned the basic spells immediately: play music, look at photos of my friends and family... when I downloaded the first free "flashlight" app, I thought it was ridiculous... their "flashlight" was just an all white screen. But I take a taxi home every day at night, and with my "wand" I was able with a single flick to cast a "light spell" and read in the cab on the way home. I use it to look for things under the bed, look at the car engine, whatever. Then, as long as you're on the internet, you have a magic spell that can whip up information on just about anything (Google and Wikipedia). Various forms of telepathy: cell phone, email, IM... and on and on. It's amazing how much this and other 21st century technologies have perfect analogies to Harry Potter magic. As far as I'm concerned, there's no difference."

It's important to recognize the people who inspire us; Tim's email inspired me to write this series of articles. This extract from Tim's email is posted with his permission.