The world of Smartboards and Sympodiums is about to change

The technology is multi-touch screens, developed at NYU (as per a post on the Cult of Mac blog). And the interesting part is that Apple has patented these interactions, which means… a TabletMac?

Comments ported from old blog:

Comments

Vic - nice find - this is like the Tom Cruise movie “Minority Report” on a screen… :-)

Posted by: fko at February 10, 2006 12:19 PM

I thought this already existed in the retail space…

http://www.jazzmutant.com/lemur_overview.php

+ paul

Posted by: poppenhe at February 13, 2006 02:28 PM

Thanks for the comment, Paul.

The Lemur, however, is not a primary display device for the computer it connects to, is it?

I have little knowledge of the blogged item itself, but compared to the Lemur it seems like it will be used as the primary display. Smartboards and Sympodiums act as the primary display devices for the user.

Posted by: rdivecha at February 13, 2006 02:44 PM

This is absolutely amazing!!! I am speaking from the perspective of a graphic arts professional with more than ten years of experience. This kind of interfacing will vastly improve performance and move us toward a more natural computing experience.
On another note - does anybody know the name and author of the background music? It sounds like Nina Simone in some kind of remix.

Posted by: eldar.a@comcast.net at February 13, 2006 03:13 PM

The background music is from “The Animatrix”, the animation anthology inspired by “The Matrix” trilogy.

Posted by: digitalverve@gmail.com at February 13, 2006 04:59 PM

That’s awesome!

Posted by: vic@interlinkweb.com at February 13, 2006 05:08 PM

To be exact, the background music is ‘Who Am I’ by Peace Orchestra - http://www.amazon.com/gp/product/B00000K53L

Posted by: gavink@gmail.com at February 13, 2006 05:16 PM

I work for SMART Tech and just FYI, we already have technology which can do this:

http://www.smarttech.com/dvit/index.asp

The thing is, multi-touch is not supported by any operating system, so it’s limited to custom applications for now. Maybe if Apple is involved they’ll put support into Mac OS X for it. Apple, if you’re listening, get in touch… :)

Posted by: GordPeters@smarttech.com at February 13, 2006 05:47 PM

And this thing is good for what??

Browsing the net?? How tedious. What beats the mouse in the right hand, and a… coffee in the left (?)

Photoshop? Impossible to do anything but the most basic tasks with a fingertip. This is why we currently use a mouse 'pointer'. Imagine having to zoom in to 2000% just to use your fingertip with the eyedropper tool?

Games? A whole new range of 'thrash your arms about' games are on the horizon… Great for the reflexes, but a nightmare for posture and backaches as you hunch over this thing.

Moving blobs around the screen, playing with photos, interacting with a colourful screen saver? This thing is perfect for these tasks, as shown in the demo, but the novelty would wear off rapidly… In fact, I'm bored with it already.

Posted by: brad@zip.com.au at February 13, 2006 06:08 PM

SmartTech Person,

We already own a smartboard and 2 sympodiums from your company… but I don’t think that they or the videos on the linked page show multi-touch, do they?

Am I missing something?

Thanks,
rdivecha

Posted by: rdivecha at February 13, 2006 06:16 PM

rdivecha, if you read the description closely, you’ll see it:

“You can also place your finger on the screen and touch another finger to the right of it for a right mouse click.”

It’s not officially being promoted for multi-touch because, as I said, multi-touch is not supported by MS Windows, Mac OS X, Linux, etc. So we’re limited to simply creating multi-touch “gestures” for existing functionality.

However, the DViT hardware itself most definitely has multi-touch capability (only the DViT-based SMART Boards, not our other products). We’ve played around with multi-touch internally using our own applications, we just didn’t make a cool demo video of it. ;)
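
Conceptually, the gesture layer is nothing exotic: you watch the touch points the hardware reports and translate certain patterns into events the OS already understands. A toy sketch of the idea (made-up coordinates and thresholds, not our actual code):

    # Toy sketch: map a set of simultaneous touch points to an event the
    # operating system already knows. Coordinates and the 20-unit threshold
    # are invented for illustration.
    def interpret_touches(touches):
        """Map a list of (x, y) touch points to a pointer event name."""
        if len(touches) == 1:
            return "left_click"
        if len(touches) == 2:
            (x1, _), (x2, _) = sorted(touches)   # order the two points by x
            if x2 - x1 > 20:                     # second finger sits to the right
                return "right_click"
        return "ignore"                          # no mapping for other patterns

    print(interpret_touches([(100, 200)]))              # -> left_click
    print(interpret_touches([(100, 200), (160, 205)]))  # -> right_click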

Posted by: GordPeters@smarttech.com at February 13, 2006 07:52 PM

I don’t understand. If NYU developed the technology, why does Apple get to patent it?

Posted by: ywwg@usa.net at February 13, 2006 08:38 PM

Of course, GordPeters,

I read that bit about gestures.

My view of applications was imagining my friends who work at Ford & its suppliers, working on a CAD design of an automobile wire harness, 2 or 3 guys together on the same screen.

Simultaneously working on 3-D models together will save those designers a heck of a lot of time, rather than having a sequential process.

Of course this is as good as working on the same file on different terminals, but working on the same machine cuts down on data transmission management and errors.

I feel quite hopeful…

Posted by: rdivecha at February 13, 2006 08:45 PM

Now if Apple could just encourage the web utility developers to get Flip4Mac to work on the new Intel Core Duo processor, everything would be sweet.

Posted by: jfenno@gmail.com at February 13, 2006 08:46 PM

Imagine the applications in the adult entertainment industry.

Posted by: mike@mikeabundo.com at February 13, 2006 09:20 PM

This device is obviously superfluous for performing everyday computing tasks like surfing the web. Coming from a biochemistry research perspective, this interface would be useful for tasks such as rotating and inspecting objects in three dimensions (such as proteins, chemical structures, etc.). Biologists and organic chemists could also rejoice in inspecting 3-D models of cells and pharmaceutical drugs. EEs could design circuits and virtual instruments (such as using LabVIEW). I'm sure people in other industries could find useful tasks for it.

The main feature that distinguishes this from traditional means of human-computer interfaces is that this sort of touch screen allows coordinated interaction with both hands, instead of limiting oneself to one hand controlling fine 3-D manipulations. I just don’t think having two mice would feel as intuitive.

The only problem is, the improved functionality of this over traditional tools is only marginal considering the cost of touch-screens (and programming). In order to commercialize this technology successfully, they would have to find a really killer app (can’t think of one that a mouse/keyboard couldn’t already handle in some manner).

Needless to say, I’d like to get my hands on one :)

Posted by: nicktrogen@gmail.com at February 13, 2006 10:53 PM

The gaming industry has surpassed even Hollywood in revenue: "…as the video game industry's earnings began to eclipse Hollywood's box office numbers – the video game industry rang up more than $9.9 billion in North America in 2004 versus Hollywood's North American box office of $9.4 billion –" [Chicago Tribune, Aug. 23, 2005]

Can you imagine how a gamer might think to use this kind of technology??? It could be a revolution. The best gamer in my "clan" has described this kind of multi-touch interface for some time, though we were thinking about a pad.

Posted by: danieltalexander@gmail.com at February 14, 2006 02:05 AM

Just wanted to add: touching your viewing screen is a messy affair (my PSP is gross, and the buttons are on the side of the screen). Add to that, people already have great screens like plasmas; what we are really talking about here could just be a desktop pad with visual cues so you know where you're touching - OLED tech. The exciting part is the multi-touch possibilities. You could still have your mouse and keyboard, but also your multi-touch board with visual feedback. Though wouldn't tactile feedback be much more efficient for the user???

I can almost imagine the raised textures changing under my fingers: it's a mouse, now it's a keyboard, now it's the shape-changing pad. Hitting the shelves in 2010.

Posted by: danieltalexander@gmail.com at February 14, 2006 02:24 AM


I love it !

I've reviewed the Lemur here in France for the French magazine Keyboards Recording, and when I see this other kind of multi-touch screen I'm jumping out of my chair!

Very exciting.

C.Webster

Posted by: c.webster@wanadoo.fr at February 14, 2006 02:35 PM

I think this is great. It takes us a step closer to removing ourselves from the confines of an office or desktop, even laptop screens.

As a graphics designer, I don't see it being very useful were I to apply the same technique as I do today: concepting in my head, then scribbling, and finally agonizing over getting what I see in my mind to land correctly on paper or a screen.

But that process would be moot if I could design and produce as I think, using hand gestures to quickly move elements around, eyes to select or focus on elements - essentially living within my idea. If I could fantasize that this would eventually be spatially applied, away from a screen, then I could see how we interact with computers being completely off the hook!

Apple, I’m hoping you’ll take us there!

Posted by: georgesandoval11@aol.com at February 14, 2006 06:59 PM

it allows multiple inputs? what’s the benefit to that? humans only do one task at a time. besides resizing windows in two dimensions simultaneously, or spinning a couple of records, i don’t see many applications that could benefit from this. you don’t type two paragraphs simultaneously, or type and format at the same time, or enter data and do graphing, or … or what? tell me what is so exciting about this? it’s new, it’s different, but what are the useful new applications that can come about because of this, and how will they make us more productive or improve the quality of our life?

i do see a benefit if several people can interact with the same machine simultaneously, working on different parts of a project or document, but then you run into screen/video limitations. and, you can already do that better with networked or terminal stations today.

Posted by: faisalkhan@hotmail.com at February 14, 2006 08:28 PM

Humans adapt to their tools as much as we create tools for our needs. How long did it take to learn to use a keyboard, a mouse, and so on? Some people still can't use either as efficiently as others. This new tool will allow new methods and new expressions of old tasks. I look forward to its wider implementation.

Posted by: ab_early@hotmail.com at February 15, 2006 07:15 AM

>> And this thing is good for what??


I’m sure people were saying the exact same thing when the first Macintosh came out which utilized the (what good’s that for?) mouse. After all, everything people were doing with computers could be done (probably faster!) using just the keyboard. Why the added tediousness??


And look where we’re at today.


The great thing about innovations like this isn’t because it will allow you to do what you’re already doing everyday on your computers “faster” or even “easier” — it’s that it could (potentially) let you do things you didn’t even imagine you’d want to do with your computer.

Posted by: rawheadz@gmail.com at February 15, 2006 07:58 AM

Someone mentioned a killer app? How about GIS and/or military applications? As shown in the video, imagine using intuitive hand movements to pan/zoom/rotate maps, aerial/satellite photos, 3-D renderings, adding overlays, etc. Then add in functionality like charting courses, GPS tracking, and calling up and dismissing layers of who-knows-what-kind-of-data: the possibilities are endless.

Like any other form of interface, the key is good application design: modeling a system that is as intuitive, productive, and economical in terms of user interaction as possible. But a device like this (esp. a large-scale variant) could deliver tremendous benefits to the kinds of apps I'm suggesting in a very operator-friendly way.

Heck, the first place you might see these in use could be your local weather reporter - the end of the concealed clicker in her palm!

Posted by: jimb@fsap.org at February 15, 2006 09:19 AM

I am sorry, but you don’t just create something and then hope and wait for applications to come along. That’s called finding a solution to a problem that does not exist. So, again, what problem is this solving? Humans do one thing at a time. Computers do many things at a time. We need to help humans do their one thing at a time better and faster, and we need to make computers do many things simultaneously, and in a way that can be understood by humans.

As for not knowing the uses of the mouse when it was developed…that’s actually wrong. We needed a pointing device to move all over the screen, rather than in “blocks” or pixels. We needed it to do this in two axes, up and down and left and right. So a trackball was the obvious solution. A mouse is just a trackball. It is an obvious solution for a glaring need.

So is the keyboard. It is partially an evolution of teletype machines and typewriter keyboarding input functions, and partially an obvious solution to having the need to press buttons to get the computer to do several things. In non-text applications, you can come up with entirely new sets of keys (buttons), or simply map the keyboard “buttons” and button combinations to new functions. Since it’s expensive and stupid to have multiple button boards or keyboards, we just use one and map the keys. Of course, alternate input keyboards and button boards exist. Where it is too expensive and/or inefficient to use the existing keyboard, we have alternate input devices, such as joysticks, scanners, barcode readers, various sensory gloves, etc.

So, again, what applications are there that will work better if you have the ability to have two or more “mouse arrows” on the screen? I see the following:

1. Ability to resize in two dimensions, but it is only slightly faster and resizing windows is a very small part of our computing activity.

2. Ability to “spin” multiple records in virtual DJ software. I think we could do a maximum of two, unless someone can make their fingers work independently. Mkay.

3. Someone mentioned that it could potentially allow us to do things that we have not even imagined yet. Interesting, but for that, typically, there’s a new paradigm. This is not a new paradigm. It’s just a variant of the existing mouse paradigm. You see how it’s being used as a pointer to indicate a point or points of focus on the screen, either for resizing handles, or for guiding the path of generated patterns, etc. All of this is exactly what we do with our mouse. So, it’s not new.

4. GIS, and or military apps. We have these apps already, and their speeds and execution are not limited by the mouse. So no, this method of input does not help. Pan/tilt/rotate are 3D operations and do need a good solution on a 2D screen. This is not it. A plane mounted on a trackball is far better. As you rotate, pan, tilt the plane, so too the representation on the screen should rotate/pan/tilt. Guess what, they have that, and even better solutions, already, known as “mode-lock” operations with keyboard and mouse. You enter the mode with a key combination or mouse button and then move the mouse to control the movement. Further, military applications pose problems not in terms of making sense of pictures, but in terms of painting an accurate picture based on reams and reams of data. How do you show terrain, enemy and friendly positions, different types of assets, overall threat assessment, movements, trajectories, coverage, support resources, estimated times of positions as they move and change, etc., etc. This whole mess is called visualization science, and it requires the abstraction and representation of a very large dataset. Interaction with it requires multiple automated inputs, rapid mode lock and focus shifting, and aggregation and coalescing of views. Two-handed input is not really that useful in this scenario.

As for the weather, the presenter losing their clicker helps us how?

It is not that keyboards are faster than a mouse for input. They do different things. What slows us down is the movement between the mouse and keyboard, if we have to use both. Or, if we use one for the function of the other. Buttons are for selecting. Mousing is for moving. If we use the mouse to select from dropdowns, etc., then it is slower than using a keyboard. If we use a keyboard to move things incrementally (e.g., using mapped key combinations or arrow keys) then that is much slower than using the mouse. The other problem is precision and scale. The bigger the scale, the more precision you lose, unless you control scale with one hand and precision with the other.
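
To put a number on that last point, here is a toy sketch of one possible two-handed scheme (entirely invented, just to illustrate the trade-off): one hand sets the zoom, the other hand's motion is divided by it, so the same physical movement becomes finer as you zoom in.

    # Toy sketch: one hand controls scale (zoom), the other controls position.
    # The fine hand's motion is divided by the zoom factor, so precision
    # increases as the scale hand zooms in. Numbers are made up.
    def effective_cursor(coarse_x, fine_x, zoom):
        """Combine a coarse position with a fine offset scaled down by zoom."""
        return coarse_x + fine_x / zoom

    print(effective_cursor(500, 30, 1))   # -> 530.0 (no zoom: fine hand moves 30 px)
    print(effective_cursor(500, 30, 10))  # -> 503.0 (10x zoom: same motion = 3 px)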

For interesting control interfaces and paradigms, look at a car’s interface. We can use one foot and one hand to control an automatic car (for the most part). Now, what about airplanes and boats? What functions, in what units of time, and within what ranges of movement, need to be performed. How can they be used to drive our next interface development?

Here are some ways we can do inputs with movement:
Head nodding.
Head rocking.
Head rotation.
Eye tracking.
Blinking.
Feet movement.
Rocking forward or backward or left or right of seat.
Shoulder shrugging.
Abdominal tensing.
Knee movements (side to side).
Thigh squeezing. 
Wrist rotation.
Elbow movement.
Pressing of back into a surface such as the seat back. 
Pressing of sides into a surface such as armrests.

Now all of these are independent body movements and can be used to quickly provide additional inputs. Our fingers are joined at the hand, and cannot move much independently. A cluster of buttons (keyboard) IS the best solution for their limited range of motion.

What we see here are both hands tied up in one resizing move, which is not really being done differently, or saving a whole lot of time. It might be marginally useful in desktop publishing layout operations, but otherwise, I think it’s fairly useless. Esoteric, yes. Useful, no.

Faster and/or easier are the only measures for a development's success. Then you have to layer on the "cool" over it. Starting with "cool" doesn't usually get you anywhere, at least in the real world. The internet's a different story.

Posted by: faisalkhan@hotmail.com at February 15, 2006 11:49 AM

@ rdivecha

We’ve already been working on custom applications such as the one you describe. They’re just not for the mass market (and thus aren’t being promoted as such). I can’t really say much more than that, but rest assured, it’s already happening.

Posted by: GordPeters@smarttech.com at February 15, 2006 01:31 PM

Err, a couple of mistakes. In my last paragraph, the first sentence reads:
"Faster and/or easier are the only measures for a development's success."

It was meant to be:

Faster and/or easier are the only measures for a development's effectiveness, and therefore determine its adoption, which is how we measure its success.

Posted by: faisalkhan@hotmail.com at February 15, 2006 02:09 PM

At a more basic level of analysis, any development should either help us perform in established situations in a better/faster/easier way, or enable the creation of new paradigms/scenarios. This does neither.

I challenge anyone to name an existing application that this development enhances, or a previously unavailable set of applications or methods of interaction with computer applications that this development enables.

If we were to think in terms of holographic projection of interactive 3D images, then 2 hands/pointers are actually a very limited set of “tools” that only double what can be done with one hand/pointer. I bet we will end up with multiple manipulators to point and select efficiently in such environments, or come up with a new paradigm where we control multiple “operators” who do numerous things simultaneously at our bidding. Necessity is the mother of invention. 

Posted by: faisalkhan@hotmail.com at February 15, 2006 02:18 PM

This product is about interface, so consider it with fresh eyes. As an example, one of the worst interfaces ever invented is still the keyboard. By looking outside the box with further advanced stylus interpolation, the multi-touch screen can completely replace what we all love to hate. Most know that the irrational layout of the keys on a traditional keyboard was designed to prevent the typewriter keys from locking up (I spent rainy afternoons with writer's block when I was a kid trying to get every key stuck in a suspended cantilever within my parents' typewriter).

So how does the multi-touch screen replace the traditional keyboard… To start, each of our fingers has a pad with a slightly different profile, shape, area, and print, so assign each of your fingers a letter. That's 10 to start; now triple that number with touch sensitivity: soft, medium, and hard. All these "finger taps" would then be registered by the typist in a method similar to the Apple Quick Key application combined with a standard PDA script interpreter. The resulting database is a personal keyboard or character stroke profile completely unique to each typist. Still, 30 characters is nowhere near enough, so start adding to the character matrix with combinations of fingers, and for good measure throw in a palm tap, a side pinky, and a number of different parts of the thumb. It's fair to say that you would be able to cover any standard keyboard, including lowercase, caps, and numerals.

All this is pretty basic but it has now replaced a standard keyboard with hand and finger motion. Anywhere on the screen, a light touch of your index finger gives you the letter “A”. No longer are we bound to the construct of a keyboard layout. So what good is this?

The poor ergonomic spacing of keyboard keys is gone forever,
Some imaginative 2-finger typists will get their ideas out faster,
Data input design and choreography will rise as a profession…

In essence, one could consider this as a 30+ click mouse. I agree with the comment about the impracticability of 2-handed input. The 2D-scale adjustment looks cool, but better interface solutions will follow. Just as the single-button mouse has evolved (sorry, Mac purists), so can the scroll wheel be replaced with a gliding forefinger. Hmm?

With an advanced digit input interface, I see great potential in digitally constructed parametric modeling. From sculpting shape (Look out! Fully upholstered cushion rooms from the 70's will be back) to boolean form building, applications such as Revit, SketchUp, and Max could go a long way.
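
To make the character matrix above a bit more concrete, here is a toy lookup; the finger names, pressure bands, and assignments are all invented, just to show the shape of it:

    # Toy illustration of the character matrix described above: which finger
    # touched the surface, plus how hard, selects a character. Names, pressure
    # bands, and assignments are invented for the sake of the example.
    PRESSURE_BANDS = [(0.33, "soft"), (0.66, "medium"), (1.01, "hard")]

    CHARACTER_MATRIX = {
        ("right_index", "soft"): "a",
        ("right_index", "medium"): "A",
        ("right_index", "hard"): "1",
        ("right_middle", "soft"): "b",
        # ... ten fingers x three pressures gives the 30 base characters,
        # extended further with finger combinations, palm taps, and so on.
    }

    def band(pressure):
        """Turn a normalized pressure reading (0.0 to 1.0) into a band name."""
        for limit, name in PRESSURE_BANDS:
            if pressure < limit:
                return name
        return "hard"

    def tap_to_char(finger, pressure):
        """Look up the character assigned to this finger at this pressure."""
        return CHARACTER_MATRIX.get((finger, band(pressure)), "?")

    print(tap_to_char("right_index", 0.2))   # -> a
    print(tap_to_char("right_index", 0.9))   # -> 1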

Posted by: koropecky@shaw.ca at February 16, 2006 12:58 AM

I don't see this as replacing the keyboard. I would see this as augmenting it. Just have a pop-up keyboard and mouse, and bam, you have all of the traditional tools you are used to, as well as some new nifty tools if you want to use them.

Posted by: jon@industrialsomething.org at February 16, 2006 12:24 PM

I've just registered to respond to those people who said this has no future, because it isn't necessary and it has no use.

Quote:
"Faster and/or easier are the only measures for a development's effectiveness, and therefore determine its adoption, which is how we measure its success."

You said this technology makes computing neither faster nor easier. That's wrong in my opinion. Imagine people who do not work with the computer intensively. Those people normally need someone to help them with their computer. With this new technology they wouldn't need anyone to help them with their computer.
For example, a detective. Just an example. The detective has to solve a case in which his only information is a video. What can he do to analyze it? Watching, watching, watching. And then he sees something in the middle-left corner, or wherever. It is something of interest to him.
Nowadays he needs someone to pause, reverse, zoom, etc., and that person then needs to find out what exactly the detective wants, and so on and so on. With this new technology the detective wouldn't rely on the technician. He would just use his fingers, stretch whatever he wants in a few seconds, zoom whatever he wants, etc. It would be way easier and faster for him to find something out.

Of course you could say that this is not so important, but it would be an advantage to those people. And that’s what inventions are for. People always want it easier and faster even if it might not be effective.

And just to give you guys another area where this technology would kick ass: presentations.
All forms of presentations would be way more flexible and easier. Imagine someone who is giving a presentation to some people who complain that they can't read the word in the right corner because it's too tiny. The presenter would just stand up and stretch the word with a movement of his two fingers, without having to interrupt his presentation or his speech.

Those are just a few thoughts of mine, in terrible English I know, but I'm still learning ;).
There are other areas that come to mind where it would be useful, but going into them would take too much time. I hope those "this isn't worth looking at" people are convinced that there are areas of life where this could be really useful.

Posted by: Kylex@web.de at February 16, 2006 12:33 PM

surely there could be some amazing advances in robotically assisted remote surgery using this kind of technology - using both hands at the same time…

Posted by: neale.foulds@duolog.com at February 17, 2006 06:04 AM

Just answering some comments from faisalkhan@hotmail.com and others who don’t see benefits or applications of this screen:

Personally I see infinite ones.

I think it could provide new points of view for many tasks, much as computers and keyboards did for text/document processing compared with the typewriter.
From my point of view this screen joins the concepts of screen and keyboard into one, which I would say goes a step forward. I am not saying this is going to replace keyboards. I would say it extends them.


Just thinking of some applications that could apply, or at least be debated in the forum, here are a couple of ideas:

It's true that humans have only two hands and fingers cannot work separately. But what about more than one human working on the same screen like this… a big one. Don't you see applications for that?

Another one: what about a cube of screens like this, to allow several people to interact on the same subject?

Yes, it may sound like fantasy, but I think small new concepts can lead to unforeseen new things.

Posted by: jce_gasp@hotmail.com at February 17, 2006 09:14 AM

I’m surprised I’m the first to post this:

http://www.fingerworks.com/userguides.html

There’s a small group of us who have been using multitouch input surfaces for a number of years. They were marketed as keyboard/mouse replacement systems for those suffering from RSI, and they serve that purpose well – I’d fought wrist pain for nearly twenty years, and since I started using the TouchStream, I’ve been essentially pain-free. Not having to move from keyboard to mouse is a HUGE deal.


FingerWorks, the people who developed the TouchStream keyboard, had some *excellent* user-interface talent, and they developed a set of multitouch gestures that are astonishingly easy to learn and use. True, the gestures mostly just replace key-modifier combinations, but they’re MUCH less painful to use.

Unfortunately, their business folded – the keyboards ran close to $400, and between that and the learning curve (typing without tactile feedback is tough), it was a hard sell. The company’s intellectual property was acquired, most likely by Apple, although nobody’s allowed to say. Those of us lucky enough to have their products are hoping that whoever bought them will resurrect something like the TouchStream system.

Posted by: jeff.umich@brandenburgs.us at February 17, 2006 09:36 AM

It looks like Star Trek is coming…

Posted by: office@punctweb.com at February 17, 2006 10:21 AM

Anyone who works with big twisty charts in Visio will thank God for this baby.

Posted by: mike@mikeabundo.com at February 18, 2006 06:08 PM

How would this help with twisty charts in Visio?

What can you not do now that this enables?
Or, what will this save you time or effort with?
Thanks.

Posted by: faisalkhan@hotmail.com at February 18, 2006 10:50 PM

This is amazing technology and I am really looking forward to this!.. It's like something one sees when one happens to shut one's eyes and look at the sun!.. Mind-boggling!.. :)

Posted by: sindhu.s@hotmail.com at February 20, 2006 07:43 AM

A-M-A-Z-I-N-G!!!.. :)

Posted by: sindhu.s@hotmail.com at February 20, 2006 07:53 AM

Amazing… I just wonder what's coming after this "mac table"…


Just a question… I also loved the soundtrack for the video… can anybody please tell me who composed it or which artist it is?


thank you

Posted by: obliquevoice@gmail.com at February 25, 2006 02:36 PM

From one of the earliest comments:

The background music is from “The Animatrix”, the animation anthology inspired by “The Matrix” trilogy.

Posted by: rdivecha at February 27, 2006 09:59 AM

>As for not knowing the uses of the mouse when it 
>was developed…that’s actually wrong. We needed 
>a pointing device to move all over the screen, 
>rather than in “blocks” or pixels. We needed it 
>to do this in two axes, up and down and left and 
>right. So a trackball was the obvious solution. 
>A mouse is just a trackball. It is an obvious 
>solution for a glaring need.

Your statement "rather than in 'blocks' or pixels" is warped. Mice move all over the screen based on "blocks" or pixels. In fact, the movement of a mouse is measured in DPI, dots per inch - a measurement of the number of pixels the pointer will move on screen for each inch the mouse is moved.

It wasn’t an obvious solution for a glaring need. I suggest you research it. Joysticks filled this niche considerably well and were invented quite a bit earlier (almost 2 decades).

The mouse was invented for a proprietary use and was kept that way for quite some time before finally, and slowly, being adopted for computer usage. Its initial use was to exploit body movements much in the same way this new system intends.

The mouse is also severely limited as to what can be done in a natural manner; I could definitely see myself using an instrument like the multi-touch for websurfing, document processing or the like and doing so faster.

I’m sure your view of reality could be applied to many new inventions, or items that were invented with no use but later became accepted as standards (i.e. the mouse). It is very fortunate that not all of us eschew common-sense to continue forcing our own distorted view of reality upon society.

As with everything, there are applications you can bring up that it won't be good for. Seriously, though, give it up! I barely use my mouse for what I do, instead favoring the keyboard because I find it tedious to have to move to the mouse and back constantly. A multi-touch display with a mock keyboard and an overlapping movement control would be insanely faster, especially for people who type more than 90 WPM.

Posted by: morganke@dor.state.fl.us at March 1, 2006 08:03 AM

This is fantastic. I can’t understand the knockers though.

Probably the most important aspect of UI is the available bandwidth for abstract input. Two buttons is better than one. The requisite bandwidth is relative to the type of task though. The mouse was a profoundly important change, but we still use a keyboard.

And regarding the relationship between the development of applications and that of their UI, it's more an evolution than a development. They will grow from changes made to each other. Stop worrying about the chicken and the egg.

Posted by: registrationcrap@gmail.com at March 5, 2006 08:46 PM

http://ninasimone.com/rca.html - About the music: evidence points to the voice sample being Nina Simone. Google doesn't turn up a record of Kruder crediting her.

Posted by: ckbarlow@hotmail.com at March 9, 2006 04:17 PM

also, don’t Apple’s latest trackpads support two-finger swipes for scrolling? Yes, they do; see “Scrolling Trackpad” over to the right of this page:
http://www.apple.com/macbookpro/design.html

Posted by: ckbarlow@hotmail.com at March 9, 2006 04:29 PM

I agree that almost anything can be done with a keyboard and mouse (and perhaps a stylus/digitizing pad). But as a multimedia and IT developer, I can say that any approach to a more "human-like" interface is a big help for developers trying to translate a computer process through a user interface.

Maybe applications work exactly the same, but they are more "user friendly". The risk is… if you make these interfaces too human, you might mess things up, because computers are still computers. So you need some degree of precision and control that might be hard to achieve through such interfaces.

Posted by: dada@sodio.net at March 10, 2006 02:44 PM

You are missing its presentation and easy-organization qualities. The application is a very nice tool for arranging ideas for presentations and papers, and for organizing stories and books. It could be a great learning tool for several learning disabilities.

It has a tremendous “Wow” effect for client presentations.

Presentations for architecture, planning, advertising, graphic design, medical procedures… It would be fantastic for movie or animation storyboards.

The color thing is of little use, but the rest of the applications are incredible.

Posted by: wohleb@comcast.net at March 20, 2006 07:59 PM

While there are definitely - and frequently - “innovations” in the world that are of questionable benefit or applicability, it’s mind-boggling to me that some reviewers here consider this to be one of them. One keeps saying that “humans only do one thing at a time;” maybe I’m not human, because I often do multiple things at once. Driving a car, I can shift gears, brake, turn the wheel and maybe even turn the wipers on all at the same time. It’s a skinny example, but it’s what I’m going after here. In the lemur demonstration, even the small bit where a storyboard was being arranged… how else would you move images, resize them, re-orient them and change their proportions or skew/perspective all at once? There’s no mouse or cursor or keyboard or stylus combination that I can think of that could allow you to do that. And why on earth would you not want to do that?

As a musical composer, always struggling to get my synths and sequencers and control devices to work together, I can see huge potential benefit from something like this. It is nothing without a robust application layer, obviously, which is where most great technological ideas that fail, fail. What I mean is, if I were to have to figure out how to map the configurable parameters of a single voice selection of one of my synths to a control surface so that I could manipulate them in real time, and be able to remember how I’d mapped them, (difficult, considering that different types of voices, such as clarinets vs koto drums, have different parameters that I care about), I’d get nowhere fast. But if applications were developed in such a way that the existence and the function/purpose of the data were made available along with the data itself (which is sort of the promise of xml), then I could see some self-configuring device related to this technology playing a key role in bridging the gap between user and new technology. I know, for example, that somewhere in my setup is a way to control reverb depth, and if the application interface layer goes and seeks out that information for me, and maps it to my fingertips in a way that is predictable and accessible to ~me~, then more than half the battle is won.
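
Something as small as this sketch is really all I mean; the XML format and names are invented, but it shows an application publishing what a parameter is and a control layer mapping it to a numbered touch fader automatically:

    # Toy sketch of the self-describing-parameter idea: if a synth publishes
    # what each parameter is and what range it takes (here as a little XML
    # blob), a control surface could assign it to a touch fader automatically.
    # The XML format and names are made up for illustration.
    import xml.etree.ElementTree as ET

    SYNTH_DESCRIPTION = """
    <voice name="clarinet">
      <parameter id="reverb_depth" label="Reverb depth" min="0" max="127"/>
      <parameter id="breath_noise" label="Breath noise" min="0" max="127"/>
    </voice>
    """

    def build_fader_map(xml_text):
        """Assign each published parameter to a numbered on-screen fader."""
        root = ET.fromstring(xml_text)
        faders = {}
        for slot, param in enumerate(root.findall("parameter")):
            faders[slot] = {
                "id": param.get("id"),
                "label": param.get("label"),
                "range": (int(param.get("min")), int(param.get("max"))),
            }
        return faders

    for slot, fader in build_fader_map(SYNTH_DESCRIPTION).items():
        print(f"fader {slot}: {fader['label']} {fader['range']}")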

Innovations like this are not necessarily solutions in and of themselves, but they do open the door to solutions perhaps simply by providing an environment in which fewer things need to be kept in conscious thought. By incorporating natural motions and gestures as devices like this can do, it certainly goes a considerable distance toward creating that kind of environment.

Posted by: k9gardner@mac.com at March 26, 2006 08:12 PM

I can think of an "immediate and glaring" need for this technology in laptops. The touchpad or nub used these days for analog control in laptops is just poor: in terms of precision, comfort, and usefulness, it simply cannot replace the mouse, and the mouse takes away from a laptop's portability. While Apple's two-finger pad is a step in the right direction, a touch pad completely replacing the physical keyboard with a virtual one would do wonders for the interface.

As an artist, I personally prefer to keep the screen and touchpad separate, since your own hand blocks the view of what you’re working on. The small sacrifice in intuitiveness really is worth the benefit of full vision, and a touchpad is more cost effective.

The only real concern I would have with this is the feedback system for typing, which I think Apple has already solved for touch-scrolling in the form of the iPod’s clicker. I imagine sound would be the preferred solution in this case as well.

Posted by: spyderfreek@comcast.net at April 3, 2006 03:09 AM

Interaction between humans and computers will hopefully alter greatly in a lot of ways. Interfaces nowadays are stuck with tons of useless information and clickediclick scrollscroll and insulting my physiognomy. Touchscreens play only a minor role in this technical revolution. Computers will adapt to every specific user, as people adapt to each other to simplify communication. Voice "recognition" will play its part. It will be less of a "recognition", more of a dialog between human and machine. Gestures will possibly be unbound from the ridiculous screen. I will be pointing and cutting the air with my hands and the computer's stereovision camera will pick up my intentions. Of course.

The discussed concept is one proper step. I am really begging the developers to advance quickly, as I cannot stand working with what is out there today. The mouse concept is worn out. It makes me a slave. I cannot act freely. It is outdated. I want to get rid of it. Once and for all.

Posted by: interfilip@gmail.com at April 14, 2006 04:44 PM

The video looks like it is no longer available.

Posted by: Steward5732@gmail.com at February 28, 2007 02:10 AM

The video has been restored; just search "multi touch" on youtube.com and find copies…

Posted by: rdivecha at February 28, 2007 09:30 AM
