Thoughts on Multimodal design documentation

23 Jan 2009 - 12:19pm
mark ahlenius


I am curious whether this group has any insight into, or experience with, good
methods for documenting multimodal designs. Specifically, the multimodal
designs I am referring to combine speech (voice recognition) with other
modalities such as touch screens, keypads, and of course a GUI. Allowing the
end user to select (and switch to) the best type of interaction with a device
for their current context can have real value, IMHO. But documenting such
designs is pretty cumbersome. I've been involved with some designs using call
flows and state tables plus textual docs, but it quickly becomes quite
complex. Any thoughts on this matter?
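For what it's worth, one lightweight way to keep a state table readable across modalities is to record each transition as (current state, modality, event) -> next state, so that equivalent voice, touch, and keypad paths sit side by side. Here is a minimal sketch of that idea; the states and events (a hypothetical flight-status flow) are made up for illustration, not taken from any real design:

```python
from typing import Dict, Tuple

# One transition per row: (current state, modality, event) -> next state.
StateTable = Dict[Tuple[str, str, str], str]

# Hypothetical flight-status flow. The same prompt can be answered by
# speech, touch, keypad, or GUI; listing the rows together documents
# the cross-modal equivalences explicitly.
FLIGHT_LOOKUP: StateTable = {
    ("main_menu", "speech", "say_flight_status"): "ask_flight_number",
    ("main_menu", "touch",  "tap_flight_status"): "ask_flight_number",
    ("ask_flight_number", "speech", "say_number"):      "show_status",
    ("ask_flight_number", "keypad", "enter_number"):    "show_status",
    ("ask_flight_number", "gui",    "pick_from_list"):  "show_status",
}

def next_state(table: StateTable, state: str, modality: str, event: str) -> str:
    """Look up the transition; stay in the current state if the event is unhandled."""
    return table.get((state, modality, event), state)

# A user starts by touch, then switches to the keypad mid-task.
s = next_state(FLIGHT_LOOKUP, "main_menu", "touch", "tap_flight_status")
s = next_state(FLIGHT_LOOKUP, s, "keypad", "enter_number")
print(s)  # show_status
```

The nice side effect is that the same table can drive both the documentation (rendered as rows grouped by state) and simple walkthrough checks like the one above, so the docs and the design are less likely to drift apart.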

thanks all,

