[Accessibility-handlers] First draft - Use case for "navigability, perhaps multiple levels of navigability"

Pete Brunet brunet at us.ibm.com
Mon Jul 30 12:41:54 PDT 2007

Here are my first thoughts; they aren't well thought out, just the initial 30 minutes of thinking.

Use case for "navigability, perhaps multiple levels of navigability"

AT users need to be able to navigate within sub-components of documents 
containing specialized content such as math, music, or chemical markup. 
Typically these specialized components have content that needs to be 
"focused" on at different levels of granularity, e.g. a numerator within a 
numerator, an expression, a term, etc.  Are there terms, in math for 
example, that can be used to define each level of granularity?  If not, is 
it sufficient to just increment/decrement the level?  Within each level 
you'd need functions (invoked in response to AT commands) that return the 
following "items" for a particular level of granularity:
first/last on line, or first/last within the next higher level of granularity
first/last in document
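One way to picture the level question above is a small cursor over an ordered list of levels. This is only a sketch; the level names (character, term, expression, equation) are assumptions, not agreed terminology, and if no named levels exist the same object degrades to a plain increment/decrement of an integer level.

```python
# Hypothetical sketch: levels of granularity for math content.
# The level names below are assumptions, not standardized terms.
MATH_LEVELS = ["character", "term", "expression", "equation"]

class GranularityCursor:
    """Tracks the current level of granularity, finest level first."""

    def __init__(self, levels):
        self.levels = levels
        self.index = 0  # start at the finest granularity

    @property
    def level(self):
        return self.levels[self.index]

    def up(self):
        """Move to the next coarser level, clamping at the top."""
        self.index = min(self.index + 1, len(self.levels) - 1)

    def down(self):
        """Move to the next finer level, clamping at the bottom."""
        self.index = max(self.index - 1, 0)
```

If named levels turn out not to exist for some content types, only the `levels` list changes; the increment/decrement behavior stays the same.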

There are two scenarios to consider: a read-only scenario and a scenario 
where the user is editing the document.

There are three system components that need to interact: the user agent, 
e.g. a browser, the AT, and the plugin/handler.

In the read-only case, the AT responds to some sort of "focus" change 
event and, depending on the "role" of what got focus, the AT fetches a11y 
info pertinent to that role and then formats/outputs a response tailored 
to an AT user, e.g. TTS/Braille.  In the case of specialized content, a 
handler needs to be used by the AT because the AT doesn't know how to 
deal with the specialized content.
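The read-only flow described above might look like the following sketch: the AT reacts to a focus event, inspects the role, and delegates specialized content to a registered handler. All names here (SPECIALIZED_ROLES, MathHandler, on_focus_change, the event dictionary shape) are illustrative assumptions, not an existing API.

```python
# Hypothetical read-only flow: role-based dispatch to a handler.
SPECIALIZED_ROLES = {"math", "music", "chemistry"}

class MathHandler:
    """Stand-in for a real math handler plugin (hypothetical)."""
    def describe(self, node):
        return f"math: {node}"

def on_focus_change(event, handlers, output):
    """AT-side response to a focus change: fetch a11y info based on
    role, delegating specialized content to its handler because the
    AT can't interpret that content itself."""
    role = event["role"]
    if role in SPECIALIZED_ROLES:
        text = handlers[role].describe(event["node"])
    else:
        text = event.get("name", "")
    output(text)  # e.g. hand off to a TTS or Braille back end
```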

The user will want these commands:
change level of abstraction up/down
read all from top
read all from POR (point of regard)
goto and read first/last on line
goto and read first/last in document
goto and read previous/current/next
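The command list above could be captured as a simple enumeration on the AT side; the names below are illustrative only, mirroring the list rather than any defined interface.

```python
from enum import Enum, auto

class ATCommand(Enum):
    """User commands from the list above; names are illustrative."""
    LEVEL_UP = auto()            # change level of abstraction up
    LEVEL_DOWN = auto()          # change level of abstraction down
    READ_ALL_FROM_TOP = auto()
    READ_ALL_FROM_POR = auto()   # POR = point of regard
    GOTO_FIRST_ON_LINE = auto()
    GOTO_LAST_ON_LINE = auto()
    GOTO_FIRST_IN_DOC = auto()
    GOTO_LAST_IN_DOC = auto()
    GOTO_PREVIOUS = auto()
    READ_CURRENT = auto()
    GOTO_NEXT = auto()
```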

In the case of editable content there may also be a desire to have 
separate cursors, e.g. one that remains at the caret and one that moves 
around for review purposes.

The AT will already have UI input commands for most of the above 
functions, but probably not for changing to higher/lower levels of 
granularity.  Let's assume ATs add that; in response, the AT would call 
the handler to change the mode of granularity.  The AT will handle the UI 
commands and in turn call the handler to return an item at the current 
level of granularity.  The AT will also have told the handler about the 
output mode, e.g. Braille or TTS.  Armed with those three things (level 
of granularity, mode of output, and which item: first, last, previous, 
current, or next) the handler knows what to do.
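The handler call implied by those three inputs might be sketched as below. This is a minimal illustration under assumed names (`get_item`, the string-valued `which` and `output_mode` parameters); the note does not fix an interface, and a real handler would produce speech text or Braille cells rather than a tagged string.

```python
def get_item(items, which, por_index, output_mode):
    """Return one item from the current level of granularity.

    items:       sequence of items at the current granularity level
    which:       'first' | 'last' | 'previous' | 'current' | 'next',
                 relative to por_index (the point of regard)
    output_mode: 'tts' or 'braille' (a real handler formats per mode)
    """
    index = {
        "first": 0,
        "last": len(items) - 1,
        "previous": max(por_index - 1, 0),
        "current": por_index,
        "next": min(por_index + 1, len(items) - 1),
    }[which]
    # Placeholder formatting; real output would be mode-specific.
    return f"[{output_mode}] {items[index]}"
```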

In the case of editable content, the UA provides the input UI for the 
user.  This editing capability would most likely be provided via a 
plugin.  We need an example of such a plugin so we can evaluate what 
a11y features need to be added to existing editors.

Pete Brunet
IBM Accessibility Architecture and Development
11501 Burnet Road, MS 9022E004, Austin, TX 78758
Voice: (512) 838-4594, TL 678-4594, Fax: (512) 838-9666
Ionosphere: WS4G