[Accessibility-testing] Question about a hypothetical parser rewrite

Andrew Plotkin erkyrath at eblong.com
Tue Mar 12 23:56:09 EDT 2019


On Mon, 11 Mar 2019, deborah.kaplan at suberic.net wrote:

> This is mostly a question for Zarf, I think.

Okay!

> Several of the comments were about ways in which the parsers didn't speak to 
> the AT very well. So here's a question I have: if we imagine a hypothetical 
> world where there's funding or excitement to address even a single parser on 
> a single operating system -- say, WinGlulxe -- how hard would it be to 
> integrate the accessibility APIs? To be clear, in this case it would be 
> integrating Microsoft Active Accessibility (MSAA) for communication with the
> Accessibility Tree on Windows? I tried to poke around the WinGlulxe code 
> briefly but it was somewhat opaque to me.

The answer is... complicated! Sorry.

For WinGlulxe (and Glulxe interpreters in general), the screen is divided 
into panes. Attaching ROLE_SYSTEM_TITLEBAR to the upper one and 
ROLE_SYSTEM_DOCUMENT to the lower one should be very easy.
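
As a very rough sketch (assuming each Glk pane is its own child window,
which may not match how WinGlulxe actually lays out its panes), the hookup
would be something like this:

/* Minimal MSAA sketch, not WinGlulxe's real code. Each pane's window
   procedure answers WM_GETOBJECT; a fuller version would wrap the
   standard accessible object so that get_accRole reports
   ROLE_SYSTEM_TITLEBAR for the status pane and ROLE_SYSTEM_DOCUMENT for
   the story pane. Link against oleacc.lib. */
#include <windows.h>
#include <oleacc.h>

LRESULT CALLBACK PaneWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_GETOBJECT && (LONG)lParam == OBJID_CLIENT) {
        IAccessible *acc = NULL;
        HRESULT hr = CreateStdAccessibleObject(hwnd, OBJID_CLIENT,
                         IID_IAccessible, (void **)&acc);
        if (SUCCEEDED(hr)) {
            /* Hand the accessible object back to the AT via the system. */
            LRESULT lr = LresultFromObject(IID_IAccessible, wParam, acc);
            acc->Release();
            return lr;
        }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}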

It may be possible to mark regions of the lower pane as new text. I don't 
know enough about MSAA to say if it supports this.

Going beyond this level is hard, because the interpreter doesn't have 
access to the semantic information that goes into the game layout. For 
example, Inform's "menu" interface (demonstrated in the slide projector in 
my game) is really just the title bar, stretched vertically and with 
different text printed in it. The interpreter doesn't know that the lines 
are menu options.
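
To make that concrete: as far as the interpreter can tell, a game drawing
that menu is just making ordinary Glk calls, something like this (the text
and layout here are hypothetical):

/* What the interpreter sees when a game "draws a menu": plain Glk calls
   that position text in the status (grid) window. Nothing in the stream
   marks these lines as selectable options. */
extern "C" {
#include "glk.h"
}

static void put_line(winid_t win, glui32 ypos, const char *text)
{
    glk_window_move_cursor(win, 0, ypos);
    glk_put_string(const_cast<char *>(text));  /* glk_put_string takes char* */
}

static void draw_menu(winid_t statuswin)
{
    glk_set_window(statuswin);
    glk_window_clear(statuswin);
    put_line(statuswin, 0, "  Instructions");
    put_line(statuswin, 1, "> About this story");  /* the ">" marker is just more text */
    put_line(statuswin, 2, "  Credits");
}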

The interpreter also doesn't have the ability to break out separate blocks 
of information, like exit lists or room descriptions.

When I designed this display model, I intentionally kept the channel 
narrow: the game emits a stream of text, the interpreter displays it. This 
was good in some ways -- it's easy to design an interpreter that works over
IRC, or over a web socket connection, or on MacOS or Windows or anything 
that supports a text document. But there's no way to break up that text 
stream into semantically meaningful sections.

I maybe should have known better, but I came up with this plan in 1997...

Audio and image display can be made smarter. Images come through in the 
output stream as simple commands ("display image 37, right-aligned"), but
image 37 can have metadata like alt text. So it's possible for this to 
come through with accessibility markup; we just need to make sure the 
wires are all hooked up.
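
For example, a hypothetical helper along these lines (made-up name, minimal
error handling) could pull an image's description out of the Blorb file's
RDes chunk -- following the layout in the Blorb 2.0 spec -- and pass it
along as the accessible name of the image:

/* Hypothetical helper: look up the alt text for image number "imagenum"
   in the Blorb RDes (resource description) chunk. The chunk layout,
   per the Blorb 2.0 spec, is a 4-byte entry count, then one
   (usage, number, length, text) record per entry. A real version would
   bounds-check against res.length and unload the chunk when done. */
#include <string>
extern "C" {
#include "glk.h"
#include "gi_blorb.h"
}

static std::string image_alt_text(giblorb_map_t *map, glui32 imagenum)
{
    giblorb_result_t res;
    giblorb_err_t err = giblorb_load_chunk_by_type(map, giblorb_method_Memory,
        &res, giblorb_make_id('R', 'D', 'e', 's'), 0);
    if (err != giblorb_err_None)
        return "";

    const unsigned char *p = (const unsigned char *)res.data.ptr;
    glui32 count = (p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3];
    p += 4;
    for (glui32 i = 0; i < count; i++) {
        glui32 usage  = (p[0] << 24) | (p[1] << 16) | (p[2] << 8)  | p[3];
        glui32 resnum = (p[4] << 24) | (p[5] << 16) | (p[6] << 8)  | p[7];
        glui32 len    = (p[8] << 24) | (p[9] << 16) | (p[10] << 8) | p[11];
        p += 12;
        if (usage == giblorb_ID_Pict && resnum == imagenum)
            return std::string((const char *)p, len);
        p += len;
    }
    return "";
}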

--Z

-- 
"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."
*


