Screen reader support #12508
-
Also discussed in #10222
-
Thanks for the link! According to this comment, for this to work, it really needs to be a contractual part of Helix core. I understand it's just a bunch of text primitives, but the thing is, we as devs are generally lazy, and we tend to build things that work for us. If we include TTS as part of the plugin APIs, or primitives that are inherently accessible, then this will really open the door for people such as myself to have a real alternative: a tool we can use to get work done without having to fight so hard to do basic tasks. If we leave accessibility up to specific frontends, many plugin authors won't implement it, because there is no explicit common API surface for doing so.

That said, using the accessible APIs definitely should not be mandatory. But it is worth considering designing them to be opt-out, so that not using them is an intentional choice; requiring plugin authors to opt in will leave many not knowing the APIs even exist.

We should also think about the kind of users who will be writing plugins for an editor like Helix. I don't believe (although I do not know) that Helix devs come from a front-end background, where thinking about accessibility has been socially encouraged in the last few years. I would guess many come from fields outside front-end work, where accessibility for visually impaired devs is rarely in the zeitgeist.
-
Hello everyone! I am a legally blind dev, and I wanted to propose adding screen reader support to the Helix editor.
Currently, the state of accessibility tooling for visually impaired devs such as myself is very lacking. VS Code is unfortunately the only editor that is accessible, for the most part, largely because it is built on web technologies. Even then it isn't great, allowing only for simple reading aloud of the line under the cursor.
Editors such as Neovim would be practically impossible to retrofit with screen reader support due to their inherent lack of UI structure and the design of their plugin systems.
I propose adding support for screen readers, or at least starting a discussion about it, since there is work underway to add a plugin system. Designing extensible APIs with accessibility in mind is unfortunately not something that can easily be bolted on after the fact.
I have two different ideas for adding screen reader support to Helix: integrating AccessKit, or building a screen reader into the editor itself.
Let's talk a little more about each of these approaches.
AccessKit
AccessKit is an accessibility-tree implementation that handles talking to the underlying platform accessibility abstraction on your behalf. This is a good fit for UIs that have an inherent UI tree and a linear navigation flow, i.e., focus on a single item at a time, moving from one focusable item to the next and back again. Think of dropdown lists, menu items, and using `hjkl` to move around the editor.

Supporting AccessKit would universally enable screen readers to interact with Helix at a basic level. The problem is that Helix is a CLI application, and I am not too sure about the integration story for AccessKit in a CLI. Assuming the integration would work, interactions would be strictly limited to simple ones, much as in VS Code, where only the focused item is read aloud, i.e. the current line, menu item, etc.
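To make the model concrete, here is a minimal sketch of the kind of role-tagged tree a frontend would expose. The types are illustrative stand-ins, not AccessKit's actual API; AccessKit itself forwards such trees to the platform layers (UIA on Windows, AT-SPI on Linux, NSAccessibility on macOS).

```rust
// Illustrative stand-ins, not AccessKit's real types: they model the
// role-tagged tree plus focus that a platform screen reader consumes.
#[derive(Debug, Clone, Copy)]
enum Role {
    Window,
    Document,
    MenuItem,
}

struct Node {
    role: Role,
    label: String,
    children: Vec<Node>,
}

fn main() {
    // What a Helix picker might look like as an accessibility tree.
    let tree = Node {
        role: Role::Window,
        label: "helix".into(),
        children: vec![
            Node { role: Role::Document, label: "main.rs".into(), children: vec![] },
            Node { role: Role::MenuItem, label: "Open file".into(), children: vec![] },
        ],
    };
    // A focus change is just "this node now has focus" in a tree update;
    // the screen reader then announces only the focused node.
    let focused = &tree.children[1];
    println!("announced: {} ({:?})", focused.label, focused.role);
}
```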
Built-in
A built-in screen reader would be an embedded text-to-speech engine that announces whatever action the user is currently performing. No coupling with a platform accessibility tree, no focus-node management, etc.; just a plain old embedded text-to-speech API.
The implementation could be as simple as:
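(A hedged sketch of the shape this could take; the names below are illustrative rather than existing Helix APIs, and a real backend, e.g. the cross-platform `tts` crate, would drain the queue on its own thread.)

```rust
use std::collections::VecDeque;

/// Events the editor already knows about at its internal callsites:
/// cursor movement, selection changes, mode switches, and so on.
enum SpeechEvent {
    Line { number: usize, text: String },
    Selection { text: String },
    Mode { name: &'static str },
}

/// Pending utterances; a TTS backend drains this on its own thread.
struct SpeechQueue {
    pending: VecDeque<String>,
}

impl SpeechQueue {
    fn new() -> Self {
        Self { pending: VecDeque::new() }
    }

    /// Flatten an event into the text the user hears.
    fn announce(&mut self, event: SpeechEvent) {
        let utterance = match event {
            SpeechEvent::Line { number, text } => format!("line {number}: {text}"),
            SpeechEvent::Selection { text } => format!("selected: {text}"),
            SpeechEvent::Mode { name } => format!("{name} mode"),
        };
        self.pending.push_back(utterance);
    }
}

fn main() {
    let mut speech = SpeechQueue::new();
    speech.announce(SpeechEvent::Mode { name: "insert" });
    speech.announce(SpeechEvent::Line { number: 42, text: "fn main() {".into() });
    for utterance in speech.pending.drain(..) {
        println!("speak: {utterance}"); // a real backend would call e.g. tts.speak(utterance, false)
    }
}
```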
where all speech is queued at the internal API callsites, such as when moving from one line to the next, or when a selection is made.
Comparisons
The biggest advantage of a built-in TTS engine is that it allows for non-linear navigation of code. A traditional screen reader has no notion of multiple cursors, or of LSP features such as variable type hints, inlay hints, and code lenses, nor even of syntax highlighting; it only reads from top to bottom and from left to right.
The largest downside to embedding a TTS engine is that the user won't have access to their predefined native screen reader config (voice, speech rate, pitch, etc.), unless we tried to detect it and load it on startup.
Honestly, there's much to say on this topic, so please, let me know what you guys think. I'd be happy to go into as much detail as y'all want.