A field-level reflection on how AT work is changing
Human-Centered Expertise as the Stabilizing Core Competency in a Quietly Shifting AT Field
December 28, 2025.
As assistive technology moves into more dynamic, adaptive, and increasingly AI-driven environments, I sense that our profession is beginning to change in ways we haven’t fully named yet. Tools now evolve faster than service models, systems update faster than policies, and expectations expand faster than job descriptions. In my day-to-day work, I feel this tension growing, not as a crisis, but as a quiet pressure that asks something different of us than it did even a few years ago.
One pattern I keep noticing is an emerging divide in how AT work actually happens. I don’t see this as a formal split or a judgment about competence. I see it as a functional difference that already exists in practice. On one side, much of the work focuses on implementation, procedures, and tool operation. On the other, the work centers on interpretation, coaching, systems thinking, and decision-making in contexts where answers do not stay stable for long. We often talk about these roles as if they are interchangeable, but in practice, they ask for different kinds of expertise, time, and responsibility. I think the profession will eventually need to name this difference, even though doing so feels uncomfortable.
This shift makes sense when I look at how technology now behaves. AI-influenced systems adapt, learn, and change over time. Decisions that once felt final now require revision. Implementation no longer ends at setup or training. Service delivery increasingly depends on coaching, partnership, co-regulation, and the ability to revisit decisions as new information emerges. In this environment, certainty does not last very long. What lasts longer is the relationship between people and the way we support meaning-making around tools that keep changing.
Because of this, I believe human-centered expertise is moving from a soft skill to the core stabilizing competency of the AT profession. When technology changes this quickly, the only true constants are the human using the system and the human supporting them. Technical knowledge still matters deeply, but it no longer stands on its own. Without strong human judgment, interpretation, and ethical reasoning, even the most advanced tools lose their effectiveness.
In my own work, I see certain skills becoming less central, not because they lack value, but because they cannot carry the work by themselves anymore. Tool-specific expertise, static feature matching, one-time trainings, and procedural compliance still matter, but they no longer define success. At the same time, I see other forms of expertise becoming essential. These include the ability to read people as carefully as systems, to reassess and recalibrate over time, to coach adults without overwhelming them, to translate complexity into usable understanding, and to make ethical decisions in fast-moving, AI-influenced environments. I also see growing value in designing supports that tolerate uncertainty and evolve rather than aiming for perfect implementation from the start.
What I keep coming back to is a larger question about readiness. Are we preparing assistive technology teams and school systems to recognize, value, and develop this kind of human-centered expertise, or do we still treat it as optional and informal compared to technical skill? If the profession continues to change without naming these differences, we risk placing unrealistic expectations on people while leaving them without the structures or recognition they need to do this work well.
I don’t see this as an argument for hierarchy or exclusion. I see it as an invitation to be more honest about what the work now asks of us. Naming what is shifting gives us a chance to redesign roles thoughtfully, support growth rather than burnout, and align training with the reality of AT practice as it exists today. I am curious how others are experiencing this change in their own settings and roles, and which skills they find themselves relying on more than they did before.
Field Notes: On Quietness and AAC
December 22, 2025.
Some of the students I work with who use AAC are quiet. Not all, but enough that I’ve started paying attention to it. I don’t mean quiet as in disengaged or uninterested. More like selective, thoughtful, economical with language.
It makes me wonder whether this is temperament or something shaped over time. Communicating with AAC often requires effort, waiting, and repair, and it usually happens under someone else’s gaze. I find myself asking whether years of moving through those conditions might slowly influence what feels worth saying and when. Maybe expression becomes filtered and spontaneity expensive. Maybe silence sometimes feels safer than trying to fix a misunderstood message again.
I’m not suggesting this is true for everyone who uses AAC, or that quietness is something to change. If anything, it feels adaptive. A way of conserving energy, meaning, or agency in systems that haven’t always made expression easy.
These are just observations I’m holding lightly. But they keep returning as I think about how communication systems don’t just support access in the moment, they may also shape how people come to use (or withhold) their voice over time.
A Silent Safety Layer in AAC Design
December 13, 2025.
I have students who use AAC (speaking, minimally speaking, and nonspeaking) who suddenly become sad or start crying quietly 🥺. Something has changed: they don’t feel safe, they’re experiencing sensory overload, a delayed memory has surfaced, a need can’t be communicated, they’re processing, they need a break, they need time, they simply need to be…
In those moments, when crying is the safest signal they have left, I’m reminded that the most accessible communication is to STOP communicating.
What if AAC (any human interface, really) supported that? What if there were a button on the AAC home page that turned the screen a muted red (similar to screen dimming or color filters), fully under the user’s control?
Operating systems already provide this capability; AAC apps could provide the trigger.
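To make this concrete, here is a minimal sketch of what that trigger could look like in a SwiftUI-based AAC app. Everything here (view names, opacity, button labels) is my own illustration, not a feature of any existing product.

```swift
import SwiftUI

// A minimal sketch of a user-controlled "quiet screen" trigger.
// All names and values here are illustrative.
struct QuietScreenDemo: View {
    @State private var isQuietMode = false

    var body: some View {
        ZStack {
            // Placeholder for the AAC home page grid.
            Text("AAC home page")

            // Muted red wash, similar to a system color filter.
            if isQuietMode {
                Color.red
                    .opacity(0.35)           // muted, not alarming
                    .ignoresSafeArea()
                    .allowsHitTesting(false) // touches pass through; the user stays in control
            }
        }
        .overlay(alignment: .topTrailing) {
            // The single button the post imagines on the home page,
            // fully user-controlled in both directions.
            Button(isQuietMode ? "I'm back" : "Quiet") {
                withAnimation(.easeInOut(duration: 0.4)) {
                    isQuietMode.toggle()
                }
            }
            .padding()
        }
    }
}
```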
For adults, a dimmed red screen could signal the need to reduce language, increase physical distance unless invited, pause demands, and use a predictable response (e.g., “I see it. I’m here. We’re safe. I’ll wait.”), with no questioning until regulation returns.
For AAC users, the muted red screen would remove the pressure to perform, the need to justify, adult interrogation, and the demand to “use your words.”
Sometimes, the safest communication is silence, and AAC should support that too. What do you think?
Bilingual and Monolingual Side-by-Side Vocabulary Access
December 5, 2025.
For bilingual communicators (especially emergent ones), the ability to open the same app twice (e.g., in Split View on iOS) would be transformative BECAUSE it would:
- reflect how bilingual communication actually works in the real world
- reduce the cognitive burden of switching languages mid-thought
- preserve visual context and motor planning in both languages
- support natural code-switching without delay or loss of intent
- increase the speed and fluency of communication
- support emotional expression in the language that feels safest
- reinforce vocabulary connections across languages
- affirm the student’s linguistic and cultural identity rather than forcing a single-language mode.
For the rest of us, BECAUSE it would:
- allow real-time bilingual modeling without interrupting the communicative moment
- make vocabulary comparison and verification across languages immediate
- improve collaboration with multilingual families during sessions
- support dual-language instruction and literacy activities
- simplify training and consistency for support staff.
The parallel view would also be helpful to monolingual AAC users BECAUSE it would:
- allow simultaneous access to core and fringe vocabulary without drilling down through folders
- reduce navigation demands and motor fatigue
- preserve visual context during longer messages
- support planning and revision by keeping earlier words visible while constructing new ones
- allow comparison between symbol-based and text-based representation.
Does this seem doable? I understand why a split view built inside the app itself would be too complex.
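For what it’s worth, on iPadOS this looks less like a rebuild and more like an opt-in. A minimal sketch, assuming a SwiftUI-based AAC app: declaring multi-scene support (the Info.plist key UIApplicationSupportsMultipleScenes under UIApplicationSceneManifest) lets the system open two windows of the same app side by side, and per-window storage lets each one hold its own language. All type names below are invented for illustration.

```swift
import SwiftUI

// Sketch only. With UIApplicationSupportsMultipleScenes = YES in the
// Info.plist, iPadOS can open two windows of this app in Split View.
@main
struct AACBoardApp: App {   // hypothetical app name
    var body: some Scene {
        WindowGroup {
            VocabularyBoardView()
        }
    }
}

// Illustrative placeholder, not a real AAC component.
struct VocabularyBoardView: View {
    // @SceneStorage is per-window, so the left window can stay in English
    // while the right window stays in Spanish.
    @SceneStorage("boardLanguage") private var language = "en"

    var body: some View {
        VStack {
            Picker("Language", selection: $language) {
                Text("English").tag("en")
                Text("Español").tag("es")
            }
            .pickerStyle(.segmented)
            // The symbol grid for the selected language would render here.
            Text("Board language: \(language)")
        }
        .padding()
    }
}
```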
Gaps in AT/AAC Assessments
December 1, 2025.
Legacy AT frameworks were built on a linear logic: assess the student’s abilities → identify the barrier → choose a tool → train → document → revisit.
But modern AT is embedded, adaptive, cross-platform, and multimodal. It’s no longer just about what the tool does, but how it connects, transfers, scales, and coexists within mainstream digital environments and ecosystems.
So, when we look at Grammarly, Alexa, Predictable AAC, and similar AT (as opposed to a slant board, pencil grip, or keyguard), the question is: are these still discrete tools, or are they now elements of a dynamic ecosystem (one that can support learning, accelerate access, scaffold development, and sometimes unintentionally replace the learner’s developing competence and voice)?
If modern AT must be evaluated like a relationship, then AT assessments themselves need to become layered.
A symbol dictionary embedded into an AAC system
November 20, 2025.
A symbol dictionary embedded into an AAC system would be very helpful, not just to teach vocabulary, but to help users understand the logic of the system itself.
Right now, AAC symbols rely on assumed shared meaning. But meaning is learned, shaped, and often personal. A built-in dictionary would help transform symbols from static visuals into linguistic concepts that grow with the user.
Doesn’t this feel doable with the AAC systems we already have? These systems already store metadata for programming, so the first step may not be inventing something new, just exposing what already exists in a user-friendly way. That visible foundation alone would be a major shift. And then, there is AI.
A dictionary could include a word’s definition, part of speech, example sentence, and translation. It could also display symbol variants across systems (for example, showing that a given SymbolStix icon corresponds to a particular Minspeak symbol).
The feature could be turned on/off in Settings and activated through a gesture (circular motion, long-press, double-tap, swipe, or another accessible method).
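As a thought experiment, a surfaced dictionary entry might look something like this. The field names are mine; the point is only that AAC systems already store most of these pieces internally, and this would expose them.

```swift
import Foundation

// Hypothetical shape for a surfaced symbol-dictionary entry.
struct SymbolEntry: Codable {
    let word: String                    // "water"
    let definition: String              // learner-friendly definition
    let partOfSpeech: String            // "noun"
    let exampleSentence: String         // "I want water, please."
    let translations: [String: String]  // e.g., ["es": "agua"]
    // Cross-system variants: which SymbolStix icon and which Minspeak
    // sequence represent the same concept. IDs here are invented.
    let variants: [String: String]
}
```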
To my students, AAC is native-through-use. The symbol dictionary would help them understand, explore, and grow language in a form that feels native to them.
Time Capsule or Private Folders in AAC
November 19, 2025.
Time Capsule or Private Folders in AAC would be helpful for communication sovereignty, not just communication efficiency.
The unspoken assumption behind every AAC design is: if it’s on the device, it’s meant to be shared. That becomes: your language exists for others.
But AAC should also affirm that you exist for yourself (I’ve seen that longing in the eyes of my students).
Users need an AAC feature that lets them mark a folder as Private/Protected/Personal. Not every thought is meant for public space. A folder could unlock with a PIN/gesture/eye-dwell pattern/a symbol sequence. It could be labeled Mine/Later/Quiet/Not Now, or nothing at all.
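Structurally, this is a small ask. Here is a sketch of what a lockable folder model might hold, with every name invented for illustration:

```swift
import Foundation

// Hypothetical data model for a private AAC folder.
enum UnlockMethod: Codable {
    case pin(hash: String)            // store a hash, never the raw PIN
    case gesture(patternID: String)
    case eyeDwellPattern(id: String)
    case symbolSequence([String])
}

struct AACFolder: Codable {
    var label: String?       // "Mine", "Later", "Quiet", "Not Now", or nothing at all
    var isPrivate: Bool      // hidden from partner view and data logging
    var unlock: UnlockMethod?
    var pageIDs: [String]    // placeholder for the folder's contents
}
```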
AAC shouldn’t just support outward communication. It should also protect inner speech, the private space where identity forms before it is spoken aloud.
Privacy isn’t secrecy. Privacy is dignity.
AAC Expression Lab
November 15, 2025.
AAC Expression Lab! A simple toggle that activates Creative Mode on an AAC device.
A space to experiment with tone, prosody, emotion filters, and style presets (e.g., whisper, giggle, dramatic voice, warm storytelling, even angry slam poetry).
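iOS already exposes the prosody knobs this would need. A minimal sketch, assuming AVSpeechSynthesizer as the voice engine; the preset names and parameter values below are my own guesses, not settings from any product:

```swift
import AVFoundation

// Sketch: mapping Creative Mode presets onto AVSpeechUtterance prosody.
enum ExpressionPreset {
    case whisper, giggle, dramatic, warmStorytelling

    func apply(to utterance: AVSpeechUtterance) {
        switch self {
        case .whisper:
            utterance.volume = 0.3
            utterance.rate = AVSpeechUtteranceDefaultSpeechRate * 0.8
        case .giggle:
            utterance.pitchMultiplier = 1.4
            utterance.rate = AVSpeechUtteranceDefaultSpeechRate * 1.1
        case .dramatic:
            utterance.pitchMultiplier = 0.8
            utterance.preUtteranceDelay = 0.5   // a beat of silence before speaking
        case .warmStorytelling:
            utterance.pitchMultiplier = 1.05
            utterance.rate = AVSpeechUtteranceDefaultSpeechRate * 0.9
        }
    }
}

// Usage: speak a line in the storytelling preset.
let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "Once upon a time…")
ExpressionPreset.warmStorytelling.apply(to: utterance)
synthesizer.speak(utterance)
```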
Creative Mode shifts communication from “I need” to “I am,” because some of my students are so unapologetically creative 😊.
The mode could include optional messages like “Please wait while I create” to validate the creative process, and clearly labeled downloadable output (“Created by XX using AAC”) to protect authorship and honor voice.
In Creative Mode, symbols could be viewed as metaphors or building blocks for poetry and storytelling. Graphic-based AAC systems are already semiotic (think Minspeak!); expanding them into creative expression could deepen symbolic literacy, identity, and belonging.