Rethinking AI in Assistive Technology: When Better Output Isn’t Better Support

How efficiency metrics reshape agency and meaning

When AI improves output, support can quietly shift away from human agency. This framework examines how efficiency metrics reshape decision-making in assistive technology and why better-looking work does not always mean better support.

In assistive technology, better output often passes as better support. Faster responses, cleaner work, increased production, and higher independence scores register as progress because they align with how teams measure success. Efficiency metrics shape what teams notice, document, and reward.

This measurement logic did not begin with AI. Assistive technology has encountered this pattern across communication, academic, and access technologies. AAC (augmentative and alternative communication) offers one of the clearest illustrations. Teams often interpret faster text generation, more fluent utterances, cleaner grammar, longer responses, and higher independence scores as progress, even when over-prompting and core-favored systems create ongoing tension between production and agency.

AI intensifies this logic. Teams now evaluate students through AI-shaped performance, asking whether output increases, task time decreases, or accuracy improves. These metrics describe machine performance well. When teams treat those properties as evidence of human growth, AI increases the risk that the human recedes from view, especially when teams fail to capture communicative intent, meaning ownership, cognitive engagement, authorship, or self-determination.

Efficiency gains don’t arrive neutral. Every AI tool redistributes effort, authorship, decision-making, and responsibility. Who makes the choices now? Which decisions did the tool absorb? What work did the learner stop doing, and does removing that work support learning or replace it? Current AT evaluation still relies heavily on frequency, accuracy, independence, and time-to-completion. AI highlights the limitations of these measures.

To answer these questions well, teams need a clearer way to think about where technology acts and where humans remain responsible.

AT decision-making improves when teams name planes of agency explicitly. Some tools operate on a mechanical plane, where automation often helps; word prediction that speeds text entry is one example. Other tools operate on a cognitive plane, where scaffolding requires visibility and adjustability, as with supports for planning and organizing ideas. Tools that touch meaning, such as drafting a learner’s message for them, call for a different approach altogether.

Educators face pressure to demonstrate progress, collect data, and produce visible success. AI-generated work meets these demands easily. AT professionals need to step into a different role here and help teams determine whether better-looking work reflects deeper learning. This work relies less on technical fixes and more on shared language, coaching, and permission to value partial and uneven communication.

These conditions make a shift in AT practice necessary. The role must move beyond simple tool matching and toward safeguarding learner agency. The work requires choosing when not to automate, designing friction intentionally, keeping authorship visible, and protecting the learner’s right to effort, ambiguity, and development.

AI brings a long-standing issue in assistive technology into sharp focus. Across tools, metrics, and decisions, teams collapse human and machine performance into a single measure and mistake efficiency for empowerment and output for agency.

December 31, 2025.