Foundational Position:
Why an Object-Based Framing of AT Is No Longer Operative
December 20, 2025.
Historically, assistive technology has been defined around tools: access technologies that can be purchased, issued, inventoried, and replaced. This framing made sense when barriers were primarily mechanical, solutions were primarily mechanical, and access problems could often be resolved by providing the right tool.
The landscape we are already working in looks different. Communication is increasingly AI-mediated. Interfaces adapt in real time. Access is multimodal, distributed across movement, expression, perception, timing, and context. Participation no longer flows through a single channel or device, and access is rarely delivered through one tool alone. It emerges across interactions between people, technologies, environments, and systems. In this context, access can change moment to moment, and failure is more often the result of misaligned systems, environments, or assumptions than of individual limitation.
This shift calls for a different understanding of assistive technology. Instead of locating success or failure within the individual, responsibility shifts to the design of conditions: the structure of environments, the framing of interactions, and the responsiveness of systems when standard pathways fail. Under this view, assistive technology enables agency rather than merely supplying tools.
A Contemporary Definition of Assistive Technology
The updated understanding of assistive technology recognizes it as:
Any intervention, including devices and associated services, that alters conditions of access (physical, symbolic, social, or temporal) so that agency becomes possible when conventional channels fail.
This definition intentionally encompasses both devices and services while extending beyond them. It reflects how access actually functions in contemporary systems rather than how it has historically been categorized.
Why Shifting Responsibility from People to Systems Matters
When responsibility is placed on people, access failure is explained through familiar narratives. I hear: “The student is not ready.” “They cannot access it.” “They don’t generalize.” “They lack motivation.” These explanations often appear neutral, but they quietly end the design conversation. Once the problem is located inside the person, there is nothing left to redesign.
When responsibility is placed on systems, failure is interpreted differently. The focus shifts away from the individual and toward questions that keep the work alive. We should ask: Which condition is blocking access? What assumptions did the design make about bodies, timing, cognition, or language? Where is the mismatch between the environment and the person’s way of acting?
This shift keeps systems accountable. It is not philosophical. It directly affects outcomes.
Why This Matters More as Systems Become More Complex
As assistive technologies incorporate AI and adaptive features, systems increasingly make invisible decisions. Interfaces adjust in ways users did not request. Default settings encode assumptions about speed, literacy, language, motor control, and attention.
If responsibility remains on individuals, failures in AI-mediated systems will be framed as user error. The person did not use it correctly. They did not adapt. They could not keep up.
If responsibility shifts to systems, a different conclusion becomes possible: the system failed to surface agency under real conditions. That distinction determines whether bias is exposed or concealed, whether access improves or calcifies, and whether people are supported or silently excluded.
How This Framing Protects Practice
A system-centered understanding of assistive technology protects professional practice in concrete ways. It validates low-tech and no-tech interventions. It legitimizes observation, modeling, waiting, and environmental redesign as core AT work rather than secondary supports. It gives practitioners language to explain why a device that technically works may still fail to produce access.
It also provides administrators with something they urgently need: a framework for responsible failure and continuous improvement in complex systems.
This framing is ethically necessary because it makes withholding access visible as a systems decision rather than a personal shortcoming.
Assistive Technology Aligned with Contemporary Systems
This shift accomplishes three critical outcomes:
First, it protects agency. Individuals are no longer required to demonstrate competence before receiving access. Access becomes a condition for agency, not a reward for performance.
Second, it creates accountability. Designers, institutions, and professionals must explain why access is blocked and how they intend to remove that block.
Third, it scales ethically. As technologies grow more complex, especially with AI and adaptive systems, the moral burden remains with those who design, deploy, and maintain systems rather than with users navigating them.
Without this shift, advanced technologies will simply reproduce existing inequities at a greater speed.
Under these evolving conditions, the role of the assistive technology professional is to ensure that systems adapt to people rather than requiring people to adapt to systems. And the role of the designer is to build technologies that assume variability as a baseline, not conformity as a prerequisite.
This is not an expansion of assistive technology beyond its purpose, but its alignment with the complexity of the systems in which it operates.