Pandu Supriyono

Why we don't collect assistive technology data for analytics

30 April 2025

In web development—and digital media more broadly—data-driven decision-making plays a major role. A clear example of this is web analytics. We often identify problems by noticing patterns like low conversion rates in user flows. From there, we define specific criteria to measure whether our solutions have effectively addressed those issues.

It makes sense that managers and others working to improve web accessibility are interested in analytics about things like screen reader usage. Given how much we value data in decision-making, I understand this interest and believe it can be rooted in a desire to do better. However, the web platform is deliberately designed so that assistive technology use cannot be tracked: collecting that kind of data can raise serious ethical concerns, and in some cases it is against the law.

Ethical Web Principles

The Ethical Web Principles is a document that outlines values intended to guide the evolution of the web platform so that it develops responsibly. Building on this is the Web Platform Design Principles document, which is intended to guide browser vendors, specification authors, standards groups (like W3C working groups), and implementers. One of these principles states that a web technology should not reveal that an assistive technology is being used. This principle stems from the belief that the web must be inclusive and that web technologies must not create new ways to expose sensitive information about disabled people.

The implication here is that revealing a user's use of assistive technology may allow actors to harm them, especially if the user is a vulnerable person.

Surveillance

In some places it is not uncommon for insurance companies to hire a private investigator to collect evidence to dispute a short- or long-term disability (STD or LTD) claim. This is a dehumanizing practice aimed at limiting payouts rather than identifying fraud. A private investigator can offer only an incomplete and inaccurate portrayal of a claimant's life situation, with real consequences for both the claim and the mental health of the claimant and the people around them.

In the Netherlands, the authority that handles unemployment benefits, sickness benefits, and disability benefits has been under scrutiny for unlawfully collecting IP-based geolocation data. This data was used as part of a decision model to determine whether or not someone was committing social security fraud.

It's not far-fetched to imagine that authorities and insurers may use the detection of assistive technology to delegitimise someone's claim, which is an unreliable, if not harmful, way to determine someone's right to it. That is, it may not accurately paint a picture of someone's life at the time they used a website. Someone may receive help from another person when using the insurer's website, or may only sometimes use an assistive technology, depending on the situation.

Special category of personal data

In a similar vein, the Norwegian Data Protection Authority issued a 6.5 million euro fine to the dating app Grindr for unlawfully collecting personal data, including data on users' sexual orientation, and sharing this data with third parties for marketing purposes. Data about sexual orientation is classified as a "special category" of personal data under the European General Data Protection Regulation (GDPR). Data about someone's health also falls under this special category, for which processing is allowed only in very limited circumstances. Exposing such data can create large risks to someone's fundamental rights, such as the right to bodily autonomy and integrity. This means that an actor with access to this information can:

  • Discriminate against the individual in areas like employment, housing, or insurance
  • Exploit or manipulate the individual by targeting them based on their condition or vulnerability
  • Stigmatise or shame the person, leading to social exclusion or mental distress
  • Deny access to services, benefits, or opportunities under false pretenses
  • Make assumptions about the individual’s capabilities or needs without their consent

The accessibility API

For these reasons, the web platform is designed in such a way that we cannot detect whether or not someone is using an assistive technology. There is, for example, no interface for 'onEyeTrackerClick' or 'onScreenReaderFocus'. Instead, assistive technologies (which aren't user agents) interact through the browser, which fires ordinary DOM events. A focus move in a screen reader, for example, triggers the same focus event in the browser as a call to element.focus().

This is done through the accessibility API, which the browser exposes based on an accessibility tree, which in turn is based on the DOM. Assistive technologies can then interface with this accessibility API to access the content and structure of the page, without revealing their presence to the website itself.
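To make the idea of a DOM-derived accessibility tree concrete, here is a deliberately toy sketch. Real browsers build this tree natively and expose it through platform accessibility APIs; the node shape, the role table, and the naming logic below are all simplified inventions for illustration only.

```javascript
// Toy model: derive a simplified accessibility node from a DOM-like node.
// Real browsers do this natively via platform accessibility APIs; this
// sketch only illustrates the DOM -> accessibility tree relationship.
function toAccessibilityNode(domNode) {
  // Content hidden from assistive technology is pruned from the tree.
  if (domNode.attrs && domNode.attrs["aria-hidden"] === "true") {
    return null;
  }
  // A hypothetical, much-reduced mapping from tag name to implicit role.
  const implicitRoles = { button: "button", a: "link", h1: "heading", nav: "navigation" };
  const role = (domNode.attrs && domNode.attrs.role) || implicitRoles[domNode.tag] || "generic";
  // The accessible name is naively taken from aria-label or text content.
  const name = (domNode.attrs && domNode.attrs["aria-label"]) || domNode.text || "";
  const children = (domNode.children || [])
    .map(toAccessibilityNode)
    .filter((child) => child !== null);
  return { role, name, children };
}

toAccessibilityNode({ tag: "button", text: "Save" });
// → { role: "button", name: "Save", children: [] }
```

Note that nothing in this derivation records who consumes the tree: the same structure is produced whether or not an assistive technology ever reads it.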

Assistive technologies interface with the browser’s accessibility layer, but do not act as user agents themselves. Therefore, they do not send user agent strings or identify themselves in network requests. A screen reader, for instance, does not load a webpage on its own. It reads what the browser renders, and relies on how well that browser exposes content through the accessibility API.

Detection by heuristics

Clever developers have come up with ways to infer the use of assistive technology: detecting whether the Tab key is used to navigate the website, for example, or noticing that someone never moves their mouse. Some developers assume that users who don't move the mouse must be using a screen reader, but any number of reasons could explain a lack of mouse usage, including personal preference or a broken mouse or trackpad.
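The mouse-movement heuristic amounts to something like the following function. The names and threshold are invented for illustration; the point is how ambiguous the signal is.

```javascript
// A naive (and unreliable) heuristic of the kind described above.
// All names and thresholds are invented for illustration.
function guessesScreenReader(session) {
  // "No mouse movement plus Tab navigation" is treated as evidence of a
  // screen reader -- but the same pattern fits a keyboard-only user, a
  // switch-device user, a broken trackpad, or simple personal preference.
  return session.mouseMoves === 0 && session.tabPresses > 5;
}

// A sighted keyboard-only user produces a false positive:
guessesScreenReader({ mouseMoves: 0, tabPresses: 12 }); // true, but wrong
// A screen reader user who sometimes uses a trackpad is missed entirely:
guessesScreenReader({ mouseMoves: 40, tabPresses: 12 }); // false
```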

While some of these solutions are innovative and creative, they remain unreliable. Not everyone who uses a screen reader is blind. There is evidence that neurodiverse and chronically ill people, among others, may rely on screen readers.

These solutions become increasingly troublesome if the developer chooses to change the user's experience based on this heuristic.

If a developer assumes that someone who does not move their mouse depends on a screen reader because they are blind, and adjusts the application's user experience based on that assumption (say, by designing a "screen reader mode"), they may ruin the experience for someone who uses a screen reader for another reason. People who rely on assistive technology depend on web code that properly interfaces with their personal settings.

In the example above, this kind of assumption may also exclude users who rely solely on a keyboard or a switch device due to limited motor ability. Additionally, segregating disabled users into a separate or alternative user experience reinforces exclusion, rather than building an inclusive, unified design.

How do we improve accessibility without the data?

If you could rely on just one thing when evaluating your application's accessibility, it would have to be the Web Content Accessibility Guidelines (WCAG). The WCAG is user-agnostic. It doesn't require you to know who your users are, only that you've built a site that works for anyone, regardless of their abilities or the assistive technologies they might be using. For example, requiring that all interactive elements are reachable with the keyboard ensures that users who rely on assistive technologies, or who have limited motor control, can use your site seamlessly.
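The keyboard-reachability criterion can be checked without knowing anything about the user. As a rough sketch, here is a simplified focusability test over plain-object element descriptors; it covers only a small subset of the real browser rules (it ignores, for instance, that a link needs an href to be focusable), and the descriptor shape is invented for illustration.

```javascript
// Simplified check in the spirit of WCAG success criterion 2.1.1 (Keyboard):
// is this element reachable by keyboard? Covers only a small subset of the
// real browser focusability rules, using a plain-object element descriptor.
const nativelyFocusable = new Set(["a", "button", "input", "select", "textarea"]);

function isKeyboardFocusable(el) {
  if (el.disabled) return false;
  // An explicit tabindex wins: >= 0 is focusable; a negative value
  // removes the element from the tab order.
  if (typeof el.tabindex === "number") return el.tabindex >= 0;
  // Otherwise, only natively interactive elements are in the tab order.
  return nativelyFocusable.has(el.tag);
}

// A click handler on a plain <div> is invisible to keyboard users:
isKeyboardFocusable({ tag: "div" }); // false
// The same control marked up as a <button> is reachable by default:
isKeyboardFocusable({ tag: "button" }); // true
```

Notice that the check inspects the markup, not the user: it makes no difference whether the keyboard events come from fingers, a switch device, or a screen reader.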

However, the WCAG is also not foolproof. Automated tools can test for some aspects of accessibility, but manual testing is crucial to uncover issues that guidelines alone can’t address. While data can help inform accessibility decisions, it is the hands-on testing that ultimately reveals the real-world user experience for people relying on assistive technologies. This kind of testing is what helps us create digital environments that truly serve everyone.