Saturday, March 2, 2019

Matchmaker, matchmaker: Getting a property from a matching path

As the use of JSON and JavaScript objects has grown, UI engineers are regularly tasked with locating a particular data point in an object. The typical response to this situation is to find a module that has the appropriate utility - usually lodash - and load that dependency.

As anyone who knows me can tell you, I dislike dependencies - a lot. Over my years of experience, dependencies have, on occasion, caused more problems than they've solved, and they have certainly caused more than a few sleepless nights.

Finding and retrieving a property, however, is a rather simple task as long as the path to the property can be specified in dot notation. Even better, this can easily be added to the JavaScript Object prototype and, because it relies only on ES5 features, is supported by nearly all browsers...except Internet Explorer 8.

Here's the code you'll need - it's shown here as a method on a specific object, but I'll leave it up to you exactly how to implement it.

First, we'll create an object.

JavaScript
var myObject = {
  situation: {
    normal: {
      afu: true
    }
  },
  statuses: [
    { foobar: false, time: '1999-12-31T23:59:59' },
    { foobar: true, time: '2000-01-01T03:00:00' },
    { foobar: true, time: '2000-01-01T07:00:00' },
    { foobar: true, time: '2000-01-01T11:00:00' },
    { foobar: true, time: '2000-01-01T15:00:00' },
    { foobar: true, time: '2000-01-01T19:00:00' },
    { foobar: false, time: '2000-01-01T23:00:00' }
  ]
};


Next we add our find method to it.

JavaScript find method
myObject.find = function (path, value) {
  var result = path.split('.').reduce(function (ref, segment) {
    var key = /(\w+)\[?(\d+)?\]?/.exec(segment);
    ref = ref != null && key[1] ? ref[key[1]] : undefined;
    ref = ref != null && key[2] ? ref[Number(key[2])] : ref;
    return ref;
  }, this);
  // Compare against undefined rather than using ||, so legitimate
  // falsy values (false, 0, '') are not replaced by the default.
  return result === undefined ? value : result;
};


What this function does is relatively straightforward. It splits the specified path apart at the dots in the dot notation, then loops through the array of path segments, shifting the reference point at each step as long as the reference point actually holds a value. By passing the this keyword as the initial value to reduce, we eliminate the need for any external references. If the path is not found and a default value is provided in the value parameter, the default value is returned; otherwise, undefined is returned.

With this find function, you can get a property value by specifying the path like so... myObject.find('situation.normal.afu'); // returns true. Any properties requested that are undefined, even if they are nested inside undefined properties, are returned as undefined. For example, with our object, myObject.find('situation.normal.nafu.ok') will return undefined without throwing a TypeError as would typically happen when 'ok' could not be read from the undefined 'nafu' property.
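If you'd like to experiment with the pattern outside of an object method, here's a minimal standalone sketch of the same reduce-based lookup. The function name getPath and the sample object are mine, not part of the post's API:

```javascript
// Walk a dot-notation path (with optional [n] array indices) and
// return a default value when the path cannot be resolved.
function getPath(obj, path, defaultValue) {
  var result = path.split('.').reduce(function (ref, segment) {
    var key = /(\w+)\[?(\d+)?\]?/.exec(segment);
    if (ref == null) { return undefined; }  // dead end: stay safely undefined
    ref = ref[key[1]];                      // property access
    if (ref != null && key[2] !== undefined) {
      ref = ref[Number(key[2])];            // optional array index
    }
    return ref;
  }, obj);
  // Compare to undefined rather than using ||, so falsy values survive.
  return result === undefined ? defaultValue : result;
}

var sample = {
  situation: { normal: { afu: true } },
  statuses: [{ foobar: false }, { foobar: true }]
};

getPath(sample, 'situation.normal.afu');      // true
getPath(sample, 'statuses[0].foobar');        // false (falsy, but found)
getPath(sample, 'situation.normal.nafu.ok');  // undefined - no TypeError
getPath(sample, 'situation.abnormal', 'n/a'); // 'n/a' (the default)
```

The standalone form avoids attaching an enumerable method to your data object, which matters if the object is later serialized or iterated.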

Hopefully this will simplify your code and reduce those dependencies.

Happy coding.

Saturday, February 16, 2019

About Code: On documentation

In my decades writing software in the public and private sector, one of the things that comes up repeatedly is documentation, and the one thing I've learned about it is that it's a big deal. Nearly every engineer has an opinion about it - everywhere on the continuum. Over the years, I've heard arguments ranging from strongly worded missives that documentation is counterproductive and should never be used, to claims that it's so helpful engineers shouldn't consider their work complete without it. So, with all that in mind, I wanted to put down my thoughts about documentation - not only why it's important and what its purpose is, but how to write good documentation.


There's a maxim in software engineering - there will always be at least two engineers on a project: you and the engineer you were six months ago. In fast-paced environments - those where features are churned out in one-week or two-week sprints - engineers may be solving multiple problems in a short period of time. In the past, we used project notebooks to keep notes about tests we ran or solutions we tried so those actions were not repeated. But, those days are gone. Today we require a different kind of collaboration - one in which all project notes are shared by everyone on the team. In environments where the problems are increasingly complex - calculating fraud and risk scores for example - it's even more important to track not only where the project is, but where it has been. Documentation should be, in part, a living history of the code.

On a side note here, I've heard the argument that we should use source control (e.g., git) for this purpose. I would urge you, in the strongest language I can use, not to do this. The tools built into source control for tracking history are purposefully simple, often telling you only who made a change and when that change was made. If the tool is robust enough, you may be able to trace a change back to its original commit; assuming that commit hasn't been amended or overwritten, you may be able to find out what the change was; and assuming the commit message is detailed enough, you may be able to find out the why and not just the what...but there are a lot of assumptions in that process (and I have a rule about assumptions).

Documentation is not only important as history, but also for a number of other reasons.

Current practice favors many small, concise functions over monolithic systems. In this pattern, we've gotten away from one of the original purposes of documentation - that it takes less time to read the documentation than to read the code. The other piece of that justification was that anyone should be able to read the documentation (which has a lower cognitive load than the code) and understand the solution to a particular problem. One might argue that neither of these arguments applies in the face of the advances the software engineering community has seen in the past two decades...and I might be inclined to agree, until I encounter engineers who don't understand the difference between i++, ++i, and i = i + 1. The choices we make when crafting a solution to a particular problem are important, and the next engineer to come along is not likely to frame the same problem in the same way; but if we explain our choices, that engineer can make informed decisions about where to make modifications - like whether it's worth the two extra bytes in i = i + 1 to avoid the evaluation bugs that can pop up when using an increment operator.
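For anyone who hasn't been bitten by it, the distinction between those three forms is real (values shown in the comments):

```javascript
var i = 0;
var post = i++;  // post-increment evaluates to the old value: post is 0, i is now 1
var pre = ++i;   // pre-increment evaluates to the new value: pre is 2, i is now 2
i = i + 1;       // plain assignment: i is now 3, with nothing surprising about it
```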

It's important for the documentation to be in the code, not somewhere else. Why? It's an accessibility issue. Too often engineers not only build interfaces that are not accessible, they build them using practices that make the code itself inaccessible. Placing the documentation outside of the code introduces a level of complexity that can be a significant problem. As a helpful note, two simple things you can do to significantly improve the accessibility of your code are (1) use tabs instead of spaces and (2) include comments directly in the code in a common tag format, such as jsdoc.

Additionally, while I won't repeat the argument here, documentation - specifically documentation about authors, creators, and innovators - is important because it can help reduce the rampant bias that is ravaging our industry. You can read more about this particular topic in my blog post Creation, Attribution, and Misogyny.

Good documentation does not repeat the code, but explains it in plain language. If the problem is particularly complex or prone to misunderstanding, it should describe the problem as well as the solution. For example, if our problem is credit card validation, what does that mean? Are we validating the issuer number, the card number against the card type (verifying that if the user says it's a Visa™ card they provide a Visa™ card number), the number against transposition errors using the check digit, the expiry date, or the type against a list of accepted types? Any, or all, of these rules can be called 'validation', and even this plain-language list can be confusing, as rules such as check-digit algorithms can differ depending on the card type. That sort of information is not general knowledge, even among engineers who are familiar with the industry, and it would be very helpful in documentation.
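As a sketch of what that kind of in-code documentation might look like, here's a jsdoc-documented check-digit routine. The Luhn algorithm itself is real; the function, its tags, and the version number are illustrative:

```javascript
/**
 * @func luhn
 * @description Validates a card number against transposition errors
 * using the Luhn check-digit algorithm. Only the check digit is
 * validated here; issuer, type, and expiry rules live elsewhere.
 * @param {string} cardNumber - the card number; embedded spaces are ignored
 * @returns {boolean} true when the check digit is valid
 * @since 1.2.0 - accepts numbers with embedded spaces
 */
function luhn(cardNumber) {
  var digits = String(cardNumber).replace(/\s/g, '').split('').reverse();
  var sum = digits.reduce(function (total, digit, index) {
    var n = Number(digit);
    if (index % 2 === 1) {    // double every second digit from the right
      n *= 2;
      if (n > 9) { n -= 9; }  // equivalent to summing the two digits
    }
    return total + n;
  }, 0);
  return sum % 10 === 0;
}

luhn('4111 1111 1111 1111'); // true - a well-known test number
luhn('4111111111111112');    // false - transposed/incorrect check digit
```

Notice how the comment explains which of the possible "validation" rules this function covers - exactly the ambiguity described above.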

Good documentation also resolves odd, confusing practices engineers or organizations may have. For example, let's say you're creating a JavaScript library and you want to expose an event every time someone changes the value in an HTML input. There is a native HTML event API and there's an event that fires when a value has been changed - called change - but it only fires after the value has changed and the user has gone on to another input or task. In your library, though, you want this new event to fire when there has been any change to the value in the input, not just when all the changes to the input have been made. In this case, you would need to repurpose a different HTML event - keypress or input for example, but what would you name this new event? If you're like some, you would name this new event change because it fires when a value is changed. This would be a good point in the documentation to explain that this new event uses the same name as an existing event but that it's different.

There are a few good arguments to justify the inclusion of documentation, and no good arguments for its exclusion that I've seen work in the wild. In three decades of writing code alone and as part of teams, I have never seen self-documenting code or a sufficiently verbose source control, and I'm not anywhere near being alone in that experience.

If you are writing code, document it, and even more important, retain as much documentation as you can when modifying the code - perhaps by using the jsdoc tag @since to identify changes - your future self and your teammates will thank you for it.

Happy coding.

Tuesday, November 6, 2018

A Journey of One Thousand Miles: Different styles and uses of progress indicators

One feature common to nearly every process (and especially most processes in ecommerce) is something that indicates progress - a map, if you will, that shows your current location relative to the destination. Even our Instant Pot has a display that indicates when pressure builds and releases. But, just as every process is different, so too are the techniques used to indicate the process and its current state.

One of the most common uses of progress indicators is in multi-page forms - that's certainly where I've had the most exposure to them. In general, they're intended to reduce cognitive load in a process, and they can either succeed or fail, often with significant results.

In the Web Accessibility Initiative section for multi-page forms, several different methods for identifying progress are put forward. The methods used to indicate progress for multi-page forms can be briefly described as (a) landmark content, (b) a progressbar, and (c) a step-by-step indicator - and I'll take each of them in turn to discuss how they might be used and whether or not they can, or should, apply to other types of progress.

Landmark Content

The use of landmark content to identify progress is almost always a good method, especially the first approach identified by the WAI: updating the title element. As a general rule, assistive technology will announce changes to the page title. The other approach in the landmark content method, updating the main heading - i.e., the h1 element - is also good, as many users navigate using headings and the heading is, or at least should be, more visible than the page title. Rather than use one of these approaches, use both.
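A minimal sketch of using both approaches together (the helper names and the label format are mine, not prescribed by the WAI):

```javascript
// Build one label and apply it to both the page title and the main
// heading, so the update is available visually and to assistive
// technology that announces title changes.
function stepLabel(formName, step, total) {
  return formName + ': Step ' + step + ' of ' + total;
}

function announceStep(formName, step, total) {
  var label = stepLabel(formName, step, total);
  document.title = label;                            // announced by AT
  document.querySelector('h1').textContent = label;  // visible landmark
}
```

Keeping the label in one function guarantees the title and the heading never drift out of sync.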

Of course, there are disadvantages to using this method. Neither approach - updating the page title or updating the main heading - is likely to be sufficient if the user has scrolled the heading out of view. Updates are likely to be noticed only by users who have the main heading in their field of vision or are using assistive technology that announces changes. Also, both approaches in this method update content, so not all accessibility is improved - the user still must either read, or have assistive technology that will read, the updated content.

A Progressbar

Progressbar (from a progress element with a value of 1 and a max of 3, as rendered by Chrome)
HTML5 offers a progress element that takes a max attribute and a value attribute to draw a visual representation. The progress element takes content, which must be updated, as in <progress max="7" value="1">Step 1 of 7</progress>. However, on some platforms, a progress bar is animated in a way that would violate the Web Content Accessibility Guidelines Success Criterion 2.2.2, an A-level criterion.

Like several of the additions in HTML5, the progress element is not very accessible, so it's still recommended that a better widget - one that uses the progressbar role with aria-valuemin, aria-valuemax, and aria-valuenow - be used; be advised, though, that automatic updates to the value in a progressbar role are not well-defined.
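Such a widget might look something like the following sketch; only the progressbar role and the aria-value* attributes come from ARIA, while the class name and inline width are illustrative and would be kept in sync by script:

```html
<div role="progressbar"
     aria-valuemin="0"
     aria-valuemax="7"
     aria-valuenow="1"
     aria-valuetext="Step 1 of 7">
  <!-- visual fill; width updated alongside aria-valuenow -->
  <div class="fill" style="width: 14%"></div>
</div>
```

The aria-valuetext attribute lets assistive technology announce "Step 1 of 7" rather than a bare number.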

The progressbar, whether implemented using the HTML5 progress element or a widget that uses the progressbar role, may be sufficient if the progress reflected is proportional, such as a file transfer showing the number of bytes transferred. The interface is not well-suited to processes where one or more steps is larger, or takes more time, than others.

A Step-by-step Indicator

Step-by-step Indicator (an ordered list)
The third method - a step-by-step indicator - can help users orient themselves in multiple ways. First, the user should be able to clearly see how much progress they've made, whether the progress within each segment is proportional or not. Second, the content should reflect not only steps already completed but the current step, and not-started, or upcoming, steps. In this method, there are three common approaches we can use, and each will apply to different cases.

Fixed-Journey Indicator

The basic step-by-step indicator presented for multi-page forms by the WAI is an ordered list, with list items that have visually hidden content to indicate which item is completed or current. This is likely sufficient if the progress is a consecutive series of unidirectional steps under the control of the user - although the WAI example should be updated to reflect capabilities now available in the ARIA states and properties.
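One way to sketch such an indicator using those ARIA states and properties - aria-current="step" is the relevant addition, while the step names and class names here are illustrative:

```html
<ol class="progress">
  <li class="complete"><span class="visually-hidden">Completed: </span>Shipping</li>
  <li aria-current="step">Payment</li>
  <li>Review</li>
</ol>
```

The visually hidden text serves users of assistive technology, while the classes drive the visual states.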

Breadcrumbs

If progress is bidirectional, or the user may complete steps in a non-consecutive order, the interface should be a navigational one that allows the user to choose the step they'd like to complete. This step-by-step indicator becomes more complex, as it not only needs to include completed, current, and not-started states but also accessible navigation between the steps. This sort of navigational, step-by-step indicator is often called a breadcrumb because it's more than just a progress indicator.

This type of progress indicator should especially be used for multi-page forms if those forms cause legal commitments or financial transactions to occur or update user-provided data that has been stored to help meet WCAG Success Criterion 3.3.4 as it aids in the review, confirmation, and correction of information.

Status Indicator

The third type of step-by-step indicator is a status indicator. The status indicator is still, semantically, an ordered list, but a status may switch in a non-consecutive manner and the status is likely to be outside the control of the user. An example of this might be a payment transaction that moves from pending to authorizing to completed or a document retrieval that moves from requesting to receiving to received to loaded. In either of these cases, an intermediate step may be skipped, e.g., the payment may appear to move from pending to completed or the document may move from receiving to loaded.

This type of progress indicator is perhaps the most complex, because the user interface is not really a progress indicator but a status indicator (hence the name). The upcoming statuses need not be announced because their inclusion is likely to reduce cognitive accessibility - the user can do nothing to either prevent or encourage the move to that state. Further, announcing which of the items in the list is current lacks the context that the visual interface has, so identifying completed and current items is a little more complex.

It is also important to note that because this indicator is auto-updating and representing something outside the control of the user, it is critical that it only be used for those activities that are essential. If the interface is used for a part of an activity that is non-essential, the auto-update feature without the ability to pause, stop, or hide the update will violate WCAG Success Criterion 2.2.2, just as the HTML5 progress element would.

Conclusion

Any of the four different types of progress indicators - a progressbar, a fixed-journey indicator, breadcrumbs, or a status indicator - can be sufficient to describe a process. While they have areas of overlap, each has a specific interface that leads to its own design pattern and accessibility features.

In the very near future, I will be launching a gitbook called Think A11y that will be covering components like this, including HTML, CSS, and JavaScript, in an effort to put my accessibility resources in one location. This blog will still cover accessibility issues from time to time, but hopefully this new approach will make my electronic life a little easier to manage and share. In the meantime...

Happy coding.

Saturday, October 27, 2018

Clean Your Tools

Recently, I was reading an interview with Scott Kubie in which he was asked "is there a piece of professional or life advice you've gotten that has always stuck with you" and to which he responded "clean your tools". I was immediately reminded of all the times my father said something similar to me about the care and maintenance of tools.

Often, as a child, I wondered why we spent the time maintaining our tools when they were just going to get dirty or dulled the next time we used them. In the case of some tools, and how they were soiled, I could understand - steel and water result in rust that means a replacement will soon be needed - but unless it means they'll have to be replaced, why clean or sharpen them?

As I worked, I learned that it was difficult - and sometimes impossible - to hold onto a metal wrench covered in oil; I learned that it was difficult - and sometimes impossible - to cut with an unsharpened blade. Well maintained tools, on the other hand, did the job they were designed to do, often wonderfully well. As much as my child self may have hated to admit it, my dad was right. The simple truth is that clean tools work better.

What does this have to do with technology?

More and more, the tools we have created to write, test, and manage code are far from clean. They may have been updated frequently, with bells and whistles added. They've been converted from plain-old semantic HTML, cascading styles, and vanilla JavaScript to the most popular frameworks. But, like a blade that has been sharpened in the wrong direction, our tools have developed a false edge that is likely to break off, leaving them dull and useless.

It can't happen - it won't happen, you say? It already has.

If we look at the standard tools developers use - things like BitBucket and WordPress - we find that many of them have significant accessibility issues, often resulting in a failure to meet the WCAG at even A-level conformance, brought on by how they're built and maintained. The latest WordPress editor, called Gutenberg, has consistently had significant accessibility issues and has been called "a regression in terms of accessibility level", frustrating testers to such a degree that many refused to even look at it again.

Good engineers notice and point out the ways in which we've failed. For example, in a very public announcement, Rian Rietveld resigned as leader of the WordPress accessibility team, citing several issues that led to her departure - but our industry, as with many others, often tries to shout such engineers down or shut them out, whether they be leads on projects in major organizations, like Rietveld, or entrepreneurs writing about the "State of Accessibility in Dynamic Web Content".

Although it isn't lost on many familiar with accessibility and the state of our tools that most of the issues cited in the post "I have resigned as the WordPress accessibility team lead. Here is why." (by Rietveld) are associated with React, the poor accessibility in one of the web's most popular tools is an uncomfortable truth that few are willing to even acknowledge.

Although the problem is not isolated to React, the scarcity of React developers with accessibility experience and the difficulty of accessibility in React itself are a problem for more than just Gutenberg, and for more than just WordPress, which powers nearly one-third of the web - they're a problem for any organization that uses React, because they're a problem in the ecosystem.

We've been building with dirty tools, creating an ecosystem that has shifted away from POSH and CSS with unobtrusive JavaScript to one written in a JavaScript tool that has an architecture and uses design patterns designed without accessibility in mind. That practice - that ecosystem - has grown developers who need to be encouraged and inspired rather than simply educated, because they can't, or won't, see why the lack of accessibility is a problem. Some even respond to code that resolves accessibility issues created as a by-product of using this tool with something along the lines of "that's not the React way of writing code", which is only slightly better than the "blind people don't use the web" that I received (as a response to a question about accessibility) when interviewing a candidate for a "senior UI engineer" position.

Granted, this post has, perhaps somewhat unfairly, focused on React when the problem is greater than React. In truth, I could list any number of tools - frameworks, like React, - that are, or have become, a problem. React is a bigger target at the moment because it's so fashionable nearly every organization of any size uses it. Organizations have become convinced that engineers won't work without it - maybe they're right - and engineers insist on using it because without it they find it difficult to land a decent gig.

But...here's something we know - the simple truth is that clean tools work better. It will always be the case that plain-old semantic HTML and lightweight, cascading styles will out-perform and be more accessible than sites written fully in Angular or React. If you're interested in diversity, or if you believe, like Tim Berners-Lee, that "the power of the Web is in its universality" and that "access by everyone regardless of disability is an essential aspect", you should be pushing for clean tools that will get your product in front of the greatest diversity and the greatest number of users.

Happy coding (with clean tools).

Sunday, October 21, 2018

Open Sesame: A better password experience

Password inputs with a strength indicator, from weak to strong
Password inputs are ubiquitous, but they're often poorly designed. Not because they're not announced or lack security features to prevent someone looking over your shoulder and discerning your password. Instead, it's the user experience that's poorly designed.

In order to be as secure as possible, we're encouraged to have strong passwords, but each site's idea of what makes a password strong varies and often users are left to guess what the requirements are until after they've tried to enter a password and it has failed the hidden validation rules. In many cases the only clue users have is a strength indicator that slides from weak to strong.

To improve the user's experience, I'm going to suggest you add a list of the hidden validation rules, something like the following.



  • At least one lowercase letter is required
  • At least one uppercase letter is required
  • At least one number is required
  • At least one special character from the following list is required: !, @, #, $, %, ^, &, or *

Additionally, it is going to be tempting to present these hidden validation rules only during onboarding, when the user is creating an account. This practice will lead to unnecessary frustration when users try to log in but cannot remember their password. The list of rules gives additional clues that can trigger a remembered password, so add the validation rules list any time a user needs to input a password.

When using this approach, one might also consider hiding these rules behind an accordion or tooltip that is only displayed by some user action; however, I would recommend against that, as it reduces the very accessibility this practice improves.

Another word about accessibility here, and this is best explained through markup. As you will notice, the markup below, which creates the list, uses the aria-describedby attribute to tie the validation rules to the input.


HTML

<label for="password">Password</label>
<input aria-describedby="required-features" id="password" type="password" />
<ul id="required-features">
  <li class="missing">At least one lowercase letter is required</li>
  <li class="missing">At least one uppercase letter is required</li>
  <li class="missing">At least one number is required</li>
  <li class="missing">At least one special character from the following list is required: !, @, #, $, %, ^, &amp;, or *</li>
</ul>

At this point it may be tempting to include an icon - something like <li><i class="missing"></i>At least one lowercase letter is required</li> - however, you should not do this, because it causes a fragmentation issue: the accessible name of the item is generated from the first element rather than from all of the elements appended together. Instead, use the list as-is and generate any iconography with the CSS ::before pseudo-element; otherwise you're likely to hear "warning sign" repeated for each item in the list while the remainder of the text goes unannounced. There are ways around this, of course, but it's generally best to keep to the simplest method.
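To complete the pattern, the list needs to update as the user types. Here's a sketch of that wiring - the rule set, the failedRules helper, and the class toggling are my own; only the ids come from the markup above:

```javascript
// Each rule pairs a test with the text of its list item; items whose
// rule passes lose the "missing" class as the user types.
var rules = [
  { test: /[a-z]/, text: 'At least one lowercase letter is required' },
  { test: /[A-Z]/, text: 'At least one uppercase letter is required' },
  { test: /\d/, text: 'At least one number is required' },
  { test: /[!@#$%^&*]/, text: 'At least one special character from the following list is required: !, @, #, $, %, ^, &, or *' }
];

// Pure function: returns the text of every rule the password fails.
function failedRules(password) {
  return rules.filter(function (rule) {
    return !rule.test.test(password);
  }).map(function (rule) {
    return rule.text;
  });
}

// DOM wiring (call once the document is ready).
function wireUpPasswordRules() {
  document.getElementById('password').addEventListener('input', function (e) {
    var failing = failedRules(e.target.value);
    var items = document.querySelectorAll('#required-features li');
    Array.prototype.forEach.call(items, function (li) {
      li.classList.toggle('missing', failing.indexOf(li.textContent) !== -1);
    });
  });
}
```

Keeping failedRules pure makes the validation easy to unit-test apart from the DOM.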


In the near future, this pattern will be exposed in the roking react repo with automatic updates to the list as the validation tests pass. As with other components in the repo, accessibility issues will be resolved to provide the best possible user interface for everyone. Of course, you're encouraged to develop using this pattern in a way that meets your needs, even without using the roking react repo.

Happy coding.

Sunday, September 9, 2018

The Poor Babel Fish: The importance of clear communication and how to measure it

Meanwhile, the poor Babel fish, by effectively removing all barriers to communication between different races and cultures, has caused more and bloodier wars than anything else in the history of creation. ~ The Hitchhiker's Guide to the Galaxy
With all due respect to Mr. Adams, it wasn't the fault of the Babel fish, or the increased communication between different races and cultures that caused more and bloodier wars. In fact, we know that communicating clearly with each other is generally a good thing - as long as we have good intentions.

One of the few things those of us involved in building human/computer interfaces know for certain is that the cognitive load of the interface matters. The greater the cognitive load, the longer it takes someone to complete a process or read content on the web page. The longer it takes someone to complete a process, the less likely they are to complete the process. Larger e-commerce companies with large datasets, like PayPal and eBay, know how likely a person is to complete the checkout process given how long it takes them.

Cognitive load is also one of the most often overlooked elements of accessibility. We seldom spend time thinking about how difficult, or easy, it is to identify relevant items on a page and whether or not the content hierarchy is perceivable. We also seldom pay attention to how easy, or difficult, content is to read, which is yet another component of cognitive load.

Part of the avoidance of evaluating this last piece - how "readable" content is - is due to the fact that it's generally very difficult to assess "readability". In US English there are two formulas that can be used for evaluation: the Flesch Reading-Ease Score (FRES) and the Flesch-Kincaid Grade Level Formula. These two evaluative tools provide comparable scores for English language in other regions and have been pretty widely used; however, they do not apply to other languages.

The difficulty of assessing languages other than English presents unique challenges to assessing the accessibility of web pages with regard to internationalization. Because of my interest in both internationalization and accessibility, their convergence around another of my loves - language - makes this a very interesting topic for me...and, frankly, the main reason why I'm sharing this with you.

We tend to pay a lot of attention to the Flesch-Kincaid Grade Level Formula in the US, but that's not really a measure of accessibility, because it's not a measure of how easy or difficult content is to read, but a measure against a "typical" student's comprehension...and the whole idea of "typical" should be anathema within the accessibility community. The Flesch Reading-Ease Score, on the other hand, is not a comparison of the content to other "typical" content, but a measure of the content itself, analyzing the number of words and sentences and the number of syllables in the words used.

The number of syllables in a word, in languages other than English, is not necessarily a measure of complexity, however. As a hypothetical example, if we compare the English (three-syllable) phrase "take this drug" with the Spanish (nine-syllable) phrase "tomar este medicamento", we get a FRES of 119.19 for English (which maps to roughly third grade, or an 8-year-old) and a FRES of -50.01 for Spanish, well beyond the reading ease of the Harvard Law Review. Even if we used the English phrase "take this medication", the FRES is still 34.59, a whopping 80 points greater than the Spanish phrase...or roughly the difference between fourth-grade (US) and college-graduate reading levels.
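Those scores come straight from the Flesch Reading-Ease formula; here's a quick sketch with the word, sentence, and syllable counts supplied by hand (automatic syllable counting is itself language-specific, so I've left it out):

```javascript
// FRES = 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
function fres(words, sentences, syllables) {
  return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words);
}

fres(3, 1, 3); // "take this drug" -> 119.19
fres(3, 1, 9); // "tomar este medicamento" -> -50.01
fres(3, 1, 6); // "take this medication" -> 34.59
```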

Granted, this is an extremely small, hypothetical example; however, there is simply no way to justify the difference between those two phrases. Even though many standards in the US specify readability requirements using the FRES, there must be a better way to calculate the readability, especially across languages.

If we look outside the US, and about ten years before the Flesch-Kincaid Grade Level Formula was developed, we find the Läsbarhetsindex. This formula is considerably easier to use than either of the Flesch-Kincaid readability tests - it's the number of words divided by the number of sentences, plus the number of "long words" multiplied by one hundred and divided by the number of words. A "long word" in the Läsbarhetsindex is a word with more than six letters. The Läsbarhetsindex formula gives a score for "take this medication" of 36, and the score for "tomar este medicamento" is also 36.
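The formula is simple enough to sketch in a few lines. The tokenizing here is deliberately naive - it splits on whitespace and counts ASCII sentence enders - so treat it as an illustration rather than a production readability tool:

```javascript
// LIX = words / sentences + (longWords * 100) / words,
// where a long word has more than six letters.
function lix(text) {
  var words = text.split(/\s+/).filter(Boolean);
  // A phrase with no terminal punctuation counts as one sentence.
  var sentences = (text.match(/[.!?]+/g) || ['']).length;
  var longWords = words.filter(function (word) {
    return word.replace(/[^a-zA-ZåäöÅÄÖ]/g, '').length > 6;
  }).length;
  return words.length / sentences + (longWords * 100) / words.length;
}

Math.round(lix('tomar este medicamento')); // 36
Math.round(lix('take this medication'));   // 36
```

Note how the two phrases that the FRES scored 80 points apart land on exactly the same Läsbarhetsindex.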

While there is a general mapping of Läsbarhetsindex to educational levels in the author's native Sweden, such a mapping does not exist for other countries...which makes sense because every educational system is different. This, however, doesn't affect our use of this measure for accessibility, because as was mentioned before, for accessibility we need to evaluate the content, not necessarily compare the content to an educational level.

Although the definition of a "long word" may vary, depending on the language used, the remainder of the formula should work for measuring the readability of content for accessibility purposes.

To aid in your evaluation work, I've created a library that will soon be published via npm as roking-a11y. In the meantime, you can get the source in the roking-a11y repo on my GitHub profile.

I'm also looking into how else this can be used and extended. One of my plans is to perhaps modify the length of a "long word" based on a language code, and perhaps add guidance for specific score ranges. If you have ideas about how else this might be helpful, please leave a comment - I don't track issues or feature requests in GitHub. If you've done research in this area, I'd especially like to hear from you.

Happy coding.

Monday, September 3, 2018

A Country Without a Language: Constructing and using language tags

Tír gan teanga, tír gan anam (a country without a language is a country without a soul).
One of the tools everyone who develops web applications or writes web pages uses is Best Current Practice 47 (BCP 47) for what's commonly called a language tag (langtag) or locale code. The BCP 47 specification, defined by the Internet Engineering Task Force (IETF), sets out how to specify a language using internationally standard codes.

Since every engineer working with web technology should be using this specification, it would be helpful to have a solid understanding of this somewhat arcane and sometimes confusing specification. What follows is a simplified explanation that leads us to just that.

The Basics

The langtag is constructed by combining a language code from ISO 639-1:2002, Codes for the representation of names of languages -- Part 1: Alpha-2 code, called a "language subtag", with optional, more specific restrictions - also called subtags - for the script used, the region in which the localization is used, and the variant, each separated by a dash. Although there are additional subtags, these four are more than enough to denote a language. In fact, the common practice within BCP 47 is to compose a langtag that is only verbose enough to uniquely identify the language used; typically, langtags are composed of only the language and region subtags. Let's see how this works using a few examples.

A Basic Langtag

Although the most basic langtag is one that consists solely of the language code, e.g., "en" for English, the most common basic langtag pattern is a language subtag combined with a region subtag. Since George Bernard Shaw famously said that "England and America are two countries divided by a common language", we'll use this as an example to construct basic langtags showing that although the script used is the same, English written and spoken in the US is different from that used in England.

The "common language" is identified by its ISO language code, "en", and each of the regions, the US and England, is identified by its respective ISO country code, US and GB. Combining these subtags results in two langtags, "en-US" and "en-GB", for the US and England, respectively.

Of course, we could have added the script subtag to each of these langtags to identify the alphabet used, but since it's the same in GB English as it is in US English, it would not add meaning to the langtag, so it would not be included in this case. In fact, the script subtag is relatively rare; to see how it works, however, we'll look at a language that can use different scripts, or alphabet sets - a language such as Irish (or Gaeilge).

A Different Script

Advert in Gaeilge using the Latin Gaelic alphabet
There is a relatively famous advertisement for Guinness Stout that carries the slogan "Ní féidir an dubh cur ina bhán air", which is written in Irish, or Gaeilge. The Irish language is often written today using a Latin alphabet, much like English, with a liberal use of vowels that include accents, like those in the word "Ní"; however, prior to the middle of the 20th century, there was no "h" in the Irish alphabet, as lenition was indicated by a dot above a letter. This means the same phrase in the original Irish script, Latg, would be "Ní féidir an duḃ cur ina ḃán air", as shown in the advertisement pictured.

This difference gives us two langtags - "ga-Latn-IE" and "ga-Latg-IE" (by convention, script subtags are written in title case). Since the formal language specification for Gaeilge now uses the Latn script, and we're only as verbose as needed to identify the language, the "ga-Latn-IE" langtag would commonly be shortened to "ga-IE".

A Variant or Two

Now let's turn our attention to the variant subtag.

Variants are seldom used in common practice - there are only 100 registered with the IANA and there are typically few who regularly use any single variant. Variants are often used to denote archaic uses and intermingled languages like the mix of English and Spanish commonly called "Spanglish".

If we turn our attention back to the langtag for the US, we might also want to include a regional variant for the Northeast or Southern US, especially given the differences in second-person word choice (where the common choice for second-person pronouns is "you" for both singular and plural, in the southern vernacular the singular and plural are "y'all" and "all y'all"). Subregion variants such as this are quite common, even if they do not reach the status of a dialect. Although variants are common in speech, they are not often registered with the IANA, which is a requirement for a variant to be used as a subtag.

One exception to the pattern of unregistered variants is Boontling, a variant of English tied to Boonville, California. Since a variant subtag for Boontling - boont - is listed in the IANA language subtag registry as a variant of (US) English, its langtag would be "en-boont", or "en-US-boont", or "en-Latn-US-boont" if you wished to be more verbose, which we don't.

It's also possible for a langtag to have multiple variants. The only example I know of for this would be a variant of English spoken in Scotland (typically referred to as Braid or Ullans) that uses the variant subtag SCOTLAND and a variant of this variant that is spoken in Ulster, Northern Ireland, which would make the langtag "en-scotland-ulster" or "en-GB-scotland-ulster".

BCP 47 Implementation

The way in which the specification has been written and implemented can make langtags a little confusing. Sometimes variants are widely used enough that they become regional, and sometimes variants even become recognized as their own language. One instance of this is the two primary variants of Norwegian, Norwegian Bokmål and Norwegian Nynorsk. Although linguistically these are two variants or dialects, the ISO considers them languages in themselves, which means there are three valid language subtags that can be used to construct langtags for Norwegian in Norway: "no-NO", representing Norwegian in Norway; "nb-NO", representing Norwegian Bokmål in Norway; and "nn-NO", representing Norwegian Nynorsk in Norway.

Difficulties like this aside, however, one of the rules of accessibility (a11y), under the "understandable" principle, requires us to include a langtag for documents using the lang attribute. The inclusion of the language allows assistive technology, like screen readers, to announce words and phrases properly, and allows user agents to offer dynamic translation.

As anyone who has read authors that sprinkle phrases in multiple languages throughout their work knows, even though a root document has a language specified, there may be portions in other languages. Those portions also need to be spoken correctly and the user may benefit from dynamic translation of them as well. To help with this process, the folks writing the HTML spec made the lang attribute a global attribute, not just an attribute on the document, meaning it can be applied to any HTML element.

So, if you're concerned about the usability of your pages, include the langtag on the document (e.g., <html lang="en-US">) and anywhere else it's appropriate...and even if you're not concerned about general usability, adding the langtag to the document will help you meet the accessibility guidelines (WCAG 2.1, Guideline 3.1, Success Criteria 3.1.1 and 3.1.2) - and we all want that.

Happy coding.

Saturday, August 11, 2018

It's A Trap!

If you read my previous post, you know I'm a big fan of accessibility, so reading this post after that one may be a little confusing: although Success Criterion 2.1.2 is "No Keyboard Trap", I'm going to tell you how to build one.

The reason I'm going to tell you how to build a 'keyboard trap' is that there is an exception to 2.1.2 - when a modal window is open. These virtual windows have been called 'lightbox', 'overlay', or even 'dialog' - no matter what name we give them, they restrict control to their own little container...and the "keyboard trap" is only a keyboard trap because the input mechanism the user has chosen is a keyboard. What we're really going to build is a 'focus trap' - something that traps focus within a specific container.

I will preface what I am about to describe by clarifying that the small amount of code here will be vanilla JavaScript, but you can translate the generalized approach to Angular, ReactJs, or any framework you choose. The idea, however, is not to provide much code, but rather a framework of requirements that will allow you to create the focus trap, and to explain the becauses behind the many whys that come with how the focus trap is constructed.

First, let's talk about why a 'focus trap' rather than a 'keyboard trap'. If we were building a keyboard trap, we would bind a handler to the keydown event and check whether the key hit was a navigational key, e.g., a Tab or Arrow key. This method works - but only if the method of navigation is via a navigation key, and not through a navigation method that doesn't involve keys at all - like any one of several methods available within an assistive application. As a side note, listeners for high-frequency events - like keydown, keyup, or keypress - are comparatively expensive, so a focus or blur listener offers performance benefits as well. Additionally, using a focus or blur listener makes more sense conceptually, because what we're really trying to control is focus, not which keys are hit or which things are clicked.

Of course, the immediate problem is that unlike a keydown, keyup, or keypress event, a focus or blur event does not bubble. This lack of bubbling means the event handler that traps focus will have to be listening for every blur or focus event on every focusable element within the container. Here's another point where performance can be impacted - declaring the handler function inline in each listener attachment creates a new function per element, which will reduce performance. Instead, create one focus handler and then add it to each focusable element within the container. Note - a framework-specific code sample for adding this event listener to each focusable element is not provided, because each framework provides a different method for traversing a Node tree.
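In vanilla JavaScript, though, a minimal sketch of that wiring might look like the following (the selector list and the function names are my own, and the selector is illustrative rather than exhaustive):

```javascript
/* Attach one shared blur handler to every focusable element in a container.
   The selector below is illustrative - a production trap needs a more
   complete definition of "focusable". */
const FOCUSABLE = 'a[href], button, input, select, textarea, [tabindex]';

function attachTrap(container, onBlur) {
  const focusable = Array.from(container.querySelectorAll(FOCUSABLE));
  focusable.forEach((el) => el.addEventListener('blur', onBlur));

  /* Return a detach function so the listeners can be removed
     when the modal is closed. */
  return function detachTrap() {
    focusable.forEach((el) => el.removeEventListener('blur', onBlur));
  };
}
```

Note that one handler function is created once and shared by every element, rather than declaring a new function inside each addEventListener call.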

Within your event handler, you're not going to be concerned about where focus is coming from, but where it is going to, because we can already assume focus is coming from somewhere in our container, where we're trying to trap it. Luckily, we have this information in a blur event - it's called the relatedTarget.

Note: The use of relatedTarget inside a blur event is not standard across all browsers. For example, some browsers set relatedTarget for focusout events but not blur events. Although there is no sure way to make this behavior absolutely consistent, some browsers will set the activeElement prior to firing the blur. It is recommended that you check document.activeElement if the relatedTarget is null.


One last decision you'll have to make is where to place focus if the user tries to go outside the container. You might consider leaving focus where it is; however, that would trap focus to the last focusable element when the default behavior of moving focus off the last focusable element is going back to the beginning. You just need to decide where the beginning is for your needs. If your modal is a form, you're likely to have a different 'beginning' than if it's a pure dialog.

So our event handler will look something like the code in Example 1...



Example 1 (JavaScript)

const onBlur = (e) => {
  /* a destructuring default only covers undefined, and blur events set
     relatedTarget to null, so fall back to activeElement explicitly */
  const relatedTarget = e.relatedTarget || document.activeElement;

  /* `container` is the modal element; `beginning` is the element that
     should receive focus when the user tries to leave the container */
  if (!container.contains(relatedTarget)) {
    beginning.focus();
  }
};


Because this uses the native blur event, it won't just catch navigation keys - it will catch every attempt to move focus out of the container. There will be a slight performance impact because the handler is bound to all focusable elements within the container, but that cannot be avoided - just be sure to remove the listeners when the modal is not displayed.

So, there you go - that's all there is to it. As long as this is used exclusively in a modal, there aren't additional accessibility issues associated with the trap - its use within a modal doesn't violate the Keyboard Accessible guideline via Success Criterion 2.1.2 - in fact, it makes your modal more accessible than if it lacked the focus trap.

Happy coding.

Monday, July 23, 2018

One For All and All For One



All for one and one for all, united we stand, divided we fall. - Alexandre Dumas, The Three Musketeers
I've been putting off writing what I might call "What I've learned about accessibility in twenty years" for a while - partly because even though it's based on a presentation and therefore already written (in a sense), copying PowerPoint slides as images is not very satisfying. Mostly, though, it's because it's likely to be a really long post, and those often tend not to perform well. Considering what I've seen in the industry recently, though, it's probably past the time I should have done it. Either way, here it goes.

Before I seriously dive into this topic, I want to share a little information about myself. Over the past (almost) two decades I've worked with accessibility in both the public sector, where I was bound by Section 508 of the Rehabilitation Act (1973), and in the private sector, where I've worked with guidelines that are now published as the Web Content Accessibility Guidelines (or WCAG). Over that time I've not only built a significant amount of what we might call "product knowledge" about accessibility, but have built quite a bit of passion for the work as well. I'm going to attempt to share that passion and convince you to become what I call an "Accessibility Ally" (A11y*ally, A11y^2, or "Ally Squared") - someone who is actively supportive of a11y, or web accessibility.

What Is This Accessibility Stuff, Anyway?

A lot of discussions about interface accessibility start with impairment. They talk about permanent, temporary, and situational impairment. They talk about visual, auditory, speech, and motor impairment (and sometimes throw in neurological and cognitive impairment as well). They'll give you the numbers...how in the US, across all ages,
  • approximately 8 percent of individuals report significant auditory impairment (about 4 percent are "functionally deaf" and about 4 percent are "hard of hearing", meaning they have difficulty hearing with assistance)
  • approximately 4 percent of individuals report low or no vision (which are not the only visual impairments)
  • approximately 8 percent of men (and nearly no women) report one of several conditions we refer to as "colorblindness"
  • nearly 7 percent of individuals experience a severe motor or dexterity difficulty
  • nearly 17 percent of individuals experience a neurological disorder
  • nearly 20 percent of individuals experience cognitive difficulties
They might even tell you how all those numbers are really First World numbers and when you go into emerging markets where reliable access to resources is more limited the numbers double. They'll talk about how, in general, those numbers are skewed by age and how about 10 percent of people under the age of 65 report impairment while more than 25 percent of people between the ages of 65 and 74 report impairment (and nearly 50 percent of those 75 and older report impairment).

I don't generally like to start there...though I guess I just did. Accessibility is not about the user's impairment - or at least it shouldn't be - it's about the obstacles we - the product managers, content writers, designers, and engineers - place in the path of people trying to live their lives. Talking about impairment in numbers like this also tends to give the impression that impairment is not "normal" when the data clearly shows otherwise. Even accounting for a degree of comorbidity, the numbers indicate that most people experience some sort of impairment in their daily lives.

The other approach that's often taken is diving directly into accessibility and what I call impairment categories and their respective "solutions". The major problem here is a risk similar to what engineers typically refer to as "premature optimization". The "solutions" for visual, auditory, and even motor impairments are relatively easy from an engineering point of view, even though neurological and cognitive difficulties are far more significant in terms of numbers. Rather than focus on which impairment falls into the four categories that define accessibility - Perceivable, Operable, Understandable, and Robust - we have to, as I like to say, see both the forest and the trees. While there is benefit in being familiar with the Success Criteria in each of the Guidelines within the WCAG, using that as the sole focus will miss a large portion of the experience.

One other reason I have chosen this broader understanding of accessibility is that accessibility in interfaces is holistic. Everything in the interface - everything in a web page and every web page in a process - must be accessible in order to meet the definition of accessible. For example, we can't claim a web page that meets visual guidelines but not auditory guidelines "accessible", and if the form on our page is accessible but the navigation is not then the page is not accessible.

Why is Accessibility Important?

When considering accessibility, I often recall an experience interviewing a candidate for an engineering position, and I relate that story to those listening. This candidate, when asked about accessibility, responded with something along the lines of "do you mean blind people - they can't see web pages anyway". I've also worked with designers and product managers who have complained about the amount of time spent building accessible interfaces for such a "small" group of users, or who flat out said accessibility isn't a priority. I've worked with content writers who are convinced their writing is clear enough for their intended audience and anyone confused by it is not in their intended audience - what I call the Abercrombie and Fitch Content Model.

For those who consider accessibility important, there are a few different approaches we might take when trying to sway those less inclined to consider its importance. In my experience, the least frequently made argument for the importance of accessibility is the moral imperative - making an interface accessible is the "right thing to do". While I agree, I won't argue that point here, simply because it's the least frequently made argument and this post is already pushing the too-long boundary as it is.

The approach people most frequently take when attempting to convince others that accessibility is important is the anti-litigation approach. Making sure an interface is accessible is promoted as a matter of organizational security - a form of self-protection. In this approach, the typical method is a focus on the Success Criteria of the WCAG Recommendation, alongside automated testing to verify A or AA level compliance. The "anti-litigation" approach, however, is a pathway to organizational failure.

Make no mistake, the risk of litigation is significant. In the US, litigation in Federal court has increased approximately 400 percent year-over-year between 2015 and 2017, and at the time of this writing appears to be growing at roughly the same rate in 2018. Even more significant, cases have held third parties accountable and have progressed even when remediation was in progress, indicating the court is at least sometimes willing to consider a wider scope than we might typically think of in relation to these cases. To make matters even more precarious, businesses operating internationally face a range of penalties and enforcement patterns. Nearly all countries have some degree of statutory regulation regarding accessibility, even if enforcement and penalties vary. Thankfully, the international landscape is not nearly as varied as it was, as nearly all regulations follow the WCAG or are a derivative of those guidelines.

So, why, when the threat of litigation both domestically and internationally is so significant, do I say focus on the Success Criteria is a pathway to failure? My experience has repeatedly shown that even if all Success Criteria are met, an interface may not be accessible - an issue I'll go into a little further when I talk about building and testing interfaces - and only truly accessible interfaces allow us to succeed.

What happens when your interface is not accessible - aside from the litigation already discussed? First, it's extremely unlikely that you'll know your interface has accessibility issues, because 9 of 10 individuals who experience an accessibility issue don't report it. Your analytics will not identify those failing to convert due to accessibility issues - they'll be mixed in with any others you're tracking. Second, those abandoned transactions will be costly in the extreme. In the UK, those abandoning transactions because of accessibility issues account for roughly £12 billion (GBP) annually - which is roughly 10 percent of the total market. Let me say that again because it deserves to be emphasized - those abandoning because of accessibility issues represent roughly 10 percent of the total market - not 10 percent of your market share - 10 percent of the total market.

Whether your idea of success is moral superiority, ubiquity, or piles of cash, the only sure way to that end is a pathway of accessibility.

How Do We Become an Accessibility Ally?

Hearing "it's the right thing to do" or "this is how we can get into more homes" or, sometimes, the £12 billion (GBP) number - one of those often convinces people to become at least a little interested in creating accessible interfaces, even if they're not quite to the point of wanting to become an Accessibility Evangelist. The good news is that even something as simple as making the creation of accessible interfaces a priority can make you an Accessibility Ally.

The question then becomes how we take that first step - how do we create accessible interfaces? The first rule of creating an accessible interface is that it takes the entire team. Accessibility exists at every level - the complexity of processes (one of the leading causes of abandonment), the content in the interface, the visual design and interactions, and how all of that is put together in code by the engineers - all of it impacts accessibility.

At this point, I should give fair warning that although I'll try to touch on all the layers of an interface, my strengths are in engineering, so the Building and Testing Interfaces section may seem weighted a little heavier, even though it should not be considered more important.

Designing for Accessibility

If we were building a house we wanted to be accessible, we recognize that we would have to start at the beginning, even before blueprints are drawn, making decisions about how many levels it will have, where it will be located, and how people will approach it. Once those high-level decisions are made, we might start drawing blueprints - laying out the rooms and making sure that doorways and passages have sufficient space. We would alter design elements like cabinet and counter height and perhaps flooring surfaces that pose fewer navigation difficulties. To remodel a house to make it accessible, while not impossible, is often very difficult...and the same concepts apply to building interfaces.

Most projects that strive to be fully accessible start with Information Architecture, or IA (you can find out more about IA at https://www.usability.gov/what-and-why/information-architecture.html). This is generally a good place to begin, unless what you're building is an interface for a process - like buying or selling something or signing up for something. In the case of a process interface, you've basically decided you're building a house with multiple levels and you have accessibility issues related to traversing those levels...to continue our simile, you have to decide if you're going to have an elevator or a device that traverses stairs...but your building will still need a foundation. Information Architecture is like the foundation of your building. Can you build a building without a foundation? Sure. A lot of pioneers built log cabins by laying the first course of logs directly on the ground...but - and this is a very big but - those structures did not last. If you decide to go another route than good IA, the work further on will be more difficult, and much of it will have to be reworked, because IA affects a core aspect of the Accessibility Tree - the accessible name - the most critical piece of information assistive technology can have about an element of an interface.

Once your Information Architecture is complete, designing for accessibility is considerably less complex than most people imagine it to be. Sure there are some technical bits that designers have to keep in mind - like luminance contrast and how everything needs a label - but there are loads of good, reliable resources available...probably more so for design than for the engineering side of things. For example, there are several resources available from the Paciello Group and Deque, organizations who work with web accessibility almost exclusively, as well as both public and private organizations who have made accessibility a priority, like Government Digital Service, eBay, PayPal, and even A List Apart.

With the available resources you can succeed as an Accessibility Ally as long as you keep one thought at the fore of your mind - can someone use this interface the way they want, rather than the way I want? What if they search a list of all the links on your site - does the text inside the anchor tell them what's on the other side? What if they're experienced users and want to jump past all the stuff you've crammed into the header, but they're not using a scrollbar - is there something that tells them how to do that? Keep in mind that as a designer, you're designing the interface for everyone, not just those who can [insert action here] like you can.

Building and Testing Interfaces

When building accessible interfaces, there is a massive amount to learn about the Accessibility Tree and how and when to modify it as well as the different states a component might have. Much has been made of ARIA roles and states, but frankly, ARIA is one of the worst (or perhaps I might say most dangerous) tools an engineer can use.

We're going to briefly pause the technical stuff here for a short story from my childhood (a story I'll tie to ARIA, but not till the end).

When I was a child - about 8 years old - my family and I visited a gift shop while on vacation in Appalachia. In this particular gift shop they sold something that my 8 year old mind thought was the greatest thing a kid could have - a bullwhip. I begged and pleaded, but my parents would not allow me to purchase this wondrous device that smelled of leather, power, and danger. I was very dismayed...until, as we were leaving, I saw a child about my age flicking one and listening to the distinctive crack...until he snapped it behind his back and stood up ramrod straight as a look of intense pain crossed his face.

ARIA roles and states are like that bullwhip. They look really cool. You're pretty sure you would look like a superhero with them coiled on your belt. They smell of power and danger and when other engineers see you use them, you're pretty sure they think you are a superhero. They're very enticing...until they crack across your back.

Luckily, ARIA roles and states are almost unnecessary. Yes, they can take your interface to the next level, but they are not for the inexperienced or those who lack caution. If you're creating interfaces designed for a browser, the best tool you have to build an accessible interface is Semantic HTML. Yes, it's possible to build an interface totally lacking accessibility in native HTML. Yes, it's possible to build an accessible interface in Semantic HTML and then destroy the accessibility with CSS. Yes, it's possible to build an accessible interface with JavaScript or to destroy an accessible interface with JavaScript. None of the languages we use in front-end engineering build accessibility or destroy accessibility on their own - that takes engineers. The languages themselves are strong enough...if you are new to accessibility, start somewhere other than the bullwhip.

The next topic most people jump to from here is how to test an interface to make sure it is accessible. This is another place where things can get tricky, because there are a number of different tools, they all serve a different purpose, and they may not do what they're expected to do. For instance, there are tools that measure things like luminance contrast, whether or not landmarks are present, or if any required elements or attributes are missing - validating according to the Success Criteria in the WCAG. In this realm, I prefer the aXe Chrome plug-in (by Deque). Nearly all these tools are equally good at what they do, but - and here's one of the places where it can go sideways - tools that validate according to the Success Criteria are a bit like spellcheckers - they can tell you if you spelled the word correctly, but they cannot tell you if you've selected the correct word.

Beyond Success Criteria validation, there are other tools available (or soon to be available) to help verify accessibility, the most common of which are screen readers. Of screen readers available, some are free and some are paid - VoiceOver on Mac and JAWS on Windows are the most popular in the US - JAWS is not free, but there is a demo version you can run for about 40 minutes at a time. NVDA (another Windows tool) and ChromeVox are free, but less popular. In addition to screen readers, in version 61 of Firefox the dev tools should include a tool that gives visibility into the Accessibility Tree (version 61 is the planned release, this version is not available at the time of this writing).

One thing to remember with any of these - just because it works one way for you doesn't mean it will work that way for everyone. Accessibility platforms are multiple tools that share an interface. Each tool is built differently - typically according to the senior engineer's interpretation of the specification. While the results are often very similar, they will not always be the same. For example, some platforms that include VoiceOver don't recognize a dynamically modified Accessibility Tree, meaning if you add an element to the DOM it won't be announced, or it may only be announced if certain steps are taken, while the exact same code running in JAWS will announce the content multiple times. Another thing to remember is that there is no way you will ever know all the edge cases - in the case of VoiceOver not recognizing dynamically added elements mentioned previously, it took more effort than it should have to demonstrate conclusively to the stakeholders that the issue was a difference in the platform.

Finally, when you're trying to ensure your interface is accessible, you will have to manually test it - there is simply no other way - and it should be tested at least once every development cycle. Granted, not every user story will affect accessibility, but because we have that holistic view of accessibility - one that acknowledges accessibility exists at every level - we know that most stories will affect accessibility.

As with design, there are resources available, but good resources are more difficult to find because engineers are opinionated and usually feel like they understand specifications, even though what they understand is their interpretation of the specifications. If you want to become an accessibility expert, it can be done, but the process is neither quick nor easy. If you want to become an A11y^2, well, that process is quicker and easier and mostly consists of keeping everything said in this section in mind. Understand accessibility holistically. Make "Semantic HTML" and "ARIA is a last resort" your mantras. Check your work with one of the WCAG verification tools (again, I prefer the aXe Chrome plug-in) and at least one screen reader. Check it manually, and check it frequently.

Being an Accessibility Ally

Being an Accessibility Ally is really not complicated. You don't need to be an accessibility expert (though you certainly can be one if you want)...you just need to see accessibility as a priority and the pathway to success. Being an Accessibility Ally means you're actively supportive of accessibility.

To be actively supportive, one needs to understand accessibility in a more holistic way than we've traditionally thought about it and we need to understand that not only does accessibility accumulate, its opposite accumulates as well. In other words, inaccessibility anywhere is a threat to accessibility everywhere.

To be actively supportive, we need to do more than act the part by designing and building things like stair-ramps too steep to navigate safely in a wheelchair, or Norman doors. We need to make building interfaces that are perceivable, operable, understandable, and robust a priority...and we need to make that priority visible to others.

When we're actively supportive and people see our action, only then will we be the ally we all need...and we all need all the allies we can get.



For another take on age and web interfaces, you may want to take a look at "The Danger of an Adult-oriented Internet", a post in this blog from 2013 or A11y Squared, a post from 2017. 

Sunday, July 8, 2018

The Myth of Analytics

I've been writing web interfaces since the mid 1990s - which in this industry is eons.

When I first started, we compiled analytics directly from the web server, watching primarily how many visitors there were, how long it took to push the page(s) out, and what paths visitors followed when they came to the site. Looking at the data that showed what visitors were reading was helpful - we were able to see what the most popular pages were and what the most popular path was. For merchants, this gave a window into cross-selling opportunities and more.

As technology grew, we noticed that the longer it took to download pages, the less likely people were to buy stuff. So we started tracking page weight a little more closely...and then, as we realized that a 100KB image is not the same as 100KB of HTML, which is not the same as 100KB of CSS, which is not the same as 100KB of JavaScript, we started looking at the weight of each of the resources and then, ultimately, this mysterious measure we now call 'time to interactive'.
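In browsers that support the Resource Timing API, you can see this breakdown for yourself by grouping per-resource data by type. The grouping function below is plain JavaScript; the `performance.getEntriesByType` call in the comment is the browser-specific part, and note that `transferSize` isn't reported by every browser, so this is a sketch rather than a universal solution:

```javascript
// Group an array of resource-timing-style entries by initiatorType
// (e.g. 'img', 'script', 'link'), summing transferSize in bytes
function weighByType(entries) {
  return entries.reduce(function (totals, entry) {
    var type = entry.initiatorType || 'other';
    totals[type] = (totals[type] || 0) + (entry.transferSize || 0);
    return totals;
  }, {});
}

// In the browser, you would feed it real entries:
// var weights = weighByType(performance.getEntriesByType('resource'));
```

A report like this makes the difference between 100KB of images and 100KB of JavaScript visible at a glance - the first step toward treating them differently.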

I should take a minute to point out that all of these numbers are important, and if you're not keeping track of all of them, and if your [insert your name for front-end developers here] don't understand the difference between each of these categories and why they are each important, you're setting yourself up for failure.

Here's the thing with this shift, though...the only way to get numbers directly from the user's machine (sometimes called 'Real User Measurement', or RUM) is to use JavaScript. So the tool we use to measure our performance has to first be delivered to the browser in order to do the measuring.

So the first problem in this shift from server to client measurement is that the total amount of time cannot be accurately measured - it's not possible.

The closest we could come is to calculate the difference between the time (on the server) at which the response packet was pushed out and the time at which the request packet fired by the 'interactive' event was received. Even this number is unreliable, though, because it includes two network traversals, and while there is a way to measure the upstream time, there is no way to measure the downstream time (and I have yet to see anything that measures even the upstream time).

The second problem in this shift from server to client measurement is that it relies on JavaScript.

Over the course of my career, I've heard numerous times that this reliance on JavaScript is not an issue because the majority of users have JavaScript enabled, but again, we come up against a number that cannot be accurately measured. In order for a user to be measured...

  1. they have to get our response that contains the JavaScript code that sends the request(s) to the appropriate recording mechanism on the server
  2. that code has to be loaded and compiled by the browser
  3. that code cannot conflict with or be dependent on any other code that may or may not be loaded
...and even if all of that can happen, it has to happen before the user gets frustrated enough to click away. It's also not a good idea to rely on all three of those stars aligning all the time - there have been instances where developers included references to code that wasn't loaded and brought down entire sites. When your site goes down in this manner, it can be very difficult to resolve because you have no idea where the problem lies.
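One way to hedge against those failure modes is to isolate the tracking code and feature-detect everything it touches, so a broken tracker can't take the page down with it. A rough sketch - the '/beacon' endpoint and payload shape here are made up for illustration:

```javascript
// Send a metrics payload without letting the tracker break the page.
// '/beacon' is a hypothetical collection endpoint.
function sendMetrics(payload, beacon) {
  try {
    var body = JSON.stringify(payload);
    if (typeof beacon === 'function') {
      return beacon('/beacon', body); // e.g. navigator.sendBeacon
    }
    return false; // no transport available - this visitor goes unmeasured
  } catch (err) {
    return false; // never let a tracking error escape into the page
  }
}

// In the browser, the transport would be something like:
// sendMetrics({ page: location.pathname },
//   navigator.sendBeacon && navigator.sendBeacon.bind(navigator));
```

Notice that the defensive version makes the measurement gap explicit: every `false` return is a visitor your analytics will simply never see.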

One group measuring interactions of a first-world community with a relatively high degree of trust (a group that would have little to no reason to have JavaScript disabled or otherwise unavailable) found that nearly 3 percent of interactions were untracked by their client-side solution.

Assume, however, that everything goes well: all of our code is delivered to the end user, there are no conflicts, all the request packets are coming in, and we're able to measure everything - even the upstream traversal time. All of this data gives us a table we'll call "page views by browser", which contains the following records: "Chrome, 67, 55%; Safari, 29, 24%; Firefox, 19, 15%; Other, 5, 6%".

What does this data tell you? It tells you that 67 visitors used Chrome, 29 used Safari, and 19 used Firefox. That's all. Even if you add the version(s) used, the operating system, and whether the visitor was on mobile or desktop, that information is not typically tied into conversion information...and it tells you nothing about the 5 people who stuck around long enough to be tracked but whose numbers were too small for their information to be reported.
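That loss of the smallest groups is baked into the aggregation itself. Here's a sketch of the rollup step - counting visits per browser and folding anything under a threshold into 'Other' - which is exactly where those last few visitors disappear (the data shape is made up for illustration):

```javascript
// Count visits per browser, folding small groups into 'Other'
function pageViewsByBrowser(visits, minCount) {
  var counts = visits.reduce(function (tally, visit) {
    tally[visit.browser] = (tally[visit.browser] || 0) + 1;
    return tally;
  }, {});
  var report = {};
  Object.keys(counts).forEach(function (browser) {
    // below the threshold, the browser's identity is discarded
    var key = counts[browser] < minCount ? 'Other' : browser;
    report[key] = (report[key] || 0) + counts[browser];
  });
  return report;
}
```

Once a visit lands in 'Other', no later query can recover which browser it came from - the detail has to be kept upstream if you ever want it.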

Additionally, if we were to combine the 6 percent in the "Other" category with the 3 percent that were untracked, that's 10 percent of customers you have very little information about.

The third problem in this shift from server to client measurement is that it does not measure anyone in the 'click away' category - those for whom everything loads properly but who get bored by the length of time your site takes to respond and leave.

Of course there are ways to correct some of the deficiencies in this shift from server to client measurement, and there are ways to address some of the issues your improved analytics can identify. You have to be willing to put in the work to identify where your analytics package/process is falling short and improve it...or, alternatively, you can continue believing the myth that your 'analytics' describes your (potential and actual) customers, when it really describes only a subset of them.
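One concrete way to see part of the shortfall is to compare raw request counts from the server log against beacon counts from your client-side package for the same pages and period. Assuming you can export both numbers, a sketch of the comparison might look like this:

```javascript
// Estimate the share of traffic the client-side tracker never saw,
// given raw hit counts from the server log and beacon counts from
// the analytics package for the same page(s) and period
function untrackedShare(serverHits, beaconHits) {
  if (!serverHits) { return 0; } // nothing served, nothing missed
  var missed = Math.max(serverHits - beaconHits, 0);
  return Math.round((missed / serverHits) * 100);
}

console.log(untrackedShare(1000, 970)); // 3
```

A result like 3 percent lines up with the untracked share observed by the group mentioned earlier - and running this per page can show you exactly where the gap is widest.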

I'd encourage you to review the data you're collecting and see the shortfall(s); from that point on, it's just a matter of creative coding to rectify the issues.

Happy coding.