Recently, I was reading an interview with Scott Kubie in which he was asked "is there a piece of professional or life advice you've gotten that has always stuck with you" and to which he responded "clean your tools". I was immediately reminded of all the times my father said something similar to me about the care and maintenance of tools.
Often, as a child, I wondered why we spent time maintaining our tools when they were just going to get dirty or dulled the next time we used them. In the case of some tools, and how they were soiled, I could understand - steel and water result in rust, which means a replacement will soon be needed - but unless neglect means they'll have to be replaced, why clean or sharpen them?
As I worked, I learned that it was difficult - and sometimes impossible - to hold onto a metal wrench covered in oil; I learned that it was difficult - and sometimes impossible - to cut with an unsharpened blade. Well maintained tools, on the other hand, did the job they were designed to do, often wonderfully well. As much as my child self may have hated to admit it, my dad was right. The simple truth is that clean tools work better.
What does this have to do with technology?
More and more, the tools we have created to write, test, and manage code are far from clean. They may have been updated frequently, with bells and whistles added. They've been converted from plain-old semantic HTML, cascading styles, and vanilla JavaScript to the most popular frameworks. But, like a blade that has been sharpened in the wrong direction, our tools have developed a false edge that is likely to break off, leaving our tools dull and useless.
It can't happen - it won't happen, you say? It already has.
If we look at the standard tools developers use - things like BitBucket and WordPress - we find that many of them have significant accessibility issues, often resulting in a failure to meet WCAG at even A-level conformance, brought on by how they're built and maintained. The latest WordPress editor, called Gutenberg, has consistently had significant accessibility issues and has been called "a regression in terms of accessibility level", frustrating testers to such a degree that many refused to even look at it again.
Good engineers notice and point out the ways in which we've failed. For example, in a very public announcement, Rian Rietveld resigned as leader of the WordPress accessibility team, citing several issues that led to her departure - but our industry, as with many others, often tries to shout such engineers down or shut them out, whether they be leads on projects in major organizations, like Rietveld, or entrepreneurs writing about the "State of Accessibility in Dynamic Web Content".
Although it isn't lost on many familiar with accessibility and the state of our tools that most of the issues cited in the post "I have resigned as the WordPress accessibility team lead. Here is why." (by Rietveld) are associated with React, the poor accessibility in one of the web's most popular tools is an uncomfortable truth that few are willing to even acknowledge.
Although the problem is not isolated to React, the lack of React developers with accessibility experience and the difficulties with accessibility in React itself are a problem for more than just Gutenberg, and for more than just WordPress, which powers nearly one-third of the web - it's a problem for any organization that uses React, because it's a problem in the ecosystem.
We've been building with dirty tools, creating an ecosystem that has shifted away from POSH and CSS with unobtrusive JavaScript to one written in a JavaScript tool whose architecture and design patterns were created without accessibility in mind. That practice - that ecosystem - has produced developers who need to be encouraged and inspired rather than simply educated, because they can't, or won't, see why the lack of accessibility is a problem. Some even respond to code that resolves accessibility issues created as a by-product of using this tool with something along the lines of "that's not the React way of writing code" - which is only slightly better than the "blind people don't use the web" I received (as a response to a question about accessibility) when interviewing a candidate for a "senior UI engineer" position.
Granted, this post has, perhaps somewhat unfairly, focused on React when the problem is greater than React. In truth, I could list any number of tools - frameworks like React - that are, or have become, a problem. React is a bigger target at the moment because it's so fashionable that nearly every organization of any size uses it. Organizations have become convinced that engineers won't work without it - maybe they're right - and engineers insist on using it because without it they find it difficult to land a decent gig.
But...here's something we know - the simple truth is that clean tools work better. It will always be the case that plain-old semantic HTML and lightweight, cascading styles will outperform and be more accessible than sites written fully in Angular or React. If you're interested in diversity, or if you believe, like Tim Berners-Lee, that "the power of the Web is in its universality" and that "access by everyone regardless of disability is an essential aspect", you should be pushing for clean tools that will get your product in front of the greatest diversity and the greatest number of users.
Happy coding (with clean tools).
A long time ago in a galaxy far, far away... I gave a lecture called Getting Paid to Think to an academic society. In it I presented a simple hypothesis - an education in the humanities and thinking (e.g., Philosophy) is more beneficial than a skill-based education (e.g., Computer Science). This blog is dedicated to getting you to think as I discuss a variety of topics, most of which are related to my career in the tech industry.
Monday, July 23, 2018
One For All and All For One
All for one and one for all, united we stand, divided we fall. Alexandre Dumas, The Three Musketeers
Before I seriously dive into this topic, I want to share a little information about myself. Over the past (almost) two decades I've worked with accessibility in both the public sector, where I was bound by Section 508 of the Rehabilitation Act (1973), and in the private sector, where I've worked with guidelines that are now published as the Web Content Accessibility Guidelines (or WCAG). Over that time I've not only built a significant amount of what we might call "product knowledge" about accessibility, but have built quite a bit of passion for the work as well. I'm going to attempt to share that passion and attempt to convince you to become what I call an "Accessibility Ally" (A11y*ally, A11y^2, or "Ally Squared") - someone who is actively supportive of a11y, or web accessibility.
What Is This Accessibility Stuff, Anyway?
A lot of discussions about interface accessibility start with impairment. They talk about permanent, temporary, and situational impairment. They talk about visual, auditory, speech, and motor impairment (and sometimes throw in neurological and cognitive impairment as well). They'll give you the numbers...how in the US, across all ages:
- approximately 8 percent of individuals report significant auditory impairment (about 4 percent are "functionally deaf" and about 4 percent are "hard of hearing", meaning they have difficulty hearing even with assistance)
- approximately 4 percent of individuals report low or no vision (which are not the only visual impairments)
- approximately 8 percent of men (and nearly no women) report one of several conditions we refer to as "colorblindness"
- nearly 7 percent of individuals experience a severe motor or dexterity difficulty
- nearly 17 percent of individuals experience a neurological disorder
- nearly 20 percent of individuals experience cognitive difficulties
I don't generally like to start there...though I guess I just did. Accessibility is not about the user's impairment - or at least it shouldn't be - it's about the obstacles we - the product managers, content writers, designers, and engineers - place in the path of people trying to live their lives. Talking about impairment in numbers like this also tends to give the impression that impairment is not "normal" when the data clearly shows otherwise. Even accounting for a degree of comorbidity, the numbers indicate that most people experience some sort of impairment in their daily lives.
The other approach that's often taken is diving directly into accessibility and what I call impairment categories and their respective "solutions". The major problem here is a risk similar to what engineers typically refer to as "premature optimization". The "solutions" for visual, auditory, and even motor impairments are relatively easy from an engineering point of view, even though neurological and cognitive difficulties are far more significant in terms of numbers. Rather than focus on which impairment falls into which of the four categories that define accessibility - Perceivable, Operable, Understandable, and Robust - we have to, as I like to say, see the forest and the trees. While there is benefit in being familiar with the Success Criteria in each of the Guidelines within the WCAG, using that as a focus will miss a large portion of the experience.
One other reason I have chosen this broader understanding of accessibility is that accessibility in interfaces is holistic. Everything in the interface - everything in a web page, and every web page in a process - must be accessible in order to meet the definition of accessible. For example, we can't call a web page that meets visual guidelines but not auditory guidelines "accessible", and if the form on our page is accessible but the navigation is not, then the page is not accessible.
Why is Accessibility Important?
When considering accessibility, I often recall an experience interviewing a candidate for an engineering position, and I relate that story to those listening. This candidate, when asked about accessibility, responded something along the lines of "do you mean blind people - they can't see web pages anyway". I've also worked with designers and product managers who have complained about the amount of time spent building accessible interfaces for such a "small" group of users, or who flat out said accessibility isn't a priority. I've worked with content writers who are convinced their writing is clear enough for their intended audience and anyone confused by it is not in their intended audience - what I call the Abercrombie and Fitch Content Model.
For those who consider accessibility important, there are a few different approaches we might typically take when trying to sway those who tend to be less inclined to consider its importance. In my experience, the least frequently made argument for the importance of accessibility is the moral imperative - making an interface accessible is the "right thing to do". While I agree, I won't argue that point here, simply because it's the least frequently made argument and this post is already pushing the too-long limit as it is.
The approach people most frequently take in attempting to convince others accessibility is important is the anti-litigation approach. Making sure their interface is accessible is promoted as a matter of organizational security - a form of self-protection. In this approach, the typical method is a focus on the Success Criteria of the WCAG Recommendation alongside automated testing to verify that they have achieved A or AA level compliance. The "anti-litigation" approach is a pathway to organizational failure.
Make no mistake, the risk of litigation is significant. In the US, litigation in Federal court has increased approximately 400 percent year-over-year between 2015 and 2017, and at the time of this writing appears to be growing at roughly the same rate in 2018. Even more significant, cases have held third parties accountable and have progressed even when remediation was in progress, indicating the court is at least sometimes willing to consider a wider scope than we might typically think of in relation to these cases. To make matters even more precarious, businesses operating internationally face a range of penalties and enforcement patterns. Nearly all countries have some degree of statutory regulation regarding accessibility, even if enforcement and penalties vary. Thankfully, the international landscape is not nearly as varied as it was, as nearly all regulations follow the WCAG or are a derivative of those guidelines.
So, why, when the threat of litigation both domestically and internationally is so significant, do I say focus on the Success Criteria is a pathway to failure? My experience has repeatedly shown that even if all Success Criteria are met, an interface may not be accessible - an issue I'll go into a little further when I talk about building and testing interfaces - and only truly accessible interfaces allow us to succeed.
What happens when your interface is not accessible - aside from the litigation already discussed? First, it's extremely unlikely that you'll know your interface has accessibility issues, because 9 of 10 individuals who experience an accessibility issue don't report it. Your analytics will not identify those failing to convert due to accessibility issues - they'll be mixed in with any others you're tracking. Second, those abandoned transactions will be costly in the extreme. In the UK, those abandoning transactions because of accessibility issues account for roughly £12 billion (GBP) annually - which is roughly 10 percent of the total market. Let me say that again because it deserves to be emphasized - those abandoning because of accessibility issues represent roughly 10 percent of the total market - not 10 percent of your market share - 10 percent of the total market.
Whether your idea of success is moral superiority, ubiquity, or piles of cash, the only sure way to that end is a pathway of accessibility.
How Do We Become an Accessibility Ally?
Hearing "it's the right thing to do" or "this is how we can get into more homes" or, sometimes the £12 billion (GBP) number - one of those often convinces people to become at least a little interested in creating accessible interfaces, even if they're not quite to the point of wanting to become an Accessibility Evangelist. The good news is that even something as simple as making creating accessible interface a priority can make you an Accessibility Ally.
The question then becomes how do we take that first step - how do we create accessible interfaces? The first rule you have to know about creating an accessible interface is that it takes the entire team. Accessibility exists at every level - the complexity of processes (one of the leading causes of abandonment), the content in the interface, the visual design and interactions, and how all of that is put together in code by the engineers - all of it impacts accessibility.
At this point, I should give fair warning that although I'll try to touch on all the layers of an interface, my strengths are in engineering, so the Building and Testing Interfaces section may seem weighted a little heavier, even though it should not be considered more important.
Designing for Accessibility
If we were building a house we wanted to be accessible, we would recognize that we have to start at the beginning, even before blueprints are drawn, making decisions about how many levels it will have, where it will be located, and how people will approach it. Once those high-level decisions are made, we might start drawing blueprints - laying out the rooms and making sure that doorways and passages have sufficient space. We would alter design elements like cabinet and counter height, and perhaps choose flooring surfaces that pose fewer navigation difficulties. Remodeling a house to make it accessible, while not impossible, is often very difficult...and the same concepts apply to building interfaces.
Most projects that strive to be fully accessible start with Information Architecture, or IA (you can find out more about IA at https://www.usability.gov/what-and-why/information-architecture.html). This is generally a good place to begin, unless what you're building is an interface for a process - like buying or selling something, or signing up for something. In the case of a process interface, you've basically decided you're building a house with multiple levels, and you have accessibility issues related to traversing those levels...to continue our analogy, you have to decide if you're going to have an elevator or a device that traverses stairs...but your building will still need a foundation. Information Architecture is like the foundation of your building. Can you build a building without a foundation? Sure. A lot of pioneers built log cabins by laying the first course of logs directly on the ground...but - and this is a very big but - those structures did not last. If you decide to go a route other than good IA, the work further on will be more difficult, and much of it will have to be reworked, because IA affects a core aspect of the Accessibility Tree - the accessible name - the most critical piece of information assistive technology can have about an element of an interface.
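For a hedged illustration of how the accessible name falls out of ordinary markup decisions (the snippets below are hypothetical, not from any particular project):

```html
<!-- The accessible name of a link is computed from its text content: -->
<a href="/reports/2018-q3">Download the Q3 2018 report</a>

<!-- An icon-only control has no text content, so it needs an explicit name: -->
<button type="button" aria-label="Close dialog">&times;</button>

<!-- A form field gets its accessible name from its associated label: -->
<label for="email">Email address</label>
<input id="email" type="email" name="email">
```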
Once your Information Architecture is complete, designing for accessibility is considerably less complex than most people imagine it to be. Sure there are some technical bits that designers have to keep in mind - like luminance contrast and how everything needs a label - but there are loads of good, reliable resources available...probably more so for design than for the engineering side of things. For example, there are several resources available from the Paciello Group and Deque, organizations who work with web accessibility almost exclusively, as well as both public and private organizations who have made accessibility a priority, like Government Digital Service, eBay, PayPal, and even A List Apart.
With the available resources you can succeed as an Accessibility Ally as long as you keep one thought at the fore of your mind - can someone use this interface the way they want rather than the way I want? What if they pull up a list of all the links on your site - does the text inside the anchor tell them what's on the other side? What if they're experienced users who want to jump past all the stuff you've crammed into the header, but they're not using a scrollbar - is there something that tells them how to do that? Keep in mind that as a designer, you're designing the interface for everyone, not just those who can [insert action here] like you can.
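To make those two questions concrete, here's a minimal sketch (hypothetical markup) of descriptive link text and a skip link:

```html
<!-- A skip link: the first focusable element on the page, so keyboard users
     can jump past the header without a scrollbar. -->
<a class="skip-link" href="#main-content">Skip to main content</a>

<header>...site navigation and branding...</header>

<main id="main-content">
  <!-- Descriptive link text tells someone browsing a list of links
       what's on the other side; "click here" would not. -->
  <a href="/reports/annual">Read the full annual report</a>
</main>
```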
Building and Testing Interfaces
When building accessible interfaces, there is a massive amount to learn about the Accessibility Tree and how and when to modify it as well as the different states a component might have. Much has been made of ARIA roles and states, but frankly, ARIA is one of the worst (or perhaps I might say most dangerous) tools an engineer can use.
We're going to briefly pause the technical stuff here for a short story from my childhood (a story I'll tie to ARIA, but not till the end).
When I was a child - about 8 years old - my family and I visited a gift shop while on vacation in Appalachia. In this particular gift shop they sold something that my 8 year old mind thought was the greatest thing a kid could have - a bullwhip. I begged and pleaded, but my parents would not allow me to purchase this wondrous device that smelled of leather, power, and danger. I was very dismayed...until, as we were leaving, I saw a child about my age flicking one and listening to the distinctive crack...until he snapped it behind his back and stood up ramrod straight as a look of intense pain crossed his face.
ARIA roles and states are like that bullwhip. They look really cool. You're pretty sure you would look like a superhero with them coiled on your belt. They smell of power and danger and when other engineers see you use them, you're pretty sure they think you are a superhero. They're very enticing...until they crack across your back.
Luckily, ARIA roles and states are almost unnecessary. Yes, they can take your interface to the next level, but they are not for the inexperienced or those who lack caution. If you're creating interfaces designed for a browser, the best tool you have to build an accessible interface is Semantic HTML. Yes, it's possible to build an interface totally lacking accessibility in native HTML. Yes, it's possible to build an accessible interface in Semantic HTML and then destroy the accessibility with CSS. Yes, it's possible to build an accessible interface with JavaScript or to destroy an accessible interface with JavaScript. None of the languages we use in front-end engineering build accessibility or destroy accessibility on their own - that takes engineers. The languages themselves are strong enough...if you are new to accessibility, start somewhere other than the bullwhip.
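As a small, hedged illustration of why the native element beats the bullwhip, compare a semantic button with the ARIA-and-JavaScript reconstruction it replaces (hypothetical markup; the save function is invented for illustration):

```html
<!-- Semantic HTML: focusable, keyboard-operable, and announced as a button for free. -->
<button type="button" onclick="save()">Save draft</button>

<!-- The "bullwhip" version: every behavior the native element provides must be rebuilt by hand. -->
<div role="button" tabindex="0" id="save-div">Save draft</div>
<script>
  function save() { /* hypothetical save logic */ }

  const saveDiv = document.getElementById('save-div');
  saveDiv.addEventListener('click', save);
  // Without this handler, the div looks like a button but cannot be operated by keyboard.
  saveDiv.addEventListener('keydown', (event) => {
    if (event.key === 'Enter' || event.key === ' ') {
      event.preventDefault();
      save();
    }
  });
</script>
```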
The next topic most people jump to from here is how to test an interface to make sure it is accessible. This is another place where things can get tricky, because there are a number of different tools, they all serve a different purpose, and they may not do what they're expected to do. For instance, there are tools that measure things like luminance contrast, whether or not landmarks are present, or if any required elements or attributes are missing - validating according to the Success Criteria in the WCAG. In this realm, I prefer the aXe Chrome plug-in (by Deque). Nearly all these tools are equally good at what they do, but - and here's one of the places where it can go sideways - tools that validate according to the Success Criteria are a bit like spellcheckers - they can tell you if you spelled the word correctly, but they cannot tell you if you've selected the correct word.
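For a sense of how such a tool plugs into a page, here's a minimal sketch using axe-core, the open-source engine behind the aXe plug-in - assuming the axe-core script has already been loaded on the page:

```js
// Run the axe-core checks against the whole document and log any violations.
axe.run(document)
  .then((results) => {
    results.violations.forEach((violation) => {
      console.log(`${violation.id}: ${violation.description}`);
      // Each violation lists the DOM nodes (as selectors) where it was found.
      violation.nodes.forEach((node) => console.log('  at', node.target.join(' ')));
    });
  })
  .catch((err) => console.error('axe failed to run:', err));
```

Remember the spellchecker caveat above: a clean run here means the criteria it checks passed, not that the interface is accessible.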
Beyond Success Criteria validation, there are other tools available (or soon to be available) to help verify accessibility, the most common of which are screen readers. Of the screen readers available, some are free and some are paid - VoiceOver on Mac and JAWS on Windows are the most popular in the US. JAWS is not free, but there is a demo version you can run for about 40 minutes at a time. NVDA (another Windows tool) and ChromeVox are free, but less popular. In addition to screen readers, the Firefox dev tools should include a tool that gives visibility into the Accessibility Tree in version 61 (the planned release, which is not available at the time of this writing).
One thing to remember with any of these - just because it works one way for you doesn't mean it will work that way for everyone. Accessibility platforms are multiple tools that share an interface. Each tool is built differently - typically according to the senior engineer's interpretation of the specification. While the results are often very similar, they will not always be the same. For example, some platforms that include VoiceOver don't recognize a dynamically modified Accessibility Tree, meaning if you add an element to the DOM it won't be announced, or it may only be announced if certain steps are taken, while the exact same code running in JAWS will announce the content multiple times. Another thing to remember is that there is no way you will ever know all the edge cases - in the case of VoiceOver not recognizing dynamically added elements mentioned previously, it took more effort than it should have to demonstrate conclusively to the stakeholders that the issue was a difference in the platform.
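When dynamic content must be announced, the usual mitigation is an ARIA live region - sketched below with hypothetical markup; as noted above, exact announcement behavior still varies by platform:

```html
<!-- An empty live region present in the DOM from page load. -->
<div id="status" role="status" aria-live="polite"></div>
<script>
  // Updating the text of an existing live region is more reliably announced
  // than inserting brand-new nodes into the DOM.
  document.getElementById('status').textContent = 'Your changes have been saved.';
</script>
```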
Finally, when you're trying to ensure your interface is accessible, you will have to manually test it - there is simply no other way - and it should be tested at least once every development cycle. Granted, not every user story will affect accessibility, but because we have that holistic view of accessibility that acknowledges that accessibility exists at every level, we know that most stories will affect accessibility.
As with design, there are resources available, but good resources are more difficult to find because engineers are opinionated and usually feel like they understand specifications, even though what they understand is their interpretation of the specifications. If you want to become an accessibility expert, it can be done, but the process is neither quick nor easy. If you want to become an A11y^2, well that process is quicker and easier and mostly consists of keeping everything said in this section in mind. Understand accessibility holistically. Make "Semantic HTML" and "ARIA is a last resort" your mantras. Check your work with one of the WCAG verification tools (again, I prefer the aXe Chrome plug-in) and at least one screen reader. Check it manually, and check it frequently.
Being an Accessibility Ally
Being an Accessibility Ally is really not complicated. You don't need to be an accessibility expert (though you certainly can be one if you want)...you just need to see accessibility as a priority and the pathway to success. Being an Accessibility Ally means you're actively supportive of accessibility.
To be actively supportive, one needs to understand accessibility in a more holistic way than we've traditionally thought about it and we need to understand that not only does accessibility accumulate, its opposite accumulates as well. In other words, inaccessibility anywhere is a threat to accessibility everywhere.
To be actively supportive, we need to do more than act the part by designing and building things like stair-ramps too steep to navigate safely with a wheelchair, or Norman doors. We need to make building interfaces that are perceivable, operable, understandable, and robust a priority...and we need to make that priority visible to others.
When we're actively supportive and people see our action, only then will we be the ally we all need...and we all need all the allies we can get.
For another take on age and web interfaces, you may want to take a look at "The Danger of an Adult-oriented Internet", a post in this blog from 2013 or A11y Squared, a post from 2017.
Saturday, March 24, 2018
Creation, Attribution, and Misogyny
One of the hidden, insidious, morally bankrupt things that repeatedly peeks its head above the slime in open source development is plagiarism - specifically, the lack of attribution.
We can have all the arguments you want about 'pirating' code and the like - this is not about patent trolls - and pirating code is stealing, but that's also not really the subject of this post. This post, instead, is about intentionally writing creators out of history - whether through intentionally omitting an author or reference link or by actively deleting it. Of course, it happens all the time, and it's dishonest...not in a cute "I'm a smuggler for the Rebel Alliance" kind of way, but in a way that does real harm.
Oooh, what's that? Open source is different - it shouldn't recognize any individual authors/creators/innovators? No, that is absolutely not the case, and writing them out of history is not only dishonest, it's misogynistic as well. That's right, I went there. If you are writing authors, creators, or innovators out of history, you're a sexist pig - and frankly, there's no other way to see it.
We must take sides. Neutrality helps the oppressor, never the victim. Silence encourages the tormentor, never the tormented. Elie Wiesel
I hear people all over the world crying out in terror "I'm not a sexist pig, I delete (or omit) all creators, not just women". Here's the thing, though: in removing creators you are, at the least, maintaining the illusion and bias that many people have that women are not creators, and at worst, actively promoting a hostile environment that dismisses and negates the contribution of women. Neither option is something which should invoke pride in our work.
Women are profoundly underrepresented in STEM. That's a simple fact. We can debate whether it's a pipeline problem (it both is and isn't) or whether STEM fields are hostile environments for women (they are), but ultimately none of those causes affects the outcome that women are underrepresented in STEM, and it isn't getting much better, folks.
Let's take a case in point. Nicole Sullivan started something nine years ago that is used almost universally in user-interface engineering today, and not only is she not "Twitter Verified", many people don't even know her name - and Wikipedia even deletes entries about OOCSS.
I appreciate Nicole's contributions to the industry in which I have made my career. When I use her work - whether it's CSSLint or OOCSS - I point out that it doesn't come from me, and given my experience working directly with her while she was consulting for PayPal, I know she would act in a similarly ethical manner.
Attribution is important, and as my career has progressed, I've seen it growing in importance. As anyone who suffers from Imposter Syndrome can tell you, love/hate relationships with ownership and attribution of accomplishments is a very real thing, but it's something the entire tech industry needs to get a grip on...and it's something that very definitely eludes us.
One final thought - as someone who has led teams of engineers, including those working on projects that resemble Frodo and Sam's journey through the Dead Marshes, morale is something rather fragile. One of the easiest ways to boost morale is to give everyone on the team the sense that their contribution matters - to, as we used to say at PayPal (and eBay), keep it human. One of the easiest ways to keep it human is to afford each other the dignity of recognizing the creator in them. Of course, the opposite is also true - take that dignity away and morale will soon follow.
Related posts: Visibility and Obscurity, My Pen Is My Tongue
Wednesday, February 28, 2018
My Pen Is My Tongue
A few days back I sent out a series of tweets about "self-documenting code". Self-documenting code is an idea that's been around for many years, like stories about the wee folk...and like the wee folk, no one has seen self-documenting code.
The short version of the tweet series is that if you're writing code, you should be writing documentation as well - it's really too important to skip. This post, however, is not really about self-documenting code, but rather about how to write documentation, and more specifically a certain piece of documentation that you should never neglect.
I entered the whole techno-geek world at a time when computer labs were a real thing. Punchcards and shelves full of binders stuffed with documentation were commonplace. Documentation isn't like that anymore. When Java came along, I was almost enthusiastic about JavaDoc because of the level of clarity it added when writing documentation. Now that nearly all code written by large, technologically advanced firms is in either Java or JavaScript (or ECMAScript), JavaDoc and JsDoc are - or should be - the de facto standards.
There is seldom serious argument against using one of these two tools anymore. There is disagreement about how the tools should be used, however. In the JsDoc community, one of the points of contention is the @author tag. To be clear, the JsDoc tool authors have stopped using the author tag, and no 'contributor' tag has been added. It might seem, from their use (or non-use), that these tags are unimportant - and in fact, that is a common perception, especially in light of the advances in source code management, or what we used to call "version control".
However, not only should you use the authorship tag(s), you should be encouraging everyone else to use them as well.
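To show what that looks like in practice, here's a hypothetical sketch - the function, parameters, author, and referenced file are all invented for illustration - of a JsDoc block carrying authorship alongside the usual annotations:

```js
/**
 * Normalizes vendor payment records before reconciliation.
 *
 * Chosen over a streaming approach because batches arrive complete; see the
 * design notes in docs/reconciliation.md for the alternatives considered.
 *
 * @author Jane Doe <jane.doe@example.com>
 * @param {Array<Object>} records - Raw payment records from the vendor feed.
 * @returns {Array<Object>} Records with normalized currency and date fields.
 */
function normalizePayments(records) {
  return records.map((record) => ({
    ...record,
    currency: record.currency.toUpperCase(),
    postedAt: new Date(record.postedAt).toISOString(),
  }));
}
```

The @author line is the part at issue here: it gives the reader of the documentation a person to ask about the "why" that the code alone can't carry.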
It would come as a surprise to no one if I reminded you that we write code to solve problems. Not only are we writing code to solve problems, we're writing code to solve complex problems. For example, no one would write code to add two numbers...doing simple calculations on large data sets, perhaps, but there is a "complexity bar", below which we wouldn't dream of using code to address a problem. The first step of writing code is understanding the problem you're trying to solve.
As a hypothetical example, let's assume you've inherited a project. You've read the documentation that describes the solution to the problem the code offers, but after getting a small understanding of the problem combined with the solution being used, you have a list of questions. Why, for example, was this particular solution chosen over others? You can make some assumptions, but wouldn't it be nice to be able to contact the author to ask for their insight? Code, even well-documented code, is only a partial story. Just like every fan of a book turned into a motion picture knows, even faithful adaptations leave out bits that someone thought important. The first reason to include authorship information in your documentation, then, is the abundance of information it can point you to.
The common response to this concept is that the authorship information is not needed in the documentation because source control software, like git (my personal favorite), can track that information and expose it through tools like blame.
This response, however, misses the purpose of such tools. Version control is tied to a specific change...in git parlance, a commit. Yes, you can look at a particular line and see the last change of that line - the author of that change - but that is qualitatively different information than the author of a solution...and that information is generally only the last change. In order to get authorship information you must follow changes to a specific line back through history, and if at any point history was squashed or rewritten, that information is gone. Version control tools are excellent at solving the problem the author intended them to solve, as the author understood the problem; do not expect another author's code to solve a problem as you understand it.
Another reason authorship is important is that we, as an industry...and really we as the human race...have difficulty acknowledging the contributions of women and persons of color. The list of women who have significantly contributed in STEM fields without attribution is long...far too long. Omitting attribution participates in that system of oppression by reinforcing the status quo. If we want to have any hope of disrupting patterns of discrimination - patterns that have existed for millennia - we must combat them at every turn.
A while ago I wrote a post called Visibility and Obscurity that described a situation in which attribution was changed on work I had done. In academia this is typically called plagiarism, and in most instances it's a punishable offense. Even outside academia, claiming to have done something you have not done can have serious consequences - Scott Thompson's resume scandal is evidence of that.
We should be writing code we can release with pride. Build things you're proud of and put your name on it...and give that same consideration to others. Amplify voices that are too often silenced or ignored - it does not diminish your contribution and it makes a difference. If it only makes a difference to the woman or person of color who finally has their contribution recognized - that's enough. If the only people who see an authorship reference are your employees, your colleagues, that's enough - they are important too.
Happy coding.
Monday, June 26, 2017
Visibility and Obscurity
Several years ago, when I first started at PayPal, the front-end development environment was still fairly young. As a result, tools that might have existed in other environments were missing.
As a veteran coder, I quickly grew tired of repetitive tasks - I wanted to be writing code - and set about writing scripts that developed into a significant tool suite. I shared that tool suite with both front-end and back-end developers (there were no full-stack developers in those days) and the use of those tools spread throughout the company, across the globe.
Out of that activity, there were two different experiences that bear examination. I'll address the later of the two experiences first.
In later years, as the development environment matured, another engineer - one responsible for establishing a standard development environment - took control of the tool suite (totally understandable) and put his name on my work (not understandable). The tools I had birthed and nurtured through numerous changes in the development environment, and continually promoted so they would be visible to all engineers - were adopted and their new foster father promoted himself as their creator when they became visible to upper management.
This is not an unusual situation. It happens all too often - much more frequently to women, of course - that someone other than the individual who has done the work takes credit, especially as the work becomes more visible.
That experience taught me two lessons. First, how you handle it says volumes to those who see the situation. Second, obscurity can be moments away, behind someone else's shadow, even when you think the visibility you've worked to cultivate over years is secure.
The second experience was much more pleasant. On a regular visit to a development office, I was introduced to an engineer who had recently joined the company. The engineer and I exchanged pleasantries - the normal "nice to meet you" bit - and then the engineer who introduced us told her my username (which was explicitly tied to the aforementioned tool suite)...and her expression and demeanor shifted dramatically. As someone who's never been in the "popular" club (yes, I've been a nerd and geek since before secondary school), that reception was quite an ego boost.
I had no real expectation of receiving such a reception - none of my long-time friends who'd seen me develop the tools reacted in the same manner - and it caught me by surprise. That reception also taught me a lesson - there will be some ways in which you're always more visible than you believe you are.
History is eager to write out of the picture those who have struggled to build great things - whether it's a woman who's made a significant contribution to our community (like Nicole Sullivan, the creator of OOCSS) or a man who is more interested in the work than the credit (like Nikola Tesla).
When you find yourself in these situations - situations of visibility and/or obscurity - how you navigate those shoals says volumes about your ambition, your drive, your values - such as integrity and trust - and what you know to be true about yourself. In those situations, may you have fair winds and following seas.
Happy coding.
Sunday, June 18, 2017
Am I too SASSy? Should I be Less?
Our parents' (or maybe grandparents') generation in the US had a little game they used to play - something I like to call "what were you doing when...". Of course, this was before we were bombarded constantly by news outlets - back when breaking news was covered simultaneously on all three television channels and on the radio.
What do you mean "what's a radio"?! Now, that's not funny.
Several generations have played this game - "what were you doing when they announced the war was over", "what were you doing when Kennedy was shot", "what were you doing when Nixon resigned", "what were you doing when Reagan was shot", "what were you doing when the Challenger exploded", "what were you doing when..." - and everyone had their own "what were you doing when" event.
There's a philosophical discussion to be had here regarding καιρός and χρόνος - but that's best served in another venue at another time...but I remember when CSS became a thing, just like I remember when JavaScript became a thing.
Yes, I am that old...but seriously, the reference to three television channels and listening to radio didn't give it away?!
I remember when CSS became a thing, and it was glorious. It was glorious until we figured out that we had to specify everything, multiple times. Suddenly, people who were used to writing code couldn't use variables...they just didn't exist. Then, around 2006, someone invented Sass, and a few years later we saw Less...and CSS preprocessors started moving across the web development world. We combined those wonderful tools with OOP concepts to get OOCSS (thanks to Nicole) and then came BEM and a bunch of other stuff. Seriously, who can keep up with all the abbreviations and acronyms these days?!
Here's what we lost with all the power of preprocessors, though - simple validation.
Sure, there are things like sass-lint, but they're generally geared to keeping developers from making simple mistakes - like using O instead of 0 in a hex value - not actual design screw-ups that not only give a definite impression regarding your ethics and what you value but can cost your organization millions. So your first take-away from this should probably be: check your linters to make sure they're covering accessibility as much as they can.
Being that nearly every company I've worked for in the last 10 years has moved to using a preprocessor, I decided it's time to write a rule for at least one of the linters that will check known, common accessibility problems (it's still in progress), and wow is it complicated.
Let me be clear - accessibility in CSS isn't complicated; the preprocessors make it that way. Linter rules are mostly written in JavaScript, which is a skill all good UI engineers should have in their toolbox. I've written linter rules before - I have several on my fork of csslint, in fact, and one of those is a pretty simple accessibility check - but linters for preprocessors, like sass-lint, use parsers that are significantly different from the parser written by Nick Zakas. When I settled on my first preprocessor rule being a rule for sass-lint (accessibility-issues), I found that navigating the Gonzales-PE parser gave me fits, until I noticed the Gonzales-PE Playground. Here's the second take-away: find a way to get insight into the object the parser creates - it will make a world of difference.
Now that I know, for instance, that the Gonzales-PE parser builds a ruleset object that contains both selectors and blocks, and that the block object contains declarations, I know how to tie a specific declaration to its selector ancestor.
From there it's pretty easy to eliminate pseudo-elements (which pose a different set of accessibility problems than "normal" elements do) and check the normal elements for common accessibility issues - a background color being specified without a text color, luminance contrast, content that's made inaccessible in any number of ways, default visual cues (like outline) being overwritten, and text being too damned small.
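As a rough sketch of what that traversal can look like - written against my reading of the gonzales-pe API rather than the actual published rule, so treat it as illustrative only:

```js
const gonzales = require('gonzales-pe');

// Parse SCSS source into an AST and flag rulesets that set a background
// color without also setting a text color - a common contrast pitfall.
function checkBackgroundWithoutColor(scssSource) {
  const ast = gonzales.parse(scssSource, { syntax: 'scss' });
  const warnings = [];

  ast.traverseByType('ruleset', (ruleset) => {
    // Collect the property names declared inside this ruleset's block.
    const properties = [];
    ruleset.traverseByType('property', (property) => {
      properties.push(property.toString().trim());
    });
    if (properties.includes('background-color') && !properties.includes('color')) {
      warnings.push('background-color declared without color in: ' + ruleset.first('selector'));
    }
  });

  return warnings;
}

console.log(checkBackgroundWithoutColor('.btn { background-color: #fff; }'));
```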
You can see what that rule looks like in my github sass-lint repo for the time being - I don't know if, or when, the rule will be added to the official linter.
Are there ways to improve accessibility with your CSS even though you're using a preprocessor? Yes! (A general clue here - mixins and partials can be a big help.) Unfortunately, you're probably never going to find the answers to problems you don't know you have, so fork your favorite linter and build a rule.
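To make the mixin hint concrete, here's a hedged SCSS sketch (names invented) that keeps color and background-color traveling together, so neither can be declared without the other:

```scss
// A mixin that forces text and surface colors to be specified as a pair.
@mixin text-on-surface($text, $surface) {
  color: $text;
  background-color: $surface;
}

.alert {
  @include text-on-surface(#1a1a1a, #ffe8e8);
}
```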
Happy coding.
Wednesday, May 31, 2017
Unnecessary Complexity: A case against ReactJs
I'll admit, even though the title of this post might imply otherwise, my experience with ReactJs is limited. Unlike a lot of UI engineers, I have been working primarily in pure HTML, CSS, and JavaScript since I began more than two decades ago. Oh, sure, I've used popular JavaScript “libraries” in the past – like YUI – and I've written more than a few over the years for some pretty big companies. I've also used some pretty popular “frameworks” – like BackboneJs – and combined them with other JavaScript tools (e.g., NodeJs, Express, and DustJs). Overall, though, even though I had no prima facie opinion of ReactJs, I've avoided it – in much the same way that I've avoided winning the lottery – but all that has changed with the current workscape as more and more companies adopt ReactJs.
I should mention that I'm generally not a fan of any website or application built without using Progressive Enhancement, but if you've read much of my writing you already know that, so the subtitle – A case against ReactJs – is a little misleading: this isn't just a case against ReactJs but against a practice of which the use of ReactJs is just one example.
I also must point out that I'm not a fan of the WSOD (White Screen of Death) that results from many JavaScript-driven pages. While that's more a general issue with client-side frameworks and how they're woven into a front-end architecture, it also applies to ReactJs. Nor am I a fan of loads of JavaScript that is dependency heavy, intercepts DOM events, encourages a development process that isn't progressive, or discourages graceful degradation.
So, why single out ReactJs when it's clearly not the only library to do this? Good question. It's popular. Massively popular. From the number of job descriptions including it as either a requirement or “nice-to-have”, it's pretty easy to see that without knowledge of or experience with this particular library (it's not a framework), it's becoming very difficult to even get past the CV screening phase.
Although ReactJs is not the only example of client-side libraries that dot the Interwebz landscape, as one of (arguably) the most popular libraries, it bears close examination. And although ReactJs isn't the only example of what's wrong with UI engineering – there are plenty of other examples – most of them boil down to the willingness of engineers to sacrifice the user experience in an effort to make their job easier.
The more layers are piled into increasingly complex systems, the more failure paths we introduce. We’ve learned that automation does not eliminate errors. Rather, it changes the nature of the errors that are made, and it makes possible new kinds of errors.
– Capt. Chesley B. “Sully” Sullenberger
I hear you, and yes, they do say that it's a poor craftsman who blames his tools, which means all this flak I'm directing toward ReactJs might be misplaced. Am I not just blaming a tool for poorly-written code? There are two distinct lines of response I would take. First, saying that it's a poor craftsman who blames his tools does not imply that the tools used are unimportant. No craftsman would wield a dull blade that made rough cuts when fine cuts were the goal. Every craftsman also knows that other famous tool-related expression: when the only tool you have is a hammer, everything looks like a nail. There is a tool appropriate to every job. Second, I would posit that libraries and frameworks – things like ReactJs – are not, in fact, tools.
If we look at the artisan analogy, the tools in that case are HTML, CSS, and JavaScript. Libraries like ReactJs, and even frameworks like AngularJs, are not really an artisan's tools – they're not the base ingredients that make up a dish; the closer analogy is that they're the prepared foods other artisans have made. As prepared foods, they make the kitchen's job easier, but at a cost, because they remain in the end product even after it's gone to market. They're the frozen, processed food of the Interwebz, encouraging people working in kitchens to masquerade as Beard Award winners.
Taking this foodie analogy further, as each new processed food is typically built upon other processed foods, the list of ingredients (dependencies) grows longer as “more layers are piled into increasingly complex systems”. These increasingly complex systems (dishes) are not only more fragile but also fraught with other issues, such as increased payload size (an issue for anyone connecting over a data-limited network, like mobile) and degraded performance, as all of the downloaded code executes in the browser. In the end, users get a bloated mess, but at least the “engineers” got the code out in time. They are literally, as the saying goes, “getting shit done”... and just as we wouldn't call someone working in a kitchen combining prepared foods into what must only loosely be described as a “dish” a “chef”, we shouldn't call those who create these monstrosities, only loosely termed a “user interface”, an “engineer”.
We must, as a community, get back to building actual user interfaces. We must, as a community, stop the madness. We must, as a community, become engineers again. Get the HTML out of your JavaScript. Get the CSS out of your JavaScript. Build agnostic, slim interfaces that everyone can use. We must, as a community, because we are the only ones who can.
Saturday, March 11, 2017
A11y Squared
I know this post has a rather unusual title - hopefully that's part of what's gained your attention. Because it has such an unusual title, I should spend a moment or two talking about it and why I chose it before continuing.
First, for those unfamiliar, A11Y is the numeronym we use when we talk about accessibility - an "a", followed by eleven letters, followed by a "y". In this instance, I'm playing off how the numeronym resembles the word "ally", because the numeral one looks like a lowercase L... which means A11Y squared is really an "Accessibility Ally".
So, what is an Accessibility Ally?
Merriam-Webster defines an “ally” as “one that is associated with another as a helper”, so a short answer to the question “what is an accessibility ally” is “someone that helps improve or enable accessibility”.
Why is that important? In First World countries, roughly 10 percent of adults under the age of 65 have a speech, hearing, visual, or motor impairment that significantly affects their life. That number increases to approximately 25 percent for adults between the ages of 65 and 75. Having a website that is not accessible impacts that group significantly.
At this point, organizations generally fall back on the reasoning that no one is complaining about the accessibility of their website, so there must be no problems. In the business world, a lack of complaints has a long history of being a poor indicator of performance. We know that roughly 10 percent of customers who experience an issue complain to the organization; the other 90 percent simply leave. In the UK, the users who experience an accessibility issue and leave rather than complain are estimated to represent nearly £12 billion (GBP) in lost revenue.
If the loss of revenue were not enough, there are legal ramifications to be considered as well. In the US, the number of cases regarding web accessibility filed in a Federal Court increases nearly four-fold every year. In 2016, there were approximately 200 cases identified as having been filed, so look for around 800 cases to be filed in 2017.
If legal cases and loss of revenue were not enough to make an organization reconsider its lack of accessibility, there are ethical issues to consider as well - accessibility is, at bottom, a matter of fairness and equal access.
There is a lot to accessibility on the web. It sounds simple enough to make a web page accessible - and that false impression is not helped by the general concept that anyone can make a web page - but there are a host of issues with which designers and developers must become familiar and there are a number of places in the design and development process where we can get off track. However, as we know from other development activities, writing code the right way is always less expensive than fixing it later.
So, become an Accessibility Ally - learn what needs to be done and do it. In the long run, learning what needs to be done and doing it takes less time than going back and fixing it (which you would have to do if a court case were filed), and it may even increase your revenue.
Happy coding.
Tuesday, May 31, 2016
The Danger of Going Above and Beyond
In a recent interview I was asked to describe the last time I went "above and beyond" for the customer. It seems a fair question - after all, we want people working with us who are willing to "go the extra mile" to serve the customer, right? I want to say both "yes" and "no".
I'll begin this journey by saying that for me, this question triggered a bit of a flashback to my first job not long after university. I was engaged to develop and implement the user training program for a new payment system. I spent hours testing the beta version of the software on all the platforms it was offered and wrote both the user and instructor documentation (including creating the graphics). As the second employee (the first was the person who wrote the software) I also did a few other things - all user-focused - like creating the inventory and supplies management system and some support software our customer service people used. One day as I was having lunch with the COO (Chief Operations Officer) he shared with me two management (related) ideals he had followed since his days with the US armed forces:
- Even if someone gives 100 percent - and few people give 100 percent to their employer - the most you will get is 80 percent, and the other 20 percent of the time they will not be productive
- Don't keep an employee after you hear them say "it's not my job"
We all need down time. Sometimes the down time we need is a break during the day after we've spent hours working a particularly complex problem (been there, lots). Sometimes the down time is a couple of days off after working 80 hours in a week (been there too... more than once). As managers and team members we need to recognize that none of us gives 100 percent. Are there some team members who are "more productive" than others? Yes, and no. It's much more common that we're all equally productive, just producing different things. We all need to recognize more than our own contributions (or our own kind of contributions - the contributions of team members with roles like ours), and we all need to adjust our perception a little so that we're recognizing not only the 20 percent of the time others are "not productive" but our own 20-percent time as well.
That second bit of advice still seems harsh to me, all these years later, but I can see how that construct, introduced at such an early stage in my career, shaped my perspective. That does not mean I mistakenly think that anyone can do anything or that all members of a team are interchangeable.
Team members each have their own strengths and they're not interchangeable. To put this in role-playing game terms, all good teams need a rogue (to help the team overcome obstacles and protect the team from danger), a fighter (to take care of the danger when it comes), and a healer (to restore the team after a dangerous encounter). This same general format works whether you're crawling a dungeon or launching a product (more about that another time).
If, in the midst of danger, the rogue says "hey, it's just my job to tell you about the dragon, not fight it", the rogue may find that he is crunchy and good with ketchup. If, in the midst of danger, the healer says "my job is to heal the bite and claw marks after the battle", she may find that she does not survive that long. In the same way, an engineer who tells the dev ops team that a failure was caused by "something in the way the build was deployed, and that's not my job" is not likely to survive long - and rightfully so.
As I said earlier, I stepped out of my role developing and implementing the user training program to work on other, related customer-facing issues at that first job. Was that going "above and beyond"? Some might say yes, but the correct answer is "no, not really". Was it going "above and beyond" when the COO instructed me to act as courier to redistribute the workload for another department in the same company? No. My job is to serve the customer, and in this case, as in many cases, "the customer" was my employer.
One of the things we have forgotten, misplaced, or perhaps discarded is not the mistaken ideal that "the customer is always right" but the very correct ideal that the customer deserves satisfaction. There was a time when companies employed slogans that were some version of "whatever it takes" but now it seems we have descended into a "give us a good review on Yelp/Google/Facebook/<insert your marketing here>" approach and we have lost a portion of what we were.
And here I am, back at my original premise - saying both "yes" and "no" to "going the extra mile" - and so I'll sum up as clearly as I am able. As soon as you define something as "above and beyond" you've categorized yourself and everyone else according to your perception of both productivity and job description. That, my padawan, is a dangerous road. A road visited by those who are eager to steal your reputation, your customers, and your livelihood. Don't travel that road.
Happy coding.
Friday, November 14, 2014
The Right Answer
Several years ago – in the late 1980s – I was in a graduate MIS class and the professor posed a question to which, he made clear, there were multiple answers. After a class discussion, the professor presented the answer he considered to be the right one. (If you haven't read the problem and its answer – reproduced at the end of this post – do so now.) As I've recalled this event, I've been reminded of a few things, and I've tried to determine what the greater lesson has been over the years.
One of the most important lessons I learned from that experience is that there are typically several answers to a problem, with varying degrees of difficulty, precision, and accuracy. Sometimes choosing the answer is easy – other times it would seem it is not. For example, if we're choosing which compression algorithm to use, we can relatively easily make the determination based upon whether we want to prioritize speed or the compression ratio; however, if we're trying to balance time-to-market, performance, quality, and user experience, the answers are not so easily reached. In real-world scenarios, we may suggest to our hypothetical employer that they fold the gold bar 5 times and make the two cuts at the ends – the two cuts sever all five folds, yielding six one-inch pieces – an answer that maintains time-to-market and improves the user experience (the worker has immediate full use of everything they receive). But is that the right answer?
Any time we find ourselves asking if an answer is the right answer, we must look beyond the surface of any potential decisions and ask what we would actually be doing – in language terms, what is the connotation of this conversation rather than just what is the denotation. Phrased another way – what are the patterns we are introducing intentionally and what patterns are we introducing unintentionally? Are we just doing what is good for our organization or are we promoting a greater good or are our customers (or users) seeing the benefit?
In the world of software development, managers and business leaders have been toying with the development triangle for years, trying to squeeze out the best solutions, and the fact that we're now dealing with web-based applications instead of desktop-based applications makes little difference. Granted, keeping the three legs of the development triangle balanced is difficult – and it's harder for some teams than others (but that discussion will have to wait for another post). It's made still more difficult by the fact that none of us like to have the scope of our project constrained by the balance of these three legs. Unfortunately, what many organizations do is prioritize their own wants and needs over the needs of their customer. There certainly are times when this is appropriate – the customer simply cannot always be right – but it happens far too often to be valid, especially among Internet companies where development speed – "enhanced" by bloated code like Bootstrap and jQuery or (even worse) user-agent-dependent frameworks like Angularjs – is prioritized over a user experience that people argue simply must have all the bells and whistles. Leadership in these cases is somewhat like the obstetricians who argue that they must have the machine that goes ping, except in the real world these cases are not intentionally comedic. As a result, many leaders who have prioritized development over user experience, or who have tried in vain to balance all the legs of the development triangle without constraining scope, believe various frameworks are a solution they can leverage – and many have attempted just that, going down the Angular.js rabbit hole, for example – but is that the right answer?
No, it's not the right answer. Is there a better answer? Yes. In fact, I would posit that there is a right answer for web-based development – one that prioritizes users over development – but it's one that few developers like or want to admit, because it's not, as we say, cool. The right answer is Progressive Enhancement. Simply put, Progressive Enhancement starts with HTML – semantic HTML, which means using the appropriate tags as well as the ARIA attributes that make a document as readable as possible by both humans and machines (I'm looking at you, developers who use <i> instead of <em>, <b> instead of <strong>, and never use ARIA states and properties). After the document is created, it is styled using CSS – again paying attention to accessibility (e.g., using clip instead of display or height to hide content) – and finally, unobtrusive JavaScript is layered on top of all the rest in a way that pays attention to performance.
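As a minimal sketch of that final JavaScript layer – assuming a server-rendered search form that already works without script (the endpoint, ids, and field name here are all hypothetical) – the script intercepts only what it can genuinely improve and falls back to the normal request otherwise:

// Enhance a server-rendered form (say, <form id="search" action="/search">
// with an input named "q" and a #results container); without script, the
// form still submits and the server still renders the page.
var form = document.getElementById('search');

if (form && window.XMLHttpRequest) { // feature-detect before enhancing
  form.addEventListener('submit', function (event) {
    event.preventDefault(); // intercept only because we can do better
    var xhr = new XMLHttpRequest();
    xhr.open('GET', form.action + '?q=' +
      encodeURIComponent(form.elements.q.value));
    xhr.onload = function () {
      if (xhr.status === 200) {
        document.getElementById('results').innerHTML = xhr.responseText;
      } else {
        form.submit(); // fall back to the full-page request
      }
    };
    xhr.onerror = function () { form.submit(); };
    xhr.send();
  });
}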
Developing in this manner ensures that users will always be able to access your content – be it informational or commercial – regardless of whether someone's user-agent supports CSS3; whether the user-agent supports JavaScript; whether someone pushes code live without having tested it and introduces an error; whether JavaScript is blocked, intentionally or unintentionally (and yes, it has happened – a "parental filter" operated by an ISP in the UK once blocked access to jQuery via CDN); or whether you're running third-party code that is not up to par.
Of course, one of the arguments that I've heard repeated is that there aren't that many users with JavaScript disabled and Progressive Enhancement takes too long, both to write and when rendering (because JavaScript-based rendering is much faster). So, let me just address those arguments – and let me say that this reasoning is not based on some ephemeral justice-based ideology but on solid experience building web pages. (If you're really curious about my work experience, read "A machine-readable resume", where there is both an image of a portion of my CV and a link to the full version.)
While the actual percentage of users browsing the Internet with JavaScript disabled is low (the number is arguably around 1%), that number does not count all the users affected by stuff that is broken by error-infested, untested code, nor does it count users affected by not having access to specific libraries because their ISP decided those libraries were potentially dangerous.
Neither of those groups takes into account people using accessibility software, for whom an otherwise acceptable web page offers a broken user experience. Granted, most accessibility software works acceptably with JavaScript, but without special attention to making your JavaScript-enhanced page accessible – e.g., by using aria-live or shifting focus to updated content – users of that software are still left with a broken experience. Because of the way accessibility software works with user-agents, the user-agent is not technically JavaScript-disabled, but the user experience might as well be.
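Here is a brief sketch of those two techniques – the element ids are hypothetical, and the live region must exist in the markup from page load:

// Announce in-place updates; assumes a live region such as
// <div id="status" aria-live="polite"></div> present since page load.
function announce(message) {
  // Changing the content of an aria-live region prompts screen readers
  // to announce the new text without moving the user's focus.
  document.getElementById('status').textContent = message;
}

function focusUpdatedContent(id) {
  var container = document.getElementById(id);
  container.setAttribute('tabindex', '-1'); // programmatically focusable
  container.focus(); // move the reading position to the new content
}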
Yes, the number of users whose user-agents won't execute the contents of a SCRIPT tag is around 1% – and 1% of nearly 3 billion is still a lot – but there is a significant difference between something potentially running JavaScript and something actually running JavaScript – a difference any philosopher should immediately recognize.
As for pages built using Progressive Enhancement not rendering as fast as those built using client-side rendering, I'm going to just come out and say that cannot possibly be true in anything other than a test that is non-analogous to real-life. I suppose client-side rendering might be faster if you were delivering a page with a lot of duplicated content, but in practice delivering a template, a client-side rendering library, and JSON data is not a much smaller payload because most content isn't duplicated. What we've really done with this approach is break up the content – the most important piece – into separate documents. To add insult to injury, there are issues of network latency for each of those three requests (for content), and potentially disastrous re-flow issues, not associated with CSS, as containers are loaded with HTML.
If you coerce a multiple-page application into a single page by delivering partial in-document updates – the way it was done in PayPal checkout products, for example – you may see some actual performance improvements and users may perceive some performance improvements, though neither is guaranteed. However, in the case of in-context updates you have to take special care to make sure your page/app still plays well with accessibility software, or you're not only shutting out users, you may be violating the law.
As an aside, adding all the accessibility hooks and building out any inadequacies in the various libraries and then testing all the extra code certainly has the potential to destroy any improvement you may gain by avoiding Progressive Enhancement.
Yes, saying you're improving the development experience (or your engineers' lives) is seen as sexy and cool by the engineers building your products, and prioritizing time to market is sexy to those who have invested in your business, but as anyone who has pursued even the earliest steps in an Economics program knows, those words don't necessarily mean what you think they do – what matters is what your users think, and on the web it's a mistake not to prioritize the user experience over other things.
Our users may not always be right, but when we're trying to solve their problem, more often than not they have the right answer.
Note: if you'd like to read another's thoughts on a similar topic, Nicolas Bevacqua has a few thoughts, and has prompted a few comments as well.
Problem: You have hired a worker who is to be paid 1 inch of gold each day. You have a single bar of gold that is 6 inches and can only cut it twice. Where do you make the cuts to pay the worker exactly the amount they have earned each day - what size are the three resulting bars?
Answer: Cut the bar at 1 inch from one end and 3 inches from the other end, resulting in 3 bars that are 1 inch, 2 inches, and 3 inches. Pay is as follows: Day 1 - the 1" bar; Day 2 - the 2" bar and receive the 1" bar back; Day 3 - the 3" bar and receive the 2" bar back; Day 4 - the 1" bar; Day 5 - the 2" bar and receive the 1" bar back; Day 6 - the 1" bar.
Friday, August 29, 2014
The Party's Over
On the way in to the office today I was listening to the radio. No, not "internet radio", real radio - with a real DJ and everything. There was an interview with Paul McCartney, who was asked, in light of his upcoming appearance at Candlestick Park, why the Beatles stopped performing...his response was "it wasn't fun anymore". Something about the culture of money, fame, and ostensibly doing what they loved changed and it wasn't fun anymore, and that was enough to make them walk away.
I know that the Beatles, individually, were famous and talented enough that they were able to enjoy 'solo' careers, and that even if they never toured or recorded again, they could probably have survived quite comfortably, and that's no small comfort (for them)...but still - walking away from millions of dollars and instant recognition with no guarantees that you will be able to do what you love again...it's nearly mind-blowing.
You might find yourself asking "why is this important, why is he bringing this up" - and with good reason. After all, my most recent posts on this blog have been at least somewhat technical...and there haven't even been any of those for a significant span of time (at least in blog years). So, your curiosity is understandable...and I'll get to the why of the timing eventually.
Back to the original train of thought - "it wasn't fun anymore". When I heard this come over the airwaves, it reminded me of something I'd read in April of this year - Brian Chesky Note 1 relating the advice of Peter Thiel Note 2 when he funded Airbnb:

Don't fuck up the culture.
– Peter Thiel, 2012
There have been a lot of people considering Thiel's advice - there are around 31,000 document matches when searching for Thiel's exact words - and now here is my journey down that particular rabbit hole.
When I was at university, a question arose in the Metaphysics course - one intended to highlight the division between essentialists and existentialists - framed not as a question about people but about objects. Consider that we're rebuilding a sailing ship and we tear it down to the keel and replace all of the boards. When our work is complete, is it the same ship? If we think of it in terms of automobiles, it would not fit the definition of being the same car, because the identifier for the car - the VIN Note 3 - no longer accurately represents information about the vehicle. So, is it a question of how much we change something, or of what we change? Are organizations analogous to objects? How does this apply to organisms?
Corporations are not people, but they are organisms, with values and personality that govern their actions, for good or evil. If we go back to the question of 'how much change' or 'what kind of change' makes something no longer identifiable as itself, we can think of situations in which we've thought - even though we may not have formally defined it - that some person we know has changed in some way and is no longer the same person; we may even readily make the claim using that exact phrase. We should consider corporations subject to these same rules of behavior and personality. In fact, we can most likely each think of an organization - such as a business or philanthropic organization - that after an unsettling experience left us with the thought "they would never have done that in the past" or "they sure have changed."
Thiel has most likely seen damaged cultures time and time again in organizations that have come to him for venture funding. He certainly sees it not only as a possible problem, but as a likely one. This, too, stands to reason, as the common thinking is that as an organization grows, the culture changes as efficiency of scale is achieved in various areas.
The problem that I imagine Thiel sees - and yes, I am putting words in his mouth to an extent - is that when the culture changes, people leave - a situation that is also accepted as not only survivable, but normal. In reality, however, the turnover caused by culture change is dangerous for an organization - it's like having an illness that has not yet been diagnosed, one with symptoms that you decide you can live with but that just might kill you. Part of the reasoning here is that culture change is generally a self-reinforcing loop - the sort of loop that, once started, is not only difficult to stop, but also difficult to control or, in some cases, to even recognize.
Given the semi-private nature of an organization's culture, the earliest and greatest impact of damage to that culture will be on those within the organization. Why is this important? People are generally not motivated by money - culture is what motivates people - and it motivates them to do amazing things, like work 80 hours a week for several weeks at a time even though they're only compensated for 40, meet ridiculous deadlines, or nearly violate the laws of physics to deliver a high-quality, low-cost product quickly - whereas incentive programs generally don't work. Note 4 Because of the tight coupling between culture and other areas - like motivation and productivity - changes to culture can have a dramatic effect on the organization as a whole.
At this point, having seen people violate Thiel's advice, we might be tempted to think the culture of an organization is permanently damaged any time it changes dramatically. There certainly are people who have departed any number of organizations thinking just this thought. A brief review of companies on a site like Glassdoor gives insight into the number of people working for a company who believe "changed" is equivalent to "damaged". However, here we have to stop and notice that Thiel's advice wasn't "don't have a damaged culture"; his advice was "don't damage the culture". The linguistic difference between those two messages is smaller than it should be, given that they are drastically different concepts.
In one version, culture is in a damaged state; in the other, it's different than what it was. We need look no further than our own history of romantic relationships to see the truth of the premise that these are different, regardless of our willingness to admit it in the pain and grief that comes immediately after the recognition of how we, or the other, have changed. Just because someone or something you love - be it a person or an organization - changes and you find they are now intolerable (to you), that does not mean that they are therefore befouled or damaged - they can be a perfectly nice, good person (or organization) and still not be your cup of tea. Changes are significant, however, because once you've changed the culture, people no longer have the company they love, and people not only lose motivation, people start leaving - whether they're customers or employees - and that's seldom a good thing. Note 5
When the people most invested in the success of an organization - like employees - leave because the culture has been damaged, there are likely to be repercussions that ripple outward in ever-widening circles, like those created when a pebble is dropped into a pond. If the damage to the organization's culture is significant enough, it's just a matter of time before that trend carries outward as far as customers. Whether the organization can repair the damage and weather the storm depends on a variety of factors that are outside the scope of this brief essay, but in every case, the nature of the business will be profoundly changed. Whether that change is for good or ill is something only time can tell. If your organization survives by knowing its customers (or users), such turbulence can be exceedingly dangerous, and it is unwise to assume that it is not.
Now, to address the question of why this post, now.
Recently there has been a lot of interest in why I left a position I held for nearly a decade. Here is the best brief explanation I can offer - I left for the same reason the Beatles stopped touring - it wasn't fun anymore. Unfortunately, that explanation has frequently proven inadequate and I have developed a longer, but still brief, explanation.
Several years ago, I started what I believed could potentially be my last job in the industry. The work was challenging and interesting, the people were, as they say, "wicked smart" and immensely talented, the product was an economic product geared toward serving under-served populations, and the general corporate culture was based on four basic values that resonated with me. It was, in a lot of ways - very nearly every way in fact - the perfect fit. Over the course of the next several years, things changed - as things do. The work became mundane as my skills were under-utilized, the vast majority of people moved on, and the product and culture changed significantly. The organization was not the same organization with which I had fallen in love, and I finally came to recognize that all the perks and incentive programs were metaphorical chains that bound me in place.
As a result of changes that transformed the organization from something I loved into something that I didn't, I left - and yes, it was before finding another gig - because I'm a firm believer that when you see it's time to go, you put your affairs in order, raise your sails, and go. Now, three months past my departure, I still have a sense of what I've lost, and yet there are times that, like the song says, I'm "too relieved to grieve" Note 6 because, in the end, golden chains are still chains (Robert's Rule #33).
I know that the Beatles, individually, were famous and talented enough that they were able to enjoy 'solo' careers, and that even if they never toured or recorded again, they could probably have survived quite comfortably, and that's no small comfort (for them)...but still - walking away from millions of dollars and instant recognition with no guarantees that you will be able to do what you love again...it's nearly mind-blowing.
You might find yourself asking "why is this important, why is he bringing this up" - and with good reason. After all, my most recent posts on this blog have been at least somewhat technical...and there haven't even been any of those for a significant span of time (at least in blog years). So, your curiosity is understandable...and I'll get to the why of the timing eventually.
Back to the original train - "it wasn't fun anymore". When I heard this come over the airwaves, it reminded me of something I'd read in April of this year - Brian Chesky Note 1 relating the advice of Peter Thiel Note 2 when he funded Airbnb.
Don't fuck up the culture.
Peter Thiel, 2012
There have been a lot of people considering Thiel's advice - there are around 31,000 document matches when searching for Thiel's exact words - and now here is my journey down that particular rabbit hole.
When I was at university in the Metaphysics course, a question intended to highlight the division between essentialists and existentialists arose, framed not as a question about people but objects. Consider that we're rebuilding a sailing ship and we tear it down to the keel and replace all of the boards. When our work is complete, is it the same ship? If we think of it in terms of automobiles, it would not fit the definition of being the same car, because the identifier for the car - the VIN Note 3 - no longer accurately represents information about the vehicle. So, is it a question of how much we change something or what we change that makes the determination? Are organization analogous to objects? How does this apply to organisms?
Corporations are not people, but they are organisms, with values and personality that govern their actions, for good or evil. If we go back to the question of 'how much change' or 'what kind of change' makes something no longer identifiable as itself, we can think of situations in which we've thought, even though we may not have formally defined it, that some person we know has changed in some way and now they are not the same person - we may even readily claim this using this exact phrase. We should consider corporations subject to these same rules of behavior and personality. In fact, we can most likely each think of an organization - such as a business or philanthropic organization - that after an unsettling experience left us with the thought "they would have never done that it the past" or "they sure have changed."
Thiel most likely has seen organizations that have damaged their culture time and time again in organizations that have come to him for venture funding. He certainly sees it not only as a possible problem, but one that is likely as well. This, too, stands to reason as the common thinking is that as an organization grows, the culture changes as efficiency of scale is achieved in various areas.
The problem that I imagine Thiel sees - and yes, I am putting words in his mouth to an extent - is that when the culture changes, people leave - a situation that is also accepted as not only survivable, but normal. However, in reality, the turnover caused by culture changes are dangerous for an organization - it's like having an illness that has not yet been diagnosed - one with symptoms that you decide you can live with but that just might kill you. Part of the reasoning here is that culture changes are generally a self-reinforcing loop - the sort of loop that once it's started is not only difficult to stop, but also difficult to control or in some cases to even recognize.
Given the semi-private nature of an organization's culture, the earliest greatest impact of damage to an organization's culture will be to those within the organization. Why is this important? People are generally not motivated by money - culture is what motivates people - and it motivates people to do amazing things - like work 80 hours a week for several weeks at a time even though they're only compensated for 40, meet ridiculous deadlines, or nearly violate the laws of physics to deliver a high-quality, low-cost product quickly - whereas incentive programs generally don't work. Note 4 Because of the tight coupling between culture and other areas - like motivation and productivity, changes to culture can have a dramatic effect on the organization as a whole.
At this point, having seen people violate Thiel's advice, we might be tempted to think the culture of an organization is permanently damaged any time it changes dramatically. There certainly are people who have departed any number of organizations thinking just that. A brief review of companies on a site like Glassdoor gives insight into the number of people working for a company who believe "changed" is equivalent to "damaged". Here, however, we have to stop and notice that Thiel's advice wasn't "don't have a damaged culture"; his advice was "don't damage the culture". The linguistic difference between those two messages is subtle, but they are drastically different concepts.
In one version, culture is in a damaged state; in the other, it is simply different from what it was. We need look no further than our own history of romantic relationships to see the truth of the premise that these are different, regardless of our willingness to admit it amid the pain and grief that come immediately after the recognition of how we, or the other, have changed. Just because someone or something you love - be it a person or an organization - changes, and you find they are now intolerable (to you), does not mean they are therefore befouled or damaged - they can be a perfectly nice, good person (or organization) and still not be your cup of tea. Changes are significant, however, because once you've changed the culture, people no longer have the company they love; they lose motivation and start leaving - whether they're customers or employees - and that's seldom a good thing. Note 5
When the people most invested in the success of an organization - like employees - leave because the culture has been damaged, there are likely to be repercussions that ripple out in ever-widening circles, like those created when a pebble is dropped into a pond. If the damage to the organization's culture is significant enough, it's just a matter of time before that trend carries outward as far as customers. Whether the organization can repair the damage and weather the storm depends on a variety of factors outside the scope of this brief essay, but in every case, the nature of the business will be profoundly changed. Whether that change is for good or ill is something only time can tell. If your organization survives by knowing its customers (or users), such turbulence can be exceedingly dangerous, and it is unwise to assume that it is not.
Now, to address the question of why this post, now.
Recently there has been a lot of interest in why I left a position I held for nearly a decade. Here is the best brief explanation I can offer: I left for the same reason the Beatles stopped touring - it wasn't fun anymore. Unfortunately, that explanation has frequently proven inadequate, so I have developed a longer - but still brief - explanation.
Several years ago, I started what I believed could potentially be my last job in the industry. The work was challenging and interesting, the people were, as they say, "wicked smart" and immensely talented, the product was geared toward serving under-served populations, and the general corporate culture was based on four basic values that resonated with me. It was, in a lot of ways - very nearly every way, in fact - the perfect fit. Over the course of the next several years, things changed - as things do. The work became mundane as my skills were under-utilized, the vast majority of the people moved on, and the product and culture changed significantly. The organization was not the same organization with which I had fallen in love, and I finally came to recognize that all the perks and incentive programs were metaphorical chains that bound me in place.
As a result of changes that transformed the organization from something I loved into something that I didn't, I left - and yes, it was before finding another gig - because I'm a firm believer that when you see it's time to go, you put your affairs in order, raise your sails, and go. Now, three months past my departure, I still have a sense of what I've lost, and yet there are times that, like the song says, I'm "too relieved to grieve" Note 6 because, in the end, golden chains are still chains (Robert's Rule #33).
Notes and references
Links in the notes and references list open in a new window.
- Brian Chesky is the founder and CEO of Airbnb. You can learn more about him on Wikipedia.
- You can learn more about Peter Thiel, an outspoken entrepreneur and venture capitalist, on Wikipedia.
- The Vehicle Identification Number (VIN) is an alphanumeric sequence used to uniquely identify a vehicle.
- A summary of a journal article in the Harvard Business Review says it clearer than I've seen it said before - "according to numerous studies in laboratories, workplaces, classrooms, and other settings, rewards typically undermine the very processes they are intended to enhance."
- There are a number of reasons it's not good when people leave an organization; brain drain and the ills associated with turnover - hiring costs, overtime costs, low morale, and low productivity, to name just a few - are two of the big ones.
- "Let It Go" (Kristen Anderson-Lopez and Robert Lopez) as performed by Demi Lovato.
Monday, December 16, 2013
You will be assimilated
Freedom is irrelevant. Self-determination is irrelevant. You must comply.
Borg Collective
You will be assimilated. Resistance is futile.
Hugh
In "Conversion and Acquisition", I wrote about the inverse relationship between conversion and acquisition, possible causes of the inverse relationship and how it might be fixed. In this (much shorter) post, we're going to look at this same issue from another angle.
Let's assume that you have not implemented a forced acquisition method and you ask yourself "what do I know about my customers"; then ask the same question after you implement a forced acquisition method - will your answer be the same? Unlikely. The motivations and values of repeat customers are likely different from those of occasional users. Consider the simplest of these differences: repeat customers have a vested interest, to at least some degree, in your continued operation, whereas those who intend to be single-use visitors are not invested in your business to any degree.
Why is this important? Every day we make assumptions and decisions based on what we know about our users. If our representative sample changes, those assumptions and decisions must also change. There may be simple assumptions about the design of a web page that are incorrect - assumptions we can address with A/B testing - but what if there are assumptions associated with the risk of a transaction, or possible fraud? Those are considerably more difficult to test and correct.
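To make the sample-shift problem concrete, here is a minimal sketch - with hypothetical segments, numbers, and function names, not taken from any real system - of how an estimate calibrated on one mix of users goes stale when forced acquisition changes that mix:

```typescript
// Hypothetical illustration: a fraud-risk estimate tuned on one user mix
// becomes miscalibrated when forced account creation changes the mix.

interface Segment {
  share: number;     // fraction of transactions from this segment
  fraudRate: number; // observed fraud rate within the segment
}

// Assumed numbers, for illustration only.
const beforeForcedAcquisition: Segment[] = [
  { share: 0.7, fraudRate: 0.002 }, // repeat customers: invested, low risk
  { share: 0.3, fraudRate: 0.02 },  // occasional users: uninvested, higher risk
];

// After forcing sign-up, many occasional users leave or create throwaway
// accounts, so the observed mix (and its behavior) shifts.
const afterForcedAcquisition: Segment[] = [
  { share: 0.5, fraudRate: 0.002 },
  { share: 0.5, fraudRate: 0.03 }, // throwaway accounts look riskier
];

// Expected fraud rate across the whole population.
function blendedFraudRate(segments: Segment[]): number {
  return segments.reduce((sum, s) => sum + s.share * s.fraudRate, 0);
}

console.log(blendedFraudRate(beforeForcedAcquisition)); // 0.0074
console.log(blendedFraudRate(afterForcedAcquisition));  // 0.016
// Any rule tuned to the "before" rate now misjudges risk across the board.
```

An A/B test on page design would never surface this kind of drift; the population itself has changed underneath every assumption you calibrated against it.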
In short, the more you rely on knowing your users, the more contraindicated a forced acquisition method is.
Oh yeah, it's a bloody evil thing to do, too...just look at the Borg.
Saturday, November 9, 2013
What's in a name?
Women face a number of barriers in science-based endeavors, perhaps more so than in other fields. Note 1 This matter is not really even open for debate. What is up for debate is whether or not it's justified and whether or not we will actually do anything about it.
Much debate surrounds the causes of the gender disparity evident in many fields. Some argue that girls and women do not pursue STEM Note 2 educational programs and therefore either show a lack of interest in the topics or aren't generally qualified to pursue the programs. This is, almost certainly in part, due to traditional gender roles, but it cannot be limited to that, as the limitations based on traditional gender roles have decreased over time as societal norms have adjusted.
Another portion of women's lack of pursuit of STEM programs is almost certainly self-inflicted doubt. This can be seen in a 1946 exchange between Einstein (yes, that Einstein) and a South African girl named Tyfanny. In corresponding with her, after she revealed her gender, Einstein said,
I do not mind that you are a girl, but the main thing is that you yourself do not mind. There is no reason for it. Note 3
Einstein recognized, in Tyfanny's words, the self-doubt resulting from generations repeating the societal refrain "you're a girl".
These problems are significant, and we must fight tenaciously to overcome them; however, these facts alone are not enough. These are facts of history - facts that society has dealt with for years - and yet, one might argue that while female representation is much lower in STEM-related fields, it is significantly imbalanced in many fields. Note 4 Why is this? Why are we not convinced that science, technology, engineering, and mathematics, especially, are about ideas and not something as trivial as gender? Are we really so blind as to be unconvinced that women can think as well as men?
I refuse to believe that it is something in our conscious behavior, and I posit that our bias goes much deeper than we originally thought. We have convinced ourselves that even if the larger populace does not subscribe to a meritocracy, those of us in STEM-related fields operate well within one - and in this we have deceived ourselves.
In what should have been a mind-blowing study published more than a decade ago, Rhea E. Steinpreis, Katie A. Anders, and Dawn Ritzke revealed that both men and women demonstrate gender bias in hiring recommendations. Note 5 The subjects of that study were all PhD-level psychologists - people who should recognize that science is about ideas and not gender, people who should recognize trivial and non-trivial information for what it is. In a similar study, published just last year, it was demonstrated that even among science faculty at research-intensive universities, gender biases favor male students. Note 6
What these two studies illuminate is that our gender bias is so thoroughly ingrained that even individuals trained to deal directly with data - identifying what is trivial and what is not on a daily basis - are incapable of suppressing something as trivial and unreliable as name-based gender bias. Before anyone starts with the 'academia vs. real-world' arguments, a cursory search on this topic yields some very interesting anecdotal evidence supporting the same hypothesis. Note 7
We are, like the characters in Shakespeare's Romeo and Juliet, using names as a priori judgments. These two studies also speak volumes about our decision quality, our hiring and staffing policies, our integrity and values, our knowledge about our ability to evaluate people and ourselves, and even our ability to manage diversity.
When otherwise qualified candidates are eliminated from the process based upon their names, it's easy to see where a significant portion of the disparity originates. We can work to correct gender stereotypes and eliminate gender roles from early education; we can do a number of things to encourage girls to enjoy and pursue STEM education and programs; we can even build gender-based groups that encourage and promote not just gender balance but women in the workforce on university and corporate campuses across the country. None of our efforts to increase education, ban words, or anything of the sort will mean anything until we eliminate the gender bias that is demonstrated to occur at the first step in any selection process.
Of course, one of the worst parts of this situation is that even though this has been a known issue for more than a decade, we've done nothing to change it - even though doing so is incredibly easy. How easy? Here are four simple policies that every organization could adopt with little to no impact on their schedules or bureaucracy, and which would alter the landscape significantly (a brief code sketch of the second and third policies follows the list):
- Publicize the existence of gender bias in relation to CVs and resumes, and what is being done to compensate for or correct it.
- Replace names with unique codes on all CVs and resumes that are submitted, prior to their being screened.
- Restrict access to names and codes during the selection process.
- Identify discussion of a candidate's name as especially problematic and a punishable offense.
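As promised above, here is a minimal sketch of the second and third policies. The field names, redaction approach, and helper functions are all hypothetical - an illustration of the idea, not a production screening system:

```typescript
// Sketch of blind screening: replace applicant names with opaque codes
// before screening, and keep the name-to-code mapping away from screeners.
import { randomUUID } from "crypto";

interface Application {
  name: string;
  resumeText: string;
}

interface BlindApplication {
  code: string;       // opaque identifier shown to screeners
  resumeText: string; // reviewed content, with the name removed
}

// Held by HR only; screeners never see this map (the third policy).
const codeToName = new Map<string, string>();

function anonymize(app: Application): BlindApplication {
  const code = randomUUID();
  codeToName.set(code, app.name);
  return {
    code,
    // Naive redaction for illustration; real resumes would need more
    // careful scrubbing (headers, email addresses, and so on).
    resumeText: app.resumeText.split(app.name).join("[candidate]"),
  };
}

// After screeners select codes on merit alone, HR re-identifies them.
function reveal(code: string): string | undefined {
  return codeToName.get(code);
}

const blind = anonymize({
  name: "Jo Smith",
  resumeText: "Jo Smith - ten years of front-end development.",
});
console.log(blind.resumeText);   // "[candidate] - ten years of front-end development."
console.log(reveal(blind.code)); // "Jo Smith"
```

The point of the design is separation of concerns: the people evaluating merit never hold the information that triggers name-based bias, and the re-identification step happens only after the merit decision is made.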
What's in a name? That which we call a rose by any other name would smell as sweet... As it turns out, a name carries more than enough information - and if you don't believe me, just ask Romeo.
Romeo and Juliet, Act II, Scene II
Notes and references
Links in the notes and references list open in a new window.
- You can find the research regarding the types of barriers women in science face, published by AAUW in "Why So Few? Women in Science, Technology, Engineering, and Mathematics", at http://www.aauw.org/research/why-so-few/
- Science, Technology, Engineering, and Mathematics
- This tidbit is revealed in "Dear Professor Einstein: Albert Einstein’s Letters to and from Children" by Alice Calaprice, along with views on gender's relationship to the study of science that were far ahead of his time - i.e. it doesn't matter.
- One recent edition of philosophers' sound-bites (Philosophy Bites, by David Edmonds & Nigel Warburton) references 44 males and 8 females - a paltry 15%.
- The study is called "The Impact of Gender on the Review of the Curricula Vitae of Job Applicants and Tenure Candidates: A National Empirical Study" and you can easily find it online and read it in its entirety - which I recommend.
- The study is called "Science faculty’s subtle gender biases favor male students", by Corinne A. Moss-Racusin, John F. Dovidio, Victoria L. Brescoll, Mark J. Graham, and Jo Handelsmana. You can read it at http://www.pnas.org/content/109/41/16474.full.pdf+html.
- In the blog post "I understood gender discrimination once I added 'Mr.' to my resume and landed a job", an individual seeking employment in a non-STEM-related field relates how self-identifying as a male on his CV made a positive change in the response rate to his inquiries.
One last note: If you follow this blog, you might have noticed that I've been missing of late. To offer explanation (not justification or apology), I will say that sometimes personal lives get very busy, we have temporary shortages of creativity (e.g. writer's block), and we need time to work up the courage to say what we need to say, how we need to say it, rather than just exclaim "WTF!" and be done with it. For me, it's been a mixture of all of these: I've seen my oldest niece married, contemplated my daughter's education, and ruminated for quite some time on how to address gender disparity in hiring - even discussing the policies that would correct it with women in technology companies before writing this post.