On the Perception of Relevance

Search engines and humans are always looking for relevance. When we open a document on the web (or in our mail), one of our first actions is frequently to determine whether it is relevant to our needs. In the case of e-mail or physical mail, this is entirely a “push” experience: messages are sent to us, and we evaluate them for relevance. Spam survives largely on the obscure edge case where we identify its messages as relevant.

When it comes to web documents, the experience is more of an even exchange: we give search engines instructions about what we want to find, and they send back a response that we both hope will meet our concept of relevance for those instructions.

Sales and conversions are driven by relevance.

But humans and computers perceive information and relevance in very different ways. This is a challenge for search engines, which must be tuned to fetch information that appears relevant to a human searcher on the basis of a computer’s understanding of the document. It is also a challenge for people with disabilities: while a person with a disability generally retains the human ability to perceive relevance, their perception of information on the web may be filtered through a computer. For someone who relies on a screen reader, that filtering means receiving only the facets of information made available both by the author of the document and by the assistive technology in use.

So, in any online search for information, a person with disabilities encounters three primary barriers to finding relevant information:

  • Did the author consider their needs when the document was created?
  • Does their assistive technology offer the resources to transfer that information?
  • Did the search engine return information which will be considered relevant to a human?

Everybody encounters the flaws in search engine results. Although searching is far more sophisticated than it was just a few years ago, it is still limited. (And appears to be getting more and more subject to the filter bubble effect — though that’s a different subject.)

Some words have multiple meanings, which are dependent on context. We all use context to guess these meanings: humans use tone of voice, facial gestures, body language, location, environment, and many other factors to assess the probable meaning of a word. Search engines have limited access to this information, though some of it, such as location and, to a degree, environment, is mechanically detectable. They know what words you used to search, and they may know what else you’ve recently searched for. They can use this information to try to better assess your intent and deliver the most relevant information to you.

When it comes down to what information is on a web document, search engines and screen readers have very similar information. As a result, search engines and users of screen readers have similar limitations in their ability to quickly identify relevance on those documents. In some ways, search engines have an advantage, since they can absorb all of the information on the page far more quickly than any human can.

Where a search engine might simply absorb an entire page and move on, humans scan. A visual user will quickly glance over the page, identifying images, headlines, and navigation elements, and noticing a few key words along the way down the page. That quick scan can frequently provide all the information the user needs to decide whether the document deserves another look or is clearly irrelevant.

Users with disabilities also scan, when it’s an option. This is a place where the capabilities of assistive technology and the attentiveness of a web author really come into play.

For a screen reader user, it’s clear there will be no quick scan of images. Even where images have supporting alt attributes, checking each one offers no real savings of effort. They can, however, scan through descriptive link text, accessible navigation, and headings, provided those have been supplied in a useful fashion. Repetitive link text, headings replaced by images without alt attributes or by an inaccessible font-replacement method, any of the multitude of inaccessible navigation techniques: these may not prevent the user from getting at your content, but they will absolutely prevent the user from getting an overview of it by scanning.
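The scannable skeleton described above (headings, link text, alt attributes) is essentially the same structure a crawler can extract mechanically. As a rough sketch, here is how that skeleton might be pulled from markup using Python’s standard-library HTML parser; the sample page and class name are invented for illustration:

```python
# Sketch: extract only the "scannable" structure of a page, much as a
# screen reader user skimming by headings and links would encounter it.
# The sample HTML below is a made-up example, not from any real site.
from html.parser import HTMLParser

class OutlineScanner(HTMLParser):
    """Collects heading text, link text, and image alt text, in order."""
    def __init__(self):
        super().__init__()
        self.outline = []       # (kind, text) pairs in document order
        self._capture = None    # tag currently being captured, if any

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3", "a"):
            self._capture = tag
            self._buffer = []
        elif tag == "img" and self._capture is None:
            # An image contributes to the scan only if alt text exists.
            alt = dict(attrs).get("alt")
            if alt:
                self.outline.append(("img", alt))

    def handle_endtag(self, tag):
        if tag == self._capture:
            text = "".join(self._buffer).strip()
            if text:
                self.outline.append((tag, text))
            self._capture = None

    def handle_data(self, data):
        if self._capture:
            self._buffer.append(data)

page = """
<h1>Blue Widgets</h1>
<p>We sell widgets. Lots of prose a scanner skips over...</p>
<img src="hero.png">
<h2>Specifications</h2>
<a href="/pricing">Pricing and shipping</a>
<a href="/pricing">click here</a>
"""

scanner = OutlineScanner()
scanner.feed(page)
for kind, text in scanner.outline:
    print(f"{kind}: {text}")
```

Note what survives the scan: the image with no alt attribute vanishes entirely, and “click here” tells the scanner nothing about its destination, which is exactly the repetitive-link-text problem described above.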

I mentioned above that search engines have some degree of advantage due to their speed of information absorption. The slower speed with which humans absorb information means that where a search engine might find great relevance in the repetition of key phrases (or it might not; I would call that keyword stuffing, myself), a human will instead be overwhelmed by the same repetition.

This comes back to context. A human visitor to your web site doesn’t care at all how many times you’ve stated ‘blue widgets’, unless it’s less than once. From our perspective, if you said at the top of the page that you’re selling blue widgets, we understand that everything from that point on is in the context of a blue widget. If you actually say, over and over again, that we’re reading the blue widget specifications, and the pricing on blue widget shipping, and that your blue widget support is superlative — we’re just going to get overwhelmed. It’s unnecessary, and creates the sense of what I’m choosing to call “information blur” (the effect when information is lost due to excessive adjectival description).
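The gap between the two readings can be made concrete: a machine can count repetitions of a phrase, while a human gets the context from a single mention. A minimal sketch of such a count, with a crude word splitter and two invented snippets (the function name is hypothetical, not any search engine’s actual metric):

```python
# Sketch: a naive "keyword density" count, illustrating what repetition
# looks like to a machine. Real ranking signals are far more complex;
# this is only a toy measure over two invented text samples.
import re

def phrase_density(text, phrase):
    """Fraction of the words in `text` taken up by occurrences of `phrase`."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = len(re.findall(re.escape(phrase.lower()), text.lower()))
    phrase_len = len(phrase.split())
    return (hits * phrase_len) / len(words) if words else 0.0

natural = ("We sell blue widgets. Specifications, pricing, "
           "shipping, and support are all covered below.")
stuffed = ("Blue widget specifications, blue widget pricing, "
           "blue widget shipping, and superlative blue widget support.")

print(round(phrase_density(natural, "blue widget"), 2))
print(round(phrase_density(stuffed, "blue widget"), 2))
```

The stuffed version scores several times higher on this counter while carrying no extra information for a human reader, which is the “information blur” effect in miniature.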

What’s my point, again?

Relevance is a prime factor in converting a web visitor into a paying customer. However, humans and computers perceive relevance using very different factors. The best-performing web site will be built to a compromise: providing the most relevant information to both humans and search engines. Visitors using assistive technology, and particularly screen readers, are an interesting test case in this scenario, because the information they receive is filtered in a manner similar to what a search engine sees.

Creating accessible content: it’s not just the right thing to do.

5 Comments to “On the Perception of Relevance”

  1. Your point is well received. Unfortunately, some of the best content for humans is overlooked by search engines due to the webmasters being do-it-yourself types that are not web savvy. I expect that as websites become easier to build and search engines become better at finding what the user actually wants, this will improve.

  2. You bring up some valid points regarding search engines vs. human visitors. That’s why it’s important to first write your content for visitors, since ultimately they are the ones that will or will not convert. However, in order to get them to the page in the first place, you need to optimize for the search engines. This should be done in a way that is so natural that the average visitor wouldn’t even notice.

  3. I agree that the goal is creating accessible content, but everybody feels that their content is the most relevant, especially in a service space. So how do you separate the marketing side and still stay relevant?

  4. “Sales and conversions are driven by relevance” is quite obvious, and I fully agree with this statement. Professional web designers and developers keep relevance in mind.

  5. Great post.
    Actually, every developer out there should not forget that websites are made for humans and not bots. I agree the best website is one that is relevant to both humans and search engines, but the priority is still humans. There are still many websites out there that are invisible in search engines and still have success due to other factors. But it’s hard to find the opposite happening.

    Thanks
