A few years back, we conducted one of the most painful
usability studies in the history of our research. We learned some
really important things, but I'm not sure the users in that study will
ever forgive us.
Before that particular study, we'd noticed that when users searched
large web sites for information, there were some sites where they
always seemed to know where to find the content. No matter what content
they were seeking, every user somehow knew to make a beeline for it.
Not every site worked this way and we wanted to know what made these
particular sites work so well.
What Makes Links Work?
We had suspected that the secret was in the links. On the sites
where users consistently found their target content (the content they
were seeking), we had a gut feeling that the links were helping users
more than on the sites where users rarely found it. Unfortunately,
we couldn't explain why.
For example, when we put the links side-by-side, mixing the
links from successful sites with those that were not helping users, we
couldn't pick out the ones that were more effective. There was
something that was special about those links, but we couldn't identify
the magical traits. We studied the links hard, but whatever it was just
kept eluding us.
That's when we decided to conduct the Lincoln study. We named
it 'Lincoln' because we were studying 'Links'. (I know -- it's a dumb
name. But you have to name the project something and this name just
sort of stuck.)
Questions About Every Click
In the Lincoln study, we looked at sites where users frequently
found their target content and sites where they didn't. We wanted to
know what the difference was between the links, so we compiled two
questionnaires to identify key attributes for each link.
We asked each user to fill out one questionnaire before each
click and another one after each click. That meant if a user clicked on
15 links to find their target content, they filled out 30
questionnaires. Fortunately, we pay our users extremely well.
Going into the study, our hypothesis was that the better sites
were somehow 'telegraphing' the path to the content and users somehow
knew what each link was going to bring. To test this, part of the
questionnaires asked users to predict what they thought the next page
would contain. On the better sites, we expected users would always know
what came next.
When we compiled the results, we found users weren't any more
likely to know what content was on the next page. Our theory about
telegraphing was a dead-end. That wasn't the secret to good sites.
In fact, users always assumed the next page contained their
target content -- no matter where they were in the site. They could be
on the home page, clicking on a generic link like "Sports" or
"Research", and still think the next page was going to answer their
very specific question.
Confidence Was The Key
In our analysis of the data, we did isolate a factor we didn't
expect. With each click, users told us they were more confident they
would succeed on those sites where they actually did succeed. Somehow,
they were predicting their success.
We measured each user's confidence with two questions. Before
they clicked, we asked "Do you think clicking on this link will lead
you to the info you seek?" with a 7-point scale that had the endpoints
marked as "Not at all" and "Extremely Likely". After they clicked and
had a quick chance to inspect the result page, we asked "Do you think
this page is getting you closer to your goal?" with the same 7-point
scale.
We were amazed when we discovered the answers from the first
three clicks strongly predicted whether the user would eventually
succeed or fail, even if the clickstream was 15 or 20 clicks long. Not
only that, but as long as every subsequent click had high confidence
values, the user was very likely to succeed. As soon as the confidence
values dropped, so did the likelihood of the users finding their target
content.
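To make the idea concrete, here's a minimal sketch of what an analysis like this could look like in Python. The data layout, the field names, and the score threshold are our own illustrative assumptions, not values from the Lincoln study.

```python
# Hypothetical sketch: does early-click confidence predict task success?
# The session data, field names, and the 5.0 threshold are illustrative
# assumptions, not figures from the Lincoln study.

def early_confidence(clickstream, n=3):
    """Mean of the 7-point confidence ratings for the first n clicks."""
    ratings = [click["confidence"] for click in clickstream[:n]]
    return sum(ratings) / len(ratings)

def likely_to_succeed(clickstream, threshold=5.0):
    """Flag a session as promising when early confidence stays high."""
    return early_confidence(clickstream) >= threshold

# One user's session: a 1-7 confidence rating recorded for each click.
session = [
    {"link": "Sports", "confidence": 6},
    {"link": "Red Sox", "confidence": 7},
    {"link": "Schedule", "confidence": 6},
    {"link": "April games", "confidence": 7},
]

print(likely_to_succeed(session))  # high early confidence
```

In a real study you would calibrate the threshold against observed success rates rather than picking it by hand.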
This was the clue we needed -- the key to our research. Once
we could see when the user's confidence rose and fell, we could analyze
the links and determine what was contributing.
Using Confidence to Identify Trigger Words
Some very clear link attributes immediately jumped out at us.
First, users expect to find 'trigger words' in the links. A trigger
word is a word (or phrase) that causes the user to click. When the
trigger words match the user's goals, they find those words right away,
and the links make them more confident that they are going to find
their target content.
You can find a good example of trigger words on the
Edmunds.com home page. Edmunds.com makes sure all the trigger words are
visible. People who are just starting the process of selecting a new
car are likely to click on the big word "New" or the phrase "Find a New
Car". If they know they want a coupe, sedan, or SUV, they are likely to
click on one of those trigger words. If they are specifically looking
for the pricing of a particular model, with options, they'll choose the
"Price with Options" link.
The fascinating thing is that all those links go basically to
the same part of the site. The designers made sure the trigger words
are all out on the surface, where users can see them.
Jumping from Specific to General
Another finding from the Lincoln study was that users expected the site
to become more specific with each click. As users move through the
site, they want each subsequent page to have more detail related to
their goal than the page before.
If all of a sudden a page is about a general topic, the users
lose confidence. For example, when we were testing Boston.com's Red
Sox page, users lost confidence when the Sports Calendar link didn't
produce a schedule of the Red Sox games. Instead that link brought them
to a listing of all sports activities (including paintball, sky-diving,
and frisbee) in the greater Boston area. Since the users were already
on the Red Sox page, they naturally assumed that any link from that
page would lead to even more detail about the Red Sox.
Confidence Gave Us Insight
Because we could now use the user's confidence to tell us how well
the links were working, we could start to identify other patterns. We
saw that the links on many global navigation panels were sorely
lacking. (What is the difference between "Products" and "Solutions"?)
We could tell when graphics were helping and when they weren't. We
could see when pogosticking was causing problems.
We now call the magical force that pulls users to their
content the Scent of Information. (We didn't come up with the name. We
heard it from Peter Pirolli and his team at PARC. But, like project
names, good concept names just catch on and stick.)
We can't measure when a link has good scent, but we can
measure the confidence it inspires. By looking at the confidence of
the user as they move through the site, we can tell what parts are
working well and what parts need rethinking.
You can measure confidence, too. As you're watching people use
your site, just try asking those two questions. Pretty soon, you'll
know when users are feeling confident and when they aren't. That will
give you a good sense as to where to focus your design efforts.
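If you do record those two ratings for each click, even a tiny script can point you at the page where confidence first fell -- a likely spot to focus your redesign. This is a hypothetical sketch; the ratings and the minimum drop size are made-up assumptions, not part of the study's method.

```python
# Hypothetical sketch: locate the first click where a user's 7-point
# confidence rating dropped sharply. The sample ratings and the
# min_drop value are illustrative assumptions.

def first_confidence_drop(ratings, min_drop=2):
    """Return the index of the first click whose rating fell by at
    least min_drop from the previous click, or None if none did."""
    for i in range(1, len(ratings)):
        if ratings[i - 1] - ratings[i] >= min_drop:
            return i
    return None

ratings = [6, 7, 6, 3, 2]  # confidence collapsed at the fourth click
print(first_confidence_drop(ratings))  # index 3, the fourth click
```

The page the user landed on at that click is where the scent went cold, and a good candidate for rethinking its links.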