I’ve been enjoying learning through the CXL Institute.
We have schemas for products, websites, and all the other things we now use on a daily basis. To provide better, more positive user experiences, it is important to consider the expectations, or schemas, users bring with them when using your products. Otherwise, people must maintain a higher-than-usual level of attention and concentration when using products to ensure their prior experience does not inhibit or negatively impact their current one.
If a website homepage is cluttered with irrelevant information, first-time visitors may become distracted or overwhelmed. More time is spent simply trying to figure out the purpose of the site and, if the page is confusing enough, visitors will abandon it when this task becomes too difficult. This issue can lead to short- and long-term decreases in sales.
In one study, the existing homepage of Colonial Candle was tested because, while the company offers one clear product (candles), the homepage doesn’t instantly communicate this except in the name. Additionally, the company is not widely known enough that participants would immediately recognize the brand. In a five-second test, participants had five seconds to observe the website homepage. A total of three banner ads were tested. The finding was that the site’s purpose was more apparent than expected: when asked, “What is this website selling?”, 66% of respondents mentioned candles.
Biggest takeaway: While it’s not a very surprising discovery, these results do provide data supporting the idea that advertisements on a website homepage are clutter and interfere with first-time visitors’ initial and immediate understanding of the site’s purpose.
The website we tested (Colonial Candles) also has an impact here. While we chose this site for specific reasons, the results don’t directly apply to other verticals or site layouts. For instance, if the site had a huge graphic saying “100s OF CANDLES”, it’s likely that more participants would’ve understood what the site was selling within five seconds. If the site was more nuanced and sold many different products, participants may have had a more difficult time discerning what the site was selling.
Does it really matter how clearly you articulate your message if nobody registers it? Garnering your audience’s attention is a prerequisite for learning. When a stimulus fails to grab your attention, it has zero chance of providing value. It’s not even a blip on your radar. Over thousands of years, the human brain has evolved systems and processes for deciding what is and isn’t worth paying attention to. These processes occur in a matter of milliseconds. If you understand what these processes are, you can apply them to your own site.
When you do successfully gain someone’s attention, you open the door for a meaningful impression. Gaining and maintaining attention are two separate tasks. Failing to grab attention is your fault, not a sign of the users’ lack of interest. It’s your job to grab site visitors’ attention and keep it. I learned this from both a physiological and a theoretical standpoint.
Lead Generation Form on a Landing Page: Visual cues are strategically placed graphics that web designers use to guide user experience and attention on a website. By implementing visual cues, designers and business owners can subtly direct users to the most important facets of their website. However, the situation gets sticky when considering the vast selection of visual cues that are available. You can use arrows, lines, photos of people, borders, pointing fingers, bright banners, exclamation points, check marks… The list goes on. Are some visual cues more effective than others?
Which Cues Are Effective and Memorable? Eye-tracking was implemented to quantify user behavior. The homepage for the law firm Lemon Law Group was iterated to display different visual cues. A total of six different visual cues were tested along with a control condition, which showed no visual cue. Visual cues were strategically placed to direct users’ eyes to a signup form on the law firm’s homepage in order to measure the effectiveness of each cue. Cues were placed in the same approximate location for each treatment to maintain consistency.
Participants were first given the following prompt: “Imagine you’re in need of legal help. Please browse the following law firm’s web page as you normally would to assess their quality of service.” They then had 15 seconds to browse the page as if they were considering the firm’s services.
Visual Cues Used:
A human looking away from the form (i.e., towards the user)
A human looking towards the form
A ‘hand-drawn’ arrow pointing towards the form
A broad, triangular-shaped arrow pointing towards the form
A line leading from the text under the value proposition to the form
A ‘prominent’ form (darker, with a subtle yellow outline)
Analyzing the eye-tracking data allows us to run statistics on how much attention people paid to the form and how that differed among cues. The two statistics we were primarily concerned with were the average time spent fixating on the form and the average time to first fixation on the form. Understanding these measures lets us draw conclusions and refine hypotheses about the effectiveness of the six cues.
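For readers curious how these two metrics fall out of raw eye-tracking data, here is a minimal sketch. The record format (timestamp, x, y, duration) and the form’s bounding box are assumptions for illustration, not the study’s actual data schema.

```python
# Hypothetical sketch: computing time-to-first-fixation and total fixation
# time on the form from a per-participant fixation log. All field names and
# coordinates are invented for illustration.

def in_form_aoi(x, y, aoi=(600, 300, 900, 550)):
    """True if a fixation at (x, y) falls inside the form's bounding box."""
    left, top, right, bottom = aoi
    return left <= x <= right and top <= y <= bottom

def form_metrics(fixations):
    """fixations: list of (timestamp_ms, x, y, duration_ms), in time order.
    Returns (time_to_first_fixation_ms, total_fixation_time_ms)."""
    first = None
    total = 0
    for t, x, y, dur in fixations:
        if in_form_aoi(x, y):
            if first is None:
                first = t      # first moment the gaze lands on the form
            total += dur       # accumulate dwell time on the form
    return first, total

# Example participant: three fixations, the last two on the form.
log = [(0, 100, 100, 250), (300, 650, 400, 400), (750, 700, 500, 300)]
print(form_metrics(log))  # (300, 700)
```

Averaging these per-participant values within each treatment gives the group means compared below.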
A post-task questionnaire measured the efficacy of each visual cue by identifying which cues best facilitated recall of the signup form. After viewing the web page, users were asked how they would contact the law firm. If participants answered that they would fill out the form, the visual cue was considered effective at directing attention to the form and thus increasing the probability of recall. Comparing recall rates then allowed us to compare the treatments.
1. The visual cues do not differentially impact the speed at which users first notice the form.
A simple one-way ANOVA analysis tells us that the average time to first fixation of the signup form does not vary significantly among the treatments [F(6, 237) = 0.7947, p = 0.5748].
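For illustration, this kind of one-way ANOVA takes only a few lines with SciPy. The time-to-first-fixation samples below are invented for three of the seven treatments; they are not the study’s data.

```python
# A minimal sketch of a one-way ANOVA across treatments, using made-up
# time-to-first-fixation samples (ms). Group names mirror the article's
# treatments; the numbers are assumptions.
from scipy import stats

control    = [850, 920, 780, 1010, 890]
arrow      = [980, 1040, 910, 1100, 950]
human_away = [1200, 1150, 1300, 1050, 1250]

# Null hypothesis: all group means are equal.
f_stat, p_value = stats.f_oneway(control, arrow, human_away)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```

A small p-value would indicate that at least one group mean differs; the study’s reported p = 0.5748 means no such difference was detected for this metric.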
After thinking about the results, this makes some sense. Take a look at the means:
Summary statistics for time to first fixation of the form for all treatments.
Remember, these means are not ‘significantly’ different from one another, but that is at a fairly conservative standard (alpha = 0.05).
There is still an interesting pattern to see. The visual cue itself appears to take some time to process. The control resulted in the shortest mean time to first fixation, followed by the next least conspicuous treatment (triangular). We see the pattern continue with: prominent, arrow, line, human looking at form, and then human looking away from form.
This pattern is intuitive, if not backed by significance at an alpha of 0.05. If we were to set the treatments on a scale from least to most conspicuous, this might be the order we’d get.
This is obviously not the only measure we need to consider. So what about the amount of time users look at the form on average? This measure might get at how the visual cues differentially drive information processing via engagement (i.e., actually reading the text and processing the information).
2. The visual cues do differentially impact how much a user pays attention to the form.
Analysis of variance indicates that the average amount of time viewing the form area does vary significantly among the treatments [F(6, 237) = 2.3108, p = 0.0346].
Here are the average and standard deviation stats:
Summary statistics for amount of time fixating on the form for all treatments.
The significant ANOVA results were driven by the differences between the highest performing treatment (arrow) and the lowest (human looking away). A post-hoc Tukey test showed that these two treatments differed significantly at p < .05.
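The same pairwise comparison can be sketched with SciPy’s Tukey HSD routine (available in SciPy 1.8+). The dwell-time samples below are invented to mimic the reported pattern, with the arrow high, the human-looking-away low, and the control in between; they are not the study’s data.

```python
# Hedged sketch of a post-hoc Tukey HSD test on made-up dwell-time samples
# (ms fixating on the form). Requires scipy >= 1.8.
from scipy import stats

arrow      = [2100, 2400, 2250, 2600, 2300]
human_away = [900, 1100, 850, 1000, 950]
control    = [1600, 1500, 1700, 1450, 1650]

res = stats.tukey_hsd(arrow, human_away, control)
# res.pvalue[i, j] is the adjusted p-value for group i vs group j.
print(res.pvalue[0, 1])  # arrow vs human-looking-away
```

Tukey’s method adjusts for the multiple pairwise comparisons, which is why it is preferred over running separate t-tests after an ANOVA.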
Here’s a histogram of the means for a visualization. The red bars indicate the two means that are significantly different from one another.
Histogram of the mean time fixating on the form for each treatment. Red indicates significant differences at an alpha of 0.05.
Takeaways? Well, don’t have a human looking away from where you want a person to look, that’s for sure. At least in this study, users shown the human looking away from the form spent about half as much time considering it, on average, compared to the control. So, use a visual cue leading towards your form.
The simple line, the prominent form, and the human looking towards the form all did pretty well, but the arrow resulted in the highest total time spent looking at the form.
Based on our pairwise tests, we can’t say at a 95% confidence level that the arrow resulted in a different amount of time spent compared with most of the others, but it still provides support for further testing of this hypothesis.
These stats are fun to geek out on, but what about the specific patterns of people’s gaze? Specifically, what are the visual patterns of viewers and how does this differ among the cue treatments?
For this type of insight, eye-tracking heatmaps provide something that the statistics obscure: exactly where people are looking, in what order, and for how long.
Visual cue treatments with aggregate heatmap displayed.
The heatmaps provide a supplemental perspective for the visual perception of viewers as they consume the page. And they tell a pretty clear story.
The arrow focuses the viewer’s gaze with the most precision, guiding user attention quite specifically in the direction it’s pointing. This pattern surely explains some of the results.
The cue of the human looking away from the form seems to make people actively avoid it and anything to the right. The triangular cue treatment didn’t stand out particularly with the statistics above, but here we see it did result in guiding attention to the form.
3. The visual cues do not differentially impact how viewers remember the form.
Following the website stimulus, we asked each user: “Considering the web page you just saw, what would your next step be in getting in touch with this law firm?”
This was to test the short-term memory effects among the different treatments.
Here is a table of the number of participants who recalled the email capture form and the number who didn’t:
Number of participants who recalled and didn’t recall the form as a means to get in touch with the law firm, answered in a follow-up questionnaire.
We performed a Chi-Squared test on this data and found non-significance [χ²(5, N = 232) = 8.942, p = 0.111]. However, note that the prominent treatment did have a noticeably low number of people recall it.
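This test is a chi-squared test of independence on a recalled/not-recalled contingency table. The sketch below uses an invented 2×6 table (six groups, matching the reported 5 degrees of freedom); the counts are assumptions, not the study’s data.

```python
# Sketch of a chi-squared test of independence on a made-up recall table.
# Rows: recalled / did not recall; columns: six treatments. Counts invented.
from scipy.stats import chi2_contingency

recalled     = [25, 28, 24, 26, 15, 27]
not_recalled = [10, 7, 11, 9, 20, 8]

chi2, p, dof, expected = chi2_contingency([recalled, not_recalled])
print(f"X2({dof}) = {chi2:.3f}, p = {p:.3f}")
```

Degrees of freedom for an r×c table are (r − 1)(c − 1), so a 2×6 table gives 5, as in the reported result.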
Overall, these results were not insightful, and it is likely we need a larger sample size to detect differences. Given the average sample size of 35 per treatment, a sample size calculator indicated that we should have expected significant differences at a confidence level of 90% if the critical difference between proportions was 30%.
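As a sanity check on that calculator figure, the standard normal-approximation formula for comparing two proportions can be computed directly. The baseline proportions (70% vs 40%, a 30-point gap) and the 80% power assumption are illustrative choices, not the study’s exact inputs.

```python
# Back-of-the-envelope sample size per group for detecting a difference
# between two proportions (two-sided normal approximation). The specific
# proportions and power here are assumptions for illustration.
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.10, power=0.80):
    """Participants needed per treatment to detect p1 vs p2."""
    z_a = norm.ppf(1 - alpha / 2)          # critical value, 90% confidence
    z_b = norm.ppf(power)                  # value for the desired power
    var = p1 * (1 - p1) + p2 * (1 - p2)    # combined binomial variance
    return (z_a + z_b) ** 2 * var / (p1 - p2) ** 2

# A 30-point gap (e.g. 70% vs 40% recall) at 90% confidence:
print(round(n_per_group(0.70, 0.40)))
```

Under these assumptions the result lands near the article’s average of ~35 per treatment, which is consistent with the claim that only a very large recall difference would have been detectable.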
There are thousands of different visual cues we could have tested (e.g. the type of human used). Maybe he’s not lawyerly enough, or too much so?
These results are limited in their transferability, but they do provide ideas and hypotheses for further testing. For example, we might implement some lessons learned here in a follow-up study that will test visual cues to influence people to scroll down a page.
The arrow performed well, but all arrows surely won’t perform the same. Perhaps it performed well because of the ‘hand-drawn’ nature of it. Thoughts?
The post-survey questionnaire wasn’t insightful and it’s likely that the question needs to be more precise (less open-ended) or our sample size needs to increase… or both. To us, this shows the value of eye-tracking compared to survey designs in getting more objective results, even if they are only visual perception results.
3 Most Common Reading Patterns
When users first come to your site, they will most likely read your content in an F-shape pattern. Likewise, attention is heavily weighted towards the left side of the screen in browsing and examining search results (in English speaking and reading countries).
They will first move in a horizontal movement, usually across the upper part of the content area.
NOTE: sometimes users disregard the whole line if the first word is not appealing! Then, if they like what they see in the first line, they will proceed along a second horizontal movement, which typically covers a shorter area than the first. Finally, they will explore the left side in a vertical movement.
If their initial scanning fulfills their needs, they will move to the second pattern:
Layer cake pattern
Now they are using a more committed pattern, the layer cake pattern, where they scan horizontal lines quickly to see if the section they chose strikes their interest.
In this CXL study, the heatmap shows users reading the headlines but not the text below.
If the layer cake scan shows that the user is still interested, they will proceed to a spotted pattern, looking for the main ideas.
So, how can you implement this knowledge for your online writing?
You need to make the text more scannable. Position the most important text along the F-line, breaking it into convenient paragraphs so that each line starts with a catchy word.
We recommend utilizing the following elements for better scanning:
words in color
numerals instead of spelled-out numbers (8 instead of eight)
words in CAPITAL LETTERS
words in “quotation marks”
words w/ trademarks™, copyright©, or other symbols
Case Study: Online Reading Patterns
Expanding on existing research, we delve into reading patterns online and ask how internet articles are actually read. What percentage of copy is read? Do people read image captions? How many readers finish an entire article?
Additionally, we examine the relationship between age and online reading patterns.
According to Nielsen Norman Group’s 2008 study on online reading patterns, internet users read just 28% of an article’s copy during the average site visit. An article is read word-for-word only 16% of the time (1997).
When reading particularly short articles (111 words or less), users will read about half of the copy.
However, the fact that most readers don’t read an entire article is not to say they don’t understand it. Duggin et al.’s 2011 study found that “skim readers” (readers who skim through content rather than reading it word-for-word) are usually able to pick out valuable information and therefore grasp the gist of the article quite successfully.
In our study, we wanted to answer a few specific questions:
How are online articles read?
How much of the article gets read?
Do people read image captions?
Do older internet users read articles the same way younger users do?
A short article on astronaut training was used as the research stimulus:
We chose an interesting but brief National Geographic article.
The article was short (approximately 100 words) and included a title, featured image, side banner ad, and varying font sizes. Although the article had little content, it spanned three folds. We wanted to study how far participants read, and whether there’s a drop in reading rates when one has to actively scroll.
Data Collection Methods and Operations:
The same article was shown to two groups: younger participants aged 18–30 and older participants aged 50–60.
All participants were prompted with this scenario:
You are interested in reading about astronaut space training. Please read the following web article about this subject.
They then had 30 seconds to read the article.
Usable eye-tracking data was collected for 62 participants in group 1 (ages 18–30).
Usable eye-tracking data was collected for 33 participants in group 2 (ages 50–60).
NOTE: This is a smaller sample size than we normally like to use (~50), but it took almost 3 weeks to get even this number of participants; the panels we use don’t have many people in this age group.
Reading behaviors between the two age groups were quite similar.
To study what participants looked at, and for how long they looked at it, we created “areas of interest” — AOIs — on the article page:
AOIs were placed over the article main elements.
Using these AOIs we were able to quantify the following results:
“Which elements of the page were looked at the most?”
“How much of the article was read?”
“How many people read the image caption?”
“How many people paid attention to the ads?”
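The AOI bookkeeping behind these questions can be sketched as follows. The AOI rectangles and fixation records are invented for illustration; real tools export similar per-fixation data.

```python
# Sketch: bucketing fixations into named areas of interest (AOIs) to get
# total dwell time per page element. All rectangles and data are invented.

AOIS = {
    "title":   (0,   0, 800,  80),
    "image":   (0,  80, 500, 400),
    "caption": (0, 400, 500, 430),
    "body":    (0, 430, 800, 900),
    "ad":      (500, 80, 800, 400),
}

def dwell_by_aoi(fixations):
    """fixations: list of (x, y, duration_ms). Returns total dwell per AOI."""
    totals = {name: 0 for name in AOIS}
    for x, y, dur in fixations:
        for name, (l, t, r, b) in AOIS.items():
            if l <= x <= r and t <= y <= b:
                totals[name] += dur
                break  # each fixation counts toward one AOI
    return totals

fix = [(100, 40, 200), (200, 200, 500), (600, 200, 150), (100, 600, 900)]
print(dwell_by_aoi(fix))
```

Summing these totals across participants, and counting how many participants have any dwell at all in an AOI, answers questions like “how many people read the image caption?”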
It’s possible that participants adapted their behavior because they knew they were being studied. Perhaps they read more of the article than they usually would or, conversely, read it more slowly in anticipation of survey questions that might follow.
There’s also the likelihood that some participants didn’t have enough time to read the entire article. Because the testing platform used allows a maximum of 30 seconds for a picture to be shown (the picture being the article in this case), there must have been at least a few people who simply didn’t have enough time to read the whole story.
User Reading Patterns of the New York Times — 2004 vs. 2016
We found an interesting, and rather old, eye-tracking study from 2004 and decided to replicate part of it to see how it works today. It involved eye-tracking two homepages of the New York Times: one from this year, 2016, and one from 2004. Our primary goal wasn’t comparison with the old study; rather, it was to identify the ‘priority viewing areas’ in how people process a news site and to see whether ‘today’s users’ process the contemporary design differently than one from more than a decade ago.
The original study that inspired this little test was performed by a research branch of Poynter.org that now appears to be defunct. We don’t reproduce any part of the old study explicitly; rather, we take its approach and see how it works today.
Data Collection Methods and Operations:
Here is the ‘priority viewing area’ aggregate map of the five news sites analyzed in the 2004 Eyetrack III eye-tracking study. Our study considered only user viewing patterns of the news site http://www.nytimes.com/.
Our ‘priority viewing area’ grid had to be adjusted to the precision of our eye-tracking tool: we used a 3×5 grid rather than 4×4.
The 2004 version of the New York Times homepage was obtained through the internet archive ‘way back machine’.
The eye-tracking survey was completed by 200 participants, though only 132 produced data accurate enough to be useful: 68 for the 2016 version and 64 for the 2004 version.
Other Key Info (like treatment variations)
The large banner ads in the 2016 variation made users jump around the page, causing far more variability in what people read compared to the 2004 version. Study participants started by fixating on the ad in the 2016 version but quickly went elsewhere, although where they went was much more variable compared to the ‘seen path’ of the 2004 design.
The priority viewing areas did not differ much at all between the two versions of the New York Times website, and they generally agreed with viewing patterns from the 2004 study. Given the designs tested, the regions with the most immediate prominence were the center and upper left of the page. The variation in the 2016 design was seemingly a result of the banner advertisement from IBM, which caused viewers to essentially skip around it to find and fixate on text content.