I’ve been enjoying learning through the CXL Institute.

For most viewers, the visual cue itself appears to take some time to process. The control resulted in the shortest mean time to first fixation, followed by the next least conspicuous treatment (triangular). The pattern continues with: prominent, arrow, line, human looking at the form, and then human looking away from the form.

Here are the average and standard deviation stats:

Summary statistics for amount of time fixating on the form for all treatments.
The significant ANOVA results were driven by the differences between the highest performing treatment (arrow) and the lowest (human looking away). A post-hoc Tukey test showed that these two treatments differed significantly at p < .05.
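For readers who want to run this kind of analysis themselves, here's a minimal sketch of a one-way ANOVA in Python. The fixation times below are made up for illustration only; they are not the study's data.

```python
# Illustrative one-way ANOVA across visual-cue treatments.
# The fixation times are invented, NOT the study's actual measurements.
from scipy import stats

# Hypothetical time-fixating-on-form values (seconds), by treatment.
fixation = {
    "control":            [1.1, 0.9, 1.3, 1.0, 1.2],
    "arrow":              [2.1, 1.8, 2.4, 2.0, 2.2],
    "human_looking_away": [0.6, 0.5, 0.8, 0.7, 0.6],
}

f_stat, p_value = stats.f_oneway(*fixation.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A significant omnibus ANOVA only says that *some* means differ; a
# post-hoc test (the study used Tukey's) identifies which pairs. With
# statsmodels that would be pairwise_tukeyhsd(values, labels, alpha=0.05).
```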

Here’s a histogram of the means for a visualization. The red bars indicate the two means that are significantly different from one another.

Histogram of the mean time fixating on the form for each treatment. Red indicates significant differences at an alpha of 0.05.
Takeaways? Well, don’t place a human looking away from where you want people to look, that’s for sure. At least in this study, users shown that cue spent about half as much time, on average, considering the form as the control group did. So, use a visual cue that leads toward your form.

The simple line, prominent form and human looking all did pretty well, but the arrow resulted in the highest total time spent looking at the form.

Based on our pairwise tests, we can’t say at a 95% confidence level that the arrow resulted in a different amount of time spent compared with most of the others, but it still provides support for further testing of this hypothesis.

These stats are fun to geek out on, but what about the specific patterns of people’s gaze? Specifically, what are the visual patterns of viewers and how does this differ among the cue treatments?

For this type of insight, eye-tracking heatmaps provide something that the statistics obscure: exactly where people are looking, in what order, and for how long.

Visual cue treatments with aggregate heatmap.
The heatmaps provide a supplemental perspective for the visual perception of viewers as they consume the page. And they tell a pretty clear story.

The arrow focuses the viewer’s gaze with the most precision, guiding user attention quite specifically in the direction it’s pointing. This pattern surely explains some of the results.

The cue of the human looking away from the form seems to make people actively avoid it and anything to the right. The triangular cue treatment didn’t stand out particularly with the statistics above, but here we see it did result in guiding attention to the form.

3. The visual cues do not differentially impact how viewers remember the form.
Following the website stimulus, we asked each user: “Considering the web page you just saw, what would your next step be in getting in touch with this law firm?”

This was to test the short-term memory effects among the different treatments.

Here is a table of the number of participants who recalled the email capture form and the number who didn’t:

Number of participants who recalled and didn’t recall the form as a means to get in touch with the law firm, answered in a follow-up questionnaire.
We performed a chi-squared test on this data and found no significant association [χ²(5, N = 232) = 8.942, p = .111]. However, note that noticeably few participants in the prominent treatment recalled the form.
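The test itself is easy to reproduce. The counts below are hypothetical; they only mirror the reported N = 232 and 5 degrees of freedom, not the study's raw data.

```python
# Chi-squared test of independence: recall vs. treatment.
# Counts are invented to match the reported N = 232 and df = 5.
from scipy.stats import chi2_contingency

# Column order (hypothetical labels):
# control, triangular, prominent, line, arrow, human
recalled     = [25, 27, 18, 28, 30, 24]
not_recalled = [14, 12, 21, 11,  9, 13]

chi2, p, dof, expected = chi2_contingency([recalled, not_recalled])
print(f"X2({dof}, N = {sum(recalled) + sum(not_recalled)}) "
      f"= {chi2:.3f}, p = {p:.3f}")
```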

Overall, these results were not insightful, and it is likely we would need a larger sample size to detect differences. Given the average sample size of 35 per treatment, a sample size calculator indicated that we should have expected significant differences at a confidence level of 90% only if the critical difference between proportions was 30%.
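That back-of-the-envelope check can be reproduced with the standard two-proportion sample-size formula. The 80% power and 0.5 baseline proportion below are assumptions on our part, since they aren't reported above.

```python
# Approximate sample size per group for a two-sided two-proportion z-test.
# The power (0.80) and baseline proportion (0.5) are assumed values.
import math
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.10, power=0.80):
    """n per group to detect p1 vs p2 at the given alpha and power."""
    z_a = norm.ppf(1 - alpha / 2)   # 90% confidence -> alpha = 0.10
    z_b = norm.ppf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# Critical difference of 30 percentage points (0.5 vs 0.8):
print(n_per_group(0.5, 0.8))
```

With these assumptions the formula lands in the high twenties per group, consistent with the calculator's conclusion that group sizes around 35 could only detect a difference that large.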

Limitations
There are thousands of different visual cues we could have tested (e.g. the type of human used). Maybe he’s not lawyerly enough, or too much so?

These results are limited in their transferability, but they do provide ideas and hypotheses for further testing. For example, we might implement some lessons learned here in a follow-up study that will test visual cues to influence people to scroll down a page.

The arrow performed well, but all arrows surely won’t perform the same. Perhaps it performed well because of the ‘hand-drawn’ nature of it. Thoughts?

The post-survey questionnaire wasn’t insightful and it’s likely that the question needs to be more precise (less open-ended) or our sample size needs to increase… or both. To us, this shows the value of eye-tracking compared to survey designs in getting more objective results, even if they are only visual perception results.

3 Most Common Reading Patterns
F-Pattern
When users first come to your site, they will most likely read your content in an F-shaped pattern. Similarly, attention is heavily weighted toward the left side of the screen when browsing and examining search results (in English-speaking, left-to-right reading countries).

They will first move in a horizontal movement, usually across the upper part of the content area.
NOTE: sometimes users disregard the whole line if the first word is not appealing! Then, if they like what they see in the first line, they will proceed to a second horizontal movement that typically covers a shorter area than the first. Finally, they will explore the left side in a vertical movement.

If their initial scanning fulfills their needs, they will move to the second pattern:

Layer cake pattern

Now they are using a more committed pattern, a layer-cake pattern, where they scan horizontal lines quickly to see if the section they chose strikes their interest.

In this CXL study, the heatmap shows users reading the headlines but not the text below.
Spotted Pattern
If the layer cake scan pattern shows that the user is still interested, they will proceed to a spotted pattern, looking for the main ideas.

CXL, 2016
So, how can you implement this knowledge for your online writing?
You need to make the text more scannable. Position the most important text along the F-line, break the text into convenient paragraphs, and start each line with a catchy word.

We recommend utilizing the following elements for better scanning:

bolded words
underlined text
words in color
8 instead of eight
words in CAPITAL LETTERS
long words
words in “quotation marks”
words w/ trademarks™, copyright©, or other symbols

Case Study: Online Reading Patterns
Expanding on existing research, we delve into reading patterns online and ask how internet articles are actually read. What percentage of copy is read? Do people read image captions? How many readers finish an entire article?

Additionally, we examine the relationship between age and online reading patterns.

Background
According to Nielsen Norman Group’s 2008 study on online reading patterns, internet users read just 28% of an article’s copy during the average site visit. An article is read word-for-word only 16% of the time, per the group’s earlier 1997 research.

When reading particularly short articles (111 words or less), users will read about half of the copy.

However, the fact that most readers don’t read an entire article is not to say they don’t understand that article. Duggin et al.’s 2011 study found that “skim readers” (readers who skim through content rather than read it word-for-word) are usually able to pick out valuable information and therefore understand the gist of the article quite successfully.

In our study, we wanted to answer a few specific questions:

How are online articles read?
How much of the article gets read?
Do people read image captions?
Do older internet users read articles the same way younger users do?
Study Report
A short article on astronaut training was used as the research stimulus:

We chose an interesting but brief National Geographic article
The article was short — approximately 100 words long — and included a title, featured image, side banner ad, and varying font sizes. Although the article had little content, it spanned three folds. We wanted to study how far participants read, and whether there’s a drop in reading rates when one has to actively scroll.

Data Collection Methods and Operations:
The same article was shown to two groups: younger participants aged 18–30 and older participants aged 50–60.

All participants were prompted with this scenario:

You are interested in reading about astronaut space training. Please read the following web article about this subject.

They then had 30 seconds to read the article.

Participants

Usable eye-tracking data was collected for 62 participants in group 1 (ages 18–30).

Usable eye-tracking data was collected for 33 participants in group 2 (ages 50–60).

NOTE: This is a smaller sample size than we normally like to use (~50), but it took almost 3 weeks to get even this number of participants; the panels that we use don’t have many people in this age group.

Findings
Key Takeaways:

Reading behaviors between the two age groups were quite similar.

To study what participants looked at, and for how long they looked at it, we created “areas of interest” — AOIs — on the article page:

AOIs were placed over the article’s main elements.
Using these AOIs we were able to quantify the following results:

“Which elements of the page were looked at the most?”

“How much of the article was read?”

“How many people read the image caption?”

“How many people paid attention to the ads?”

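As a rough illustration of how per-AOI metrics like the ones above are computed, here's a stdlib-only sketch. The AOI names, pixel coordinates, and fixations are all invented; real eye-tracking software handles overlapping AOIs, sampling rates, and fixation detection far more carefully.

```python
# Tally fixation durations against rectangular areas of interest (AOIs).
# All names, coordinates, and fixations here are hypothetical.

AOIS = {                      # (x0, y0, x1, y1) in pixels
    "title":   (0, 0, 800, 80),
    "image":   (0, 80, 500, 400),
    "caption": (0, 400, 500, 430),
    "ad":      (500, 80, 800, 400),
}

def tally(fixations):
    """Sum fixation durations (ms) per AOI from (x, y, duration) tuples."""
    totals = {name: 0 for name in AOIS}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x < x1 and y0 <= y < y1:
                totals[name] += dur
                break   # assign each fixation to the first matching AOI
    return totals

print(tally([(100, 40, 250), (200, 200, 600), (520, 150, 180)]))
```

From totals like these you can derive the percentage of viewers who looked at each element, mean dwell time, and the other per-AOI statistics reported here.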
Limitations
It’s possible that the participants adapted their behaviors since they knew they were being studied. Perhaps they read more of the article than they usually would, or oppositely, read the article more slowly anticipating survey questions that might follow.

It’s also likely that some participants didn’t have enough time to read the entire article. Because the testing platform used allows a maximum of 30 seconds for a picture to be shown (the picture being the article in this case), at least a few people simply didn’t have enough time to read the whole story.

User Reading Patterns of the New York Times — 2004 vs. 2016

Background

Data Collection Methods and Operations:

Here is the ‘priority viewing area’ aggregate map of 5 news sites analyzed in the 2004 eye-tracking study by Eyetrack III. Our study only considered user viewing patterns of the news site http://www.nytimes.com/.

Our ‘priority viewing area’ grid had to be adjusted to the precision of our eye-tracking tool: we used a 3×5 grid rather than 4×4.

The 2004 version of the New York Times homepage was obtained through the Internet Archive’s Wayback Machine.

Participants

The eye-tracking survey was completed by 200 participants, though only 132 produced data accurate enough to be useful: 68 for the 2016 version and 64 for the 2004 version.


Findings

The large banner ads in the 2016 variation made users jump around the page, causing far more variability in what people read compared to the 2004 version. Study participants started by looking at the ad in the 2016 version but quickly went elsewhere, although where they went was much more variable compared with the ‘seen path’ of the 2004 design.

The priority viewing areas did not differ much at all between the two versions of the New York Times website, and they generally agreed with viewing patterns from the 2004 study. Given the designs tested, the regions with the most immediate prominence were the center and upper left of the page. The variation in the 2016 design was seemingly a result of the banner advertisement from IBM, which caused viewers to essentially skip around it to find and fixate on text content.