This week, I continued learning about conducting surveys.

We need to understand the specific concerns people have on a specific page. The type of question, or the type of concern, a person has varies by page. If I’m on a checkout page, I might be concerned about security and privacy. If I’m on a product page looking at a pair of pants, the questions might be: do these fit me? Can I return them? Think through all of these concerns, page by page.

Another thing we should do is survey people who just bought something. These are our actual target audience: they just gave us money. Within a reasonable amount of time, somewhere between a couple of hours and a couple of days, send them a survey over email. They click through and answer 8 to 10 open-ended questions about their shopping experience while they still freshly remember what went on in their mind as they completed the purchase. We want to know things such as what doubts and hesitations they had, and so on.

Then there’s user testing. To do user testing, recruit people who represent your actual target audience. Ideally it really is your target audience, but anybody is better than nobody, so your mom and your grandma also count, in cases where you can’t find anyone else to test with. To start off, have these people use your website: give them tasks to complete. A task can be broad. For example, tell your tester, “Your birthday is coming up. Find something you like in this store and let me see how you go about it.” Or it can be very specific: “Find a pair of dark jeans in size 34 under 50 bucks.” Be that specific, and then watch how they go about it. While they use the site, encourage them to think out loud and share their thoughts, but remember that what they do matters more than what they say. People may run into things that hurt their experience, such as error messages or tasks they can’t complete, and what they say and what they do often don’t overlap, so pay more attention to what people actually do.

When people browse a site, we can track their clicking and tapping behavior (tapping on a mobile phone), their scrolling behavior, and where they hover with their mouse. Hover maps, which show where people hover with their mouse, are not very useful: where people look and where they hover are not strongly correlated. There’s some correlation, but also a lot of noise in there. Click maps, which show where people click, are important to capture. You have that information in digital analytics as well, but click maps turn it into nice aggregate heat maps that are very useful for selling an idea to executives, bosses, and clients, and they can be insightful in their own right.

Then there are scroll maps, which show how far down people scroll. Knowing what gets seen and what gets ignored on a page can be useful. With mouse tracking we also have session replays, meaning we record videos of people going through your site. Their whole experience on your site, what they see and what they do, is recorded without voice, so we don’t know what they’re saying, but we do see what they’re doing. A use case for that: you have a page where a bunch of people drop off and you don’t know why. You watch a lot of session replays and you might, or might not, spot something people are doing or something that acts as a roadblock. We want to find those roadblocks.

How do we measure the effectiveness of a testing program?

  • One is testing velocity. Testing velocity measures how many tests you are running per month or per year. There can be obstacles here, set by traffic: if you don’t have much traffic, you can’t test very fast. But let’s say you could be running five tests a month and you’re only running one; obviously you’re not doing a good job. The faster we test, the more tests we run, the more we learn, and the more we grow.
  • Number two is the percentage of tests that produce a win. On average, across the industry, this is really, really low. Most tests end in either a loss or no significant difference, and of course that’s because most people are testing trivial things that make no difference. Now that you’ve made it this far in the course, you’re using research, Excel, and frameworks, gathering data, and identifying actual problems that your actual users have, so you can run tests that tackle those identified problems. Your percentage of winning tests will go way up.
  • Number three is impact per successful experiment. If your test wins, is it one percent better, three percent better, five percent better? On average, most tests land somewhere between a 15 percent relative decrease and a 15 percent relative increase. You can do better than that. Once you start measuring your average uplift per test, you’ll know how good your testing program is. Of course, it also depends on how optimized your site is: if it looks like your grandma designed it, you’re going to get bigger wins up front, and as your site gets more optimized your average percentage may go down. A simple sketch for tracking all three metrics follows this list.
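To make these three metrics concrete, here is a minimal sketch of tracking them from a simple test log; the test names and numbers are made up for illustration.

```python
# Hypothetical test log; names and lift numbers are made up for illustration.
tests = [
    {"name": "PDP copy rewrite",      "winner": True,  "lift": 0.042},
    {"name": "Checkout trust badges", "winner": False, "lift": -0.013},
    {"name": "Simplified navigation", "winner": True,  "lift": 0.031},
]
months_covered = 2  # the period this log spans

velocity = len(tests) / months_covered                 # 1. testing velocity (tests per month)
wins = [t for t in tests if t["winner"]]
win_rate = len(wins) / len(tests)                      # 2. share of tests that produce a win
avg_uplift = sum(t["lift"] for t in wins) / len(wins)  # 3. average impact per winning test

print(f"Velocity: {velocity:.1f}/month, win rate: {win_rate:.0%}, avg uplift: {avg_uplift:.1%}")
```

Even a spreadsheet works for this; the point is to track all three numbers over time instead of celebrating individual wins.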

Site Walkthroughs

Questions we’re trying to answer here: does the site work properly in all the major browsers and on all device categories, and is the user experience solid everywhere?

Technical issues and poor user experience will kill conversions.

It is great practice to create a custom report to see conversions per browser:
Such a report makes the differences in conversion rates easy to spot. Mobile browsers tend to do much worse. While that is to be expected for smartphones, tablet conversion rates are usually expected to be similar to desktop, so a gap there is worth investigating. Next, we should look at conversion rates by browser version.
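If your analytics tool lets you export session-level data, a rough version of this report can be rebuilt with a few lines of pandas. This is only a sketch; the file name and column names are assumptions.

```python
import pandas as pd

# Assumed CSV export with one row per session; columns are hypothetical.
sessions = pd.read_csv("sessions_export.csv")  # columns: device, browser, browser_version, converted (0/1)

by_browser = (
    sessions
    .groupby(["device", "browser"])
    .agg(sessions=("converted", "size"), conversions=("converted", "sum"))
)
by_browser["conversion_rate"] = by_browser["conversions"] / by_browser["sessions"]

# Sort ascending so the worst-performing device/browser combinations surface first.
print(by_browser.sort_values("conversion_rate").head(10))
```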

When deciding whether to fix or optimize for a specific browser, we first need to see if it’s worth the effort. For example, say a browser brings in just short of $21,000 per month in revenue at a 4.6% conversion rate. If the conversion rate were 6.5% instead of 4.6%, how much extra revenue would that be? About $8,500 extra per month, over $100,000 per year. Then we can say that yes, it’s worth it, and the next step is to conduct a thorough walkthrough in that browser. Try to run into every possible scenario you can.
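The arithmetic behind that estimate is easy to sanity-check, holding traffic and average order value constant (a simplifying assumption):

```python
# Back-of-the-envelope math for the example above; numbers come from the text.
current_revenue = 21_000   # monthly revenue from this browser, in dollars
current_cr = 0.046         # current conversion rate
target_cr = 0.065          # conversion rate we believe is achievable

projected_revenue = current_revenue * (target_cr / current_cr)
extra_monthly = projected_revenue - current_revenue

print(f"Extra revenue: ~${extra_monthly:,.0f}/month, ~${extra_monthly * 12:,.0f}/year")
# Roughly $8,700 per month and $104,000 per year, in line with the estimate above.
```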

What you need to do is to conduct thorough walkthroughs of the site with all the top browsers and each device category (desktop, tablet, mobile). Pay attention to the site structure, and definitely go through the checkout and form filling process. The goal here is to put yourself in the customers’ shoes and see what they’re experiencing. It’s also a great way to familiarize yourself with the site and its structure.

Walk through the site in question, making note of URL structure and handling.
Checking page URLs as you move around is useful for measuring flows later.
Practice building a picture of the site’s navigation flows and identifying key points of interest for analytics: anything suspicious should be checked later with data. Focus on finding bad or suspicious pages or parts of the site. Check whether URLs are shared or split across different flows (e.g. if the site sells 3 different products, do they have unique funnels, and can each funnel be measured separately?). As you walk the site, take notes; call this your ‘Areas of Interest’ document. Use this work to drive an analytics inspection later on.
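One lightweight way to capture this is to write each funnel down as a set of URL patterns and check that no URL belongs to more than one funnel. The patterns below are purely hypothetical.

```python
import re

# Hypothetical funnel definitions written down during the walkthrough.
funnels = {
    "product_a": [r"^/a/$", r"^/a/configure$", r"^/a/checkout$"],
    "product_b": [r"^/b/$", r"^/b/checkout$"],
    "product_c": [r"^/c/$", r"^/c/checkout$"],
}

def overlapping_urls(funnels, sample_urls):
    """Yield URLs that match steps in more than one funnel."""
    for url in sample_urls:
        hits = [name for name, patterns in funnels.items()
                if any(re.match(p, url) for p in patterns)]
        if len(hits) > 1:
            yield url, hits

sample_urls = ["/a/", "/a/checkout", "/b/checkout", "/c/"]
print(list(overlapping_urls(funnels, sample_urls)))  # an empty list means each funnel can be measured separately
```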

Make sure that all key devices are tested: desktop browsers (and their different versions), tablets (iPad and Android), and smartphones (various). When choosing which devices to test with, besides the iOS devices, pick the most popular Android / Windows / BlackBerry devices. Check your Google Analytics mobile devices report to see what your audience is actually using, keeping in mind that the data will not be perfectly accurate.
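One way to pick that device list is to export the mobile devices report and keep the most popular devices until they cover roughly 90% of mobile sessions. The file and column names in this sketch are assumptions.

```python
import pandas as pd

# Assumed export of the GA mobile devices report; columns are hypothetical.
devices = pd.read_csv("mobile_devices.csv")  # columns: device, sessions

devices = devices.sort_values("sessions", ascending=False)
devices["coverage"] = devices["sessions"].cumsum() / devices["sessions"].sum()

# Keep adding devices until roughly 90% of mobile sessions are covered.
to_test = devices[devices["coverage"].shift(fill_value=0) < 0.90]
print(to_test[["device", "sessions", "coverage"]])
```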

Find the PPC and organic routes for mobile, tablet, and desktop, and walk those journeys starting with “landing page zero.” Walk these three journeys: smartphone (iPhone and Android), tablet (iPad and Android), and a non-primary desktop browser (e.g. IE8). Walk the real journey, not the analytics data. It might be tempting to just check user flows within GA, but you need to experience the site as users do; it will be 10x more insightful. Try to figure out whether the journey is tailored to the specific device or the same across all devices. Make a note of areas of interest for later.

All the findings will be very practical: you’ll get a series of prioritized fixes. If you’re doing work for a client, getting quick wins early on helps you build a relationship while delivering client value. This will take 1–2 days, depending on the site and device mix. If you can afford it, this is something you can outsource to technical people or even dedicated companies.

Heuristic Analysis

We’re all humans with our biases; don’t fall into the trap of thinking that your judgment is the truth. There is no such thing as an objective point of view.
Heuristic analysis is one of the first things to do, and it’s where you want to start. It helps you familiarize yourself with the site and map out “problem areas,” so you can see later whether the data validates or disproves your findings. What you discover here will also help you determine whether you need to collect more data on one thing or another.

Biases to be aware of:

In conclusion, heuristic analysis serves as an input to a hypothesis, and we then need to look for data that confirms or disproves it.

Structured approach

Evaluating a website is a process.

The 7 Levels of Conversion by Web Arts
Web Arts uses 7 levels to assess each page:

  • Relevance. Am I in the right place here?
  • Trust. Can I trust this provider?
  • Orientation. Where should I click? What do I have to do?
  • Stimulance. Why should I do it right here and right now?
  • Security. Is it secure here? What if…?
  • Convenience. How complicated will it be?
  • Confirmation. Did I do the right thing?

Invesp Conversion Framework
Invesp has 8 principles in their conversion framework.

  • Build user confidence. Make them trust you by using all kinds of trust elements.
  • Engagement. Entice visitors to spend a longer time, come back to visit, bookmark it, and/or refer others to it.
  • Understand the impact of buying stages. Not everybody will buy something on their first visit, so build appropriate sales funnels and capture leads instead, and sell them later.
  • Deal with fears, uncertainties and doubts (FUDs). Address users’ concerns, hesitations, and doubts.
  • Calm their concerns. Incentives are a great way to counter FUDs and relieve friction.
  • Test, Test, Test.
  • Implement in an iterative manner. Build in smaller blocks, make smaller changes, test them, and improve their performance.

Marketing Experiments Methodology heuristic approach

This is what the characters mean:

C = Probability of conversion
m = Motivation of user (when)
v = Clarity of the value proposition (why)
i = Incentive to take action
f = Friction elements of process
a = Anxiety about entering information

Translation: the probability of conversion depends on the match between the offer and the visitor’s motivation, plus the clarity of the value proposition, plus (incentive to take action now minus friction), minus anxiety. The numbers next to the characters signify the relative weight of each factor.
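For reference, the formula itself, as MarketingExperiments publishes it, is:

C = 4m + 3v + 2(i − f) − 2a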

Friction is defined as a psychological resistance to a given element in the sales or sign-up process. Anxiety is a psychological concern stimulated by a given element in the sales or sign-up process. Reduce these as much as possible, and do what you can to increase the user’s motivation and incentive and to clarify the value proposition.

LIFT framework
An interesting framework for analyzing web pages is LIFT, developed by WiderFunnel. It evaluates a page around six conversion factors: the value proposition at the core, with clarity, relevance and urgency working to lift conversions, and anxiety and distraction dragging them down.

Steps ConversionXL uses for heuristic analysis
Here are the steps used for performing heuristic analysis of a given website.

Start by conducting thorough walkthroughs of the site with all the top browsers and each device category (desktop, tablet, mobile). Pay attention to the site structure, and go through the checkout and form-filling process. The goal here is to familiarize yourself with the site and its structure, and to identify any cross-browser and cross-device issues (both UX and technical).

When evaluating a site, assess each page for clarity: is it perfectly clear and understandable what’s being offered and how it works? This is not just about the value proposition; it applies to all pages (pricing, features, product pages, etc.). Understand the context and evaluate page relevance for visitors: does the web page relate to what the visitor thought they were going to see? Do pre-click and post-click messages and visuals align?

Assess incentives to take action: Is it clear what people are getting for their money? Is there some sort of believable urgency? What kind of motivators are used? Is there enough product information? Is the sales copy persuasive?
Evaluate all the sources of friction on the key pages. This includes difficult and long processes, insufficient information, poor readability and UX, bad error validation, fears about privacy & security, any uncertainties and doubts, unanswered questions.

Pay attention to distracting elements on every high-priority page. Are there any blinking banners or automatic sliders stealing attention? Too much information unrelated to the main call to action? Any elements that are not directly contributing to visitors taking the desired action? Understand buying phases and see whether visitors are rushed into too big a commitment too soon. Are there paths in place for visitors in different stages (research, evaluation, etc.)?

Going into detail:

Ask questions like:

  • Do the call-to-action buttons match the value visitors are going to get?
  • Are the images on the page relevant to the content?
  • If the user came from an external site (Google search, PPC, referral etc), will they recognize that it’s a continuation of their journey?
    The bottom line is to bring in relevant traffic.

Besides optimizing traffic sources, the following has to be done:

  • Mapping out all the key sources of traffic, and identifying the top landing pages for each.
  • Comparing pre-click and post-click messaging and visuals.
  • Identifying any mismatch between what people thought they were going to get and what they’re actually getting, in terms of the offer and the wording of the offer.

Content personalization is a useful ally here as well, and there are many tools you can use for that.

You will always sell more by being crystal clear about what you offer, who it’s for, and what they get than by relying on psychological tricks and clever copywriting. When evaluating clarity, it’s important to focus on both design clarity and content clarity. The best way to assess content clarity is to check whether you could instantly answer the following questions on any page:

  • Where am I?
  • What is this page about?
  • What can I do here?
  • How is it useful to me?
  • Why should I do it?
  • Can I understand what the product or service is and how it works in a reasonable amount of time?
  • Are there supporting images and/or videos that help me understand it?
  • Is the product information adequate and sufficiently thorough for making a decision?
  • Are all important associated pieces of information clear, such as pricing, shipping info, warranty, and return policy?
  • Is it clear what I have to do next?

Evaluating design clarity:

  • Is there strong visual hierarchy in place?
  • Does the visual hierarchy lead toward the most wanted action?
  • Are less important things also less prominent design-wise?
  • Is there enough white space to draw attention to what matters?
  • Are there visuals in place that support the content?
  • Does the call to action stand out enough?
  • How much top priority information is below the fold?
  • If there’s more information below the fold, is it clear that visitors should scroll?
  • Are there any logical breaks that stop the eye flow?
  • Is the eye path clear?
  • Is the body copy font size large enough for easy reading? The optimal size is around 16px, but that depends on the font.

Friction

When it comes to asking for sensitive information, the more personal you get, the less comfortable people will usually feel sharing it. If you can, avoid asking for things like social security numbers or phone numbers during the first steps.

Slow-loading pages can be monitored through Google Analytics and will be covered in an upcoming chapter on speed optimization. Slow sites drive people nuts, especially when they’re browsing on a mobile phone. Difficulty finding features or content drives people nuts as well: whenever something is difficult instead of obvious or intuitive, it causes friction. When a website is hard to navigate or doesn’t look well made, and you find yourself wondering where the search box even is, people will turn away. No one likes a website full of spam, which is why the design of the website matters. Create a website that doesn’t leave people with privacy and security concerns, and that avoids cheesy, fake stock images, complicated language, jargon, and hype. If the language you use treats people like idiots, or you try to seem smarter than you are, it will backfire. Typos and poor spelling are another thing to watch for, as are usability problems, technical errors, and cross-browser and cross-device issues. Low contrast between text and background colors causes poor readability.

Every key page on your website should have a clearly defined ‘most wanted action’: what is it that you want your visitors to do on this page? This might be to fill out a form, buy something, click somewhere, etc. Define the one action you want each page to drive. Everything that does not contribute to people taking that particular action may serve as a distraction, and you’re probably better off either removing those elements completely or minimizing them by pushing them down in the visual hierarchy.

Use these guiding questions to identify sources of distraction on a given page:

  • Are there any moving, blinking elements such as banners, automatic sliders?
  • Which elements on the page are not contributing to people taking the most wanted action?
  • How many of those elements could be distracting?
  • What could be removed from the page without compromising its performance?
  • In the checkout/conversion funnel pages, are there navigation elements that could be removed?
  • Is the top header compact, or is it taking up too much valuable screen space?
  • Are there visual elements of lesser importance high in the visual hierarchy?
  • Is there an item that is not about the specific action we want people to take?

Motivation and Incentives

  • Is there a clear, benefit-driven offer?
  • Do I understand why I should take action?
  • Are features translated into benefits?
  • Is it clear what people are getting when they click on a button or fill a form out?
  • Is it something that’s desirable or useful for the target audience?
  • Is there enough product information?
  • Is the content interesting?
  • Does the content displayed use simple language?
  • Is the sales copy persuasive?
  • Could we apply some persuasion principles here that would be a good match, such as social proof, urgency or scarcity?

Buying Stages

  • Awareness is the stage where a customer first becomes aware of your product, or of a need that they want to fulfill.
  • Consideration is the stage where a customer starts evaluating solutions to their need(s).
  • Purchase is the stage where people are ready to spend money.

While most websites are focused on their selling process, a good portion of visitors are not ready to buy right away. Hard selling will just make them leave, so instead of pushing the sale we should help them do their research and capture their contact details so we can bring them back later. Each stage of the buying cycle requires site elements to be designed and structured in a certain way to deliver the information the visitor is looking for.

As an example, a SaaS business would assess how well the site serves each of the awareness, consideration, and purchase stages. If the user is not ready to buy, we should help them in their research and evaluation process, and focus on secondary calls to action in the key funnels that help users learn more about the product. Two questions should be answered: is there an effective email capture process, and does it have a proper lead magnet? Email capture is important in general. Capturing the email address of the people visiting your site is the best way to bring them back and develop a relationship with them.

Once you have it, you need to put them through a lead nurturing process, so have a drip email campaign ready.

As you’re going through the site and analyzing each page for clarity, distraction and so on, write down everything you notice. This is called “mapping out areas of interest.” All of these observations come with a personal bias: just because you think something sucks doesn’t mean that’s the reality. This is not about ego, but about the truth. We’re scientists trying to figure out where the website is leaking money, so everything we “discover” needs to be checked in analytics and checked again when doing user testing.

Once you’ve mapped out all the areas of interest, go through the website again, this time with Google Analytics, survey data, and other sources open in other tabs, and see if you can find data to confirm or disprove your findings. If there doesn’t seem to be data available about something, your job as a growth marketer is to figure out how to get that data. If you don’t know something, figure it out. See if data points from other research methods, such as qualitative research and mouse tracking, can help you improve your site or rethink how to optimize it.

Usability Evaluation

Jakob Nielsen defines usability by 5 quality components:

  • Learnability: how easy is it for users to accomplish basic tasks the first time they encounter the design?
  • Efficiency: once users have learned the design, how quickly can they perform tasks?
  • Memorability: when users return to the design after a period of not using it, how easily can they reestablish proficiency?
  • Errors: how many errors do users make, how severe are these errors, and how easily can they recover from them?
  • Satisfaction: how pleasant is it to use the design?

A usability evaluation is not the same as user testing. User testing is really, really important, but it has its limits:

On average, 5 to 15 users will not discover all the issues on a site, and what they do discover depends a lot on the tasks they have to perform. Testers know that they are testers; they don’t have to part with their own money and buy something, which means they might not comment on everything, and they may not even notice when something is bothering them. This is why, in addition to user testing, you want to complete a usability audit on your website as well.
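As a rough illustration of why a handful of testers never finds everything, Nielsen’s often-cited model estimates the share of usability problems found with n testers as 1 − (1 − L)^n, where L is the chance that a single tester uncovers a given problem (around 31% in his data). With 5 testers that works out to roughly 85% of problems, and even that assumes the tasks happen to exercise the problem areas in the first place.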

Use checklists to help you perform this audit. You don’t need to be a usability expert; there are checklists out there that you can use to evaluate the site you’re working on. The good thing is that most of those issues are quite quick and easy to fix. If you have the budget for it, bring in a dedicated usability expert or even a third-party consultancy. If you don’t have the budget, you can learn enough about usability to be useful.

I look forward to learning more next week!