Posted by Stephen_Job
There are several studies (and lots of data) out there about how people use Google SERPs, what they ignore, and what they focus on. An example is Moz’s recent experiment testing whether SEOs should continue optimizing for featured snippets or not (especially now that Google has announced that if you have a featured snippet, you no longer appear elsewhere in the search results).
Two things I have never seen tested, though, are users' actual reactions to SERPs and their behavior on them. My team and I set out to test these ourselves, and this is where biometric technology comes into play.
What is biometric technology and how can marketers use it?
Biometric technology measures physical and behavioral characteristics. By combining the data from eye tracking devices, galvanic skin response monitors (which measure sweat levels, giving us a window into subconscious reactions), and facial recognition software, we can gain useful insight into behavioral patterns.
We’re learning that biometrics can be used in a broad range of settings, from UX testing for websites, to evaluating consumer engagement with brand collateral, and even to measuring emotional responses to TV advertisements. In this test, we wanted to see whether it could also help us understand how people actually interact with Google SERPs and provide insight into searching behavior more generally.
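As a rough illustration of what "combining the data" involves in practice, here is a minimal Python sketch that lines up three sensor streams recorded at different sampling rates onto a shared timeline. The column names, sampling rates, and readings are invented for the example; they are not the output format of any particular device.

```python
# A minimal sketch of how multi-sensor biometric streams might be combined.
# Column names, sampling rates, and values are illustrative assumptions, not
# the actual format used by any particular eye tracker or GSR device.
import pandas as pd

# Each stream is a time-indexed series sampled at a different rate.
gaze = pd.DataFrame({
    "t_ms": [0, 17, 33, 50],               # ~60 Hz eye tracker
    "x": [512, 530, 541, 560],
    "y": [300, 305, 310, 330],
})
gsr = pd.DataFrame({
    "t_ms": [0, 250, 500],                 # ~4 Hz skin-conductance samples
    "microsiemens": [2.1, 2.3, 2.8],
})
emotion = pd.DataFrame({
    "t_ms": [0, 100, 200, 300, 400, 500],  # facial-coding output
    "valence": [0.1, 0.0, -0.2, -0.3, -0.1, 0.2],
})

# Align everything onto the gaze timeline by taking the most recent
# sample from each slower stream ("as of" this timestamp).
combined = pd.merge_asof(gaze, gsr, on="t_ms")
combined = pd.merge_asof(combined, emotion, on="t_ms")
print(combined)
```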
The plan
The goal of the research was to assess the impact that SERP layouts and design have on user searching behavior and information retrieval in Google.
To simulate natural searching behavior, our UX and biometrics expert Tom Pretty carried out a small user testing experiment. Users were asked to perform a number of Google searches with the purpose of researching and buying a new mobile phone. One of the goals was to capture data from every point of a customer journey.
Participants were given tasks with specific search terms at various stages of purchasing intent. While prescribing search terms limited natural searching behavior, it was a sacrifice made to ensure the study had the best chance of achieving consistency in the SERPs presented, and so aggregated results could be gained.
The tests were run on desktop, although we plan to expand the study to mobile in the future.
Users began each task on the Google homepage. From there, they informed the moderator when they found the information they were looking for. At that point they proceeded to the next task.
Data inputs
- Eye tracking
- Facial expression analysis
- Galvanic skin response (GSR)
Data sample
- 20 participants
Key objectives
- Understand gaze behavior on SERPs (where people look when searching)
- Understand engagement behavior on SERPs (where people click when searching)
- Identify any emotional responses to SERPs (what happens when users are presented with ads?)
- Interaction analysis with different types of results (e.g. ads, shopping results, map packs, Knowledge Graph, rich snippets, PAAs, etc.).
Research scenario and tasks
We told participants they were looking to buy a new phone and were particularly interested in an iPhone XS. They were then provided with a list of tasks to complete, each focused on searches someone might make when buying a new phone. Using the suggested search terms for each task was a stipulation of participation.
Tasks
- Find out the screen size and resolution of the iPhone XS
  Search term: iPhone XS size and resolution
- Find out the talk time battery life of the iPhone XS
  Search term: iPhone XS talk time
- Find reviews for the iPhone XS that give a quick list of pros and cons
  Search term: iPhone XS reviews
- Find the address and phone number of a phone shop in the town center that may be able to sell you an iPhone XS
  Search term: Phone shops near me
- Find what you feel is the cheapest price for a new iPhone XS (handset only)
  Search term: Cheapest iPhone XS deals
- Find and go on to buy a used iPhone XS online (stop at point of data entry)
  Search term: Buy used iPhone XS
We chose all of the search terms ourselves, first, for ease of correlating data. (If everyone had searched for whatever they wanted, we might not have seen certain SERP designs at all.) And second, so we could make sure that everyone who took part got exactly the same results within Google. We needed the searches to return a featured snippet, the Google Knowledge Graph, Google's “People also ask” feature, as well as shopping feeds and PPC ads.
On the whole, this was successful, although in a few cases there were small variations in the SERP presented (even when the same search term had been used from the same location with a clear cache).
“When designing a study, a key concern is balancing natural behaviors and giving participants freedom to interact naturally, with ensuring we have assets at the end that can be effectively reported on and give us the insights we require.” — Tom Pretty, UX Consultant, Coast Digital
The results
Featured snippets
This was the finding that our in-house SEOs were most interested in. According to a study by Ahrefs, featured snippets get 8.6% of clicks while 19.6% go to the first natural result below them, but when no featured snippet is present, 26% of clicks go to the first result. At the time, this meant that having a featured snippet wasn’t terrible, especially if you could gain a featured snippet while not ranking first for a term. Who doesn't want real estate above a competitor?
However, with Danny Sullivan of Google announcing that if you appear in a featured snippet, you will no longer appear anywhere else on the search engine results page, we started to wonder how this would change what SEOs thought about them. Maybe we would see a mass exodus of SEOs de-optimizing pages for featured snippets so they could keep their organic ranking instead. Moz’s recent experiment estimated a 12% drop in traffic to pages that lose their featured snippet, but what does this mean for user behavior?
What did we find out?
In the information-based searches, we found that featured snippets actually attracted the most fixations. They were consistently the first element viewed by users and were where users spent the most time gazing. These tasks were also some of the fastest to be completed, indicating that featured snippets are successful in giving users their desired answer quickly and effectively.
All of this indicates that featured snippets are hugely important real estate within a SERP (especially if you are targeting question-based keywords and more informational search intent).
In both information-based tasks, the featured snippet was the first element to be viewed (within two seconds). It was viewed by the highest number of respondents (96% fixated in the area on average), and was also clicked most (66% of users clicked on average).
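For anyone curious how figures like these are derived, here is a small sketch that aggregates per-participant area-of-interest records into fixation and click rates for each SERP element. The record format, element names, and values are made up for illustration; they are not our actual eye-tracking export.

```python
# A toy sketch of aggregating per-element fixation and click rates across
# participants. The records below are invented for illustration; real
# area-of-interest exports differ by eye-tracking vendor.
from collections import defaultdict

# One record per (participant, SERP element): did they fixate / click it?
records = [
    {"participant": 1, "element": "featured_snippet", "fixated": True,  "clicked": True},
    {"participant": 1, "element": "people_also_ask",  "fixated": True,  "clicked": False},
    {"participant": 2, "element": "featured_snippet", "fixated": True,  "clicked": False},
    {"participant": 2, "element": "people_also_ask",  "fixated": False, "clicked": False},
]

totals = defaultdict(lambda: {"n": 0, "fixated": 0, "clicked": 0})
for r in records:
    t = totals[r["element"]]
    t["n"] += 1
    t["fixated"] += r["fixated"]
    t["clicked"] += r["clicked"]

for element, t in totals.items():
    print(f"{element}: {t['fixated'] / t['n']:.0%} fixated, {t['clicked'] / t['n']:.0%} clicked")
```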
People also ask
The “People also ask” (PAA) element is an ideal place to find answers to question-based search terms that people are actively looking for, but do users interact with them?
What did we find out?
The results showed that, after looking at a featured snippet, searchers skipped over the PAA element to reach the standard organic results. Participants did gaze back at PAAs later, but clicks in those areas were extremely low, showing limited engagement. This behavior indicates that they are not distracting users or significantly changing how they move through the SERP.
Knowledge Graph
One task involved participants searching using a keyword that would return the Google Knowledge Graph. The goal was to find out the interaction rate, as well as where the main interaction happened and where the gaze went.
What did we find out?
Our findings indicate that when a search with purchase intent is made (e.g. “deals”), the Knowledge Graph attracts attention sooner, potentially because it includes visible prices.
By also introducing heat map data, we can see that the pricing area on the Knowledge Graph picked up significant engagement, but there was still a lot of attention focused on the organic results.
Essentially, this shows that while the Knowledge Graph is useful space, it does not wholly detract from the main SERP column. Users still turn to paid ads and organic listings to find what they are looking for.
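If you are wondering how a gaze heat map of this kind gets built, here is a simplified sketch: fixation coordinates are binned into a grid, and each cell's count becomes its "heat". The coordinates and screen dimensions below are assumptions for the example, not data from the study.

```python
# A rough sketch of turning raw fixation coordinates into a gaze heat map.
# The fixation points and screen size are made up; a real export would come
# from the eye-tracking software used in the study.
import numpy as np

# (x, y) fixation coordinates in screen pixels, one row per fixation.
fixations = np.array([
    [640, 210], [655, 215], [660, 230],   # e.g. a cluster on the Knowledge Graph pricing area
    [320, 480], [330, 500], [340, 520],   # e.g. a cluster on the organic results
])

# Bin fixations into a coarse grid; each cell's count is its "heat".
heatmap, x_edges, y_edges = np.histogram2d(
    fixations[:, 0], fixations[:, 1],
    bins=(16, 9), range=[[0, 1280], [0, 720]],
)
print(heatmap.T)  # transpose so rows correspond to screen rows
```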
Location searches
We have all seen “near me” queries appearing under certain keywords in Google Search Console data, and there is an ongoing discussion about why, and how, to optimize for them. From a pay-per-click (PPC) point of view, should you even bother trying to appear in them? By introducing such a search term in the study, we were hoping to answer some of these questions.
What did we find out?
From the fixation data, we found that most attention was dedicated to the local listings rather than the map or organic listings. This would indicate that the greater amount of detail in the local listings was more engaging.
However, in a different SERP variant, the addition of the product row led to users spending a longer time reviewing the SERP and expressing more negative emotions. This product row addition also changed gaze patterns, causing users to progress through each element in turn, rather than skipping straight to the local results (which appeared to be more useful in the previous search).
Results that searchers deem irrelevant or less important could be the main cause of the negative emotion and, more broadly, could point to general frustration at having obstacles placed between the user and the answer.
Purchase intent searching
For this element of the study, participants were given queries that indicate someone is actively looking to buy. At this point, they have carried out the educational search, maybe even the review search, and now they are intent on purchasing.
What did we find out?
For “buy” based searches, the horizontal product bar operates effectively, picking up good engagement and clicks. Users still focused on organic listings first, however, before returning to the shopping bar.
The addition of Knowledge Graph results for this type of search wasn't very effective, picking up little engagement in the overall picture.
These results indicate that the shopping results presented at the top of the page play a useful role when searching with purchasing intent. However, in both variations, the first result was the most-clicked element in the SERP, showing that a traditional PPC or organic listing remains highly effective at this point in the customer journey.
Galvanic skin response
Looking at GSR when participants were on the various SERPs, there is some correlation between the self-reported “most difficult” tasks and a higher than normal GSR.
For the “talk time” task in particular, the featured snippet presented information for the iPhone XS Max rather than the iPhone XS, which was likely the cause of the negative reaction: participants had to spend longer digging through multiple sources of information. The incorrect data surfaced in the featured snippet is also the most likely explanation for that task's high difficulty rating.
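To show the kind of check behind that observation, here is a tiny sketch that correlates per-task difficulty ratings with GSR readings. Every number in it is invented purely for illustration; it is not the study's actual data.

```python
# A minimal sketch of checking whether self-reported task difficulty tracks
# with galvanic skin response. All numbers are illustrative only, not the
# study's actual measurements.
from statistics import mean

# Per task: difficulty ratings (1-5) and peak GSR readings (microsiemens),
# one value per participant.
tasks = {
    "size_resolution": {"difficulty": [1, 2, 1], "gsr": [2.0, 2.2, 2.1]},
    "talk_time":       {"difficulty": [4, 5, 4], "gsr": [3.1, 3.4, 3.0]},
    "reviews":         {"difficulty": [2, 2, 3], "gsr": [2.4, 2.5, 2.6]},
}

difficulty = [mean(t["difficulty"]) for t in tasks.values()]
gsr = [mean(t["gsr"]) for t in tasks.values()]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"difficulty vs GSR: r = {pearson(difficulty, gsr):.2f}")
```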
What does it all mean?
Unfortunately, this wasn't the largest study in the world, but it was a start. Running it again with greater numbers would be ideal and would help firm up some of the findings (and I, for one, would love to see a much larger group take part).
That being said, there are some solid conclusions that we can take away:
- The nature of the search greatly changes engagement behavior, even when similar SERP layouts are displayed (which is probably why Google split-tests them so heavily).
- Featured snippets are highly effective for information-based searching; while roughly a third of users chose not to follow through to the site after finding the answer, two-thirds still clicked through to the website (which is very different from the data we have seen in previous studies).
- Local listings (especially when served without a shopping bar) are engaging and give users essential information in an effective format.
- Even with the addition of Knowledge Graph, “People also ask”, and featured snippets, more traditional PPC ads and SEO listings still play a big role in searching behavior.
Featured snippets are not the worst thing in the world (contrary to the popular knee-jerk reaction from the SEO industry after Google's announcement). All that has changed is that you now have to work out which featured snippets are worth it for your business (instead of trying to claim all of them). On purely informational or educational searches, they performed really well: people stayed fixated on them for a fairly lengthy period, and 66% clicked through. However, we also saw people react badly when the featured snippet contained irrelevant or incorrect information.
The findings also give some weight to the idea that a lot of SEO is now about context. What do users expect to see when they search a certain way? They generally expect to see lots of shopping feeds for a purchase-intent keyword, but they wouldn't expect to see them in an educational search.
What now?
Hopefully, you found this study useful and learned something new about search behavior. Our next goal is to increase the number of people in the study to see if a bigger data pool confirms our findings, or shows us something completely unexpected.
from Moz Blog https://moz.com/blog/google-serp-layouts-searching-behavior