Monday, August 31, 2015

Traffic and Engagement Metrics and Their Correlation to Google Rankings

Posted by Royh

When Moz undertook this year’s Ranking Correlation Study (Ranking Factors), there was a desire to include data points never before studied. Fortunately, SimilarWeb had exactly what was needed. For the first time, Moz was able to measure ranking correlations with both traffic and engagement metrics.

Using Moz’s ranking data on over 200,000 domains, combined with multiple SimilarWeb data points—including traffic, page views, bounce rate, time on site, and rank—the Search Ranking Factors study was able to measure how these metrics corresponded to higher rankings.

These metrics differ from the traditional SEO parameters Moz has measured in the past in that they are primarily user-based metrics. This means that they vary based on how users interact with the individual websites, as opposed to static features such as title tag length. We'll find these user-based metrics important as we learn how search engines may use them to rank webpages, as illustrated in this excellent post by Dan Petrovic.

Every marketer and SEO professional wants to know if there is a correlation between web search ranking results and the website’s actual traffic. Here, we’ll examine the relationship between website rankings and traffic engagement to see which metrics have the biggest correlation to rankings.

You can view the results below:

Traffic correlated to higher rankings

For the study, we examined both direct and organic search visits over a three-month period. SimilarWeb's traffic results show that there is generally a high correlation between website visits and Google's search rankings.

Put simply, the more traffic a site received, the higher it tended to rank. Practically speaking, this means you would expect to see sites like Amazon and Wikipedia higher up in the results, while smaller sites would tend to rank slightly worse.

This doesn't mean that Google uses traffic and user engagement metrics as an actual ranking factor in its search algorithm, but it does show that a relationship exists. Hypothetically, we can think of many reasons why this might be the case:

  • A "brand" bias, meaning that Google may wish to treat trusted, popular, and established brands more favorably.
  • Possible user-based ranking signals (described by Dan here) where users are more inclined to choose recognizable brands in search results, which in theory could push their rankings higher.
  • Which came first, the chicken or the egg? Alternatively, it could simply be that high-ranking websites become popular because they rank highly.

Regardless of the exact cause, it seems logical that the more you improve your website’s visibility, trust, and recognition, the better you may perform in search results.

Engagement: Time on site, bounce rate, and page views

While not as large as the traffic correlations, we also found a positive correlation between a website’s user engagement and its rank in Google search results. For the study, we examined three different engagement metrics from SimilarWeb.

  • Time on site: 0.12 is not considered a strong correlation by any means within this study, but it does suggest there may be a slight relationship between how long a visitor spends on a particular site and its ranking in Google.
  • Page views: Similar to time on site, the study found a small correlation of 0.10 between the number of pages a visitor views and higher rankings.
  • Bounce rate: At first glance, the -0.08 correlation between bounce rate and rankings may seem counterintuitive, but it isn't. Keep in mind that a lower bounce rate is often a good indication of user engagement. Therefore, we find that as bounce rates rise (something we often try to avoid), rankings tend to drop, and vice versa.

This means that sites with lower bounce rates, longer time-on-site metrics, and more page views—some of the data points that SimilarWeb measures—tend to rank higher in Google search results.
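For readers who want to run this kind of analysis on their own data, here is a minimal Python sketch of a rank correlation calculation. It assumes a hypothetical CSV export with one row per ranking URL and columns for its Google position and SimilarWeb-style engagement metrics; the file and column names are placeholders, not the actual study data.

```python
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("serp_engagement.csv")  # hypothetical export, one row per ranking URL

# Position 1 is best, so flip the sign to get a "higher is better" rank score;
# a metric that accompanies better rankings will then show a positive rho.
rank_score = -df["position"]

for metric in ["time_on_site", "page_views", "bounce_rate"]:
    rho, p_value = spearmanr(rank_score, df[metric])
    print(f"{metric}: Spearman rho = {rho:.2f} (p = {p_value:.3g})")
```

Ranking factor studies like this one typically report Spearman (rank) correlations, which is why the sketch uses `spearmanr` rather than a plain Pearson correlation.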

While these individual correlations aren’t large, collectively they do lend credence to the idea that user engagement metrics can matter to rankings.

To be clear, this doesn’t mean to imply that Google or other search engines use metrics like bounce rate or click-through rate directly in their algorithm. Instead, a better way to think of this is that Google uses a number of user inputs to measure relevance, user satisfaction, and quality of results.

This is exactly the same argument the SEO community is currently debating over click-through rate and its possible use by Google as a ranking signal. For an excellent, well-balanced view of the debate, we highly recommend reading AJ Kohn’s thoughts and analysis.

It could be that Google is using Panda-like engagement signals. The negative correlation for bounce rate suggests that healthy, well-ranking sites tend to keep their bounce rates low. Similarly, sites where users spend more time and view more pages also tend to appear higher in Google's SERPs.

Global Rank correlations

SimilarWeb’s Global Rank is calculated by data aggregation, and is based on a combination of website traffic from six different sources and user engagement levels. We include engagement metrics to make sure that we’re portraying an accurate picture of the market.

A website with a lower (better) Global Rank on SimilarWeb generally has more visitors and stronger user engagement.

As Global Rank is a combination of traffic and engagement metrics, it's no surprise that it was one of the highest-correlated features in the study. Again, even though the correlation is negative at -0.24, a low Global Rank is actually a good thing: a Global Rank of 1 would mean the most visited and most engaging site on the web. In other words, the lower (better) a site's Global Rank, the higher it tends to rank in Google.

As a side note, SimilarWeb’s Website Ranking provides insights for estimating any website’s value and benchmarking your site against it. You can use its tables to find out who’s leading per industry category and/or country.

Methodology

The Moz Search Engine Ranking Factors study examined the relationship between web search results and links, social media signals, visitor traffic and usage signals, and on-page factors. The study compiled datasets and conducted search result queries in English with Google’s search engine, focusing exclusively on US search results.

The dataset included a list of 16,521 queries taken from 22 top-level Google AdWords categories. Keywords were taken from head, middle, and tail queries. The searches ranged from infrequent (fewer than 1,000 queries per month), to frequent (more than 20,000 per month), to enormously frequent keywords searched more than one million times per month!

The top 50 US search results for each query were pulled from the datasets in a location- and personalization-agnostic manner.

SimilarWeb checked the traffic and engagement stats of more than 200,000 websites, and we have analytics on more than 90% of them. After we pulled the traffic data, we checked for a correlation using keywords from the Google AdWords tool to see what effect metrics like search traffic, time on site, page views, and bounce rates—especially with organic searches—have upon Google’s rankings.

Conclusion

We found a positive correlation between websites that showed highly engaging user traffic metrics on SimilarWeb’s digital measurement platform, and higher placement on Google search engine results pages. SimilarWeb also found that a brand’s popularity correlates to higher placement results in Google searches.

With all the recent talk of user engagement metrics and rankings, we’d love to hear your take. Have you observed any relationship, improvement, or drop in rankings based on engagement? Share your thoughts in the comments below.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from Moz Blog http://ift.tt/1Kn9gZ8
via IFTTT

Saturday, August 29, 2015

How to Build Effective Online Marketing Using Your Content

How to Build Effective Online Marketing Using Your Content was first seen at http://ift.tt/1Re5CUr

Your website has to be a marketing tool that works, and when we say "works" we basically mean that it helps your business grow, or at least meet some of the goals you have set for it. In other words, if you have decided that your online presence is aimed above all at improving your image,...

Read More

Thursday, August 27, 2015

Moz's Acquisition of SERPscape, Russ Jones Joining Our Team, and a Sneak Peek at a New Tool

Posted by randfish

Today, it's my pleasure to announce some exciting news. First, if you haven't already seen it via his blog post, I'm thrilled to welcome Russ Jones, a longtime community member and great contributor to the SEO world, to Moz. He'll be coming aboard as a Principal Search Scientist, joining the likes of Dr. Pete, Jay Leary, and myself as a high-level individual contributor on research and development projects.

If you're not familiar with Mr. Jones' work, let me embarrass my new coworker for a minute. Russ:

  • Was Angular's CTO after having held a number of roles with the company (previously known as Virante)
  • Is the creator of not just SERPscape, but the keyword data API, Grepwords, too (which Moz isn't acquiring—Russ will continue operating that service independently)
  • Runs a great Twitter profile sharing observations & posts about some of the most interesting, hardcore-nerdy stuff in SEO
  • Operates The Google Cache, a superb blog about SEO that's long been on my personal must-read list
  • Contributes regularly to the Moz blog through excellent posts and comments
  • Was, most recently, the author of this superb post on Moz comparing link indices (you can bet we're going to ask for his help to improve Mozscape)
  • And, perhaps most impressively, replies to emails almost as fast as I do :-)

Russ joins the team in concert with Moz's acquisition of a dataset and tool he built called SERPscape. SERPscape contains data on 40,000,000 US search results and includes an API capable of querying loads of interesting data about what appears in those results (e.g. the relative presence of a given domain, keywords that particular pages rank for, search rankings by industry, and more). For now, SERPscape is remaining separate from the Moz toolset, but over time, we'll be integrating it with some cool new projects currently underway (more on that below).

I'm also excited to share a little bit of a sneak preview of a project that I've been working on at Moz that we've taken to calling "Keyword Explorer." Russ, in his new role, will be helping out with that, and SERPscape's data and APIs will be part of that work, too.

In Q1 of this year, I pitched our executive team and product strategy folks for permission to work on Keyword Explorer and, after some struggles (welcome to bigger company life and not being CEO, Rand!), got approval to tackle what I think remains one of the most frustrating parts of SEO: effective, scalable, strategically-informed keyword research. Some of the problems Russ, I, and the entire Keyword Explorer team hope to solve include:

  • Getting more accurate estimates around relative keyword volumes when doing research outside AdWords
  • Having critical metrics like Difficulty, Volume, Opportunity, and Business Value included alongside our keywords as we're selecting and prioritizing them
  • A tool that lets us build lists of keywords, compare lists against one another, and upload sets of keywords for data and metrics collections
  • A single place to research keyword suggestions, uncover keyword metrics (like Difficulty, Opportunity, and Volume), and select keywords for lists that can be directly used for prioritization and tactical targeting

You can see some of this early work in Dr. Pete's KW Opportunity model, which debuted at Mozcon, in our existing Keyword Difficulty & SERP Analysis tool (an early inspiration for this next step), and in a few visuals below:

BTW: Please don't hold the final product to any of these; they're not actual shots of the tool, but rather design comps. What's eventually released almost certainly won't match these exactly, and we're still working on features, functionality, and data. We're also not announcing a release date yet. That said, if you're especially passionate about Keyword Explorer, want to see more, and don't mind giving us some feedback, feel free to email me (rand at moz dot com), and I'll have more to share privately in the near future.

But, new tools aren't the only place Russ will be contributing. As he noted in his post, he's especially passionate about research that helps the entire SEO field advance. His passion is contagious, and I hope it infects our entire team and community. After all, a huge part of Moz's mission is to help make SEO more transparent and accessible to everyone. With Russ' addition to the team, I'm confident we'll be able to make even greater strides in that direction.

Please join me in welcoming him and SERPscape to Moz!





from Moz Blog http://ift.tt/1NBZwug
via IFTTT

Wednesday, August 26, 2015

The SEO Professional's Guide to Waterfall Diagrams

Posted by Zoompf

As we know well by now, the speed of a web page is very important from both an SEO and a user experience perspective. Faster pages rank higher in search engines, and users visit more pages and convert at higher rates on a fast-performing website. In short, the smart SEO professional needs to think about optimizing for performance as well as content.

As we discussed in our last article, WebPageTest is a great free tool you can use to optimize your website performance. One of the most useful outputs of the WebPageTest tool is a graphic known as the waterfall diagram. A waterfall diagram is a graphical view of all the resources loaded by a web browser to present your page to your users, showing both the order in which those resources were loaded and how long it took to load each resource. Analyzing how those resources are loaded can give you insight into what's slowing down your webpage, and what you can fix to make it faster.

Waterfall diagrams are a lot like Microsoft Excel: they are simple in concept and can be very powerful, yet most people aren't using them to their fullest potential. In this article, we will show how an SEO professional can use waterfall diagrams created by tools like WebPageTest to identify and improve their site's performance and user experience.

How to read a waterfall diagram

If you haven't done so already, go to WebPageTest and run a test of your site. When the results are finished, click into the first test result to see the waterfall. Below is a sample waterfall chart (click for a larger version).

[Image: sample waterfall diagram]

As mentioned above, waterfall diagrams are cascading charts that show how a web browser loads and renders a web page. Every row of the diagram is a separate request made by the browser. The taller the diagram, the more requests that are made to load the web page. The width of each row represents how long it takes for the browser to request a resource and download the response.

For each row, the waterfall chart uses a multi-colored bar to show where the browser spent its time loading that resource, for example:

[Image: the timing phases of a single waterfall row]

It's important to understand each phase of a request since you can improve the speed of your site by reducing the amount of time spent in each of these phases. Here is a brief overview:

  • DNS Lookup [Dark Green] - Before the browser can talk to a server it must do a DNS lookup to convert the hostname to an IP Address. There isn't much you can do about this, and luckily it doesn't happen for all requests.
  • Initial Connection [Orange] - Before the browser can send a request, it must create a TCP connection. This should only happen on the first few rows of the chart, otherwise there's a performance problem (more on this later).
  • SSL/TLS Negotiation [Purple] - If your page is loading resources securely over SSL/TLS, this is the time the browser spends setting up that connection. With Google now using HTTPS as a search ranking factor, SSL/TLS negotiation is more and more common.
  • Time To First Byte (TTFB) [Green] - The TTFB is the time it takes for the request to travel to the server, for the server to process it, and for the first byte of the response to make it back to the browser. We will use this measurement to determine whether your web server is underpowered or you need to use a CDN.
  • Downloading [Blue] - This is the time the browser spends downloading the response. The longer this phase is, the larger the resource. Ideally you can control the length of this phase by optimizing the size of your content.

You will also notice a few other lines on the waterfall diagram. There is a green vertical line which shows when "Start Render" happens. As we discussed in our last article, until Start Render happens, the user is looking at a blank white screen. A large Start Render time will make the user feel like your site is slow and unresponsive. There are some additional data points in the waterfall, such as "Content Download", but these are more advanced topics beyond the scope of this article.
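The colored phases described above map onto the timing fields that WebPageTest (and browser developer tools) can export in a HAR file: dns, connect, ssl, wait (roughly the TTFB), and receive (the download). As a rough illustration, here is a small Python sketch that sums those phases for every request in a hypothetical HAR export; "page.har" is a placeholder file name.

```python
import json

with open("page.har") as f:  # placeholder: a HAR export of your test run
    entries = json.load(f)["log"]["entries"]

# Per the HAR spec, timing values are in milliseconds and -1 means
# "did not happen" (e.g. no DNS lookup because the connection was reused).
PHASES = ("dns", "connect", "ssl", "wait", "receive")

for entry in sorted(entries, key=lambda e: e["startedDateTime"]):
    timings = entry["timings"]
    per_phase = {name: max(timings.get(name, -1), 0) for name in PHASES}
    total_ms = sum(per_phase.values())
    print(f"{total_ms:7.0f} ms  {entry['request']['url'][:60]:<60}  {per_phase}")
```

Rows with a large `wait` value point at a slow or distant server, while large `receive` values point at oversized resources, mirroring how you would read the green and blue sections of the diagram.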

Optimizing performance with a waterfall diagram

So how do we make a webpage load more quickly and create a better user experience? A waterfall chart provides us with 3 great visual aids to assist with this goal:

  1. First, we can optimize our site to reduce the amount of time it takes to download all the resources. This reduces the width of our waterfall. The skinnier the waterfall, the faster your site.
  2. Second, we can reduce the number of requests the browser needs to make to load a page. This reduces the height of our waterfall. The shorter your waterfall, the better.
  3. Finally, we can optimize the ordering of resource requests to improve rendering time. This moves the green Start Render line to the left. The further left this line, the better.

Let's now dive into each of these in more detail.

Reducing the width of the waterfall

We can reduce the width of the waterfall by reducing how long it takes to download each resource. We know that each row of the waterfall uses color to denote the different phases of fetching a resource. How often you see different colors reveals different optimizations you can make to improve the overall speed.

  • Is there a lot of orange? Orange is the initial TCP connection made to your site. Only the first 2-6 requests to a specific hostname should need to create a TCP connection; after that, the existing connections get reused. If you see a lot of orange on the chart, it means your site isn't using persistent connections. Below is a waterfall diagram for a site that isn't using persistent connections; note the orange section at the start of every request row. [Image: waterfall with an orange connection phase on every row] Once persistent connections are enabled, those request rows become much narrower because the browser won't have to make a new connection with every request.
  • Are there long, purple sections? Purple is the time spent performing an SSL/TLS negotiation. If you are seeing a lot of purple over and over again for the same site, it means you haven't optimized for TLS. In the snippet of diagram below, we see 2 HTTPS requests: one server has been properly optimized, whereas the other has a bad TLS configuration. [Image: optimized vs. unoptimized TLS negotiation] To optimize TLS performance, see our previous Moz article.
  • Are there any long blue sections? Blue is the time spent downloading the response. If a row has a big blue section, it most likely means the response (the resource) is very large. A great way to speed up a site is to simply reduce the amount of data that has to be sent to the client. If you see a lot of blue, ask yourself, "Why is that resource so large?" Chances are you can reduce its size through HTTP compression, minification, or image optimization. As an example, in the diagram below, we see a PNG image that is taking a long time to download; we can tell because of the long blue section. [Image: request row with a long download phase] Further research revealed that this image was nearly 1.1 MB in size! It turns out the designer forgot to export it properly from Photoshop. Image optimization techniques shrank this row and made the overall page load faster.
  • Is there a lot of green? Chances are there is a lot of green. Green is the browser just waiting to get content. Many times you'll see a row where the browser is waiting 80 or 90 ms, only to spend 1 ms downloading the resource! The best way to reduce the green section is to move your static content, like images, to a content delivery network (CDN) closer to your users. More on this later.

Reducing the height of the waterfall

If the waterfall diagram is tall, the browser is having to make a large number of requests to load the page. The best way to reduce the number of requests is to review all the content your page is including and determine if you really need all of it. For example:

  • Do you see a lot of CSS or JavaScript files? Below is a snippet of a waterfall diagram from an AOL site which, I kid you not, requests 48 separate CSS files! [Image: waterfall snippet showing dozens of separate CSS requests] If your site is loading a large number of individual CSS or JavaScript files, you should try combining them with a CMS plugin or as part of your build process. Combining files reduces the number of requests made, improving your overall page speed.
  • Do you see a lot of "small" (less than 2 KB) JavaScript or CSS files? Consider including the contents of those files directly in your HTML via inline <script> or <style> tags.
  • Do you see a lot of 302 redirects? Redirects appear as yellow highlighted rows and usually represent links on your page that are outdated or mistakenly entered. Each one is an unnecessary request that needlessly increases the height of your waterfall. Replace those links with direct links to the new URLs. (A small sketch for spotting these issues in a HAR export follows this list.)
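To put rough numbers on the checks above, here is a continuation of the earlier hypothetical HAR sketch: it tallies requests by content type, lists redirected URLs, and flags small CSS/JS files that might be candidates for combining or inlining.

```python
import json
from collections import Counter

with open("page.har") as f:  # same placeholder HAR export as before
    entries = json.load(f)["log"]["entries"]

def mime(entry):
    return entry["response"]["content"].get("mimeType", "unknown").split(";")[0]

requests_by_type = Counter(mime(e) for e in entries)
redirected_urls = [e["request"]["url"] for e in entries
                   if e["response"]["status"] in (301, 302)]
small_css_js = [e["request"]["url"] for e in entries
                if mime(e) in ("text/css", "text/javascript", "application/javascript")
                and 0 < e["response"]["content"].get("size", 0) < 2048]

print("Requests by type:", requests_by_type.most_common())
print("Redirected URLs:", redirected_urls)
print("Small CSS/JS files (<2 KB), candidates for combining or inlining:", small_css_js)
```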

Improving rendering time

Recall that the Start Render time represents when the user first sees something on the page other than a blank white page.

What is your Start Render time? If it's longer than 1.5 seconds, you should try to improve it. To do so, first take a look at all the resources "above and to the left" of the Start Render line. These represent everything that should be considered for optimization to improve your render time.

Here are some tips:

  • Do you see any calls to load JavaScript libraries? JavaScript includes can block page rendering; move these lower in your page if possible.
  • Do you see a lot of requests for separate CSS items? Browsers wait until all the CSS is downloaded before they start rendering the page. Can you combine or inline any of those CSS files?
  • Do you see external fonts? When using an external font, the browser won't draw anything until it downloads that font. If possible, try to avoid using externally loaded fonts. If that is not possible, make sure you are eliminating any unnecessary 302 redirects to load that font, or (even better) consider hosting a copy of that font locally on your own webserver.

As an example, here is the top of a waterfall diagram:

[Image: top of a waterfall diagram showing the Start Render line]

The green Start Render line is just over 1 second, which is pretty good. However, if you look to the left of the line, you can see some optimization opportunities. First, there are multiple JS files. With the exception of jQuery, these can probably be deferred until later. There are also multiple CSS files, which could be combined. These optimizations would improve the Start Render time.

You may need to coordinate with your designers and your developers to implement these optimizations. However, the results are well worth it. No one likes looking at an empty white screen!

Other factors

Is my server fast enough?

We know that the time-to-first-byte from your server is a factor in search engine rankings. Luckily a waterfall tells you this metric. Simply look at the first row of the diagram. This should show you timing information for how the browser downloads the base HTML page. Look at the TTFB measurement. If it is longer than about 500 ms, your server may be underpowered or unoptimized. Talk with your hosting provider to improve your server capabilities. Below is an example of a waterfall diagram where the server was taking nearly 10 seconds to respond! That's a slow server!

[Image: waterfall diagram where the server takes nearly 10 seconds to respond]

Do I need a CDN?

Latency can be a big source of delay for a website, and it has to do with the geographic distance between your server and your website visitors. As we have discussed, latency is driven by distance and the speed of light; a high speed internet connection alone doesn't fix the problem. Content Delivery Networks (CDNs) speed up your website by storing copies of your static assets (images, CSS, JavaScript files, etc) all over the world, reducing the latency for your visitors.

Waterfalls reveal how latency is affecting the speed of your site, and whether you should use a CDN. We can do this by looking at the TTFB measurements for requests the browser makes to your server for static assets. The TTFB is composed of the time it takes for your request to travel to the server, for the server to process it, and for the first byte of the response to come back. For static assets, the server doesn't have to do any real processing of the request, so the TTFB measurement essentially tells us how long a round trip takes between a visitor and your server. If you are seeing high round-trip numbers, it means your content is too far away from your visitors.

To determine if you need a CDN, you first need to know the location of your server. Next, use WebPageTest and run a test from a location that is far away from your server. If your site is hosted in the US, run a test from Asia or Europe. Now, find the rows for requests for several images or CSS files on your server and look at the TTFB measurement. If you are getting a TTFB for static content that is more than 150 ms, you should consider a CDN. For commercial sites, you might want to look at the enterprise grade capabilities of Akamai. For a cheaper option, check out CloudFlare which offers free CDN services.
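If you'd rather script a quick spot check than read the numbers off a waterfall, here is a rough sketch using Python's `requests` library; the asset URL is a placeholder, and you'd want to run it from a machine in a region far from your origin. Because this folds DNS, connection, and TLS setup into a single number, treat it as an upper bound on the static-asset TTFB rather than an exact match for the waterfall's green segment.

```python
import time
import requests

url = "https://www.example.com/static/logo.png"  # placeholder static asset URL

start = time.perf_counter()
with requests.get(url, stream=True) as response:
    response.raw.read(1)  # wait only for the first byte of the body
    ttfb_ms = (time.perf_counter() - start) * 1000

verdict = "consider a CDN" if ttfb_ms > 150 else "probably fine without one"
print(f"Approximate TTFB: {ttfb_ms:.0f} ms ({verdict})")
```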

Summary

Believe it or not, we have only scratched the surface of the performance insights you can learn from a waterfall chart. However this should be more than enough to begin to understand how to read a chart and use it to detect the most basic and impactful performance issues that are slowing down your site.

You can reduce the width of the chart by optimizing your content and ensuring that each resource is received as quickly as possible. You can reduce the height of the waterfall by removing unneeded requests. Finally, you can speed up how quickly your users first see your page by optimizing all the content before the Start Render line.

If you're still not sure where to start, check out Zoompf's Free Performance Report to analyze your site and prioritize those fixes that will make the biggest impact on improving your page speed and waterfall chart metrics.





from Moz Blog http://ift.tt/1U5L024
via IFTTT

Tuesday, August 25, 2015

The True Cost of Local Business Directories

Posted by kristihines

If you're a local business owner, you've likely heard that you should submit your business to local business directories like Yelp, Merchant Circle, Yellow Pages, and similar networks in order to help boost your local search visibility on Google. It sounds easy at first: you think you’ll just go to a few websites, enter your contact information, and you’ll be set. Because all you really want to do is get some links to your website from these profiles.

But the truth is, there are a lot of local business listings to obtain if you go the DIY route. There are local business directories that offer free listings, paid listings, and package listings on multiple networks. There are also local data providers that aren’t necessarily directories themselves, but they push your information out to other directories.

In this post, we’re going to look at the real cost of getting local business listings for your local business.

Finding the right directories

Since one of a business owner’s most important commodities is time, it’s important to note the time investment that you must make to individually create and manage local business listings. Here's what you'll need to do to find the right directories for your business.

Directories ranking for your business

You can start by looking your business up on Google by name to see where you already have listings that need to be claimed.

These are the first directories you'll want to tackle, as they're the ones that people are viewing when they search for your business by name. This is especially important for local businesses that don't have their own website or social media presence. Updating these directories will help customers get to know your business, your hours, and what you have to offer.

These are going to be the easiest, in many cases, because the listing is already there. Most local business directories offer a link to help you start the process.

Depending on the directory, you'll need to look in several places to find the link to claim your business. Sometimes it can be found near the top of your listing. Other times, it may be hidden in the directory's header or footer.

It's important to claim your listings so you can add your website link, business hours, and photos to help your listing stand out from others. Claiming your listing will also help make sure you're notified about any reviews or public updates your business receives.

Directories ranking for your competitors

Once you've claimed the listings you already have, you'll want to start finding new ones. Creating listings on local business directories where your competitors have listings will help you get in front of your target audience. If you notice your competitors have detailed profiles on some networks, but not others, that should clue you in to which ones are going to be most effective.

To find these directories, search for your competitors by name on Google. You should be able to spot which ones you haven't claimed for yourself already and go from there.

Directories ranking for your keywords

What keywords and phrases does your business target in search? Do a quick search for them to see which local directories rank in the top ten search results. Most keyword searches related to local businesses will lead you to your website, your competitors' websites, specific business listings in local business directories, and categories on local business directories.

You should make sure you have a listing on the local business directories that rank for your competitors, as well as the ones whose categories rank. For the latter, you may even want to consider doing paid advertising or sponsorship to make sure your business is first for the category, since that page is likely receiving traffic from your target customers.

Directories ranking in mobile search

After you've looked for the directories that rank for your business name, your competitors, and your target keywords, you'll want to do the same research on mobile search. This will help you find additional directories that are favorites for mobile users. Considering the studies showing that 50% of mobile searchers end up visiting a local store to make a purchase, getting your business in local business directories that rank well in mobile is key to business success.

Claiming and creating local business directory listings

If you think finding the right local business directories is time-consuming, wait until you start to claim and create them. Some directories make it simple and straightforward. Others have a much more complicated process.

Getting your business listing verified is usually the toughest part. Some networks will not require any verification past confirming your email address. Some will have an automated call or texting system for you to use to confirm your phone number. Some will have you speak to a live representative in order to confirm your listing and try to sell you paid upgrades and advertising.

The lengthiest ones from start to finish are those that require you to verify your business by postal mail. It means that you will have to wait a couple of days (or weeks, depending on the directory) to complete your listing.

In the event that you're trying to claim a listing for your business that needs the address or phone number updated, you'll need to invest additional time to contact the directory's support team directly to get your information updated. Otherwise, you won't be able to claim your business by phone or mail.

The cost of local business listings

Now that you know the time investment of finding, claiming, and creating local business directory listings, it's time to look at the actual cost. While some of the top local business directories are free, others require payment if you want more than the basic listing, such as the addition of your website link, a listing in more than one category, removal of ads from your listing, and the ability to add media.

Pricing for local directory listings can range from $29 to $499 per year. You will find some directories that sell listings for their site alone, while others are grouped under plans like this one where you can choose to pay for one directory or a group of directories annually.

With the above service, you're looking at a minimum of $199 per year for one network, or $999 per year for dozens of networks. While it might look like a good deal, in reality, you are paying for listings that you could have gotten for free (Yahoo, Facebook, Google+, etc.) in addition to ones that have a paid entry.

So how can you decide what listings are worth paying for? If they are not listings that appear on the first page of search results for your business name, your competitors, or your keywords, you can do some additional research in the following ways:

Check the directory's search traffic

You can use SEMrush for free (10 queries prior to registering + 10 after entering your email address) to see the estimated search traffic for any given local business directory. For example, you can check Yelp's traffic by searching for their domain name:

Then, compare it with other local business directories you might not be familiar with, like this one:

This can help you decide whether or not it's worth upgrading to an account at $108 per month to get a website link and featured placement.

Alternatively, you can use sites like Alexa to estimate traffic through seeing which site has a lower Alexa ranking. For example, you can check Yelp's Alexa ranking:

Then compare it with other local business directories, like this one:

Instantly, you can see that between the two sites, Yelp is more popular in the US, while the other directory is more popular in India. You can scroll down further through the profile to see what countries a local business directory gets the majority of their traffic from to determine if they are getting traffic from your target customer base.

If you have a business in the US, and the directory you're researching doesn't get a lot of US traffic, it won't be worth getting a listing there, and certainly not worth paying for one.

Determine the directory's reputation

The most revealing search you can do for any local business directory you are considering paying for is the directory's name plus the word "scam." If the directory is a scam, you'll find out pretty quickly. Even if it's not a scam, you will find out what businesses and customers alike find unappealing about the directory's service.

The traffic a directory receives may trump a bad reputation, however. If you look at Yelp's Better Business Bureau page, you will find over 1,700 complaints. It goes to show that while some businesses have a great experience on Yelp, others do not.

If you find a directory with little traffic and bad reviews or complaints, it's best to steer clear, regardless of whether they want payment for your listing.

Look for activity in your category

Are other businesses in your category getting reviews, tips, or other engagement? If so, that means there are people actually using the website. If not, it may not be worth the additional cost.

The "in your category" part is particularly important. Photography businesses may be getting a ton of traffic, but if you have an air conditioning repair service, and none of the businesses in that category have reviews or engagement, then your business likely won't, either.

This also goes for local business directories that allow you to create a listing for free, but make you pay for any leads that you get. If businesses in your category are not receiving reviews or engagement, then the leads you receive may not pan out into actual paying customers.

See where your listing would be placed

Does paying for a listing on a specific local business directory guarantee you first-page placement? In some cases, that will make the listing worth it—if the site is getting enough traffic from your target customers.

This is especially important for local business directories whose category pages rank on the first page for your target keyword. For these directories, it's essential that your business gets placed in the right category and at the top of the first page, if possible.

Think of that category page as search results—the further down the page you are, the less likely people are to click through to your business. If you're on the second or third page, those chances go down even further.

In conclusion

Local business directories can be valuable assets for your local business marketing. Be sure to do your due diligence in researching the right directories for your business. You can also simplify the process and see what Moz Local has to offer. Once your listings are live, be sure to monitor them for new reviews, tips, and other engagement. Also be sure to monitor your analytics to determine which local business directory is giving you the most benefit!





from Moz Blog http://ift.tt/1EfsMUu
via IFTTT

Monday, August 24, 2015

User Behaviour Data as a Ranking Signal

Posted by Dan-Petrovic

Question: How does a search engine interpret user experience?
Answer: They collect and process user behaviour data.

Types of user behaviour data used by search engines include click-through rate (CTR), navigational paths, time, duration, frequency, and type of access.

Click-through rate

Click-through rate analysis is one of the most prominent search quality feedback signals in both commercial and academic information retrieval papers. Both Google and Microsoft have made considerable efforts towards development of mechanisms which help them understand when a page receives higher or lower CTR than expected.

Position bias

CTR values are heavily influenced by position because users are more likely to click on top results. This is called “position bias,” and it’s what makes it difficult to accept that CTR can be a useful ranking signal. The good news is that search engines have numerous ways of dealing with the bias problem. In 2008, Microsoft found that the "cascade model" worked best in bias analysis. Despite slight degradation in confidence for lower-ranking results, it performed really well without any need for training data and it operated parameter-free. The significance of their model is in the fact that it offered a cheap and effective way to handle position bias, making CTR more practical to work with.
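To make the cascade idea a little more concrete, here is a toy Python estimator. It is not the formulation from the Microsoft paper: it assumes single-click sessions, an invented log format, and a crude treatment of abandoned sessions, but it shows how "examined before clicked" bookkeeping lets you back out a position-debiased attractiveness estimate from raw click logs.

```python
from collections import defaultdict

def estimate_cascade_relevance(sessions):
    """sessions: list of (ranked_docs, clicked_index) pairs for one query,
    where clicked_index is the position of the single click, or None."""
    examined = defaultdict(int)
    clicked = defaultdict(int)
    for ranked_docs, clicked_index in sessions:
        # Cascade assumption: the user scans top-down and stops at the first
        # click, so everything at or above the click was examined. Abandoned
        # sessions are (crudely) treated as a full scan of the list.
        last = clicked_index if clicked_index is not None else len(ranked_docs) - 1
        for position in range(last + 1):
            examined[ranked_docs[position]] += 1
        if clicked_index is not None:
            clicked[ranked_docs[clicked_index]] += 1
    return {doc: clicked[doc] / examined[doc] for doc in examined}

# Toy click log for one query, three sessions.
sessions = [
    (["a", "b", "c"], 1),     # skipped "a", clicked "b"
    (["a", "b", "c"], 0),     # clicked "a" immediately
    (["a", "b", "c"], None),  # abandoned
]
print(estimate_cascade_relevance(sessions))  # "b" scores above "a" despite ranking below it
```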

Result attractiveness

Good CTR is a relative term. A 30% CTR for a top result in Google wouldn't be a surprise, unless it’s a branded term; then it would be a terrible CTR. Likewise, the same value for a competitive term would be extraordinarily high if nested between “high-gravity” search features (e.g. an answer box, knowledge panel, or local pack).

I've spent five years closely observing CTR data in the context of its dependence on position, snippet quality and special search features. During this time I've come to appreciate the value of knowing when deviation from the norm occurs. In addition to ranking position, consider other elements which may impact the user’s choice to click on a result:

  • Snippet quality
  • Perceived relevance
  • Presence of special search result features
  • Brand recognition
  • Personalisation

Practical application

Search result attractiveness is not an abstract academic problem. When done right, CTR studies can provide a lot of value to a modern marketer. Here's a case study where I take advantage of CTR average deviations in my phrase research and page targeting process.

Google's title bolding study

Google is also aware of additional factors that contribute to result attractiveness bias, and they've been busy working on non-position click bias solutions.

[Image: Google CTR study]

They show strong interest in finding ways to improve the effectiveness of CTR-based ranking signals. In addition to solving position bias, Google's engineers have gone one step further by investigating SERP snippet title bolding as a result attractiveness bias factor. I find it interesting that Google recently removed bolding in titles for live search results, likely to eliminate the bias altogether. Their paper highlights the value in further research focused on the bias impact of specific SERP snippet features.

URL access, duration, frequency, and trajectory

Logged click data is not the only useful user behaviour signal. Session duration, for instance, is a high-value metric if measured correctly: a user could navigate to a page and leave it idle while they go out for lunch. This is where active user monitoring systems become useful.

There are many assisting user-behaviour signals which, while not indexable, aid measurement of engagement time on pages. This includes various types of interaction via keyboard, mouse, touchpad, tablet, pen, touch screen, and other interfaces.

Google's John Mueller recently explained that user engagement is not a direct ranking signal, and I believe this. Kind of. John said that this type of data (time on page, filling out forms, clicking, etc) doesn't do anything automatically.

At this point in time, we're likely looking at a sandbox model rather than a live listening and reaction system when it comes to the direct influence of user behaviour on a specific page. That said, Google does acknowledge limitations of quality-rater and sandbox-based result evaluation. They’ve recently proposed an active learning system, which would evaluate results on the fly with a more representative sample of their user base.

"Another direction for future work is to incorporate active learning in order to gather a more representative sample of user preferences."

Google's result attractiveness paper was published in 2010. In early 2011, Google released the Panda algorithm. Later that year, Panda went into flux, indicating an implementation of one form of an active learning system. We can expect more of Google's systems to run on their own in the future.

The monitoring engine

Google has designed and patented a system in charge of collecting and processing of user behaviour data. They call it "the monitoring engine", but I don't like that name—it's too long. Maybe they should call it, oh, I don't know... Chrome?

The actual patent describing Google's monitoring engine is a truly dreadful read, so if you're in a rush, you can read my highlights instead.

MetricsService

Let's step away from patents for a minute and observe what's already out there. Chrome's MetricsService is a system in charge of the acquisition and transmission of user log data. Transmitted histograms contain very detailed records of user activities, including opened/closed tabs, fetched URLs, maximized windows, et cetera.

Enter this in Chrome: chrome://histograms/
(Click here for technical details)

Here are a few external links with detailed information about Chrome's MetricsService, reasons and types of data collection, and a full list of histograms.

Use in rankings

Google can process duration data in an eigenvector-like fashion using nodes (URLs), edges (links), and labels (user behaviour data). Page engagement signals, such as session duration value, are used to calculate weights of nodes. Here are the two modes of a simplified graph comprised of three nodes (A, B, C) with time labels attached to each:

[Image: undirected vs. directed graph of three nodes with time labels]

In an undirected graph model (undirected edges), the weight of node A is directly driven by the label value (a 120-second active session). In a directed graph (directed edges), node A links to nodes B and C. By doing so, it receives a time-label credit from the nodes it links to.

In plain English, if you link to pages that people spend a lot of time on, Google will add a portion of that “time credit” towards the linking page. This is why linking out to useful, engaging content is a good idea. A “client behavior score” reflects the relative frequency and type of interactions by the user.

What's interesting is that the implicit quality signals of deeper pages also flow up to higher-level pages.
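Here is a toy sketch of that "time credit" idea in Python. The damping share, the numbers, and the structure are all invented for illustration; this is not Google's actual formula, just a way to see how dwell time on linked-to pages could lift the score of the page doing the linking.

```python
# Directed graph: each page's engagement score is its own active-session time
# plus a damped share of the dwell time on the pages it links to.
dwell_seconds = {"A": 120, "B": 40, "C": 10}
outlinks = {"A": ["B", "C"], "B": [], "C": []}

def engagement_score(page, damping=0.3):
    credit_from_targets = sum(dwell_seconds[target] for target in outlinks[page])
    return dwell_seconds[page] + damping * credit_from_targets

for page in dwell_seconds:
    # A gets extra credit (120 + 0.3 * 50 = 135) for linking to engaging pages.
    print(page, engagement_score(page))
```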

Reasonable surfer model

“Reasonable surfer” is the random surfer's successor. The PageRank dampening factor reflects the original assumption that after each followed link, our imaginary surfer is less likely to click on another random link, resulting in an eventual abandonment of the surfing path. Most search engines today work with a more refined model encompassing a wider variety of influencing factors.

For example, the likelihood of a link being clicked on within a page may depend on:

  • Position of the link on the page (top, bottom, above/below fold)
  • Location of the link on the page (menu, sidebar, footer, content area, list)
  • Size of anchor text
  • Font size, style, and colour
  • Topical cluster match
  • URL characteristics (external/internal, hyphenation, TLD, length, redirect, host)
  • Image link, size, and aspect ratio
  • Number of links on page
  • Words around the link, in title, or headings
  • Commerciality of anchor text

In addition to perceived importance from on-page signals, a search engine may judge link popularity by observing common user choices. A link on which users click more within a page can carry more weight than one with fewer clicks. Google in particular mentions user click behaviour monitoring in the context of balancing out traditional, more easily manipulated signals (e.g. links).

In the following illustration, we can see two outbound links on the same document (A) pointing to two other documents: (B) and (C). On the left is what would happen in the traditional "random surfer" model, while on the right we have a link that sits in a more prominent location and tends to be the preferred choice of many of the page's visitors.

[Image: random surfer vs. reasonable surfer link weighting between documents]

This method can be used on a single document or in a wider scope, and is also applicable to both single users (personalisation) and groups (classes) of users determined by language, browsing history, or interests.
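A minimal sketch of how that weighting could work, with invented click weights (an illustration of the general idea, not Google's algorithm): a PageRank-style iteration where a page passes its score to its outbound links in proportion to how likely each link is to be clicked, instead of splitting it evenly.

```python
import numpy as np

pages = ["A", "B", "C"]
# click_weight[i][j]: relative likelihood that a visitor on page i clicks the link to page j
click_weight = np.array([
    [0.0, 3.0, 1.0],   # A links to B (prominent, often clicked) and C (footer link)
    [1.0, 0.0, 0.0],   # B links back to A
    [0.0, 1.0, 0.0],   # C links to B
])

# Normalize each row into transition probabilities for the "reasonable surfer".
transition = click_weight / click_weight.sum(axis=1, keepdims=True)

damping = 0.85
rank = np.full(len(pages), 1 / len(pages))
for _ in range(50):  # power iteration
    rank = (1 - damping) / len(pages) + damping * rank @ transition

print(dict(zip(pages, rank.round(3))))  # B ends up ahead of C thanks to its prominent link
```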

Pogo-sticking

One of the most telling signals for a search engine is when users perform a query and quickly bounce back to the search results after visiting a page that didn't satisfy their needs. The effect was described and discussed a long time ago, and numerous experiments show it in action. That said, many question the validity of SEO experiments, largely due to their rather non-scientific execution and general data noise. So, it's nice to know that the effect has been on Google's radar.
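As a thought experiment, here is a toy Python sketch of how pogo-sticking could be flagged in a session log. The event format and the 30-second dwell threshold are assumptions made up for illustration.

```python
from datetime import datetime, timedelta

# One user's session: (timestamp, event_type, detail)
events = [
    (datetime(2015, 8, 24, 10, 0, 0), "serp", "are skeleton keys real"),
    (datetime(2015, 8, 24, 10, 0, 5), "click", "http://example.com/keys"),
    (datetime(2015, 8, 24, 10, 0, 15), "serp", "are skeleton keys real"),  # back after 10s
]

DWELL_THRESHOLD = timedelta(seconds=30)

pogo_sticks = []
for i in range(len(events) - 1):
    ts, kind, detail = events[i]
    next_ts, next_kind, _ = events[i + 1]
    # A click followed by a quick return to the results page looks like dissatisfaction.
    if kind == "click" and next_kind == "serp" and next_ts - ts < DWELL_THRESHOLD:
        pogo_sticks.append((detail, (next_ts - ts).seconds))

print(pogo_sticks)  # [('http://example.com/keys', 10)]
```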

Address bar

URL data can include whether a user types a URL into an address field of a web browser, or whether a user accesses a URL by clicking on a hyperlink to another web page or a hyperlink in an email message. So, for example, if users type in the exact URL and hit enter to reach a page, that represents a stronger signal than when visiting the same page after a browser autofill/suggest or clicking on a link.

  • Typing in full URL (full significance)
  • Typing in partial URL with auto-fill completion (medium significance)
  • Following a hyperlink (low significance)

Login pages

Google monitors users and maps their journey as they browse the web. They know when users log into something (e.g. social network) and they know when they end the session by logging out. If a common journey path always starts with a login page, Google will add more significance to the login page in their rankings.

"A login page can start a user on a trajectory, or sequence, of associated pages and may be more significant to the user than the associated pages and, therefore, merit a higher ranking score."

I find this very interesting. In fact, as I write this, we're setting up a login experiment to see if repeated client access and page engagement impacts the search visibility of the page in any way. Readers of this article can access the login test page with username: moz and password: moz123.

The idea behind my experiment is to have all the signals mentioned in this article ticked off:

  • URL familiarity, direct entry for maximum credit
  • Triggering frequent and repeated access by our clients
  • Expected session length of 30-120 seconds
  • Session length credit up-flow to home page
  • Interactive elements add to engagement (export, chart interaction, filters)

Combining implicit and traditional ranking signals

Google treats various user-generated data with different degrees of importance. Combining implicit signals such as day of the week, active session duration, visit frequency, or type of article with traditional ranking methods improves reliability of search results.

[Image: page quality metrics combining implicit and traditional signals]

Impact on SEO

The fact that behaviour signals are on Google's radar stresses the rising importance of user experience optimisation. Our job is to incentivise users to click, engage, convert, and keep coming back. This complex task requires a multidisciplinary mix, including technical, strategic, and creative skills. We're being evaluated by both users and search engines, and everything users do on our pages counts. The evaluation starts at the SERP level and follows users during the whole journey throughout your site.

"Good user experience"

Search visibility will never depend on subjective user experience, but on search engines' interpretation of it. Our most recent research into how people read online shows that users don't react well when facing large quantities of text (this article included) and will often skim content and leave if they can't find answers quickly enough. This type of behaviour may send the wrong signals about your page.

My solution was to present all users with a skeletal content form, with supplementary content available on demand through the use of hypotext. As a result, our test page (~5,000 words) increased the average time per user from 6 to 12 minutes, and the bounce rate dropped from 90% to 60%. The very article where we published our findings shows clicks, hovers, and scroll-depth activity at double or triple the values of the rest of our content. To me, this was convincing enough.

[Image: click, hover, and scroll activity on the test page]

Google's algorithms disagreed, however, devaluing the content not visible on the page by default. Queries contained within unexpanded parts of the page aren't bolded in SERP snippets and currently don't rank as well as pages which copied that same content but made it visible. This is ultimately something Google has to work on, but in the meantime we have to be mindful of this perception gap and make calculated decisions in cases where good user experience doesn't match Google's best practices.






from Moz Blog http://ift.tt/1Eeqoxv
via IFTTT

Friday, August 21, 2015

How Much Keyword Repetition is Optimal - Whiteboard Friday

Posted by randfish

With all the advancements search engines have made, a lot of folks in the SEO world are circling back to a fundamental question: If I'm targeting a particular keyword, where and how often should I use that in the front and back ends of my page? In today's Whiteboard Friday, Rand puts his recommendations into the context of today's SERPs.

[Image: How Much Keyword Use & Repetition is Optimal whiteboard]

For reference, here's a still of this week's whiteboard. Click on it to open a high resolution image in a new tab!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're going to chat about keyword use, keyword repetition and overuse.

I know this might seem like a basic topic, but actually it's advanced a little bit in the last few years, and I still get a surprising amount of email and see a surprising number of questions around things like, "How many times should I use my keyword that I'm targeting to rank for in my URL string or my H1 tag or my title? Or how many different pages should I have that target this keyword?" So let's try and clear a little bit of this up.

Let's say I've done a search here, "Are skeleton keys real?" I see results ranking. This is actually kind of a nice result, because what you see are not a lot of pages that say, "Are skeleton keys real?" I just did this search, and the top 20 results, there's actually not even one where the title of the piece or the headline of the document is, "Are skeleton keys real?"

You see lots of documents ranking in Google that don't perfectly match this keyword set. I think that's a good example of how far Google has come in trying to understand the intent behind queries, how far they've come in terms of connecting topics and keywords, how far they've come on topic modeling algorithms.

Keyword repetition considerations

So really there are three primary considerations that we do still need to worry about as SEOs.

1) Search result snippet

The first one is the search result snippet itself. I've taken Etsy's snippet here, which is not fantastic. Then when you get to the page, that product is actually gone, and Etsy is suggesting some other ones, which aren't skeleton keys. Kind of frustrating because they do have skeleton keys if you search on the site. In any case, I'm sure Stephanie and the SEO team over at Etsy will take care of that ASAP.

The primary considerations in your search result snippet are: Is the result informative and useful? I want to be able to look at this and think to myself, "Aha! That tells me something that I didn't already know, or it starts to tell me something about whether or not skeleton keys are real or not and where they come from and history and what they are." Is it useful? Can I apply that information? Is that going to help me accomplish whatever I'm trying to accomplish? In this case, a very information-based query, so the only accomplishment is the knowledge itself.

Is it going to draw the eye and the click? This is a great reason why rich snippets are so valuable and why anything you can do to bulk up or add to your snippet, get more vertical space, make your listing stand out can be helpful.

Then is it perceived as relevant and trustworthy by searchers? So a lot of times, that's going to be a brand consideration set. They're going to be looking at the domain name. They might be looking in the title for a brand there, if it is there, if it's not there, those kinds of things.

2) Keyword analysis algorithms

This is kind of the classic thing where I think a lot of early SEOs get lost. Maybe even some folks who have been around for a long time remember back in the day when Google and Yahoo and old MSN search, before Bing, would actually look at keyword counts and repetition numbers. They probably never actually used density, but they probably did use simplistic algorithms like TF-IDF, term frequency times inverse document frequency, looking for those less frequently used terms across the web and seeing if you have a higher concentration of them in your document than other people do.
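To make the TF-IDF idea concrete, here is a tiny Python sketch using scikit-learn with invented documents. It illustrates the general technique being described, not whatever the engines actually ran.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "are skeleton keys real and where do they come from",
    "buy real vintage antique skeleton keys online",
    "history of locks and keys from antiquity to today",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)

# Show the highest-weighted terms for the first document: words that are
# frequent in it but rare across the rest of the collection score highest.
terms = vectorizer.get_feature_names_out()
weights = tfidf[0].toarray().ravel()
print(sorted(zip(terms, weights), key=lambda pair: -pair[1])[:5])
```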

Keyword Matching

Well, now there are probably still some elements of keyword matching. Google is likely to give you a little bit of a boost if you say, "Are skeleton keys real?" and everybody else says, "Real vintage antique skeleton keys," or something like that.

I'm not suggesting you avoid using the exact keyword phrase. If you know that's what your article is about, that's the piece of content that you have and those are the searchers you want to target, yeah, go ahead. Make the title of the piece, "Are Skeleton Keys Real? We Dig Into History to Find Out." That's a compelling title. I would click on that if that were my search query. So there are some keyword matching elements.

Topic Modeling

There's probably some topic modeling, well, almost certainly some topic modeling stuff where they're looking at, "What are other terms and phrases that are frequently used when we see skeleton keys used?" If we do see those terms and phrases on other people's pages, but we don't see them on yours, we might not consider your document to be relevant to the keyword. Maybe you're talking about skeleton keys as a new programming language. Maybe you're talking about the skeleton key mobile app. I don't know if that's a real thing, but it could be. Maybe Skeleton Keys is the name of your dog.

They don't know. So they look at these topic modeling sorts of algorithms to try and figure out, "Oh, okay, look, they're talking about locks. They're talking about antiques. They're talking about history. I think we can be relatively assured that, yes, this document is on the topic of skeleton keys." If you don't use those words and phrases, the topic modeling algorithm is going to miss you.
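If you want a feel for what that kind of check looks like, here's a toy sketch. Real topic models are built from huge co-occurrence corpora or embeddings; the related-terms list below is hand-made and purely hypothetical:

```python
# A toy "topic coverage" check -- a stand-in for real topic modeling, which is
# learned from large co-occurrence data, not a hand-made list.
RELATED_TERMS = {"lock", "locks", "antique", "vintage", "ward", "wards",
                 "master key", "keyhole", "history"}  # hypothetical list for "skeleton keys"

def topic_coverage(page_text: str) -> float:
    """Fraction of the related-term list that appears somewhere in the page text."""
    text = page_text.lower()
    found = {term for term in RELATED_TERMS if term in text}
    return len(found) / len(RELATED_TERMS)

page = "Skeleton keys were simple keys for warded locks; antique examples survive today."
print(f"coverage: {topic_coverage(page):.0%}")  # low coverage suggests the page may read as off-topic
```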

Intent analysis

Google is looking very hard for user intent. What do people want from this query? They have a huge store of knowledge around past queries that people have performed -- trillions and trillions of queries over the decade and a half that Google has been around -- that they can look at and say, "Aha, was the intent of this keyword informational, transactional, or navigational? Can we figure out what the intent of this particular keyword search is and then serve up results that hit that intent right on the nose?"

Look, when you get analyzed for this, if you are not serving the same intent -- say you're selling skeleton keys -- it could be that you actually won't rank as well for "Are skeleton keys real?" as someone who's providing a purely informational document. Conversely, if someone searches for antique skeleton keys, your document about "Are Antique Skeleton Keys Real?" might not rank as well as someone who actually sells them, because Google is trying to serve that intent, and they do a pretty good job of it.
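Google's actual intent models aren't public, but a crude, hypothetical illustration of the informational/transactional/navigational split might look something like this (the word lists are invented for the example):

```python
# A crude, hypothetical intent classifier -- nothing like Google's real models,
# which are trained on enormous query and click logs. It only illustrates the
# informational / transactional / navigational distinction.
TRANSACTIONAL = {"buy", "price", "cheap", "for sale", "shop"}
NAVIGATIONAL = {"etsy", "ebay", "amazon", "login"}          # known brand / site names
INFORMATIONAL = {"are ", "what", "how", "why", "history"}

def guess_intent(query: str) -> str:
    q = query.lower()
    if any(word in q for word in TRANSACTIONAL):
        return "transactional"
    if any(word in q for word in NAVIGATIONAL):
        return "navigational"
    if any(word in q for word in INFORMATIONAL):
        return "informational"
    return "unknown"

print(guess_intent("are skeleton keys real"))      # informational
print(guess_intent("buy antique skeleton keys"))   # transactional
```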

QDD/QDF

Then there might be some other algorithmic elements in there, like QDD, query deserves diversity. Maybe Google sees different intents for a search, and so they try and provide different results, and you might rank because of that, or you might not rank because of that. Or things like QDF, query deserves freshness, where they say, "We need a fresh result here." People are looking for recent documents around skeleton keys because there was a big item in the news about a break-in using skeleton keys. So we know we should put the news box in there, and maybe we should have a document that's much fresher, those kinds of things.

3) Searcher opinions and engagement

This matters a ton, because searchers may not engage with your piece at all -- they look at it and go, "I don't think I should click on that." It doesn't even take as long as I just took to say those words. You just make that split-second judgment as you're scanning down a page of search results about whether something is relevant to your needs or not.

Searchers are constantly asking themselves when they look at a set of results, "Should I click back? Should I re-engage? Should I share and amplify the content once I reach it? Should I remember this brand or this page or bookmark it?" All of those kinds of things go into the search engine's consideration set as well. They make their way in there through user and usage data. We know that Google can monitor and measure certain things -- certainly when you click back to the search results. We know that through Chrome, through Android, through the Toolbar, and all these other things, they can look at activity that's happening on a website or through a search journey.

We know they can see sharing and amplification data absolutely if it's links. They can probably look at other kinds of amplification too. They definitely can look at people who remember a brand and search for a brand. So if someone searches for "Etsy skeleton keys," that might be a strong signal to Google that they should rank Etsy's page when people search for just skeleton keys. All of those kinds of things are making it in here.

So we have to ask ourselves, "Does this match the need that I have? Are we creating pages that searchers feel match their need?" They're asking, "Do I recognize or trust this brand?" Or, if they don't know the brand at all, when they look at the URL and the domain name -- Bing did a big study on this a couple of years ago -- they ask themselves, "Does that sound like a sketchy domain name?" For example, I did see antique-skeleton-keys.com ranking for this query. They're still on page one. It's not actually that terrible a page, although it has some kind of spammy AdSense all mixed in there. But it has some information.

That kind of stuff, when searchers see it, they are less likely to click it, because they've had bad experiences with multi-hyphenated, keyword-matchy domains. Antique-skeleton-keys.com, no offense, but you all aren't doing the world an entirely big favor right now.

Then they're going to ask, "Does the snippet stand out and grab my attention?" If it does, more likely to get a higher click-through rate, more likely to get that engagement.

So... how many times should I repeat my keywords?

So these three big considerations lead us to some quick rules of thumb. I'm going to say that for 95% of pages out there -- not every single one, there are always going to be a few exceptions -- but for 95% of the pages out there, you should do at least these things. I'll put nice little boxes here to help out.

Yes, I should have the keyword I'm targeting -- if I know I'm going after this keyword and this search intent, that's what the page is about, that's the primary keyword target -- at least once in the title element of the page.

Likewise, you should do the same thing in the headline. This is not because the H1 tag is all that important. It doesn't even matter all that much whether it's an H1 or H2 or H3, or if your CSS is a little messy; that's okay. What matters is that the big letters at the top of the page that make up the headline match, so that when a searcher lands after searching "Are skeleton keys real?" and clicking your "Are Skeleton Keys Real?" article, they again see, right at the top of the page, "Are Skeleton Keys Real?"

So they know they clicked on the right result and they have that consistency. People really like that. That's very important from a psychological perspective, and you need that so that people don't click back and choose a different result because they're like, "Wait a minute, this article is not the one that I thought I clicked on. This is something else."

I'm going to say two to three times in the content. That is a very rough rule. Generally speaking, unless you have an extremely visual page or an interactive page with almost no content -- which might fall into that 5% -- you should have the keyword at least a couple of times in the content of the page.

Then one time in the meta description. Meta description is important because of the snippet aspect of it. Not that critical from like, "Oh, that will boost my ranking." No, but it might boost your click-through rate. It might make you appear more relevant to the searcher as they're searching through, and it will help target that.

Again, in that 5% there might be times when a snippet is actually better without the keyword -- especially if it's a long keyword phrase and you only have a little bit of room to explain things.

So 95% of pages should do at least this.
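If you want to sanity-check those placements on your own pages, a rough audit sketch might look like this -- the function name is made up, and it assumes you already have the page HTML in hand and BeautifulSoup installed:

```python
# A quick on-page audit sketch for the placements above: title, headline,
# page copy, and meta description. Rough substring checks, not a scoring model.
from bs4 import BeautifulSoup

def audit_primary_placements(html: str, keyword: str) -> dict:
    soup = BeautifulSoup(html, "html.parser")
    kw = keyword.lower()
    title = (soup.title.string or "") if soup.title else ""
    h1 = soup.find("h1")
    meta = soup.find("meta", attrs={"name": "description"})
    visible_text = soup.get_text(" ").lower()
    return {
        "in_title": kw in title.lower(),
        "in_headline": bool(h1) and kw in h1.get_text().lower(),
        "text_count": visible_text.count(kw),   # rule of thumb: roughly two to three times
        "in_meta_description": bool(meta) and kw in (meta.get("content") or "").lower(),
    }

html = """<html><head><title>Are Skeleton Keys Real?</title>
<meta name="description" content="Are skeleton keys real? We dig into history to find out."></head>
<body><h1>Are Skeleton Keys Real?</h1><p>Skeleton keys open simple warded locks...</p></body></html>"""
print(audit_primary_placements(html, "skeleton keys"))
```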

Secondary considerations

Then many pages should also consider doing a little bit of image optimization with things like a keyword in the image alt attribute, assuming you have an image on the page. For a keyword like this, you would definitely want to have some pictures of what skeleton keys have looked like, do look like today, that kind of thing.

The image file name itself too, which is important for image SEO. Images still get a good amount of search traffic. Even if you don't get a ton of click-throughs, you might get people using your image and then citing it, and that could lead to links. So we're talking about a long tail here, but a valuable long tail.

Once in the URL. Generally this is useful, but certainly not critical. There could be plenty of reasons why you have a perfectly reasonable URL that does not include the keyword. A homepage is a great example. You don't need to change your default homepage to include your keyword string so that when someone requests, say, Etsy.com, it redirects to a vintage-antique-skeleton-keys URL. No, don't do that.

One or more times in the subheaders of the page. If you have multiple blocks of subheaders that are describing different attributes of a particular piece of content, well, go ahead, use your keywords in there as they might apply.
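And a companion sketch for these secondary items -- image alt text, image file name, URL, and subheaders -- with the same caveats as above (made-up helper name, rough substring checks, not a scoring model):

```python
# Secondary-placement checks: image alt text, image file names, the URL path,
# and subheaders. Purely illustrative.
from bs4 import BeautifulSoup
from urllib.parse import urlparse

def audit_secondary_placements(html: str, url: str, keyword: str) -> dict:
    soup = BeautifulSoup(html, "html.parser")
    kw = keyword.lower()
    kw_slug = kw.replace(" ", "-")
    path = urlparse(url).path.lower()
    images = soup.find_all("img")
    subheads = soup.find_all(["h2", "h3"])
    return {
        "in_image_alt": any(kw in (img.get("alt") or "").lower() for img in images),
        "in_image_filename": any(kw_slug in (img.get("src") or "").lower() for img in images),
        "in_url": kw_slug in path,
        "in_subheaders": any(kw in h.get_text().lower() for h in subheads),
    }

print(audit_secondary_placements(
    '<img src="/img/antique-skeleton-keys.jpg" alt="antique skeleton keys">'
    '<h2>A short history of skeleton keys</h2>',
    "https://example.com/blog/are-skeleton-keys-real",
    "skeleton keys"))
```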

Don't go overboard. Another big rule of thumb. You can see my friend here. He's being weighed down by his keywords. His ship has almost turned over. Search engines are going to use stemming. So stemming is basically saying, "I'm going to look at skeleton and I'm going to cut that down to 'skelet' so that if the word 'skeletal' or the words 'skeletons' or 'skeletals' or 'Skeletor' . . ." well, maybe Skeletor means a little something different. You guys remember He-Man, right? I know some of my viewers do.

But that stemming is going to mean that lots and lots of repetitions of minor variants of a keyword are totally unnecessary. In fact, they can annoy searchers and people who are consuming that content, and they might even trigger the engines' systems that say, "This is keyword stuffing. This is bad. Don't do it." Keyword stuffing, by the way, is super easy for engines to pattern match. It's going to make searchers click the Back button. So use a lot of caution if you're thinking about that.
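Here's a quick stemming illustration using NLTK's Porter stemmer -- one of many stemming algorithms, and not necessarily what any search engine uses, but it shows why piling on minor variants buys you little:

```python
# Stemming illustration with NLTK's Porter stemmer. The exact stems produced
# depend on the algorithm; search engines' own normalization is not public.
from nltk.stem import PorterStemmer   # pip install nltk

stemmer = PorterStemmer()
for word in ["skeleton", "skeletons", "skeletal", "key", "keys"]:
    print(f"{word:10s} -> {stemmer.stem(word)}")
```

When several variants collapse toward the same stem, repeating each one separately adds little or no extra signal -- it just reads as stuffing.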

What about on-page keyword use?

Remember that on-page keyword use is only a small piece of the algorithm -- a relatively small piece. You could get all of this absolutely perfect, or you could get only, say, 80% of it right -- just this basic stuff -- and the difference is pretty minute in terms of your ranking ability. So I would urge you not to spend too much time trying to go from, "Well, I hit these basic things that Rand talked about, but now I'm going to try and take my keyword targeting and on-page optimization to the absolute max." You'll get a very, very tiny extra amount of value.

Do consider searchers' intent and target the topics and questions that they have. Engines are smart about this too. Engines have these topic analysis and intent analysis models. So for a page that talks about skeleton keys but fails to mention words like locks or wards or master keys, the engines might go, "That doesn't seem particularly relevant, or not as relevant as the pages that do. So even though it has more links pointing at it, we're going to rank it lower."

Likewise, plenty of searchers are searching for those topics as well. So if you don't answer those queries and someone else does, well, they might click on you, but then they'll click back. Or they might click on you, but they won't share you or amplify you or link to you or bookmark you or remember your brand. You need all those signals in the modern on-page world.

All right, everyone. Look forward to some great keyword targeting, some good questions in the Q&A and the comments below. I'll see you next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com




