
Pagination vs. Incremental Page Loading For Search Engine Crawlability


In this episode of SEO With Chris, I’m talking about pagination vs. incremental page loading. Specifically, how incremental page loading (load more and infinite scroll) poses crawlability issues that pagination does not.

In order for a website’s pages to show up in search results, search engines have to be able to find them, and that’s where pagination and incremental page loading come into play.

(This content is also available in video form; below is a slightly modified text version.)

What Is Pagination?

When an e-commerce page, blog or resources section has too many products, blog posts or articles to show them all on the first page, the page will usually display 10, 20 or 30 items, and then you’ll click over to page two to see the next batch.

Here’s an example of what pagination looks like. The hiking shirts category page from REI, by default, shows 30 items per page. To see more, you scroll down to the bottom and click to page two to see 30 more shirts, and so on until you reach page 8. Each time you click to another page, the URL changes to ?page=2, ?page=3, etc.

[Screenshot: traditional pagination on REI’s hiking shirts category page]

Historically, all websites used this method of making content available. It’s easy to implement, easy for humans to use, and good for search engines as well. Search engines have no problem crawling traditional forms of pagination.
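To make that concrete, here’s a minimal sketch of what crawlable pagination markup typically looks like (the URLs are hypothetical, not REI’s actual markup). Each page is an ordinary <a href> link, so crawlers can follow it without executing any JavaScript:

<!-- Minimal sketch of crawlable pagination links (hypothetical URLs) -->
<nav class="pagination">
  <a href="/hiking-shirts/?page=1">1</a>
  <a href="/hiking-shirts/?page=2">2</a>
  <a href="/hiking-shirts/?page=3">3</a>
  <a href="/hiking-shirts/?page=2">Next</a>
</nav>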

Incremental Page Loading

As technology has marched on, we’ve seen the introduction of incremental page loading, which comes in two forms: load more buttons and infinite scroll.

Load More Buttons

The first type of incremental page loading is the load more or the show more button. In this configuration, you’ll see a default number of products/articles, and a button that you have to click in order to see more. The button usually says “Show More” or “Load More”, but it can be labeled anything. Once you click the button, more articles are dynamically loaded with JavaScript.
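To illustrate the mechanism, here’s a rough sketch of how a load more button is often wired up (the markup and the /api/articles endpoint are hypothetical). The key point is that the new links only exist after the JavaScript runs:

<button id="load-more">Load More</button>
<ul id="articles"><!-- first batch of articles rendered in the HTML --></ul>
<script>
  // Hypothetical sketch: fetch the next batch and append it with JavaScript.
  // A crawler that doesn't click the button never sees the appended links.
  var page = 1;
  document.getElementById('load-more').addEventListener('click', function () {
    page++;
    fetch('/api/articles?page=' + page)
      .then(function (res) { return res.json(); })
      .then(function (articles) {
        var list = document.getElementById('articles');
        articles.forEach(function (article) {
          var item = document.createElement('li');
          item.innerHTML = '<a href="' + article.url + '">' + article.title + '</a>';
          list.appendChild(item);
        });
      });
  });
</script>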

Here’s an example from NBCPhiladelphia.com. They have a tag page for the Philadelphia Phillies, which displays 20 articles above a “Show More” button. Clicking “Show More” triggers JavaScript to dynamically load another 20 articles. Those articles are not present in the HTML until the Show More button is clicked. This is problematic for search engines, and I’ll show you why in a minute.

[Screenshot: “Show More” incremental page loading on NBC Philadelphia]

At the time of publishing, the next article loaded after clicking “Show More” is about a man who met his kidney donor at a Phillies game. But when we inspect the page and search for it, that link doesn’t appear…until after you click “Show More.” Without additional configuration, search engines cannot find that link.

Infinite Scroll

The second type of incremental page loading is called infinite scroll. Infinite scroll is similar to a load more button, except that you don’t have to click a button, and more content is loaded automatically as you scroll. This configuration also uses JavaScript to dynamically load content which is not present in the HTML prior to scrolling.

Infinite scroll is challenging to show in a screenshot, so check out this example to see what I mean: https://htmx.org/examples/infinite-scroll/
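Under the hood, infinite scroll is commonly built with an IntersectionObserver (or a scroll listener) that loads the next batch when a sentinel element near the bottom of the page comes into view. Here’s a hedged sketch, again with a hypothetical endpoint:

<ul id="articles"><!-- first batch of articles rendered in the HTML --></ul>
<div id="sentinel"></div>
<script>
  // Hypothetical sketch: load the next page when the sentinel scrolls into view.
  // As with load more buttons, the new links never exist in the initial HTML.
  var page = 1;
  var observer = new IntersectionObserver(function (entries) {
    if (entries[0].isIntersecting) {
      page++;
      fetch('/api/articles?page=' + page)
        .then(function (res) { return res.json(); })
        .then(function (articles) {
          var list = document.getElementById('articles');
          articles.forEach(function (article) {
            var item = document.createElement('li');
            item.innerHTML = '<a href="' + article.url + '">' + article.title + '</a>';
            list.appendChild(item);
          });
        });
    }
  });
  observer.observe(document.getElementById('sentinel'));
</script>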

Problems For Search Engines

Both methods of incremental page loading cause issues for search engines, and prevent them from comprehensively crawling the site. Here’s what Google says about crawling load more and infinite scroll:

“Once you’ve selected the most appropriate UX strategy for your site and SEO, make sure the Google crawler can find all of your content.

For example, you can implement pagination using links to new pages on your ecommerce site, or using JavaScript to update the current page. Load more and infinite scroll are generally implemented using JavaScript. When crawling a site to find pages to index, Google only follows page links marked up in HTML with <a href> tags. The Google crawler doesn’t follow buttons (unless marked up with <a href>) and doesn’t trigger JavaScript to update the current page contents.”

These two technologies are problematic because Google clearly states that it won’t click buttons or trigger the JavaScript needed to load the additional content for crawling.

Finding Crawlability Issues From Incremental Page Loading

We know that pagination is crawlable. Load more, show more, etc. are not crawlable. Infinite scroll is also not crawlable. Here’s how to check for them…

One option is to crawl the site with Screaming Frog, Sitebulb or another crawler and look for URLs containing ?page=2, ?page=3, etc. That’s an indicator that pagination is present and you may not have an issue. If you crawl the site and don’t find those URLs, it may be an indicator that the site uses a load more or infinite scroll setup.

If it’s a small site and you know it has e-commerce pages or blog/resources pages, you can go look at those manually and see if incremental loading is present.

Another thing you can do is crawl the site normally and then also crawl the XML sitemap. If the URL counts differ between the sitemap crawl and the regular crawl, that might be an indicator that pages present in the sitemap can’t be discovered by crawling because of incremental page loading. (You might also find that you have island pages, which could be a separate issue.)

Fixing Crawlability Issues From Incremental Page Loading

There are a few ways to effectively fix crawlability issues created by incremental page loading.

Change To Traditional Pagination

One option is to implement a traditional-style pagination setup. It’s time-tested, we know it works, it’s pretty lightweight and it’s crawlable. I’m not a UX person, so I can’t speak to the user experience impact of traditional pagination vs. incremental loading, but I do know that traditional pagination is crawlable.

Hidden Pagination

Another option is to implement hidden pagination on pages with an incremental loading configuration. It’s a version of pagination that’s present in the HTML where search engines can see it, but visually hidden from visitors.

Remember the NBC Philadelphia page from earlier? Here’s the hidden pagination they’re using on their incremental loading pages:

[Screenshot: hidden pagination in NBC Philadelphia’s HTML]
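As a generic illustration (not NBC’s actual markup), hidden pagination can be as simple as ordinary <a href> links that are present in the HTML but visually hidden, so crawlers can discover the paginated URLs even though visitors only ever click “Show More”:

<!-- Sketch: crawlable pagination links in the HTML, hidden from visitors -->
<div class="pagination" style="display: none;">
  <a href="/tag/phillies/?page=2">Page 2</a>
  <a href="/tag/phillies/?page=3">Page 3</a>
</div>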

Hiding Content With CSS 

A third option is loading all of the content in the HTML and then using CSS to hide it underneath a load more/show more button. In that situation, clicking the button simply unhides the content – you’re not dynamically loading it with JavaScript. Search engines can crawl it because they don’t need to click the button – it’s right there in the HTML the whole time.
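Here’s a rough sketch of that approach (the class names are hypothetical): every item is in the HTML from the first load, and the button just toggles a CSS class:

<style>
  /* Items beyond the first batch are in the HTML, but start hidden */
  li.extra { display: none; }
  li.extra.revealed { display: list-item; }
</style>
<ul id="articles">
  <li><a href="/article-1/">Article 1</a></li>
  <!-- ...the first 20 items... -->
  <li class="extra"><a href="/article-21/">Article 21</a></li>
</ul>
<button onclick="document.querySelectorAll('li.extra').forEach(function (el) { el.classList.add('revealed'); });">
  Show More
</button>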

Wrap-Up

Whereas traditional pagination is crawlable, incremental loading by default is not, and to fix it you need to implement one of several different configurations. Questions? Comment here or find me on Twitter/X.







    Virtual Event Schema Sample


    This is a sample of Event Schema for Virtual Events. The goal is to craft Schema that *may* associate the Virtual Event with an Organization’s Google My Business listing.

    <script type="application/ld+json">
    {
      "@context" : "http://schema.org",
      "@type" : "Event",
      "name" : "Indy Hall Event Number One",
      "description" : "This is the first Indy Hall virtual event.",
      "url" : "https://www.indyhall.org/events/number-one/",
      "image" : "https://www.indyhall.org/media/events/number-one.jpg",
      "startDate" : "2020-04-23T14:00-04:00",
      "endDate" : "2020-04-23T15:00-04:00",
      "eventAttendanceMode" : "https://schema.org/OnlineEventAttendanceMode",
      "eventStatus" : "https://schema.org/EventScheduled",
      "performer" : {
        "@type" : "Person",
        "name" : "Alex Hillman"
      },
      "offers" : {
        "@type" : "Offer",
        "price" : "0.00",
        "priceCurrency" : "USD",
        "availability" : "https://schema.org/InStock",
        "url" : "https://www.indyhall.org/tickets/",
        "validFrom" : "2020-04-23T12:00-05:00"
      },
      "organizer" : {
        "@type" : "Organization",
        "name" : "Indy Hall",
        "url" : "https://www.indyhall.org/",
        "address" : {
          "@type" : "PostalAddress",
          "streetAddress" : "399 Market Street Suite 360",
          "addressLocality" : "Philadelphia",
          "addressRegion" : "PA",
          "postalCode" : "19106"
        }
      },
      "location" : {
        "@type" : "VirtualLocation",
        "url" : "https://www.indyhall.org/zoom/"
      },
      "sameAs" : ["https://meetingplace.io/indyhall/events/0001"]
    }
    </script>


    Speaking At WordCamp Philly on October 5th


    I’m excited to announce that I’ll be speaking at WordCamp Philly on October 5th! WordCamp Philly is a single day conference for WordPress experts and enthusiasts to come together and listen to presenters talk about innovative ways to use WordPress.

    My session will focus on How To Get The Most Out Of Yoast. I will discuss how to use the Yoast SEO plugin effectively in order to get the most out of it and maximize your site’s SEO potential using the plugin.

    SEO Crawlability vs. Indexation vs. Ranking


    Crawlability, indexation and ranking are often confused, and while they relate very closely, they mean very different things. Before ranking can happen, crawling and indexing have to occur, and that’s why technical SEO is so important.

    Crawlability

    Crawlability is at the core of SEO. When a search engine crawler like Googlebot or Bingbot accesses a website, it crawls the site to find all the pages, images, links, CSS and JavaScript files, etc. By definition, crawlability is the ability of search engines to find and access website content.

    If a search engine can’t access website content, then neither indexation nor ranking are possible. Those things rely on search engines being able to crawl the content in the first place.

    Changing Crawlability with Robots.txt

    Robots.txt is the most common method of preventing search engines from crawling a website. If a specific URL or subfolder is blocked in robots.txt, search engines will not crawl it. That means links on that page won’t be discovered, and entire site sections might not be discovered. (That’s why it’s important to have an XML and/or HTML sitemap – redundancy!)

    In this example, robots.txt is being used to block search engines from crawling pages that fall in the /properties/listing/ subfolder on a real estate website.

    User-agent: *
    Disallow: /properties/listing/

    Blocking Crawlability

    Sometimes content is prevented from being crawled inadvertently. Content rendered with JavaScript can sometimes be problematic, although changes to Googlebot now allow it to crawl JavaScript better than ever before.

    Additionally, if links use a nofollow directive, that may present issues, as the nofollow attribute instructs Googlebot NOT to follow links. However, there’s debate about how Google really handles nofollow links, with some claiming that Google does crawl them but doesn’t give them any link equity.

    Indexation

    Once content has been crawled, it’s up to search engines to decide whether to index that content. If the content is duplicated or plagiarized, then search engines may choose to crawl it, but not index it. Similarly, thin or low quality pages may fall victim to the same fate.

    Preventing Indexation

    You may choose to intentionally prevent search engines from indexing content using a noindex directive. However, search engines still need to crawl that page in order to see the noindex tag. If you want to noindex a page, you have to let it be crawled first. Sometimes a noindexed page will be stuck in the index because it’s blocked by robots.txt and therefore Google cannot see the noindex directive after it’s been added.
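    For reference, the noindex directive is usually added as a meta robots tag in the page’s <head> (it can also be sent as an X-Robots-Tag HTTP header):

    <!-- Allows crawling, but tells search engines not to index the page -->
    <meta name="robots" content="noindex">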

    Common types of pages that might be noindexed include tag & category pages (common on WordPress). These pages are valuable in that they automate creation of internal links, but they aren’t great for search since they have primarily dynamic content and aren’t optimized well for organic search. I typically recommend noindexing tag & category pages.

    Ranking

    Ranking is the last step in the process. A page can get crawled and indexed, but not rank well at all. In a competitive space like retail and eCommerce, there are hundreds of websites trying to rank for the same keywords and only 10 will end up with page one visibility.

    Improving rankings relies very heavily on A) the on-page content and B) overall site authority and off-site (linking) efforts. Ranking is usually the most difficult part – assuming there aren’t glaring technical issues on the site, crawling and indexation are a lot easier.

    Summary

    To summarize: rankings rely on crawling and indexation, in that order: Crawling > Indexing > Ranking.


    Speaking At WordCamp Philly on October 27th


    I’m excited to announce that I’ll be speaking at WordCamp Philly on October 27th! WordCamp Philly is a single day conference for WordPress experts and enthusiasts to come together and listen to presenters talk about innovative ways to use WordPress.

    My session will focus on Using Data & Research For Website Planning. I will discuss the importance of planning ahead, and how you can build a content strategy using commercially available tools and your own existing data. I will also discuss technical aspects including content structure, the hub and spoke model, URL hierarchy, site organization and more.

    Download Slides

    How Does Google Treat Subdomains For SEO?


    Time and time again, Google has shown that they treat subdomains very differently from root domains, in some cases treating them as completely different sites. For SEO purposes, it’s generally recommended to use a subfolder instead of a subdomain.

    Subdomain vs. Subfolder

    A subdomain is a string of characters that precedes the root domain and uses a period to separate them. A subfolder comes after the domain suffix and is separated by a forward slash. You can have multiple subdomains or subfolders, and you’ll frequently see them combined.

    Examples:

    • Blog.chrisberkley.com is a subdomain
    • Chrisberkley.com/posts/ is a subfolder
    • Blog.chrisberkley.com/posts/ is a subdomain with a subfolder.
    • First.blog.chrisberkley.com is two subdomains (“first” and “blog”)
    • First.blog.chrisberkley.com/posts/recent/ is two subdomains (“first” and “blog”) with two subfolders (“posts” and “recent”).

    Did You Know?

    In the URL www.chrisberkley.com, “www” is technically a subdomain. It’s true!

    Why Use Subdomains?

    There are legitimate reasons that necessitate the use of subdomains – they aren’t always avoidable.

    Technical Limitations

    Sometimes there are technical infrastructure limitations that prevent the use of a subfolder. In large organizations with big sites, it’s common for access to the root domain to be limited, with subdomains used instead for ease of use.

    This may include piecing together multiple CMSs. If the core site is hosted on one CMS like Magento or Sitecore, but the blog is hosted on WordPress, it can be difficult (or impossible) to make them work together on the root domain.

    Organizational Control

    Large organizations often have multiple divisions that operate independently. Such is the case with universities, where individual colleges need to have edit access to their own sites (School of Nursing, School of Engineering, etc.). The same is true for other national organizations like banking institutions.

    It’s a lot easier to spool up a separate site on a subdomain and grant a team of people edit access to that particular subdomain. You wouldn’t want the School of Nursing making edits that ended up taking down the entire root domain for the whole college.

    International

    Sometimes organizations will create international subdomains like fr.chrisberkley.com or en.chrisberkley.com. There’s no inherent SEO benefit to including a country or language code in the subdomain, but it may come back to organizational structure or technical limitations. In a perfect world, you’d place those in subfolders (chrisberkley.com/fr/ or chrisberkley.com/en/) and implement hreflang. Alas, we don’t live in a perfect world and that isn’t always possible.
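    For illustration, here’s roughly what hreflang annotations look like with the subfolder approach (the URLs are hypothetical):

    <!-- hreflang tags pointing to language versions in subfolders -->
    <link rel="alternate" hreflang="en" href="https://chrisberkley.com/en/" />
    <link rel="alternate" hreflang="fr" href="https://chrisberkley.com/fr/" />
    <link rel="alternate" hreflang="x-default" href="https://chrisberkley.com/" />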

    How Google Treats Subdomains

    Working with subdomain-heavy clients, my firsthand experience is that Google treats subdomains as separate sites. A client of mine with two divisions of their company had one set up on a subdomain and the other on the root domain. They had some content overlap, and we sometimes saw their pages swap places in search results.

    It’s my belief that subdomains don’t inherit domain authority or site equity from the root domain. WordPress.com has a domain authority of 94. If subdomains inherited that value, wouldn’t it make sense to set up free blogs on their platform (which uses subdomains) and immediately benefit from the SEO value?

    Secondly, Google’s own Search Console requires you to set up separate profiles for subdomains. That’s another good indicator that they value subdomains differently.

    That doesn’t mean subdomains inherit ZERO equity from their root domains. They may inherit a greatly reduced amount. OR, Google may adjust the amount of equity they inherit on a case-by-case basis. Since WordPress.com has thousands of low-authority blogs on subdomains, Google may devalue their subdomains more than other sites that only have a handful.

    Google has stated that their search engine is indifferent to subdomains vs. subfolders, but the SEO community has repeatedly found that to be false. Industry thought-leader Moz moved their content from a subdomain to a subfolder and saw measurable increases just as a result of that move.

    Questions? Comments? Leave them here or reach out to me on Twitter: @BerkleyBikes.

    Speaking At WordCamp Lehigh Valley on August 18th


    I’m excited to announce that I’ll be speaking at WordCamp Lehigh Valley on August 18th! WordCamp Lehigh Valley is a single day conference for WordPress experts and enthusiasts to come together and listen to presenters talk about innovative ways to use WordPress.

    My session will focus on Using WordPress Tags & Categories Effectively. Tags and categories are often misused, often in ways that can cause confusion and SEO issues as a result of bloated sites with numerous thin or low value pages. I’ll discuss how to use these WordPress features in a meaningful way that provides benefit for both human site visitors and search engines.

    Download My Slides

    Gravity Forms Event Tracking with Google Tag Manager


    You’re here because you want to use Google Tag Manager and Google Analytics to track form submissions using the Gravity Forms plugin. This tutorial, like my others for Contact Form 7 and Ninja Forms, will show you how to configure these platforms to track successful submissions, and use different form IDs to set up different goal conversions in GA. These are the steps we’ll be following:

    Step 1: Configure Gravity Forms to use AJAX to submit the form.
    Step 2: Create a GTM Tag that pushes a Data Layer event when the form is submitted successfully.
    Step 3: Create a GTM Trigger that looks for the Data Layer event when the form is submitted, and triggers a second GTM Tag.
    Step 4: Create a second GTM Tag that looks for the Data Layer push and creates Google Analytics Events.
    Step 5: Create a form ID Data Layer Variable so that we can capture form ID.
    Step 6: Set up a Goal Conversion in GA based on the GA Event from Step 4.

    Step 1: Enable AJAX Submissions

    If you use Gravity Forms’ default settings, the page will reload when the form is submitted, and none of these steps will work. Instead you need to enable AJAX submissions, which is done in the form’s shortcode.

    Your normal shortcode looks like this: [gravityform id="1" title="true" description="true"]

    Your edited shortcode will look like this: [gravityform id="1" title="true" description="true" ajax="true"]


    Step 2: Create GTM Tag #1

    The first tag is a few lines of JavaScript that listens for a form submission. Technically it listens for the gform_confirmation_loaded event…which occurs when the confirmation message is loaded…which occurs when the form has been submitted. This is important because it differentiates between actual form submissions and false submissions (like when one of the required form fields is incomplete). The plugin fires a submit event WHENEVER the submit button is clicked (whether actually submitted or not), so the submit event is not reliable (and button clicks aren’t reliable with any plugin).

    Here’s the tag you need to set up. I named it gravityFormsSubmission.


    Tag Type: Custom HTML

    Add the following code:

    <script>
    jQuery(document).ready(function() {
      // Listen for Gravity Forms' confirmation event, which only fires
      // after a successful submission (not on every submit-button click).
      jQuery(document).bind('gform_confirmation_loaded', function(event, formId) {
        // Push a custom event and the submitted form's ID into the Data Layer.
        window.dataLayer.push({
          'event' : 'gravityFormSubmission',
          'gfformID' : formId
        });
      });
    });
    </script>

    Triggering: All Pages

    This listens for the gform_confirmation_loaded event and fires an event into the Data Layer called “gravityFormSubmission.” It also grabs the form ID from the Gravity Forms shortcode and pushes it into the Data Layer. You can use different forms with different IDs to create different Goal Conversions later in this tutorial.


    Step 3: Create a GTM Trigger

    Next, create a Trigger for the Tag you’ve just created. Mine is Gravity Forms Trigger.


    Trigger Type: Pick Other – Custom Event from the default list. We’ll use the event we created in Tag #1 as the trigger event.

    Event Name: gravityFormSubmission

    This Trigger Fires On: All Custom Events

    This uses the custom Data Layer event from Step 2 to trigger a Google Analytics event tag that we will set up in Step 4.


    Step 4: Create Tag #2

    Create another tag and name it Gravity Form Submission. Fill out the following fields as such:

    Tag Type: Universal Analytics

    Track Type: Event

    Category: You can set this as whatever you want and that’s what it’s going to show up as in Google Analytics when you go to the Events report. I named mine contact-form.

    Action: I picked gfsuccessful-form-submission.

    Label: If you want to track different forms separately, you need to set this as form-id-{{GFformID}} and follow the rest of these instructions closely.

    Value: I did not add a value.

    Non-Interaction Hit: I recommend selecting False – this will prevent the session from being counted as a bounce if the visitor only looks at one page, but completes a form submission before leaving.

    Tracking ID: Your Google Analytics UA-XXXXXXXX tracking ID. I have mine set up as a custom constant variable, which explains why it says {{GA Tracking Code}}.

    More Settings & Advanced Settings: If you’re an analytics wiz, there are advanced settings in here that you may want to configure. However, I did not change any of them.

    When you’re done, you’ll end up with this:

    [Screenshot: the completed Google Analytics event tag in GTM]

    Step 5: Create a Form ID Data Layer Variable

    In Step 2 we pushed the form ID into the Data Layer when the form is submitted. Now we need to capture the form ID and get it into GTM and into our Google Analytics Events.


    Add a new User-Defined Variable called GFformID.

    Variable Type: Data Layer Variable

    Data Layer Variable Name: gfformID (case sensitive).


    Submit (publish) your updated GTM workspace and then we’ll move on to Google Analytics Goal configuration!


    Step 6: Setting up the Goal Conversion(s)

    In Google Analytics, navigate to Admin > View > Goals. Click +New Goal and set it up as such:

    Goal setup: Select Custom

    Goal description:

    Name: Call it whatever you want. I’ve called mine Gravity Form Submission.

    Type: Select Event.

    Goal details:

    Category: Equals to > contact-form

    Action: Equals to > gfsuccessful-form-submission

    Label: Select the form ID of the form you want to track. If you don’t want to track different forms separately, you can leave this blank. Mine was form-id-2.

    Value: You can add a monetary value here if you’d like.

    Use the Event Value as the Goal Value: If you had set up a Value attached to each event, you could use it as the Goal Value here. Since I did not, I will not.

    Save the Goal Conversion and you’re done. If you want to set up other goals for different form IDs, just repeat the process but change the Label in the Goal Conversion.

    Lastly, test the implementation by doing a couple test submissions on the site. You can use GA’s real time reporting to look for events firing. If you’re not seeing them, make sure you’re not excluding traffic with filters or opt-out extensions in your browser. For a mostly foolproof solution, submit the form on a mobile device using a data connection (no wifi), and use a completely unfiltered GA view to watch for the event.

    Ninja Forms Event Tracking with Google Tag Manager


    Tracking Ninja Form submissions as Google Analytics goal conversions is a bit complex when using Google Tag Manager. This post will show you how to set up tracking, log the form ID on submission, AND only track conversions when the form is actually submitted.

    Step 1: Create a GTM Tag that pushes a Data Layer event when the form is submitted successfully.
    Step 2: Create a GTM Trigger that looks for the Data Layer event when the form is submitted, and triggers a second GTM Tag.
    Step 3: Create a second GTM Tag that looks for the Data Layer push and creates Google Analytics Events.
    Step 4: Create a form ID Data Layer Variable so that we can capture form ID.
    Step 5: Set up a Goal Conversion in GA based on the GA Event from Step 3.


    Step 1: Create Tag #1

    The first tag is just a bit of JavaScript that listens for a form submission. When the form is submitted, it sends a DOM event called nfFormSubmitResponse – this event is only triggered when the form is actually submitted. Log into GTM and create a new Tag (I’ve called mine nfFormSubmission) with the following configuration:

    Tag Type: Custom HTML

    Add the following code:

    <script>
    jQuery(document).ready(function() {
      // Listen for Ninja Forms' nfFormSubmitResponse event, which only fires
      // when the form is actually submitted.
      jQuery(document).on('nfFormSubmitResponse', function(event, response, id) {
        // Push a custom event and the submitted form's ID into the Data Layer.
        window.dataLayer.push({
          'event' : 'ninjaFormSubmission',
          'NFformID' : response.id
        });
      });
    });
    </script>

    Triggering: All Pages

    This listens for the nfFormSubmitResponse DOM event and fires an event into the Data Layer called “ninjaFormSubmission.” It also captures the form ID from the Ninja Forms shortcode and pushes it into the Data Layer (in this example, that ID is “2”). The form ID is what will allow you to create different Goal Conversions for different forms – we’ll revisit that in later steps.

    Step 2: Create a GTM Trigger

    Next, create a Trigger for the Tag you’ve just created. I named mine Ninja Forms Trigger.

    Trigger Type: Pick Other – Custom Event from the default list. We’ll use the event we created in Tag #1 as the trigger event.

    Event Name: ninjaFormSubmission

    This Trigger Fires On: All Custom Events

    This uses the custom Data Layer event from Step 1 to trigger a Google Analytics event tag that we will set up in Step 3.

    Step 3: Create Tag #2

    Create another tag and name it whatever you want (I used “Ninja Form Submission”). Fill out the following fields as such:

    Tag Type: Universal Analytics

    Track Type: Event

    Category: You can set this as whatever you want and that’s what it’s going to show up as in Google Analytics when you go to the Events report. I named mine contact-form.

    Action: I picked ninja-successful-form-submission.

    Label: If you want to track different forms separately, you need to set this as form-id-{{NFformID}} and follow the rest of these instructions closely.

    Value: I did not add a value.

    Non-Interaction Hit: I recommend selecting False – this will prevent the session from being counted as a bounce if the visitor only looks at one page, but completes a form submission before leaving.

    Tracking ID: Your tracking ID. I have mine set up as a variable, which explains why it says {{GA Tracking Code}}.

    More Settings & Advanced Settings: If you’re an analytics wiz, there are advanced settings in here that you may want to configure. However, I did not change any of them.

    When you’re done, you’ll end up with this:

    [Screenshot: the completed Google Analytics event tag in GTM]

    Step 4: Create a Form ID Data Layer Variable

    In Step 2 we pushed the form ID into the Data Layer when the form is submitted. Now we need to capture the form ID and get it into GTM and into our Google Analytics Events. 

    Add a new User-Defined Variable called NFformID.

    Variable Type: Data Layer Variable

    Data Layer Variable Name: NFformID (case sensitive).

    Submit (publish) your updated GTM workspace and then we’ll move on to Google Analytics Goal configuration!

    Step 5: Setting up the Goal Conversion(s)

    In Google Analytics, navigate to Admin > View > Goals. Click +New Goal and set it up as such:

    Goal setup: Select Custom

    Goal description:

    Name: Call it whatever you want. I’ve called mine Ninja Form Submission.

    Type: Select Event.

    Goal details:

    Category: Equals to > contact-form

    Action: Equals to > ninja-successful-form-submission

    Label: Select the form ID of the form you want to track. If you don’t want to track different forms separately, you can leave this blank. Mine was form-id-2.

    Value: You can add a monetary value here if you’d like.

    Use the Event Value as the Goal Value: If you had set up a Value attached to each event, you could use it as the Goal Value here. Since I did not, I will not.

    Save the Goal Conversion and you’re done. If you want to set up other goals for different form IDs, just repeat the process but change the Label in the Goal Conversion.

    Lastly, test the implementation by doing a couple test submissions on the site. You can use GA’s real time reporting to look for events firing. If you’re not seeing them, make sure you’re not excluding traffic with filters or opt-out extensions in your browser. For a mostly foolproof solution, submit the form on a mobile device using a data connection (no wifi), and use a completely unfiltered GA view to watch for the event.


    Using Contact Form 7 instead? See how to track CF7 form submissions with GTM and GA.

    How Long For Content To Rank?


    The number one struggle I face when pitching clients and showing them the value of SEO is that it takes time. Whereas pay-per-click advertising and social media can be spun up and provide a return on investment relatively quickly, SEO is an annuity investment.

    To make a relevant analogy: you can’t invest money in the stock market today and expect dividends tomorrow. You invest money today with the understanding that it will provide value later. SEO is similar.

    Nevertheless, that’s a real problem because when clients are making a significant investment in SEO, they want to see results. That’s why I prepare clients by telling them “some of the work we do isn’t going to yield results right away. It’s going to take 6-12 months.”

    This is especially true with publishing new content. Ahrefs did a study about how long it takes to rank in Google. They looked at the average age of pages ranking in positions 1-10, and the overall takeaway was that higher positions typically featured pages that had been live for several years. They also noted that higher authority sites took less time to rank well, which is a no-brainer. If there’s one single graph that shows their findings best, it’s this one:

    [Graph from the Ahrefs study: average age of ranking pages by position]

    That’s helpful, but does their large-scale study align with actual firsthand findings? Sure, there’s value in a larger data sample, but actual anecdotal data would certainly help reinforce those findings.

    Fortunately I have that data. Across multiple clients in multiple industries, I can highlight examples of pages that rank well for target keywords, but didn’t reach full potential until months after they were published. I’m sharing these examples so that both consultants and clients can form realistic expectations for SEO campaigns, which is something I believe this industry can and should do a much better job at.

    Example #1

    Client Industry: Construction

    Type of page: WordPress blog post

    This particular page targeted “rental cost” keywords which are fairly low volume but highly relevant in the client’s industry. The client was hesitant to discuss pricing, but competitors were doing it, so we pushed them to create their own page. Not only does it drive meaningful traffic, but it has resulted in ~3 leads per month since it was published 16 months ago.

    Example #2

    Client Industry: Web hosting

    Type of page: Resource center pages

    These two pages were both created as part of a large content initiative – more than 120 pages of long form content over a one year period. Notably, they both saw steady growth and then marked increases in January 2018, possibly as a result of an algorithm update.


    Example #3

    Client Industry: Healthcare

    Type of page: Core site page

    This page saw long periods of inactivity in the very competitive healthcare space, before eventually moving into ranking positions that drive meaningful amounts of traffic (this is also a result of other improvements made to the site during that time).

    Example #4

    Client Industry: Local retail

    Type of page: WordPress blog post

    This example comes from a mom & pop retail store. A blog post that I wrote eventually moved into top ranking positions for some industry head terms, outranking even the brands that the retailer sold in their store. Unfortunately, the business owners did not continue digital marketing efforts after I left my position there, and the content did not retain its visibility in search results.

    Example #5

    Client Industry: Digital marketing

    Type of page: WordPress blog post

    The last example comes from my own website (which has lower site authority than any of my clients). While not initially a large traffic source, an analytics blog post I wrote moved into top positions (including the answer box) over a period of one year.

    Summary

    The key takeaway here is that firsthand data supports the study that Ahrefs did – that content may take months or more to move into top ranking positions, especially for competitive keywords. Site authority absolutely helps – two of the sites included here had domain authority ratings between 50 and 80, which is a rough indicator that they’re authoritative, especially in their respective industries.

    With some of the examples, we did employ other tactics like building internal and external links. All pages were submitted to Google Search Console after publishing to make sure they got crawled as soon as possible. It’s also obvious that none of these pages existed in a vacuum, meaning there were other marketing (and SEO) initiatives that could’ve contributed to better rankings. Nevertheless, there is a clear pattern showing that even highly optimized content on authoritative sites doesn’t always achieve top rankings immediately, and SEO continues to require patience.