
BigCommerce Category Pages: The Duplicate Content Fix

Faris Khalil
Apr 16, 2026
21 min read

BigCommerce stores with 20+ product filters on a single category page can generate thousands of indexable URLs from one parent. A category with 8 brand filters, 5 price ranges, and 4 size options produces 160 unique filter combinations. Each one gets its own URL. Each one competes with the parent category for crawl attention and ranking signals. That’s the duplicate content problem nobody warns you about until Google Search Console shows 4,000 “Discovered, currently not indexed” URLs and your category pages start dropping out of the top 50.

This post walks through the exact mechanics of how BigCommerce generates these URLs, where the platform’s default canonical handling breaks down, and how to fix each failure mode in the Stencil template layer. Every recommendation includes the code you’ll edit, the admin path you’ll use, and the specific thing that goes wrong if you skip a step. We’ve fixed this across dozens of BigCommerce stores, and the pattern is remarkably consistent. If you’re running a BigCommerce SEO strategy on a catalog with more than a few hundred products, this is the post that keeps your index clean.

The Faceted Navigation URL Pattern BigCommerce Generates

BigCommerce’s faceted search appends query string parameters to the category URL every time a shopper clicks a filter. A customer browsing /shoes/ who selects the “Nike” brand filter and the “$50-$100” price range lands on this URL:

/shoes/?brand[]=Nike&price=50-100

Add a second brand, and the URL grows:

/shoes/?brand[]=Nike&brand[]=Adidas&price=50-100

The bracket notation (brand[]) tells BigCommerce this is a multi-select facet. Single-select facets like price use a plain key-value pair. Color filters, size filters, custom fields. They all stack. The URL length grows linearly with each selected facet, and every unique combination is a distinct URL from Googlebot’s perspective.

Here’s what makes this dangerous: BigCommerce renders these filtered URLs as full HTML pages with their own <title> tags, <meta> descriptions, and product grids. The title tag typically inherits the parent category title. The meta description does the same. So you end up with dozens or hundreds of pages sharing the same title and description, each displaying a subset of the same product list. Google sees that as duplicate content, thin content, or both.

You can confirm this behavior in your store right now. Navigate to any category with active facets. Click a filter. Look at the URL bar. Then view page source and compare the <title> tag to the unfiltered category. They’ll match. That’s the core problem, and everything else in this post flows from it.

The Parameter Explosion Problem

The math gets ugly fast. Each facet multiplies the number of possible filter combinations by its count of values, so the totals compound rather than add.

Multiply the value counts of a category’s facets together for multi-facet combinations, and a single category generates over 8,000 unique URLs. Across 50 categories, you’re looking at 400,000+ filterable URLs competing for crawl budget against your actual product and category pages. Google’s John Mueller has stated directly that excessive parameterized URLs dilute crawl budget, and BigCommerce’s default configuration does nothing to prevent it. Your BigCommerce SEO foundation erodes from the inside out when this goes unchecked.
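The multiplication can be sketched in a few lines of JavaScript. This is a simplified model that assumes each facet contributes exactly one selected value per URL; multi-select facets like brand[] allow several values at once, which pushes the real count higher still.

```javascript
// Simplified model: count filter URLs when each facet contributes exactly
// one selected value. Multi-select facets generate even more combinations.
function singleSelectCombinations(facetValueCounts) {
  return facetValueCounts.reduce((product, count) => product * count, 1);
}

// The intro's example category: 8 brands, 5 price ranges, 4 sizes.
console.log(singleSelectCombinations([8, 5, 4])); // 160 filter URLs
```

Add a fourth facet with 10 values and the same category jumps to 1,600 URLs, which is how large catalogs reach five-figure URL counts per category.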

How Rel=Canonical Works on Filter Combinations (and Where It Fails)

BigCommerce’s default behavior canonicalizes every filtered page back to the unfiltered parent category. If a shopper lands on /shoes/?brand[]=Nike&price=50-100, the <link rel="canonical"> tag in the <head> points to /shoes/. This is the right move for most filter combinations. It consolidates ranking signals to the parent category and tells Google to ignore the filtered variant.

The problem is that BigCommerce applies this rule universally. Every filter combination canonicalizes to the parent. No exceptions. No conditional logic. That’s a problem when a specific filter combination has genuine search volume.

Take “nike running shoes under 100” as an example. That query has 2,400 monthly searches. The filtered URL /running-shoes/?brand[]=Nike&price=50-100 is the most relevant page on your site for that query. But the canonical tag tells Google to ignore it and rank /running-shoes/ instead. A page showing all brands at all price points. Google listens to the canonical, skips the filtered page, and your store loses traffic it should have captured.

Overriding the Canonical in the Stencil Template

BigCommerce doesn’t offer a UI toggle to override canonicals on specific filter combinations. You have to do it in the Stencil theme layer. The canonical tag isn’t rendered in templates/pages/category.html itself; it comes from the partial that builds the <head>, which in most Stencil themes lives in templates/components/common/head.html.

The default canonical output looks like this:

{{#if page.canonical_url}}
  <link rel="canonical" href="{{page.canonical_url}}">
{{/if}}

BigCommerce populates page.canonical_url server-side, stripping all query parameters. To override this for specific filter combinations, you need to inject conditional logic that checks the current URL parameters and decides if the canonical should be self-referencing:

{{#if page.canonical_url}}
  {{#if (contains current_url "brand[]=Nike" )}}
    {{#if (contains current_url "price=50-100")}}
      <link rel="canonical" href="{{current_url}}">
    {{else}}
      <link rel="canonical" href="{{page.canonical_url}}">
    {{/if}}
  {{else}}
    <link rel="canonical" href="{{page.canonical_url}}">
  {{/if}}
{{/if}}

This is a simplified example. In production, you’d build a more robust helper that checks against a defined list of “high-value” filter combinations stored in a theme setting or custom JSON file. The Stencil Handlebars contains helper works for string matching, but be careful: parameter order in the URL isn’t guaranteed. brand[]=Nike&price=50-100 and price=50-100&brand[]=Nike are different strings but the same page. Your logic needs to account for both orderings, or normalize the URL before comparing.
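One way to handle the ordering problem is to normalize the query string before comparing it against your whitelist. Here’s a hedged sketch using the standard URL and URLSearchParams APIs (the base origin is a placeholder needed only so relative paths parse):

```javascript
// Sketch: normalize a filtered-category URL so parameter order doesn't
// matter when checking it against a list of high-value filter combinations.
function normalizeFilterQuery(url) {
  const params = [...new URL(url, 'https://example.com').searchParams.entries()];
  // Sort by key, then value, so brand[]=Nike&price=50-100 and
  // price=50-100&brand[]=Nike normalize to the same string.
  params.sort(([aK, aV], [bK, bV]) => aK.localeCompare(bK) || aV.localeCompare(bV));
  return params.map(([k, v]) => `${k}=${v}`).join('&');
}

const a = normalizeFilterQuery('/shoes/?brand[]=Nike&price=50-100');
const b = normalizeFilterQuery('/shoes/?price=50-100&brand[]=Nike');
// a and b are now identical, so both orderings match one whitelist entry
```

Store your whitelist entries in the same normalized form, and the comparison becomes a plain string (or Set) lookup.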

Failure mode: If you override the canonical on too many filter combinations, you re-create the duplicate content problem you’re trying to solve. Only override for filter combos with proven search volume. Pull keyword data from Google Search Console or Ahrefs before making any canonical exceptions. A filter combo that gets zero impressions in GSC doesn’t need its own canonical.

The Stencil Template Line That Controls Faceted URL Generation

Faceted search rendering starts in templates/pages/category.html with a single conditional block:

{{#if category.faceted_search_enabled}}
  {{> components/category/shop-by-price}}
  {{#each facets}}
    {{> components/category/sidebar}}
  {{/each}}
{{/if}}

This block checks the faceted_search_enabled flag, which you control in the BigCommerce admin at Products > Product Filtering. When enabled, it loops through each configured facet and renders the sidebar filter UI. The sidebar partial (components/category/sidebar.html) outputs the individual filter links. Each link includes the query parameter for that facet value.

But the sidebar HTML isn’t where the URL actually gets built. That happens in JavaScript. Open assets/js/theme/category.js and look for the faceted search event handler. In Cornerstone (BigCommerce’s reference theme), it’s in the onReady() method:

import FacetedSearch from './common/faceted-search';

onReady() {
  if ($('#facetedSearch').length) {
    this.initFacetedSearch();
  }
}

initFacetedSearch() {
  const facetedSearchOptions = {
    config: {
      category: {
        shop_by_price: true,
        products: {
          limit: this.context.categoryProductsPerPage,
        },
      },
    },
    template: {
      productListing: 'category/product-listing',
      sidebar: 'category/sidebar',
    },
    showMore: 'category/show-more',
  };

  new FacetedSearch(facetedSearchOptions);
}

The FacetedSearch class (located in assets/js/theme/common/faceted-search.js) handles the URL parameter manipulation. When a shopper clicks a filter checkbox, this class reads the current URL, appends or removes the relevant query parameter, pushes the new URL to the browser’s history, and fires an AJAX request to fetch the filtered product grid. The URL construction happens in the updateView() method, which calls urlUtils.buildQueryString().

Controlling Which Facets Generate Indexable URLs

The real control point is in the admin panel at Products > Product Filtering > Configure. Each facet (brand, price, rating, custom fields) has an on/off toggle. Disabling a facet here removes it from the sidebar and stops BigCommerce from generating filter URLs for it. But this is a blunt instrument. You can’t disable a facet for search engines while keeping it available to shoppers.

For more granular control, modify the faceted-search.js file to add rel="nofollow" to filter links you don’t want crawled. In the template partial that renders each facet link, wrap the anchor in a conditional:

{{#if (isSeoFacet facet.name)}}
  <a href="{{url}}">{{name}} ({{count}})</a>
{{else}}
  <a href="{{url}}" rel="nofollow">{{name}} ({{count}})</a>
{{/if}}

You’d define the isSeoFacet helper in your theme’s Handlebars helpers to check against a whitelist of facets you want indexed. Brand facets might be worth indexing. Rating facets almost never are. This approach keeps the shopper experience intact while guiding Googlebot away from low-value filter pages.
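Stencil’s server-side Handlebars ships with a fixed helper set, so treat isSeoFacet as a name for logic you implement however your theme allows, server-side or client-side. The check itself is just a whitelist lookup; here’s a sketch in plain JavaScript, where the facet names are illustrative assumptions rather than a recommendation:

```javascript
// Sketch of the whitelist check behind the hypothetical isSeoFacet helper.
// The facet names are examples; load the real list from a theme setting.
const INDEXABLE_FACETS = new Set(['brand']);

function isSeoFacet(facetName) {
  return INDEXABLE_FACETS.has(String(facetName).toLowerCase());
}

// isSeoFacet('Brand')  -> true:  link stays followable
// isSeoFacet('rating') -> false: link gets rel="nofollow"
```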

Failure mode: Adding rel="nofollow" to facet links doesn’t prevent indexing. It prevents link equity from flowing through those links. Google can still discover and index the filtered URLs through sitemaps, internal links elsewhere on the site, or external backlinks. You need nofollow combined with noindex meta tags for complete control. One without the other leaves gaps.

Using Robots.txt and Meta Robots to Block Low-Value Filter URLs

BigCommerce provides a built-in robots.txt editor in the admin panel at Storefront > Robots.txt. This is one of the few SEO controls BigCommerce gives you at the server level, and it’s critical for managing faceted navigation crawl waste.

The simplest rule blocks all filter parameters. Include a wildcard after the ? so each rule matches its parameter anywhere in the query string (a pattern like /*?brand only matches URLs where brand happens to be the first parameter):

Disallow: /*?*brand
Disallow: /*?*price
Disallow: /*?*color
Disallow: /*?*size

This tells crawlers not to fetch any URL containing those query parameters. It’s effective but blunt. If you have brand filter URLs that rank and drive traffic, this rule kills them. A Disallow in robots.txt also doesn’t remove already-indexed pages. It just prevents future crawling. Pages already in Google’s index stay there, accruing impressions (and potentially rankings) for months.

The Better Approach: Meta Robots Tags

A meta robots tag gives you page-level control and actually tells Google to remove the page from its index. The implementation goes in your Stencil template. In templates/components/common/head.html, add conditional logic above the existing canonical tag:

{{#if (hasLowValueFilters current_url)}}
  <meta name="robots" content="noindex,follow">
{{else}}
  <meta name="robots" content="index,follow">
{{/if}}

The noindex,follow combination is deliberate. noindex removes the page from search results. follow tells Google to still crawl the links on the page, which means product links on a filtered page still pass equity to the individual product pages. Switching to noindex,nofollow would cut off that equity flow, which hurts your product pages.

The hasLowValueFilters helper needs to check for filter parameters that don’t correspond to high-value search queries. Here’s a practical implementation approach:

  1. Export your Google Search Console data for the last 12 months. Filter for URLs containing query parameters.
  2. Identify filter combinations that generated more than 100 impressions and had a click-through rate above 1%.
  3. Those are your “high-value” filter combinations. Everything else gets noindex.
  4. Store the high-value combinations in your theme’s config.json or a custom settings file.
  5. Build the Handlebars helper to check the current URL against that list.
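The classification in steps 2 and 3 is easy to script once the export is parsed into objects. A sketch, assuming each row carries the URL plus its impression and click totals (the field names are placeholders, not the GSC export’s exact column headers):

```javascript
// Sketch: classify filter URLs as high-value or low-value using the
// thresholds above: more than 100 impressions and CTR above 1%.
function classifyFilterUrls(rows) {
  const highValue = [];
  const lowValue = [];
  for (const { url, impressions, clicks } of rows) {
    const ctr = impressions > 0 ? clicks / impressions : 0;
    (impressions > 100 && ctr > 0.01 ? highValue : lowValue).push(url);
  }
  return { highValue, lowValue };
}

const sample = [
  { url: '/shoes/?brand[]=Nike&price=50-100', impressions: 2400, clicks: 90 },
  { url: '/shoes/?rating=3', impressions: 12, clicks: 0 },
];
// classifyFilterUrls(sample) keeps only the Nike URL in highValue
```

The highValue array is what you persist to config.json for the Handlebars helper in step 5 to read.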

Failure mode: If you apply noindex to filter pages that have existing backlinks, you lose the equity those backlinks carry. Check Ahrefs or Moz for inbound links to filtered URLs before noindexing them. If a filtered URL has strong backlinks, keep it indexed and set a self-referencing canonical instead.

Combining Robots.txt and Meta Robots

Don’t use both on the same URL. If robots.txt blocks a URL, Google can’t crawl it, which means it can’t read the meta robots tag. The noindex directive never gets processed. Google has confirmed that blocked URLs can remain in the index indefinitely if they were indexed before the robots.txt rule was added. Use robots.txt for URL patterns you never want crawled at all (like internal search result pages). Use meta robots for filter URLs where you want Google to crawl the page (passing link equity) but not index it.

Pagination Canonical Chain Issues

BigCommerce paginates category pages using the page query parameter. Page 2 of the “Shoes” category loads at /shoes/?page=2. By default, BigCommerce canonicalizes every paginated page back to page 1. /shoes/?page=2 gets <link rel="canonical" href="/shoes/">. Same for page 3, page 4, and page 47.

For categories with 10 or fewer pages, this is usually fine. The products on pages 2-10 are also discoverable through other paths (search, related products, cross-sells). But for categories with 50+ pages, this canonical structure creates real problems.

Why Deep Pagination Needs Its Own Canonicals

A category with 1,000 products at 20 per page creates 50 paginated URLs. Products on page 40 are 800 products deep. The only internal link path to those products goes through the pagination chain: page 1 links to page 2, page 2 links to page 3, and so on. If every page canonicalizes to page 1, Google has no incentive to crawl past page 1. The canonical tag says “this is the same as page 1,” so Google treats pages 2-50 as duplicates and often skips crawling them entirely.

The products on pages 30-50 become effectively invisible to Google. They don’t get crawled. They don’t get indexed. They don’t rank. You can verify this in Google Search Console by checking the “Crawled, currently not indexed” and “Discovered, currently not indexed” reports. Deep-pagination products almost always show up there.

Fixing the Pagination Canonical in Stencil

The pagination partial in Stencil themes lives at templates/components/common/pagination.html (or templates/components/category/pagination.html in some themes). The canonical tag itself is rendered in the head.html partial, but the pagination context is available there through the pagination object.

Replace the default canonical logic with self-referencing canonicals for paginated pages:

{{#if pagination.current}}
  {{#if (gt pagination.current 1)}}
    <link rel="canonical" href="{{category.url}}?page={{pagination.current}}">
  {{else}}
    <link rel="canonical" href="{{category.url}}">
  {{/if}}
{{else}}
  <link rel="canonical" href="{{page.canonical_url}}">
{{/if}}

This tells Google that page 2 is its own canonical page, not a duplicate of page 1. Google can then decide independently if page 2 is worth indexing based on its content. Pair this with rel="prev" and rel="next" tags to signal the pagination relationship. Google announced in 2019 that it no longer uses them for indexing, but Bing still uses them, and they help crawlers understand the sequence:

{{#if pagination.previous}}
  <link rel="prev" href="{{category.url}}{{#if (gt pagination.previous 1)}}?page={{pagination.previous}}{{/if}}">
{{/if}}
{{#if pagination.next}}
  <link rel="next" href="{{category.url}}?page={{pagination.next}}">
{{/if}}

Failure mode: Self-referencing canonicals on paginated pages can create thin content issues if the paginated pages have identical title tags and meta descriptions. Update the title tag template to include the page number: {{category.name}} - Page {{pagination.current}} | {{settings.store_name}}. Without unique titles, Google may still treat paginated pages as duplicates even with correct canonicals.

Pagination Combined with Filters

The worst-case scenario is pagination stacked on top of filters. /shoes/?brand[]=Nike&page=3 is page 3 of Nike-filtered shoes. The canonical chain logic needs to handle both the filter parameters and the page number. If you’ve set the filter combination to noindex, then all its paginated variants should also be noindex. If you’ve set the filter combination as a high-value canonical-override, then its paginated pages need self-referencing canonicals that include both the filter and page parameters.

BigCommerce’s default behavior canonicalizes /shoes/?brand[]=Nike&page=3 to /shoes/, stripping both the filter and the page number. That’s two layers of signal loss on a single URL. Your Stencil template logic needs to handle four scenarios:

  1. Unfiltered page 1: canonical to self (the category URL)
  2. Unfiltered page 2+: canonical to self with page parameter
  3. High-value filter, page 1: canonical to self with filter parameter
  4. High-value filter, page 2+: canonical to self with filter and page parameters

All other combinations get noindex,follow.
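Those four scenarios plus the noindex fallback reduce to a single decision function. A sketch, assuming filterQuery has already been normalized to a consistent parameter order and highValueFilters is your pre-built whitelist of normalized filter strings:

```javascript
// Sketch: pick the canonical/robots treatment for a category URL given
// its filter query string and page number. A null canonical means the
// page is noindexed and needs no canonical override.
function canonicalFor(categoryUrl, filterQuery, page, highValueFilters) {
  const filtered = filterQuery.length > 0;
  // Low-value filter combination: noindex it (paginated variants included).
  if (filtered && !highValueFilters.has(filterQuery)) {
    return { robots: 'noindex,follow', canonical: null };
  }
  // Self-referencing canonical, keeping filter and page parameters.
  const params = [];
  if (filtered) params.push(filterQuery);
  if (page > 1) params.push(`page=${page}`);
  const canonical = params.length ? `${categoryUrl}?${params.join('&')}` : categoryUrl;
  return { robots: 'index,follow', canonical };
}
```

In the theme you would express this as Handlebars conditionals, but prototyping the decision in plain JavaScript first makes the edge cases easy to test before touching head.html.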

Category Page Content Strategy: Intro Text Above the Product Grid

BigCommerce’s category pages default to a product grid with zero unique text content. The category name appears as an H1, and that’s it. Every filtered variant of that category has the same H1, the same empty content area, and the same product grid (minus a few products). From Google’s perspective, these are thin pages with near-identical content.

Adding 150-300 words of unique intro text above the product grid solves two problems at once. It differentiates the category page from its filtered variants, and it gives Google text signals to understand what the category is about. You add this content in the BigCommerce admin at Products > Product Categories > [Category Name] > Description. The description field supports HTML, so you can include internal links, formatted text, and even structured content.

What to Write in Category Descriptions

Skip the generic marketing copy. “Browse our wide selection of shoes” adds zero SEO value because every shoe store on the internet says the same thing. Instead, write content that answers the specific questions a shopper has when they land on that category: how the products differ, which attributes matter, and how to choose between them.

The description field renders above the product grid in most Stencil themes. Check your theme’s templates/pages/category.html to confirm the placement. Look for:

{{#if category.description}}
  <div class="category-description">
    {{{category.description}}}
  </div>
{{/if}}

The triple-brace syntax ({{{ }}}) tells Handlebars to render raw HTML without escaping. If your theme uses double braces ({{ }}), HTML tags in your category description will render as plain text. Fix this in the template before writing descriptions that include links or formatting.

Failure mode: Category descriptions that are identical or near-identical across categories create site-wide thin content patterns. “We offer the best selection of [category] at great prices” templated across 50 categories does more harm than good. Each description needs genuinely unique, specific content. If you can’t write unique descriptions for all categories, prioritize the top 20 by revenue and leave the rest blank.

Sub-Category Architecture and URL Depth

BigCommerce allows unlimited category nesting. A top-level “Clothing” category can contain “Men’s Clothing,” which contains “Men’s Shirts,” which contains “Men’s Dress Shirts,” which contains “Men’s Slim Fit Dress Shirts.” Each level adds a URL segment: /clothing/mens-clothing/mens-shirts/mens-dress-shirts/mens-slim-fit-dress-shirts/. That’s five levels deep.

Deep nesting creates three distinct SEO problems. First, URL depth correlates with reduced crawl frequency. Google’s crawl scheduler prioritizes URLs closer to the root. A page five clicks from the homepage gets crawled less often than a page two clicks away. Second, link equity dilutes with each level. The homepage passes equity to level 1 categories, which pass a portion to level 2, which pass a smaller portion to level 3. By level 5, the equity trickle is negligible. Third, deep sub-categories often duplicate the content of their parent categories with minor filtering differences.

Flattening the Category Hierarchy

Limit your BigCommerce category tree to three levels maximum. Configure this in Products > Product Categories by restructuring your hierarchy. Instead of five levels of clothing sub-categories, stop at three (/clothing/mens/shirts/, for example) and let faceted search handle the rest.

Dress shirts, slim fit, and regular fit become facet filters on the /clothing/mens/shirts/ category, not sub-categories with their own URLs. This keeps URLs short, concentrates link equity on fewer pages, and reduces crawl depth. The products are still browseable. The shopper experience doesn’t change. You just consolidate the URL structure to match what Google prefers.

For stores with existing deep hierarchies, use 301 redirects when flattening. A URL like /clothing/mens-clothing/mens-shirts/mens-dress-shirts/ should 301 to /clothing/mens/shirts/?style=dress. Set up redirects in the BigCommerce admin at Server Settings > 301 Redirects or via the BigCommerce API for bulk operations. The BigCommerce development side of this involves API scripting for stores with hundreds of redirects to process.

Failure mode: Flattening category hierarchy without redirects creates 404 errors for every bookmarked, linked, or indexed deep URL. Run a full crawl with Screaming Frog before restructuring. Export all existing category URLs. Map every one to its new destination. Implement all 301s before changing the category tree. Missing even one redirect means lost traffic and broken backlinks.
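That mapping audit is worth scripting rather than eyeballing. A minimal sketch that flags any crawled URL missing from your old-to-new redirect map before you touch the category tree (the example URLs mirror the ones above):

```javascript
// Sketch: find old category URLs that have no 301 destination yet.
// oldUrls comes from a pre-restructure Screaming Frog export;
// redirectMap is the old -> new mapping you've built.
function findUnmappedUrls(oldUrls, redirectMap) {
  return oldUrls.filter((url) => !redirectMap.has(url));
}

const redirectMap = new Map([
  ['/clothing/mens-clothing/mens-shirts/mens-dress-shirts/',
   '/clothing/mens/shirts/?style=dress'],
]);
// An empty result from findUnmappedUrls means every old URL is covered
// and it's safe to restructure.
```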

Crawl Budget Implications of Faceted Navigation

Crawl budget is the number of URLs Google will crawl on your site within a given time window. BigCommerce stores served through Akamai’s CDN benefit from fast server response times, which increases crawl rate capacity. Akamai’s edge caching means Google gets sub-200ms responses for most pages, so Googlebot can crawl more URLs per session. That sounds like a good thing until you realize it means Google is burning through your faceted navigation URLs faster than it would on a slower server.

How Akamai CDN Behavior Affects Crawl Patterns

BigCommerce routes all storefront traffic through Akamai. The CDN caches category page responses at edge locations, including filtered variants. When Googlebot hits /shoes/?brand[]=Nike, Akamai serves the cached response in under 100ms. Googlebot interprets this as “this server is fast, I can crawl more.” So it does. It pulls every filter combination it has discovered, processes them quickly, and moves on. Your crawl budget gets consumed by thousands of low-value filter pages that Akamai serves efficiently.

The Akamai cache behavior for BigCommerce stores treats query string parameters as cache key components by default. /shoes/ and /shoes/?brand[]=Nike are separate cache entries. Each filtered URL gets its own cached response. This is correct for user experience (filtered results should be fast) but problematic for SEO because it removes the natural throttle that slow server responses provide against crawl waste.

Measuring Your Crawl Budget Waste

Google Search Console’s Crawl Stats report (found under Settings > Crawl Stats) shows total crawl requests per day. Export this data and cross-reference with your server logs (available through BigCommerce’s support team or your CDN analytics) to identify what percentage of crawl requests go to filtered URLs.

A healthy BigCommerce store should see less than 15% of crawl budget going to filtered URLs. If your filtered URL crawl share exceeds 30%, you’re losing significant crawl capacity to duplicate content variants. The fixes described in this post, including meta robots tags, canonical overrides, and robots.txt rules, bring that ratio back in line.
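You can compute that ratio from a parsed log sample. A sketch, assuming the log lines are already parsed into objects with userAgent and url fields; the parameter names in the regex come from the earlier examples and should be adjusted to your store’s facets (this also doesn’t verify Googlebot via reverse DNS, so treat it as an estimate):

```javascript
// Sketch: estimate the share of Googlebot requests hitting filtered URLs.
function filteredCrawlShare(entries) {
  const googlebot = entries.filter((e) => e.userAgent.includes('Googlebot'));
  if (googlebot.length === 0) return 0;
  const filtered = googlebot.filter((e) => /[?&](brand|price|color|size)/.test(e.url));
  return filtered.length / googlebot.length;
}

// A result above 0.30 means you're losing significant crawl capacity
// to filter variants; under 0.15 is the healthy range described above.
```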

The Interaction Between Sitemap and Crawl Budget

BigCommerce auto-generates an XML sitemap at /xmlsitemap.php. This sitemap includes category pages but excludes filtered variants. That’s the correct behavior. The problem arises when Google discovers filtered URLs through internal links (faceted navigation sidebar links) and crawls them anyway, regardless of sitemap inclusion.

Your sitemap tells Google what you consider important. Your internal links tell Google what pages exist. When those two signals conflict, Google follows the links. A sitemap listing 200 category pages paired with internal links to 10,000 filter variants sends a mixed signal. Google resolves the conflict by crawling everything and making its own decisions about what to index. The structured data you add to these pages becomes another signal Google uses to assess page importance and uniqueness.

Aligning your sitemap and your internal link signals means ensuring that filter links carry nofollow attributes (for low-value facets) and that your sitemap only includes the canonical versions of pages you want indexed. The combination of sitemap discipline, canonical tags, meta robots directives, and link-level controls gives you a four-layer defense against crawl budget waste.

Implementation Checklist

Fixing BigCommerce category page duplicate content requires changes across four layers: the admin panel, the Stencil template, the theme JavaScript, and the robots.txt file. Here’s the execution order, which matters because later steps depend on earlier ones:

  1. Audit current state: Crawl your site with Screaming Frog. Export all URLs with query parameters. Identify total filter URL count and existing canonical/meta robots directives.
  2. Classify filter combinations: Cross-reference filter URLs against Google Search Console impression data. Tag each combination as “high-value” (worth indexing) or “low-value” (noindex).
  3. Update head.html partial: Add conditional canonical logic and meta robots tags based on your classification.
  4. Update facet link templates: Add rel="nofollow" to sidebar filter links for low-value facets.
  5. Fix pagination canonicals: Implement self-referencing canonicals for paginated pages.
  6. Update robots.txt: Block crawling of filter parameter patterns that have zero indexing value (ratings, internal sort orders).
  7. Write category descriptions: Add unique intro text to your top 20 revenue-generating categories.
  8. Flatten category hierarchy: Restructure any category tree deeper than 3 levels. Implement 301 redirects for all changed URLs.
  9. Monitor for 30 days: Track crawl stats, index coverage, and ranking changes in Google Search Console. Adjust classifications based on data.

Skip a step, and the others lose effectiveness. Canonical tags without meta robots tags leave gaps. Meta robots without nofollow links still waste crawl budget on discovery. Nofollow links without canonical fixes still send confusing signals. The full stack of fixes working together is what produces the result.

Frequently Asked Questions

Does BigCommerce automatically handle duplicate content from category page filters?

BigCommerce sets a default rel=canonical tag on filtered category pages that points back to the unfiltered parent category. This handles the most basic duplicate content scenario. It does not handle high-value filter combinations that deserve their own index entry, it doesn’t address pagination canonical chains, and it doesn’t apply noindex tags to low-value filter pages. The default behavior is a starting point, not a complete solution. Stores with more than 10 active facets across multiple categories need manual intervention in the Stencil template layer to get canonical and meta robots directives right.

Should I noindex all BigCommerce category page filter URLs?

No. Noindexing every filter URL throws away traffic opportunities. Filter combinations that match real search queries (“nike running shoes under 100,” “red cocktail dresses size 8”) deserve indexing. Pull 12 months of Google Search Console data and identify filter URLs that received impressions. Any filter URL with consistent impressions for a query that has commercial intent should stay indexed with a self-referencing canonical. Noindex everything else. The typical split is 5-10% of filter combinations worth indexing and 90-95% that should be noindexed.

How do I fix pagination canonical tags in BigCommerce?

BigCommerce defaults to canonicalizing all paginated category pages (page 2, page 3, etc.) back to page 1. Fix this by editing the head.html partial in your Stencil theme. Add a conditional that checks pagination.current and sets a self-referencing canonical for pages beyond page 1. Include the page parameter in the canonical URL: <link rel="canonical" href="{{category.url}}?page={{pagination.current}}">. Also update the title tag to include the page number so Google treats each paginated page as unique content. Without the title differentiation, self-referencing canonicals alone won’t prevent Google from collapsing paginated pages.

What’s the difference between using robots.txt and meta robots tags for filter URLs?

Robots.txt blocks Google from crawling a URL at all. The page never gets fetched, so Google can’t read any directives on it. If the URL was previously indexed, it stays in the index because Google can’t access the page to process a removal signal. Meta robots tags require Google to crawl the page first. Google fetches the URL, reads the noindex directive, and removes the page from search results. The follow value in noindex,follow also tells Google to follow the links on that page, passing equity to linked product pages. Use robots.txt only for URLs that have zero value to crawl (internal search pages, sort-order variants). Use meta robots for filter URLs where you want the link equity flowing through to products.

How much crawl budget do faceted navigation URLs actually waste on BigCommerce?

On a BigCommerce store with 50 categories and 10 facets per category, faceted navigation can generate 50,000 to 500,000 unique URLs depending on filter combination depth. Google Search Console’s Crawl Stats report typically shows 40-60% of total crawl requests going to these parameterized URLs on stores without faceted navigation controls. After implementing the canonical, meta robots, and nofollow fixes described in this post, that ratio drops to 10-15%. The freed crawl budget gets redirected to product pages, blog posts, and high-value category pages, which is where you actually want Google spending time.

Faris Khalil
Founder and lead developer at Digital Roxy. Builds custom e-commerce stores on Shopify, WordPress, and BigCommerce. Specializes in platform migrations, headless architecture, and AI-driven marketing systems for agencies.