Technical SEO Checklist: Fix What’s Costing You Rankings

A technical SEO checklist covers the structural and infrastructure elements of your website that affect how search engines crawl, index, and rank your pages. It includes crawlability, site speed, mobile usability, indexation settings, structured data, and Core Web Vitals. Get these foundations wrong and the quality of your content becomes largely irrelevant.

Most ranking problems I’ve seen over the years aren’t content problems. They’re infrastructure problems that nobody noticed because the tools weren’t configured to surface them, or the team was too focused on publishing to look at what was already broken.

Key Takeaways

  • Technical SEO issues silently suppress rankings regardless of content quality. Crawl errors, duplicate content, and slow load times are often invisible until you audit for them specifically.
  • Indexation control matters more than most teams realise. Noindex tags, robots.txt misconfigurations, and canonical errors can deindex pages you actively want to rank.
  • Core Web Vitals are a real ranking signal, but chasing perfect scores at the expense of user experience is the wrong trade-off. Optimise for the user, not the metric.
  • A technical audit without a prioritised fix list is just a document. Severity and traffic impact should determine what gets fixed first, not what’s easiest.
  • Technical SEO is not a one-time project. Sites accumulate debt as they grow, and regular crawls catch problems before they compound into ranking drops.

Technical SEO sits inside a broader strategic picture. If you’re building out your SEO programme from the ground up, the Complete SEO Strategy hub covers the full framework, from keyword strategy and content architecture through to link acquisition and measurement. This checklist focuses specifically on the technical layer.

Why Technical SEO Fails Quietly

When I was leading the SEO division at iProspect, we inherited a client account where the previous agency had delivered months of content production. Good content, well-researched, properly targeted. Traffic had flatlined. The client was frustrated and the relationship was close to breaking down.

We ran a crawl on day one. The site had a noindex tag applied to an entire subdirectory that contained most of the commercial pages. A developer had added it during a staging migration and nobody had removed it. The content team had been publishing into a black hole for four months.

This is not an unusual story. Technical SEO problems are quiet. They don’t announce themselves. Rankings slip gradually, or they never materialise in the first place, and the team assumes the content isn’t good enough or the keywords are too competitive. The actual cause sits in a configuration file nobody has looked at.

The other failure mode I see regularly is over-engineering. Teams implement hreflang across a site that doesn’t need it, add structured data schemas to pages where they add no value, or build elaborate redirect chains in the name of “SEO hygiene.” Complexity creates fragility. Every unnecessary layer is another thing that can break. Semrush’s technical audit guide covers the core elements worth auditing without overcomplicating the scope.

The checklist below is ordered by priority, not by what’s most technically interesting. Fix the things that are actively suppressing rankings first. Optimise for marginal gains second.

Crawlability and Indexation

Before Google can rank your pages, it needs to find and index them. Crawlability and indexation are the most fundamental layer of technical SEO, and the most commonly broken.

Robots.txt

Your robots.txt file tells crawlers which parts of your site to access and which to skip. Check that it exists, that it’s accessible at yourdomain.com/robots.txt, and that it isn’t blocking pages you want indexed. A disallow rule on /blog/ or /products/ is a common mistake that surfaces during migrations and rarely gets caught before it causes damage.
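For reference, a minimal robots.txt for a typical site looks something like this. The disallowed paths are placeholders for illustration, not recommendations for your site:

```text
# Applies to all crawlers
User-agent: *
# Block only areas that genuinely shouldn't be crawled (paths illustrative)
Disallow: /cart/
Disallow: /internal-search/

# Point crawlers at the sitemap
Sitemap: https://example.com/sitemap.xml
```

Before any launch or migration, scan this file for broad rules like `Disallow: /` or `Disallow: /blog/` that may have been added for staging and never removed.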

Equally, robots.txt should not be treated as a security tool. It doesn’t prevent pages from being indexed if they have inbound links. It only prevents crawling. If you want a page excluded from search results, noindex is the correct mechanism.

Noindex Tags and Meta Robots

Check every page you want indexed for noindex tags in the HTML head or in the X-Robots-Tag HTTP header. CMS plugins, particularly in WordPress environments, often add noindex tags to entire site sections with a single checkbox. It’s easy to enable, easy to forget, and slow to detect if you’re not running regular crawls.

Also check that your staging environment is correctly blocked. It’s surprisingly common to find staging content indexed in Google Search Console, particularly after a site launch where the staging block was removed and not reinstated.
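A crawl tool will surface noindex directives, but the check is simple enough to script yourself. The sketch below (standard library only, function names my own) flags a page as noindexed if either the X-Robots-Tag header or a meta robots tag says so:

```python
from html.parser import HTMLParser


class _MetaRobots(HTMLParser):
    """Collects whether a meta robots tag in the HTML declares noindex."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "robots" and "noindex" in a.get("content", "").lower():
                self.noindex = True


def is_noindexed(html, headers):
    """True if the page is blocked from indexing via the X-Robots-Tag
    response header or a meta robots tag in the HTML head."""
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return True
    parser = _MetaRobots()
    parser.feed(html)
    return parser.noindex
```

Run it against the fetched HTML and response headers for every URL in your sitemap, and alert on anything you expect to be indexable.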

XML Sitemaps

Your sitemap should list only the canonical, indexable URLs on your site. If it contains noindex pages, redirected URLs, or pages returning 4xx errors, you’re sending Google mixed signals. Submit your sitemap in Google Search Console and check the coverage report for errors. A sitemap full of excluded URLs is worse than no sitemap at all, because it creates unnecessary crawl work and signals poor site hygiene.
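One way to automate the hygiene check: parse the sitemap and flag any listed URL that doesn't return a clean 200 according to your crawl data. A sketch, where the status map would come from your crawler:

```python
import xml.etree.ElementTree as ET

# Standard sitemap namespace per the sitemaps.org protocol
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"


def sitemap_problems(sitemap_xml, status_by_url):
    """Return sitemap URLs that shouldn't be listed: anything that is not
    a 200 (redirects, 4xx errors, or URLs your crawler never resolved)."""
    root = ET.fromstring(sitemap_xml)
    urls = [loc.text.strip() for loc in root.iter(f"{NS}loc")]
    return [u for u in urls if status_by_url.get(u) != 200]
```

Cross-reference the output with your noindex check as well; a 200 page carrying a noindex tag has no business in the sitemap either.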

Crawl Budget

For most sites under a few thousand pages, crawl budget isn’t a significant concern. For large e-commerce sites, news publishers, or sites with faceted navigation, it matters considerably. Faceted navigation can generate millions of URL combinations that dilute crawl budget and create duplicate content at scale. Pagination, filters, and sorting parameters all need to be managed with some combination of noindex, canonical tags, and robots.txt exclusions, depending on the specific situation.

Duplicate Content and Canonicalisation

Duplicate content is one of the most persistent technical SEO problems, and one of the least understood. It doesn’t always result in a penalty. More often, it splits ranking signals between multiple versions of the same content, diluting the authority of the page you actually want to rank.

Canonical Tags

Every indexable page should have a self-referencing canonical tag unless you’re explicitly pointing to a preferred version elsewhere. Check for canonical tags that point to the wrong URL, to pages that redirect, or to noindex pages. These create canonicalisation loops that confuse crawlers and are difficult to diagnose without a structured audit.

Common canonical problems I encounter during audits: canonical tags pointing to HTTP when the site has migrated to HTTPS, canonical tags pointing to www when the preferred domain is non-www, and canonical tags that were set correctly on the old CMS but were lost during a platform migration.
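These patterns are easy to classify programmatically once you have crawl data. A sketch (function and parameter names my own) that maps each page's canonical to one of the failure modes above:

```python
def canonical_issue(page_url, canonical_url, status_by_url, noindexed):
    """Classify common canonical problems from crawl data.

    status_by_url maps URL -> HTTP status; noindexed is the set of
    URLs your crawler found carrying a noindex directive."""
    if canonical_url is None:
        return "missing canonical"
    if canonical_url.startswith("http://") and page_url.startswith("https://"):
        return "canonical points to HTTP on an HTTPS site"
    if status_by_url.get(canonical_url) in (301, 302, 307, 308):
        return "canonical points to a redirecting URL"
    if canonical_url in noindexed:
        return "canonical points to a noindex page"
    return None  # self-referencing or deliberately cross-referencing: fine
```

Anything returning a non-None value goes on the fix list; the HTTP-on-HTTPS case in particular tends to be site-wide and fixable with one template change.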

URL Parameter Handling

Tracking parameters, session IDs, and sorting parameters all create URL variations that can be indexed as separate pages. Google Search Console’s URL Parameters tool was retired in 2022, so handle this directly: implement canonical tags on parameterised URLs pointing to the clean version, and keep parameters out of internal links where you can. Consistency matters more than the specific approach you choose.

Protocol and Domain Consistency

Your site should resolve to a single canonical version: one protocol (HTTPS), one subdomain preference (www or non-www), and one URL structure (trailing slash or no trailing slash). If http://example.com, https://example.com, http://www.example.com, and https://www.example.com all return 200 responses rather than redirecting to a single preferred version, you have four competing versions of your homepage in Google’s index. Check this with a simple browser test and fix it with server-level 301 redirects.
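At the server level, consolidation is a couple of blanket 301 rules. A sketch for nginx, assuming www over HTTPS is the preferred version (hostnames are placeholders, and the equivalent can be done in Apache or at the CDN):

```nginx
# Send all HTTP traffic, for either host, to the canonical HTTPS + www version
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}

# Send HTTPS traffic on the bare domain to the www version
server {
    listen 443 ssl;
    server_name example.com;
    # ssl_certificate directives omitted for brevity
    return 301 https://www.example.com$request_uri;
}
```

`$request_uri` preserves the path and query string, so every variant of every URL collapses to one canonical address in a single hop.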

Site Speed and Core Web Vitals

Core Web Vitals became a confirmed Google ranking signal in 2021. The three metrics are Largest Contentful Paint (LCP), which measures loading performance; Interaction to Next Paint (INP), which replaced First Input Delay as the interactivity measure; and Cumulative Layout Shift (CLS), which measures visual stability.

My honest view on Core Web Vitals: they matter, but they’re frequently used as a justification for expensive development work that produces marginal ranking improvements. I’ve seen agencies charge significant fees for CWV optimisation projects on sites that had no meaningful ranking suppression from speed issues. Before investing heavily here, check whether your CWV scores are actually in the “Needs Improvement” or “Poor” range in Search Console. If you’re already in the “Good” range, the return on further optimisation is limited.

LCP Optimisation

The Largest Contentful Paint element is typically a hero image, a large block of text, or a video thumbnail. To improve LCP: serve images in next-gen formats (WebP or AVIF), implement lazy loading for below-the-fold images but not for the LCP element itself, use a content delivery network, and ensure your server response time is under 200ms. The most common LCP problem I see is render-blocking JavaScript that delays the browser from painting the page.
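In markup terms, the usual fix set looks like this. A hedged sketch, assuming the hero image is the LCP element and the filenames are placeholders:

```html
<!-- Hint the browser to fetch the hero image early -->
<link rel="preload" as="image" href="/img/hero.webp" fetchpriority="high">

<!-- The LCP element itself: high priority, never lazy-loaded -->
<img src="/img/hero.webp" width="1200" height="600"
     fetchpriority="high" alt="Hero image">

<!-- Below-the-fold images can and should lazy-load -->
<img src="/img/footer.webp" width="600" height="300"
     loading="lazy" alt="Footer illustration">
```

The single most common mistake is applying `loading="lazy"` site-wide, which delays the LCP element along with everything else.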

CLS and Layout Stability

Cumulative Layout Shift is caused by elements that load after the page has already rendered and push content around. The most common causes are images without defined dimensions, dynamically injected content like cookie banners and ads, and web fonts that swap in after initial render. Define explicit width and height attributes on all images. Use font-display: swap with a fallback font that closely matches the web font dimensions.
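Both fixes are small amounts of markup. A sketch, with placeholder file and font names:

```html
<!-- Explicit dimensions let the browser reserve space before the image loads,
     so nothing shifts when it arrives -->
<img src="/img/chart.png" width="800" height="450" alt="Quarterly traffic chart">

<style>
  @font-face {
    font-family: "BrandFont";
    src: url("/fonts/brand.woff2") format("woff2");
    /* Show the fallback font immediately, swap when the web font loads */
    font-display: swap;
  }
</style>
```

If the swap itself causes a visible shift, size-adjust the fallback font so its metrics approximate the web font’s.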

Page Speed Beyond Core Web Vitals

Total page weight, time to first byte, and render-blocking resources all affect user experience even when they don’t directly map to CWV scores. Minify CSS and JavaScript, eliminate unused code, and compress server responses with Brotli or gzip. Google PageSpeed Insights and the Chrome User Experience Report give you field data based on real users rather than lab conditions, which is what Google actually uses for ranking. Lab data from tools like Lighthouse is useful for diagnosis but doesn’t represent your actual CWV scores in Search Console.

Mobile Usability

Google operates on mobile-first indexing, meaning the mobile version of your site is what Google primarily uses for indexing and ranking. If your mobile experience is degraded relative to desktop, your rankings reflect the mobile version.

Google Search Console retired its dedicated Mobile Usability report in late 2023, but the underlying issues still matter: text too small to read, clickable elements too close together, content wider than the screen. Lighthouse and PageSpeed Insights flag these on a per-page basis, and they’re straightforward to fix once identified.

Beyond the technical errors, check that your mobile content matches your desktop content. If you’re hiding content on mobile for design reasons, that content may not be indexed. Google indexes what the crawler sees, and the crawler uses a mobile user agent.

HTTPS and Security

HTTPS has been a confirmed Google ranking signal since 2014. If your site is still running on HTTP, fixing this should be the first item on your list, not somewhere in the middle. Beyond ranking, browsers now actively warn users when they’re on insecure connections, which affects conversion rates independently of SEO.

After migrating to HTTPS, verify that all internal links, canonical tags, sitemap URLs, and hreflang tags reference HTTPS URLs. A common post-migration problem is mixed content: pages served over HTTPS that reference HTTP resources like images, scripts, or stylesheets. Mixed content triggers browser warnings and can cause resources to fail to load entirely in some browsers.
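A first-pass scan for mixed content can be scripted against your crawl output. The sketch below is deliberately rough (it will also pick up plain anchor links to HTTP pages, which aren’t strictly mixed content; a stricter version would limit `href` matching to `<link>` tags):

```python
import re

# Matches http:// references in src/href attributes of a page's HTML
MIXED = re.compile(r'(?:src|href)\s*=\s*["\'](http://[^"\']+)["\']', re.IGNORECASE)


def mixed_content(html):
    """Return insecure (http://) resource URLs referenced from a page
    that is served over HTTPS."""
    return MIXED.findall(html)
```

Run it over every page in your crawl after a migration; the browser console’s mixed-content warnings are the authoritative check for anything this misses.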

Check your SSL certificate expiry date and set up automated renewal. A lapsed certificate takes your site offline for visitors and creates an immediate crawlability problem. It’s an entirely avoidable issue that I’ve seen cause significant traffic drops at organisations that should have had monitoring in place.

Structured Data and Schema Markup

Structured data doesn’t directly improve rankings, but it enables rich results in the SERP: star ratings, FAQ dropdowns, product information, recipe cards, event details. These enhanced listings can meaningfully improve click-through rates from the same ranking position.

The most commonly valuable schema types for most sites are Article, Product, FAQ, LocalBusiness, and BreadcrumbList. Implement what’s relevant to your content and business model. Don’t implement schema for the sake of it. I’ve audited sites with fifteen different schema types applied to pages where half of them were irrelevant or incorrectly implemented, which creates validation errors that undermine the schemas that should be working.
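For reference, a minimal Article implementation in JSON-LD looks like this. Every value here is a placeholder, and each field must match content actually visible on the page:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Technical SEO Checklist: Fix What's Costing You Rankings",
  "author": { "@type": "Person", "name": "Keith Lacy" },
  "datePublished": "2025-01-01"
}
</script>
```

One correct, relevant schema type implemented like this is worth more than a dozen half-configured ones throwing validation errors.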

Use Google’s Rich Results Test to validate your implementation. Search Console also has a rich results report that shows which pages are eligible for enhanced features and which have errors preventing them from appearing. Fix errors before adding new schema types.

One important constraint: your structured data must accurately represent the content on the page. Google’s guidelines prohibit schema that describes content not visible to users, and violations can result in manual actions that remove rich results entirely. Optimizely’s SEO checklist covers structured data implementation alongside other on-page technical considerations worth reviewing.

Internal Linking and Site Architecture

Site architecture affects how PageRank flows through your site and how efficiently crawlers can discover and index your content. Pages buried deep in the site structure, requiring many clicks from the homepage, receive less crawl attention and less internal link equity.

Aim for a flat architecture where important pages are reachable within three clicks from the homepage. This isn’t always achievable for large sites, but it’s a useful principle for prioritising which pages get prominent internal links.

Audit your internal links for broken links returning 4xx errors, links pointing to redirected URLs rather than the final destination, and orphaned pages with no internal links at all. Orphaned pages are invisible to crawlers unless they’re in your sitemap, and they accumulate on most sites over time as content is published without being linked from existing pages.
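Orphan detection is a set operation once you have a crawl. A sketch (names my own), where the link graph comes from your crawler:

```python
def orphaned_pages(all_urls, internal_links):
    """Pages with no inbound internal links.

    all_urls is every known page (crawl + sitemap); internal_links maps
    each page to the set of URLs it links to."""
    linked_to = set()
    for targets in internal_links.values():
        linked_to.update(targets)
    return sorted(set(all_urls) - linked_to)
```

Note that the homepage will appear in the output unless something links back to it, so seed the linked set with the root URL (or exclude it) in practice.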

Anchor text matters for internal links in a way that it no longer does for external links after years of algorithmic refinement. Use descriptive anchor text that reflects the topic of the destination page. Generic anchors like “click here” or “read more” pass no topical signal. This is a quick win that requires no development resource, just editorial discipline.

Redirect Management

Redirects are necessary and legitimate. The problems arise when they’re implemented carelessly, left in place indefinitely, or allowed to chain.

A redirect chain occurs when URL A redirects to URL B, which redirects to URL C. Each hop loses a small amount of link equity and adds latency. Crawlers also have a hop limit, beyond which they stop following redirects. Audit your redirects and flatten chains to point directly from the original URL to the final destination.

Redirect loops, where A redirects to B and B redirects back to A, cause pages to return errors rather than content. They’re easy to create accidentally during CMS migrations and surprisingly difficult to diagnose without a crawl tool.
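Given a redirect map from a crawl, both problems fall out of one traversal. A sketch that resolves each source to its final destination and flags anything stuck in a loop:

```python
def flatten_redirects(redirects):
    """Resolve each redirect to its final destination; flag loops.

    redirects maps source URL -> immediate target URL, as reported
    by a crawler. Returns (flattened map, set of looping sources)."""
    flat, loops = {}, set()
    for start in redirects:
        seen, url = {start}, redirects[start]
        while url in redirects:
            if url in seen:          # revisited a URL: this chain loops
                loops.add(start)
                break
            seen.add(url)
            url = redirects[url]
        else:                        # walked off the map: found the final URL
            flat[start] = url
    return flat, loops
```

The flattened map is exactly what you want your redirect rules to say: every old URL pointing directly at its final destination, one hop, no intermediaries.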

After any significant site migration, run a full crawl to verify that all old URLs are correctly redirecting to the appropriate new URLs. Check that internal links, canonical tags, and sitemap entries have been updated to reference the new URLs rather than relying on redirects indefinitely. Redirects should be temporary infrastructure, not permanent plumbing.

Running Your Technical Audit

The tools most commonly used for technical SEO audits are Screaming Frog (desktop crawler), Semrush Site Audit, Ahrefs Site Audit, and Google Search Console. Each surfaces different data. Search Console gives you Google’s perspective on your site, which is authoritative but not always comprehensive. Third-party crawlers give you a more complete picture of your site’s technical state but don’t reflect what Google has actually indexed.

I’d treat Search Console as the primary source of truth for indexation and coverage issues, and a third-party crawler as the primary source for site-wide structural problems like broken links, redirect chains, and missing tags. They complement each other.

On the question of audit frequency: for most sites, a quarterly crawl is sufficient to catch problems before they compound. For large e-commerce sites or sites that publish high volumes of content, monthly audits make more sense. The right cadence is the one that means you’re finding problems before they affect traffic, not after. Moz’s quick-start SEO guide offers a useful baseline for teams setting up their first audit process.

When you have your audit results, prioritise by impact. A crawl error on a page that receives no traffic is a lower priority than a slow load time on your highest-converting landing page. An audit that produces a list of 200 issues without prioritisation is not useful. It’s just a to-do list that nobody will finish.

One pattern I’ve seen repeatedly when managing large accounts: teams fix the issues that are easiest to fix rather than the ones that matter most. The developer fixes 40 missing alt tags because it’s a quick win, while the crawl budget problem on the faceted navigation goes unaddressed for another quarter. Build your fix list around traffic impact and ranking suppression, not technical tidiness.

For B2B sites specifically, the technical considerations don’t change, but the priority weighting can shift. If you’re running a site with a small number of high-value pages rather than thousands of pages, indexation accuracy and page speed on key landing pages matter more than crawl budget optimisation. Moz’s piece on B2B SEO strategy covers how technical priorities shift in that context.

Technical SEO is one component of a complete search strategy. If you’re working through the full picture, the Complete SEO Strategy hub brings together the technical, content, and authority-building elements in a single framework. Technical fixes without content and link acquisition will only take you so far, and the reverse is equally true.

About the Author

Keith Lacy is a marketing strategist and former agency CEO with 20+ years of experience across agency leadership, performance marketing, and commercial strategy. He writes The Marketing Juice to cut through the noise and share what works.

Frequently Asked Questions

How often should I run a technical SEO audit?
For most sites, a quarterly audit is sufficient to catch problems before they affect rankings. Large e-commerce sites or high-volume publishers benefit from monthly crawls. At minimum, run a full audit after any significant site change: a CMS migration, a URL restructure, a major template change, or a new section launch. Sites accumulate technical debt over time and problems compound when they go undetected.
What is the most common technical SEO mistake?
Incorrect indexation settings cause more ranking damage than any other single technical issue. Noindex tags applied to the wrong pages, robots.txt rules blocking important sections, and canonical tags pointing to the wrong URLs all prevent Google from indexing content you want to rank. These problems are often introduced during site migrations and go undetected for months because traffic declines gradually rather than dropping overnight.
Do Core Web Vitals scores directly affect rankings?
Core Web Vitals are a confirmed Google ranking signal, but they function as a tiebreaker rather than a primary ranking factor. A site with strong content, good authority, and poor CWV scores will generally outrank a site with perfect CWV scores and weak content. That said, if your scores are in the “Needs Improvement” or “Poor” range in Google Search Console, fixing them is worth prioritising. If you’re already in the “Good” range, the marginal ranking benefit of further optimisation is limited.
What tools do I need to run a technical SEO audit?
Google Search Console is free and provides authoritative data on indexation, coverage errors, Core Web Vitals, mobile usability, and rich results. It should be your first stop. Screaming Frog (free up to 500 URLs, paid for larger sites) is the most widely used desktop crawler for identifying broken links, redirect chains, missing tags, and duplicate content. Semrush Site Audit and Ahrefs Site Audit are cloud-based alternatives that include scheduling and historical comparisons. You don’t need all of them. Search Console plus one crawler covers the majority of technical audit requirements.
Does structured data improve rankings?
Structured data does not directly improve rankings. What it does is make your pages eligible for rich results in the SERP: star ratings, FAQ dropdowns, product pricing, event details. These enhanced listings can improve click-through rates from the same position, which can indirectly signal relevance to Google. Implement schema that accurately reflects your content and is relevant to your business model. Avoid implementing schema types purely because they exist. Incorrect or irrelevant schema creates validation errors that undermine the schemas that should be working.
