What Is Technical SEO?
Technical SEO is the practice of making sure a website fully complies with the standards of modern search engines in order to achieve higher organic rankings. Crawling, indexing, rendering, and website architecture are its crucial components.
The Importance Of Technical SEO
Although on-page SEO and content are important, technical SEO is often disregarded by website owners because it seems “complicated.” Technical SEO, however, is essential: it is what enables search engines to crawl, render, and index your website in the first place.
Technical SEO covers all the elements that make your website fast, mobile-friendly, user-friendly, and functional. Without it, you might have a visually appealing website that loads slowly, breaks on mobile devices, fails to point users to the information they need, or runs into technical problems when visitors try to get in touch with you or purchase your goods.
Technical SEO Checklist
Technical SEO is the work you need to do to make sure your website exhibits the technical characteristics that search engines prefer, such as a secure connection, a responsive design, and a quick loading time.
Below is a list of crucial steps you can take to make sure your technical SEO is up to par. By following these recommendations, you can ensure that the security and organization of your website satisfy search engine requirements and are rewarded accordingly in search results.
Migrate Your Site To HTTPS Protocol
Google first declared that the HTTPS protocol was a ranking factor in 2014. Therefore, if your website is still using HTTP in 2022, it’s time to make the change.
HTTPS safeguards your visitors’ data by encrypting it in transit, protecting against interception and data leaks.
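As an illustration, a common way to force the switch on an Apache server is a 301 redirect in .htaccess. This is a sketch only; your host or CMS may offer an equivalent setting, and the SSL certificate must already be installed:

```apache
# Redirect all HTTP requests to their HTTPS equivalent with a
# permanent (301) redirect, preserving the requested path.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```

After deploying a rule like this, also update internal links and your sitemap to reference the HTTPS URLs directly so crawlers don’t have to follow redirects.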
Update Your Page Experience – Core Web Vitals
Google’s page experience signals combine Core Web Vitals with existing search signals such as mobile-friendliness, HTTPS security, and the absence of intrusive interstitials.
If you need a refresher, Google’s Core Web Vitals comprise three factors:
- First Input Delay (FID) – FID measures the delay between a user’s first interaction with the page and the browser’s response to it. To ensure a good user experience, the page should have an FID of less than 100 ms.
- Largest Contentful Paint (LCP) – LCP measures how long it takes the largest content element in the viewport to finish loading. This should happen within 2.5 seconds to provide a good user experience.
- Cumulative Layout Shift (CLS) – CLS measures the visual stability of elements on the screen. Sites should strive to maintain a CLS score of less than 0.1 (CLS is a unitless score, not a time).
These factors can be measured in the Core Web Vitals report in Google Search Console, which shows you which URLs have potential issues.
There are plenty of tools to help you improve your site speed and Core Web Vitals, the main one being Google PageSpeed Insights.
Webpagetest.org is also a good place to check how fast different pages of your site are from various locations, operating systems, and devices.
Some optimizations you can make to improve website speed include:
- Implementing lazy-loading for non-critical images
- Optimizing image formats for the browser
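As a sketch, both optimizations can be expressed in plain HTML: the `loading="lazy"` attribute defers offscreen images, and the `<picture>` element serves a modern format such as WebP with a JPEG fallback. The file names here are placeholders:

```html
<picture>
  <!-- Browsers that support WebP download the smaller file -->
  <source srcset="hero.webp" type="image/webp">
  <!-- Fallback for older browsers; lazy-load since it is below the fold -->
  <img src="hero.jpg" alt="Hero image" loading="lazy" width="800" height="400">
</picture>
```

Setting explicit width and height also helps CLS, because the browser can reserve space for the image before it loads.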
Optimize & Reduce Image Size Without Affecting The Visual Appearance
If a website loads slowly, one of the first things to check is images. Why? Because they’re big. And we’re not talking about size on screen but size on disk.
Every extra byte an image carries is a byte the browser must download before it can render the page. If we optimize images by stripping irrelevant data and unnecessary bytes, the browser downloads less, and the fewer bytes the browser downloads, the faster it can render content on the screen.
Since GIF, PNG, and JPEG are the most common image formats, there are lots of solutions for compressing them.
Here are a few tips and recommendations to optimize your images:
- Use PageSpeed Insights;
- Compress images automatically in bulk with dedicated tools (tinypng.com, compressor.io, optimizilla.com) and plugins (WP Smush, CW Image Optimizer, SEO Friendly Images) and so on;
- Prefer lossless formats for graphics: of GIF and PNG, PNG is usually the better choice, achieving the best compression ratio with better visual quality;
- Convert GIF to PNG if the image is not an animation;
- Remove transparency if all of the pixels are opaque for GIF and PNG;
- Reduce quality to 85% for JPEG formats; that way you reduce the file size and don’t visually affect the quality;
- Use progressive format for images over 10k bytes;
- Prefer vector formats because they are resolution and scale independent;
- Remove unnecessary image metadata (camera information and settings);
- Use the option to “Save for Web” from dedicated editing programs.
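As an illustration only, the decision rules above can be sketched as a small Python helper. The function and its thresholds simply restate the article’s advice; it is not part of any real optimization tool:

```python
# Hypothetical helper: encodes the image-optimization checklist above
# as a function that returns the applicable recommendations.

def image_recommendations(fmt, size_bytes, animated=False, has_transparency=True):
    """Return a list of optimization tips for an image, per the checklist."""
    fmt = fmt.upper()
    tips = []
    if fmt == "GIF" and not animated:
        tips.append("convert GIF to PNG")
    if fmt in ("GIF", "PNG") and not has_transparency:
        tips.append("remove the unused transparency")
    if fmt == "JPEG":
        tips.append("reduce quality to ~85%")
        if size_bytes > 10_000:  # progressive format for images over 10 KB
            tips.append("save as progressive JPEG")
    tips.append("strip camera metadata (EXIF)")
    return tips

print(image_recommendations("jpeg", 20_000))
```

In practice, a bulk tool or plugin applies these same rules automatically across your media library.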
Fix Broken Pages
Broken links can negatively impact user experience and break the flow of ‘authority’ into and around your website.
To find broken links on your site, use Ahrefs Webmaster Tools.
- Crawl your website with Site Audit
- Go to the Internal pages report
- Look for “404 page” errors
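To see what a checker like this does under the hood, here is a minimal sketch using only Python’s standard library: it extracts the links a crawler would follow from a page. A full broken-link checker would then request each extracted URL and flag 404 responses, which is what Site Audit does at scale:

```python
# Minimal internal-link extractor built on the standard library's HTML parser.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Collect the href of every anchor tag encountered.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

extractor = LinkExtractor()
extractor.feed('<p><a href="/blog/">Blog</a> <a href="/old-page/">Old</a></p>')
print(extractor.links)  # ['/blog/', '/old-page/']
```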
Here’s how to deal with any broken links you find:
- If the page was removed by accident, reinstate it at the same URL;
- If the content has moved or a better alternative exists, 301-redirect the old URL to the most relevant live page;
- Otherwise, update or remove the internal links pointing to the dead URL.
Optimize Your 404 Page
A 404 page is shown to users when the URL they visit does not exist on your website. Perhaps the page was deleted, the URL was changed, or they mistyped the URL in their browser.
Most modern WordPress themes have optimized 404 pages by default. If yours doesn’t, you can easily make your 404 page more SEO-friendly by using a plugin or editing your theme templates.
What is an optimized 404 page?
An optimized 404 page should:
- Have the same structure and navigation menus as your website
- Tell visitors in friendly language that the page they are looking for is no longer available
- Give them alternatives (suggest other related pages)
- Make it easy to go back to the previous page, your homepage, or other important pages
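Put together, a bare-bones version of such a page might look like this. The links and copy are placeholders; in practice your theme’s header and navigation would wrap this markup:

```html
<!-- Sketch of an optimized 404 page body; URLs are illustrative -->
<main>
  <h1>Sorry, we can’t find that page</h1>
  <p>The page may have been moved or deleted.</p>
  <ul>
    <li><a href="/">Back to the homepage</a></li>
    <li><a href="/blog/">Browse the blog</a></li>
    <li><a href="/contact/">Contact us</a></li>
  </ul>
</main>
```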
How to check your 404 pages?
Testing how your 404 page looks is easy: open a new browser window and type a URL on your website that does not exist. What the browser shows is your 404 page.
Don’t spend too much time optimizing your 404 pages; just make sure that when a page is not found, the server returns a custom 404 page with a proper 404 status code.
Check Your Robots.txt File
The robots.txt file is one of the most important ways to tell search engines which parts of your site you want them to crawl and which pages you don’t want crawled.
Super important: The robots.txt file only controls the crawling of pages, not their indexing.
Some small sites have one or two lines in the file, while massive sites have incredibly complex setups.
Your average site will have just a few lines, and it rarely changes week to week.
Despite the file rarely changing, it’s important to double-check that it’s still there and that nothing unintentional was added to it.
The worst-case scenario: during a website relaunch or a new release from your development team, robots.txt is set to “Disallow: /” to block search engines from crawling the pages under development on a staging server, and that file is then carried over to the live site with the disallow directive intact.
Make sure this is not on the live website:
User-agent: *
Disallow: /
But if it’s a normal week, there won’t be any changes, and the check should only take a minute.
Every site’s configuration is different, so each week you’ll want to compare the live file against your known-good setup to ensure nothing has changed in error.
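One way to automate that weekly check is a small script. This sketch uses Python’s standard urllib.robotparser to confirm a robots.txt does not block the whole site; the rules shown mirror the worst-case example above:

```python
# Sanity-check a robots.txt with the standard library's robots parser.
from urllib.robotparser import RobotFileParser

def blocks_everything(robots_txt: str) -> bool:
    """True if the given robots.txt blocks all crawlers from the site root."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return not parser.can_fetch("*", "/")

print(blocks_everything("User-agent: *\nDisallow: /"))      # True
print(blocks_everything("User-agent: *\nDisallow: /tmp/"))  # False
```

In a real monitoring setup you would fetch https://yoursite.com/robots.txt on a schedule, run a check like this, and also diff the file against the last known-good copy.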
Check Meta Robots Tags
Meta robots tags are a great way to instruct crawlers how to treat individual pages. They are added to the <head> section of your HTML page, so the instructions apply to the whole page. You can create multiple instructions by combining robots meta tag directives with commas or by using multiple meta tags. It may look like this:
<!DOCTYPE html>
<html>
<head>
<meta name="robots" content="noindex, follow" />
(…)
</head>
<body>(…)</body>
</html>
You can specify meta robots tags for various crawlers, for example:
<meta name="googlebot" content="noindex">
<meta name="googlebot-news" content="nosnippet">
Google understands such tags as:
- noindex keeps the page out of the search index;
- nofollow tells Google not to follow the links on the page;
- none corresponds to noindex, nofollow;
- noarchive tells Google not to store a cached copy of your page.
The permissive values index / follow / archive are the default behavior, so they only need to be stated to override a prohibiting directive. There are also directives controlling how the page may appear in search results, such as snippet / nosnippet / notranslate / nopagereadaloud / noimageindex.
If you use some other tags valid for other search engines but unknown to Google, Googlebot will simply ignore them.
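As a sketch of how such tags are read, here is a minimal directive extractor built on Python’s standard html.parser. The helper class is hypothetical, for illustration only, and recognizes the crawler names used in the examples above:

```python
# Extract robots meta directives from HTML using the standard library.
from html.parser import HTMLParser

ROBOT_NAMES = {"robots", "googlebot", "googlebot-news"}

class RobotsMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.directives = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = (attrs.get("name") or "").lower()
        # Split comma-combined directives like "noindex, follow".
        if name in ROBOT_NAMES and attrs.get("content"):
            self.directives[name] = [v.strip() for v in attrs["content"].split(",")]

meta = RobotsMetaParser()
meta.feed('<head><meta name="robots" content="noindex, follow"></head>')
print(meta.directives)  # {'robots': ['noindex', 'follow']}
```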