Technical SEO covers a set of practices dealing with crawling and indexation. The main issues we will address are:
Crawling issues: We will make sure search engine bots can discover all the pages we want to rank by:
- managing the XML sitemaps (no 404s, no redirects, no URLs whose canonical tag points elsewhere; only URLs we want to rank should be in the sitemap)
- reviewing the meta robots tags
- reviewing the robots.txt file
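As an illustration of the last two points, a minimal setup might look like the sketch below; the paths, sitemap URL, and domain are hypothetical, not taken from any specific project:

```text
# robots.txt — block crawling of internal search and filter URLs (example paths)
User-agent: *
Disallow: /search
Disallow: /*?sort=

Sitemap: https://www.example.com/sitemap.xml
```

For pages that should stay crawlable but out of the index (e.g. thin filter pages), the meta robots tag `<meta name="robots" content="noindex, follow">` in the page head is the usual complement, since robots.txt blocks crawling but does not remove already-indexed URLs.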
Crawl budget optimisation:
- We compare the pages indexed by Google against the pages on the live website
- We find old URLs that still linger in Google's index and de-index them
- If we have access to server logs, we will also analyse the hits made by Googlebot and see which pages Google prefers when it visits the website.
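The log analysis step can be sketched in a few lines of Python. This is a simplified illustration, assuming combined-format access logs and matching on the user-agent string only (a production version would also verify Googlebot by reverse DNS):

```python
import re
from collections import Counter

# Match the Googlebot user-agent string and the requested path
# in Apache/Nginx combined-format log lines (hypothetical format assumption).
GOOGLEBOT = re.compile(r"Googlebot", re.IGNORECASE)
REQUEST = re.compile(r'"(?:GET|HEAD) (\S+) HTTP')

def googlebot_hits(log_lines):
    """Return a Counter of URL paths requested by Googlebot."""
    hits = Counter()
    for line in log_lines:
        if GOOGLEBOT.search(line):
            m = REQUEST.search(line)
            if m:
                hits[m.group(1)] += 1
    return hits

# Tiny made-up sample to show the idea:
sample = [
    '66.249.66.1 - - [01/Jan/2024] "GET /products HTTP/1.1" 200 512 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [01/Jan/2024] "GET /products HTTP/1.1" 200 512 "-" "Googlebot/2.1"',
    '10.0.0.5 - - [01/Jan/2024] "GET /admin HTTP/1.1" 200 128 "-" "Mozilla/5.0"',
]
print(googlebot_hits(sample).most_common(1))  # [('/products', 2)]
```

Sorting the resulting counts highlights which sections of the site consume the most crawl budget.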
Hierarchy and Internal linking: The site hierarchy should be clear. We will make sure no page carries more than 300 links, that there are no redirected or “404 – Not Found” links, and that all important content is reachable within 3 clicks.
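The per-page link count can be checked with a short script. A minimal sketch using only the standard library, assuming the page HTML is already fetched; the sample page and the 300-link threshold mirror the guideline above:

```python
from html.parser import HTMLParser

MAX_LINKS = 300  # internal guideline from the audit above

class LinkCounter(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def count_links(html):
    parser = LinkCounter()
    parser.feed(html)
    return len(parser.links)

# Hypothetical page fragment:
page = '<html><body><a href="/a">A</a><a href="/b">B</a><a name="x">anchor only</a></body></html>'
print(count_links(page))          # 2
print(count_links(page) > MAX_LINKS)  # False
```

The collected hrefs can then be fed to an HTTP client to flag redirected or 404 targets.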
Duplications: Almost every website has to face the issue of duplicate content, especially trading (e-commerce) sites. A few types of duplication are commonly found and can be ignored by Google. To correct faceted-navigation, content, or meta-tag duplications, we use several methods to signal to Google which page variant we want indexed.
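The most common of these methods is the canonical tag. A sketch, with a hypothetical faceted URL that duplicates its parent category page:

```html
<!-- Placed in the <head> of https://www.example.com/shoes?color=red (the duplicate variant) -->
<link rel="canonical" href="https://www.example.com/shoes">
```

This tells Google to consolidate ranking signals onto the category URL instead of indexing each filter combination separately.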
Pagination: Much of this type of content on the web is not marked with pagination tags (rel=”next” & rel=”prev”), which indicate to Google that the content is spread across multiple URLs.
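A sketch of the markup on the middle page of a hypothetical three-page category (note that Google announced in 2019 that it no longer uses these tags as an indexing signal, though other search engines may still read them):

```html
<!-- In the <head> of https://www.example.com/shoes?page=2 -->
<link rel="prev" href="https://www.example.com/shoes?page=1">
<link rel="next" href="https://www.example.com/shoes?page=3">
```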
Structured Data: we will advise using Schema markup where possible, or the Data Highlighter tool in Google Search Console.
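A minimal JSON-LD sketch of Schema.org markup for a product page; all names and values below are hypothetical:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Running Shoe",
  "offers": {
    "@type": "Offer",
    "price": "79.99",
    "priceCurrency": "EUR",
    "availability": "https://schema.org/InStock"
  }
}
```

Embedded in a `<script type="application/ld+json">` tag, this makes the page eligible for rich results such as price and stock status in the search snippet.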
Multi-Language SEO: for multi-language websites, the correct usage of the hreflang tag is mandatory. We have worked with large multinational companies in over 70 countries. We have done the mapping and generated all the XML files needed for setup and migrations.
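hreflang annotations can live either in each page’s HTML head or in the XML sitemap. A sketch of the sitemap form, with hypothetical URLs and an English/German pair plus an x-default fallback:

```xml
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://www.example.com/en/</loc>
    <xhtml:link rel="alternate" hreflang="en" href="https://www.example.com/en/"/>
    <xhtml:link rel="alternate" hreflang="de" href="https://www.example.com/de/"/>
    <xhtml:link rel="alternate" hreflang="x-default" href="https://www.example.com/"/>
  </url>
</urlset>
```

Every language version must list all of its alternates, including itself; one-way annotations are ignored.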