Common SEO Mistakes And How To Fix Them

By Fabian Pott

March 15, 2021


Search engine optimization is a continuous process, not a one-off improvement measure. Only the combination of many individual steps ensures long-term success – which means even small mistakes can poison your SEO strategy.

Some SEO mistakes, such as broken links (see below), can lie hidden for years and significantly hurt your ranking. Don’t let it get that far: stay up to date with the latest trends and guidelines in SEO, because they are constantly changing.

To get you started, I’ve compiled the most common website mistakes that negatively impact search rankings, along with practical tips on how to fix them:

Long page load times

If you’re planning to improve just one thing on your website at first, you should take a look at page speed – especially on mobile devices. For one thing, about 53% of mobile website visitors leave pages that take longer than 3 seconds to load; for another, with Google’s Mobile First Indexing your focus should be on your mobile site anyway.

The main reason for frustratingly long loading times? Huge image files that are neither compressed nor served in modern formats like JPEG 2000, JPEG XR or WebP! Reducing mobile load times is more important than ever: with the switch to Google’s Core Web Vitals, a page whose largest visible element – often a key image – takes more than 2.5 seconds to load no longer counts as “good” for the Largest Contentful Paint metric, and that can hurt your ranking.
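
If oversized images are the culprit, a simple first step is to compress them and offer a modern format with a fallback for older browsers. A minimal sketch (file names and dimensions are just placeholders) using the HTML picture element could look like this:

<picture>
  <!-- compressed WebP version for browsers that support it -->
  <source srcset="/images/hero.webp" type="image/webp">
  <!-- JPEG fallback for all other browsers -->
  <img src="/images/hero.jpg" alt="Hero image of the product" width="1200" height="600">
</picture>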

How fast is my website?

First of all, you should know which pages have priority. You can test pages one by one with Google PageSpeed Insights or Lighthouse. How you can really improve the page speed of your website will be covered in a future article here at SEOgeekLab. But believe me: it’s going to be a massive article! Just sign up for our newsletter if you want to be informed ASAP!

In 2021, optimizing websites for mobile devices is no longer an option, but a must! More than half of the world’s data traffic comes from mobile devices and with the introduction of the page experience as a key ranking factor, Google is focusing on mobile performance this year.

But as several recent studies have found, mobile site performance is still poor even among market-leading e-commerce sites: most users still experience long load times, visual inconsistencies, poor responsiveness, and annoying pop-ups. I wonder if that’s because website owners still prefer desktop when testing? Don’t let this happen on your website!

Broken links & 404 pages

Almost every website accumulates problems and errors over time, which means there are broken links pointing to pages that no longer exist. Users may click on these links and get a 404 error code (file not found) – and your webmasters may not even be aware that these links still exist.

But guess who recognizes every one of these broken links? The crawlers of Google and other search engines! If search engines find too many 404 errors on your website, not only is crawling hindered, it is also taken as a sign that your website is not being maintained properly – and your ranking can drop.
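
How do you fix them? Either correct or remove the faulty link itself, or redirect the orphaned URL to a suitable replacement page. As a minimal sketch (the paths are hypothetical), a single permanent redirect in the .htaccess file of an Apache server could look like this:

# send visitors and crawlers of the removed page to its replacement
Redirect 301 /old-blog-post.html http://www.yourpage.com/new-blog-post.html

That way, both users and the Googlebot land on existing content instead of a 404 page.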

Duplicate Content

Duplicate content occurs when the same content is accessible under different URLs within a website. This is problematic because search engines have to decide which of the identical URLs to rank – and as a result, your own pages compete against each other for one or more keywords in the SERPs.

In the worst case, only one of the pages will rank and the other “duplicates” will not be considered at all. To prevent this, you should avoid duplicate content at all costs.

How can duplicate content be created?

  1. Your page is available with or without www. In this case, requesting http://www.yourpage.com displays the same content as requesting http://yourpage.com. Many content management systems make no distinction when creating the page.

    The solution: 301 redirect

    To avoid duplicate content through two URL versions, you should set up a permanent redirect to the desired version – the so-called 301 redirect. When a client, such as a browser, requests the old URL, the server automatically redirects it to the new one. Since the redirect happens so fast, the user usually hardly notices it.

    You can set up such a redirect in the .htaccess file of your Apache server. The rule consists of three parts: the first line enables the rewrite engine, the condition then checks whether the request uses the non-www hostname, and the rewrite rule finally performs the permanent redirect to the www version.

    # enable the rewrite engine
    RewriteEngine On
    # only apply the rule to requests for the non-www hostname
    RewriteCond %{HTTP_HOST} ^yourpage\.com$
    # permanently (301) redirect everything to the www version
    RewriteRule ^(.*)$ http://www.yourpage.com/$1 [L,R=301]

  2. A product from your online store is available under different URLs. Duplicate content can arise very quickly in web stores if the same product is listed in several categories. An example: You have an online store for pants. There is the category “Men’s pants” with the URL www.yourshop.com/mens-pants and the category “Pants” at www.yourshop.com/pants. Now you offer a pair of jeans for men. The product is available at www.yourshop.com/mens-pants/nice-jeans as well as at www.yourshop.com/pants/nice-jeans. A classic case of duplicate content.

    A solution: Canonical tag

    The canonical tag is a meta element in the <head> area of a web page, with which you can point search engines to the original URL. Search engines then usually index only this “canonical URL” and ignore the copy.

    If you want to prevent a page with duplicate content from being indexed, add the tag to that page and link to the original URL. In our example above, let’s assume you want to define the jeans in the men’s pants category as the original URL. In this case, you add this tag on the page yourshop.com/pants/nice-jeans:

    <link rel="canonical" href="http://www.yourshop.com/mens-pants/nice-jeans" />

What is near duplicate content?

In the context of duplicate content, the term “near duplicate content” is often used. This refers to content that is not copied word for word but is built from the same content blocks or from texts that are too similar. You can usually avoid near duplicate content by optimizing existing content and making it unique.

Near duplicate content can also result from the fact that your CMS automatically generates URLs for filters, for example. In this case, it is a good idea to mark the pages in question with the tag

<meta name="robots" content="noindex, follow" />

in the <head> area. This prevents these pages from being indexed and avoids near duplicate content. At the same time, you enable the Googlebot to continue following all links on the page involved.

Title tags are not optimized

The title is probably the easiest ranking-relevant OnPage element of a web page to optimize – which is why it is often called a “low-hanging fruit”. That makes it all the more careless to leave it unmaintained or to set it randomly.

At the same time, the page title is one of the first elements of your website that a Google user sees. The page title is also used by search engines as a clickable title of the search snippet. It leads users directly to your website with one click.

How do I optimize my title tags?

  1. Use a maximum of 70 characters in the title. Otherwise it will be shortened by Google in the snippet
  2. Briefly describe what the user can expect from visiting your page
  3. Include the main keyword of the website in the title
  4. Create each title individually and use it only once
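
Applied to the jeans page from the duplicate content example above, an optimized title tag in the <head> of the page might look like this (shop and product names are of course just placeholders):

<title>Buy Nice Jeans for Men | YourShop</title>

It stays well below the 70-character limit, briefly describes the page, and contains the main keyword.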

Note: If you run an online store with many subpages, the titles are usually generated automatically. Online store owners are therefore usually advised to use the product name as the title. However, make sure that the title does not get too long.

Use the technical options your store system offers to shorten titles sensibly. Some store systems, for example, let you define schemes according to which a title is generated, such as:

{Buy [product name]}

Missing or non-optimized meta descriptions

Besides the title, meta descriptions are among the OnPage basics that you should definitely optimize. Descriptions fulfill several important functions in the SERPs: among other things, they are part of the search snippet together with the title and arouse the user’s interest in visiting your website.

If no meta description is stored, or the existing description is duplicated, Google selects phrases or other text elements from your page to create one. In that case, it is no longer up to you how your website is perceived by users via the snippet.

Why should I optimize my meta descriptions?

With the help of an appealing description, you can increase the click rate within the SERPs and thus generate more traffic for your website. In addition, you actively control the brand image of your website and stand out from your competitors with a meaningful description.

How can I optimize my descriptions?

  • Use a so-called “call-to-action” in your description. It encourages the user to click
  • Keep to the limit of 175 characters including spaces so that your description can be displayed in full. The number of characters is only approximate, as Google ultimately uses the pixel size of the description. It’s best to keep it between 170 and 175 characters
  • Use the central keyword of the target page in the description text. It will be additionally bolded in the snippet if it was entered by the user in the search bar
  • Summarize the content of the landing page briefly and concisely
  • Google Search Console and seotesting.com can help you analyze meta descriptions (read our new article here)
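
Putting these points together, a hand-written description for the jeans page from the example above might look something like this (the wording and shop name are just placeholders):

<meta name="description" content="Buy nice jeans for men at YourShop: slim and regular fits, free shipping and easy returns. Order your new favorite jeans today!">

It contains the central keyword, a short summary and a call-to-action, and stays under the character limit mentioned above.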

Note: Rich snippets are an extended form of snippets that can contain other elements, such as rating stars, links, images, pricing and other types of information. These represent additional information for the searcher. The information can be stored in the source code through certain formatting and is presented prominently in the search results. This allows visitors to more quickly determine whether the search result is relevant to their search. Rich snippets can be used to increase the click-through rate of a snippet.
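
One common way to provide this extra information is structured data in the page’s source code. A minimal, hypothetical sketch using schema.org markup for the jeans product (all values are made up) could look like this:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Nice Jeans",
  "offers": {
    "@type": "Offer",
    "price": "49.90",
    "priceCurrency": "EUR"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "87"
  }
}
</script>

Keep in mind that Google decides on its own whether it actually shows rich results for a page, even if the markup is valid.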

Incorrect internal linking

The internal linking of your website contributes significantly to how well users find their way around. At the same time, it supports the Googlebot in crawling and determining the thematic relevance of subpages.

Common mistakes in internal linking are:

  • different link texts for one link target
  • links to targets that no longer exist
  • too many internal links

How can I fix errors in my internal linking?

Always use the central keyword of the target page as the anchor text. This way you strengthen the thematic relevance of that subpage. In contrast to incoming backlinks, internal links should always carry the important keyword.
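
As a small sketch, an internal link to the men’s pants category from the store example above would then carry the keyword in its anchor text instead of a generic phrase like “click here”:

<a href="https://www.yourshop.com/mens-pants">men's pants</a>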

Regularly check the crawl error reports in Google Search Console to see whether any pages are returning a 404 error code.

Make sure that you link from one page to another inner page only once, because the link juice of a page is divided among all of its links. The fewer internal links you use, the more link power each linked inner page receives.

You can find more insights on this topic in our newsletter – subscribe now for free!

No sitemap.xml stored

A sitemap.xml is a file that lists all URLs of your website in machine-readable form – a kind of complete table of contents. This file can be submitted in Google Search Console or Bing Webmaster Tools and supports crawling by the search engines. You can therefore use a sitemap.xml to make sure that crawlers are informed about new or changed URLs of your domain.

The submission of the XML sitemap does not guarantee that the URLs it contains will actually be indexed. However, it increases the chance that the Googlebot will visit weakly linked inner pages on the basis of the file and include them in the index.

A sitemap.xml can usually be generated with common content management systems as well as store systems.
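
For reference, a very small sitemap.xml with just two (hypothetical) URLs looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.yourpage.com/</loc>
    <lastmod>2021-03-01</lastmod>
  </url>
  <url>
    <loc>http://www.yourpage.com/some-subpage</loc>
    <lastmod>2021-02-15</lastmod>
  </url>
</urlset>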

How do I store my sitemap.xml in the GSC?

  • Save the file in the root directory of your domain, for example like this: www.yourpage.com/sitemap.xml
  • Log in to the Google Search Console
  • Click on the item “Sitemaps” under “Index”
  • Now click on the red button “Add/Test Sitemap”, add the directory path where the file is stored and click on “Submit”
  • Google will now check your sitemap and show you possible errors

Non-optimized images

Images serve many functions on your website. They enrich your content and can increase the chance of purchases in stores. At the same time, images increase the topical relevance and improve the user experience of your page. The positive results are longer dwell times and lower bounce rates – user signals that can in turn have a positive impact on your rankings. However, there are several pitfalls that lead to common SEO mistakes with images:

  • missing ALT attributes hurt accessibility and prevent search engines from recognizing the image content
  • oversized images increase the loading time unnecessarily
  • non-optimized images prevent your graphics from ranking well in image search and from strengthening the relevance of your page

How do I optimize my images for SEO?

  • compress images that you use for your website. Use common image editors that allow lossless compression
  • use ALT texts. They are displayed if a browser cannot display images or has problems with the display. Search engines cannot (yet) read images without additional text. With the ALT text you provide important clues for the image content. Describe the image with the ALT attribute as concisely as possible and, if possible, include the central keyword of the web page
  • use meaningful file names: Give your image files meaningful names that ideally contain a relevant search term
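
Taken together, an optimized image tag for a product photo (file name, ALT text and dimensions are just examples) could look like this:

<img src="/images/mens-jeans-blue-slim-fit.jpg" alt="Blue slim-fit jeans for men" width="800" height="800" loading="lazy">

The descriptive file name and ALT text carry the keyword, and the loading="lazy" attribute tells modern browsers to load the image only when it scrolls into view – which also helps with the load times discussed at the beginning of this article.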

Using too many h1 tags

To strengthen the keyword focus of your website, each URL should have only one h1 tag. The h tags are HTML tags used to mark up headings. They are used hierarchically on a page, in descending order of importance, so the h1 tag encloses the most important, central heading of a web page.

In the source code, an h1 heading looks like this:

<h1>most important heading of the page</h1>

Most content management systems and online stores set the h tags automatically. Sometimes graphic designers also use h tags to define font sizes. For SEO purposes, however, using the h1 tag two or more times is harmful, because it then becomes unclear to search engines like Google what the focus of the landing page is.

What should I keep in mind for the headline structure?

  • use the h1 tag only once on each page
  • include the central keyword in the main heading
  • do not use h tags to define font sizes, use CSS instead
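
A clean heading structure for a product page could then look roughly like this (the headings are placeholders) – one h1, followed by subordinate h2 headings:

<h1>Nice Jeans for Men</h1>
<h2>Fit and sizing</h2>
<h2>Material and care</h2>

The font sizes of these headings are then controlled via CSS, not by picking a “smaller” h tag.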

Final thoughts

SEO is certainly a lot to have on your radar when optimizing your website, and it can feel like an overwhelming amount of work. Where do you even begin? Don’t worry, we’re here to help!

We have created our newsletter especially for this purpose: we selectively send you the right content based on your interests – and when you need it! Sign up now for free – we look forward to helping you with our experience and solutions.

All the best,
Fabian

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}

Direct Your Visitors to a Clear Action at the Bottom of the Page

>