
How to create a sitemap and robots.txt file for a Blogger blog

To complement the rest of the lessons in the Blogger blog creation course, today we present a full explanation of how the sitemap and the robots.txt file work to get Blogger topics indexed and appearing well in search engines: a comprehensive explanation of the robots.txt and sitemap files, how they speed up the indexing of topics, and how they help you reach the first search results.


How to create a robots.txt file and sitemap

Welcome to the blog "abouelmagd tech احمد ابو المجد 2". This is the third lesson in the set of lessons of the blog creation course. Through this lesson, we will learn how to make a sitemap and a robots.txt file so that Blogger topics are indexed automatically and appear in the search results. This lesson is the first step on the road to publicizing the blog and ensuring that it appears in the search results, and not only that, but also the way to reach the top of the search results on Google.

How to create a sitemap and robots.txt file for Blogger | Solve the problem of excluded pages

Knowing how to create a sitemap and a robots.txt file is one of the important things you must learn if you work with Blogger, as the sitemap and robots.txt files play an important role in getting articles indexed and making them appear in Google search results.

Creating a sitemap and a robots.txt file also improves SEO, with the help of Google Search Console, which helps the blog reach the top of the search results.

You must be aware that when creating a sitemap and dealing with a robots.txt file, it is very important to make sure the files are correct, because any error, even a small and simple one, may disrupt or block your site or blog, or cause your articles not to be indexed by search engines.

Given the importance of knowing how to create a sitemap and a robots.txt file, in this article we will talk about how to create them, the importance of each of them, and how to use and deal with them.

What is a Sitemap:-

A sitemap is a file that represents a map of your blog or site. After you submit this map to Google, it helps the articles of the site or blog get indexed and makes the site or blog appear in the search engine.

A sitemap can be pictured as a guide to the site or blog: it is a list of all the pages that exist on the blog or site, together with an indication of what each page contains.
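
For illustration, a sitemap is simply an XML file made up of entries like the one below; the address shown is only a placeholder, since Blogger generates the real file for you automatically:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/2022/05/sample-post.html</loc>
        <lastmod>2022-05-10</lastmod>
      </url>
    </urlset>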

The sitemap is used when you sign up for Google Search Console: it is added for the Blogger blog property and sent to Google through the Sitemaps section found in the Search Console tools.

When you add the sitemap.xml file in the Search Console tools, you should not add any other file alongside it, because each article should be indexed only once; if the indexing is duplicated, the search engine may penalize or stop indexing your site.

How to create a sitemap for Blogger blogs:-

The sitemap comes built into every Blogger blog or site, and you can access it by typing /sitemap.xml or /sitemap-pages.xml at the end of the URL of your blog.
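
For example, assuming a blog hosted at the placeholder address https://myblog.blogspot.com, the two sitemaps would be found at:

    https://myblog.blogspot.com/sitemap.xml          (the posts sitemap)
    https://myblog.blogspot.com/sitemap-pages.xml    (the static pages sitemap)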

The idea of the sitemap file is that when you publish a new article on the site, the page of that article is automatically added to the sitemap. The sitemap then signals to the search engine that a new article has been added so that it can be indexed, which makes the spiders crawl that page, recognize it, and show it in the search results.

How to add the sitemap to Webmaster Tools:-

You can add a sitemap to Google Webmaster Tools through several simple steps, which are presented below:

  1. Log in to Google Search Console with the blog's email.
  2. Click on Sitemaps in the side menu.
  3. Type sitemap.xml in the box after the site address.
  4. Click Submit to add the sitemap.
  5. Do not add any other file, so as not to harm your site or delay the indexing of your articles; adding more than one sitemap may cause delays in indexing a Blogger blog.
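
As a sketch of step 3, assuming the placeholder property https://myblog.blogspot.com, the Sitemaps section would look something like this:

    Property:           https://myblog.blogspot.com/
    Add a new sitemap:  https://myblog.blogspot.com/  [ sitemap.xml ]  -> Submit
    Submitted sitemaps: https://myblog.blogspot.com/sitemap.xml   Status: Success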

What is a Robots.txt file:-

The robots.txt file is very important. It is placed in its dedicated box in your Blogger blog settings, and it is one of the files you create to point Google to the article pages on your blog or site and tell it to index them automatically.

This file is created according to the structure of the site, and the site manager is the one who creates it, but he must take care to create it correctly so that it does not negatively affect the site or cause its pages not to be indexed, or to be indexed late.

After creating the robots.txt file, it can be submitted to Google through Google Search Console, and the file can be tested to make sure that there are no errors in it and that it was created correctly.

How to create a Robots.txt file for Blogger blogs:-

One of the basic tasks of the robots.txt file is to tell search engines which pages they should crawl and which pages they should not, which allows you to control the work of search engine bots.

It can therefore prevent the problem of duplicate content by stopping bots from crawling and indexing duplicate listing pages.

This is done by adding the Disallow: /20* rule to the robots.txt file, which stops the crawling of archive pages whose addresses begin with a year. To make sure the articles themselves can still be crawled, we also add the Allow: /*.html rule, which lets the bot crawl the post pages, in addition to the sitemap address we learned how to create earlier. Together, these form a robots.txt file for a Blogger blog, as shown in the sketch below.
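
Here is a sketch of such a robots.txt file, with https://www.example.com used as a placeholder for your blog address; the Disallow: /search line, which blocks Blogger search and label pages, is a common addition and is included here as an assumption:

    User-agent: *
    # block search/label pages and date-archive pages
    Disallow: /search
    Disallow: /20*
    # but keep individual posts and pages ending in .html crawlable
    Allow: /*.html
    Allow: /

    Sitemap: https://www.example.com/sitemap.xml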

Once the above is applied, it starts working on improving the blog for search engines, which helps the Blogger blog appear in the first results of the search.

How to add Robots.txt file to Blogger:

You can add the robots.txt file to your Blogger blog through the following simple steps:

  1. Log in to the Blogger website with the blog's email.
  2. Go to Settings and then to the Crawlers and indexing section.
  3. Enable custom robots.txt with the toggle button.
  4. Paste the contents of the robots.txt file (such as the sketch shown above), and then click Save.
  5. Check the robots.txt file after saving by visiting www.example.com/robots.txt, replacing example.com with your own blog address.


Now that we have covered the meaning of the robots.txt file and the sitemap file and the correct way to create each of them, we should learn how to use them to solve the problem of excluded pages, and that is what we will talk about below.



How to solve the problem of excluded pages:-

When you add the robots.txt file and the sitemap, the indexing of the blog speeds up, and so-called excluded pages start to appear. Here, many website owners think there is a problem with their site causing the status message (Indexed, though blocked by robots.txt).

This message is not really an error; rather, it is the result of organizing the indexing of the site after creating the sitemap and a correct robots.txt file.

The robots.txt file starts preventing the search and archive pages from being indexed, and it also keeps static pages such as the Privacy Policy page and the Contact Us page out of the index; what matters is indexing your articles and the content that you write.

This helps a great deal in organizing the indexing of the blog and speeding up the indexing of topics in the search engine.
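
For illustration, under the rules shown earlier (again with https://www.example.com as a placeholder), the outcomes would look roughly like this:

    https://www.example.com/search/label/seo       -> excluded (blocked by robots.txt)
    https://www.example.com/2022/05/               -> excluded (date archive, matches /20*)
    https://www.example.com/2022/05/my-post.html   -> crawled and indexed (matches /*.html)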

What are the custom robots header tags:-

This is a feature in Blogger that you can use to prevent the indexing of a specific part of the blog or of an article by preventing search spiders from reaching that part.

all command:

This allows search spiders to access every part of the site and index all of its content without any restrictions.

noindex command:

This means preventing search spiders from indexing the page and preventing it from appearing in search engines.

nofollow command:

This means that search spiders are prevented from following the links on this page.

none command:

This command is a combination of nofollow and noindex.

noarchive command:

This prevents search engines from keeping a cached (archived) copy of that page.

nosnippet command:

This command prevents the description of the article, or excerpts from it, from appearing under the title in the search engine results.

noodp command:

This prevents search spiders from taking any data from the Open Directory.

notranslate command:

This prevents search engines from offering to translate that page in the search results.

noimageindex command:

This prevents search spiders from indexing the images on that page.

unavailable_after command:

This means that you choose a specific date after which the page is no longer indexed or available in the search engine.
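
For illustration, these directives are the same ones used by the standard robots meta tag that can appear in a page's head; a hypothetical example combining a few of them (the date is only a placeholder):

    <meta name="robots" content="noindex, nofollow">
    <meta name="robots" content="noarchive, nosnippet">
    <meta name="robots" content="unavailable_after: 2023-12-31">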

Here we reach the end of the article, which explained the sitemap and robots.txt files, how you can create each of them, and how to add them to improve the site's SEO, reach the top of the search engines, and get different blogs and sites indexed.

By: احمد ابو المجد

Ahmed Abu Al-Majd is an Egyptian blogger. I work on the blog "abouelmagd tech | احمد ابو المجد 2", a technical blog whose first priority is to provide a set of exclusive explanations related to computers, technology, and information, including detailed explanations in each field as well as topics in the Windows and Android areas. The blog was created in late 2019, and its purpose is to provide everything new in the field of computers, technology, and information.