
DB Robots.txt

Developer: Denis Bisteinov
Updated: April 27, 2024, 05:49
Donate link: Donate
Requires PHP: 4.6 or higher
WordPress version: 6.5
License: GPL2

Tags

google, seo, robots, bing, yandex, search engines, robots.txt, robot, indexing, crawler, robots txt

Download

3.8

Description:

Have you encountered obstacles while creating and editing the robots.txt file on your website? DB Robots.txt is an easy-to-use plugin for generating and configuring the robots.txt file, which is essential for SEO (search engine optimization). The file should contain the rules for the crawler robots of search engines such as Google, Bing, Yahoo!, Yandex, etc. The plugin works perfectly whether the robots.txt file has never been created or already exists. Once installed, the plugin generates an optimized robots.txt file that includes special rules common for WordPress websites. After that you can proceed with further customization specific to your own website if needed. If the plugin detects one or several Sitemap XML files, it will include them in the robots.txt file. No FTP access, manual coding, or file editing is required, which makes managing settings easy and convenient!
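
For illustration, the generated file for a typical WordPress site might look roughly like the sketch below. The exact rules and the sitemap URL depend on your site, example.com is only a placeholder, and the plugin's actual output may differ:

  User-agent: *
  # keep crawlers out of the admin area, a rule common for WordPress sites
  Disallow: /wp-admin/
  # but keep the AJAX endpoint reachable, since many themes and plugins use it
  Allow: /wp-admin/admin-ajax.php

  # detected Sitemap XML files are appended automatically
  Sitemap: https://example.com/sitemap.xml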

Installation:

  1. Upload the bisteinoff-robots-txt folder to the /wp-content/plugins/ directory (or do it from the command line, as sketched after this list)
  2. Activate the plugin through the 'Plugins' menu in WordPress
  3. Enjoy
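
If you prefer the command line, a minimal sketch of the same steps is shown below. It assumes the plugin zip has already been downloaded into the WordPress root and that WP-CLI is installed; the bisteinoff-robots-txt slug is simply the folder name from step 1:

  # unpack the plugin into the plugins directory
  unzip bisteinoff-robots-txt.zip -d wp-content/plugins/

  # activate it from the shell (or use the 'Plugins' menu instead)
  wp plugin activate bisteinoff-robots-txt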

FAQ:

Will it conflict with any existing robots.txt file?

No, it will not. If a robots.txt file is found in the root folder, it will not be overridden. Instead, a corresponding notification will appear on the Settings page, where you will find two options: remove or rename the existing robots.txt file. The plugin provides this functionality.

Could I accidentally block all search robots?

Once the plugin is installed, it will work fine for all search engine robots. If you are not familiar with fine-tuning a robots.txt file, it is better to leave the file as is, or first read a corresponding manual to learn more about the directives used in robots.txt. Note: directives such as the following would block the corresponding search robot(s):

  Disallow: /
  Disallow: *

You should use any such directive only if you do not want any page of your website to be accessible for crawling.
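
For contrast, here is a minimal sketch of the difference between an empty Disallow value and a blanket one. These are two alternative files (not one), and this is standard robots.txt behavior rather than anything specific to the plugin:

  # Variant 1: blocks nothing, the whole site may be crawled
  User-agent: *
  Disallow:

  # Variant 2: blocks the entire site for all matching robots
  User-agent: *
  Disallow: /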

Where can I read an up-to-date guide on robots.txt?

Changelog:

3.8, 3.7, 3.6, 3.5, 3.4.2, 3.4.1, 3.4, 3.3, 3.2, 3.1, 3.0, 2.3, 2.2, 2.1, 2.0, 1.0