Determine if a page may be crawled from robots.txt, robots meta tags and X-Robots-Tag headers
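Under the hood, a check like this combines three signals: the site-wide robots.txt file, a per-page `<meta name="robots">` tag, and the `X-Robots-Tag` response header. A minimal sketch of how the three can be combined in plain PHP (PHP 8+; the helper names are hypothetical, not this library's API, and the robots.txt matching only honours the `*` group):

```php
<?php
// 1. robots.txt: does a Disallow rule in the wildcard group cover the path?
function disallowedByRobotsTxt(string $robotsTxt, string $path): bool
{
    $applies = false;
    foreach (preg_split('/\R/', $robotsTxt) as $line) {
        $line = trim(preg_replace('/#.*/', '', $line)); // strip comments
        if (preg_match('/^User-agent:\s*(.+)$/i', $line, $m)) {
            $applies = ($m[1] === '*'); // simplification: only honour the * group
        } elseif ($applies && preg_match('/^Disallow:\s*(\S+)/i', $line, $m)) {
            if (str_starts_with($path, $m[1])) {
                return true;
            }
        }
    }
    return false;
}

// 2. <meta name="robots" content="... noindex ...">
// (assumes the name attribute precedes content, which is the common case)
function noindexMetaTag(string $html): bool
{
    return (bool) preg_match(
        '/<meta[^>]+name=["\']robots["\'][^>]*content=["\'][^"\']*noindex/i',
        $html
    );
}

// 3. X-Robots-Tag response header
function noindexHeader(array $headers): bool
{
    foreach ($headers as $name => $value) {
        if (strcasecmp($name, 'X-Robots-Tag') === 0 && stripos($value, 'noindex') !== false) {
            return true;
        }
    }
    return false;
}

$mayIndex = !disallowedByRobotsTxt("User-agent: *\nDisallow: /private", '/private/page')
    && !noindexMetaTag('<html><head></head></html>')
    && !noindexHeader(['Content-Type' => 'text/html']);

var_dump($mayIndex); // bool(false): blocked by the robots.txt rule
```

A page is crawlable only if all three checks pass; any single noindex/disallow signal is enough to exclude it.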
An extensible robots.txt parser and client library, with full support for every directive and specification.
A complete SEO solution for Symfony projects. This bundle handles meta tags, Open Graph, Twitter Cards, canonical URLs, sitemaps, and more, helping your app stay search-engine friendly and socially shareable out of the box.
Manage the robots.txt from the Kirby config file
Behat extension for testing on-page SEO factors: meta title/description, canonical, hreflang, meta robots, robots.txt, redirects, sitemap validation, HTML validation, performance...
Simple robots.txt generation module for Silverstripe (version 4 and above)
A Multisite Robots.txt Manager: quickly and easily manage all robots.txt files across a WordPress multisite network.
Optimizes your site's robots.txt to reduce server load and CO2 footprint by blocking unnecessary crawlers while allowing major search engines and specific tools.
Robots Exclusion Standard/Protocol Parser for Web Crawling/Scraping
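A detail any parser of this protocol has to get right: RFC 9309 resolves conflicts between `Allow` and `Disallow` by the longest matching rule, with `Allow` winning ties. A sketch of that precedence, assuming rules have already been grouped by user agent (wildcard `*` and end-of-match `$` patterns omitted):

```php
<?php
// Among all rules whose path prefix matches, the longest pattern wins;
// on equal length, Allow beats Disallow. No matching rule means allowed.
function isAllowed(array $rules, string $path): bool
{
    $best = ['length' => -1, 'allow' => true];
    foreach ($rules as [$type, $pattern]) {
        if ($pattern !== '' && str_starts_with($path, $pattern)) {
            $len = strlen($pattern);
            $allow = strcasecmp($type, 'allow') === 0;
            if ($len > $best['length'] || ($len === $best['length'] && $allow)) {
                $best = ['length' => $len, 'allow' => $allow];
            }
        }
    }
    return $best['allow'];
}

$rules = [
    ['disallow', '/shop'],
    ['allow', '/shop/sale'],
];
var_dump(isAllowed($rules, '/shop/sale/item')); // bool(true): longer Allow rule wins
var_dump(isAllowed($rules, '/shop/cart'));      // bool(false)
```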
TYPO3 sitemap crawler
Declarative, scriptable web robot (crawler) and scraper
This is Pico's official robots plugin to add a robots.txt and sitemap.xml to your website. Pico is a stupidly simple, blazing fast, flat file CMS.
Robots for Kirby CMS
PSR-7 middleware to prevent search-engine indexing using robots.txt and the X-Robots-Tag header
PSR-15 middleware to enable or disable search-engine robots
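The mechanics behind middleware like the two entries above are small: pass the request on to the next handler, then stamp the response with a header that compliant crawlers honour. A hedged sketch against the standard PSR-15 interfaces (requires the `psr/http-server-middleware` package; the class name and constructor flag are hypothetical, not either package's actual API):

```php
<?php
use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;
use Psr\Http\Server\MiddlewareInterface;
use Psr\Http\Server\RequestHandlerInterface;

// Let the request pass through, then mark the response as non-indexable
// unless robots have been explicitly enabled (e.g. in production).
final class NoIndexMiddleware implements MiddlewareInterface
{
    public function __construct(private bool $allowRobots = false) {}

    public function process(ServerRequestInterface $request, RequestHandlerInterface $handler): ResponseInterface
    {
        $response = $handler->handle($request);

        if (!$this->allowRobots) {
            // "noindex, nofollow" tells compliant crawlers not to index
            // the page or follow its links.
            $response = $response->withHeader('X-Robots-Tag', 'noindex, nofollow');
        }

        return $response;
    }
}
```

Because the header is set on every response that passes through the pipeline, this covers assets and API routes that a robots meta tag in HTML cannot reach.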
This is a ready-to-use template to quickly start selling a domain with minimal setup.
Kirby 2 CMS plugin that adds a route for robots.txt
🔧 Robots.txt generator component for the Nette framework.
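Generating robots.txt is the mirror image of parsing it: serialize groups of directives back into the plain-text format. A hypothetical plain-PHP sketch, not the Nette component's API:

```php
<?php
// Build robots.txt content from a map of user-agent groups plus an
// optional sitemap URL. Each rule is a [directive, path] pair.
function buildRobotsTxt(array $groups, ?string $sitemap = null): string
{
    $lines = [];
    foreach ($groups as $userAgent => $rules) {
        $lines[] = 'User-agent: ' . $userAgent;
        foreach ($rules as [$directive, $path]) {
            $lines[] = ucfirst($directive) . ': ' . $path;
        }
        $lines[] = ''; // blank line separates groups
    }
    if ($sitemap !== null) {
        $lines[] = 'Sitemap: ' . $sitemap;
    }
    return implode("\n", $lines) . "\n";
}

echo buildRobotsTxt(
    ['*' => [['disallow', '/admin'], ['allow', '/admin/public']]],
    'https://example.com/sitemap.xml'
);
// User-agent: *
// Disallow: /admin
// Allow: /admin/public
//
// Sitemap: https://example.com/sitemap.xml
```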