36 changes: 33 additions & 3 deletions docs/content/1.getting-started/3.troubleshooting.md
@@ -11,13 +11,43 @@ navigation:

The best tool for debugging is the Nuxt DevTools integration with Nuxt Robots.

**How to Access:**

1. Install [Nuxt DevTools](https://devtools.nuxt.com/) if it's not already installed (it's enabled by default in Nuxt 3.7+; see the snippet after this list)
2. Run your dev server with `npm run dev` or `pnpm dev`
3. Open your site in the browser while the dev server is running
4. Look for the floating Nuxt icon at the bottom of the page
5. Click the icon to open Nuxt DevTools
6. Navigate to the "Robots" tab in the left sidebar
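
If DevTools isn't already enabled in your project, here's a minimal sketch of turning it on explicitly via the standard `devtools` option:

```ts [nuxt.config.ts]
export default defineNuxtConfig({
  // enables the Nuxt DevTools overlay during development
  devtools: { enabled: true }
})
```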

**What You'll See:**

The DevTools panel will show you:
- Current robot rules applied to your site
- The generated `robots.txt` file content
- Active configuration from your `nuxt.config.ts`
- Route-specific rules and their sources

This makes it easy to verify that your robots configuration is working as expected without having to visit `/robots.txt` directly.

### Debug Config

You can enable the [debug](/docs/robots/api/config#debug) option which will give you more granular output in your server console.

```ts [nuxt.config.ts]
export default defineNuxtConfig({
robots: {
debug: true
}
})
```

This is enabled by default in development mode and will log detailed information about:
- Which rules are being applied
- How the robots.txt file is being generated
- Any parsing or configuration issues

The debug output appears in the terminal where you're running the dev server, not in the browser console.

## Submitting an Issue

15 changes: 15 additions & 0 deletions docs/content/2.guides/1.disable-indexing.md
@@ -1,6 +1,8 @@
---
title: Disabling Site Indexing
description: Learn how to disable indexing for different environments and conditions to avoid crawling issues.
navigation:
title: "Disabling Site Indexing"
---

## Introduction
@@ -55,3 +57,16 @@ A robots meta tag should also be generated that looks like:
```

For full confidence you can inspect the URL within Google Search Console to see if it's being indexed.

## Troubleshooting

If indexing is not being disabled as expected:

1. **Check your environment variable** - Make sure `NUXT_SITE_ENV` is set correctly in your `.env` file or deployment environment
2. **Verify the configuration** - Check that `site.indexable` is set to `false` in your `nuxt.config.ts` (see the sketch after this list)
3. **Clear your cache** - Sometimes cached responses may show old data. Try clearing your browser cache and rebuilding your app
4. **Check the robots.txt file** - Visit `/robots.txt` on your site to see the actual output
5. **Inspect the meta tags** - View page source and look for the robots meta tag in the `<head>` section
6. **Use Nuxt DevTools** - Open the Robots tab in Nuxt DevTools to see the active configuration (see [Troubleshooting](/docs/robots/getting-started/troubleshooting) guide)
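
For reference, here is a minimal sketch of explicitly disabling indexing via `site.indexable` in `nuxt.config.ts`; the environment-based alternative is setting `NUXT_SITE_ENV` to a non-production value in your deployment environment:

```ts [nuxt.config.ts]
export default defineNuxtConfig({
  site: {
    // explicitly mark the site as non-indexable
    indexable: false
  }
})
```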

If you're still having issues after trying these steps, please create an issue on the [GitHub repository](https://github.com/nuxt-modules/robots) with details about your setup.
28 changes: 26 additions & 2 deletions docs/content/2.guides/1.robots-txt.md
@@ -17,7 +17,24 @@ If you need programmatic control, you can configure the module using [nuxt.confi

## Creating a `robots.txt` file

You can place your file in any location. The easiest and recommended location is `<rootDir>/public/_robots.txt`.

**Note:** The file is named `_robots.txt` (with an underscore prefix) in the `public` folder to prevent conflicts with the auto-generated file. The module will automatically merge this file with generated rules.

**Quick Start:**

1. Create a file named `_robots.txt` in your `public` folder
2. Add your robots.txt rules to this file
3. The module will automatically detect and merge it with the generated robots.txt

**Example `public/_robots.txt`:**

```txt [public/_robots.txt]
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```

Additionally, the following paths are supported by default:

@@ -96,4 +113,11 @@ Both directives are parsed identically and output as `Content-Usage` in the gene

To ensure other modules can integrate with your generated robots file, you must not have a `robots.txt` file in your `public` folder.

**Important:** Always use `_robots.txt` (with underscore) instead of `robots.txt` in your public folder.

If you accidentally create a `public/robots.txt` file, the module will automatically:
1. Move it to `<rootDir>/public/_robots.txt`
2. Merge it with the generated file
3. Log a warning in your console

This ensures your custom rules are preserved while allowing the module to function correctly.