diff --git a/docs/versioned_docs/version-8.4/ReactPlayer.jsx b/docs/versioned_docs/version-8.4/ReactPlayer.jsx
new file mode 100644
index 000000000000..eea078342da8
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/ReactPlayer.jsx
@@ -0,0 +1,5 @@
+'use client'
+
+import ReactPlayer from 'react-player'
+
+export default ReactPlayer
diff --git a/docs/versioned_docs/version-8.4/a11y.md b/docs/versioned_docs/version-8.4/a11y.md
new file mode 100644
index 000000000000..7cc09b9b0df3
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/a11y.md
@@ -0,0 +1,170 @@
+---
+slug: accessibility
+description: Accessibility is a core feature that's built-in
+---
+
+# Accessibility (aka a11y)
+
+We built Redwood to make building websites more accessible (we write all the config so you don't have to), but Redwood's also built to help you make more accessible websites.
+Accessibility shouldn't be a nice-to-have.
+It should be a given from the start.
+A core feature that's built-in and well-supported.
+
+There's a lot of great tooling out there that'll not only help you build accessible websites, but also help you learn exactly what that means.
+
+> **Does tooling obviate the need for manual testing?**
+>
+> No—even with all the tooling in the world, manual testing is still important, especially for accessibility.
+> But just because tooling doesn't catch everything doesn't mean it's not valuable.
+> It'd be much harder to learn what to look for without it.
+
+## Accessible Routing
+
+For single-page applications (SPAs), accessibility starts with the router.
+Without a full-page refresh, you just can't be sure that things like announcements and focus are being taken care of the way they're supposed to be.
+Here's a great example of [how disorienting SPAs can be to screen-reader users](https://www.youtube.com/watch?v=NKTdNv8JpuM).
+On navigation, nothing's announced.
+The lack of an announcement isn't just buggy behavior—it's broken.
+
+Normally, the onus would be on you as a developer to announce to screen-reader users that they've navigated somewhere new.
+That's a lot to ask—and hard to get right—especially when you're just trying to build your app.
+
+Luckily, if you're writing thoughtful content and marking it up semantically, there's nothing you have to do!
+The router automatically announces pages on navigation, and looks for announcements in this order:
+
+1. The `RouteAnnouncement` component
+2. The page's `<h1>`
+3. `document.title`
+4. `location.pathname`
+
+The reason for this order is that announcements should be as specific as possible.
+More specific usually means more descriptive, and more descriptive usually means that users can not only orient themselves and navigate through the content, but also find it again.
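+
+The fallback order above can be sketched as a simple coalescing chain (an illustration for this doc, not the router's actual implementation):
+
+```tsx
+// Hypothetical sketch: pick the most specific announcement available.
+const getAnnouncement = (page: {
+  routeAnnouncement?: string
+  h1?: string
+  title?: string
+  pathname: string
+}) => page.routeAnnouncement ?? page.h1 ?? page.title ?? page.pathname
+```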
+
+> If you're not sure if your content is descriptive enough, see the [W3 guidelines](https://www.w3.org/WAI/WCAG21/Techniques/general/G88.html).
+
+Even though Redwood looks for a `RouteAnnouncement` component first, you don't have to have one on every page—it's more than ok for the `<h1>` to be what's announced.
+`RouteAnnouncement` is there for when the situation calls for a custom announcement.
+
+### `RouteAnnouncement`
+
+The way `RouteAnnouncement` works is simple: its children will be announced.
+Note that this can be something on the page or can be something that's visually hidden using the `visuallyHidden` prop:
+
+```jsx title="web/src/pages/HomePage/HomePage.js"
+import { RouteAnnouncement } from '@redwoodjs/router'
+
+const HomePage = () => {
+  return (
+    // This will still be visible
+    <RouteAnnouncement>
+      <h1>Welcome to my site!</h1>
+    </RouteAnnouncement>
+  )
+}
+
+export default HomePage
+```
+
+```jsx title="web/src/pages/AboutPage/AboutPage.js"
+import { RouteAnnouncement } from '@redwoodjs/router'
+
+const AboutPage = () => {
+  return (
+    <>
+      <h1>Welcome to my site!</h1>
+      // This won't be visible
+      // highlight-start
+      <RouteAnnouncement visuallyHidden>
+        All about me
+      </RouteAnnouncement>
+      // highlight-end
+    </>
+  )
+}
+
+export default AboutPage
+```
+
+`visuallyHidden` shouldn't be the first thing you reach for—it's good to maintain parity between your site's visual and audible experiences.
+But it's there if you need it.
+
+## Focus
+
+On page change, Redwood Router resets focus to the top of the DOM so that users can navigate through the new page.
+While this is the expected behavior (and the behavior you usually want), for some pages—especially those with a lot of navigation—it can be cumbersome for users to have to tab through the navigation before getting to the main content.
+(And that goes for every page change!)
+
+Right now, there are two ways to alleviate this: with skip links or the `RouteFocus` component.
+
+### Skip Links
+
+Since the main content isn't usually the first thing on the page, it's a best practice to provide a shortcut for keyboard and screen-reader users to skip to it.
+Skip links do just that, and if you generate a layout using the `--skipLink` option, you'll get one with a skip link:
+
+```bash
+yarn rw g layout main --skipLink
+```
+
+```jsx title="web/src/layouts/MainLayout/MainLayout.js"
+import { SkipNavLink, SkipNavContent } from '@redwoodjs/router'
+import '@redwoodjs/router/skip-nav.css'
+
+const MainLayout = ({ children }) => {
+  return (
+    <>
+      <SkipNavLink />
+      <nav>{/* ... */}</nav>
+      <SkipNavContent />
+      {children}
+    </>
+  )
+}
+
+export default MainLayout
+```
+
+`SkipNavLink` renders a link that remains hidden until it's focused; `SkipNavContent` renders a div as the target for the link.
+The code for these components comes from Reach UI. For more details, see [Reach UI's docs](https://reach.tech/skip-nav/#reach-skip-nav).
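+
+The hide-until-focus behavior comes from the imported `skip-nav.css`. The general technique looks something like this (a common pattern, not Reach UI's exact stylesheet):
+
+```css
+.skip-link {
+  /* Off-screen by default */
+  position: absolute;
+  left: -9999px;
+}
+
+.skip-link:focus {
+  /* Revealed when reached via Tab */
+  left: 1rem;
+  top: 1rem;
+}
+```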
+
+One thing you'll probably want to do is change the URL the skip link sends the user to when activated.
+You can do that by changing the `contentId` and `id` props of `SkipNavLink` and `SkipNavContent` respectively:
+
+```jsx
+<SkipNavLink contentId="main-content" />
+{/* ... */}
+<SkipNavContent id="main-content" />
+```
+
+If you'd prefer to implement your own skip link, [Ben Myers' blog](https://benmyers.dev/blog/skip-links/) is a great resource, and a great place to read about accessibility in general.
+
+### `RouteFocus`
+
+Sometimes you don't want to just skip the nav, but send the user somewhere specific.
+In this situation, you of course have the foresight that that place is where the user wants to be.
+So please use this at your discretion—sending a user to an unexpected location can be worse than sending them back to the top.
+
+Having said that, if you know that on a particular page change a user's focus is better off being directed to a particular element, the `RouteFocus` component is what you want:
+
+```jsx title="web/src/pages/ContactPage/ContactPage.js"
+// highlight-next-line
+import { RouteFocus } from '@redwoodjs/router'
+
+const ContactPage = () => (
+  <>
+    <nav>
+      {/* Way too much nav... */}
+    </nav>
+
+    // The contact form the user actually wants to interact with
+    // highlight-start
+    <RouteFocus>
+      <TextField name="name" />
+    </RouteFocus>
+    // highlight-end
+  </>
+)
+
+export default ContactPage
+```
+
+`RouteFocus` tells the router to send focus to its child on page change. In the example above, when the user navigates to the contact page, the name text field on the form is focused—the first field of the form they're here to fill out.
+
+
+{/* Video demo (ReactPlayer embed) */}
diff --git a/docs/versioned_docs/version-8.4/app-configuration-redwood-toml.md b/docs/versioned_docs/version-8.4/app-configuration-redwood-toml.md
new file mode 100644
index 000000000000..afb3abbdd613
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/app-configuration-redwood-toml.md
@@ -0,0 +1,212 @@
+---
+title: App Configuration
+description: Configure your app with redwood.toml
+---
+
+# App Configuration: redwood.toml
+
+One of the premier places you can configure your Redwood app is `redwood.toml`. By default, `redwood.toml` lists the following configuration options:
+
+```toml title="redwood.toml"
+[web]
+ title = "Redwood App"
+ port = 8910
+ apiUrl = "/.redwood/functions"
+ includeEnvironmentVariables = []
+[api]
+ port = 8911
+[browser]
+ open = true
+[notifications]
+ versionUpdates = ["latest"]
+```
+
+These are listed by default because they're the ones that you're most likely to configure, but there are plenty more available.
+
+You can think of `redwood.toml` as a frontend for configuring Redwood's build tools.
+For certain options, instead of having to configure build tools directly, there's quick access via `redwood.toml`.
+
+## [web]
+
+| Key | Description | Default |
+| :---------------------------- | :-------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------- |
+| `title` | Title of your Redwood app | `'Redwood App'` |
+| `port` | Port for the web server to listen at | `8910` |
+| `apiUrl` | URL to your api server. This can be a relative URL in which case it acts like a proxy, or a fully-qualified URL | `'/.redwood/functions'` |
+| `includeEnvironmentVariables` | Environment variables made available to the web side during dev and build | `[]` |
+| `host` | Hostname for the web server to listen at | Defaults to `'0.0.0.0'` in production and `'::'` in development |
+| `apiGraphQLUrl` | URL to your GraphQL function | `'${apiUrl}/graphql'` |
+| `apiDbAuthUrl` | URL to your dbAuth function | `'${apiUrl}/auth'` |
+| `sourceMap` | Enable source maps for production builds | `false` |
+| `a11y` | Enable storybook `addon-a11y` and `eslint-plugin-jsx-a11y` | `true` |
+
+### Customizing the GraphQL Endpoint
+
+By default, Redwood derives the GraphQL endpoint from `apiUrl` such that it's `${apiUrl}/graphql` (with the default `apiUrl`, that's `/.redwood/functions/graphql`).
+But sometimes you want to host your api side somewhere else.
+There are two ways you can do this:
+
+1. Change `apiUrl`:
+
+```toml title="redwood.toml"
+[web]
+ apiUrl = "https://api.coolredwoodapp.com"
+```
+
+Now the GraphQL endpoint is at `https://api.coolredwoodapp.com/graphql`.
+
+2. Change `apiGraphQLUrl`:
+
+```diff title="redwood.toml"
+ [web]
+ apiUrl = "/.redwood/functions"
++ apiGraphQLUrl = "https://api.coolredwoodapp.com/graphql"
+```
+
+### Customizing the dbAuth Endpoint
+
+Similarly, if you're using dbAuth, you may decide to host it somewhere else.
+To do this without affecting your other endpoints, you can add `apiDbAuthUrl` to your `redwood.toml`:
+
+```diff title="redwood.toml"
+ [web]
+ apiUrl = "/.redwood/functions"
++ apiDbAuthUrl = "https://api.coolredwoodapp.com/auth"
+```
+
+:::tip
+
+If you host your web and api sides at different domains and don't use a proxy, make sure you have [CORS](./cors.md) configured.
+Otherwise browser security features may block client requests.
+
+:::
+
+### includeEnvironmentVariables
+
+`includeEnvironmentVariables` is the set of environment variables that should be available to your web side during dev and build.
+Use it to include env vars like public keys for third-party services you've defined in your `.env` file:
+
+```toml title="redwood.toml"
+[web]
+ includeEnvironmentVariables = ["PUBLIC_KEY"]
+```
+
+```text title=".env"
+PUBLIC_KEY=...
+```
+
+Instead of including them in `includeEnvironmentVariables`, you can also prefix them with `REDWOOD_ENV_` (see [Environment Variables](environment-variables.md#web)).
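+
+For example, a hypothetical `REDWOOD_ENV_PUBLIC_KEY` in your `.env` file would be bundled automatically, without being listed in `redwood.toml`:
+
+```text title=".env"
+REDWOOD_ENV_PUBLIC_KEY=...
+```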
+
+:::caution `includeEnvironmentVariables` isn't for secrets
+
+Don't make secrets available to your web side. Everything in `includeEnvironmentVariables` is included in the bundle.
+
+:::
+
+## [api]
+
+| Key | Description | Default |
+| :----------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------- |
+| `port` | Port for the api server to listen at | `8911` |
+| `host` | Hostname for the api server to listen at | Defaults to `'0.0.0.0'` in production and `'::'` in development |
+| `schemaPath` | The location of your Prisma schema. If you have [enabled Prisma multi file schemas](https://www.prisma.io/docs/orm/prisma-schema/overview/location#multi-file-prisma-schema), then its value is the directory where your `schema.prisma` can be found, for example: `'./api/db/schema'` | Defaults to `'./api/db/schema.prisma'` |
+| `debugPort` | Port for the debugger to listen at | `18911` |
+
+Additional server configuration can be done using the [server file](docker.md#using-the-server-file).
+
+### Multi File Schema
+
+Prisma's `prismaSchemaFolder` [feature](https://www.prisma.io/docs/orm/prisma-schema/overview/location#multi-file-prisma-schema) allows you to define multiple files in a schema subdirectory of your prisma directory.
+
+:::note Important
+If you wish to [organize your Prisma Schema into multiple files](https://www.prisma.io/blog/organize-your-prisma-schema-with-multi-file-support), you will need to [enable](https://www.prisma.io/docs/orm/prisma-schema/overview/location#multi-file-prisma-schema) that feature in Prisma, move your `schema.prisma` file into a new directory such as `./api/db/schema`, and then set `schemaPath` in the api toml config.
+:::
+
+For example:
+
+```toml title="redwood.toml"
+[api]
+ port = 8911
+ schemaPath = "./api/db/schema"
+```
+
+## [browser]
+
+```toml title="redwood.toml"
+[browser]
+ open = true
+```
+
+Setting `open` to `true` opens your browser to `http://${web.host}:${web.port}` (by default, `http://localhost:8910`) after the dev server starts.
+If you want your browser to stop opening when you run `yarn rw dev`, set this to `false`.
+(Or just remove it entirely.)
+
+There's actually a lot more you can do here. For more, see Vite's docs on [`preview.open`](https://vitejs.dev/config/preview-options.html#preview-open).
+
+## [generate]
+
+```toml title="redwood.toml"
+[generate]
+ tests = true
+ stories = true
+```
+
+Many of Redwood's generators create Jest tests or Storybook stories.
+Understandably, this can be a lot of files, and sometimes you don't want all of them, either because you don't plan on using Jest or Storybook, or are just getting started and don't want the overhead.
+These options allow you to disable the generation of test and story files.
+
+## [notifications]
+
+```toml title="redwood.toml"
+[notifications]
+ versionUpdates = ["latest"]
+```
+
+There are new versions of the framework all the time—a major every couple months, a minor every week or two, and patches when appropriate.
+And if you're on an experimental release line, like canary, there are new versions every day, sometimes several times a day.
+
+If you'd like to get notified (at most, once a day) when there's a new version, set `versionUpdates` to include the version tags you're interested in.
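+
+For example, to also get notified about canary releases:
+
+```toml title="redwood.toml"
+[notifications]
+  versionUpdates = ["latest", "canary"]
+```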
+
+## Using Environment Variables in `redwood.toml`
+
+You may find yourself wanting to change keys in `redwood.toml` based on the environment you're deploying to.
+For example, you may want to point to a different `apiUrl` in your staging environment.
+
+You can do so with environment variables.
+Let's look at an example:
+
+```toml title="redwood.toml"
+[web]
+  # highlight-start
+  title = "App running on ${APP_TITLE}"
+  port = "${PORT:8910}"
+  apiUrl = "${API_URL:/.redwood/functions}"
+  # highlight-end
+ includeEnvironmentVariables = []
+```
+
+This `${ENV_VAR:fallback}` syntax does the following:
+
+- sets `title` by interpolating the env var `APP_TITLE`
+- sets `port` to the env var `PORT`, falling back to `8910`
+- sets `apiUrl` to the env var `API_URL`, falling back to `/.redwood/functions` (the default)
+
+That's pretty much all there is to it.
+Just remember two things:
+
+1. fallback is always a string
+2. these values are interpolated at build time
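+
+The interpolation rule can be sketched as a small function (an illustration of the syntax, not Redwood's actual implementation):
+
+```ts
+// Expand "${VAR}" and "${VAR:fallback}" tokens using a map of env vars.
+const interpolate = (value: string, env: Record<string, string>) =>
+  value.replace(
+    /\$\{([A-Z_][A-Z0-9_]*)(?::([^}]*))?\}/g,
+    (_match, name, fallback) => env[name] ?? fallback ?? ''
+  )
+
+// interpolate('${PORT:8910}', {}) gives '8910'
+// interpolate('${PORT:8910}', { PORT: '3000' }) gives '3000'
+```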
+
+## Running in a Container or VM
+
+To run a Redwood app in a container or VM, you'll want to set both the web and api's `host` to `0.0.0.0` to allow network connections to and from the host:
+
+```toml title="redwood.toml"
+[web]
+ host = '0.0.0.0'
+[api]
+ host = '0.0.0.0'
+```
+
+You can also configure these values via `REDWOOD_WEB_HOST` and `REDWOOD_API_HOST`.
+And if you set `NODE_ENV` to production, these will be the defaults anyway.
diff --git a/docs/versioned_docs/version-8.4/assets-and-files.md b/docs/versioned_docs/version-8.4/assets-and-files.md
new file mode 100644
index 000000000000..2cdcf544242c
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/assets-and-files.md
@@ -0,0 +1,180 @@
+---
+description: How to include assets—like images—in your app
+---
+
+# Assets and Files
+
+There are two ways to add an asset to your Redwood app:
+
+1. co-locate it with the component using it and import it into the component as if it were code
+2. add it to the `web/public` directory and reference it relative to your site's root
+
+Where possible, prefer the first strategy: it lets Vite process the asset, including it directly in the bundle when the file is small enough.
+
+## Co-locating and Importing Assets
+
+Let's say you want to show your app's logo in your `Header` component.
+First, add your logo to the `Header` component's directory:
+
+```text
+web/src/components/Header/
+// highlight-next-line
+├── logo.png
+├── Header.js
+├── Header.stories.js
+└── Header.test.js
+```
+
+Then, in the `Header` component, import your logo as if it were code:
+
+```jsx title="web/src/components/Header/Header.js"
+// highlight-next-line
+import logo from './logo.png'
+
+const Header = () => {
+  return (
+    <header>
+      {/* ... */}
+      // highlight-next-line
+      <img src={logo} alt="Logo" />
+    </header>
+  )
+}
+
+export default Header
+```
+
+If you're curious how this works, see the Vite docs on [static asset handling](https://vitejs.dev/guide/assets.html).
+
+## Adding to the `web/public` Directory
+
+You can also add assets to the `web/public` directory, effectively adding static files to your app.
+During dev and build, Redwood copies `web/public`'s contents into `web/dist`.
+
+> Changes to `web/public` don't hot-reload.
+
+Again, because assets in this directory don't go through Vite, **use this strategy sparingly**, and mainly for assets like favicons, manifests, `robots.txt`, libraries incompatible with Vite, etc.
+
+### Example: Adding Your Logo and Favicon to `web/public`
+
+Let's say that you've added your logo and favicon to `web/public`:
+
+```
+web/public/
+├── img/
+│ └── logo.png
+└── favicon.png
+```
+
+When you run `yarn rw dev` and `yarn rw build`, Redwood copies
+`web/public/img/logo.png` to `web/dist/img/logo.png` and `web/public/favicon.png` to `web/dist/favicon.png`:
+
+```text
+web/dist/
+├── static/
+│ ├── js/
+│ └── css/
+// highlight-start
+├── img/
+│ └── logo.png
+└── favicon.png
+// highlight-end
+```
+
+You can reference these files in your code without any special handling:
+
+```jsx title="web/src/components/Header/Header.js"
+import { Head } from '@redwoodjs/web'
+
+const Header = () => {
+  return (
+    <>
+      <Head>
+        // highlight-next-line
+        <link rel="icon" type="image/png" href="/favicon.png" />
+      </Head>
+      // highlight-next-line
+      <img src="/img/logo.png" alt="Logo" />
+    </>
+  )
+}
+
+export default Header
+```
+
+## Styling SVGs: The special type of image
+
+By default, you can import and use SVG images like any other image asset.
+
+```jsx title="web/src/components/Example.jsx"
+// highlight-next-line
+import svgIconSrc from '../mySvg.svg'
+
+const Example = () => {
+  return (
+    <>
+      // highlight-next-line
+      <img src={svgIconSrc} alt="Icon" />
+    </>
+  )
+}
+
+export default Example
+```
+
+Sometimes, however, you might want more control over styling your SVGs – maybe you want to modify the `stroke-width` or `fill` color.
+
+The easiest way to achieve this is to make your SVG a React component. Open up your SVG file and drop its contents into a component – for example:
+
+```tsx title="web/src/components/icons/CarIcon.tsx"
+import type { SVGProps } from "react"
+
+export const CarIcon = (props: SVGProps<SVGSVGElement>) => {
+  return (
+    // 👇 content of your SVG file
+    <svg viewBox="0 0 24 24" {...props}>
+      {/* ...the paths and shapes from your SVG file... */}
+    </svg>
+  )
+}
+```
+
+```tsx title="web/src/pages/HomePage/HomePage.tsx"
+import { useAuth } from 'src/auth'
+
+const HomePage = () => {
+  const { isAuthenticated, signUp } = useAuth()
+
+  return (
+    <>
+      {/* MetaTags, h1, paragraphs, etc. */}
+
+      <p>{JSON.stringify({ isAuthenticated })}</p>
+      <button onClick={signUp}>sign up</button>
+    </>
+  )
+}
+```
+
+Clicking sign up should redirect you to Auth0:
+
+
+
+After you sign up, you should be redirected back to your Redwood app, and you should see `{"isAuthenticated":true}` on the page.
diff --git a/docs/versioned_docs/version-8.4/auth/azure.md b/docs/versioned_docs/version-8.4/auth/azure.md
new file mode 100644
index 000000000000..61477529fcb6
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/auth/azure.md
@@ -0,0 +1,181 @@
+---
+sidebar_label: Azure
+---
+
+# Azure Active Directory Authentication
+
+To get started, run the setup command:
+
+```bash
+yarn rw setup auth azure-active-directory
+```
+
+This installs all the packages, writes all the files, and makes all the code
+modifications you need. For a detailed explanation of all the api- and web-side
+changes that aren't exclusive to Azure, see the top-level
+[Authentication](../authentication.md) doc. For now, let's focus on Azure's
+side of things.
+
+Follow the steps in [Single-page application: App registration](https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-spa-app-registration).
+After registering your app, you'll be redirected to its "Overview" section.
+We're interested in two credentials here, "Application (client) ID" and "Directory (tenant) ID".
+Go ahead and copy "Application (client) ID" to your `.env` file as `AZURE_ACTIVE_DIRECTORY_CLIENT_ID`.
+But "Directory (tenant) ID" needs a bit more explanation.
+
+Azure has an option called "Authority". It's a URL that specifies a directory that MSAL (Microsoft Authentication Library) can request tokens from.
+You can read more about it [here](https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-client-application-configuration#authority),
+but to cut to the chase, you probably want `https://login.microsoftonline.com/${tenantId}` as your Authority, where `tenantId` is "Directory (tenant) ID".
+
+After substituting your app's "Directory (tenant) ID" in the URL, add it to your `.env` file as `AZURE_ACTIVE_DIRECTORY_AUTHORITY`.
+All together now:
+
+```bash title=".env"
+AZURE_ACTIVE_DIRECTORY_CLIENT_ID="..."
+# Where `tenantId` is your app's "Directory (tenant) ID"
+AZURE_ACTIVE_DIRECTORY_AUTHORITY="https://login.microsoftonline.com/${tenantId}"
+```
+
+Ok, back to [Single-page application: App registration](https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-spa-app-registration).
+At the end, it says...
+
+> Next, configure the app registration with a Redirect URI to specify where the Microsoft identity platform should redirect the client along with any security tokens.
+> Use the steps appropriate for the version of MSAL.js you're using in your application:
+>
+> - MSAL.js 2.0 with auth code flow (recommended)
+> - MSAL.js 1.0 with implicit flow
+
+Redwood uses [MSAL.js 2.0 with auth code flow](https://learn.microsoft.com/en-us/azure/active-directory/develop/scenario-spa-app-registration#redirect-uri-msaljs-20-with-auth-code-flow), so follow the steps there next.
+When it asks you for a Redirect URI, enter `http://localhost:8910` and `http://localhost:8910/login`, and copy these into your `.env` file as `AZURE_ACTIVE_DIRECTORY_REDIRECT_URI` and `AZURE_ACTIVE_DIRECTORY_LOGOUT_REDIRECT_URI`:
+
:::tip Can't add multiple URIs?
+
+Configure one, then you'll be able to configure another.
+
+:::
+
+```bash title=".env"
+AZURE_ACTIVE_DIRECTORY_CLIENT_ID="..."
+# Where `tenantId` is your app's "Directory (tenant) ID"
+AZURE_ACTIVE_DIRECTORY_AUTHORITY="https://login.microsoftonline.com/${tenantId}"
+AZURE_ACTIVE_DIRECTORY_REDIRECT_URI="http://localhost:8910"
+AZURE_ACTIVE_DIRECTORY_LOGOUT_REDIRECT_URI="http://localhost:8910/login"
+```
+
+That's it for `.env` vars. Don't forget to include them in the `includeEnvironmentVariables` array in `redwood.toml`:
+
+```toml title="redwood.toml"
+[web]
+ # ...
+ includeEnvironmentVariables = [
+ "AZURE_ACTIVE_DIRECTORY_CLIENT_ID",
+ "AZURE_ACTIVE_DIRECTORY_AUTHORITY",
+ "AZURE_ACTIVE_DIRECTORY_REDIRECT_URI",
+ "AZURE_ACTIVE_DIRECTORY_LOGOUT_REDIRECT_URI",
+ ]
+```
+
+Now let's make sure everything works: if this is a brand new project, generate
+a home page. There we'll try to sign up by destructuring `signUp` from the
+`useAuth` hook (import that from `'src/auth'`). We'll also destructure and
+display `isAuthenticated` to see if it worked:
+
+```bash
+yarn rw g page home /
+```
+
+```tsx title="web/src/pages/HomePage.tsx"
+import { useAuth } from 'src/auth'
+
+const HomePage = () => {
+  const { isAuthenticated, signUp } = useAuth()
+
+  return (
+    <>
+      {/* MetaTags, h1, paragraphs, etc. */}
+
+      <p>{JSON.stringify({ isAuthenticated })}</p>
+      <button onClick={signUp}>Sign Up</button>
+    </>
+  )
+}
+```
+
+## Roles
+
+To add roles exposed via the `roles` claim, follow [Add app roles to your application and receive them in the token](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps).
+
+## `logIn` Options
+
+`options` in `logIn(options?)` is of type [RedirectRequest](https://azuread.github.io/microsoft-authentication-library-for-js/ref/types/_azure_msal_browser.RedirectRequest.html) and is a good place to pass in optional [scopes](https://docs.microsoft.com/en-us/graph/permissions-reference#user-permissions) to be authorized.
+By default, MSAL sets `scopes` to [/.default](https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-permissions-and-consent#the-default-scope), which is built in for every application and refers to the static list of permissions configured on the application registration. Furthermore, MSAL adds `openid` and `profile` to all requests. In the example below we explicitly include `User.Read.All` in the login scope.
+
+```jsx
+await logIn({
+ scopes: ['User.Read.All'], // becomes ['openid', 'profile', 'User.Read.All']
+})
+```
+
+See [loginRedirect](https://azuread.github.io/microsoft-authentication-library-for-js/ref/classes/_azure_msal_browser.PublicClientApplication.html#loginRedirect), [PublicClientApplication](https://azuread.github.io/microsoft-authentication-library-for-js/ref/classes/_azure_msal_browser.PublicClientApplication.html) class and [Scopes Behavior](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/msal-lts/lib/msal-core/docs/scopes.md#scopes-behavior) for more documentation.
+
+## `getToken` Options
+
+`options` in `getToken(options?)` is of type [RedirectRequest](https://azuread.github.io/microsoft-authentication-library-for-js/ref/types/_azure_msal_browser.RedirectRequest.html).
+By default, `getToken` will be called with scope `['openid', 'profile']`.
+Since Azure Active Directory applies [incremental consent](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/resources-and-scopes.md#dynamic-scopes-and-incremental-consent), we can extend the permissions from the login example by including another scope, for example `Mail.Read`:
+
+```js
+await getToken({
+ scopes: ['Mail.Read'], // becomes ['openid', 'profile', 'User.Read.All', 'Mail.Read']
+})
+```
+
+See [acquireTokenSilent](https://azuread.github.io/microsoft-authentication-library-for-js/ref/classes/_azure_msal_browser.PublicClientApplication.html#acquireTokenSilent), [Resources and Scopes](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/resources-and-scopes.md#resources-and-scopes) or [full class documentation](https://pub.dev/documentation/msal_js/latest/msal_js/PublicClientApplication-class.html#constructors) for more.
+
+## Azure Active Directory B2C-specific configuration
+
+You can design your own auth flow with Azure Active Directory B2C using [hosted user flows](https://docs.microsoft.com/en-us/azure/active-directory-b2c/add-sign-up-and-sign-in-policy?pivots=b2c-user-flow).
+Using it requires two extra settings.
+
+#### Update the `.env` file
+
+```bash title=".env"
+AZURE_ACTIVE_DIRECTORY_AUTHORITY=https://{your-microsoft-tenant-name}.b2clogin.com/{your-microsoft-tenant-name}.onmicrosoft.com/{your-microsoft-user-flow-id}
+AZURE_ACTIVE_DIRECTORY_JWT_ISSUER=https://{your-microsoft-tenant-name}.b2clogin.com/{your-microsoft-tenant-id}/v2.0/
+AZURE_ACTIVE_DIRECTORY_KNOWN_AUTHORITY=https://{your-microsoft-tenant-name}.b2clogin.com
+```
+
+Here's an example:
+
+```bash title=".env.example"
+AZURE_ACTIVE_DIRECTORY_AUTHORITY=https://rwauthtestb2c.b2clogin.com/rwauthtestb2c.onmicrosoft.com/B2C_1_signupsignin1
+AZURE_ACTIVE_DIRECTORY_JWT_ISSUER=https://rwauthtestb2c.b2clogin.com/775527ef-8a37-4307-8b3d-cc311f58d922/v2.0/
+AZURE_ACTIVE_DIRECTORY_KNOWN_AUTHORITY=https://rwauthtestb2c.b2clogin.com
+```
+
+And don't forget to add `AZURE_ACTIVE_DIRECTORY_KNOWN_AUTHORITY` to the `includeEnvironmentVariables` array in `redwood.toml`.
+(`AZURE_ACTIVE_DIRECTORY_JWT_ISSUER` is only used on the API side. But more importantly, it's sensitive—do _not_ include it in the web side.)
+
+#### Update `activeDirectoryClient` instance
+
+This lets the MSAL web-side client know about our new B2C authority:
+
+```jsx title="web/src/auth.{js,ts}"
+const azureActiveDirectoryClient = new PublicClientApplication({
+ auth: {
+ clientId: process.env.AZURE_ACTIVE_DIRECTORY_CLIENT_ID,
+ authority: process.env.AZURE_ACTIVE_DIRECTORY_AUTHORITY,
+ redirectUri: process.env.AZURE_ACTIVE_DIRECTORY_REDIRECT_URI,
+ postLogoutRedirectUri:
+ process.env.AZURE_ACTIVE_DIRECTORY_LOGOUT_REDIRECT_URI,
+ // highlight-next-line
+ knownAuthorities: [process.env.AZURE_ACTIVE_DIRECTORY_KNOWN_AUTHORITY],
+ },
+})
+```
+
+Now you can call the `logIn` and `logOut` functions from `useAuth()`, and everything should just work.
+
+Here are a few more links to relevant documentation for reference:
+
+- [Overview of tokens in Azure Active Directory B2C](https://docs.microsoft.com/en-us/azure/active-directory-b2c/tokens-overview)
+- [Working with MSAL.js and Azure AD B2C](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/working-with-b2c.md)
diff --git a/docs/versioned_docs/version-8.4/auth/clerk.md b/docs/versioned_docs/version-8.4/auth/clerk.md
new file mode 100644
index 000000000000..ec0f98555c50
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/auth/clerk.md
@@ -0,0 +1,121 @@
+---
+sidebar_label: Clerk
+---
+
+# Clerk Authentication
+
+:::warning Did you set up Clerk a while ago?
+
+If you set up Clerk a while ago, you may be using a deprecated `authDecoder` that's subject to rate limiting.
+This decoder will be removed in the next major.
+There's a new decoder you can use right now!
+See the [migration guide](https://github.com/redwoodjs/redwood/releases/tag/v5.3.2) for how to upgrade.
+
+:::
+
+To get started, run the setup command:
+
+```text
+yarn rw setup auth clerk
+```
+
+This installs all the packages, writes all the files, and makes all the code modifications you need.
+For a detailed explanation of all the api- and web-side changes that aren't exclusive to Clerk, see the top-level [Authentication](../authentication.md) doc.
+But for now, let's focus on Clerk's side of things.
+
+If you don't have a Clerk account yet, now's the time to make one: navigate to https://clerk.dev, sign up, and create an application.
+The defaults are good enough to get us going, but feel free to configure things as you wish.
+We'll get the application's API keys from its dashboard next.
+
+:::note We'll only focus on the development instance
+
+By default, Clerk applications have two instances, "Development" and "Production".
+We'll only focus on the "Development" instance here, which is used for local development.
+When you're ready to deploy, switch the instance the dashboard is displaying by clicking "Development" in the header at the top.
+How you get your API keys to production depends on your deploy provider.
+
+:::
+
+After you create the application, you should be redirected to its dashboard where you should see the RedwoodJS logo.
+Click on it and copy the two API keys it shows into your project's `.env` file:
+
+```bash title=".env"
+CLERK_PUBLISHABLE_KEY="..."
+CLERK_SECRET_KEY="..."
+```
+
+Lastly, in your project's `redwood.toml` file, include `CLERK_PUBLISHABLE_KEY` in the list of env vars that should be available to the web side:
+
+```toml title="redwood.toml"
+[web]
+ # ...
+ includeEnvironmentVariables = [
+ "CLERK_PUBLISHABLE_KEY",
+ ]
+```
+
+That should be enough; now, things should just work.
+Let's make sure: if this is a brand new project, generate a home page:
+
+```bash
+yarn rw g page Home /
+```
+
+There we'll try to sign up by destructuring `signUp` from the `useAuth` hook (import that from `'src/auth'`). We'll also destructure and display `isAuthenticated` to see if it worked:
+
+```tsx title="web/src/pages/HomePage/HomePage.tsx"
+import { useAuth } from 'src/auth'
+
+const HomePage = () => {
+ const { isAuthenticated, signUp } = useAuth()
+
+ return (
+ <>
+ {/* MetaTags, h1, paragraphs, etc. */}
+
+      {JSON.stringify({ isAuthenticated })}
+      <button onClick={() => signUp()}>sign up</button>
+    </>
+ )
+}
+```
+
+Clicking sign up should open a sign-up box, and after you sign up, you should see `{"isAuthenticated":true}` on the page.
+
+## Customizing the session token
+
+There's not a lot to the default session token.
+Besides the standard claims, the only thing it really has is the user's `id`.
+Eventually, you'll want to customize it so that you can get back more information from Clerk.
+You can do so by navigating to the "Sessions" section in the nav on the left, then clicking on "Edit" in the "Customize session token" box:
+
+![clerk_customize_session_token](https://github.com/redwoodjs/redwood/assets/32992335/6d30c616-b4d2-4b44-971b-8addf3b79e5a)
+
+As long as you're using the `clerkJwtDecoder`, all the properties you add will be available to the `getCurrentUser` function:
+
+```ts title="api/src/lib/auth.ts"
+export const getCurrentUser = async (
+ decoded, // 👈 All the claims you add will be available on the `decoded` object
+ // ...
+) => {
+ decoded.myClaim...
+
+ // ...
+}
+```
+
+## Avoiding feature duplication
+
+Redwood's Clerk integration is based on [Clerk's React SDK](https://clerk.dev/docs/reference/clerk-react/installation).
+This means that there's some duplication between the features in the SDK and the ones in `@redwoodjs/auth-clerk-web`.
+For example, the SDK has a `SignedOut` component that redirects a user away from a private page—very much like wrapping a route with Redwood's `Private` component.
+We recommend you use Redwood's way of doing things as much as possible since it's much more likely to get along with the rest of the framework.
+
+## Deep dive: the `ClerkStatusUpdater` component
+
+With Clerk, there's a bit more going on in the `web/src/auth.tsx` file than other auth providers.
+This is because Clerk is a bit unlike the other auth providers Redwood integrates with in that it puts an instance of its client SDK on the browser's `window` object.
+That means Redwood has to wait for it to be ready.
+With other providers, Redwood instantiates their client SDK in `web/src/auth.ts{x}`, then passes it to `createAuth`.
+With Clerk, Redwood instead uses Clerk's components and hooks, like `ClerkLoaded` and `useUser`, to update Redwood's auth context with the client when it's ready.
diff --git a/docs/versioned_docs/version-8.4/auth/custom.md b/docs/versioned_docs/version-8.4/auth/custom.md
new file mode 100644
index 000000000000..a694585d5ddf
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/auth/custom.md
@@ -0,0 +1,307 @@
+---
+sidebar_label: Custom
+---
+
+# Custom Authentication
+
+If Redwood doesn't officially integrate with the auth provider you want to use, you're not out of luck just yet: Redwood has an API you can use to integrate your auth provider of choice.
+
+:::tip Were you using Nhost, magic.link, GoTrue, Okta or Wallet Connect (ethereum)?
+
+If you're here because you're using one of the providers Redwood used to support (Nhost, magic.link, GoTrue, Okta or Wallet Connect (Ethereum)), we've moved the code for them out into their own separate repos:
+
+- [Nhost](https://github.com/redwoodjs/auth-nhost)
+- [magic.link](https://github.com/redwoodjs/auth-magiclink)
+- [GoTrue](https://github.com/redwoodjs/auth-gotrue)
+- [Okta](https://github.com/redwoodjs/auth-okta)
+- [WalletConnect (Ethereum)](https://github.com/redwoodjs/auth-walletconnect)
+
+The code has been updated to work with the auth APIs introduced in v4, but it's mostly untested, so no guarantee it'll work.
+But together with this doc, we hope getting one of the auth providers working won't be too difficult.
+
+:::
+
+When it comes to writing a custom auth integration, there's a little more work to do than just using one of the ready-made packages. But we'll walk you through all that work here, using [Nhost](https://nhost.io/) as an example. Hopefully you have auth up and running before too long!
+
+To get started, run the setup command:
+
+```bash
+yarn rw setup auth custom
+```
+
+This makes all the code modifications it can, but whereas with other auth providers all that's left to do is get your keys, here you'll have to write some code.
+
+Let's work on the web side first.
+Here most of our time will be spent in the `web/src/auth.ts` file.
+It comes commented to guide us, but we'll get into it here.
+If you're using TypeScript, scroll past the boilerplate interfaces for now to get to our first task, instantiating the client:
+
+```ts title="web/src/auth.ts"
+import { createAuthentication } from '@redwoodjs/auth'
+
+// ...
+
+// Replace this with the auth service provider client sdk
+const client = {
+ login: () => ({
+ id: 'unique-user-id',
+ email: 'email@example.com',
+ roles: [],
+ }),
+ signup: () => ({
+ id: 'unique-user-id',
+ email: 'email@example.com',
+ roles: [],
+ }),
+ logout: () => {},
+ getToken: () => 'super-secret-short-lived-token',
+ getUserMetadata: () => ({
+ id: 'unique-user-id',
+ email: 'email@example.com',
+ roles: [],
+ }),
+}
+```
+
+As the comment says, we need to replace this placeholder client object with an instance of our auth provider's client SDK.
+Since we're using Nhost, it's time to navigate to [their docs](https://docs.nhost.io/reference/javascript) for a bit of reading.
+We'll take all the work you have to do reading docs for granted here and cut to the chase—setting up Nhost's client looks like this:
+
+```ts
+import { NhostClient } from '@nhost/nhost-js'
+
+const client = new NhostClient({
+ backendUrl: '...',
+})
+```
+
+This means we have to install `@nhost/nhost-js` on the web side, so let's go ahead and do that:
+
+```bash
+yarn workspace web add @nhost/nhost-js
+```
+
+Then we'll have to make an account, an application, and get its `backendUrl`.
+On your application's dashboard, click "Settings" at the bottom of the nav on the left, then "Environment Variables", and look for "NHOST_BACKEND_URL".
+Copy its value into your project's `.env` file and include it in the list of env vars the web side has access to in your project's `redwood.toml` file:
+
+```bash title=".env"
+NHOST_BACKEND_URL="..."
+```
+
+```toml title="redwood.toml"
+[web]
+ # ...
+ includeEnvironmentVariables = ["NHOST_BACKEND_URL"]
+```
+
+Lastly, let's update `web/src/auth.ts`:
+
+```ts title="web/src/auth.ts"
+import { createAuthentication } from '@redwoodjs/auth'
+
+import { NhostClient } from '@nhost/nhost-js'
+
+// ...
+
+const client = new NhostClient({
+ backendUrl: process.env.NHOST_BACKEND_URL,
+})
+```
+
+Ok, that's it for the client.
+At this point, you could update some of the TS interfaces, but we'll leave that to you and press on with the integration.
+Now we have to create the `useAuth` hook using the client we just made so that the rest of Redwood, like the router, works.
+Scroll down a little more to the `createAuthImplementation` function:
+
+```ts title="web/src/auth.ts"
+// This is where most of the integration work will take place. You should keep
+// the shape of this object (i.e. keep all the key names) but change all the
+// values/functions to use methods from the auth service provider client sdk
+// you're integrating with
+function createAuthImplementation(client: AuthClient) {
+ return {
+ type: 'custom-auth',
+ client,
+ login: async () => client.login(),
+ logout: async () => client.logout(),
+ signup: async () => client.signup(),
+ getToken: async () => client.getToken(),
+ /**
+ * Actual user metadata might look something like this
+ * {
+ * "id": "11111111-2222-3333-4444-5555555555555",
+ * "aud": "authenticated",
+ * "role": "authenticated",
+ * "roles": ["admin"],
+ * "email": "email@example.com",
+ * "app_metadata": {
+ * "provider": "email"
+ * },
+ * "user_metadata": null,
+ * "created_at": "2016-05-15T19:53:12.368652374-07:00",
+ * "updated_at": "2016-05-15T19:53:12.368652374-07:00"
+ * }
+ */
+ getUserMetadata: async () => client.getUserMetadata(),
+ }
+}
+```
+
+This may seem like a lot, but it's actually not so bad: it's just about mapping the client's functions to these properties, many of which are pretty straightforward.
+The fact that this is eventually the `useAuth` hook is hidden a bit—`createAuthImplementation` gets passed to `createAuthentication`, which returns the `AuthProvider` component and `useAuth` hook—but you don't have to concern yourself with that here.
+
+Again, let's take all the reading and trial and error you'll have to do for granted, though it may be long and tedious:
+
+```ts title="web/src/auth.ts"
+function createAuthImplementation(client: AuthClient) {
+ return {
+ type: 'custom-auth',
+ client,
+ // See sign in options at https://docs.nhost.io/reference/javascript/auth/sign-in
+ login: async (options) => {
+ return await client.auth.signIn(options)
+ },
+ // See sign out options at https://docs.nhost.io/reference/javascript/auth/sign-out
+ logout: async (options) => {
+ return await client.auth.signOut(options)
+ },
+ // See sign up options at https://docs.nhost.io/reference/javascript/auth/sign-up
+ signup: async (options) => {
+ return await client.auth.signUp(options)
+ },
+ getToken: async () => {
+ return (await client.auth.getJWTToken()) || null
+ },
+ // See https://docs.nhost.io/reference/javascript/auth/get-user
+ getUserMetadata: async () => {
+ return await client.auth.getUser()
+ },
+ restoreAuthState: async () => {
+ return await client.auth.refreshSession()
+ },
+ }
+}
+```
+
+That's it for the web side.
+Let's head over to the api side.
+
+## api side
+
+Now that we've set up the web side, every GraphQL request includes a token.
+But without a way to verify and decode that token, the api side doesn't know what to do with it, so let's start there.
+
+In `api/src/lib/auth.ts`, make an empty function, `authDecoder`.
+Eventually we'll pass this to the `createGraphQLHandler` function in `api/src/graphql.ts`.
+The GraphQL server calls it with two arguments, the token and the type. Both are strings:
+
+```ts title="api/src/lib/auth.ts"
+export const authDecoder = async (token: string, type: string) => {
+ // decode token...
+}
+```
+
+First, let's make sure that the type is the same as the type in `createAuthImplementation`, `'custom-auth'`. If it's not, we can call it quits:
+
+```ts title="api/src/lib/auth.ts"
+export const authDecoder = async (token: string, type: string) => {
+ if (type !== 'custom-auth') {
+ return null
+ }
+
+ // decode token...
+}
+```
+
+Now let's verify and decode the token.
+We'll use the npm module [jose](https://www.npmjs.com/package/jose) to do that; it has a `jwtVerify` function that does exactly what we want.
+Go ahead and add it:
+
+```bash
+yarn workspace api add jose
+```
+
+For `jwtVerify` to do its job, it needs the secret.
+Time for another trip to your Nhost application's dashboard.
+This time you're looking for "NHOST_JWT_SECRET".
+Just like "NHOST_BACKEND_URL", it should be in "Settings", "Environment Variables".
+(This one is a JSON object, with two properties, `type` and `key`. We just need `key`.)
+Add that one to your project's `.env` file (no need to put it in `redwood.toml` though):
+
+```shell title=".env"
+NHOST_JWT_SECRET="..."
+```
+
+Now we can use it in the `authDecoder`:
+
+```ts title="api/src/lib/auth.ts"
+import { jwtVerify } from 'jose'
+
+export const authDecoder = async (token: string, type: string) => {
+ if (type !== 'custom-auth') {
+ return null
+ }
+
+ const secret = new TextEncoder().encode(process.env.NHOST_JWT_SECRET)
+
+ const decoded = await jwtVerify(token, secret)
+
+ return decoded
+}
+```
+
+Great—now we've got a way of decoding the token in requests coming from the web side.
+Just one more important step that's easy to overlook: we have to pass this function to `createGraphQLHandler` in `api/src/functions/graphql.ts`:
+
+```ts title="api/src/functions/graphql.ts"
+// highlight-next-line
+import { authDecoder, getCurrentUser } from 'src/lib/auth'
+
+// ...
+
+export const handler = createGraphQLHandler({
+ // highlight-next-line
+ authDecoder,
+ getCurrentUser,
+ // ...
+})
+```
+
+That should be enough; now, things should just work.
+Let's make sure: if this is a brand new project, generate a home page.
+There we'll try to sign up by destructuring `signUp` from the `useAuth` hook (import that from `'src/auth'`). We'll also destructure and display `isAuthenticated` to see if it worked:
+
+```tsx title="web/src/pages/HomePage/HomePage.tsx"
+import { useAuth } from 'src/auth'
+
+const HomePage = () => {
+ const { isAuthenticated, signUp } = useAuth()
+
+ return (
+ <>
+ {/* MetaTags, h1, paragraphs, etc. */}
+
+      {JSON.stringify({ isAuthenticated })}
+
+      <button
+        onClick={() =>
+          signUp({
+            // email: 'your.email@email.com',
+            // password: 'super secret password',
+          })
+        }
+      >
+        sign up
+      </button>
+    </>
+ )
+}
+```
+
+Nhost doesn't redirect to a hosted sign-up page or open a sign-up modal.
+In a real app, you'd build a form here, but we're going to hardcode an email and password.
+One thing you may want to do before signing up: disable email verification, else you'll actually have to verify your email.
+Go back to "Settings" in your Nhost application, but this time click "Sign in methods".
+There should be a checkbox there, "Require Verified Emails".
+Toggle it off.
+Now try signing up and you should see `{"isAuthenticated":true}` on the page.
diff --git a/docs/versioned_docs/version-8.4/auth/dbauth.md b/docs/versioned_docs/version-8.4/auth/dbauth.md
new file mode 100644
index 000000000000..b4207909941c
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/auth/dbauth.md
@@ -0,0 +1,709 @@
+---
+sidebar_label: Self-hosted (dbAuth)
+---
+
+# Self-hosted Authentication (dbAuth)
+
+Redwood's own **dbAuth** provides several benefits:
+
+- Use your own database for storing user credentials
+- Use your own login, signup and forgot password pages (or use Redwood's pre-built ones)
+- Customize login session length
+- No external dependencies
+- No user data ever leaves your servers
+- No additional charges/limits based on number of users
+- No third party service outages affecting your site
+
+And potentially one large drawback:
+
+- Use your own database for storing user credentials
+
+However, we're following best practices for storing these credentials:
+
+1. Users' passwords are [salted and hashed](https://auth0.com/blog/adding-salt-to-hashing-a-better-way-to-store-passwords/) with PBKDF2 before being stored
+2. Plaintext passwords are never stored anywhere, and only transferred between client and server during the login/signup phase (and hopefully only over HTTPS)
+3. Our logger scrubs sensitive parameters (like `password`) before they are output
+4. We only store the hashes of reset tokens
+
+Even if you later decide you want to let someone else handle your user data for you, dbAuth is a great option for getting up and running quickly (we even have a generator for creating basic login and signup pages for you).
+
+## How It Works
+
+dbAuth relies on good ol' fashioned cookies to determine whether a user is logged in or not. On an attempted login, a serverless function on the api-side checks whether a user exists with the given username (internally, dbAuth refers to this field as _username_ but you can use anything you want, like an email address). If a user with that username is found, does their salted and hashed password match the one in the database?
+
+If so, an [HttpOnly](https://owasp.org/www-community/HttpOnly), [Secure](https://owasp.org/www-community/controls/SecureCookieAttribute), [SameSite](https://owasp.org/www-community/SameSite) cookie (dbAuth calls this the "session cookie") is sent back to the browser containing the ID of the user. The content of the cookie is a simple string, but AES encrypted with a secret key (more on that later).
+
+When the user makes a GraphQL call, we decrypt the cookie and make sure that the user ID contained within still exists in the database. If so, the request is allowed to proceed.
+
+If there are any shenanigans detected (the cookie can't be decrypted properly, or the user ID found in the cookie does not exist in the database) the user is immediately logged out by expiring the session cookie.
+
+## Setup
+
+A single CLI command will get you everything you need to get dbAuth working, minus the actual login/signup pages:
+
+```bash
+yarn rw setup auth dbAuth
+```
+
+You will be asked whether you want to enable **WebAuthn** support. WebAuthn is an open standard for allowing authentication from devices like TouchID, FaceID, USB fingerprint scanners, and more. If you think you want to use WebAuthn, enter `y` at this prompt and read on for configuration options.
+
+You can also add WebAuthn to an existing dbAuth install. [Read more about WebAuthn usage and config below](#webauthn).
+
+Read the post-install instructions carefully as they contain instructions for adding database fields for the hashed password and salt, as well as how to configure the auth serverless function based on the name of the table that stores your user data. Here they are, but they could change in future releases (they do not include the additional required WebAuthn options; make sure you get those from the output of the `setup` command):
+
+> You will need to add a couple of fields to your User table in order to store a hashed password and salt:
+>
+> ```
+> model User {
+> id Int @id @default(autoincrement())
+> email String @unique
+> hashedPassword String // <─┐
+> salt String // <─┼─ add these lines
+> resetToken String? // <─┤
+> resetTokenExpiresAt DateTime? // <─┘
+> }
+> ```
+>
+> If you already have existing user records you will need to provide a default value or Prisma will complain, so change those to:
+>
+> ```
+> hashedPassword String @default("")
+> salt String @default("")
+> ```
+>
+> You'll need to let Redwood know what field you're using for your users' `id` and `username` fields. In this case we're using `id` and `email`, so update those in the `authFields` config in `/api/src/functions/auth.js` (this is also the place to tell Redwood if you used a different name for the `hashedPassword` or `salt` fields):
+>
+> ```
+> authFields: {
+> id: 'id',
+> username: 'email',
+> hashedPassword: 'hashedPassword',
+> salt: 'salt',
+> resetToken: 'resetToken',
+> resetTokenExpiresAt: 'resetTokenExpiresAt',
+> },
+> ```
+>
+> To get the actual user that's logged in, take a look at `getCurrentUser()` in `/api/src/lib/auth.js`. We default it to something simple, but you may use different names for your model or unique ID fields, in which case you need to update those calls (instructions are in the comment above the code).
+>
+> Finally, we created a `SESSION_SECRET` environment variable for you in `.env`. This value should NOT be checked into version control and should be unique for each environment you deploy to. If you ever need to log everyone out of your app at once change this secret to a new value. To create a new secret, run:
+>
+> ```
+> yarn rw g secret
+> ```
+>
+> Need simple Login, Signup and Forgot Password pages? Of course we have a generator for those:
+>
+> ```
+> yarn rw generate dbAuth
+> ```
+
+Note that if you change the fields named `hashedPassword` and `salt`, and you have some verbose logging in your app, you'll want to scrub those fields from appearing in your logs. See the [Redaction](logger.md#redaction) docs for info.
+
+## Scaffolding Login/Signup/Forgot Password Pages
+
+If you don't want to create your own login, signup and forgot password pages from scratch we've got a generator for that:
+
+```bash
+yarn rw g dbAuth
+```
+
+Once again you will be asked if you want to create a WebAuthn-enabled version of the LoginPage. If so, enter `y` and follow the setup instructions.
+
+The default routes will make them available at `/login`, `/signup`, `/forgot-password`, and `/reset-password` but that's easy enough to change. Again, check the post-install instructions for one change you need to make to those pages: where to redirect the user to once their login/signup is successful.
+
+If you'd rather create your own, you might want to start from the generated pages anyway as they'll contain the other code you need to actually submit the login credentials or signup fields to the server for processing.
+
+## Configuration
+
+Almost all config for dbAuth lives in `api/src/functions/auth.js` in the object you give to the `DbAuthHandler` initialization. The comments above each key will explain what goes where. Here's an overview of the more important options:
+
+### allowedUserFields
+
+```javascript
+allowedUserFields: ['id', 'email']
+```
+
+Most of the auth handlers accept a `user` argument that you can reference in the body of the function. These handlers also sometimes return that `user` object. As a security measure, `allowedUserFields` defines the only properties that will be available in that object so that sensitive data isn't accidentally leaked by these handlers to the client.
+
+:::info
+
+The `signup` and `forgotPassword` handlers return to the client whatever data is returned from their handlers, which can be used to display something like the email address that a verification email was just sent to. Without `allowedUserFields` it would be very easy to include the user's `hashedPassword` and `salt` in that response (just return `user` from those handlers) and then any customer could open the Web Inspector in their browser and see those values in plain text!
+
+:::
+
+`allowedUserFields` defaults to `id` and `email`, but you can add any property on `user` to that list.
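
Conceptually, the filtering works like this sketch (`sanitizeUser` is a hypothetical helper for illustration, not a dbAuth export):

```javascript
// Hypothetical sketch of the idea behind allowedUserFields: reduce a user
// record to an allow-list of properties before it ever reaches the client
const allowedUserFields = ['id', 'email']

function sanitizeUser(user) {
  return Object.fromEntries(
    Object.entries(user).filter(([key]) => allowedUserFields.includes(key))
  )
}

const user = {
  id: 1,
  email: 'user@example.com',
  hashedPassword: 'ba8b7...',
  salt: '1b02e...',
}

console.log(sanitizeUser(user)) // → { id: 1, email: 'user@example.com' }
```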
+
+### login.enabled
+
+Allow users to call login. Defaults to true. Needs to be explicitly set to false to disable the flow.
+
+```javascript
+login: {
+ enabled: false
+}
+```
+
+### login.handler()
+
+If you want to do something other than immediately let a user log in if their username/password is correct, you can add additional logic in `login.handler()`. For example, if a user's credentials are correct, but they haven't verified their email address yet, you can throw an error in this function with the appropriate message and then display it to the user. If the login should proceed, simply return the user that was passed as the only argument to the function:
+
+```javascript
+login: {
+ handler: (user) => {
+ if (!user.verified) {
+ throw new Error('Please validate your email first!')
+ } else {
+ return user
+ }
+ }
+}
+```
+
+### signup.enabled
+
+Allow users to sign up. Defaults to true. Needs to be explicitly set to false to disable the flow.
+
+```javascript
+signup: {
+ enabled: false
+}
+```
+
+### signup.handler()
+
+This function should contain the code needed to actually create a user in your database. You will receive a single argument which is an object with all of the fields necessary to create the user (`username`, `hashedPassword` and `salt`) as well as any additional fields you included in your signup form in an object called `userAttributes`:
+
+```javascript
+signup: {
+ handler: ({ username, hashedPassword, salt, userAttributes }) => {
+ return db.user.create({
+ data: {
+ email: username,
+ hashedPassword: hashedPassword,
+ salt: salt,
+ name: userAttributes.name,
+ },
+ })
+ }
+}
+```
+
+Before `signup.handler()` is invoked, dbAuth will check that the username is unique in the database and throw an error if not.
+
+There are three things you can do within this function depending on how you want the signup to proceed:
+
+1. If everything is good and the user should be logged in after signup: return the user you just created
+2. If the user is safe to create, but you do not want to log them in automatically: return a string, which will be returned by the `signUp()` function you called after destructuring it from `useAuth()` (see code snippet below)
+3. If the user should _not_ be able to sign up for whatever reason: throw an error in this function with the message to be displayed
+
+You can deal with case #2 by doing something like the following in a signup component/page:
+
+```jsx
+const { signUp } = useAuth()
+
+const onSubmit = async (data) => {
+ const response = await signUp({ ...data })
+
+ if (response.message) {
+ toast.error(response.message) // user created, but not logged in
+ } else {
+ toast.success('Welcome!') // user created and logged in
+ navigate(routes.dashboard())
+ }
+}
+```
+
+### signup.passwordValidation()
+
+This function is used to validate that the password supplied at signup meets certain criteria (length, randomness, etc.). By default it just returns `true` which means the password is always considered valid, even if only a single character (dbAuth features built-in validation that the password is not blank, an empty string, or made up of only spaces). Modify it to enforce whatever methodology you want on the password.
+
+If the password is valid, return `true`. Otherwise, throw a `PasswordValidationError` along with an (optional) message explaining why:
+
+```javascript
+signup: {
+ passwordValidation: (password) => {
+ if (password.length < 8) {
+ throw new PasswordValidationError(
+ 'Password must be at least 8 characters'
+ )
+ }
+
+ if (!password.match(/[A-Z]/)) {
+ throw new PasswordValidationError(
+ 'Password must contain at least one capital letter'
+ )
+ }
+
+ return true
+ }
+}
+```
+
+For the best user experience you should include the same checks on the client side and avoid the roundtrip to the server altogether if the password is invalid. However, having the checks here makes sure that someone can't submit a user signup programmatically and skirt your password requirements.
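
For instance, a small client-side helper (hypothetical, not generated by Redwood) could mirror the two server-side checks above and surface errors before the form is ever submitted:

```javascript
// Hypothetical client-side mirror of the server-side passwordValidation()
// checks, so obviously-invalid passwords fail before hitting the server
function validatePasswordClientSide(password) {
  const errors = []

  if (password.length < 8) {
    errors.push('Password must be at least 8 characters')
  }

  if (!/[A-Z]/.test(password)) {
    errors.push('Password must contain at least one capital letter')
  }

  return errors
}

console.log(validatePasswordClientSide('weak').length) // → 2
```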
+
+### forgotPassword.enabled
+
+Allow users to request a new password via a call to `forgotPassword`. Defaults to true. Needs to be explicitly set to false to disable the flow.
+When disabling this flow you probably want to disable `resetPassword` as well.
+
+```javascript
+forgotPassword: {
+ enabled: false
+}
+```
+
+### forgotPassword.handler()
+
+This handler is invoked if a user is found with the username/email that they submitted on the Forgot Password page, and that user will be passed as an argument. Inside this function is where you'll send the user a link to reset their password—via an email is most common. The link will, by default, look like:
+
+```
+https://example.com/reset-password?resetToken=${user.resetToken}
+```
+
+If you changed the path to the Reset Password page in your routes you'll need to change it here. If you used another name for the `resetToken` database field, you'll need to change that here as well:
+
+```
+https://example.com/reset-password?resetKey=${user.resetKey}
+```
+
+> Note that although the user table stores only a hash of `resetToken`, inside this handler `user.resetToken` contains the raw `resetToken` to use when generating the password reset link.
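
Put together, a handler might look something like this sketch (the `sendEmail` helper is hypothetical; swap in whatever mailer you actually use):

```javascript
forgotPassword: {
  handler: (user) => {
    // `sendEmail` is a hypothetical helper, not part of dbAuth
    return sendEmail({
      to: user.email,
      subject: 'Reset your password',
      text: `Reset your password at: https://example.com/reset-password?resetToken=${user.resetToken}`,
    })
  }
}
```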
+
+### resetPassword.enabled
+
+Allow users to reset their password via a code from a call to `forgotPassword`. Defaults to true. Needs to be explicitly set to false to disable the flow.
+When disabling this flow you probably want to disable `forgotPassword` as well.
+
+```javascript
+resetPassword: {
+ enabled: false
+}
+```
+
+### resetPassword.handler()
+
+This handler is invoked after the password has been successfully changed in the database. Returning something truthy (like `return user`) will automatically log the user in after their password is changed. If you'd like to return them to the login page and make them log in manually, `return false` and redirect the user in the Reset Password page.
+
+### usernameMatch
+
+This configuration allows you to perform a case-insensitive check on a username when it's checked against the database. You will need to provide the configuration of your choice for both signup and login.
+
+```javascript
+signup: {
+ usernameMatch: 'insensitive'
+}
+```
+
+```javascript
+login: {
+ usernameMatch: 'insensitive'
+}
+```
+
+By default no setting is required. This is because each database has its own rules for enabling this feature. To enable it, see the table below and pick the correct `usernameMatchString` for your database of choice.
+
+| DB | Default | usernameMatchString | notes |
+| -------------------- | ------------------ | ------------------- | ---------------------------------------------------------------------------- |
+| Postgres | 'default' | 'insensitive' | |
+| MySQL | 'case-insensitive' | N/A | turned on by default so no setting required |
+| MongoDB              | 'default'          | 'insensitive'       |                                                                              |
+| SQLite | N/A | N/A | [Not Supported] Insensitive checks can only be defined at a per column level |
+| Microsoft SQL Server | 'case-insensitive' | N/A | turned on by default so no setting required |
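
Under the hood, this maps onto Prisma's case-insensitive filtering. The lookup is roughly equivalent to the following sketch (not dbAuth's exact query; `db` and `username` are assumed from context):

```javascript
// Roughly what usernameMatch: 'insensitive' amounts to at query time
db.user.findFirst({
  where: {
    email: { equals: username, mode: 'insensitive' },
  },
})
```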
+
+### Cookie config
+
+These options determine how the cookie that tracks whether the client is authorized is stored in the browser. The default configuration should work for most use cases. If you serve your web and api sides from different domains you'll need to make some changes: set `SameSite` to `None` and then add [CORS configuration](#cors-config).
+
+```javascript
+cookie: {
+ HttpOnly: true,
+ Path: '/',
+ SameSite: 'Strict',
+ Secure: true,
+ // Domain: 'example.com',
+}
+```
+
+### CORS config
+
+If you're using dbAuth and your api and web sides are deployed to different domains then you'll need to configure CORS for both GraphQL in general and dbAuth. You'll also need to enable a couple of options to make sure credentials are sent/accepted in XHR requests. For more info, see the complete [CORS doc](cors.md#cors-and-authentication).
+
+### Error Messages
+
+There are several error messages that can be displayed, including:
+
+- Username/email not found
+- Incorrect password
+- Expired reset password token
+
+We've got some default error messages that sound nice, but may not fit the tone of your site. You can customize these error messages in `api/src/functions/auth.js` in the `errors` prop of each of the `login`, `signup`, `forgotPassword` and `resetPassword` config objects. The generated file contains tons of comments explaining when each particular error message may be shown.
+
+### WebAuthn Config
+
+See [WebAuthn Configuration](#function-config) section below.
+
+## Environment Variables
+
+### Cookie Domain
+
+By default, the session cookie will not have the `Domain` property set, which a browser will default to be the [current domain only](https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies#define_where_cookies_are_sent). If your site is spread across multiple domains (for example, your site is at `example.com` but your api-side is deployed to `api.example.com`) you'll need to explicitly set a Domain so that the cookie is accessible to both.
+
+To do this, set the `cookie.Domain` property in your `api/src/functions/auth.js` configuration to the root domain of your site, which will allow it to be read by all subdomains as well. For example:
+
+```javascript title="api/src/functions/auth.js"
+cookie: {
+ HttpOnly: true,
+ Path: '/',
+ SameSite: 'Strict',
+ Secure: process.env.NODE_ENV !== 'development' ? true : false,
+ Domain: 'example.com'
+}
+```
+
+### Session Secret Key
+
+If you need to change the secret key that's used to encrypt the session cookie, or deploy to a new target (each deploy environment should have its own unique secret key) we've got a CLI tool for creating a new one:
+
+```
+yarn rw g secret
+```
+
+Note that the secret that's output is _not_ appended to your `.env` file or anything else, it's merely output to the screen. You'll need to put it in the right place after that.
+
+:::warning .env and Version Control
+
+The `.env` file is set to be ignored by git and not committed to version control. There is another file, `.env.defaults`, which is meant to be safe to commit and contain simple ENV vars that your dev team can share. The encryption key for the session cookie is NOT one of these shareable vars!
+
+:::
+
+## WebAuthn
+
+[WebAuthn](https://webauthn.guide/) is a specification written by the W3C and FIDO with participation from Google, Mozilla, Microsoft, and others. It defines a standard way to use public key cryptography instead of a password to authenticate users.
+
+That's a very technical way of saying: users can log in with [TouchID](https://en.wikipedia.org/wiki/Touch_ID), [FaceID](https://en.wikipedia.org/wiki/Face_ID), [Windows Hello](https://support.microsoft.com/en-us/windows/learn-about-windows-hello-and-set-it-up-dae28983-8242-bb2a-d3d1-87c9d265a5f0), [Yubikey](https://www.yubico.com/), and more.
+
+
+
+We'll refer to whatever biometric device is used simply as a "device" below. The WebAuthn flow includes two "phases":
+
+1. **Registration**: the first time a new device is added for a user (a user can have multiple devices registered)
+2. **Authentication**: the device is recognized and can be used to login on subsequent visits
+
+### User Experience
+
+The `LoginPage` generated by Redwood includes two new prompts, depending on the state of the user and whether they've registered their device yet:
+
+**Registration**
+
+The user is prompted to login with username/password:
+
+
+
+Then asked if they want to enable WebAuthn:
+
+
+
+If so, they are shown the browser's prompt to scan:
+
+
+
+If they skip, they just proceed into the site as usual. If they log out and back in, they will be prompted to enable WebAuthn again.
+
+**Authentication**
+
+When a device is already registered, it can be used to skip username/password login. The user is immediately shown the prompt to scan when they land on the login page (if the prompt doesn't show, or they mistakenly cancel it, they can click "Open Authenticator" to show the prompt again).
+
+
+
+They can also choose to use username/password credentials instead of their registered device.
+
+### How it Works
+
+The back and forth between the web and api sides works like this:
+
+**Registration**
+
+1. If the user selects to enable their device, a request is made to the server for "registration options" which is a JSON object containing details about the server and user (domain, username).
+2. Your app receives that data and then makes a browser API call to start the biometric reader with the received options
+3. The user scans their fingerprint/face and the browser API returns an ID representing this device, a public key, and a few other fields for validation on the server
+4. The ID, public key, and additional details are sent to the server to be verified. Assuming they are, the device is saved to the database in a `UserCredential` table (you can change the name if you want). The server responds by placing a cookie on the user's browser with the device ID (a random string of letters and numbers)
+
+A similar process takes place when authenticating:
+
+**Authentication**
+
+1. If the cookie from the previous process is present, the web side knows that the user has a registered device so a request is made to the server to get "authentication options"
+2. The server looks up the user whose credential ID is in the cookie and gets a list of all the devices they've registered in the past. This is included along with the domain and username
+3. The web side receives the options from the server and a browser API call is made. The browser first checks to see if the list of devices from the server includes the current device. If so, it prompts the user to scan their fingerprint/face (if the device is not in the list, the user will be directed back to username/password login)
+4. The ID, public key, user details, and a signature are sent to the server and checked to make sure the signature over the expected data verifies with the stored public key. If so, the regular login cookie is set (the same as if the user had used username/password login)
+
+In both cases, the actual scanning and matching of devices is handled by the operating system: all we care about is that we get a credential ID and a public key back from the device.
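+The signature check at the heart of the authentication flow is an ordinary public-key verification. Here's a minimal sketch of the idea using Node's `crypto` module (not dbAuth's actual code): the authenticator signs the server's challenge with a private key that never leaves the device, and the server verifies the signature with the public key it stored at registration.
+
+```javascript
+const crypto = require('crypto')
+
+// Registration: the authenticator generates a key pair; only the
+// public key is sent to (and stored by) the server.
+const { publicKey, privateKey } = crypto.generateKeyPairSync('ec', {
+  namedCurve: 'prime256v1', // NIST P-256, commonly used by WebAuthn devices
+})
+
+// Authentication: the server issues a random challenge...
+const challenge = crypto.randomBytes(32)
+
+// ...the authenticator signs it (inside the device, after the scan)...
+const signature = crypto.sign('sha256', challenge, privateKey)
+
+// ...and the server verifies the signature with the stored public key.
+console.log(crypto.verify('sha256', challenge, publicKey, signature)) // true
+```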
+
+### Browser Support
+
+WebAuthn is supported in the following browsers (as of July 2022):
+
+| OS | Browser | Authenticator |
+| ------- | ------- | -------------------------------------------------------------- |
+| macOS | Firefox | Yubikey Security Key NFC (USB), Yubikey 5Ci, SoloKey |
+| macOS | Chrome | Touch ID, Yubikey Security Key NFC (USB), Yubikey 5Ci, SoloKey |
+| iOS | All | Face ID, Touch ID, Yubikey Security Key NFC (NFC), Yubikey 5Ci |
+| Android | Chrome | Fingerprint Scanner, caBLE |
+| Android | Firefox | Screen PIN |
+
+### Configuration
+
+WebAuthn support requires a few updates to your codebase:
+
+1. Adding a `UserCredential` model
+2. Adding configuration options in `api/src/functions/auth.js`
+3. Adding a `client` to the `<AuthProvider>` in `App.js`
+4. Adding an interface during the login process that prompts the user to enable their device
+
+:::info
+If you set up dbAuth and generated the LoginPage with WebAuthn support, then all of these steps have already been done for you! As described in the post-setup instructions, you just need to add the required fields to your `User` model, create a `UserCredential` model, and you're ready to go!
+
+If you didn't set up WebAuthn at first but decide you want it now, you can run the setup and generator commands again with the `--force` flag to overwrite your existing files. Any changes you made will be overwritten, but a quick diff in git should let you port over most of your changes.
+:::
+
+### Schema Updates
+
+You'll need to add two fields to your `User` model, and a new `UserCredential` model to store the devices that are used and associate them with a user:
+
+```javascript title="api/db/schema.prisma"
+datasource db {
+ provider = "sqlite"
+ url = env("DATABASE_URL")
+}
+
+generator client {
+ provider = "prisma-client-js"
+ binaryTargets = "native"
+}
+
+model User {
+ id Int @id @default(autoincrement())
+ email String @unique
+ hashedPassword String
+ salt String
+ resetToken String?
+ resetTokenExpiresAt DateTime?
+ // highlight-start
+ webAuthnChallenge String? @unique
+ credentials UserCredential[]
+ // highlight-end
+}
+
+// highlight-start
+model UserCredential {
+ id String @id
+ userId Int
+ user User @relation(fields: [userId], references: [id])
+ publicKey Bytes
+ transports String?
+ counter BigInt
+}
+// highlight-end
+```
+
+Run `yarn rw prisma migrate dev` to apply the changes to your database.
+
+:::warning Do Not Allow GraphQL Access to `UserCredential`
+
+As you can probably tell by the name, this new model contains secret credential info for the user. You **should not** make this data publicly available by adding an SDL file to `api/src/graphql`.
+
+Also: if you (re)generate the SDL for your `User` model, the generator will happily include the `credentials` relationship, assuming you want to allow access to that data (it does this automatically for all relationships). This will result in an error and warning message in the console from the API server when it tries to read the new SDL file: the `User` SDL refers to a `UserCredential` type, which does not exist (there's no `userCredential.sdl.js` file to define it).
+
+If you see this notice after (re)generating, simply remove the following line from the `User` SDL:
+
+```graphql
+credentials: [UserCredential]!
+```
+
+:::
+
+### Function Config
+
+Next we need to let dbAuth know about the new field and model names, as well as how you want WebAuthn to behave (see the highlighted sections):
+
+```javascript title="api/src/functions/auth.js"
+import { db } from 'src/lib/db'
+import { DbAuthHandler } from '@redwoodjs/api'
+
+export const handler = async (event, context) => {
+ // assorted handler config here...
+
+ const authHandler = new DbAuthHandler(event, context, {
+ db: db,
+ authModelAccessor: 'user',
+ // highlight-start
+ credentialModelAccessor: 'userCredential',
+ // highlight-end
+ authFields: {
+ id: 'id',
+ username: 'email',
+ hashedPassword: 'hashedPassword',
+ salt: 'salt',
+ resetToken: 'resetToken',
+ resetTokenExpiresAt: 'resetTokenExpiresAt',
+ // highlight-start
+ challenge: 'webAuthnChallenge',
+ // highlight-end
+ },
+
+ cookie: {
+ HttpOnly: true,
+ Path: '/',
+ SameSite: 'Strict',
+ Secure: process.env.NODE_ENV !== 'development' ? true : false,
+ },
+
+ forgotPassword: forgotPasswordOptions,
+ login: loginOptions,
+ resetPassword: resetPasswordOptions,
+ signup: signupOptions,
+
+ // highlight-start
+ webAuthn: {
+ enabled: true,
+ expires: 60 * 60 * 14,
+ name: 'Webauthn Test',
+ domain:
+ process.env.NODE_ENV === 'development' ? 'localhost' : 'server.com',
+ origin:
+ process.env.NODE_ENV === 'development'
+ ? 'http://localhost:8910'
+ : 'https://server.com',
+ type: 'platform',
+ timeout: 60000,
+ credentialFields: {
+ id: 'id',
+ userId: 'userId',
+ publicKey: 'publicKey',
+ transports: 'transports',
+ counter: 'counter',
+ },
+ },
+ // highlight-end
+ })
+
+ return await authHandler.invoke()
+}
+```
+
+- `credentialModelAccessor` specifies the name of the accessor that you call to access the model you created to store credentials. If your model name is `UserCredential` then this field would be `userCredential` as that's how Prisma's naming conventions work.
+- `authFields.challenge` specifies the name of the field in the user model that will hold the WebAuthn challenge string. This string is generated automatically whenever a WebAuthn registration or authentication request starts and is one more verification that the browser request came from this user. A user can only have one WebAuthn request/response cycle going at a time, meaning that they can't open a desktop browser, get the TouchID prompt, then switch to iOS Safari to use FaceID, then return to the desktop to scan their fingerprint. The most recent WebAuthn request will clobber any previous one that's in progress.
+- `webAuthn.enabled` is a boolean, denoting whether the server should respond to webAuthn requests. If you decide to stop using WebAuthn, you'll want to turn it off here as well as update the LoginPage to stop prompting.
+- `webAuthn.expires` is the number of seconds that a user will be allowed to keep using their fingerprint/face scan to re-authenticate into your site. Once this value expires, the user _must_ use their username/password to authenticate the next time, and then WebAuthn will be re-enabled (again, for this length of time). For security, you may want to log users out of your app after an hour of inactivity, but allow them to easily use their fingerprint/face to re-authenticate for the next two weeks (this is similar to login on macOS where your TouchID session expires after a couple of days of inactivity). In this example you would set `login.expires` to `60 * 60` and `webAuthn.expires` to `60 * 60 * 24 * 14`.
+- `webAuthn.name` is the name of the app that will show in some browsers' prompts to use the device.
+- `webAuthn.domain` is the domain making the request. This is just the domain part of the URL, e.g. `app.server.com`, or `localhost` in development mode.
+- `webAuthn.origin` is the domain _including_ the protocol and port that the request is coming from, e.g. `https://app.server.com`. In development mode, this would be `http://localhost:8910`.
+- `webAuthn.type`: the type of device that's allowed to be used (see [next section below](#webauthn-type-option))
+- `webAuthn.timeout`: how long to wait for a device to be used in milliseconds (defaults to 60 seconds)
+- `webAuthn.credentialFields`: lists the expected field names that dbAuth uses internally mapped to what they're actually called in your model. This includes 5 fields total: `id`, `userId`, `publicKey`, `transports`, `counter`.
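+For example, if your credential model used different column names (the names below are hypothetical), the mapping would look like:
+
+```javascript
+credentialFields: {
+  id: 'id',                 // dbAuth's internal name: your column's name
+  userId: 'ownerId',
+  publicKey: 'pubKey',
+  transports: 'transports',
+  counter: 'useCount',
+},
+```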
+
+### WebAuthn `type` Option
+
+The config option `webAuthn.type` can be set to `any`, `platform` or `cross-platform`:
+
+- `platform` means to _only_ allow embedded devices (TouchID, FaceID, Windows Hello) to be used
+- `cross-platform` means to _only_ allow third party devices (like a Yubikey USB fingerprint reader)
+- `any` means to allow both platform and cross-platform devices
+
+In some browsers this can lead to a pretty drastic UX difference. For example, here is the interface in Chrome on macOS with the built-in TouchID sensor on a MacBook Pro:
+
+#### **any**
+
+
+
+If you pick "Add a new Android Phone" you're presented with a QR code:
+
+
+
+If you pick "USB Security Key" you're given the chance to scan your fingerprint in a 3rd party USB device:
+
+
+
+And finally if you pick "This device" you're presented with the standard interface you'd get if you'd used `platform` as your type:
+
+
+
+You'll have to decide if this UX tradeoff is worth it for your customers: all of these options can be pretty confusing to someone who's only ever used TouchID or FaceID.
+
+#### **platform**
+
+The `platform` option provides the simplest UI and one that users with a TouchID or FaceID will be immediately familiar with:
+
+
+
+Note that you can also fall back to your user account's password (on the computer itself) in addition to TouchID:
+
+
+
+Both the password and TouchID scan will count as the same device, so users can alternate between them if they want.
+
+#### **cross-platform**
+
+This interface is the same as `any`, but without the option to pick "This device":
+
+
+
+So while the `any` option is the most flexible, it's also the most confusing to users. If you do plan on allowing any device, you may want to do a user-agent check and try to explain to users what the different options actually mean.
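+As a rough sketch of such a check (a hypothetical heuristic, not part of dbAuth), you could pick the help text based on the user agent:
+
+```javascript
+// Hypothetical helper: guess which authenticator the user most likely
+// has so the login page can explain the browser's options.
+function describeAuthenticator(userAgent) {
+  if (/iPhone|iPad/.test(userAgent)) return 'Face ID or Touch ID'
+  if (/Macintosh/.test(userAgent)) return 'Touch ID'
+  if (/Windows/.test(userAgent)) return 'Windows Hello'
+  if (/Android/.test(userAgent)) return 'your fingerprint scanner'
+  return 'a security key'
+}
+
+// In the LoginPage you'd call it with navigator.userAgent:
+console.log(describeAuthenticator('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)')) // Touch ID
+```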
+
+The api-side is now ready to go.
+
+### App.js Updates
+
+If you generated your login/signup pages with `yarn rw g dbAuth --webauthn` then all of these changes are in place and you can start using WebAuthn right away! Otherwise, read on.
+
+First you'll need to import the `WebAuthnClient` and give it to the `<AuthProvider>` component:
+
+```jsx title="web/src/App.js"
+import { AuthProvider } from '@redwoodjs/auth'
+// highlight-start
+import WebAuthnClient from '@redwoodjs/auth-dbauth-web/webAuthn'
+// highlight-end
+import { FatalErrorBoundary, RedwoodProvider } from '@redwoodjs/web'
+import { RedwoodApolloProvider } from '@redwoodjs/web/apollo'
+
+import FatalErrorPage from 'src/pages/FatalErrorPage'
+import Routes from 'src/Routes'
+
+import './scaffold.css'
+import './index.css'
+
+const App = () => (
+  <FatalErrorBoundary page={FatalErrorPage}>
+    <RedwoodProvider titleTemplate="%PageTitle | %AppTitle">
+      // highlight-start
+      <AuthProvider type="dbAuth" client={WebAuthnClient}>
+        // highlight-end
+        <RedwoodApolloProvider>
+          <Routes />
+        </RedwoodApolloProvider>
+      </AuthProvider>
+    </RedwoodProvider>
+  </FatalErrorBoundary>
+)
+
+export default App
+```
+
+Now you're ready to access the functionality added by the WebAuthn client. The easiest way to do this would be to generate a new `LoginPage` with `yarn rw g dbAuth --webauthn`, even if it's in a brand new, throwaway app, and copy the pieces you need (or just replace your existing login page with it).
+
+The gist of building a login flow is that you now need to stop after username/password authentication and, if the browser supports WebAuthn, give the user the chance to register their device. If they come to the login page and already have the `webAuthn` cookie then you can show the prompt to authenticate, skipping the username/password form completely. This is all handled in the LoginPage template that Redwood generates for you.
+
+### WebAuthn Client API
+
+The `client` that we gave to the `AuthProvider` can be destructured from `useAuth()`:
+
+```javascript
+const { isAuthenticated, client, logIn } = useAuth()
+```
+
+`client` gives you access to four functions for working with WebAuthn:
+
+- `client.isSupported()`: returns a Promise which resolves to a boolean indicating whether WebAuthn is supported in the current browser
+- `client.isEnabled()`: returns a boolean for whether the user currently has a `webAuthn` cookie, which means this device has been registered already and can be used for login
+- `client.register()`: returns a Promise which gets options from the server, presents the prompt to scan your fingerprint/face, and then sends the result up to the server. It will either resolve successfully with an object `{ verified: true }` or throw an error. This function is used when the user has not registered this device yet (`client.isEnabled()` returns `false`).
+- `client.authenticate()`: returns a Promise which gets options from the server, presents the prompt to scan the user's fingerprint/face, and then sends the result up to the server. It will either resolve successfully with an object `{ verified: true }` or throw an error. This should be used when the user has already registered this device (`client.isEnabled()` returns `true`).
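+To see how these fit together, here's a hypothetical, simplified version of the flow the generated LoginPage implements (names and structure are illustrative only):
+
+```javascript
+// Hypothetical sketch of a login submit handler built on the client API.
+async function loginFlow({ client, logIn, credentials }) {
+  const supported = await client.isSupported()
+
+  // Device already registered? Try a fingerprint/face scan first.
+  if (supported && client.isEnabled()) {
+    try {
+      await client.authenticate()
+      return 'webauthn'
+    } catch (e) {
+      // Scan canceled or failed: fall through to username/password.
+    }
+  }
+
+  // Regular username/password login...
+  await logIn(credentials)
+
+  // ...then offer to register this device for next time.
+  if (supported && !client.isEnabled()) {
+    await client.register()
+  }
+  return 'password'
+}
+```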
diff --git a/docs/versioned_docs/version-8.4/auth/firebase.md b/docs/versioned_docs/version-8.4/auth/firebase.md
new file mode 100644
index 000000000000..6fa20136e3e4
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/auth/firebase.md
@@ -0,0 +1,79 @@
+---
+sidebar_label: Firebase
+---
+
+# Firebase Authentication
+
+To get started, run the setup command:
+
+```bash
+yarn rw setup auth firebase
+```
+
+This installs all the packages, writes all the files, and makes all the code modifications you need.
+For a detailed explanation of all the api- and web-side changes that aren't exclusive to Firebase, see the top-level [Authentication](../authentication.md) doc.
+For now, let's focus on Firebase's side of things.
+
+If you don't have a Firebase account yet, now's the time to make one: navigate to https://firebase.google.com and click "Go to console", sign up, and create a project.
+After it's ready, we'll get the API keys.
+
+To get the API keys, we need to add a web app to our project.
+Click the `>` icon in the main call to action on the dashboard ("Get started by adding Firebase to your app").
+Give your app a nickname, then you should see the API keys.
+Since we're only using Firebase for auth, we only need `apiKey`, `authDomain`, and `projectId`.
+Copy them into your project's `.env` file:
+
+```bash title=".env"
+FIREBASE_API_KEY="..."
+FIREBASE_AUTH_DOMAIN="..."
+FIREBASE_PROJECT_ID="..."
+```
+
+Lastly, include `FIREBASE_API_KEY` and `FIREBASE_AUTH_DOMAIN` in the list of env vars that should be available to the web side (`FIREBASE_PROJECT_ID` is for the api side):
+
+```toml title="redwood.toml"
+[web]
+ # ...
+ includeEnvironmentVariables = ["FIREBASE_API_KEY", "FIREBASE_AUTH_DOMAIN"]
+```
+
+We've hooked up our Firebase app to our Redwood app, but if you try it now, it won't work.
+That's because we haven't actually enabled auth in our Firebase app yet.
+
+Back to the dashboard one more time: in the nav on the left, click "Build", "Authentication", and "Get started".
+We're going to go with "Email/Password" here, but feel free to configure things as you wish.
+Click "Email/Password", enable it, and click "Save".
+
+That should be enough; now, things should just work.
+Let's make sure: if this is a brand new project, generate a home page.
+There we'll try to sign up by destructuring `signUp` from the `useAuth` hook (import that from `'src/auth'`). We'll also destructure and display `isAuthenticated` to see if it worked:
+
+```tsx title="web/src/pages/HomePage.tsx"
+import { useAuth } from 'src/auth'
+
+const HomePage = () => {
+  const { isAuthenticated, signUp } = useAuth()
+
+  return (
+    <>
+      {/* MetaTags, h1, paragraphs, etc. */}
+
+      <p>{JSON.stringify({ isAuthenticated })}</p>
+
+      <button
+        onClick={() =>
+          signUp({
+            // email: 'your.email@email.com',
+            // password: 'super secret password',
+          })
+        }
+      >
+        sign up
+      </button>
+    </>
+  )
+}
+
+export default HomePage
+```
+
+"Email/Password" says what it means: Firebase doesn't redirect to a hosted sign-up page or open a sign-up modal.
+In a real app, you'd build a form here, but we're going to hardcode an email and password.
+After you sign up, you should see `{"isAuthenticated":true}` on the page.
diff --git a/docs/versioned_docs/version-8.4/auth/netlify.md b/docs/versioned_docs/version-8.4/auth/netlify.md
new file mode 100644
index 000000000000..67a8efd5e1e6
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/auth/netlify.md
@@ -0,0 +1,64 @@
+---
+sidebar_label: Netlify
+---
+
+# Netlify Identity Authentication
+
+To get started, run the setup command:
+
+```bash
+yarn rw setup auth netlify
+```
+
+This installs all the packages, writes all the files, and makes all the code modifications you need.
+For a detailed explanation of all the api- and web-side changes that aren't exclusive to Netlify Identity, see the top-level [Authentication](../authentication.md) doc.
+For now let's focus on Netlify's side of things.
+
+There's a catch with Netlify Identity: your app has to be deployed to Netlify to use it.
+If that's a deal breaker for you, there are [other great auth providers to choose from](../authentication.md#official-integrations).
+But here we'll assume it's not and that your app is already deployed.
+(If it isn't, do that first, then come back. And yes, there's a setup command for that: `yarn rw setup deploy netlify`.)
+
+Once you've deployed your app, go to its overview, click "Integrations" in the nav at the top, search for Netlify Identity, enable it, and copy the API endpoint in the Identity card.
+(It should look something like `https://my-redwood-app.netlify.app/.netlify/identity`.)
+
+Let's do one more thing while we're here to make signing up later a little easier.
+Right now, if we sign up, we'll have to verify our email address.
+Let's forgo that feature for the purposes of this doc: click "Settings and usage", then scroll down to "Emails" and look for "Confirmation template".
+Click "Edit settings", tick the box next to "Allow users to sign up without verifying their email address", and click "Save".
+
+Netlify Identity works a little differently than the other auth providers in that you don't have to copy API keys to your project's `.env` and `redwood.toml` files.
+Instead, the first time you use it (by, say, calling `signUp` from `useAuth`), it'll ask you for your app's API endpoint.
+So let's go ahead and use it: if this is a brand new project, generate a home page.
+There we'll try to sign up by destructuring `signUp` from the `useAuth` hook (import that from `'src/auth'`). We'll also destructure and display `isAuthenticated` to see if it worked:
+
+```
+yarn rw g page home /
+```
+
+```tsx title="web/src/pages/HomePage.tsx"
+import { useAuth } from 'src/auth'
+
+const HomePage = () => {
+  const { isAuthenticated, signUp } = useAuth()
+
+  return (
+    <>
+      {/* MetaTags, h1, paragraphs, etc. */}
+
+      <p>{JSON.stringify({ isAuthenticated })}</p>
+      <button onClick={signUp}>sign up</button>
+    </>
+  )
+}
+
+export default HomePage
+```
+
+Clicking sign up should open a modal; paste the API endpoint you copied earlier there:
+
+
+
+After that, you should see a sign-up modal. Go ahead and sign up:
+
+
+
+After you sign up, you should see `{"isAuthenticated":true}` on the page.
diff --git a/docs/versioned_docs/version-8.4/auth/supabase.md b/docs/versioned_docs/version-8.4/auth/supabase.md
new file mode 100644
index 000000000000..d9274e799773
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/auth/supabase.md
@@ -0,0 +1,341 @@
+---
+sidebar_label: Supabase
+---
+
+# Supabase Authentication
+
+To get started, run the setup command:
+
+```bash
+yarn rw setup auth supabase
+```
+
+This installs all the packages, writes all the files, and makes all the code modifications you need.
+For a detailed explanation of all the api- and web-side changes that aren't exclusive to Supabase, see the top-level [Authentication](../authentication.md) doc. For now, let's focus on Supabase's side of things.
+
+## Setup
+
+If you don't have a Supabase account yet, now's the time to make one: navigate to https://supabase.com and click "Start your project" in the top right. Then sign up and create an organization and a project.
+
+While Supabase creates your project, it thoughtfully shows your project's API keys.
+(If the page refreshes while you're copying them over, just scroll down a bit and look for "Connecting to your new project".)
+We're looking for "Project URL" and "API key" (the `anon`, `public` one).
+Copy them into your project's `.env` file as `SUPABASE_URL` and `SUPABASE_KEY` respectively.
+
+There's one more we need, the "JWT Secret", that's not here.
+To get that one, click the cog icon ("Project Settings") near the bottom of the nav on the left.
+Then click "API", scroll down a bit, and you should see it—"JWT Secret" under "JWT Settings".
+Copy it into your project's `.env` file as `SUPABASE_JWT_SECRET`.
+All together now:
+
+```bash title=".env"
+SUPABASE_URL="..."
+SUPABASE_KEY="..."
+SUPABASE_JWT_SECRET="..."
+```
+
+Lastly, in `redwood.toml`, include `SUPABASE_URL` and `SUPABASE_KEY` in the list of env vars that should be available to the web side:
+
+```toml title="redwood.toml"
+[web]
+ # ...
+ includeEnvironmentVariables = ["SUPABASE_URL", "SUPABASE_KEY"]
+```
+
+## Authentication UI
+
+Supabase doesn't redirect to a hosted sign-up page or open a sign-up modal.
+In a real app, you'd build a form here, but we're going to hardcode an email and password.
+
+### Basic Example
+
+Let's make sure everything's hooked up: if this is a brand new project, generate a home page.
+There we'll try to sign up by destructuring `signUp` from the `useAuth` hook (import that from `'src/auth'`). We'll also destructure and display `isAuthenticated` to see if it worked:
+
+```tsx title="web/src/pages/HomePage.tsx"
+import { useAuth } from 'src/auth'
+
+const HomePage = () => {
+  const { isAuthenticated, signUp } = useAuth()
+
+  return (
+    <>
+      {/* MetaTags, h1, paragraphs, etc. */}
+
+      <p>{JSON.stringify({ isAuthenticated })}</p>
+
+      <button
+        onClick={() =>
+          signUp({
+            email: 'your.email@email.com',
+            password: 'super secret password',
+          })
+        }
+      >
+        sign up
+      </button>
+    </>
+  )
+}
+
+export default HomePage
+```
+
+After you sign up, head to your inbox: there should be a confirmation email from Supabase waiting for you.
+Click the link, then head back to your app.
+Once you refresh the page, you should see `{"isAuthenticated":true}` on the page.
+
+## Authentication Reference
+
+You'll notice that the [Supabase JavaScript SDK Auth API](https://supabase.com/docs/reference/javascript/auth-api) reference documentation presents methods to sign in with the various integrations Supabase supports: password, OAuth, ID token, SSO, etc.
+
+The RedwoodJS implementation of Supabase authentication supports these as well, but within the `logIn` method of the `useAuth` hook.
+
+For example, Supabase documents signing in with email and password as:
+
+```ts
+const { data, error } = await supabase.auth.signInWithPassword({
+ email: 'example@email.com',
+ password: 'example-password',
+})
+```
+
+In RedwoodJS, you always use `logIn`, passing the necessary credential options along with an `authMethod` that declares how you want to authenticate:
+
+```ts
+const { logIn } = useAuth()
+
+await logIn({
+ authMethod: 'password',
+ email: 'example@email.com',
+ password: 'example-password',
+})
+```
+
+### Sign Up with email and password
+
+Creates a new user.
+
+```ts
+const { signUp } = useAuth()
+
+await signUp({
+ email: 'example@email.com',
+ password: 'example-password',
+})
+```
+
+### Sign Up with email and password and additional user metadata
+
+Creates a new user with additional user metadata.
+
+```ts
+const { signUp } = useAuth()
+
+await signUp({
+ email: 'example@email.com',
+ password: 'example-password',
+ options: {
+ data: {
+ first_name: 'John',
+ age: 27,
+ },
+ },
+})
+```
+
+### Sign Up with email and password and a redirect URL
+
+Creates a new user with a redirect URL.
+
+```ts
+const { signUp } = useAuth()
+
+await signUp({
+ email: 'example@email.com',
+ password: 'example-password',
+ options: {
+ emailRedirectTo: 'https://example.com/welcome',
+ },
+})
+```
+
+### Sign in a user with email and password
+
+Log in an existing user with an email and password or phone and password.
+
+- Requires either an email and password or a phone number and password.
+
+```ts
+const { logIn } = useAuth()
+
+await logIn({
+ authMethod: 'password',
+ email: 'example@email.com',
+ password: 'example-password',
+})
+```
+
+### Sign in a user through Passwordless/OTP
+
+Log in a user using magiclink or a one-time password (OTP).
+
+- Requires either an email or phone number.
+
+- This method is used for passwordless sign-ins where a OTP is sent to the user's email or phone number.
+
+```ts
+const { logIn } = useAuth()
+
+await logIn({
+ authMethod: 'otp',
+ email: 'example@email.com',
+ options: {
+ emailRedirectTo: 'https://example.com/welcome',
+ },
+})
+```
+
+### Sign in a user through OAuth
+
+Log in an existing user via a third-party provider.
+
+- This method is used for signing in using a third-party provider.
+
+- Supabase supports many different [third-party providers](https://supabase.com/docs/guides/auth#providers).
+
+```ts
+const { logIn } = useAuth()
+
+await logIn({
+ authMethod: 'oauth',
+ provider: 'github',
+})
+```
+
+### Sign in a user with IDToken
+
+Log in a user using IDToken.
+
+```ts
+const { logIn } = useAuth()
+
+await logIn({
+ authMethod: 'id_token',
+ provider: 'apple',
+ token: 'cortland-apple-id-token',
+})
+```
+
+### Sign in a user with SSO
+
+Log in a user using single sign-on (SSO).
+
+```ts
+const { logIn } = useAuth()
+
+await logIn({
+ authMethod: 'sso',
+ providerId: 'sso-provider-identity-uuid',
+ domain: 'example.com',
+})
+```
+
+### Get Current User
+
+Gets the content of the current user set by API side authentication.
+
+```ts
+const { currentUser } = useAuth()
+
+<p>{JSON.stringify({ currentUser })}</p>
+```
+
+### Get Current User Metadata
+
+Gets content of the current Supabase user session, i.e., `auth.getSession()`.
+
+```ts
+const { userMetadata } = useAuth()
+
+<p>{JSON.stringify({ userMetadata })}</p>
+```
+
+### Sign out a user
+
+Inside a browser context, `logOut()` (which calls Supabase's `signOut()`) removes the logged-in user from the browser session, clears all items from localStorage, and then triggers a `SIGNED_OUT` event.
+
+In order to use `logOut()`, the user needs to be signed in first.
+
+```ts
+const { logOut } = useAuth()
+
+logOut()
+```
+
+### Verify and log in through OTP
+
+Log in a user given a user-supplied OTP received via mobile.
+
+- The `verifyOtp` method takes different verification types. If a phone number is used, the type can be either `sms` or `phone_change`. If an email address is used, the type can be one of the following: `signup`, `magiclink`, `recovery`, `invite`, or `email_change`.
+
+- The verification type should be determined based on the corresponding auth method called before `verifyOtp` to sign up or sign in a user.
+
+The RedwoodJS auth provider doesn't expose the `verifyOtp` method from the Supabase SDK directly.
+
+Instead, since you always have access to the Supabase Auth client, you can access any method it exposes.
+
+So, in order to use the `verifyOtp` method, you would:
+
+```ts
+const { client } = useAuth()
+
+useEffect(() => {
+  const verify = async () => {
+    // `phone` and `token` come from your form state
+    const { data, error } = await client.verifyOtp({ phone, token, type: 'sms' })
+  }
+
+  verify()
+}, [client])
+```
+
+### Access the Supabase Auth Client
+
+Sometimes you may need to access the Supabase Auth client directly.
+
+```ts
+const { client } = useAuth()
+```
+
+You can then use it to work with Supabase sessions, or auth events.
+
+When using the client in a React component, you'll have to wrap any method that needs an `await` in an async function inside `useEffect()`.
+
+### Retrieve a session
+
+Returns the session, refreshing it if necessary. The returned session can be `null` if no session is detected, which can happen when a user is not signed in or has logged out.
+
+```ts
+const { client } = useAuth()
+
+useEffect(() => {
+  const getSession = async () => {
+    const { data, error } = await client.getSession()
+  }
+
+  getSession()
+}, [client])
+```
+
+### Listen to auth events
+
+Receive a notification every time an auth event happens.
+
+- Types of auth events: `SIGNED_IN`, `SIGNED_OUT`, `TOKEN_REFRESHED`, `USER_UPDATED`, `PASSWORD_RECOVERY`
+
+```ts
+const { client } = useAuth()
+
+useEffect(() => {
+ const {
+ data: { subscription },
+ } = client.onAuthStateChange((event, session) => {
+ console.log(event, session)
+ })
+
+ return () => {
+ subscription.unsubscribe()
+ }
+}, [client])
+```
diff --git a/docs/versioned_docs/version-8.4/auth/supertokens.md b/docs/versioned_docs/version-8.4/auth/supertokens.md
new file mode 100644
index 000000000000..ecdc200726c6
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/auth/supertokens.md
@@ -0,0 +1,119 @@
+---
+sidebar_label: SuperTokens
+---
+
+# SuperTokens Authentication
+
+To get started, run the setup command:
+
+```bash
+yarn rw setup auth supertokens
+```
+
+This installs all the packages, writes all the files, and makes all the code modifications you need.
+
+:::info
+
+You may have noticed that in `api/src/functions/auth.ts` there's an import from `'supertokens-node/framework/awsLambda'`. This is fine, even if your app isn't running in a serverless environment like AWS Lambda. In "serverful" environments, Redwood automatically handles the translation between Fastify's request and reply objects and functions' AWS Lambda signature.
+
+:::
+
+For a detailed explanation of all the api- and web-side changes that aren't exclusive to SuperTokens, see the top-level [Authentication](../authentication.md) doc.
+For now, let's focus on SuperTokens's side of things.
+
+When you run the setup command, it configures your app to support both email+password logins as well as social auth logins (Apple, GitHub, and Google). The social auth logins require quite a few environment variables, and SuperTokens itself needs a couple too. Thankfully, SuperTokens makes this easy to set up by providing values you can use for testing.
+
+## Environment variables
+
+The environment variables have to be added either to your project's `.env` file (when running in a development environment) or to the environment variables of your hosting provider (when running in production).
+
+### Base setup
+
+```bash
+SUPERTOKENS_APP_NAME="Redwoodjs App" # this will be used in the email template for password reset or email verification emails.
+SUPERTOKENS_JWKS_URL=http://localhost:8910/.redwood/functions/auth/jwt/jwks.json
+SUPERTOKENS_CONNECTION_URI=https://try.supertokens.io # set to the correct connection uri
+```
+
+### Production setup
+
+Assuming that your web side is hosted on `https://myapp.com`:
+
+```bash
+SUPERTOKENS_WEBSITE_DOMAIN=https://myapp.com
+SUPERTOKENS_JWKS_URL=https://myapp.com/.redwood/functions/auth/jwt/jwks.json
+```
+
+### Managed SuperTokens service setup
+
+```bash
+SUPERTOKENS_API_KEY=your-api-key # The value can be omitted when self-hosting SuperTokens
+```
+
+### Social login setup
+
+The following environment variables have to be set up (depending on the social login options):
+
+```bash
+SUPERTOKENS_APPLE_CLIENT_ID=4398792-io.supertokens.example.service
+SUPERTOKENS_APPLE_SECRET_KEY_ID=7M48Y4RYDL
+SUPERTOKENS_APPLE_SECRET_PRIVATE_KEY=-----BEGIN PRIVATE KEY-----\nMIGTAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBHkwdwIBAQQgu8gXs+XYkqXD6Ala9Sf/iJXzhbwcoG5dMh1OonpdJUmgCgYIKoZIzj0DAQehRANCAASfrvlFbFCYqn3I2zeknYXLwtH30JuOKestDbSfZYxZNMqhF/OzdZFTV0zc5u5s3eN+oCWbnvl0hM+9IW0UlkdA\n-----END PRIVATE KEY-----
+SUPERTOKENS_APPLE_SECRET_TEAM_ID=YWQCXGJRJL
+SUPERTOKENS_GITHUB_CLIENT_ID=467101b197249757c71f
+SUPERTOKENS_GITHUB_CLIENT_SECRET=e97051221f4b6426e8fe8d51486396703012f5bd
+SUPERTOKENS_GOOGLE_CLIENT_ID=1060725074195-kmeum4crr01uirfl2op9kd5acmi9jutn.apps.googleusercontent.com
+SUPERTOKENS_GOOGLE_CLIENT_SECRET=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
+```
+
+### `redwood.toml` setup
+
+Make sure to modify `redwood.toml` to pass the required environment variables to the web side:
+
+```toml
+[web]
+...
+includeEnvironmentVariables = [
+ 'SUPERTOKENS_WEBSITE_DOMAIN',
+ 'SUPERTOKENS_API_DOMAIN',
+ 'SUPERTOKENS_API_GATEWAY_PATH',
+ 'SUPERTOKENS_APP_NAME'
+]
+```
+
+## Page setup
+
+If this is a brand new project, generate a home page:
+
+```bash
+yarn rw g page home /
+```
+
+On the home page, we'll try to sign up by destructuring `signUp` from the `useAuth` hook (import it from `'src/auth'`). We'll also destructure and display `isAuthenticated` to see if it worked:
+
+```tsx title="web/src/pages/HomePage.tsx"
+import { useAuth } from 'src/auth'
+
+const HomePage = () => {
+ const { isAuthenticated, signUp } = useAuth()
+
+  return (
+    <>
+      {/* MetaTags, h1, paragraphs, etc. */}
+
+      <p>{JSON.stringify({ isAuthenticated })}</p>
+      <button onClick={() => signUp()}>sign up</button>
+    </>
+  )
+}
+
+export default HomePage
+```
+
+Clicking "sign up" should navigate you to `/auth`, where SuperTokens's default login/sign-up UI is rendered.
+
+After you sign up, you should be redirected back to your Redwood app, and you should see `{"isAuthenticated":true}` on the page.
+
+### Troubleshooting
+
+If going to `http://localhost:8910/auth` results in a plain JavaScript file being served instead of the expected auth page, rename the `web/src/auth.tsx` file to `web/src/authentication.tsx` and update the imports (related to https://github.com/redwoodjs/redwood/issues/9740).
diff --git a/docs/versioned_docs/version-8.4/authentication.md b/docs/versioned_docs/version-8.4/authentication.md
new file mode 100644
index 000000000000..7c3d96e5ab1e
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/authentication.md
@@ -0,0 +1,202 @@
+---
+description: Set up an authentication provider
+---
+
+# Authentication
+
+Redwood has integrated auth end to end, from the web side to the api side.
+On the web side, the router can protect pages via the `PrivateSet` component, and even restrict access at the role-level.
+And if you'd prefer to work with the primitives, the `useAuth` hook exposes all the pieces to build the experience you want.
+
+Likewise, the api side is locked down by default: all SDLs are generated with the `@requireAuth` directive, ensuring that making things publicly available is something that you opt in to rather than out of.
+You can also require auth anywhere in your Services, and even in your serverful or serverless functions.
+
+Last but not least, Redwood provides its own self-hosted, full-featured auth provider: [dbAuth](./auth/dbauth.md).
+
+In this doc, we'll cover auth at a high level.
+All auth providers share the same interface so the information here will be useful no matter which auth provider you use.
+
+## Official integrations
+
+Redwood has a simple API to integrate any auth provider you can think of. But to make it easier for you to get started, Redwood provides official integrations for some of the most popular auth providers out of the box:
+
+- [Auth0](./auth/auth0.md)
+- [Azure Active Directory](./auth/azure.md)
+- [Clerk](./auth/clerk.md)
+- [Firebase](./auth/firebase.md)
+- [Netlify](./auth/netlify.md)
+- [Supabase](./auth/supabase.md)
+- [SuperTokens](./auth/supertokens.md)
+
+:::tip how to tell if an integration is official
+
+To tell if an integration is official, look for the `@redwoodjs` scope.
+For example, Redwood's Auth0 integration comprises two npm packages: `@redwoodjs/auth-auth0-web` and `@redwoodjs/auth-auth0-api`.
+
+:::
+
+Other than bearing the `@redwoodjs` scope, the reason these providers are official is that we're committed to keeping them up to date.
+You can set up any of them via the corresponding auth setup command:
+
+```bash
+yarn rw setup auth auth0
+```
+
+## The API at a high-level
+
+We mentioned that Redwood has a simple API you can use to integrate any provider you want.
+Whether you roll your own auth provider or choose one of Redwood's integrations, it's good to be familiar with it, so let's dive into it here.
+
+On the web side, there are two components that can be auth enabled: the `RedwoodApolloProvider` in `web/src/App.tsx` and the `Router` in `web/src/Routes.tsx`.
+Both take a `useAuth` prop. If provided, they'll use this hook to get information about the app's auth state. The `RedwoodApolloProvider` uses it to get a token to include in every GraphQL request, and the `Router` uses it to determine if a user has access to private or role-restricted routes.
+
+When you set up an auth provider, the setup command makes a new file, `web/src/auth.ts`. This file's job is to create the `AuthProvider` component and the `useAuth` hook by integrating with the auth provider of your choice. Whenever you need access to the auth context, you'll import the `useAuth` hook from this file. The `RedwoodApolloProvider` and the `Router` are no exceptions:
+
+![web-side-auth](https://user-images.githubusercontent.com/32992335/208549951-469617d7-c798-4d9a-8a29-46efe23cca6a.png)
+
+Once auth is set up on the web side, every GraphQL request includes a JWT (JSON Web Token).
+The api side needs a way of verifying and decoding this token if it's to do anything with it.
+There are two steps to this process:
+
+- decoding the token
+- mapping it into a user object
+
+The `createGraphQLHandler` function in `api/src/functions/graphql.ts` takes two props, `authDecoder` and `getCurrentUser`, for each of these steps (respectively):
+
+```ts title="api/src/functions/graphql.ts"
+// highlight-next-line
+import { authDecoder } from '@redwoodjs/auth-auth0-api'
+import { createGraphQLHandler } from '@redwoodjs/graphql-server'
+
+import directives from 'src/directives/**/*.{js,ts}'
+import sdls from 'src/graphql/**/*.sdl.{js,ts}'
+import services from 'src/services/**/*.{js,ts}'
+
+// highlight-next-line
+import { getCurrentUser } from 'src/lib/auth'
+import { db } from 'src/lib/db'
+import { logger } from 'src/lib/logger'
+
+export const handler = createGraphQLHandler({
+ // highlight-start
+ authDecoder,
+ getCurrentUser,
+ // highlight-end
+ loggerConfig: { logger, options: {} },
+ directives,
+ sdls,
+ services,
+ onException: () => {
+ // Disconnect from your database with an unhandled exception.
+ db.$disconnect()
+ },
+})
+```
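+To make the decoding step concrete: a JWT's payload is just base64url-encoded JSON sitting between the token's two dots. Here's an illustrative sketch of extracting it. This is not Redwood's or any provider's actual implementation; a real `authDecoder` also verifies the token's signature against the provider's published keys before trusting the payload:
+
+```js
+// Illustrative only: extract a JWT's payload WITHOUT verifying its signature.
+const decodeJwtPayload = (token) => {
+  const payload = token.split('.')[1]
+  return JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'))
+}
+
+// A toy token whose payload is {"sub":"user-1"}:
+const token = [
+  Buffer.from(JSON.stringify({ alg: 'none' })).toString('base64url'),
+  Buffer.from(JSON.stringify({ sub: 'user-1' })).toString('base64url'),
+  'signature',
+].join('.')
+
+decodeJwtPayload(token) // { sub: 'user-1' }
+```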
+
+### Destructuring the `useAuth` hook
+
+That was auth at a high level.
+Now for a few more details on something you'll probably use a lot, the `useAuth` hook.
+
+The `useAuth` hook provides a streamlined interface to your auth provider's client SDK.
+Much of what the functions it returns do is self-explanatory, but the options they take depend on the auth provider:
+
+| Name | Description |
+| :---------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `client` | The client instance used in creating the auth provider. Most of the functions here use this under the hood |
+| `currentUser` | An object containing information about the current user as set on the `api` side, or if the user isn't authenticated, `null` |
+| `getToken` | Returns a JWT |
+| `hasRole` | Determines if the current user is assigned a role like `"admin"` or assigned to any of the roles in an array |
+| `isAuthenticated` | A boolean indicating whether or not the user is authenticated |
+| `loading` | If the auth context is loading |
+| `logIn` | Logs a user in |
+| `logOut` | Logs a user out |
+| `reauthenticate` | Refetch auth data and context. (This one is called internally and shouldn't be something you have to reach for often) |
+| `signUp` | Signs a user up |
+| `userMetadata` | An object containing the user's metadata (or profile information), fetched directly from an instance of the auth provider client. Or if the user isn't authenticated, `null` |
+
+### Protecting routes
+
+You can require that a user be authenticated to navigate to a route by wrapping it in the `PrivateSet` component.
+An unauthenticated user will be redirected to the route specified in the `PrivateSet` component's `unauthenticated` prop:
+
+```tsx title="web/src/Routes.tsx"
+import { Router, Route, PrivateSet } from '@redwoodjs/router'
+
+const Routes = () => {
+  return (
+    <Router>
+      <Route path="/" page={HomePage} name="home" />
+
+      // highlight-next-line
+      <PrivateSet unauthenticated="home">
+        <Route path="/admin" page={AdminPage} name="admin" />
+      </PrivateSet>
+    </Router>
+  )
+}
+```
+
+You can also restrict access by role by passing a role or an array of roles to the `PrivateSet` component's `roles` prop:
+
+```tsx title="web/src/Routes.tsx"
+import { Router, Route, PrivateSet } from '@redwoodjs/router'
+
+const Routes = () => {
+  return (
+    <Router>
+      <Route path="/" page={HomePage} name="home" />
+      <Route path="/forbidden" page={ForbiddenPage} name="forbidden" />
+
+      // highlight-next-line
+      <PrivateSet unauthenticated="forbidden" roles="admin">
+        <Route path="/admin" page={AdminPage} name="admin" />
+      </PrivateSet>
+
+      // highlight-next-line
+      <PrivateSet unauthenticated="forbidden" roles={['author', 'editor']}>
+        <Route path="/posts" page={PostsPage} name="posts" />
+      </PrivateSet>
+    </Router>
+  )
+}
+```
+
+:::note Note about roles
+A route is permitted when the user is authenticated and has **any** of the provided roles, whether passed as a single role like `"admin"` or as an array like `["admin", "editor"]`.
+:::
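+The matching rule in the note above can be sketched like this (illustrative only, not the router's actual source):
+
+```js
+// "Any of the provided roles" check: requiredRoles may be a single string or an array.
+const isAuthorized = (userRoles, requiredRoles) => {
+  const required = Array.isArray(requiredRoles) ? requiredRoles : [requiredRoles]
+  return required.some((role) => userRoles.includes(role))
+}
+
+// A user holding only the "editor" role:
+isAuthorized(['editor'], 'admin') // false
+isAuthorized(['editor'], ['admin', 'editor']) // true
+```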
+
+### api-side currentUser
+
+We briefly mentioned that GraphQL requests include an `Authorization` header in every request when a user is authenticated.
+The api side verifies and decodes the token in this header via the `authDecoder` function.
+While information about the user is technically available at this point, it's still pretty raw.
+You can map it into a real user object via the `getCurrentUser` function.
+Both these functions are passed to the `createGraphQLHandler` function in `api/src/functions/graphql.ts`:
+
+```ts title="api/src/functions/graphql.ts"
+export const handler = createGraphQLHandler({
+ authDecoder,
+ getCurrentUser,
+ // ...
+})
+```
+
+If you're using one of Redwood's official integrations, `authDecoder` comes from the corresponding integration package (in auth0's case, `@redwoodjs/auth-auth0-api`):
+
+```ts
+import { authDecoder } from '@redwoodjs/auth-auth0-api'
+```
+
+If you're rolling your own, you'll have to write it yourself. See the [Custom Auth](./auth/custom.md#api-side) docs for an example.
+
+It's always up to you to write `getCurrentUser`, though the setup command will stub it out for you in `api/src/lib/auth.ts` with plenty of guidance.
+
+`getCurrentUser`'s return value is made globally available on the api side via `context.currentUser` for convenience.
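+As a minimal sketch, `getCurrentUser` might map the raw payload like this. The field names here are hypothetical (the shape of the decoded token depends entirely on your provider, and your stub in `api/src/lib/auth.ts` will have more guidance):
+
+```js
+// Hypothetical mapping from a decoded token payload to a currentUser object.
+const getCurrentUser = async (decoded) => {
+  // No decoded token means no authenticated user
+  if (!decoded) {
+    return null
+  }
+
+  return { id: decoded.sub, email: decoded.email, roles: decoded.roles ?? [] }
+}
+```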
+
+### Locking down the GraphQL api
+
+Use the `requireAuth` and `skipAuth` [GraphQL directives](directives#secure-by-default-with-built-in-directives) to protect individual GraphQL calls.
diff --git a/docs/versioned_docs/version-8.4/background-jobs.md b/docs/versioned_docs/version-8.4/background-jobs.md
new file mode 100644
index 000000000000..1bb62e47d184
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/background-jobs.md
@@ -0,0 +1,790 @@
+# Background Jobs
+
+No one likes waiting in line. This is especially true of your website: users don't want to wait for things to load that don't directly impact the task they're trying to accomplish. Take sending a "welcome" email when a new user signs up: sending the email could take as long as, or longer than, the sum total of everything else that happens during that request. Why make the user wait for it? As long as they eventually get the email, everything is good.
+
+## Concepts
+
+A typical create-user flow could look something like this:
+
+![jobs-before](/img/background-jobs/jobs-before.png)
+
+If we want the email to be sent asynchronously, we can shuttle that process off into a **background job**:
+
+![jobs-after](/img/background-jobs/jobs-after.png)
+
+The user's response is returned much quicker, and the email is sent by another process, literally running in the background. All of the logic around sending the email is packaged up as a **job** and a **job worker** is responsible for executing it.
+
+Each job is completely self-contained and has everything it needs to perform its own task.
+
+### Overview
+
+There are three components to the Background Job system in Redwood:
+
+1. Scheduling
+2. Storage
+3. Execution
+
+**Scheduling** is the main interface to background jobs from within your application code. This is where you tell the system to run a job at some point in the future, whether that's:
+
+- as soon as possible
+- delay for an amount of time before running
+- run at a specific datetime in the future
+
+**Storage** is necessary so that your jobs are decoupled from your running application. The job system interfaces with storage via an **adapter**. With the included `PrismaAdapter`, jobs are stored in your database. This allows you to scale everything independently: the api server (which is scheduling jobs), the database (which is storing the jobs ready to be run), and the job workers (which are executing the jobs).
+
+**Execution** is handled by a **job worker**, which takes a job from storage, executes it, and then does something with the result, whether it was a success or failure.
+
+:::info Job execution time is never guaranteed
+
+When scheduling a job, you're really saying "this is the earliest possible time I want this job to run": based on what other jobs are in the queue, and how busy the workers are, they may not get a chance to execute this one particular job for an indeterminate amount of time.
+
+The only thing that's guaranteed is that a job won't run any _earlier_ than the time you specify.
+
+:::
+
+### Queues
+
+Jobs are organized by a named **queue**. This is simply a string and has no special significance, other than letting you group jobs. Why group them? So that you can potentially have workers with different configurations working on them. Let's say you send a lot of emails, and you find that among all your other jobs, emails are starting to be noticeably delayed when sending. You can start assigning those jobs to the "email" queue and create a new worker group that _only_ focuses on jobs in that queue so that they're sent in a more timely manner.
+
+Jobs are sorted by **priority** before being selected to be worked on. Lower numbers mean higher priority:
+
+![job-queues](/img/background-jobs/jobs-queues.png)
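+In other words, when a worker picks its next job, eligible jobs are ordered by priority first, then by how long they've been waiting. A sketch of that ordering (illustrative only, not the adapter's actual query):
+
+```js
+// Lower `priority` wins; ties are broken by the earlier `runAt`.
+const jobs = [
+  { id: 1, priority: 50, runAt: new Date('2024-07-12T10:05:00Z') },
+  { id: 2, priority: 10, runAt: new Date('2024-07-12T10:10:00Z') },
+  { id: 3, priority: 50, runAt: new Date('2024-07-12T10:00:00Z') },
+]
+
+const ordered = [...jobs].sort(
+  (a, b) => a.priority - b.priority || a.runAt - b.runAt
+)
+
+ordered.map((job) => job.id) // [2, 3, 1]
+```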
+
+You can also increase the number of workers in a group. If we bumped the group working on the "default" queue to 2 and started our new "email" group with 1 worker, once those workers started we would see them working on the following jobs:
+
+![job-workers](/img/background-jobs/jobs-workers.png)
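+As a preview of the configuration covered later in [JobManager Config](#jobmanager-config), that worker arrangement might look something like this in `api/src/lib/jobs.js` (a fragment; the queue names are illustrative):
+
+```js
+export const jobs = new JobManager({
+  adapters: {
+    prisma: new PrismaAdapter({ db, logger }),
+  },
+  // highlight-next-line
+  queues: ['default', 'email'],
+  logger,
+  workers: [
+    // two workers handling the "default" queue
+    { adapter: 'prisma', logger, queue: 'default', count: 2 },
+    // one worker dedicated to the "email" queue
+    // highlight-next-line
+    { adapter: 'prisma', logger, queue: 'email', count: 1 },
+  ],
+})
+```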
+
+## Quick Start
+
+Start here if you want to get up and running with jobs as quickly as possible and worry about the details later.
+
+### Setup
+
+Run the setup command to get the jobs configuration file created and migrate the database with a new `BackgroundJob` table:
+
+```bash
+yarn rw setup jobs
+yarn rw prisma migrate dev
+```
+
+This created `api/src/lib/jobs.js` (or `.ts`) with a sensible default config. You can leave this as is for now.
+
+### Create a Job
+
+```bash
+yarn rw g job SampleJob
+```
+
+This created `api/src/jobs/SampleJob/SampleJob.js` and a test and scenario file. For now the job just outputs a message to the logs, but you'll fill out the `perform()` function to take any arguments you want and perform any work you want to do. Let's update the job to take a user's `id` and then just print that to the logs:
+
+```js
+import { jobs } from 'src/lib/jobs'
+
+export const SampleJob = jobs.createJob({
+ queue: 'default',
+ // highlight-start
+ perform: async (userId) => {
+ jobs.logger.info(`Received user id ${userId}`)
+ },
+ // highlight-end
+})
+```
+
+### Schedule a Job
+
+You'll most likely be scheduling work as the result of one of your service functions being executed. Let's say we want to schedule our `SampleJob` whenever a new user is created:
+
+```js title="api/src/services/users/users.js"
+import { db } from 'src/lib/db'
+// highlight-start
+import { later } from 'src/lib/jobs'
+import { SampleJob } from 'src/jobs/SampleJob'
+// highlight-end
+
+export const createUser = async ({ input }) => {
+ const user = await db.user.create({ data: input })
+ // highlight-next-line
+ await later(SampleJob, [user.id], { wait: 60 })
+ return user
+}
+```
+
+The first argument is the job itself and the second is an array of all the arguments your job should receive. The job defines them as normal, named arguments (like `userId`), but when you schedule it you wrap them in an array (like `[user.id]`). The third argument is an optional object of options; in this case, the number of seconds to `wait` before the job will be run (60 seconds).
+
+### Executing a Job
+
+Start the worker process to find jobs in the DB and execute them:
+
+```bash
+yarn rw jobs work
+```
+
+This process will stay attached to the terminal and show you debug log output as it looks for jobs to run. Note that since we scheduled our job to wait 60 seconds before running, the runner will not find a job to work on right away (unless it's already been a minute since you scheduled it!).
+
+That's the basics of jobs! Keep reading to get a more detailed walkthrough, followed by the API docs listing all the various options. We'll wrap up with a discussion of using jobs in a production environment.
+
+## In-Depth Start
+
+Let's go into more depth in each of the parts of the job system.
+
+### Installation
+
+To get started with jobs, run the setup command:
+
+```bash
+yarn rw setup jobs
+```
+
+This will add a new model to your Prisma schema, and create a configuration file at `api/src/lib/jobs.js` (or `.ts` for a TypeScript project). You'll need to run migrations in order to actually create the model in your database:
+
+```bash
+yarn rw prisma migrate dev
+```
+
+This added the following model:
+
+```prisma
+model BackgroundJob {
+ id Int @id @default(autoincrement())
+ attempts Int @default(0)
+ handler String
+ queue String
+ priority Int
+ runAt DateTime?
+ lockedAt DateTime?
+ lockedBy String?
+ lastError String?
+ failedAt DateTime?
+ createdAt DateTime @default(now())
+ updatedAt DateTime @updatedAt
+}
+```
+
+Let's look at the config file that was generated. Comments have been removed for brevity:
+
+```js
+import { PrismaAdapter, JobManager } from '@redwoodjs/jobs'
+
+import { db } from 'src/lib/db'
+import { logger } from 'src/lib/logger'
+
+export const jobs = new JobManager({
+ adapters: {
+ prisma: new PrismaAdapter({ db, logger }),
+ },
+ queues: ['default'],
+ logger,
+ workers: [
+ {
+ adapter: 'prisma',
+ logger,
+ queue: '*',
+ count: 1,
+ maxAttempts: 24,
+ maxRuntime: 14_400,
+ deleteFailedJobs: false,
+ sleepDelay: 5,
+ },
+ ],
+})
+
+export const later = jobs.createScheduler({
+ adapter: 'prisma',
+})
+```
+
+Two variables are exported: `jobs`, an instance of the `JobManager` on which you'll call functions to create jobs and schedulers, and `later`, an instance of the `Scheduler`, which is responsible for getting your job into the storage system (out of the box this will be the database, thanks to the `PrismaAdapter`).
+
+We'll go into more detail on this file later (see [JobManager Config](#jobmanager-config)), but what's there now is fine to get started creating a job.
+
+### Creating New Jobs
+
+We have a generator that creates a job in `api/src/jobs`:
+
+```bash
+yarn rw g job SendWelcomeEmail
+```
+
+Jobs are defined as a plain object and given to the `createJob()` function (which is called on the `jobs` export in the config file above). An example `SendWelcomeEmailJob` may look something like:
+
+```js
+import { db } from 'src/lib/db'
+import { mailer } from 'src/lib/mailer'
+import { jobs } from 'src/lib/jobs'
+
+export const SendWelcomeEmailJob = jobs.createJob({
+ queue: 'default',
+ perform: async (userId) => {
+ const user = await db.user.findUnique({ where: { id: userId } })
+ await mailer.send(WelcomeEmail({ user }), {
+ to: user.email,
+ subject: `Welcome to the site!`,
+ })
+ },
+})
+```
+
+At a minimum, a job must contain the name of the `queue` the job should be saved to, and a function named `perform()` which contains the logic for your job. You can add additional properties to the object to support the task your job is performing, but `perform()` is what's invoked by the job worker that we'll see later.
+
+Note that `perform()` can take any argument(s) you want (or none at all), but it's a best practice to keep them as simple as possible. With the `PrismaAdapter` the arguments are stored in the database, so the list of arguments must be serializable to and from a string of JSON.
+
+:::info Keeping Arguments Simple
+
+Most jobs will probably act against data in your database, so it makes sense to have the arguments simply be the `id` of those database records. When the job executes it will look up the full database record and then proceed from there.
+
+If it's likely that the data in the database will change before your job is actually run, but you need the job to run with the original data, you may want to include the original values as arguments to your job. This way the job is sure to be working with those original values and not the potentially changed ones in the database.
+
+:::
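+Since the `PrismaAdapter` serializes arguments to JSON, a quick way to reason about what's safe to pass is whether the value survives a JSON round trip:
+
+```js
+// Plain ids, strings, and simple objects survive a JSON round trip intact.
+const args = [42, 'welcome']
+const roundTripped = JSON.parse(JSON.stringify(args))
+// roundTripped is [42, 'welcome'], exactly what perform() will receive
+
+// A Date, by contrast, comes back as a plain ISO string, not a Date:
+const dateArg = JSON.parse(JSON.stringify([new Date(0)]))[0]
+// dateArg is the string '1970-01-01T00:00:00.000Z'
+```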
+
+### Scheduling Jobs
+
+Remember the `later` export in the jobs config file:
+
+```js
+export const later = jobs.createScheduler({
+ adapter: 'prisma',
+})
+```
+
+You call this function, passing the job, job arguments, and an optional options object when you want to schedule a job. Let's see how we'd schedule our welcome email to go out when a new user is created:
+
+```js
+// highlight-start
+import { later } from 'src/lib/jobs'
+import { SendWelcomeEmailJob } from 'src/jobs/SendWelcomeEmailJob'
+// highlight-end
+
+export const createUser = async ({ input }) => {
+ const user = await db.user.create({ data: input })
+ // highlight-next-line
+ await later(SendWelcomeEmailJob, [user.id])
+ return user
+}
+```
+
+By default the job will run as soon as possible. If you wanted to wait five minutes before sending the email you can set a `wait` time to a number of seconds:
+
+```js
+later(SendWelcomeEmailJob, [user.id], { wait: 300 })
+```
+
+Or run it at a specific datetime:
+
+```js
+later(MillenniumAnnouncementJob, [user.id], {
+ waitUntil: new Date(3000, 0, 1, 0, 0, 0),
+})
+```
+
+If we were to query the `BackgroundJob` table after the job has been scheduled you'd see a new row. We can use the Redwood Console to query the table from the command line:
+
+```js
+% yarn rw console
+> db.backgroundJob.findMany()
+[
+ {
+ id: 1,
+ attempts: 0,
+    handler: '{"name":"SendWelcomeEmailJob","path":"SendWelcomeEmailJob/SendWelcomeEmailJob","args":[335]}',
+ queue: 'default',
+ priority: 50,
+ runAt: 2024-07-12T22:27:51.085Z,
+ lockedAt: null,
+ lockedBy: null,
+ lastError: null,
+ failedAt: null,
+ createdAt: 2024-07-12T22:27:51.125Z,
+ updatedAt: 2024-07-12T22:27:51.125Z
+ }
+]
+```
+
+:::info
+
+Because we're using the `PrismaAdapter` here all jobs are stored in the database, but if you were using a different storage mechanism via a different adapter you would have to query those in a manner specific to that adapter's backend.
+
+:::
+
+The `handler` column contains the name of the job, file path to find it, and the arguments its `perform()` function will receive. Where did the `name` and `path` come from? We have a babel plugin that adds them to your job when they are built!
+
+:::warning Jobs Must Be Built
+
+Jobs are run from the `api/dist` directory, which will exist only after running `yarn rw build api` or `yarn rw dev`. If you are working on a job in development, you're probably running `yarn rw dev` anyway. But just be aware that if the dev server is _not_ running then any changes to your job will not be reflected unless you run `yarn rw build api` (or start the dev server) to compile your job into `api/dist`.
+
+:::
+
+### Executing Jobs
+
+In development you can start a job worker via the **job runner** from the command line:
+
+```bash
+yarn rw jobs work
+```
+
+The runner is a sort of overseer that doesn't do any work itself, but spawns workers to actually execute the jobs. When starting in `work` mode your `workers` config will be used to start the workers and they will stay attached to the terminal, updating you on the status of what they're doing:
+
+![image](/img/background-jobs/jobs-terminal.png)
+
+It checks the `BackgroundJob` table every few seconds for a new job and, if it finds one, locks it so that no other workers can have it, then calls your `perform()` function, passing it the arguments you gave when you scheduled it.
+
+If the job succeeds then by default it's removed from the database (with the `PrismaAdapter`; other adapters' behavior may vary). If the job fails, the job is unlocked in the database, `runAt` is set to an incremental backoff time in the future, and `lastError` is updated with the error that occurred. The job will be picked up again once the `runAt` time has passed, and it'll try again.
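+To get a feel for an incremental backoff, here's an illustrative curve. The exact formula Redwood uses may differ; this is the classic `attempts ** 4` style popularized by similar job systems:
+
+```js
+// Each failed attempt pushes the next run further into the future.
+const backoffSeconds = (attempts) => attempts ** 4
+
+backoffSeconds(1) // 1 second after the first failure
+backoffSeconds(3) // 81 seconds after the third
+backoffSeconds(10) // 10,000 seconds (about 2.8 hours) after the tenth
+```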
+
+To stop the runner (and the workers it started), press `Ctrl-C` (or send `SIGINT`). The workers will gracefully shut down, waiting for their work to complete before exiting. If you don't want to wait, hit `Ctrl-C` again (or send `SIGTERM`).
+
+There are a couple of additional modes that `rw jobs` can run in:
+
+```bash
+yarn rw jobs workoff
+```
+
+This mode will execute all jobs that are eligible to run, then stop itself.
+
+```bash
+yarn rw jobs start
+```
+
+Starts the workers and then detaches them to run forever. Use `yarn rw jobs stop` to stop them, or `yarn rw jobs restart` to pick up any code changes to your jobs.
+
+### Everything Else
+
+The rest of this doc describes more advanced usage, like:
+
+- Assigning jobs to named **queues**
+- Setting a **priority** so that some jobs always run before others
+- Using different adapters and loggers on a per-job basis
+- Starting more than one worker
+- Having some workers focus on only certain queues
+- Configuring individual workers to use different adapters
+- Manually starting workers without the job runner monitoring them
+- And more!
+
+## Instantly Running Jobs
+
+As noted in the [Concepts](#concepts) section, a job is never _guaranteed_ to run at an exact time. The worker could be busy working on other jobs and can't get to yours just yet.
+
+If you absolutely, positively need your job to run right _now_ (with the knowledge that the user will be waiting for it to complete) you can call your job's `perform` function directly in your code:
+
+```js
+await SampleEmailJob.perform(user.id)
+```
+
+## Recurring Jobs
+
+A common task for a background job is that it does something on a schedule: run reports every night at midnight, check for abandoned carts every 15 minutes, that sort of thing. We call these recurring jobs.
+
+Redwood's job system will soon have native syntax for setting a job to run repeatedly, but in the meantime you can accomplish this by simply having your job schedule another copy of itself at some interval in the future:
+
+```js
+import { later, jobs } from 'src/lib/jobs'
+
+export const NightlyReportJob = jobs.createJob({
+ queue: 'default',
+ perform: async () => {
+ await DailyUsageReport.run()
+ // highlight-start
+ await later(NightlyReportJob, [], {
+      waitUntil: new Date(new Date().getTime() + 86_400 * 1000),
+ })
+ // highlight-end
+ },
+})
+```
+
+## Configuration
+
+There are a bunch of ways to customize your jobs and the workers.
+
+### JobManager Config
+
+Let's take a closer look at the `jobs` export in `api/src/lib/jobs.js`:
+
+```js
+export const jobs = new JobManager({
+ adapters: {
+ prisma: new PrismaAdapter({ db, logger }),
+ },
+ queues: ['default'],
+ logger,
+ workers: [
+ {
+ adapter: 'prisma',
+ logger,
+ queue: '*',
+ count: 1,
+ maxAttempts: 24,
+ maxRuntime: 14_400,
+ deleteFailedJobs: false,
+ sleepDelay: 5,
+ },
+ ],
+})
+```
+
+The object passed here contains all of the configuration for the Background Job system. Let's take a quick look at the four top-level properties and then we'll get into more details in the subsections to follow.
+
+#### `adapters`
+
+This is the list of adapters that are available to handle storing and retrieving your jobs to and from the storage system. You could list more than one adapter here and then have multiple schedulers. Most folks will probably stick with a single one.
+
+#### `queues`
+
+An array of available queue names that jobs can be placed in. By default, a single queue named "default" is listed here, and will also be the default queue for generated jobs. To denote the named queue that a worker will look at, there is a matching `queue` property on the `workers` config below.
+
+#### `logger`
+
+The logger object used for all internal logging of the job system itself. Falls back to `console` if you don't set it.
+
+#### `workers`
+
+This is an array of objects, each defining a "group" of workers. When will you need more than one group? If you need workers to work on different queues, or use different adapters. Read more about this in the [Job Workers](#job-workers) section.
+
+### Adapter Config
+
+Adapters are added as key/value pairs to the `adapters` object given to the `JobManager` upon initialization. The key of the property (like `prisma` in the example below) is the name you'll use in your scheduler when you tell it which adapter to use to schedule your jobs. Adapters accept an object of options when they are initialized.
+
+#### PrismaAdapter
+
+```js
+export const jobs = new JobManager({
+ adapters: {
+ // highlight-next-line
+ prisma: new PrismaAdapter({ db, model: 'BackgroundJob', logger }),
+ },
+ // remaining config...
+})
+```
+
+- `db`: **[required]** an instance of `PrismaClient` that the adapter will use to store, find and update the status of jobs. In most cases this will be the `db` variable exported from `api/src/lib/db.{js,ts}`. This must be set in order for the adapter to be initialized!
+- `model`: the name of the model that was created to store jobs. This defaults to `BackgroundJob`.
+- `logger`: events that occur within the adapter will be logged using this. This defaults to `console` but the `logger` exported from `api/src/lib/logger` works great.
+
+### Scheduler Config
+
+When you create an instance of the scheduler you can pass it a couple of options:
+
+```js
+export const later = jobs.createScheduler({
+ adapter: 'prisma',
+})
+```
+
+- `adapter` : **[required]** the name of the adapter this scheduler will use to schedule jobs. Must be one of the keys that you gave to the `adapters` option on the JobManager itself.
+- `logger` : the logger to use for this instance of the scheduler. If not provided, defaults to the `logger` set on the `JobManager`.
+
+#### Scheduling Options
+
+When using the scheduler to schedule a job you can pass options in an optional third argument:
+
+```js
+later(SampleJob, [user.id], { wait: 300 })
+```
+
+- `wait`: number of seconds to wait before the job will run
+- `waitUntil`: a specific `Date` in the future to run at
+
+If you don't pass any options, the job defaults to running as soon as possible, i.e. its run time is set to `new Date()`.
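+
+To make that resolution concrete, here's a hypothetical sketch (`computeRunAt` is not part of Redwood's API) of how these two options map to a run time:
+
+```js
+// Sketch only: resolve scheduling options to the time a job should run.
+const computeRunAt = (options = {}, now = new Date()) => {
+  if (options.waitUntil instanceof Date) {
+    return options.waitUntil // run at this exact time
+  }
+  if (typeof options.wait === 'number') {
+    return new Date(now.getTime() + options.wait * 1000) // `wait` is in seconds
+  }
+  return now // no options: run as soon as possible
+}
+```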
+
+### Job Config
+
+There are two configuration options you can define in the object that describes your job:
+
+```js
+import { jobs } from 'src/lib/jobs'
+
+export const SendWelcomeEmailJob = jobs.createJob({
+ // highlight-start
+ queue: 'email',
+ priority: 1,
+ // highlight-end
+ perform: async (userId) => {
+ // job details...
+ },
+})
+```
+
+- `queue` : **[required]** the name of the queue that this job will be placed in. Must be one of the strings you assigned to the `queues` array when you set up the `JobManager`.
+- `priority` : within a queue you can have jobs that are more or less important. The workers pull higher-priority jobs off the queue before lower-priority ones. A lower number means _higher_ priority: workers will work on a job with a priority of `1` before one with a priority of `100`. If you don't override it here, the default priority is `50`.
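+
+Conceptually, a worker's choice of the next job within a queue can be sketched like this (illustrative only; `nextJob` isn't part of the API):
+
+```js
+// Lower `priority` number wins; ties go to the job scheduled to run first.
+const nextJob = (jobs) =>
+  [...jobs].sort((a, b) => a.priority - b.priority || a.runAt - b.runAt)[0]
+```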
+
+### Worker Config
+
+This is the largest section of the `JobManager` config object. This options array tells the workers how to behave when looking for and executing jobs.
+
+```js
+export const jobs = new JobManager({
+ // .. more config
+ workers: [
+ {
+ adapter: 'prisma',
+ logger,
+ queue: '*',
+ count: 1,
+ maxAttempts: 24,
+ maxRuntime: 14_400,
+ deleteFailedJobs: true,
+ deleteSuccessfulJobs: false,
+ sleepDelay: 5,
+ },
+ ],
+})
+```
+
+This is an array of objects. Each object represents the config for a single "group" of workers. By default, there is only one worker group. It uses the `PrismaAdapter` and will look for jobs in all queues. If you want to start fine tuning your workers by working with different adapters, or only working on some named queues, you can add additional members to this array, each with a unique set of options.
+
+- `adapter` : **[required]** the name of the adapter this worker group will use. Must be one of the keys that you gave to the `adapters` option on the `JobManager` itself.
+- `logger` : the logger to use when working on jobs. If not provided, defaults to the `logger` set on the `JobManager`. You can use this logger in the `perform()` function of your job by accessing `jobs.logger`
+- `queue` : **[required]** the named queue(s) in which this worker group will watch for jobs. There is a reserved `'*'` value you can use which means "all queues." This can be an array of queues as well: `['default', 'email']` for example.
+- `count` : **[required]** the number of workers to start with this config.
+- `maxAttempts`: the maximum number of times to retry a job before giving up. A job that throws an error will be set to retry in the future with an exponential backoff in time equal to the number of previous attempts \*\* 4. After this number, a job is considered "failed" and will not be re-attempted. Default: `24`.
+- `maxRuntime` : the maximum amount of time, in seconds, to try running a job before another worker will pick it up and try again. It's up to you to make sure your job doesn't run for longer than this amount of time! Default: `14_400` (4 hours).
+- `deleteFailedJobs` : when a job has failed (maximum number of retries has occurred) you can keep the job in the database, or delete it. Default: `false`.
+- `deleteSuccessfulJobs` : when a job has succeeded, you can keep the job in the database, or delete it. It's generally assumed that your jobs _will_ succeed so it usually makes sense to clear them out and keep the queue lean. Default: `true`.
+- `sleepDelay` : the amount of time, in seconds, to wait before checking the queue for another job to run. Too low and you'll be thrashing your storage system looking for jobs, too high and you start to have a long delay before any job is run. Default: `5`.
+
+See the next section for advanced usage examples, like multiple worker groups.
+
+## Job Workers
+
+A job worker actually executes your jobs. The workers will ask the adapter to find a job to work on. The adapter will mark the job as locked (the process name and a timestamp is set on the job) and then the worker will call `perform()` on your job, passing in any args that were given when you scheduled it. What happens when the job succeeds or fails depends on the config options you set in the `JobManager`. By default, successful jobs are removed from storage and failed jobs are kept around so you can diagnose what happened.
+
+The runner has several modes it can start in depending on how you want it to behave.
+
+### Dev Modes
+
+These modes are ideal when you're creating a job and want to be sure it runs correctly while developing. You could also use this in production if you wanted (maybe a job is failing and you want to watch verbose logs and see what's happening).
+
+```bash
+yarn rw jobs work
+```
+
+This process will stay attached to the console and continually look for new jobs and execute them as they are found. The log level is set to `debug` by default so you'll see everything. Pressing `Ctrl-C` to cancel the process (sending `SIGINT`) will start a graceful shutdown: the workers will complete any work they're in the middle of before exiting. To cancel immediately, hit `Ctrl-C` again (or send `SIGTERM`) and they'll stop in the middle of what they're doing. Note that this could leave locked jobs in the database, but they will be picked back up again if a new worker starts with the same name as the one that locked the process. They'll also be picked up automatically after `maxRuntime` has expired, even if they are still locked.
+
+:::caution Long running jobs
+
+It's currently up to you to make sure your job completes before your `maxRuntime` limit is reached! NodeJS Promises are not truly cancelable: you can reject early, but any Promises that were started _inside_ will continue running unless they are also early rejected, recursively forever.
+
+The only way to guarantee a job will completely stop no matter what is for your job to spawn an actual OS level process with a timeout that kills it after a certain amount of time. We may add this functionality natively to Jobs in the near future: let us know if you'd benefit from this being built in!
+
+:::
+
+To work on whatever outstanding jobs there are and then automatically exit use the `workoff` mode:
+
+```bash
+yarn rw jobs workoff
+```
+
+As soon as there are no more jobs to be executed (either the store is empty, or they are scheduled in the future) the process will automatically exit.
+
+### Clearing the Job Queue
+
+You can remove all jobs from storage with:
+
+```bash
+yarn rw jobs clear
+```
+
+### Production Modes
+
+In production you'll want your job workers running forever in the background. For that, use the `start` mode:
+
+```bash
+yarn rw jobs start
+```
+
+That will start a number of workers determined by the `workers` config on the `JobManager` and then detach them from the console. If you care about the output of that worker then you'll want to have configured a logger that writes to the filesystem or sends to a third party log aggregator.
+
+To stop the workers:
+
+```bash
+yarn rw jobs stop
+```
+
+Or to restart any that are already running:
+
+```bash
+yarn rw jobs restart
+```
+
+### Multiple Workers
+
+With the default configuration options generated with the `yarn rw setup jobs` command you'll have one worker group. If you simply want more workers that use the same `adapter` and `queue` settings, increase the `count`:
+
+```js
+export const jobs = new JobManager({
+ adapters: {
+ prisma: new PrismaAdapter({ db, logger }),
+ },
+ queues: ['default'],
+ logger,
+ workers: [
+ {
+ adapter: 'prisma',
+ logger,
+ queue: '*',
+ // highlight-next-line
+ count: 5,
+ maxAttempts: 24,
+ maxRuntime: 14_400,
+ deleteFailedJobs: false,
+ sleepDelay: 5,
+ },
+ ],
+})
+```
+
+Now you have 5 workers. If you want to have separate workers working on separate queues, create another worker config object with a different queue name:
+
+```js
+export const jobs = new JobManager({
+ adapters: {
+ prisma: new PrismaAdapter({ db, logger }),
+ },
+ queues: ['default'],
+ logger,
+ workers: [
+ {
+ adapter: 'prisma',
+ logger,
+ // highlight-start
+ queue: 'default',
+ // highlight-end
+ count: 1,
+ maxAttempts: 24,
+ maxRuntime: 14_400,
+ deleteFailedJobs: false,
+ sleepDelay: 5,
+ },
+ {
+ adapter: 'prisma',
+ logger,
+ // highlight-start
+ queue: 'email',
+ count: 1,
+ maxAttempts: 1,
+ maxRuntime: 30,
+ deleteFailedJobs: true,
+ // highlight-end
+ sleepDelay: 5,
+ },
+ ],
+})
+```
+
+Here, we have 2 workers working on the "default" queue and 1 worker looking at the "email" queue (which will only try a job once, wait 30 seconds for it to finish, and delete the job if it fails). You can also have different worker groups using different adapters. For example, you may store and work on some jobs in your database using the `PrismaAdapter` and some jobs/workers using a `RedisAdapter`.
+
+:::info
+
+We don't currently provide a `RedisAdapter` but plan to add one soon! You'll want to create additional schedulers to use any other adapters as well:
+
+```js
+export const prismaLater = jobs.createScheduler({
+ adapter: 'prisma',
+})
+
+export const redisLater = jobs.createScheduler({
+ adapter: 'redis',
+})
+```
+
+:::
+
+## Job Errors & Failure
+
+Jobs sometimes don't complete as expected, either because of an error in our code (unlikely, of course) or because a third party service that's being accessed responds in an unexpected way. Luckily, the job system is ready to handle these problems gracefully.
+
+If you're using the `PrismaAdapter` and an uncaught error occurs while the worker is executing your `perform` function, three things happen:
+
+1. The job's `runAt` time is set to a new time in the future, based on an incremental backoff computed from the number of previous attempts at running the job (by default it's `attempts ** 4`)
+2. The error message and backtrace is recorded in the `lastError` field
+3. The job is unlocked so that it's available for another worker to pick up when the time comes
+
+By checking the `lastError` field in the database you can see what the last error was and attempt to correct it, if possible. If the retry occurs and another error is thrown, the same sequence above will happen _unless_ the number of attempts is equal to the `maxAttempts` config variable set in the jobs config. If `maxAttempts` is reached then the job is considered **failed** and will not be rescheduled. `runAt` is set to `NULL`, the `failedAt` timestamp is set to now and, assuming you have `deleteFailedJobs` set to `false`, the job will remain in the database so you can inspect it and potentially correct the problem.
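+
+The backoff schedule is easy to compute yourself:
+
+```js
+// The default backoff: a job that fails on attempt `n` is rescheduled
+// `n ** 4` seconds into the future.
+const backoffSeconds = (attempts) => attempts ** 4
+
+const firstRetries = [1, 2, 3, 4, 5].map(backoffSeconds)
+// → [1, 16, 81, 256, 625] seconds (the 24th attempt waits 331,776s, ~3.8 days)
+```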
+
+## Deployment
+
+For many use cases you may be able to rely on the job runner to start and detach your job workers, which will then run forever:
+
+```bash
+yarn rw jobs start
+```
+
+When you deploy new code you'll want to restart your runners to make sure they get the latest source files:
+
+```bash
+yarn rw jobs restart
+```
+
+Using this utility, however, gives you nothing to monitor that your job workers are still running: the runner starts the required number of workers, detaches them, and then exits itself. Node processes are pretty robust, but they're by no means guaranteed to run forever without problems. You could mistakenly release a bad job with an infinite loop, or even a random gamma ray striking the server's RAM could cause a panic that shuts the process down.
+
+For maximum reliability you should take a look at the [Advanced Job Workers](#advanced-job-workers) section and manually start your workers this way, with a process monitor like [pm2](https://pm2.keymetrics.io/) or [nodemon](https://github.com/remy/nodemon) to watch and restart the workers if something unexpected happens.
+
+:::info
+
+Of course if you have a process monitor system watching your workers you'll want to use the process monitor's version of the `restart` command each time you deploy!
+
+:::
+
+### NODE_ENV
+
+You'll need to explicitly set `NODE_ENV` in environments other than development or test. In a serverful production environment, we like having a `.env` file that includes:
+
+```bash
+NODE_ENV=production
+```
+
+If you're using Docker, make sure you have an `ENV` declaration for it:
+
+```docker
+ENV NODE_ENV="production"
+```
+
+## Advanced Job Workers
+
+As noted above, although the workers are started and detached using the `yarn rw jobs start` command, there is nothing to monitor those workers to make sure they keep running. To do that, you'll want to start the workers yourself (or have your process monitor start them) using command line flags.
+
+You can do this with the `yarn rw-jobs-worker` command. The flags passed to the script tell it which worker group config to use to start itself, and which `id` to give this worker (if you're running more than one). To start a single worker, using the first `workers` config object, you would run:
+
+```bash
+yarn rw-jobs-worker --index=0 --id=0
+```
+
+:::info
+
+The job runner started with `yarn rw jobs start` runs this same command behind the scenes for you, keeping it attached or detached depending on whether you start in `work` or `start` mode!
+
+:::
+
+### Flags
+
+- `--index` : a number that represents the index of the `workers` config array you passed to the `JobManager`. Setting this to `0`, for example, uses the first object in the array to set all config options for the worker.
+- `--id` : a number identifier that's set as part of the process name. Starting a worker with `--id=0` and then inspecting your process list will show one worker running named `rw-job-worker.queue-name.0`. Using `yarn rw-jobs-worker` only ever starts a single instance, so if your config had a `count` of `2` you'd need to run the command twice, once with `--id=0` and a second time with `--id=1`.
+- `--workoff` : a boolean that will execute all currently available jobs and then cause the worker to exit. Defaults to `false`
+- `--clear` : a boolean that starts a worker to remove all jobs from all queues. Defaults to `false`
+
+Your process monitor can now restart the workers automatically if they crash since the monitor is using the worker script itself and not the wrapping job runner.
+
+### What Happens if a Worker Crashes?
+
+If a worker crashes because of circumstances outside of your control, the job will remain locked in the storage system: the worker couldn't finish its work and clean up after itself. When this happens, the job will be picked up again immediately if a new worker starts with the same process title; otherwise, once `maxRuntime` has passed it's eligible for any worker to pick up and re-lock.
+
+## Creating Your Own Adapter
+
+We'd love the community to contribute adapters for Redwood Jobs! Take a look at the source for `BaseAdapter` for what's absolutely required, and then the source for `PrismaAdapter` to see a concrete implementation.
+
+The general gist of the required functions:
+
+- `find()` should find a job to be run, lock it and return it (minimum return of an object containing `id`, `name`, `path`, `args` and `attempts` properties)
+- `schedule()` accepts `name`, `path`, `args`, `runAt`, `queue` and `priority` and should store the job
+- `success()` accepts the same job object returned from `find()` and a `deleteJob` boolean for whether the job should be deleted upon success.
+- `error()` accepts the same job object returned from `find()` and an error instance. Does whatever failure means to you (like unlock the job and reschedule a time for it to run again in the future)
+- `failure()` is called when the job has reached `maxAttempts`. Accepts the job object and a `deleteJob` boolean that says whether the job should be deleted.
+- `clear()` removes all jobs from the queue (mostly used in development).
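+
+To get a feel for the contract, here's an in-memory sketch (illustration only — a real adapter should extend `BaseAdapter` and persist jobs somewhere durable):
+
+```js
+// An in-memory stand-in for a job storage adapter. No persistence and no
+// concurrency safety — just the shape of the required functions.
+class MemoryAdapter {
+  constructor() {
+    this.jobs = []
+    this.nextId = 1
+  }
+
+  schedule({ name, path, args, runAt, queue, priority }) {
+    this.jobs.push({
+      id: this.nextId++,
+      name, path, args, runAt, queue, priority,
+      attempts: 0,
+      lockedBy: null,
+    })
+  }
+
+  find({ processName, queues = ['*'] } = {}) {
+    const now = new Date()
+    const job = this.jobs.find(
+      (j) =>
+        !j.lockedBy &&
+        j.runAt <= now &&
+        (queues.includes('*') || queues.includes(j.queue))
+    )
+    if (job) job.lockedBy = processName // lock it for this worker
+    return job
+  }
+
+  success(job, { deleteJob }) {
+    if (deleteJob) this.jobs = this.jobs.filter((j) => j.id !== job.id)
+  }
+
+  error(job, err) {
+    job.attempts += 1
+    job.lastError = String(err)
+    job.runAt = new Date(Date.now() + job.attempts ** 4 * 1000) // backoff
+    job.lockedBy = null // unlock so another worker can retry later
+  }
+
+  failure(job, { deleteJob }) {
+    if (deleteJob) this.jobs = this.jobs.filter((j) => j.id !== job.id)
+    else job.failedAt = new Date()
+  }
+
+  clear() {
+    this.jobs = []
+  }
+}
+```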
+
+## The Future
+
+There's still more to add to background jobs! Our current TODO list:
+
+- More adapters: Redis, SQS, RabbitMQ...
+- RW Studio integration: monitor the state of your outstanding jobs
+- Baremetal integration: if jobs are enabled, monitor the workers with pm2
+- Recurring jobs (like cron jobs)
+- Lifecycle hooks: `beforePerform()`, `afterPerform()`, `afterSuccess()`, `afterFailure()`
diff --git a/docs/versioned_docs/version-8.4/builds.md b/docs/versioned_docs/version-8.4/builds.md
new file mode 100644
index 000000000000..b5cc87de5f0f
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/builds.md
@@ -0,0 +1,38 @@
+---
+description: What happens when you build your app
+---
+
+# Builds
+
+> ⚠ **Work in Progress** ⚠️
+>
+> There's more to document here. In the meantime, you can check our [community forum](https://community.redwoodjs.com/search?q=yarn%20rw%20build) for answers.
+>
+> Want to contribute? Redwood welcomes contributions and loves helping people become contributors.
+> You can edit this doc [here](https://github.com/redwoodjs/redwoodjs.com/blob/main/docs/builds.md).
+> If you have any questions, just ask for help! We're active on the [forums](https://community.redwoodjs.com/c/contributing/9) and on [discord](https://discord.com/channels/679514959968993311/747258086569541703).
+
+## API
+
+The api side of Redwood is transpiled by Babel into the `./api/dist` folder.
+
+### Steps on Netlify
+
+To emulate Netlify's build steps locally:
+
+```bash
+yarn rw build api
+cd api
+yarn zip-it-and-ship-it dist/functions/ zipballs/
+```
+
+Each lambda function in `./api/dist/functions` is parsed by zip-it-and-ship-it resulting in a zip file per lambda function that contains all the dependencies required for that lambda function.
+
+> Note: The `@netlify/zip-it-and-ship-it` package needs to be installed as a dev dependency in `api/`. Use the command `yarn workspace api add -D @netlify/zip-it-and-ship-it`.
+>
+> - You can learn more about the package [here](https://www.npmjs.com/package/@netlify/zip-it-and-ship-it).
+> - For more information on AWS Serverless Deploy see these [docs](/docs/deploy/serverless).
+
+## Web
+
+The web side of Redwood is built by Vite into the `./web/dist` folder.
diff --git a/docs/versioned_docs/version-8.4/cells.md b/docs/versioned_docs/version-8.4/cells.md
new file mode 100644
index 000000000000..addfa2d6b2ac
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/cells.md
@@ -0,0 +1,415 @@
+---
+description: Declarative data fetching with Cells
+---
+
+# Cells
+
+Cells are a declarative approach to data fetching and one of Redwood's signature modes of abstraction.
+By providing conventions around data fetching, Redwood can get in between the request and the response to do things like query optimization and more, all without you ever having to change your code.
+
+While it might seem like there's a lot of magic involved, all a Cell really does is execute a GraphQL query and manage its lifecycle.
+The idea is that, by exporting named constants that declare what you want your UI to look like throughout a query's lifecycle,
+Redwood can assemble these into a component template at build-time using a Babel plugin.
+All without you having to write a single line of imperative code!
+
+## Generating a Cell
+
+You can generate a Cell with Redwood's Cell generator:
+
+```bash
+yarn rw generate cell <name>
+```
+
+This creates a directory named `<name>Cell` in `web/src/components` with four files:
+
+| File | Description |
+| :---------------------- | :------------------------------------------------------ |
+| `<name>Cell.js` | The actual Cell |
+| `<name>Cell.test.js` | Jest tests for each state of the Cell |
+| `<name>Cell.stories.js` | Storybook stories for each state of the Cell |
+| `<name>Cell.mock.js` | Mock data for both the Jest tests and Storybook stories |
+
+### Single Item Cell vs List Cell
+
+Sometimes you want a Cell that renders a single item and other times you want a Cell that renders a list.
+Redwood's Cell generator can do both.
+
+First, it detects if `<name>` is singular or plural.
+For example, to generate a Cell that renders a list of users, run `yarn rw generate cell users`.
+Second, for irregular words whose singular and plural are the same, such as "equipment" or "pokemon", you can pass `--list` to tell Redwood to generate a list Cell explicitly:
+
+```bash
+yarn rw generate cell equipment --list
+```
+
+## Cells in-depth
+
+A Cell exports five constants: `QUERY`, `Loading`, `Empty`, `Failure` and `Success`. The root query in `QUERY` is the same as `<name>` so that, if you're generating a cell based on a model in your `schema.prisma`, you can get something out of the database right away. But there's a good chance you won't generate your Cell this way, so if you need to, make sure to change the root query. See the [Cells](tutorial/chapter2/cells.md#our-first-cell) section of the Tutorial for a great example of this.
+
+## Usage
+
+With Cells, you have a total of seven exports to work with:
+
+| Name | Type | Description |
+| :------------ | :---------------- | :----------------------------------------------------------- |
+| `QUERY` | `string,function` | The query to execute |
+| `beforeQuery` | `function` | Lifecycle hook; prepares variables and options for the query |
+| `isEmpty` | `function` | Lifecycle hook; decides if the Cell should render Empty |
+| `afterQuery` | `function` | Lifecycle hook; sanitizes data returned from the query |
+| `Loading` | `component` | If the request is in flight, render this component |
+| `Empty` | `component` | If there's no data (`null` or `[]`), render this component |
+| `Failure` | `component` | If something went wrong, render this component |
+| `Success` | `component` | If the data has loaded, render this component |
+
+Only `QUERY` and `Success` are required. If you don't export `Empty`, empty results are sent to `Success`, and if you don't export `Failure`, errors are output to the console.
+
+In addition to displaying the right component at the right time, Cells also funnel the right props to the right component. `Loading`, `Empty`, `Failure`, and `Success` all have access to the props passed down from the Cell in good ol' React fashion, plus most of what the `useQuery` hook returns, as a prop called `queryResult`. In addition to all those props, `Empty` and `Success` also get the `data` returned from the query and an `updating` prop indicating whether the Cell is currently fetching new data. `Failure` also gets `updating` and exclusive access to `error` and `errorCode`.
+
+We mentioned above that Cells receive "most" of what's returned from the `useQuery` hook. You can see exactly what `useQuery` returns in Apollo Client's [API reference](https://www.apollographql.com/docs/react/api/react/hooks/#result). Again note that `error` and `data` get some special treatment.
+
+### QUERY
+
+`QUERY` can be a string or a function. If `QUERY` is a function, it has to return a valid GraphQL document.
+
+It's more than OK to have more than one root query. Here's an example:
+
+```jsx {7-10}
+export const QUERY = gql`
+  query {
+    posts {
+      id
+      title
+    }
+    authors {
+      id
+      name
+    }
+  }
+`
+```
+
+So in this case, both `posts` and `authors` would be available to `Success`:
+
+```jsx
+export const Success = ({ posts, authors }) => {
+ // ...
+}
+```
+
+Normally queries have variables. Cells are set up to use any props they receive from their parent as variables (this happens in `beforeQuery`). For example, here `BlogPostsCell` takes a prop, `numberToShow`, so `numberToShow` is available to your `QUERY`:
+
+```jsx {7}
+import BlogPostsCell from 'src/components/BlogPostsCell'
+
+const HomePage = () => {
+  return (
+    <div>
+      <h1>Home</h1>
+      <BlogPostsCell numberToShow={3} />
+    </div>
+  )
+}
+
+export default HomePage
+```
+
+```jsx {2-3}
+export const QUERY = gql`
+ query ($numberToShow: Int!) {
+ posts(numberToShow: $numberToShow) {
+ id
+ title
+ }
+ }
+`
+```
+
+This means you can think backwards about your Cell's props from your SDL: whatever the variables in your SDL are, that's what your Cell's props should be.
+
+### beforeQuery
+
+`beforeQuery` is a lifecycle hook. The best way to think about it is as a chance to configure [Apollo Client's `useQuery` hook](https://www.apollographql.com/docs/react/api/react/hooks#options).
+
+By default, `beforeQuery` gives any props passed from the parent component to `QUERY` so that they're available as variables for it. It'll also set the fetch policy to `'cache-and-network'` since we felt it matched the behavior users want most of the time:
+
+```jsx
+export const beforeQuery = (props) => {
+ return {
+ variables: props,
+ fetchPolicy: 'cache-and-network',
+ }
+}
+```
+
+For example, if you wanted to turn on Apollo's polling option, and prevent caching, you could export something like this (see Apollo's docs on [polling](https://www.apollographql.com/docs/react/data/queries/#polling) and [caching](https://www.apollographql.com/docs/react/data/queries/#setting-a-fetch-policy))
+
+```jsx
+export const beforeQuery = (props) => {
+ return { variables: props, fetchPolicy: 'no-cache', pollInterval: 2500 }
+}
+```
+
+You can also use `beforeQuery` to populate variables with data not included in the Cell's props (like from React's Context API or a global state management library). If you provide a `beforeQuery` function, the Cell will automatically change the type of its props to match the first argument of the function.
+
+```jsx
+// The Cell will take no props:
+export const beforeQuery = () => {
+ const { currentUser } = useAuth()
+
+ return {
+ variables: { userId: currentUser.id },
+ }
+}
+```
+
+```jsx
+// The cell will take 1 prop named "word" that is a string:
+export const beforeQuery = ({ word }: { word: string }) => {
+ return {
+ variables: { magicWord: word }
+ }
+}
+```
+
+### isEmpty
+
+`isEmpty` is an optional lifecycle hook. It returns a boolean to indicate if the Cell should render empty. Use it to override the default check, which checks if the Cell's root fields are null or empty arrays.
+
+It receives two parameters: 1) the `data`, and 2) an object that has the default `isEmpty` function, named `isDataEmpty`, so that you can extend the default:
+
+```jsx
+export const isEmpty = (data, { isDataEmpty }) => {
+ return isDataEmpty(data) || data?.blog?.status === 'hidden'
+}
+```
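+
+For reference, the default check behaves roughly like this sketch (an assumption based on the description above, not the actual source):
+
+```jsx
+// Empty means: every root field is null or an empty array.
+const isDataEmpty = (data) =>
+  Object.values(data).every(
+    (field) => field === null || (Array.isArray(field) && field.length === 0)
+  )
+```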
+
+### afterQuery
+
+`afterQuery` is a lifecycle hook. It runs just before data gets to `Success`.
+Use it to sanitize data returned from `QUERY` before it gets there.
+
+By default, `afterQuery` just returns the data as-is.
+
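+Something like this identity function (a sketch of the default, which a Cell would `export`):
+
+```jsx
+// Pass the query result through untouched.
+const afterQuery = (data) => ({ ...data })
+```
+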
+### Loading
+
+If there's no cached data and the request is in flight, a Cell renders `Loading`.
+
+When you're developing locally, you can catch your Cell waiting to hear back for a moment if you set your speed in the Inspector's **Network** tab to something like "Slow 3G".
+
+But why bother with Slow 3G when Redwood comes with Storybook? Storybook makes developing components like `Loading` (and `Failure`) a breeze. We don't have to put up with hacky workarounds like Slow 3G or intentionally breaking our app just to develop our components.
+
+### Empty
+
+A Cell renders this component if there's no data.
+By no data, we mean if the response is 1) `null` or 2) an empty array (`[]`).
+
+### Failure
+
+A Cell renders this component if something went wrong. You can quickly see this in action if you add an untyped field to your `QUERY`:
+
+```jsx {6}
+const QUERY = gql`
+ query {
+ posts {
+ id
+ title
+ unTypedField
+ }
+ }
+`
+```
+
+But, like `Loading`, Storybook is probably a better place to develop this.
+
+
+
+In this example, we use the `errorCode` to conditionally render the error heading title, and we also use it for our translation string.
+
+```jsx
+export const Failure = ({ error, errorCode }: CellFailureProps) => {
+  const { t } = useTranslation()
+  return (
+    <>
+      <h1>{errorCode === 'NO_CONFIG' ? 'NO_CONFIG' : 'ERROR'}</h1>
+      Error: {error.message} - Code: {errorCode} - {t(`error.${errorCode}`)}
+    </>
+  )
+}
+```
+
+### Success
+
+If everything went well, a Cell renders `Success`.
+
+As mentioned, Success gets exclusive access to the `data` prop. But if you try to destructure it from `props`, you'll notice that it doesn't exist. This is because Redwood adds a layer of convenience: Redwood spreads `data` into `Success` so that you can just destructure whatever data you were expecting from your `QUERY` directly.
+
+So, if you're querying for `posts` and `authors`, instead of doing:
+
+```jsx
+export const Success = ({ data }) => {
+ const { posts, authors } = data
+
+ // ...
+}
+```
+
+Redwood lets you do:
+
+```jsx
+export const Success = ({ posts, authors }) => {
+ // ...
+}
+```
+
+Note that you can still pass any other props to `Success`. After all, it's just a React component.
+
+:::tip
+
+Looking for info on how TypeScript works with Cells? Check out the [Utility Types](typescript/utility-types.md#cells) doc.
+
+:::
+
+### When should I use a Cell?
+
+Whenever you want to fetch data. Let Redwood juggle what's displayed when. You just focus on what those things should look like.
+
+While you can use a Cell whenever you want to fetch data, it's important to note that you don't have to. You can do anything you want! For example, for one-off queries, there's always `useApolloClient`. This hook returns the client, which you can use to execute queries, among other things:
+
+```jsx
+// In a react component...
+
+client = useApolloClient()
+
+client.query({
+ query: gql`
+ ...
+ `,
+})
+```
+
+### Can I Perform a Mutation in a Cell?
+
+Absolutely. We do so in our [example todo app](https://github.com/redwoodjs/example-todo/blob/f29069c9dc89fa3734c6f99816442e14dc73dbf7/web/src/components/TodoListCell/TodoListCell.js#L26-L44).
+We also don't think it's an anti-pattern to do so. Far from it—your cells might end up containing a lot of logic and really serve as the hub of your app in many ways.
+
+It's also important to remember that, besides exporting certain things with certain names, there aren't many rules around Cells—everything you can do in a regular component still goes.
+
+## How Does Redwood Know a Cell is a Cell?
+
+You just have to end a filename in "Cell", right? Well, while that's basically correct, there is one other thing you should know.
+
+Redwood looks for all files ending in "Cell" (so if you want your component to be a Cell, its filename does have to end in "Cell"), but if the file 1) doesn't export a const named `QUERY` and 2) has a default export, then it'll be skipped.
+
+When would you want to do this? If you just want a file to end in "Cell" for some reason. Otherwise, don't worry about it!
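+
+To make the rule concrete, here's a small plain-JavaScript model of that check (an illustrative sketch only, not Redwood's actual build code):
+
+```jsx
+// Simplified model of Cell detection: a file whose name ends in "Cell"
+// is treated as a Cell unless it lacks a QUERY export AND has a default export.
+const isCell = (filename, exports) => {
+  if (!filename.endsWith('Cell')) return false
+  return 'QUERY' in exports || !('default' in exports)
+}
+
+console.log(isCell('UserCell', { QUERY: 'query...', Success: () => null })) // true
+console.log(isCell('UserCell', { default: () => null })) // false: skipped
+console.log(isCell('User', { QUERY: 'query...' })) // false: name doesn't end in "Cell"
+```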
+
+
+## Advanced Example: Implementing a Cell Yourself
+
+If we didn't do all that build-time stuff for you, how might you go about implementing a Cell yourself?
+
+Consider the [example from the Tutorial](tutorial/chapter2/cells.md#our-first-cell) where we're fetching posts:
+
+```jsx
+export const QUERY = gql`
+ query {
+ posts {
+ id
+ title
+ body
+ createdAt
+ }
+ }
+`
+
+export const Loading = () => <div>Loading...</div>
+
+export const Empty = () => <div>No posts yet!</div>
+
+export const Failure = ({ error }) => (
+  <div>Error loading posts: {error.message}</div>
+)
+
+export const Success = ({ posts }) => {
+  return posts.map((post) => (
+    <article key={post.id}>
+      <h2>{post.title}</h2>
+      <div>{post.body}</div>
+    </article>
+  ))
+}
+```
+
+And now let's say that Babel isn't going to come along and assemble our exports. What might we do?
+
+We'd probably do something like this:
+
+
+
+```jsx
+const QUERY = gql`
+  query {
+    posts {
+      id
+      title
+      body
+      createdAt
+    }
+  }
+`
+
+const Loading = () => <div>Loading...</div>
+
+const Empty = () => <div>No posts yet!</div>
+
+const Failure = ({ error }) => (
+  <div>Error loading posts: {error.message}</div>
+)
+
+const Success = ({ posts }) => {
+  return posts.map((post) => (
+    <article key={post.id}>
+      <h2>{post.title}</h2>
+      <div>{post.body}</div>
+    </article>
+  ))
+}
+
+const dataField = (data) => {
+  return data[Object.keys(data)[0]]
+}
+
+const isDataNull = (data) => {
+  return dataField(data) === null
+}
+
+const isDataEmptyArray = (data) => {
+  const field = dataField(data)
+  return Array.isArray(field) && field.length === 0
+}
+
+const isEmpty = (data) => {
+  return isDataNull(data) || isDataEmptyArray(data)
+}
+
+export const Cell = () => {
+  return (
+    <Query query={QUERY}>
+      {({ error, loading, data }) => {
+        if (error) {
+          if (Failure) {
+            return <Failure error={error} />
+          } else {
+            console.error(error)
+          }
+        } else if (loading) {
+          return <Loading />
+        } else if (data) {
+          if (typeof Empty !== 'undefined' && isEmpty(data)) {
+            return <Empty />
+          } else {
+            return <Success {...data} />
+          }
+        } else {
+          throw 'Cannot render Cell: graphQL success but `data` is null'
+        }
+      }}
+    </Query>
+  )
+}
+```
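+
+The emptiness check above only inspects the first root field of `data`. Pulled out of React, that logic is easy to exercise on its own (a sketch assuming the same single-root-field convention):
+
+```jsx
+// A query result counts as "empty" when its first root field
+// is null or an empty array.
+const dataField = (data) => data[Object.keys(data)[0]]
+
+const isEmpty = (data) => {
+  const field = dataField(data)
+  return field === null || (Array.isArray(field) && field.length === 0)
+}
+
+console.log(isEmpty({ posts: [] })) // true: empty list
+console.log(isEmpty({ post: null })) // true: no record
+console.log(isEmpty({ posts: [{ id: 1 }] })) // false: has data
+```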
+
+That's a lot of code. A lot of imperative code too.
+
+We're basically just dumping the contents of [createCell.tsx](https://github.com/redwoodjs/redwood/blob/main/packages/web/src/components/cell/createCell.tsx) into this file. Can you imagine having to do this every time you wanted to fetch data that might be delayed in responding? Yikes.
diff --git a/docs/versioned_docs/version-8.4/cli-commands.md b/docs/versioned_docs/version-8.4/cli-commands.md
new file mode 100644
index 000000000000..24326dc39c08
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/cli-commands.md
@@ -0,0 +1,2288 @@
+---
+description: A comprehensive reference of Redwood's CLI
+---
+
+# Command Line Interface
+
+The following is a comprehensive reference of the Redwood CLI. You can get a glimpse of all the commands by scrolling the aside to the right.
+
+The Redwood CLI has two entry-point commands:
+
+1. **redwood** (alias `rw`), which is for developing an application, and
+2. **redwood-tools** (alias `rwt`), which is for contributing to the framework.
+
+This document covers the `redwood` command. For `redwood-tools`, see [Contributing](https://github.com/redwoodjs/redwood/blob/main/CONTRIBUTING.md#cli-reference-redwood-tools) in the Redwood repo.
+
+**A Quick Note on Syntax**
+
+We use [yargs](http://yargs.js.org/) and borrow its syntax here:
+
+```
+yarn redwood generate page <name> [path] --option
+```
+
+- `redwood g page` is the command.
+- `<name>` and `[path]` are positional arguments.
+ - `<>` denotes a required argument.
+ - `[]` denotes an optional argument.
+- `--option` is an option.
+
+Every argument and option has a type. Here `<name>` and `[path]` are strings and `--option` is a boolean.
+
+You'll also sometimes see arguments with trailing `..` like:
+
+```
+yarn redwood build [side..]
+```
+
+The `..` operator indicates that the argument accepts an array of values. See [Variadic Positional Arguments](https://github.com/yargs/yargs/blob/master/docs/advanced.md#variadic-positional-arguments).
+
+## create redwood-app
+
+Create a Redwood project using the yarn create command:
+
+```
+yarn create redwood-app <project directory> [option]
+```
+
+| Arguments & Options | Description |
+| :--------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `project directory` | Specify the project directory [Required] |
+| `--yarn-install` | Enables the yarn install step and version-requirement checks. You can pass `--no-yarn-install` to disable this behavior |
+| `--typescript`, `--ts` | Generate a TypeScript project. JavaScript by default |
+| `--overwrite` | Create the project even if the specified project directory isn't empty |
+| `--no-telemetry` | Disable sending telemetry events for this create command and all Redwood CLI commands: [https://telemetry.redwoodjs.com](https://telemetry.redwoodjs.com) |
+| `--yarn1` | Use yarn 1 instead of yarn 3 |
+| `--git-init`, `--git` | Initialize a git repo during the install process, disabled by default |
+
+If you run into trouble during the yarn install step, which may happen if you're developing on an external drive and in other miscellaneous scenarios, try the `--yarn1` flag:
+
+```
+yarn create redwood-app my-redwood-project --yarn1
+```
+
+## build
+
+Build for production.
+
+```bash
+yarn redwood build [side..]
+```
+
+We use Babel to transpile the api side into `./api/dist` and Vite to package the web side into `./web/dist`.
+
+| Arguments & Options | Description |
+| :------------------ | :------------------------------------------------------------------------------- |
+| `side` | Which side(s) to build. Choices are `api` and `web`. Defaults to `api` and `web` |
+| `--verbose, -v` | Print more information while building |
+
+#### Usage
+
+See [Builds](builds.md).
+
+#### Example
+
+Running `yarn redwood build` without any arguments generates the Prisma client and builds both sides of your project:
+
+```bash
+~/redwood-app$ yarn redwood build
+yarn run v1.22.4
+$ /redwood-app/node_modules/.bin/redwood build
+✔ Generating the Prisma client...
+✔ Building "api"...
+✔ Building "web"...
+Done in 17.37s.
+```
+
+Files are output to each side's `dist` directory:
+
+```plaintext {2,6}
+├── api
+│ ├── dist
+│ ├── prisma
+│ └── src
+└── web
+ ├── dist
+ ├── public
+ └── src
+```
+
+## check (alias diagnostics)
+
+Get structural diagnostics for a Redwood project (experimental).
+
+```
+yarn redwood check
+```
+
+#### Example
+
+```bash
+~/redwood-app$ yarn redwood check
+yarn run v1.22.4
+web/src/Routes.js:14:5: error: You must specify a 'notfound' page
+web/src/Routes.js:14:19: error: Duplicate Path
+web/src/Routes.js:15:19: error: Duplicate Path
+web/src/Routes.js:17:40: error: Page component not found
+web/src/Routes.js:17:19: error (INVALID_ROUTE_PATH_SYNTAX): Error: Route path contains duplicate parameter: "/{id}/{id}"
+```
+
+## console (alias c)
+
+Launch an interactive Redwood shell (experimental):
+
+- This has not yet been tested on Windows.
+- The Prisma Client must be generated _prior_ to running this command, e.g. `yarn redwood prisma generate`. This is a known issue.
+
+```
+yarn redwood console
+```
+
+Right now, you can only use the Redwood console to interact with your database (always with `await`):
+
+#### Example
+
+```bash
+~/redwood-app$ yarn redwood console
+yarn run v1.22.4
+> await db.user.findMany()
+> [ { id: 1, email: 'tom@redwoodjs.com', name: 'Tom' } ]
+```
+
+## data-migrate
+
+Data migration tools.
+
+```bash
+yarn redwood data-migrate
+```
+
+| Command | Description |
+| :-------- | :------------------------------------------------------------------------------------------ |
+| `install` | Appends `DataMigration` model to `schema.prisma`, creates `api/db/dataMigrations` directory |
+| `up` | Executes outstanding data migrations |
+
+### data-migrate install
+
+- Appends a `DataMigration` model to `schema.prisma` for tracking which data migrations have already run.
+- Creates a DB migration using `yarn redwood prisma migrate dev --create-only create_data_migrations`.
+- Creates `api/db/dataMigrations` directory to contain data migration scripts
+
+```bash
+yarn redwood data-migrate install
+```
+
+### data-migrate up
+
+Executes outstanding data migrations against the database. Compares the list of files in `api/db/dataMigrations` to the records in the `DataMigration` table in the database and executes any files not present.
+
+If an error occurs during script execution, any remaining scripts are skipped and console output will let you know the error and how many subsequent scripts were skipped.
+
+```bash
+yarn redwood data-migrate up
+```
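+
+Conceptually, `up` computes a set difference between the files on disk and the recorded runs (an illustrative sketch, not Redwood's implementation):
+
+```jsx
+// Outstanding data migrations: files in api/db/dataMigrations with no
+// matching record in the DataMigration table, executed in sorted order.
+const outstanding = (files, ranVersions) => {
+  const ran = new Set(ranVersions)
+  return files.filter((file) => !ran.has(file)).sort()
+}
+
+console.log(
+  outstanding(
+    ['20240101-a.js', '20240301-c.js', '20240201-b.js'],
+    ['20240101-a.js']
+  )
+) // ['20240201-b.js', '20240301-c.js']
+```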
+
+## dev
+
+Start development servers for api and web.
+
+```bash
+yarn redwood dev [side..]
+```
+
+`yarn redwood dev api` starts the Redwood dev server and `yarn redwood dev web` starts the Vite dev server with Redwood's config.
+
+| Argument | Description |
+| :----------------- | :------------------------------------------------------------------------------------- |
+| `side` | Which dev server(s) to start. Choices are `api` and `web`. Defaults to `api` and `web` |
+| `--forward, --fwd` | String of one or more Vite Dev Server config options. See example usage below |
+
+#### Usage
+
+If you're only working on your sdl and services, you can run just the api server to get GraphQL Playground on port 8911:
+
+```bash
+~/redwood-app$ yarn redwood dev api
+yarn run v1.22.4
+$ /redwood-app/node_modules/.bin/redwood dev api
+$ /redwood-app/node_modules/.bin/dev-server
+15:04:51 api | Listening on http://localhost:8911
+15:04:51 api | Watching /home/dominic/projects/redwood/redwood-app/api
+15:04:51 api |
+15:04:51 api | Now serving
+15:04:51 api |
+15:04:51 api | ► http://localhost:8911/graphql/
+```
+
+Using `--forward` (alias `--fwd`), you can pass one or more Vite Dev Server [config options](https://vitejs.dev/guide/cli#vite). The following will run the dev server, set the port to `1234`, and disable automatic browser opening.
+
+```bash
+~/redwood-app$ yarn redwood dev --fwd="--port=1234 --open=false"
+```
+
+You may need to access your dev application from a different host, like your mobile device or an SSH tunnel. To resolve the “Invalid Host Header” message, run the following:
+
+```bash
+~/redwood-app$ yarn redwood dev --fwd="--allowed-hosts all"
+```
+
+For the full list of Vite Dev Server settings, see [this documentation](https://vitejs.dev/guide/cli#vite).
+
+For the full list of Server Configuration settings, see [this documentation](app-configuration-redwood-toml.md#api).
+
+## deploy
+
+Deploy your redwood project to a hosting provider target.
+
+**Netlify, Vercel, and Render**
+
+For hosting providers that auto deploy from Git, the deploy command runs the set of steps to build, apply production DB changes, and apply data migrations. In this context, it is often referred to as a Build Command. _Note: for Render, which uses traditional infrastructure, the command also starts Redwood's api server._
+
+**AWS**
+
+This command runs the steps to both build your project _and_ deploy it to AWS.
+
+```
+yarn redwood deploy <target>
+```
+
+| Commands | Description |
+| :---------------------------- | :--------------------------------------- |
+| `serverless` | Deploy to AWS using Serverless framework |
+| `netlify [...commands]` | Build command for Netlify deploy |
+| `render [...commands]` | Build command for Render deploy |
+| `vercel [...commands]` | Build command for Vercel deploy |
+
+### deploy serverless
+
+Deploy to AWS CloudFront and Lambda using [Serverless](https://www.serverless.com/) framework
+
+```
+yarn redwood deploy serverless
+```
+
+| Options & Arguments | Description |
+| :------------------ | :------------------------------------------------------------------------------------------------------------------------------------------ |
+| `--side` | Which side(s) to deploy [choices: "api", "web"] [default: "web","api"] |
+| `--stage` | serverless stage, see [serverless stage docs](https://www.serverless.com/blog/stages-and-environments) [default: "production"] |
+| `--pack-only` | Only package the build for deployment |
+| `--first-run` | Use this flag the first time you deploy. The first deploy wizard will walk you through configuring your web side to connect to the api side |
+
+### deploy netlify
+
+Build command for Netlify deploy
+
+```
+yarn redwood deploy netlify
+```
+
+| Options | Description |
+| :--------------------- | :-------------------------------------------------- |
+| `--build` | Build for production [default: "true"] |
+| `--prisma` | Apply database migrations [default: "true"] |
+| `--data-migrate, --dm` | Migrate the data in your database [default: "true"] |
+
+#### Example
+
+The following command will build, apply Prisma DB migrations, and skip data migrations.
+
+```
+yarn redwood deploy netlify --no-data-migrate
+```
+
+:::warning
+While you may be tempted to use the [Netlify CLI](https://cli.netlify.com) commands to [build](https://cli.netlify.com/commands/build) and [deploy](https://cli.netlify.com/commands/deploy) your project directly from your local project directory, doing so **will lead to errors when deploying and/or when running functions**: errors in the function needed for the GraphQL server, but also in other serverless functions.
+
+The main reason for this is that these Netlify CLI commands simply build and deploy -- they build your project locally and then push the dist folder. That means that when building a RedwoodJS project, the [Prisma client is generated with binaries matching the operating system at build time](https://cli.netlify.com/commands/link) -- and not binaries [compatible with the OS](https://www.prisma.io/docs/reference/api-reference/prisma-schema-reference#binarytargets-options) that runs functions on Netlify. Your Prisma client engine may be `darwin` for OSX or `windows` for Windows, but it needs to be `debian-openssl-1.1.x` or `rhel-openssl-1.1.x`. If the client is incompatible, your functions will fail.
+
+Therefore, please follow the [instructions in the Tutorial](tutorial/chapter4/deployment.md#netlify) to sync your GitHub (or other compatible source control service) repository with Netlify and allow their build and deploy system to manage deployments.
+
+The [Netlify CLI](https://cli.netlify.com) still works well for [linking your project to your site](https://cli.netlify.com/commands/link), testing local builds and also using their [dev](https://cli.netlify.com/commands/dev) or [dev --live](https://cli.netlify.com/commands/dev) to share your local dev server via a tunnel.
+:::
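+
+If you do need engines for more than one platform, Prisma's `binaryTargets` setting in `schema.prisma` is the knob that controls which ones get generated (shown here with the Netlify-compatible target mentioned above; confirm the right target for your runtime in Prisma's docs):
+
+```
+generator client {
+  provider      = "prisma-client-js"
+  binaryTargets = ["native", "rhel-openssl-1.1.x"]
+}
+```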
+
+### deploy render
+
+Build (web) and Start (api) command for Render deploy. (For usage instructions, see the Render [Deploy Redwood](https://render.com/docs/deploy-redwood) doc.)
+
+```
+yarn redwood deploy render
+```
+
+| Options & Arguments | Description |
+| :--------------------- | :-------------------------------------------------- |
+| `side` | select side to build [choices: "api", "web"] |
+| `--prisma` | Apply database migrations [default: "true"] |
+| `--data-migrate, --dm` | Migrate the data in your database [default: "true"] |
+| `--serve` | Run server for api in production [default: "true"] |
+
+#### Example
+
+The following command will build the Web side for static-site CDN deployment.
+
+```
+yarn redwood deploy render web
+```
+
+The following command will apply Prisma DB migrations, run data migrations, and start the api server.
+
+```
+yarn redwood deploy render api
+```
+
+### deploy vercel
+
+Build command for Vercel deploy
+
+```
+yarn redwood deploy vercel
+```
+
+| Options | Description |
+| :--------------------- | :-------------------------------------------------- |
+| `--build` | Build for production [default: "true"] |
+| `--prisma` | Apply database migrations [default: "true"] |
+| `--data-migrate, --dm` | Migrate the data in your database [default: "true"] |
+
+#### Example
+
+The following command will build, apply Prisma DB migrations, and skip data migrations.
+
+```
+yarn redwood deploy vercel --no-data-migrate
+```
+
+## destroy (alias d)
+
+Rollback changes made by the generate command.
+
+```
+yarn redwood destroy
+```
+
+| Command | Description |
+| :------------------- | :------------------------------------------------------------------------------ |
+| `cell <name>`        | Destroy a cell component                                                        |
+| `component <name>`   | Destroy a component                                                             |
+| `function <name>`    | Destroy a Function                                                              |
+| `layout <name>`      | Destroy a layout component                                                      |
+| `page <name> [path]` | Destroy a page and route component                                              |
+| `scaffold <model>`   | Destroy pages, SDL, and Services files based on a given DB schema Model         |
+| `sdl <model>`        | Destroy a GraphQL schema and service component based on a given DB schema Model |
+| `service <name>`     | Destroy a service component                                                     |
+| `directive <name>`   | Destroy a directive                                                             |
+
+## exec
+
+Execute scripts generated by [`yarn redwood generate script <name>`](#generate-script) to run one-off operations, long-running jobs, or utility scripts.
+
+#### Usage
+
+You can pass any flags to the command and use them within your script:
+
+```
+❯ yarn redwood exec syncStripeProducts foo --firstParam 'hello' --two 'world'
+
+[18:13:56] Generating Prisma client [started]
+[18:13:57] Generating Prisma client [completed]
+[18:13:57] Running script [started]
+:: Executing script with args ::
+{ _: [ 'exec', 'foo' ], firstParam: 'hello', two: 'world', '$0': 'rw' }
+[18:13:58] Running script [completed]
+✨ Done in 4.37s.
+```
+
+**Examples of CLI scripts:**
+
+- One-off scripts—such as syncing your Stripe products to your database
+- A background worker you can off-load long-running tasks to
+- Custom seed scripts for your application during development
+
+See [this how to](how-to/background-worker.md) for an example of using exec to run a background worker.
+
+## experimental (alias exp)
+
+Set up and run experimental features.
+
+Some caveats:
+
+- these features do not follow SemVer (may be breaking changes in minor and patch releases)
+- these features may be deprecated or removed (anytime)
+- your feedback is wanted and necessary!
+
+For more information, including details about specific features, see this Redwood Forum category:
+[Experimental Features](https://community.redwoodjs.com/c/experimental-features/25)
+
+**Available Experimental Features**
+View all that can be _set up_:
+
+```
+yarn redwood experimental --help
+```
+
+## generate (alias g)
+
+Save time by generating boilerplate code.
+
+```
+yarn redwood generate
+```
+
+Some generators require that their argument be a model in your `schema.prisma`. When they do, their argument is named `<model>`.
+
+| Command | Description |
+| ---------------------- | ----------------------------------------------------------------------------------------------------- |
+| `cell <name>`          | Generate a cell component                                                                       |
+| `component <name>`     | Generate a component                                                                            |
+| `dataMigration <name>` | Generate a data migration script                                                                |
+| `dbAuth`               | Generate sign in, sign up and password reset pages for dbAuth                                   |
+| `deploy <provider>`    | Generate a deployment configuration                                                             |
+| `function <name>`      | Generate a Function                                                                             |
+| `job <name>`           | Generate a background job                                                                       |
+| `layout <name>`        | Generate a layout component                                                                     |
+| `page <name> [path]`   | Generate a page component                                                                       |
+| `scaffold <model>`     | Generate Pages, SDL, and Services files based on a given DB schema Model                        |
+| `sdl <model>`          | Generate a GraphQL schema and service object                                                    |
+| `secret`               | Generate a secret key using a cryptographically-secure source of entropy                        |
+| `service <name>`       | Generate a service component                                                                    |
+| `types`                | Generate types and supplementary code                                                           |
+| `script <name>`        | Generate a script that can use your services/libs to execute with `redwood exec script <name>`  |
+
+### TypeScript generators
+
+If your project is configured for TypeScript (see the [TypeScript docs](typescript/index)), the generators will automatically detect and generate `.ts`/`.tsx` files for you.
+
+**Undoing a Generator with a Destroyer**
+
+Most generate commands (i.e., everything but `yarn redwood generate dataMigration`) can be undone by their corresponding destroy command. For example, `yarn redwood generate cell` can be undone with `yarn redwood destroy cell`.
+
+### generate cell
+
+Generate a cell component.
+
+```bash
+yarn redwood generate cell <name>
+```
+
+Cells are signature to Redwood. We think they provide a simpler and more declarative approach to data fetching.
+
+| Arguments & Options | Description |
+| -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `name` | Name of the cell |
+| `--force, -f` | Overwrite existing files |
+| `--typescript, --ts` | Generate TypeScript files. Enabled by default if we detect your project is TypeScript                                                              |
+| `--query` | Use this flag to specify a specific name for the GraphQL query. The query name must be unique |
+| `--list` | Use this flag to generate a list cell. This flag is needed when dealing with irregular words whose plural and singular is identical such as equipment or pokemon |
+| `--tests` | Generate test files [default: true] |
+| `--stories` | Generate Storybook files [default: true] |
+| `--rollback` | Rollback changes if an error occurs [default: true] |
+
+#### Usage
+
+The cell generator supports both single items and lists. See the [Single Item Cell vs List Cell](cells.md#single-item-cell-vs-list-cell) section of the Cell documentation.
+
+See the [Cells](tutorial/chapter2/cells.md) section of the Tutorial for usage examples.
+
+**Destroying**
+
+```
+yarn redwood destroy cell <name>
+```
+
+#### Example
+
+Generating a user cell:
+
+```bash
+~/redwood-app$ yarn redwood generate cell user
+yarn run v1.22.4
+$ /redwood-app/node_modules/.bin/redwood g cell user
+✔ Generating cell files...
+✔ Writing `./web/src/components/UserCell/UserCell.test.js`...
+✔ Writing `./web/src/components/UserCell/UserCell.js`...
+Done in 1.00s.
+```
+
+A cell defines and exports five constants: `QUERY`, `Loading`, `Empty`, `Failure`, and `Success`:
+
+```jsx title="./web/src/components/UserCell/UserCell.js"
+export const QUERY = gql`
+ query {
+ user {
+ id
+ }
+ }
+`
+
+export const Loading = () => <div>Loading...</div>
+
+export const Empty = () => <div>Empty</div>
+
+export const Failure = ({ error }) => <div>Error: {error.message}</div>
+
+export const Success = ({ user }) => {
+ return JSON.stringify(user)
+}
+```
+
+### generate component
+
+Generate a component.
+
+```bash
+yarn redwood generate component <name>
+```
+
+Redwood loves function components and makes extensive use of React Hooks, which are only enabled in function components.
+
+| Arguments & Options | Description |
+| -------------------- | ------------------------------------------------------------------------------------ |
+| `name` | Name of the component |
+| `--force, -f` | Overwrite existing files |
+| `--typescript, --ts` | Generate TypeScript files. Enabled by default if we detect your project is TypeScript |
+| `--tests` | Generate test files [default: true] |
+| `--stories` | Generate Storybook files [default: true] |
+| `--rollback` | Rollback changes if an error occurs [default: true] |
+
+**Destroying**
+
+```
+yarn redwood destroy component <name>
+```
+
+#### Example
+
+Generating a user component:
+
+```bash
+~/redwood-app$ yarn redwood generate component user
+yarn run v1.22.4
+$ /redwood-app/node_modules/.bin/redwood g component user
+✔ Generating component files...
+✔ Writing `./web/src/components/User/User.test.js`...
+✔ Writing `./web/src/components/User/User.js`...
+Done in 1.02s.
+```
+
+The component will export some JSX telling you where to find it.
+
+```jsx title="./web/src/components/User/User.js"
+const User = () => {
+  return (
+    <div>
+      <h2>{'User'}</h2>
+      <p>{'Find me in ./web/src/components/User/User.js'}</p>
+    </div>
+  )
+}
+
+export default User
+```
+
+### generate dataMigration
+
+Generate a data migration script.
+
+```
+yarn redwood generate dataMigration <name>
+```
+
+Creates a data migration script in `api/db/dataMigrations`.
+
+| Arguments & Options | Description |
+| :------------------ | :----------------------------------------------------------------------- |
+| `name` | Name of the data migration, prefixed with a timestamp at generation time |
+| `--rollback` | Rollback changes if an error occurs [default: true] |
+
+#### Usage
+
+See the [Data Migration](data-migrations.md) docs.
+
+### generate dbAuth
+
+Generate log in, sign up, forgot password and password reset pages for dbAuth
+
+```
+yarn redwood generate dbAuth
+```
+
+| Arguments & Options | Description |
+| ------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
+| `--username-label` | The label to give the username field on the auth forms, e.g. "Email". Defaults to "Username". If not specified you will be prompted |
+| `--password-label` | The label to give the password field on the auth forms, e.g. "Secret". Defaults to "Password". If not specified you will be prompted |
+| `--webAuthn` | Whether or not to add webAuthn support to the log in page. If not specified you will be prompted |
+| `--rollback` | Rollback changes if an error occurs [default: true] |
+
+If you don't want to create your own log in, sign up, forgot password and
+password reset pages from scratch you can use this generator. The pages will be
+available at /login, /signup, /forgot-password, and /reset-password. Check the
+post-install instructions for one change you need to make to those pages: where
+to redirect the user to once their log in/sign up is successful.
+
+If you'd rather create your own, you might want to start from the generated
+pages anyway as they'll contain the other code you need to actually submit the
+log in credentials or sign up fields to the server for processing.
+
+:::important
+This `generate dbAuth` command simply adds the pages. You must add the necessary dbAuth functions and
+app setup by running `yarn rw setup auth dbAuth` to fully use dbAuth.
+:::
+
+### generate directive
+
+Generate a directive.
+
+```bash
+yarn redwood generate directive <name>
+```
+
+| Arguments & Options | Description |
+| -------------------- | --------------------------------------------------------------------- |
+| `name` | Name of the directive |
+| `--force, -f` | Overwrite existing files |
+| `--typescript, --ts` | Generate TypeScript files (defaults to your project's language target) |
+| `--type` | Directive type [Choices: "validator", "transformer"] |
+| `--rollback` | Rollback changes if an error occurs [default: true] |
+
+#### Usage
+
+See [Redwood Directives](directives.md).
+
+**Destroying**
+
+```
+yarn redwood destroy directive <name>
+```
+
+#### Example
+
+Generating a `myDirective` directive using the interactive command:
+
+```bash
+yarn rw g directive myDirective
+
+? What type of directive would you like to generate? › - Use arrow-keys. Return to submit.
+❯ Validator - Implement a validation: throw an error if criteria not met to stop execution
+  Transformer - Modify values of fields or query responses
+```
+
+### generate function
+
+Generate a Function.
+
+```
+yarn redwood generate function <name>
+```
+
+Not to be confused with JavaScript functions, Capital-F Functions are meant to be deployed to serverless endpoints like AWS Lambda.
+
+| Arguments & Options | Description |
+| -------------------- | ------------------------------------------------------------------------------------ |
+| `name` | Name of the function |
+| `--force, -f` | Overwrite existing files |
+| `--typescript, --ts` | Generate TypeScript files. Enabled by default if we detect your project is TypeScript |
+| `--rollback` | Rollback changes if an error occurs [default: true] |
+
+#### Usage
+
+See the [Custom Function](how-to/custom-function.md) how to.
+
+**Destroying**
+
+```
+yarn redwood destroy function <name>
+```
+
+#### Example
+
+Generating a user function:
+
+```bash
+~/redwood-app$ yarn redwood generate function user
+yarn run v1.22.4
+$ /redwood-app/node_modules/.bin/redwood g function user
+✔ Generating function files...
+✔ Writing `./api/src/functions/user.js`...
+Done in 16.04s.
+```
+
+Functions get passed `context` which provides access to things like the current user:
+
+```jsx title="./api/src/functions/user.js"
+export const handler = async (event, context) => {
+ return {
+ statusCode: 200,
+ body: `user function`,
+ }
+}
+```
+
+Now if we run `yarn redwood dev api`:
+
+```plaintext {11}
+~/redwood-app$ yarn redwood dev api
+yarn run v1.22.4
+$ /redwood-app/node_modules/.bin/redwood dev api
+$ /redwood-app/node_modules/.bin/dev-server
+17:21:49 api | Listening on http://localhost:8911
+17:21:49 api | Watching /home/dominic/projects/redwood/redwood-app/api
+17:21:49 api |
+17:21:49 api | Now serving
+17:21:49 api |
+17:21:49 api | ► http://localhost:8911/graphql/
+17:21:49 api | ► http://localhost:8911/user/
+```
+
+### generate job
+
+Generate a background job file (and optional tests) in `api/src/jobs`.
+
+```bash
+yarn redwood generate job <name>
+```
+
+| Arguments & Options | Description |
+| -------------------- | ------------------------------------------------------------------------------------- |
+| `name` | Name of the job ("Job" suffix is optional) |
+| `--force, -f` | Overwrite existing files |
+| `--typescript, --ts` | Generate TypeScript files. Enabled by default if we detect your project is TypeScript |
+| `--tests` | Generate test files [default: true] |
+
+#### Example
+
+```bash
+yarn redwood generate job WelcomeEmail
+# or
+yarn rw g job WelcomeEmail
+```
+
+:::info Job naming
+By convention a job filename and exported code ends in `Job` and the generate command enforces this. If you don't include "Job" at the end of the name, the generator will add it. For example, with the above command, the file generated would be `api/src/jobs/WelcomeEmailJob/WelcomeEmailJob.{js|ts}`.
+:::
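+
+That suffix rule is simple enough to model directly (an illustrative sketch, not the generator's code):
+
+```jsx
+// Normalize a job name so the file and export always end in "Job".
+const normalizeJobName = (name) => (name.endsWith('Job') ? name : `${name}Job`)
+
+console.log(normalizeJobName('WelcomeEmail')) // "WelcomeEmailJob"
+console.log(normalizeJobName('WelcomeEmailJob')) // "WelcomeEmailJob"
+```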
+
+Learn more about jobs in the [Background Jobs docs](background-jobs).
+
+### generate layout
+
+Generate a layout component.
+
+```bash
+yarn redwood generate layout <name>
+```
+
+Layouts wrap pages and help you stay DRY.
+
+| Arguments & Options | Description |
+| -------------------- | ------------------------------------------------------------------------------------ |
+| `name` | Name of the layout |
+| `--force, -f` | Overwrite existing files |
+| `--typescript, --ts` | Generate TypeScript files. Enabled by default if we detect your project is TypeScript |
+| `--tests` | Generate test files [default: true] |
+| `--stories` | Generate Storybook files [default: true] |
+| `--skipLink` | Generate a layout with a skip link [default: false] |
+| `--rollback` | Rollback changes if an error occurs [default: true] |
+
+#### Usage
+
+See the [Layouts](tutorial/chapter1/layouts.md) section of the tutorial.
+
+**Destroying**
+
+```
+yarn redwood destroy layout <name>
+```
+
+#### Example
+
+Generating a user layout:
+
+```bash
+~/redwood-app$ yarn redwood generate layout user
+yarn run v1.22.4
+$ /redwood-app/node_modules/.bin/redwood g layout user
+✔ Generating layout files...
+✔ Writing `./web/src/layouts/UserLayout/UserLayout.test.js`...
+✔ Writing `./web/src/layouts/UserLayout/UserLayout.js`...
+Done in 1.00s.
+```
+
+A layout will just export its children:
+
+```jsx title="./web/src/layouts/UserLayout/UserLayout.js"
+const UserLayout = ({ children }) => {
+  return <>{children}</>
+}
+
+export default UserLayout
+```
+
+### generate model
+
+Generate a RedwoodRecord model.
+
```bash
yarn redwood generate model <name>
```
+
+| Arguments & Options | Description |
+| ------------------- | --------------------------------------------------- |
+| `name` | Name of the model (in schema.prisma) |
+| `--force, -f` | Overwrite existing files |
+| `--rollback` | Rollback changes if an error occurs [default: true] |
+
+#### Usage
+
+See the [RedwoodRecord docs](redwoodrecord.md).
+
+#### Example
+
+```bash
+~/redwood-app$ yarn redwood generate model User
+yarn run v1.22.4
+$ /redwood-app/node_modules/.bin/redwood g model User
+✔ Generating model file...
✔ Successfully wrote file `./api/src/models/User.js`
+✔ Parsing datamodel, generating api/src/models/index.js...
+
+Wrote /Users/rob/Sites/redwoodjs/redwood_record/.redwood/datamodel.json
+Wrote /Users/rob/Sites/redwoodjs/redwood_record/api/src/models/index.js
+
+✨ Done in 3.74s.
+```
+
+Generating a model automatically runs `yarn rw record init` as well.
+
+### generate page
+
+Generates a page component and updates the routes.
+
+```bash
yarn redwood generate page <name> [path]
+```
+
+`path` can include a route parameter which will be passed to the generated
+page. The syntax for that is `/path/to/page/{routeParam}/more/path`. You can
+also specify the type of the route parameter if needed: `{routeParam:Int}`. If
+`path` isn't specified, or if it's just a route parameter, it will be derived
+from `name` and the route parameter, if specified, will be added to the end.
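As a rough illustration of that derivation rule, consider the sketch below. The `derivePath` helper is hypothetical, not Redwood's source:

```javascript
// Hypothetical sketch of the rule above: when `path` is omitted,
// derive it from the page name and append the route parameter, if any.
const derivePath = (name, routeParam) => {
  const base = `/${name.toLowerCase()}`
  return routeParam ? `${base}/${routeParam}` : base
}

console.log(derivePath('quote', '{id}')) // "/quote/{id}"
console.log(derivePath('home')) // "/home"
```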
+
+This also updates `Routes.js` in `./web/src`.
+
+| Arguments & Options | Description |
+| -------------------- | ------------------------------------------------------------------------------------ |
+| `name` | Name of the page |
+| `path` | URL path to the page. Defaults to `name` |
+| `--force, -f` | Overwrite existing files |
| `--typescript, --ts` | Generate TypeScript files. Enabled by default if we detect your project is TypeScript |
+| `--tests` | Generate test files [default: true] |
+| `--stories` | Generate Storybook files [default: true] |
+| `--rollback` | Rollback changes if an error occurs [default: true] |
+
+**Destroying**
+
+```
+yarn redwood destroy page [path]
+```
+
+**Examples**
+
+Generating a home page:
+
+```plaintext
+~/redwood-app$ yarn redwood generate page home /
+yarn run v1.22.4
+$ /redwood-app/node_modules/.bin/redwood g page home /
+ ✔ Generating page files...
+ ✔ Writing `./web/src/pages/HomePage/HomePage.test.js`...
+ ✔ Writing `./web/src/pages/HomePage/HomePage.js`...
+ ✔ Updating routes file...
+Done in 1.02s.
+```
+
The page returns JSX telling you where to find it:
+
```jsx title="./web/src/pages/HomePage/HomePage.js"
const HomePage = () => {
  return (
    <>
      <h1>HomePage</h1>
      <p>Find me in ./web/src/pages/HomePage/HomePage.js</p>
    </>
  )
}

export default HomePage
```
+
+And the route is added to `Routes.js`:
+
```jsx {6} title="./web/src/Routes.js"
const Routes = () => {
  return (
    <Router>
      <Route path="/" page={HomePage} name="home" />
      <Route notfound page={NotFoundPage} />
    </Router>
  )
}
```
+
+Generating a page to show quotes:
+
+```plaintext
+~/redwood-app$ yarn redwood generate page quote {id}
+yarn run v1.22.4
+$ /redwood-app/node_modules/.bin/redwood g page quote {id}
+ ✔ Generating page files...
+ ✔ Writing `./web/src/pages/QuotePage/QuotePage.stories.js`...
+ ✔ Writing `./web/src/pages/QuotePage/QuotePage.test.js`...
+ ✔ Writing `./web/src/pages/QuotePage/QuotePage.js`...
+ ✔ Updating routes file...
+Done in 1.02s.
+```
+
+The generated page will get the route parameter as a prop:
+
```jsx {5,12,14} title="./web/src/pages/QuotePage/QuotePage.js"
import { Link, routes } from '@redwoodjs/router'

const QuotePage = ({ id }) => {
  return (
    <>
      <h1>QuotePage</h1>
      <p>Find me in "./web/src/pages/QuotePage/QuotePage.js"</p>
      {/*
        My default route is named "quote", link to me with `
        <Link to={routes.quote({ id: 42 })}>Quote 42</Link>`
        The parameter passed to me is {id}
      */}
    </>
  )
}

export default QuotePage
```
+
+And the route is added to `Routes.js`, with the route parameter added:
+
```jsx {6} title="./web/src/Routes.js"
const Routes = () => {
  return (
    <Router>
      <Route path="/quote/{id}" page={QuotePage} name="quote" />
      <Route notfound page={NotFoundPage} />
    </Router>
  )
}
```
+
+### generate realtime
+
+Generate a boilerplate subscription or live query used with RedwoodJS Realtime.
+
```bash
yarn redwood generate realtime <name>
```
+
+| Arguments & Options | Description |
+| ------------------- | ------------------------------------------------------------------------------------------------ |
| `name` | Name of the realtime event to setup |
+| `-t, --type` | Choices: `liveQuery`, `subscription`. Optional. If not provided, you will be prompted to select. |
+| `--force, -f` | Overwrite existing files |
+
+#### Usage
+
See the Realtime docs for more information on how to [set up RedwoodJS Realtime](#setup-realtime) and use live queries and subscriptions.
+
+**Examples**
+
+Generate a live query.
+
+```bash
+~/redwood-app$ yarn rw g realtime NewLiveQuery
+? What type of realtime event would you like to create? › - Use arrow-keys. Return to submit.
❯ Live Query - Create a Live Query to watch for changes in data
Subscription
+
+✔ What type of realtime event would you like to create? › Live Query
+✔ Checking for realtime environment prerequisites ...
+✔ Adding newlivequery example live query ...
+✔ Generating types ...
+```
+
+Generate a subscription.
+
+```bash
+~/redwood-app$ yarn rw g realtime NewSub
+? What type of realtime event would you like to create? › - Use arrow-keys. Return to submit.
+Live Query
+❯ Subscription - Create a Subscription to watch for events
+
+✔ What type of realtime event would you like to create? › Subscription
+✔ Checking for realtime environment prerequisites ...
+✔ Adding newsub example subscription ...
+✔ Generating types ...
+```
+
+### generate scaffold
+
Generate Pages, SDL, and Services files based on a given DB schema Model. Also accepts `<path/model>`.
+
```bash
yarn redwood generate scaffold <model>
```
+
+A scaffold quickly creates a CRUD for a model by generating the following files and corresponding routes:
+
+- sdl
+- service
+- layout
+- pages
+- cells
+- components
+
+The content of the generated components is different from what you'd get by running them individually.
+
+| Arguments & Options | Description |
+| -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `model` | Model to scaffold. You can also use `<path/model>` to nest files by type at the given path directory (or directories). For example, `redwood g scaffold admin/post` |
| `--docs` | Use or set to `true` to generate comments in the SDL to use in self-documenting your app's GraphQL API. See: [Self-Documenting GraphQL API](./graphql.md#self-documenting-graphql-api) [default: false] |
+| `--force, -f` | Overwrite existing files |
+| `--tailwind` | Generate TailwindCSS version of scaffold.css (automatically set to `true` if TailwindCSS config exists) |
| `--typescript, --ts` | Generate TypeScript files. Enabled by default if we detect your project is TypeScript |
+| `--rollback` | Rollback changes if an error occurs [default: true] |
+
+#### Usage
+
+See [Creating a Post Editor](tutorial/chapter2/getting-dynamic.md#creating-a-post-editor).
+
+**Nesting of Components and Pages**
+
By default, Redwood will nest the components and pages in a directory named after the model. For example (where `post` is the model), running
`yarn rw g scaffold post`
will output the following files, with the components and pages nested in a `Post` directory:
+
+```plaintext {9-20}
+ √ Generating scaffold files...
+ √ Successfully wrote file `./api/src/graphql/posts.sdl.js`
+ √ Successfully wrote file `./api/src/services/posts/posts.js`
+ √ Successfully wrote file `./api/src/services/posts/posts.scenarios.js`
+ √ Successfully wrote file `./api/src/services/posts/posts.test.js`
+ √ Successfully wrote file `./web/src/layouts/PostsLayout/PostsLayout.js`
+ √ Successfully wrote file `./web/src/pages/Post/EditPostPage/EditPostPage.js`
+ √ Successfully wrote file `./web/src/pages/Post/PostPage/PostPage.js`
+ √ Successfully wrote file `./web/src/pages/Post/PostsPage/PostsPage.js`
+ √ Successfully wrote file `./web/src/pages/Post/NewPostPage/NewPostPage.js`
+ √ Successfully wrote file `./web/src/components/Post/EditPostCell/EditPostCell.js`
+ √ Successfully wrote file `./web/src/components/Post/Post/Post.js`
+ √ Successfully wrote file `./web/src/components/Post/PostCell/PostCell.js`
+ √ Successfully wrote file `./web/src/components/Post/PostForm/PostForm.js`
+ √ Successfully wrote file `./web/src/components/Post/Posts/Posts.js`
+ √ Successfully wrote file `./web/src/components/Post/PostsCell/PostsCell.js`
+ √ Successfully wrote file `./web/src/components/Post/NewPost/NewPost.js`
+ √ Adding layout import...
+ √ Adding set import...
+ √ Adding scaffold routes...
+ √ Adding scaffold asset imports...
+```
+
If you'd rather not nest the components and pages, Redwood provides an option you can set to disable this for your project.
Add the following to your `redwood.toml` file to disable the nesting of components and pages.
+
+```
+[generate]
+ nestScaffoldByModel = false
+```
+
+Setting the `nestScaffoldByModel = true` will retain the default behavior, but is not required.
+
+Notes:
+
+1. The nesting directory is always set to be PascalCase.
+
+**Namespacing Scaffolds**
+
You can namespace your scaffolds by providing `<path/model>`. The layout, pages, cells, and components will be nested in the newly created directory (or directories). In addition, the nesting folder based on the model name is still applied after the path for components and pages, unless turned off in the `redwood.toml` as described above. For example, given a model `user`, running `yarn redwood generate scaffold admin/user` will nest the layout, pages, and components in a newly created `Admin` directory created for each of the `layouts`, `pages`, and `components` folders:
+
+```plaintext {9-20}
+~/redwood-app$ yarn redwood generate scaffold admin/user
+yarn run v1.22.4
+$ /redwood-app/node_modules/.bin/redwood g scaffold admin/user
+ ✔ Generating scaffold files...
+ ✔ Successfully wrote file `./api/src/graphql/users.sdl.js`
+ ✔ Successfully wrote file `./api/src/services/users/users.js`
+ ✔ Successfully wrote file `./api/src/services/users/users.scenarios.js`
+ ✔ Successfully wrote file `./api/src/services/users/users.test.js`
+ ✔ Successfully wrote file `./web/src/layouts/Admin/UsersLayout/UsersLayout.js`
+ ✔ Successfully wrote file `./web/src/pages/Admin/User/EditUserPage/EditUserPage.js`
+ ✔ Successfully wrote file `./web/src/pages/Admin/User/UserPage/UserPage.js`
+ ✔ Successfully wrote file `./web/src/pages/Admin/User/UsersPage/UsersPage.js`
+ ✔ Successfully wrote file `./web/src/pages/Admin/User/NewUserPage/NewUserPage.js`
+ ✔ Successfully wrote file `./web/src/components/Admin/User/EditUserCell/EditUserCell.js`
+ ✔ Successfully wrote file `./web/src/components/Admin/User/User/User.js`
+ ✔ Successfully wrote file `./web/src/components/Admin/User/UserCell/UserCell.js`
+ ✔ Successfully wrote file `./web/src/components/Admin/User/UserForm/UserForm.js`
+ ✔ Successfully wrote file `./web/src/components/Admin/User/Users/Users.js`
+ ✔ Successfully wrote file `./web/src/components/Admin/User/UsersCell/UsersCell.js`
+ ✔ Successfully wrote file `./web/src/components/Admin/User/NewUser/NewUser.js`
+ ✔ Adding layout import...
+ ✔ Adding set import...
+ ✔ Adding scaffold routes...
+ ✔ Adding scaffold asset imports...
+Done in 1.21s.
+```
+
The routes, wrapped in a [`Set`](router.md#sets-of-routes) with the generated layout, will be nested too:
+
```jsx {6-11} title="./web/src/Routes.js"
const Routes = () => {
  return (
    <Router>
      <Set wrap={AdminUsersLayout}>
        <Route path="/admin/users/new" page={AdminUserNewUserPage} name="adminNewUser" />
        <Route path="/admin/users/{id:Int}/edit" page={AdminUserEditUserPage} name="adminEditUser" />
        <Route path="/admin/users/{id:Int}" page={AdminUserUserPage} name="adminUser" />
        <Route path="/admin/users" page={AdminUserUsersPage} name="adminUsers" />
      </Set>
      <Route notfound page={NotFoundPage} />
    </Router>
  )
}
```
+
+Notes:
+
+1. Each directory in the scaffolded path is always set to be PascalCase.
+2. The scaffold path may be multiple directories deep.
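The path handling in the notes above can be sketched roughly as follows. This is illustrative only, not Redwood's actual code:

```javascript
// Illustrative: pascal-case each directory in a scaffold path,
// as described in the notes above. Not Redwood's actual implementation.
const pascalCase = (str) => str.charAt(0).toUpperCase() + str.slice(1)
const scaffoldDirs = (path) => path.split('/').map(pascalCase).join('/')

console.log(scaffoldDirs('admin/user')) // "Admin/User"
console.log(scaffoldDirs('super/admin/user')) // "Super/Admin/User"
```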
+
+**Destroying**
+
+```
+yarn redwood destroy scaffold
+```
+
+Notes:
+
1. You can also use `<path/model>` to destroy files that were generated under a scaffold path. For example, `redwood d scaffold admin/post`
2. The destroy command will remove empty folders along the path, provided they are lower than the folder level of component, layout, page, etc.
3. The destroy scaffold command also follows the `nestScaffoldByModel` setting in the `redwood.toml` file. For example, if you have an existing scaffold that you want to destroy that doesn't have its pages and components nested by model name, you can destroy it by temporarily setting:
+
+```
+[generate]
+ nestScaffoldByModel = false
+```
+
+**Troubleshooting**
+
+If you see `Error: Unknown type: ...`, don't panic!
+It's a known limitation with GraphQL type generation.
+It happens when you generate the SDL of a Prisma model that has relations **before the SDL for the related model exists**.
+Please see [Troubleshooting Generators](./schema-relations#troubleshooting-generators) for help.
+
+### generate script
+
Generates an arbitrary Node.js script in `./scripts/` that can be run with the `redwood exec` command later.
+
+| Arguments & Options | Description |
+| -------------------- | ------------------------------------------------------------------------------------ |
| `name` | Name of the script |
| `--typescript, --ts` | Generate TypeScript files. Enabled by default if we detect your project is TypeScript |
+| `--rollback` | Rollback changes if an error occurs [default: true] |
+
+Scripts have access to services and libraries used in your project. Some examples of how this can be useful:
+
- create special database seed scripts for different scenarios
- sync products and prices from your payment provider
- run cleanup jobs on a regular basis, e.g. delete stale/expired data
- sync data between platforms, e.g. email addresses from your db to your email marketing platform
+
+#### Usage
+
+```
+❯ yarn rw g script syncStripeProducts
+
+ ✔ Generating script file...
+ ✔ Successfully wrote file `./scripts/syncStripeProducts.ts`
+ ✔ Next steps...
+
+ After modifying your script, you can invoke it like:
+
+ yarn rw exec syncStripeProducts
+
+ yarn rw exec syncStripeProducts --param1 true
+```
+
+### generate sdl
+
+Generate a GraphQL schema and service object.
+
```bash
yarn redwood generate sdl <model>
```
+
The sdl generator will inspect your `schema.prisma` and do its best with relations. Schema-to-generator mapping isn't one-to-one yet (and might never be).
+
+| Arguments & Options | Description |
+| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| `model` | Model to generate the sdl for |
+| `--crud` | Set to `false`, or use `--no-crud`, if you do not want to generate mutations |
| `--docs` | Use or set to `true` to generate comments in the SDL to use in self-documenting your app's GraphQL API. See: [Self-Documenting GraphQL API](./graphql.md#self-documenting-graphql-api) [default: false] |
+| `--force, -f` | Overwrite existing files |
+| `--tests` | Generate service test and scenario [default: true] |
| `--typescript, --ts` | Generate TypeScript files. Enabled by default if we detect your project is TypeScript |
+| `--rollback` | Rollback changes if an error occurs [default: true] |
+
> **Note:** The generated sdl will include the `@requireAuth` directive by default to ensure queries and mutations are secure. If your app's queries and mutations are all public, you can set up a custom SDL generator template to apply `@skipAuth` (or a custom validator directive) to suit your application's needs.
+
+**Regenerating the SDL**
+
Often, as you iterate on your data model, you'll add, remove, or rename fields. You still want Redwood to update the generated SDL and service files to match, because it saves you from making those changes manually.

But since the `generate` command prevents you from accidentally overwriting files, you'll need the `--force` option. A forced regeneration, however, resets any tests and scenarios you may have written, which you don't want to lose.

In that case, you can run the following to regenerate **just** the SDL file, leaving your tests and scenarios intact:
+
+```
+yarn redwood g sdl --force --no-tests
+```
+
+#### Example
+
+```bash
+~/redwood-app$ yarn redwood generate sdl user --force --no-tests
+yarn run v1.22.4
+$ /redwood-app/node_modules/.bin/redwood g sdl user
+✔ Generating SDL files...
✔ Writing `./api/src/graphql/users.sdl.js`...
✔ Writing `./api/src/services/users/users.js`...
+Done in 1.04s.
+```
+
+**Destroying**
+
+```
+yarn redwood destroy sdl
+```
+
+#### Example
+
+Generating a user sdl:
+
+```bash
+~/redwood-app$ yarn redwood generate sdl user
+yarn run v1.22.4
+$ /redwood-app/node_modules/.bin/redwood g sdl user
+✔ Generating SDL files...
✔ Writing `./api/src/graphql/users.sdl.js`...
✔ Writing `./api/src/services/users/users.scenarios.js`...
✔ Writing `./api/src/services/users/users.test.js`...
✔ Writing `./api/src/services/users/users.js`...
+Done in 1.04s.
+```
+
+The generated sdl defines a corresponding type, query, create/update inputs, and any mutations. To prevent defining mutations, add the `--no-crud` option.
+
+```jsx title="./api/src/graphql/users.sdl.js"
+export const schema = gql`
+ type User {
+ id: Int!
+ email: String!
+ name: String
+ }
+
+ type Query {
+ users: [User!]! @requireAuth
+ }
+
+ input CreateUserInput {
+ email: String!
+ name: String
+ }
+
+ input UpdateUserInput {
+ email: String
+ name: String
+ }
+
+ type Mutation {
+ createUser(input: CreateUserInput!): User! @requireAuth
+ updateUser(id: Int!, input: UpdateUserInput!): User! @requireAuth
+ deleteUser(id: Int!): User! @requireAuth
+ }
+`
+```
+
+The services file fulfills the query. If the `--no-crud` option is added, this file will be less complex.
+
+```jsx title="./api/src/services/users/users.js"
+import { db } from 'src/lib/db'
+
+export const users = () => {
+ return db.user.findMany()
+}
+```
+
+For a model with a relation, the field will be listed in the sdl:
+
+```jsx {8} title="./api/src/graphql/users.sdl.js"
+export const schema = gql`
+ type User {
+ id: Int!
+ email: String!
+ name: String
+ profile: Profile
+ }
+
+ type Query {
+ users: [User!]! @requireAuth
+ }
+
+ input CreateUserInput {
+ email: String!
+ name: String
+ }
+
+ input UpdateUserInput {
+ email: String
+ name: String
+ }
+
+ type Mutation {
+ createUser(input: CreateUserInput!): User! @requireAuth
+ updateUser(id: Int!, input: UpdateUserInput!): User! @requireAuth
+ deleteUser(id: Int!): User! @requireAuth
+ }
+`
+```
+
+And the service will export an object with the relation as a property:
+
+```jsx {9-13} title="./api/src/services/users/users.js"
+import { db } from 'src/lib/db'
+
+export const users = () => {
+ return db.user.findMany()
+}
+
export const User = {
  profile: (_obj, { root }) =>
    db.user.findUnique({ where: { id: root.id } }).profile(),
}
+```
+
+**Troubleshooting**
+
+If you see `Error: Unknown type: ...`, don't panic!
+It's a known limitation with GraphQL type generation.
+It happens when you generate the SDL of a Prisma model that has relations **before the SDL for the related model exists**.
+Please see [Troubleshooting Generators](./schema-relations#troubleshooting-generators) for help.
+
+### generate secret
+
+Generate a secret key using a cryptographically-secure source of entropy. Commonly used when setting up dbAuth.
+
+| Arguments & Options | Description |
+| :------------------ | :------------------------------------------------- |
+| `--raw` | Print just the key, without any informational text |
+
+#### Usage
+
Using the `--raw` option, you can easily append a secret key to your `.env` file, like so:
+
+```
+# yarn v1
+echo "SESSION_SECRET=$(yarn --silent rw g secret --raw)" >> .env
+
+# yarn v3
+echo "SESSION_SECRET=$(yarn rw g secret --raw)" >> .env
+```
+
+### generate service
+
+Generate a service component.
+
```bash
yarn redwood generate service <name>
```
+
+Services are where Redwood puts its business logic. They can be used by your GraphQL API or any other place in your backend code. See [How Redwood Works with Data](tutorial/chapter2/side-quest.md).
+
+| Arguments & Options | Description |
+| -------------------- | ------------------------------------------------------------------------------------ |
+| `name` | Name of the service |
+| `--force, -f` | Overwrite existing files |
| `--typescript, --ts` | Generate TypeScript files. Enabled by default if we detect your project is TypeScript |
+| `--tests` | Generate test and scenario files [default: true] |
+| `--rollback` | Rollback changes if an error occurs [default: true] |
+
+**Destroying**
+
+```
+yarn redwood destroy service
+```
+
+#### Example
+
+Generating a user service:
+
+```bash
+~/redwood-app$ yarn redwood generate service user
+yarn run v1.22.4
+$ /redwood-app/node_modules/.bin/redwood g service user
+✔ Generating service files...
✔ Writing `./api/src/services/users/users.scenarios.js`...
✔ Writing `./api/src/services/users/users.test.js`...
✔ Writing `./api/src/services/users/users.js`...
+Done in 1.02s.
+```
+
+The generated service component will export a `findMany` query:
+
+```jsx title="./api/src/services/users/users.js"
+import { db } from 'src/lib/db'
+
+export const users = () => {
+ return db.user.findMany()
+}
+```
+
+### generate types
+
Generates supplementary code (project types).
+
+```bash
+yarn redwood generate types
+```
+
+#### Usage
+
+```
+~/redwood-app$ yarn redwood generate types
+yarn run v1.22.10
+$ /redwood-app/node_modules/.bin/redwood g types
+$ /redwood-app/node_modules/.bin/rw-gen
+
+Generating...
+
+- .redwood/schema.graphql
+- .redwood/types/mirror/api/src/services/posts/index.d.ts
+- .redwood/types/mirror/web/src/components/BlogPost/index.d.ts
+- .redwood/types/mirror/web/src/layouts/BlogLayout/index.d.ts
+...
+- .redwood/types/mirror/web/src/components/Post/PostsCell/index.d.ts
+- .redwood/types/includes/web-routesPages.d.ts
+- .redwood/types/includes/all-currentUser.d.ts
+- .redwood/types/includes/web-routerRoutes.d.ts
+- .redwood/types/includes/api-globImports.d.ts
+- .redwood/types/includes/api-globalContext.d.ts
+- .redwood/types/includes/api-scenarios.d.ts
+- api/types/graphql.d.ts
+- web/types/graphql.d.ts
+
+... and done.
+```
+
+## info
+
+Print your system environment information.
+
+```bash
+yarn redwood info
+```
+
This command is primarily intended for getting information others might need to help you debug:
+
+```bash
+~/redwood-app$ yarn redwood info
+yarn run v1.22.4
+$ /redwood-app/node_modules/.bin/redwood info
+
+ System:
+ OS: Linux 5.4 Ubuntu 20.04 LTS (Focal Fossa)
+ Shell: 5.0.16 - /usr/bin/bash
+ Binaries:
+ Node: 13.12.0 - /tmp/yarn--1589998865777-0.9683603763419713/node
+ Yarn: 1.22.4 - /tmp/yarn--1589998865777-0.9683603763419713/yarn
+ Browsers:
+ Chrome: 78.0.3904.108
+ Firefox: 76.0.1
+ npmPackages:
+ @redwoodjs/core: ^0.7.0-rc.3 => 0.7.0-rc.3
+
+Done in 1.98s.
+```
+
+## lint
+
+Lint your files.
+
+```bash
+yarn redwood lint
+```
+
+[Our ESLint configuration](https://github.com/redwoodjs/redwood/blob/master/packages/eslint-config/index.js) is a mix of [ESLint's recommended rules](https://eslint.org/docs/rules/), [React's recommended rules](https://www.npmjs.com/package/eslint-plugin-react#list-of-supported-rules), and a bit of our own stylistic flair:
+
+- no semicolons
+- comma dangle when multiline
+- single quotes
- always use parentheses around arrow functions
+- enforced import sorting
+
+| Option | Description |
+| :------ | :---------------- |
+| `--fix` | Try to fix errors |
+
+## prisma
+
+Run Prisma CLI within the context of a Redwood project.
+
+```
+yarn redwood prisma
+```
+
+Redwood's `prisma` command is a lightweight wrapper around the Prisma CLI. It's the primary way you interact with your database.
+
+> **What do you mean it's a lightweight wrapper?**
+>
+> By lightweight wrapper, we mean that we're handling some flags under the hood for you.
+> You can use the Prisma CLI directly (`yarn prisma`), but letting Redwood act as a proxy (`yarn redwood prisma`) saves you a lot of keystrokes.
> For example, Redwood adds the `--schema=api/db/schema.prisma` flag automatically.
+>
> If you want to know exactly what `yarn redwood prisma <command>` runs, which flags it's passing, etc., it's right at the top:
+>
+> ```sh{3}
+> $ yarn redwood prisma migrate dev
+> yarn run v1.22.10
+> $ ~/redwood-app/node_modules/.bin/redwood prisma migrate dev
+> Running prisma cli:
+> yarn prisma migrate dev --schema "~/redwood-app/api/db/schema.prisma"
+> ...
+> ```
+
Since `yarn redwood prisma` is just an entry point into all the database commands that the Prisma CLI has to offer, we won't try to provide an exhaustive reference of everything you can do with it here. Instead, we'll focus on some of the most common commands, those you'll be running on a regular basis, and how they fit into Redwood's workflows.
+
+For the complete list of commands, see the [Prisma CLI Reference](https://www.prisma.io/docs/reference/api-reference/command-reference). It's the authority.
+
+Along with the CLI reference, bookmark Prisma's [Migration Flows](https://www.prisma.io/docs/concepts/components/prisma-migrate/prisma-migrate-flows) doc—it'll prove to be an invaluable resource for understanding `yarn redwood prisma migrate`.
+
+| Command | Description |
+| :------------------ | :----------------------------------------------------------- |
| `db <command>` | Manage your database schema and lifecycle during development |
+| `generate` | Generate artifacts (e.g. Prisma Client) |
| `migrate <command>` | Update the database schema with migrations |
+
+### prisma db
+
+Manage your database schema and lifecycle during development.
+
+```
+yarn redwood prisma db
+```
+
+The `prisma db` namespace contains commands that operate directly against the database.
+
+#### prisma db pull
+
+Pull the schema from an existing database, updating the Prisma schema.
+
+> 👉 Quick link to the [Prisma CLI Reference](https://www.prisma.io/docs/reference/api-reference/command-reference#db-pull).
+
+```
+yarn redwood prisma db pull
+```
+
+This command, formerly `introspect`, connects to your database and adds Prisma models to your Prisma schema that reflect the current database schema.
+
> Warning: The command will overwrite the current `schema.prisma` file with the new schema. Any manual changes or customizations will be lost. Be sure to back up your current `schema.prisma` file before running `db pull` if it contains important modifications.
+
+#### prisma db push
+
+Push the state from your Prisma schema to your database.
+
+> 👉 Quick link to the [Prisma CLI Reference](https://www.prisma.io/docs/reference/api-reference/command-reference#db-push).
+
+```
+yarn redwood prisma db push
+```
+
+This is your go-to command for prototyping changes to your Prisma schema (`schema.prisma`).
Prior to `yarn redwood prisma db push`, there wasn't a great way to try out changes to your Prisma schema without creating a migration.
+This command fills the void by "pushing" your `schema.prisma` file to your database without creating a migration. You don't even have to run `yarn redwood prisma generate` afterward—it's all taken care of for you, making it ideal for iterative development.
+
+#### prisma db seed
+
+Seed your database.
+
+> 👉 Quick link to the [Prisma CLI Reference](https://www.prisma.io/docs/reference/api-reference/command-reference#db-seed-preview).
+
+```
+yarn redwood prisma db seed
+```
+
+This command seeds your database by running your project's `seed.js|ts` file which you can find in your `scripts` directory.
+
+Prisma's got a great [seeding guide](https://www.prisma.io/docs/guides/prisma-guides/seed-database) that covers both the concepts and the nuts and bolts.
+
+> **Important:** Prisma Migrate also triggers seeding in the following scenarios:
+>
+> - you manually run the `yarn redwood prisma migrate reset` command
+> - the database is reset interactively in the context of using `yarn redwood prisma migrate dev`—for example, as a result of migration history conflicts or database schema drift
+>
+> If you want to use `yarn redwood prisma migrate dev` or `yarn redwood prisma migrate reset` without seeding, you can pass the `--skip-seed` flag.
+
+While having a great seed might not be all that important at the start, as soon as you start collaborating with others, it becomes vital.
+
+**How does seeding actually work?**
+
+If you look at your project's `package.json` file, you'll notice a `prisma` section:
+
+```json
+ "prisma": {
+ "seed": "yarn rw exec seed"
+ },
+```
+
+Prisma runs any command found in the `seed` setting when seeding via `yarn rw prisma db seed` or `yarn rw prisma migrate reset`.
+Here we're using the Redwood [`exec` cli command](#exec) that runs a script.
+
+If you wanted to seed your database using a different method (like `psql` and an `.sql` script), you can do so by changing the "seed" script command.
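For example, a hypothetical `psql`-based setup might look like this (the `seed.sql` path and use of `DATABASE_URL` here are assumptions for illustration, not Redwood defaults):

```json
  "prisma": {
    "seed": "psql $DATABASE_URL -f ./scripts/seed.sql"
  },
```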
+
+**More About Seeding**
+
+In addition, you can [code along with Ryan Chenkie](https://www.youtube.com/watch?v=2LwTUIqjbPo), and learn how libraries like [faker](https://www.npmjs.com/package/faker) can help you create a large, realistic database fast, especially in tandem with Prisma's [createMany](https://www.prisma.io/docs/reference/api-reference/prisma-client-reference#createmany).
+
+**Log Formatting**
+
+If you use the Redwood Logger as part of your seed script, you can pipe the command to the LogFormatter to output prettified logs.
+
For example, if your `scripts/seed.js` imports the `logger`:
+
+```jsx title="scripts/seed.js"
+import { db } from 'api/src/lib/db'
+import { logger } from 'api/src/lib/logger'
+
+export default async () => {
+ try {
+ const posts = [
+ {
+ title: 'Welcome to the blog!',
        body: "I'm baby single-origin coffee kickstarter lo.",
+ },
+ {
+ title: 'A little more about me',
+ body: 'Raclette shoreditch before they sold out lyft.',
+ },
+ {
+ title: 'What is the meaning of life?',
+ body: 'Meh waistcoat succulents umami asymmetrical, hoodie post-ironic paleo chillwave tote bag.',
+ },
+ ]
+
    await Promise.all(
+ posts.map(async (post) => {
+ const newPost = await db.post.create({
+ data: { title: post.title, body: post.body },
+ })
+
+ logger.debug({ data: newPost }, 'Added post')
+ })
+ )
+ } catch (error) {
+ logger.error(error)
+ }
+}
+```
+
+You can pipe the script output to the formatter:
+
+```bash
+yarn rw prisma db seed | yarn rw-log-formatter
+```
+
> Note: Just be sure to set the `data` attribute, so the formatter recognizes the content.
> For example: `logger.debug({ data: newPost }, 'Added post')`
+
+### prisma migrate
+
+Update the database schema with migrations.
+
+> 👉 Quick link to the [Prisma Concepts](https://www.prisma.io/docs/concepts/components/prisma-migrate).
+
+```
+yarn redwood prisma migrate
+```
+
+As a database toolkit, Prisma strives to be as holistic as possible. Prisma Migrate lets you use Prisma schema to make changes to your database declaratively, all while keeping things deterministic and fully customizable by generating the migration steps in a simple, familiar format: SQL.
+
Since migrate generates plain SQL files, you can edit those SQL files before applying the migration using `yarn redwood prisma migrate dev --create-only`. This creates the migration based on the changes in the Prisma schema, but doesn't apply it, giving you the chance to go in and make any modifications you want. [Daniel Norman's tour of Prisma Migrate](https://www.youtube.com/watch?v=0LKhksstrfg) demonstrates this and more to great effect.
+
Prisma Migrate has separate commands for applying migrations based on whether you're in dev or in production. The Prisma [Migration flows](https://www.prisma.io/docs/concepts/components/prisma-migrate/prisma-migrate-flows) doc goes over the difference between these workflows in more detail.
+
+#### prisma migrate dev
+
+Create a migration from changes in the Prisma schema, apply it to the database, and trigger generators (e.g. Prisma Client).
+
+> 👉 Quick link to the [Prisma CLI Reference](https://www.prisma.io/docs/reference/api-reference/command-reference#migrate-dev).
+
+```
+yarn redwood prisma migrate dev
+```
+
+#### prisma migrate deploy
+
+Apply pending migrations to update the database schema in production/staging.
+
+> 👉 Quick link to the [Prisma CLI Reference](https://www.prisma.io/docs/reference/api-reference/command-reference#migrate-deploy).
+
+```
+yarn redwood prisma migrate deploy
+```
+
+#### prisma migrate reset
+
+This command deletes and recreates the database, or performs a "soft reset" by removing all data, tables, indexes, and other artifacts.
+
+It'll also re-seed your database by automatically running the `db seed` command. See [prisma db seed](#prisma-db-seed).
+
+> **_Important:_** For use in development environments only
+
+## record
+
+> This command is experimental and its behavior may change.
+
+Commands for working with RedwoodRecord.
+
+### record init
+
+Parses `schema.prisma` and caches the datamodel as JSON. Reads relationships between models and adds some configuration in `api/src/models/index.js`.
+
+```
+yarn rw record init
+```
+
+## redwood-tools (alias rwt)
+
+Redwood's companion CLI development tool. You'll be using this if you're contributing to Redwood. See [Contributing](https://github.com/redwoodjs/redwood/blob/main/CONTRIBUTING.md#cli-reference-redwood-tools) in the Redwood repo.
+
+## setup
+
+Initialize configuration and integrate third-party libraries effortlessly.
+
+```
+yarn redwood setup
+```
+
+| Commands | Description |
+| ------------------ | ------------------------------------------------------------------------------------------ |
+| `auth` | Set up auth configuration for a provider |
+| `cache` | Set up cache configuration for memcached or redis |
+| `custom-web-index` | Set up an `index.js` file, so you can customize how Redwood web is mounted in your browser |
+| `deploy` | Set up a deployment configuration for a provider |
+| `generator` | Copy default Redwood generator templates locally for customization |
+| `i18n` | Set up i18n |
+| `jobs` | Set up background job creation and processing |
+| `package`          | Perform setup actions by running a third-party npm package                                 |
+| `tsconfig` | Add relevant tsconfig so you can start using TypeScript |
+| `ui` | Set up a UI design or style library |
+
+### setup auth
+
+Integrate an auth provider.
+
+```
+yarn redwood setup auth
+```
+
+| Arguments & Options | Description |
+| :------------------ | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `provider` | Auth provider to configure. Choices are `auth0`, `azureActiveDirectory`, `clerk`, `dbAuth`, `ethereum`, `firebase`, `goTrue`, `magicLink`, `netlify`, `nhost`, and `supabase` |
+| `--force, -f` | Overwrite existing configuration |
+
+#### Usage
+
+See [Authentication](authentication.md).
+
+### setup cache
+
+This command creates a setup file in `api/src/lib/cache.{ts|js}` for connecting to a Memcached or Redis server and allows caching in services. See the [**Caching** section of the Services docs](/docs/services#caching) for usage.
+
+```
+yarn redwood setup cache
+```
+
+| Arguments & Options | Description |
+| :------------------ | :------------------------------------------------------ |
+| `client` | Name of the client to configure, `memcached` or `redis` |
+| `--force, -f` | Overwrite existing files |
+
+### setup generator
+
+Copies a given generator's template files to your local app for customization. The next time you run that generator, it will use your custom templates instead of Redwood's defaults.
+
+```
+yarn rw setup generator
+```
+
+| Arguments & Options | Description |
+| :------------------ | :------------------------------------------------------------ |
+| `name` | Name of the generator template(s) to copy (see help for list) |
+| `--force, -f` | Overwrite existing copied template files |
+
+#### Usage
+
+If you wanted to customize the page generator template, run the command:
+
+```
+yarn rw setup generator page
+```
+
+And then check `web/generators/page` for the page, storybook and test template files. You don't need to keep all of these templates—you could customize just `page.tsx.template` and delete the others and they would still be generated, but using the default Redwood templates.
+
+The only exception to this rule is the scaffold templates. You'll get four directories, `assets`, `components`, `layouts` and `pages`. If you want to customize any one of the templates in those directories, you will need to keep all the other files inside of that same directory, even if you make no changes besides the one you care about. (This is due to the way the scaffold looks up its template files.) For example, if you wanted to customize only the index page of the scaffold (the one that lists all available records in the database) you would edit `web/generators/scaffold/pages/NamesPage.tsx.template` and keep the other pages in that directory. You _could_ delete the other three directories (`assets`, `components`, `layouts`) if you don't need to customize them.
+
+**Name Variants**
+
+Your template will receive the provided `name` in a number of different variations.
+
+For example, given the name `fooBar` your template will receive the following _variables_ with the given _values_:
+
+| Variable | Value |
+| :--------------------- | :--------- |
+| `pascalName` | `FooBar` |
+| `camelName` | `fooBar` |
+| `singularPascalName` | `FooBar` |
+| `pluralPascalName` | `FooBars` |
+| `singularCamelName` | `fooBar` |
+| `pluralCamelName` | `fooBars` |
+| `singularParamName` | `foo-bar` |
+| `pluralParamName` | `foo-bars` |
+| `singularConstantName` | `FOO_BAR` |
+| `pluralConstantName` | `FOO_BARS` |
+
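As a rough illustration of how these values relate to each other, here's a minimal sketch of deriving the variants. This is not Redwood's actual inflection code; in particular, it uses naive `+ 's'` pluralization:

```js
// Simplified, illustrative name-variant derivation (not Redwood's implementation)
const camelCase = (name) => name.charAt(0).toLowerCase() + name.slice(1)
const pascalCase = (name) => name.charAt(0).toUpperCase() + name.slice(1)
// "fooBar" -> "foo-bar": insert a hyphen before each uppercase letter
const paramCase = (name) => name.replace(/([A-Z])/g, '-$1').toLowerCase()
// "fooBar" -> "FOO_BAR": insert an underscore before each uppercase letter
const constantCase = (name) => name.replace(/([A-Z])/g, '_$1').toUpperCase()
// Naive pluralization for illustration only
const plural = (name) => name + 's'

const nameVariants = (name) => ({
  pascalName: pascalCase(name),
  camelName: camelCase(name),
  singularPascalName: pascalCase(name),
  pluralPascalName: pascalCase(plural(name)),
  singularCamelName: camelCase(name),
  pluralCamelName: camelCase(plural(name)),
  singularParamName: paramCase(name),
  pluralParamName: paramCase(plural(name)),
  singularConstantName: constantCase(name),
  pluralConstantName: constantCase(plural(name)),
})
```

Redwood's real generators handle irregular plurals (`person` → `people`) with proper inflection, so treat this only as a mental model for the table above.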
+#### Example
+
+Copying the cell generator templates:
+
+```bash
+~/redwood-app$ yarn rw setup generator cell
+yarn run v1.22.4
+$ /redwood-app/node_modules/.bin/rw setup generator cell
+✔ Copying generator templates...
+✔ Wrote templates to /web/generators/cell
+✨ Done in 2.33s.
+```
+
+### setup deploy (config)
+
+Set up a deployment configuration.
+
+```
+yarn redwood setup deploy
+```
+
+| Arguments & Options | Description |
+| :------------------ | :------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `provider`          | Deploy provider to configure. Choices are `baremetal`, `coherence`, `edgio`, `flightcontrol`, `netlify`, `render`, `vercel`, or `aws-serverless [deprecated]` |
+| `--database, -d` | Database deployment for Render only [choices: "none", "postgresql", "sqlite"] [default: "postgresql"] |
+| `--force, -f` | Overwrite existing configuration [default: false] |
+
+#### setup deploy netlify
+
+When configuring Netlify deployment, the `setup deploy netlify` command generates a `netlify.toml` [configuration file](https://docs.netlify.com/configure-builds/file-based-configuration/) with the defaults needed to build and deploy a RedwoodJS site on Netlify.
+
+The `netlify.toml` file is a configuration file that specifies how Netlify builds and deploys your site — including redirects, branch and context-specific settings, and more.
+
+This configuration file also defines the settings needed for [Netlify Dev](https://docs.netlify.com/configure-builds/file-based-configuration/#netlify-dev) to detect that your site uses the RedwoodJS framework. Netlify Dev serves your RedwoodJS app as if it runs on the Netlify platform and can serve functions, handle Netlify [headers](https://docs.netlify.com/configure-builds/file-based-configuration/#headers) and [redirects](https://docs.netlify.com/configure-builds/file-based-configuration/#redirects).
+
+Netlify Dev can also create a tunnel from your local development server that allows you to share and collaborate with others using `netlify dev --live`.
+
+```
+// See: netlify.toml
+// ...
+[dev]
+ # To use [Netlify Dev](https://www.netlify.com/products/dev/),
+ # install netlify-cli from https://docs.netlify.com/cli/get-started/#installation
+ # and then use netlify link https://docs.netlify.com/cli/get-started/#link-and-unlink-sites
+ # to connect your local project to a site already on Netlify
+ # then run netlify dev and our app will be accessible on the port specified below
+ framework = "redwoodjs"
+ # Set targetPort to the [web] side port as defined in redwood.toml
+ targetPort = 8910
+ # Point your browser to this port to access your RedwoodJS app
+ port = 8888
+```
+
+In order to use [Netlify Dev](https://www.netlify.com/products/dev/) you need to:
+
+- install the latest [netlify-cli](https://docs.netlify.com/cli/get-started/#installation)
+- use [netlify link](https://docs.netlify.com/cli/get-started/#link-and-unlink-sites) to connect to your Netlify site
+- ensure that the `targetPort` matches the [web] side port in `redwood.toml`
+- run `netlify dev` and your site will be served on the specified `port` (e.g., 8888)
+- if you wish to share your local server with others, you can run `netlify dev --live`
+
+> Note: To detect the RedwoodJS framework, please use netlify-cli v3.34.0 or greater.
+
+### setup jobs
+
+This command adds the necessary packages and files defining the configuration for Redwood's [Background Jobs](background-jobs) processing.
+
+```
+yarn redwood setup jobs
+```
+
+| Arguments & Options | Description |
+| :------------------ | :----------------------- |
+| `--force, -f` | Overwrite existing files |
+
+### setup mailer
+
+This command adds the necessary packages and files to get started using the RedwoodJS mailer. By default it also creates an example mail template which can be skipped with the `--skip-examples` flag.
+
+```
+yarn redwood setup mailer
+```
+
+| Arguments & Options | Description |
+| :------------------ | :------------------------------------------------------------- |
+| `--force, -f` | Overwrite existing files |
+| `--skip-examples` | Do not include example content, such as a React email template |
+
+### setup package
+
+This command takes a published npm package that you specify, performs some compatibility checks, and then executes its bin script. This allows you to use third-party packages that can provide you with an easy-to-use setup command for the particular functionality they provide.
+
+This command behaves similarly to `yarn dlx` but will attempt to confirm compatibility between the package you are attempting to run and the current version of Redwood you are running. You can bypass this check by passing the `--force` flag if you feel you understand any potential compatibility issues.
+
+```
+yarn redwood setup package
+```
+
+| Arguments & Options | Description |
+| :------------------ | :------------------------- |
+| `--force, -f` | Forgo compatibility checks |
+
+#### Usage
+
+Run the made-up `@redwoodjs/setup-example` package:
+
+```bash
+~/redwood-app$ yarn rw setup package @redwoodjs/setup-example
+```
+
+Run the same package but using a particular npm tag and avoiding any compatibility checks:
+
+```bash
+~/redwood-app$ yarn rw setup package @redwoodjs/setup-example@beta --force
+```
+
+**Compatibility Checks**
+
+We perform a simple compatibility check in an attempt to make you aware of potential compatibility issues with setup packages you might wish to run. This works by examining the version of `@redwoodjs/core` you are using within your root `package.json`. We compare this value with a compatibility range the npm package specifies in the `engines.redwoodjs` field of its own `package.json`. If the version of `@redwoodjs/core` you are using falls outside of the compatibility range specified by the package you are attempting to run, we will warn you and ask you to confirm that you wish to continue.
+
+It's the author of the npm package's responsibility to specify the correct compatibility range, so **you should always research the packages you use with this command**. Especially since they will be executing code on your machine!
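As a sketch of the idea (not Redwood's actual implementation, which uses a full semver library), checking a project's `@redwoodjs/core` version against a simple caret range declared in a package's `engines.redwoodjs` field might look like:

```js
// Illustrative only: supports just simple caret ranges ("^X.Y.Z"),
// which allow >= X.Y.Z and < (X+1).0.0. Real tooling uses a semver library.
const parseVersion = (version) => version.split('.').map(Number)

const satisfiesCaretRange = (version, range) => {
  if (!range.startsWith('^')) throw new Error('only caret ranges supported')
  const [vMajor, vMinor, vPatch] = parseVersion(version)
  const [rMajor, rMinor, rPatch] = parseVersion(range.slice(1))
  if (vMajor !== rMajor) return false // different major: incompatible
  if (vMinor !== rMinor) return vMinor > rMinor
  return vPatch >= rPatch
}
```

If the check fails, the CLI warns you and asks for confirmation before running the package's bin script.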
+
+### setup graphql
+
+This command creates the necessary files to support GraphQL features like fragments and trusted documents.
+
+#### Usage
+
+Run `yarn rw setup graphql <feature>`, where `<feature>` is `fragments` or `trusted-documents`.
+
+#### setup graphql fragments
+
+This command creates the necessary configuration to start using [GraphQL Fragments](./graphql/fragments.md).
+
+```
+yarn redwood setup graphql fragments
+```
+
+| Arguments & Options | Description |
+| :------------------ | :--------------------------------------- |
+| `--force, -f` | Overwrite existing files and skip checks |
+
+#### Usage
+
+Run `yarn rw setup graphql fragments`
+
+#### Example
+
+```bash
+~/redwood-app$ yarn rw setup graphql fragments
+✔ Update Redwood Project Configuration to enable GraphQL Fragments
+✔ Generate possibleTypes.ts
+✔ Import possibleTypes in App.tsx
+✔ Add possibleTypes to the GraphQL cache config
+```
+
+#### setup graphql trusted-documents
+
+This command creates the necessary configuration to start using [GraphQL Trusted Documents](./graphql/trusted-documents.md).
+
+```
+yarn redwood setup graphql trusted-documents
+```
+
+#### Usage
+
+Run `yarn rw setup graphql trusted-documents`
+
+#### Example
+
+```bash
+~/redwood-app$ yarn rw setup graphql trusted-documents
+✔ Update Redwood Project Configuration to enable GraphQL Trusted Documents ...
+✔ Generating Trusted Documents store ...
+✔ Configuring the GraphQL Handler to use a Trusted Documents store ...
+```
+
+If you have not set up the RedwoodJS server file, it will be set up for you:
+
+```bash
+✔ Adding the experimental server file...
+✔ Adding config to redwood.toml...
+✔ Adding required api packages...
+```
+
+### setup realtime
+
+This command creates the necessary files, installs the required packages, and provides examples to set up RedwoodJS Realtime with GraphQL live queries and subscriptions. See the Realtime docs for more information.
+
+```
+yarn redwood setup realtime
+```
+
+| Arguments & Options | Description |
+| :---------------------------------- | :--------------------------------------------------------------------------------- |
+| `-e, --includeExamples, --examples` | Include examples of how to implement liveQueries and subscriptions. Default: true. |
+| `--force, -f` | Forgo compatibility checks |
+
+:::note
+
+If the RedwoodJS server file is not set up, it will be set up as well.
+
+:::
+
+#### Usage
+
+Run `yarn rw setup realtime`
+
+#### Example
+
+```bash
+~/redwood-app$ yarn rw setup realtime
+✔ Checking for realtime environment prerequisites ...
+✔ Adding required api packages...
+✔ Adding the realtime api lib ...
+✔ Adding Countdown example subscription ...
+✔ Adding NewMessage example subscription ...
+✔ Adding Auctions example live query ...
+✔ Generating types ...
+```
+
+If you have not set up the RedwoodJS server file, it will be set up for you:
+
+```bash
+✔ Adding the experimental server file...
+✔ Adding config to redwood.toml...
+✔ Adding required api packages...
+```
+
+### setup tsconfig
+
+Add a `tsconfig.json` to both the web and api sides so you can start using [TypeScript](typescript/index).
+
+```
+yarn redwood setup tsconfig
+```
+
+| Arguments & Options | Description |
+| :------------------ | :----------------------- |
+| `--force, -f` | Overwrite existing files |
+
+### setup ui
+
+Set up a UI design or style library. Right now the choices are [TailwindCSS](https://tailwindcss.com/), [Chakra UI](https://chakra-ui.com/), and [Mantine UI](https://ui.mantine.dev/).
+
+```
+yarn rw setup ui
+```
+
+| Arguments & Options | Description |
+| :------------------ | :-------------------------------------------------------------------------- |
+| `library` | Library to configure. Choices are `chakra-ui`, `tailwindcss`, and `mantine` |
+| `--force, -f` | Overwrite existing configuration |
+
+## storybook
+
+Starts Storybook locally.
+
+```bash
+yarn redwood storybook
+```
+
+[Storybook](https://storybook.js.org/docs/react/get-started/introduction) is a tool for UI development that allows you to develop your components in isolation, away from all the conflated cruft of your real app.
+
+> "Props in, views out! Make it simple to reason about."
+
+RedwoodJS supports Storybook by creating stories when generating cells, components, layouts and pages. You can then use these to describe how to render that UI component with representative data.
+
+| Arguments & Options | Description |
+| :------------------ | :------------------------------------------------------------------------------------------------- |
+| `--open` | Open Storybook in your browser on start [default: true]. Pass `--no-open` to disable this behavior |
+| `--build` | Build Storybook |
+| `--port` | Which port to run Storybook on [default: 7910] |
+
+## test
+
+Run Jest tests for api and web.
+
+```bash
+yarn redwood test [side..]
+```
+
+| Arguments & Options | Description |
+| ------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `sides or filter` | Which side(s) to test, and/or a regular expression to match against your test files to filter by |
+| `--help` | Show help |
+| `--version` | Show version number |
+| `--watch` | Run tests related to changed files based on hg/git (uncommitted files). Specify the name or path to a file to focus on a specific set of tests [default: true] |
+| `--watchAll` | Run all tests |
+| `--collectCoverage` | Show test coverage summary and output info to `coverage` directory in project root. See this directory for an .html coverage report |
+| `--clearCache` | Delete the Jest cache directory and exit without running tests |
+| `--db-push` | Syncs the test database with your Prisma schema without requiring a migration. It creates a test database if it doesn't already exist [default: true]. This flag is ignored if your project doesn't have an `api` side. [👉 More details](#prisma-db-push). |
+
+> **Note:** all other flags are passed on to the Jest CLI. For example, to update your snapshots, pass the `-u` flag.
+
+## type-check (alias tsc or tc)
+
+Runs a TypeScript compiler check on both the api and the web sides.
+
+```bash
+yarn redwood type-check [side]
+```
+
+| Arguments & Options | Description |
+| ------------------- | ------------------------------------------------------------------------------ |
+| `side` | Which side(s) to run. Choices are `api` and `web`. Defaults to `api` and `web` |
+
+#### Usage
+
+See [Running Type Checks](typescript/introduction.md#running-type-checks).
+
+## serve
+
+Runs a server that serves both the api and the web sides.
+
+```bash
+yarn redwood serve [side]
+```
+
+> You should run `yarn rw build` before running this command to make sure all the static assets that will be served have been built.
+
+`yarn rw serve` is useful for debugging locally or for self-hosting—deploying a single server into a serverful environment. Since both the api and the web sides run in the same server, CORS isn't a problem.
+
+| Arguments & Options | Description |
+| ------------------- | ------------------------------------------------------------------------------ |
+| `side` | Which side(s) to run. Choices are `api` and `web`. Defaults to `api` and `web` |
+| `--port` | What port should the server run on [default: 8911] |
+| `--socket`          | The socket the server should run on. This takes precedence over `--port`       |
+
+### serve api
+
+Runs a server that only serves the api side.
+
+```
+yarn rw serve api
+```
+
+This command uses `apiUrl` in your `redwood.toml`. Use this command if you want to run just the api side on a server (e.g. running on Render).
+
+| Arguments & Options | Description |
+| ------------------- | ----------------------------------------------------------------- |
+| `--port` | What port should the server run on [default: 8911] |
+| `--socket`          | The socket the server should run on. This takes precedence over `--port` |
+| `--apiRootPath` | The root path where your api functions are served |
+
+For the full list of Server Configuration settings, see [this documentation](app-configuration-redwood-toml.md#api).
+If you want to format your log output, you can pipe the command to the Redwood LogFormatter:
+
+```
+yarn rw serve api | yarn rw-log-formatter
+```
+
+### serve web
+
+Runs a server that only serves the web side.
+
+```
+yarn rw serve web
+```
+
+This command serves the contents in `web/dist`. Use this command if you're debugging (e.g. great for debugging prerender) or if you want to run your api and web sides on separate servers, which is often considered a best practice for scalability (since your api side likely has much higher scaling requirements).
+
+> **But shouldn't I use nginx and/or equivalent technology to serve static files?**
+>
+> Probably, but it can be a challenge to set up when you just want something running quickly!
+
+| Arguments & Options | Description |
+| ------------------- | ------------------------------------------------------------------------------------- |
+| `--port` | What port should the server run on [default: 8911] |
+| `--socket`          | The socket the server should run on. This takes precedence over `--port`              |
+| `--apiHost` | Forwards requests from the `apiUrl` (defined in `redwood.toml`) to the specified host |
+
+If you want to format your log output, you can pipe the command to the Redwood LogFormatter:
+
+```
+yarn rw serve web | yarn rw-log-formatter
+```
+
+## upgrade
+
+Upgrade all `@redwoodjs` packages via an interactive CLI.
+
+```bash
+yarn redwood upgrade
+```
+
+This command does all the heavy-lifting of upgrading to a new release for you.
+
+Besides upgrading to a new stable release, you can use this command to upgrade to either of our unstable releases, `canary` and `rc`, or you can upgrade to a specific release version.
+
+A canary release is published to npm every time a PR is merged to the `main` branch, and when we're getting close to a new release, we publish release candidates.
+
+| Option | Description |
+| :-------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `--dry-run, -d` | Check for outdated packages without upgrading |
+| `--tag, -t`     | Choices are "rc", "canary", "latest", "next", "experimental", or a specific version (e.g. "0.19.3"). WARNING: "canary", "rc", "next", and "experimental" are unstable releases, and "canary" releases often include breaking changes that require codemods to upgrade a project. |
+
+#### Example
+
+Upgrade to the most recent canary:
+
+```bash
+yarn redwood upgrade -t canary
+```
+
+Upgrade to a specific version:
+
+```bash
+yarn redwood upgrade -t 0.19.3
+```
+
+## Background checks
+
+The CLI can check for things in the background, like new versions of the framework, while you dev.
+
+Right now it can only check for new versions.
+If you'd like it to do so, set `notifications.versionUpdates` in your `redwood.toml` file to an array of the tags you're interested in hearing about.
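For example (the key comes from the paragraph above; the tag values shown are illustrative, matching the tags listed under the `upgrade` command):

```toml
[notifications]
  versionUpdates = ["latest", "canary"]
```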
+
+By default, the CLI won't check for upgrades—you have to opt into it.
+
+You'll see this notification at most once a day, and the CLI will check at most once a day, so nothing heavy-handed is going on here.
diff --git a/docs/versioned_docs/version-8.4/connection-pooling.md b/docs/versioned_docs/version-8.4/connection-pooling.md
new file mode 100644
index 000000000000..7f28eaed7c81
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/connection-pooling.md
@@ -0,0 +1,108 @@
+---
+description: Scale your serverless functions
+---
+
+# Connection Pooling
+
+> ⚠ **Work in Progress** ⚠️
+>
+> There's more to document here. In the meantime, you can check our [community forum](https://community.redwoodjs.com/search?q=connection%20pooling) for answers.
+>
+> Want to contribute? Redwood welcomes contributions and loves helping people become contributors.
+> You can edit this doc [here](https://github.com/redwoodjs/redwoodjs.com/blob/main/docs/connectionPooling.md).
+> If you have any questions, just ask for help! We're active on the [forums](https://community.redwoodjs.com/c/contributing/9) and on [discord](https://discord.com/channels/679514959968993311/747258086569541703).
+
+Production Redwood apps should enable connection pooling in order to scale properly with your serverless functions.
+
+## Prisma Data Proxy
+
+The [Prisma Data Proxy](https://www.prisma.io/docs/data-platform/data-proxy) provides database connection management and pooling for Redwood apps using Prisma. It supports MySQL and Postgres databases in either the U.S. or EU regions.
+
+To set up a Prisma Data Proxy, sign up for the [Prisma Data Platform](https://www.prisma.io/data-platform) for free. In your onboarding workflow, plug in the connection URL for your database and choose your region. This will generate a connection string for your app. Then follow the instructions in [Prisma's documentation](https://www.prisma.io/docs/concepts/data-platform/data-proxy).
+
+> Note that the example uses npm. Rather than using npm, you can access the Prisma CLI using `yarn redwood prisma` inside a Redwood app.
+
+## Prisma & PgBouncer
+
+PgBouncer holds a connection pool to the database and proxies incoming client connections by sitting between Prisma Client and the database. This reduces the number of processes a database has to handle at any given time. PgBouncer passes on a limited number of connections to the database and queues additional connections for delivery when space becomes available.
+
+To use Prisma Client with PgBouncer from a serverless function, add the `?pgbouncer=true` flag to the PostgreSQL connection URL:
+
+```
+postgresql://USER:PASSWORD@HOST:PORT/DATABASE?pgbouncer=true
+```
+
+Typically, your PgBouncer port will be 6543, which differs from the Postgres default of 5432.
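If you build connection strings programmatically, a small hypothetical helper can swap in the pooler port and append the flag (the host, credentials, and ports below are placeholders, not real endpoints):

```js
// Rewrite a direct Postgres connection URL to go through PgBouncer:
// swap in the pooler port and append the pgbouncer=true flag.
const toPgBouncerUrl = (directUrl, poolerPort = 6543) => {
  const url = new URL(directUrl)
  url.port = String(poolerPort)
  url.searchParams.set('pgbouncer', 'true') // preserves existing query params
  return url.toString()
}
```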
+
+> Note that since Prisma Migrate uses database transactions to check out the current state of the database and the migrations table, if you attempt to run Prisma Migrate commands in any environment that uses PgBouncer for connection pooling, you might see an error.
+>
+> To work around this issue, you must connect directly to the database rather than going through PgBouncer when migrating.
+
+For more information on Prisma and PgBouncer, please refer to Prisma's Guide on [Configuring Prisma Client with PgBouncer](https://www.prisma.io/docs/guides/performance-and-optimization/connection-management/configure-pg-bouncer).
+
+## Supabase
+
+For Postgres running on [Supabase](https://supabase.io) see: [PgBouncer is now available in Supabase](https://supabase.io/blog/2021/04/02/supabase-pgbouncer#using-connection-pooling-in-supabase).
+
+All new Supabase projects include connection pooling using [PgBouncer](https://www.pgbouncer.org/).
+
+We recommend that you connect to your Supabase Postgres instance using SSL, which you can do by setting `sslmode` to `require` on the connection string:
+
+```
+// direct connection (not pooled) typically uses port 5432
+postgresql://postgres:PASSWORD@mydb.supabase.co:5432/postgres?sslmode=require
+// pooled connection typically uses port 6543
+postgresql://postgres:PASSWORD@mydb.supabase.co:6543/postgres?sslmode=require&pgbouncer=true
+```
+
+## Heroku
+
+For Postgres, see [Postgres Connection Pooling](https://devcenter.heroku.com/articles/postgres-connection-pooling).
+
+Heroku does not officially support MySQL.
+
+## Digital Ocean
+
+For Postgres, see [How to Manage Connection Pools](https://www.digitalocean.com/docs/databases/postgresql/how-to/manage-connection-pools)
+
+To run migrations through a connection pool, you're required to append connection parameters to your `DATABASE_URL`. Prisma needs to know to use pgbouncer (which is part of Digital Ocean's connection pool). If omitted, you may receive the following error:
+
+```
+Error: Migration engine error:
+db error: ERROR: prepared statement "s0" already exists
+```
+
+To resolve this, use the following structure in your `DATABASE_URL`:
+
+```
+:25061/defaultdb?connection_limit=3&sslmode=require&pgbouncer=true&connect_timeout=10&pool_timeout=30
+```
+
+Here are a couple more things to be aware of:
+
+- When using a Digital Ocean connection pool, you'll have multiple ports available. Typically the direct connection (without connection pooling) is on port `25060` and the connection through pgbouncer is served through port `25061`. Make sure you connect to your connection pool on port `25061`
+- Adjust the `connection_limit`. Clusters provide 25 connections per 1 GB of RAM. Three connections per cluster are reserved for maintenance, and all remaining connections can be allocated to connection pools
+- Both `pgbouncer=true` and `pool_timeout=30` are required to deploy successfully through your connection pool
+
+Connection Pooling for MySQL is not yet supported.
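The sizing rule above can be expressed as simple arithmetic (a back-of-the-envelope sketch, not an official Digital Ocean API):

```js
// Digital Ocean clusters provide 25 connections per 1 GB of RAM,
// and 3 connections per cluster are reserved for maintenance.
const poolableConnections = (ramGb) => ramGb * 25 - 3

// e.g. a 1 GB cluster leaves 22 connections, so your pools'
// connection_limit values should sum to at most that number.
```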
+
+## AWS
+
+Use [Amazon RDS Proxy](https://aws.amazon.com/rds/proxy) for MySQL or PostgreSQL.
+
+From the [AWS Docs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy.html#rds-proxy.limitations):
+
+> Your RDS Proxy must be in the same VPC as the database. The proxy can't be publicly accessible.
+
+Because of this limitation, with out-of-the-box configuration, you can only use RDS Proxy if you're deploying your Lambda Functions to the same AWS account. Alternatively, you can use RDS directly, but you might require larger instances to handle your production traffic and the number of concurrent connections.
+
+## Why Connection Pooling?
+
+Relational databases have a maximum number of concurrent client connections.
+
+- Postgres allows 100 by default
+- MySQL allows 151 by default
+
+In a traditional server environment, you would need a large amount of traffic (and therefore web servers) to exhaust these connections, since each web server instance typically leverages a single connection.
+
+In a Serverless environment, each function connects directly to the database, which can exhaust limits quickly. To prevent connection errors, you should add a connection pooling service in front of your database. Think of it as a load balancer.
diff --git a/docs/versioned_docs/version-8.4/contributing-overview.md b/docs/versioned_docs/version-8.4/contributing-overview.md
new file mode 100644
index 000000000000..873757c25733
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/contributing-overview.md
@@ -0,0 +1,179 @@
+---
+title: Contributing
+description: There's several ways to contribute to Redwood
+slug: contributing
+---
+
+# Contributing: Overview and Orientation
+
+Love Redwood and want to get involved? You’re in the right place and in good company! As of this writing, there are more than [250 contributors](https://github.com/redwoodjs/redwood/blob/main/README.md#contributors) who have helped make Redwood awesome by contributing code and documentation. This doesn't include all those who participate in the vibrant, helpful, and encouraging Forums and Discord, which are both great places to get started if you have any questions.
+
+There are several ways you can contribute to Redwood:
+
+- join the [community Forums](https://community.redwoodjs.com/) and [Discord server](https://discord.gg/jjSYEQd) — encourage and help others 🙌
+- [triage issues on the repo](https://github.com/redwoodjs/redwood/issues) and [review PRs](https://github.com/redwoodjs/redwood/pulls) 🩺
+- write and edit [docs](#contributing-docs) ✍️
+- and of course, write code! 👩‍💻
+
+_Before interacting with the Redwood community, please read and understand our [Code of Conduct](https://github.com/redwoodjs/redwood/blob/main/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct)._
+
+> ⚡️ **Quick Links**
+>
+> There are several contributing docs and references, each covering specific topics:
+>
+> 1. 🧭 **Overview and Orientation** (👈 you are here)
+> 2. 📓 [Reference: Contributing to the Framework Packages](https://github.com/redwoodjs/redwood/blob/main/CONTRIBUTING.md)
+> 3. 🪜 [Step-by-step Walkthrough](contributing-walkthrough.md) (including Video Recording)
+> 4. 📈 [Current Project Status](https://github.com/orgs/redwoodjs/projects/11)
+> 5. 🤔 What should I work on?
+> - [Good First Issue](https://redwoodjs.com/good-first-issue)
+> - [Discovery Process and Open Issues](#what-should-i-work-on)
+
+## The Characteristics of a Contributor
+
+More than committing code, contributing is about human collaboration and relationships. Our community mantra is **“By helping each other be successful with Redwood, we make the Redwood project successful.”** We have a specific vision for the effect this project and community will have on you — it should give you superpowers to build+create, progress in skills, and help advance your career.
+
+So who do you need to become to achieve this? Specifically, what characteristics, skills, and capabilities will you need to cultivate through practice? Here are our suggestions:
+
+- Empathy
+- Gratitude
+- Generosity
+
+All of these are applicable in relation to both others and yourself. The goal of putting them into practice is to create trust that will be a catalyst for risk-taking (another word to describe this process is “learning”!). These are the ingredients necessary for productive, positive collaboration.
+
+And you thought all this was just about opening a PR 🤣 Yes, it’s a super rewarding experience. But that’s just the beginning!
+
+## What should I work on?
+
+Even if you know the mechanics, it’s hard to get started without a starting place. Our best advice is this — dive into the Redwood Tutorial, read the docs, and build your own experiment with Redwood. Along the way, you’ll find typos, out-of-date (or missing) documentation, code that could work better, or even opportunities for improving and adding features. You’ll be engaging in the Forums and Chat and developing a feel for priorities and needs. This way, you’ll naturally follow your own interests and sooner than later intersect “things you’re interested in” + “ways to help improve Redwood”.
+
+There are other more direct ways to get started as well, which are outlined below.
+
+### Project Boards and GitHub Issues
+
+The Redwood Core Team is working publicly — progress is updated daily on the [Release Project Board](https://github.com/orgs/redwoodjs/projects/11).
+
+Eventually, all this leads you back to Redwood’s GitHub Issues page. Here you’ll find open items that need help, which are organized by labels. There are three labels helpful for contributing:
+
+1. [Good First Issue](https://github.com/redwoodjs/redwood/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22): these items are more likely to be an accessible entry point to the Framework. It’s less about skill level and more about focused scope.
+2. [Help Wanted](https://github.com/redwoodjs/redwood/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22): these items especially need contribution help from the community.
+3. [Bugs 🐛](https://github.com/redwoodjs/redwood/issues?q=is%3Aissue+is%3Aopen+label%3Abug%2Fconfirmed): last but not least, we always need help with bugs. Some are technically less challenging than others. Sometimes the best way you can help is to attempt to reproduce the bug and confirm whether or not it’s still an issue.
+
+### Create a New Issue
+
+Anyone can create a new Issue. If you’re not sure that your feature or idea is something to work on, start the discussion with an Issue. Describe the idea and problem + solution as clearly as possible, including examples or pseudo code if applicable. It’s also very helpful to `@` mention a maintainer or Core Team member who shares the area of interest.
+
+Just know that there are a lot of Issues that shuffle around every day. If no one replies, it’s just because people are busy. Reach out in the Forums, Chat, or comment in the Issue. We intend to reply to every Issue that’s opened. If yours doesn’t have a reply, then give us a nudge!
+
+Lastly, it can often be helpful to start with a brief discussion in the community Chat or Forums. Sometimes that’s the quickest way to get feedback and a sense of priority before opening an Issue.
+
+## Contributing Code
+
+Redwood's composed of many packages that are designed to work together. Some of these packages are designed to be used outside Redwood too!
+
+Before you start contributing, you'll want to set up your local development environment. The Redwood repo's top-level [contributing guide](https://github.com/redwoodjs/redwood/blob/main/CONTRIBUTING.md#local-development) walks you through this. Make sure to give it an initial read.
+
+For details on contributing to a specific package, see the package's README (links provided in the table below). Each README has a section named Roadmap. If you want to get involved but don't quite know how, the Roadmap's a good place to start. See anything that interests you? Go for it! And be sure to let us know—you don't have to have a finished product before opening an issue or pull request. In fact, we're big fans of [Readme Driven Development](https://tom.preston-werner.com/2010/08/23/readme-driven-development.html).
+
+Is what you want to do not on the roadmap? Well, still go for it! We love spikes and proof-of-concepts. And if you have a question, just ask!
+
+### RedwoodJS Framework Packages
+
+| Package | Description |
+| :---------------------------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [`@redwoodjs/api-server`](https://github.com/redwoodjs/redwood/blob/main/packages/api-server/README.md) | Run a Redwood app using Fastify server (alternative to serverless API) |
+| [`@redwoodjs/api`](https://github.com/redwoodjs/redwood/blob/main/packages/api/README.md) | Infrastructure components for your application's API side, including logging, webhooks, authentication decoders and parsers, as well as tools to test custom serverless functions and webhooks |
+| [`@redwoodjs/auth`](https://github.com/redwoodjs/redwood/blob/main/packages/auth/README.md#contributing) | A lightweight wrapper around popular SPA authentication libraries |
+| [`@redwoodjs/cli`](https://github.com/redwoodjs/redwood/blob/main/packages/cli/README.md) | All the commands for Redwood's built-in CLI |
+| [`@redwoodjs/codemods`](https://github.com/redwoodjs/redwood/blob/main/packages/codemods/README.md) | Codemods that automate upgrading a Redwood project |
+| [`@redwoodjs/core`](https://github.com/redwoodjs/redwood/blob/main/packages/core/README.md) | Defines babel plugins and config files |
+| [`@redwoodjs/create-redwood-app`](https://github.com/redwoodjs/redwood/blob/main/packages/create-redwood-app/README.md) | Enables `yarn create redwood-app`—downloads the latest release of Redwood and extracts it into the supplied directory |
+| [`@redwoodjs/eslint-config`](https://github.com/redwoodjs/redwood/blob/main/packages/eslint-config/README.md) | Defines Redwood's eslint config |
+| [`@redwoodjs/forms`](https://github.com/redwoodjs/redwood/blob/main/packages/forms/README.md) | Provides Form helpers |
+| [`@redwoodjs/graphql-server`](https://github.com/redwoodjs/redwood/blob/main/packages/graphql-server/README.md) | Exposes functions to build the GraphQL API, provides services with `context`, and a set of envelop plugins to supercharge your GraphQL API with logging, authentication, error handling, directives and more |
+| [`@redwoodjs/internal`](https://github.com/redwoodjs/redwood/blob/main/packages/internal/README.md) | Provides tooling to parse Redwood configs and get a project's paths |
+| [`@redwoodjs/prerender`](https://github.com/redwoodjs/redwood/blob/main/packages/prerender/README.md) | Defines functionality for prerendering static content |
+| [`@redwoodjs/record`](https://github.com/redwoodjs/redwood/blob/main/packages/record/README.md) | ORM built on top of Prisma. It may be extended in the future to wrap other database access packages |
+| [`@redwoodjs/router`](https://github.com/redwoodjs/redwood/blob/main/packages/router/README.md) | The built-in router for Redwood |
+| [`@redwoodjs/structure`](https://github.com/redwoodjs/redwood/blob/main/packages/structure/README.md) | Provides a way to build, validate and inspect an object graph that represents a complete Redwood project |
+| [`@redwoodjs/telemetry`](https://github.com/redwoodjs/redwood/blob/main/packages/telemetry/README.md) | Provides functionality for anonymous data collection |
+| [`@redwoodjs/testing`](https://github.com/redwoodjs/redwood/blob/main/packages/testing/README.md) | Provides helpful defaults when testing a Redwood project's web side |
+| [`@redwoodjs/web`](https://github.com/redwoodjs/redwood/blob/main/packages/web/README.md) | Configures a Redwood app's web side: wraps the Apollo Client in `RedwoodApolloProvider`; defines the Cell HOC |
+
+## Contributing Docs
+
+First off, thank you for your interest in contributing docs! Redwood prides itself on good developer experience, and that includes good documentation.
+
+Before you get started, there's an implicit doc-distinction that we should make explicit: all the docs on redwoodjs.com are for helping people develop apps using Redwood, while all the docs on the Redwood repo are for helping people contribute to Redwood.
+
+Although Developing and Contributing docs are in different places, they most definitely should be linked and referenced as needed. For example, it's appropriate to have a "Contributing" doc on redwoodjs.com that's context-appropriate, but it should link to the Framework's [CONTRIBUTING.md](https://github.com/redwoodjs/redwood/blob/main/CONTRIBUTING.md) (the way this doc does).
+
+### How Redwood Thinks about Docs
+
+Before we get into the how-to, a little explanation. When thinking about docs, we find [divio's documentation system](https://documentation.divio.com/) really useful. It's not necessary that a doc always have all four of the dimensions listed, but if you find yourself stuck, you can ask yourself questions like "Should I be explaining? Am I explaining too much? Too little?" to reorient yourself while writing.
+
+### Docs for Developing Redwood Apps
+
+redwoodjs.com has three kinds of Developing docs: References, How To's, and The Tutorial.
+You can find References and How To's within their respective directories on the redwoodjs/redwood repo: [docs/](https://github.com/redwoodjs/redwood/tree/main/docs) and [how-to/](https://github.com/redwoodjs/redwood/tree/main/docs/how-to).
+
+The Tutorial is a standalone document that serves a specific purpose as an introduction to Redwood, an aspirational roadmap, and an example of developer experience. As such, it's distinct from the categories mentioned, although it's most similar to How To's.
+
+#### References
+
+References are explanation-driven how-to content. They're more direct and to-the-point than The Tutorial and How To's. The idea is much more about finding something or getting something done than any kind of learning journey.
+
+Before you take on a doc, you should read [Forms](forms.md) and [Router](router.md); they have the kind of content you should be striving for. They're comprehensive yet conversational.
+
+In general, don't be afraid to go into too much detail. We'd rather you err on the side of too much than too little. One tip for finding good content is searching the forum and repo for "prior art"—what are people talking about where this comes up?
+
+#### How To's
+
+How To's are tutorial-style content focused on a specific problem-solution. They usually have a beginner in mind (if not, they should indicate that they don't—put 'Advanced' or 'Deep-Dive', etc., in the title or introduction). How To's may include some explanatory text as asides, but they shouldn't be the majority of the content.
+
+#### Making a Doc Findable
+
+If you write it, will they read it? We think they will—if they can find it.
+
+After you've finished writing, step back for a moment and consider the word(s) or phrase(s) people will use to find what you just wrote. For example, let's say you were writing a doc about configuring a Redwood app. If you didn't know much about configuring a Redwood app, a heading (in the nav bar to the left) like "redwood.toml" wouldn't make much sense, even though it _is_ the main configuration file. You'd probably look for "Redwood Config" or "Settings", or type "how to change Redwood App settings" in the "Search the docs" bar up top, or in Google.
+
+That is to say, the most useful headings aren't always the most literal ones. Indexing is more than just underlining the "important" words in a text—it's identifying and locating the concepts and topics that are the most relevant to our readers, the users of our documentation.
+
+So, after you've finished writing, reread what you wrote with the intention of making a list of two to three keywords or phrases. Then, try to use each of those in three places, in this order of priority:
+
+- the left-nav menu title
+- the page title or the first right-nav ("On this page") section title
+- the introductory paragraph
+
+### Docs for Contributing to the Redwood Repo
+
+These docs are in the Framework repo, redwoodjs/redwood, and explain how to contribute to Redwood packages. They're the docs linked to in the table above.
+
+In general, they should consist of more straightforward explanations, are allowed to be technically heavy, and should be written for a more experienced audience. But as a best practice for collaborative projects, they should still provide a Vision + Roadmap and identify the project's point person(s) (or lead(s)).
+
+## What makes for a good Pull Request?
+
+In general, we don’t have a formal structure for PRs. Our goal is to make it as efficient as possible for anyone to open a PR. But there are some good practices, which are flexible. Just keep in mind that after opening a PR there’s more to do before getting to the finish line:
+
+1. Reviews from other contributors and maintainers
+2. Update code and, after maintainer approval, merge-in changes to the `main` branch
+3. Once the PR is merged, it will be released and added to the next version's Release Notes, with a link for anyone to look at the PR and understand it.
+
+Some tips and advice:
+
+- **Connect the dots and leave a breadcrumb**: link to related Issues, Forum discussions, etc. Help others follow the trail leading up to this PR.
+- **A Helpful Description**: What does the code in the PR do and what problem does it solve? How can someone use the code? Code sample, Screenshot, Quick Video… Any or all of this is so so good.
+- **Draft or Work in Progress**: You don’t have to finish the code to open a PR. Once you have a start, open it up! Most often the best way to move an Issue forward is to see the code in action. Also, often this helps identify ways forward before you spend a lot of time polishing.
+- **Questions, Items for Discussion, Etc.**: Another reason to open a Draft PR is to ask questions and get direction via review.
+- **Loop in a Maintainer for Feedback and Review**: ping someone with an `@`. And nudge again in a few days if there’s no reply. We appreciate it and truly don’t want the PR to get lost in the shuffle!
+- **Next Steps**: Once the PR is merged, will there be a follow up step? If so, link to an Issue. How about Docs to-do or Docs to-merge?
+
+The best thing you can do is look through existing PRs, which will give you a feel for how things work and what you think is helpful.
+
+### Example PR
+
+If you’re looking for an example of “what makes a good PR”, look no further than this one by Kim-Adeline:
+
+- [Convert component generator to TS #632](https://github.com/redwoodjs/redwood/pull/632)
+
+Not every PR needs this much information. But it’s definitely helpful when it does!
diff --git a/docs/versioned_docs/version-8.4/contributing-walkthrough.md b/docs/versioned_docs/version-8.4/contributing-walkthrough.md
new file mode 100644
index 000000000000..5747b425e799
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/contributing-walkthrough.md
@@ -0,0 +1,275 @@
+---
+title: Contributing Walkthrough
+description: Watch a video of the contributing process
+---
+
+# Contributing: Step-by-Step Walkthrough (with Video)
+
+> ⚡️ **Quick Links**
+>
+> There are several contributing docs and references, each covering specific topics:
+>
+> 1. 🧭 [Overview and Orientation](contributing-overview.md)
+> 2. 📓 [Reference: Contributing to the Framework Packages](https://github.com/redwoodjs/redwood/blob/main/CONTRIBUTING.md)
+> 3. 🪜 **Step-by-step Walkthrough** (👈 you are here)
+> 4. 📈 [Current Project Status: v1 Release Board](https://github.com/orgs/redwoodjs/projects/6)
+> 5. 🤔 What should I work on?
+> - ["Help Wanted" v1 Triage Board](https://redwoodjs.com/good-first-issue)
+> - [Discovery Process and Open Issues](contributing-overview.md#what-should-i-work-on)
+
+## Video Recording of Complete Contributing Process
+
+The following recording is from a Contributing Workshop that walks through the exact steps outlined below. The Workshop includes additional topics along with Q&A discussion.
+
+VIDEO
+
+## Prologue: Getting Started with Redwood and GitHub (and git)
+
+These are the foundations for contributing, which you should be familiar with before starting the walkthrough.
+
+[**The Redwood Tutorial**](tutorial/foreword.md)
+
+The best (and most fun) way to learn Redwood and the underlying tools and technologies.
+
+**Docs and How To**
+
+- Start with the [Introduction](https://github.com/redwoodjs/redwood/blob/main/README.md) Doc
+- And browse through [How To's](how-to/index)
+
+### GitHub (and Git)
+
+Diving into Git and the GitHub workflow can feel intimidating if you haven’t experienced it before. The good news is there’s a lot of great material to help you learn and be committing in no time!
+
+- [Introduction to GitHub](https://lab.github.com/githubtraining/introduction-to-github) (overview of concepts and workflow)
+- [First Day on GitHub](https://lab.github.com/githubtraining/first-day-on-github) (including Git)
+- [First Week on GitHub](https://lab.github.com/githubtraining/first-week-on-github) (parts 3 and 4 might be helpful)
+
+## The Full Workflow: From Local Development to a New PR
+
+### Definitions
+
+#### Redwood “Project”
+
+We refer to the codebase of a Redwood application as a Project. This is what you install when you run `yarn create redwood-app `. It’s the thing you are building with Redwood.
+
+You’ll also find the template used to create a new project (when you run `create redwood-app`) here in GitHub: [redwoodjs/redwood/packages/create-redwood-app/template/](https://github.com/redwoodjs/redwood/tree/main/packages/create-redwood-app/template)
+
+We refer to this as the **CRWA Template or Project Template**.
+
+#### Redwood “Framework”
+
+The Framework is the codebase containing all the packages (and other code) that are published on NPMjs.com as `@redwoodjs/`. The Framework repository on GitHub is here: [https://github.com/redwoodjs/redwood](https://github.com/redwoodjs/redwood)
+
+### Development tools
+
+These are the tools used and recommended by the Core Team.
+
+**VS Code**
+[Download VS Code](https://code.visualstudio.com/download)
+This has quickly become the de facto editor for JavaScript and TypeScript. Additionally, we have added recommended VS Code Extensions to use when developing both the Framework and a Project. You’ll see a pop-up window asking you about installing the extensions when you open up the code.
+
+**GitHub Desktop**
+[Download GitHub Desktop](https://desktop.github.com)
+You’ll need to be comfortable using Git at the command line. But the thing we like best about GitHub Desktop is how easy it makes the workflow across GitHub, GitHub Desktop, and VS Code. You don’t have to worry about syncing permissions or finding things. You can start from a repo on GitHub.com and use Desktop to do everything from “clone and open on your computer” to returning to the site to “open a PR on GitHub”.
+
+**[Mac OS] iTerm and Oh-My-Zsh**
+There’s nothing wrong with Terminal (on Mac) and plain zsh or bash. (If you’re on Windows, we highly recommend using Git for Windows and Git Bash.) But we enjoy using iTerm2 ([download](https://iterm2.com)) and zsh much more (combined with [Oh My Zsh](https://ohmyz.sh)). Heads up, you can get lost in the world of theming and adding plugins. We recommend keeping it simple for a while before taking the customization deep dive 😉.
+
+**[Windows] Git for Windows with Git Bash or WSL(2)**
+Unfortunately, there are a lot of “gotchas” when it comes to working with JavaScript-based frameworks on Windows. We do our best to point out (and resolve) issues, but our priority focus is on developing a Redwood app vs contributing to the Framework. (If you’re interested, there’s a lengthy Forum conversation about this with many suggestions.)
+
+All that said, we highly recommend using one of the following setups to maximize your workflow:
+
+1. Use [Git for Windows and Git Bash](how-to/windows-development-setup.md) (included in installation)
+2. Use [WSL following this setup guide on the Forums](https://community.redwoodjs.com/t/windows-subsystem-for-linux-setup/2439)
+
+Lastly, the new Gitpod integration is a great option and only getting better. You might just want to start using it from the beginning (see section below in “Local Development Setup”).
+
+**Gitpod**
+We recently added an integration with [Gitpod](http://gitpod.io) that automatically creates a Framework dev workspace, complete with test project, in a browser-based VS Code environment. It’s pretty amazing and we highly recommend giving it a shot. (If you’re developing on Windows, it’s also an amazing option for you anytime you run into something that isn’t working correctly or supported.)
+
+But don’t skip reading the following steps in “Local Development Setup” — Gitpod uses the same workflow and tools to initialize. If you want to develop in Gitpod, you’ll need to understand how it all works.
+
+But when you’re ready, learn how to use it in the section at the end [“GitPod: Browser-based Development”](#gitpod-browser-based-development).
+
+### Local Development Setup
+
+#### Step 1: Redwood Framework
+
+1. **Fork the [Redwood Framework](https://github.com/redwoodjs/redwood)** into a personal repo
+2. Using GitHub Desktop, **open the Framework Codebase** in a VS Code workspace
+3. Commands to “**start fresh**” when working on the Framework
+ - `yarn install`: This installs the package dependencies in /node_modules using Yarn package manager. This command is the same as just typing `yarn`. Also, if you ever switch branches and want to make sure the install dependencies are correct, you can run `yarn install --force` (shorthand `yarn -f`).
+ - `git clean -fxd`: _You’ll only need to do this if you’ve already been developing and want to “start over” and reset your codebase_. This command will permanently delete everything that is .gitignored, e.g. /node_modules and /dist directories with package builds. When switching between branches, this command makes sure nothing is carried over that you don’t want. (Warning: it will delete .env files in a Redwood Project. To avoid this, you can use `git clean -fxd -e .env`.)
+4. **Create a new branch** from the `main` branch
+   First make sure you’ve pulled all changes from the remote origin (GitHub repo) into your local branch. (If you just cloned from your fork, you should be up to date.) Then create a new branch. The nomenclature used by David Price is `initials-description-with-hyphens`, e.g. `dsp-add-eslint-config-redwood-toml`. It's simple to use VS Code or GitHub Desktop to manage branches. You can also do this with the `git checkout` command in the CLI.
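+
+Putting the "start fresh" steps together, a typical session in your local Framework clone looks something like this (the branch name is just an example):
+
+```
+git checkout main && git pull                        # make sure main is up to date
+git clean -fxd                                       # optional: reset the codebase (deletes everything .gitignored)
+yarn install                                         # reinstall package dependencies
+git checkout -b dsp-add-eslint-config-redwood-toml   # create your working branch
+```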
+
+#### Step 2: Test Project
+
+There are several options for creating a local Redwood Project to use during development. Anytime you are developing against a test project, there are some specific gotchas to keep in mind:
+
+- New projects always use the latest stable version of the Redwood packages, which will not be up to date with the latest Framework code in the `main` branch.
+- To use the packages corresponding with the latest code in the Framework `main` branch, you can use the canary version published to NPM. All you need to do to install the canary versions is run `yarn rw upgrade --tag canary` in your Project.
+- Using a cloned project or repo? Just know there are likely breaking changes in `main` that haven’t been applied. You can examine merged PRs with the “breaking” label for more info.
+- Just because you are using canary doesn’t mean you are using your local Framework branch code! Make sure you run `yarn rwfw project:sync`. And anytime you switch branches or get out of sync, you might need to start over beginning with the `git clean -fxd` command.
+
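+Putting those gotchas together, the usual sequence in a test Project is: upgrade to canary, then sync against your local Framework branch (both commands are covered in the list above):
+
+```
+yarn rw upgrade --tag canary   # match the latest published Framework code
+yarn rwfw project:sync         # run against your local Framework branch
+```
+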
+With those details out of the way, now is the time to choose an option below that meets your needs based on functionality and codebase version.
+
+**Build a Functional Test Project [Recommended]**
+
+1. 👉 **Use the build script to create a test project**: From the Framework root directory, run `yarn build:test-project `. This command installs a new project using the Template codebase from your current Framework branch, then adds Tutorial features, and finally initializes the DB (with seed data!). It should work 90% of the time and is the recommended starting place. We also use this out-of-the-box with Gitpod.
+
+**Other Options to create a project**
+
+2. **Install a fresh project using the local Framework template code:** Sometimes you need to create a project that uses the Template codebase in your local branch of the Framework, e.g. your changes include modifications to the CRWA Template and need to be tested. Running the command below is exactly the same as `yarn create redwood-app …`, only it runs the command from your local Framework package using the local Template codebase. Note: this is the same command used at the start of the `yarn build:test-project` command.
+
+```
+yarn babel-node packages/create-redwood-app/src/create-redwood-app.js
+```
+
+3. **Clone the Redwood Tutorial App repo:** This is the codebase to use when starting the Redwood Tutorial Part 2. It is updated to the latest version and has the Blog features. This is often something we use for local development. Note: be sure to upgrade to canary and look out for breaking changes coming with the next release.
+
+4. **Install a fresh project**: `yarn create redwood-app `. If you just need a fresh installation 1) using the latest version template codebase and 2) without any features, then just install a new Redwood project. Note: this can have the same issues regarding the need to upgrade to canary and addressing breaking changes (see Notes from items 2 and 3 above).
+
+> Note: All the options above currently set the language to JavaScript. If you would like to work with TypeScript, you can add the `--typescript` option to any of the commands that run the create-redwood-app installation.
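+
+For example, a TypeScript test project could be created like this (the target directory here is just a placeholder):
+
+```
+yarn build:test-project ~/redwood-test-project --typescript
+```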
+
+#### Step 3: Link the local Framework with the local test Project
+
+Once you work on the Framework code, you’ll most often want to run the code in a Redwood app for testing. However, the Redwood Project you created for testing is currently using the latest version (or canary) packages of Redwood published on NPMjs.com, e.g. [@redwoodjs/core](https://www.npmjs.com/package/@redwoodjs/core)
+
+So we’ll use the Redwood Framework (rwfw) command to connect our local Framework and test Projects, which allows the Project to run on the code for Packages we are currently developing.
+
+Run this command from the CLI in your test Project:
+
+```
+RWFW_PATH=<path to your local Framework> yarn rwfw project:sync
+```
+
+For Example:
+
+```
+cd redwood-project
+RWFW_PATH=~/redwood yarn rwfw project:sync
+```
+
+`RWFW_PATH` is the path to your local copy of the Redwood Framework. _Once provided to rwfw, it'll remember it and you shouldn't have to provide it again unless you move it._
+
+> **Heads up for Windows Devs**
+> Depending on your dev setup, Windows might balk at you setting the env var RWFW_PATH at the beginning of the command like this. If so, try prepending with `cross-env`, e.g. `yarn cross-env RWFW_PATH=~/redwood yarn rwfw` ... Or you can add the env var and value directly to your shell before running the command.
+
+As project:sync starts up, it'll start logging to the console. In order, it:
+
+1. cleans and builds the framework
+2. copies the framework's dependencies to your project
+3. runs yarn install in your project
+4. copies over the framework's packages to your project
+5. waits for changes
+
+Step two is the only explicit change you'll see to your project. You'll see that a ton of packages have been added to your project's root package.json.
+
+All done? You’re ready to kill the link process with “ctrl + c”. You’ll need to confirm your root package.json no longer has the added dependencies. And, if you want to reset your test-project, you should run `yarn install --force`.
+
+#### Step 4: Framework Package(s) Local Testing
+
+Within your Framework directory, use the following tools and commands to test your code:
+
+1. **Build the packages**: `yarn build`
+   - to delete all previous build directories: `yarn build:clean`
+2. **Syntax and Formatting**: `yarn lint`
+ - to fix errors or warnings: `yarn lint:fix`
+3. **Run unit tests for each package**: `yarn test`
+4. **Run through the Cypress E2E integration tests**: `yarn e2e`
+5. **Check Yarn resolutions and package.json format**: `yarn check`
+
+All of these checks are included in Redwood’s GitHub PR Continuous Integration (CI) automation. However, it’s good practice to understand what they do by running them locally. The E2E tests aren’t something we run every time anymore (because they take a while), but you should learn how to use them because they come in handy when your code is failing tests on GitHub and you need to diagnose the issue.
+
+> **Heads up for Windows Devs**
+> The Cypress E2E does _not_ work on Windows. Two options are available if needed:
+>
+> 1. Use Gitpod (see related section for info)
+> 2. When you create a PR, just ask for help from a maintainer
+
+#### Step 5: Open a PR 🚀
+
+You’ve made it to the fun part! It’s time to use the code you’re working on to create a new PR into the Redwood Framework `main` branch.
+
+We use GitHub Desktop to walk through the process of:
+
+- Committing my changes to my development branch
+- Publishing (pushing) my branch and changes to my GitHub repo fork of the Redwood Framework
+- Opening a PR requesting to merge my forked-repo branch into the Redwood Framework `main` branch
+
+> While we use GitHub Desktop as an example, the basic process outlined above is the same whether using the command line or other clients.
+
+1. **Commit Files:** Using GitHub Desktop, browse to your local Redwood Framework repository and select the current branch you're working on. On the left-hand side, you'll see the files that have been modified, added, or removed. Check the boxes for the files you want to include in the PR. Below the file list, you'll see two text boxes and a "Commit to <your-branch-name>" button. Write a short commit message in the first box. If you want to add a longer description then you can do so in the second box. Click the "Commit to ..." button to commit the changes to the branch. The files are now committed under that commit message.
+
+2. **Push Files:** After committing, you should see an option appear with the count of local commits and a button to "Push origin." If you're ready to push those changes to the remote branch, click that button. Otherwise, you can keep working and add more commits using the process in step 1.
+
+3. **Create Pull Request:** Once the commit(s) have been pushed, you should see another option for "Create Pull Request." This will open a browser window to GitHub's "Open a pull request" form. Fill out the appropriate information, check the box to "Allow edits by maintainers," and submit!
+
+> If you are following along and are not using GitHub Desktop, after pushing your commits, you can open a pull request by visiting [github.com](https://github.com) and browsing to your fork. There should be a button at the top to submit a pull request.
+
+You have successfully submitted your PR!
+
+**Note:** Make sure you check the box that allows project maintainers to update your branch. This option is found on the "Open a pull request" form below the description textbox. Checking this option helps move a PR forward more quickly, as branches always need to be updated from `main` before we can merge.
+
+**When is my code “ready” to open a PR?**
+Most of the action, communication, and decisions happen within a PR. A common mistake new contributors make is _waiting_ until their code is “perfect” before opening a PR. Assuming your PR has some code changes, it’s great practice to open a [Draft PR](https://github.blog/2019-02-14-introducing-draft-pull-requests/) (a setting during PR creation), which you can use to start discussion and ask questions. PRs are closed all the time without being merged, often because they are replaced by another PR resulting from decisions and discussion. It’s part of the process. More importantly, it means collaboration is happening!
+
+What isn’t a fun experience is spending a whole bunch of time on code that ends up not being the correct direction or is unnecessary/redundant to something that already exists. This is part of the learning process. But it’s another reason to open a draft PR sooner rather than later, to get confirmation and questions out of the way before investing time in refining and details.
+
+When in doubt, just try first and ask for help and direction!
+
+Refer to the [What makes for a good Pull Request?](contributing-overview.md#what-makes-for-a-good-pull-request) section in [Contributing Overview](contributing-overview.md) for general good practices when opening a PR.
+
+### Gitpod: Browser-based Development
+
+[Gitpod](http://gitpod.io) has recently been integrated with Redwood to JustWork™ with any branch or PR. When a virtual Gitpod workspace is initialized, it automatically:
+
+1. Checks out the code from your branch or PR
+2. Runs `yarn install`
+3. Creates the functional Test Project via `yarn build:test-project`
+4. Syncs the Framework code with the Test Project
+5. Starts the Test Project dev server
+6. 🤯
+
+> **Chrome works best**
+> We’ve noticed some bugs using Gitpod with either Brave or Safari. Currently we recommend sticking to Chrome (although it’s worth trying out Edge and Firefox).
+
+**Demo of Gitpod**
+David briefly walks through an automatically prebuilt Gitpod workspace here:
+
+- [Gitpod + RedwoodJS 3-minute Walkthrough](https://youtu.be/_kMuTW3x--s)
+
+Make sure you watch until the end where David shows how to set up your integration with GitHub and VS Code sync. 🤩
+
+**Start a Gitpod Workspace**
+There are two ways to get started with Gitpod + Redwood.
+
+_Option 1: Open a PR_
+Every PR will trigger a Gitpod prebuild using the PR branch. Just look for Gitpod in the list of checks at the bottom of the PR — click the “Details” link and away you’ll go!
+
+
+
+_Option 2: Use the link from your project or branch_
+
+You can initialize a workspace using this URL pattern:
+
+```
+https://gitpod.io/#<repo-branch-or-pr-url>
+```
+
+For example, this link will start a workspace using the RedwoodJS main branch:
+
+- https://gitpod.io/#https://github.com/redwoodjs/redwood
+
+And this link will start a workspace for a PR #3434:
+
+- https://gitpod.io/#https://github.com/redwoodjs/redwood/pull/3434
diff --git a/docs/versioned_docs/version-8.4/cors.md b/docs/versioned_docs/version-8.4/cors.md
new file mode 100644
index 000000000000..3781aeb19fe2
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/cors.md
@@ -0,0 +1,263 @@
+---
+title: Cross-Origin Resource Sharing
+description: For when you need to worry about CORS
+---
+
+# CORS
+
+CORS stands for [Cross Origin Resource Sharing](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS). In a nutshell, by default, browsers aren't allowed to access resources outside their own domain.
+
+## When you need to worry about CORS
+
+If your api and web sides are deployed to different domains, you'll have to worry about CORS. For example, if your web side is deployed to `example.com` but your api is `api.example.com`. For security reasons your browser will not allow XHR requests (like the kind that the GraphQL client makes) to a domain other than the one currently in the browser's address bar.
+
+This will become obvious when you point your browser to your site and see none of your GraphQL data. When you look in the web inspector you'll see a message along the lines of:
+
+> ⛔️ Access to fetch https://api.example.com has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
+
+## Avoiding CORS
+
+Dealing with CORS can complicate your app and make it harder to deploy to new hosts, run in different environments, etc. Is there a way to avoid CORS altogether?
+
+Yes! If you can add a proxy between your web and api sides, all requests will _appear_ to be going to and from the same domain (the web side, even though behind the scenes they are forwarded somewhere else). This functionality is included automatically with hosts like [Netlify](https://docs.netlify.com/routing/redirects/rewrites-proxies/#proxy-to-another-service) or [Vercel](https://vercel.com/docs/cli#project-configuration/rewrites). With a host like [Render](https://render-web.onrender.com/docs/deploy-redwood#deployment) you can enable a proxy with a simple config option. Most providers should provide this functionality through a combination of provider-specific config and/or web server configuration.
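+
+For example, on Netlify this kind of proxy is just a rewrite rule. Here's a minimal sketch (the api domain and function path are assumptions for illustration):
+
+```toml
+# netlify.toml -- requests to the web side's /.redwood/functions/* are
+# forwarded to the api side; status = 200 makes this a proxy (rewrite),
+# not a redirect, so the browser never sees the second domain.
+[[redirects]]
+  from = "/.redwood/functions/*"
+  to = "https://api.example.com/:splat"
+  status = 200
+```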
+
+## GraphQL Config
+
+You'll need to add CORS headers to GraphQL responses. You can do this easily enough by adding the `cors` option in `api/src/functions/graphql.js` (or `graphql.ts`):
+
+```diff
+export const handler = createGraphQLHandler({
+ loggerConfig: { logger, options: {} },
+ directives,
+ sdls,
+ services,
++ cors: {
++ origin: 'https://www.example.com', // <-- web side domain
++ },
+ onException: () => {
+ db.$disconnect()
+ },
+})
+```
+
+Note that the `origin` needs to be a complete URL including the scheme (`https`). This is the domain that requests are allowed to come _from_. In this example we assume the web side is served from `https://www.example.com`. If you have multiple servers that should be allowed to access the api, you can pass an array of them instead:
+
+```jsx
+cors: {
+ origin: ['https://example.com', 'https://www.example.com']
+},
+```
+
+The proper one will be included in the CORS header depending on where the response came from.
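+
+To picture how that selection works, here's a sketch for illustration (not Redwood's actual implementation, which handles this for you):
+
+```javascript
+// Echo the request's Origin header back only when it's on the allowlist.
+// With no Access-Control-Allow-Origin header in the response, the browser
+// blocks the web side from reading it.
+const allowedOrigins = ['https://example.com', 'https://www.example.com']
+
+function corsHeaderFor(requestOrigin) {
+  return allowedOrigins.includes(requestOrigin)
+    ? { 'Access-Control-Allow-Origin': requestOrigin }
+    : {}
+}
+```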
+
+## Authentication Config
+
+The following config only applies if you're using [dbAuth](authentication.md#self-hosted-auth-installation-and-setup), which is Redwood's own cookie-based auth system.
+
+You'll need to configure several things:
+
+- Add CORS config for GraphQL
+- Add CORS config for the auth function
+- Cookie config for the auth function
+- Allow sending of credentials in GraphQL XHR requests
+- Allow sending of credentials in auth function requests
+
+Here's how you configure each of these:
+
+### GraphQL CORS Config
+
+You'll need to add CORS headers to GraphQL responses, and let the browser know to send up cookies with any requests. Add the `cors` option in `api/src/functions/graphql.js` (or `graphql.ts`) with an additional `credentials` property:
+
+```diff
+export const handler = createGraphQLHandler({
+ loggerConfig: { logger, options: {} },
+ directives,
+ sdls,
+ services,
++ cors: {
++ origin: 'https://www.example.com', // <-- web side domain
++ credentials: true,
++ },
+ onException: () => {
+ db.$disconnect()
+ },
+})
+```
+
+`origin` is the domain(s) that requests come _from_ (the web side).
+
+### Auth CORS Config
+
+Similar to the `cors` options being sent to GraphQL, you can set similar options in `api/src/functions/auth.js` (or `auth.ts`):
+
+```diff
+const authHandler = new DbAuthHandler(event, context, {
+ db: db,
+ authModelAccessor: 'user',
+ authFields: {
+ id: 'id',
+ username: 'email',
+ hashedPassword: 'hashedPassword',
+ salt: 'salt',
+ resetToken: 'resetToken',
+ resetTokenExpiresAt: 'resetTokenExpiresAt',
+ },
++ cors: {
++ origin: 'https://www.example.com', // <-- web side domain
++ credentials: true,
++ },
+ cookie: {
+ HttpOnly: true,
+ Path: '/',
+ SameSite: 'Strict',
+ Secure: true,
+ },
+ forgotPassword: forgotPasswordOptions,
+ login: loginOptions,
+ resetPassword: resetPasswordOptions,
+ signup: signupOptions,
+})
+```
+
+Just like the GraphQL config, `origin` is the domain(s) that requests come _from_ (the web side).
+
+### Cookie Config
+
+In order to be able to accept cookies from another domain we'll need to make a change to the `SameSite` option in `api/src/functions/auth.js` and set it to `None`:
+
+```jsx {4}
+ cookie: {
+ HttpOnly: true,
+ Path: '/',
+ SameSite: 'None',
+ Secure: true,
+ },
+```
+
+### GraphQL XHR Credentials
+
+Next we need to tell the GraphQL client to include credentials (the dbAuth cookie) in any requests. This config goes in `web/src/App.{ts,js}`:
+
+```jsx {7-12}
+import { AuthProvider, useAuth } from 'src/auth'
+
+const App = () => (
+  <FatalErrorBoundary page={FatalErrorPage}>
+    <RedwoodProvider titleTemplate="%PageTitle | %AppTitle">
+      <AuthProvider>
+        <RedwoodApolloProvider
+          useAuth={useAuth}
+          graphQLClientConfig={{
+            httpLinkConfig: { credentials: 'include' },
+          }}
+        >
+          <Routes />
+        </RedwoodApolloProvider>
+      </AuthProvider>
+    </RedwoodProvider>
+  </FatalErrorBoundary>
+)
+```
+
+### Auth XHR Credentials
+
+Finally, we need to tell dbAuth to include credentials in its own XHR requests. We'll do this within `web/src/auth.{ts,js}` when creating the `AuthProvider`:
+
+```jsx {3-5}
+import { createDbAuthClient, createAuth } from '@redwoodjs/auth-dbauth-web'
+
+const dbAuthClient = createDbAuthClient({
+ fetchConfig: { credentials: 'include' },
+})
+
+export const { AuthProvider, useAuth } = createAuth(dbAuthClient)
+```
+
+## Testing CORS Locally
+
+If you've made the configuration changes above, `localhost` testing should continue working as normal. But, if you want to make sure your CORS config works without deploying to the internet somewhere, you'll need to do some extra work.
+
+### Serving Sides to the Internet
+
+First, you need to get the web and api sides to be serving from different hosts. A tool like [ngrok](https://ngrok.com/) or [localhost.run](https://localhost.run/) allows you to serve your local development environment over a real domain to the rest of the internet (on both `http` and `https`).
+
+You'll need to start two tunnels, one for the web side (this example assumes ngrok):
+
+```bash
+> ngrok http 8910
+
+Session Status online
+Account Your Name (Plan: Pro)
+Version 2.3.40
+Region United States (us)
+Web Interface http://127.0.0.1:4040
+Forwarding http://3c9913de0c00.ngrok.io -> http://localhost:8910
+Forwarding https://3c9913de0c00.ngrok.io -> http://localhost:8910
+```
+
+And another for the api side:
+
+```bash
+> ngrok http 8911
+
+Session Status online
+Account Your Name (Plan: Pro)
+Version 2.3.40
+Region United States (us)
+Web Interface http://127.0.0.1:4040
+Forwarding http://fb6d701c44b5.ngrok.io -> http://localhost:8911
+Forwarding https://fb6d701c44b5.ngrok.io -> http://localhost:8911
+```
+
+Note the two different domains. Copy the `https` domain from the api side because we'll need it in a moment. Even if the Redwood dev server isn't running you can leave these tunnels running, and when the dev server _does_ start, they'll just start on those domains again.
+
+### `redwood.toml` Config
+
+You'll need to make two changes here:
+
+1. Bind the server to all network interfaces
+2. Point the web side to the api's domain
+
+Normally the dev server only binds to `127.0.0.1` (home sweet home) which means you can only access it from your local machine using `localhost` or `127.0.0.1`. To tell it to bind to all network interfaces, and to be available to the outside world, add this `host` option:
+
+```toml {4}
+[web]
+ title = "Redwood App"
+ port = 8910
+ host = '0.0.0.0'
+ apiUrl = '/.redwood/functions'
+ includeEnvironmentVariables = []
+[api]
+ port = 8911
+[browser]
+ open = true
+```
+
+We'll also need to tell the web side where the api side lives. Update the `apiUrl` to whatever domain your api side is running on (remember the domain you copied from ngrok):
+
+```toml {5}
+[web]
+ title = "Redwood App"
+ port = 8910
+ host = '0.0.0.0'
+ apiUrl = 'https://fb6d701c44b5.ngrok.io'
+ includeEnvironmentVariables = []
+[api]
+ port = 8911
+[browser]
+ open = true
+```
+
+Where you get this domain from will depend on how you expose your app to the outside world (this example assumes ngrok).
+
+### Starting the Dev Server
+
+You'll need to apply an option when starting the dev server to tell it to accept requests from any host, not just `localhost`:
+
+```bash
+yarn rw dev --fwd="--allowed-hosts all"
+```
+
+### Wrapping Up
+
+Now you should be able to open the web side's domain in a browser and use your site as usual. Test that GraphQL requests work, as well as authentication if you are using dbAuth.
diff --git a/docs/versioned_docs/version-8.4/create-redwood-app.md b/docs/versioned_docs/version-8.4/create-redwood-app.md
new file mode 100644
index 000000000000..1d4fe329bde7
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/create-redwood-app.md
@@ -0,0 +1,106 @@
+---
+slug: create-redwood-app
+description: Instructions and usage examples for Create Redwood App
+---
+
+# Create Redwood App
+
+To get up and running with Redwood, you can use Create Redwood App:
+
+```terminal
+yarn create redwood-app
+```
+
+## Set up for success
+
+Redwood requires that you're running Node version 20 or higher.
+
+If you're running Node version 21.0.0 or higher, you can still use Create Redwood App, but it may make your project incompatible with some deploy targets, such as AWS Lambdas.
+
+To see what version of Node you're running, you can run the following command in your terminal:
+
+```terminal
+node -v
+```
+
+If you need to update your version of Node or run multiple versions of Node, we recommend installing nvm; we have [documentation about how to get up and running.](./how-to/using-nvm)
+
+You also need to have yarn version 1.22.21 or higher installed. To see what version of yarn you're running, you can run the following command in your terminal:
+
+```terminal
+yarn -v
+```
+
+To upgrade your version of yarn, [you can refer to the yarn documentation](https://yarnpkg.com/getting-started/install).
+
+## What you can expect
+
+### Select your preferred language
+
+Options: TypeScript (default) or JavaScript
+
+If you choose JavaScript, you can always [add TypeScript later](/docs/typescript/introduction#converting-a-javascript-project-to-typescript).
+
+### Do you want to initialize a git repo?
+
+Options: yes (default) or no
+
+If you mark "yes", then it will ask you to **Enter a commit message**. The default message is "Initial commit."
+
+You can always initialize a git repo later and add a commit message by running the following commands in your terminal:
+
+```terminal
+cd <your-project-directory>
+git init
+git add .
+git commit -m "Initial commit"
+```
+
+If you're new to git, here's a recommended playlist on YouTube: [git for Beginners](https://www.youtube.com/playlist?list=PLrz61zkUHJJFmfTgOVL1mBw_NZcgGe882)
+
+### Do you want to run `yarn install`?
+
+Options: yes (default) or no
+
+_NOTE: This prompt will only display if you're running yarn version 1._
+
+This command will download all of your project's dependencies.
+
+If you mark "no", you can always run this command later:
+
+```terminal
+cd <your-project-directory>
+yarn install
+```
+
+## Running the development server
+
+Once Create Redwood App has finished running, you can start your development server by running the following command:
+
+```terminal
+cd <your-project-directory>
+yarn rw dev
+```
+
+- This will start your development server at `http://localhost:8910`.
+- Your API will be available at `http://localhost:8911`.
+- You can visit the Redwood GraphQL Playground at `http://localhost:8911/graphql`.
+
+## Flags
+
+You can bypass these prompts by using the following flags:
+
+| Flag | Alias | What it does |
+| :---------------------------------- | :---- | :---------------------------------------------------------------------------------- |
+| `--yarn-install` | | Run `yarn install` |
+| `--typescript` | `ts` | Set TypeScript as the preferred language (pass `--no-typescript` to use JavaScript) |
+| `--overwrite` | | Overwrites the existing directory, if it has the same name |
+| `--git-init` | `git` | Initializes a git repository |
+| `--commit-message "Initial commit"` | `m` | Specifies the initial git commit message |
+| `--yes` | `y` | Automatically select all defaults |
+
+For example, here's the project with all flags enabled:
+
+```terminal
+yarn create redwood-app --typescript --git-init --commit-message "Initial commit" --yarn-install
+```
diff --git a/docs/versioned_docs/version-8.4/data-migrations.md b/docs/versioned_docs/version-8.4/data-migrations.md
new file mode 100644
index 000000000000..4f7e85e03e94
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/data-migrations.md
@@ -0,0 +1,172 @@
+---
+description: Track changes to database content
+---
+
+# Data Migrations
+
+> Data Migrations are available as of RedwoodJS v0.15
+
+There are two kinds of changes you can make to your database:
+
+- Changes to structure
+- Changes to content
+
+In Redwood, [Prisma Migrate](https://www.prisma.io/docs/reference/tools-and-interfaces/prisma-migrate) takes care of codifying changes to your database _structure_ in code by creating a snapshot of changes to your database that can be reliably repeated to end up in some known state.
+
+To track changes to your database _content_, Redwood includes a feature we call **Data Migration**. As your app evolves and you move data around, you need a way to consistently declare how that data should move.
+
+Imagine a `User` model that contains several columns for user preferences. Over time, you may end up with more and more preferences to the point that you have more preference-related columns in the table than you do data unique to the user! This is a common occurrence as applications grow. You decide that the app should have a new model, `Preference`, to keep track of them all (and `Preference` will have a foreign key `userId` to reference it back to its `User`). You'll use Prisma Migrate to create the new `Preference` model, but how do you copy the preference data to the new table? Data migrations to the rescue!
+
+## Installing
+
+Just like Prisma, we will store which data migrations have run in the database itself. We'll create a new database table `DataMigration` to keep track of which ones have run already.
+
+Rather than create this model by hand, Redwood includes a CLI tool to add the model to `schema.prisma` and create the DB migration that adds the table to the database:
+
+```
+yarn rw data-migrate install
+```
+
+You'll see a new directory created at `api/db/dataMigrations` which will store our individual migration tasks.
+
+Take a look at `schema.prisma` to see the new model definition:
+
+```jsx title="api/db/schema.prisma"
+model RW_DataMigration {
+ version String @id
+ name String
+ startedAt DateTime
+ finishedAt DateTime
+}
+```
+
+The install script also ran `yarn rw prisma migrate dev --create-only` automatically so you have a DB migration ready to go. You just need to run the `prisma migrate dev` command to apply it:
+
+```
+yarn rw prisma migrate dev
+```
+
+## Creating a New Data Migration
+
+Data migrations are just plain TypeScript or JavaScript files which export a single anonymous function that is given a single argument—an instance of `PrismaClient` called `db` that you can use to access your database. The files have a simple naming convention:
+
+```
+{version}-{name}.js
+```
+
+Where `version` is a timestamp, like `20200721123456` (an ISO8601 datetime without any special characters or zone identifier), and `name` is a param-case, human-readable name for the migration, like `copy-preferences`.
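+
+A hypothetical helper shows how those pieces fit together (the generator builds the filename for you; this code is for illustration only):
+
+```javascript
+// Build a data migration filename from a migration name and a date.
+function migrationFilename(name, date = new Date()) {
+  // 2020-07-21T12:34:56.000Z -> 20200721123456 (digits only, seconds precision)
+  const version = date.toISOString().replace(/\D/g, '').slice(0, 14)
+  // copyPreferences -> copy-preferences (param-case)
+  const paramCase = name.replace(/([a-z0-9])([A-Z])/g, '$1-$2').toLowerCase()
+  return `${version}-${paramCase}.js`
+}
+```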
+
+To create a data migration we have a generator:
+
+```
+yarn rw generate dataMigration copyPreferences
+```
+
+This will create `api/db/dataMigrations/20200721123456-copy-preferences.js`:
+
+```jsx title="api/db/dataMigrations/20200721123456-copy-preferences.js"
+export default async ({ db }) => {
+ // Migration here...
+}
+```
+
+> **Why such a long name?**
+>
+> So that if multiple developers are creating data migrations, the chances of them creating one with the exact same filename is essentially zero, and they will all run in a predictable order—oldest to newest.
+
+Now it's up to you to define your data migration. In our user/preference example, it may look something like:
+
+```jsx title="api/db/dataMigrations/20200721123456-copy-preferences.js"
+const asyncForEach = async (array, callback) => {
+ for (let index = 0; index < array.length; index++) {
+ await callback(array[index], index, array)
+ }
+}
+
+export default async ({ db }) => {
+ const users = await db.user.findMany()
+
+  await asyncForEach(users, async (user) => {
+ await db.preference.create({
+ data: {
+ newsletter: user.newsletter,
+ frequency: user.frequency,
+ theme: user.theme,
+ user: { connect: { id: user.id } },
+ },
+ })
+ })
+}
+```
+
+This loops through each existing `User` and creates a new `Preference` record containing each of the preference-related fields from `User`.
+
+> Note that in a case like this where you're copying data to a new table, you would probably delete the columns from `User` afterwards. This needs to be a two step process!
+>
+> 1. Create the new table (db migration) and then move the data over (data migration)
+> 2. Remove the unneeded columns from `User`
+>
+> When going to production, you would need to run this as two separate deploys to ensure no data is lost.
+>
+> The reason is that all DB migrations are run and _then_ all data migrations. So if you had two DB migrations (one to create `Preference` and one to drop the unneeded columns from `User`) they would both run before the Data Migration, so the columns containing the preferences are gone before the data migration gets a chance to copy them over!
+>
+> **Remember**: Any destructive action on the database (removing a table or column especially) needs to be a two step process to avoid data loss.
+
+## Running a Data Migration
+
+When you're ready, you can execute your data migration with `data-migrate`'s `up` command:
+
+```
+yarn rw data-migrate up
+```
+
+This goes through each file in `api/db/dataMigrations`, compares it against the list of migrations that have already run according to the `DataMigration` table in the database, and executes any that aren't present in that table, sorted oldest to newest based on the timestamp in the filename.
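+
+The selection logic can be pictured roughly like this (a sketch, not Redwood's actual source), where `ranVersions` stands in for the versions already recorded in the database:
+
+```javascript
+// Given migration filenames and the versions that have already run,
+// return the ones still pending, oldest first.
+function pendingMigrations(files, ranVersions) {
+  return files
+    .filter((file) => !ranVersions.includes(file.split('-')[0]))
+    .sort() // the timestamp prefix makes a lexicographic sort oldest-to-newest
+}
+```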
+
+Any logging statements (like `console.info()`) you include in your data migration script will be output to the console as the script is running.
+
+If the script encounters an error, the process will abort, skipping any following data migrations.
+
+> The example data migration above didn't include this for brevity, but you should always run your data migration [inside a transaction](https://www.prisma.io/docs/reference/tools-and-interfaces/prisma-client/transactions#bulk-operations-experimental) so that if any errors occur during execution the database will not be left in an inconsistent state where only _some_ of your changes were performed.
+
+## Long-term Maintainability
+
+Ideally you can run all database migrations and data migrations from scratch (like when a new developer joins the team) and have them execute correctly. Unfortunately you don't get that ideal scenario by default.
+
+Take our example above—what happens when a new developer comes along and attempts to set up their database? All DB migrations will run first (including the one that drops the preference-related columns from `User`) before the data migrations run. They will get an error when they try to read something like `user.newsletter` and that column doesn't exist!
+
+One technique to combat this is to check for the existence of these columns before the data migration does anything. If `user.newsletter` doesn't exist, then don't bother running the data migration at all and assume that your [seed data](cli-commands.md#prisma-db-seed) is already in the correct format:
+
+```jsx {4,15}
+export default async ({ db }) => {
+ const users = await db.user.findMany()
+
+  if (typeof users[0]?.newsletter !== 'undefined') {
+    await asyncForEach(users, async (user) => {
+ await db.preference.create({
+ data: {
+ newsletter: user.newsletter,
+ frequency: user.frequency,
+ theme: user.theme,
+ user: { connect: { id: user.id } },
+ },
+ })
+ })
+ }
+}
+```
+
+## Lifecycle Summary
+
+Run once:
+
+```
+yarn rw data-migrate install
+yarn rw prisma migrate dev
+```
+
+Run every time you need a new data migration:
+
+```
+yarn rw generate dataMigration migrationName
+yarn rw data-migrate up
+```
diff --git a/docs/versioned_docs/version-8.4/database-seeds.md b/docs/versioned_docs/version-8.4/database-seeds.md
new file mode 100644
index 000000000000..e8cdc1917ce5
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/database-seeds.md
@@ -0,0 +1,223 @@
+# Database Seeds
+
+Seeds are data that are required in order for your app to function. Think of
+the data a new developer would need to get up and running with your codebase, or
+data that needs to exist when a new instance of your application is deployed to
+a new environment.
+
+Seed data are things like:
+
+- An admin user so that you can log in to your new instance
+- A list of categories that can be assigned to a Product
+- Lists of roles and permissions
+
+Seed data is not meant for:
+
+- Sample data to be used in development
+- Data to run tests against
+- Randomized data
+
+## Best Practices
+
+Ideally seed data should be idempotent: you can execute the seed
+script against your database at any time and end up with the seed data properly
+populated in the database. It should not result in wiping out existing records
+or creating duplicates of any seeded data that is already present.
+
+Making your seeds idempotent requires more code than just a straight
+`createMany()`. The code examples below use the safest idempotent strategy
+by having an `upsert` check if a record with the same unique identifier
+already exists, and if so just update it, if not then create it. But, this
+technique requires a separate SQL statement for each member of your data array
+and is less performant than `createMany()`.
+
+You could also check whether _any_ data exists in the database first, and if
+not, create the records with `createMany()`. However, this means that any
+existing seed data that may have been modified will remain, and would not be
+updated to match what you expect in your seed.
+
+When in doubt, `upsert`!
+
+## When seeds run
+
+Seeds are automatically run the first time you migrate your database:
+
+```bash
+yarn rw prisma migrate dev
+```
+
+They are run _every_ time you reset your database:
+
+```bash
+yarn rw prisma migrate reset
+```
+
+You can manually run seeds at any time with the following command:
+
+```
+yarn rw prisma db seed
+```
+
+You generally don't need to keep invoking your seeds over and over again, so it
+makes sense that Prisma only does it on a complete database reset, or when the
+database is created with the first `prisma migrate dev` execution. But as your
+schema evolves you may add a new model that requires some seeded data and so
+you can add it to your seed file and then manually run it to create those
+records.
+
+### Performance
+
+Prisma is faster at executing a single `createMany()` than many `create` or
+`upsert` calls. Unfortunately, you lose the ability to easily make your seed
+idempotent with a single function call.
+
+One solution to simulate an `upsert` while still using `createMany()` could be
+to start with the full array of data and first check whether each of those
+records already exists in the database. Then build two arrays: one for records
+that don't exist, and run `createMany()` with them; and a second for records
+that do exist, and run `updateMany()` on those.
+
+Unfortunately this relies on a select query for each record, which may negate
+the performance benefits of `createMany()`. Since you run seeds relatively
+rarely, our recommendation is to focus less on absolute performance and more
+on making them easy to maintain.
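+
+Sketched out, that split might look like the following (`existingKeys` stands in for the result of a lookup query; all names here are hypothetical):
+
+```javascript
+// Partition seed records by whether their unique key already exists.
+// toCreate can feed a single createMany(); toUpdate feeds update calls.
+function partitionSeeds(records, existingKeys, keyField) {
+  const toCreate = records.filter((r) => !existingKeys.has(r[keyField]))
+  const toUpdate = records.filter((r) => existingKeys.has(r[keyField]))
+  return { toCreate, toUpdate }
+}
+```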
+
+## Types
+
+If you're using TypeScript you'll probably want to type your seeds as well.
+Getting the right types for Prisma models can be tricky, but here's the formula:
+
+```typescript title="scripts/seed.ts"
+import { db } from 'api/src/lib/db'
+// highlight-next-line
+import type { Prisma } from '@prisma/client'
+
+export default async () => {
+ try {
+ // highlight-next-line
+ const users: Prisma.UserCreateArgs['data'][] = [
+      { name: 'Alice', email: 'alice@redwoodjs.com' },
+      { name: 'Bob', email: 'bob@redwoodjs.com' },
+ ]
+
+ await db.user.createMany({ data: users })
+ } catch (error) {
+ console.error(error)
+ }
+}
+```
+
+## Creating seed data
+
+Take a look at `scripts/seed.js` (or `.ts` if you're working on a Typescript
+project):
+
+```javascript title="scripts/seed.js"
+import { db } from 'api/src/lib/db'
+
+export default async () => {
+ try {
+ // Create your database records here! For example, seed some users:
+ //
+ // const users = [
+    // { name: 'Alice', email: 'alice@redwoodjs.com' },
+    // { name: 'Bob', email: 'bob@redwoodjs.com' },
+ // ]
+ //
+ // await db.user.createMany({ data: users })
+
+ console.info(
+ '\n No seed data, skipping. See scripts/seed.ts to start seeding your database!\n'
+ )
+ } catch (error) {
+ console.error(error)
+ }
+}
+```
+
+Let's create some categories for a bookstore. For this example, assume the
+`Category` model has a unique constraint on the `name` field. Remove the
+commented example and add your code:
+
+```javascript title="scripts/seed.js"
+export default async () => {
+ try {
+ const data = [
+ { name: 'Art', bisacCode: 'ART000000' },
+ { name: 'Biography', bisacCode: 'BIO000000' },
+ { name: 'Fiction', bisacCode: 'FIC000000' },
+ { name: 'Nature', bisacCode: 'NAT000000' },
+ { name: 'Travel', bisacCode: 'TRV000000' },
+ { name: 'World History', bisacCode: 'HIS037000' },
+ ]
+
+ for (const item of data) {
+      await db.category.upsert({
+        where: { name: item.name },
+        update: { bisacCode: item.bisacCode },
+        create: { name: item.name, bisacCode: item.bisacCode },
+      })
+ }
+ } catch (error) {
+ console.error(error)
+ }
+}
+```
+
+You can now execute this seed as many times as you want and you'll end up with
+that exact list in the database each time. And, any additional categories you've
+created in the meantime will remain. Remember: seeds are meant to be the
+_minimum_ amount of data you need for your app to run, not necessarily _all_ the
+data that will ever be present in those tables.
+
+## Seeding users for dbAuth
+
+If using dbAuth and seeding users, you'll need to add a `hashedPassword` and
+`salt` using the same algorithm that dbAuth uses internally. Here's an easy way
+to do that:
+
+```javascript title="scripts/seed.js"
+import { hashPassword } from '@redwoodjs/auth-dbauth-api'
+
+import { db } from 'api/src/lib/db'
+
+export default async () => {
+ const users = [
+ { name: 'John', email: 'john@example.com', password: 'secret1' },
+ { name: 'Jane', email: 'jane@example.com', password: 'secret2' },
+ ]
+
+ for (const user of users) {
+ const [hashedPassword, salt] = hashPassword(user.password)
+
+ await db.user.upsert({
+ where: {
+ email: user.email,
+ },
+ create: {
+ name: user.name,
+ email: user.email,
+ hashedPassword,
+ salt,
+ },
+ update: {
+ name: user.name,
+ hashedPassword,
+ salt,
+ },
+ })
+ }
+}
+```
+
+## What if I don't need seeds?
+
+To stop seeds from running automatically with the `prisma migrate` commands,
+remove the following lines from the `package.json` in the root of your app:
+
+```json
+"prisma": {
+ "seed": "yarn rw exec seed"
+},
+```
+
+You can then delete the `scripts/seed.js` file.
diff --git a/docs/versioned_docs/version-8.4/deploy/baremetal.md b/docs/versioned_docs/version-8.4/deploy/baremetal.md
new file mode 100644
index 000000000000..da59b5a15ae2
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/deploy/baremetal.md
@@ -0,0 +1,800 @@
+---
+description: Have complete control by hosting your own code
+---
+
+# Introduction to Baremetal
+
+Once you've grown beyond the confines and limitations of the cloud deployment providers, it's time to get serious: hosting your own code on big iron. Prepare for performance like you've only dreamed of! Also be prepared for IT and infrastructure responsibilities like you've only had nightmares of.
+
+With Redwood's Baremetal deployment option, the source (like your dev machine) will SSH into one or more remote machines and execute commands in order to update your codebase, run any database migrations and restart services.
+
+Deploying from a client (like your own development machine) consists of running a single command:
+
+First time deploy:
+
+```bash
+yarn rw deploy baremetal production --first-run
+```
+
+Subsequent deploys:
+
+```bash
+yarn rw deploy baremetal production
+```
+
+:::warning Deploying to baremetal is an advanced topic
+
+If you haven't done any kind of remote server work before, you may be in a little over your head to start with. But don't worry: until relatively recently (cloud computing, serverless, lambda functions) this is how all websites were deployed, so we've got a good 30 years of experience getting this working!
+
+If you're new to connecting to remote servers, check out the [Intro to Servers](/docs/intro-to-servers) guide we wrote just for you.
+
+:::
+
+## Deployment Lifecycle
+
+The Baremetal deploy runs several commands in sequence. These can be customized, to an extent, and some of them skipped completely:
+
+1. `df` to make sure there is enough free disk space on the server
+2. `git clone --depth=1` to retrieve the latest code
+3. Symlink the latest deploy `.env` to the shared `.env` in the app dir
+4. `yarn install` - installs dependencies
+5. Runs prisma DB migrations
+6. Generate Prisma client libs
+7. Runs [data migrations](/docs/data-migrations)
+8. Builds the web and/or api sides
+9. Symlink the latest deploy dir to `current` in the app dir
+10. Restart the serving process(es)
+11. Remove older deploy directories
+
+### First Run Lifecycle
+
+If the `--first-run` flag is specified then step 10 above (restarting the serving processes) will execute the following commands instead:
+
+- `pm2 start [service]` - starts the serving process(es)
+- `pm2 save` - saves the running services to the deploy user's config file for future startup. See [Starting Processes on Server Restart](#starting-processes-on-server-restart) for further information
+
+## Directory Structure
+
+Once you're deployed and running, you'll find a directory structure that looks like this:
+
+```
+└── var
+ └── www
+ └── myapp
+ ├── .env <────────────────┐
+ ├── current ───symlink──┐ │
+ └── releases │ │
+ └── 20220420120000 <┘ │
+ ├── .env ─symlink─┘
+ ├── api
+ ├── web
+ ├── ...
+```
+
+There's a symlink `current` pointing to a directory named for a timestamp (the timestamp of the last deploy), and within that is your codebase, the latest revision having been `clone`d. The `.env` file in that directory is then symlinked back out to the one in the root of your app path, so that it can be shared across deployments.
+
+So a reference to `/var/www/myapp/current` will always be the latest deployed version of your codebase. If you wanted to [set up nginx to serve your web side](#redwood-serves-api-nginx-serves-web-side), you would point it to `/var/www/myapp/current/web/dist` as the `root` and it will always serve the latest code: a new deploy changes the `current` symlink, and nginx starts serving the new files instantly.
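+
+The flip itself is a single atomic `ln` call. Here's a minimal sketch (using throwaway paths under a temp directory, not your real app path) of why `ln -nsf` always leaves `current` pointing at the newest release:
+
+```bash
+# Scratch demo of the symlink flip (hypothetical release names)
+cd "$(mktemp -d)"
+mkdir -p releases/20220420120000 releases/20220421120000
+echo "v1" > releases/20220420120000/version.txt
+echo "v2" > releases/20220421120000/version.txt
+
+# first deploy: point `current` at the first release
+ln -nsf "$PWD/releases/20220420120000" current
+# second deploy: -n replaces the symlink itself; without it, ln would
+# create a new link *inside* the directory `current` already points to
+ln -nsf "$PWD/releases/20220421120000" current
+
+cat current/version.txt
+```
+
+Because replacing a symlink is atomic, a web server reading through `current` never sees a half-deployed release.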
+
+## App Setup
+
+Run the following to add the required config files to your codebase:
+
+```bash
+yarn rw setup deploy baremetal
+```
+
+This will add dependencies to your `package.json` and create two files:
+
+1. `deploy.toml` contains server config for knowing which machines to connect to and which commands to run
+2. `ecosystem.config.js` for [PM2](https://pm2.keymetrics.io/) to know what service(s) to monitor
+
+If you see an error from `gyp` you may need to add some additional dependencies before `yarn install` will be able to complete. See the README for `node-gyp` for more info: https://github.com/nodejs/node-gyp#installation
+
+### Configuration
+
+Before your first deploy you'll need to add some configuration.
+
+#### ecosystem.config.js
+
+By default, baremetal assumes you want to run the `yarn rw serve` command, which provides both the web and api sides. The web side will be available on port 8910 unless you update your `redwood.toml` file to make it available on another port. The default generated `ecosystem.config.js` will contain this config only, within a service called "serve":
+
+```jsx title="ecosystem.config.js"
+module.exports = {
+ apps: [
+ {
+ name: 'serve',
+ cwd: 'current',
+ script: 'node_modules/.bin/rw',
+ args: 'serve',
+ instances: 'max',
+ exec_mode: 'cluster',
+ wait_ready: true,
+ listen_timeout: 10000,
+ },
+ ],
+}
+```
+
+If you follow our recommended config [below](#redwood-serves-api-nginx-serves-web-side), you could update this to only serve the api side, because the web side will be handled by [nginx](https://www.nginx.com/). That could look like:
+
+```jsx title="ecosystem.config.js"
+module.exports = {
+ apps: [
+ {
+ name: 'api',
+ cwd: 'current',
+ script: 'node_modules/.bin/rw',
+ args: 'serve api',
+ instances: 'max',
+ exec_mode: 'cluster',
+ wait_ready: true,
+ listen_timeout: 10000,
+ },
+ ],
+}
+```
+
+#### deploy.toml
+
+This file contains your server configuration: which servers to connect to and which commands to run on them.
+
+```toml title="deploy.toml"
+[[production.servers]]
+host = "server.com"
+username = "user"
+agentForward = true
+sides = ["api","web"]
+packageManagerCommand = "yarn"
+monitorCommand = "pm2"
+path = "/var/www/app"
+processNames = ["serve"]
+repo = "git@github.com:myorg/myapp.git"
+branch = "main"
+keepReleases = 5
+```
+
+This lists a single server, in the `production` environment, providing the hostname and connection details (`username` and `agentForward`), which `sides` are hosted on this server (by default it's both web and api sides), the `path` to the app code and then which PM2 service names should be (re)started on this server.
+
+#### Config Options
+
+- `host` - hostname of the server
+- `port` - [optional] ssh port for server connection, defaults to 22
+- `username` - the user to login as
+- `password` - [optional] if you are using password authentication, include that here
+- `privateKey` - [optional] if you connect with a private key, include the content of the key here, as a buffer: `privateKey: Buffer.from('...')`. Use this _or_ `privateKeyPath`, not both.
+- `privateKeyPath` - [optional] if you connect with a private key, include the path to the key here: `privateKeyPath: path.join('path','to','key.pem')` Use this _or_ `privateKey`, not both.
+- `passphrase` - [optional] if your private key contains a passphrase, enter it here
+- `agentForward` - [optional] if you have [agent forwarding](https://docs.github.com/en/developers/overview/using-ssh-agent-forwarding) enabled, set this to `true` and your own credentials will be used for further SSH connections from the server (like when connecting to GitHub)
+- `sides` - An array of sides that will be built on this server
+- `packageManagerCommand` - The package manager bin to call, defaults to `yarn` but could be updated to be prefixed with another command first, for example: `doppler run -- yarn`
+- `monitorCommand` - The monitor bin to call, defaults to `pm2` but could be updated to be prefixed with another command first, for example: `doppler run -- pm2`
+- `path` - The absolute path to the root of the application on the server
+- `migrate` - [optional] Whether or not to run migration processes on this server, defaults to `true`
+- `processNames` - An array of service names from `ecosystem.config.js` which will be (re)started on a successful deploy
+- `repo` - The path to the git repo to clone
+- `branch` - [optional] The branch to deploy (defaults to `main`)
+- `keepReleases` - [optional] The number of previous releases to keep on the server, including the one currently being served (defaults to 5)
+- `freeSpaceRequired` - [optional] The amount of free space required on the server in MB (defaults to 2048 MB). You can set this to `0` to skip checking.
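+
+To illustrate what the `keepReleases` cleanup amounts to at the end of each deploy (this is not Baremetal's actual implementation, just a sketch of the idea, run against a scratch directory with hypothetical release names):
+
+```bash
+# Sketch of release cleanup: keep only the $keep newest timestamped dirs
+cd "$(mktemp -d)"
+mkdir -p releases/20220420120000 releases/20220421120000 releases/20220422120000
+
+keep=2
+# timestamped names sort chronologically, so a lexical sort gives deploy order;
+# drop everything except the newest $keep entries (GNU head's negative count)
+for dir in $(ls releases | sort | head -n -"$keep"); do
+  rm -rf "releases/$dir"
+done
+
+ls releases
+```
+
+The directory currently pointed at by `current` is always among the releases kept, since it's the newest.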
+
+The easiest connection method is generally to add your own public key to the server's `~/.ssh/authorized_keys` file (manually, or by running `ssh-copy-id user@server.com` from your local machine), [enable agent forwarding](https://docs.github.com/en/developers/overview/using-ssh-agent-forwarding), and then set `agentForward = true` in `deploy.toml`. This lets you use your own credentials when pulling code from GitHub (required for private repos). Otherwise you can create a [deploy key](https://docs.github.com/en/developers/overview/managing-deploy-keys) and keep it on the server.
+
+#### Using Environment Variables in `deploy.toml`
+
+Similarly to `redwood.toml`, `deploy.toml` supports interpolation of environment variables. For details, see [Using Environment Variables in redwood.toml](/docs/app-configuration-redwood-toml#using-environment-variables-in-redwoodtoml).
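+
+As a sketch (the variable name `PRODUCTION_HOST` is hypothetical), interpolation lets you keep server details out of version control:
+
+```toml title="deploy.toml"
+[[production.servers]]
+# interpolated from the environment at deploy time
+host = "${PRODUCTION_HOST}"
+username = "user"
+path = "/var/www/app"
+```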
+
+#### Multiple Servers
+
+If you start horizontally scaling your application you may find it necessary to have the web and api sides served from different servers. The configuration files can accommodate this:
+
+```toml title="deploy.toml"
+[[production.servers]]
+host = "api.server.com"
+username = "user"
+agentForward = true
+sides = ["api"]
+path = "/var/www/app"
+processNames = ["api"]
+
+[[production.servers]]
+host = "web.server.com"
+username = "user"
+agentForward = true
+sides = ["web"]
+path = "/var/www/app"
+migrate = false
+processNames = ["web"]
+```
+
+```jsx title="ecosystem.config.js"
+module.exports = {
+ apps: [
+ {
+ name: 'api',
+ cwd: 'current',
+ script: 'node_modules/.bin/rw',
+ args: 'serve api',
+ instances: 'max',
+ exec_mode: 'cluster',
+ wait_ready: true,
+ listen_timeout: 10000,
+ },
+ {
+ name: 'web',
+ cwd: 'current',
+ script: 'node_modules/.bin/rw',
+ args: 'serve web',
+ instances: 'max',
+ exec_mode: 'cluster',
+ wait_ready: true,
+ listen_timeout: 10000,
+ },
+ ],
+}
+```
+
+Note the inclusion of `migrate = false` so that migrations are not run again on the web server (they only need to run once and it makes sense to keep them with the api side).
+
+You can add as many `[[servers]]` blocks as you need.
+
+#### Multiple Environments
+
+You can deploy to multiple environments from a single `deploy.toml` by including servers grouped by environment name:
+
+```toml title="deploy.toml"
+[[production.servers]]
+host = "prod.server.com"
+username = "user"
+agentForward = true
+sides = ["api", "web"]
+path = "/var/www/app"
+processNames = ["serve"]
+
+[[staging.servers]]
+host = "staging.server.com"
+username = "user"
+agentForward = true
+sides = ["api", "web"]
+path = "/var/www/app"
+processNames = ["serve", "stage-logging"]
+```
+
+At deploy time, include the environment in the command:
+
+```bash
+yarn rw deploy baremetal staging
+```
+
+Note that the codebase shares a single `ecosystem.config.js` file. If you need a different set of services running in different environments, give each one a unique name and reference it in the `processNames` option of `deploy.toml` (see the additional `stage-logging` process in the above example).
+
+## Server Setup
+
+You will need to create the directory in which your app code will live. This path will be the `path` var in `deploy.toml`. Make sure the username you will connect as in `deploy.toml` has permission to read/write/execute files in this directory. For example, if your `/var` dir is owned by `root`, but you're going to deploy with a user named `deploy`:
+
+```bash
+sudo mkdir -p /var/www/myapp
+sudo chown deploy:deploy /var/www/myapp
+```
+
+You'll want to create an `.env` file in this directory containing any environment variables that are needed by your app (like `DATABASE_URL` at a minimum). This will be symlinked to each release directory so that it's available as the app expects (in the root directory of the codebase).
+
+:::warning SSH and Non-interactive Sessions
+
+The deployment process uses a '[non-interactive](https://tldp.org/LDP/abs/html/intandnonint.html)' SSH session to run commands on the remote server. A non-interactive session often loads a minimal set of settings for better compatibility and speed. In some Linux distributions, `.bashrc` deliberately does not load in a non-interactive session. This can lead to `yarn` (or other commands) not being found by the deployment script, even though they're in your path, because the additional ENV vars that provide things like NPM paths and setup are only set in `~/.bashrc`.
+
+A quick fix on some distros is to edit the deployment user's `~/.bashrc` file and comment out the lines that _stop_ non-interactive processing.
+
+```diff title="~/.bashrc"
+# If not running interactively, don't do anything
+- case $- in
+- *i*) ;;
+- *) return;;
+- esac
+
+# If not running interactively, don't do anything
++ # case $- in
++ # *i*) ;;
++ # *) return;;
++ # esac
+```
+
+This may also be a one-liner like:
+
+```diff title="~/.bashrc"
+- [ -z "$PS1" ] && return
++ # [ -z "$PS1" ] && return
+```
+
+There are techniques for getting `node`, `npm` and `yarn` to be available without loading everything in `.bashrc`. See [this comment](https://github.com/nvm-sh/nvm/issues/1290#issuecomment-427557733) for some ideas.
+
+:::
+
+## First Deploy
+
+Back on your development machine, enter your details in `deploy.toml`, commit it and push it up, and then try a first deploy:
+
+```bash
+yarn rw deploy baremetal production --first-run
+```
+
+If there are any issues the deploy should stop and you'll see the error message printed to the console.
+
+If it worked, hooray! You're deployed to BAREMETAL. If not, read on...
+
+### Troubleshooting
+
+On the server you should see a new directory inside the `path` you defined in `deploy.toml`. It should be a timestamp of the deploy, like:
+
+```bash
+drwxrwxr-x 7 ubuntu ubuntu 4096 Apr 22 23:00 ./
+drwxr-xr-x 7 ubuntu ubuntu 4096 Apr 22 22:46 ../
+-rw-rw-r-- 1 ubuntu ubuntu 1167 Apr 22 20:49 .env
+drwxrwxr-x 10 ubuntu ubuntu 4096 Apr 22 21:43 20220422214218/
+```
+
+You may or may not also have a `current` symlink in the app directory pointing to that timestamped directory (whether you do depends on how far the deploy script got before failing).
+
+`cd` into that timestamped directory and check that you have a `.env` symlink pointing back to the app directory's `.env` file.
+
+Next, try performing all of the steps yourself that would happen during a deploy:
+
+```
+yarn install
+yarn rw prisma migrate deploy
+yarn rw prisma generate
+yarn rw dataMigrate up
+yarn rw build
+ln -nsf "$(pwd)" ../current
+```
+
+If those all worked, the deploy process should have no problem either: it just connects via SSH and runs the same commands you did!
+
+Next we can check that the site is being served correctly. Run `yarn rw serve` and make sure your processes start and are accessible (by default on port 8910):
+
+```bash
+curl http://localhost:8910
+# or
+wget http://localhost:8910
+```
+
+If you don't see the content of your `web/src/index.html` file then something isn't working. You'll need to fix those issues before you can deploy. Verify the api side is responding:
+
+```bash
+curl "http://localhost:8910/.redwood/functions/graphql?query={redwood{version}}"
+# or
+wget "http://localhost:8910/.redwood/functions/graphql?query={redwood{version}}"
+```
+
+You should see something like:
+
+```json
+{
+ "data": {
+ "redwood": {
+ "version": "1.0.0"
+ }
+ }
+}
+```
+
+If so then your API side is up and running! The only thing left to test is that the api side has access to the database. This check is specific to your app, but assuming you have port 8910 open to the world you could open a browser and click around until you find a page that makes a database request.
+
+Was the problem with starting your PM2 process? That will be harder to debug here in this doc, but visit us in the [forums](https://community.redwoodjs.com) or [Discord](https://discord.gg/redwoodjs) and we'll try to help!
+
+:::note My pm2 processes are running but my app has errors, how do I see them?
+
+If your processes are up and running in pm2 you can monitor their log output. Run `pm2 monit` and get a nice graphical interface for watching the logs on your processes. Press the up/down arrows to move through the processes and left/right to switch panes.
+
+![pm2 monit screenshot](https://user-images.githubusercontent.com/300/213776175-2f78d9d4-7e6e-4d69-81b2-a648cc37b6ea.png)
+
+Sometimes the log messages are too long to read in the pane at the right. In that case you can watch them live by "tailing" them right in the terminal. pm2 logs are written to `~/.pm2/logs` and are named after the process name and id, and whether they are standard output or error messages. Here's an example directory listing:
+
+```
+ubuntu@ip-123-45-67-89:~/.pm2/logs$ ll
+total 116
+drwxrwxr-x 2 ubuntu ubuntu 4096 Jan 20 17:58 ./
+drwxrwxr-x 5 ubuntu ubuntu 4096 Jan 20 17:40 ../
+-rw-rw-r-- 1 ubuntu ubuntu 0 Jan 20 17:58 api-error-0.log
+-rw-rw-r-- 1 ubuntu ubuntu 0 Jan 20 17:58 api-error-1.log
+-rw-rw-r-- 1 ubuntu ubuntu 27788 Jan 20 18:11 api-out-0.log
+-rw-rw-r-- 1 ubuntu ubuntu 21884 Jan 20 18:11 api-out-1.log
+```
+
+To watch a log live, run:
+
+```terminal
+tail -f ~/.pm2/logs/api-out-0.log
+```
+
+Note that if you have more than one process running, like we do here, requesting a page on the website will send the request to one of the available processes randomly, so you may not see your request show up unless you refresh a few times. Or you can connect two separate SSH sessions and tail both log files at the same time.
+
+:::
+
+## Starting Processes on Server Restart
+
+The `pm2` service requires some system "hooks" to be installed so it can boot up using your system's service manager. Otherwise, your PM2 services will need to be manually started again on a server restart. These steps only need to be run the first time you install PM2.
+
+SSH into your server and then run:
+
+```bash
+pm2 startup
+```
+
+You will see output similar to the example below. We care about the part after "copy/paste the following command:". Do just that: copy the command starting with `sudo`, then paste and execute it. _Note_ that this command uses `sudo`, so you'll need the root password for the machine in order for it to complete successfully.
+
+:::warning
+
+The below text is _example_ output, yours will be different, don't copy and paste ours!
+
+:::
+
+```bash
+$ pm2 startup
+[PM2] Init System found: systemd
+[PM2] To setup the Startup Script, copy/paste the following command:
+// highlight-next-line
+sudo env PATH=$PATH:/home/ubuntu/.nvm/versions/node/v16.13.2/bin /home/ubuntu/.nvm/versions/node/v16.13.2/lib/node_modules/pm2/bin/pm2 startup systemd -u ubuntu --hp /home/ubuntu
+```
+
+In this example, you would copy `sudo env PATH=$PATH:/home/ubuntu/.nvm/versions/node/v16.13.2/bin /home/ubuntu/.nvm/versions/node/v16.13.2/lib/node_modules/pm2/bin/pm2 startup systemd -u ubuntu --hp /home/ubuntu` and run it. You should get a bunch of output along with `[PM2] [v] Command successfully executed.` near the end. Now if your server restarts for whatever reason, your PM2 processes will be restarted once the server is back up.
+
+## Customizing the Deploy
+
+There are several ways you can customize the deploy steps, whether that's skipping steps completely or inserting your own commands before or after the default ones.
+
+### Skipping Steps
+
+If you want to speed things up you can skip one or more steps during the deploy. For example, if you have no database migrations, you can skip them completely and save some time:
+
+```bash
+yarn rw deploy baremetal production --no-migrate
+```
+
+Run `yarn rw deploy baremetal --help` for the full list of flags. You can set them as `--migrate=false` or use the `--no-migrate` variant.
+
+### Inserting Custom Commands
+
+Baremetal supports running your own custom commands **before** and/or **after** the built-in deploy commands. Your custom commands are defined in the `deploy.toml` config file. The existing commands you can hook into are:
+
+1. `df` - Checking for free disk space
+2. `update` - cloning the codebase
+3. `symlinkEnv` - symlink the new deploy's `.env` to shared one in the app dir
+4. `install` - `yarn install`
+5. `migrate` - database migrations
+6. `build` - `yarn build` (your custom before/after command is run for each side being built)
+7. `symlinkCurrent` - symlink the new deploy dir to `current` in the app dir
+8. `restart` - (re)starting any pm2 processes (your custom command will run before/after each process is restarted)
+9. `cleanup` - cleaning up any old releases
+
+You can define your before/after commands in three different places:
+
+- Globally - runs for any environment
+- Environment specific - runs for only a single environment
+- Server specific - runs for only a single server in a single environment
+
+:::warning
+
+Custom commands are run in the new **deploy** directory, not the root of your application directory. During a deploy the `current` symlink will point to the previous directory while your code is executed in the new one, before the `current` symlink location is updated.
+
+```bash
+drwxrwxr-x 5 ubuntu ubuntu 4096 May 10 18:20 ./
+drwxr-xr-x 7 ubuntu ubuntu 4096 Apr 27 17:43 ../
+drwxrwxr-x 2 ubuntu ubuntu 4096 May 9 22:59 20220503211428/
+drwxrwxr-x 2 ubuntu ubuntu 4096 May 9 22:59 20220503211429/
+drwxrwxr-x 10 ubuntu ubuntu 4096 May 10 18:18 20220510181730/  <-- commands are run in here
+lrwxrwxrwx 1 ubuntu ubuntu 14 May 10 18:19 current -> 20220503211429/
+-rw-rw-r-- 1 ubuntu ubuntu 1167 Apr 22 20:49 .env
+```
+
+:::
+
+#### Syntax
+
+Global events are defined in a `[before]` and/or `[after]` block in your `deploy.toml` file:
+
+```toml
+[before]
+install = "touch install.lock"
+
+[after]
+install = "rm install.lock"
+
+[[production.servers]]
+host = 'server.com'
+# ...
+```
+
+Environment specific commands are defined in `[environment.before]` and `[environment.after]` blocks:
+
+```toml
+[production.before]
+install = "touch prod-install.lock"
+
+[production.after]
+install = "rm prod-install.lock"
+
+[[production.servers]]
+host = 'server.com'
+# ...
+```
+
+Server specific commands are defined with a `before.command` and `after.command` key directly in your server config:
+
+```toml
+[[production.servers]]
+host = 'server.com'
+# ...
+before.install = 'touch server-install.lock'
+after.install = 'rm server-install.lock'
+```
+
+You can define commands as a string, or an array of strings if you want to run multiple commands:
+
+```toml
+[before]
+install = ["echo 'started at $(date)' > install.lock", "cp -R . ../backup"]
+
+[[production.servers]]
+host = 'server.com'
+# ...
+```
+
+You can include commands in any/all of the three configurations (global, env and server) and they will all be stacked up and run in that order: `global -> environment -> server`. For example:
+
+```toml
+[[production.servers]]
+host = 'server.com'
+# ...
+before.install = 'touch server-install.lock'
+
+[production.before]
+install = ['touch prod-install1.lock', 'touch prod-install2.lock']
+
+[before]
+install = 'touch install.lock'
+```
+
+Would result in the commands running in this order, all before running `yarn install`:
+
+1. `touch install.lock`
+2. `touch prod-install1.lock`
+3. `touch prod-install2.lock`
+4. `touch server-install.lock`
+
+## Rollback
+
+If you deploy and find something has gone horribly wrong, you can roll back your deploy to the previous release:
+
+```bash
+yarn rw deploy baremetal production --rollback
+```
+
+You can even roll back multiple deploys, up to the total number of releases still on the server (as set by the `keepReleases` option):
+
+```bash
+yarn rw deploy baremetal production --rollback 3
+```
+
+Note that this will _not_ rollback your database—if you had a release that changed the database, that updated database will still be in effect, but with the previous version of the web and api sides. Trying to undo database migrations is a very difficult proposition and isn't even possible in many cases.
+
+Make sure to thoroughly test releases that change the database before doing it for real!
+
+## Maintenance Page
+
+If you have a particularly complex deploy, one that may involve database changes incompatible with the current codebase, or you want to make sure that database changes don't occur in the middle of a deploy, you can put up a maintenance page:
+
+```bash
+yarn rw deploy baremetal production --maintenance up
+```
+
+It does this by replacing `web/dist/200.html` with `web/src/maintenance.html`, so any new web request, at any URL, will show the maintenance page. This process also stops any services listed in the `processNames` option of `deploy.toml`. This is important for the api server: otherwise it would keep serving requests to users currently running the app, even though no _new_ users can get the JavaScript packages required to start a session in their browser.
+
+You can remove the maintenance page with:
+
+```bash
+yarn rw deploy baremetal production --maintenance down
+```
+
+Note that the maintenance page will automatically come down as the result of a new deploy, as it checks out a new copy of the codebase (with a brand new copy of `web/dist/200.html`) and automatically restarts services (bringing them all back online).
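+
+Conceptually, the maintenance toggle amounts to a file swap. Here's a hedged sketch in a scratch directory (the `.orig` backup name is hypothetical; the actual implementation may differ):
+
+```bash
+# Scratch demo mirroring the doc's web/dist and web/src layout
+cd "$(mktemp -d)"
+mkdir -p web/dist web/src
+echo "real app" > web/dist/200.html
+echo "maintenance" > web/src/maintenance.html
+
+# --maintenance up: swap the maintenance page in
+mv web/dist/200.html web/dist/200.html.orig
+cp web/src/maintenance.html web/dist/200.html
+
+# --maintenance down: restore the original page
+mv web/dist/200.html.orig web/dist/200.html
+```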
+
+## Monitoring
+
+PM2 has a nice terminal-based dashboard for monitoring your services:
+
+```bash
+pm2 monit
+```
+
+![pm2 dashboard](https://user-images.githubusercontent.com/300/164799386-84442fa3-8e68-4cc6-9e64-928b8e32731a.png)
+
+And even a web-based UI with paid upgrades if you need to give normies access to your monitoring data:
+
+![pm2 web dashboard](https://user-images.githubusercontent.com/300/164799541-6fe321fa-4d7c-44f7-93c6-3c202638da4f.png)
+
+## Example Server Configurations
+
+The default configuration, which requires the least amount of manual configuration, is to serve both the web and api sides, with the web side being bound to port 8910. This isn't really feasible for a general web app which should be available on port 80 (for HTTP) and/or port 443 (for HTTPS). Here are some custom configs to help.
+
+### Redwood Serves Web and Api Sides, Bind to Port 80
+
+This is almost as easy as the default configuration: you just need to tell Redwood to bind to port 80. However, most \*nix distributions will not allow a process to bind to ports lower than 1024 without root/sudo permissions. You can run a command to allow a specific binary (`node` in this case) to bind to one of those ports anyway.
+
+#### Tell Redwood to Bind to Port 80
+
+Update the `[web]` port:
+
+```diff title="redwood.toml"
+[web]
+ title = "My Application"
+  apiUrl = "/.redwood/functions"
++ port = 80
+[api]
+ port = 8911
+[browser]
+ open = true
+```
+
+#### Allow Node to Bind to Port 80
+
+Use the [setcap](https://man7.org/linux/man-pages/man7/capabilities.7.html) utility to provide access to lower ports by a given process:
+
+```bash
+sudo setcap CAP_NET_BIND_SERVICE=+eip $(which node)
+```
+
+Now restart your service and it should be available on port 80:
+
+```bash
+pm2 restart serve
+```
+
+This should get your site available on port 80 (for HTTP), but you really want it available on port 443 (for HTTPS). That won't be easy if you continue to use Redwood's internal web server. See the next recipe for a solution.
+
+### Redwood Serves Api, Nginx Serves Web Side
+
+[nginx](https://www.nginx.com/) is a robust, dedicated web server that can do a better job of serving your static web-side files than Redwood's own built-in web server (Fastify), which isn't configured in Redwood for a high-traffic production website.
+
+If nginx will be serving the web side, what about the api side? Redwood's internal api server will still be running, but on its default port of 8911. Browsers, however, want to connect on port 80 (HTTP) or 443 (HTTPS). nginx takes care of this as well: it will [proxy](https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/) (forward) any requests to a path of your choosing (like the default of `/.redwood/functions`) to port 8911 behind the scenes, then return the response to the browser.
+
+This doc isn't going to cover installing and running nginx; there are plenty of resources for that elsewhere. What we will show is a working nginx configuration file used by several Redwood apps currently in production.
+
+```text title="nginx.conf"
+upstream redwood_server {
+ server 127.0.0.1:8911 fail_timeout=0;
+}
+
+server {
+ root /var/www/myapp/current/web/dist;
+ server_name myapp.com;
+ index index.html;
+
+ gzip on;
+ gzip_min_length 1000;
+ gzip_types application/json text/css application/javascript application/x-javascript;
+
+ sendfile on;
+
+ keepalive_timeout 65;
+
+ error_page 404 /404.html;
+ error_page 500 /500.html;
+
+ location / {
+ try_files $uri /200.html =404;
+ }
+
+ location ^~ /static/ {
+ gzip_static on;
+ expires max;
+ add_header Cache-Control public;
+ }
+
+ location ~ /.redwood/functions(.*) {
+ rewrite ^/.redwood/functions(.*) $1 break;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_pass http://redwood_server;
+ }
+}
+```
+
+Now when you start Redwood, you're only going to start the api server:
+
+```
+yarn rw serve api
+```
+
+When using `pm2` to start/monitor your processes, you can simplify your `deploy.toml` and `ecosystem.config.js` files to only worry about the api side:
+
+```toml title="deploy.toml"
+[[production.servers]]
+host = "myserver.com"
+username = "ubuntu"
+agentForward = true
+sides = ["api", "web"]
+path = "/var/www/myapp"
+# highlight-next-line
+processNames = ["api"]
+repo = "git@github.com:redwoodjs/myapp.git"
+branch = "main"
+keepReleases = 3
+packageManagerCommand = "yarn"
+monitorCommand = "pm2"
+```
+
+```js title="ecosystem.config.js"
+module.exports = {
+ apps: [
+ {
+ name: 'api',
+ cwd: 'current',
+ script: 'node_modules/.bin/rw',
+ args: 'serve api',
+ instances: 'max',
+ exec_mode: 'cluster',
+ wait_ready: true,
+ listen_timeout: 10000,
+ },
+ ],
+}
+```
+
+This is the bare minimum to get your site served over HTTP, insecurely. After verifying that your site is up and running, we recommend using [Let's Encrypt](https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-20-04) to provision an SSL cert; it will also automatically update your nginx config so everything is served over HTTPS.
+
+#### Custom API Path
+
+If you don't love the path of `/.redwood/functions` for your API calls, this is easy to change. You'll need to tell Redwood to use a different path in development, and then let nginx know about that same path so that it resolves the same in production.
+
+For example, to simplify the path to just `/api` you'll need to make a change to `redwood.toml` and your new nginx config file:
+
+```toml title="redwood.toml"
+[web]
+ title = "My App"
+ port = 8910
+ host = '0.0.0.0'
+// highlight-next-line
+ apiUrl = "/api"
+[api]
+ port = 8911
+[browser]
+ open = true
+```
+
+```text title="nginx.conf"
+upstream redwood_server {
+ server 127.0.0.1:8911 fail_timeout=0;
+}
+
+server {
+ root /var/www/myapp/current/web/dist;
+ server_name myapp.com;
+ index index.html;
+
+ gzip on;
+ gzip_min_length 1000;
+ gzip_types application/json text/css application/javascript application/x-javascript;
+
+ sendfile on;
+
+ keepalive_timeout 65;
+
+ error_page 404 /404.html;
+ error_page 500 /500.html;
+
+ location / {
+ try_files $uri /200.html =404;
+ }
+
+ location ^~ /static/ {
+ gzip_static on;
+ expires max;
+ add_header Cache-Control public;
+ }
+
+// highlight-next-line
+ location ~ /api(.*) {
+// highlight-next-line
+ rewrite ^/api(.*) $1 break;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_pass http://redwood_server;
+ }
+}
+```
diff --git a/docs/versioned_docs/version-8.4/deploy/coherence.md b/docs/versioned_docs/version-8.4/deploy/coherence.md
new file mode 100644
index 000000000000..ccd5bdc00088
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/deploy/coherence.md
@@ -0,0 +1,41 @@
+---
+description: Serverful deploys on GCP or AWS via Coherence's full-lifecycle environment automation
+---
+
+# Deploy to Coherence
+
+[Coherence](https://www.withcoherence.com/) delivers automated environments across the full software development lifecycle, without requiring you to glue together your own mess of open source tools to get a world-class developer experience for your team. Coherence is focused on serving startups doing mission-critical work. With one simple configuration, Coherence offers:
+
+- Cloud-hosted development environments, based on VSCode. Similar to Gitpod or GitHub Codespaces
+- Production-ready CI/CD running in your own GCP/AWS account, including: database migration/seeding/snapshot loading, parallelized tests, container building and docker registry management
+- Full-stack branch previews. Vercel/Netlify-like developer experience for arbitrary container apps, including dependencies such as CDN, redis, and database resources
+- Staging and production environment management in your AWS/GCP accounts. Production runs in its own cloud account (AWS) or project (GCP). Integrated secrets management across all environment types with a developer-friendly UI
+
+## Coherence Prerequisites
+
+To deploy to Coherence, your Redwood project needs to be hosted on GitHub and you must have an [AWS](https://docs.withcoherence.com/docs/overview/aws-deep-dive) or [GCP](https://docs.withcoherence.com/docs/overview/gcp-deep-dive) account.
+
+## Coherence Deploy
+
+:::warning Prerender doesn't work with Coherence yet
+
+You can see its current status and follow updates here on GitHub: https://github.com/redwoodjs/redwood/issues/8333.
+
+But if you don't use prerender, carry on!
+
+:::
+
+If you want to deploy your Redwood project on Coherence, run the setup command:
+
+```bash
+yarn rw setup deploy coherence
+```
+
+The command will inspect your Prisma config to determine if you're using a supported database (at the moment, only `postgres` or `mysql` are supported on Coherence).
+
+Then follow the [Coherence Redwood deploy docs](https://docs.withcoherence.com/docs/configuration/frameworks#redwood-js) for more information, including if you want to set up:
+
+- a redis server
+- database migration/seeding/snapshot loading
+- cron jobs or async workers
+- object storage using Google Cloud Storage or AWS's S3
diff --git a/docs/versioned_docs/version-8.4/deploy/edgio.md b/docs/versioned_docs/version-8.4/deploy/edgio.md
new file mode 100644
index 000000000000..fd115935b85b
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/deploy/edgio.md
@@ -0,0 +1,26 @@
+# Deploy to Edgio
+
+> ⚠️ **Deprecated**
+>
+> As of Redwood v7, we are deprecating this deploy setup as an "officially" supported provider. This means:
+>
+> - For projects already using this deploy provider, there will be NO change at this time
+> - Both the associated `setup` and `deploy` commands will remain in the framework as is; when setup is run, there will be a “deprecation” message
+> - We will no longer run CI/CD on the Edgio deployments, which means we are no longer guaranteeing this deploy works with each new version
+>
+> If you have concerns or questions about our decision to deprecate this deploy provider please reach out to us on our [community forum](https://community.redwoodjs.com).
+
+[Edgio](https://edg.io) extends the capabilities of a traditional CDN by not only hosting your static content, but also providing server-side rendering for progressive web applications as well as caching both your APIs and HTML at the network edge to provide your users with the fastest browsing experience.
+
+## Edgio Deploy Setup
+
+In order to deploy your RedwoodJS project to Edgio, the project must first be initialized with the Edgio CLI.
+
+1. In your project, run the command `yarn rw setup deploy edgio`.
+2. Verify the changes to your project, commit and push to your repository.
+3. Deploy your project to Edgio by running `yarn rw deploy edgio`.
+4. If this is your first time deploying to Edgio, the interactive CLI will prompt you to authenticate using your browser.
+5. If you are deploying from a **non-interactive** environment, you will need to create an account on [Edgio Developer Console](https://app.layer0.co) first and set up a [deploy token](https://docs.edg.io/guides/deploy_apps#deploy-from-ci). Once the deploy token is created, save it as a secret in your environment. You can start the deploy by running `yarn rw deploy edgio --token=XXX`.
+6. Follow the link in the output to view your site live once deployment has completed!
+
+For more information on deploying to Edgio, check out the [documentation](https://docs.edg.io).
diff --git a/docs/versioned_docs/version-8.4/deploy/flightcontrol.md b/docs/versioned_docs/version-8.4/deploy/flightcontrol.md
new file mode 100644
index 000000000000..a55c382a4a6e
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/deploy/flightcontrol.md
@@ -0,0 +1,21 @@
+---
+description: How to deploy a Redwood app to AWS via Flightcontrol
+---
+
+# Deploy to AWS with Flightcontrol
+
+[Flightcontrol](https://www.flightcontrol.dev?ref=redwood) enables any developer to deploy to AWS without being a wizard. It's extremely easy to use but lets you pop the hood and leverage the raw power of AWS when needed. It supports servers, static sites, and databases which makes it a perfect fit for hosting scalable Redwood apps.
+
+## Flightcontrol Deploy Setup
+
+1. In your project, run the command `yarn rw setup deploy flightcontrol --database=YOUR_DB_TYPE` where YOUR_DB_TYPE is `mysql` or `postgresql`
+2. Commit the changes and push to GitHub.
+3. If you don't have an account, sign up at [app.flightcontrol.dev/signup](https://app.flightcontrol.dev/signup?ref=redwood).
+4. Create a new project.
+ 1. Connect your GitHub account and select your repo.
+ 2. Click the Redwood preset
+ 3. Click "Create project" (do not add services to the UI during this step, the flightcontrol.json you added will be used for service config)
+5. After the project is created, add your env vars under Environment Settings.
+ 1. If using dbAuth, add the session secret key env variable in the Flightcontrol dashboard.
+
+If you have _any_ problems or questions, Flightcontrol is very responsive. [See their support options](https://www.flightcontrol.dev/docs/troubleshooting/contacting-support).
diff --git a/docs/versioned_docs/version-8.4/deploy/introduction.md b/docs/versioned_docs/version-8.4/deploy/introduction.md
new file mode 100644
index 000000000000..49eba8f4a87d
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/deploy/introduction.md
@@ -0,0 +1,105 @@
+---
+description: Deploy to serverless or serverful providers
+---
+
+# Introduction to Deployment
+
+Redwood is designed for both serverless and traditional infrastructure deployments, offering a unique continuous deployment process in both cases:
+
+1. code is committed to a repository on GitHub, GitLab, or Bitbucket, which triggers the deployment
+2. the Redwood API Side and Web Side are individually prepared via a build process
+3. during the build process, any database related actions are run (e.g. migrations)
+4. the hosting provider deploys the built Web static assets to a CDN and the API code to a serverless backend (e.g. AWS Lambdas)
+
+Currently, these are the officially supported deploy targets:
+
+- Baremetal (physical server that you have SSH access to)
+- [Coherence](https://www.withcoherence.com/)
+- [Flightcontrol.dev](https://www.flightcontrol.dev?ref=redwood)
+- [Edg.io](https://edg.io)
+- [Netlify.com](https://www.netlify.com/)
+- [Render.com](https://render.com)
+- [Serverless.com](https://serverless.com)
+- [Vercel.com](https://vercel.com)
+
+Redwood has a CLI generator that adds the code and configuration required by the specified provider (see the [CLI Doc](cli-commands.md#deploy-config) for more information):
+
+```shell
+yarn rw setup deploy
+```
+
+There are examples of deploying Redwood on other providers such as Google Cloud and direct to AWS. You can find more information by searching the [GitHub Issues](https://github.com/redwoodjs/redwood/issues) and [Forums](https://community.redwoodjs.com).
+
+## General Deployment Setup
+
+Deploying Redwood requires setup for the following four categories.
+
+### 1. Host Specific Configuration
+
+Each hosting provider has different requirements for how (and where) the deployment is configured. Sometimes you'll need to add code to your repository, configure settings in a dashboard, or both. You'll need to read the provider specific documentation.
+
+The most important Redwood configuration is to set the `apiUrl` in your `redwood.toml`. This sets the API path for your serverless functions specific to your hosting provider.
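+
+For example, a sketch of what this looks like in `redwood.toml` (the actual path is whatever your provider expects; `/.netlify/functions` is Netlify's convention):
+
+```toml
+[web]
+  title = "My App"
+  port = 8910
+  # Provider-specific path where serverless function requests are routed
+  apiUrl = "/.netlify/functions"
+```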
+
+### 2. Build Command
+
+The build command is used to prepare the Web and API for deployment. Additionally, other actions can be run during build such as database migrations. The Redwood build command must specify one of the supported hosting providers (aka `target`):
+
+```shell
+yarn rw deploy
+```
+
+For example:
+
+```shell
+# Build command for Netlify deploy target
+yarn rw deploy netlify
+```
+
+```shell
+# Build command for Vercel deploy target
+yarn rw deploy vercel
+```
+
+```shell
+# Build command for AWS Lambdas using the https://serverless.com framework
+yarn rw deploy serverless --side api
+```
+
+```shell
+# Build command for Edgio deploy target
+yarn rw deploy edgio
+```
+
+```shell
+# Build command for baremetal deploy target
+yarn rw deploy baremetal [--first-run]
+```
+
+### 3. Prisma and Database
+
+Redwood uses Prisma for managing database access and migrations. The settings in `api/prisma/schema.prisma` must include the correct deployment database, e.g. postgresql, and the database connection string.
+
+To use PostgreSQL in production, include this in your `schema.prisma`:
+
+```prisma
+datasource db {
+ provider = "postgresql"
+ url = env("DATABASE_URL")
+}
+```
+
+The `url` setting above accesses the database connection string via an environment variable, `DATABASE_URL`. Using env vars is the recommended method, both for ease of development and for security best practices.
+
+Whenever you make changes to your `schema.prisma`, you must run the following command:
+
+```shell
+yarn rw prisma migrate dev # creates and applies a new Prisma DB migration
+```
+
+> Note: when setting your production DATABASE_URL env var, be sure to also set any connection-pooling or sslmode parameters. For example, if using Supabase Postgres with pooling, you would use a connection string similar to `postgresql://postgres:<password>@mydb.supabase.co:6432/postgres?sslmode=require&pgbouncer=true`, which uses the dedicated 6432 port, tells Prisma to account for pgBouncer, and enables SSL. See [Connection Pooling](connection-pooling.md) for more info.
+
+### 4. Environment Variables
+
+Any environment variables used locally, e.g. in your `env.defaults` or `.env`, must also be added to your hosting provider settings. (See documentation specific to your provider.)
+
+Additionally, if your application uses env vars on the Web Side, you must configure Redwood's build process to make them available in production. See the [Redwood Environment Variables doc](environment-variables.md) for instructions.
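+
+As a sketch of one way to do this, `redwood.toml` accepts an `includeEnvironmentVariables` allow-list under `[web]` (the variable name below is just an example):
+
+```toml
+[web]
+  # Allow-list env vars to be substituted into the web bundle at build time
+  includeEnvironmentVariables = ["PUBLIC_API_KEY"]
+```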
diff --git a/docs/versioned_docs/version-8.4/deploy/netlify.md b/docs/versioned_docs/version-8.4/deploy/netlify.md
new file mode 100644
index 000000000000..3f779f1b6ab4
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/deploy/netlify.md
@@ -0,0 +1,26 @@
+---
+description: The serverless git deploy you know and love
+---
+
+# Deploy to Netlify
+
+## Netlify tl;dr Deploy
+
+If you simply want to experience the Netlify deployment process without a database and/or adding custom code, you can do the following:
+
+1. create a new redwood project: `yarn create redwood-app ./netlify-deploy`
+2. after your "netlify-deploy" project installation is complete, init git, commit, and add it as a new repo to GitHub, BitBucket, or GitLab
+3. run the command `yarn rw setup deploy netlify` and commit and push changes
+4. use the Netlify [Quick Start](https://app.netlify.com/signup) to deploy
+
+:::warning
+While you may be tempted to use the [Netlify CLI](https://cli.netlify.com) commands to [build](https://cli.netlify.com/commands/build) and [deploy](https://cli.netlify.com/commands/deploy) your project directly from your local project directory, doing so **will lead to errors when deploying and/or when running functions**. That is, errors in the function needed for the GraphQL server, but also in other serverless functions.
+
+The main reason for this is that these Netlify CLI commands simply build and deploy -- they build your project locally and then push the dist folder. That means that when building a RedwoodJS project, the [Prisma client is generated with binaries matching the operating system at build time](https://cli.netlify.com/commands/link) -- and not the binaries [compatible](https://www.prisma.io/docs/reference/api-reference/prisma-schema-reference#binarytargets-options) with the OS that runs functions on Netlify. Your Prisma client engine may be `darwin` for macOS or `windows` for Windows, but it needs to be `debian-openssl-1.1.x` or `rhel-openssl-1.1.x`. If the client is incompatible, your functions will fail.
+
+Therefore, **please follow the [Tutorial Deployment section](tutorial/chapter4/deployment.md)** to sync your GitHub (or other compatible source control service) repository with Netlify and allow their build and deploy system to manage deployments.
+:::
+
+## Netlify Complete Deploy Walkthrough
+
+For the complete deployment process on Netlify, see the [Tutorial Deployment section](tutorial/chapter4/deployment.md).
diff --git a/docs/versioned_docs/version-8.4/deploy/render.md b/docs/versioned_docs/version-8.4/deploy/render.md
new file mode 100644
index 000000000000..46ae652e35e8
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/deploy/render.md
@@ -0,0 +1,16 @@
+---
+description: Serverful deploys via Render's unified cloud
+---
+
+# Deploy to Render
+
+Render is a unified cloud to build and run all your apps and websites with free SSL, a global CDN, private networks and auto-deploys from Git — **database included**!
+
+## Render tl;dr Deploy
+
+If you simply want to experience the Render deployment process, including a Postgres or SQLite database, you can do the following:
+
+1. create a new redwood project: `yarn create redwood-app ./render-deploy`
+2. after your "render-deploy" project installation is complete, init git, commit, and add it as a new repo to GitHub or GitLab
+3. run the command `yarn rw setup deploy render`; use the `--database` flag to select `postgresql`, `sqlite`, or `none` to proceed without a database (default: `postgresql`)
+4. follow the [Render Redwood Deploy Docs](https://render.com/docs/deploy-redwood) for detailed instructions
diff --git a/docs/versioned_docs/version-8.4/deploy/serverless.md b/docs/versioned_docs/version-8.4/deploy/serverless.md
new file mode 100644
index 000000000000..ed9a1d108ca4
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/deploy/serverless.md
@@ -0,0 +1,131 @@
+---
+description: Deploy to AWS with Serverless Framework
+---
+
+# Deploy to AWS with Serverless Framework
+
+> ⚠️ **Deprecated**
+> As of Redwood v5, we are deprecating this deploy setup as an "officially" supported provider. This means:
+>
+> - For projects already using this deploy provider, there will be NO change at this time
+> - Both the associated `setup` and `deploy` commands will remain in the framework as is; when setup is run, there will be a “deprecation” message
+> - We will no longer run CI/CD on the Serverless-AWS deployments, which means we are no longer guaranteeing this deploy works with each new version
+> - We are exploring better options to deploy directly to AWS Lambdas; the current deploy commands will not be removed until we find a replacement
+>
+> For more details (e.g. why?) and current status, see the Forum post ["Deprecating support for Serverless Framework Deployments to AWS Lambdas"](https://community.redwoodjs.com/t/deprecating-support-for-serverless-framework-deployments-to-aws-lambdas/4755/10)
+
+> The following instructions assume you have read the [General Deployment Setup](./introduction.md#general-deployment-setup) section above.
+
+Yes, the name is confusing, but Serverless provides a very interesting option—deploy to your own cloud service account and skip the middleman entirely! By default, Serverless just orchestrates starting up services in your cloud provider of choice and pushing your code up to them. Any bill you receive is from your hosting provider (although many offer a generous free tier). You can optionally use the [Serverless Dashboard](https://www.serverless.com/dashboard/) to monitor your deploys and set up CI/CD to automatically deploy when pushing to your repo of choice. If you don't set up CI/CD, you actually deploy from your development machine (or another designated machine you've set up to do the deployment).
+
+Currently we default to deploying to AWS. We'd like to add more providers in the future but need help from the community in figuring out what services are equivalent to the ones we're using in AWS (Lambda for the api-side and S3/CloudFront for the web-side).
+
+We'll handle most of the deployment commands for you; you just need an [AWS account](https://www.serverless.com/framework/docs/providers/aws/guide/credentials#sign-up-for-an-aws-account) and your [access/secret keys](https://www.serverless.com/framework/docs/providers/aws/guide/credentials#create-an-iam-user-and-access-key) before we begin.
+
+## Setup
+
+One command will set you up with (almost) everything you need:
+
+```bash
+yarn rw setup deploy serverless
+```
+
+As you'll see mentioned in the post-install instructions, you'll need to provide your AWS Access and AWS Secret Access keys. Add those to the designated places in your `.env` file:
+
+```bash
+# .env
+
+AWS_ACCESS_KEY_ID=
+AWS_SECRET_ACCESS_KEY=
+```
+
+Make sure you don't check `.env` into your repo! It's set in `.gitignore` by default, so make sure it stays that way.
+
+## First Deploy
+
+You'll need to add a special flag to the deploy command for your first deploy:
+
+```bash
+yarn rw deploy serverless --first-run
+```
+
+The first time you deploy your app, we'll deploy just the API side. Once it's live we can get the URL that it's been deployed to and add that as an environment variable `API_URL` so that the web side will know what it is during build time (it needs to know where to send GraphQL and function requests).
+
+Halfway through the first deploy you'll be asked if you want to add the API_URL to `.env.production` (which is similar to `.env` but is only used when `NODE_ENV=production`, like when building the web and api sides for deploy). Make sure you say `Y`es at this prompt, and then it will continue to deploy the web side.
+
+Once that command completes you should see a message including the URL of your site—open that URL and hopefully everything works as expected!
+
+> **Heads up**
+>
+> If you're getting an error trying to load data from the API side, it's possible you're still pointing at your local database.
+>
+> Remember to add a DATABASE_URL env var to the `.env.production` file that is created, pointing at the database you want to use on your deployed site. Since your stack is on AWS, RDS might be a good option, but you might find it easier/quicker to set up a database on another provider, such as [Railway](https://railway.app/) or [Supabase](https://supabase.com/).
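+
+For illustration, a hypothetical `.env.production` might end up looking like this (both values below are placeholders, not real endpoints):
+
+```bash
+# .env.production -- only used when NODE_ENV=production
+API_URL=https://abcd1234.execute-api.us-east-1.amazonaws.com
+DATABASE_URL=postgresql://user:password@mydb.example.com:5432/myapp
+```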
+
+## Subsequent Deploys
+
+From now on you can simply run `yarn rw deploy serverless` when you're ready to deploy (which will also be much faster).
+
+:::info
+Remember, if you add or generate new serverless functions (or endpoints), you'll need to update the configuration in `./api/serverless.yml`.
+
+By default we only configure the `auth` and `graphql` functions for you.
+:::
+
+## Environment Variables
+
+For local deployment (meaning you're deploying from your own machine, or another that you're in control of) you can put any ENV vars that are production-only into `.env.production`. They will override any same-named vars in `.env`. Make sure neither of these files is checked into your code repository!
+
+If you're setting up CI/CD and deploying from the Serverless Dashboard, you'll need to copy your required ENV vars up to your app on Serverless and then tell it where to get them from. In `api/serverless.yml` and `web/serverless.yml` look for the `provider > environment` section. You'll need to list any ENV vars here, using the `${param:VAR_NAME}` syntax, which means to get them from the Serverless Dashboard "parameters" (which is what they call environment variables, for some strange reason).
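+
+A sketch of what that section might look like (this assumes you've created a `SESSION_SECRET` parameter in the Dashboard; the name is illustrative):
+
+```yaml
+# api/serverless.yml
+provider:
+  environment:
+    # Pulled from the Serverless Dashboard "parameters" at deploy time
+    SESSION_SECRET: ${param:SESSION_SECRET}
+```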
+
+There are even more places you can get environment variables from, check out Serverless's [Variables documentation](https://www.serverless.com/framework/docs/providers/aws/guide/variables) for more.
+
+## Serverless Dashboard
+
+> **Note:**
+> Serverless Dashboard CI/CD does not support projects structured like Redwood, although they're working on it. For CD, you'll need to use something like GitHub Actions.
+>
+> It can still be worthwhile to integrate your project with Serverless Dashboard — you'll have features like deploy logs and monitoring, analytics, secret management, and AWS account integration. You can also [authenticate into your Serverless account within a CI context](https://www.serverless.com/framework/docs/guides/cicd/running-in-your-own-cicd). Just remember that if you do use the Dashboard to manage secrets, you'll need to use the `${param:VAR_NAME}` syntax.
+
+To integrate your site into the Serverless Dashboard, there are two ways:
+
+1. Run `yarn serverless login` and a browser _should_ open asking you to allow permission. However, in our experience, this command will fail nearly 50% of the time, complaining about an invalid URL. If it _does_ work, you can then run `yarn serverless` in both the `api` and `web` directories to link them to an existing app in the Dashboard, or you'll be prompted to create a new one. Future deploys will now be monitored on the Dashboard.
+2. You can manually add the `org` and `app` lines in `api/serverless.yml` and `web/serverless.yml`. You'll see example ones commented out near the top of the file.
+
+## Environments Besides Production
+
+By default we assume you want to deploy to a production environment, but Serverless lets you deploy anywhere. They call these destinations "stages", and in Redwood "production" is the default. Check out their [Managing Staging and Environments blog post](https://www.serverless.com/blog/stages-and-environments) for details.
+
+Once configured, just add the stage to your deploy command:
+
+```bash
+yarn rw deploy serverless --stage qa
+```
+
+## Removing Your Deploy
+
+In addition to creating all of the services necessary for your app to run, Serverless can also remove them (which is great when testing to avoid paying for services you're no longer using).
+
+You'll need to run this command in both the `api` and `web` directories:
+
+```bash
+yarn serverless remove --stage production
+```
+
+Note that `production` is the default stage when you deploy with `yarn rw deploy serverless` - if you have customized this, you have to use the same stage as you deployed with!
+
+This will take several minutes, so grab your favorite beverage and enjoy your new $0 monthly bill!
+
+:::tip Pro tip
+If you get tired of typing `serverless` each time, you can use the much shorter `sls` alias: `yarn rw deploy sls`
+:::
+
+## Troubleshooting
+
+If you happen to see the following error when deploying:
+
+```terminal
+Error:
+No auth.zip file found in the package path you provided.
+```
+
+Make sure that the dev server isn't running, then retry your deploy.
diff --git a/docs/versioned_docs/version-8.4/deploy/vercel.md b/docs/versioned_docs/version-8.4/deploy/vercel.md
new file mode 100644
index 000000000000..060627e4c224
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/deploy/vercel.md
@@ -0,0 +1,141 @@
+---
+description: Deploy serverless in an instant with Vercel
+---
+
+# Deploy to Vercel
+
+> The following instructions assume you have read the [General Deployment Setup](./introduction.md#general-deployment-setup) section above.
+
+## Vercel tl;dr Deploy
+
+If you simply want to experience the Vercel deployment process without a database and/or adding custom code, you can do the following:
+
+1. create a new redwood project: `yarn create redwood-app ./vercel-deploy`
+2. after your "vercel-deploy" project installation is complete, init git, commit, and add it as a new repo to GitHub, BitBucket, or GitLab
+3. run the command `yarn rw setup deploy vercel` and commit and push changes
+4. use the Vercel [Quick Start](https://vercel.com/#get-started) to deploy
+
+_If you choose this quick deploy experience, the following steps do not apply._
+
+## Redwood Project Setup
+
+If you already have a Redwood project, proceed to the next step.
+
+Otherwise, we recommend experiencing the full Redwood DX via the [Redwood Tutorial](tutorial/foreword.md). Simply return to these instructions when you reach the "Deployment" section.
+
+## Redwood Deploy Configuration
+
+Complete the following two steps. Then save, commit, and push your changes.
+
+### Step 1. Serverless Functions Path
+
+Run the following CLI Command:
+
+```shell
+yarn rw setup deploy vercel
+```
+
+This updates your `redwood.toml` file, setting `apiUrl = "/api"`.
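+
+The result is a sketch like this (your other `redwood.toml` settings are left alone):
+
+```toml
+[web]
+  title = "My App"
+  port = 8910
+  # Vercel serves Redwood's serverless functions under /api
+  apiUrl = "/api"
+```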
+
+### Step 2. Database Settings
+
+Follow the steps in the [Prisma and Database](./introduction#3-prisma-and-database) section above. _(Skip this step if your project does not require a database.)_
+
+:::info
+
+If you're using Vercel Postgres, you may want to limit certain Prisma operations when you deploy. For example, if you're on the Hobby plan, there are some storage and write limits that you can mitigate by turning the Prisma and data migration steps off during deploy and only enabling them on a case-by-case basis when needed:
+
+```bash
+yarn rw deploy vercel --prisma=false --data-migrate=false
+```
+
+:::
+
+### Vercel Initial Setup and Configuration
+
+Either [login](https://vercel.com/login) to your Vercel account and select "Import Project" or use the Vercel [quick start](https://vercel.com/#get-started).
+
+Then select the "Continue" button within the "From Git Repository" section.
+
+Next, select the provider where your repo is hosted: GitHub, GitLab, or Bitbucket. You'll be asked to log in and then provide the URL of the repository, e.g. for a GitHub repo `https://github.com/your-account/your-project.git`. Select "Continue".
+
+You'll then need to provide permissions for Vercel to access the repo on your hosting provider.
+
+### Import and Deploy your Project
+
+Vercel will recognize your repo as a Redwood project and take care of most configuration heavy lifting. Most importantly, the "Framework Preset" should show RedwoodJS.
+
+Leave the **Build and Output Settings** at the default settings (unless you know what you're doing and have very specific needs).
+
+In the "Environment Variables" dropdown, add `DATABASE_URL` and your app's database connection string as the value. (Or skip if not applicable.)
+
+> When configuring a database, you'll want to append `?connection_limit=1` to the URI. This is [recommended by Prisma](https://www.prisma.io/docs/reference/tools-and-interfaces/prisma-client/deployment#recommended-connection-limit) when working with relational databases in a Serverless context. For production apps, you should set up [connection pooling](https://redwoodjs.com/docs/connection-pooling).
+
+For example, a postgres connection string should look like `postgres://<user>:<password>@<host>/<database>?connection_limit=1`
+
+Finally, click the "Deploy" button. You'll hopefully see a build log without errors (warnings are fine) and end up on the deployment success screen.
+
+Go ahead, click that "Visit" button. You’ve earned it 🎉
+
+## Vercel Dashboard Settings
+
+From the Vercel Dashboard you can access the full settings and information for your Redwood App. The default settings seem to work just fine for most Redwood projects. Do take a look around, but be sure to check out the [docs as well](https://vercel.com/docs).
+
+From now on, each time you push code to your git repo, Vercel will automatically trigger a deploy of the new code. You can also manually redeploy if you select "Deployments", then the specific deployment from the list, and finally the "Redeploy" option from the vertical dots menu next to "Visit".
+
+## Configuration
+
+You can use `vercel.json` to configure and override the default behavior of Vercel from within your project. For [`functions`](#functions), you should configure them in code directly, not in `vercel.json`.
+
+### Project
+
+The [`vercel.json` configuration file](https://vercel.com/docs/projects/project-configuration#configuring-projects-with-vercel.json) lets you configure and override the default behavior of Vercel from within your project, such as rewrites or headers.
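+
+For instance, a minimal sketch that adds a security header to every route (the header choice here is illustrative):
+
+```json
+{
+  "headers": [
+    {
+      "source": "/(.*)",
+      "headers": [{ "key": "X-Frame-Options", "value": "DENY" }]
+    }
+  ]
+}
+```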
+
+### Functions
+
+By default, API requests in Vercel have a timeout limit of 15 seconds, but can be configured to be up to 90 seconds. Pro and other plans allow for longer [duration](https://vercel.com/docs/functions/runtimes#max-duration) and larger [memory-size limits](https://vercel.com/docs/functions/runtimes#memory-size-limits).
+
+To change the `maxDuration` or `memory` per function, export a `config` with the settings you want applied in your function. For example:
+
+```ts
+import type { APIGatewayEvent, Context } from 'aws-lambda'
+
+import { logger } from 'src/lib/logger'
+
+export const config = {
+ maxDuration: 30,
+ memory: 512,
+}
+
+export const handler = async (event: APIGatewayEvent, _context: Context) => {
+ logger.info(`${event.httpMethod} ${event.path}: vercel function`)
+
+ return {
+ statusCode: 200,
+ headers: {
+ 'Content-Type': 'application/json',
+ },
+ body: JSON.stringify({
+ data: 'vercel function',
+ }),
+ }
+}
+```
+
+:::tip important
+Since Redwood has its own handling of the api directory, the Vercel-flavored api directory is disabled. Therefore you don't use the "functions" config in `vercel.json` with Redwood.
+
+Also, be sure to use Node version 20.x or greater, or set the `runtime` in the function config:
+
+```ts
+export const config = {
+ runtime: 'nodejs20.x',
+}
+```
+
+:::
diff --git a/docs/versioned_docs/version-8.4/directives.md b/docs/versioned_docs/version-8.4/directives.md
new file mode 100644
index 000000000000..d9c0d1d50119
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/directives.md
@@ -0,0 +1,694 @@
+---
+description: Customize GraphQL execution
+---
+
+# Directives
+
+Redwood Directives are a powerful feature, supercharging your GraphQL-backed Services.
+
+You can think of directives like "middleware" that let you run reusable code during GraphQL execution to perform tasks like authentication and formatting.
+
+Redwood uses them to make it a snap to protect your API Services from unauthorized access.
+
+Here we call those types of directives **Validators**.
+
+You can also use them to transform the output of your query result to modify string values, format dates, shield sensitive data, and more!
+We call those types of directives **Transformers**.
+
+You'll recognize a directive as being 1) preceded by `@` (e.g. `@myDirective`) and 2) declared alongside a field:
+
+```graphql
+type Bar {
+ name: String! @myDirective
+}
+```
+
+or a Query or a Mutation:
+
+```graphql
+type Query {
+ bars: [Bar!]! @myDirective
+}
+
+type Mutation {
+ createBar(input: CreateBarInput!): Bar! @myDirective
+}
+```
+
+You can also define arguments that can be extracted and used when evaluating the directive:
+
+```graphql
+type Bar {
+ field: String! @myDirective(roles: ["ADMIN"])
+}
+```
+
+or a Query or Mutation:
+
+```graphql
+type Query {
+ bars: [Bar!]! @myDirective(roles: ["ADMIN"])
+}
+```
+
+You can also use directives on relations:
+
+```graphql
+type Baz {
+ name: String!
+}
+
+type Bar {
+ name: String!
+ bazzes: [Baz]! @myDirective
+}
+```
+
+There are many ways to write directives using GraphQL tools and libraries. Believe us, it can get complicated fast.
+
+But, don't fret: Redwood provides an easy and ergonomic way to generate and write your own directives so that you can focus on the implementation logic and not the GraphQL plumbing.
+
+## What is a Redwood Directive?
+
+Redwood directives are purposeful.
+They come in two flavors: **Validators** and **Transformers**.
+
+Whatever flavor of directive you want, all Redwood directives must have the following properties:
+
+- be in the `api/src/directives/{directiveName}` directory, where `{directiveName}` is the name of your directive
+- must have a file named `{directiveName}.{js,ts}` (e.g. `maskedEmail.ts`)
+- must export a `schema` and implement either a `validate` or `transform` function
+
+### Understanding the Directive Flow
+
+Since it helps to know a little about the GraphQL phases—specifically the Execution phase—and how Redwood Directives fit in the data-fetching and authentication flow, let's have a quick look at some diagrams.
+
+First, we see the built-in `@requireAuth` Validator directive that can allow or deny access to a Service (a.k.a. a resolver) based on Redwood authentication.
+In this example, the `post(id: Int!)` query is protected using the `@requireAuth` directive.
+
+If the request's context has a `currentUser` and the app's `auth.{js|ts}` determines it `isAuthenticated()`, then the execution phase proceeds to get resolved (for example, the `post({ id })` Service is executed and queries the database using Prisma) and returns the data in the resulting response when execution is done.
+
+![require-auth-directive](https://user-images.githubusercontent.com/1051633/135320891-34dc06fc-b600-4c76-8a35-86bf42c7f179.png)
+
+In this second example, we add the Transformer directive `@welcome` to the `title` field on `Post` in the SDL.
+
+The GraphQL Execution phase proceeds the same as the prior example (because the `post` query is still protected and we'll want to fetch the user's name) and then the `title` field is resolved based on the data fetch query in the service.
+
+Finally, after execution is done, the directive can inspect the `resolvedValue` (here "Welcome to the blog!") and replace it by inserting the current user's name: "Welcome, Tom, to the blog!"
+
+![welcome-directive](https://user-images.githubusercontent.com/1051633/135320906-5e2d639d-13a1-4aaf-85bf-98529822d244.png)
+
+### Validators
+
+Validators integrate with Redwood's authentication to evaluate whether or not a field, query, or mutation is permitted—that is, if the request context's `currentUser` is authenticated or belongs to one of the permitted roles.
+
+Validators should throw an error such as `AuthenticationError` or `ForbiddenError` to deny access, and simply return to allow it.
+
+Here the `@isSubscriber` validator directive checks if the `currentUser` exists (and therefore is authenticated) and whether or not they have the `SUBSCRIBER` role. If they don't, then access is denied by throwing an error.
+
+```tsx
+import gql from 'graphql-tag'
+
+import {
+  AuthenticationError,
+  ForbiddenError,
+  createValidatorDirective,
+  ValidatorDirectiveFunc,
+} from '@redwoodjs/graphql-server'
+
+export const schema = gql`
+ directive @isSubscriber on FIELD_DEFINITION
+`
+
+const validate: ValidatorDirectiveFunc = ({ context }) => {
+ if (!context.currentUser) {
+ throw new AuthenticationError("You don't have permission to do that.")
+ }
+
+ if (!context.currentUser.roles?.includes('SUBSCRIBER')) {
+ throw new ForbiddenError("You don't have access to do that.")
+ }
+}
+
+const isSubscriber = createValidatorDirective(schema, validate)
+
+export default isSubscriber
+```
+
+Since validator directives can access arguments (such as `roles`), you can quickly provide RBAC (Role-based Access Control) to fields, queries and mutations.
+
+```tsx
+import gql from 'graphql-tag'
+
+import { createValidatorDirective } from '@redwoodjs/graphql-server'
+
+import { requireAuth as applicationRequireAuth } from 'src/lib/auth'
+import { logger } from 'src/lib/logger'
+
+export const schema = gql`
+ directive @requireAuth(roles: [String]) on FIELD_DEFINITION
+`
+
+const validate = ({ directiveArgs }) => {
+ const { roles } = directiveArgs
+
+ applicationRequireAuth({ roles })
+}
+
+const requireAuth = createValidatorDirective(schema, validate)
+
+export default requireAuth
+```
+
+All Redwood apps come with two built-in validator directives: `@requireAuth` and `@skipAuth`.
+The `@requireAuth` directive takes optional roles.
+Use them to protect against unwanted GraphQL access to your data, or to explicitly allow public access.
+
+> **Note:** Validators evaluate prior to resolving the field value, so you cannot modify the value and any return value is ignored.
+
+### Transformers
+
+Transformers can access the resolved field value to modify and then replace it in the response.
+Transformers apply to single fields (such as a `User`'s `email`), to collections (such as a set of `Posts` that belong to `User`s), and to query results. As such, Transformers cannot be applied to Mutations.
+
+In the first case of a single field, the directive would return the modified field value. In the latter case, the directive could iterate each `Post` and modify the `title` in each. In all cases, the directive **must** return the same expected "shape" of the data the SDL expects.
+
+> **Note:** you can chain directives to first validate and then transform, such as `@requireAuth @maskedEmail`. Or even combine transformations to cascade formatting a value (you could use `@uppercase` together with `@truncate` to uppercase a title and shorten to 10 characters).
+
+Since transformer directives can access arguments (such as `roles` or `maxLength`) you may fetch those values and use them when applying (or to check if you even should apply) your transformation.
+
+That means that a transformer directive could consider the `permittedRoles` in:
+
+```graphql
+type user {
+ email: String! @maskedEmail(permittedRoles: ["ADMIN"])
+}
+```
+
+and if the `currentUser` is an `ADMIN`, then skip the masking transform and simply return the original resolved field value:
+
+```tsx title="./api/src/directives/maskedEmail/maskedEmail.ts"
+import { createTransformerDirective, TransformerDirectiveFunc } from '@redwoodjs/graphql-server'
+
+export const schema = gql`
+ directive @maskedEmail(permittedRoles: [String]) on FIELD_DEFINITION
+`
+
+const transform: TransformerDirectiveFunc = ({ context, resolvedValue }) => {
+ return resolvedValue.replace(/[a-zA-Z0-9]/g, '*')
+}
+
+const maskedEmail = createTransformerDirective(schema, transform)
+
+export default maskedEmail
+```
+
+and you would use it in your SDLs like this:
+
+```graphql
+type UserExample {
+ id: Int!
+ email: String! @maskedEmail # 👈 will replace alphanumeric characters with asterisks in the response!
+ name: String
+}
+```
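+
+The `transform` above masks the email for everyone. To honor the `permittedRoles` argument described earlier, the core logic might look something like this standalone sketch (the `maskEmail` helper and its signature are illustrative, not part of Redwood's API):
+
+```ts
+// Illustrative role-aware masking logic: a plain helper you could call
+// from a transform function. Not part of Redwood's API.
+type CurrentUser = { roles?: string[] }
+
+function maskEmail(
+  email: string,
+  currentUser: CurrentUser | null,
+  permittedRoles: string[] = []
+): string {
+  // Users with a permitted role (e.g. ADMIN) see the unmasked address
+  if (currentUser?.roles?.some((role) => permittedRoles.includes(role))) {
+    return email
+  }
+  // Everyone else gets alphanumeric characters replaced with asterisks
+  return email.replace(/[a-zA-Z0-9]/g, '*')
+}
+```
+
+In the real directive, you'd read `permittedRoles` from `directiveArgs`, get the user from `context`, and return the helper's result as the transformed value.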
+
+### Where can I use a Redwood Directive?
+
+A directive can only appear in certain locations in a GraphQL schema or operation. These locations are listed in the directive's definition.
+
+In the `@maskedEmail` example below, the directive can only appear in the `FIELD_DEFINITION` location.
+
+An example of a `FIELD_DEFINITION` location is a field that exists on a `Type`:
+
+```graphql
+type UserExample {
+ id: Int!
+ email: String! @requireAuth
+ name: String @maskedEmail # 👈 will mask the name in the response!
+}
+
+type Query {
+ userExamples: [UserExample!]! @requireAuth # 👈 will enforce auth when fetching all users
+ userExample(id: Int!): UserExample @requireAuth # 👈 will enforce auth when fetching a single user
+}
+```
+
+> **Note**: Even though GraphQL supports `FIELD_DEFINITION | ARGUMENT_DEFINITION | INPUT_FIELD_DEFINITION | ENUM_VALUE` locations, RedwoodDirectives can **only** be declared on a `FIELD_DEFINITION` — that is, you **cannot** declare a directive in an `Input type`:
+>
+> ```graphql
+> input UserExampleInput {
+> email: String! @maskedEmail # 👈 🙅 not allowed on an input
+> name: String! @requireAuth # 👈 🙅 also not allowed on an input
+> }
+> ```
+
+## When Should I Use a Redwood Directive?
+
+As noted in the [GraphQL spec](https://graphql.org/learn/queries/#directives):
+
+> Directives can be useful to get out of situations where you otherwise would need to do string manipulation to add and remove fields in your query. Server implementations may also add experimental features by defining completely new directives.
+
+Here's a helpful guide for deciding when you should use one of Redwood's Validator or Transformer directives:
+
+| | Use | Directive | Custom? | Type |
+| --- | --- | --- | --- | --- |
+| ✅ | Check if the request is authenticated? | `@requireAuth` | Built-in | Validator |
+| ✅ | Check if the user belongs to a role? | `@requireAuth(roles: ["AUTHOR"])` | Built-in | Validator |
+| ✅ | Only allow admins to see emails, but others get a masked value like "###@######.###" | `@maskedEmail(roles: ["ADMIN"])` | Custom | Transformer |
+| 🙅 | Know if the logged-in user can edit the record, and/or values | N/A - Instead, do this check in your service | - | - |
+| 🙅 | Is my input a valid email address format? | N/A - Instead, do this check in your service using [Service Validations](services.md#service-validations) or consider [GraphQL Scalars](https://www.graphql-scalars.dev) | - | - |
+| 🙅 | I want to remove a field from the response for data filtering; for example, do not include the title of the post | `@skip(if: true)` or `@include(if: false)` | Instead, use [core directives](https://graphql.org/learn/queries/#directives) on the GraphQL client query, not the SDL | Core GraphQL |
+
+## Combining, Chaining and Cascading Directives
+
+Now that you've seen what Validator and Transformer directives look like and where and when you may use them, you may wonder: can I use them together? Can I transform the result of a transformer?
+
+The answer is: yes—yes you can!
+
+### Combine Directives on a Query and a Type Field
+
+Let's say you want to only allow logged-in users to be able to query `User` details and you only want un-redacted email addresses to be shown to ADMINs.
+
+You can apply the `@requireAuth` directive to the `user(id: Int!)` query so you have to be logged in.
+Then, you can compose a `@maskedEmail` directive that checks the logged-in user's role membership and, if they're not an ADMIN, masks the email address:
+
+```graphql
+ type User {
+ id: Int!
+ name: String!
+ email: String! @maskedEmail(permittedRoles: ["ADMIN"])
+ createdAt: DateTime!
+ }
+
+ type Query {
+ user(id: Int!): User @requireAuth
+ }
+```
+
+Or, let's say you want to only allow logged-in users to query User details, but only ADMIN users should be able to query and fetch the email address.
+
+You can apply the `@requireAuth` directive to the `user(id: Int!)` query so you have to be logged in.
+
+And, you can apply the `@requireAuth` directive to the `email` field with a roles argument.
+
+```graphql
+ type User {
+ id: Int!
+ name: String!
+ email: String! @requireAuth(roles: ["ADMIN"])
+ createdAt: DateTime!
+ }
+
+ type Query {
+ user(id: Int!): User @requireAuth
+ }
+```
+
+Now, if a user who is not an ADMIN queries:
+
+```graphql
+query {
+  user(id: 1) {
+    id
+    name
+    createdAt
+  }
+}
+```
+
+They will get a result.
+
+But, if they try to query:
+
+```graphql
+query {
+  user(id: 1) {
+    id
+    name
+    email
+    createdAt
+  }
+}
+```
+
+They will be denied access and receive an error instead.
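+
+The response's `errors` array would look something like this (the exact shape depends on your GraphQL server's error-masking settings):
+
+```json
+{
+  "data": null,
+  "errors": [{ "message": "You don't have access to do that." }]
+}
+```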
+
+### Chaining a Validator and a Transformer
+
+Similar to the prior example, you may want to chain directives, but here the transform itself doesn't consider authentication or role membership.
+
+For example, here we ensure that anyone trying to query a User and fetch the email must be authenticated.
+
+And then, if they are, apply a mask to the email field.
+
+```graphql
+ type User {
+ id: Int!
+ name: String!
+ email: String! @requireAuth @maskedEmail
+ createdAt: DateTime!
+ }
+```
+
+### Cascade Transformers
+
+Maybe you want to apply multiple transformations to format a field?
+
+If your request event headers include geographic or timezone info, a custom Transformer directive called `@localTimezone` could inspect the header value and convert the `createdAt` from UTC to local time -- something often done in the browser.
+
+Then, you can chain the `@dateFormat` Transformer to return just the date portion of the timestamp -- and not the time.
+
+```graphql
+ type User {
+ id: Int!
+ name: String!
+ email: String!
+ createdAt: DateTime! @localTimezone @dateFormat
+ }
+```
+
+> **Note**: These directives could alternatively be implemented as "operation directives" so the client can use them on a query instead of at the schema level. Operation directives are a potential future Redwood feature.
+
+## GraphQL Handler Setup
+
+Redwood makes it easy to code, organize, and map your directives into your GraphQL schema.
+Simply add them to the `directives` directory and the `createGraphQLHandler` will do all the work.
+
+> **Note**: Redwood has a generator that will do all the heavy lifting setup for you!
+
+```tsx title="api/src/functions/graphql.ts"
+import { createGraphQLHandler } from '@redwoodjs/graphql-server'
+
+import directives from 'src/directives/**/*.{js,ts}' // 👈 directives live here
+import sdls from 'src/graphql/**/*.sdl.{js,ts}'
+import services from 'src/services/**/*.{js,ts}'
+
+import { db } from 'src/lib/db'
+import { logger } from 'src/lib/logger'
+
+export const handler = createGraphQLHandler({
+ loggerConfig: { logger, options: {} },
+ directives, // 👈 directives are added to the schema here
+ sdls,
+ services,
+ onException: () => {
+ // Disconnect from your database on an unhandled exception.
+ db.$disconnect()
+ },
+})
+```
+
+## Secure by Default with Built-in Directives
+
+By default, your GraphQL endpoint is open to the world.
+
+That means anyone can request any query and invoke any Mutation.
+Any types and fields defined in your SDL are data that anyone can access.
+
+But Redwood encourages being secure by default: when generating an SDL or a service, every query and mutation gets the `@requireAuth` directive.
+When your app builds and your server starts up, Redwood checks that **all** queries and mutations have `@requireAuth`, `@skipAuth` or a custom directive applied.
+
+If not, then your build will fail:
+
+```bash
+✖ Verifying graphql schema...
+Building API...
+Cleaning Web...
+Building Web...
+Prerendering Web...
+You must specify one of @requireAuth, @skipAuth or a custom directive for
+- contacts Query
+- posts Query
+- post Query
+- updatePost Mutation
+- deletePost Mutation
+```
+
+or your server won't start up and you'll see "Schema validation failed":
+
+```bash
+gen | Generating TypeScript definitions and GraphQL schemas...
+gen | 47 files generated
+api | Building... Took 593 ms
+api | [GQL Server Error] - Schema validation failed
+api | ----------------------------------------
+api | You must specify one of @requireAuth, @skipAuth or a custom directive for
+api | - posts Query
+api | - createPost Mutation
+api | - updatePost Mutation
+api | - deletePost Mutation
+```
+
+To correct this, add the appropriate directive to your queries and mutations. Until you do, your build will fail and your server won't start up.
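+
+For example, to make the `posts` query public while keeping the rest protected (a sketch assuming a typical blog SDL):
+
+```graphql
+type Query {
+  posts: [Post!]! @skipAuth
+  post(id: Int!): Post @requireAuth
+}
+```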
+
+### @requireAuth
+
+It's your responsibility to implement the `requireAuth()` function in your app's `api/src/lib/auth.{js|ts}` to check if the user is properly authenticated and/or has the expected role membership.
+
+The `@requireAuth` directive will call the `requireAuth()` function to determine if the user is authenticated or not.
+
+```tsx title="api/src/lib/auth.ts"
+// ...
+
+export const isAuthenticated = (): boolean => {
+ return true // 👈 replace with the appropriate check
+}
+
+// ...
+
+export const requireAuth = ({ roles }: { roles: AllowedRoles }) => {
+ if (!isAuthenticated()) {
+ throw new AuthenticationError("You don't have permission to do that.")
+ }
+
+ if (!hasRole({ roles })) {
+ throw new ForbiddenError("You don't have access to do that.")
+ }
+}
+```
+
+> **Note**: The `auth.ts` file here is the stub for a new RedwoodJS app. Once you have set up auth with your provider, this will enforce a proper authentication check.
+
+### @skipAuth
+
+If, however, you want your query or mutation to be public, then simply use `@skipAuth`.
+
+## Custom Directives
+
+Want to write your own directive? You can of course!
+Just generate one using the Redwood CLI; it takes care of the boilerplate and even gives you a handy test!
+
+### Generators
+
+When using the `yarn redwood generate` command,
+you'll be presented with a choice of creating a Validator or a Transformer directive.
+
+```bash
+yarn redwood generate directive myDirective
+
+? What type of directive would you like to generate? › - Use arrow-keys. Return to submit.
+❯ Validator - Implement a validation: throw an error if criteria not met to stop execution
+ Transformer - Modify values of fields or query responses
+```
+
+> **Note:** You can pass the `--type` flag with either `validator` or `transformer` to create the desired directive type.
+
+After picking the directive type, the files will be created in your `api/src/directives` directory:
+
+```bash
+ ✔ Generating directive file ...
+ ✔ Successfully wrote file `./api/src/directives/myDirective/myDirective.test.ts`
+ ✔ Successfully wrote file `./api/src/directives/myDirective/myDirective.ts`
+ ✔ Generating TypeScript definitions and GraphQL schemas ...
+ ✔ Next steps...
+
+ After modifying your directive, you can add it to your SDLs e.g.:
+ // example todo.sdl.js
+ # Option A: Add it to a field
+ type Todo {
+ id: Int!
+ body: String! @myDirective
+ }
+
+ # Option B: Add it to query/mutation
+ type Query {
+ todos: [Todo] @myDirective
+ }
+```
+
+### Validator
+
+Let's create a `@isSubscriber` directive that checks roles to see if the user is a subscriber.
+
+```bash
+yarn rw g directive isSubscriber --type validator
+```
+
+Next, implement your validation logic in the directive's `validate` function.
+
+Validator directives don't have access to the field value (i.e., they're called before the value is resolved), but they do have access to the `context` and `directiveArgs`.
+They can be async or sync.
+If you want to stop execution (because of insufficient permissions, for example), throw an error.
+The return value is ignored.
+
+An example of `directiveArgs` is the `roles` argument in the directive `@requireAuth(roles: ["ADMIN"])`:
+
+```tsx
+const validate: ValidatorDirectiveFunc = ({ context, directiveArgs }) => {
+ // You can also modify your directive to take arguments
+ // and use the directiveArgs object provided to this function to get values
+ logger.debug(directiveArgs, 'directiveArgs in isSubscriber directive')
+
+ throw new Error('Implementation missing for isSubscriber')
+}
+```
+
+Here we can access the `context` parameter and then check to see if the `currentUser` is authenticated and if they belong to the `SUBSCRIBER` role:
+
+```tsx title="/api/src/directives/isSubscriber/isSubscriber.ts"
+// ...
+
+const validate: ValidatorDirectiveFunc = ({ context }) => {
+ if (!context.currentUser) {
+ throw new AuthenticationError("You don't have permission to do that.")
+ }
+
+ if (!context.currentUser.roles?.includes('SUBSCRIBER')) {
+ throw new ForbiddenError("You don't have access to do that.")
+ }
+}
+```
+
+#### Writing Validator Tests
+
+When writing a Validator directive test, you'll want to:
+
+- ensure the directive is named consistently and correctly so the directive name maps properly when validating
+- confirm that the directive throws an error when invalid. The Validator directive should always have a reason to throw an error
+
+Since we stub out the `Error('Implementation missing for isSubscriber')` case when generating the Validator directive, these tests should pass.
+But once you begin implementing the validate logic, it's on you to update appropriately.
+
+```tsx
+import { mockRedwoodDirective, getDirectiveName } from '@redwoodjs/testing/api'
+
+import isSubscriber from './isSubscriber'
+
+describe('isSubscriber directive', () => {
+ it('declares the directive sdl as schema, with the correct name', () => {
+ expect(isSubscriber.schema).toBeTruthy()
+ expect(getDirectiveName(isSubscriber.schema)).toBe('isSubscriber')
+ })
+
+ it('has a isSubscriber throws an error if validation does not pass', () => {
+ const mockExecution = mockRedwoodDirective(isSubscriber, {})
+
+ expect(mockExecution).toThrowError(
+ 'Implementation missing for isSubscriber'
+ )
+ })
+})
+```
+
+:::tip
+If your Validator Directive is asynchronous, you can use `rejects` to handle the exception.
+
+```ts
+describe('isSubscriber directive', () => {
+ it('has a isSubscriber throws an error if validation does not pass', async () => {
+ const mockExecution = mockRedwoodDirective(isSubscriber, {})
+ await expect(mockExecution()).rejects.toThrowError(
+ 'Implementation missing for isSubscriber'
+ )
+ })
+})
+```
+
+:::
+
+### Transformer
+
+Let's create a `@maskedEmail` directive that checks roles to see if the user should see the complete email address or if it should be obfuscated from prying eyes:
+
+```bash
+yarn rw g directive maskedEmail --type transformer
+```
+
+Next, implement your transformation logic in the directive's `transform` function.
+
+Transformer directives provide `context` and `resolvedValue` parameters and run **after** resolving the value.
+Transformer directives **must** be synchronous, and return a value.
+You can throw an error if you want to stop execution, but note that the value has already been resolved.
+
+Take note of the `resolvedValue`:
+
+```tsx
+const transform: TransformerDirectiveFunc = ({ context, resolvedValue }) => {
+ return resolvedValue.replace('foo', 'bar')
+}
+```
+
+It contains the value of the field on which the directive was placed -- here, `email`.
+So the `resolvedValue` will be the value of the `email` property in the User model: the "original value", so to speak.
+
+When you return a modified value from the `transform` function, it replaces the `email` value in the response.
+
+> 🛎️ **Important**
+>
+> You must return a value of the same type. So, if your `resolvedValue` is a `String`, return a `String`. If it's a `Date`, return a `Date`. Otherwise, your data will not match the SDL Type.
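+
+For instance, here's a minimal sketch of a transform that preserves the resolved type: a `Date` in, a `Date` out (the `truncateToDay` helper is illustrative):
+
+```ts
+// Illustrative: truncate a resolved DateTime to midnight UTC while
+// preserving its type, so an SDL DateTime! field still gets a Date back
+function truncateToDay(resolvedValue: Date): Date {
+  const truncated = new Date(resolvedValue)
+  truncated.setUTCHours(0, 0, 0, 0)
+  return truncated
+}
+```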
+
+#### Writing Transformer Tests
+
+When writing a Transformer directive test, you'll want to:
+
+- ensure the directive is named consistently and correctly so the directive name maps properly when transforming
+- confirm that the directive returns a value and that it's the expected transformed value
+
+Since we stub out and mock the `mockedResolvedValue` when generating the Transformer directive, these tests should pass.
+
+Here we mock the value `foo` and, since the generated `transform` function replaces `foo` with `bar`, we expect that after execution, the returned value will be `bar`.
+But once you begin implementing the transform logic, it's on you to update appropriately.
+
+```tsx
+import { mockRedwoodDirective, getDirectiveName } from '@redwoodjs/testing/api'
+
+import maskedEmail from './maskedEmail'
+
+describe('maskedEmail directive', () => {
+ it('declares the directive sdl as schema, with the correct name', () => {
+ expect(maskedEmail.schema).toBeTruthy()
+ expect(getDirectiveName(maskedEmail.schema)).toBe('maskedEmail')
+ })
+
+ it('has a maskedEmail implementation transforms the value', () => {
+ const mockExecution = mockRedwoodDirective(maskedEmail, {
+ mockedResolvedValue: 'foo',
+ })
+
+ expect(mockExecution()).toBe('bar')
+ })
+})
+```
+
+:::tip
+
+If your Transformer Directive is asynchronous, you can use `resolves` to handle the result.
+
+```ts
+import maskedEmail from './maskedEmail'
+
+describe('maskedEmail directive', () => {
+ it('has a maskedEmail implementation transforms the value', async () => {
+ const mockExecution = mockRedwoodDirective(maskedEmail, {
+ mockedResolvedValue: 'foo',
+ })
+
+ await expect(mockExecution()).resolves.toBe('bar')
+ })
+})
+```
+
+:::
diff --git a/docs/versioned_docs/version-8.4/docker.md b/docs/versioned_docs/version-8.4/docker.md
new file mode 100644
index 000000000000..c2d60c81509a
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/docker.md
@@ -0,0 +1,489 @@
+---
+description: Redwood's Dockerfile
+---
+
+# Docker
+
+If you're not familiar with Docker, we recommend going through their [getting started](https://docs.docker.com/get-started/) documentation.
+
+## Set up
+
+To get started, run the setup command:
+
+```
+yarn rw setup docker
+```
+
+The setup command does several things:
+
+- writes four files: `Dockerfile`, `.dockerignore`, `docker-compose.dev.yml`, and `docker-compose.prod.yml`
+- adds the `@redwoodjs/api-server` and `@redwoodjs/web-server` packages to the api and web sides respectively
+- edits the `browser.open` setting in the `redwood.toml` (right now, if it's set to `true`, it'll break the dev server when running the `docker-compose.dev.yml`)
+
+## Usage
+
+You can start the dev compose file with:
+
+```
+docker compose -f ./docker-compose.dev.yml up
+```
+
+And the prod compose file with:
+
+```
+docker compose -f ./docker-compose.prod.yml up
+```
+
+:::info make sure to specify build args
+
+If your api side or web side depend on env vars at build time, you may need to supply them as `--build-arg` values or in the compose files.
+
+This is often the most tedious part of setting up Docker. Have ideas of how it could be better? Let us know on the [forums](https://community.redwoodjs.com/)!
+
+:::
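+
+For example, build-time values can be declared in the compose file's `build` section (a sketch; the `web` service name and `API_URL` variable are illustrative and depend on your setup):
+
+```yml
+services:
+  web:
+    build:
+      context: .
+      args:
+        API_URL: https://api.example.com
+```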
+
+The first time you do this, you'll have to use the `console` stage to go in and migrate the database—just like you would with a Redwood app on your machine:
+
+```
+docker compose -f ./docker-compose.dev.yml run --rm -it console /bin/bash
+root@...:/home/node/app# yarn rw prisma migrate dev
+```
+
+:::info database choice
+The docker setup command assumes that you are using Postgres as your database provider and sets up a local Postgres database for you. You may have to switch from SQLite to Postgres if you have not done so and want to continue with the default setup.
+:::
+
+:::important
+If you are using a [Server File](server-file.md) then you should [change the command](#command) that runs the `api_serve` service.
+:::
+
+## Dockerfile
+
+The documentation here goes through and explains every line of Redwood's Dockerfile.
+If you'd like to see the whole Dockerfile for reference, you can find it [here](https://github.com/redwoodjs/redwood/tree/main/packages/cli/src/commands/setup/docker/templates/Dockerfile) or by setting it up in your project: `yarn rw setup docker`.
+
+Redwood takes advantage of [Docker's multi-stage build support](https://docs.docker.com/build/building/multi-stage/) to keep the final production images lean.
+
+### The `base` stage
+
+The `base` stage installs dependencies.
+It's used as the base image for the build stages and the `console` stage.
+
+```Dockerfile
+FROM node:20-bookworm-slim as base
+```
+
+We use a Node.js 20 image as the base image because that's the version Redwood targets.
+"bookworm" is the codename for the current stable distribution of Debian (version 12).
+Lastly, the "slim" variant of the `node:20-bookworm` image only includes what Node.js needs, which reduces the image's size while making it more secure.
+
+:::tip Why not alpine?
+
+While alpine may be smaller, it uses musl, a different C standard library.
+In developing this Dockerfile, we prioritized security over size.
+
+If you know what you're doing feel free to change this—it's your Dockerfile now!
+Just remember to change the `apt-get` instructions further down too if needed.
+
+:::
+
+Moving on, next we have `corepack enable`:
+
+```Dockerfile
+RUN corepack enable
+```
+
+[Corepack](https://nodejs.org/docs/latest-v18.x/api/corepack.html), Node's manager for package managers, needs to be enabled so that Yarn can use the `packageManager` field in your project's root `package.json` to pick the right version of itself.
+If you'd rather check in the binary, you still can, but you'll need to remember to copy it over (i.e. `COPY --chown=node:node .yarn/releases .yarn/releases`).
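+
+The `packageManager` field Corepack reads looks something like this in your root `package.json` (your Yarn version will differ):
+
+```json
+{
+  "packageManager": "yarn@4.1.1"
+}
+```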
+
+```Dockerfile
+RUN apt-get update && apt-get install -y \
+ openssl \
+ # python3 make gcc \
+ && rm -rf /var/lib/apt/lists/*
+```
+
+The `node:20-bookworm-slim` image doesn't have [OpenSSL](https://www.openssl.org/), which [seems to be a bug](https://github.com/nodejs/docker-node/issues/1919).
+(It was included in the "bullseye" image, the codename for Debian 11.)
+On Linux, [Prisma needs OpenSSL](https://www.prisma.io/docs/reference/system-requirements#linux-runtime-dependencies), so we install it here via Debian's package manager, APT.
+Python and its dependencies are there ready to be uncommented if you need them. See the [Troubleshooting](#python) section for more information.
+
+[It's recommended](https://docs.docker.com/develop/develop-images/instructions/#apt-get) to combine `apt-get update` and `apt-get install -y` in the same `RUN` statement for cache busting.
+After installing, we clean up the apt cache to keep the layer lean. (Running `apt-get clean` isn't required—[official Debian images do it automatically](https://github.com/moby/moby/blob/03e2923e42446dbb830c654d0eec323a0b4ef02a/contrib/mkimage/debootstrap#L82-L105).)
+
+```Dockerfile
+USER node
+```
+
+This and subsequent `chown` options in `COPY` instructions are for security.
+[Services that can run without privileges should](https://docs.docker.com/develop/develop-images/instructions/#user).
+The Node.js image includes a user, `node`, created with an explicit `uid` and `gid` (`1000`).
+We reuse it.
+
+```Dockerfile
+WORKDIR /home/node/app
+
+COPY --chown=node:node .yarnrc.yml .
+COPY --chown=node:node package.json .
+COPY --chown=node:node api/package.json api/
+COPY --chown=node:node web/package.json web/
+COPY --chown=node:node yarn.lock .
+```
+
+Here we copy the minimum set of files that the `yarn install` step needs.
+The order isn't completely arbitrary—it tries to maximize [Docker's layer caching](https://docs.docker.com/build/cache/).
+We expect `yarn.lock` to change more than the `package.json`s and the `package.json`s to change more than `.yarnrc.yml`.
+That said, these files could arguably be arranged differently, and the `COPY` instructions could be combined.
+The important thing is that they're all copied in before the `yarn install` step:
+
+```Dockerfile
+RUN mkdir -p /home/node/.yarn/berry/index
+RUN mkdir -p /home/node/.cache
+
+RUN --mount=type=cache,target=/home/node/.yarn/berry/cache,uid=1000 \
+ --mount=type=cache,target=/home/node/.cache,uid=1000 \
+ CI=1 yarn install
+```
+
+This step installs all your project's dependencies—production and dev.
+Since we use multi-stage builds, your production images won't pay for the dev dependencies installed in this step.
+The build stages need the dev dependencies.
+
+The `mkdir` steps are a workaround for a permission error. We're working on removing them, but for now, removing them yourself will probably make the install step fail.
+
+This step is a bit more involved than the others.
+It uses a [cache mount](https://docs.docker.com/build/cache/#use-your-package-manager-wisely).
+Yarn operates in three steps: resolution, fetch, and link.
+If you're not careful, the cache for the fetch step basically doubles the disk space taken up by `node_modules`.
+We could disable it altogether, but by using a cache mount, we still get the benefits without paying twice.
+We set it to the default directory here, but you can change its location in `.yarnrc.yml`.
+If you've done so, you'll have to change it here too.
+
+One more thing to note: without `CI=1` set, depending on the deploy provider, yarn may think it's running in a TTY, making the logs difficult to read. With it set, yarn adapts its output accordingly.
+Setting `CI=1` also enables [immutable installs](https://v3.yarnpkg.com/configuration/yarnrc#enableImmutableInstalls) and [inline builds](https://v3.yarnpkg.com/configuration/yarnrc#enableInlineBuilds), both of which are highly recommended.
+
+```Dockerfile
+COPY --chown=node:node redwood.toml .
+COPY --chown=node:node graphql.config.js .
+COPY --chown=node:node .env.defaults .env.defaults
+```
+
+We'll need these config files for the build and production stages.
+The `redwood.toml` file is Redwood's de facto config file.
+Both the build and serve stages read it to enable and configure functionality.
+
+:::warning `.env.defaults` is ok to include but `.env` is not
+
+If you add a secret to the Dockerfile, it can be extracted from the image's layers.
+While it's technically true that multi-stage builds add a layer of indirection, relying on them to hide secrets isn't a best practice.
+Leave secrets out and look to your deploy provider for further configuration.
+
+:::
+
+### The `api_build` stage
+
+The `api_build` stage builds the api side:
+
+```Dockerfile
+FROM base as api_build
+
+# If your api side build relies on build-time environment variables,
+# specify them here as ARGs.
+#
+# ARG MY_BUILD_TIME_ENV_VAR
+
+COPY --chown=node:node api api
+RUN yarn rw build api
+```
+
+After the work we did in the base stage, building the api side amounts to copying in the api directory and running `yarn rw build api`.
+
+### The `api_serve` stage
+
+The `api_serve` stage serves your GraphQL api and functions:
+
+```Dockerfile
+FROM node:20-bookworm-slim as api_serve
+
+RUN corepack enable
+
+RUN apt-get update && apt-get install -y \
+ openssl \
+ # python3 make gcc \
+ && rm -rf /var/lib/apt/lists/*
+```
+
+We don't start from the `base` stage, but begin anew with the `node:20-bookworm-slim` image.
+Since this is a production stage, it's important for it to be as small as possible.
+Docker's [multi-stage builds](https://docs.docker.com/build/building/multi-stage/) enable this.
+
+```Dockerfile
+USER node
+WORKDIR /home/node/app
+
+COPY --chown=node:node .yarnrc.yml .yarnrc.yml
+COPY --chown=node:node package.json .
+COPY --chown=node:node api/package.json api/
+COPY --chown=node:node yarn.lock yarn.lock
+```
+
+Like the other `COPY` instructions, ordering these files with care enables layer caching.
+
+```Dockerfile
+RUN mkdir -p /home/node/.yarn/berry/index
+RUN mkdir -p /home/node/.cache
+
+RUN --mount=type=cache,target=/home/node/.yarn/berry/cache,uid=1000 \
+ --mount=type=cache,target=/home/node/.cache,uid=1000 \
+ CI=1 yarn workspaces focus api --production
+```
+
+This is a critical step for image size.
+We don't use the regular `yarn install` command.
+Using the [official workspaces plugin](https://github.com/yarnpkg/berry/tree/master/packages/plugin-workspace-tools)—which is included by default in yarn v4—we "focus" on the api workspace, only installing its production dependencies.
+
+The cache mount will be populated at this point from the install in the `base` stage, so the fetch step should fly by.
+
+```Dockerfile
+COPY --chown=node:node redwood.toml .
+COPY --chown=node:node graphql.config.js .
+COPY --chown=node:node .env.defaults .env.defaults
+
+COPY --chown=node:node --from=api_build /home/node/app/api/dist /home/node/app/api/dist
+COPY --chown=node:node --from=api_build /home/node/app/api/db /home/node/app/api/db
+COPY --chown=node:node --from=api_build /home/node/app/node_modules/.prisma /home/node/app/node_modules/.prisma
+```
+
+Here's where we really take advantage of multi-stage builds by copying from the `api_build` stage.
+At this point all the building has been done. Now we can just grab the artifacts without having to lug around the dev dependencies.
+
+There's one more thing that was built: the Prisma client in `node_modules/.prisma`.
+We need to grab it, too.
+
+Lastly, the default command is to start the api server using the bin from the `@redwoodjs/api-server` package.
+You can override this command if you have more specific needs.
+
+```Dockerfile
+ENV NODE_ENV=production
+
+# default api serve command
+# ---------
+# If you are using a custom server file, you must use the following
+# command to launch your server instead of the default api-server below.
+# This is important if you intend to configure GraphQL to use Realtime.
+#
+# CMD [ "./api/dist/server.js" ]
+CMD [ "node_modules/.bin/rw-server", "api" ]
+```
+
+:::important
+If you are using a [Server File](#using-the-server-file), you must change the command that runs the `api_serve` service to `./api/dist/server.js`, as shown in the comment above.
+
+If you don't update the command, the GraphQL server won't be fully configured, and [Redwood Realtime](./realtime.md) won't be set up if you're using it.
+:::
+
+Note that the Redwood CLI isn't available anymore because it's a dev dependency.
+To start the server, we have to invoke its bin directly from `node_modules`.
+Though reaching into `node_modules` is somewhat discouraged in modern yarn, since we're using the `node-modules` node linker, the bin is in `node_modules/.bin`.
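+
+To see the shape of that layout, here's a toy sketch you can run anywhere (the `rw-server` script below is a fake stand-in for the real bin, which the install step symlinks into place):
+
+```shell
+# Toy illustration of the node_modules/.bin layout. The bin here is a fake
+# stand-in; in the image, yarn symlinks the real rw-server into .bin.
+mkdir -p node_modules/.bin
+printf '#!/bin/sh\necho "api server listening"\n' > node_modules/.bin/rw-server
+chmod +x node_modules/.bin/rw-server
+
+MSG=$(./node_modules/.bin/rw-server)
+echo "$MSG"  # api server listening
+```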
+
+### The `web_build` stage
+
+The `web_build` stage builds the web side:
+
+```Dockerfile
+FROM base as web_build
+
+COPY --chown=node:node web web
+RUN yarn rw build web --no-prerender
+```
+
+After the work we did in the base stage, building the web side amounts to copying in the web directory and running `yarn rw build web --no-prerender`.
+
+This stage is a bit of a simplification.
+It forgoes Redwood's prerendering (SSG) capability.
+Prerendering is a little trickier; see [the `web_prerender_build` stage](#the-web_prerender_build-stage).
+
+If you've included environment variables in your `redwood.toml`'s `web.includeEnvironmentVariables` field, you'll want to specify them as ARGs here.
+The setup command should've inlined them for you.
+
+### The `web_prerender_build` stage
+
+The `web_prerender_build` stage builds the web side with prerender.
+
+```Dockerfile
+FROM api_build as web_prerender_build
+
+COPY --chown=node:node web web
+RUN yarn rw build web
+```
+
+Building the web side with prerendering poses a challenge.
+Prerender needs the api side around to get data for your Cells and route hooks.
+The key line here is the first one—this stage uses the `api_build` stage as its base image.
+
+### The `web_serve` stage
+
+```Dockerfile
+FROM node:20-bookworm-slim as web_serve
+
+RUN corepack enable
+
+USER node
+WORKDIR /home/node/app
+
+COPY --chown=node:node .yarnrc.yml .
+COPY --chown=node:node package.json .
+COPY --chown=node:node web/package.json web/
+COPY --chown=node:node yarn.lock .
+
+RUN mkdir -p /home/node/.yarn/berry/index
+RUN mkdir -p /home/node/.cache
+
+RUN --mount=type=cache,target=/home/node/.yarn/berry/cache,uid=1000 \
+ --mount=type=cache,target=/home/node/.cache,uid=1000 \
+ CI=1 yarn workspaces focus web --production
+
+COPY --chown=node:node redwood.toml .
+COPY --chown=node:node graphql.config.js .
+COPY --chown=node:node .env.defaults .env.defaults
+
+COPY --chown=node:node --from=web_build /home/node/app/web/dist /home/node/app/web/dist
+
+ENV NODE_ENV=production \
+ API_PROXY_TARGET=http://api:8911
+
+CMD "node_modules/.bin/rw-web-server" "--api-proxy-target" "$API_PROXY_TARGET"
+```
+
+Most of this stage is similar to the `api_serve` stage, except that we're copying from the `web_build` stage instead of `api_build`.
+(If you're prerendering, you'll want to change the `--from=web_build` to `--from=web_prerender_build`.)
+
+The binary we're using here to serve the web side is `rw-web-server` which comes from the `@redwoodjs/web-server` package.
+While this web server will be much more fully featured in the future, right now it's mostly just to get you going.
+Ideally you want to put a web server like Nginx or Caddy in front of it.
+
+Lastly, note that we use the shell form of `CMD` here for its variable expansion.
+
+### The `console` stage
+
+The `console` stage is an optional stage for debugging:
+
+````Dockerfile
+FROM base as console
+
+# To add more packages:
+#
+# ```
+# USER root
+#
+# RUN apt-get update && apt-get install -y \
+# curl
+#
+# USER node
+# ```
+
+COPY --chown=node:node api api
+COPY --chown=node:node web web
+COPY --chown=node:node scripts scripts
+````
+
+The console stage completes the base stage by copying in the rest of your Redwood app.
+But then it pretty much leaves you to your own devices.
+The intended way to use it is to create an ephemeral container by starting a shell like `/bin/bash` in the image built by targeting this stage:
+
+```bash
+# Build the console image:
+docker build . -t console --target console
+# Start an ephemeral container from it:
+docker run --rm -it console /bin/bash
+```
+
+As the comment says, feel free to add more packages.
+We intentionally kept them to a minimum in the base stage, but you shouldn't worry about the size of the image here.
+
+## Troubleshooting
+
+### Python
+
+We tried to make the Dockerfile as lean as possible.
+In some cases, that means we excluded a dependency your project needs.
+And by far the most common is Python.
+
+During a stage's `yarn install` step (`RUN ... yarn install`), if you see an error like the following:
+
+```
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python Python is not set from command line or npm configuration
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python Python is not set from environment variable PYTHON
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python checking if "python3" can be used
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python - executable path is ""
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python - "" could not be run
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python checking if "python" can be used
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python - executable path is ""
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python - "" could not be run
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python **********************************************************
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python You need to install the latest version of Python.
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python Node-gyp should be able to find and use Python. If not,
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python you can try one of the following options:
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python - Use the switch --python="/path/to/pythonexecutable"
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python (accepted by both node-gyp and npm)
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python - Set the environment variable PYTHON
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python - Set the npm configuration variable python:
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python npm config set python "/path/to/pythonexecutable"
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python For more information consult the documentation at:
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python https://github.com/nodejs/node-gyp#installation
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python **********************************************************
+➤ YN0000: │ bufferutil@npm:4.0.8 STDERR gyp ERR! find Python
+```
+
+It's because your project depends on Python and the image doesn't provide it.
+
+It's easy to fix: just add `python3` and its dependencies (usually `make` and `gcc`):
+
+```diff
+ FROM node:20-bookworm-slim as base
+
+ RUN apt-get update && apt-get install -y \
+ openssl \
++ python3 make gcc \
+ && rm -rf /var/lib/apt/lists/*
+```
+
+Not sure why your project depends on Python? `yarn why` is your friend.
+From the error message, we know `bufferutil` couldn't build.
+But why do we have `bufferutil`?
+
+```
+yarn why bufferutil
+└─ websocket@npm:1.0.34
+ └─ bufferutil@npm:4.0.8 (via npm:^4.0.1)
+```
+
+`websocket` needs `bufferutil`. But why do we have `websocket`?
+Keep pulling the thread till you get to a top-level dependency:
+
+```
+yarn why websocket
+└─ @supabase/realtime-js@npm:2.8.4
+ └─ websocket@npm:1.0.34 (via npm:^1.0.34)
+
+yarn why @supabase/realtime-js
+└─ @supabase/supabase-js@npm:2.38.4
+ └─ @supabase/realtime-js@npm:2.8.4 (via npm:^2.8.4)
+
+yarn why @supabase/supabase-js
+├─ api@workspace:api
+│ └─ @supabase/supabase-js@npm:2.38.4 (via npm:^2.21.0)
+│
+└─ web@workspace:web
+ └─ @supabase/supabase-js@npm:2.38.4 (via npm:^2.21.0)
+```
+
+In this case, it looks like it's ultimately because of our auth provider, `@supabase/supabase-js`.
+
+## Using the server file
+
+Sometimes you'll want additional control over the API server, perhaps to add content-type parsers or Fastify plugins.
+
+Refer to our [Server File](server-file.md) docs for details on how to make use of this.
diff --git a/docs/versioned_docs/version-8.4/environment-variables.md b/docs/versioned_docs/version-8.4/environment-variables.md
new file mode 100644
index 000000000000..1e6534bbdc60
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/environment-variables.md
@@ -0,0 +1,152 @@
+---
+description: How to use environment variables on the api and web sides
+---
+
+# Environment Variables
+
+You can provide environment variables to each side of your Redwood app in different ways, depending on each Side's target, and whether you're in development or production.
+
+> Right now, Redwood apps have two fixed Sides, API and Web, that each have a single target, nodejs and browser respectively.
+
+## Generally
+
+Redwood apps use [dotenv](https://github.com/motdotla/dotenv) to load vars from your `.env` file into `process.env`.
+For a reference on dotenv syntax, see the dotenv README's [Rules](https://github.com/motdotla/dotenv#rules) section.
+
+> Technically, we use [dotenv-defaults](https://github.com/mrsteele/dotenv-defaults), which is how we also supply and load `.env.defaults`.
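+
+The precedence this gives you, values in `.env` winning over the same keys in `.env.defaults`, can be sketched in plain shell (an illustration only, not Redwood's actual loading code, and the variable names are made up):
+
+```shell
+# Sketch of dotenv-defaults precedence: .env overrides .env.defaults.
+printf 'GREETING=default-greeting\nSHARED=from-defaults\n' > .env.defaults
+printf 'GREETING=hello\n' > .env
+
+set -a             # export everything we source
+. ./.env.defaults  # load the defaults first...
+. ./.env           # ...then let .env override them
+set +a
+
+echo "$GREETING"  # hello (overridden by .env)
+echo "$SHARED"    # from-defaults (falls back to .env.defaults)
+```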
+
+Redwood also configures Vite so that all references to `process.env` vars on the web side are replaced with the variable's actual value at build time. More on this in [Web](#web).
+
+## Web
+
+### Including environment variables
+
+> **Heads Up:** for the web side to access environment variables in production, you _must_ configure one of the options below.
+>
+> Redwood recommends **Option 1: `redwood.toml`** as it is the most robust.
+
+In production, you can get environment variables to the Web Side either by
+
+1. adding to `redwood.toml` via the `includeEnvironmentVariables` array, or
+2. prefixing with `REDWOOD_ENV_`
+
+Just like for the API Side, you'll also have to set them up with your provider. Some hosting providers distinguish between build and runtime environments for configuring environment variables.
+Environment variables for the web side should in those cases be configured as build-time variables.
+
+#### Option 1: includeEnvironmentVariables in redwood.toml
+
+For Example:
+
+```toml title="redwood.toml"
+[web]
+ includeEnvironmentVariables = ['SECRET_API_KEY', 'ANOTHER_ONE']
+```
+
+Adding environment variables to this array makes them available to the web side in production via `process.env.SECRET_API_KEY`. During the build, Redwood replaces each reference like `process.env.SECRET_API_KEY` with the variable's _actual_ value.
+
+Note: if someone inspects your site's source, _they could see your `SECRET_API_KEY` in plain text._ This is a limitation of delivering static JS and HTML to the browser.
+
+#### Option 2: Prefixing with REDWOOD_ENV\_
+
+In `.env`, if you prefix your environment variables with `REDWOOD_ENV_`, they'll be available via `process.env.REDWOOD_ENV_MY_VAR_NAME` and will be statically replaced at build time.
+
+Like the option above, these are also removed and replaced with the _actual value_ during build in order to be available in production.
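+
+Conceptually, the build performs a textual substitution like the one below (the real mechanism is Vite's replacement machinery, not `sed`; `snippet.js` and the value `hello` are made-up examples):
+
+```shell
+# Sketch of build-time inlining: the process.env reference is replaced with
+# the literal value, so nothing needs to exist at runtime.
+echo 'console.log(process.env.REDWOOD_ENV_MY_VAR_NAME)' > snippet.js
+
+REDWOOD_ENV_MY_VAR_NAME='hello'
+OUT=$(sed "s|process\.env\.REDWOOD_ENV_MY_VAR_NAME|'$REDWOOD_ENV_MY_VAR_NAME'|" snippet.js)
+echo "$OUT"  # console.log('hello')
+```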
+
+### Accessing API URLs
+
+Redwood automatically makes your API URL configurations from the web section of your `redwood.toml` available globally.
+They're accessible via the `window` or `global` objects.
+For example, `global.RWJS_API_GRAPHQL_URL` gives you the URL for your GraphQL endpoint.
+
+The toml values are mapped as follows:
+
+| `redwood.toml` key | Available globally as | Description |
+| ------------------ | ----------------------------- | ---------------------------------------- |
+| `apiUrl` | `global.RWJS_API_URL` | URL or absolute path to your api-server |
+| `apiGraphQLUrl` | `global.RWJS_API_GRAPHQL_URL` | URL or absolute path to GraphQL function |
+
+See the [redwood.toml reference](app-configuration-redwood-toml.md#api-paths) for more details.
+
+## Development Fatal Error Page
+
+```text title=".env"
+REDWOOD_ENV_EDITOR=vscode
+```
+
+Redwood comes with a `FatalErrorPage` that displays helpful information—like the stack trace and the request—when something breaks.
+
+> `FatalErrorPage` isn't bundled when deploying to production
+
+As part of the stack trace, there are links to the original source files so that they can be quickly opened in your editor.
+The page defaults to VSCode, but you can override the editor by setting the environment variable `REDWOOD_ENV_EDITOR`.
+
+## API
+
+### Development
+
+You can access environment variables defined in `.env` and `.env.defaults` as `process.env.VAR_NAME`. For example, if we define the environment variable `HELLO_ENV` in `.env`:
+
+```
+HELLO_ENV=hello world
+```
+
+and make a hello Function (`yarn rw generate function hello`) and reference `HELLO_ENV` in the body of our response:
+
+```jsx {6} title="./api/src/functions/hello.js"
+export const handler = async (event, context) => {
+ return {
+ statusCode: 200,
+ body: `${process.env.HELLO_ENV}`,
+ }
+}
+```
+
+Navigating to http://localhost:8911/hello shows that the Function successfully accesses the environment variable:
+
+![rw-envVars-api](https://user-images.githubusercontent.com/32992335/86520528-47112100-bdfa-11ea-8d7e-1c0d502805b2.png)
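+
+If you want to sanity-check that interpolation without starting the dev server, you can run the handler's body directly in Node (a hypothetical one-off check, not a Redwood command):
+
+```shell
+# Run the same template string the handler returns, with HELLO_ENV set inline.
+BODY=$(HELLO_ENV='hello world' node -e '
+  const handler = async () => ({ statusCode: 200, body: `${process.env.HELLO_ENV}` })
+  handler().then((res) => console.log(res.body))
+')
+echo "$BODY"  # hello world
+```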
+
+### Production
+
+Whichever platform you deploy to, it'll have some way of making environment variables available to the serverless environment where your Functions run. For example, if you deploy to Netlify, you set your environment variables in **Settings** > **Build & Deploy** > **Environment**. You'll just have to read your provider's documentation.
+Some hosting providers distinguish between build and runtime environments for configuring environment variables. Environment variables for the api side should in those cases be configured as runtime variables.
+
+## Keeping Sensitive Information Safe
+
+Since it usually contains sensitive information, you should [never commit your `.env` file](https://github.com/motdotla/dotenv#should-i-commit-my-env-file). Note that you'd actually have to go out of your way to do this as, by default, a Redwood app's `.gitignore` explicitly ignores `.env`:
+
+```plaintext {2}
+.DS_Store
+.env
+.netlify
+dev.db
+dist
+dist-babel
+node_modules
+yarn-error.log
+```
+
+## Where Does Redwood Load My Environment Variables?
+
+For all the variables in your `.env` and `.env.defaults` files to make their way to `process.env`, there has to be a call to `dotenv`'s `config` function somewhere. So where is it?
+
+It's in [the CLI](https://github.com/redwoodjs/redwood/blob/main/packages/cli/src/index.js#L6-L12)—every time you run a `yarn rw` command:
+
+```jsx title="packages/cli/src/index.js"
+import { config } from 'dotenv-defaults'
+
+config({
+ path: path.join(getPaths().base, '.env'),
+ encoding: 'utf8',
+ defaults: path.join(getPaths().base, '.env.defaults'),
+})
+```
+
+Remember, if `yarn rw dev` is already running, your local app won't reflect any changes you make to your `.env` file until you stop and re-run `yarn rw dev`.
diff --git a/docs/versioned_docs/version-8.4/forms.md b/docs/versioned_docs/version-8.4/forms.md
new file mode 100644
index 000000000000..22575b27c69e
--- /dev/null
+++ b/docs/versioned_docs/version-8.4/forms.md
@@ -0,0 +1,549 @@
+---
+description: Redwood makes building forms easier with helper components
+---
+
+# Forms
+
+Redwood provides several helpers to make building forms easier.
+All of Redwood's helpers are simple wrappers around [React Hook Form](https://react-hook-form.com/) (RHF) that make it even easier to use in most cases.
+
+If Redwood's helpers aren't flexible enough for you, you can use React Hook Form directly. `@redwoodjs/forms` exports everything it does:
+
+```jsx
+import {
+ useForm,
+ useFormContext,
+ /**
+ * Or anything else React Hook Form exports!
+ *
+ * @see {@link https://react-hook-form.com/api}
+ */
+} from '@redwoodjs/forms'
+```
+
+## Overview
+
+`@redwoodjs/forms` exports the following components:
+
+| Component | Description |
+| :---------------- | :------------------------------------------------------------------------------------------------------------------------------------------------- |
+| ` |