
Conversation

@github-actions
Contributor

This is an automated pull request to merge daniel/monorepo-refactor into dev.
It was created by the [Auto Pull Request] action.

@comp-ai-code-review

comp-ai-code-review bot commented Nov 19, 2025

🔒 Comp AI - Security Review

🔴 Risk Level: HIGH

No OSV/CVE findings in dependency scan. Multiple files contain hardcoded credentials and API keys (examples in workflows, README.md, SELF_HOSTING.md, and disabled workflows).


📦 Dependency Vulnerabilities

✅ No known vulnerabilities detected in dependencies.


🛡️ Code Security Analysis

15 files with issues:

🟡 .github/actions/bun-install/action.yml (MEDIUM Risk)

# Issue Risk Level
1 Unpinned actions (buildjet/setup-node@v4, buildjet/cache@v4, oven-sh/setup-bun@v2) MEDIUM
2 bun-version set to 'latest' (mutable release / supply chain risk) MEDIUM
3 Broad cache path '**/node_modules/' may restore malicious or stale content MEDIUM
4 Composite action executes third-party actions with runner privileges MEDIUM

Recommendations:

  1. Pin third‑party actions to immutable references (commit SHAs) instead of tags (e.g., buildjet/setup-node@&lt;commit-sha&gt;).
  2. Replace bun-version: latest with a specific, tested version or SHA (e.g., bun-version: 1.0.0 or pinned release SHA).
  3. Restrict cache paths to deterministic, repo-scoped locations (e.g., ./node_modules or ${{ github.workspace }}/node_modules) and include repo/branch/commit in cache keys (e.g., key: ${{ runner.os }}-bun-nm-cache-${{ github.repository }}-${{ github.ref }}-${{ hashFiles('bun.lockb','package.json') }}). Avoid globbing that can pull node_modules from other directories.
  4. Harden workflow permissions and action provenance: limit GITHUB_TOKEN permissions, use Organizational allowed-actions policy, enable Dependabot/automated action update checks, and review/verify third-party action source code before pinning.
  5. Add cache integrity measures: incorporate hashFiles(...) into keys, and consider validating critical packages via lockfile verification or checksum checks post-restore.
  6. Consider removing or minimizing use of third‑party actions inside composite actions where possible, or vendor minimal trusted code into your repository after review.
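A minimal sketch of recommendations 1–3 applied to the composite action (the repeated-digit SHAs are placeholders — resolve and audit the real commit for each tag before pinning; the bun version shown is illustrative):

```yaml
runs:
  using: composite
  steps:
    - uses: oven-sh/setup-bun@1111111111111111111111111111111111111111 # replace with audited commit SHA for v2
      with:
        bun-version: 1.1.0   # pinned, tested release instead of 'latest'
    - uses: buildjet/cache@2222222222222222222222222222222222222222    # replace with audited commit SHA for v4
      with:
        path: ./node_modules # repo-scoped path, no '**' globbing
        key: ${{ runner.os }}-bun-nm-${{ github.repository }}-${{ github.ref }}-${{ hashFiles('bun.lockb', 'package.json') }}
```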

🔴 .github/actions/dangerous-git-checkout/action.yml (HIGH Risk)

# Issue Risk Level
1 Checkout of untrusted PR head allows arbitrary code from forks HIGH
2 Ref set to github.event.pull_request.head.sha (attacker-controlled) HIGH
3 No safeguards shown to prevent secret exfiltration by fork code HIGH

Recommendations:

  1. Do not check out arbitrary fork HEADs with workflows that have access to secrets. Prefer running untrusted PR code with event: pull_request and ensure no secrets/credentials are available to those runs.
  2. If you must check out PR code, set actions/checkout persist-credentials: false to avoid writing GITHUB_TOKEN into the checked-out repo and reduce risk of git credential use.
  3. Explicitly limit workflow permissions (top-level permissions block) to the least privilege required (e.g., permissions: contents: read or none). Do not rely on default GITHUB_TOKEN permissions.
  4. Remove or do not expose repository secrets or other credentials in steps that execute untrusted code. Do not pass secrets into steps that run code from forks.
  5. Require maintainer approval before running workflows that execute untrusted code, or run such checks only after a maintainer merges or manually triggers a workflow (e.g., workflow_dispatch / protected merge).
  6. Run checks that execute untrusted code on isolated runners that have no access to sensitive secrets or external systems.
  7. Consider alternative approaches: require PR authors to run tests in their fork, use a gated merge process (run full CI on merge commit in the main repository), or use dedicated merge queues so code that runs with secrets has been reviewed/merged first.
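If checking out the PR head truly cannot be avoided, a hedged sketch combining recommendations 2 and 3 (job and step names are illustrative):

```yaml
permissions:
  contents: read   # least privilege; no write scopes, no secret-bearing steps

jobs:
  untrusted-checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # pin to a commit SHA in practice
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          persist-credentials: false   # don't write GITHUB_TOKEN into .git/config
```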

🟡 .github/workflows/auto-pr-to-release.yml (MEDIUM Risk)

# Issue Risk Level
1 continue-on-error masks failures; PR can be created despite failed checkout or version step MEDIUM
2 repo-sync/pull-request@v2 is unpinned; supply-chain can alter action behavior MEDIUM
3 GITHUB_TOKEN has write to contents/pull-requests/issues (excessive scope) MEDIUM

Recommendations:

  1. Remove continue-on-error for critical steps (checkout, version extraction, PR creation). Let the job fail on real errors so conditional steps (if: ${{ success() }}) behave correctly. If you need to allow nonblocking steps, isolate them and explicitly check their outcomes before creating the PR.
  2. Pin third-party actions to a full commit SHA (e.g., repo-sync/pull-request@&lt;full-commit-sha&gt;) instead of a floating tag to prevent supply-chain tampering.
  3. Apply least-privilege permissions to GITHUB_TOKEN: only grant the exact permissions required (e.g., pull-requests: write if only creating PRs; remove contents: write/issues: write unless strictly necessary).
  4. Migrate from deprecated ::set-output to the GITHUB_OUTPUT file and validate the parsed package.json version (e.g., ensure it matches semver regex) before using it in workflow logic.
  5. Enforce branch protection rules and require manual approvals for merges from automated PRs into release branches to reduce risk of accidental or malicious releases.
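A sketch of recommendation 4 — reading the version with jq (preinstalled on GitHub-hosted runners), validating it against a semver pattern, and writing to GITHUB_OUTPUT instead of the deprecated ::set-output:

```yaml
- name: Read package version
  id: pkg
  shell: bash
  run: |
    version=$(jq -r .version package.json)
    # Validate before trusting the value in later workflow logic.
    if [[ ! "$version" =~ ^[0-9]+\.[0-9]+\.[0-9]+ ]]; then
      echo "Invalid version: $version" >&2
      exit 1
    fi
    # GITHUB_OUTPUT replaces the deprecated ::set-output command.
    echo "version=$version" >> "$GITHUB_OUTPUT"
```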

🟡 .github/workflows/database-migrations-main.yml (MEDIUM Risk)

# Issue Risk Level
1 Automatic migration on main after push (can apply harmful migrations) MEDIUM
2 workflow_dispatch allows manual triggers that run migrations MEDIUM
3 DATABASE_URL_DEV secret used in workflow may be exposed in logs MEDIUM
4 Runner 'warp-ubuntu-latest-arm64-4x' appears self-hosted; secrets risk MEDIUM
5 oven-sh/setup-bun@v1 and actions/checkout@v4 not pinned to commit MEDIUM
6 bun-version: latest uses floating version (supply-chain risk) MEDIUM
7 pnpx prisma migrate deploy may fetch remote packages at runtime MEDIUM
8 Migrations auto-run on merge lets merged PRs execute DB changes MEDIUM
9 No approvals/branch protections outlined to prevent malicious merges MEDIUM

Recommendations:

  1. Require branch protection rules and enforce required PR reviews/approvals before merges to main.
  2. Remove automated migrations from post-merge workflows or gate them behind an environment that requires manual approval (GitHub Environments) or a safe promotion step.
  3. Avoid running destructive operations automatically on main; run migrations in a dedicated, isolated deployment pipeline with explicit approvals.
  4. Pin GitHub Actions to immutable SHAs (or at least exact minor/patch versions) instead of tags like v1 to reduce supply-chain risk.
  5. Avoid floating tool versions (e.g., bun-version: latest). Pin to a specific, audited version and update via controlled PRs.
  6. Do not rely on pnpx to fetch CLIs at runtime; install the required CLI tools as part of CI (from a lockfile/artifact) so the workflow does not pull arbitrary remote code during execution.
  7. Treat secrets as sensitive on self-hosted runners: either use GitHub-hosted runners for workflows that need secrets or harden/isolate the self-hosted runner, rotate credentials frequently, and restrict runner admin access.
  8. Ensure workflows and commands do not print sensitive env vars; enable GitHub Actions secret masking and audit logs for accidental leakage.
  9. Use a least-privilege database user for CI/dev migrations (limit schema privileges) and rotate DATABASE_URL credentials regularly.
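A sketch of recommendations 2 and 6 together — gating the migration behind a protected GitHub Environment and installing the Prisma CLI from the lockfile rather than fetching it with pnpx (the environment name is an assumption; pin the checkout action to a SHA in practice):

```yaml
jobs:
  migrate:
    runs-on: ubuntu-latest
    environment: dev-db   # configure required reviewers on this environment
    steps:
      - uses: actions/checkout@v4            # pin to a commit SHA in practice
      - run: bun install --frozen-lockfile   # CLI comes from the lockfile, not pnpx
      - run: ./node_modules/.bin/prisma migrate deploy
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL_DEV }}
```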

🔴 .github/workflows/database-migrations-release.yml (HIGH Risk)

# Issue Risk Level
1 Third-party actions referenced by tags, not pinned to commit SHAs HIGH
2 bun-version set to 'latest' (unpinned) HIGH
3 DATABASE_URL_PROD exposed to workflow steps and actions HIGH
4 Secrets accessible to third-party actions; possible exfiltration HIGH
5 Workflow runs on custom/self-hosted runner; secrets risk if compromised HIGH
6 No environment protections/required approvals for prod secret use HIGH
7 Automated migrations run against production DB without approval HIGH
8 Potential secret leakage in workflow logs if output printed HIGH

Recommendations:

  1. Pin third-party actions to specific commit SHAs (not tags) so code run is immutable and auditable.
  2. Pin bun-version to a specific tested release (e.g., '1.0.0') instead of 'latest' to avoid unexpected changes.
  3. Move DATABASE_URL_PROD into a GitHub Environment configured for production with required reviewers/approvals before allowing access to secrets.
  4. Limit the scope of the secret: expose DATABASE_URL_PROD only to the single migration step (use 'env' at the step level) and avoid passing secrets to third-party actions whenever possible.
  5. Use a least-privilege database user for migrations (no broad admin rights) and rotate credentials regularly; consider using ephemeral credentials/secrets where supported.
  6. Require manual approval or an explicit protected workflow run (e.g., GitHub Environments with required reviewers) before running production migrations.
  7. Harden or avoid self-hosted runners for production-sensitive workflows; if using them, ensure the runner is fully secured, isolated, and monitored, or use GitHub-hosted runners.
  8. Suppress/inspect command output to avoid logging secrets (set commands not to echo sensitive env vars, use --log-level flags, and add steps to redact any accidental exposure).
  9. Add auditing/alerting around workflow runs that use production secrets and log all runs that perform migrations for accountability.
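Recommendation 4 in sketch form — the production secret is set with a step-level `env` block, so it is not visible to other steps or to third-party actions running earlier or later in the job:

```yaml
steps:
  - name: Run production migrations
    run: ./node_modules/.bin/prisma migrate deploy
    env:
      # Scoped to this single step only.
      DATABASE_URL: ${{ secrets.DATABASE_URL_PROD }}
```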

🔴 .github/workflows/trigger-tasks-deploy-main.yml (HIGH Risk)

# Issue Risk Level
1 Self-hosted runner may expose secrets if compromised HIGH
2 Secrets may be leaked via --log-level debug output HIGH
3 Supply-chain risk: pnpx and bun install fetch remote packages HIGH
4 Install lifecycle scripts can run arbitrary commands during bun install HIGH

Recommendations:

  1. Use GitHub-hosted runners or strongly harden/isolate self-hosted runners (ephemeral runner instances, strict network egress controls, dedicated machine, minimal OS, frequent rebuilds).
  2. Remove or avoid --log-level debug in CI, or ensure the invoked CLI will never print secret values. Prefer explicit redaction and validate that the tool's debug output is safe before enabling in CI.
  3. Avoid on-the-fly remote installs with pnpx. Pin the CLI version and verify integrity (use a vendored binary, checksum-verified release artifact, or store the CLI in your org's package registry).
  4. Run dependency installs with script execution disabled where possible and enforce lockfile verification. For example: use --ignore-scripts by default, use frozen/verified lockfiles, use an offline/cached installer, and verify package checksums (supply-chain protections).
  5. Limit and scope secrets (least privilege), require environment protection rules/approvals for secrets and deployments, rotate tokens regularly, and restrict token scopes (use OIDC where possible instead of long-lived secrets).
  6. Log hygiene: ensure secrets are marked as GitHub secrets, review third‑party tools for patterns that might bypass GitHub masking, and avoid printing full environment contents in CI logs.
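Recommendation 4 can be sketched as a single install step — both flags exist in bun's installer and together they block lifecycle-script execution and silent lockfile drift:

```yaml
- name: Install dependencies (no lifecycle scripts, lockfile enforced)
  run: bun install --frozen-lockfile --ignore-scripts
```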

🟡 .github/workflows/trigger-tasks-deploy-release.yml (MEDIUM Risk)

# Issue Risk Level
1 Third-party action oven-sh/setup-bun — supply-chain risk MEDIUM
2 bun install runs install scripts (first run without --ignore-scripts) MEDIUM
3 pnpx trigger.dev@4.0.6 executes remote npm package code MEDIUM
4 Non-standard runner 'warp-ubuntu-latest-arm64-4x' likely self-hosted MEDIUM
5 Secrets set as env can leak if steps echo or logs expose them MEDIUM

Recommendations:

  1. Pin and audit third-party actions more strictly: prefer commit SHA pins for actions (not just major/minor tags) and review the oven-sh/setup-bun repo before use. Consider vendoring or replacing with a maintained official action.
  2. Avoid executing package install scripts from untrusted code. Run bun install with --ignore-scripts in CI, or run the first install in an isolated sandbox/build container. Alternatively, vendor dependencies or run a verified build pipeline that inspects postinstall scripts.
  3. Avoid pnpx remote execution. Install the trigger.dev CLI as a dependency in the repo (or download a verified binary) and run the local binary (node_modules/.bin/trigger.dev) or pin the exact package version and checksum you run. If pnpx must be used, verify integrity (install from a lockfile or use an approved package proxy).
  4. If this is a self-hosted runner (warp-ubuntu-latest-arm64-4x), harden and isolate it: restrict network access, enable least-privilege job execution, rotate credentials, keep OS/packages patched, and limit which repos can use the runner. Prefer GitHub-hosted runners for untrusted CI workloads.
  5. Limit exposure of secrets: avoid exporting secrets as wide-step env where not necessary; scope them to the minimum step; do not print or log env vars; enable GitHub Actions secret masking and consider runtime secret injection mechanisms. Rotate secrets if they may have been exposed.
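A hedged sketch of recommendation 3 — assuming trigger.dev is declared and version-pinned in package.json so the lockfile governs exactly what runs, the local binary replaces the pnpx fetch (the secret name and deploy subcommand are illustrative; confirm against the CLI version you pin):

```yaml
- run: bun install --frozen-lockfile
- run: ./node_modules/.bin/trigger.dev deploy
  env:
    TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
```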

🔴 .github/workflows_disabled/e2e-tests.yml (HIGH Risk)

# Issue Risk Level
1 Hardcoded DB credentials in DATABASE_URL (postgres:postgres) HIGH
2 Hardcoded auth secrets (NEXTAUTH_SECRET, AUTH_SECRET) HIGH
3 Hardcoded API keys/tokens in env (UPSTASH_REDIS_REST_TOKEN, RESEND_API_KEY, TRIGGER_SECRET_KEY) HIGH
4 Workflow permissions include actions: write and pull-requests: write (overly broad) HIGH
5 Third-party actions used without pinned SHAs (supply chain risk) HIGH
6 Cache includes node_modules and home dirs — may leak secrets or enable cache poisoning HIGH
7 Postgres service maps host port 5432:5432 exposing DB to host/network HIGH
8 Server logs are tailed and printed — may expose secrets in CI logs HIGH
9 Running pnpx scripts and builds executes repo deps — supply chain execution risk HIGH
10 Workflow triggers on pull_request/workflow_dispatch — forks/untrusted code may run with tokens HIGH

Recommendations:

  1. Remove all hardcoded secrets from the workflow. Move credentials, auth secrets, API keys and tokens into GitHub Secrets (or environment-specific secret stores) and reference them via secrets.*. Example: DATABASE_URL: ${{ secrets.DATABASE_URL }}.
  2. Avoid using plaintext secrets even for test values. If test-only values are required, generate ephemeral/test credentials at runtime or store them in repository-protected secrets with minimal scope and rotate regularly.
  3. Minimize workflow permissions to least privilege. Only grant the exact permissions needed (e.g., contents: read). Remove actions: write and pull-requests: write unless strictly required and document why they are needed.
  4. Pin all third-party actions to immutable commit SHAs instead of relying on tag names (e.g., actions/checkout@&lt;commit-sha&gt;). Prefer GitHub-verified actions and review action source before pinning.
  5. Limit caches to safe, project-specific directories. Do not cache or restore whole home directories or node_modules across untrusted runs. Use package-manager specific caches (e.g., bun/npm cache) and include integrity keys in cache keys.
  6. Remove host port mapping for the Postgres service (avoid '5432:5432') so the DB is only available to the job container network. If host access is needed, restrict binding to localhost and document the rationale.
  7. Avoid printing full server logs or any outputs that may contain secrets to workflow logs. Only output minimal, sanitized logs and only on failure. Use secret redaction and never echo environment variables that contain secrets.
  8. Treat pnpx/npm install and build steps as execution of third-party code. Pin dependencies in lockfiles, run audits (npm audit/retirejs), and consider using tools like npm ci / --frozen-lockfile to avoid arbitrary changes. Consider running builds in isolated runners or containers with limited network access if supply chain risk is a concern.
  9. Protect workflows from untrusted fork contributions: do not expose secrets to workflows triggered by forked PRs. Use workflow conditions (e.g., restrict steps that use secrets to runs where context.repo == context.payload.pull_request.head.repo) or use pull_request_target with caution (and avoid using secrets there).
  10. Sanitize any test artifacts and avoid including secrets in test-output, screenshots or traces. If artifacts are uploaded, ensure they cannot leak secrets and set appropriate retention policies.
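Recommendation 9's fork guard can be sketched as a step-level condition — the step (and its secret) is skipped entirely when the PR head lives in a different repository (the step name and command are illustrative):

```yaml
- name: Run e2e suite (needs secrets)
  # Only run when the PR head is in this repository, never for forks.
  if: github.event.pull_request.head.repo.full_name == github.repository
  env:
    RESEND_API_KEY: ${{ secrets.RESEND_API_KEY }}
  run: bun run e2e
```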

🟡 .github/workflows_disabled/quick-tests.yml (MEDIUM Risk)

# Issue Risk Level
1 Hardcoded DB credentials in DATABASE_URL env MEDIUM
2 Hardcoded NEXTAUTH_SECRET and AUTH_SECRET in workflow MEDIUM
3 Hardcoded API keys/tokens (UPSTASH, RESEND, TRIGGER_SECRET) MEDIUM
4 Postgres container maps host port 5432 exposing DB on runner host MEDIUM
5 Using mutable bun-version: latest (supply-chain risk) MEDIUM

Recommendations:

  1. Move all secrets to GitHub Secrets and reference them in the workflow (e.g., DATABASE_URL: ${{ secrets.DATABASE_URL }}, NEXTAUTH_SECRET: ${{ secrets.NEXTAUTH_SECRET }}, AUTH_SECRET: ${{ secrets.AUTH_SECRET }}, UPSTASH_REDIS_REST_TOKEN: ${{ secrets.UPSTASH_TOKEN }}, RESEND_API_KEY: ${{ secrets.RESEND_API_KEY }}, TRIGGER_SECRET_KEY: ${{ secrets.TRIGGER_SECRET_KEY }}). Do not store secret values directly in workflow files.
  2. Avoid mapping container ports to the runner host. Remove the 'ports: - 5432:5432' entry for the postgres service; GitHub Actions services are reachable from the job via localhost without host port mapping. Only map ports if absolutely required and you understand the exposure.
  3. Pin action/tool versions and avoid mutable tags for runtime tools. Replace bun-version: latest with a specific, audited version (e.g., bun-version: 1.0.0) and pin actions to a full commit SHA or well-known stable tag where possible.
  4. Ensure secrets are not printed in logs. Use actions that mask secrets and avoid echoing environment variables. When debugging, prefer ephemeral local runs with non-production data rather than committing secrets to CI.
  5. If running on self-hosted runners, restrict runner network access and use isolated ephemeral runners for CI jobs that build or run services. Consider GitHub-hosted runners for better isolation if self-hosted runners are currently used.
  6. Audit CI-only dummy credentials to ensure they cannot access production resources. Use separate test-only resources and credentials with limited privileges.
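A sketch of recommendations 1 and 2 combined — note that dropping the `ports:` mapping works cleanly when the job itself runs in a container, since job and service then share a Docker network and the service is reachable by hostname (image tags and secret names are illustrative):

```yaml
jobs:
  tests:
    runs-on: ubuntu-latest
    container: node:20   # containerized job: no host port mapping needed
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: ${{ secrets.CI_DB_PASSWORD }}
        # no 'ports:' entry — reachable at host 'postgres', port 5432
    steps:
      - run: bun run test
        env:
          DATABASE_URL: postgresql://postgres:${{ secrets.CI_DB_PASSWORD }}@postgres:5432/postgres
```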

🔴 .github/workflows_disabled/test-quick.yml (HIGH Risk)

# Issue Risk Level
1 Hardcoded secrets: AUTH_SECRET, RESEND_API_KEY, UPSTASH_REDIS_REST_TOKEN, TRIGGER_SECRET_KEY HIGH
2 Hardcoded DB URL: postgresql://dummy:dummy@localhost:5432/dummy HIGH
3 Plain env values in workflow can leak in logs (not stored as secrets) HIGH
4 Runs PR code on custom runner warp-ubuntu-latest-arm64-4x (self-hosted risk) HIGH
5 Installing deps on PRs may run malicious package postinstall scripts HIGH
6 Cache restores node_modules via WarpBuilds/cache (third-party) risk HIGH
7 Using bun-version: latest (unpinned) risks supply-chain changes HIGH
8 SKIP_ENV_VALIDATION=true bypasses env validation for build HIGH
9 Typecheck and Lint use '|| true', masking failures HIGH

Recommendations:

  1. Move all real secrets to GitHub Secrets and reference them via ${{ secrets.NAME }} instead of hardcoding. Replace CI-only dummy values with secrets or guarded test values that cannot be used outside CI.
  2. Do not commit database connection strings or credentials to workflow files. Use secrets for DATABASE_URL or use ephemeral/test-only providers that cannot access real data.
  3. Avoid placing sensitive values directly in workflow env blocks — they can appear in logs and are not masked. Use secrets and restrict log output that might echo env values.
  4. Do not run untrusted PR code on self-hosted runners or restrict what those runners can access. Consider using GitHub-hosted runners for PRs, or isolate self-hosted runners and avoid supplying secrets to PR runs.
  5. Avoid running package installs with full script execution on untrusted PRs. For PRs from forks, use --ignore-scripts, use lockfile-only installs, or build in an isolated environment (or use pnpm/bun flags that disable lifecycle scripts).
  6. Pin third-party actions to specific versions or SHAs (avoid @v1 without SHA) and review third-party cache/action code. Where possible, use official GitHub cache or well-audited actions and pin them.
  7. Pin tool versions (do not use 'latest' for oven-sh/setup-bun). Use an explicit version or SHA to avoid unexpected changes from upstream.
  8. Remove SKIP_ENV_VALIDATION in CI or ensure a safe validation strategy is in place. Bypassing env validation can cause builds to succeed with insecure/missing config.
  9. Fail CI on lint/typecheck by removing '|| true' so problems are visible and acted on. Mask and handle known non-blocking issues explicitly rather than silencing failures.
  10. Limit workflow permissions to the least privilege required and consider additional safeguards (e.g., required reviewers, protected branches) to reduce impact of malicious PRs.

🟡 .github/workflows_disabled/unit-tests.yml (MEDIUM Risk)

# Issue Risk Level
1 Hardcoded AUTH_SECRET value in workflow env MEDIUM

Recommendations:

  1. Remove the hardcoded AUTH_SECRET from the workflow and reference a repository/organization secret instead: use env: AUTH_SECRET: ${{ secrets.AUTH_SECRET }} and set AUTH_SECRET in GitHub Secrets.
  2. Do not expose secrets to forked pull requests. Avoid using pull_request_target where possible; if you must, ensure secrets are not available to untrusted code. Add checks (e.g., only run jobs that need secrets when the PR branch is in the same repo) or use conditional steps that skip secret-using steps for external PRs.
  3. Rotate any secret values that were committed to the repository history. Revoke the old secret and create a new one. If the value was committed historically, remove it from history using git-filter-repo or BFG and force-push, then rotate the credential.
  4. Restrict workflow permissions to least privilege and limit which workflows can access secrets. Use environment protection rules (environments) to require approvals for secrets if needed.
  5. Prevent secrets from being logged: do not echo them to logs or write them out to artifacts. Mark steps that use secrets carefully and avoid debugging prints.
  6. Pin third-party actions to a specific commit SHA (or at least an exact released version) instead of loose tags to reduce supply-chain risk.
  7. If this workflow file is intended only for local/testing and not used in CI (it appears under .github/workflows_disabled), remove sensitive values from it before committing or move such files out of the repository.

🔴 CHANGELOG.md (HIGH Risk)

# Issue Risk Level
1 Auth bypass: auto-approve org creation on staging HIGH
2 Missing validation: questionnaire file uploads to S3 HIGH
3 Unverified executable download for Windows device agent HIGH
4 AI auto-answering may expose sensitive data from uploads HIGH

Recommendations:

  1. Auto-approve org creation on staging: remove or restrict auto-approve logic. Require authenticated/authorized approval or confine auto-approve to isolated test environments with no access to production data. Add audit logging, alerts, and rate-limiting for org creation events.
  2. S3 file uploads (questionnaire): validate file uploads server-side — enforce MIME/type whitelists, file size limits, filename sanitization, and content checks. Integrate virus/malware scanning (e.g., ClamAV / managed scanning). Use presigned PUT URLs with short TTLs and restrict S3 bucket policies to least privilege (no public ACLs).
  3. Executable downloads (Windows agent): serve installers via signed URLs, host integrity metadata (SHA256 checksums) and cryptographic signatures, and verify signatures on download. Ensure S3 objects used for agent downloads are private and require authenticated, time-limited access. Add telemetry/audit for downloads and consider code-signing the binaries.
  4. AI auto-answering and file parsing: apply strict input-sanitization and PII detection/redaction before sending content to AI services. Limit which uploaded files are eligible for AI parsing, require explicit user consent, and use isolated/private models where possible. Enforce least-privilege access to uploaded files, log AI requests, and implement retention/expiry policies for sensitive data used in ML pipelines.

🟡 Dockerfile (MEDIUM Risk)

# Issue Risk Level
1 Image runs as root (no USER set) MEDIUM
2 Build copies workspace files, may include secrets MEDIUM
3 Build-time ARG/ENV may bake secrets into final image MEDIUM
4 Using pnpx in CMD may execute packages at container start MEDIUM
5 Copying local migrations into node_modules/@trycompai/db MEDIUM
6 No .dockerignore shown; extraneous files may be included MEDIUM

Recommendations:

  1. Run the runtime image as a non-root user: create a dedicated user and add a USER &lt;user&gt;:&lt;group&gt; instruction in the final stages (app, portal, migrator), and ensure file permissions are adjusted during the build.
  2. Do not COPY whole workspace or secret files into images. Add a .dockerignore to exclude .env, .secrets, .git, local DB files, docs, tests, and other sensitive artifacts.
  3. Avoid baking secrets into the image via ARG/ENV. Use runtime secret mechanisms (Docker secrets, Kubernetes Secrets, or a secrets manager) or bind-mount secrets at runtime.
  4. Replace pnpx in CMD with a pinned, pre-installed binary or script included at build-time, or run migrations via an init container/CI step that has explicit dependencies — avoid invoking package runners that can fetch/execute packages at container start.
  5. Do not mutate node_modules of published packages as part of the image build. Handle migrations explicitly (e.g., include migrations in a dedicated migrations artifact, use a dedicated migrations container, or run Prisma CLI against the schema/migrations separate from node_modules modification).
  6. Use a lockfile and an SBOM; pin dependency versions and verify integrity. Run dependency audits and consider reproducible installs (e.g., bun.lock is present — ensure it is used and committed).
  7. Minimize runtime surface: remove build tools and unnecessary files from final images (multi-stage builds already used — ensure only required files are copied to final stages).
  8. Consider hardening the base images (use minimal base images, apply updates) and enabling non-root restrictive kernel capabilities where applicable.
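Recommendations 1 and 2 sketched as a final stage (stage names and paths are assumptions about this Dockerfile; pair it with a .dockerignore that excludes .env, .git, tests, and docs):

```dockerfile
FROM node:20-slim AS app
# Create an unprivileged user/group before copying files owned by it.
RUN addgroup --system app && adduser --system --ingroup app app
WORKDIR /app
# Copy only built artifacts from the build stage, owned by the non-root user.
COPY --from=build --chown=app:app /app/dist ./dist
USER app
CMD ["node", "dist/server.js"]
```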

🔴 README.md (HIGH Risk)

# Issue Risk Level
1 Encourages hardcoding AUTH_SECRET and REVALIDATION_SECRET in source HIGH
2 Instructions to hard-code GOOGLE_ID/GOOGLE_SECRET in auth.ts HIGH
3 Instructions to hard-code Redis URL and TOKEN in packages/kv/src/index.ts HIGH
4 Trigger.dev project ID hardcoded in trigger.config.ts HIGH
5 Default DB credentials (postgres/postgres) exposed in docs HIGH
6 psql command shows DB password in connection string, can leak to process/shell HIGH
7 Guidance increases risk of committing secrets to repository HIGH
8 Example .env usage encourages storing secrets in plaintext HIGH

Recommendations:

  1. Remove any guidance to 'hard-code' secrets in source files. Always load secrets from environment variables or a secrets manager in code, and fail fast if required env vars are missing.
  2. Replace hardcoded example values in README with clearly marked placeholders (e.g., <GOOGLE_ID>, <GOOGLE_SECRET>) and demonstrate secure loading from env vars.
  3. Use a managed secrets solution (HashiCorp Vault, AWS Secrets Manager, GCP Secret Manager, Azure Key Vault) or platform-provided secret storage (Vercel/Render/Heroku secrets) for production credentials and API keys.
  4. Ensure all .env files are listed in .gitignore and that .env.example contains only non-sensitive placeholders. Add explicit README guidance: never commit .env with real secrets.
  5. Remove default DB credentials from documentation or clearly label them as insecure defaults for local dev only; recommend creating a dedicated unprivileged DB user for local dev and explicitly instruct how to change defaults.
  6. Avoid passing plaintext passwords on the command line. For psql, recommend using .pgpass, PGPASSFILE, environment variables (PGPASSWORD only in controlled contexts), or interactive entry, and show examples that do not expose passwords in process lists.
  7. Eliminate examples that instruct hardcoding third-party project IDs/API keys (Trigger.dev project ID, Upstash tokens). Instead show how to set these as env vars and reference them in the config.
  8. Add pre-commit secret scanning (git-secrets, pre-commit-hooks, trufflehog, detect-secrets) and CI secret-detection checks to block accidental commits of secrets. Provide remediation steps to rotate any secrets that may have been exposed.
  9. Document secure local-development workflows (use docker-compose with secrets, use a local secrets file stored outside the repo, or use dev-only secret stores) and emphasize rotation and least privilege for any credentials used.
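Recommendation 6 in sketch form — all host/user/password values below are hypothetical placeholders. The .pgpass file (format `hostname:port:database:username:password`) keeps the password out of the command line and process list; psql refuses the file unless it is owner-readable only:

```
# ~/.pgpass — one line per connection; psql reads it automatically
echo 'localhost:5432:mydb:myuser:mypassword' > ~/.pgpass
chmod 600 ~/.pgpass
# No password in the connection string or process list:
psql -h localhost -U myuser -d mydb
```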

🟡 SELF_HOSTING.md (MEDIUM Risk)

# Issue Risk Level
1 Example DATABASE_URL includes plaintext creds (postgresql://user:pass@host...) MEDIUM
2 .env.example encourages storing secrets in plain .env files (risk of git commits) MEDIUM
3 BETTER_AUTH_URL examples use HTTP not HTTPS (tokens/transports unencrypted) MEDIUM
4 Build args may leak config/secrets during image build (build-time exposure) MEDIUM
5 Healthchecks call HTTP endpoints without auth (information disclosure) MEDIUM
6 Ports 3000/3002 are host-mapped, exposing services to network/internet MEDIUM
7 RESEND_API_KEY reused across services increases blast radius if leaked MEDIUM
8 NEXT_PUBLIC_* and other envs can leak config/keys to client-side code MEDIUM
9 Optional AWS/Redis envs may be set and exposed if accidentally client-accessible MEDIUM
10 TRIGGER_SECRET_KEY and other secrets in env_file may appear in logs or backups MEDIUM
11 Seeder/migrator commands use env values; seed logs could leak secrets MEDIUM

Recommendations:

  1. Use a secrets manager (HashiCorp Vault, AWS Secrets Manager, GCP Secret Manager, Azure Key Vault) or Docker/Kubernetes secrets for production secrets instead of committing .env files.
  2. Keep .env.example free of real credentials. Never check in actual .env files; add .env to .gitignore and enforce pre-commit checks to prevent accidental commits.
  3. Require HTTPS for public endpoints. Update examples and documentation to show HTTPS for production (and enforce TLS at the reverse proxy/load balancer).
  4. Do not pass sensitive values as Docker build args. Inject secrets at runtime via secrets mechanisms (docker secrets, K8s secrets, or env from secret stores).
  5. Limit healthcheck exposure: run healthchecks against localhost from the container, or protect health endpoints (IP allowlist, basic auth, ephemeral tokens) and avoid exposing sensitive internal endpoints publicly.
  6. Avoid binding service ports directly to all host interfaces in production. Prefer binding to localhost, using an internal network, or put services behind an authenticated reverse proxy/ingress with firewall rules.
  7. Use distinct per-service API keys/credentials (least privilege) and rotate keys regularly. Do not reuse the same RESEND_API_KEY across services if separate keys can be issued.
  8. Never place secrets in NEXT_PUBLIC_* vars or any client-exposed env. Move sensitive values to server-side only code and expose only necessary non-sensitive configuration to clients.
  9. Audit and restrict any optional storage/third-party credentials (AWS, Redis). Ensure such keys are only provided to server-side environments and not exposed to client code or logs.
  10. Protect .env and other configuration files from being backed up to public or shared backups. Limit access to logs and configure logging levels to avoid printing full envs or connection strings.
  11. When running migrator/seeder tasks, ensure logging does not print secrets (redact DB URLs, API keys). Run sensitive migrations/seeding in controlled environments and with least-privilege DB user accounts.
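A docker-compose sketch combining recommendations 3, 5, and 6 (service name, image tag, and health endpoint are illustrative; the healthcheck assumes curl exists in the image):

```yaml
services:
  app:
    image: mycompany/app:1.2.3
    ports:
      - "127.0.0.1:3000:3000"   # bound to localhost, not 3000:3000 on all interfaces
    healthcheck:
      # Runs inside the container — the endpoint is never exposed publicly.
      test: ["CMD", "curl", "-fsS", "http://localhost:3000/api/health"]
      interval: 30s
    secrets:
      - app_env   # injected at runtime, not baked in via build args

secrets:
  app_env:
    file: /opt/secrets/app.env   # stored outside the repository
```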

💡 Recommendations

View 3 recommendation(s)
  1. Remove the plaintext credentials seen in repository files (notably: README.md, SELF_HOSTING.md, and files under .github/workflows_disabled such as e2e-tests.yml, quick-tests.yml, test-quick.yml, unit-tests.yml). Replace literal secrets in code with environment-variable reads (e.g., process.env.NEXTAUTH_SECRET) and add runtime checks that fail fast if required values are missing.
  2. Purge committed secrets from git history and replace the repo contents so the secrets are not present in any commit; after removing, rotate any credentials that were exposed.
  3. Sanitize docs and examples: replace real/example credentials (postgres/postgres, hardcoded AUTH_SECRET/NEXTAUTH_SECRET, GOOGLE_ID/GOOGLE_SECRET, Trigger/Upstash/Resend tokens, etc.) with clearly labeled placeholders (e.g. <GOOGLE_SECRET>) and ensure example code reads values from environment variables rather than embedding them.
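Recommendation 1 (reading secrets from environment variables with fail-fast checks) can be sketched as follows. This is a minimal illustration; the variable name is an example from the docs above, not this repo's actual configuration:

```typescript
// Hedged sketch: read a required secret from the environment and fail fast
// if it is missing, instead of embedding a literal value in code or docs.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Simulate a configured environment for the demo only.
process.env.NEXTAUTH_SECRET = "example-only";
const nextAuthSecret = requireEnv("NEXTAUTH_SECRET");
```

Calling `requireEnv` for an unset variable throws immediately at startup, which is the fail-fast behavior the recommendation describes.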

Powered by Comp AI - AI that handles compliance for you. Reviewed Nov 20, 2025

@vercel

vercel bot commented Nov 19, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| app | Error | Error | | Nov 20, 2025 11:50am |
| portal | Error | Error | | Nov 20, 2025 11:50am |

@Itsnotaka
Contributor

This branch represents a significant modernization of our monorepo infrastructure, shifting us towards a faster, leaner, and more strictly defined development environment.

Here is the updated explanation for your team, highlighting the caching improvements:

  1. Simplified Build Pipeline & Reliable Caching (tsdown)

We have replaced tsup with tsdown across our packages.

The Change: tsdown (powered by rolldown) replaces complex configs and "scattered" artifacts with a standardized output structure.
The Win: Turborepo can now cache builds properly. Because the output is consistent (clean dist/ folders) rather than fragmented (dist/mjs, d.js, etc.), Turbo's hash detection works reliably.
DX: We've deleted complex tsup.config.ts files. For packages like packages/ui, we point exports directly to ./src/index.ts. This means no more "watch mode" for the UI package—changes are picked up instantly by the app.
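As a hedged illustration of pointing exports directly at source (the package name and paths are assumptions for the example, not copied from the repo), a package like packages/ui might declare:

```json
{
  "name": "@repo/ui",
  "exports": {
    ".": "./src/index.ts"
  }
}
```

Because consumers resolve straight to the TypeScript source, the consuming app's bundler compiles it on the fly and the package needs no build or watch step of its own.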

  2. Strict Dependency Management (Bun Isolated Mode)

We have configured Bun to use isolated installs (default in Bun 1.3+) via bunfig.toml.

Why: Previously, dependencies were "hoisted" to the root, allowing packages to accidentally access libraries they didn't explicitly declare.
The Win: This eliminates "phantom dependency" bugs and immediately exposes version discrepancies and incorrect peer dependencies.
If package A and package B expect different versions of a library, or if a peer dependency is missing, the build will now fail fast locally rather than causing subtle runtime errors in production.
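A minimal bunfig.toml enabling this might look like the sketch below (the `linker` setting is assumed from Bun's install documentation; verify against the Bun version in use):

```toml
[install]
# Use isolated, non-hoisted installs so packages can only resolve
# dependencies they explicitly declare.
linker = "isolated"
```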

  3. "Always-Fresh" Workspace Dependencies

We are now using the workspace:* protocol for our internal packages (e.g., in packages/email depending on packages/ui).

Why: This tells the package manager to "use whatever version is currently in the folder."
The Win: No more version mismatch errors or manual version bumping for internal packages. We are always running against the live code in the repo.
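For example, the email package's manifest might reference the UI package like this (package names are illustrative assumptions):

```json
{
  "name": "@repo/email",
  "dependencies": {
    "@repo/ui": "workspace:*"
  }
}
```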

  4. Prisma 6 & TypeScript-Based Engine

We upgraded to Prisma 6 and switched the generator provider to prisma-client (replacing prisma-client-js).

Why: This enables Prisma's new TypeScript-based engine (Query Compiler), moving away from the heavy Rust binary sidecar.
The Win: This significantly reduces the deployment bundle size (no more platform-specific binaries), improves cold start times in serverless/edge environments, and aligns the database layer closer to our runtime environment.
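The generator switch would look roughly like this in schema.prisma (a sketch: the output path is an assumption, and the new `prisma-client` generator requires one to be set):

```prisma
generator client {
  provider = "prisma-client"
  output   = "../src/generated/prisma"
}
```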

@@ -97,32 +109,36 @@
// Not a URL - treat as S3 key
// Security: Ensure it's not a malformed URL attempting to bypass validation
const lowerInput = url.toLowerCase();
- if (lowerInput.includes('://') || lowerInput.includes('amazonaws.com')) {
-   throw new Error('Invalid input: Malformed URL detected');
+ if (lowerInput.includes("://") || lowerInput.includes("amazonaws.com")) {

Check failure

Code scanning / CodeQL

Incomplete URL substring sanitization High

'amazonaws.com' can be anywhere in the URL, and arbitrary hosts may come before or after it.

Copilot Autofix

AI about 22 hours ago

To address the issue, replace the substring check lowerInput.includes("amazonaws.com") with a proper host/URL parsing check. For non-URL inputs, ensure the value is treated strictly as an S3 key, without attempting to catch URLs by substring.
Specifically:

  • Remove the substring check for "amazonaws.com" from the code.
  • Rely solely on the existence of the URL object (parsedUrl). If parsing fails, treat the input as an S3 key as intended.
  • Alternatively, consider using a stricter regex or try another parsing attempt to reject malformed URLs, but in most typical S3 key usage, just ensure path traversal and empty keys are rejected.

Edit only the block in extractS3KeyFromUrl.
No additional dependencies are needed for this fix.
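To see why the flagged substring check is insufficient, consider a URL whose path merely contains the string. This standalone demo is independent of the repo's code:

```typescript
// A hostile URL can place "amazonaws.com" anywhere, e.g. in the path.
const malicious = "https://evil.example/amazonaws.com/some-key";

// The substring check matches here, even though the host is not S3...
const substringMatch = malicious.toLowerCase().includes("amazonaws.com");

// ...while actual URL parsing reveals the attacker-controlled host.
const host = new URL(malicious).hostname;
```

Only the parsed hostname tells you who the request would actually reach, which is why the autofix drops the substring check in favor of URL parsing.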


Suggested changeset 1
apps/app/src/app/s3.ts

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/apps/app/src/app/s3.ts b/apps/app/src/app/s3.ts
--- a/apps/app/src/app/s3.ts
+++ b/apps/app/src/app/s3.ts
@@ -109,7 +109,7 @@
   // Not a URL - treat as S3 key
   // Security: Ensure it's not a malformed URL attempting to bypass validation
   const lowerInput = url.toLowerCase();
-  if (lowerInput.includes("://") || lowerInput.includes("amazonaws.com")) {
+  if (lowerInput.includes("://")) {
     throw new Error("Invalid input: Malformed URL detected");
   }
 
EOF

Copilot is powered by AI and may make mistakes. Always verify output.
}
// For backward compatibility, hash without salt
- return createHash('sha256').update(apiKey).digest('hex');
+ return createHash("sha256").update(apiKey).digest("hex");

Check failure

Code scanning / CodeQL

Use of password hash with insufficient computational effort High

Password from a call to generateApiKey is hashed insecurely.
Password from a call to get is hashed insecurely.
Password from an access to apiKey is hashed insecurely. (reported at 8 separate accesses)

Copilot Autofix

AI about 22 hours ago

The insecure use of sha256 for API key hashing should be replaced with a secure, computationally expensive hash (such as bcrypt). The best fix is to use bcrypt's hashSync (for sync use) or hash (for async use), along with the generated salt, to properly hash API keys. This will significantly slow down brute-force attacks in case of data compromise. The change involves:

  • Importing bcryptjs (or bcrypt). bcryptjs is pure JavaScript and often easier to use in serverless/Next.js environments.
  • Modifying the hashApiKey function to use bcrypt when a salt is provided. For backward compatibility, the legacy sha256 hash can be retained as before if no salt is provided.
  • Generating salts with bcrypt (genSaltSync or genSalt) instead of making your own salt.
  • Ensure that generated API keys (random and long) continue as usual.
  • Required changes: Update legacy logic for generating and verifying hashes, import bcryptjs, update salt generation for new API keys, and use bcrypt in the hashing function.

All edits occur in apps/app/src/lib/api-key.ts (and only touched areas per instructions).
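If adding bcryptjs is undesirable, Node's built-in scrypt provides a computationally expensive, memory-hard hash without a new dependency. This is an alternative sketch under that assumption, not the autofix's approach:

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Sketch: derive a 64-byte hash of an API key with scrypt (node:crypto).
function hashApiKeyScrypt(apiKey: string, salt: Buffer): Buffer {
  // scrypt's default cost parameters apply; tune N/r/p for your threat model.
  return scryptSync(apiKey, salt, 64);
}

const salt = randomBytes(16);
const stored = hashApiKeyScrypt("sk_live_example", salt);

// Verification recomputes the hash and compares in constant time.
const candidate = hashApiKeyScrypt("sk_live_example", salt);
const matches = timingSafeEqual(stored, candidate);
```

As with the bcrypt patch, the salt must be stored alongside the hash, and any legacy unsalted sha256 hashes should be migrated or rotated.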


Suggested changeset 2
apps/app/src/lib/api-key.ts

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/apps/app/src/lib/api-key.ts b/apps/app/src/lib/api-key.ts
--- a/apps/app/src/lib/api-key.ts
+++ b/apps/app/src/lib/api-key.ts
@@ -1,4 +1,5 @@
 import { createHash, randomBytes } from "node:crypto";
+import bcrypt from "bcryptjs";
 import type { NextRequest } from "next/server";
 import { NextResponse } from "next/server";
 
@@ -20,7 +21,8 @@
  * @returns A random salt string
  */
 export function generateSalt(): string {
-  return randomBytes(16).toString("hex");
+  // Generate a bcrypt salt with 12 rounds (recommended for modern systems)
+  return bcrypt.genSaltSync(12);
 }
 
 /**
@@ -31,12 +33,10 @@
  */
 export function hashApiKey(apiKey: string, salt?: string): string {
   if (salt) {
-    // If salt is provided, use it for hashing
-    return createHash("sha256")
-      .update(apiKey + salt)
-      .digest("hex");
+    // Use bcrypt to securely hash the API key with the provided salt
+    return bcrypt.hashSync(apiKey, salt);
   }
-  // For backward compatibility, hash without salt
+  // For backward compatibility, hash without salt using the old method
   return createHash("sha256").update(apiKey).digest("hex");
 }
 
EOF

apps/app/package.json
Outside changed files

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/apps/app/package.json b/apps/app/package.json
--- a/apps/app/package.json
+++ b/apps/app/package.json
@@ -141,7 +141,8 @@
     "xml2js": "^0.6.2",
     "zaraz-ts": "^1.2.0",
     "zod": "^4.1.12",
-    "zustand": "^5.0.8"
+    "zustand": "^5.0.8",
+    "bcryptjs": "^3.0.3"
   },
   "devDependencies": {
     "@playwright/experimental-ct-react": "^1.56.1",
EOF
This fix introduces these dependencies:

| Package | Version | Security advisories |
| --- | --- | --- |
| bcryptjs (npm) | 3.0.3 | None |
@@ -1,11 +1,11 @@
export const logger = {
info: (message: string, params?: unknown) => {
- console.log(`[INFO] ${message}`, params || '');
+ console.log(`[INFO] ${message}`, params || "");

Check failure

Code scanning / CodeQL

Use of externally-controlled format string High

Format string depends on a user-provided value.

Copilot Autofix

AI about 22 hours ago

To fix this vulnerability in apps/app/src/utils/logger.ts, specifically in the logger's info, warn, and error methods, avoid passing user-controlled data directly within the first console.log/console.warn/console.error argument, which results in it being interpreted as a format string. Instead, always use a format placeholder (%s) for the interpolated/user-controlled portion and pass the interpolated/user-controlled data as a separate argument.

Specifically, change lines such as:

console.log(`[INFO] ${message}`, params || "");

to:

console.log("[INFO] %s", message, params || "");

Apply the same pattern for warn and error. This ensures any % sequences in message will not be interpreted as format placeholders by console.log and avoids format string confusion. No new imports are needed for this fix.
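Since node:util's format mirrors console.log's formatting behavior, the difference can be demonstrated directly in a standalone snippet:

```typescript
import { format } from "node:util";

// Unsafe: user-controlled text lands inside the format string, so any "%s"
// it contains is treated as a placeholder and consumes the params argument.
const userMessage = "progress 100%s complete";
const unsafe = format(`[INFO] ${userMessage}`, "params");
// → "[INFO] progress 100params complete"

// Safe: the format string is a constant; user text is passed as data.
const safe = format("[INFO] %s", userMessage, "params");
// → "[INFO] progress 100%s complete params"
```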


Suggested changeset 1
apps/app/src/utils/logger.ts

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/apps/app/src/utils/logger.ts b/apps/app/src/utils/logger.ts
--- a/apps/app/src/utils/logger.ts
+++ b/apps/app/src/utils/logger.ts
@@ -1,11 +1,11 @@
 export const logger = {
   info: (message: string, params?: unknown) => {
-    console.log(`[INFO] ${message}`, params || "");
+    console.log("[INFO] %s", message, params || "");
   },
   warn: (message: string, params?: unknown) => {
-    console.warn(`[WARN] ${message}`, params || "");
+    console.warn("[WARN] %s", message, params || "");
   },
   error: (message: string, params?: unknown) => {
-    console.error(`[ERROR] ${message}`, params || "");
+    console.error("[ERROR] %s", message, params || "");
   },
 };
EOF
@@ -94,21 +107,21 @@
// Not a URL - treat as S3 key
// Security: Ensure it's not a malformed URL attempting to bypass validation
const lowerInput = url.toLowerCase();
- if (lowerInput.includes('://') || lowerInput.includes('amazonaws.com')) {
-   throw new Error('Invalid input: Malformed URL detected');
+ if (lowerInput.includes("://") || lowerInput.includes("amazonaws.com")) {

Check failure

Code scanning / CodeQL

Incomplete URL substring sanitization High

'amazonaws.com' can be anywhere in the URL, and arbitrary hosts may come before or after it.

Copilot Autofix

AI about 22 hours ago

To fix the problem, avoid using a substring check for "amazonaws.com" and instead, robustly determine if an input string represents a URL by attempting to parse it. If it can be parsed as a URL (even without a valid S3 host), it should be rejected from the S3 key path. Lines 109-112 should be updated to: attempt to parse the string as a URL, and if the result is a valid URL object (regardless of host), reject the input as "malformed". This way, any string that is a valid URL (even if the host is not "amazonaws.com") cannot masquerade as an S3 key. Use the built-in URL constructor to do this check, which is also used above in the function.

No external dependencies are needed, as the built-in URL API suffices. Only the code block on lines 109-112 in apps/portal/src/utils/s3.ts should be changed.


Suggested changeset 1
apps/portal/src/utils/s3.ts

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/apps/portal/src/utils/s3.ts b/apps/portal/src/utils/s3.ts
--- a/apps/portal/src/utils/s3.ts
+++ b/apps/portal/src/utils/s3.ts
@@ -106,9 +106,12 @@
 
   // Not a URL - treat as S3 key
   // Security: Ensure it's not a malformed URL attempting to bypass validation
-  const lowerInput = url.toLowerCase();
-  if (lowerInput.includes("://") || lowerInput.includes("amazonaws.com")) {
+  // If input parses as a URL, treat as malformed (should be S3 key only)
+  try {
+    new URL(url);
     throw new Error("Invalid input: Malformed URL detected");
+  } catch {
+    // Not a URL, continue as S3 key
   }
 
   // Security: Check for path traversal
EOF
@@ -110,7 +117,7 @@
<BreadcrumbPage>{item.label}</BreadcrumbPage>
) : (
<BreadcrumbLink asChild>
- <Link href={item.href || '#'}>{item.label}</Link>
+ <Link href={item.href || "#"}>{item.label}</Link>

Check warning

Code scanning / CodeQL

Client-side URL redirect Medium

Untrusted URL redirection depends on a user-provided value.

Copilot Autofix

AI about 22 hours ago

The best way to fix this problem is to ensure that the href values in breadcrumbs are safe: only allowing navigation to internal routes (relative URLs) and never to external sites. This can be achieved by validating the href value before passing it to the <Link> component. If the href does not start with a safe value ("/") or matches an accepted pattern, it should be replaced with a safe default ("#" or another internal route). For all breadcrumb links, before rendering each <Link>, check that item.href is either undefined, empty, or a safe relative path. You can create a helper function (e.g., isSafeInternalLink(href)) to encapsulate the logic and use it consistently for breadcrumb items, dropdown items, and hidden items.

This change should be made in apps/app/src/components/pages/PageWithBreadcrumb.tsx, in all places where a breadcrumb href is rendered (item.href and similar), by wrapping the values with the safety check helper.


Suggested changeset 1
apps/app/src/components/pages/PageWithBreadcrumb.tsx

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/apps/app/src/components/pages/PageWithBreadcrumb.tsx b/apps/app/src/components/pages/PageWithBreadcrumb.tsx
--- a/apps/app/src/components/pages/PageWithBreadcrumb.tsx
+++ b/apps/app/src/components/pages/PageWithBreadcrumb.tsx
@@ -20,6 +20,15 @@
 
 import PageCore from "./PageCore.tsx";
 
+
+function isSafeInternalLink(href?: string): string {
+  if (!href || typeof href !== "string") return "#";
+  // Accept only relative paths starting with "/" and not containing "//"
+  if (/^\/(?!\/)/.test(href)) return href;
+  // Optionally, further restrict to e.g. `/[orgId]/...` if you wish
+  return "#";
+}
+
 interface BreadcrumbDropdownItem {
   label: string;
   href: string;
@@ -104,7 +113,7 @@
                         >
                           {item.dropdown.map((dropdownItem) => (
                             <DropdownMenuItem key={dropdownItem.href} asChild>
-                              <Link href={dropdownItem.href}>
+                              <Link href={isSafeInternalLink(dropdownItem.href)}>
                                 {dropdownItem.label.length > maxLabelLength
                                   ? `${dropdownItem.label.slice(0, maxLabelLength)}...`
                                   : dropdownItem.label}
@@ -117,7 +126,7 @@
                       <BreadcrumbPage>{item.label}</BreadcrumbPage>
                     ) : (
                       <BreadcrumbLink asChild>
-                        <Link href={item.href || "#"}>{item.label}</Link>
+                        <Link href={isSafeInternalLink(item.href)}>{item.label}</Link>
                       </BreadcrumbLink>
                     )}
                   </BreadcrumbItem>
@@ -132,7 +141,7 @@
                           <DropdownMenuContent align="start">
                             {hiddenItems.map((hiddenItem) => (
                               <DropdownMenuItem key={hiddenItem.label} asChild>
-                                <Link href={hiddenItem.href || "#"}>
+                                <Link href={isSafeInternalLink(hiddenItem.href)}>
                                   {hiddenItem.label}
                                 </Link>
                               </DropdownMenuItem>
EOF