feat: one cmd development environment setup workflow in nixopus-cli #413

Conversation
Walkthrough

Adds a new Typer "setup" CLI with a DevSetup orchestrator for multi-phase local dev provisioning; enhances preflight checks (including Windows/WSL and typed port results); registers the setup command; updates the API version timestamp, the CLI package version, and a DB password default in the dev config; makes minor package init/whitespace edits.

Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
actor User
participant CLI as CLI (Typer)
participant Setup as DevSetup
participant Pre as PreflightRunner
participant Conflict as ConflictService
participant Docker as Docker Engine
participant Git as Git
participant Proc as Local Processes
User->>CLI: nixopus setup dev [options]
CLI->>Setup: DevSetup(config, logger)
activate Setup
Setup->>Pre: _validate_system()
alt on Windows
Pre->>Pre: check_windows_environment() (docker CLI, docker info, WSL checks)
end
Setup->>Conflict: detect conflicts (optional)
Setup->>Setup: _install_dependencies() (optional/dry-run)
Setup->>Git: clone repo (optional/force/dry-run)
Setup->>Proc: install project deps (Go/Yarn/Poetry)
Setup->>Docker: provision DB/Redis (optional/dry-run)
Setup->>Proc: ensure SSH tools + keys (optional)
Setup->>Proc: update .env files
Setup->>Proc: start API and Frontend services
Setup-->>User: completion info (URLs, creds, workspace)
deactivate Setup
```

```mermaid
sequenceDiagram
autonumber
participant Pre as PreflightRunner
participant Sys as OS/Env
participant DockerCLI as Docker CLI
participant DockerDaemon as Docker Daemon
participant WSL as WSL/WSL2
Pre->>Sys: _have("docker")
alt Docker CLI missing
Pre-->>Pre: raise error (docker not found)
else Docker present
Pre->>DockerCLI: docker info
DockerCLI->>DockerDaemon: probe
alt Daemon unavailable
Pre-->>Pre: raise error (daemon not running)
else Daemon OK
alt Windows
Pre->>WSL: detect WSL/WSL2, get versions (warn/info)
end
end
end
Pre->>Sys: run port checks -> List<PortCheckResult>
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~55 minutes
| GitGuardian id | GitGuardian status | Secret | Commit | Filename | |
|---|---|---|---|---|---|
| 18077893 | Triggered | Generic Password | a761edb | cli/app/commands/setup/dev.py | View secret |
| 14725274 | Triggered | Generic Password | 7960c77 | cli/app/commands/setup/dev.py | View secret |
| 19593524 | Triggered | Generic Password | 0b6a5b4 | helpers/config.dev.yaml | View secret |
🛠 Guidelines to remediate hardcoded secrets
- Understand the implications of revoking this secret by investigating where it is used in your code.
- Replace and store your secrets safely. Learn here the best practices.
- Revoke and rotate these secrets.
- If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.
To avoid such incidents in the future, consider:
- following these best practices for managing and storing secrets, including API keys and other credentials
- installing secret detection on pre-commit to catch secrets before they leave your machine and ease remediation
🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.
Actionable comments posted: 10
🧹 Nitpick comments (13)
cli/app/commands/install/__init__.py (1)
1-1: Remove stray whitespace in empty package file. A lone space can fail linters (e.g., flake8 W291). Make the file truly empty or add a brief comment.
Apply one of:
```diff
- 
+# Package marker for the install command group
```

cli/app/commands/clone/__init__.py (1)
1-1: Trim trailing whitespace. Same as install/__init__.py — avoid whitespace-only content to keep linters happy.

```diff
- 
+# Package marker for the clone command group
```

cli/pyproject.toml (1)
41-44: Align Black target Python with supported runtime. Project supports Python >=3.9, but Black target-version is py38. Align to avoid subtle formatting mismatches.

```diff
 [tool.black]
 line-length = 127
-target-version = ['py38']
+target-version = ['py39']
 include = '\.pyi?$'
```

cli/app/commands/preflight/run.py (3)
1-4: Remove unused import. `Sequence` isn't used; drop it to satisfy flake8 F401.

```diff
-from typing import List, Optional, Sequence
+from typing import List, Optional
```
15-15: Docstring now understates scope. This runner also performs Windows environment checks; update the docstring.

```diff
-class PreflightRunner:
-    """Centralized preflight check runner for port availability"""
+class PreflightRunner:
+    """Centralized preflight checks: OS (Windows/WSL/Docker) and port availability."""
```
102-105: Validate port numbers and deduplicate before checking. Cast-only risks invalid ports (0 or >65535). Validate range and dedupe for clearer errors and fewer checks.

```diff
-    if not isinstance(ports, list):
-        raise Exception("Configured 'ports' must be a list of integers")
-    self.check_required_ports([int(p) for p in ports])
+    if not isinstance(ports, list):
+        raise RuntimeError("Configured 'ports' must be a list of integers")
+    parsed = [int(p) for p in ports]
+    invalid = [p for p in parsed if p < 1 or p > 65535]
+    if invalid:
+        raise RuntimeError(f"Invalid port(s): {sorted(set(invalid))}. Valid range is 1-65535.")
+    unique_ports = sorted(set(parsed))
+    self.check_required_ports(unique_ports)
```

cli/app/commands/setup/command.py (1)
18-18: Consider making the default branch configurable. The hardcoded default branch "feat/develop" appears to be a feature branch rather than a stable development branch. Consider using "main" or "develop" as the default, or making it configurable via the config file.

```diff
-    branch: str = typer.Option("feat/develop", "--branch", "-b", help="Git branch to clone"),
+    branch: str = typer.Option("main", "--branch", "-b", help="Git branch to clone"),
```

cli/app/commands/setup/dev.py (6)
126-126: Remove pyright ignore comment. The `# pyright: ignore` comment on line 126 suppresses type checking. Instead of ignoring the error, properly handle the type-checking issue or add appropriate type hints.

```diff
-    preflight_runner.check_windows_environment()  # pyright: ignore
+    preflight_runner.check_windows_environment()
```
273-274: Remove emoji from error messages for consistency. The error messages use emoji (❌), which is inconsistent with other error messages in the codebase and may not display correctly in all terminals.

```diff
-    self.logger.error(f"❌ Project dependency setup failed: {e}")
+    self.logger.error(f"Project dependency setup failed: {e}")
```
459-466: Security concern: bypassing PowerShell execution policy. The PowerShell commands bypass the execution policy, which could be a problem in enterprise environments where policies are enforced for a reason.

Consider checking whether the current execution policy allows script execution before bypassing:

```diff
 def run_powershell(ps_cmd: str, check: bool = True):
+    # First try without bypassing execution policy
+    try:
+        return subprocess.run(
+            ["powershell", "-NoProfile", "-NonInteractive", "-Command", ps_cmd],
+            check=check,
+            stdout=subprocess.PIPE,
+            stderr=subprocess.PIPE,
+            text=True,
+        )
+    except subprocess.CalledProcessError:
+        # Fall back to bypass if needed
+        pass
     return subprocess.run(
         ["powershell", "-NoProfile", "-NonInteractive", "-ExecutionPolicy", "Bypass", "-Command", ps_cmd],
```
698-701: Security concern: installing Go packages from the internet without verification. The code installs Air directly from GitHub without version pinning or checksum verification, which could be a supply-chain risk.

```diff
-    air_install_cmd = ["go", "install", "github.com/air-verse/air@latest"]
+    # Pin to a specific version for reproducibility and security
+    air_install_cmd = ["go", "install", "github.com/air-verse/air@v1.49.0"]
```
739-740: Add automated admin user creation. The TODO comment indicates that automated admin creation is not implemented. This is a critical feature for a complete setup experience.
Would you like me to implement the automated admin user creation functionality? This would involve making API calls to create the initial admin user after the services are started.
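If it were implemented, a minimal sketch could poll the freshly started API and POST the initial credentials — note that the route and payload fields below are hypothetical illustrations, not the actual nixopus API:

```python
# Hedged sketch: the endpoint path and payload are assumptions, not the real nixopus API.
import json
import time
import urllib.request

def create_admin_user(api_port: int, email: str, password: str, retries: int = 10) -> bool:
    """Best-effort registration call once the API is reachable."""
    url = f"http://localhost:{api_port}/api/v1/auth/register"  # hypothetical route
    body = json.dumps({"email": email, "password": password}).encode()
    req = urllib.request.Request(url, data=body, headers={"Content-Type": "application/json"})
    for _ in range(retries):
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                return 200 <= resp.status < 300
        except Exception:
            time.sleep(2)  # the API may still be booting; retry
    return False
```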
60-60: Avoid modifying private attributes directly. Setting `_yaml_path` directly on the Config object breaks encapsulation. Consider adding a public method to set the configuration file path.

```diff
-    self.nixopus_config._yaml_path = os.path.abspath(self.config.config_file)
+    # Consider adding a public method to Config class:
+    # self.nixopus_config.set_config_file(os.path.abspath(self.config.config_file))
+    self.nixopus_config._yaml_path = os.path.abspath(self.config.config_file)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- api/api/versions.json (1 hunks)
- cli/app/commands/clone/__init__.py (1 hunks)
- cli/app/commands/install/__init__.py (1 hunks)
- cli/app/commands/preflight/run.py (2 hunks)
- cli/app/commands/setup/command.py (1 hunks)
- cli/app/commands/setup/dev.py (1 hunks)
- cli/app/main.py (2 hunks)
- cli/pyproject.toml (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
cli/app/commands/setup/dev.py (8)
cli/app/utils/lib.py (5)
- os (41-42), HostInformation (61-91), DirectoryManager (123-149), package_manager (49-50), get_package_manager (67-78)

cli/app/commands/preflight/run.py (3)
- PreflightRunner (14-104), check_windows_environment (25-74), check_ports_from_config (93-104)

cli/app/commands/conflict/models.py (1)
- ConflictConfig (21-26)

cli/app/commands/conflict/conflict.py (1)
- ConflictService (391-409)

cli/app/commands/install/deps.py (1)
- install_all_deps (108-140)

cli/app/commands/clone/clone.py (1)
- Clone (245-259)

cli/app/utils/config.py (2)
- get_yaml_value (36-46), get_service_env_values (48-50)

cli/app/commands/install/ssh.py (1)
- generate (479-481)
cli/app/commands/setup/command.py (2)
cli/app/utils/timeout.py (1)
- TimeoutWrapper (6-26)

cli/app/commands/setup/dev.py (3)
- DevSetup (51-783), DevSetupConfig (23-48), run (70-114)
cli/app/commands/preflight/run.py (3)
cli/app/utils/config.py (2)
- Config (10-115), get_config_value (89-115)

cli/app/commands/preflight/port.py (3)
- PortConfig (35-51), PortService (134-162), PortCheckResult (27-32)

cli/app/commands/preflight/command.py (2)
- check (42-67), ports (71-109)
🔇 Additional comments (6)
api/api/versions.json (1)
6-6: Release date bump looks fine — confirm immutability policy. If clients assume version metadata is immutable post-release, ensure this update (2025-09-14T22:04:22.317136+05:30) is intentional and reflected in release notes.
cli/pyproject.toml (1)
3-3: Version bump to 0.1.6 acknowledged. Ensure a matching VCS tag and CHANGELOG entry accompany this bump.
cli/app/commands/preflight/run.py (2)
76-82: Good logger fallback for PortService. Supplying a concrete logger ensures consistent formatting and verbosity propagation.
25-75: Windows checks are invoked — original concern incorrect.
Callsite: cli/app/commands/setup/dev.py:126 — preflight_runner.check_windows_environment() is called. Likely an incorrect or invalid review comment.
cli/app/main.py (1)
17-17: LGTM! Clean integration of the setup command. The new setup command is properly imported and registered with the CLI app, following the established pattern for other commands.
Also applies to: 100-100
cli/app/commands/setup/dev.py (1)
722-722: Keep current arg-forwarding — `yarn run dev -- -p <port>` is correct. view/package.json defines scripts.dev as `next dev --turbopack`, so the `--` is required to forward `-p`/`--port` to Next.js; replacing it with `["yarn", "run", "dev", "--port", ...]` would be incorrect. Location: cli/app/commands/setup/dev.py (around line 722). Likely an incorrect or invalid review comment.
```python
# pin g Docker daemon
try:
    # quick daemon ping via 'docker info'
    subprocess.run(["docker", "info"], check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    if self.logger:
        self.logger.success("Docker daemon is running.")
except subprocess.CalledProcessError:
    raise Exception("Docker daemon is not running. Start Docker Desktop and retry.")
```
Add timeouts and tighten errors for subprocess calls.
docker info and wsl -l -v can hang (bad contexts, stalled daemons). Add timeouts and handle TimeoutExpired. Also fix the “pin g” comment typo and prefer RuntimeError over bare Exception.
- # pin g Docker daemon
+ # ping Docker daemon
try:
# quick daemon ping via 'docker info'
- subprocess.run(["docker", "info"], check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
+ subprocess.run(
+ ["docker", "info"],
+ check=True,
+ stdout=subprocess.DEVNULL,
+ stderr=subprocess.DEVNULL,
+ timeout=15,
+ )
if self.logger:
self.logger.success("Docker daemon is running.")
- except subprocess.CalledProcessError:
- raise Exception("Docker daemon is not running. Start Docker Desktop and retry.")
+ except subprocess.TimeoutExpired:
+ raise RuntimeError("Timed out checking Docker daemon (docker info). Verify Docker Desktop is responsive.")
+ except subprocess.CalledProcessError:
+ raise RuntimeError("Docker daemon is not running. Start Docker Desktop and retry.")
@@
- try:
- result = subprocess.run(
+ try:
+ result = subprocess.run(
["wsl", "-l", "-v"],
check=False,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
- )
+ timeout=15,
+ )
output = result.stdout or ""
@@
- except Exception:
+ except subprocess.TimeoutExpired:
+ if self.logger:
+ self.logger.warning("Timed out determining WSL version; proceeding.")
+ except Exception:
if self.logger:
self.logger.warning("Unable to verify WSL version. Proceeding.")Also applies to: 53-59
🤖 Prompt for AI Agents
In cli/app/commands/preflight/run.py around lines 41 to 49 (and also apply same
changes to lines 53 to 59), the code pings Docker with subprocess.run(["docker",
"info"], ...) and currently can hang and raises a bare Exception; fix by
correcting the comment typo ("pin g" -> "ping"), add a reasonable timeout
argument (e.g., timeout=10), catch subprocess.TimeoutExpired and
subprocess.CalledProcessError separately, log or surface a clear error, and
raise RuntimeError with a descriptive message (including timeout vs non-zero
exit) instead of Exception; apply the same timeout/TimeoutExpired handling and
RuntimeError replacement to the wsl subprocess call in the other block.
```diff
-    self, config_key: str = "required_ports", user_config: dict = None, defaults: dict = None
+    self, config_key: str = "required_ports", user_config: Optional[dict] = None, defaults: Optional[dict] = None
 ) -> None:
     """Check ports using configuration values"""
     if user_config is not None and defaults is not None:
         ports = self.config.get_config_value(config_key, user_config, defaults)
     else:
         ports = self.config.get_yaml_value("ports")

     self.check_required_ports(ports)
```
🛠️ Refactor suggestion
Call Windows checks from config-driven path.
Safe to always call; it no-ops on non-Windows. Ensures users running preflight check get OS checks too.
def check_ports_from_config(
self, config_key: str = "required_ports", user_config: Optional[dict] = None, defaults: Optional[dict] = None
) -> None:
"""Check ports using configuration values"""
+ # Run OS-specific checks (no-op on non-Windows)
+ self.check_windows_environment()
if user_config is not None and defaults is not None:
ports = self.config.get_config_value(config_key, user_config, defaults)
else:
ports = self.config.get_yaml_value("ports")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def check_ports_from_config(
    self, config_key: str = "required_ports", user_config: Optional[dict] = None, defaults: Optional[dict] = None
) -> None:
    """Check ports using configuration values"""
    # Run OS-specific checks (no-op on non-Windows)
    self.check_windows_environment()
    if user_config is not None and defaults is not None:
        ports = self.config.get_config_value(config_key, user_config, defaults)
    else:
        ports = self.config.get_yaml_value("ports")
```
🤖 Prompt for AI Agents
In cli/app/commands/preflight/run.py around lines 93 to 101, the config-driven
path that reads ports skips invoking the Windows-specific checks; update the
method so that after resolving ports (both when using user_config/defaults and
when falling back to YAML) it also calls the Windows checks routine (the
existing Windows-specific check method in this class) unconditionally — this
call is safe because the method is a no-op on non-Windows — so place the call
right after ports is set in both branches (or once after the if/else) to ensure
OS checks run for config-driven preflight.
```python
# Output configuration
output: str = typer.Option("text", "--output", "-o", help="Output format (text/json)"),
# Configuration override
config_file: str = typer.Option("../helpers/config.dev.yaml", "--config-file", "-c", help="Configuration file path"),
```
Fix incorrect relative path for config file.
The default config file path uses a relative path that goes up one directory (../helpers/config.dev.yaml). This will be inconsistent depending on where the command is run from and may not exist in production.
```diff
-    config_file: str = typer.Option("../helpers/config.dev.yaml", "--config-file", "-c", help="Configuration file path"),
+    config_file: str = typer.Option("config.dev.yaml", "--config-file", "-c", help="Configuration file path"),
```

📝 Committable suggestion
+ config_file: str = typer.Option("config.dev.yaml", "--config-file", "-c", help="Configuration file path"),📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| config_file: str = typer.Option("../helpers/config.dev.yaml", "--config-file", "-c", help="Configuration file path"), | |
| config_file: str = typer.Option("config.dev.yaml", "--config-file", "-c", help="Configuration file path"), |
🤖 Prompt for AI Agents
In cli/app/commands/setup/command.py around line 42, the default config path
"../helpers/config.dev.yaml" is a fragile relative path; replace it with a
reliably resolved absolute path. Import pathlib.Path (if not already), compute
the default like Path(__file__).resolve().parents[2] / "helpers" /
"config.dev.yaml" (or compute it inside the command) and use str(...) as the
default for the typer.Option, or alternatively accept None and resolve to that
absolute path at runtime; update the Option to use that resolved path instead of
the "../" relative string.
```python
with TimeoutWrapper(timeout):
    dev_setup.run()
```
💡 Verification agent
🧩 Analysis chain
Verify timeout mechanism compatibility across platforms.
The TimeoutWrapper uses Unix signals (SIGALRM) which are not available on Windows. This will cause the timeout functionality to fail on Windows systems.
Based on the timeout implementation, you should add platform-specific handling:
- with TimeoutWrapper(timeout):
- dev_setup.run()
+ if platform.system().lower() == "windows":
+ # TimeoutWrapper uses SIGALRM which doesn't work on Windows
+ # Run without timeout on Windows or implement threading-based timeout
+ dev_setup.run()
+ else:
+ with TimeoutWrapper(timeout):
+ dev_setup.run()Or better yet, update the TimeoutWrapper class to handle Windows using threading.
🏁 Script executed:
#!/bin/bash
# Check if TimeoutWrapper handles Windows compatibility
cat cli/app/utils/timeout.py

Length of output: 750
TimeoutWrapper uses SIGALRM — incompatible with Windows; fix required
- Confirmed: cli/app/utils/timeout.py registers signal.SIGALRM and calls signal.alarm() (SIGALRM unavailable on Windows).
- Action: implement a cross-platform TimeoutWrapper (e.g., threading.Timer or concurrent.futures timeout) and update callers; short-term workaround: guard the caller (cli/app/commands/setup/command.py lines 79–80) with a platform check — example diff:
- with TimeoutWrapper(timeout):
- dev_setup.run()
+ if platform.system().lower() == "windows":
+ # TimeoutWrapper uses SIGALRM which doesn't work on Windows
+ # Run without timeout on Windows or implement threading-based timeout
+ dev_setup.run()
+ else:
+ with TimeoutWrapper(timeout):
+ dev_setup.run()

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
if platform.system().lower() == "windows":
    # TimeoutWrapper uses SIGALRM which doesn't work on Windows
    # Run without timeout on Windows or implement threading-based timeout
    dev_setup.run()
else:
    with TimeoutWrapper(timeout):
        dev_setup.run()
```
🤖 Prompt for AI Agents
In cli/app/commands/setup/command.py around lines 79–80, the current use of
TimeoutWrapper relies on SIGALRM (not available on Windows); change the call to
be cross-platform by detecting the platform and using a thread-based timeout on
Windows (or any non-POSIX) while keeping the existing TimeoutWrapper on POSIX.
Concretely: import sys (or os), check platform (e.g.,
sys.platform.startswith("win") or os.name == "nt"); if Windows, run
dev_setup.run() inside a Thread/ThreadPoolExecutor and enforce the timeout via
thread.join(timeout) or future.result(timeout=...), otherwise keep the existing
with TimeoutWrapper(timeout): dev_setup.run(). Ensure necessary imports are
added and that exceptions/timeouts are handled the same way as the
TimeoutWrapper path.
| self.logger.info("Phase 3: Setting up repository...") | ||
|
|
||
| # Get repository configuration from config.dev.yaml | ||
| repo_url = self.config.repo or self.nixopus_config.get_yaml_value("clone.repo") |
💡 Verification agent
🧩 Analysis chain
Verify repository URL configuration handling.
The code retrieves the repository URL from config, but it's unclear how this handles different repository formats (HTTPS vs SSH) and authentication.
🏁 Script executed:
#!/bin/bash
# Check how the clone configuration handles different repository formats
rg -A 5 -B 5 "def clone\(" cli/app/commands/clone/clone.py

Length of output: 763
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== cli/app/commands/setup/dev.py (lines ~1-260) ==="
sed -n '1,260p' cli/app/commands/setup/dev.py || true
echo
echo "=== cli/app/commands/clone/clone.py (full) ==="
sed -n '1,400p' cli/app/commands/clone/clone.py || true
echo
echo "=== Search for CloneService / CloneConfig / CloneResult definitions ==="
rg -n --hidden --no-ignore -S "class\s+CloneService|class\s+CloneConfig|class\s+CloneResult|def\s+clone\(" || true
echo
echo "=== Search for git invocations and subprocess usage ==="
rg -n --hidden --no-ignore -S "git\s+clone|git clone|subprocess\.run|subprocess\.Popen|git\.Repo|GitPython|GIT_SSH_COMMAND|GIT_ASKPASS|SSH_ASKPASS" || true
echo
echo "=== Search for nixopus_config / get_yaml_value usage/definition ==="
rg -n --hidden --no-ignore -S "nixopus_config|get_yaml_value|class\s+NixopusConfig" || true
echo
echo "=== Search for occurrences of 'clone.repo' in repo ==="
rg -n --hidden --no-ignore -S "clone.repo" || trueLength of output: 38321
Ensure repository cloning works with SSH/HTTPS authentication
- Observation: repo_url is read from config (cli/app/commands/setup/dev.py — `repo_url = self.config.repo or self.nixopus_config.get_yaml_value("clone.repo")`). DevSetup calls _setup_repository before _setup_ssh in run(), so SSH keys are not configured when cloning.
- Observation: Clone accepts SSH/HTTPS formats (cli/app/commands/clone/clone.py — `CloneConfig._is_valid_repo_format`), but GitClone simply runs `git clone` via subprocess (cli/app/commands/clone/clone.py — `GitClone.clone_repository` / `subprocess.run`) and does not inject credentials or an SSH key.
- Action: either run _setup_ssh before _setup_repository, or update the clone flow to support injecting SSH credentials (e.g., set GIT_SSH_COMMAND / SSH_AUTH_SOCK, or use a per-clone sshCommand — see the sketch below), and/or document that credentials must be preconfigured for SSH/HTTPS clones.
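For the credential-injection option, a minimal sketch of passing a per-clone SSH command to the git subprocess — the function name and key path are illustrative, while GIT_SSH_COMMAND itself is standard git behavior:

```python
import os
import subprocess
from pathlib import Path

def clone_with_key(repo_url: str, branch: str, dest: str, private_key: str) -> None:
    """Clone over SSH with an explicit key, without touching global git config."""
    key = Path(private_key).expanduser()
    env = os.environ.copy()
    # Force git to use this key and avoid interactive host-key prompts.
    env["GIT_SSH_COMMAND"] = (
        f"ssh -i {key} -o IdentitiesOnly=yes -o StrictHostKeyChecking=accept-new"
    )
    subprocess.run(["git", "clone", "--branch", branch, repo_url, dest], check=True, env=env)
```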
🤖 Prompt for AI Agents
In cli/app/commands/setup/dev.py around line 195, the repo_url is chosen before
SSH is configured so cloning may fail for SSH/HTTPS repos; either call
self._setup_ssh() before self._setup_repository() in run(), or update the clone
implementation to inject credentials: configure GIT_SSH_COMMAND or SSH_AUTH_SOCK
for the subprocess, or pass a per-clone sshCommand (and ensure env includes SSH
agent socket or the generated private key and proper permissions) and fall back
to HTTPS with embedded credentials only if explicitly provided; make the chosen
approach consistent across run() and GitClone.clone_repository so repository
cloning succeeds when using SSH or authenticated HTTPS.
| subprocess.run(["go", "mod", "tidy"], check=True) | ||
| subprocess.run(["go", "mod", "download"], check=True) |
Add error handling for missing Go toolchain.
The code assumes go command is available but doesn't check for it first. This will fail with an unhelpful error if Go is not installed.
if api_path.exists():
self.logger.info("Setting up Go dependencies...")
+ if not shutil.which("go"):
+ raise Exception("Go is not installed. Please install Go 1.19+ to continue.")
os.chdir(api_path)
subprocess.run(["go", "mod", "tidy"], check=True)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| subprocess.run(["go", "mod", "tidy"], check=True) | |
| subprocess.run(["go", "mod", "download"], check=True) | |
| if not shutil.which("go"): | |
| raise Exception("Go is not installed. Please install Go 1.19+ to continue.") | |
| subprocess.run(["go", "mod", "tidy"], check=True) | |
| subprocess.run(["go", "mod", "download"], check=True) |
🤖 Prompt for AI Agents
In cli/app/commands/setup/dev.py around lines 250-251, the code runs "go mod
tidy" and "go mod download" without verifying the Go toolchain is present; add a
pre-check using shutil.which("go") (or attempt to run "go version" and catch
FileNotFoundError/OSError) and if missing raise a clear, user-facing error and
exit with a non-zero status; otherwise run the subprocess commands inside
try/except to catch CalledProcessError and log the command output for
troubleshooting before exiting.
```python
if view_path.exists():
    self.logger.info("Setting up Yarn dependencies...")
    os.chdir(view_path)
    subprocess.run(["yarn", "install", "--frozen-lockfile"], check=True)
```
Add error handling for missing Yarn.
Similar to the Go issue, the code doesn't verify Yarn is installed before using it.
if view_path.exists():
self.logger.info("Setting up Yarn dependencies...")
+ if not shutil.which("yarn"):
+ raise Exception("Yarn is not installed. Please install Yarn to continue.")
os.chdir(view_path)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| subprocess.run(["yarn", "install", "--frozen-lockfile"], check=True) | |
| if view_path.exists(): | |
| self.logger.info("Setting up Yarn dependencies...") | |
| if not shutil.which("yarn"): | |
| raise Exception("Yarn is not installed. Please install Yarn to continue.") | |
| os.chdir(view_path) | |
| subprocess.run(["yarn", "install", "--frozen-lockfile"], check=True) |
🤖 Prompt for AI Agents
In cli/app/commands/setup/dev.py around line 260, the subprocess.run call
invoking "yarn install --frozen-lockfile" assumes Yarn is present; add a
pre-check for Yarn (e.g., shutil.which("yarn")) and if missing raise or print a
clear error directing the user to install Yarn (or provide an alternate
command), and wrap the subprocess.run in try/except to catch FileNotFoundError
and subprocess.CalledProcessError to surface a helpful message instead of
crashing.
```python
if cli_path.exists() and (cli_path / "pyproject.toml").exists():
    self.logger.info("Setting up Poetry dependencies...")
    os.chdir(cli_path)
    subprocess.run(["poetry", "install"], check=True)
```
Add error handling for missing Poetry.
The code doesn't verify Poetry is installed before using it.
self.logger.info("Setting up Poetry dependencies...")
+ if not shutil.which("poetry"):
+ raise Exception("Poetry is not installed. Please install Poetry to continue.")
os.chdir(cli_path)Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In cli/app/commands/setup/dev.py around line 269, the subprocess.run(["poetry",
"install"], check=True) call assumes Poetry is installed; add a pre-check using
shutil.which("poetry") and if it returns None emit a clear error message and
exit with a non-zero status (or raise a click.Abort) so the user knows to
install Poetry; also wrap the subprocess.run in try/except to catch
FileNotFoundError and subprocess.CalledProcessError and log a helpful message
including the command and error before exiting.
| "DB_NAME": "postgres", | ||
| "USERNAME": "postgres", | ||
| "PASSWORD": "changeme", | ||
| "HOST_NAME": "localhost", | ||
| "SSL_MODE": "disable", |
Security issue: Hardcoded database credentials.
The database credentials are hardcoded as "postgres"/"changeme" which is insecure. These should be generated or configured securely.
"ENV": "development",
"DB_NAME": "postgres",
"USERNAME": "postgres",
- "PASSWORD": "changeme",
+ "PASSWORD": os.environ.get("DB_PASSWORD", self._generate_secure_password()),
"HOST_NAME": "localhost",Consider adding a method to generate secure passwords:
```python
def _generate_secure_password(self) -> str:
    """Generate a secure random password"""
    import secrets
    import string
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(16))
```

🤖 Prompt for AI Agents
In cli/app/commands/setup/dev.py around lines 610 to 614 the DB credentials are
hardcoded ("postgres"/"changeme"); replace this insecure approach by removing
hardcoded USERNAME and PASSWORD and instead generate or obtain a secure password
at runtime (use Python's secrets module to create a 16+ character random
password with letters, digits, and punctuation), or read credentials from
environment variables / a secrets manager / interactive prompt if provided;
ensure the generated password is persisted only to the intended config output
with restrictive file permissions and that any code writing it to disk masks or
limits access, and update any callers to use the new credential source instead
of the hardcoded values.
```python
# Start Air in background
log_file = api_path / "api.log"
with open(log_file, "w") as f:
    subprocess.Popen(["air"], cwd=api_path, stdout=f, stderr=subprocess.STDOUT, start_new_session=True)
```
Process management issue: Orphaned background processes.
The services are started with start_new_session=True which detaches them from the parent process. If the setup fails or is interrupted, these processes will continue running in the background.
Consider tracking the PIDs and providing cleanup:
+ self.service_pids = []
+
# Start Air in background
log_file = api_path / "api.log"
with open(log_file, "w") as f:
- subprocess.Popen(["air"], cwd=api_path, stdout=f, stderr=subprocess.STDOUT, start_new_session=True)
+ proc = subprocess.Popen(["air"], cwd=api_path, stdout=f, stderr=subprocess.STDOUT, start_new_session=True)
+ self.service_pids.append(proc.pid)
+ # Save PID to file for manual cleanup if needed
+ (api_path / "api.pid").write_text(str(proc.pid))Add a cleanup method:
```python
def cleanup_services(self):
    """Stop any running services"""
    for pid in self.service_pids:
        try:
            os.kill(pid, signal.SIGTERM)
        except ProcessLookupError:
            pass
```

🤖 Prompt for AI Agents
In cli/app/commands/setup/dev.py around line 706, services are started with
start_new_session=True which detaches them and can leave orphaned processes;
change the startup to capture the subprocess (or its pid) and append it to a
self.service_pids list, implement a cleanup_services method that iterates
self.service_pids and sends a SIGTERM (os.kill(pid, signal.SIGTERM)) while
swallowing ProcessLookupError, and ensure cleanup_services is invoked on
exceptions, interrupts, or at exit (use try/finally, atexit, or signal handlers)
so background processes are reliably terminated.
Actionable comments posted: 3
♻️ Duplicate comments (4)
cli/app/commands/setup/dev.py (4)
246-271: Tool presence not verified (Go/Yarn/Poetry). Without checks these steps fail with poor UX on missing tools. This was raised earlier.

```diff
 api_path = self.workspace_path / "api"
 if api_path.exists():
     self.logger.info("Setting up Go dependencies...")
+    if not shutil.which("go"):
+        raise Exception("Go is not installed. Please install Go >=1.23.")
     os.chdir(api_path)
     subprocess.run(["go", "mod", "tidy"], check=True)
     subprocess.run(["go", "mod", "download"], check=True)
     self.logger.success("Go dependencies installed")
     os.chdir(self.workspace_path)
@@
 view_path = self.workspace_path / "view"
 if view_path.exists():
     self.logger.info("Setting up Yarn dependencies...")
+    if not shutil.which("yarn"):
+        raise Exception("Yarn is not installed. Please install Yarn.")
     os.chdir(view_path)
     subprocess.run(["yarn", "install", "--frozen-lockfile"], check=True)
     self.logger.success("Yarn dependencies installed")
     os.chdir(self.workspace_path)
@@
 cli_path = self.workspace_path / "cli"
 if cli_path.exists() and (cli_path / "pyproject.toml").exists():
     self.logger.info("Setting up Poetry dependencies...")
+    if not shutil.which("poetry"):
+        raise Exception("Poetry is not installed. Please install Poetry.")
     os.chdir(cli_path)
     subprocess.run(["poetry", "install"], check=True)
     self.logger.success("Poetry dependencies installed")
     os.chdir(self.workspace_path)
```
721-729: Background processes not tracked; can leave orphaned Air/Yarn on failure. Capture PIDs and perform best-effort cleanup on exceptions. This was raised previously.

```diff
 # Start Air in background
 log_file = api_path / "api.log"
 with open(log_file, "w") as f:
-    subprocess.Popen(["air"], cwd=api_path, stdout=f, stderr=subprocess.STDOUT, start_new_session=True)
+    proc = subprocess.Popen(["air"], cwd=api_path, stdout=f, stderr=subprocess.STDOUT, start_new_session=True)
+    getattr(self, "service_pids", []).append(proc.pid)
+    (api_path / "api.pid").write_text(str(proc.pid))
@@
 with open(log_file, "w") as f:
-    subprocess.Popen(
+    proc = subprocess.Popen(
         ["yarn", "run", "dev", "--", "-p", str(self.config.view_port)],
         cwd=view_path,
         stdout=f,
         stderr=subprocess.STDOUT,
         start_new_session=True,
-    )
+    )
+    getattr(self, "service_pids", []).append(proc.pid)
+    (view_path / "view.pid").write_text(str(proc.pid))
```

And add best-effort cleanup on failures:

```diff
 except Exception as e:
     self.logger.error(f"Setup failed at phase: {e}")
     if self.config.verbose:
         self.logger.error(traceback.format_exc())
+    try:
+        self._cleanup_services()
+    except Exception:
+        pass
     raise
```

Add these helper pieces (outside diff range):

```python
# at top-level imports
import signal

# in __init__
self.service_pids: list[int] = []

# new method in class
def _cleanup_services(self):
    for pid in getattr(self, "service_pids", []):
        try:
            os.kill(pid, signal.SIGTERM)
        except Exception:
            pass
```

Also applies to: 741-749, 110-115
83-96: Clone happens before SSH setup — private SSH/HTTPS repos may fail to clone. This sequencing was flagged earlier and still applies. Move SSH setup before repository cloning, or inject credentials into the clone step.
Apply this diff to reorder phases:
```diff
-    # Phase 3: Repository Setup
-    self._setup_repository()
-
-    # Phase 4: Project Dependencies
-    self._setup_project_dependencies()
+    # Phase 3: SSH Configuration (needed for private repos)
+    if not self.config.skip_ssh:
+        self._setup_ssh()
+
+    # Phase 4: Repository Setup
+    self._setup_repository()
+
+    # Phase 5: Project Dependencies
+    self._setup_project_dependencies()

-    # Phase 5: Infrastructure (Database)
-    if not self.config.skip_docker:
-        self._setup_infrastructure()
+    # Phase 6: Infrastructure (Database)
+    if not self.config.skip_docker:
+        self._setup_infrastructure()

-    # Phase 6: SSH Configuration
-    if not self.config.skip_ssh:
-        self._setup_ssh()
+    # (SSH already done above)
```
620-637: API .env writes PASSWORD=changeme — mismatches DB default 'postgres' (and your new YAML change). This will cause auth failures once Postgres enforces passwords. Derive credentials from services.db.env to keep API and DB in sync.

```diff
 if api_env_sample.exists():
     shutil.copy2(api_env_sample, api_env)

 # Update API environment variables
-self._update_env_file(
+# Derive DB creds from YAML to stay consistent
+try:
+    db_env_defaults = self.nixopus_config.get_service_env_values("services.db.env")
+except Exception:
+    db_env_defaults = {}
+pg_user = db_env_defaults.get("POSTGRES_USER", "postgres")
+pg_password = db_env_defaults.get("POSTGRES_PASSWORD", "postgres")
+
+self._update_env_file(
     api_env,
     {
         "PORT": str(self.config.api_port),
         "DB_PORT": str(self.config.db_port),
         "REDIS_URL": f"redis://localhost:{self.config.redis_port}",
         "ALLOWED_ORIGIN": f"http://localhost:{self.config.view_port}",
         "SSH_PRIVATE_KEY": str(Path(self.config.ssh_key_path).expanduser()),
         "SSH_HOST": "localhost",
         "SSH_PORT": "22",
         "SSH_USER": os.environ.get("USER", "admin"),
         "ENV": "development",
         "DB_NAME": "postgres",
-        "USERNAME": "postgres",
-        "PASSWORD": "changeme",
+        "USERNAME": pg_user,
+        "PASSWORD": pg_password,
         "HOST_NAME": "localhost",
         "SSL_MODE": "disable",
     },
 )
```
🧹 Nitpick comments (3)
cli/app/commands/setup/dev.py (3)
59-61: Fragile config path resolution. The default "../helpers/config.dev.yaml" is relative to CWD; resolve relative to this module as a fallback.

```diff
-    self.nixopus_config._yaml_path = os.path.abspath(self.config.config_file)
+    cfg_candidate = Path(self.config.config_file).expanduser()
+    if not cfg_candidate.is_file():
+        repo_root = Path(__file__).resolve().parents[4]  # <repo>/
+        alt = repo_root / "helpers" / "config.dev.yaml"
+        cfg_candidate = alt if alt.is_file() else cfg_candidate
+    self.nixopus_config._yaml_path = str(cfg_candidate.resolve())
```
33-35: SSH key defaults inconsistent with YAML. The CLI defaults to ed25519 at ~/.ssh/id_ed25519_nixopus, while helpers/config.dev.yaml defaults the API env SSH_PRIVATE_KEY to ~/.ssh/id_rsa_nixopus. Prefer a single default (ed25519).
719-724: Air install assumes the Go env (GOBIN/PATH) is set. `go install` places binaries in GOBIN or GOPATH/bin; if that directory is not on PATH, launching "air" will fail. Consider resolving the installed path, using `go run` with Air, or instructing users to add GOBIN/GOPATH/bin to PATH.
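A minimal sketch of resolving the binary instead of assuming PATH — the function name is illustrative, and error handling is kept deliberately small:

```python
import shutil
import subprocess
from pathlib import Path

def resolve_air_binary() -> str:
    """Locate air: PATH first, then GOBIN, then GOPATH/bin."""
    found = shutil.which("air")
    if found:
        return found
    for var in ("GOBIN", "GOPATH"):
        out = subprocess.run(["go", "env", var], capture_output=True, text=True).stdout.strip()
        if not out:
            continue
        candidate = Path(out) / "air" if var == "GOBIN" else Path(out) / "bin" / "air"
        if candidate.exists():
            return str(candidate)
    raise FileNotFoundError("air not found; add GOBIN or GOPATH/bin to PATH")
```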
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- cli/app/commands/setup/dev.py (1 hunks)
- helpers/config.dev.yaml (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
cli/app/commands/setup/dev.py (8)
cli/app/utils/lib.py (6)
- os (41-42), HostInformation (61-91), DirectoryManager (123-149), set_permissions (154-168), package_manager (49-50), get_package_manager (67-78)

cli/app/commands/preflight/run.py (3)
- PreflightRunner (14-104), check_windows_environment (25-74), check_ports_from_config (93-104)

cli/app/commands/conflict/models.py (1)
- ConflictConfig (21-26)

cli/app/commands/conflict/conflict.py (1)
- ConflictService (391-409)

cli/app/commands/install/deps.py (1)
- install_all_deps (108-140)

cli/app/commands/clone/clone.py (1)
- Clone (245-259)

cli/app/utils/config.py (2)
- get_yaml_value (36-46), get_service_env_values (48-50)

cli/app/commands/install/ssh.py (3)
- add_to_authorized_keys (240-283), create_ssh_directory (225-238), generate (479-481)
🔇 Additional comments (1)
cli/app/commands/setup/dev.py (1)
59-61: Minor: store resolved config path as string consistently. You set _yaml_path to an absolute path; ensure it's a string (not Path) everywhere to match Config expectations. The diff above already does this.
```python
repo_url = self.config.repo or self.nixopus_config.get_yaml_value("clone.repo")
branch = self.config.branch
```
Branch is hardcoded; ignores clone.branch in YAML
Use the YAML value when CLI option isn’t provided to avoid stale defaults.
- branch = self.config.branch
+ branch = self.config.branch or self.nixopus_config.get_yaml_value("clone.branch")

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
repo_url = self.config.repo or self.nixopus_config.get_yaml_value("clone.repo")
branch = self.config.branch or self.nixopus_config.get_yaml_value("clone.branch")
```
🤖 Prompt for AI Agents
In cli/app/commands/setup/dev.py around lines 195 to 197, the branch variable is
currently set only from the CLI (self.config.branch) which ignores the YAML
clone.branch value; change the assignment to fall back to the YAML value when
the CLI flag is not provided (e.g., branch = self.config.branch or
self.nixopus_config.get_yaml_value("clone.branch")), ensuring the existing CLI
value still takes precedence and no other behavior is changed.
```python
# Named volumes for persistence (overridable via YAML)
db_volume_name = db_env.get("DB_VOLUME", f"{db_container_name}-data")
redis_volume_name = redis_env.get("REDIS_VOLUME", f"{redis_container_name}-data")

# Ensure volumes exist (best-effort)
for vol_name in [db_volume_name, redis_volume_name]:
    try:
        inspect = subprocess.run(
            ["docker", "volume", "inspect", vol_name],
            check=False,
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if inspect.returncode != 0:
            subprocess.run(["docker", "volume", "create", vol_name], check=True)
    except Exception as e:
        self.logger.warning(f"Could not ensure volume '{vol_name}': {e}")
```
🛠️ Refactor suggestion
DB/Redis “volume” handling conflates named volumes with bind mounts
YAML defaults use ./db and ./redis (paths), but here they are treated as Docker named volumes (docker volume inspect/create), which fails and then mounts relative paths without ensuring directories exist. Fix by detecting path-like values and ensuring directories; only use docker volume create for true named volumes.
# Named volumes for persistence (overridable via YAML)
- db_volume_name = db_env.get("DB_VOLUME", f"{db_container_name}-data")
- redis_volume_name = redis_env.get("REDIS_VOLUME", f"{redis_container_name}-data")
+ db_volume_name = db_env.get("DB_VOLUME", f"{db_container_name}-data")
+ redis_volume_name = redis_env.get("REDIS_VOLUME", f"{redis_container_name}-data")
+
+ def resolve_mount_source(val: str) -> tuple[str, str]:
+ v = str(val)
+ if any(sep in v for sep in ("/", "\\")) or v.startswith("."):
+ host_dir = str(Path(v).expanduser().resolve())
+ os.makedirs(host_dir, exist_ok=True)
+ return "bind", host_dir
+ return "volume", v
+
+ db_src_type, db_src = resolve_mount_source(db_volume_name)
+ redis_src_type, redis_src = resolve_mount_source(redis_volume_name)
@@
- for vol_name in [db_volume_name, redis_volume_name]:
- try:
- inspect = subprocess.run(
- ["docker", "volume", "inspect", vol_name],
- check=False,
- stdout=subprocess.DEVNULL,
- stderr=subprocess.DEVNULL,
- )
- if inspect.returncode != 0:
- subprocess.run(["docker", "volume", "create", vol_name], check=True)
- except Exception as e:
- self.logger.warning(f"Could not ensure volume '{vol_name}': {e}")
+ for vol_name, src_type in [(db_src, db_src_type), (redis_src, redis_src_type)]:
+ if src_type == "volume":
+ try:
+ inspect = subprocess.run(
+ ["docker", "volume", "inspect", vol_name],
+ check=False,
+ stdout=subprocess.DEVNULL,
+ stderr=subprocess.DEVNULL,
+ )
+ if inspect.returncode != 0:
+ subprocess.run(["docker", "volume", "create", vol_name], check=True)
+ except Exception as e:
+ self.logger.warning(f"Could not ensure volume '{vol_name}': {e}")
@@
- "-v",
- f"{db_volume_name}:/var/lib/postgresql/data",
+ "-v",
+ f"{db_src}:/var/lib/postgresql/data",
@@
- "-v",
- f"{redis_volume_name}:/data",
+ "-v",
+ f"{redis_src}:/data",Also applies to: 373-376, 404-407
🤖 Prompt for AI Agents
In cli/app/commands/setup/dev.py around lines 327 to 344 (and likewise apply to
373-376 and 404-407), the code treats any DB/Redis volume value as a Docker
named volume; detect whether the value is a path (e.g., starts with "./" or "/"
or contains os.path.sep or looks like a relative/absolute filesystem path) and
if it is, ensure the host directory exists by creating it (os.makedirs(...,
exist_ok=True)) instead of calling docker volume inspect/create; only call
docker volume inspect/create for true named volumes (no path separators, not
starting with "." or "/"). Ensure error handling/logging remains and do this
change at the three mentioned locations.
```diff
 DB_CONTAINER_NAME: ${DB_CONTAINER_NAME:-nixopus-db}
 POSTGRES_USER: ${USERNAME:-postgres}
-POSTGRES_PASSWORD: ${PASSWORD:-changeme}
+POSTGRES_PASSWORD: ${PASSWORD:-postgres}
```
DB password default switched to 'postgres' — align with API/client configs to prevent auth mismatches
The API .env writer in cli/app/commands/setup/dev.py still sets PASSWORD=changeme, which won’t match this new default and can break DB auth (unless trust mode masks it). Please align both sides (prefer deriving API creds from services.db.env).
🤖 Prompt for AI Agents
helpers/config.dev.yaml around line 50: POSTGRES_PASSWORD default is set to
"postgres" which now mismatches the API .env writer that still sets
PASSWORD=changeme; update the setup so both sides use the same credential source
— either change POSTGRES_PASSWORD back to changeme to match existing API setup,
or preferably modify cli/app/commands/setup/dev.py to derive and write PASSWORD
from services.db.env (services.db.env.POSTGRES_PASSWORD) into the API .env
writer so the API/clients use the DB password defined in services.db.env;
implement the chosen change and ensure tests or local dev setup confirm the envs
match.
Description:
Include development env setup for Linux, Mac and Windows
Summary by CodeRabbit
New Features
Improvements
Chores