feat: merge install scripts, improve theme handling, and enhance container UI #328
Conversation
…t of the post build process while commiting
Walkthrough

Refactors the installer by removing scripts/install-cli.sh and rewriting scripts/install.sh as a self-contained cross-platform installer, adds CLI packaging metadata in package.json, introduces container protection checks across start/stop/remove, adds UI gating for protected containers, updates theme dialog preview behavior, enables shallow git clone by default, adjusts the CLI build via PyInstaller/Docker, and tweaks workflows, ignore files, and versions.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as Frontend
    participant API as Container Controller
    participant Docker as Docker Service
    UI->>API: Start/Stop/Remove(containerID)
    API->>API: isProtectedContainer(containerID, action)
    alt Protected
        API-->>UI: Response(status=skipped)
    else Not protected
        API->>Docker: Perform action
        Docker-->>API: Result
        API-->>UI: Success/Failure
    end
```

```mermaid
sequenceDiagram
    participant User
    participant Dialog as Theme Dialog
    participant Theme as Theme Manager
    User->>Dialog: Open
    Dialog->>Theme: Snapshot current theme (startTheme)
    User->>Dialog: Navigate (mouse/keyboard)
    Dialog->>Theme: Preview hovered/selected theme
    User->>Dialog: Mouse leaves list
    Dialog->>Theme: Revert to startTheme
    User->>Dialog: Close
    Dialog->>Theme: Revert to last committed theme
```

```mermaid
sequenceDiagram
    participant Sh as install.sh
    participant GH as GitHub (package.json)
    participant Sys as System
    Sh->>GH: Fetch cli-version & packages
    Sh->>Sh: Detect OS/Arch & pkg type
    Sh->>GH: Build & verify package URL
    Sh->>Sys: Download & install (dpkg/dnf/apk/tar)
    Sh->>Sys: Verify nixopus in PATH
```
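The installer flow above resolves a package name from the detected OS and architecture. A rough sketch of that resolution step, in Python rather than the shell used by install.sh — the mapping tables and function name below are illustrative assumptions, not the script's actual code:

```python
# Hypothetical mapping tables illustrating the detection step;
# the real install.sh derives these from uname and /etc/os-release.
ARCH_ALIASES = {"x86_64": "amd64", "amd64": "amd64", "aarch64": "arm64", "arm64": "arm64"}
PKG_TYPE_BY_OS = {"debian": "deb", "ubuntu": "deb", "fedora": "rpm", "alpine": "apk"}


def resolve_package_name(os_id: str, machine: str, version: str) -> str:
    """Pick a package file name like those listed under cli-packages."""
    arch = ARCH_ALIASES.get(machine)
    pkg_type = PKG_TYPE_BY_OS.get(os_id)
    if arch is None or pkg_type is None:
        return "nixopus.tar"  # generic tarball fallback
    if pkg_type == "rpm":
        # rpm names keep the raw machine string (x86_64/aarch64)
        return f"nixopus-{version}-1.{machine}.rpm"
    return f"nixopus_{version}_{arch}.{pkg_type}"


print(resolve_package_name("debian", "x86_64", "0.1.0"))  # nixopus_0.1.0_amd64.deb
```

The resolved name is then matched against the cli-packages list before the download URL is built, which is what lets the script fail early on unsupported platforms.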
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
…ixopus into feature/improvements-alpha11
Actionable comments posted: 2
🔭 Outside diff range comments (2)
.github/workflows/add-sponsors.yml (1)
21-21: Harden the package installation step

On GitHub-hosted runners, `apt-get install` can intermittently fail without a preceding `apt-get update`. Recommend chaining an update for reliability.

Apply:

```diff
-      - name: Setup environment
-        run: sudo apt-get install -y jq
+      - name: Setup environment
+        run: sudo apt-get update && sudo apt-get install -y jq
```

view/components/ui/theme-toggler.tsx (1)
117-117: CSS property value is incorrect

The `background` style property has the string literal `'themeName'` instead of the actual theme value. Fix the background style to use the theme variable:

```diff
-style={{ background: 'themeName' }}
+style={{ background: themeName }}
```

Or, if you intended to use a computed value from the theme colors:

```diff
-style={{ background: 'themeName' }}
+style={{ backgroundColor: `hsl(${themeColors[themeName]?.muted || themeColors.light.muted})` }}
```
🧹 Nitpick comments (11)
view/lib/utils.ts (1)
26-29: Keep FE/BE detection logic in sync; consider centralizing the marker

The check mirrors the backend’s `strings.Contains(name, "nixopus")`. To reduce drift, consider defining a local constant for the marker and reusing it across the UI. This makes future changes less error-prone.

Apply this scoped refactor:

```diff
-export function isNixopusContainer(name: unknown): boolean {
-  if (typeof name !== 'string') return false;
-  return name.toLowerCase().includes('nixopus');
-}
+const PROTECTED_MARKER = 'nixopus';
+
+export function isNixopusContainer(name: unknown): boolean {
+  if (typeof name !== 'string') return false;
+  return name.toLowerCase().includes(PROTECTED_MARKER);
+}
```

.github/workflows/add-sponsors.yml (1)
37-37: Trim trailing spaces to satisfy linters

Yamllint flagged trailing spaces on this line. Remove them to keep CI clean.

Apply:

```diff
-          echo "$SPONSORS_JSON" | jq -r '.data.user.sponsorshipsAsMaintainer.nodes[] | 
+          echo "$SPONSORS_JSON" | jq -r '.data.user.sponsorshipsAsMaintainer.nodes[] |
```

api/internal/features/container/controller/protected.go (2)
13-15: Don’t silently swallow container lookup errors

If GetContainerById fails, the guard returns “not skipped” and higher layers proceed. This can inadvertently allow operations on protected containers when metadata isn’t retrievable. At minimum, log the error; optionally choose a safer default for destructive actions.

Suggested improvement:

```diff
-	details, err := c.dockerService.GetContainerById(containerID)
-	if err != nil {
-		return nil, false
-	}
+	details, err := c.dockerService.GetContainerById(containerID)
+	if err != nil {
+		c.logger.Log(logger.Error, fmt.Sprintf("Failed to fetch container details for %s; proceeding without protection check", action), err)
+		return nil, false
+	}
```

If you prefer a safer default for remove/stop (to avoid touching protected containers on lookup failures), we can return a “skipped” response when the action is destructive. Want me to propose that variant?
17-23: Avoid magic string; align with frontend and make behavior explicit

The literal "nixopus" appears here and in the UI. Define a local constant to avoid drift and ease future changes. Also consider whether substring matching is too permissive (e.g., it matches “phoenixopus”). If you want stricter matching (prefix or word boundary), we can tighten it.

Apply:

```diff
+const protectedMarker = "nixopus"
 ...
-	if strings.Contains(name, "nixopus") {
+	if strings.Contains(name, protectedMarker) {
```

If you want stricter semantics, we can switch to HasPrefix/regexp with word boundaries.
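To illustrate the substring-vs-word-boundary distinction raised above: the backend code is Go, but the semantics are easy to demonstrate in a small Python sketch (the function names here are hypothetical, purely for illustration):

```python
import re

MARKER = "nixopus"


def is_protected_substring(name: str) -> bool:
    # Current behavior: permissive substring match, like strings.Contains.
    return MARKER in name.lower()


def is_protected_word(name: str) -> bool:
    # Stricter: the marker must appear as a whole word
    # (delimited by non-alphanumeric characters or string edges).
    return re.search(rf"\b{MARKER}\b", name.lower()) is not None


print(is_protected_substring("phoenixopus-db"))  # True — false positive
print(is_protected_word("phoenixopus-db"))       # False
print(is_protected_word("nixopus-api"))          # True
```

Note that a container named "phoenixopus-db" would be treated as protected under the substring rule, since "phoenixopus" ends in "nixopus"; word-boundary matching avoids that while still catching "nixopus-api".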
view/components/ui/theme-toggler.tsx (2)
33-50: Consider optimizing the keyboard event listener setup

The `keyup` event listener is recreated on every render since the effect has no dependencies. This could lead to unnecessary listener churn.

Consider memoizing the event handler or adding proper dependencies:

```diff
 React.useEffect(() => {
   const up = (e: KeyboardEvent) => {
     if (e.key === 'ArrowUp' || e.key === 'ArrowDown') {
       const currentSelected = document.querySelector('[aria-selected="true"]');
       if (currentSelected) {
         const theme = currentSelected.getAttribute('data-theme');
         if (theme) {
           handleOnFocus(theme);
         }
       }
     }
   };
   document.addEventListener('keyup', up);
   return () => document.removeEventListener('keyup', up);
-}, []);
+}, [handleOnFocus]);
```

Note: You'll also need to wrap `handleOnFocus` with `React.useCallback`:

```tsx
const handleOnFocus = React.useCallback(
  (theme: string) => {
    setTheme(theme);
  },
  [setTheme],
);
```
98-98: Consider accessibility implications of theme changes on mouse leave

Automatically reverting the theme when the mouse leaves the list area might be disorienting for keyboard users who are navigating with arrow keys, as their focus remains in the list while the theme reverts.

Consider checking whether keyboard navigation is active before reverting:

```diff
-<CommandList onMouseLeave={() => setTheme(startThemeRef.current)}>
+<CommandList
+  onMouseLeave={() => {
+    // Only revert on mouse leave if not using keyboard navigation
+    if (!document.querySelector('[aria-selected="true"]')) {
+      setTheme(startThemeRef.current);
+    }
+  }}
+>
```

scripts/install.sh (2)
106-109: Add a retry mechanism for network failures

Network downloads can fail transiently. Adding a retry mechanism would improve reliability.

Consider implementing a retry loop:

```diff
-curl -L -o "$temp_file" "$download_url" || {
-    log_error "Failed to download package"
-    exit 1
-}
+# Retry download up to 3 times
+local retry_count=0
+local max_retries=3
+while [ $retry_count -lt $max_retries ]; do
+    if curl -L -o "$temp_file" "$download_url"; then
+        break
+    fi
+    retry_count=$((retry_count + 1))
+    if [ $retry_count -eq $max_retries ]; then
+        log_error "Failed to download package after $max_retries attempts"
+        exit 1
+    fi
+    echo "Download failed, retrying ($retry_count/$max_retries)..."
+    sleep 2
+done
```
129-137: Add ownership verification for security

When copying files without sudo, ensure proper ownership is maintained to prevent potential security issues.

Consider verifying and setting proper ownership:

```diff
 if [[ -w /usr/local/bin ]] || mkdir -p /usr/local/bin 2>/dev/null; then
     cp /tmp/usr/local/bin/nixopus /usr/local/bin/
     chmod +x /usr/local/bin/nixopus
+    # Ensure proper ownership (current user)
+    chown "$(id -u):$(id -g)" /usr/local/bin/nixopus 2>/dev/null || true
 else
```

view/app/containers/[id]/page.tsx (1)
103-103: Unnecessary optional chaining when computing `isProtected`

The `isProtected` computation uses `container?.name`, but `container` is guaranteed to be non-null at this point (line 99 checks for `!container`), so the optional chaining is unnecessary here.

```diff
-  const isProtected = isNixopusContainer(container?.name);
+  const isProtected = isNixopusContainer(container.name);
```

view/app/containers/page.tsx (2)
24-25: Consider simplifying the containerName extraction

The type guard for `containerName` is overly defensive, since TypeScript's type system and the container object structure should ensure `container.name` is always a string when the container exists.

```diff
-  const containerName: string = typeof container?.name === 'string' ? container.name : '';
-  const isProtected = isNixopusContainer(containerName);
+  const isProtected = isNixopusContainer(container?.name);
```
197-197: TODO comment needs tracking

The TODO comment indicates missing pagination functionality for container listings. This should be tracked as a follow-up task so it's not forgotten.
Would you like me to create a GitHub issue to track the implementation of pagination for the containers listing page?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- .github/workflows/add-sponsors.yml (2 hunks)
- .gitignore (1 hunks)
- api/api/versions.json (1 hunks)
- api/internal/features/container/controller/protected.go (1 hunks)
- api/internal/features/container/controller/remove_container.go (1 hunks)
- api/internal/features/container/controller/start_container.go (1 hunks)
- api/internal/features/container/controller/stop_container.go (1 hunks)
- scripts/install-cli.sh (0 hunks)
- scripts/install.sh (1 hunks)
- view/app/containers/[id]/page.tsx (6 hunks)
- view/app/containers/page.tsx (7 hunks)
- view/components/ui/theme-toggler.tsx (3 hunks)
- view/lib/utils.ts (1 hunks)
💤 Files with no reviewable changes (1)
- scripts/install-cli.sh
🧰 Additional context used
🧬 Code Graph Analysis (4)
view/app/containers/[id]/page.tsx (2)
- view/lib/utils.ts (1): isNixopusContainer (26-29)
- view/components/rbac/PermissionGuard.tsx (1): ResourceGuard (78-95)

api/internal/features/container/controller/protected.go (3)
- api/internal/features/container/controller/init.go (1): ContainerController (11-17)
- api/internal/types/response.go (1): Response (5-20)
- api/internal/features/logger/init.go (1): Info (20-20)

view/components/ui/theme-toggler.tsx (2)
- view/components/ui/command.tsx (6): CommandDialog (176-176), CommandInput (177-177), CommandList (178-178), CommandEmpty (179-179), CommandGroup (180-180), CommandItem (181-181)
- view/components/colors.ts (1): palette (31-44)

view/app/containers/page.tsx (2)
- view/lib/utils.ts (1): isNixopusContainer (26-29)
- view/components/ui/typography.tsx (1): TypographyH2 (9-15)
🪛 YAMLlint (1.37.1)
.github/workflows/add-sponsors.yml
[error] 37-37: trailing spaces
(trailing-spaces)
🔇 Additional comments (12)
.gitignore (1)
65-66: Clarify the scope of the artifacts/ ignore rule

If the intent is to ignore only the top-level artifacts directory, prefer a leading slash. Without it, an artifacts/ directory at any depth will be ignored.

Apply this diff if top-level-only is desired:

```diff
-artifacts/
+/artifacts/
```

api/api/versions.json (1)
6-6: LGTM: the release_date update looks fine

The timestamp is valid RFC 3339 and consistent with the existing schema.
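As a quick way to sanity-check such timestamps, Python's `datetime.fromisoformat` accepts RFC 3339-style strings (Python 3.11+ also accepts a trailing `Z` directly). The timestamp values below are hypothetical examples, since the actual release_date isn't shown in this diff:

```python
from datetime import datetime


def is_valid_rfc3339(ts: str) -> bool:
    """Rough RFC 3339 check via ISO 8601 parsing (normalizing 'Z' for older Pythons)."""
    try:
        datetime.fromisoformat(ts.replace("Z", "+00:00"))
        return True
    except ValueError:
        return False


print(is_valid_rfc3339("2025-08-14T10:30:00Z"))  # True
print(is_valid_rfc3339("2025-13-01T00:00:00Z"))  # False — month 13 is invalid
```

This catches malformed dates and out-of-range fields, though a strict RFC 3339 validator would also reject ISO 8601 forms (e.g., missing offsets) that `fromisoformat` accepts.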
.github/workflows/add-sponsors.yml (1)
3-7: Confirm the trigger change: push removed

The workflow now runs only on workflow_dispatch and a daily cron, not on push. If this is intentional to reduce noise, great; otherwise consider re-adding push for quicker README updates after sponsor-related changes.
api/internal/features/container/controller/protected.go (1)
18-18: Verify the logger.Log signature and structured context

You pass details.Name as a third argument. Ensure logger.Log accepts a variadic/context argument and renders it meaningfully; otherwise include it in the formatted message.

Possible alternative:

```diff
-	c.logger.Log(logger.Info, fmt.Sprintf("Skipping %s for protected container", action), details.Name)
+	c.logger.Log(logger.Info, fmt.Sprintf("Skipping %s for protected container %q", action, details.Name))
```

view/components/ui/theme-toggler.tsx (1)
86-95: Theme preview/revert logic appears solid

The implementation correctly captures the current theme when the dialog opens and provides proper revert behavior when closing without a selection. This ensures users can preview themes safely without committing changes.
api/internal/features/container/controller/remove_container.go (1)
15-17: Protected container check is properly implemented

The early-return pattern for protected containers is clean and consistent with the implementation in the start and stop controllers.
api/internal/features/container/controller/stop_container.go (1)
15-17: Protected container guard correctly implemented

The implementation follows the same pattern as the other container operations, ensuring consistency across the codebase.
api/internal/features/container/controller/start_container.go (1)
15-17: Consistent protection pattern across container operations

The protected container check maintains consistency with the stop and remove operations, providing a unified approach to container protection.
view/app/containers/[id]/page.tsx (2)
128-128: Good implementation of the container protection logic

The protection check is consistently applied across all destructive actions (start, stop, restart, remove), properly preventing operations on Nixopus containers. The disabled state combines both loading and protection states appropriately.
Also applies to: 137-137, 146-146, 161-161
119-119: Consistent RBAC permission structure maintained

The ResourceGuard components properly enforce role-based access control with appropriate resource/action pairs and loading fallbacks. The separation between update and delete permissions is well-structured.
Also applies to: 152-152, 211-211
view/app/containers/page.tsx (2)
241-246: Clean implementation of the empty-state UI

The empty-state message is well implemented using the TypographyH2 component, with proper centering and appropriate styling through the muted-foreground class.

248-259: Simplified container rendering without the featured carousel

The removal of the featured carousel and consolidation to a single grid layout simplifies the UI and codebase. All containers are now rendered uniformly in a responsive grid.
…auto-recreate spec
Actionable comments posted: 1
🧹 Nitpick comments (4)
cli/app/commands/clone/clone.py (2)
52-52: Consider making the shallow clone depth configurable.

The hardcoded `--depth=1` makes this always a shallow clone, which may not be suitable for all use cases. Consider making the depth configurable through the `CloneConfig` to allow full clones when needed.

```diff
-        cmd = ["git", "clone", "--depth=1"]
+        cmd = ["git", "clone"]
+        if depth:
+            cmd.extend(["--depth", str(depth)])
```

And update `CloneConfig` to include:

```python
depth: Optional[int] = Field(1, description="Clone depth (None for full clone)")
```
107-107: Minor: consistent quote usage in debug messages.

The change from single quotes to double quotes in the join operation is inconsistent with the existing codebase style. Consider maintaining consistency with the original format or updating all similar instances.

```diff
-        self.logger.debug(debug_executing_git_clone.format(command=" ".join(cmd)))
+        self.logger.debug(debug_executing_git_clone.format(command=' '.join(cmd)))
```

cli/build.sh (2)
169-177: Consider adding Poetry version validation and error handling.

The hardcoded Poetry version "1.8.3" might become outdated. Consider making it configurable or adding validation to ensure compatibility.

```diff
+POETRY_VERSION="${POETRY_VERSION:-1.8.3}"

 # In the Docker command:
-python3 -m pip install 'poetry==1.8.3' && \
+python3 -m pip install \"poetry==$POETRY_VERSION\" && \
```
152-152: Document the NIXOPUS_DISABLE_DOCKER environment variable.

The script introduces the `NIXOPUS_DISABLE_DOCKER` environment variable but doesn't document it in the usage function. Update `show_usage()` to include:

```sh
echo "Environment Variables:"
echo "  NIXOPUS_DISABLE_DOCKER    Disable Docker-based builds (force host builds)"
echo ""
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`assets/.DS_Store` is excluded by `!**/.DS_Store`
📒 Files selected for processing (5)
- .gitignore (1 hunks)
- cli/app/commands/clone/clone.py (4 hunks)
- cli/build.sh (3 hunks)
- cli/perf_check.sh (0 hunks)
- package.json (1 hunks)
💤 Files with no reviewable changes (1)
- cli/perf_check.sh
🚧 Files skipped from review as they are similar to previous changes (1)
- .gitignore
🔇 Additional comments (5)
cli/app/commands/clone/clone.py (1)
218-221: LGTM on the timing logic, with a minor spacing improvement.

The import placement and timing implementation are correct. The spacing adjustments improve readability.
package.json (1)
5-5: Confirm the change to the package.json `private` flag

package.json now contains `"private": false` — this allows publishing. I couldn't find any git commits that changed this field in the repo history, so please confirm this is intentional.

Files to inspect:
- package.json — line 5: `"private": false`

Snippet:

```json
"private": false,
```

If this was unintentional, revert to `true` to prevent accidental publishing.
cli/build.sh (3)
5-6: LGTM on the script directory setup.

Good practice to establish the script's working directory for reliable relative path operations.

142-186: Excellent manylinux container integration for GLIBC compatibility.

This implementation successfully addresses the PR objective of building binaries inside manylinux2014 containers to avoid GLIBC version conflicts. The logic properly:
- Detects OS/architecture combinations
- Uses appropriate manylinux2014 images for x86_64 and aarch64
- Falls back to host builds when Docker isn't available
- Provides clear logging for each build path
The Docker command correctly mounts the working directory and installs the required Python toolchain.
199-199: LGTM on the centralized build workflow.

The refactoring to use `run_pyinstaller_build()` properly centralizes the PyInstaller logic and maintains the existing build pipeline structure.
```json
"cli-version": "0.1.0",
"cli-packages": [
  "nixopus.tar",
  "nixopus-0.1.0-1.aarch64.rpm",
  "nixopus_0.1.0_arm64.apk",
  "nixopus_0.1.0_arm64.deb",
  "nixopus.tar",
  "nixopus-0.1.0-1.x86_64.rpm",
  "nixopus_0.1.0_amd64.deb",
  "nixopus_0.1.0_amd64.apk"
]
```
Fix duplicate entry and consider version consistency.
There are two issues with the CLI packages configuration:

- `"nixopus.tar"` appears twice (lines 8 and 12)
- The `cli-version` ("0.1.0") differs from the main package `version` ("0.1.0-alpha.11")
```diff
   "cli-packages": [
-    "nixopus.tar",
     "nixopus-0.1.0-1.aarch64.rpm",
     "nixopus_0.1.0_arm64.apk",
     "nixopus_0.1.0_arm64.deb",
     "nixopus.tar",
     "nixopus-0.1.0-1.x86_64.rpm",
     "nixopus_0.1.0_amd64.deb",
     "nixopus_0.1.0_amd64.apk"
   ]
```

Also consider whether cli-version should match the main version or remain separate for a CLI-specific versioning strategy.
Actionable comments posted: 1
🔭 Outside diff range comments (3)
cli/app/commands/clone/clone.py (3)
104-111: Do not log credentials; sanitize the clone command before logging.

If repo contains embedded credentials (e.g., https://token@..., https://user:pass@...), logging the raw command leaks secrets. Sanitize before logging.
Apply this diff locally and add the helper shown below:
```diff
 def clone_repository(self, repo: str, path: str, branch: str = None) -> tuple[bool, str]:
     cmd = GitCommandBuilder.build_clone_command(repo, path, branch)
-
-    self.logger.debug(debug_executing_git_clone.format(command=" ".join(cmd)))
+    sanitized_cmd = sanitize_git_command_for_logging(cmd)
+    self.logger.debug(debug_executing_git_clone.format(command=sanitized_cmd))
```

Add this helper (outside this range) and reuse it also in CloneFormatter.format_dry_run:

```python
# outside this range
import re
from urllib.parse import urlsplit, urlunsplit


def _redact_url(url: str) -> str:
    try:
        parts = urlsplit(url)
        if parts.username or parts.password:
            host = parts.hostname or ""
            port = f":{parts.port}" if parts.port else ""
            netloc = host + port
            return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))
        # Fallback generic redaction for tokens before '@'
        return re.sub(r'://[^@]+@', '://***@', url)
    except Exception:
        return re.sub(r'://[^@]+@', '://***@', url)


def sanitize_git_command_for_logging(cmd: list[str]) -> str:
    sanitized = []
    for arg in cmd:
        if "://" in arg and "@" in arg:
            sanitized.append(_redact_url(arg))
        else:
            sanitized.append(arg)
    return " ".join(sanitized)
```

Also update the dry-run line to use the sanitized command:

```diff
-        output.append(dry_run_command.format(command=" ".join(cmd)))
+        output.append(dry_run_command.format(command=sanitize_git_command_for_logging(cmd)))
```
133-139: Don't hardcode the default branch — use None so Git decides

Setting branch="master" forces `-b master` and will fail on repos using "main" or other defaults. Make the field default to None and accept Optional[str] in the validator.

- File: cli/app/commands/clone/clone.py — lines 133–139: change the default branch to None
- File: cli/app/commands/clone/clone.py — lines 168–176: update the validator signature to accept Optional[str]

```diff
 class CloneConfig(BaseModel):
     repo: str = Field(..., min_length=1, description="Repository URL to clone")
-    branch: Optional[str] = Field("master", description="Branch to clone")
+    branch: Optional[str] = Field(None, description="Branch to clone (None = remote default)")
@@
-    def validate_branch(cls, branch: str) -> Optional[str]:
+    def validate_branch(cls, branch: Optional[str]) -> Optional[str]:
         if not branch:
             return None
         stripped_branch = branch.strip()
         if not stripped_branch:
             return None
         return stripped_branch
```

Note: CloneFormatter.format_dry_run already prints default_branch when branch is falsy.
52-56: Remove the hardcoded shallow clone (--depth=1) — make depth optional or revert to a full clone by default

GitCommandBuilder currently injects `--depth=1` (breaking existing tests and potentially downstream workflows). Make the clone depth configurable and avoid shallow cloning by default, or add an automatic full-clone retry on shallow-clone failure.
Files to change
- cli/app/commands/clone/clone.py — GitCommandBuilder.build_clone_command (currently adds --depth=1).
- cli/app/commands/clone/clone.py — GitClone.clone_repository (add optional retry fallback if you keep shallow as an option).
- cli/tests/commands/clone/test_clone.py — tests expect no --depth in the command/dry-run output; update tests if you change default behavior.
- (Optional) cli/app/commands/clone/clone.py — CloneConfig: add an optional depth field and expose via CLI flags if you want user control.
Suggested minimal code changes (replace the hardcoded builder and add optional fallback):
- Replace builder with a configurable version:
```python
from typing import Optional


class GitCommandBuilder:
    @staticmethod
    def build_clone_command(
        repo: str,
        path: str,
        branch: Optional[str] = None,
        depth: Optional[int] = None,
        no_single_branch: bool = False,
    ) -> list[str]:
        cmd = ["git", "clone"]
        if depth:
            cmd.extend(["--depth", str(depth)])
        if no_single_branch:
            cmd.append("--no-single-branch")
        if branch:
            cmd.extend(["-b", branch])
        cmd.extend([repo, path])
        return cmd
```
- Optional robust fallback in clone_repository (only if you choose to attempt shallow first):
```python
def clone_repository(self, repo: str, path: str, branch: str = None, depth: Optional[int] = None) -> tuple[bool, str]:
    cmd = GitCommandBuilder.build_clone_command(repo, path, branch, depth)
    self.logger.debug(debug_executing_git_clone.format(command=" ".join(cmd)))
    try:
        subprocess.run(cmd, capture_output=True, text=True, check=True)
        self.logger.debug(debug_git_clone_success)
        return True, None
    except subprocess.CalledProcessError as e:
        if depth:
            self.logger.debug("Shallow clone failed; retrying full clone...")
            full_cmd = GitCommandBuilder.build_clone_command(repo, path, branch, depth=None)
            try:
                subprocess.run(full_cmd, capture_output=True, text=True, check=True)
                self.logger.debug(debug_git_clone_success)
                return True, None
            except subprocess.CalledProcessError as e2:
                self.logger.debug(debug_git_clone_failed.format(code=e2.returncode, error=e2.stderr))
                return False, e2.stderr
        self.logger.debug(debug_git_clone_failed.format(code=e.returncode, error=e.stderr))
        return False, e.stderr
```

Tests: either
- Revert implementation to not include --depth by default (depth=None) to preserve current tests/dry-run expectations, or
- Intentionally add depth to CloneConfig/CLI and update tests to expect the new command output.
🧹 Nitpick comments (5)
package.json (1)
5-5: Confirm that making the package publishable is intentional

`"private": false` means this package can be published to npm. If this repo isn’t meant to be published, set it to true to prevent accidental publishes.
cli/build.sh (2)
152-166: Arch detection: handle arm64 consistently and fall back clearly

The aarch64|arm64 mapping is good. When the architecture is unsupported, the build falls back to the host. Consider logging the resolved PYTAG and image to aid debugging.

Apply:

```diff
 case $ARCH in
     x86_64)
         MANYLINUX_IMAGE="quay.io/pypa/manylinux2014_x86_64"
         PYTAG="cp311-cp311"
         ;;
     aarch64|arm64)
         MANYLINUX_IMAGE="quay.io/pypa/manylinux2014_aarch64"
         PYTAG="cp311-cp311"
         ;;
     *)
         MANYLINUX_IMAGE=""
         ;;
 esac
+
+log_info "Resolved arch=$ARCH manylinux_image=${MANYLINUX_IMAGE:-<none>} pytag=${PYTAG:-<none>}"
```
184-186: Host fallback: consider pipefail and clearer error propagation

When building on the host, failures in pyinstaller should fail the script; adding pipefail at the top would harden early exits across pipes.

Add near the top:

```diff
-set -e
+set -euo pipefail
```

Note: this is a global change; verify callers/environment variables (e.g., unset vars) before enabling -u.
cli/app/commands/clone/clone.py (2)
219-233: Use monotonic timing for duration; perf_counter is more accurate and robust.

Prefer time.perf_counter() over time.time() for elapsed-time measurements (immune to system clock changes, higher resolution).

Apply this diff:

```diff
-        import time
-
-        start_time = time.time()
+        import time
+        start_time = time.perf_counter()
@@
-        duration = time.time() - start_time
+        duration = time.perf_counter() - start_time
```
104-118: Consider adding a timeout and disabling interactive prompts to avoid hangs.

git may prompt for credentials and hang in non-interactive contexts. Add a timeout and set GIT_TERMINAL_PROMPT=0 to fail fast with a clear error.

Example change (outside this range):

```python
env = os.environ.copy()
env.setdefault("GIT_TERMINAL_PROMPT", "0")  # disable interactive prompts
result = subprocess.run(cmd, capture_output=True, text=True, check=True, timeout=300, env=env)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`assets/.DS_Store` is excluded by `!**/.DS_Store`
📒 Files selected for processing (5)
- .gitignore (1 hunks)
- cli/app/commands/clone/clone.py (4 hunks)
- cli/build.sh (3 hunks)
- cli/perf_check.sh (0 hunks)
- package.json (1 hunks)
💤 Files with no reviewable changes (1)
- cli/perf_check.sh
🚧 Files skipped from review as they are similar to previous changes (1)
- .gitignore
🔇 Additional comments (4)
package.json (1)
6-7: Installer already reads the hyphenated keys — no change required

scripts/install.sh explicitly parses "cli-version" and "cli-packages" by their exact hyphenated keys; no consumer using dot notation or alternate key names was found.

- package.json (lines 6–7): `"cli-version": "0.1.0"`, `"cli-packages": [`
- scripts/install.sh (lines ~58–62): extracts CLI_VERSION with `grep -o '"cli-version":…'` and CLI_PACKAGES via `grep -A 100 '"cli-packages"' …`; has an error path if cli-version is missing
- A search for cliVersion|cli_packages|cli_version returned only local Python variables (cli/app/*.py) — not JSON key consumers
cli/build.sh (3)
5-7: Good: the build script runs from its own directory

Resolving SCRIPT_DIR and cd-ing into it prevents path issues when invoked from other locations.

199-199: Good: a single entrypoint for PyInstaller logic

Delegating to run_pyinstaller_build centralizes the portability logic (manylinux on Linux; host elsewhere), reducing duplication and future drift.

142-186: Dockerized PyInstaller: map host UID/GID, pin the image digest, handle UPX, and confirm the Python constraint

Quick note: my verification run could not find pyproject.toml and the sandbox lacks the id command (so I couldn't confirm the project's Python constraint or demonstrate UID/GID values). Please verify the Python constraint locally. Meanwhile, apply the following safe changes.
Files to update
- cli/build.sh — run_pyinstaller_build (approx. lines 142–186)
Suggested changes (minimal diff)
```diff
-docker run --rm -v "$PWD":/work -w /work "$MANYLINUX_IMAGE" bash -lc \
+# run as host UID/GID to avoid root-owned artifacts and (optionally) pin image by digest
+docker run --rm -u "$(id -u):$(id -g)" -v "$PWD":/work -w /work "$MANYLINUX_IMAGE" bash -lc \
     "export PATH=/opt/python/$PYTAG/bin:\$PATH && \
      python3 -m pip install -U pip && \
      python3 -m pip install 'poetry==1.8.3' && \
+# Optional: install upx if your spec uses upx (manylinux images don't include it)
+# yum -y install upx || true && \
      poetry install --with dev && \
      poetry run pyinstaller --clean --noconfirm $SPEC_FILE" || {
```
- Pin the manylinux image by digest for reproducible builds, e.g. MANYLINUX_IMAGE="quay.io/pypa/manylinux2014_x86_64@sha256:".
- If your PyInstaller spec sets upx=True, either install upx inside the container (see comment above) or set upx=False to avoid warnings/failures.
- Ensure the project’s Python constraint in pyproject.toml (poetry) matches PYTAG=cp311-cp311 — otherwise adjust PYTAG or the in-container Python used by PATH.
Verification note
- Script output: pyproject.toml not found in sandbox; 'id' command unavailable — I could not confirm the Python constraint or host UID/GID from this environment. Please verify those locally.
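The UID/GID mapping suggested above can be captured once and reused; a minimal sketch (the docker invocation is left as a comment since it requires the image to be present, and MANYLINUX_IMAGE is a placeholder):

```shell
# Capture the host UID/GID so files created inside the container stay
# owned by the invoking user rather than root
HOST_UID=$(id -u)
HOST_GID=$(id -g)
echo "${HOST_UID}:${HOST_GID}"

# Illustrative usage (not executed here):
# docker run --rm -u "${HOST_UID}:${HOST_GID}" \
#   -v "$PWD":/work -w /work "$MANYLINUX_IMAGE" bash -lc "..."
```

Passing `-u "$(id -u):$(id -g)"` inline works too; capturing the values first just makes failures easier to debug in CI logs.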
"cli-version": "0.1.0",
"cli-packages": [
  "nixopus.tar",
  "nixopus-0.1.0-1.aarch64.rpm",
  "nixopus_0.1.0_arm64.apk",
  "nixopus_0.1.0_arm64.deb",
  "nixopus.tar",
  "nixopus-0.1.0-1.x86_64.rpm",
  "nixopus_0.1.0_amd64.deb",
  "nixopus_0.1.0_amd64.apk"
]
💡 Verification agent
🧩 Analysis chain
Duplicate entry and artifact name mismatch with build output
- The cli-packages array contains a duplicate entry for "nixopus.tar" (Line 12). This can lead to redundant downloads/installs.
- The build script produces tar.gz archives named like nixopus_${OS}_${ARCH}.tar.gz (see cli/build.sh create_release_archive). The entries "nixopus.tar" do not match either the extension or the naming convention, which will break installer flows that rely on this metadata.
Apply this minimal fix to remove the duplicate; additionally, align names/extensions with the actual artifacts produced:
"cli-packages": [
- "nixopus.tar",
+ "nixopus_linux_amd64.tar.gz",
"nixopus-0.1.0-1.aarch64.rpm",
"nixopus_0.1.0_arm64.apk",
"nixopus_0.1.0_arm64.deb",
- "nixopus.tar",
+ "nixopus_linux_arm64.tar.gz",
"nixopus-0.1.0-1.x86_64.rpm",
"nixopus_0.1.0_amd64.deb",
"nixopus_0.1.0_amd64.apk"
]
If macOS artifacts are intended, include darwin tarballs too (e.g., nixopus_darwin_arm64.tar.gz). If RPM/DEB/APK are built elsewhere in CI, note that cli/build.sh does not produce these packages; ensure parity between metadata and actual publish pipeline.
Run the following to verify parity between metadata and actual build outputs and to find installer references to these keys:
🏁 Script executed:
#!/bin/bash
# 1) Show declared CLI packages from package.json
jq -r '.["cli-packages"][]' package.json
# 2) Show local build outputs (if any build artifacts exist)
fd -t f -a . | rg -n "nixopus.*\.(tar\.gz|deb|rpm|apk)$"
# 3) Locate consumers of cli-version/cli-packages keys and confirm expected names/structure
rg -n --no-heading -S 'cli-version|cli-packages|CLI_VERSION|CLI_PACKAGES'Length of output: 1448
Fix duplicate package entry and align tar name with installer logic
package.json currently has a duplicate "nixopus.tar" and the tar entries don't include the CLI version. scripts/install.sh builds tar names as "nixopus-${CLI_VERSION}.tar" (with CLI_VERSION = "0.1.0" in package.json), so the metadata should include the versioned tar filename or the installer should be updated to match the artifact names produced by CI.
Files to check
- package.json (lines ~6-16): duplicate "nixopus.tar" and missing version in tar entry.
- scripts/install.sh (lines ~73-76): installer expects "nixopus-${CLI_VERSION}.tar". Other package name formats (deb/rpm/apk) match the entries in package.json.
Minimal fix (example diff)
"cli-packages": [
- "nixopus.tar",
+ "nixopus-0.1.0.tar",
"nixopus-0.1.0-1.aarch64.rpm",
"nixopus_0.1.0_arm64.apk",
"nixopus_0.1.0_arm64.deb",
- "nixopus.tar",
"nixopus-0.1.0-1.x86_64.rpm",
"nixopus_0.1.0_amd64.deb",
"nixopus_0.1.0_amd64.apk"
]
Notes
- If your build pipeline actually produces compressed tarballs (e.g., .tar.gz) or arch-specific tar names, either (A) change package.json to list those exact filenames, or (B) update scripts/install.sh to construct the names your CI produces. A quick repo scan found no tar.gz artifacts, so prefer matching install.sh or adjust CI to publish the named artifact.
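To illustrate why the names must line up, here is a minimal sketch of the lookup the installer effectively performs (package list shown post-fix; the exact filenames are assumptions based on the diff above):

```shell
CLI_VERSION="0.1.0"
TAR_NAME="nixopus-${CLI_VERSION}.tar"   # name scripts/install.sh constructs

# Entries as they would appear in package.json after the fix (illustrative)
PACKAGES="nixopus-0.1.0.tar nixopus-0.1.0-1.x86_64.rpm nixopus_0.1.0_amd64.deb"

# Membership check: does the constructed name appear in the published list?
case " $PACKAGES " in
  *" $TAR_NAME "*) RESULT="found" ;;
  *)               RESULT="missing" ;;
esac
echo "$RESULT"
```

With the current unversioned "nixopus.tar" entries, this lookup would print "missing" and the installer would fail to locate the artifact.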
🤖 Prompt for AI Agents
In package.json around lines 6 to 16, update the "cli-packages" array to remove
the duplicate "nixopus.tar" entry and replace the tar entry with the versioned
filename that matches CLI_VERSION (e.g., "nixopus-0.1.0.tar"), or alternatively
adjust scripts/install.sh to construct the tar filename your CI actually
produces; ensure the final list contains one tar entry with the correct exact
filename (including version and compression/arch if applicable) and no
duplicates so the installer can find the artifact.
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these settings in your CodeRabbit configuration.
📒 Files selected for processing (2)
- api/api/versions.json (1 hunks)
- api/internal/features/container/controller/init.go (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- api/api/versions.json
🧰 Additional context used
🧬 Code Graph Analysis (1)
api/internal/features/container/controller/init.go (2)
api/internal/types/response.go (1)
  Response (5-20)
api/internal/features/logger/init.go (1)
  Info (20-20)
🔇 Additional comments (3)
api/internal/features/container/controller/init.go (3)
5-6: New imports are appropriate.
fmt and strings are correctly introduced for logging and case-insensitive checks.
12-12: Alias import for shared types improves clarity.
Using shared_types.Response is explicit and consistent.
45-45: Logger.Log signature matches usage — no change required
Verified that api/internal/features/logger/init.go defines:
- func (l *Logger) Log(sev Severity, msg, data string)
The call in api/internal/features/container/controller/init.go:
- c.logger.Log(logger.Info, fmt.Sprintf("Skipping %s for protected container", action), details.Name)
matches that signature. The logger currently accepts a string "data" field rather than structured fields; consider adding structured logging later if you want to pass fields instead of a positional string.
func (c *ContainerController) isProtectedContainer(containerID string, action string) (*shared_types.Response, bool) {
	details, err := c.dockerService.GetContainerById(containerID)
	if err != nil {
		return nil, false
	}
	name := strings.ToLower(details.Name)
	if strings.Contains(name, "nixopus") {
		c.logger.Log(logger.Info, fmt.Sprintf("Skipping %s for protected container", action), details.Name)
		return &shared_types.Response{
			Status:  "success",
			Message: "Operation skipped for protected container",
			Data:    map[string]string{"status": "skipped"},
		}, true
	}
	return nil, false
}
🛠️ Refactor suggestion
Harden the protection check, return richer context, and fail-safe on remove when metadata fetch fails.
Current logic silently proceeds if container lookup fails, and the response lacks context fields. Recommend:
- Fail-safe on destructive remove if we can’t verify protection.
- Enrich response Data with containerId/containerName to aid clients.
- Slightly improve log message to include container id.
Apply this diff:
- func (c *ContainerController) isProtectedContainer(containerID string, action string) (*shared_types.Response, bool) {
- details, err := c.dockerService.GetContainerById(containerID)
- if err != nil {
- return nil, false
- }
- name := strings.ToLower(details.Name)
- if strings.Contains(name, "nixopus") {
- c.logger.Log(logger.Info, fmt.Sprintf("Skipping %s for protected container", action), details.Name)
- return &shared_types.Response{
- Status: "success",
- Message: "Operation skipped for protected container",
- Data: map[string]string{"status": "skipped"},
- }, true
- }
- return nil, false
- }
+ // isProtectedContainer determines whether a container operation should be skipped
+ // for a protected container. It returns a response (when skipping) and a boolean
+ // indicating whether the caller should skip the requested action.
+ func (c *ContainerController) isProtectedContainer(containerID string, action string) (*shared_types.Response, bool) {
+ details, err := c.dockerService.GetContainerById(containerID)
+ if err != nil {
+ // Be conservative for destructive actions if we can't verify protection.
+ if action == "remove" {
+ c.logger.Log(logger.Info, fmt.Sprintf("Skipping %s: unable to fetch container details", action), containerID)
+ return &shared_types.Response{
+ Status: "success",
+ Message: "Operation skipped: unable to verify protection status",
+ Data: map[string]string{"status": "skipped", "containerId": containerID},
+ }, true
+ }
+ return nil, false
+ }
+ name := strings.ToLower(details.Name)
+ if strings.Contains(name, "nixopus") {
+ c.logger.Log(logger.Info, fmt.Sprintf("Skipping %s for protected container (id=%s)", action, containerID), details.Name)
+ return &shared_types.Response{
+ Status: "success",
+ Message: "Operation skipped for protected container",
+ Data: map[string]string{"status": "skipped", "containerId": containerID, "containerName": details.Name},
+ }, true
+ }
+ return nil, false
💡 Verification agent
🧩 Analysis chain
Prefer a label-based protection signal over name substring matching.
Name-based heuristics can lead to false positives/negatives. If available, check a dedicated label (e.g., nixopus.protected=true) first, then fall back to name matching for backward compatibility. This keeps the backend robust and aligned with future UI logic.
Example inside the function (pseudocode-ish; verify the actual field names on details):
- Check details.Labels["nixopus.protected"] == "true" (case-insensitive)
- Fallback to strings.Contains(strings.ToLower(details.Name), "nixopus")
I can wire this in once we confirm the docker container struct exposes Labels.
Run to confirm label availability and call sites:
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "1) Locate GetContainerById signature and container struct fields:"
rg -n "GetContainerById\\(" -A 12 api/internal
echo
echo "2) Inspect container detail struct for Labels/Name:"
rg -n "type .*Container" -A 50 api/internal | sed -n '1,200p'
echo
echo "3) Verify all controller actions use isProtectedContainer:"
rg -n "isProtectedContainer\\(" -A 2 api/internal/features/container/controller
echo
echo "4) Check for a Restart action to protect as well (if exists):"
rg -n "RestartContainer\\(|restart" api/internal/features/container/controller -A 2Length of output: 30467
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "----- File: api/internal/features/deploy/docker/init.go (head + GetContainerById) -----"
sed -n '1,260p' api/internal/features/deploy/docker/init.goLength of output: 10648
Use a label-based protection flag (nixopus.protected=true) with a name fallback
Verified findings:
- GetContainerById returns docker's container.InspectResponse (api/internal/features/deploy/docker/init.go) — details.Config.Labels is available.
- The project also has container types with Labels map[string]string (api/internal/features/container/types/container_types.go).
- isProtectedContainer is already used by start/stop/remove controllers but NOT by restart (api/internal/features/container/controller/restart_container.go).
What to change:
- Update api/internal/features/container/controller/init.go:isProtectedContainer to check a label first, then fall back to name substring matching.
- Add the same protection check to api/internal/features/container/controller/restart_container.go before calling RestartContainer.
Suggested replacement for isProtectedContainer (pseudocode-ready Go):
func (c *ContainerController) isProtectedContainer(containerID string, action string) (*shared_types.Response, bool) {
details, err := c.dockerService.GetContainerById(containerID)
if err != nil {
return nil, false
}
// Prefer label-based signal
if details.Config != nil {
if v, ok := details.Config.Labels["nixopus.protected"]; ok && strings.EqualFold(v, "true") {
c.logger.Log(logger.Info, fmt.Sprintf("Skipping %s for protected container", action), details.Name)
return &shared_types.Response{
Status: "success",
Message: "Operation skipped for protected container",
Data: map[string]string{"status": "skipped"},
}, true
}
}
// Backwards-compatible fallback
name := strings.ToLower(details.Name)
if strings.Contains(name, "nixopus") {
c.logger.Log(logger.Info, fmt.Sprintf("Skipping %s for protected container", action), details.Name)
return &shared_types.Response{
Status: "success",
Message: "Operation skipped for protected container",
Data: map[string]string{"status": "skipped"},
}, true
}
return nil, false
}
Small note: ensure restart_container.go calls c.isProtectedContainer(containerID, "restart") and returns the skipped response like other controllers.
🤖 Prompt for AI Agents
In api/internal/features/container/controller/init.go around lines 38-53, update
isProtectedContainer to first check details.Config != nil and then check
details.Config.Labels["nixopus.protected"] for a case-insensitive "true" value;
if present log the skip and return the same success/skipped Response as the
existing name-based check, otherwise fall back to the existing name substring
match on details.Name; then in
api/internal/features/container/controller/restart_container.go call
c.isProtectedContainer(containerID, "restart") before invoking RestartContainer
and return the skipped Response when the function indicates protection.
Summary by CodeRabbit
Changes in build logic:
Problem: the Linux bundle failed on older distros with "GLIBC_2.38 not found" because the binary was linked against the build host's newer GLIBC.
Solution: build the binary in a manylinux2014 container (older glibc); the result runs natively on the host.
Reference: https://github.com/pypa/manylinux
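The GLIBC constraint above can be sanity-checked on any target host by comparing the running glibc against manylinux2014's baseline (2.17); a sketch assuming a glibc-based Linux system where getconf exposes GNU_LIBC_VERSION:

```shell
# manylinux2014 targets glibc 2.17; anything newer on the host is compatible
REQ_MAJOR=2
REQ_MINOR=17

# e.g. "glibc 2.35" -> "2.35"
HOST=$(getconf GNU_LIBC_VERSION | awk '{print $2}')
MAJOR=${HOST%%.*}
REST=${HOST#*.}
MINOR=${REST%%.*}

if [ "$MAJOR" -gt "$REQ_MAJOR" ] || { [ "$MAJOR" -eq "$REQ_MAJOR" ] && [ "$MINOR" -ge "$REQ_MINOR" ]; }; then
  echo "host glibc $HOST is compatible with manylinux2014 builds"
else
  echo "host glibc $HOST is older than 2.17"
fi
```

This is the inverse of the original failure: the host's glibc must be at least as new as the one the binary was linked against, which is why building against manylinux2014's old glibc maximizes compatibility.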
TODO: Create an article