feat(fsearch): add include/exclude filtering and telemetry coverage #6
Conversation
NPI mode now has separate NPI_INCLUDE (broader) and NPI_EXCLUDE (surgical) filter sets. Shows DUT commands, IO signals with names, robot high-level moves, connection events, and error responses while filtering plumbing.
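The shape of that two-stage filter can be sketched as a pair of greps; the regexes below are illustrative stand-ins, not flog's actual NPI_INCLUDE/NPI_EXCLUDE sets:

```shell
# Hypothetical sketch of the broad-include / surgical-exclude pipeline.
# The real patterns in flog are different and much broader.
NPI_INCLUDE='DUT|IO_|robot|connect|error'
NPI_EXCLUDE='heartbeat|keepalive'

npi_filter() {
  grep -E "$NPI_INCLUDE" | grep -v -E "$NPI_EXCLUDE"
}

# Lines matching the include set survive unless the exclude set removes them.
printf '%s\n' 'DUT cmd: reset' 'heartbeat DUT ping' 'misc plumbing' | npi_filter
```

Here only `DUT cmd: reset` survives: the heartbeat line passes the include stage but is removed by the exclude stage, and the plumbing line never matches the include set.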
Pretty mode fell off the end of the script without an explicit exit, causing the last arithmetic test (( _METACHAR_WARNING == 1 )) to set the exit code to 1 when no warning was needed. JSON and paths modes already had exit 0; pretty was the only mode missing it.
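A minimal reproduction of that failure mode: a bash script's exit status is the status of its last command, and a false arithmetic test returns 1.

```shell
# When the flag is 0, (( _METACHAR_WARNING == 1 )) evaluates false and
# returns status 1. If it is the last command executed, that 1 becomes the
# script's exit code -- hence the explicit `exit 0` added to pretty mode.
_METACHAR_WARNING=0
status=0
(( _METACHAR_WARNING == 1 )) || status=$?
echo "arithmetic test status: $status"
```

This prints `arithmetic test status: 1`, which is exactly what callers saw as the script's exit code before the fix.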
When multiple matches occur in the same file, only show the full path
for the first match. Subsequent matches show basename:line:content only.
This reduces token usage when the same file has many hits, making
output more compact for AI agent consumption.
Before:
/very/long/path/to/file.py:10:match1
/very/long/path/to/file.py:15:match2
/very/long/path/to/file.py:20:match3
After:
/very/long/path/to/file.py:10:match1
file.py:15:match2
file.py:20:match3
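The dedup logic can be sketched as below, assuming matches arrive as `path:line:content`; fcontent's real implementation may differ (for one, this naive split breaks on paths containing colons, which the review flags separately):

```shell
# Sketch: print the full path for the first hit in each file, basename only
# for subsequent hits. `compact_paths` is an illustrative name.
compact_paths() {
  declare -A seen   # associative array: requires bash 4+
  local line path rest
  while IFS= read -r line; do
    path="${line%%:*}"     # everything before the first ':'
    rest="${line#*:}"      # line number and content
    if [[ -z "${seen[$path]:-}" ]]; then
      seen["$path"]=1
      printf '%s\n' "$line"                     # first hit: full path
    else
      printf '%s:%s\n' "${path##*/}" "$rest"    # later hits: basename only
    fi
  done
}

printf '%s\n' '/very/long/path/to/file.py:10:match1' \
              '/very/long/path/to/file.py:15:match2' | compact_paths
```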
Summary by CodeRabbit

Release Notes

Walkthrough

This PR adds path filtering to fsearch (--include/--exclude with wildcard support), introduces flog as a new Fusion socket log viewer, adds deploy-fsuite.sh for automated deployment orchestration to Bertta/RADis infrastructure, enhances fcontent output formatting, and provides comprehensive deployment documentation with SSH configuration guidance.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Local as Local Host
    participant Bertta
    participant RADi as RADi (Multiple)
    User->>Local: Run deploy-fsuite.sh
    Local->>Local: Validate prerequisites<br/>(packages, help text)
    Local->>Bertta: Check SSH connectivity
    alt Check Mode
        Bertta-->>Local: Report version/status
        Local-->>User: Display deployment status
    else Deploy Mode
        Local->>Bertta: Transfer .deb packages<br/>to /var/db/fusion
        Local->>Bertta: Install/upgrade fsuite<br/>and dependencies
        Bertta-->>Local: Confirm fsuite version
        Local->>Bertta: Retrieve RADi inventory<br/>(names & IPs)
        loop For Each RADi
            Bertta->>RADi: Deploy SSH public keys<br/>(Bertta + Local host)
            RADi-->>Bertta: Keys installed
            Local->>RADi: Transfer fsuite .deb<br/>via Bertta proxy
            RADi->>RADi: Install fsuite via dpkg
            RADi-->>Local: Confirm fsuite version
            Local->>Local: Append RADi SSH config<br/>entry (if not duplicate)
        end
        Local-->>User: Deployment complete<br/>with banner & examples
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
🚥 Pre-merge checks: ❌ 1 failed (1 warning) | ✅ 2 passed
Actionable comments posted: 20
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/test_telemetry.sh (1)
131-148: 🧹 Nitpick | 🔵 Trivial. Line 134 is dead code after the cleanup reorder.
After `rm -f` on line 133, the file will never exist on line 134, so `before` is always 0. The test still passes correctly — `before=0` is the right initial state — but the conditional assignment is unreachable. Consider removing line 134, or dropping the `rm -f` if the intent was to measure a delta against a pre-existing file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_telemetry.sh` around lines 131 - 148, In test_tier0_no_telemetry() the conditional assignment [[ -f "$HOME/.fsuite/telemetry.jsonl" ]] && before=$(wc -l < "$HOME/.fsuite/telemetry.jsonl") is dead because rm -f was called just before; either remove that conditional assignment (keep local before=0) or, if you intended to measure an existing file, move the rm -f after computing before; update the test_tier0_no_telemetry function accordingly so before is set consistently with the chosen intent.
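If the second option is chosen (measure a delta against a pre-existing file), the corrected ordering looks like the sketch below; `count_then_clear` is a hypothetical helper, not code from the test:

```shell
# Sketch: compute `before` from the existing file first, THEN remove it.
# (The committed test instead wants before=0, in which case the conditional
# read should simply be deleted.)
count_then_clear() {
  local telemetry=$1 before=0
  if [[ -f "$telemetry" ]]; then
    before=$(wc -l < "$telemetry")
  fi
  rm -f "$telemetry"
  echo "$before"
}
```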
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (13)
.gitignore, README.md, debian/changelog, deploy-fsuite.sh, docs/DeploymentLogSuccessSetupBertta.md, docs/FSUITE-BUILD-SUMMARY.md, docs/REMOTE-ACCESS-CHEATSHEET.md, docs/SSH-RADI-CONFIG.md, fcontent, flog, fsearch, tests/test_fsearch.sh, tests/test_telemetry.sh
🧰 Additional context used
📓 Path-based instructions (4)
docs/**
⚙️ CodeRabbit configuration file
docs/**: Check for: accurate CLI examples, no hardcoded personal paths or credentials,
and clear explanations of flags and defaults.
Files:
docs/DeploymentLogSuccessSetupBertta.md, docs/REMOTE-ACCESS-CHEATSHEET.md, docs/FSUITE-BUILD-SUMMARY.md, docs/SSH-RADI-CONFIG.md
fsearch
⚙️ CodeRabbit configuration file
fsearch: Focus on: pattern normalization (glob/extension heuristics), backend selection
(fd/fdfind vs find), and safe handling of user input (quoting, no word-splitting
regressions). Prefer correctness and stability over micro-optimizations.
Files:
fsearch
fcontent
⚙️ CodeRabbit configuration file
fcontent: Focus on: rg invocation safety (args handling), stdin file list handling,
truncation accounting (SHOWN_MATCHES/TOTAL_FILES), and JSON/pretty output
correctness. Avoid changes that could leak private paths in examples.
Files:
fcontent
README.md
⚙️ CodeRabbit configuration file
README.md: Check for: accurate examples, correct filenames/paths, and no stale references.
Files:
README.md
🪛 LanguageTool
docs/DeploymentLogSuccessSetupBertta.md
[grammar] ~1-~1: Ensure spelling is correct
Context: # Sample run of deploymment script ```sh ● Bash(./deploy-fsuite.s...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🪛 markdownlint-cli2 (0.21.0)
docs/FSUITE-BUILD-SUMMARY.md
[warning] 9-9: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 286-286: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
docs/SSH-RADI-CONFIG.md
[warning] 18-18: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
[warning] 523-523: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 524-524: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
[warning] 532-532: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 533-533: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
[warning] 564-564: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 565-565: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
[warning] 576-576: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 577-577: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
[warning] 585-585: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 586-586: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
[warning] 606-606: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🪛 Shellcheck (0.11.0)
tests/test_fsearch.sh
[info] 163-173: This function is never invoked. Check usage (or ignored if invoked indirectly).
(SC2329)
[info] 175-185: This function is never invoked. Check usage (or ignored if invoked indirectly).
(SC2329)
deploy-fsuite.sh
[info] 154-154: Note that, unescaped, this expands on the client side.
(SC2029)
[info] 159-159: Note that, unescaped, this expands on the client side.
(SC2029)
[warning] 330-330: DRY_RUN appears unused. Verify use (or export if used externally).
(SC2034)
tests/test_telemetry.sh
[info] 411-422: This function is never invoked. Check usage (or ignored if invoked indirectly).
(SC2329)
🔇 Additional comments (8)
.gitignore (1)
2-4: LGTM.

debian/changelog (1)

1-7: LGTM. Changelog entry is well-structured and consistent with existing entries.

tests/test_telemetry.sh (1)

411-422: LGTM. Test correctly validates telemetry flag propagation for include/exclude filters.

README.md (2)
842-849: LGTM. Changelog and documentation updates are accurate and consistent with the implementation.
176-181: LGTM. Examples are correct and demonstrate both exclusion-only and combined include/exclude workflows.

tests/test_fsearch.sh (2)
163-194: LGTM. The three new include/exclude tests cover the key scenarios (include-only, exclude-only, combined) and the expected counts are consistent with the test fixtures.
25-40: LGTM. Test fixtures correctly extended to include the `node_modules` structure for exercising the exclude filtering.

fsearch (1)
490-518: The claimed bug does not exist on current bash. `FILTERED_RESULTS` is explicitly initialized on line 491 (`FILTERED_RESULTS=()`) before its expansion on line 517, so it is a bound (declared) variable, not an unbound one. Under `set -u`, an explicitly assigned empty array is safe on bash >= 4.4; note that bash 4.3 and earlier do treat `"${arr[@]}"` of an empty array as unbound, so the guard matters only if older bash must be supported. Likely an incorrect or invalid review comment for current targets.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@deploy-fsuite.sh`:
- Around line 322-330: The script declares and parses DRY_RUN (via --dry-run)
but never uses it; either remove the flag and its docs or implement dry-run
behavior by gating side-effecting operations. Add a single helper (e.g., execute
or run_cmd) that checks DRY_RUN and either echoes the command (when DRY_RUN=1)
or actually runs it, and replace direct calls to
ssh/scp/systemctl/docker/rsync/any file writes and service restarts with that
helper; ensure variables BERTTA_ONLY, UPDATE_SSH_ONLY and SKIP_SSH_KEYS logic
still work when using the helper so the --dry-run behavior simulates the exact
actions without making changes.
- Line 132: Introduce a single variable (e.g., TARGET_FSUITE_VERSION="1.3.0") at
the top of the script and replace all hardcoded version literals ("1.3.0",
"1.2.0", "1.0.0") with references to that variable; update the if-check that
currently uses installed_version (if [[ "$installed_version" == "1.3.0" ]]) to
compare against "$TARGET_FSUITE_VERSION". For the ftree check that currently
compares against "1.2.0", either reference a separate FTREE_VERSION or
FTREE_MIN_COMPATIBLE_VERSION variable and use that constant instead of a
literal, or change the verification logic to detect fsuite installation by a
capability/file rather than comparing ftree’s version; update all occurrences
(installed_version checks and any other string matches) to use these variables
so version values are centralized.
- Around line 340-344: The script currently advertises --update-ssh-config but
the handler uses UPDATE_SSH_ONLY and immediately dies in deploy-fsuite.sh;
either remove the flag from the usage/help and the UPDATE_SSH_ONLY
parsing/variable, or implement the behavior instead of dying. Locate the flag
parsing and help text that set UPDATE_SSH_ONLY and the conditional block that
checks "if (( UPDATE_SSH_ONLY == 1 )); then" (and the die/log_info calls) and
either: 1) delete the flag handling and any mentions in usage/help so it's not
accepted/advertised, or 2) replace the die with a real implementation (or an
early, clear error at parse time rejecting the flag) so the flag is not
advertised but unhandled later. Ensure consistency between the usage text, the
flag variable UPDATE_SSH_ONLY, and the conditional block around log_info/die.
- Around line 17-19: Remove the hardcoded RADI_PASS value and change the script
to read credentials from a secure source: load RADI_USER and RADI_PASS from
environment variables (e.g., require RADI_USER and RADI_PASS to be set) or read
RADI_PASS from a file with restricted permissions (e.g., a dotfile only readable
by the user), and fail fast with a clear message if missing; update all uses of
RADI_PASS (references to RADI_PASS and any sshpass -p / echo ... | sudo -S
invocations) to use the variable or a more secure auth method (SSH
key/ssh-agent) rather than embedding the password on the command line or in
process listings.
- Around line 154-159: The SSH commands in deploy-fsuite.sh expand and pass
$RADI_PASS client-side and unquoted (lines invoking ssh "$bertta" "echo
$RADI_PASS | sudo -S dpkg -i ..."), which leaks the password and breaks on
special chars; update the ssh invocations that run dpkg (the two occurrences
referencing $RADI_PASS and dpkg -i) to stop client-side expansion and quote the
password properly on the remote side (e.g., send the password via stdin or use a
remote-safe printf '%s' with quoted variable), or better yet remove the piped
echo entirely and switch to a secure method (configure sudo NOPASSWD for the
dpkg -i command or use ssh to run sudo --stdin with a heredoc) so the password
is not exposed in command arguments or /proc; ensure you reference RADI_PASS
only inside the remote shell context and always quote it (\"$RADI_PASS\") if you
must pass it.
In `@docs/DeploymentLogSuccessSetupBertta.md`:
- Line 8: The log in docs/DeploymentLogSuccessSetupBertta.md contains a
hardcoded personal path in the line starting with "[OK] Found fsuite package:";
update the code that produces this message to sanitize paths before writing docs
by replacing user-home segments with a generic placeholder (e.g., "~" or
"<agent_path>") or by emitting only the filename via path.basename; locate the
generator that emits the "[OK] Found fsuite package:" message and change it to
use a path-sanitization helper (or call path.basename on the package path) so no
personal directories appear in committed docs.
- Line 1: Replace the misspelled word in the markdown header "# Sample run of
deploymment script" by changing "deploymment" to "deployment" so the header
reads "# Sample run of deployment script"; update only that header text in the
file to correct the typo.
In `@docs/FSUITE-BUILD-SUMMARY.md`:
- Around line 222-232: Replace the hardcoded password 'fusionproject' in the
deployment example (the sshpass -p 'fusionproject' and the echo fusionproject |
sudo -S parts) with a placeholder or environment variable (e.g. <RADI_PASSWORD>
or $RADI_PASSWORD) and update the example text to reference that variable so
credentials are not embedded directly in the command; ensure both occurrences in
the ssh command are changed and add a short note that the user should export the
env var before running the command.
- Around line 315-327: The document docs/FSUITE-BUILD-SUMMARY.md contains
hardcoded absolute developer paths (e.g., instances of
"/home/player2vscpu/Desktop/agent/fsuite/..."); replace those with relative
paths or generic placeholders such as "<repo_root>/flog",
"<repo_root>/fsuite_1.3.0_amd64.deb", or
"<build_dir>/ripgrep-15.1.0-x86_64-unknown-linux-musl/" so no personal home
directories remain; update the table rows that list File paths (the entries
shown under "In Build Directory" and the earlier file list) to use the
placeholders consistently and verify the README still reads clearly.
In `@docs/REMOTE-ACCESS-CHEATSHEET.md`:
- Around line 142-156: The document contains hardcoded credentials
('fusionproject') and internal host/IPs in the sshpass/ssh and dpkg examples
(e.g., the ssh command and package deployment snippets); remove these secrets
and replace them with environment-variable placeholders (e.g., $RADI_PASSWORD)
in the sshpass and ssh examples, remove or redact the credentials table, and
scrub internal hostnames/IPs/DHCP details or replace them with generic
placeholders (e.g., JUMP_HOST, RADI_HOST, INTERNAL_IP). If the file must remain
in the repository, update the examples to use placeholders and add a note
instructing readers to source credentials from a secure secret store; if the
file should not be public, remove it from version control and perform a git
history purge/credential rotation to eliminate the leaked 'fusionproject'
secret.
In `@docs/SSH-RADI-CONFIG.md`:
- Around line 58-59: Update the SSH config docs to explicitly call out the MITM
risk introduced by using "StrictHostKeyChecking no" and "UserKnownHostsFile
/dev/null" and either recommend switching to "StrictHostKeyChecking accept-new"
for safer automatic acceptance of new hosts or clearly document the accepted
tradeoff for ephemeral hosts; edit the occurrences referencing these settings
(the lines showing StrictHostKeyChecking and UserKnownHostsFile) to add a short
explanatory note about the security implications and the safer alternative.
- Line 24: Replace the hardcoded password in the ssh command that uses sshpass
(the string "sshpass -p 'fusionproject'") with a placeholder or environment
variable (e.g., "sshpass -p \"$RADI_PASSWORD\"" or reference <RADI_PASSWORD>)
and update the surrounding text to instruct readers to export RADI_PASSWORD or
use a secrets manager; ensure all occurrences of "sshpass -p 'fusionproject'"
(including the example command with ssh bertta103 and the inner ssh to
fusion@10.11.40.21 'flog tower') are updated consistently so no plaintext
credential remains in the document or history examples.
- Around line 523-533: Several Markdown headings and fenced code blocks (e.g.,
the "### Before (Old Way)" and "### After (New Way)" headings and their
```bash``` blocks) are missing required blank lines; update the Markdown to
comply with MD022/MD031 by inserting a blank line before each heading and
ensuring there is a blank line both immediately before and immediately after
each fenced code block (triple backticks) so the headings and code blocks are
properly separated.
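For reference, the safer alternative mentioned in the SSH-RADI-CONFIG item above can be expressed as an ssh_config fragment; the host pattern and known-hosts path here are placeholders, and `accept-new` requires OpenSSH 7.6 or later:

```
# Accept keys from hosts seen for the first time, but still reject
# changed keys (protects against MITM on subsequent connections).
Host radi-*
    StrictHostKeyChecking accept-new
    UserKnownHostsFile ~/.ssh/known_hosts_radi
```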
In `@fcontent`:
- Around line 473-503: The regex in the conditional that parses each entry in
MATCH_LINES is too restrictive (it uses ^([^:]+):([0-9]+):(.*)) and breaks when
filenames contain colons; change the pattern used in the if [[ "$line" =~ ... ]]
test to allow a greedy filename capture (e.g. ^(.+):([0-9]+):(.*)) so _cur_file,
_cur_line and _cur_text are populated correctly via BASH_REMATCH; update that
regex in the block that sets _cur_file/_cur_line/_cur_text and keep the rest of
the logic using _SEEN_FILES and basename unchanged.
In `@flog`:
- Around line 455-468: cmd_search currently pipes the entire "$LOG_FILE" through
multiple greps which is slow on huge logs; limit input by tailing the file first
or by using grep's match limit. Modify the search pipelines (inside cmd_search
where RAW_MODE / OUTPUT branches build search_lines) to read from tail -n
"$LINES" "$LOG_FILE" | grep -E "$ACTIVE_INCLUDE" ... (and then the rest of the
pipeline: grep -v -E "$ACTIVE_EXCLUDE" | grep -i -E "$PATTERN" | ...), or
alternatively add grep -m "$LINES" to the pattern match step; update all
branches (RAW_MODE, OUTPUT == "slim", "npi", and default) so they all bound
their input using tail -n "$LINES" (or grep -m) to avoid scanning the whole
file.
- Around line 238-251: The option handlers for -f|--file, -o|--output and
-n|--lines use "${2:-}" and then unconditionally run shift 2 which will fail
silently under set -e when the argument is missing; update the case branches
that set LOG_FILE, OUTPUT and LINES to explicitly validate that an argument
exists (e.g., check that "${2:-}" is non-empty) and call die with a clear
message if missing, otherwise assign the value and then perform shift 2 so shift
never runs with insufficient positional parameters.
- Around line 205-208: The json_escape function currently only escapes
backslashes, quotes and tabs then replaces newlines with spaces, which leaves
raw newlines/carriage-returns and other control characters that can break JSON;
update json_escape to first escape literal newlines and carriage returns
(replace LF with \n and CR with \r) and also escape control characters in the
0x00–0x1F range (e.g., as \u00XX) before you collapse newlines, keeping the
existing escapes for backslash, quote and tab; locate the json_escape function
and perform the additional escaping steps (escape \n and \r and map control
bytes to \u00XX) prior to the final tr that converts newlines to spaces.
- Around line 1-7: The VERSION constant in the script (VERSION="1.2.0")
conflicts with the documented version in the build summary (v1.0.0); decide
which is authoritative and make them consistent: either update the script's
VERSION variable to "1.0.0" to match the docs, or update the build summary
entries that reference flog (the header and the --version output) to "1.2.0".
Locate the VERSION symbol in the flog script and the two occurrences in the
build summary and change them so all three show the same version string.
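The tail-bounding suggested for cmd_search above can be sketched as follows; the function name and explicit arguments are illustrative, since flog wires these values from its own globals:

```shell
# Sketch: bound the search input with tail so huge logs are never scanned
# end to end. `|| true` keeps "no matches" from being treated as an error.
bounded_search() {
  local log_file=$1 lines=$2 include=$3 exclude=$4 pattern=$5
  tail -n "$lines" "$log_file" \
    | grep -E "$include" \
    | grep -v -E "$exclude" \
    | grep -i -E "$pattern" || true
}

# usage: bounded_search "$LOG_FILE" "$LINES" "$ACTIVE_INCLUDE" "$ACTIVE_EXCLUDE" "$PATTERN"
```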
In `@fsearch`:
- Around line 112-126: path_matches_filter currently treats patterns with
wildcard chars using glob matching (unquoted $pattern in [[ ... ]]) while
non-wildcard patterns are matched as substrings via *"$pattern"*; update the
user-facing help/documentation to explicitly state this dual behavior (functions
has_wildcard and path_matches_filter) so users know that a plain pattern like
"src" will match any path containing "src" rather than only exact directory
names, and suggest documenting how to require exact matches (e.g., using anchors
or explicit glob like "src/" or patterns with wildcards) or providing an option
to switch to exact/equality matching.
- Around line 399-410: Declare include_match and exclude_match as local
variables to avoid leaking them into the global/script scope; inside the
filtering block that processes INCLUDE_PATTERNS/EXCLUDE_PATTERNS (the same block
that updates INCLUDE_PATTERNS, EXCLUDE_PATTERNS and _TELEM_FLAGS when handling
the I|--include and -x|--exclude cases), add a local declaration for
include_match and exclude_match before the loop or at the top of the block so
each iteration uses a locally scoped variable rather than polluting the
function/script scope.
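The dual matching behavior described for path_matches_filter above can be approximated as below; this is a sketch of the documented semantics, not fsearch's actual code:

```shell
# Patterns containing glob metacharacters match as globs against the whole
# path; plain patterns match as substrings.
has_wildcard() { [[ $1 == *[\*\?\[]* ]]; }

path_matches_filter() {
  local path=$1 pattern=$2
  if has_wildcard "$pattern"; then
    [[ $path == $pattern ]]        # glob match: pattern deliberately unquoted
  else
    [[ $path == *"$pattern"* ]]    # plain pattern: substring match
  fi
}
```

This is exactly the gotcha the comment asks to document: a plain `src` also matches a path like `mysrc2/file.c`, not only a `src/` directory component.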
---
Outside diff comments:
In `@tests/test_telemetry.sh`:
- Around line 131-148: In test_tier0_no_telemetry() the conditional assignment
[[ -f "$HOME/.fsuite/telemetry.jsonl" ]] && before=$(wc -l <
"$HOME/.fsuite/telemetry.jsonl") is dead because rm -f was called just before;
either remove that conditional assignment (keep local before=0) or, if you
intended to measure an existing file, move the rm -f after computing before;
update the test_tier0_no_telemetry function accordingly so before is set
consistently with the chosen intent.
```sh
# Credentials for RADi access
RADI_USER="fusion"
RADI_PASS="fusionproject"
```
Hardcoded credential in source code.
RADI_PASS="fusionproject" is committed to version control. This password is then passed via sshpass -p and echo ... | sudo -S throughout the script, making it visible in process listings (ps aux). At minimum, source it from an environment variable or a file with restricted permissions.
Proposed fix
```diff
-RADI_PASS="fusionproject"
+RADI_PASS="${RADI_PASS:-}"
+[[ -n "$RADI_PASS" ]] || die "RADI_PASS environment variable must be set"
```

🤖 Prompt for AI Agents
+[[ -n "$RADI_PASS" ]] || die "RADI_PASS environment variable must be set"🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deploy-fsuite.sh` around lines 17 - 19, Remove the hardcoded RADI_PASS value
and change the script to read credentials from a secure source: load RADI_USER
and RADI_PASS from environment variables (e.g., require RADI_USER and RADI_PASS
to be set) or read RADI_PASS from a file with restricted permissions (e.g., a
dotfile only readable by the user), and fail fast with a clear message if
missing; update all uses of RADI_PASS (references to RADI_PASS and any sshpass
-p / echo ... | sudo -S invocations) to use the variable or a more secure auth
method (SSH key/ssh-agent) rather than embedding the password on the command
line or in process listings.
```sh
log_info "fsuite $installed_version already installed on $bertta"

# Check if upgrade needed
if [[ "$installed_version" == "1.3.0" ]]; then
```
🛠️ Refactor suggestion | 🟠 Major
Version strings "1.3.0", "1.2.0", "1.0.0" are scattered as magic literals.
The target fsuite version is hardcoded in at least 4 places. Extract to a single variable (e.g., TARGET_FSUITE_VERSION="1.3.0") to avoid drift.
Also, line 165 checks for "1.2.0" (ftree's version) to verify fsuite installation — this is fragile and will silently report "issues" when ftree's version changes independently.
Also applies to: 165-165, 217-217, 230-230
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deploy-fsuite.sh` at line 132, Introduce a single variable (e.g.,
TARGET_FSUITE_VERSION="1.3.0") at the top of the script and replace all
hardcoded version literals ("1.3.0", "1.2.0", "1.0.0") with references to that
variable; update the if-check that currently uses installed_version (if [[
"$installed_version" == "1.3.0" ]]) to compare against "$TARGET_FSUITE_VERSION".
For the ftree check that currently compares against "1.2.0", either reference a
separate FTREE_VERSION or FTREE_MIN_COMPATIBLE_VERSION variable and use that
constant instead of a literal, or change the verification logic to detect fsuite
installation by a capability/file rather than comparing ftree’s version; update
all occurrences (installed_version checks and any other string matches) to use
these variables so version values are centralized.
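The centralization the prompt describes amounts to a few lines; the variable names below are suggestions, not existing code in deploy-fsuite.sh:

```shell
# Centralize target versions once near the top of the script.
TARGET_FSUITE_VERSION="1.3.0"
FTREE_VERSION="1.2.0"

installed_version="1.3.0"   # stand-in for the dpkg-query result
if [[ "$installed_version" == "$TARGET_FSUITE_VERSION" ]]; then
  echo "fsuite $TARGET_FSUITE_VERSION already installed"
fi
```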
```sh
  ssh "$bertta" "echo $RADI_PASS | sudo -S dpkg -i /var/db/fusion/tree_*.deb 2>&1" || true
fi

# Install fsuite
log_info "Installing fsuite on $bertta..."
ssh "$bertta" "echo $RADI_PASS | sudo -S dpkg -i /var/db/fusion/fsuite_1.3.0_amd64.deb 2>&1"
```
Unquoted $RADI_PASS in remote echo commands leaks password and risks word splitting.
Lines 154 and 159 use echo $RADI_PASS | sudo -S inside a double-quoted SSH command string. The password is expanded client-side (ShellCheck SC2029) and passed unquoted to echo, which means special characters in the password would cause breakage. Also, sudo -S with piped echo exposes the password in /proc/<pid>/cmdline.
Proposed fix (quote the variable at minimum)
```diff
- ssh "$bertta" "echo $RADI_PASS | sudo -S dpkg -i /var/db/fusion/tree_*.deb 2>&1" || true
+ ssh "$bertta" "echo '$RADI_PASS' | sudo -S dpkg -i /var/db/fusion/tree_*.deb 2>&1" || true
```

Note: Even with quoting, `echo password | sudo -S` is inherently insecure. Consider using sudo with NOPASSWD for the specific `dpkg -i` command on these hosts, or use sshpass + `sudo --stdin` with a heredoc.
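One stdin-based shape for the safer alternative, as a sketch (`deploy_pkg` is a hypothetical helper name): `sudo -S` reads the password from stdin, and ssh forwards local stdin to the remote command, so the secret never lands in argv or /proc on either host.

```shell
# Deliver the password over the ssh channel's stdin rather than embedding it
# in the remote command string.
deploy_pkg() {
  local host=$1 pkg=$2
  printf '%s\n' "$RADI_PASS" \
    | ssh "$host" "sudo -S -p '' dpkg -i '$pkg'"
}
```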
🧰 Tools
🪛 Shellcheck (0.11.0)
[info] 154-154: Note that, unescaped, this expands on the client side.
(SC2029)
[info] 159-159: Note that, unescaped, this expands on the client side.
(SC2029)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deploy-fsuite.sh` around lines 154 - 159, The SSH commands in
deploy-fsuite.sh expand and pass $RADI_PASS client-side and unquoted (lines
invoking ssh "$bertta" "echo $RADI_PASS | sudo -S dpkg -i ..."), which leaks the
password and breaks on special chars; update the ssh invocations that run dpkg
(the two occurrences referencing $RADI_PASS and dpkg -i) to stop client-side
expansion and quote the password properly on the remote side (e.g., send the
password via stdin or use a remote-safe printf '%s' with quoted variable), or
better yet remove the piped echo entirely and switch to a secure method
(configure sudo NOPASSWD for the dpkg -i command or use ssh to run sudo --stdin
with a heredoc) so the password is not exposed in command arguments or /proc;
ensure you reference RADI_PASS only inside the remote shell context and always
quote it (\"$RADI_PASS\") if you must pass it.
```sh
DRY_RUN=0
UPDATE_SSH_ONLY=0

while [[ $# -gt 0 ]]; do
  case "$1" in
    --bertta-only) BERTTA_ONLY=1; shift ;;
    --skip-ssh-keys) SKIP_SSH_KEYS=1; shift ;;
    --check) CHECK_ONLY=1; shift ;;
    --dry-run) DRY_RUN=1; shift ;;
```
DRY_RUN is parsed but never read.
The --dry-run flag is accepted and documented in usage, but DRY_RUN is never checked anywhere in the script. Either implement it or remove the flag to avoid misleading users.
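A sketch of the gating helper (the name `run_cmd` is a suggestion; route every side-effecting call through it so `--dry-run` prints instead of acting):

```shell
DRY_RUN=${DRY_RUN:-0}

# Echo the command under --dry-run, otherwise execute it verbatim.
run_cmd() {
  if (( DRY_RUN == 1 )); then
    printf '[dry-run] %s\n' "$*"
  else
    "$@"
  fi
}

run_cmd echo "scp fsuite_1.3.0_amd64.deb bertta:/var/db/fusion/"
```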
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 330-330: DRY_RUN appears unused. Verify use (or export if used externally).
(SC2034)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deploy-fsuite.sh` around lines 322 - 330, The script declares and parses
DRY_RUN (via --dry-run) but never uses it; either remove the flag and its docs
or implement dry-run behavior by gating side-effecting operations. Add a single
helper (e.g., execute or run_cmd) that checks DRY_RUN and either echoes the
command (when DRY_RUN=1) or actually runs it, and replace direct calls to
ssh/scp/systemctl/docker/rsync/any file writes and service restarts with that
helper; ensure variables BERTTA_ONLY, UPDATE_SSH_ONLY and SKIP_SSH_KEYS logic
still work when using the helper so the --dry-run behavior simulates the exact
actions without making changes.
```sh
if (( UPDATE_SSH_ONLY == 1 )); then
    log_info "Updating SSH config for all known RADis..."
    # TODO: Read from inventory file
    die "Not implemented yet. Use deploy with specific bertta."
fi
```
--update-ssh-config is advertised but dies with "Not implemented yet".
If this isn't ready, either remove it from usage text or don't accept the flag. Advertising a feature that immediately errors out is confusing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deploy-fsuite.sh` around lines 340 - 344, The script currently advertises
--update-ssh-config but the handler uses UPDATE_SSH_ONLY and immediately dies in
deploy-fsuite.sh; either remove the flag from the usage/help and the
UPDATE_SSH_ONLY parsing/variable, or implement the behavior instead of dying.
Locate the flag parsing and help text that set UPDATE_SSH_ONLY and the
conditional block that checks "if (( UPDATE_SSH_ONLY == 1 )); then" (and the
die/log_info calls) and either: 1) delete the flag handling and any mentions in
usage/help so it's not accepted/advertised, or 2) replace the die with a real
implementation (or an early, clear error at parse time rejecting the flag) so
the flag is not advertised but unhandled later. Ensure consistency between the
usage text, the flag variable UPDATE_SSH_ONLY, and the conditional block around
log_info/die.
```sh
# JSON escape
json_escape() {
    sed -e 's/\\/\\\\/g' -e 's/"/\\"/g' -e 's/\t/\\t/g' | tr '\n' ' '
}
```
`json_escape` doesn't handle newlines, carriage returns, or control characters.
The function escapes `\`, `"`, and `\t`, then collapses newlines with `tr`. Raw newlines inside a log line (e.g., multi-line stack traces) could still produce invalid JSON. At minimum, literal newlines and carriage returns should be escaped before they're stripped.
Proposed fix

```diff
 json_escape() {
-    sed -e 's/\\/\\\\/g' -e 's/"/\\"/g' -e 's/\t/\\t/g' | tr '\n' ' '
+    sed -e 's/\\/\\\\/g' -e 's/"/\\"/g' -e 's/\t/\\t/g' -e $'s/\r/\\\\r/g' | tr '\n' ' '
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```sh
# JSON escape
json_escape() {
    sed -e 's/\\/\\\\/g' -e 's/"/\\"/g' -e 's/\t/\\t/g' -e $'s/\r/\\\\r/g' | tr '\n' ' '
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@flog` around lines 205 - 208, The json_escape function currently only escapes
backslashes, quotes and tabs then replaces newlines with spaces, which leaves
raw newlines/carriage-returns and other control characters that can break JSON;
update json_escape to first escape literal newlines and carriage returns
(replace LF with \n and CR with \r) and also escape control characters in the
0x00–0x1F range (e.g., as \u00XX) before you collapse newlines, keeping the
existing escapes for backslash, quote and tab; locate the json_escape function
and perform the additional escaping steps (escape \n and \r and map control
bytes to \u00XX) prior to the final tr that converts newlines to spaces.
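One portable way to cover the carriage-return case in pure bash is line-at-a-time parameter expansion; this is a sketch only, not the PR's implementation, and control bytes other than tab and CR are still not handled:

```shell
json_escape_line() {
  local s="$1"
  s=${s//\\/\\\\}      # backslash first, so later escapes aren't doubled
  s=${s//\"/\\\"}      # double quote
  s=${s//$'\t'/\\t}    # tab
  s=${s//$'\r'/\\r}    # carriage return
  printf '%s' "$s"
}
```

Escaping the backslash first matters: doing it last would re-escape the backslashes introduced by the other substitutions.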
```bash
while [[ $# -gt 0 ]]; do
  case "$1" in
    -f|--file) LOG_FILE="${2:-}"; shift 2 ;;
    -o|--output) OUTPUT="${2:-}"; shift 2 ;;
    -n|--lines) LINES="${2:-}"; shift 2 ;;
    -r|--raw) RAW_MODE=1; shift ;;
    --no-color) NO_COLOR=1; shift ;;
    --self-check) SELF_CHECK=1; shift ;;
    -h|--help) usage; exit 0 ;;
    --version) echo "flog $VERSION"; exit 0 ;;
    -*) die "Unknown option: $1 (try --help)" ;;
    *) POSITIONAL+=("$1"); shift ;;
  esac
done
```
shift 2 exits silently when option argument is missing.
If a user runs flog -f (without a path), ${2:-} prevents the unbound-variable error, but shift 2 fails because only 1 positional parameter remains. Under set -e, bash exits with no error message. Same problem for -o and -n.
Proposed fix

```diff
-    -f|--file) LOG_FILE="${2:-}"; shift 2 ;;
-    -o|--output) OUTPUT="${2:-}"; shift 2 ;;
-    -n|--lines) LINES="${2:-}"; shift 2 ;;
+    -f|--file) [[ $# -ge 2 ]] || die "-f/--file requires an argument"; LOG_FILE="$2"; shift 2 ;;
+    -o|--output) [[ $# -ge 2 ]] || die "-o/--output requires an argument"; OUTPUT="$2"; shift 2 ;;
+    -n|--lines) [[ $# -ge 2 ]] || die "-n/--lines requires an argument"; LINES="$2"; shift 2 ;;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@flog` around lines 238 - 251, The option handlers for -f|--file, -o|--output
and -n|--lines use "${2:-}" and then unconditionally run shift 2 which will fail
silently under set -e when the argument is missing; update the case branches
that set LOG_FILE, OUTPUT and LINES to explicitly validate that an argument
exists (e.g., check that "${2:-}" is non-empty) and call die with a clear
message if missing, otherwise assign the value and then perform shift 2 so shift
never runs with insufficient positional parameters.
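A common pattern that avoids repeating the check in every branch is a small helper; this is a sketch with assumed names (`die` mirrors the script's existing helper, `need_arg` is invented here):

```shell
die() { echo "error: $*" >&2; exit 1; }

# need_arg OPTION REMAINING_ARGC: fail loudly if the option has no value.
need_arg() {
  (( $2 >= 2 )) || die "$1 requires an argument"
}

parse() {
  LOG_FILE="" OUTPUT=""
  while [[ $# -gt 0 ]]; do
    case "$1" in
      -f|--file)   need_arg "$1" $#; LOG_FILE="$2"; shift 2 ;;
      -o|--output) need_arg "$1" $#; OUTPUT="$2";   shift 2 ;;
      *) shift ;;
    esac
  done
}
```

Because `need_arg` runs before `shift 2`, the shift can never execute with fewer than two positional parameters, so `set -e` never kills the script silently.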
```bash
cmd_search() {
  [[ -z "$PATTERN" ]] && die "search requires a pattern"

  local search_lines

  if (( RAW_MODE == 1 )); then
    mapfile -t search_lines < <(grep -E "$ACTIVE_INCLUDE" "$LOG_FILE" | grep -v -E "$ACTIVE_EXCLUDE" | grep -i -E "$PATTERN" | tail -n "$LINES")
  elif [[ "$OUTPUT" == "slim" ]]; then
    mapfile -t search_lines < <(grep -E "$ACTIVE_INCLUDE" "$LOG_FILE" | grep -v -E "$ACTIVE_EXCLUDE" | grep -i -E "$PATTERN" | clean_line_slim | tail -n "$LINES")
  elif [[ "$OUTPUT" == "npi" ]]; then
    mapfile -t search_lines < <(grep -E "$ACTIVE_INCLUDE" "$LOG_FILE" | grep -v -E "$ACTIVE_EXCLUDE" | grep -i -E "$PATTERN" | clean_line_npi | tail -n "$LINES")
  else
    mapfile -t search_lines < <(grep -E "$ACTIVE_INCLUDE" "$LOG_FILE" | grep -v -E "$ACTIVE_EXCLUDE" | grep -i -E "$PATTERN" | clean_line_pretty | tail -n "$LINES")
  fi
```
cmd_search scans the entire log file without tail — slow on large logs.
Unlike cmd_snapshot and cmd_errors which use tail -n $fetch_lines to bound input, cmd_search runs grep over the full file. Socket logger files can grow to hundreds of MB. Consider adding a similar tail-based bound or --max-count to grep.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@flog` around lines 455 - 468, cmd_search currently pipes the entire
"$LOG_FILE" through multiple greps which is slow on huge logs; limit input by
tailing the file first or by using grep's match limit. Modify the search
pipelines (inside cmd_search where RAW_MODE / OUTPUT branches build
search_lines) to read from tail -n "$LINES" "$LOG_FILE" | grep -E
"$ACTIVE_INCLUDE" ... (and then the rest of the pipeline: grep -v -E
"$ACTIVE_EXCLUDE" | grep -i -E "$PATTERN" | ...), or alternatively add grep -m
"$LINES" to the pattern match step; update all branches (RAW_MODE, OUTPUT ==
"slim", "npi", and default) so they all bound their input using tail -n "$LINES"
(or grep -m) to avoid scanning the whole file.
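The suggested bound can be sketched as follows; the oversampling factor and `FETCH_LINES` name are assumptions (mirroring what the review says `cmd_snapshot` already does), not values from the PR:

```shell
# Build a throwaway log, then search only a tail window instead of the whole file.
LOG_FILE=$(mktemp)
seq 1 100000 | sed 's/$/ INFO ok/' > "$LOG_FILE"
echo "100001 ERROR boom" >> "$LOG_FILE"

LINES=10
FETCH_LINES=$(( LINES * 100 ))  # oversample: the filters may drop most lines

# tail first so grep never scans the full file, then trim to the final count
result=$(tail -n "$FETCH_LINES" "$LOG_FILE" | grep -i -E 'error' | tail -n "$LINES")
rm -f "$LOG_FILE"
echo "$result"
```

The tradeoff is recall: matches older than the tail window are missed, which is why oversampling (or `grep -m`) is worth tuning rather than tailing exactly `$LINES` lines.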
```bash
has_wildcard() {
  local token="$1"
  [[ "$token" == *'*'* || "$token" == *'?'* || "$token" == *'['* || "$token" == *']'* ]]
}

path_matches_filter() {
  local path="$1"
  local pattern="$2"

  if has_wildcard "$pattern"; then
    [[ "$path" == $pattern ]]
  else
    [[ "$path" == *"$pattern"* ]]
  fi
}
```
path_matches_filter glob matching works correctly but has an edge case with patterns containing extglob characters.
The unquoted $pattern in [[ "$path" == $pattern ]] (line 122) is the correct bash idiom for glob matching inside [[ ]]. No word-splitting occurs in this context.
One subtlety: if a user passes a pattern like src (no wildcard), has_wildcard returns false, and the match falls through to substring containment (*"$pattern"*). This means -I src matches any path containing "src" anywhere (e.g., mysrc/file.txt or src-backup/file.txt). The help text says "Pattern supports wildcard matching if it contains *, ?, or [ ]" but doesn't clarify that non-wildcard patterns do substring matching. This could surprise users expecting exact directory matching.
Consider documenting this explicitly in the help text.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@fsearch` around lines 112 - 126, path_matches_filter currently treats
patterns with wildcard chars using glob matching (unquoted $pattern in [[ ...
]]) while non-wildcard patterns are matched as substrings via *"$pattern"*;
update the user-facing help/documentation to explicitly state this dual behavior
(functions has_wildcard and path_matches_filter) so users know that a plain
pattern like "src" will match any path containing "src" rather than only exact
directory names, and suggest documenting how to require exact matches (e.g.,
using anchors or explicit glob like "src/" or patterns with wildcards) or
providing an option to switch to exact/equality matching.
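The dual behavior can be seen directly; the functions below are copied from the diff above, and the example paths are made up:

```shell
has_wildcard() {
  local token="$1"
  [[ "$token" == *'*'* || "$token" == *'?'* || "$token" == *'['* || "$token" == *']'* ]]
}

path_matches_filter() {
  local path="$1" pattern="$2"
  if has_wildcard "$pattern"; then
    [[ "$path" == $pattern ]]      # glob match (unquoted on purpose)
  else
    [[ "$path" == *"$pattern"* ]]  # substring containment
  fi
}

path_matches_filter "mysrc/file.txt" "src"   && echo "plain 'src' matches mysrc/ (substring)"
path_matches_filter "src/main.c"    "src/*"  && echo "'src/*' matches src/main.c (glob)"
path_matches_filter "mysrc/file.txt" "src/*" || echo "'src/*' does not match mysrc/ (glob)"
```

So a user who wants directory-anchored matching can opt into glob semantics by adding a wildcard (e.g. `src/*`), which is the behavior worth spelling out in the help text.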
```bash
-I|--include)
  [[ -n "${2:-}" ]] || die "Missing value for --include"
  INCLUDE_PATTERNS+=("$2")
  _TELEM_FLAGS+=" -I $2"
  shift 2
  ;;
-x|--exclude)
  [[ -n "${2:-}" ]] || die "Missing value for --exclude"
  EXCLUDE_PATTERNS+=("$2")
  _TELEM_FLAGS+=" -x $2"
  shift 2
  ;;
```
🧹 Nitpick | 🔵 Trivial
Variables include_match and exclude_match (lines 494, 505) are not declared local.
They leak into the function/script scope. Not a correctness bug here, but inconsistent with the rest of the script's style.
Fix

Add at the top of the filtering block (after line 491):

```diff
 FILTERED_RESULTS=()
 for item in "${RESULTS[@]}"; do
+  local include_match=0 exclude_match=0
   if (( ${#INCLUDE_PATTERNS[@]} > 0 )); then
-    include_match=0
```

Note: local inside a loop iteration in bash re-declares each time through, which is fine. Alternatively, declare them before the loop.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@fsearch` around lines 399 - 410, Declare include_match and exclude_match as
local variables to avoid leaking them into the global/script scope; inside the
filtering block that processes INCLUDE_PATTERNS/EXCLUDE_PATTERNS (the same block
that updates INCLUDE_PATTERNS, EXCLUDE_PATTERNS and _TELEM_FLAGS when handling
the I|--include and -x|--exclude cases), add a local declaration for
include_match and exclude_match before the loop or at the top of the block so
each iteration uses a locally scoped variable rather than polluting the
function/script scope.
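A self-contained sketch of the alternative (declaring the flags local once, before the loop); the array and variable names follow the review, while the loop body is simplified here to plain substring checks:

```shell
filter_results() {
  local item pat include_match exclude_match   # scoped once, before the loop
  FILTERED_RESULTS=()
  for item in "${RESULTS[@]}"; do
    include_match=1
    if (( ${#INCLUDE_PATTERNS[@]} > 0 )); then
      include_match=0
      for pat in "${INCLUDE_PATTERNS[@]}"; do
        [[ "$item" == *"$pat"* ]] && include_match=1
      done
    fi
    exclude_match=0
    for pat in "${EXCLUDE_PATTERNS[@]}"; do
      [[ "$item" == *"$pat"* ]] && exclude_match=1
    done
    if (( include_match == 1 && exclude_match == 0 )); then
      FILTERED_RESULTS+=("$item")
    fi
  done
}
```

Either placement keeps the flags out of the caller's scope; declaring them before the loop just avoids re-running `local` on every iteration.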
Summary

This PR adds production-grade path filtering to fsearch and wires it through telemetry, docs, and tests.

Changes
- fsearch: add -I/--include with repeatable semantics (OR logic across values)
- fsearch: add -x/--exclude with repeatable semantics (OR logic across values)
- Wildcard matching when a pattern contains *, ?, or [ ]
- New flags recorded in the telemetry flags field
- Bump fsearch version to 1.6.2 (package 1.6.2-1)
- Update README.md with -I/-x documentation for v1.6.2
- Extend tests/test_telemetry.sh to cover fsearch -I/-x

Validation
- bash -n fsearch
- bash -n tests/test_fsearch.sh
- bash tests/test_fsearch.sh
- bash tests/test_telemetry.sh
- bash tests/run_all_tests.sh

Notes

This keeps behavior backwards-compatible for existing fsearch usage while improving precision and composability for downstream pipelines (fsearch | fcontent/fmap).