0.3.0 #64
Conversation
…e for better state management
…ity and debugging
…on; refactor code structure and update configuration handling
…rotobuf compilation process and remove unused dependencies
Walkthrough

Replaces the old JSON conf loader and observability with a typed config module and tracing; adds protobuf-based request types, a NATS client and ProxyService, central SharedState wiring, build/proto tooling, Docker/CI/Make modernizations, and client-IP extraction with tests.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant Client
participant Handler as HTTP Handler
participant State as SharedState
participant Proto as Protobuf Encoder
participant Proxy as ProxyService
participant Nats as NatsClient
participant NATS as NATS Server
Client->>Handler: HTTP request (headers, body)
Handler->>State: access SharedState via State<>
Handler->>Handler: extract client_ip, authorization
Handler->>Proto: build HttpRequest from components
Proto->>Handler: protobuf bytes
Handler->>Proxy: proxy_request(HttpRequest, request_id)
Proxy->>Nats: request_with_headers(subject, headers, payload)
Nats->>NATS: send request
NATS-->>Nats: response message
Nats-->>Proxy: return message
Proxy->>Handler: (StatusCode, body)
    Handler->>Client: HTTP response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs
Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
✨ Finishing touches: 🧪 Generate unit tests (beta)
Actionable comments posted: 27
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
tests/integration_test.rs (3)
42-45: Consider using `#[ignore]` or a proper skip mechanism instead of println.

The current pattern silently skips tests when NATS is unavailable. Using `#[ignore]` with conditional compilation or the `rstest` crate's skip functionality would make test results clearer.

```rust
// Alternative: use #[ignore] and run with --ignored when NATS is available
#[tokio::test]
#[ignore = "requires NATS server"]
async fn test_full_application_health_check() {
    // ...
}
```
141-143: Test environment variable manipulation is not thread-safe.

Setting `CFG_PATH` via `env::set_var` and `env::remove_var` in tests can cause race conditions when tests run in parallel. Consider using a test-specific configuration loading method or a serial test attribute.

```rust
// Option 1: Use serial test execution
#[tokio::test]
#[serial_test::serial]
async fn test_configuration_integration() {
    // ...
}

// Option 2: Pass config path directly to Config::new_from_path()
```

This applies to `test_config_from_json_file`, `test_config_with_default_logging`, `test_invalid_port_in_config`, and `test_missing_config_file` in `src/config/settings.rs` as well.
267-273: Dead code: `nats_available()` always returns false and is unused.

The function is marked `#[allow(dead_code)]` but provides no utility. Consider removing it or implementing actual NATS availability detection.

```diff
-// Helper function to skip integration tests if NATS is not available
-#[allow(dead_code)]
-fn nats_available() -> bool {
-    // In real tests, you'd check if NATS is available
-    // For now, assume it's not available in test environments
-    false
-}
```

src/handlers.rs (1)
202-209: Status code parsing from string is fragile.

The `status_code` variable is a `String` that's later parsed back to `u16`. Since `status_code` is derived from `StatusCode.to_string()`, it contains the full status text (e.g., "200 OK"), not just the numeric code, making `parse::<u16>()` always fail and default to 500.

Use the numeric status code directly:

```diff
-    let status_code: String;
+    let mut status_num: u16 = 500;
     let resp = match client
         .request_with_headers(SUBJECT, headers, Bytes::from(buf))
         .await
     {
         Ok(response) => {
             // ... existing code ...
-            status_code = code.to_string();
+            status_num = code.as_u16();
             // ...
         }
         Err(e) => {
-            status_code = StatusCode::REQUEST_TIMEOUT.to_string();
+            status_num = 408;
             // ...
         }
     };

     // Record metrics if available
     if let Some(metrics) = &shared_state.metrics {
         let route_template = matched_path.as_str();
-        let status_num: u16 = status_code.parse().unwrap_or(500);
         metrics.record_http_request(method.as_str(), route_template, status_num, elapsed_time);
         metrics.record_nats_request(SUBJECT, status_num < 400, elapsed_time);
     }

     info!(
         request_id = %id,
         method = %method,
         route = %matched_path.as_str(),
-        status = status_code.as_str(),
+        status = status_num,
         elapsed_time_ms = elapsed_time.as_millis(),
         "request completed"
     );
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (38)
- .cargo/config.toml (1 hunks)
- .dockerignore (1 hunks)
- .github/workflows/main.yml (5 hunks)
- .gitignore (1 hunks)
- AGENTS.md (1 hunks)
- CLAUDE.md (1 hunks)
- Cargo.toml (2 hunks)
- Dockerfile (1 hunks)
- Makefile (2 hunks)
- VERSION (1 hunks)
- build.rs (1 hunks)
- config.json (1 hunks)
- http2.iml (1 hunks)
- proto/events.proto (1 hunks)
- src/client_ip.rs (1 hunks)
- src/conf.rs (0 hunks)
- src/config/mod.rs (1 hunks)
- src/config/settings.rs (1 hunks)
- src/domain/mod.rs (1 hunks)
- src/domain/types.rs (1 hunks)
- src/events.rs (0 hunks)
- src/handlers.rs (9 hunks)
- src/lib.rs (1 hunks)
- src/main.rs (1 hunks)
- src/observability.rs (0 hunks)
- src/responses/errors.rs (1 hunks)
- src/responses/statuses.rs (1 hunks)
- src/routes.rs (4 hunks)
- src/service/mod.rs (1 hunks)
- src/service/proxy.rs (1 hunks)
- src/signals.rs (2 hunks)
- src/transport/mod.rs (1 hunks)
- src/transport/nats.rs (1 hunks)
- src/transport/proto.rs (1 hunks)
- tests/client_ip_test.rs (1 hunks)
- tests/integration_test.rs (8 hunks)
- tests/proto_test.rs (1 hunks)
- tests/routes_test.rs (8 hunks)
💤 Files with no reviewable changes (3)
- src/events.rs
- src/conf.rs
- src/observability.rs
🧰 Additional context used
🧬 Code graph analysis (9)
src/client_ip.rs (1)
src/domain/types.rs (8)
new (19-25), new (46-48), new (86-88), new (193-201), empty (51-53), empty (91-93), is_empty (62-64), is_empty (103-105)
src/transport/nats.rs (2)
src/domain/types.rs (7)
as_str (56-58), as_str (97-99), as_str (204-206), new (19-25), new (46-48), new (86-88), new (193-201)
src/service/proxy.rs (1)
new(20-22)
src/main.rs (4)
src/routes.rs (1)
build_routes (16-242)
src/config/settings.rs (1)
new (53-70)
src/service/proxy.rs (1)
new (20-22)
src/transport/nats.rs (1)
connect(17-34)
src/service/proxy.rs (1)
src/config/settings.rs (1)
new(53-70)
src/transport/proto.rs (1)
src/domain/types.rs (4)
new (19-25), new (46-48), new (86-88), new (193-201)
src/config/settings.rs (1)
src/domain/types.rs (4)
new (19-25), new (46-48), new (86-88), new (193-201)
tests/routes_test.rs (1)
src/routes.rs (1)
build_routes(16-242)
src/routes.rs (1)
src/handlers.rs (2)
metrics_handler (51-82), proxy (90-221)
src/handlers.rs (3)
src/client_ip.rs (1)
extract_client_ip (14-64)
src/domain/types.rs (10)
as_str (56-58), as_str (97-99), as_str (204-206), from_axum_method (141-152), empty (51-53), empty (91-93), new (19-25), new (46-48), new (86-88), new (193-201)
src/transport/proto.rs (1)
from_components(32-71)
🪛 checkmake (0.2.2)
Makefile
[warning] 5-5: Target body for "help" exceeds allowed length of 5 (10).
(maxbodylength)
[warning] 3-3: Missing required phony target "all"
(minphony)
[warning] 3-3: Missing required phony target "clean"
(minphony)
🪛 Checkov (3.2.334)
Dockerfile
[low] 1-48: Ensure that HEALTHCHECK instructions have been added to container images
(CKV_DOCKER_2)
🪛 Hadolint (2.14.0)
Dockerfile
[warning] 5-5: Pin versions in apt get install. Instead of apt-get install <package> use apt-get install <package>=<version>
(DL3008)
[warning] 41-41: Pin versions in apt get install. Instead of apt-get install <package> use apt-get install <package>=<version>
(DL3008)
🪛 markdownlint-cli2 (0.18.1)
CLAUDE.md
18-18: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
18-18: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
87-87: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
134-134: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
138-138: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
141-141: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
144-144: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
191-191: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
195-195: Files should end with a single newline character
(MD047, single-trailing-newline)
AGENTS.md
3-3: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
9-9: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
17-17: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
23-23: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
29-29: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
34-34: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test
- GitHub Check: lint
- GitHub Check: build
🔇 Additional comments (50)
.dockerignore (2)
23-23: Verify `.orig` pattern specificity.

The pattern `*.orig` is broad and could match unintended files in your source tree (e.g., `config.orig.json`, or generated files with an `.orig` extension). Consider whether this needs to be more specific, or whether you want to rely on git/source control to exclude such files from the repo instead.

1-62: Well-structured `.dockerignore` with good Docker caching awareness.

The file is organized logically with clear sections and helpful comments. The inclusion of the "IMPORTANT for Docker caching" note shows good understanding of build optimization. Patterns appropriately exclude build artifacts (`target/`, `vendor/`), development files (IDE, macOS, local config), tests, and documentation for a production Docker image.

One minor concern is addressed separately regarding pattern specificity.
src/signals.rs (1)
6-7: Logging migration to `tracing` looks correct.

No behavior change; just ensure a subscriber is initialized in the binary so these logs are emitted.
Also applies to: 23-30, 37-37
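A minimal subscriber setup along these lines makes the library's `tracing` calls visible; this is an illustrative sketch assuming `tracing-subscriber` with the `env-filter` and `json` features enabled, not the binary's exact initialization code.

```rust
use tracing_subscriber::EnvFilter;

// Illustrative only: the real binary switches between JSON and
// human-readable output based on its logging config.
fn init_tracing(json: bool) {
    let filter = EnvFilter::try_from_default_env()
        .unwrap_or_else(|_| EnvFilter::new("info"));
    let builder = tracing_subscriber::fmt().with_env_filter(filter);
    if json {
        builder.json().init();
    } else {
        builder.init();
    }
}
```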
src/transport/mod.rs (1)
1-6: Public transport surface is clean and discoverable.

The module split + re-exports look sensible for a crate-level API.
src/responses/statuses.rs (1)
1-1: Good: idiomatic `serde` derive import.

http2.iml (1)

8-9: The `http2.iml` file is intentionally tracked in the repository (while `.idea/` is excluded in `.gitignore`). This is a valid pattern: sharing the IntelliJ module configuration across contributors while keeping user-specific IDE settings out of version control. No action needed.

.cargo/config.toml (1)

1-24: No action needed: build configuration is properly supported.

The Dockerfile and CI workflows both install `lld` and include `git` (via the standard Rust image). The Cargo config keys used (`[build] incremental`, `git-fetch-with-cli`, `[registries.crates-io] protocol = "sparse"`) are all standard and supported by Rust 1.89. No build-breaking issues detected.

src/config/mod.rs (1)

1-3: LGTM! The module structure and re-exports follow standard Rust conventions for organizing configuration types.
config.json (1)
17-20: LGTM! The structured logging configuration with JSON format and info level is a solid choice for production environments, providing machine-readable logs at an appropriate verbosity level.
src/responses/errors.rs (1)
1-1: LGTM! Migrating from `serde_derive::Serialize` to `serde::Serialize` is the modern approach when using serde with the derive feature enabled.

.github/workflows/main.yml (4)

22-25: LGTM! Installing `protobuf-compiler` for proto code generation and `lld` for faster linking is appropriate for the build requirements.

36-39: LGTM! Consistent installation of build dependencies in the test job ensures the environment matches the lint job.

69-71: LGTM! Extracting the version from the VERSION file enables dynamic tagging and aligns with semantic versioning practices.

91-93: LGTM! Dynamic version tags using `${{ steps.version.outputs.VERSION }}` replace hard-coded versions, making releases more maintainable and reducing the risk of stale image tags.

Also applies to: 104-106
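For reference, such steps typically look like the sketch below; the step ids, action version, and image name (`ghcr.io/stockwayup/http2`) are assumptions rather than the workflow's actual contents.

```yaml
- name: Read version
  id: version
  run: echo "VERSION=$(cat VERSION)" >> "$GITHUB_OUTPUT"

- name: Build and push image
  uses: docker/build-push-action@v6
  with:
    push: true
    tags: |
      ghcr.io/stockwayup/http2:${{ steps.version.outputs.VERSION }}
      ghcr.io/stockwayup/http2:latest
```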
src/service/mod.rs (1)
1-3: LGTM! The module declaration and re-export follow standard Rust patterns for exposing service layer components.
src/domain/mod.rs (1)
1-3: LGTM! The module structure and re-exports provide clean access to domain types, following idiomatic Rust module organization patterns.
tests/proto_test.rs (2)
5-53: LGTM! The test thoroughly validates protobuf round-trip serialization across all HttpRequest fields, including nested structures (Uri, Args). The direct construction approach (without HttpRequestInfo) is appropriate for unit testing protobuf serialization logic.

55-90: LGTM! The Go backend compatibility test ensures that protobuf messages can be exchanged with the Go services, validating the user_values map and core request fields.
src/client_ip.rs (1)
144-253: Good test coverage for client IP extraction scenarios.

The unit tests comprehensively cover header priority, private IP filtering, IPv6 support, and edge cases. The tests properly validate the expected behavior.
src/lib.rs (1)
4-14: LGTM! Module declarations are well-organized.

The new module structure (client_ip, config, domain, service, transport) aligns with the architectural refactoring. The metrics module is appropriately feature-gated.
tests/client_ip_test.rs (1)
147-190: Good security test coverage.

The security scenarios effectively test protection against IP spoofing attempts, invalid IP injection, and all-internal-address attacks.
tests/routes_test.rs (1)
8-9: LGTM! Tests correctly updated to use SharedState.

The migration from individual parameters to `SharedState` is consistent with the new `build_routes` API signature.

Makefile (1)
36-44: `increment-version` increments minor but doesn't reset patch.

When incrementing the minor version (e.g., 1.2.3 → 1.3.3), the patch number is typically reset to 0 (e.g., 1.2.3 → 1.3.0). The current implementation preserves the patch number.

Is this the intended behavior? If not:

```diff
 new_minor=$$((minor + 1)); \
-new_version="$$major.$$new_minor.$$patch"; \
+new_version="$$major.$$new_minor.0"; \
```

src/service/proxy.rs (1)
24-70: LGTM - proxy logic is well-structured with appropriate error handling. The method correctly:
- Serializes the request to protobuf
- Attaches request_id as a header for tracing
- Handles missing headers gracefully with a 500 default
- Logs errors without crashing on invalid status codes
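A condensed sketch of that flow, assuming async-nats 0.38 and prost; the subject and header names used here ("http.requests", "X-Request-Id", "Status-Code") are placeholders, not the crate's actual constants.

```rust
use bytes::Bytes;
use prost::Message;

// Illustrative proxy step: serialize -> attach request id header ->
// NATS request -> read status from reply headers, defaulting to 500.
async fn proxy_request<M: Message>(
    client: &async_nats::Client,
    req: &M,
    request_id: &str,
) -> Result<(u16, Bytes), Box<dyn std::error::Error + Send + Sync>> {
    let payload = Bytes::from(req.encode_to_vec());

    let mut headers = async_nats::HeaderMap::new();
    headers.insert("X-Request-Id", request_id);

    let msg = client
        .request_with_headers("http.requests".to_string(), headers, payload)
        .await?;

    let status = msg
        .headers
        .as_ref()
        .and_then(|h| h.get("Status-Code"))
        .and_then(|v| v.as_str().parse::<u16>().ok())
        .unwrap_or(500);

    Ok((status, msg.payload))
}
```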
src/transport/proto.rs (1)
30-71: LGTM - Clean factory method for constructing `HttpRequest` from components.

The method properly maps HTTP metadata to protobuf fields with sensible defaults for optional URI components.
Cargo.toml (2)
15-17: LGTM - Clean feature flag setup for optional metrics.

The metrics feature is properly gated with `dep:` syntax and set as default, allowing users to disable it if needed.
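For reference, the `dep:` pattern looks roughly like this; the optional crate shown (`prometheus`) is an assumption about what backs the metrics feature, not the actual dependency.

```toml
[features]
default = ["metrics"]
metrics = ["dep:prometheus"]

[dependencies]
prometheus = { version = "0.13", optional = true }
```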
20-44: Dependencies are appropriate and free of known critical vulnerabilities.

The resolved dependency tree shows ring 0.17.14 (safe from RUSTSEC-2025-0009) and async-nats 0.38.0 is secure. The protobuf-based NATS transport and tracing-based observability setup is sound.
src/main.rs (2)
19-42: LGTM - Tracing initialization with JSON/human-readable format switch.

The implementation cleanly separates JSON output for production monitoring from human-readable output for development.

81-86: The current code is using the correct approach.

`NatsClient` does not implement `Clone`, and the wrapped `Arc<NatsClient>` requires direct field access via `nats_client.client.clone()` to obtain the cloneable `Arc<RwLock<Client>>`. The suggestion to use `nats_client.clone()` would not work: it would clone the outer `Arc` but still require dereferencing and field access. The existing pattern is idiomatic and appropriate for this structure.

Likely an incorrect or invalid review comment.
src/transport/nats.rs (1)
11-14: The `Arc<RwLock<Client>>` wrapper serves a purpose and cannot be removed as suggested.

The `RwLock` is actively used within the `NatsClient` implementation: `request_with_headers()` acquires a read lock (line 44), and `close()` acquires a write lock (line 52). Additionally, the `client` field is public and directly accessed outside this module, specifically in `src/main.rs:83` where `nats_client.client.clone()` is called. Making this field private would break existing code. While `async_nats::Client` is indeed `Clone + Send + Sync`, the `RwLock` provides necessary synchronization for coordinating concurrent access patterns in this application. The current design is intentional and should be preserved.

Likely an incorrect or invalid review comment.
proto/events.proto (3)
1-17: Well-structured protobuf schema with good documentation.

The HTTPRequest message is well-designed with appropriate field types. The comments documenting Rust type compatibility are helpful for cross-language development.

19-34: URI and Args messages appropriately model HTTP request components.

Good use of `bytes` for fields that map to `Vec<u8>` in Rust and `string` for `String` fields. The nested `Args` message with a map is a clean design.
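An abbreviated illustration of that shape; the field names and numbers below are placeholders rather than the exact contents of `events.proto`.

```proto
syntax = "proto3";

message URI {
  bytes path = 1;          // maps to Vec<u8> in Rust
  string query_string = 2; // maps to String in Rust
  Args args = 3;
}

message Args {
  map<string, string> values = 1;
}
```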
84-99: Enums follow proto3 best practices with UNSPECIFIED = 0.

Both `CurrencyID` and `OperationID` correctly use 0 for the unspecified/default value, which is the recommended pattern for proto3 enums.

src/routes.rs (4)
16-19: Clean migration to SharedState pattern.

The function signature change from separate `nats` and `metrics` parameters to a unified `SharedState` improves cohesion and simplifies the API surface.

47-49: Conditional metrics endpoint exposure is well-implemented.

Checking `shared_state.metrics.is_some()` before registering the `/metrics` route prevents exposing an endpoint that would return 503 when metrics are disabled.
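A small sketch of the pattern, assuming axum 0.8; the real `build_routes` takes `SharedState` and a real metrics handler rather than the placeholders used here.

```rust
use axum::{routing::get, Router};

// `metrics_enabled` stands in for `shared_state.metrics.is_some()`.
fn build_routes(metrics_enabled: bool) -> Router {
    let mut router = Router::new().route("/healthz", get(|| async { "ok" }));
    if metrics_enabled {
        // Only exposed when metrics are configured, avoiding a 503-only endpoint.
        router = router.route("/metrics", get(|| async { "# metrics output" }));
    }
    router
}
```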
53-156: Path parameter syntax correctly updated to Axum 0.8 style.

The migration from `:param` to `{param}` syntax aligns with Axum 0.8's routing conventions.
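For comparison, the old and new syntax side by side; the route shown is illustrative rather than one of the project's actual endpoints.

```rust
use axum::{extract::Path, routing::get, Router};

async fn get_user(Path(id): Path<String>) -> String {
    format!("user {id}")
}

fn user_routes() -> Router {
    // Axum 0.7 and earlier: .route("/api/v1/users/:id", get(get_user))
    Router::new().route("/api/v1/users/{id}", get(get_user))
}
```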
192-221: Comprehensive tracing hooks with appropriate log levels.

The `on_response` and `on_failure` hooks at DEBUG level provide detailed tracing without cluttering production logs. The comment correctly notes that `handlers.rs` provides production-level logging.

tests/integration_test.rs (1)

19-23: SharedState construction is consistent and correct.

The pattern of constructing `SharedState` with the mock NATS client and `metrics: None` is applied consistently across all tests.

src/config/settings.rs (3)
10-25: LoggingConfig with serde defaults and Default trait is well-implemented.

The dual approach of using `#[serde(default = "...")]` for deserialization and `impl Default` for programmatic construction ensures consistent defaults in both contexts.
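A sketch of that dual-default pattern; the field and helper names are illustrative, not the exact contents of `settings.rs`.

```rust
use serde::Deserialize;

fn default_log_level() -> String {
    "info".to_string()
}

#[derive(Debug, Deserialize)]
pub struct LoggingConfig {
    // Used when the field is missing during deserialization.
    #[serde(default = "default_log_level")]
    pub log_level: String,
}

// Used when the whole struct is constructed programmatically.
impl Default for LoggingConfig {
    fn default() -> Self {
        Self {
            log_level: default_log_level(),
        }
    }
}
```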
35-50: Config and NatsConfig structs with validation deserializers are clean.

The custom deserializers ensure that invalid configurations fail at load time with clear error messages rather than causing runtime issues.
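The delegation pattern looks roughly like the following; `Port` here is a simplified stand-in for the domain type in `src/domain/types.rs`.

```rust
use serde::{Deserialize, Deserializer};

#[derive(Debug, Clone, Copy)]
pub struct Port(u16);

impl Port {
    pub fn new(value: u16) -> Result<Self, String> {
        if value == 0 {
            Err("port must be non-zero".to_string())
        } else {
            Ok(Self(value))
        }
    }
}

// Fails deserialization with a clear message instead of accepting port 0.
fn deserialize_port<'de, D>(deserializer: D) -> Result<Port, D::Error>
where
    D: Deserializer<'de>,
{
    let raw = u16::deserialize(deserializer)?;
    Port::new(raw).map_err(serde::de::Error::custom)
}
```

The config struct would then annotate the field with `#[serde(deserialize_with = "deserialize_port")]`.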
73-89: Custom deserializers correctly delegate validation to domain types.

The deserializers properly use `Port::new()` and `NatsHost::new()` for validation, maintaining a single source of truth for the validation logic in `src/domain/types.rs`.

src/handlers.rs (5)
51-52: Handler signature updated to use State.

The migration from extension-based state to `State<SharedState>` extraction is consistent with the routing changes.

103-106: Client IP extraction with debug logging is well-implemented.

Using a dedicated `extract_client_ip` helper and logging the result at debug level provides good observability without cluttering production logs.
110-129: Authorization header extraction and request context construction are clean.

The pattern of stripping the "Bearer " prefix and falling back to `AuthToken::empty()` handles missing/malformed headers gracefully.
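The stripping logic amounts to something like this sketch, where a plain `String` approximates the `AuthToken` behaviour described above.

```rust
// Returns the bare token, or an empty string when the header is missing
// or not a Bearer credential (mirroring the AuthToken::empty() fallback).
fn extract_bearer(authorization: Option<&str>) -> String {
    authorization
        .and_then(|value| value.strip_prefix("Bearer "))
        .map(str::to_owned)
        .unwrap_or_default()
}
```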
131-141: Protobuf serialization via encode_to_vec() is correct.

Using `prost::Message::encode_to_vec()` for protobuf serialization is the idiomatic approach and properly handles the `HttpRequest` construction via `from_components`.
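For reference, `encode_to_vec()` is available on any prost-generated type through the `Message` trait; the helper name below is illustrative.

```rust
use prost::Message;

// Works for HttpRequest or any other generated message type.
fn to_protobuf_bytes<M: Message>(msg: &M) -> Vec<u8> {
    msg.encode_to_vec()
}
```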
237-261: Error response serialization with fallback is robust.

The pattern of attempting JSON serialization and falling back to a hardcoded JSON string ensures errors are always returned in a valid format.
src/domain/types.rs (4)
17-31: LGTM! Port validation correctly prevents zero values, and the u16 type naturally enforces the valid range. The implementation is clean with proper accessor methods.

33-37: LGTM! All `Display` implementations are clean and consistent.

Also applies to: 73-77, 114-118, 155-167, 215-219

221-292: LGTM! Test coverage is comprehensive and validates all the key behaviors: validation rules, serialization, and basic operations. The tests are well-structured and use appropriate assertions.

44-71: No action needed. IP validation is properly handled in `src/client_ip.rs` through the `validate_and_clean_ip()` function before `ClientIp::new()` is called. This separates concerns correctly: the `ClientIp` type trusts that only validated, public IP addresses are passed to it, while the extraction logic handles all validation rules (filtering private IPs, loopback addresses, invalid formats, etc.).
```gitignore
# These are backup files generated by rustfmt
**/*.rs.bk

# Generated protobuf files
src/transport/proto/*.rs
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Check if src/transport/proto directory exists =="
ls -la src/transport/proto 2>/dev/null || echo "Directory does not exist"
echo
echo "== build.rs protobuf settings =="
fd -a '^build\.rs$' -x cat {} \; 2>/dev/null || echo "No build.rs found"
echo
echo "== Look for prost/tonic/protobuf configuration =="
rg -n --hidden -S 'prost_build|tonic_build|protoc|out_dir|OUT_DIR' . --max-count=20 2>/dev/null || echo "No matches"
echo
echo "== Check for .proto files in transport directory =="
fd -a '\.proto$' . 2>/dev/null || echo "No .proto files found"
echo
echo "== Verify gitignore current state =="
sed -n '28,32p' .gitignore
echo
echo "== Check if any .rs files are in src/transport/proto =="
find src/transport/proto -name '*.rs' 2>/dev/null || echo "No .rs files or directory doesn't exist"

Repository: stockwayup/http2
Length of output: 2575
Generate protobuf files to $OUT_DIR, not src/transport/proto/.
The build.rs currently writes generated .rs files directly into src/transport/proto/ (line 21), which is non-standard and requires the .gitignore rule to prevent source pollution. This creates reproducibility issues: a clean checkout cannot compile without first running the build script to regenerate files.
Update build.rs to use std::env::var("OUT_DIR") instead of the hardcoded "src/transport/proto" path. This follows Rust conventions and eliminates the need for this gitignore entry.
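A minimal build.rs along those lines, assuming prost-build (which writes to `$OUT_DIR` when no explicit out_dir is configured); the proto path matches the one discussed in this PR, but the rest is illustrative.

```rust
// build.rs
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let proto_file = "proto/events.proto";
    // prost-build emits the generated .rs into $OUT_DIR by default.
    prost_build::compile_protos(&[proto_file], &["proto/"])?;
    println!("cargo:rerun-if-changed={}", proto_file);
    println!("cargo:rerun-if-changed=build.rs");
    Ok(())
}
```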
```markdown
# Repository Guidelines

## Project Structure & Module Organization
- `src/`: Rust sources — key modules: `main.rs` (binary), `lib.rs` (exports), `routes.rs` (HTTP routing), `handlers.rs` (request/response), `conf.rs` (JSON config), `metrics.rs` and `observability.rs` (Prometheus + tracing).
- `tests/`: Integration and API tests (`*_test.rs`).
- `config.json`: Default runtime config; override path via `CFG_PATH` env.
- `Makefile`, `Dockerfile`, `Cargo.toml`: tooling and build metadata.

## Build, Test, and Development Commands
- `cargo run` — start locally using `config.json` (or `CFG_PATH=./path/config.json cargo run`).
- `make fmt` — format with rustfmt.
- `make lint` — run Clippy with autofix.
- `make test` / `make test-verbose` — run unit/integration tests.
- `make check` — type-check (fast feedback).
- `make build` — Docker build and push (updates tag in Makefile).

## Coding Style & Naming Conventions
- Rust 2021; 4-space indentation; no unsafe (`#![forbid(unsafe_code)]`) and no warnings (`#![deny(warnings)]`).
- Use idiomatic Rust naming: modules/paths `snake_case`, types `PascalCase`, functions `snake_case`.
- Responses must set `content-type: application/vnd.api+json` (see `handlers.rs`).
- Keep route templates under `routes.rs` (`/api/v1/...`); prefer constants for shared prefixes.

## Testing Guidelines
- Frameworks: `tokio` async tests, Axum + `tower::ServiceExt` for routing.
- Locations: unit tests inline (e.g., `src/conf.rs`), integration tests in `tests/`.
- Names: use `*_test.rs` and descriptive `#[tokio::test]` functions.
- Run: `cargo test` (some end-to-end paths expect a NATS server; tests are written to skip gracefully when unavailable).

## Commit & Pull Request Guidelines
- Commits: concise, imperative subject (e.g., "Add request timing logging"); include scope when helpful.
- PRs must include: summary of changes, rationale, testing notes (commands used), and any config or API impacts. Link related issues.
- CI expectations: pass `make fmt lint test check` before requesting review.

## Security & Configuration Tips
- Configure via JSON: example `config.json` keys: `listen_port`, `nats.host`, `allowed_origins`, `is_debug`. Use `CFG_PATH` to point to env-specific files.
- Limit CORS origins to required hosts only (see `routes.rs`).
- Body size is limited to 250KB by middleware; avoid increasing without justification.
- Docker image runs as non-root (`www-data`). Avoid committing secrets; use secure secret stores.
```
Add blank lines after headings for markdown consistency.
The markdownlint tool flags missing blank lines below section headings at lines 3, 9, 17, 23, 29, and 34. Adding blank lines improves readability and ensures compliance with markdown best practices.
Apply this diff to fix the formatting:
```diff
## Project Structure & Module Organization
+
- `src/`: Rust sources — key modules: `main.rs` (binary), `lib.rs` (exports), `routes.rs` (HTTP routing), `handlers.rs` (request/response), `conf.rs` (JSON config), `metrics.rs` and `observability.rs` (Prometheus + tracing).
- `tests/`: Integration and API tests (`*_test.rs`).
- `config.json`: Default runtime config; override path via `CFG_PATH` env.
- `Makefile`, `Dockerfile`, `Cargo.toml`: tooling and build metadata.
## Build, Test, and Development Commands
+
- `cargo run` — start locally using `config.json` (or `CFG_PATH=./path/config.json cargo run`).
- `make fmt` — format with rustfmt.
- `make lint` — run Clippy with autofix.
- `make test` / `make test-verbose` — run unit/integration tests.
- `make check` — type-check (fast feedback).
- `make build` — Docker build and push (updates tag in Makefile).
## Coding Style & Naming Conventions
+
- Rust 2021; 4-space indentation; no unsafe (`#![forbid(unsafe_code)]`) and no warnings (`#![deny(warnings)]`).
- Use idiomatic Rust naming: modules/paths `snake_case`, types `PascalCase`, functions `snake_case`.
- Responses must set `content-type: application/vnd.api+json` (see `handlers.rs`).
- Keep route templates under `routes.rs` (`/api/v1/...`); prefer constants for shared prefixes.
## Testing Guidelines
+
- Frameworks: `tokio` async tests, Axum + `tower::ServiceExt` for routing.
- Locations: unit tests inline (e.g., `src/conf.rs`), integration tests in `tests/`.
- Names: use `*_test.rs` and descriptive `#[tokio::test]` functions.
- Run: `cargo test` (some end-to-end paths expect a NATS server; tests are written to skip gracefully when unavailable).
## Commit & Pull Request Guidelines
+
- Commits: concise, imperative subject (e.g., "Add request timing logging"); include scope when helpful.
- PRs must include: summary of changes, rationale, testing notes (commands used), and any config or API impacts. Link related issues.
- CI expectations: pass `make fmt lint test check` before requesting review.
## Security & Configuration Tips
+
- Configure via JSON: example `config.json` keys: `listen_port`, `nats.host`, `allowed_origins`, `is_debug`. Use `CFG_PATH` to point to env-specific files.
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```markdown
# Repository Guidelines
## Project Structure & Module Organization
- `src/`: Rust sources — key modules: `main.rs` (binary), `lib.rs` (exports), `routes.rs` (HTTP routing), `handlers.rs` (request/response), `conf.rs` (JSON config), `metrics.rs` and `observability.rs` (Prometheus + tracing).
- `tests/`: Integration and API tests (`*_test.rs`).
- `config.json`: Default runtime config; override path via `CFG_PATH` env.
- `Makefile`, `Dockerfile`, `Cargo.toml`: tooling and build metadata.
## Build, Test, and Development Commands
- `cargo run` — start locally using `config.json` (or `CFG_PATH=./path/config.json cargo run`).
- `make fmt` — format with rustfmt.
- `make lint` — run Clippy with autofix.
- `make test` / `make test-verbose` — run unit/integration tests.
- `make check` — type-check (fast feedback).
- `make build` — Docker build and push (updates tag in Makefile).
## Coding Style & Naming Conventions
- Rust 2021; 4-space indentation; no unsafe (`#![forbid(unsafe_code)]`) and no warnings (`#![deny(warnings)]`).
- Use idiomatic Rust naming: modules/paths `snake_case`, types `PascalCase`, functions `snake_case`.
- Responses must set `content-type: application/vnd.api+json` (see `handlers.rs`).
- Keep route templates under `routes.rs` (`/api/v1/...`); prefer constants for shared prefixes.
## Testing Guidelines
- Frameworks: `tokio` async tests, Axum + `tower::ServiceExt` for routing.
- Locations: unit tests inline (e.g., `src/conf.rs`), integration tests in `tests/`.
- Names: use `*_test.rs` and descriptive `#[tokio::test]` functions.
- Run: `cargo test` (some end-to-end paths expect a NATS server; tests are written to skip gracefully when unavailable).
## Commit & Pull Request Guidelines
- Commits: concise, imperative subject (e.g., "Add request timing logging"); include scope when helpful.
- PRs must include: summary of changes, rationale, testing notes (commands used), and any config or API impacts. Link related issues.
- CI expectations: pass `make fmt lint test check` before requesting review.
## Security & Configuration Tips
- Configure via JSON: example `config.json` keys: `listen_port`, `nats.host`, `allowed_origins`, `is_debug`. Use `CFG_PATH` to point to env-specific files.
- Limit CORS origins to required hosts only (see `routes.rs`).
- Body size is limited to 250KB by middleware; avoid increasing without justification.
- Docker image runs as non-root (`www-data`). Avoid committing secrets; use secure secret stores.
```
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
3-3: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
9-9: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
17-17: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
23-23: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
29-29: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
34-34: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
🤖 Prompt for AI Agents
In AGENTS.md around lines 1 to 39 (specifically after the headings at lines 3,
9, 17, 23, 29, and 34) the headings lack a blank line beneath them; insert a
single blank line immediately after each of those heading lines so each heading
is followed by an empty line to satisfy markdownlint and improve readability.
```rust
    eprintln!("Compiling protobuf with prost-build...");

    // Compile proto files with timeout protection
```
🧹 Nitpick | 🔵 Trivial
Misleading comment: no timeout protection implemented.
The comment states "Compile proto files with timeout protection" but there is no actual timeout mechanism in the code. Either remove the misleading part of the comment or implement actual timeout protection if needed.
```diff
- // Compile proto files with timeout protection
+ // Compile proto files
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
    // Compile proto files
```
🤖 Prompt for AI Agents
In build.rs around line 29, the comment "Compile proto files with timeout
protection" is misleading because no timeout is implemented; either remove the
"with timeout protection" clause and change the comment to simply "Compile proto
files", or implement real timeout protection around the proto compilation step
(e.g., run the compile command in a spawned thread or process and enforce a
duration using a timeout API or crate, handle the timeout by terminating the
child and returning a clear error/exit code). Ensure any implemented timeout
logs an error and exits with non-zero status so build failures are visible.
```rust
    println!("cargo:rerun-if-changed={}", proto_file);
    eprintln!("Build script completed successfully");
```
🧹 Nitpick | 🔵 Trivial
Consider adding rerun-if-changed for build.rs itself.
Changes to build.rs should also trigger a rebuild. Currently only changes to the proto file are tracked.
```diff
 println!("cargo:rerun-if-changed={}", proto_file);
+println!("cargo:rerun-if-changed=build.rs");
 eprintln!("Build script completed successfully");
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
    println!("cargo:rerun-if-changed={}", proto_file);
    println!("cargo:rerun-if-changed=build.rs");
    eprintln!("Build script completed successfully");
```
🤖 Prompt for AI Agents
In build.rs around lines 43 to 44, the build script only registers the proto
file for rebuild triggers; add a rerun-if-changed for the build script itself by
calling println!("cargo:rerun-if-changed=build.rs") (or equivalent with the
correct path) so changes to build.rs will cause Cargo to re-run the build
script.
```rust
    #[test]
    fn test_nats_client_creation() {
        let config = Config {
            listen_port: Port::new(8080).unwrap(),
            enable_cors: true,
            nats: NatsConfig {
                host: NatsHost::new("localhost:4222".to_string()).unwrap(),
            },
            allowed_origins: vec![],
            logging: crate::config::LoggingConfig::default(),
        };

        // Just test that the config is valid for NatsClient
        assert_eq!(config.nats.host.as_str(), "localhost:4222");
    }
```
🧹 Nitpick | 🔵 Trivial
Test doesn't exercise NatsClient functionality.
This test only validates Config construction, not NatsClient behavior. Consider renaming to test_config_for_nats_client to clarify intent, or adding actual unit tests with mocked NATS interactions.
🤖 Prompt for AI Agents
In src/transport/nats.rs around lines 66 to 80, the test named
test_nats_client_creation only constructs a Config and asserts the host string,
so it doesn't exercise NatsClient behavior; either rename the test to something
like test_config_for_nats_client to reflect its current scope, or replace/extend
it with a real unit test that constructs a NatsClient (or a mock/fake) and
asserts expected interactions (connection attempt, subscription/publish
behavior, or error handling) using a mocked NATS server or dependency-injected
client; update the test name or add the new test and any necessary test-only
mock setup and assertions accordingly.
```rust
                .and_then(|pq| pq.query())
                .map(|q| q.to_string())
                .unwrap_or_default(),
            hash: Vec::new(),
```
🧩 Analysis chain
🏁 Script executed:
# First, let's look at the actual code in proto.rs around line 60
cd src/transport && cat -n proto.rs | head -80 | tail -40

Repository: stockwayup/http2
Length of output: 1772
🏁 Script executed:
# Check proto definition to understand the hash field's purpose
fd -e proto | xargs cat 2>/dev/null | grep -A2 -B2 'hash'

Repository: stockwayup/http2
Length of output: 394
🏁 Script executed:
# Search for usage of hash field in the codebase
rg 'hash\s*:' --type rust -A 2 -B 2

Repository: stockwayup/http2
Length of output: 971
🏁 Script executed:
# Check if the URI type has a fragment method
rg 'fragment' --type rust -B 2 -A 2

Repository: stockwayup/http2
Length of output: 42
🏁 Script executed:
# Search for where Uri hash field is read or accessed
rg '\.hash' --type rust -B 2 -A 2

Repository: stockwayup/http2
Length of output: 42
🏁 Script executed:
# Check the type of http_info.uri to understand its capabilities
rg 'http_info\s*:\s*HttpRequestInfo' --type rust -B 5 -A 5

Repository: stockwayup/http2
Length of output: 634
🏁 Script executed:
# Look for the definition of HttpRequestInfo and what uri field it contains
rg 'struct HttpRequestInfo|pub uri' --type rust -B 2 -A 5

Repository: stockwayup/http2
Length of output: 489
🏁 Script executed:
# Search for any TODO or FIXME comments related to hash or uri
rg 'TODO|FIXME' --type rust -B 2 -A 2 | grep -i -B 2 -A 2 'hash\|uri\|fragment'

Repository: stockwayup/http2
Length of output: 42
🏁 Script executed:
# Check if there are any imports or usage of Uri methods
rg 'use http' --type rust | head -20

Repository: stockwayup/http2
Length of output: 1003
🏁 Script executed:
# Look at test cases for URIs with fragments
rg '#|fragment' --type rust

Repository: stockwayup/http2
Length of output: 6235
🏁 Script executed:
# Let me verify the http crate version and check if Uri has fragment method
rg 'http = ' Cargo.toml

Repository: stockwayup/http2
Length of output: 148
🏁 Script executed:
# Check the full context around the Uri creation to see if there's any pattern
cat -n src/transport/proto.rs | sed -n '40,75p'

Repository: stockwayup/http2
Length of output: 1455
Populate hash field from URI fragment or add explanatory comment.
The hash field is hardcoded to Vec::new() while other URI components (scheme, path, query, host) are extracted from http_info.uri. The http::Uri type provides a fragment() method that should be used to populate this field consistently. If URI fragments are intentionally not captured, add a comment explaining why.
🤖 Prompt for AI Agents
In src/transport/proto.rs around line 60, the `hash` field is set to
`Vec::new()` while other URI parts are derived from `http_info.uri`; replace the
hardcoded empty vector with the URI fragment value (e.g. use
http_info.uri.fragment().map(|f|
f.as_str().as_bytes().to_vec()).unwrap_or_default()) so the fragment is captured
consistently, or if fragments are intentionally ignored add a brief comment
explaining that choice and why fragments should not be populated.
```rust
#[tokio::test]
async fn test_client_ip_extraction_integration() {
```
🧹 Nitpick | 🔵 Trivial
Unnecessary async annotation.
These tests don't perform any async operations. Consider using #[test] instead of #[tokio::test] for simpler, faster test execution.
```diff
-#[tokio::test]
-async fn test_client_ip_extraction_integration() {
+#[test]
+fn test_client_ip_extraction_integration() {
```

Apply the same change to `test_real_world_scenarios` and `test_security_scenarios`.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
#[test]
fn test_client_ip_extraction_integration() {
```
🤖 Prompt for AI Agents
In tests/client_ip_test.rs around lines 5-6, the test function is annotated with
#[tokio::test] and declared async though it performs no async work; replace
#[tokio::test] with the standard #[test] and remove the async keyword from the
function signature. Also apply the same change (replace #[tokio::test] with
#[test] and remove async) to the test_real_world_scenarios and
test_security_scenarios functions in this file so each non-async test uses the
synchronous #[test] attribute.
```rust
        let shared_state = SharedState {
            nats: mock_client,
            metrics: None,
        };
        let app = build_routes(allowed_origins, true, shared_state);
```
🧹 Nitpick | 🔵 Trivial
Consider extracting SharedState construction to a helper function.
The SharedState { nats: mock_client, metrics: None } pattern is repeated 7 times. A helper would reduce duplication and make future changes easier.
```rust
fn create_shared_state(nats: Arc<RwLock<Client>>) -> SharedState {
    SharedState {
        nats,
        metrics: None,
    }
}
```

Then in tests:

```diff
-    let shared_state = SharedState {
-        nats: mock_client,
-        metrics: None,
-    };
+    let shared_state = create_shared_state(mock_client);
```

🤖 Prompt for AI Agents
In tests/routes_test.rs around lines 28 to 32, SharedState construction
(SharedState { nats: mock_client, metrics: None }) is duplicated across tests;
extract a small helper fn (e.g., create_shared_state(nats: Arc<RwLock<Client>>)
-> SharedState) that returns SharedState with metrics set to None, replace each
inline construction with a call to that helper, and add any needed imports or
visibility (pub(crate) or within test module) so all tests can use it.
```diff
@@ -0,0 +1 @@
+0.3.0
```
Add trailing newline at EOF (if missing) to keep tooling quiet.
🤖 Prompt for AI Agents
In VERSION around lines 1 to 1, the file is missing a trailing newline; add a
single newline character at the end of the file so the file ends with "0.3.0\n"
to satisfy tooling that requires EOF newline.
…st request serialization and configuration settings
Actionable comments posted: 3
♻️ Duplicate comments (2)
CLAUDE.md (2)
136-156: Fix markdown headings: add blank lines after each subsection heading.

Multiple subheadings are missing blank lines after them (MD022), causing linting violations at lines 138, 142, 145, 148, and 153.

Apply this diff to add blank lines after each subsection heading:

```diff
 ### Transport Layer (`transport/`)
+
 - **nats.rs** - NATS client wrapper with connection management
 ### Service Layer (`service/`)
+
 - **proxy.rs** - Core proxy logic for forwarding requests to NATS
 ### Domain Layer (`domain/`)
+
 - **types.rs** - Domain types with validation (Port, NatsHost, ClientIp, AuthToken, HttpMethod)
 ### Configuration (`config/`)
+
 - **settings.rs** - Configuration loading from JSON with validation
 ### Protocol Buffers (`proto/`)
+
 - **events.proto** - Defines HTTPRequest, URI, and other message types
```
205-209: Fix markdown heading spacing and add trailing newline.

The "important-instruction-reminders" heading (line 205) is missing a blank line below it (MD022), and the file must end with a single newline (MD047).

Apply this diff:

```diff
    - Tests skip gracefully when NATS is unavailable
 6. **API Responses**
    - All responses must use `content-type: application/vnd.api+json`
    - Follow JSON API specification for error responses
 # important-instruction-reminders
+
 Do what has been asked; nothing more, nothing less.
 NEVER create files unless they're absolutely necessary for achieving your goal.
 ALWAYS prefer editing an existing file to creating a new one.
 NEVER proactively create documentation files (*.md) or README files. Only create documentation files if explicitly requested by the User.
```
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (2)
- CLAUDE.md (1 hunks)
- config.json (2 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.18.1)
CLAUDE.md
18-18: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
18-18: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
91-91: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
138-138: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
142-142: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
145-145: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
148-148: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
153-153: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
205-205: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
209-209: Files should end with a single newline character
(MD047, single-trailing-newline)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test
- GitHub Check: lint
- GitHub Check: lint
🔇 Additional comments (2)
CLAUDE.md (1)
1-209: Documentation is well-aligned with protobuf-based architecture.

The file accurately reflects the PR's changes: protobuf serialization (line 13), NATS transport layer, domain-driven types, configuration module, and layered architecture are all correctly documented. Module references to `transport/proto`, `service/proxy`, and `domain/types` are consistent with the implementation described in the PR summary. This addresses the prior concerns about MessagePack and stale module references.

config.json (1)

17-20: LGTM! Logging configuration aligns with the new config system.

The logging configuration with JSON format and "info" level is appropriate and matches the new typed `LoggingConfig` introduced in the config subsystem.
**Architecture Layers:**
```
src/
├── config/          # Configuration management
│   ├── mod.rs       # Public API exports
│   └── settings.rs  # Config structures and loading
├── transport/       # Transport layer
│   ├── mod.rs       # Public API exports
│   ├── nats.rs      # NATS client wrapper
│   └── proto.rs     # Protocol Buffers types (generated from proto/events.proto)
├── service/         # Business logic layer
│   ├── mod.rs       # Public API exports
│   └── proxy.rs     # Proxy service implementation
├── domain/          # Domain types and models
│   ├── mod.rs       # Public API exports
│   └── types.rs     # Core domain types
├── handlers.rs      # HTTP request handlers
├── routes.rs        # API route definitions
├── client_ip.rs     # Client IP extraction
├── metrics.rs       # Prometheus metrics (optional feature)
├── responses/       # Response structures
├── signals.rs       # Signal handling for graceful shutdown
├── lib.rs           # Library exports (minimal)
└── main.rs          # Application entry point
proto/
└── events.proto     # Protocol Buffers definitions
build.rs             # Build script for protobuf compilation
```
🛠️ Refactor suggestion | 🟠 Major
Fix markdown fencing: add blank line before block and specify language.
The code block for architecture needs a blank line above it and an explicit language spec (use text for directory trees).
Apply this diff:
````diff
 **Architecture Layers:**
-```
+
+```text
 src/
 ├── config/          # Configuration management
 │   ├── mod.rs       # Public API exports
````

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
18-18: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
18-18: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In CLAUDE.md around lines 17 to 44, the Markdown code block for the architecture
tree is missing a blank line before the fenced block and does not specify a
language; insert a blank line immediately above the existing triple backticks
and change the opening fence to ```text so the block is fenced correctly and
treated as plain text (ensure there is exactly one empty line before the fence
and the rest of the block remains unchanged).
Example config.json:
```json
{
  "listen_port": 8000,
  "enable_cors": true,
  "nats": {
    "host": "nats.backend.orb.local:4222"
  },
  "allowed_origins": ["http://localhost:3000"],
  "logging": {
    "log_format": "json",
    "log_level": "info"
  }
}
```
🛠️ Refactor suggestion | 🟠 Major
Fix markdown fencing: add blank line before config JSON block.
The JSON example needs a blank line before the fenced code block per MD031.
Apply this diff:
````diff
 Example config.json:
+
 ```json
 {
   "listen_port": 8000,
````

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
91-91: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🤖 Prompt for AI Agents
In CLAUDE.md around lines 90 to 104, the JSON example's fenced code block is
missing a blank line before the opening ```json fence causing MD031 lint errors;
fix by inserting a single empty line immediately before the ```json fence so the
fenced block is preceded by a blank line, then save the file.
config.json
Outdated
```diff
@@ -1,5 +1,5 @@
 {
-  "listen_port": 8000,
+  "listen_port": 8001,
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for references to port 8000 across the codebase (excluding config.json)
echo "=== Searching for port 8000 references ==="
rg "8000" -g "!config.json" -C 2
echo ""
echo "=== Searching for listen_port references ==="
rg "listen_port" -C 2
echo ""
echo "=== Checking for port 8001 references ==="
rg "8001" -C 2

Repository: stockwayup/http2
Length of output: 4799
Update documentation to reflect port change.
The port change from 8000 to 8001 is safe from an application perspective—the code dynamically reads listen_port from config. However, CLAUDE.md contains outdated examples showing port 8000 in the request flow description and in the example config.json snippet. Update these references to 8001 for consistency.
🤖 Prompt for AI Agents
In config.json around line 2, the repository documentation (CLAUDE.md) still
shows port 8000 while config.json now uses "listen_port": 8001; update CLAUDE.md
to replace any references and example config snippets that show port 8000 with
8001 so the docs match the current config and example request flows (including
curl examples and config blocks).
Actionable comments posted: 3
♻️ Duplicate comments (2)
.gitignore (1)
30-32: Protobuf generation location still non-standard.

This gitignore entry is necessary because build.rs writes generated files into src/transport/proto/, which is non-idiomatic for Rust. As flagged in the previous review, the standard approach is to generate to $OUT_DIR and include those files via the include!() macro.

The previous review comment remains valid: consider updating build.rs to use std::env::var("OUT_DIR") instead of writing to the source tree, which would eliminate the need for this gitignore rule and improve build reproducibility.

Based on learnings, this issue was already identified in a prior review.
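For reference, a minimal build.rs sketch along those lines, assuming prost-build is the generator; the proto path proto/http.proto and the generated file name are illustrative, not taken from this repository:

```rust
// build.rs — hypothetical sketch: generate into OUT_DIR instead of src/transport/proto/
use std::path::PathBuf;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let out_dir = PathBuf::from(std::env::var("OUT_DIR")?);
    prost_build::Config::new()
        .out_dir(&out_dir) // prost-build defaults to OUT_DIR; shown here for clarity
        .compile_protos(&["proto/http.proto"], &["proto/"])?;
    println!("cargo:rerun-if-changed=proto/http.proto");
    Ok(())
}
```

The generated module is then pulled in with something like `include!(concat!(env!("OUT_DIR"), "/<proto package>.rs"));`, so nothing under src/ needs to be gitignored.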
Makefile (1)
53-63: docker-build both builds and pushes images.

As noted in the previous review, the docker-build target performs both build and push operations (lines 61-63), which is surprising given the name and duplicates the separate push target (lines 36-38). This can cause unintended pushes during local development when developers expect only a build.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (3)
.gitignore, Makefile, config.json
🧰 Additional context used
🪛 checkmake (0.2.2)
Makefile
[warning] 3-3: Missing required phony target "all"
(minphony)
[warning] 3-3: Missing required phony target "clean"
(minphony)
[warning] 5-5: Target body for "help" exceeds allowed length of 5 (11).
(maxbodylength)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: lint
- GitHub Check: test
- GitHub Check: lint
- GitHub Check: test
🔇 Additional comments (10)
config.json (2)
5-5: NATS host configuration updated for local development.

The NATS connection string now uses the standard nats:// protocol scheme and points to localhost, which is appropriate for local development.
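As a quick sanity check of that URL form, a minimal connection sketch assuming the async-nats crate (hypothetical snippet, not the project's NatsClient):

```rust
// Hypothetical smoke test: connect to the configured NATS URL and flush once.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = async_nats::connect("nats://localhost:4222").await?;
    client.flush().await?; // round-trip to confirm the connection is usable
    println!("connected to NATS");
    Ok(())
}
```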
8-11: Logging configuration looks good.

The structured logging configuration with log_format and log_level fields follows best practices and provides clear observability controls.
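A minimal initializer matching those two fields might look like the following, assuming tracing-subscriber with the env-filter and json features; the function name and wiring are illustrative, not the project's actual code:

```rust
use tracing_subscriber::{fmt, EnvFilter};

// Hypothetical setup: pick JSON or plain-text output from the config values.
fn init_tracing(log_format: &str, log_level: &str) {
    let filter = EnvFilter::try_new(log_level).unwrap_or_else(|_| EnvFilter::new("info"));
    let builder = fmt().with_env_filter(filter);
    if log_format == "json" {
        builder.json().init();
    } else {
        builder.init();
    }
}

fn main() {
    init_tracing("json", "info");
    tracing::info!("logger initialized");
}
```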
.gitignore (1)

18-19: Config file paths updated to match root-level configuration.

The ignore patterns now correctly target config.local.json and config.remote.json at the repository root, aligning with the configuration system refactoring described in the PR.

Makefile (7)
1-3: LGTM! The VERSION variable is well-structured, reading from a VERSION file at parse time. The PHONY declaration correctly includes all defined targets.
5-16: LGTM! The help target is clear and comprehensively documents all available commands.
18-31: LGTM! Standard Rust development targets are correctly implemented with appropriate cargo commands.
33-34: LGTM! The build target correctly uses the --locked flag to ensure reproducible builds by respecting the Cargo.lock file, which is essential for consistent releases.
36-38: LGTM! The push target correctly pushes both versioned and latest Docker images.
50-51: LGTM! Simple and useful target for displaying the current version.
65-66: LGTM! Convenient alias for the docker-build target.
@@ -1,18 +1,12 @@
 {
-  "listen_port": 8000,
+  "listen_port": 8080,
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Find hardcoded port references in documentation and example files
echo "=== Checking for port references in documentation ==="
rg -g "*.md" -g "README*" -g "CLAUDE*" "\b(8000|8001)\b" -C 2
echo ""
echo "=== Checking for localhost:port patterns ==="
rg "localhost:(8000|8001)" -C 2Repository: stockwayup/http2
Length of output: 569
Update documentation to reflect the port 8080 configuration change.
Documentation in CLAUDE.md contains stale references to port 8000. Update:
- The request flow description mentioning "port 8000"
- The JSON example config snippet with "listen_port": 8000
Both should reference port 8080 to match the current config.json.
🤖 Prompt for AI Agents
In config.json around lines 2-2, the project now listens on port 8080 but
CLAUDE.md still references port 8000; update CLAUDE.md by replacing the request
flow text that says "port 8000" with "port 8080" and change the example JSON
snippet's "listen_port": 8000 to "listen_port": 8080, then save and run a quick
grep to ensure no other stale 8000 references remain.
| "enable_cors": true, | ||
| "nats": { | ||
| "host": "nats.backend.orb.local:4222" | ||
| "host": "nats://localhost:4222" | ||
| }, | ||
| "allowed_origins": [ | ||
| "http://127.0.0.1", | ||
| "http://localhost", | ||
| "http://127.0.0.1:8080", | ||
| "http://localhost:8080", | ||
| "http://127.0.0.1:8081", | ||
| "http://localhost:8081", | ||
| "https://dev.stockwayup.com", | ||
| "https://stockwayup.com" | ||
| ], | ||
| "is_debug": false | ||
| "allowed_origins": [], |
Security risk: Empty allowed_origins with CORS enabled.
With enable_cors: true and an empty allowed_origins array, the application may accept requests from any origin, which is a security vulnerability in production environments.
🔎 Verify CORS implementation behavior
Check how the application handles an empty allowed_origins list:
#!/bin/bash
# Description: Check CORS middleware implementation to verify behavior with empty origins
echo "=== Searching for CORS configuration logic ==="
rg -A 10 "allowed_origins|enable_cors" --type rust
echo ""
echo "=== Searching for CORS middleware setup ==="
ast-grep --pattern $'fn $_($$$) {
$$$
cors($$$)
$$$
}'🤖 Prompt for AI Agents
In config.json around lines 3-7 the configuration enables CORS but leaves
"allowed_origins" as an empty array which can allow unsafe origins; either
populate "allowed_origins" with explicit trusted origins (e.g.
https://example.com) before deploying, or add startup validation in the
application that when "enable_cors" is true and "allowed_origins" is empty it
fails fast with a clear error instructing the operator to provide explicit
origins (or disable CORS), and ensure this validation is covered by a test and
reflected in the README/config docs.
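A minimal fail-fast check of that rule might look like this; the Config struct below is a stand-in for the project's real settings type, not its actual definition:

```rust
// Hypothetical startup validation: refuse to run with CORS on and no origins listed.
struct Config {
    enable_cors: bool,
    allowed_origins: Vec<String>,
}

fn validate_cors(cfg: &Config) -> Result<(), String> {
    if cfg.enable_cors && cfg.allowed_origins.is_empty() {
        return Err(
            "enable_cors is true but allowed_origins is empty; \
             list explicit origins or disable CORS"
                .to_string(),
        );
    }
    Ok(())
}

fn main() {
    let cfg = Config { enable_cors: true, allowed_origins: vec![] };
    if let Err(err) = validate_cors(&cfg) {
        eprintln!("configuration error: {err}");
        std::process::exit(1);
    }
}
```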
increment-version:
    @current_version=$$(cat VERSION); \
    major=$$(echo $$current_version | cut -d. -f1); \
    minor=$$(echo $$current_version | cut -d. -f2); \
    patch=$$(echo $$current_version | cut -d. -f3); \
    new_minor=$$((minor + 1)); \
    new_version="$$major.$$new_minor.$$patch"; \
    echo "Incrementing version from $$current_version to $$new_version"; \
    echo $$new_version > VERSION
Fix semantic versioning: reset patch to 0 when incrementing minor.
The increment-version target has a critical bug. When incrementing the minor version, the patch version should be reset to 0 according to semantic versioning conventions. Currently, 0.2.3 would become 0.3.3 instead of the correct 0.3.0.
🔎 Proposed fix
increment-version:
@current_version=$$(cat VERSION); \
major=$$(echo $$current_version | cut -d. -f1); \
minor=$$(echo $$current_version | cut -d. -f2); \
patch=$$(echo $$current_version | cut -d. -f3); \
new_minor=$$((minor + 1)); \
- new_version="$$major.$$new_minor.$$patch"; \
+ new_version="$$major.$$new_minor.0"; \
echo "Incrementing version from $$current_version to $$new_version"; \
echo $$new_version > VERSION🤖 Prompt for AI Agents
In Makefile around lines 40-48, the increment-version target increments the
minor field but leaves the old patch value, violating semantic versioning;
change the logic so that when computing new_minor you also set patch to 0 and
build new_version as "major.new_minor.0" (ensuring the patch is reset to zero)
before writing it back to VERSION and echoing the new version.
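The same rule, expressed as a small Rust helper for illustration (hypothetical, mirroring what the corrected Makefile recipe should produce):

```rust
// Bump the minor version and reset the patch to 0, per semantic versioning.
fn bump_minor(version: &str) -> Option<String> {
    let mut parts = version.trim().splitn(3, '.');
    let major: u64 = parts.next()?.parse().ok()?;
    let minor: u64 = parts.next()?.parse().ok()?;
    let _patch: u64 = parts.next()?.parse().ok()?;
    Some(format!("{major}.{}.0", minor + 1))
}

fn main() {
    assert_eq!(bump_minor("0.2.3").as_deref(), Some("0.3.0")); // not 0.3.3
    println!("{:?}", bump_minor("0.2.3"));
}
```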
Summary by CodeRabbit
New Features
Configuration
Documentation
Chores
✏️ Tip: You can customize this high-level summary in your review settings.