feat: add TRON substreams for ERC20FeeProxy payment detection #2
rodrigopavezi wants to merge 22 commits into main
Conversation
- Move substreams-tron from requestNetwork monorepo
- Add tron/ folder with Rust WASM module for indexing payments
- Add GitHub Actions CI workflow with optional integration tests
- Index TransferWithReferenceAndFee events from ERC20FeeProxy
- Support both Nile testnet and mainnet contracts
Warning: Rate limit exceeded

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered using the @coderabbitai review command. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

Walkthrough

This PR introduces a complete TRON Substreams module for indexing ERC20FeeProxy payment events, including a Rust implementation with protobuf definitions, a CI/CD pipeline, Docker deployment infrastructure, a SQL schema, GraphQL support, and comprehensive documentation for multi-network payment tracking.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
- Add deploy_subgraph workflow input to trigger deployment
- Add deploy_network choice (nile or mainnet)
- Install Graph CLI and authenticate with deploy key
- Package substreams and deploy to The Graph Studio
- Requires GRAPH_STUDIO_DEPLOY_KEY secret
- Push to main → deploy to Nile testnet (with commit SHA in version)
- GitHub release published → deploy to mainnet (with release tag as version)
- Manual workflow_dispatch still available for both networks
- Integration tests run before testnet deployment
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tron/subgraph.yaml`:
- Around line 16-19: The subgraph source.file references a package file with
hyphens ("./request-network-tron-v0.1.0.spkg") but the actual built package name
uses underscores from the substreams package (request_network_tron), so update
the source.package.file value in tron/subgraph.yaml to
"./request_network_tron-v0.1.0.spkg" so it matches the package name; keep the
same moduleName (map_erc20_fee_proxy_payments) and ensure the filename spelling
matches the package name used in tron/substreams.yaml (request_network_tron).
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @.github/workflows/tron-build.yml:
- Around line 227-234: The deploy-mainnet job currently only depends on
build-and-test, so it can run without running integration-test; update the job's
needs array for the deploy-mainnet job to include integration-test (i.e., change
needs: [build-and-test] to needs: [build-and-test, integration-test]) so mainnet
deployments wait for integration tests to pass.
- Around line 129-143: The current step runs "substreams run ./substreams.yaml
map_erc20_fee_proxy_payments" and redirects output to output.json but masks
failures with "|| true" and then uses a fragile grep -q "error" on output.json;
remove the "|| true" so the command's exit code is preserved, check the
command's exit status immediately (fail if non-zero), and replace the blind grep
with a precise JSON check against output.json (e.g., using jq to test for a
top-level error field or non-success status) instead of matching any substring
"error"; update references to the exact symbols "substreams run
./substreams.yaml map_erc20_fee_proxy_payments", "output.json", and the current
grep -q "error" usage when making this change.
🧹 Nitpick comments (4)
.github/workflows/tron-build.yml (4)
49-60: Consider including `Cargo.lock` in the cache key for reproducible builds. The cache key only uses the `Cargo.toml` hash, which doesn't capture locked dependency versions. If `Cargo.lock` exists in the repository, including it ensures cache invalidation when dependencies are updated.

Also, caching `~/.cargo/bin/` may persist stale tool binaries across workflow runs. Consider removing it unless specific tools are installed there.

♻️ Suggested improvement
- key: ${{ runner.os }}-cargo-${{ hashFiles('tron/Cargo.toml') }}
+ key: ${{ runner.os }}-cargo-${{ hashFiles('tron/Cargo.lock', 'tron/Cargo.toml') }}
112-116: Consider pinning the Substreams CLI version for reproducible CI. Using `releases/latest` can cause unexpected CI failures or behavior changes when a new CLI version is released with breaking changes.

♻️ Suggested fix
  - name: Install Substreams CLI
    run: |
-     curl -sSL https://github.com/streamingfast/substreams/releases/latest/download/substreams_linux_x86_64.tar.gz | tar xz
+     # Pin to a specific version for reproducible builds
+     SUBSTREAMS_VERSION="v1.12.1"  # Update as needed
+     curl -sSL "https://github.com/streamingfast/substreams/releases/download/${SUBSTREAMS_VERSION}/substreams_linux_x86_64.tar.gz" | tar xz
      sudo mv substreams /usr/local/bin/
292-308: Consider validating the version format before deployment. The mainnet deployment extracts the version from the release tag or Cargo.toml without validation. Malformed versions could cause deployment issues or make it harder to track releases.

♻️ Suggested improvement
  - name: Deploy to Mainnet
    working-directory: tron
    run: |
      # For releases, use the release tag as version
      if [ "${{ github.event_name }}" = "release" ]; then
        VERSION="${{ github.event.release.tag_name }}"
      else
        VERSION=$(grep '^version' Cargo.toml | head -1 | sed 's/.*"\(.*\)".*/\1/')
        VERSION="v${VERSION}"
      fi
+     # Validate version format (basic semver check)
+     if ! echo "$VERSION" | grep -qE '^v?[0-9]+\.[0-9]+\.[0-9]+'; then
+       echo "WARNING: Version '$VERSION' may not follow semver format"
+     fi
+
      echo "Deploying to request-payments-tron with version: $VERSION"
207-208: The suggested environment variable approach is not supported by Graph CLI. Graph CLI does not support `GRAPH_AUTH_TOKEN` or similar environment variables for authentication. The `graph auth --studio` command expects the deploy key as a direct argument, which is the documented pattern. While the security concern about passing secrets via command-line arguments is valid, the suggested alternative will not work.

For CI environments, consider using `graph deploy --access-token <token>` instead, which is designed for non-interactive usage and avoids keychain storage complications.
- Add unit tests for base58 encoding, address parsing, uint256 parsing
- Add test for full TransferWithReferenceAndFee event data structure
- Add test for edge cases (short data, zero values)
- Improve integration test to output JSON and validate structure
- Integration test processes 100 blocks and validates payment fields
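For context on what the base58 tests exercise: a TRON address is the Base58Check encoding of a 21-byte payload (0x41 version byte plus the 20-byte account), with a 4-byte checksum taken from a double SHA-256. A self-contained sketch of that scheme, assuming the `sha2` crate; the function name and structure are illustrative, not the module's actual code:

```rust
use sha2::{Digest, Sha256};

const BASE58: &[u8] = b"123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

/// Base58Check-encode a TRON payload (0x41 prefix + 20-byte account).
fn base58check_encode(payload: &[u8]) -> String {
    // Checksum: first 4 bytes of SHA-256(SHA-256(payload)).
    let checksum = Sha256::digest(Sha256::digest(payload));
    let mut data = payload.to_vec();
    data.extend_from_slice(&checksum[..4]);

    // Repeated long division by 58 over the big-endian byte string,
    // collecting digits least-significant first.
    let mut digits = Vec::new();
    let mut work = data.clone();
    while work.iter().any(|&b| b != 0) {
        let mut remainder = 0u32;
        for byte in work.iter_mut() {
            let acc = (remainder << 8) | u32::from(*byte);
            *byte = (acc / 58) as u8;
            remainder = acc % 58;
        }
        digits.push(BASE58[remainder as usize]);
    }
    // Each leading zero byte maps to a leading '1'.
    for _ in data.iter().take_while(|&&b| b == 0) {
        digits.push(b'1');
    }
    digits.reverse();
    String::from_utf8(digits).expect("alphabet is ASCII")
}
```

Mainnet addresses produced this way start with `T` (a consequence of the 0x41 prefix), which makes a quick sanity check for the unit tests.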
MantisClone
left a comment
Review Summary
Thanks for moving the Tron substreams to the dedicated payments-substream repo - this follows the established convention (like price-aggregators-subgraph) and keeps the SDK repo focused.
Items to Address
- Amount format (`tron/src/lib.rs`) - see pending comment thread about hex vs decimal
- Package filename (`tron/subgraph.yaml:19`) - please verify the filename produced by `make package` matches what's referenced here (underscores vs hyphens)
Suggestions
- Package naming (`tron/substreams.yaml:3`) - worth considering what naming convention will work as the repo grows to support other protocols (see inline comment)
- CI error handling (`.github/workflows/tron-build.yml:153`) - `set +e` masks failures; consider explicit exit code checking
Questions
- What's the plan for `packages/substreams-tron` in the requestNetwork monorepo PR #1691? Should that code be removed now that this repo is the canonical location?
What Looks Good
- Rust implementation is solid with good test coverage
- Base58Check address encoding is correct
- Event signature detection is properly validated
- CI/CD pipeline is comprehensive (build, test, deploy stages)
- README documentation is helpful
- Fix package filename in subgraph.yaml (use underscores to match substreams.yaml)
- Improve CI error handling by checking exit code directly instead of fragile grep
- Convert uint256 to decimal strings for TheGraph BigInt compatibility
…ingfast
- Update substreams.yaml for mainnet only (block 79216121)
- Use mainnet-evm.tron.streamingfast.io:443 endpoint
- Add CI workflow for build, test, and publish on merge to main
- Add Dockerfile.sink and docker-compose.yml for SQL sink deployment
- Remove subgraph.yaml (Substreams-Powered Subgraphs not supported for non-EVM)
- Add .gitignore to exclude build artifacts
- Simplify lib.rs for mainnet-only proxy address
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tron/src/lib.rs`:
- Around line 194-201: The function extract_owner_address currently slices raw
protobuf bytes (parameter.value[0..21]) and thus picks up wire-format tag/length
bytes; instead decode the Any.value as the actual TriggerSmartContract protobuf
message using prost::Message::decode (or the generated TriggerSmartContract
struct), then read the owner_address field from that decoded struct and
base58_encode the actual 20-byte address; handle decode errors by
logging/returning an empty string or Result as appropriate. Ensure you reference
extract_owner_address and use prost::Message::decode on parameter.value to get
TriggerSmartContract.owner_address before encoding.
fn extract_owner_address(parameter: &prost_types::Any) -> String {
    // The owner_address is typically at the beginning of the parameter value
    if parameter.value.len() >= 21 {
        base58_encode(&parameter.value[0..21])
    } else {
        String::new()
    }
}
🧩 Analysis chain
🏁 Script executed:
# Check repository structure
find . -type f -name "*.rs" | grep -E "tron|proto" | head -20Repository: RequestNetwork/payments-substream
Length of output: 207
🏁 Script executed:
# Look at the file mentioned in the review
wc -l tron/src/lib.rs
🏁 Script executed:
# Check if there are proto files in the repository
find . -type f -name "*.proto"Repository: RequestNetwork/payments-substream
Length of output: 120
🏁 Script executed:
# Read the extract_owner_address function and surrounding context
sed -n '180,220p' tron/src/lib.rs
🏁 Script executed:
# Search for where extract_owner_address is called
rg "extract_owner_address" tron/src/lib.rs -B 3 -A 3Repository: RequestNetwork/payments-substream
Length of output: 541
🏁 Script executed:
# Check the proto file to understand message structures
cat tron/proto/request/tron/v1/payments.proto
🏁 Script executed:
# Look for log_context definition and contracts structure
rg "log_context" tron/src/lib.rs -B 5 -A 5Repository: RequestNetwork/payments-substream
Length of output: 59
🏁 Script executed:
# Check the proto-generated files to understand the message structure
head -100 tron/src/pb/protocol.rs
🏁 Script executed:
# Look for TriggerSmartContract or related message definitions
rg "TriggerSmartContract" tron/src/pb/protocol.rs | head -20Repository: RequestNetwork/payments-substream
Length of output: 311
🏁 Script executed:
# Find TriggerSmartContract struct definition
rg "pub struct TriggerSmartContract" tron/src/pb/protocol.rs -A 30Repository: RequestNetwork/payments-substream
Length of output: 1220
🏁 Script executed:
# Look for how contracts are retrieved and what type they are
rg "contracts" tron/src/lib.rs | grep -E "fn |struct |let " | head -20Repository: RequestNetwork/payments-substream
Length of output: 59
🏁 Script executed:
# Check how the parameter field is populated and used
sed -n '1,100p' tron/src/lib.rs
🏁 Script executed:
# Continue reading from line 100 to see parse_transfer_with_reference_and_fee
sed -n '100,200p' tron/src/lib.rs
🏁 Script executed:
# Look for Transaction struct definition to understand parameter field
rg "pub struct Transaction" tron/src/pb/ -A 20Repository: RequestNetwork/payments-substream
Length of output: 7624
🏁 Script executed:
# Check how parameter is being used specifically in the calling code
sed -n '130,165p' tron/src/lib.rs
🏁 Script executed:
# Look for tests of extract_owner_address
rg "extract_owner_address" tron/src/lib.rs -B 5 -A 10Repository: RequestNetwork/payments-substream
Length of output: 975
🏁 Script executed:
# Check if there are any tests in the file
rg "#\[test\]|#\[cfg\(test\)\]" tron/src/lib.rs -A 20 | head -50Repository: RequestNetwork/payments-substream
Length of output: 1868
🏁 Script executed:
# Look at the base58_encode function to understand expected input
sed -n '202,230p' tron/src/lib.rs
🏁 Script executed:
# Check if there are integration tests or test data files
find . -type f -name "*.rs" | xargs grep -l "extract_owner_address"Repository: RequestNetwork/payments-substream
Length of output: 94
🏁 Script executed:
# Look for any test data or fixtures
find . -type f \( -name "*test*" -o -name "*fixture*" \) | head -20
🏁 Script executed:
# Check if there are any comments or TODOs about the parameter deserialization
rg "TODO|FIXME|BUG|HACK" tron/src/lib.rsRepository: RequestNetwork/payments-substream
Length of output: 59
🏁 Script executed:
# Let me verify the protobuf wire format by checking how prost serializes messages
rg "prost_types::Any" tron/src/ -B 3 -A 3Repository: RequestNetwork/payments-substream
Length of output: 979
🏁 Script executed:
# Check if there's any code that actually uses the contract type information
rg "r#type\|ContractType" tron/src/lib.rsRepository: RequestNetwork/payments-substream
Length of output: 59
🏁 Script executed:
# Look for any deserialization code that might handle this properly elsewhere
rg "prost::Message|decode|Message::decode" tron/src/lib.rsRepository: RequestNetwork/payments-substream
Length of output: 654
🏁 Script executed:
# Check the Contract structure to confirm the parameter is prost_types::Any
sed -n '1,50p' tron/src/pb/protocol.rs | grep -A 10 "pub struct Contract"
🏁 Script executed:
# Look at how the contract parameter is actually populated/created
rg "parameter:" tron/src/ -B 2 -A 2Repository: RequestNetwork/payments-substream
Length of output: 1083
🏁 Script executed:
# Verify if prost_types::Any is actually the serialized protobuf or something else
rg "prost_types::Any" tron/src/pb/protocol.rs -B 3 -A 3Repository: RequestNetwork/payments-substream
Length of output: 407
🏁 Script executed:
# Let me check how prost_types::Any is documented and what value contains
# Also check if there's any documentation in comments about the parameter structure
sed -n '1,50p' tron/src/lib.rs
🏁 Script executed:
# Check if there's any import or usage documentation about prost_types::Any
rg "use.*prost_types" tron/src/Repository: RequestNetwork/payments-substream
Length of output: 59
🏁 Script executed:
# Let me verify the actual protobuf message structure by examining how TriggerSmartContract would be serialized
# Check if there are any comments explaining the parameter format
sed -n '135,155p' tron/src/lib.rs
🏁 Script executed:
# Check what prost_types::Any actually contains by looking at its usage
# See if there are any comments or documentation
head -350 tron/src/lib.rs | tail -50
🏁 Script executed:
# Look for actual data about how the parameter is used - check if there are real values being parsed
rg "parameter" tron/src/lib.rs | grep -v "fn \|///"Repository: RequestNetwork/payments-substream
Length of output: 379
🏁 Script executed:
# Check the Cargo.toml to understand the prost version being used
cat Cargo.toml | grep -A 5 prost
🏁 Script executed:
# Find Cargo.toml
find . -name "Cargo.toml" -type fRepository: RequestNetwork/payments-substream
Length of output: 94
🏁 Script executed:
# Let me understand prost_types::Any by looking at actual TRON protocol docs
# Check if there's any documentation in the codebase about parameter structure
rg "Any" tron/src/pb/protocol.rs | head -20Repository: RequestNetwork/payments-substream
Length of output: 143
🏁 Script executed:
# The key insight: prost_types::Any stores serialized bytes in the 'value' field
# Let me verify by checking what the actual bytes would look like
# If owner_address is first field (tag=0x0a) and is 20 bytes, the wire format would be:
# 0x0a (tag), 0x14 (length=20), then 20 bytes of address
# So bytes[0..21] would get: tag + length + only 19 bytes of actual data
# This is the critical issue - let's see if this is actually tested
cat tron/src/lib.rs | grep -A 50 "test_" | grep -A 20 "extract_owner"
🏁 Script executed:
# Check the Cargo.toml to see dependencies
cat tron/Cargo.toml
🏁 Script executed:
# Most importantly, check if extract_owner_address is tested
rg "fn test" tron/src/lib.rs | tail -20Repository: RequestNetwork/payments-substream
Length of output: 397
🏁 Script executed:
# See all tests in the file
sed -n '300,359p' tron/src/lib.rs
🏁 Script executed:
# Let me verify the exact protobuf wire format by examining what the serialized bytes would look like
# Create a simple test to see what prost actually generates
python3 << 'EOF'
# For protobuf wire format:
# Field 1 (tag 0x0a), wire type 2 (length-delimited), with 20 bytes of data
# The wire format would be: 0x0a (tag+wiretype), 0x14 (length=20), then 20 bytes
# So if owner_address is the first field and is 20 bytes:
# Serialized: [0x0a, 0x14, byte1, byte2, ..., byte20, ...]
# Taking [0..21] would give:
# [0x0a, 0x14, byte1, byte2, ..., byte19] -- MISSING BYTE20
# This means the function is taking the wrong bytes!
# It includes frame bytes and cuts off the last byte of the address
tag = 0x0a # field 1, wire type 2
length = 0x14 # 20 bytes
print(f"Frame overhead: 2 bytes (tag + length)")
print(f"Taking [0..21]: tag + length + 19 bytes = WRONG (missing 1 byte, includes frame)")
print(f"Correct would be: [2..22] to skip frame and get all 20 bytes")
EOF
🏁 Script executed:
# Verify there's no test that catches this bug with actual protobuf-encoded data
grep -n "extract_owner" tron/src/lib.rsRepository: RequestNetwork/payments-substream
Length of output: 194
extract_owner_address incorrectly extracts address bytes from serialized protobuf data, producing corrupted addresses.
The function takes raw bytes [0..21] from the serialized TriggerSmartContract message. However, the protobuf wire format includes a tag byte (0x0a) and length byte (0x14), so this slice includes frame overhead and truncates the actual 20-byte address, resulting in incorrect "from" addresses in all payment records.
Deserialize the protobuf message properly using prost::Message::decode(), or add a test that validates address extraction with actual protobuf-encoded data.
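A minimal sketch of that decode-based fix, assuming the generated struct lives at `crate::pb::protocol::TriggerSmartContract` (the path is inferred from `tron/src/pb/protocol.rs` above, so treat it as an assumption) and keeping the module's existing empty-string fallback:

```rust
use prost::Message;

fn extract_owner_address(parameter: &prost_types::Any) -> String {
    // Decode the Any payload as a TriggerSmartContract message instead of
    // slicing raw wire-format bytes (which would capture the field tag and
    // length prefix and truncate the address itself).
    match crate::pb::protocol::TriggerSmartContract::decode(parameter.value.as_slice()) {
        Ok(contract) => base58_encode(&contract.owner_address),
        Err(_) => String::new(), // preserve the current "empty on failure" behavior
    }
}
```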
Include the .spkg file in the repository so Easypanel can build the Docker image.
Simplify database connection config to avoid variable substitution issues.
Use list format instead of map format to properly pass through environment variables from Easypanel to the container.
Join the easypanel network to allow DNS resolution of other Easypanel services like PostgreSQL.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @.gitignore:
- Around line 4-6: Remove the tracked .spkg artifact by deleting the
"!tron/*.spkg" negation entry from .gitignore and remove the committed .spkg
file from Git history/working tree (unstage/remove the tracked tron/*.spkg and
commit the deletion, e.g., git rm --cached or git rm then commit) so the
CI-built tron .spkg artifact is used instead; ensure only the general pattern
"*.spkg" remains in .gitignore and commit the .gitignore change along with the
removal of the tracked file.
🧹 Nitpick comments (1)
.gitignore (1)
1-20: Consider adding optional ignore patterns. While the current patterns cover the essentials, you might want to add:

- `*.log` – Log files that may be generated during development or testing
- `node_modules/` – If any JavaScript/TypeScript tooling is added later
- `.env.*` – More general pattern to catch variants like `.env.production`, `.env.test`, etc.

These are optional improvements and not critical for the current PR.
# Substreams packages (built during CI) - except production package
*.spkg
!tron/*.spkg
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if .spkg files are currently tracked and their sizes
echo "=== Checking tracked .spkg files ==="
git ls-files '*.spkg' 2>/dev/null || echo "No .spkg files currently tracked"
echo -e "\n=== Checking tron directory ==="
if [ -d "tron" ]; then
echo "Size of tron/ directory:"
du -sh tron/ 2>/dev/null || echo "Unable to determine size"
echo -e "\nContents of tron/:"
ls -lah tron/ 2>/dev/null || echo "Unable to list contents"
fi
echo -e "\n=== Checking for any untracked .spkg files ==="
find . -name "*.spkg" -type f 2>/dev/null | head -20
echo -e "\n=== Checking git status for .spkg files ==="
git status --short | grep -i spkg || echo "No .spkg files in current git status"
🏁 Script executed:
#!/bin/bash
# Get more context about the deployment and build process
echo "=== Checking Makefile in tron directory ==="
cat tron/Makefile
echo -e "\n=== Checking substreams.yaml ==="
cat tron/substreams.yaml
echo -e "\n=== Checking README in tron directory ==="
cat tron/README.md
echo -e "\n=== Checking main README ==="
head -100 README.md 2>/dev/null || echo "No main README"
echo -e "\n=== Checking for CI/CD workflow files ==="
find . -path ./.git -prune -o -name "*.yml" -o -name "*.yaml" | grep -E "(workflow|ci|deploy)" | head -20
echo -e "\n=== Checking git log for this file ==="
git log --oneline -- "tron/request-network-tron-v0.1.0.spkg" | head -5Repository: RequestNetwork/payments-substream
Length of output: 6068
🏁 Script executed:
#!/bin/bash
# Check the CI/CD workflow to understand build and deployment strategy
echo "=== Checking tron-build.yml workflow ==="
cat .github/workflows/tron-build.yml
echo -e "\n=== Checking git history for .spkg file ==="
git log --all --oneline --follow -- "tron/request-network-tron-v0.1.0.spkg" | head -10
echo -e "\n=== Checking if .spkg changes are common ==="
git log --oneline --all --diff-filter=M -- "tron/request-network-tron-v0.1.0.spkg" | wc -l
echo -e "\n=== Checking deployment documentation ==="
find . -name "*.md" -type f | xargs grep -l "deploy\|spkg" 2>/dev/null | head -10Repository: RequestNetwork/payments-substream
Length of output: 9908
Remove the .spkg file from version control—your CI/CD workflow already handles artifact publishing correctly.
The exception pattern !tron/*.spkg tracks a 636KB build artifact that is never rebuilt (git history shows zero modifications to this file). Meanwhile, your CI/CD workflow (tron-build.yml) already builds, tests, and publishes the .spkg as a GitHub artifact after every push to main, making the committed copy stale and redundant.
Remove the exception from .gitignore and delete the tracked .spkg file. Developers and deployments should use the CI-generated artifact from GitHub instead, ensuring they always get the freshly built version that passed integration tests.
- Added 'chain' field to Payment protobuf message
- Updated schema.sql with chain column and indexes
- Modified Rust code to parse chain from params (defaults to "tron")
- Updated db_out to include chain in database changes
- Key now includes chain for uniqueness across networks
- Added tests for chain parameter parsing

This allows the same payments table to be used by multiple substreams targeting different blockchain networks.
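The uniqueness point amounts to prefixing the row key with the chain; a minimal sketch of the idea (the function and field names are illustrative, not the module's actual code):

```rust
// Distinct networks can produce the same tx hash / log index, so the chain
// must participate in the key for the shared payments table.
fn payment_key(chain: &str, tx_hash: &str, log_index: u64) -> String {
    format!("{chain}-{tx_hash}-{log_index}")
}
```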
- Prerequisites and installation instructions
- Project structure overview
- Development workflow (changes, build, test)
- Local stream and sink testing
- Production deployment to Easypanel
- Multi-chain support documentation
- Troubleshooting section
- Add substreams-entity-change dependency for graph_out module
- Implement graph_out function to convert payments to EntityChanges
- Add subgraph.yaml manifest for Substreams-powered subgraph deployment
- Update schema.graphql with chain field for multi-chain support
- Update README with The Graph deployment instructions
- Rebuild .spkg with new graph_out module
- Add deploy-subgraph job that runs on merge to main
- Install Graph CLI and deploy to Subgraph Studio
- Add manual workflow_dispatch option for subgraph deployment
- Deploy uses version from Cargo.toml as version label
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tron/src/lib.rs`:
- Around line 146-148: The code is parsing payment.amount and payment.fee_amount
to i64 then falling back to 0 (causing data corruption for large values);
instead pass the original string values into the .set calls so the Graph BigInt
can accept them. Replace the .set("amount",
payment.amount.parse::<i64>().unwrap_or(0)) and .set("feeAmount",
payment.fee_amount.parse::<i64>().unwrap_or(0)) uses with .set("amount",
payment.amount) and .set("feeAmount", payment.fee_amount) (or
payment.amount.clone()/payment.fee_amount.clone() as needed) so values are
stored as strings rather than truncated i64s. Ensure payment.payment_reference
handling remains unchanged.
tron/src/lib.rs
Outdated
| .set("amount", payment.amount.parse::<i64>().unwrap_or(0)) | ||
| .set("reference", hex::decode(&payment.payment_reference).unwrap_or_default()) | ||
| .set("feeAmount", payment.fee_amount.parse::<i64>().unwrap_or(0)) |
Data corruption: i64 parsing silently truncates large token amounts to 0.
payment.amount.parse::<i64>() fails for values exceeding i64::MAX (~9.2×10¹⁸). For 18-decimal tokens, amounts as small as ~9.2 tokens overflow. The unwrap_or(0) silently records these as 0, corrupting payment data.
The Graph's BigInt type accepts string values directly—pass the string without parsing to preserve precision.
Proposed fix
tables
.create_row("Payment", &id)
.set("chain", &payment.chain)
.set("tokenAddress", &payment.token_address)
.set("to", &payment.to)
- .set("amount", payment.amount.parse::<i64>().unwrap_or(0))
+ .set("amount", &payment.amount)
.set("reference", hex::decode(&payment.payment_reference).unwrap_or_default())
- .set("feeAmount", payment.fee_amount.parse::<i64>().unwrap_or(0))
+ .set("feeAmount", &payment.fee_amount)
.set("feeAddress", &payment.fee_address)🤖 Prompt for AI Agents
In `@tron/src/lib.rs` around lines 146 - 148, The code is parsing payment.amount
and payment.fee_amount to i64 then falling back to 0 (causing data corruption
for large values); instead pass the original string values into the .set calls
so the Graph BigInt can accept them. Replace the .set("amount",
payment.amount.parse::<i64>().unwrap_or(0)) and .set("feeAmount",
payment.fee_amount.parse::<i64>().unwrap_or(0)) uses with .set("amount",
payment.amount) and .set("feeAmount", payment.fee_amount) (or
payment.amount.clone()/payment.fee_amount.clone() as needed) so values are
stored as strings rather than truncated i64s. Ensure payment.payment_reference
handling remains unchanged.
Use request-payments-tron to follow existing naming pattern:
- request-payments-mainnet (Ethereum)
- request-payments-matic (Polygon)
- request-payments-arbitrum-one
- etc.
The Graph only supports Substreams on native TRON - not subgraphs. Subgraphs are only available for TRON EVM, which is a different network. Removed:
- subgraph.yaml
- graph_out module and function
- substreams-entity-change dependency
- deploy-subgraph GitHub Actions job
- The Graph deployment docs from README

SQL sink remains as the primary deployment option for native TRON.
- Add 0x prefix to payment_reference to match SDK expectations
- Add Hasura GraphQL API documentation to README
- Update substream package with payment_reference fix
- Add energy_used, energy_fee, and net_fee fields to Payment proto
- Extract TRON resource info from transaction receipt
- Update SQL schema with new columns
- Regenerate protobuf bindings and spkg

Summary by CodeRabbit
Release Notes
New Features
Chores
Documentation