feat: generate testing reports in RPC conformance testing #5778

Open
akaladarshi wants to merge 4 commits into main from akaladarshi/add-testing-report

Conversation

@akaladarshi (Collaborator) commented on Jun 24, 2025

Summary of changes

Changes introduced in this pull request:

  • Adds the changes needed to generate the testing report
  • Removes the old reporting mechanism
  • Adds a --report-dir flag to the forest-tool api compare command to dump the generated report in JSON format
  • Adds a --report-mode flag to generate different types of reports (see the sketch below):
    • summary
    • failure-only
    • full (default)
  • Refactors console output: results are now printed only when no report is being generated, i.e. when the --report-dir flag is not provided
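
For context, here is a minimal sketch of how flags like these are typically declared with clap's ValueEnum derive. The struct name, field names, and default handling below are illustrative assumptions, not the PR's actual definitions:

```rust
use std::path::PathBuf;
use clap::{Parser, ValueEnum};

/// Level of detail for the generated report.
#[derive(Debug, Clone, Copy, ValueEnum)]
pub enum ReportMode {
    /// Show everything (default).
    Full,
    /// Show the summary plus failures only.
    FailureOnly,
    /// Show the summary only.
    Summary,
}

/// Illustrative argument struct; the real `forest-tool api compare`
/// subcommand defines more options than shown here.
#[derive(Parser)]
struct CompareArgs {
    /// Directory to dump the generated JSON report into.
    /// When omitted, results are printed to the console instead.
    #[arg(long)]
    report_dir: Option<PathBuf>,
    /// Report detail level.
    #[arg(long, value_enum, default_value = "full")]
    report_mode: ReportMode,
}
```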

Reference issue to close (if applicable)

Closes: #4308

Other information and links

Change checklist

  • I have performed a self-review of my own code,
  • I have made corresponding changes to the documentation. All new code adheres to the team's documentation standards,
  • I have added tests that prove my fix is effective or that my feature works (if possible),
  • I have made sure the CHANGELOG is up-to-date. All user-facing changes should be reflected in this document.

@akaladarshi force-pushed the akaladarshi/add-testing-report branch from 15e1488 to d77cfa4 on June 24, 2025 15:30
@akaladarshi force-pushed the akaladarshi/add-testing-report branch from d77cfa4 to d1808e7 on June 26, 2025 17:24
@akaladarshi marked this pull request as ready for review on June 26, 2025 17:28
@akaladarshi requested a review from a team as a code owner on June 26, 2025 17:28
@akaladarshi requested reviews from LesnyRumcajs and elmattic and removed the request for a team on June 26, 2025 17:28
@akaladarshi added the RPC label (requires calibnet RPC checks to run on CI) on Jun 26, 2025
Comment on lines 85 to 89
 /// Brief description of a single method call against a single host
 #[derive(Debug, Clone, PartialOrd, Ord, PartialEq, Eq, Hash)]
-enum TestSummary {
+pub enum TestSummary {
     /// Server spoke JSON-RPC: no such method
     MissingMethod,

The TestSummary enum has been made public but lacks documentation. Since this is now part of the public API, each variant should be documented to help users understand what each test outcome represents. Consider adding documentation for the enum itself and each variant to clarify their meaning and usage context.

Suggested change
-/// Brief description of a single method call against a single host
-#[derive(Debug, Clone, PartialOrd, Ord, PartialEq, Eq, Hash)]
-pub enum TestSummary {
-    /// Server spoke JSON-RPC: no such method
-    MissingMethod,
+/// Brief description of a single method call against a single host
+///
+/// This enum represents the outcome of a test case where an API method is called against a host.
+/// It categorizes the result of the call to help identify different types of API compatibility issues.
+#[derive(Debug, Clone, PartialOrd, Ord, PartialEq, Eq, Hash)]
+pub enum TestSummary {
+    /// Server spoke JSON-RPC: no such method
+    ///
+    /// This indicates that the server responded with a JSON-RPC error stating that the
+    /// requested method does not exist. This typically happens when testing against an API
+    /// that doesn't implement all the methods being tested.
+    MissingMethod,

Spotted by Diamond (based on custom rules)


Comment on lines +140 to +404
                    } else if *success_count == 0 {
                        "❌ All Failed"
                    } else {
                        "⚠️ Mixed Results"
                    };

                    builder.push_record([
                        method_name.as_str(),
                        &format!("{}/{}", success_count, total_count),
                        &format!("{}/{}", success_count, total_count),
                        status,
                    ]);
                }
                MethodTestStatus::NotTested | MethodTestStatus::Filtered => {
                    // Skip not tested and filtered methods in summary
                }
            }
        }

        let table = builder.build().with(Style::markdown()).to_string();
        println!("\n{}", table);

        // Print overall summary
        let total_methods = self.method_reports.len();
        let tested_methods = self
            .method_reports
            .values()
            .filter(|r| matches!(r.status, MethodTestStatus::Tested { .. }))
            .count();
        let failed_methods = self
            .method_reports
            .values()
            .filter(|r| {
                matches!(
                    r.status,
                    MethodTestStatus::Tested { failure_count, .. } if failure_count > 0
                )
            })
            .count();

        println!("\n📊 Test Summary:");
        println!(" Total methods: {}", total_methods);
        println!(" Tested methods: {}", tested_methods);
        println!(" Failed methods: {}", failed_methods);
        println!(" Duration: {}s", self.start_time.elapsed().as_secs());
    }

    /// Finalize and save the report in the provided directory
    pub fn finalize_and_save(mut self, report_dir: &Path) -> anyhow::Result<()> {
        // Calculate performance metrics for each method
        for (method_name, timings) in self.method_timings {
            if let Some(report) = self.method_reports.get_mut(&method_name) {
                report.performance = PerformanceMetrics::from_durations(&timings);
            }
        }

        let mut methods: Vec<MethodReport> = self.method_reports.into_values().collect();
        methods.sort_by(|a, b| a.name.cmp(&b.name));

        let report = ApiTestReport {
            execution_datetime_utc: chrono::Utc::now().to_rfc3339(),
            total_duration_secs: self.start_time.elapsed().as_secs(),
            methods,
        };

        if !report_dir.is_dir() {
            std::fs::create_dir_all(report_dir)?;
        }

        let file_name = match self.report_mode {
            ReportMode::Full => "full_report.json",
            ReportMode::FailureOnly => "failure_report.json",
            ReportMode::Summary => "summary_report.json",
        };

        std::fs::write(
            report_dir.join(file_name),
            serde_json::to_string_pretty(&report)?,
        )?;
        Ok(())
    }
}

/// Generate a diff between forest and lotus responses
pub fn generate_diff(forest_json: &serde_json::Value, lotus_json: &serde_json::Value) -> String {
    let forest_pretty = serde_json::to_string_pretty(forest_json).unwrap_or_default();
    let lotus_pretty = serde_json::to_string_pretty(lotus_json).unwrap_or_default();
    let diff = TextDiff::from_lines(&forest_pretty, &lotus_pretty);

    let mut diff_text = String::new();
    for change in diff.iter_all_changes() {
        let sign = match change.tag() {
            ChangeTag::Delete => "-",
            ChangeTag::Insert => "+",
            ChangeTag::Equal => " ",
        };
        diff_text.push_str(&format!("{sign}{change}"));
    }
    diff_text
}

The ReportBuilder implementation introduces several new public methods and data structures without corresponding test coverage. While unit tests have been added for the PerformanceMetrics calculations, other key functionality such as track_test_result, print_summary, finalize_and_save, and generate_diff remains untested.

Consider adding tests for these methods to ensure they behave as expected across different scenarios. This is particularly important for the report generation logic which handles different output formats and filtering modes.
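
As a starting point, here is a minimal test sketch for the generate_diff helper shown above; the JSON field names and values are illustrative:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn generate_diff_marks_changed_lines() {
        // Two responses that differ only in the `height` field (illustrative data).
        let forest = serde_json::json!({ "height": 100, "miner": "t01000" });
        let lotus = serde_json::json!({ "height": 101, "miner": "t01000" });

        let diff = generate_diff(&forest, &lotus);

        // Lines present only in the Forest response are prefixed with `-`,
        // lines present only in the Lotus response with `+`.
        assert!(diff.contains("-  \"height\": 100"));
        assert!(diff.contains("+  \"height\": 101"));
    }
}
```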

Spotted by Diamond (based on custom rules)


Co-authored-by: graphite-app[bot] <96075541+graphite-app[bot]@users.noreply.github.com>
Comment on lines +42 to +47
#[derive(Debug, Clone, Copy, ValueEnum)]
pub enum ReportMode {
    Full,        // Show everything (default)
    FailureOnly, // Show summary + failures only
    Summary,     // Show summary only
}

The ReportMode enum would benefit from proper documentation comments (///) instead of inline comments (//) to better document this user-facing feature. Since this introduces new command-line flags that users will interact with, each variant should have clear documentation that will appear in generated docs. Additionally, according to the team's documentation standards mentioned in the PR checklist, a CHANGELOG entry should be added to document these user-facing changes.

/// Controls the level of detail in generated test reports
#[derive(Debug, Clone, Copy, ValueEnum)]
pub enum ReportMode {
    /// Show all test results including successes and failures (default)
    Full,
    /// Show summary and failure details only
    FailureOnly,
    /// Show only the summary statistics
    Summary,
}

Spotted by Diamond (based on custom rules)
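
A side note on the flag values (an observation, not from the PR discussion): clap's ValueEnum derive renames variants to kebab-case, which is how these variants map to the --report-mode values summary, failure-only, and full listed in the PR description. A small test sketch, assuming ReportMode is in scope:

```rust
use clap::ValueEnum;

#[test]
fn report_mode_cli_names_are_kebab_case() {
    // ValueEnum renames variants to kebab-case by default, so the CLI
    // values match those documented in the PR description.
    assert_eq!(ReportMode::Full.to_possible_value().unwrap().get_name(), "full");
    assert_eq!(
        ReportMode::FailureOnly.to_possible_value().unwrap().get_name(),
        "failure-only"
    );
    assert_eq!(
        ReportMode::Summary.to_possible_value().unwrap().get_name(),
        "summary"
    );
}
```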


Comment on lines 172 to +183
 }

-struct TestResult {
+/// Result of running a single RPC test
+pub struct TestResult {
     /// Forest result after calling the RPC method.
-    forest_status: TestSummary,
+    pub forest_status: TestSummary,
     /// Lotus result after calling the RPC method.
-    lotus_status: TestSummary,
+    pub lotus_status: TestSummary,
     /// Optional data dump if either status was invalid.
-    test_dump: Option<TestDump>,
+    pub test_dump: Option<TestDump>,
+    /// Duration of the RPC call.
+    pub duration: Duration,

Each public field in the TestResult struct should have documentation comments. Consider adding descriptive doc comments (///) for forest_status, lotus_status, test_dump, and duration fields to maintain consistency with the project's documentation standards and improve code readability for other developers.

Spotted by Diamond (based on custom rules)
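
For reference, a minimal sketch of constructing a TestResult now that its fields are public, assuming TestResult, TestSummary, and TestDump are in scope; the values are illustrative:

```rust
use std::time::Duration;

/// Illustrative helper: build a result for a method that neither node implements.
fn missing_method_result() -> TestResult {
    TestResult {
        forest_status: TestSummary::MissingMethod,
        lotus_status: TestSummary::MissingMethod,
        test_dump: None,
        duration: Duration::from_millis(42),
    }
}
```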


@akaladarshi (Collaborator, Author) commented on Jun 27, 2025

@LesnyRumcajs Any specific reason we have another function just for adding a single test, create_tests_pass_2?

@LesnyRumcajs (Member) replied:

> @LesnyRumcajs Any specific reason we have another function just for adding a single test, create_tests_pass_2?

I don't know. @hanabi1224 introduced it in 9f7bedc so he might shed some light on this.

Labels
RPC (requires calibnet RPC checks to run on CI)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Rigorous RPC conformance testing reports - design
2 participants