# feat: generate testing reports in RPC conformance testing #5778

## Conversation
```diff
 /// Brief description of a single method call against a single host
 #[derive(Debug, Clone, PartialOrd, Ord, PartialEq, Eq, Hash)]
-enum TestSummary {
+pub enum TestSummary {
     /// Server spoke JSON-RPC: no such method
     MissingMethod,
```
The `TestSummary` enum has been made public but lacks documentation. Since this is now part of the public API, each variant should be documented to help users understand what each test outcome represents. Consider adding documentation for the enum itself and each variant to clarify their meaning and usage context.
Suggested change:

```rust
/// Brief description of a single method call against a single host
///
/// This enum represents the outcome of a test case where an API method is called against a host.
/// It categorizes the result of the call to help identify different types of API compatibility issues.
#[derive(Debug, Clone, PartialOrd, Ord, PartialEq, Eq, Hash)]
pub enum TestSummary {
    /// Server spoke JSON-RPC: no such method
    ///
    /// This indicates that the server responded with a JSON-RPC error stating that the
    /// requested method does not exist. This typically happens when testing against an API
    /// that doesn't implement all the methods being tested.
    MissingMethod,
```
*Spotted by Diamond (based on custom rules)*
```rust
                    } else if *success_count == 0 {
                        "❌ All Failed"
                    } else {
                        "⚠️ Mixed Results"
                    };

                    builder.push_record([
                        method_name.as_str(),
                        &format!("{}/{}", success_count, total_count),
                        status,
                    ]);
                }
                MethodTestStatus::NotTested | MethodTestStatus::Filtered => {
                    // Skip not tested and filtered methods in summary
                }
            }
        }

        let table = builder.build().with(Style::markdown()).to_string();
        println!("\n{}", table);

        // Print overall summary
        let total_methods = self.method_reports.len();
        let tested_methods = self
            .method_reports
            .values()
            .filter(|r| matches!(r.status, MethodTestStatus::Tested { .. }))
            .count();
        let failed_methods = self
            .method_reports
            .values()
            .filter(|r| {
                matches!(
                    r.status,
                    MethodTestStatus::Tested { failure_count, .. } if failure_count > 0
                )
            })
            .count();

        println!("\n📊 Test Summary:");
        println!(" Total methods: {}", total_methods);
        println!(" Tested methods: {}", tested_methods);
        println!(" Failed methods: {}", failed_methods);
        println!(" Duration: {}s", self.start_time.elapsed().as_secs());
    }

    /// Finalize and save the report in the provided directory
    pub fn finalize_and_save(mut self, report_dir: &Path) -> anyhow::Result<()> {
        // Calculate performance metrics for each method
        for (method_name, timings) in self.method_timings {
            if let Some(report) = self.method_reports.get_mut(&method_name) {
                report.performance = PerformanceMetrics::from_durations(&timings);
            }
        }

        let mut methods: Vec<MethodReport> = self.method_reports.into_values().collect();
        methods.sort_by(|a, b| a.name.cmp(&b.name));

        let report = ApiTestReport {
            execution_datetime_utc: chrono::Utc::now().to_rfc3339(),
            total_duration_secs: self.start_time.elapsed().as_secs(),
            methods,
        };

        if !report_dir.is_dir() {
            std::fs::create_dir_all(report_dir)?;
        }

        let file_name = match self.report_mode {
            ReportMode::Full => "full_report.json",
            ReportMode::FailureOnly => "failure_report.json",
            ReportMode::Summary => "summary_report.json",
        };

        std::fs::write(
            report_dir.join(file_name),
            serde_json::to_string_pretty(&report)?,
        )?;
        Ok(())
    }
}

/// Generate a diff between forest and lotus responses
pub fn generate_diff(forest_json: &serde_json::Value, lotus_json: &serde_json::Value) -> String {
    let forest_pretty = serde_json::to_string_pretty(forest_json).unwrap_or_default();
    let lotus_pretty = serde_json::to_string_pretty(lotus_json).unwrap_or_default();
    let diff = TextDiff::from_lines(&forest_pretty, &lotus_pretty);

    let mut diff_text = String::new();
    for change in diff.iter_all_changes() {
        let sign = match change.tag() {
            ChangeTag::Delete => "-",
            ChangeTag::Insert => "+",
            ChangeTag::Equal => " ",
        };
        diff_text.push_str(&format!("{sign}{change}"));
    }
    diff_text
}
```
The `ReportBuilder` implementation introduces several new public methods and data structures without corresponding test coverage. While unit tests have been added for the `PerformanceMetrics` calculations, other key functionality like `track_test_result`, `print_summary`, `finalize_and_save`, and `generate_diff` remains untested.

Consider adding tests for these methods to ensure they behave as expected across different scenarios. This is particularly important for the report generation logic, which handles different output formats and filtering modes.
*Spotted by Diamond (based on custom rules)*
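As an illustration of the kind of coverage this comment asks for, here is a minimal sketch of unit tests for `generate_diff`, assuming the tests live in the same module as the function; the test names and JSON values are hypothetical:

```rust
#[cfg(test)]
mod report_tests {
    use super::*;

    #[test]
    fn generate_diff_marks_differing_lines() {
        // Hypothetical responses; only the `height` value differs.
        let forest = serde_json::json!({ "height": 100 });
        let lotus = serde_json::json!({ "height": 101 });
        let diff = generate_diff(&forest, &lotus);
        // The forest side should appear as a deletion, the lotus side as an insertion.
        assert!(diff.contains("-  \"height\": 100"));
        assert!(diff.contains("+  \"height\": 101"));
    }

    #[test]
    fn generate_diff_of_equal_values_has_no_markers() {
        let value = serde_json::json!({ "ok": true });
        let diff = generate_diff(&value, &value);
        // Equal lines are prefixed with a space, never `-` or `+`.
        assert!(diff.lines().all(|line| line.starts_with(' ')));
    }
}
```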
```rust
#[derive(Debug, Clone, Copy, ValueEnum)]
pub enum ReportMode {
    Full,        // Show everything (default)
    FailureOnly, // Show summary + failures only
    Summary,     // Show summary only
}
```
The `ReportMode` enum would benefit from proper documentation comments (`///`) instead of inline comments (`//`) to better document this user-facing feature. Since this introduces new command-line flags that users will interact with, each variant should have clear documentation that will appear in generated docs. Additionally, according to the team's documentation standards mentioned in the PR checklist, a CHANGELOG entry should be added to document these user-facing changes.
Suggested change:

```rust
/// Controls the level of detail in generated test reports
#[derive(Debug, Clone, Copy, ValueEnum)]
pub enum ReportMode {
    /// Show all test results including successes and failures (default)
    Full,
    /// Show summary and failure details only
    FailureOnly,
    /// Show only the summary statistics
    Summary,
}
```
*Spotted by Diamond (based on custom rules)*
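Since `ReportMode` derives clap's `ValueEnum`, the variants are exposed as kebab-case values (`full`, `failure-only`, `summary`) on the command line. A hypothetical sketch of how the flags from this PR could be declared follows; the `CompareOpts` struct and its field names are assumptions for illustration, not the actual code:

```rust
use clap::{Parser, ValueEnum};
use std::path::PathBuf;

#[derive(Debug, Clone, Copy, ValueEnum)]
pub enum ReportMode {
    Full,
    FailureOnly,
    Summary,
}

/// Hypothetical subset of the `forest-tool api compare` options.
#[derive(Parser)]
struct CompareOpts {
    /// Directory in which to dump the generated JSON report.
    #[arg(long)]
    report_dir: Option<PathBuf>,
    /// Report detail level; `ValueEnum` derives the kebab-case values
    /// `full`, `failure-only`, and `summary` from the variant names.
    #[arg(long, value_enum, default_value = "full")]
    report_mode: ReportMode,
}
```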
```diff
 }

-struct TestResult {
+/// Result of running a single RPC test
+pub struct TestResult {
     /// Forest result after calling the RPC method.
-    forest_status: TestSummary,
+    pub forest_status: TestSummary,
     /// Lotus result after calling the RPC method.
-    lotus_status: TestSummary,
+    pub lotus_status: TestSummary,
     /// Optional data dump if either status was invalid.
-    test_dump: Option<TestDump>,
+    pub test_dump: Option<TestDump>,
+    /// Duration of the RPC call.
+    pub duration: Duration,
```
Each public field in the `TestResult` struct should have documentation comments. Consider adding descriptive doc comments (`///`) for the `forest_status`, `lotus_status`, `test_dump`, and `duration` fields to maintain consistency with the project's documentation standards and improve code readability for other developers.
*Spotted by Diamond (based on custom rules)*
@LesnyRumcajs Any specific reason we have another function just for adding a single test, `create_tests_pass_2`?

I don't know. @hanabi1224 introduced it in 9f7bedc, so he might shed some light on this.
## Summary of changes

Changes introduced in this pull request:

- `--report-dir` in the `forest-tool api compare` cmd to dump the generated report in JSON format
- `--report-mode` to generate different types of reports: `summary`, `failure-only`, and `full` (default); see the example invocation below
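For illustration, given the file-name mapping in `finalize_and_save` above, an invocation along the lines of `forest-tool api compare --report-dir ./reports --report-mode failure-only` (other `api compare` arguments elided) would write `failure_report.json` into `./reports`.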
## Reference issue to close (if applicable)

Closes: #4308
## Other information and links

## Change checklist