feat(benchmark): Add performance test suite and analysis panel #1987
Conversation
fix: docs dead links (#1984)
- fix: docs dead links
- fix: add /en prefix to english docs
feat: upgrade chrome extension to manifest v3 and react to v18
- Upgrade manifest version from v2 to v3 with updated permissions format
- Migrate background scripts to a service worker
- Update content_security_policy and web_accessible_resources format
- Replace browser_action with action
- Upgrade react and react-dom from v16 to v18 in g-devtool
- Update devtool UI to support the React 18 createRoot API
- Maintain backward compatibility with legacy versions
- Update minimum Chrome version requirement to 88
feat: add GitHub workflow for bug report reproduction check
- Add new GitHub Actions workflow 'bug-report-reproduction-check'
- Automatically analyze new bug reports for reproduction steps
- Use Mistral AI to check for complete reproduction information
- Add a friendly comment when reproduction details are missing
- Only trigger for issues labeled as 'bug'
- Add necessary permissions for issues and models access
feat: add benchmark suite for rendering performance comparison
- Add benchmark infrastructure with TestCase and TestRunner base classes
- Implement test cases for basic shapes (circle, rect, path, etc.) across multiple renderers
- Support the g-canvas, g-canvas-v4, and zrender rendering engines
- Add UI components for test execution and result visualization
- Include i18n support with Chinese and English translations
- Set up build configuration with Vite and TypeScript
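For context, here is a rough, hypothetical sketch of what TestCase/TestRunner base classes of this kind can look like. The names, signatures, and metrics shape below are illustrative assumptions, not the PR's actual code:

```ts
// Hypothetical sketch of the benchmark abstractions (illustrative only).
interface TestResult {
  name: string;
  engine: string;
  durationMs: number;
}

abstract class TestCase {
  constructor(public readonly name: string) {}
  // Render the shapes under test into the given container.
  abstract run(container: HTMLElement): Promise<void> | void;
  // Destroy canvases and release resources after measurement.
  abstract cleanup(): Promise<void> | void;
}

class TestRunner {
  constructor(
    private readonly engine: string,
    private readonly cases: TestCase[],
  ) {}

  async runAll(container: HTMLElement): Promise<TestResult[]> {
    const results: TestResult[] = [];
    for (const testCase of this.cases) {
      const start = performance.now();
      await testCase.run(container);
      results.push({
        name: testCase.name,
        engine: this.engine,
        durationMs: performance.now() - start,
      });
      await testCase.cleanup();
    }
    return results;
  }
}
```

A runner like this would be instantiated once per engine (for example for g-canvas and zrender) with the same set of cases, so the resulting durations are directly comparable.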
feat(benchmark): add collapsible insight panel in PerformanceChart
- Add state to track insight panel expansion
- Implement collapsible UI with smooth animations
- Improve styling and layout of the insight panel
- Add expand/collapse toggle functionality
- Enhance user experience with better visual feedback
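As a hedged illustration only, a collapsible panel of this kind is typically built around a single expansion state plus a CSS transition; the component and prop names below are hypothetical, not the PR's implementation:

```tsx
// Hypothetical sketch of a collapsible insight panel (illustrative only).
import React, { useState } from 'react';

export function InsightPanel({ children }: { children: React.ReactNode }) {
  const [expanded, setExpanded] = useState(false);

  return (
    <div className="insight-panel">
      <button onClick={() => setExpanded((prev) => !prev)}>
        {expanded ? 'Collapse insights' : 'Expand insights'}
      </button>
      {/* max-height + transition produces the "smooth animation" described above */}
      <div
        style={{
          maxHeight: expanded ? 600 : 0,
          overflow: 'hidden',
          transition: 'max-height 0.3s ease',
        }}
      >
        {children}
      </div>
    </div>
  );
}
```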
chore(benchmark): add performance test results for basic shapes
Add benchmark results for the basic-shapes rendering comparison between different engines, including g-canvas and zrender. The results include execution duration and memory usage metrics.
Summary of Changes
Hello @wang1212, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a new, comprehensive performance benchmark suite designed to compare the rendering performance of different graphics engines, specifically g-canvas, g-canvas-v4, and zrender. It includes a user interface with control panels, real-time logs, and an interactive chart for detailed analysis of test results, along with internationalization support.
Highlights
- New Benchmark Suite: A new, dedicated 'benchmark' directory has been added, containing a complete framework for performance testing of graphics rendering engines.
- Multi-Engine Comparison: The suite is specifically configured to test and compare the rendering performance of 'g-canvas', 'g-canvas-v4', and 'zrender' engines.
- Basic Shape Test Cases: Comprehensive test cases are included for various basic shapes, such as circles, ellipses, rectangles, lines, polylines, polygons, paths, text, and images, to evaluate rendering efficiency.
- Interactive Analysis Panel: A user-friendly interface is provided, featuring control panels for test configuration, real-time logging of test progress and results, and a dynamic chart for visualizing performance data and insights.
- Performance Metrics Collection: The system collects detailed performance metrics, including execution duration and memory usage, for both the test execution and cleanup phases, offering a thorough performance profile (a rough sketch of this kind of per-phase measurement follows after this list).
- Internationalization (i18n): The benchmark tool supports multiple languages, currently English and Chinese, enhancing usability and accessibility for a broader audience.
- Result Export Functionality: Users can export benchmark results and logs in various formats for external analysis, reporting, and historical tracking.
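For the metrics collection highlighted above, per-phase duration and memory sampling might look roughly like the following. This is an illustrative sketch only: `performance.memory` is a non-standard Chromium API, and the suite's actual collection code may differ.

```ts
// Rough sketch of sampling duration and memory for one phase (run or cleanup).
interface PhaseMetrics {
  durationMs: number;
  usedJSHeapBytes?: number; // performance.memory exists only in Chromium browsers
}

async function measurePhase(phase: () => Promise<void> | void): Promise<PhaseMetrics> {
  const start = performance.now();
  await phase();
  const durationMs = performance.now() - start;

  const memory = (performance as any).memory;
  return {
    durationMs,
    usedJSHeapBytes: memory ? memory.usedJSHeapSize : undefined,
  };
}

// Usage: measure the execution phase and the cleanup phase separately.
// const runMetrics = await measurePhase(() => testCase.run(container));
// const cleanupMetrics = await measurePhase(() => testCase.cleanup());
```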
Code Review
This pull request introduces an impressive and comprehensive benchmark suite for performance testing. The overall structure is well-designed, with a clear separation of concerns between the test runner, engines, and test cases. The UI is also well-thought-out, providing useful controls, logs, and visualizations. I've identified a few areas for improvement, mainly concerning type safety, potential memory leaks in test cases, and opportunities for refactoring to reduce code duplication. Addressing these points will enhance the robustness and maintainability of this new benchmark framework.
feat(benchmark): enhance i18n support for failure rate display
- Add new translation key 'highestFailureRate' for displaying the failure rate in both English and Chinese
- Refactor the failure rate display to use an i18n template
- Improve code formatting in the PerformanceChart component
- Fix whitespace and indentation issues in TestRunner
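As an illustration of what a templated key such as 'highestFailureRate' can look like: the dictionary values, interpolation syntax, and helper below are assumptions for the sketch, not the PR's actual i18n code.

```ts
// Hypothetical shape of the new translation entry (illustrative only).
const en = {
  highestFailureRate: 'Highest failure rate: {{engine}} ({{rate}}%)',
};

const zh = {
  highestFailureRate: '最高失败率：{{engine}}（{{rate}}%）',
};

// A tiny template helper standing in for the real i18n formatter.
function t(template: string, params: Record<string, string | number>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_: string, key: string) =>
    String(params[key] ?? ''),
  );
}

// t(en.highestFailureRate, { engine: 'zrender', rate: 12.5 });
```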
* feat(benchmark): Add performance test suite and analysis panel (#1987)
* fix: docs dead links (#1984)
  - fix: docs dead links
  - fix: add /en prefix to english docs
* feat: upgrade chrome extension to manifest v3 and react to v18
  - Upgrade manifest version from v2 to v3 with updated permissions format
  - Migrate background scripts to service worker
  - Update content_security_policy and web_accessible_resources format
  - Replace browser_action with action
  - Upgrade react and react-dom from v16 to v18 in g-devtool
  - Update devtool UI to support React 18 createRoot API
  - Maintain backward compatibility with legacy versions
  - Update minimum chrome version requirement to 88
* feat: add GitHub workflow for bug report reproduction check
  - Add new GitHub Actions workflow 'bug-report-reproduction-check'
  - Automatically analyze new bug reports for reproduction steps
  - Use Mistral AI to check for complete reproduction information
  - Add friendly comment when reproduction details are missing
  - Only trigger for issues labeled as 'bug'
  - Add necessary permissions for issues and models access
* feat: add benchmark suite for rendering performance comparison
  - Add benchmark infrastructure with TestCase and TestRunner base classes
  - Implement test cases for basic shapes (circle, rect, path, etc.) across multiple renderers
  - Support g-canvas, g-canvas-v4 and zrender rendering engines
  - Add UI components for test execution and result visualization
  - Include i18n support with Chinese and English translations
  - Set up build configuration with Vite and TypeScript
* feat(benchmark): add collapsible insight panel in PerformanceChart
  - Add state to track insight panel expansion
  - Implement collapsible UI with smooth animations
  - Improve styling and layout of insight panel
  - Add expand/collapse toggle functionality
  - Enhance user experience with better visual feedback
* chore(benchmark): add performance test results for basic shapes
  - Add benchmark results for basic shapes rendering comparison between different engines including g-canvas and zrender. The results include execution duration and memory usage metrics.
* feat(benchmark): enhance i18n support for failure rate display
  - Add new translation key 'highestFailureRate' for displaying failure rate in both English and Chinese
  - Refactor failure rate display to use i18n template
  - Improve code formatting in PerformanceChart component
  - Fix whitespace and indentation issues in TestRunner
* chore: remove other file
* feat: Add native pan and zoom demo (#1994)
  - feat: add native pan and zoom demo
    Adds a new demo under `__tests__/demos/camera/` that showcases how to implement panning and zooming on the canvas using native DOM events, in response to the user request to add such a demo. An issue in the execution environment prevented the test suite from being run: a `commitlint` hook blocked all commands, including `pnpm test`. The changes are submitted without test verification due to this environmental constraint.
  - fix: use getContextService for container access in nativePanZoom demo
  - Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
  - Co-authored-by: wang1212 <mrwang1212@126.com>
* feat: add script to fetch and display npm download stats for monorepo packages
* chore: update test config and TypeScript settings
  - Add JSDoc link to jest.unit.config.js
  - Fix module name mapper path in jest.unit.config.js
  - Expand coverage collection to more packages
  - Update coverage reporters
  - Move isolatedModules to tsconfig.json
* fix: fix loop index in tapable (#2003)
  - fix: fix loop index in SyncWaterfallHook and AsyncSeriesWaterfallHook
    - Fix loop index in SyncWaterfallHook to start from 1 instead of 0, since the first callback is already called
    - Apply the same fix to AsyncSeriesWaterfallHook for consistency
    - Add comprehensive unit tests for all tapable hook types
  - chore: fix code style
  - chore: fix code lint issue
  - chore: add changeset
* Add basic shape benchmark cases for g-canvas-local engine (#2030)
  - test: Add basic shape benchmark cases for g-canvas-local engine
  - test: Add basic shape benchmark cases for g-canvas-local engine
  - Update benchmark/src/benchmarks/g-canvas-local/engine.ts
  - Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* feat(text): add text-decoration support for text elements (#2035)
  - feat(text): add text-decoration support for text elements
  - docs: update text decoration info
  - docs: fix typos
* perf: element event batch triggering (#2005)
  - perf: element event batch triggering
  - chore: update test snapshot
  - chore: use Array.from to convert iterator for compatibility
  - chore: add changeset
  - Update __tests__/demos/perf/custom-event.ts
  - Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* perf: remove rBush logic from element picking mechanism (#2031)
  - perf: remove rBush logic from element picking mechanism
  - chore: fix lint error
  - chore: add changeset
  - chore: update test case
  - fix: the element picking range includes the element border
* chore(release): bump version (#2004)
  - Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* perf(g-plugin-canvas-renderer): improve wavy text decoration with quadratic curves
* Update __tests__/demos/event/hit-test.ts
  - Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
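The tapable loop-index fix listed above (#2003) is easiest to see in a simplified sketch. This is not the repository's actual implementation, only an illustration of why the follow-up loop must start at index 1 once the first callback has consumed the initial value:

```ts
// Simplified waterfall hook illustrating the loop-index fix (not the real tapable code).
type Callback<T> = (value: T) => T | undefined;

class SyncWaterfallHook<T> {
  private callbacks: Callback<T>[] = [];

  tap(fn: Callback<T>): void {
    this.callbacks.push(fn);
  }

  call(initial: T): T {
    if (this.callbacks.length === 0) return initial;

    // The first callback is invoked with the initial value...
    let result = this.callbacks[0](initial) ?? initial;

    // ...so the loop starts at 1; starting at 0 would invoke the first callback twice.
    for (let i = 1; i < this.callbacks.length; i++) {
      const returned = this.callbacks[i](result);
      if (returned !== undefined) result = returned;
    }
    return result;
  }
}
```

According to the commit message, the same off-by-one correction was applied to AsyncSeriesWaterfallHook, which follows the identical pattern with awaited callbacks.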
Feature Overview
Key Changes
Test Coverage
Performance metrics are collected for the following basic shapes: circle, ellipse, rect, line, polyline, polygon, path, text, and image.