feat(XMarkdown): streaming markdown benchmark #1314
base: next
Conversation
CodeRabbit: review skipped. Auto reviews are disabled on base/target branches other than the default branch. CodeRabbit also detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in its review comments, which may make its review less comprehensive.
Summary of Changes (Gemini Code Assist): Hello @Div627, I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces a new, standardized performance benchmark for streaming Markdown rendering. It uses Playwright Component Testing to automate the measurement of various Markdown libraries.
Code Review
This pull request introduces a comprehensive performance benchmark for streaming Markdown rendering using Playwright, which is a valuable addition for evaluating and comparing different rendering libraries. The overall implementation is solid. My review focuses on improving code quality, maintainability, and the robustness of the testing setup. I've included suggestions for using proper TypeScript types, avoiding hardcoded values, ensuring test failures are correctly reported, and optimizing component rendering logic within the benchmark.
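The core idea of such a benchmark — feeding the document to the component in fixed-size chunks at a fixed interval to simulate a streaming response — can be sketched as below. This is an illustrative sketch, not code from the PR; the helper names `chunkMarkdown` and `cumulative` are made up for the example.

```typescript
// Split a markdown source into fixed-size chunks to simulate a streaming
// response arriving piece by piece (helper names are illustrative).
function chunkMarkdown(source: string, chunkSize: number): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < source.length; i += chunkSize) {
    chunks.push(source.slice(i, i + chunkSize));
  }
  return chunks;
}

// The benchmark would re-render the component once per interval tick with
// the cumulative text seen so far; this computes that sequence of states.
function cumulative(chunks: string[]): string[] {
  const states: string[] = [];
  let acc = '';
  for (const chunk of chunks) {
    acc += chunk;
    states.push(acc);
  }
  return states;
}
```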
```ts
await page.context().tracing.stop({ path: `test-results/trace-xmarkdown.zip` });
```
The trace file path is hardcoded. This will cause the trace file to be overwritten by each test run, losing valuable debugging information for all but the last test. The path should be unique for each renderer being tested.
```diff
- await page.context().tracing.stop({ path: `test-results/trace-xmarkdown.zip` });
+ await page.context().tracing.stop({ path: `test-results/trace-${name}.zip` });
```
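Since renderer names end up in file paths, it can also help to normalize them first. A small sketch (the `tracePathFor` helper is hypothetical, not part of the PR):

```typescript
// Hypothetical helper: derive a unique, filesystem-safe trace path per
// renderer so consecutive benchmark runs never overwrite each other.
function tracePathFor(rendererName: string): string {
  const safe = rendererName.toLowerCase().replace(/[^a-z0-9-]+/g, '-');
  return `test-results/trace-${safe}.zip`;
}
```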
```ts
try {
  const result = await measure({ name: rendererName, page, mount, browserName });
  results.push(result);
} catch (error) {
  console.error(`Error in ${rendererName}-Performance:`, error);
  results.push({
    name: rendererName,
    duration: 0,
    avgFPS: 0,
    minFPS: 0,
    maxFPS: 0,
    maxMemory: 0,
    avgMemory: 0,
    totalFrames: 0,
  });
}
```
The `try...catch` block prevents a single failing benchmark from halting the entire test suite, which is good. However, by catching the error without re-throwing it, a failing test will be reported as passed by the test runner. This is misleading and can hide real problems in the benchmark. To ensure failing tests are correctly reported, you should re-throw the error after handling it.
```ts
try {
  const result = await measure({ name: rendererName, page, mount, browserName });
  results.push(result);
} catch (error) {
  console.error(`Error in ${rendererName}-Performance:`, error);
  results.push({
    name: rendererName,
    duration: 0,
    avgFPS: 0,
    minFPS: 0,
    maxFPS: 0,
    maxMemory: 0,
    avgMemory: 0,
    totalFrames: 0,
  });
  throw error;
}
```
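The pattern above — record a sentinel result for the summary, then rethrow so the runner still sees the failure — generalizes. A minimal self-contained sketch (the `runAndRecord` helper is illustrative, not from the PR):

```typescript
// Record-and-rethrow: push a sentinel result so the results table stays
// complete, then rethrow so the test runner correctly reports a failure.
function runAndRecord<T>(results: T[], sentinel: T, fn: () => T): T {
  try {
    const result = fn();
    results.push(result);
    return result;
  } catch (error) {
    results.push(sentinel);
    throw error; // without this line, the failure is silently swallowed
  }
}
```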
```tsx
const MarkdownItRenderer: FC<MarkdownRendererProps> = (props) => {
  const md = new MarkdownIt();

  return (
    // biome-ignore lint/security/noDangerouslySetInnerHtml: benchmark only
    <div dangerouslySetInnerHTML={{ __html: md.render(props.md) }} />
  );
};
```
The MarkdownIt instance is created on every render of the MarkdownItRenderer component. This is inefficient and might not accurately reflect a real-world usage scenario where the instance would be initialized once and reused. This could also skew the benchmark results by including the instantiation time in every render measurement.
To fix this, you should instantiate MarkdownIt outside the component, so it's only created once per module.
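The difference is easy to demonstrate with a plain counter: a module-level instance is constructed exactly once, no matter how many times the render function runs. This is an illustrative sketch with made-up names (`ExpensiveParser`, `renderOnce`), not markdown-it itself:

```typescript
// Track how many times the "expensive" object is constructed.
let constructions = 0;

class ExpensiveParser {
  constructor() {
    constructions += 1;
  }
  render(src: string): string {
    return `<p>${src}</p>`;
  }
}

// Module scope: created once when the module loads, then shared.
const parser = new ExpensiveParser();

// Stand-in for the component body: every "render" reuses the shared
// instance instead of paying the construction cost again.
function renderOnce(src: string): string {
  return parser.render(src);
}
```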
```tsx
const md = new MarkdownIt();

const MarkdownItRenderer: FC<MarkdownRendererProps> = (props) => {
  return (
    // biome-ignore lint/security/noDangerouslySetInnerHtml: benchmark only
    <div dangerouslySetInnerHTML={{ __html: md.render(props.md) }} />
  );
};
```

```json
"test:all": "playwright test -c playwright-ct.config.ts",
"setup": "node scripts/setup.js",
"test-ct": "playwright test -c playwright-ct.config.ts"
```
The npm scripts `test:all` and `test-ct` are identical. This redundancy can be confusing. It's better to have a single, clearly named script for a given action. Since you are using Playwright Component Testing, `test-ct` is more descriptive.
"setup": "node scripts/setup.js",
"test-ct": "playwright test -c playwright-ct.config.ts"| @@ -0,0 +1,183 @@ | |||
| import { test } from '@playwright/experimental-ct-react'; | |||
```tsx
default: {
  return <div>{md}</div>;
}
```
The default case in the getRenderer switch statement currently renders the raw markdown text. This can silently hide errors if an unexpected renderer name is ever passed. It's safer to throw an error to fail fast and make the problem immediately obvious.
```diff
- default: {
-   return <div>{md}</div>;
- }
+ default: {
+   throw new Error(`Unknown renderer: ${name}`);
+ }
```
```ts
}: {
  name: string;
  page: any;
  mount: any;
  browserName: string;
}): Promise<BenchmarkResult> {
```
Using `any` for the `page` and `mount` parameters defeats the purpose of TypeScript. Use the specific types from Playwright for better type safety and code clarity. You'll need to import `Page` and `MountResult` as suggested in a separate comment.
```diff
  }: {
    name: string;
-   page: any;
-   mount: any;
+   page: Page;
+   mount: (component: React.ReactElement) => Promise<MountResult>;
    browserName: string;
  }): Promise<BenchmarkResult> {
```
```ts
await page.context().tracing.start({
  screenshots: true,
  title: `XMarkdown_Stream_Perf_${browserName}`,
```
The trace title is not unique per renderer. When running multiple benchmark tests, this can make it harder to distinguish between traces. Including the renderer name in the title will make each trace uniquely identifiable.
```diff
- title: `XMarkdown_Stream_Perf_${browserName}`,
+ title: `XMarkdown_Stream_Perf_${name}_${browserName}`,
```
```tsx
const component = await mount(<Empty />);

const updateInterval = 50;
```
This file uses several magic numbers for configuration, such as `50` for the update interval, `100` for the chunk size (line 98), and `1` and `120` for FPS filtering (line 116). It's a good practice to extract these into named constants at the top of the file. This improves readability and makes the benchmark easier to configure.

For example:

```ts
const UPDATE_INTERVAL = 50;
const CHUNK_SIZE = 100;
const MIN_VALID_FPS = 1;
const MAX_VALID_FPS = 120;
```
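Named constants also make the filtering logic self-documenting. A minimal sketch of how such FPS bounds might be applied (the `averageFps` helper is illustrative, not code from the PR):

```typescript
// Bounds for plausible FPS samples; readings outside this range are
// treated as measurement noise (e.g. zero-length frame intervals).
const MIN_VALID_FPS = 1;
const MAX_VALID_FPS = 120;

// Average only the valid samples so garbage readings don't skew the mean.
function averageFps(samples: number[]): number {
  const valid = samples.filter((f) => f >= MIN_VALID_FPS && f <= MAX_VALID_FPS);
  if (valid.length === 0) return 0;
  return valid.reduce((sum, f) => sum + f, 0) / valid.length;
}
```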
Bundle Report: changes will increase total bundle size by 300.1kB (169.65%) ⬆️. Affected bundle: `x-markdown-array-push`.
size-limit report 📦
Deploying ant-design-x with Cloudflare Pages:

- Latest commit: d95896e
- Status: ✅ Deploy successful!
- Preview URL: https://aababc3c.ant-design-x.pages.dev
- Branch Preview URL: https://feature-markdown-benchmark.ant-design-x.pages.dev
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff           @@
##             next    #1314   +/-   ##
=======================================
  Coverage   94.06%   94.06%
=======================================
  Files         144      144
  Lines        3708     3708
  Branches     1028     1042   +14
=======================================
  Hits         3488     3488
  Misses        218      218
  Partials        2        2
```

☔ View full report in Codecov by Sentry.
中文版模板 / Chinese template
🤔 This is a ...
🔗 Related Issues
This PR implements the benchmark proposed in the RFC:
💡 Background and Solution
📝 Change Log