feat(prerenderer): add prom + 503 overload protection #595
Conversation
Force-pushed from c16df88 to 1e386b9
Force-pushed from 28abae1 to 44647ba
Pull Request Overview
This PR adds Prometheus metrics and overload protection to the prerender server: it caps the number of concurrent renders to keep the server from being overwhelmed, and exposes metrics for monitoring.
- Adds the prom-client Prometheus client to collect metrics on active renders, total renders, and render duration
- Implements overload protection by capping concurrent renders at 2x the CPU count across all cluster workers
- Exposes a metrics endpoint for monitoring server performance (a sketch of the wiring follows this list)
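The metrics side could be wired up roughly as follows. This is a minimal sketch assuming an Express app and prom-client; the metric names and the /metrics path are illustrative, not confirmed by the diff.

```js
const express = require('express')
const client = require('prom-client')

// Illustrative metric names — the PR may use different ones.
const activeRenders = new client.Gauge({
  name: 'prerender_active_renders',
  help: 'Number of renders currently in progress'
})
const totalRenders = new client.Counter({
  name: 'prerender_renders_total',
  help: 'Total number of renders started'
})
const renderDuration = new client.Histogram({
  name: 'prerender_render_duration_seconds',
  help: 'Render duration in seconds'
})

const app = express()

// Expose everything in the default registry for Prometheus to scrape.
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType)
  res.end(await client.register.metrics())
})
```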
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| prerender-server/src/server.js | Adds Prometheus metrics collection and overload protection logic |
| prerender-server/src/cluster.js | Implements cluster-level render limiting via worker-to-master message passing (sketched below) |
| prerender-server/package.json | Adds prom-client dependency |
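The diff shown later in this conversation covers only the worker side of the IPC handshake. Here is a hypothetical sketch of the cluster.js master side, assuming the limit is 2x the CPU count (as stated above) and that workers send an 'endRender' message when a render finishes; that message name is an assumption, not confirmed by the diff.

```js
const cluster = require('cluster')
const os = require('os')

const MAX_CONCURRENT_RENDERS = os.cpus().length * 2
let activeRenders = 0

if (cluster.isMaster) {
  for (let i = 0; i < os.cpus().length; i++) cluster.fork()

  cluster.on('message', (worker, msg) => {
    if (msg.type === 'startRender') {
      // Echo the requestId so the worker can route the reply to the
      // right in-flight request.
      const type = activeRenders < MAX_CONCURRENT_RENDERS
        ? 'renderAllowed'
        : 'renderDenied'
      if (type === 'renderAllowed') activeRenders++
      worker.send({ type, requestId: msg.requestId })
    } else if (msg.type === 'endRender') {
      // Free the slot when a worker reports the render finished.
      activeRenders = Math.max(0, activeRenders - 1)
    }
  })
}
```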
Pull Request Overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
```js
if (typeof process.send === 'function') {
  const requestId = ++requestCounter
  // Ask the cluster master for a render slot.
  process.send({ type: 'startRender', requestId })
  let responded = false
  const handler = (msg) => {
    if (msg.requestId === requestId && !responded) {
      responded = true
      // Safe: the handler only fires after `timeout` below is assigned.
      clearTimeout(timeout)
      process.removeListener('message', handler)
      if (msg.type === 'renderAllowed') {
        doRender()
      } else if (msg.type === 'renderDenied') {
        res.status(503).send('Server overloaded')
      }
    }
  }
  process.on('message', handler)
  // Fail closed if the master never replies.
  const timeout = setTimeout(() => {
    if (!responded) {
      responded = true
      process.removeListener('message', handler)
      console.error('IPC timeout for request', requestId)
      res.status(500).send('Internal server error')
    }
  }, 5000) // 5 second timeout
}
```
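The snippet gates doRender() but does not show it. A hypothetical shape, assuming the prom-client metrics sketched earlier, a promise-returning render(req, res) function, and the assumed 'endRender' message from the cluster sketch above:

```js
function doRender () {
  activeRenders.inc()
  totalRenders.inc()
  const endTimer = renderDuration.startTimer() // records seconds when called

  render(req, res) // assumed promise-returning render function
    .catch((err) => {
      console.error('Render failed for request', requestId, err)
      if (!res.headersSent) res.status(500).send('Render failed')
    })
    .finally(() => {
      endTimer()
      activeRenders.dec()
      // Release the cluster-wide slot ('endRender' is an assumed type).
      if (typeof process.send === 'function') {
        process.send({ type: 'endRender' })
      }
    })
}
```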
Copilot AI commented on Sep 23, 2025:
The `requestCounter` increment operation is not atomic and could cause race conditions in concurrent scenarios. Consider using a more robust method for generating unique request IDs, or implement proper synchronization.
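For illustration only (not part of the PR), an ID that cannot collide even across workers or restarts could be built like this:

```js
const crypto = require('crypto')

// A UUID per request (Node.js >= 14.17), prefixed with the worker pid.
const requestId = `${process.pid}-${crypto.randomUUID()}`
```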
`++requestCounter` is effectively atomic here: each worker runs JavaScript on a single thread, so the increment executes synchronously and can never be interleaved with another request's. No race condition is possible within a worker.
The counter only needs to be unique per worker, and since the IPC channel is worker-to-master, replies are routed back to the worker that sent the request.
The code is solid as-is.