Task5 #93
base: main
Conversation
Reviewer's Guide

This PR introduces a full-featured report generation workflow: a new backend module computes CLIP and LPIPS scores, packages results into a zip via a FastAPI endpoint, and adds frontend support for triggering and downloading the report, alongside documentation and basic schema testing.

Sequence diagram for report generation via FastAPI endpoint:

```mermaid
sequenceDiagram
    actor User
    participant Frontend
    participant FastAPI_Server
    participant ReportModule
    participant FileSystem
    User->>Frontend: Click 'Generate Report' button
    Frontend->>FastAPI_Server: POST /generate_report (with settings)
    FastAPI_Server->>ReportModule: run_report(settings)
    ReportModule->>FileSystem: Read results.csv, grids/, README.txt
    ReportModule->>ReportModule: Compute CLIP & LPIPS scores
    ReportModule->>FileSystem: Write updated results.csv, config.json
    ReportModule->>FileSystem: Create report.zip
    ReportModule-->>FastAPI_Server: Path to report.zip
    FastAPI_Server-->>Frontend: Return report.zip
    Frontend-->>User: Download report.zip
```
Class diagram for report generation module and FastAPI endpoint:

```mermaid
classDiagram
    class ReportModule {
        +compute_scores(prompt, image_path, reference_image_path)
        +add_scores_to_csv(results_file)
        +generate_config_from_frontend(config_file, settings_dict)
        +create_report_zip(output_zip)
        +run_report(settings_from_frontend)
    }
    class FastAPI_Server {
        +generate_report_endpoint(frontend_settings)
    }
    class FileSystem {
        results.csv
        config.json
        README.txt
        grids/
        report.zip
    }
    class CLIP_Model {
        +encode_image(image_tensor)
        +encode_text(text)
    }
    class LPIPS_Model {
        +__call__(a, b)
    }
    FastAPI_Server --> ReportModule : calls run_report
    ReportModule --> FileSystem : reads/writes files
    ReportModule --> CLIP_Model : uses for scoring
    ReportModule --> LPIPS_Model : uses for scoring
```
File-Level Changes
Walkthrough

Adds end-to-end "Report Bundle" generation: the frontend triggers a new POST /generate_report API to build a ZIP (results.csv, config.json, README, grids). Implements a FastAPI server, a report generation pipeline with CLIP/LPIPS scoring, packaging, and a reusable React button. Includes a README, an example config, and a CSV schema test script.

Sequence Diagram(s):

```mermaid
sequenceDiagram
    participant UI as Frontend (Txt2Img/GenerateReportButton)
    participant API as FastAPI /generate_report
    participant RG as run_report()
    participant FS as Filesystem
    UI->>API: POST frontend_settings
    API->>RG: run_report(settings)
    RG->>FS: Validate inputs (results.csv, grids, README)
    RG->>FS: Compute scores, update results.csv
    RG->>FS: Write config.json
    RG->>FS: Create report.zip
    RG-->>API: Path to report.zip
    API-->>UI: FileResponse (report.zip)
```

```mermaid
sequenceDiagram
    participant RR as run_report()
    participant CSV as add_scores_to_csv
    participant CFG as generate_config_from_frontend
    participant ZIP as create_report_zip
    RR->>CSV: Update results.csv with CLIP/LPIPS
    RR->>CFG: Write config.json from settings
    RR->>ZIP: Zip results/config/README/grids
    ZIP-->>RR: report.zip path
```
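To make the flow above concrete, here is a minimal orchestration sketch chaining the helpers named in the class diagram; the constants, return value, and error handling are assumptions, not the PR's exact code.

```python
# Hypothetical sketch of the run_report pipeline; the helper functions are
# those listed in the class diagram, everything else is assumed.
import os

RESULTS_FILE = "results.csv"   # assumed module-level constants
CONFIG_FILE = "config.json"
OUTPUT_ZIP = "report.zip"

def run_report(settings_from_frontend: dict) -> str:
    # 1) score every row and rewrite the CSV in place
    add_scores_to_csv(RESULTS_FILE)
    # 2) snapshot the settings the frontend sent for this run
    generate_config_from_frontend(CONFIG_FILE, settings_from_frontend)
    # 3) bundle CSV, config, README, and grid images into one archive
    create_report_zip(OUTPUT_ZIP)
    return os.path.abspath(OUTPUT_ZIP)
```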
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Hey @lialyAdam - I've reviewed your changes - here's some feedback:
- Endpoint path mismatch: the frontend calls `/api/generate-report` whereas the backend defines `/generate_report`; unify the route naming and path across frontend and server.
- In `add_scores_to_csv` you open and write `results.csv` twice in a row; remove the redundant write block to avoid overwriting the updated data.
- The download/report logic in `Txt2ImgPage` duplicates the `GenerateReportButton` component; extract the shared fetch-and-download code into a single reusable hook or component.
In `tasks/report_bundle/generate_report.py` (around line 85), the flagged hunk:

```python
with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)

with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
```
suggestion: Duplicate block for writing CSV file.
The file is being written twice; removing the duplicate write will improve efficiency.
Suggested fix (write the file once):

```diff
 with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
     writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
     writer.writeheader()
     writer.writerows(rows)
-with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
-    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
-    writer.writeheader()
-    writer.writerows(rows)
```
In `tasks/report_bundle/test_schema.py`:

```python
if missing:
    raise ValueError(f"CSV is missing required columns: {missing}")
```
issue (code-quality): Avoid conditionals in tests. (no-conditionals-in-tests)
Explanation: Avoid complex code, like conditionals, in test functions. Google's software engineering guidelines say: "Clear tests are trivially correct upon inspection."

To reach that, avoid complex code in tests:
- loops
- conditionals

Some ways to fix this:
- Use parametrized tests to get rid of the loop.
- Move the complex logic into helpers.
- Move the complex part into pytest fixtures.

Complexity is most often introduced in the form of logic. Logic is defined via the imperative parts of programming languages such as operators, loops, and conditionals. When a piece of code contains logic, you need to do a bit of mental computation to determine its result instead of just reading it off the screen. It doesn't take much logic to make a test more difficult to reason about.

Reference: Software Engineering at Google, "Don't Put Logic in Tests"
```python
ref_img = Image.open(reference_image_path).convert("RGB")
# lpips expects tensors in range [-1,1] as float with shape [1,3,H,W]
# use lpips helper if available or convert
from torchvision import transforms
to_tensor = transforms.ToTensor()
a = to_tensor(img).unsqueeze(0).mul(2).sub(1).to(device)  # map [0,1] -> [-1,1]
b = to_tensor(ref_img).unsqueeze(0).mul(2).sub(1).to(device)
with torch.no_grad():
    lpips_score = float(lpips_model(a, b).item())
```
issue (code-quality): Extract code out into function (extract-method)
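A small sketch of the extract-method idea for the hunk above, pulling the PIL-to-LPIPS tensor conversion into one helper; the helper name is hypothetical, and `device`/`lpips_model` are assumed to exist as in the PR.

```python
import torch
from PIL import Image
from torchvision import transforms

_to_tensor = transforms.ToTensor()

def to_lpips_tensor(img: Image.Image, device: str) -> torch.Tensor:
    # PIL RGB image -> float tensor [1, 3, H, W], mapped from [0, 1] to [-1, 1]
    return _to_tensor(img).unsqueeze(0).mul(2).sub(1).to(device)

# usage (names from the hunk above):
# with torch.no_grad():
#     lpips_score = float(lpips_model(to_lpips_tensor(img, device),
#                                     to_lpips_tensor(ref_img, device)).item())
```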
| """ | ||
| rows = [] | ||
| with open(results_file, newline='', encoding='utf-8') as csvfile: | ||
| reader = csv.DictReader(csvfile) |
issue (code-quality): We've found these issues:
- Extract code out into function (extract-method)
- Extract duplicate code into function (extract-duplicate-method)
```python
html_content = """
<html>
<head>
<title>DreamLayer Report API</title>
</head>
<body>
<h1>Welcome to DreamLayer Report API</h1>
<p>Use the <code>/generate_report</code> POST endpoint to generate reports.</p>
</body>
</html>
"""
```
issue (code-quality): Inline variable that is immediately returned (inline-immediately-returned-variable)
```python
            raise HTTPException(status_code=500, detail=f"{zip_path} not found after generation.")
        return FileResponse(zip_path, filename="report.zip", media_type="application/zip")
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
```
suggestion (code-quality): Explicitly raise from a previous error (raise-from-previous-error)
```diff
-raise HTTPException(status_code=500, detail=str(e))
+raise HTTPException(status_code=500, detail=str(e)) from e
```
```python
missing = required_columns - headers
if missing:
```
suggestion (code-quality): Use named expression to simplify assignment and conditional (use-named-expression)
```diff
-missing = required_columns - headers
-if missing:
+if missing := required_columns - headers:
```
Actionable comments posted: 8
🧹 Nitpick comments (12)
tasks/report_bundle/config.json (1)

1-8: Clarify sample vs generated config; consider renaming to avoid confusion

This checked-in config looks like a placeholder. Since run_report overwrites CONFIG_FILE from frontend settings, shipping a static config.json alongside it can be confusing. Recommend renaming to config.sample.json (and gitignoring config.json), or clearly documenting that config.json will be regenerated at runtime.

Would you like me to generate a small README snippet and .gitignore entry to enforce this?
tasks/report_bundle/README.txt (1)

11-13: Polish tone and clarity per style hints

Tweak wording for a slightly more formal tone and clarity.

```diff
-Feel free to explore or reuse any part of it. Let me know if anything’s missing!
+Please feel free to explore or reuse any part of it. If anything is missing, please let me know.
```

tasks/report_bundle/generate_report.py (4)
27-31: Close images deterministically; minor import cleanup

Use context managers so file handles are closed promptly. Also, keep the ToTensor import at module scope or reuse a single instance.

```diff
-    # open image
-    img = Image.open(image_path).convert("RGB")
+    # open image
+    with Image.open(image_path) as _img:
+        img = _img.convert("RGB")
     image_tensor = preprocess(img).unsqueeze(0).to(device)
     text = clip.tokenize([prompt]).to(device)
@@
-    if reference_image_path and os.path.exists(reference_image_path):
-        ref_img = Image.open(reference_image_path).convert("RGB")
+    if reference_image_path and os.path.exists(reference_image_path):
+        with Image.open(reference_image_path) as _ref_img:
+            ref_img = _ref_img.convert("RGB")
         # lpips expects tensors in range [-1,1] as float with shape [1,3,H,W]
         # use lpips helper if available or convert
         from torchvision import transforms
         to_tensor = transforms.ToTensor()
         a = to_tensor(img).unsqueeze(0).mul(2).sub(1).to(device)  # map [0,1] -> [-1,1]
         b = to_tensor(ref_img).unsqueeze(0).mul(2).sub(1).to(device)
```

Also applies to: 37-49
71-84: Unify comment language for team readability

Non-English inline comments can confuse readers. Suggest replacing the Arabic comments with concise English.

```diff
-        prompt = row.get('prompt', '')  # لو موجود
-        ref_path = row.get('reference_image_path')  # لو موجود
+        prompt = row.get('prompt', '')  # optional
+        ref_path = row.get('reference_image_path')  # optional
@@
-        # إنشاء run_id من id (مثلاً run_1, run_2, ...)
+        # derive run_id from id (e.g., run_1, run_2, ...)
```
107-116: Enable ZIP compression and preserve relative arcnames

Use deflated compression to shrink the bundle and ensure consistent arcnames.

```diff
-def create_report_zip(output_zip=OUTPUT_ZIP):
-    with zipfile.ZipFile(output_zip, 'w') as zf:
+def create_report_zip(output_zip=OUTPUT_ZIP):
+    with zipfile.ZipFile(output_zip, 'w', compression=zipfile.ZIP_DEFLATED) as zf:
         zf.write(RESULTS_FILE)
         zf.write(CONFIG_FILE)
         zf.write(README_FILE)
         for image_file in os.listdir(GRIDS_DIR):
             full_path = os.path.join(GRIDS_DIR, image_file)
             if os.path.isfile(full_path):
                 zf.write(full_path, arcname=os.path.join('grids', image_file))
```

If you want max compression, I can add compresslevel for Python 3.7+.
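For reference, a hedged sketch of that compresslevel variant (the zipfile parameter exists since Python 3.7; file names follow the PR's constants):

```python
import zipfile

# compresslevel=9 trades CPU time for the smallest archive
with zipfile.ZipFile("report.zip", "w",
                     compression=zipfile.ZIP_DEFLATED,
                     compresslevel=9) as zf:
    zf.write("results.csv")
```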
123-139: Path resolution robustness

If the server isn't started from tasks/report_bundle/, relative constants can fail. Consider resolving paths relative to this module.

I can provide a short patch that adds BASE_DIR = Path(__file__).resolve().parent and rebinds RESULTS_FILE, CONFIG_FILE, etc., to BASE_DIR/… to harden path handling.
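A minimal sketch of that patch, reusing the PR's constant names; the exact file layout is an assumption.

```python
from pathlib import Path

# resolve everything relative to this module, not the process CWD
BASE_DIR = Path(__file__).resolve().parent

RESULTS_FILE = BASE_DIR / "results.csv"
CONFIG_FILE = BASE_DIR / "config.json"
README_FILE = BASE_DIR / "README.txt"
GRIDS_DIR = BASE_DIR / "grids"
OUTPUT_ZIP = BASE_DIR / "report.zip"
```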
dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx (1)

429-435: Optional: keep action buttons consistent

Consider moving "Generate Report" into the existing ActionButtons component for consistent spacing and responsive behavior (and to co-locate CTA logic). Not required.
tasks/report_bundle/server.py (1)

32-38: Validate and constrain output path (defense-in-depth)

run_report returns a path that's sent back. As a safety net, ensure it points to an expected directory and is a file before responding, to avoid accidental disclosure.

```diff
-    if not os.path.exists(zip_path):
+    if not os.path.isfile(zip_path):
         raise HTTPException(status_code=500, detail=f"{zip_path} not found after generation.")
```

Optionally, assert it's under a known reports directory.
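One possible shape for that assertion, sketched under the assumption of a dedicated reports directory (REPORTS_DIR and the helper name are hypothetical; Path.is_relative_to needs Python 3.9+):

```python
from pathlib import Path

REPORTS_DIR = Path("reports").resolve()  # hypothetical known output directory

def is_safe_report_path(zip_path: str) -> bool:
    # accept only an existing regular file located inside REPORTS_DIR
    p = Path(zip_path).resolve()
    return p.is_file() and p.is_relative_to(REPORTS_DIR)
```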
tasks/report_bundle/src/components/GenerateReportButton.jsx (4)

41-44: Surface API JSON errors cleanly (extract detail)

FastAPI returns JSON {"detail": "..."} for errors. Parse JSON when appropriate for better UX.

```diff
-      if (!res.ok) {
-        const txt = await res.text();
-        throw new Error(txt || `Server responded ${res.status}`);
-      }
+      if (!res.ok) {
+        const ct = res.headers.get("content-type") || "";
+        let msg;
+        if (ct.includes("application/json")) {
+          try {
+            const data = await res.json();
+            msg = (data && (data.detail || data.message)) || JSON.stringify(data);
+          } catch {
+            msg = await res.text();
+          }
+        } else {
+          msg = await res.text();
+        }
+        throw new Error(msg || `Server responded ${res.status}`);
+      }
```

46-55: Use server-provided filename from Content-Disposition when available

Respect the filename suggested by the API (e.g., timestamped zips).

```diff
       const blob = await res.blob();
       const url = window.URL.createObjectURL(blob);
       const a = document.createElement("a");
-      a.href = url;
-      a.download = "report.zip";
+      a.href = url;
+      let filename = "report.zip";
+      const cd = res.headers.get("content-disposition") || "";
+      const m = cd.match(/filename\*=UTF-8''([^;]+)|filename="?([^";]+)"?/i);
+      if (m) {
+        filename = decodeURIComponent(m[1] || m[2]);
+      }
+      a.download = filename;
       document.body.appendChild(a);
       a.click();
       a.remove();
       window.URL.revokeObjectURL(url);
```

14-15: Make API URL environment-driven to avoid hard-coded localhost

Defaulting to localhost is fine for dev, but prefer env-driven or relative URLs to simplify deployment and avoid CORS.

- Prefer a relative default: apiUrl = "/generate_report", or
- Read from env (Vite): apiUrl = import.meta.env.VITE_REPORT_API_URL || "/generate_report"

I can submit a small PR wiring this through your build tooling if helpful.

64-74: Optional UX: cancellation and double-click protection

Consider using an AbortController to cancel in-flight requests on re-click/unmount and guard against double-submits.

High-level example:
- Keep a ref to an AbortController.
- Abort the previous request before starting a new one.
- Clean up on unmount.

I can provide a concrete patch if you want this enhancement.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

⛔ Files ignored due to path filters (5)

- tasks/report_bundle.zip is excluded by !**/*.zip
- tasks/report_bundle/grids/grid1.png is excluded by !**/*.png
- tasks/report_bundle/grids/grid2.png is excluded by !**/*.png
- tasks/report_bundle/report.zip is excluded by !**/*.zip
- tasks/report_bundle/results.csv is excluded by !**/*.csv

📒 Files selected for processing (7)

- dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx (2 hunks)
- tasks/report_bundle/README.txt (1 hunks)
- tasks/report_bundle/config.json (1 hunks)
- tasks/report_bundle/generate_report.py (1 hunks)
- tasks/report_bundle/server.py (1 hunks)
- tasks/report_bundle/src/components/GenerateReportButton.jsx (1 hunks)
- tasks/report_bundle/test_schema.py (1 hunks)
🧰 Additional context used

🧬 Code Graph Analysis (2)

tasks/report_bundle/server.py (1)
- tasks/report_bundle/generate_report.py (1): run_report (118-139)

dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx (1)
- tasks/report_bundle/src/components/GenerateReportButton.jsx (1): error (17-17)
🪛 LanguageTool
tasks/report_bundle/README.txt
[style] ~10-~10: Consider using a less common alternative to make your writing sound more unique and professional.
Context: ...ages — labeled previews of the outputs Feel free to explore or reuse any part of it. Let me...
(FEEL_FREE_TO_STYLE_ME)
[style] ~11-~11: Consider using polite language here.
Context: ...ree to explore or reuse any part of it. Let me know if anything’s missing! — Lial
(INSERT_PLEASE)
🪛 Ruff (0.12.2)
tasks/report_bundle/server.py
33-33: Do not perform function call Body in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
40-40: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
The flagged handler in Txt2ImgPage.tsx:

```jsx
const handleGenerateReport = async () => {
  try {
    const response = await fetch("/api/generate-report", { method: "POST" });
    if (!response.ok) throw new Error("Failed to generate report");
    const blob = await response.blob();
    const url = window.URL.createObjectURL(blob);
    const link = document.createElement("a");
    link.href = url;
    link.download = "report.zip";
    link.click();
  } catch (error) {
    console.error("Error generating report:", error);
  }
};
```
💡 Verification agent
🧩 Analysis chain
Endpoint mismatch and missing payload; also revoke Blob URL
Server exposes POST /generate_report (underscore). Frontend calls /api/generate-report (hyphen) with no body. This likely 404s or fails JSON parsing. Send the settings payload (aligned with run_report) and use underscore path. Also revoke the URL.
```diff
-  const handleGenerateReport = async () => {
-    try {
-      const response = await fetch("/api/generate-report", { method: "POST" });
+  const handleGenerateReport = async () => {
+    try {
+      // Reuse the same settings shape used for generation
+      const requestData = {
+        ...coreSettings,
+        ...(controlNetConfig && { controlnet: controlNetConfig }),
+        ...(loraConfig?.enabled && { lora: loraConfig }),
+        custom_workflow: customWorkflow || null,
+      };
+      const response = await fetch("/api/generate_report", {
+        method: "POST",
+        headers: { "Content-Type": "application/json" },
+        body: JSON.stringify(requestData),
+      });
       if (!response.ok) throw new Error("Failed to generate report");
       const blob = await response.blob();
       const url = window.URL.createObjectURL(blob);
       const link = document.createElement("a");
       link.href = url;
       link.download = "report.zip";
-      link.click();
+      document.body.appendChild(link);
+      link.click();
+      link.remove();
+      setTimeout(() => window.URL.revokeObjectURL(url), 1000);
     } catch (error) {
       console.error("Error generating report:", error);
     }
   };
```
To confirm the route name across the repo, run:

🏁 Script executed:

```bash
#!/bin/bash
rg -n -A2 -i $'/generate[_-]report'
```

Length of output: 1462
Fix endpoint path, include JSON payload, and revoke Blob URL
Please update the report‐generation handler in Txt2ImgPage to match the backend’s /generate_report route, send the required settings in the request body, and clean up the created Blob URL to avoid leaks.
dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx

```diff
 const handleGenerateReport = async () => {
-  try {
-    const response = await fetch("/api/generate-report", { method: "POST" });
+  try {
+    // Send current generation settings to the backend
+    const requestData = {
+      ...coreSettings,
+      ...(controlNetConfig && { controlnet: controlNetConfig }),
+      ...(loraConfig?.enabled && { lora: loraConfig }),
+      custom_workflow: customWorkflow || null,
+    };
+    const response = await fetch("/api/generate_report", {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      body: JSON.stringify(requestData),
+    });
     if (!response.ok) throw new Error("Failed to generate report");
     const blob = await response.blob();
     const url = window.URL.createObjectURL(blob);
     const link = document.createElement("a");
     link.href = url;
     link.download = "report.zip";
-    link.click();
+    // Append to DOM, trigger download, then clean up
+    document.body.appendChild(link);
+    link.click();
+    link.remove();
+    setTimeout(() => window.URL.revokeObjectURL(url), 1000);
   } catch (error) {
     console.error("Error generating report:", error);
   }
 };
```
🤖 Prompt for AI Agents
In dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx around lines 61 to
75, the handler posts to the wrong endpoint, omits the required JSON payload,
and never revokes the created Blob URL; change the fetch URL to
"/generate_report", send the required settings object in the request body as
JSON with headers "Content-Type: application/json" (use
JSON.stringify(settings)), keep the existing response/blob handling but after
programmatically clicking the link call window.URL.revokeObjectURL(url) and
remove the link element to avoid leaks, and preserve proper error handling and
response.ok checks.
```python
# Load models (once)
device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)
lpips_model = lpips.LPIPS(net='alex').to(device)
```
💡 Verification agent
❓ Verification inconclusive
Set models to eval mode; optionally lazy-load to avoid import-time cost
Switch learned models to eval() to disable dropout/batch-norm training behavior. Consider lazy-loading on first use to reduce import-time latency.
device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)
lpips_model = lpips.LPIPS(net='alex').to(device)
+clip_model.eval()
+lpips_model.eval()If startup latency is a concern, I can refactor to lazy-load these handles the first time compute_scores is called.
Set models to inference mode and consider lazy‐loading
After loading the CLIP and LPIPS models in tasks/report_bundle/generate_report.py (around lines 16–20), switch them to evaluation mode to disable dropout/BatchNorm training behavior. If startup latency becomes an issue, you can move the load + eval calls into your compute_scores function and only execute them on first use.
• File: tasks/report_bundle/generate_report.py
• Lines: 16–20
device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)
lpips_model = lpips.LPIPS(net='alex').to(device)
+clip_model.eval()
+lpips_model.eval()📝 Committable suggestion
🤖 Prompt for AI Agents
In tasks/report_bundle/generate_report.py around lines 16 to 20, after loading
the CLIP and LPIPS models call .eval() on both models to put them into inference
mode (e.g., clip_model.eval(); lpips_model.eval()) so dropout/BatchNorm are
disabled; additionally, to reduce startup latency optionally move the load and
.eval() calls into compute_scores and implement lazy initialization (initialize
globals on first use, load models, .to(device) and .eval() there) so the models
are only loaded and set to eval when compute_scores is first invoked.
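A hedged sketch of that lazy initialization; it mirrors the PR's loading lines but defers them to first use (the structure and names are illustrative, not the PR's code):

```python
import torch

_clip_model = _preprocess = _lpips_model = None
_device = None

def _get_models():
    """Load CLIP and LPIPS once, on first call, in inference mode."""
    global _clip_model, _preprocess, _lpips_model, _device
    if _clip_model is None:
        import clip
        import lpips
        _device = "cuda" if torch.cuda.is_available() else "cpu"
        _clip_model, _preprocess = clip.load("ViT-B/32", device=_device)
        _lpips_model = lpips.LPIPS(net='alex').to(_device)
        _clip_model.eval()   # disable dropout/batch-norm training behavior
        _lpips_model.eval()
    return _clip_model, _preprocess, _lpips_model, _device
```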
```python
with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)

with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
```
Duplicate write to results.csv (idempotent but unnecessary, potential race window)
The file is written twice back-to-back with identical content. Remove the second block.
```diff
     with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
         writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
         writer.writeheader()
         writer.writerows(rows)
-
-    with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
-        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
-        writer.writeheader()
-        writer.writerows(rows)
```
🤖 Prompt for AI Agents
In tasks/report_bundle/generate_report.py around lines 85 to 95 there is a
duplicated block that opens results_file and writes the CSV (writeheader and
writerows) twice; remove the second identical with-open/write block so the file
is written only once (keep the first context manager), leaving a single
csv.DictWriter writeheader() + writerows(rows) call and no extra duplicate
write.
```python
from fastapi import FastAPI, Body, HTTPException
from fastapi.responses import FileResponse, HTMLResponse
from fastapi.middleware.cors import CORSMiddleware
```
🛠️ Refactor suggestion
Avoid blocking the event loop, address Ruff warnings (B008/B904), and sanitize error details.
- Offload the CPU/IO-heavy run_report to a thread pool to keep the async server responsive.
- Use Annotated[dict, Body(...)] to avoid function calls in defaults (Ruff B008).
- Log and return a generic 500 to avoid leaking internals; chain the original exception (Ruff B904).
```diff
-from fastapi import FastAPI, Body, HTTPException
+from fastapi import FastAPI, Body, HTTPException
+from fastapi.concurrency import run_in_threadpool
 from fastapi.responses import FileResponse, HTMLResponse
 from fastapi.middleware.cors import CORSMiddleware
+from typing import Annotated
+import logging
 import os
-from generate_report import run_report
+from .generate_report import run_report
@@
 app = FastAPI()
+logger = logging.getLogger(__name__)
@@
-@app.post("/generate_report")
-async def generate_report_endpoint(frontend_settings: dict = Body(...)):
+@app.post("/generate_report")
+async def generate_report_endpoint(frontend_settings: Annotated[dict, Body(...)]):
     try:
-        zip_path = run_report(frontend_settings)
+        # Offload synchronous heavy work to a threadpool to prevent event loop blocking
+        zip_path = await run_in_threadpool(run_report, frontend_settings)
         if not os.path.exists(zip_path):
             raise HTTPException(status_code=500, detail=f"{zip_path} not found after generation.")
         return FileResponse(zip_path, filename="report.zip", media_type="application/zip")
     except Exception as e:
-        raise HTTPException(status_code=500, detail=str(e))
+        # Don't leak internal paths/details; keep cause chained for observability
+        logger.exception("Report generation failed")
+        raise HTTPException(status_code=500, detail="Report generation failed") from e
```

Also applies to: 33-40
🤖 Prompt for AI Agents
In tasks/report_bundle/server.py around lines 1-3 (and apply same changes to
lines 33-40), the review asks to avoid blocking the event loop, fix Ruff
warnings B008/B904, and sanitize error details: change endpoint parameters to
use Annotated[dict, Body(...)] instead of calling Body in a default, offload the
CPU/IO-heavy run_report call to a thread pool (e.g., FastAPI's run_in_threadpool
or loop.run_in_executor) so the async server stays responsive, catch exceptions,
log the full exception internally but return a generic HTTP 500 without internal
details, and re-raise or raise HTTPException(...) from the original exception to
preserve chaining.
```python
from fastapi.responses import FileResponse, HTMLResponse
from fastapi.middleware.cors import CORSMiddleware
import os
from generate_report import run_report
```
💡 Verification agent
🧩 Analysis chain
Fix import to work when running as a package (uvicorn tasks.report_bundle.server:app).
Absolute import from generate_report import run_report will fail under package execution. Use a relative import.

```diff
-from generate_report import run_report
+from .generate_report import run_report
```

If you don't already have them, ensure tasks/__init__.py and tasks/report_bundle/__init__.py exist so relative imports resolve.
🏁 Script executed:

```bash
#!/bin/bash
# Verify package markers exist for relative imports
set -euo pipefail
for d in tasks tasks/report_bundle; do
  if [ -d "$d" ]; then
    if [ ! -f "$d/__init__.py" ]; then
      echo "Missing $d/__init__.py"
    else
      echo "Found $d/__init__.py"
    fi
  fi
done
# Show the current import line
rg -n $'^from .*generate_report import run_report' -A 0 tasks/report_bundle/server.py
```

Length of output: 507
Add missing package markers and update import in server.py

The tasks and tasks/report_bundle directories are missing __init__.py, so Python won't treat them as packages when invoking via uvicorn tasks.report_bundle.server:app. Additionally, the import in tasks/report_bundle/server.py must be relative.

- Create empty __init__.py files: tasks/__init__.py and tasks/report_bundle/__init__.py
- In tasks/report_bundle/server.py, change:

```diff
-from generate_report import run_report
+from .generate_report import run_report
```
🤖 Prompt for AI Agents
In tasks/report_bundle/server.py around line 5, the project lacks package
markers and uses an absolute import that will fail when running via uvicorn; add
empty files tasks/__init__.py and tasks/report_bundle/__init__.py to make those
directories packages, and update the import in tasks/report_bundle/server.py to
use a relative import (import the run_report from the local module) so Python
can resolve it when invoked as tasks.report_bundle.server:app.
```python
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```
🛠️ Refactor suggestion
CORS: Don't combine allow_credentials=True with wildcard origins.

Per the CORS spec and Starlette behavior, "*" with credentials is invalid. Either:

- Set allow_credentials=False (simple fix), or
- Replace "*" with an explicit list of origins.

Minimal safe change:
```diff
 app.add_middleware(
     CORSMiddleware,
     allow_origins=["*"],
-    allow_credentials=True,
+    allow_credentials=False,
     allow_methods=["*"],
     allow_headers=["*"],
 )
```

If you need credentials, replace ["*"] with explicit origins (e.g., from env) and keep allow_credentials=True.
🤖 Prompt for AI Agents
In tasks/report_bundle/server.py around lines 9 to 15, the CORS middleware is
configured with allow_origins=["*"] and allow_credentials=True which is invalid
per the CORS spec and Starlette; either set allow_credentials=False to keep the
wildcard origin, or replace the wildcard with an explicit list of allowed
origins (e.g., load a comma-separated ALLOWED_ORIGINS from environment/config
and split into a list) and keep allow_credentials=True; update the code to use
one of these two safe options and ensure allow_origins is a concrete list when
allow_credentials is True.
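A sketch of the explicit-origins option, assuming a hypothetical ALLOWED_ORIGINS environment variable (not part of the PR):

```python
import os
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# e.g. ALLOWED_ORIGINS="http://localhost:5173,https://app.example.com"
origins = [o.strip() for o in os.environ.get("ALLOWED_ORIGINS", "").split(",") if o.strip()]

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins or ["http://localhost:5173"],  # assumed dev default
    allow_credentials=True,  # valid now that origins are explicit
    allow_methods=["*"],
    allow_headers=["*"],
)
```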
```jsx
const resolveSettings = () => {
  if (typeof getFrontendSettings === "function") return getFrontendSettings();
  return frontendSettings;
};
```
🛠️ Refactor suggestion
Support async settings providers (await getFrontendSettings).
Allow getFrontendSettings to return a Promise for flexibility, then await it in the click handler.
```diff
-  const resolveSettings = () => {
-    if (typeof getFrontendSettings === "function") return getFrontendSettings();
-    return frontendSettings;
-  };
+  const resolveSettings = async () => {
+    if (typeof getFrontendSettings === "function") {
+      return await Promise.resolve(getFrontendSettings());
+    }
+    return frontendSettings;
+  };
@@
-      const settings = resolveSettings();
+      const settings = await resolveSettings();
       if (!settings || Object.keys(settings).length === 0) {
         throw new Error("No frontend settings provided. Pass frontendSettings or getFrontendSettings.");
       }
```

Also applies to: 25-33
🤖 Prompt for AI Agents
In tasks/report_bundle/src/components/GenerateReportButton.jsx around lines
20-23 (and similarly 25-33), the current resolveSettings assumes
getFrontendSettings is synchronous; change resolveSettings to be async and await
getFrontendSettings() when it's a function (i.e. if (typeof getFrontendSettings
=== "function") return await getFrontendSettings();), then update the click
handler to await resolveSettings() (and propagate or handle the returned
Promise) — wrap the await call in a try/catch to handle rejections and
surface/log errors appropriately so async settings providers are supported.
```python
def test_csv_schema(filename):
    required_columns = {'id', 'image_path', 'score'}
    with open(filename, newline='') as csvfile:
        reader = csv.DictReader(csvfile)
        headers = set(reader.fieldnames)
        missing = required_columns - headers
        if missing:
            raise ValueError(f"CSV is missing required columns: {missing}")
    print("✅ CSV schema test passed!")


if __name__ == "__main__":
    test_csv_schema('results.csv')
```
🛠️ Refactor suggestion
Schema is out of sync with new CLIP/LPIPS fields; hard requirement on “score” will fail
The pipeline now writes clip_score and lpips_score (not a single score). Also add basic robustness (encoding, empty-header guard).
```diff
-def test_csv_schema(filename):
-    required_columns = {'id', 'image_path', 'score'}
-    with open(filename, newline='') as csvfile:
-        reader = csv.DictReader(csvfile)
-        headers = set(reader.fieldnames)
-        missing = required_columns - headers
-        if missing:
-            raise ValueError(f"CSV is missing required columns: {missing}")
-    print("✅ CSV schema test passed!")
+def test_csv_schema(filename):
+    # Minimum columns required by add_scores_to_csv
+    required_columns = {'id', 'image_path', 'prompt'}
+    # Accept either the new pair of scores or the legacy single "score" for backward compatibility
+    acceptable_score_sets = [
+        {'clip_score', 'lpips_score'},
+        {'score'},
+    ]
+    with open(filename, newline='', encoding='utf-8') as csvfile:
+        reader = csv.DictReader(csvfile)
+        if reader.fieldnames is None:
+            raise ValueError("CSV is empty or missing a header row")
+        headers = set(reader.fieldnames)
+        missing = required_columns - headers
+        if missing:
+            raise ValueError(f"CSV is missing required columns: {missing}")
+        if not any(scores.issubset(headers) for scores in acceptable_score_sets):
+            raise ValueError("CSV must contain either {'clip_score','lpips_score'} or legacy {'score'}")
+    print("✅ CSV schema test passed!")
```
🤖 Prompt for AI Agents
tasks/report_bundle/test_schema.py around lines 3 to 14: the test currently
requires a single "score" column and opens the CSV without explicit encoding or
a guard for empty headers; update the required columns to require "clip_score"
and "lpips_score" (or both if your pipeline writes both), open the file with an
explicit encoding (e.g., encoding='utf-8', errors='replace') and check
reader.fieldnames is not None before using it (raise a clear ValueError if
headers are missing), then compute missing = required_columns -
set(reader.fieldnames) and raise the existing error message if any are missing.
Description

Implemented the report generation feature using FastAPI.
The backend computes CLIP and LPIPS scores for images based on prompts and packages results into a zip report.
Also added the API endpoint /generate_report with Swagger UI support for easy testing.

Changes Made

- generate_report.py to handle scoring and report creation.
- server.py FastAPI app with the /generate_report endpoint.
- report.zip as the output bundle for reports.

Evidence Required ✅

UI Screenshot
Generated Image
Logs
```text
INFO: Started server process [12345]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: 127.0.0.1:54321 - "POST /generate_report HTTP/1.1" 200 OK
INFO: Report generated successfully: report.zip
```
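For reference, a hedged example of exercising the endpoint outside Swagger UI; the host/port and payload keys are assumptions:

```python
import requests

resp = requests.post(
    "http://127.0.0.1:8000/generate_report",
    json={"prompt": "a test prompt"},  # hypothetical settings payload
    timeout=300,
)
resp.raise_for_status()
with open("report.zip", "wb") as f:
    f.write(resp.content)  # the FileResponse body is the ZIP bytes
```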
Tests (Optional)

No automated tests yet; manual testing done via Swagger UI.
Test results:
- Used /docs to POST sample report requests with valid JSON payloads.
- Received the report.zip attachment in the response.

Checklist
Summary by Sourcery
Implement end-to-end report generation: compute image quality metrics in Python, bundle results into a ZIP via a FastAPI endpoint, and integrate a React button for users to trigger and download the report
New Features:
- /generate_report endpoint to generate and serve ZIP reports
- Report generation module (generate_report.py) to compute CLIP and LPIPS scores, produce config JSON, and bundle results into a ZIP
- GenerateReportButton component for triggering report generation and downloading the ZIP

Enhancements:

Documentation:
- config.json, results.csv, and README.txt for the report bundle

Tests:
- test_schema.py to validate the CSV format for the report

Summary by CodeRabbit
New Features
Documentation
Tests