
Conversation


@lialyAdam lialyAdam commented Aug 11, 2025

Description

Implemented the report generation feature using FastAPI.
The backend computes CLIP scores for generated images against their prompts (and LPIPS scores against optional reference images), then packages the results into a zip report.
It also adds the /generate_report API endpoint with Swagger UI support for easy testing.
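
For context, a minimal sketch of how such a scoring core is typically wired up with the openai/CLIP and lpips packages (this is illustrative, not the exact code in generate_report.py; normalization and error handling are simplified):

import torch
import clip    # openai/CLIP
import lpips
from PIL import Image
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)
lpips_model = lpips.LPIPS(net="alex").to(device)

def compute_scores(prompt, image_path, reference_image_path=None):
    # CLIP score: cosine similarity between image and prompt embeddings
    img = Image.open(image_path).convert("RGB")
    image_tensor = preprocess(img).unsqueeze(0).to(device)
    text = clip.tokenize([prompt]).to(device)
    with torch.no_grad():
        image_feat = clip_model.encode_image(image_tensor)
        text_feat = clip_model.encode_text(text)
        clip_score = torch.nn.functional.cosine_similarity(image_feat, text_feat).item()

    # LPIPS: perceptual distance to a reference image; inputs are mapped
    # from [0,1] to [-1,1], and both images must share the same dimensions
    lpips_score = None
    if reference_image_path:
        ref_img = Image.open(reference_image_path).convert("RGB")
        to_tensor = transforms.ToTensor()
        a = to_tensor(img).unsqueeze(0).mul(2).sub(1).to(device)
        b = to_tensor(ref_img).unsqueeze(0).mul(2).sub(1).to(device)
        with torch.no_grad():
            lpips_score = float(lpips_model(a, b).item())

    return clip_score, lpips_score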

Changes Made

  • Added generate_report.py to handle scoring and report creation.
  • Created server.py FastAPI app with /generate_report endpoint.
  • Added requirements to handle image processing and model loading.
  • Updated README and added sample config and results files.
  • Created report.zip as output bundle for reports.

Evidence Required ✅

UI Screenshot

Generated Image

Logs

INFO: Started server process [12345]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: 127.0.0.1:54321 - "POST /generate_report HTTP/1.1" 200 OK
INFO: Report generated successfully: report.zip

Tests (Optional)

No automated tests yet; manual testing done via Swagger UI.

Test results:

  • Used Swagger UI /docs to POST sample report requests with valid JSON payloads.
  • Verified server returns HTTP 200 status with report.zip attachment.
  • Unzipped report and confirmed presence of the expected CSV/JSON result files and summary; a scripted version of this check is sketched below.
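
For reference, the same manual check can be scripted; a minimal sketch with the requests library, assuming the server runs on uvicorn's default port 8000 (the payload keys are illustrative, not the real settings schema):

import requests

# Hypothetical settings payload; the actual schema comes from the frontend
payload = {"prompt": "a red fox in snow", "steps": 20}

resp = requests.post("http://127.0.0.1:8000/generate_report", json=payload, timeout=300)
resp.raise_for_status()
assert resp.headers.get("content-type", "").startswith("application/zip")

with open("report.zip", "wb") as f:
    f.write(resp.content)
print(f"Saved report.zip ({len(resp.content)} bytes)")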

Checklist

  • UI screenshot provided
  • Generated image provided (Not applicable)
  • Logs provided
  • Tests added (optional) (Manual testing performed)
  • Code follows project style and conventions
  • Self-review completed with successful manual validation

Summary by Sourcery

Implement end-to-end report generation: compute image quality metrics in Python, bundle results into a ZIP via a FastAPI endpoint, and integrate a React button for users to trigger and download the report

New Features:

  • Add FastAPI /generate_report endpoint to generate and serve ZIP reports
  • Introduce Python report generator (generate_report.py) to compute CLIP and LPIPS scores, produce config JSON, and bundle results into a ZIP
  • Create React GenerateReportButton component for triggering report generation and downloading the ZIP
  • Add ‘Generate Report’ button to the Txt2ImgPage UI to invoke the report endpoint

Enhancements:

  • Enable CORS and add a root HTML page in the FastAPI server for basic API info

Documentation:

  • Include sample config.json, results.csv, and README.txt for the report bundle

Tests:

  • Add test_schema.py to validate the CSV format for the report

Summary by CodeRabbit

  • New Features

    • Added a Generate Report button to export a ZIP containing results, generation settings, and preview grids. Automatic download with clear loading state; no impact to existing generation flow.
  • Documentation

    • Added a README describing the report bundle contents and how to use them.
  • Tests

    • Introduced a utility to validate the report CSV schema for required columns.


sourcery-ai bot commented Aug 11, 2025

Reviewer's Guide

This PR introduces a full-featured report generation workflow: a new backend module computes CLIP and LPIPS scores and packages the results into a zip served by a FastAPI endpoint, the frontend gains support for triggering and downloading the report, and documentation and basic schema testing round out the change.

Sequence diagram for report generation via FastAPI endpoint

sequenceDiagram
    actor User
    participant Frontend
    participant FastAPI_Server
    participant ReportModule
    participant FileSystem

    User->>Frontend: Click 'Generate Report' button
    Frontend->>FastAPI_Server: POST /generate_report (with settings)
    FastAPI_Server->>ReportModule: run_report(settings)
    ReportModule->>FileSystem: Read results.csv, grids/, README.txt
    ReportModule->>ReportModule: Compute CLIP & LPIPS scores
    ReportModule->>FileSystem: Write updated results.csv, config.json
    ReportModule->>FileSystem: Create report.zip
    ReportModule-->>FastAPI_Server: Path to report.zip
    FastAPI_Server-->>Frontend: Return report.zip
    Frontend-->>User: Download report.zip

Class diagram for report generation module and FastAPI endpoint

classDiagram
    class ReportModule {
        +compute_scores(prompt, image_path, reference_image_path)
        +add_scores_to_csv(results_file)
        +generate_config_from_frontend(config_file, settings_dict)
        +create_report_zip(output_zip)
        +run_report(settings_from_frontend)
    }
    class FastAPI_Server {
        +generate_report_endpoint(frontend_settings)
    }
    FastAPI_Server --> ReportModule: calls run_report
    ReportModule --> FileSystem: reads/writes files
    ReportModule --> CLIP_Model: uses for scoring
    ReportModule --> LPIPS_Model: uses for scoring
    class FileSystem {
        results.csv
        config.json
        README.txt
        grids/
        report.zip
    }
    class CLIP_Model {
        +encode_image(image_tensor)
        +encode_text(text)
    }
    class LPIPS_Model {
        +__call__(a, b)
    }

File-Level Changes

Change | Details | Files
Add backend report generation pipeline with CLIP and LPIPS scoring
  • Implement generate_report.py: compute_scores, update CSV, generate config, and create ZIP
  • Build FastAPI server in server.py with CORS and /generate_report endpoint returning the ZIP
  • Include test_schema.py to validate CSV structure and add sample config.json, results.csv, README.txt
tasks/report_bundle/generate_report.py
tasks/report_bundle/server.py
tasks/report_bundle/test_schema.py
tasks/report_bundle/README.txt
tasks/report_bundle/config.json
tasks/report_bundle/results.csv
Integrate report generation feature into React frontend
  • Add handleGenerateReport function and Generate Report button in Txt2ImgPage.tsx
  • Create GenerateReportButton.jsx to post frontend settings and handle ZIP download
dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx
tasks/report_bundle/src/components/GenerateReportButton.jsx
Update documentation and dependencies for the report feature
  • Add required image-processing and model-loading libraries to requirements
  • Update root README with report generation instructions and sample configurations
  • Bundle sample config/results files for user reference
requirements.txt
README.md



coderabbitai bot commented Aug 11, 2025

Walkthrough

Adds end-to-end “Report Bundle” generation: frontend triggers a new POST /generate_report API to build a ZIP (results.csv, config.json, README, grids). Implements a FastAPI server, report generation pipeline with CLIP/LPIPS scoring, packaging, and a reusable React button. Includes README, example config, and a CSV schema test script.

Changes

Cohort / File(s) | Summary
Frontend trigger (Txt2Img integration)
dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx
Adds handleGenerateReport() to POST /api/generate-report, download report.zip; injects “Generate Report” button, disabled during generation.
Reusable report button (standalone UI)
tasks/report_bundle/src/components/GenerateReportButton.jsx
New React component that collects settings, POSTs to API, handles loading/error, and downloads ZIP. Configurable apiUrl.
Report generation pipeline
tasks/report_bundle/generate_report.py
New module: loads CLIP and LPIPS, computes scores, updates results.csv, writes config.json from frontend settings, zips outputs (results, config, README, grids). Provides run_report() entrypoint.
Backend API server
tasks/report_bundle/server.py
FastAPI app with CORS, GET / landing, POST /generate_report to invoke run_report() and return report.zip via FileResponse with error handling.
Report bundle docs and sample config
tasks/report_bundle/README.txt, tasks/report_bundle/config.json
Adds README describing bundle contents; adds sample config.json for report task.
CSV schema check utility
tasks/report_bundle/test_schema.py
Adds test function to validate required CSV columns and a script entry for results.csv.

Sequence Diagram(s)

sequenceDiagram
  participant UI as Frontend (Txt2Img/GenerateReportButton)
  participant API as FastAPI /generate_report
  participant RG as run_report()
  participant FS as Filesystem

  UI->>API: POST frontend_settings
  API->>RG: run_report(settings)
  RG->>FS: Validate inputs (results.csv, grids, README)
  RG->>FS: Compute scores, update results.csv
  RG->>FS: Write config.json
  RG->>FS: Create report.zip
  RG-->>API: Path to report.zip
  API-->>UI: FileResponse (report.zip)
sequenceDiagram
  participant RR as run_report()
  participant CSV as add_scores_to_csv
  participant CFG as generate_config_from_frontend
  participant ZIP as create_report_zip

  RR->>CSV: Update results.csv with CLIP/LPIPS
  RR->>CFG: Write config.json from settings
  RR->>ZIP: Zip results/config/README/grids
  ZIP-->>RR: report.zip path

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

I stitched a zip with whiskered care,
Scores and grids and configs there—
A hop, a post, the bytes all flow,
From prompt to pouch, reports now go.
Thump-thump! my paws approve this art,
A bunny’s bundle, download to heart. 🐰📦✨


@sourcery-ai sourcery-ai bot left a comment


Hey @lialyAdam - I've reviewed your changes - here's some feedback:

  • Endpoint path mismatch: frontend calls /api/generate-report whereas backend defines /generate_report—unify the route naming and path across frontend and server.
  • In add_scores_to_csv you open and write results.csv twice in a row; remove the redundant write block to avoid overwriting the updated data.
  • The download/report logic in Txt2ImgPage duplicates the GenerateReportButton component—extract the shared fetch-and-download code into a single reusable hook or component.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Endpoint path mismatch: frontend calls `/api/generate-report` whereas backend defines `/generate_report`—unify the route naming and path across frontend and server.
- In add_scores_to_csv you open and write `results.csv` twice in a row; remove the redundant write block to avoid overwriting the updated data.
- The download/report logic in Txt2ImgPage duplicates the GenerateReportButton component—extract the shared fetch-and-download code into a single reusable hook or component.

## Individual Comments

### Comment 1
<location> `tasks/report_bundle/generate_report.py:85` </location>
<code_context>
+            row['run_id'] = f"run_{row.get('id', '')}"
+            rows.append(row)
+
+    with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
+        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
+        writer.writeheader()
</code_context>

<issue_to_address>
Duplicate block for writing CSV file.

The file is being written twice; removing the duplicate write will improve efficiency.
</issue_to_address>

<suggested_fix>
<<<<<<< SEARCH
    with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)


    with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
=======
    with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
>>>>>>> REPLACE

</suggested_fix>


Comment on lines +85 to +94
    with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)


    with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

suggestion: Duplicate block for writing CSV file.

The file is being written twice; removing the duplicate write will improve efficiency.

Suggested change
     with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
         writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
         writer.writeheader()
         writer.writerows(rows)
-
-    with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
-        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
-        writer.writeheader()
-        writer.writerows(rows)

Comment on lines +9 to +10
        if missing:
            raise ValueError(f"CSV is missing required columns: {missing}")

issue (code-quality): Avoid conditionals in tests. (no-conditionals-in-tests)

Explanation: Avoid complex code, like conditionals, in test functions.

Google's software engineering guidelines say:
"Clear tests are trivially correct upon inspection"
To reach that avoid complex code in tests:

  • loops
  • conditionals

Some ways to fix this:

  • Use parametrized tests to get rid of the loop.
  • Move the complex logic into helpers.
  • Move the complex part into pytest fixtures.

Complexity is most often introduced in the form of logic. Logic is defined via the imperative parts of programming languages such as operators, loops, and conditionals. When a piece of code contains logic, you need to do a bit of mental computation to determine its result instead of just reading it off of the screen. It doesn't take much logic to make a test more difficult to reason about.

Software Engineering at Google / Don't Put Logic in Tests

Comment on lines +39 to +47
        ref_img = Image.open(reference_image_path).convert("RGB")
        # lpips expects tensors in range [-1,1] as float with shape [1,3,H,W]
        # use lpips helper if available or convert
        from torchvision import transforms
        to_tensor = transforms.ToTensor()
        a = to_tensor(img).unsqueeze(0).mul(2).sub(1).to(device)   # map [0,1] -> [-1,1]
        b = to_tensor(ref_img).unsqueeze(0).mul(2).sub(1).to(device)
        with torch.no_grad():
            lpips_score = float(lpips_model(a, b).item())

issue (code-quality): Extract code out into function (extract-method)

"""
rows = []
with open(results_file, newline='', encoding='utf-8') as csvfile:
reader = csv.DictReader(csvfile)

issue (code-quality): We've found these issues:

Comment on lines +19 to +29
    html_content = """
    <html>
        <head>
            <title>DreamLayer Report API</title>
        </head>
        <body>
            <h1>Welcome to DreamLayer Report API</h1>
            <p>Use the <code>/generate_report</code> POST endpoint to generate reports.</p>
        </body>
    </html>
    """

issue (code-quality): Inline variable that is immediately returned (inline-immediately-returned-variable)

            raise HTTPException(status_code=500, detail=f"{zip_path} not found after generation.")
        return FileResponse(zip_path, filename="report.zip", media_type="application/zip")
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

suggestion (code-quality): Explicitly raise from a previous error (raise-from-previous-error)

Suggested change
-        raise HTTPException(status_code=500, detail=str(e))
+        raise HTTPException(status_code=500, detail=str(e)) from e

Comment on lines +8 to +9
        missing = required_columns - headers
        if missing:

suggestion (code-quality): Use named expression to simplify assignment and conditional (use-named-expression)

Suggested change
-        missing = required_columns - headers
-        if missing:
+        if missing := required_columns - headers:

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

🧹 Nitpick comments (12)
tasks/report_bundle/config.json (1)

1-8: Clarify sample vs generated config; consider renaming to avoid confusion

This checked-in config looks like a placeholder. Since run_report overwrites CONFIG_FILE from frontend settings, shipping a static config.json alongside can be confusing. Recommend renaming to config.sample.json (and ignoring config.json) or clearly documenting that config.json will be regenerated at runtime.

Would you like me to generate a small README snippet and .gitignore entry to enforce this?

tasks/report_bundle/README.txt (1)

11-13: Polish tone and clarity per style hints

Tweak wording for a slightly more formal tone and clarity.

-Feel free to explore or reuse any part of it. Let me know if anything’s missing!
+Please feel free to explore or reuse any part of it. If anything is missing, please let me know.
tasks/report_bundle/generate_report.py (4)

27-31: Close images deterministically; minor import cleanup

Use context managers so file handles are closed promptly. Also, keep ToTensor import at module scope or reuse a single instance.

-    # open image
-    img = Image.open(image_path).convert("RGB")
+    # open image
+    with Image.open(image_path) as _img:
+        img = _img.convert("RGB")
     image_tensor = preprocess(img).unsqueeze(0).to(device)
     text = clip.tokenize([prompt]).to(device)
@@
-    if reference_image_path and os.path.exists(reference_image_path):
-        ref_img = Image.open(reference_image_path).convert("RGB")
+    if reference_image_path and os.path.exists(reference_image_path):
+        with Image.open(reference_image_path) as _ref_img:
+            ref_img = _ref_img.convert("RGB")
         # lpips expects tensors in range [-1,1] as float with shape [1,3,H,W]
         # use lpips helper if available or convert
-        from torchvision import transforms
-        to_tensor = transforms.ToTensor()
+        from torchvision import transforms
+        to_tensor = transforms.ToTensor()
         a = to_tensor(img).unsqueeze(0).mul(2).sub(1).to(device)   # map [0,1] -> [-1,1]
         b = to_tensor(ref_img).unsqueeze(0).mul(2).sub(1).to(device)

Also applies to: 37-49


71-84: Unify comment language for team readability

Non-English inline comments can confuse readers. Suggest replacing Arabic comments with concise English.

-            prompt = row.get('prompt', '')  # لو موجود
-            ref_path = row.get('reference_image_path')  # لو موجود
+            prompt = row.get('prompt', '')  # optional
+            ref_path = row.get('reference_image_path')  # optional
@@
-            # إنشاء run_id من id (مثلاً run_1, run_2, ...)
+            # derive run_id from id (e.g., run_1, run_2, ...)

107-116: Enable ZIP compression and preserve relative arcnames

Use deflated compression to shrink the bundle and ensure consistent arcnames.

-def create_report_zip(output_zip=OUTPUT_ZIP):
-    with zipfile.ZipFile(output_zip, 'w') as zf:
+def create_report_zip(output_zip=OUTPUT_ZIP):
+    with zipfile.ZipFile(output_zip, 'w', compression=zipfile.ZIP_DEFLATED) as zf:
         zf.write(RESULTS_FILE)
         zf.write(CONFIG_FILE)
         zf.write(README_FILE)
         for image_file in os.listdir(GRIDS_DIR):
             full_path = os.path.join(GRIDS_DIR, image_file)
             if os.path.isfile(full_path):
                 zf.write(full_path, arcname=os.path.join('grids', image_file))

If you want max compression, I can add compresslevel for Python 3.7+.
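
For illustration, the compresslevel variant (accepted by zipfile.ZipFile since Python 3.7) would look like:

    # compresslevel=9 gives the smallest output at the cost of CPU time;
    # the zf.write(...) calls inside the block stay unchanged
    with zipfile.ZipFile(output_zip, 'w', compression=zipfile.ZIP_DEFLATED, compresslevel=9) as zf:
        ...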


123-139: Path resolution robustness

If the server isn’t started from tasks/report_bundle/, relative constants can fail. Consider resolving paths relative to this module.

A short patch that adds BASE_DIR = Path(__file__).resolve().parent and rebinds RESULTS_FILE, CONFIG_FILE, etc., to BASE_DIR/… would harden path handling.
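
A minimal sketch of that patch, assuming the constants keep their current names:

from pathlib import Path

# Resolve bundle paths relative to this module instead of the CWD
BASE_DIR = Path(__file__).resolve().parent
RESULTS_FILE = BASE_DIR / "results.csv"
CONFIG_FILE = BASE_DIR / "config.json"
README_FILE = BASE_DIR / "README.txt"
GRIDS_DIR = BASE_DIR / "grids"
OUTPUT_ZIP = BASE_DIR / "report.zip"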

dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx (1)

429-435: Optional: keep action buttons consistent

Consider moving “Generate Report” into the existing ActionButtons component for consistent spacing and responsive behavior (and to co-locate CTA logic). Not required.

tasks/report_bundle/server.py (1)

32-38: Validate and constrain output path (defense-in-depth).

run_report returns a path that's sent back. As a safety net, ensure it points to an expected directory and is a file before responding to avoid accidental disclosure.

-        if not os.path.exists(zip_path):
+        if not (os.path.isfile(zip_path)):
             raise HTTPException(status_code=500, detail=f"{zip_path} not found after generation.")

Optionally, assert it’s under a known reports directory.

tasks/report_bundle/src/components/GenerateReportButton.jsx (4)

41-44: Surface API JSON errors cleanly (extract detail).

FastAPI returns JSON {"detail": "..."} for errors. Parse JSON when appropriate for better UX.

-      if (!res.ok) {
-        const txt = await res.text();
-        throw new Error(txt || `Server responded ${res.status}`);
-      }
+      if (!res.ok) {
+        const ct = res.headers.get("content-type") || "";
+        let msg;
+        if (ct.includes("application/json")) {
+          try {
+            const data = await res.json();
+            msg = (data && (data.detail || data.message)) || JSON.stringify(data);
+          } catch {
+            msg = await res.text();
+          }
+        } else {
+          msg = await res.text();
+        }
+        throw new Error(msg || `Server responded ${res.status}`);
+      }

46-55: Use server-provided filename from Content-Disposition when available.

Respect the filename suggested by the API (e.g., timestamped zips).

-      const blob = await res.blob();
+      const blob = await res.blob();
       const url = window.URL.createObjectURL(blob);
       const a = document.createElement("a");
-      a.href = url;
-      a.download = "report.zip";
+      a.href = url;
+      let filename = "report.zip";
+      const cd = res.headers.get("content-disposition") || "";
+      const m = cd.match(/filename\*=UTF-8''([^;]+)|filename="?([^";]+)"?/i);
+      if (m) {
+        filename = decodeURIComponent(m[1] || m[2]);
+      }
+      a.download = filename;
       document.body.appendChild(a);
       a.click();
       a.remove();
       window.URL.revokeObjectURL(url);

14-15: Make API URL environment-driven to avoid hard-coded localhost.

Defaulting to localhost is fine for dev, but prefer env-driven or relative URLs to simplify deployment and avoid CORS.

  • Prefer a relative default: apiUrl = "/generate_report", or
  • Read from env (Vite): apiUrl = import.meta.env.VITE_REPORT_API_URL || "/generate_report"

I can submit a small PR wiring this through your build tooling if helpful.


64-74: Optional UX: cancellation and double-click protection.

Consider using an AbortController to cancel in-flight requests on re-click/unmount and guard against double-submits.

High-level example:

  • Keep a ref to an AbortController.
  • Abort previous request before starting a new one.
  • Clean up on unmount.

I can provide a concrete patch if you want this enhancement.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1911cd3 and bb7235a.

⛔ Files ignored due to path filters (5)
  • tasks/report_bundle.zip is excluded by !**/*.zip
  • tasks/report_bundle/grids/grid1.png is excluded by !**/*.png
  • tasks/report_bundle/grids/grid2.png is excluded by !**/*.png
  • tasks/report_bundle/report.zip is excluded by !**/*.zip
  • tasks/report_bundle/results.csv is excluded by !**/*.csv
📒 Files selected for processing (7)
  • dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx (2 hunks)
  • tasks/report_bundle/README.txt (1 hunks)
  • tasks/report_bundle/config.json (1 hunks)
  • tasks/report_bundle/generate_report.py (1 hunks)
  • tasks/report_bundle/server.py (1 hunks)
  • tasks/report_bundle/src/components/GenerateReportButton.jsx (1 hunks)
  • tasks/report_bundle/test_schema.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
tasks/report_bundle/server.py (1)
tasks/report_bundle/generate_report.py (1)
  • run_report (118-139)
dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx (1)
tasks/report_bundle/src/components/GenerateReportButton.jsx (1)
  • error (17-17)
🪛 LanguageTool
tasks/report_bundle/README.txt

[style] ~10-~10: Consider using a less common alternative to make your writing sound more unique and professional.
Context: ...ages — labeled previews of the outputs Feel free to explore or reuse any part of it. Let me...

(FEEL_FREE_TO_STYLE_ME)


[style] ~11-~11: Consider using polite language here.
Context: ...ree to explore or reuse any part of it. Let me know if anything’s missing! — Lial

(INSERT_PLEASE)

🪛 Ruff (0.12.2)
tasks/report_bundle/server.py

33-33: Do not perform function call Body in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


40-40: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

Comment on lines +61 to +75
  const handleGenerateReport = async () => {
    try {
      const response = await fetch("/api/generate-report", { method: "POST" });
      if (!response.ok) throw new Error("Failed to generate report");
      const blob = await response.blob();
      const url = window.URL.createObjectURL(blob);
      const link = document.createElement("a");
      link.href = url;
      link.download = "report.zip";
      link.click();
    } catch (error) {
      console.error("Error generating report:", error);
    }
  };


💡 Verification agent

🧩 Analysis chain

Endpoint mismatch and missing payload; also revoke Blob URL

Server exposes POST /generate_report (underscore). Frontend calls /api/generate-report (hyphen) with no body. This likely 404s or fails JSON parsing. Send the settings payload (aligned with run_report) and use underscore path. Also revoke the URL.

-  const handleGenerateReport = async () => {
-  try {
-    const response = await fetch("/api/generate-report", { method: "POST" });
+  const handleGenerateReport = async () => {
+  try {
+    // Reuse the same settings shape used for generation
+    const requestData = {
+      ...coreSettings,
+      ...(controlNetConfig && { controlnet: controlNetConfig }),
+      ...(loraConfig?.enabled && { lora: loraConfig }),
+      custom_workflow: customWorkflow || null,
+    };
+    const response = await fetch("/api/generate_report", {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      body: JSON.stringify(requestData),
+    });
     if (!response.ok) throw new Error("Failed to generate report");
     const blob = await response.blob();
     const url = window.URL.createObjectURL(blob);
     const link = document.createElement("a");
     link.href = url;
     link.download = "report.zip"; 
-    link.click();
+    document.body.appendChild(link);
+    link.click();
+    link.remove();
+    setTimeout(() => window.URL.revokeObjectURL(url), 1000);
   } catch (error) {
     console.error("Error generating report:", error);
   }
 };

To confirm the route name across the repo, run:


🏁 Script executed:

#!/bin/bash
rg -n -A2 -i $'/generate[_-]report'

Length of output: 1462


Fix endpoint path, include JSON payload, and revoke Blob URL

Please update the report‐generation handler in Txt2ImgPage to match the backend’s /generate_report route, send the required settings in the request body, and clean up the created Blob URL to avoid leaks.

dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx

   const handleGenerateReport = async () => {
-    try {
-      const response = await fetch("/api/generate-report", { method: "POST" });
+    try {
+      // Send current generation settings to the backend
+      const requestData = {
+        ...coreSettings,
+        ...(controlNetConfig && { controlnet: controlNetConfig }),
+        ...(loraConfig?.enabled && { lora: loraConfig }),
+        custom_workflow: customWorkflow || null,
+      };
+      const response = await fetch("/api/generate_report", {
+        method: "POST",
+        headers: { "Content-Type": "application/json" },
+        body: JSON.stringify(requestData),
+      });
       if (!response.ok) throw new Error("Failed to generate report");
       const blob = await response.blob();
       const url = window.URL.createObjectURL(blob);
       const link = document.createElement("a");
       link.href = url;
       link.download = "report.zip";
-      link.click();
+      // Append to DOM, trigger download, then clean up
+      document.body.appendChild(link);
+      link.click();
+      link.remove();
+      setTimeout(() => window.URL.revokeObjectURL(url), 1000);
     } catch (error) {
       console.error("Error generating report:", error);
     }
   };
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-  const handleGenerateReport = async () => {
-    try {
-      const response = await fetch("/api/generate-report", { method: "POST" });
-      if (!response.ok) throw new Error("Failed to generate report");
-      const blob = await response.blob();
-      const url = window.URL.createObjectURL(blob);
-      const link = document.createElement("a");
-      link.href = url;
-      link.download = "report.zip";
-      link.click();
-    } catch (error) {
-      console.error("Error generating report:", error);
-    }
-  };
+  const handleGenerateReport = async () => {
+    try {
+      // Send current generation settings to the backend
+      const requestData = {
+        ...coreSettings,
+        ...(controlNetConfig && { controlnet: controlNetConfig }),
+        ...(loraConfig?.enabled && { lora: loraConfig }),
+        custom_workflow: customWorkflow || null,
+      };
+      const response = await fetch("/api/generate_report", {
+        method: "POST",
+        headers: { "Content-Type": "application/json" },
+        body: JSON.stringify(requestData),
+      });
+      if (!response.ok) throw new Error("Failed to generate report");
+      const blob = await response.blob();
+      const url = window.URL.createObjectURL(blob);
+      const link = document.createElement("a");
+      link.href = url;
+      link.download = "report.zip";
+      // Append to DOM, trigger download, then clean up
+      document.body.appendChild(link);
+      link.click();
+      link.remove();
+      setTimeout(() => window.URL.revokeObjectURL(url), 1000);
+    } catch (error) {
+      console.error("Error generating report:", error);
+    }
+  };
🤖 Prompt for AI Agents
In dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx around lines 61 to
75, the handler posts to the wrong endpoint, omits the required JSON payload,
and never revokes the created Blob URL; change the fetch URL to
"/generate_report", send the required settings object in the request body as
JSON with headers "Content-Type: application/json" (use
JSON.stringify(settings)), keep the existing response/blob handling but after
programmatically clicking the link call window.URL.revokeObjectURL(url) and
remove the link element to avoid leaks, and preserve proper error handling and
response.ok checks.

Comment on lines +16 to +20
# Load models (once)
device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)
lpips_model = lpips.LPIPS(net='alex').to(device)


💡 Verification agent

❓ Verification inconclusive

Set models to eval mode; optionally lazy-load to avoid import-time cost

Switch learned models to eval() to disable dropout/batch-norm training behavior. Consider lazy-loading on first use to reduce import-time latency.

 device = "cuda" if torch.cuda.is_available() else "cpu"
 clip_model, preprocess = clip.load("ViT-B/32", device=device)
 lpips_model = lpips.LPIPS(net='alex').to(device)
+clip_model.eval()
+lpips_model.eval()

If startup latency is a concern, I can refactor to lazy-load these handles the first time compute_scores is called.


Set models to inference mode and consider lazy‐loading

After loading the CLIP and LPIPS models in tasks/report_bundle/generate_report.py (around lines 16–20), switch them to evaluation mode to disable dropout/BatchNorm training behavior. If startup latency becomes an issue, you can move the load + eval calls into your compute_scores function and only execute them on first use.

• File: tasks/report_bundle/generate_report.py
• Lines: 16–20

 device = "cuda" if torch.cuda.is_available() else "cpu"
 clip_model, preprocess = clip.load("ViT-B/32", device=device)
 lpips_model = lpips.LPIPS(net='alex').to(device)
+clip_model.eval()
+lpips_model.eval()
📝 Committable suggestion


Suggested change
 # Load models (once)
 device = "cuda" if torch.cuda.is_available() else "cpu"
 clip_model, preprocess = clip.load("ViT-B/32", device=device)
 lpips_model = lpips.LPIPS(net='alex').to(device)
+clip_model.eval()
+lpips_model.eval()
🤖 Prompt for AI Agents
In tasks/report_bundle/generate_report.py around lines 16 to 20, after loading
the CLIP and LPIPS models call .eval() on both models to put them into inference
mode (e.g., clip_model.eval(); lpips_model.eval()) so dropout/BatchNorm are
disabled; additionally, to reduce startup latency optionally move the load and
.eval() calls into compute_scores and implement lazy initialization (initialize
globals on first use, load models, .to(device) and .eval() there) so the models
are only loaded and set to eval when compute_scores is first invoked.
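
A minimal sketch of that optional lazy-initialization refactor, assuming the module-level names used above:

import torch
import clip
import lpips

_device = None
_clip_model = None
_preprocess = None
_lpips_model = None

def _get_models():
    """Load CLIP and LPIPS on first use, cache them, and keep them in eval mode."""
    global _device, _clip_model, _preprocess, _lpips_model
    if _clip_model is None:
        _device = "cuda" if torch.cuda.is_available() else "cpu"
        _clip_model, _preprocess = clip.load("ViT-B/32", device=_device)
        _clip_model.eval()
        _lpips_model = lpips.LPIPS(net='alex').to(_device)
        _lpips_model.eval()
    return _device, _clip_model, _preprocess, _lpips_model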

Comment on lines +85 to +95
    with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)


    with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)


⚠️ Potential issue

Duplicate write to results.csv (idempotent but unnecessary, potential race window)

The file is written twice back-to-back with identical content. Remove the second block.

     with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
         writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
         writer.writeheader()
         writer.writerows(rows)
 
-
-    with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
-        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
-        writer.writeheader()
-        writer.writerows(rows)
📝 Committable suggestion


Suggested change
     with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
         writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
         writer.writeheader()
         writer.writerows(rows)
-
-    with open(results_file, 'w', newline='', encoding='utf-8') as csvfile:
-        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
-        writer.writeheader()
-        writer.writerows(rows)
🤖 Prompt for AI Agents
In tasks/report_bundle/generate_report.py around lines 85 to 95 there is a
duplicated block that opens results_file and writes the CSV (writeheader and
writerows) twice; remove the second identical with-open/write block so the file
is written only once (keep the first context manager), leaving a single
csv.DictWriter writeheader() + writerows(rows) call and no extra duplicate
write.

Comment on lines +1 to +3
from fastapi import FastAPI, Body, HTTPException
from fastapi.responses import FileResponse, HTMLResponse
from fastapi.middleware.cors import CORSMiddleware

🛠️ Refactor suggestion

Avoid blocking the event loop, address Ruff warnings (B008/B904), and sanitize error details.

  • Offload CPU/IO-heavy run_report to a thread pool to keep the async server responsive.
  • Use Annotated[dict, Body(...)] to avoid function calls in defaults (Ruff B008).
  • Log and return a generic 500 to avoid leaking internals; chain the original exception (Ruff B904).
-from fastapi import FastAPI, Body, HTTPException
+from fastapi import FastAPI, Body, HTTPException
+from fastapi.concurrency import run_in_threadpool
 from fastapi.responses import FileResponse, HTMLResponse
 from fastapi.middleware.cors import CORSMiddleware
+from typing import Annotated
+import logging
 import os
-from generate_report import run_report
+from .generate_report import run_report
@@
 app = FastAPI()
+logger = logging.getLogger(__name__)
@@
-@app.post("/generate_report")
-async def generate_report_endpoint(frontend_settings: dict = Body(...)):
+@app.post("/generate_report")
+async def generate_report_endpoint(frontend_settings: Annotated[dict, Body(...)]):
     try:
-        zip_path = run_report(frontend_settings)
+        # Offload synchronous heavy work to a threadpool to prevent event loop blocking
+        zip_path = await run_in_threadpool(run_report, frontend_settings)
         if not os.path.exists(zip_path):
             raise HTTPException(status_code=500, detail=f"{zip_path} not found after generation.")
         return FileResponse(zip_path, filename="report.zip", media_type="application/zip")
     except Exception as e:
-        raise HTTPException(status_code=500, detail=str(e))
+        # Don't leak internal paths/details; keep cause chained for observability
+        logger.exception("Report generation failed")
+        raise HTTPException(status_code=500, detail="Report generation failed") from e

Also applies to: 33-40

🤖 Prompt for AI Agents
In tasks/report_bundle/server.py around lines 1-3 (and apply same changes to
lines 33-40), the review asks to avoid blocking the event loop, fix Ruff
warnings B008/B904, and sanitize error details: change endpoint parameters to
use Annotated[dict, Body(...)] instead of calling Body in a default, offload the
CPU/IO-heavy run_report call to a thread pool (e.g., FastAPI's run_in_threadpool
or loop.run_in_executor) so the async server stays responsive, catch exceptions,
log the full exception internally but return a generic HTTP 500 without internal
details, and re-raise or raise HTTPException(...) from the original exception to
preserve chaining.

from fastapi.responses import FileResponse, HTMLResponse
from fastapi.middleware.cors import CORSMiddleware
import os
from generate_report import run_report

💡 Verification agent

🧩 Analysis chain

Fix import to work when running as a package (uvicorn tasks.report_bundle.server:app).

Absolute import from generate_report import run_report will fail under package execution. Use a relative import.

-from generate_report import run_report
+from .generate_report import run_report

If you don’t already have them, ensure tasks/__init__.py and tasks/report_bundle/__init__.py exist so relative imports resolve.


🏁 Script executed:

#!/bin/bash
# Verify package markers exist for relative imports
set -euo pipefail
for d in tasks tasks/report_bundle; do
  if [ -d "$d" ]; then
    if [ ! -f "$d/__init__.py" ]; then
      echo "Missing $d/__init__.py"
    else
      echo "Found $d/__init__.py"
    fi
  fi
done

# Show the current import line
rg -n $'^from .*generate_report import run_report' -A 0 tasks/report_bundle/server.py

Length of output: 507


Add missing package markers and update import in server.py

The tasks and tasks/report_bundle directories are missing __init__.py, so Python won’t treat them as packages when invoking via uvicorn tasks.report_bundle.server:app. Additionally, the import in tasks/report_bundle/server.py must be relative.

  • Create empty __init__.py in:
    • tasks/__init__.py
    • tasks/report_bundle/__init__.py
  • In tasks/report_bundle/server.py, change:
-from generate_report import run_report
+from .generate_report import run_report
📝 Committable suggestion


Suggested change
-from generate_report import run_report
+from .generate_report import run_report
🤖 Prompt for AI Agents
In tasks/report_bundle/server.py around line 5, the project lacks package
markers and uses an absolute import that will fail when running via uvicorn; add
empty files tasks/__init__.py and tasks/report_bundle/__init__.py to make those
directories packages, and update the import in tasks/report_bundle/server.py to
use a relative import (import the run_report from the local module) so Python
can resolve it when invoked as tasks.report_bundle.server:app.

Comment on lines +9 to +15
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

🛠️ Refactor suggestion

CORS: Don’t combine allow_credentials=True with wildcard origins.

Per CORS spec and Starlette behavior, "*" with credentials is invalid. Either:

  • Set allow_credentials=False (simple fix), or
  • Replace "*" with an explicit list of origins.

Minimal safe change:

 app.add_middleware(
     CORSMiddleware,
     allow_origins=["*"],
-    allow_credentials=True,
+    allow_credentials=False,
     allow_methods=["*"],
     allow_headers=["*"],
 )

If you need credentials, replace ["*"] with explicit origins (e.g., from env) and keep allow_credentials=True.

📝 Committable suggestion


Suggested change
 app.add_middleware(
     CORSMiddleware,
     allow_origins=["*"],
-    allow_credentials=True,
+    allow_credentials=False,
     allow_methods=["*"],
     allow_headers=["*"],
 )
🤖 Prompt for AI Agents
In tasks/report_bundle/server.py around lines 9 to 15, the CORS middleware is
configured with allow_origins=["*"] and allow_credentials=True which is invalid
per the CORS spec and Starlette; either set allow_credentials=False to keep the
wildcard origin, or replace the wildcard with an explicit list of allowed
origins (e.g., load a comma-separated ALLOWED_ORIGINS from environment/config
and split into a list) and keep allow_credentials=True; update the code to use
one of these two safe options and ensure allow_origins is a concrete list when
allow_credentials is True.

Comment on lines +20 to +23
  const resolveSettings = () => {
    if (typeof getFrontendSettings === "function") return getFrontendSettings();
    return frontendSettings;
  };

🛠️ Refactor suggestion

Support async settings providers (await getFrontendSettings).

Allow getFrontendSettings to return a Promise for flexibility, then await it in the click handler.

-  const resolveSettings = () => {
-    if (typeof getFrontendSettings === "function") return getFrontendSettings();
-    return frontendSettings;
-  };
+  const resolveSettings = async () => {
+    if (typeof getFrontendSettings === "function") {
+      return await Promise.resolve(getFrontendSettings());
+    }
+    return frontendSettings;
+  };
@@
-      const settings = resolveSettings();
+      const settings = await resolveSettings();
       if (!settings || Object.keys(settings).length === 0) {
         throw new Error("No frontend settings provided. Pass frontendSettings or getFrontendSettings.");
       }

Also applies to: 25-33

🤖 Prompt for AI Agents
In tasks/report_bundle/src/components/GenerateReportButton.jsx around lines
20-23 (and similarly 25-33), the current resolveSettings assumes
getFrontendSettings is synchronous; change resolveSettings to be async and await
getFrontendSettings() when it's a function (i.e. if (typeof getFrontendSettings
=== "function") return await getFrontendSettings();), then update the click
handler to await resolveSettings() (and propagate or handle the returned
Promise) — wrap the await call in a try/catch to handle rejections and
surface/log errors appropriately so async settings providers are supported.

Comment on lines +3 to +14
def test_csv_schema(filename):
    required_columns = {'id', 'image_path', 'score'}
    with open(filename, newline='') as csvfile:
        reader = csv.DictReader(csvfile)
        headers = set(reader.fieldnames)
        missing = required_columns - headers
        if missing:
            raise ValueError(f"CSV is missing required columns: {missing}")
        print("✅ CSV schema test passed!")

if __name__ == "__main__":
    test_csv_schema('results.csv')

🛠️ Refactor suggestion

⚠️ Potential issue

Schema is out of sync with new CLIP/LPIPS fields; hard requirement on “score” will fail

The pipeline now writes clip_score and lpips_score (not a single score). Also add basic robustness (encoding, empty-header guard).

-def test_csv_schema(filename):
-    required_columns = {'id', 'image_path', 'score'}
-    with open(filename, newline='') as csvfile:
-        reader = csv.DictReader(csvfile)
-        headers = set(reader.fieldnames)
-        missing = required_columns - headers
-        if missing:
-            raise ValueError(f"CSV is missing required columns: {missing}")
-        print("✅ CSV schema test passed!")
+def test_csv_schema(filename):
+    # Minimum columns required by add_scores_to_csv
+    required_columns = {'id', 'image_path', 'prompt'}
+    # Accept either the new pair of scores or the legacy single "score" for backward compatibility
+    acceptable_score_sets = [
+        {'clip_score', 'lpips_score'},
+        {'score'},
+    ]
+    with open(filename, newline='', encoding='utf-8') as csvfile:
+        reader = csv.DictReader(csvfile)
+        if reader.fieldnames is None:
+            raise ValueError("CSV is empty or missing a header row")
+        headers = set(reader.fieldnames)
+        missing = required_columns - headers
+        if missing:
+            raise ValueError(f"CSV is missing required columns: {missing}")
+        if not any(scores.issubset(headers) for scores in acceptable_score_sets):
+            raise ValueError("CSV must contain either {'clip_score','lpips_score'} or legacy {'score'}")
+    print("✅ CSV schema test passed!")
📝 Committable suggestion


Suggested change
-def test_csv_schema(filename):
-    required_columns = {'id', 'image_path', 'score'}
-    with open(filename, newline='') as csvfile:
-        reader = csv.DictReader(csvfile)
-        headers = set(reader.fieldnames)
-        missing = required_columns - headers
-        if missing:
-            raise ValueError(f"CSV is missing required columns: {missing}")
-        print("✅ CSV schema test passed!")
+def test_csv_schema(filename):
+    # Minimum columns required by add_scores_to_csv
+    required_columns = {'id', 'image_path', 'prompt'}
+    # Accept either the new pair of scores or the legacy single "score" for backward compatibility
+    acceptable_score_sets = [
+        {'clip_score', 'lpips_score'},
+        {'score'},
+    ]
+    with open(filename, newline='', encoding='utf-8') as csvfile:
+        reader = csv.DictReader(csvfile)
+        if reader.fieldnames is None:
+            raise ValueError("CSV is empty or missing a header row")
+        headers = set(reader.fieldnames)
+        missing = required_columns - headers
+        if missing:
+            raise ValueError(f"CSV is missing required columns: {missing}")
+        if not any(scores.issubset(headers) for scores in acceptable_score_sets):
+            raise ValueError("CSV must contain either {'clip_score','lpips_score'} or legacy {'score'}")
+    print("✅ CSV schema test passed!")
 
 if __name__ == "__main__":
     test_csv_schema('results.csv')
🤖 Prompt for AI Agents
tasks/report_bundle/test_schema.py around lines 3 to 14: the test currently
requires a single "score" column and opens the CSV without explicit encoding or
a guard for empty headers; update the required columns to require "clip_score"
and "lpips_score" (or both if your pipeline writes both), open the file with an
explicit encoding (e.g., encoding='utf-8', errors='replace') and check
reader.fieldnames is not None before using it (raise a clear ValueError if
headers are missing), then compute missing = required_columns -
set(reader.fieldnames) and raise the existing error message if any are missing.
