feat: report bundle, presets, tiling+blend, metrics; tests, docs, CI #95
base: main
Conversation
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.
Walkthrough

Introduces core subsystems for presets, tiling/blending, and quality metrics (CLIP, SSIM, LPIPS); adds deterministic report bundle tooling and schema validation; establishes tests for all new components; updates README, CI workflow, pytest config, pre-commit hooks, and requirements; and adds package initializers and default presets JSON.

Changes
Sequence Diagram(s)
sequenceDiagram
participant Client
participant PresetManager
participant Disk
Client->>PresetManager: get_preset(name)
PresetManager->>Disk: load presets.json (on init)
PresetManager-->>Client: Preset
Client->>PresetManager: apply_preset_to_config(preset_name, cfg)
PresetManager-->>Client: merged config
sequenceDiagram
participant Orchestrator
participant Tiling
participant Generator
Orchestrator->>Tiling: process_tiled(generate_fn, width,height,tile,overlap,mode)
loop per tile
Tiling->>Generator: generate_fn(crop=rect, **kwargs)
Generator-->>Tiling: tile image
Tiling-->>Tiling: blend_paste(canvas, tile, rect, overlap, mode)
end
Tiling-->>Orchestrator: final image
sequenceDiagram
participant Evaluator
participant CLIP
participant SSIM
participant LPIPS
Evaluator->>CLIP: clip_text_image_similarity(images, prompts)
CLIP-->>Evaluator: [score|None]*
Evaluator->>SSIM: compute_ssim(img_a, img_b)
SSIM-->>Evaluator: score|None
Evaluator->>LPIPS: compute_lpips(img_a, img_b)
LPIPS-->>Evaluator: score|None
Evaluator-->>Evaluator: aggregate metrics
sequenceDiagram
participant User
participant Bundler
participant Schema
User->>Bundler: build_report_bundle(run_dir, out_zip)
Bundler->>Schema: validate_results_csv(results.csv)
Schema-->>Bundler: ok|error
Bundler-->>User: bundle summary (sha256, files)
User->>Bundler: validate_bundle(zip)
Bundler->>Schema: validate_results_csv(extracted csv)
Bundler-->>User: validity report
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Reviewer's Guide

This PR delivers four major end-to-end features—deterministic report bundles, versioned presets, high-resolution tiling with seamless blending, and optional image quality metrics—alongside comprehensive tests, updated documentation, and CI/tooling enhancements for consistent development workflows.

Class diagram for the new Preset and PresetManager system
classDiagram
class Preset {
+name: str
+version: int
+models: Dict[str, Any]
+params: Dict[str, Any]
+preset_hash: str
+description: str
+created_at: str
+updated_at: str
+_compute_hash(): str
+to_dict(): Dict[str, Any]
+update(**kwargs): None
+is_compatible_with(other: Preset): bool
}
class PresetManager {
+presets_file: Path
+presets: Dict[str, Preset]
+_load_presets(): None
+_save_presets(): None
+get_preset(name: str): Preset
+list_presets(): List[str]
+add_preset(preset: Preset): None
+remove_preset(name: str): bool
+create_preset_from_config(name: str, config: Dict[str, Any], description: str): Preset
+apply_preset_to_config(preset_name: str, config: Dict[str, Any]): Dict[str, Any]
+validate_preset(preset_name: str): Dict[str, Any]
}
PresetManager "1" -- "*" Preset
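As a usage sketch of the preset API in the diagram above (the import path and the PresetManager constructor argument are assumptions, not taken from the diff):

```python
from pathlib import Path

from core.presets import PresetManager  # import path assumed from the backend layout

# presets.json location assumed; the PR ships default/high_quality/fast presets
manager = PresetManager(presets_file=Path("dream_layer_backend/presets/presets.json"))
print(manager.list_presets())                 # e.g. ['default', 'high_quality', 'fast']

base_config = {"steps": 20, "cfg_scale": 7.0}
merged = manager.apply_preset_to_config("high_quality", base_config)

preset = manager.get_preset("high_quality")
print(preset.version, preset.preset_hash)     # versioned preset with a deterministic hash
```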
Class diagram for the new TilingConfig and tiling functions
classDiagram
class TilingConfig {
+tile_size: int
+overlap: int
+blend_mode: str
+to_dict(): Dict[str, Any]
}
class Tiling {
+tile_slices(width, height, tile_size, overlap): List[Tuple]
+create_blend_mask(tile_size, overlap, mode): np.ndarray
+blend_paste(canvas, tile_img, rect, overlap, mode): None
+process_tiled(generate_fn, width, height, tile_size, overlap, blend_mode, **gen_kwargs): np.ndarray | Image
+calculate_optimal_tile_size(width, height, max_tile_size, min_tile_size): Tuple[int, int]
+validate_tiling_config(width, height, tile_size, overlap): Dict[str, Any]
}
TilingConfig <|-- Tiling
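A hypothetical call into the tiling helpers above; the generate_fn contract (a crop=(x0, y0, x1, y1) keyword returning an array) follows the tests in this PR, while the blend_mode value shown is only illustrative:

```python
import numpy as np

from core.tiling import process_tiled  # import path assumed

def generate_fn(crop=None, **kwargs):
    # Stand-in generator: a real one would render just the requested crop region.
    x0, y0, x1, y1 = crop
    return np.zeros((y1 - y0, x1 - x0, 3), dtype=np.uint8)

image = process_tiled(generate_fn, width=1024, height=1024,
                      tile_size=512, overlap=64, blend_mode="linear")
assert image.shape[:2] == (1024, 1024)
```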
Class diagram for the new Quality Metrics (SSIM, LPIPS, CLIP)
classDiagram
class SSIMScorer {
+available: bool
+compute_ssim(img_a, img_b, **kwargs): Optional[float]
+get_info(): dict
}
class LPIPSScorer {
+net: str
+available: bool
+model: Any
+compute_lpips(img_a, img_b, net): Optional[float]
+get_info(): dict
}
class CLIPScorer {
+model_id: str
+model: Any
+processor: Any
+device: str
+clip_text_image_similarity(images, prompts, batch_size): List[Optional[float]]
+get_model_info(): dict
}
class QualityMetrics {
+ssim_scorer: SSIMScorer
+lpips_scorer: LPIPSScorer
+compute_all_metrics(img_a, img_b): dict
+get_metrics_info(): dict
}
QualityMetrics "1" -- "1" SSIMScorer
QualityMetrics "1" -- "1" LPIPSScorer
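A hedged usage sketch of the metrics classes above; when an optional dependency (scikit-image, lpips, torch/transformers) is missing, the scores are expected to come back as None:

```python
from PIL import Image

from metrics.clip_score import CLIPScorer
from metrics.ssim_lpips import QualityMetrics

img_a = Image.new("RGB", (256, 256), color=(200, 200, 200))
img_b = Image.new("RGB", (256, 256), color=(190, 190, 190))

metrics = QualityMetrics()
print(metrics.compute_all_metrics(img_a, img_b))   # dict of scores; None where a backend is unavailable

scorer = CLIPScorer()
print(scorer.clip_text_image_similarity([img_a], ["a plain gray square"]))  # [score or None]
```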
Class diagram for the Report Bundle system
classDiagram
class ReportBundle {
+build_report_bundle(run_dir, out_zip, selected_globs): Dict[str, Any]
+validate_bundle(zip_path): Dict[str, Any]
}
class ReportSchema {
+validate_results_csv(csv_path, bundle_root, config_data): None
+get_schema_version(csv_path): str
+create_schema_header(): List[str]
+validate_csv_structure(csv_path): Dict[str, Any]
}
ReportBundle ..> ReportSchema
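A hypothetical end-to-end use of the bundle API above; the run directory path and the summary keys are assumptions based on the sequence diagram (sha256, files), not verified against the diff:

```python
from tools.report_bundle import build_report_bundle, validate_bundle  # import path assumed

summary = build_report_bundle("runs/example_run", "report_bundle.zip")
print(summary)    # expected to include the bundle sha256 and the list of packed files

report = validate_bundle("report_bundle.zip")
print(report)     # validity report, including the results.csv schema check
```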
File-Level Changes
Hey there - I've reviewed your changes - here's some feedback:
- This PR bundles several major features at once; consider splitting it into smaller, feature-focused PRs (e.g., presets, tiling, report bundle) to simplify review and reduce risk.
- In tools/report_bundle.py, using tempfile.TemporaryDirectory for the temporary bundle folder would ensure reliable cleanup and avoid leftover artifacts on failure (see the sketch after this list).
- PresetManager currently uses print statements for feedback—switching these to a proper logger would align with the rest of the codebase and improve configurability.
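A minimal sketch of the TemporaryDirectory suggestion, assuming a simplified build function (the real build_report_bundle signature and staging steps may differ):

```python
import tempfile
import zipfile
from pathlib import Path

def build_bundle(run_dir: Path, out_zip: Path) -> None:
    # The staging directory is removed automatically, even if an exception is raised mid-build.
    with tempfile.TemporaryDirectory() as staging_dir:
        staging = Path(staging_dir)
        # ... copy and validate run artifacts from run_dir into `staging` here ...
        with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
            for path in sorted(staging.rglob("*")):   # sorted walk keeps the archive order deterministic
                zf.write(path, path.relative_to(staging))
```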
Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments
### Comment 1
<location> `dream_layer_backend/core/tiling.py:72` </location>
<code_context>
+ step = tile_size - overlap
+
+ # Generate tiles
+ for y in range(0, height, step):
+ for x in range(0, width, step):
+ # Calculate tile boundaries
+ x1 = min(x + tile_size, width)
+ y1 = min(y + tile_size, height)
+
+ # Ensure minimum tile size
+ if x1 - x >= overlap and y1 - y >= overlap:
+ tiles.append((x, y, x1, y1))
+
</code_context>
<issue_to_address>
Tile generation may miss edge tiles for certain image sizes.
For images where width or height isn't a multiple of (tile_size - overlap), the current loop may leave out tiles at the right or bottom edges. Please update the logic to guarantee full image coverage.
</issue_to_address>
### Comment 2
<location> `dream_layer_backend/core/tiling.py:101` </location>
<code_context>
+ Returns:
+ Blend mask as numpy array
+ """
+ if overlap == 0:
+ return np.ones((tile_size, tile_size))
+
</code_context>
<issue_to_address>
Blend mask creation does not handle non-square tiles.
The mask should be generated using the actual tile dimensions to avoid errors or incorrect blending at image edges.
</issue_to_address>
### Comment 3
<location> `dream_layer_backend/core/tiling.py:190` </location>
<code_context>
+ # Blend with existing content
+ if overlap > 0:
+ # Create overlap mask for existing content
+ overlap_mask = 1.0 - blend_mask[:y1-y0, :x1-x0]
+
+ if len(canvas_region.shape) == 3:
</code_context>
<issue_to_address>
Overlap mask slicing may not match tile/canvas region shape.
If the tile is smaller than expected, this slicing could cause shape mismatches or broadcasting errors. Please ensure blend_mask matches the tile region size.
</issue_to_address>
```python
for y in range(0, height, step):
    for x in range(0, width, step):
        # Calculate tile boundaries
        x1 = min(x + tile_size, width)
        y1 = min(y + tile_size, height)

        # Ensure minimum tile size
        if x1 - x >= overlap and y1 - y >= overlap:
```
issue: Tile generation may miss edge tiles for certain image sizes.
For images where width or height isn't a multiple of (tile_size - overlap), the current loop may leave out tiles at the right or bottom edges. Please update the logic to guarantee full image coverage.
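One way to address this, as a sketch (function name and signature are illustrative, not the repo's tile_slices): clamp each start position so the last row/column of tiles always ends at the image edge.

```python
def covering_tiles(width, height, tile_size, overlap):
    # Start positions are clamped so the final tile in each axis ends exactly at the edge.
    step = max(tile_size - overlap, 1)
    xs = sorted({min(x, max(width - tile_size, 0)) for x in range(0, width, step)})
    ys = sorted({min(y, max(height - tile_size, 0)) for y in range(0, height, step)})
    return [(x, y, min(x + tile_size, width), min(y + tile_size, height))
            for y in ys for x in xs]

# 1000 is not a multiple of 512 - 64 = 448, yet the right/bottom edges are still covered.
tiles = covering_tiles(1000, 1000, tile_size=512, overlap=64)
assert max(t[2] for t in tiles) == 1000 and max(t[3] for t in tiles) == 1000
```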
```python
    Returns:
        Blend mask as numpy array
    """
    if overlap == 0:
```
issue: Blend mask creation does not handle non-square tiles.
The mask should be generated using the actual tile dimensions to avoid errors or incorrect blending at image edges.
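A hedged sketch of a mask builder that takes the actual tile height and width (names and ramp shape are illustrative, not the repo's create_blend_mask):

```python
import numpy as np

def rect_blend_mask(tile_h: int, tile_w: int, overlap: int) -> np.ndarray:
    """Linear-ramp mask sized to the actual, possibly non-square, tile."""
    if overlap <= 0:
        return np.ones((tile_h, tile_w), dtype=np.float32)

    def ramp(n: int) -> np.ndarray:
        # Distance to the nearest edge, scaled into (0, 1] over the overlap width.
        d = np.minimum(np.arange(n), np.arange(n)[::-1]) + 1
        return np.clip(d / overlap, 0.0, 1.0)

    return np.outer(ramp(tile_h), ramp(tile_w)).astype(np.float32)

mask = rect_blend_mask(300, 512, 64)   # rectangular edge tile
assert mask.shape == (300, 512)
```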
```python
# Blend with existing content
if overlap > 0:
    # Create overlap mask for existing content
    overlap_mask = 1.0 - blend_mask[:y1-y0, :x1-x0]
```
issue: Overlap mask slicing may not match tile/canvas region shape.
If the tile is smaller than expected, this slicing could cause shape mismatches or broadcasting errors. Please ensure blend_mask matches the tile region size.
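A sketch of one way to keep the mask and the pasted region in lockstep (a standalone helper, not the repo's blend_paste): derive the mask from the region's actual shape instead of slicing a precomputed square mask.

```python
import numpy as np

def blend_paste_region(canvas, tile_img, rect, overlap):
    x0, y0, x1, y1 = rect
    region = np.asarray(tile_img, dtype=np.float32)[: y1 - y0, : x1 - x0]

    def ramp(n):
        d = np.minimum(np.arange(n), np.arange(n)[::-1]) + 1
        return np.clip(d / max(overlap, 1), 0.0, 1.0)

    # The mask is built from the region itself, so a short or narrow edge tile can never
    # produce a shape mismatch against the canvas slice.
    mask = np.outer(ramp(region.shape[0]), ramp(region.shape[1])).astype(np.float32)
    if region.ndim == 3:
        mask = mask[..., None]                    # broadcast over color channels

    existing = canvas[y0:y1, x0:x1].astype(np.float32)
    canvas[y0:y1, x0:x1] = (mask * region + (1.0 - mask) * existing).astype(canvas.dtype)
```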
```python
for i, config in enumerate(configs):
    manager.create_preset_from_config(f"persistent_preset_{i}", config)
```
issue (code-quality): Avoid loops in tests. (no-loop-in-tests)
Explanation
Avoid complex code, like loops, in test functions. Google's software engineering guidelines say:
"Clear tests are trivially correct upon inspection"
To reach that avoid complex code in tests:
- loops
- conditionals
Some ways to fix this:
- Use parametrized tests to get rid of the loop.
- Move the complex logic into helpers.
- Move the complex part into pytest fixtures.
Complexity is most often introduced in the form of logic. Logic is defined via the imperative parts of programming languages such as operators, loops, and conditionals. When a piece of code contains logic, you need to do a bit of mental computation to determine its result instead of just reading it off of the screen. It doesn't take much logic to make a test more difficult to reason about.
Software Engineering at Google / Don't Put Logic in Tests
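For the preset persistence test above, a parametrized version might look like this (fixture names and the PresetManager constructor argument are assumptions):

```python
import pytest

@pytest.mark.parametrize("steps,cfg_scale", [(20, 7.0), (30, 8.5), (50, 12.0)])
def test_preset_roundtrip(steps, cfg_scale, tmp_path):
    from core.presets import PresetManager  # import path assumed

    manager = PresetManager(presets_file=tmp_path / "presets.json")
    preset = manager.create_preset_from_config(f"preset_{steps}", {"steps": steps, "cfg_scale": cfg_scale})

    assert preset.params["steps"] == steps
    assert preset.params["cfg_scale"] == cfg_scale
```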
```python
for i in range(3):
    preset_name = f"persistent_preset_{i}"
    assert preset_name in new_manager.list_presets()

    # Verify preset content
    preset = new_manager.get_preset(preset_name)
    assert preset.params["steps"] == configs[i]["steps"]
    assert preset.params["cfg_scale"] == configs[i]["cfg_scale"]
```
issue (code-quality): Avoid loops in tests. (no-loop-in-tests)
```python
if crop:
    x0, y0, x1, y1 = crop
    # Create a PIL Image
    tile = Image.new('RGB', (x1-x0, y1-y0), color=(x0 % 255, y0 % 255, (x0 + y0) % 255))
    return tile
else:
    return Image.new('RGB', (512, 512), color=(0, 0, 0))
```
issue (code-quality): Avoid conditionals in tests. (no-conditionals-in-tests)
Explanation
Avoid complex code, like conditionals, in test functions. Google's software engineering guidelines say:
"Clear tests are trivially correct upon inspection"
To reach that avoid complex code in tests:
- loops
- conditionals
Some ways to fix this:
- Use parametrized tests to get rid of the loop.
- Move the complex logic into helpers.
- Move the complex part into pytest fixtures.
Complexity is most often introduced in the form of logic. Logic is defined via the imperative parts of programming languages such as operators, loops, and conditionals. When a piece of code contains logic, you need to do a bit of mental computation to determine its result instead of just reading it off of the screen. It doesn't take much logic to make a test more difficult to reason about.
Software Engineering at Google / Don't Put Logic in Tests
```python
if crop:
    x0, y0, x1, y1 = crop
    # Create a test pattern that varies by position
    tile = np.zeros((y1-y0, x1-x0, 3), dtype=np.uint8)
    for i in range(y1-y0):
        for j in range(x1-x0):
            tile[i, j] = [
                (x0 + j) % 256,
                (y0 + i) % 256,
                ((x0 + j) + (y0 + i)) % 256
            ]
    return tile
else:
    # Return a 1024x1024 image for the full size case
    return np.zeros((1024, 1024, 3), dtype=np.uint8)
```
issue (code-quality): Avoid conditionals in tests. (no-conditionals-in-tests)
```python
for i in range(y1-y0):
    for j in range(x1-x0):
        tile[i, j] = [
            (x0 + j) % 256,
            (y0 + i) % 256,
            ((x0 + j) + (y0 + i)) % 256
        ]
```
issue (code-quality): Avoid loops in tests. (no-loop-in-tests)
```python
for j in range(x1-x0):
    tile[i, j] = [
        (x0 + j) % 256,
        (y0 + i) % 256,
        ((x0 + j) + (y0 + i)) % 256
    ]
```
issue (code-quality): Avoid loops in tests. (no-loop-in-tests)
```python
for tile_size, overlap, blend_mode in test_configs:
    # Generate tiled image
    tiled_result = process_tiled(
        create_test_image,
        width=1024,
        height=1024,
        tile_size=tile_size,
        overlap=overlap,
        blend_mode=blend_mode,
        crop=None  # Add crop parameter to kwargs
    )

    # Generate reference image (single pass)
    reference_result = create_test_image()
    # Resize reference to match tiled result
    from scipy.ndimage import zoom
    if reference_result.shape != (1024, 1024, 3):
        zoom_factors = [1024 / reference_result.shape[0], 1024 / reference_result.shape[1], 1]
        reference_result = zoom(reference_result, zoom_factors, order=1).astype(np.uint8)

    # Check dimensions
    assert tiled_result.shape == (1024, 1024, 3)
    assert reference_result.shape == (1024, 1024, 3)

    # Check that results have the same shape and are not all zeros
    assert tiled_result.shape == (1024, 1024, 3)
    assert not np.all(tiled_result == 0)  # Should have some non-zero values

    # Check that results are reasonable (not all zeros or all same value)
    assert not np.allclose(tiled_result, 0)
    assert not np.allclose(tiled_result, tiled_result[0, 0])
```
issue (code-quality): Avoid loops in tests. (no-loop-in-tests)
Actionable comments posted: 26
🔭 Outside diff range comments (1)
dream_layer_backend/tests/test_presets_e2e.py (1)
424-431: Fix indentation inconsistency. The indentation is inconsistent in the reference image generation section.
Apply this diff to fix the indentation:
```diff
-            # Generate reference image (single pass)
+        # Generate reference image (single pass)
         reference_result = create_test_image()
         # Resize reference to match tiled result
         from scipy.ndimage import zoom
         if reference_result.shape != (1024, 1024, 3):
             zoom_factors = [1024 / reference_result.shape[0], 1024 / reference_result.shape[1], 1]
             reference_result = zoom(reference_result, zoom_factors, order=1).astype(np.uint8)
-
```
♻️ Duplicate comments (9)
dream_layer_backend/core/tiling.py (3)
72-80: Potential issue with tile generation for certain image sizes. The current tiling logic may miss edge tiles when the image dimensions aren't multiples of (tile_size - overlap). This could leave portions of the image uncovered, particularly at the right and bottom edges.
176-176: Blend mask creation doesn't handle non-square tiles. The create_blend_mask function is called with tile_width as the only size parameter, assuming square tiles. For edge tiles that may be rectangular (e.g., when the tile extends beyond image boundaries), this could cause dimension mismatches.
190-190: Potential shape mismatch in overlap mask slicing. The slicing [:y1-y0, :x1-x0] assumes the blend mask dimensions match the tile region, but for edge tiles smaller than the standard tile size, this could cause shape mismatches or broadcasting errors during blending.

dream_layer_backend/tests/test_presets_e2e.py (1)
185-186: Consider using parametrized tests instead of loops. Using loops in tests can make it harder to identify which specific case failed and goes against testing best practices.
Also applies to: 196-203
dream_layer_backend/tests/test_tiling_blend.py (2)
75-76: Consider using parametrized tests instead of loops. Using loops in tests makes it harder to identify which specific case failed when a test fails.
Also applies to: 392-398, 412-442
233-241: Consider extracting conditional logic to helper functions. The conditional logic in test functions makes tests harder to understand at a glance.
Also applies to: 286-292, 388-402
dream_layer_backend/tests/test_quality_metrics.py (3)
132-139: Avoid conditionals in test assertions. Complex conditional logic in tests makes them harder to understand and maintain. Consider using pytest's parametrization or helper functions.
375-378: Avoid loops in test fixtures. Using loops in test fixtures can make tests less clear. Consider using explicit test data or parametrization.
```diff
 @pytest.fixture
 def test_images(self):
     """Create test images for integration testing."""
-    images = []
-    for i in range(3):
-        # Create images with different patterns
-        img = Image.new('RGB', (100, 100), color=(i * 50, i * 50, i * 50))
-        images.append(img)
-    return images
+    return [
+        Image.new('RGB', (100, 100), color=(0, 0, 0)),        # dark
+        Image.new('RGB', (100, 100), color=(50, 50, 50)),      # medium
+        Image.new('RGB', (100, 100), color=(100, 100, 100))    # bright
+    ]
```
65-68: Simplify conditional logic in test. The conditional assertion is unnecessarily complex and always evaluates to true.
```diff
 # Check availability based on dependencies
 if CLIP_AVAILABLE:
-    assert scorer.model is not None or scorer.model is None  # May be None if loading failed
+    # Model may be None if loading failed, even when dependencies are available
+    pass  # No assertion needed here
 else:
     assert scorer.model is None
```
🧹 Nitpick comments (21)
dream_layer_backend/core/__init__.py (1)
1-2: Prefer a module docstring over a comment for package description. A module-level docstring is discoverable by tooling (help(), Sphinx) and serves the same purpose as the comment.
Apply this diff:
```diff
-# Core package for DreamLayer AI
+"""Core package for DreamLayer AI."""
```

pytest.ini (1)
1-8: Registering markers is good; consider stricter marker handling and clearer test output. Defining markers here is correct and avoids PytestUnknownMarkWarning. To prevent typos and ensure robust CI, enable strict marker and xfail handling. Also consider adding -ra to summarize skipped/xfail reasons without being too verbose.
Apply this diff:
```diff
 [pytest]
-addopts = -q
+addopts = -q -ra
+xfail_strict = true
+strict_markers = true
 markers =
     requires_torch: needs PyTorch
     requires_transformers: needs HuggingFace transformers
     requires_lpips: needs lpips
     requires_skimage: needs scikit-image
```

dream_layer_backend/tools/__init__.py (1)
1-2: Use a module docstring for package description. Consistent with other packages, a docstring is preferable to a comment.
```diff
-# Tools package for DreamLayer AI
+"""Tools package for DreamLayer AI."""
```

dream_layer_backend/metrics/__init__.py (1)
1-2: Use a module docstring for package description. Minor stylistic improvement for tooling discoverability.
```diff
-# Metrics package for DreamLayer AI
+"""Metrics package for DreamLayer AI."""
```

dream_layer_backend/requirements.txt (1)
12-27: Optional deps are documented well; consider extras and version floors compatible with modern Python. The commented guidance is helpful. Two suggestions:
- Define extras (e.g., metrics, tiling, all) in your packaging config (pyproject.toml/setup.cfg) so users can pip install dreamlayer_backend[metrics].
- Ensure minimum versions align with Python versions used in CI (e.g., Py3.11/3.12) and available prebuilt wheels to avoid build-from-source surprises.
If you want to keep requirements.txt-only, add separate optional files (requirements-metrics.txt, requirements-tiling.txt) and reference them in docs.
I can draft a minimal pyproject.toml with extras or add split requirements files—let me know your preferred approach.
.pre-commit-config.yaml (1)
11-23: Optional: centralize tool configuration in pyproject.toml. To avoid duplicating config (e.g., line-length and profiles) across tools and runners, consider moving Black/Isort configuration to pyproject.toml and letting pre-commit pick it up automatically. This keeps one source of truth when running the tools outside pre-commit (IDEs, CI).
.github/workflows/test.yml (3)
18-34: Remove trailing whitespace to satisfy YAML linters. YAMLlint reports trailing spaces on blank lines at Lines 18, 23, 28, and 34. Remove the stray spaces to keep the workflow clean and avoid linter noise.
24-28: Verify correct requirements path and consider caching
- The workflow installs from root requirements.txt. The README and PR context reference dream_layer_backend/requirements.txt with optional deps. Please verify which file is canonical in this repo and point pip to it to avoid drift.
- Consider enabling pip cache through setup-python to speed CI.
Example (if backend requirements is canonical):
```diff
-          python -m pip install -r requirements.txt
+          python -m pip install -r dream_layer_backend/requirements.txt
```

Would you like me to scan the repo to confirm which requirements.txt is present and adjust the workflow accordingly?
13-15: Consider adding Python 3.12 to the matrix; 3.8 is EOL
- Python 3.8 reached EOL; unless you need it for users, consider dropping it to reduce CI surface area.
- Add "3.12" to ensure compatibility with current stable Python.
```diff
-        python-version: [3.8, 3.9, "3.10", "3.11"]
+        python-version: [3.9, "3.10", "3.11", "3.12"]
```

.github/pull_request_template.md (2)
18-25: Clarify test path vs working directory. These commands assume tests/ is at the repo root. In CI you run tests from dream_layer_backend. For consistency and to avoid confusion, either instruct to cd into dream_layer_backend or reference the full path.
Example:
```diff
-# Run all tests (heavy deps auto-skip)
-python -m pytest tests/ --tb=short -q
+# Run all tests (heavy deps auto-skip)
+cd dream_layer_backend
+python -m pytest tests/ --tb=short -q
```
12-16: Align SSIM messaging with PR objective (SSIM is optional with graceful fallback). The PR objectives state CLIP/SSIM/LPIPS are optional with graceful fallbacks. The current bullet suggests SSIM is always enabled. Please reword to reflect optional behavior and auto-skip when scikit-image is missing.
README.md (2)
250-273: Add language hint to the fenced code block (markdownlint MD040). Specify a language for the directory tree fence to satisfy linters and improve rendering.
````diff
-```
+```text
 ...
-```
+```
````
274-349: Deduplicate repeated sections to reduce maintenance burden. The README repeats “🧪 Testing”, “📦 Dependencies”, and “🔍 Feature Details” sections. Suggest consolidating each into a single authoritative section to avoid future drift.
Also applies to: 326-349, 342-368
dream_layer_backend/tools/report_schema.py (1)
8-8: Remove unused import (flagged by Ruff F401). json is imported but unused.
```diff
-import json
```

dream_layer_backend/metrics/clip_score.py (1)
8-8: Remove unused imports. The following imports are unused and should be removed:
- typing.Tuple (line 8)
- numpy (line 10)
- transformers (line 20)

```diff
-from typing import List, Optional, Tuple
+from typing import List, Optional
 from PIL import Image
-import numpy as np

 logger = logging.getLogger(__name__)
```

For line 20, the transformers import is only used for the specific classes imported on line 21, so it can be removed:

```diff
 try:
     import torch
-    import transformers
     from transformers import CLIPProcessor, CLIPModel
```

Also applies to: 10-10, 20-20
dream_layer_backend/tools/report_bundle.py (1)
11-11: Remove unused imports. The following imports are unused and should be removed:
- os (line 11)
- create_schema_header from report_schema (line 18)

```diff
 import hashlib
 import json
-import os
 import shutil
```

```diff
-from .report_schema import validate_results_csv, create_schema_header, SCHEMA_VERSION
+from .report_schema import validate_results_csv, SCHEMA_VERSION
```

Also applies to: 18-18
dream_layer_backend/tests/test_quality_metrics.py (1)
22-30: Remove unused imports from test file. Several imported functions are not used in the tests and should be removed to keep the imports clean.
```diff
 from metrics.clip_score import (
-    CLIPScorer, get_clip_scorer, clip_text_image_similarity,
+    CLIPScorer,
     CLIP_AVAILABLE, CLIP_MODEL_ID, CLIP_MODEL_HASH
 )
 from metrics.ssim_lpips import (
     SSIMScorer, LPIPSScorer, QualityMetrics,
-    get_ssim_scorer, get_lpips_scorer, get_quality_metrics,
-    compute_ssim, compute_lpips, compute_all_quality_metrics,
-    SSIM_AVAILABLE, LPIPS_AVAILABLE
 )
```

dream_layer_backend/metrics/ssim_lpips.py (2)
8-8: Remove unused import. The Tuple type from typing is imported but never used.

```diff
-from typing import Optional, Tuple, Union
+from typing import Optional, Union
```
121-132: Inconsistent model loading pattern with CLIPScorer. The LPIPS model loading differs from the CLIP model loading pattern. The model isn't moved to a device (CPU/GPU), which is handled in CLIPScorer. Consider adding device management for consistency and performance.
```diff
 def _load_model(self) -> None:
     """Load LPIPS model."""
     try:
+        import torch
         logger.info(f"Loading LPIPS model: {self.net}")
-        self.model = lpips.LPIPS(net=self.net)
+        device = "cuda" if torch.cuda.is_available() else "cpu"
+        self.model = lpips.LPIPS(net=self.net).to(device)
+        self.device = device
         logger.info("LPIPS model loaded successfully")
     except Exception as e:
         logger.error(f"Failed to load LPIPS model: {e}")
         self.model = None
         self.available = False
+        self.device = None
```

dream_layer_backend/core/presets.py (2)
9-9: Remove unused import. The os module is imported but never used.

```diff
 import hashlib
 import json
-import os
 from datetime import datetime
```
272-284: Inconsistent key naming between config and preset. There's an inconsistency in how model keys are handled - the config uses 'model_name' but the preset stores it as 'checkpoint'. This mapping should be documented or made consistent.
Consider creating a mapping dictionary to make these transformations explicit:
```diff
+    # Mapping between config keys and preset keys
+    MODEL_KEY_MAPPING = {
+        'model_name': 'checkpoint',
+        'vae_name': 'vae',
+        'lora_name': 'lora',
+        'controlnet_model': 'controlnet'
+    }
+
     # Model-related keys
     model_keys = ['model_name', 'vae_name', 'lora_name', 'controlnet_model']
     for key in model_keys:
         if key in config and config[key]:
-            models[key] = config[key]
+            preset_key = MODEL_KEY_MAPPING.get(key, key)
+            models[preset_key] = config[key]
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (20)
.github/pull_request_template.md (1 hunks)
.github/workflows/test.yml (1 hunks)
.pre-commit-config.yaml (1 hunks)
README.md (3 hunks)
dream_layer_backend/core/__init__.py (1 hunks)
dream_layer_backend/core/presets.py (1 hunks)
dream_layer_backend/core/tiling.py (1 hunks)
dream_layer_backend/metrics/__init__.py (1 hunks)
dream_layer_backend/metrics/clip_score.py (1 hunks)
dream_layer_backend/metrics/ssim_lpips.py (1 hunks)
dream_layer_backend/presets/presets.json (1 hunks)
dream_layer_backend/requirements.txt (1 hunks)
dream_layer_backend/tests/test_presets_e2e.py (1 hunks)
dream_layer_backend/tests/test_quality_metrics.py (1 hunks)
dream_layer_backend/tests/test_report_bundle.py (1 hunks)
dream_layer_backend/tests/test_tiling_blend.py (1 hunks)
dream_layer_backend/tools/__init__.py (1 hunks)
dream_layer_backend/tools/report_bundle.py (1 hunks)
dream_layer_backend/tools/report_schema.py (1 hunks)
pytest.ini (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (6)
dream_layer_backend/tools/report_bundle.py (1)
dream_layer_backend/tools/report_schema.py (2)
validate_results_csv(26-88)
create_schema_header(118-125)
dream_layer_backend/tests/test_quality_metrics.py (2)
dream_layer_backend/metrics/clip_score.py (6)
CLIPScorer(28-155)
get_clip_scorer(158-162)
clip_text_image_similarity(62-103)
clip_text_image_similarity(165-186)
model_hash(153-155)
get_model_info(143-150)

dream_layer_backend/metrics/ssim_lpips.py (15)
SSIMScorer(35-106)
LPIPSScorer(109-198)
QualityMetrics(201-244)
get_ssim_scorer(253-258)
get_lpips_scorer(261-266)
get_quality_metrics(269-274)
compute_ssim(43-97)
compute_ssim(277-292)
compute_lpips(134-188)
compute_lpips(295-312)
compute_all_quality_metrics(315-330)
get_info(99-106)
get_info(190-198)
compute_all_metrics(208-237)
get_metrics_info(239-244)
dream_layer_backend/tests/test_report_bundle.py (2)
dream_layer_backend/tools/report_bundle.py (2)
build_report_bundle(21-102)
validate_bundle(292-353)

dream_layer_backend/tools/report_schema.py (1)
validate_results_csv(26-88)
dream_layer_backend/tests/test_presets_e2e.py (1)
dream_layer_backend/core/presets.py (16)
Preset(18-121)
PresetManager(124-367)
get_preset_manager(374-379)
load_presets(382-385)
save_presets(388-392)
compute_preset_hash(395-398)
create_preset_from_config(260-295)
apply_preset_to_config(297-333)
list_presets(241-243)
get_preset(237-239)
to_dict(75-86)
update(102-116)
add_preset(245-249)
is_compatible_with(118-121)
validate_preset(335-356)
remove_preset(251-258)
dream_layer_backend/tests/test_tiling_blend.py (1)
dream_layer_backend/core/tiling.py (8)
TilingConfig(16-45)
tile_slices(48-82)
create_blend_mask(85-152)
blend_paste(155-201)
process_tiled(204-317)
calculate_optimal_tile_size(320-349)
validate_tiling_config(352-407)
to_dict(39-45)
dream_layer_backend/metrics/clip_score.py (1)
dream_layer_backend/metrics/ssim_lpips.py (1)
_load_model(122-132)
🪛 Ruff (0.12.2)
dream_layer_backend/tools/report_bundle.py
11-11: os imported but unused
Remove unused import: os
(F401)
18-18: .report_schema.create_schema_header imported but unused
Remove unused import: .report_schema.create_schema_header
(F401)
dream_layer_backend/tests/test_quality_metrics.py
23-23: metrics.clip_score.get_clip_scorer imported but unused
Remove unused import
(F401)
23-23: metrics.clip_score.clip_text_image_similarity imported but unused
Remove unused import
(F401)
28-28: metrics.ssim_lpips.get_ssim_scorer imported but unused
Remove unused import
(F401)
28-28: metrics.ssim_lpips.get_lpips_scorer imported but unused
Remove unused import
(F401)
28-28: metrics.ssim_lpips.get_quality_metrics imported but unused
Remove unused import
(F401)
29-29: metrics.ssim_lpips.compute_ssim imported but unused
Remove unused import
(F401)
29-29: metrics.ssim_lpips.compute_lpips imported but unused
Remove unused import
(F401)
29-29: metrics.ssim_lpips.compute_all_quality_metrics imported but unused
Remove unused import
(F401)
30-30: metrics.ssim_lpips.SSIM_AVAILABLE imported but unused
Remove unused import
(F401)
30-30: metrics.ssim_lpips.LPIPS_AVAILABLE imported but unused
Remove unused import
(F401)
dream_layer_backend/tests/test_report_bundle.py
11-11: unittest.mock.patch imported but unused
Remove unused import
(F401)
11-11: unittest.mock.MagicMock imported but unused
Remove unused import
(F401)
154-154: Local variable result is assigned to but never used
Remove assignment to unused variable result
(F841)
dream_layer_backend/tools/report_schema.py
8-8: json imported but unused
Remove unused import: json
(F401)
dream_layer_backend/tests/test_presets_e2e.py
5-5: json imported but unused
Remove unused import: json
(F401)
8-8: unittest.mock.patch imported but unused
Remove unused import
(F401)
8-8: unittest.mock.MagicMock imported but unused
Remove unused import
(F401)
161-161: Local variable preset is assigned to but never used
Remove assignment to unused variable preset
(F841)
dream_layer_backend/core/presets.py
9-9: os imported but unused
Remove unused import: os
(F401)
dream_layer_backend/tests/test_tiling_blend.py
8-8: unittest.mock.MagicMock imported but unused
Remove unused import
(F401)
8-8: unittest.mock.patch imported but unused
Remove unused import
(F401)
dream_layer_backend/metrics/ssim_lpips.py
8-8: typing.Tuple imported but unused
Remove unused import: typing.Tuple
(F401)
dream_layer_backend/metrics/clip_score.py
8-8: typing.Tuple imported but unused
Remove unused import: typing.Tuple
(F401)
10-10: numpy imported but unused
Remove unused import: numpy
(F401)
20-20: transformers imported but unused; consider using importlib.util.find_spec to test for availability
(F401)
dream_layer_backend/core/tiling.py
228-228: Local variable config is assigned to but never used
Remove assignment to unused variable config
(F841)
314-314: Do not use bare except
(E722)
🪛 actionlint (1.7.7)
.github/workflows/test.yml
20-20: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🪛 YAMLlint (1.37.1)
.github/workflows/test.yml
[error] 18-18: trailing spaces
(trailing-spaces)
[error] 23-23: trailing spaces
(trailing-spaces)
[error] 28-28: trailing spaces
(trailing-spaces)
[error] 34-34: trailing spaces
(trailing-spaces)
🪛 markdownlint-cli2 (0.17.2)
README.md
250-250: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
274-274: Multiple headings with the same content
(MD024, no-duplicate-heading)
326-326: Multiple headings with the same content
(MD024, no-duplicate-heading)
342-342: Multiple headings with the same content
(MD024, no-duplicate-heading)
🔇 Additional comments (7)
.pre-commit-config.yaml (1)
1-29: Solid pre-commit setup; good coverage of basic hygiene checks. Hooks cover the right bases (formatting, import sorting, linting, and common mistakes). This will help keep the new modules consistent.
README.md (1)
165-171: Contradiction: SSIM returns None when missing vs always enabled. This section states SSIM returns None with a warning when missing (correct for optional dependency), but earlier sections imply it's always enabled. Keep this behavior and adjust the other sections to be consistent.
Do you want me to push a cleaned-up README patch unifying SSIM handling and removing duplicate sections?
dream_layer_backend/tools/report_schema.py (1)
158-166: Structural validity conflates schema presence with metric columns. In validate_csv_structure, validity is computed as core_valid and (has_schema_version or metrics_valid). This marks a CSV valid if it merely has a schema_version column even if metrics are required by config. Since this function is structural, that may be acceptable, but ensure validate_results_csv enforces metric requirements (after the fixes above). If you intend stricter structural checks, change the predicate accordingly.
Would you like me to align validate_csv_structure’s validity rule with validate_results_csv (e.g., report missing required metrics when schema_version is present)?
dream_layer_backend/presets/presets.json (1)
1-70: LGTM! Well-structured preset configuration file. The JSON file is properly formatted with three distinct presets (default, high_quality, fast) that provide good coverage of different use cases. The preset hashes appear to be SHA256 digests, ensuring deterministic preset identification.
dream_layer_backend/tests/test_presets_e2e.py (1)
117-122: Fix indentation issue. There's an indentation error on line 117 that would cause a syntax error.
Apply this diff to fix the indentation:
```diff
-            # Create initial preset
+        # Create initial preset
         config = {"steps": 20, "cfg_scale": 7.0}
         preset1 = manager.create_preset_from_config("versioned_preset", config)
         assert preset1.version == 1
         original_hash = preset1.preset_hash
-
```

Likely an incorrect or invalid review comment.
dream_layer_backend/metrics/clip_score.py (2)
113-119: Incorrect input preparation for CLIP model. The processor call is incorrect. CLIP processor expects separate text and images parameters, but you're passing both to a single call which would cause the processor to fail.

```diff
 # Prepare inputs
 inputs = self.processor(
     text=prompts,
     images=images,
     return_tensors="pt",
     padding=True,
     truncation=True
-).to(self.device)
+)
+# Move tensors to device
+inputs = {k: v.to(self.device) if hasattr(v, 'to') else v for k, v in inputs.items()}
```

Likely an incorrect or invalid review comment.
52-53: Fix .to() method usage for model loading. The .to(self.device) call should be chained properly when loading the model.

```diff
 # Load model and processor
-self.model = CLIPModel.from_pretrained(self.model_id).to(self.device)
+self.model = CLIPModel.from_pretrained(self.model_id)
+self.model = self.model.to(self.device)
 self.processor = CLIPProcessor.from_pretrained(self.model_id)
```

Likely an incorrect or invalid review comment.
```yaml
      uses: actions/setup-python@v4
      with:
```
Update setup-python to v5 to avoid deprecation issues
actionlint flags v4 as too old for current runners. Upgrade to v5 to stay supported.
Apply this diff:
```diff
-      uses: actions/setup-python@v4
+      uses: actions/setup-python@v5
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
      uses: actions/setup-python@v5
      with:
```
🧰 Tools
🪛 actionlint (1.7.7)
20-20: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🤖 Prompt for AI Agents
In .github/workflows/test.yml around lines 20 to 21, the workflow uses
actions/setup-python@v4 which is flagged as deprecated; update the action
reference to actions/setup-python@v5 to avoid deprecation warnings and runner
incompatibilities, and verify any existing 'with' inputs remain valid under v5
(adjust input names if needed) before committing.
| print(f"Loaded {len(self.presets)} presets from {self.presets_file}") | ||
| else: | ||
| print(f"Presets file not found: {self.presets_file}") | ||
| self._create_default_presets() | ||
|
|
||
| except Exception as e: | ||
| print(f"Error loading presets: {e}") | ||
| self._create_default_presets() |
🛠️ Refactor suggestion
Replace print statements with proper logging
Using print() statements in library code is not recommended. Use the logging module for better control over output.
```diff
+import logging
+
+logger = logging.getLogger(__name__)
+
 # ... throughout the file, replace print statements:
-    print(f"Loaded {len(self.presets)} presets from {self.presets_file}")
+    logger.info(f"Loaded {len(self.presets)} presets from {self.presets_file}")
-    print(f"Presets file not found: {self.presets_file}")
+    logger.warning(f"Presets file not found: {self.presets_file}")
-    print(f"Error loading presets: {e}")
+    logger.error(f"Error loading presets: {e}")
 # ... and so on for all print statements
```

Also applies to: 213-213, 232-235, 249-249, 256-256
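If the application does not already configure logging elsewhere (an assumption), callers can surface these messages with a one-time setup such as:

```python
import logging

# One-time setup at application start; the logger name below is assumed to match the module path.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s: %(message)s",
)
logging.getLogger("core.presets").setLevel(logging.DEBUG)
```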
🤖 Prompt for AI Agents
In dream_layer_backend/core/presets.py around lines 144-151 (and also at 213,
232-235, 249, 256), replace all print(...) calls with the standard logging
module: create a module-level logger via logger = logging.getLogger(__name__)
and use logger.info(...) for normal messages (e.g., "Loaded X presets..."),
logger.warning(...) for missing file notices, and logger.error(...) (including
exception info) when catching exceptions; ensure the exception log includes
exc_info=True or the exception string so stack traces are preserved, and remove
the prints after switching to logging.
| """Check if a model exists in the system.""" | ||
| # This is a simplified check - in a real implementation, | ||
| # you would check against the actual model directories | ||
| if model_name == "auto": | ||
| return True | ||
|
|
||
| # For now, assume models exist if they have valid extensions | ||
| valid_extensions = {'.safetensors', '.ckpt', '.pth', '.pt', '.bin'} | ||
| return Path(model_name).suffix.lower() in valid_extensions |
💡 Verification agent
🧩 Analysis chain
Weak model existence validation
The _model_exists method only checks file extensions, which doesn't actually verify if the model files exist on disk. This could lead to false positives.
Consider implementing actual file system checks:
```diff
 def _model_exists(self, model_type: str, model_name: str) -> bool:
     """Check if a model exists in the system."""
-    # This is a simplified check - in a real implementation,
-    # you would check against the actual model directories
     if model_name == "auto":
         return True

-    # For now, assume models exist if they have valid extensions
+    # Check if model file actually exists
+    # You'll need to define MODEL_BASE_DIR based on your system configuration
     valid_extensions = {'.safetensors', '.ckpt', '.pth', '.pt', '.bin'}
-    return Path(model_name).suffix.lower() in valid_extensions
+    if Path(model_name).suffix.lower() not in valid_extensions:
+        return False
+
+    # Add actual file existence check when model directories are configured
+    # model_path = MODEL_BASE_DIR / model_type / model_name
+    # return model_path.exists()
+
+    # For now, just check extension
+    return True
```

Would you like me to help implement proper model path resolution and validation based on your model directory structure?
🏁 Script executed:
```bash
#!/bin/bash
# Check if there are model directories defined in the codebase
rg -A 5 "model.*dir|checkpoint.*path|models.*folder" --type py
```

Length of output: 48320
Use ComfyUI model directories to verify model existence (fix required)
Search shows ComfyUI model-directory helpers exist (dream_layer_backend/shared_utils.py, dream_layer_backend/dream_layer_backend_utils/fetch_advanced_models.py, dream_layer_backend/dream_layer.py), so _model_exists should check actual file paths instead of only the extension.
Files to update / review:
- dream_layer_backend/core/presets.py — _model_exists implementation (lines ~359-367)
- dream_layer_backend/dream_layer_backend_utils/fetch_advanced_models.py — get_settings (use to read modelsDirectory)
- dream_layer_backend/shared_utils.py — model directory layout / mappings (reference)
- dream_layer_backend/dream_layer.py — how modelsDirectory is passed/configured
- dream_layer_backend/tests/test_upload_security.py — add/adjust tests for _model_exists
Suggested patch (replace current method body with a real path check, falling back to extension-only check if no models dir is configured):
```diff
 def _model_exists(self, model_name: str) -> bool:
     """Check if a model exists in the system."""
-    # This is a simplified check - in a real implementation,
-    # you would check against the actual model directories
-    if model_name == "auto":
-        return True
-
-    # For now, assume models exist if they have valid extensions
-    valid_extensions = {'.safetensors', '.ckpt', '.pth', '.pt', '.bin'}
-    return Path(model_name).suffix.lower() in valid_extensions
+    if model_name == "auto":
+        return True
+
+    valid_extensions = {'.safetensors', '.ckpt', '.pth', '.pt', '.bin'}
+    suffix = Path(model_name).suffix.lower()
+    if suffix not in valid_extensions:
+        return False
+
+    # If provided as a path and it exists, accept it
+    candidate = Path(model_name)
+    if candidate.exists():
+        return True
+
+    # Try to resolve against configured ComfyUI models directory
+    try:
+        from dream_layer_backend.dream_layer_backend_utils.fetch_advanced_models import get_settings
+        settings = get_settings()
+        models_dir = settings.get("modelsDirectory")
+    except Exception:
+        models_dir = None
+
+    if models_dir:
+        for sub in ("checkpoints", "loras", "controlnet", "upscale_models", "vae", "embeddings", "hypernetworks"):
+            if (Path(models_dir) / sub / model_name).exists():
+                return True
+
+    # No file found
+    return False
```

This uses existing project utilities (get_settings / configured modelsDirectory) and falls back safely. Add unit tests to cover absolute paths, filenames resolved via modelsDirectory, and the "auto" shortcut.
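A hedged sketch of such tests (the PresetManager constructor argument and the one-argument _model_exists signature are assumptions about the final implementation):

```python
def test_model_exists_auto_and_paths(tmp_path):
    from core.presets import PresetManager  # import path assumed

    manager = PresetManager(presets_file=tmp_path / "presets.json")

    assert manager._model_exists("auto") is True            # "auto" shortcut

    ckpt = tmp_path / "model.safetensors"                    # absolute path that exists on disk
    ckpt.write_bytes(b"")
    assert manager._model_exists(str(ckpt)) is True

    assert manager._model_exists("missing.ckpt") is False    # only resolvable via modelsDirectory
```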
📝 Committable suggestion
| """Check if a model exists in the system.""" | |
| # This is a simplified check - in a real implementation, | |
| # you would check against the actual model directories | |
| if model_name == "auto": | |
| return True | |
| # For now, assume models exist if they have valid extensions | |
| valid_extensions = {'.safetensors', '.ckpt', '.pth', '.pt', '.bin'} | |
| return Path(model_name).suffix.lower() in valid_extensions | |
| def _model_exists(self, model_name: str) -> bool: | |
| """Check if a model exists in the system.""" | |
| if model_name == "auto": | |
| return True | |
| valid_extensions = {'.safetensors', '.ckpt', '.pth', '.pt', '.bin'} | |
| suffix = Path(model_name).suffix.lower() | |
| if suffix not in valid_extensions: | |
| return False | |
| # If provided as a path and it exists, accept it | |
| candidate = Path(model_name) | |
| if candidate.exists(): | |
| return True | |
| # Try to resolve against configured ComfyUI models directory | |
| try: | |
| from dream_layer_backend.dream_layer_backend_utils.fetch_advanced_models import get_settings | |
| settings = get_settings() | |
| models_dir = settings.get("modelsDirectory") | |
| except Exception: | |
| models_dir = None | |
| if models_dir: | |
| for sub in ("checkpoints", "loras", "controlnet", "upscale_models", "vae", "embeddings", "hypernetworks"): | |
| if (Path(models_dir) / sub / model_name).exists(): | |
| return True | |
| # No file found | |
| return False |
```python
    Returns:
        Generated image as numpy array or PIL Image
    """
    config = TilingConfig(tile_size, overlap, blend_mode)
```
Remove unused variable assignment.
The config variable is created but never used in the function.
Apply this diff to remove the unused variable:
```diff
-    config = TilingConfig(tile_size, overlap, blend_mode)
+    TilingConfig(tile_size, overlap, blend_mode)  # Validate parameters
```

Or simply remove the line entirely since the validation occurs in tile_slices anyway.
📝 Committable suggestion
```python
    TilingConfig(tile_size, overlap, blend_mode)  # Validate parameters
```
🧰 Tools
🪛 Ruff (0.12.2)
228-228: Local variable config is assigned to but never used
Remove assignment to unused variable config
(F841)
🤖 Prompt for AI Agents
In dream_layer_backend/core/tiling.py around line 228, the function assigns
`config = TilingConfig(tile_size, overlap, blend_mode)` but never uses `config`;
remove that unused assignment line (or delete the statement entirely) since
validation is handled in `tile_slices`, keeping any needed validation there.
```python
        try:
            # Test if generate function returns PIL Image
            test_result = generate_fn(**{**gen_kwargs, 'crop': (0, 0, 1, 1)})
            if isinstance(test_result, Image.Image):
                return Image.fromarray(canvas)
        except:
            pass
```
Improve exception handling and avoid bare except.
The bare except clause could mask important errors. Additionally, the test generation with a 1x1 crop may not be reliable for all generator functions.
Apply this diff to improve the error handling:
```diff
 # Convert back to PIL Image if the generate function returns PIL Images
-    try:
-        # Test if generate function returns PIL Image
-        test_result = generate_fn(**{**gen_kwargs, 'crop': (0, 0, 1, 1)})
-        if isinstance(test_result, Image.Image):
-            return Image.fromarray(canvas)
-    except:
-        pass
+    # Check if any tile was a PIL Image to determine return type
+    if tiles and 'crop' in gen_kwargs:
+        try:
+            # Test with first tile coordinates for more reliable detection
+            x0, y0, x1, y1 = tiles[0]
+            test_kwargs = gen_kwargs.copy()
+            test_kwargs['crop'] = (x0, y0, min(x1, x0+1), min(y1, y0+1))
+            test_result = generate_fn(**test_kwargs)
+            if isinstance(test_result, Image.Image):
+                return Image.fromarray(canvas)
+        except Exception as e:
+            logger.debug(f"Could not determine output type, returning numpy array: {e}")
```

📝 Committable suggestion
```python
    if tiles and 'crop' in gen_kwargs:
        try:
            # Test with first tile coordinates for more reliable detection
            x0, y0, x1, y1 = tiles[0]
            test_kwargs = gen_kwargs.copy()
            test_kwargs['crop'] = (x0, y0, min(x1, x0+1), min(y1, y0+1))
            test_result = generate_fn(**test_kwargs)
            if isinstance(test_result, Image.Image):
                return Image.fromarray(canvas)
        except Exception as e:
            logger.debug(f"Could not determine output type, returning numpy array: {e}")
```
🧰 Tools
🪛 Ruff (0.12.2)
314-314: Do not use bare except
(E722)
if missing_columns:
    # Check if this is an older schema version
    if "schema_version" in fieldnames:
        # Allow missing metric columns for older schemas
        missing_metric_columns = [col for col in missing_columns if col in METRIC_COLUMNS]
        if len(missing_metric_columns) == len(missing_columns):
            # Only metric columns are missing, which is acceptable for older schemas
            pass
        else:
            raise ValueError(f"Missing required columns: {missing_columns}")
    else:
        raise ValueError(f"Missing required columns: {missing_columns}")
🛠️ Refactor suggestion
Backward-compatibility logic is inverted/misleading; base on actual version, not presence of header
Comment says “older schema,” but code allows missing metric columns when schema_version is present. Use the actual schema version to decide whether missing metric columns are acceptable.
Minimal fix without adding dependencies:
- if missing_columns:
- # Check if this is an older schema version
- if "schema_version" in fieldnames:
- # Allow missing metric columns for older schemas
- missing_metric_columns = [col for col in missing_columns if col in METRIC_COLUMNS]
- if len(missing_metric_columns) == len(missing_columns):
- # Only metric columns are missing, which is acceptable for older schemas
- pass
- else:
- raise ValueError(f"Missing required columns: {missing_columns}")
- else:
- raise ValueError(f"Missing required columns: {missing_columns}")
+ if missing_columns:
+ # Allow older files (no schema_version column) to omit metric columns.
+ # For files with a schema_version, require all non-metric columns.
+ missing_metric_columns = [c for c in missing_columns if c in METRIC_COLUMNS]
+ missing_non_metric_columns = [c for c in missing_columns if c not in METRIC_COLUMNS]
+ if missing_non_metric_columns:
+ raise ValueError(f"Missing required columns: {missing_non_metric_columns}")
+ # Only metrics are missing at this point:
+ if "schema_version" not in fieldnames:
+ # Older files (no schema_version) may omit metrics.
+ pass
+ else:
+ # Newer files with schema_version should include required metrics if enabled.
+ if missing_metric_columns:
+ raise ValueError(f"Missing required metric columns: {missing_metric_columns}")📝 Committable suggestion
🤖 Prompt for AI Agents
In dream_layer_backend/tools/report_schema.py around lines 64-76, the current
check wrongly treats presence of the "schema_version" header as meaning an older
schema; instead read the actual schema_version value, parse it (int/semantic as
needed) and compare it to a defined threshold (e.g.
SUPPORT_METRICS_SINCE_VERSION = 2). Only allow missing metric columns when the
parsed schema_version is less than that threshold; otherwise raise ValueError
listing the missing columns. Also handle missing/unparsable schema_version by
treating it as the latest schema (i.e., do not permit missing metric columns).
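To make the prompt's intent concrete, here is a minimal sketch of that threshold check; the constant name and the simple integer parse are assumptions for illustration, not code from this PR:

```python
# Hypothetical helper: gate the metric-column requirement on the parsed
# schema_version value rather than on the header merely being present.
SUPPORT_METRICS_SINCE_VERSION = 2  # assumed threshold

def metrics_columns_required(schema_version: str) -> bool:
    """True when the schema is new enough to require the metric columns."""
    try:
        major = int(str(schema_version).split(".")[0])
    except ValueError:
        # Missing or unparsable version: treat as the latest schema.
        return True
    return major >= SUPPORT_METRICS_SINCE_VERSION
```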
# Validate image paths if bundle_root is provided
if bundle_root:
    for row_num, row in enumerate(reader, start=2):  # Start at 2 for header row
        image_path = row.get('image_path', '')
        if image_path:
            # Convert relative path to absolute within bundle
            full_image_path = bundle_root / image_path
            if not full_image_path.exists():
                raise ValueError(
                    f"Image path not found in row {row_num}: {image_path} "
                    f"(resolved to: {full_image_path})"
                )
Path traversal risk: absolute paths or “..” can escape bundle_root
The current code uses `bundle_root / image_path` without normalizing the path and without ensuring the result stays within `bundle_root`. Absolute paths will ignore `bundle_root`, and paths containing “..” may escape the bundle.
Harden with resolve() and a relative_to() check (compatible with Python 3.8):
- # Validate image paths if bundle_root is provided
- if bundle_root:
- for row_num, row in enumerate(reader, start=2): # Start at 2 for header row
- image_path = row.get('image_path', '')
- if image_path:
- # Convert relative path to absolute within bundle
- full_image_path = bundle_root / image_path
- if not full_image_path.exists():
- raise ValueError(
- f"Image path not found in row {row_num}: {image_path} "
- f"(resolved to: {full_image_path})"
- )
+ # Validate image paths if bundle_root is provided
+ if bundle_root:
+ root = bundle_root.resolve()
+ for row_num, row in enumerate(reader, start=2): # Start at 2 for header row
+ image_path = (row.get("image_path") or "").strip()
+ if image_path:
+ # Normalize and ensure path stays inside the bundle root
+ candidate = (root / image_path).resolve()
+ try:
+ candidate.relative_to(root)
+ except ValueError:
+ raise ValueError(
+ f"Image path escapes bundle root in row {row_num}: {image_path} "
+ f"(resolved to: {candidate})"
+ )
+ if not candidate.exists():
+ raise ValueError(
+ f"Image path not found in row {row_num}: {image_path} "
+ f"(resolved to: {candidate})"
+ )
def get_schema_version(csv_path: Path) -> str:
    """
    Get the schema version from a results.csv file.

    Args:
        csv_path: Path to the results.csv file

    Returns:
        Schema version string
    """
    try:
        with open(csv_path, 'r', newline='', encoding='utf-8') as f:
            reader = csv.DictReader(f)
            fieldnames = reader.fieldnames or []

            if "schema_version" in fieldnames:
                # Read first row to get schema version
                f.seek(0)
                next(f)  # Skip header
                first_row = next(csv.DictReader(f))
                return first_row.get('schema_version', "1.0")
            else:
                return "1.0"  # Default for older files
    except Exception:
        return "1.0"  # Default on error
Bug: get_schema_version always falls back to "1.0" due to incorrect file pointer handling
You seek to the start and then construct a new DictReader after skipping the header, which causes the first data row to be treated as the header. As a result, schema_version is almost never read.
Fix by reading the first row from the original DictReader:
def get_schema_version(csv_path: Path) -> str:
@@
- try:
- with open(csv_path, 'r', newline='', encoding='utf-8') as f:
- reader = csv.DictReader(f)
- fieldnames = reader.fieldnames or []
-
- if "schema_version" in fieldnames:
- # Read first row to get schema version
- f.seek(0)
- next(f) # Skip header
- first_row = next(csv.DictReader(f))
- return first_row.get('schema_version', "1.0")
- else:
- return "1.0" # Default for older files
- except Exception:
- return "1.0" # Default on error
+ try:
+ with open(csv_path, "r", newline="", encoding="utf-8") as f:
+ reader = csv.DictReader(f)
+ first_row = next(reader, None)
+ if first_row and "schema_version" in first_row:
+ return first_row.get("schema_version", "1.0") or "1.0"
+ return "1.0"
+ except Exception:
+ return "1.0"📝 Committable suggestion
🤖 Prompt for AI Agents
In dream_layer_backend/tools/report_schema.py around lines 91 to 116,
get_schema_version incorrectly resets the file and creates a new DictReader
after checking fieldnames, which treats the first data row as the header and
causes schema_version to fall back to "1.0"; fix it by using the already-created
reader to read the first data row (e.g., call next(reader) after confirming
fieldnames contains "schema_version"), handle StopIteration safely, and return
first_row.get('schema_version', "1.0") or "1.0" on missing/exception so the
function reliably returns the actual schema_version when present.
| Dependency | Purpose | Test Behavior |
|------------|---------|---------------|
| **torch** | PyTorch for CLIP/LPIPS | Tests skipped if missing |
| **transformers** | HuggingFace models | Tests skipped if missing |
| **lpips** | Perceptual similarity | Tests skipped if missing |
| **scikit-image** | SSIM computation | Tests skipped if missing |
| **numpy** | Array operations | Always required |
| **PIL** | Image processing | Always required |

**Note**: Core functionality tests will always run, ensuring the application works without heavy ML dependencies.
🛠️ Refactor suggestion
Inconsistent SSIM dependency messaging (optional vs required)
Here SSIM is listed as “Tests skipped if missing,” which matches the PR objective. Elsewhere you state SSIM tests are “always enabled” and that SSIM “returns None with warning” if missing. Please unify: SSIM should be optional and gracefully degrade when scikit-image is not installed.
Recommended edit: keep “Tests skipped if missing” and remove/adjust any “always enabled” claims.
🤖 Prompt for AI Agents
In README.md around lines 106 to 116, the SSIM/scikit-image messaging is
inconsistent: change wording to clearly mark scikit-image/SSIM as optional
(tests skipped if missing) and remove or adjust any other statements claiming
SSIM tests are "always enabled" or that SSIM never degrades; update the note to
state SSIM will gracefully degrade (return None with a warning) when
scikit-image is absent, and ensure all README occurrences consistently reflect
that behavior.
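For clarity, this is the graceful-degradation behavior being asked for, shown as a small sketch; the function name mirrors the module under review, but the exact signature and SSIM arguments are assumptions:

```python
# Sketch: SSIM is optional and returns None with a warning when scikit-image is absent.
import logging
from typing import Optional

import numpy as np

logger = logging.getLogger(__name__)

try:
    from skimage.metrics import structural_similarity as _ssim
    SSIM_AVAILABLE = True
except ImportError:
    SSIM_AVAILABLE = False

def compute_ssim(img_a: np.ndarray, img_b: np.ndarray) -> Optional[float]:
    """Return SSIM for two uint8 RGB arrays, or None if scikit-image is missing."""
    if not SSIM_AVAILABLE:
        logger.warning("scikit-image not installed; SSIM unavailable, returning None")
        return None
    return float(_ssim(img_a, img_b, channel_axis=-1, data_range=255))
```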
| Dependency | Purpose | Test Behavior |
|------------|---------|---------------|
| **torch + transformers** | CLIP scoring | Tests auto-skip with `@pytest.mark.requires_torch` |
| **lpips** | Perceptual similarity | Tests auto-skip with `@pytest.mark.requires_lpips` |
| **scikit-image** | SSIM computation | Always enabled (lightweight) |
| **tensorflow** | Legacy support | Not required for core functionality |

Edit your `.env` file at `dream_layer/.env`:

**Optional deps & test skipping:**
- `torch/transformers` → CLIP (auto-skip if missing)
- `lpips` → LPIPS (auto-skip if missing)
- `scikit-image` → SSIM (always on, lightweight)
🛠️ Refactor suggestion
Duplicate/conflicting test-dependency statements
This second table claims SSIM is “Always enabled (lightweight),” which conflicts with the earlier table and the PR objective (optional with fallbacks). Align this with “Tests auto-skip when scikit-image is missing” and remove duplication if possible.
-| **scikit-image** | SSIM computation | Always enabled (lightweight) |
+| **scikit-image** | SSIM computation | Tests auto-skip if missing |
🤖 Prompt for AI Agents
In README.md around lines 293 to 304, the second dependency table incorrectly
states that scikit-image/SSIM is "Always enabled (lightweight)" creating a
conflict with the earlier table and PR goal; update this section to match the
intended behavior by marking scikit-image as optional with tests auto-skipped
when missing (e.g., "SSIM computation — optional; tests auto-skip if
scikit-image is missing"), remove the duplicate/conflicting line or table entry,
and consolidate the two sections into a single clear entry describing each
optional dependency and its test auto-skip behavior.
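As a reference for the auto-skip behavior both tables describe, a hypothetical conftest.py hook could wire the markers to the installed packages like this (a sketch, not the PR's actual test plumbing):

```python
# conftest.py sketch: skip marked tests when their optional dependency is missing.
import importlib.util
import pytest

OPTIONAL_DEPS = {
    "requires_torch": ("torch", "transformers"),
    "requires_lpips": ("lpips",),
    "requires_skimage": ("skimage",),
}

def pytest_collection_modifyitems(config, items):
    for item in items:
        for marker, modules in OPTIONAL_DEPS.items():
            if marker in item.keywords:
                missing = [m for m in modules if importlib.util.find_spec(m) is None]
                if missing:
                    item.add_marker(
                        pytest.mark.skip(reason=f"missing optional deps: {', '.join(missing)}")
                    )
```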
Actionable comments posted: 12
🔭 Outside diff range comments (1)
.pre-commit-config.yaml (1)
1-29: Add a few lightweight pre-commit hooks and bump the pre-commit-hooks rev. The current config is fine; consider these optional, high-value additions and a rev bump for pre-commit-hooks.
Files to change
- .pre-commit-config.yaml
- bump pre-commit-hooks rev v4.5.0 → v4.6.0 and consider enabling debug-statements / name-tests-test (and mixed-line-ending).
- optionally add these repos/hooks (paste into .pre-commit-config.yaml):
- repo: https://github.com/pre-commit/pre-commit-hooks
  rev: v4.6.0
  hooks:
    - id: trailing-whitespace
    - id: end-of-file-fixer
    - id: check-yaml
    - id: check-added-large-files
    - id: check-merge-conflict
    - id: debug-statements
    - id: name-tests-test
    - id: mixed-line-ending
- repo: https://github.com/asottile/add-trailing-comma
  rev: v3.1.0
  hooks:
    - id: add-trailing-comma
- repo: https://github.com/asottile/pyupgrade
  rev: v3.19.1
  hooks:
    - id: pyupgrade
      args: [--py310-plus]
- repo: https://github.com/pre-commit/pygrep-hooks
  rev: v1.10.0
  hooks:
    - id: python-check-blanket-noqa
    - id: python-check-blanket-type-ignore
    - id: python-use-type-annotations
- repo: https://github.com/codespell-project/codespell
  rev: v2.3.0
  hooks:
    - id: codespell
      args: [--ignore-words-list=nd,te,fo]

One-liner: run `pre-commit autoupdate` locally and test the hooks.
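For example, assuming pre-commit is already installed in the environment:

```bash
pre-commit autoupdate        # bump hook revs in .pre-commit-config.yaml
pre-commit run --all-files   # exercise every hook once before committing
```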
♻️ Duplicate comments (11)
dream_layer_backend/tests/test_quality_metrics.py (3)
132-139: Consider simplifying conditional logic in tests for better clarity. The nested conditionals in tests make them harder to understand and maintain. Consider using parametrized tests or helper methods to handle different test scenarios.
Example approach using pytest parametrization:
@pytest.mark.parametrize("clip_available,expected_scores", [ (True, lambda scores: all(s is None or 0.0 <= s <= 1.0 for s in scores)), (False, lambda scores: all(s is None for s in scores)), ]) def test_clip_similarity_with_availability(self, sample_images, sample_prompts, clip_available, expected_scores): with patch('metrics.clip_score.CLIP_AVAILABLE', clip_available): scorer = CLIPScorer() scores = scorer.clip_text_image_similarity(sample_images, sample_prompts) assert len(scores) == len(sample_images) assert expected_scores(scores)
375-378: Replace loop with more explicit test assertions. The loop in the test can be replaced with more explicit assertions that are clearer about what is being tested.
images = [] - for i in range(3): - # Create images with different patterns - img = Image.new('RGB', (100, 100), color=(i * 50, i * 50, i * 50)) - images.append(img) + # Create images with different patterns explicitly + images = [ + Image.new('RGB', (100, 100), color=(0, 0, 0)), # Dark image + Image.new('RGB', (100, 100), color=(50, 50, 50)), # Medium image + Image.new('RGB', (100, 100), color=(100, 100, 100)) # Bright image + ]
65-68: Clarify the conditional check logic in the test.The conditional check
assert scorer.model is not None or scorer.model is Noneis tautological and always evaluates to True. This appears to be testing that the model may or may not load successfully, but the intent should be clearer.Consider refactoring this test to be more explicit about what is being tested:
# Check availability based on dependencies if CLIP_AVAILABLE: - assert scorer.model is not None or scorer.model is None # May be None if loading failed + # Model should attempt to load when dependencies are available + # but may fail due to network/resource issues + assert hasattr(scorer, 'model') else: assert scorer.model is Nonedream_layer_backend/tests/test_presets_e2e.py (1)
185-203: Consider using parametrized tests instead of loops.The loop structure in tests can make them harder to debug when failures occur. Consider using pytest's parametrize decorator for clearer test output.
Example refactor:
@pytest.mark.parametrize("index,config", [ (0, {"steps": 20, "cfg_scale": 7.0}), (1, {"steps": 50, "cfg_scale": 8.0}), (2, {"steps": 10, "cfg_scale": 6.0}), ]) def test_preset_persistence_individual(self, temp_presets_file, index, config): manager = PresetManager(temp_presets_file) preset_name = f"persistent_preset_{index}" manager.create_preset_from_config(preset_name, config) # Verify persistence new_manager = PresetManager(temp_presets_file) assert preset_name in new_manager.list_presets() preset = new_manager.get_preset(preset_name) assert preset.params["steps"] == config["steps"] assert preset.params["cfg_scale"] == config["cfg_scale"]dream_layer_backend/tests/test_tiling_blend.py (3)
75-76: Consider using explicit assertions instead of a loop. The loop checking whether each tile is in the expected list could be clearer with explicit assertions.
- for tile in tiles: - assert tile in expected_tiles + # Sort both lists for deterministic comparison + assert sorted(tiles) == sorted(expected_tiles)
232-241: Simplify conditional logic in test functions. The conditional check for the crop parameter adds complexity. Consider splitting into separate test methods or using a more straightforward approach.
# Mock generate function - def mock_generate(crop=None, **kwargs): - if crop: - x0, y0, x1, y1 = crop - # Create a tile with coordinates drawn on it - tile = np.zeros((y1-y0, x1-x0, 3), dtype=np.uint8) - # Draw a simple pattern - tile[:, :] = [x0 % 255, y0 % 255, (x0 + y0) % 255] - return tile - else: - return np.zeros((512, 512, 3), dtype=np.uint8) + def mock_generate(**kwargs): + crop = kwargs.get('crop') + if not crop: + return np.zeros((512, 512, 3), dtype=np.uint8) + + x0, y0, x1, y1 = crop + tile = np.zeros((y1-y0, x1-x0, 3), dtype=np.uint8) + tile[:, :] = [x0 % 255, y0 % 255, (x0 + y0) % 255] + return tile
392-398: Extract nested loops into a helper function for clarity. The nested loops in the test make it harder to understand. Consider extracting the pattern generation logic.
+ def create_position_based_pattern(width, height, x_offset, y_offset): + """Create a test pattern based on position.""" + tile = np.zeros((height, width, 3), dtype=np.uint8) + for i in range(height): + for j in range(width): + tile[i, j] = [ + (x_offset + j) % 256, + (y_offset + i) % 256, + ((x_offset + j) + (y_offset + i)) % 256 + ] + return tile + def create_test_image(crop=None, **kwargs): if crop: x0, y0, x1, y1 = crop - # Create a test pattern that varies by position - tile = np.zeros((y1-y0, x1-x0, 3), dtype=np.uint8) - for i in range(y1-y0): - for j in range(x1-x0): - tile[i, j] = [ - (x0 + j) % 256, - (y0 + i) % 256, - ((x0 + j) + (y0 + i)) % 256 - ] - return tile + return create_position_based_pattern(x1-x0, y1-y0, x0, y0) else: # Return a 1024x1024 image for the full size case return np.zeros((1024, 1024, 3), dtype=np.uint8)dream_layer_backend/core/tiling.py (4)
72-79: Potential issue with tile generation for certain image sizes. The current loop may leave gaps at the right or bottom edges when the image dimensions aren't perfect multiples of (tile_size - overlap). The filtering condition on Line 79 could exclude necessary edge tiles.
101-101: Blend mask doesn't handle non-square tiles. The mask is created as a square (tile_size x tile_size), but edge tiles may be rectangular. This could cause dimension mismatches during blending.
Consider updating the create_blend_mask function to accept width and height separately:
def create_blend_mask( - tile_size: int, + width: int, + height: int, overlap: int, mode: str = "cosine" ) -> np.ndarray:
190-190: Potential shape mismatch in overlap mask slicing. The slicing [:y1-y0, :x1-x0] may not match the blend_mask dimensions for edge tiles, potentially causing broadcasting errors.
176-176: Incorrect mask dimensions for non-square tilesThe blend mask is created using
tile_widthfor both dimensions, which will fail for non-square tiles at image edges.- blend_mask = create_blend_mask(tile_width, overlap, mode) + # Calculate actual tile dimensions + blend_mask = create_blend_mask(tile_width, tile_height, overlap, mode)Note: This requires updating the
create_blend_maskfunction signature as suggested above.
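To illustrate the width/height-aware signature being proposed, a minimal cosine-mask sketch follows; the ramp length and mode handling are assumptions, not the PR's implementation:

```python
import numpy as np

def create_blend_mask(width: int, height: int, overlap: int, mode: str = "cosine") -> np.ndarray:
    """Per-pixel weights in [0, 1] that ramp over `overlap` pixels at every edge."""
    def ramp(n: int) -> np.ndarray:
        w = np.ones(n, dtype=np.float32)
        k = min(overlap, n)
        if k > 0:
            if mode == "cosine":
                edge = ((1 - np.cos(np.linspace(0.0, np.pi, k))) / 2).astype(np.float32)
            else:  # linear fallback
                edge = np.linspace(0.0, 1.0, k, dtype=np.float32)
            w[:k] = edge
            w[-k:] = edge[::-1]
        return w

    # Rectangular masks fall out naturally from the outer product.
    return np.outer(ramp(height), ramp(width))
```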
🧹 Nitpick comments (26)
dream_layer_backend/metrics/__init__.py (1)
1-2: Replace the package comment with a module docstring in dream_layer_backend/metrics and consider selective re-exports. Use a module docstring for better introspection (doc/help()). The package __init__.py currently only contains a comment; replace it with a docstring. If you want a small public API (e.g., scorers/factories), confirm the symbols to re-export — my automated search didn't discover other metric modules, so please confirm or allow me to run a corrected discovery command.
Files to update:
- dream_layer_backend/metrics/init.py — replace the top-line comment with a module docstring and optionally add explicit imports or all for re-exports.
Proposed minimal change:
-# Metrics package for DreamLayer AI +"""Metrics package for DreamLayer AI."""dream_layer_backend/tools/__init__.py (1)
1-2: Use a docstring to describe the tools package. Docstrings are preferred in package/module headers for consistency and introspection.
-# Tools package for DreamLayer AI +"""Tools package for DreamLayer AI."""pytest.ini (1)
1-7: Minor pytest UX improvements: show skip/xfail reasons and pin a minimum pytest version. Adding -ra surfaces skip/xfail summaries in CI logs; minversion prevents accidental older pytest usage. Consider strict xfail to avoid masking regressions.
[pytest] -addopts = -q +addopts = -q -ra +minversion = 7.0 +xfail_strict = true markers = requires_torch: needs PyTorch requires_transformers: needs HuggingFace transformers requires_lpips: needs lpips requires_skimage: needs scikit-imagedream_layer_backend/requirements.txt (1)
7-7: Confirm pytest-mock belongs in runtime requirements (vs dev/test extras). If this file is used for production deployment, consider moving test-only deps (pytest, pytest-mock) into a dev/CI requirements file or extras to keep runtime lean.
I can propose a split into:
- requirements.txt (runtime only)
- requirements-dev.txt (pytest, pytest-mock, etc.)
- requirements-metrics.txt (optional heavy deps)
Do you want me to generate these files and update docs/CI accordingly?.github/workflows/test.yml (2)
18-18: Remove trailing spaces for cleaner YAML formatting. The YAML file contains trailing spaces on several lines that should be removed for consistency and to follow YAML best practices.
Apply this diff to remove trailing spaces:
- uses: actions/checkout@v4 - + - name: Set up Python ${{ matrix.python-version }} uses: actions/setup-python@v4 with: python-version: ${{ matrix.python-version }} - + - name: Install dependencies run: | python -m pip install --upgrade pip python -m pip install -r requirements.txt - + - name: Run tests (no heavy deps) run: | cd dream_layer_backend python -m pytest tests/ -q --tb=short # Heavy metric tests should auto-skip when torch/transformers/lpips not installed - + - name: Verify test coverageAlso applies to: 23-23, 28-28, 34-34
35-38: Clarify test verification step and handle edge casesThe verification step might fail if no tests are collected at all, or if the grep command doesn't match the expected pattern. Consider making the verification more robust.
Apply this diff to improve the verification step:
- name: Verify test coverage run: | cd dream_layer_backend - python -m pytest tests/ --collect-only -q | grep -E "(CLIP|LPIPS)" || echo "No heavy metric tests found (expected)" + # Collect tests and check for heavy metric tests + test_output=$(python -m pytest tests/ --collect-only -q 2>&1) + if echo "$test_output" | grep -qE "(CLIP|LPIPS|test_clip|test_lpips)"; then + echo "Heavy metric tests found (will be skipped if deps missing)" + else + echo "No heavy metric tests found (expected behavior)" + fidream_layer_backend/tools/report_schema.py (2)
8-8: Remove unused importThe
jsonmodule is imported but never used in this file.Apply this diff to remove the unused import:
import csv -import json from pathlib import Path
41-89: Consider using context manager pattern for CSV reading consistencyThe file is opened with a context manager but the reader is accessed multiple times. For large CSV files, this could be inefficient as the file is re-read from the beginning.
Consider refactoring to read the data once and validate it:
with open(csv_path, 'r', newline='', encoding='utf-8') as f: reader = csv.DictReader(f) - # Check if required columns are present fieldnames = reader.fieldnames or [] + # Read all rows for validation + rows = list(reader)Then update the image path validation to use the cached rows:
# Validate image paths if bundle_root is provided if bundle_root: - for row_num, row in enumerate(reader, start=2): # Start at 2 for header row + for row_num, row in enumerate(rows, start=2): # Start at 2 for header rowdream_layer_backend/metrics/clip_score.py (1)
8-8: Remove unused importsSeveral imports are not used in this module and should be removed for cleaner code.
Apply this diff to remove unused imports:
import logging -from typing import List, Optional, Tuple +from typing import List, Optional from PIL import Image -import numpy as npAlso remove the unused
transformersimport:try: import torch - import transformers from transformers import CLIPProcessor, CLIPModel CLIP_AVAILABLE = TrueAlso applies to: 10-10, 20-20
dream_layer_backend/metrics/ssim_lpips.py (2)
8-8: Remove unused importThe
Tupletype hint is imported but never used.Apply this diff to remove the unused import:
import logging -from typing import Optional, Tuple, Union +from typing import Optional, Union from PIL import Image
72-76: Add error handling for the missing resize dependency. The code imports resize from skimage.transform within a conditional block, but doesn't handle the case where this specific function might not be available.
resizefrom skimage.transform within a conditional block, but doesn't handle the case where this specific function might not be available.Apply this diff to make the import more robust:
# Ensure same shape if img_a.shape != img_b.shape: logger.warning("Image shapes don't match, resizing img_b to match img_a") - from skimage.transform import resize - img_b = resize(img_b, img_a.shape, preserve_range=True) + try: + from skimage.transform import resize + img_b = resize(img_b, img_a.shape, preserve_range=True) + except ImportError: + logger.error("Cannot resize images - skimage.transform not available") + return Nonedream_layer_backend/tests/test_quality_metrics.py (1)
22-31: Remove unused imports to improve code cleanliness.Several imports are not used in the test module. Removing them improves maintainability.
Apply this diff to remove unused imports:
from metrics.clip_score import ( - CLIPScorer, get_clip_scorer, clip_text_image_similarity, + CLIPScorer, CLIP_AVAILABLE, CLIP_MODEL_ID, CLIP_MODEL_HASH ) from metrics.ssim_lpips import ( SSIMScorer, LPIPSScorer, QualityMetrics, - get_ssim_scorer, get_lpips_scorer, get_quality_metrics, - compute_ssim, compute_lpips, compute_all_quality_metrics, - SSIM_AVAILABLE, LPIPS_AVAILABLE )dream_layer_backend/core/__init__.py (1)
1-3: Consider adding package exports for better API discoverability.The
__init__.pyfile currently only contains a comment. Consider explicitly importing and exporting the key classes and functions from the submodules (presets, tiling) to provide a cleaner public API.# Core package for DreamLayer AI + +from .presets import ( + Preset, + PresetManager, + get_preset_manager, + load_presets, + save_presets, + compute_preset_hash +) + +from .tiling import ( + TilingConfig, + tile_slices, + create_blend_mask, + blend_paste, + process_tiled, + calculate_optimal_tile_size, + validate_tiling_config +) + +__all__ = [ + # Presets + 'Preset', + 'PresetManager', + 'get_preset_manager', + 'load_presets', + 'save_presets', + 'compute_preset_hash', + # Tiling + 'TilingConfig', + 'tile_slices', + 'create_blend_mask', + 'blend_paste', + 'process_tiled', + 'calculate_optimal_tile_size', + 'validate_tiling_config', +]dream_layer_backend/tests/test_presets_e2e.py (2)
5-8: Remove unused imports to improve code cleanliness.The
json,patch, andMagicMockimports are not used in this test file.-import json import tempfile from pathlib import Path -from unittest.mock import patch, MagicMock
161-161: Remove unused variable assignment.The
presetvariable is assigned but never used in the test.# Create valid preset config = {"model_name": "valid_model.safetensors", "steps": 20} - preset = manager.create_preset_from_config("valid_preset", config) + manager.create_preset_from_config("valid_preset", config)dream_layer_backend/tests/test_tiling_blend.py (1)
8-8: Remove unused imports.The
MagicMockandpatchimports are not used in this test file.-from unittest.mock import MagicMock, patchREADME.md (2)
250-272: Add language specifier to fenced code blockThe code block should have a language specifier for better syntax highlighting.
-``` +```text DreamLayer/ ├── dream_layer_backend/ │ ├── dream_layer.py # Main Flask API
432-432: Fix Markdown heading syntaxThe heading should be on its own line without surrounding HTML tags for proper Markdown rendering.
-<p align="center">### Made with ❤️ by builders, for builders</p> +<p align="center">Made with ❤️ by builders, for builders</p>dream_layer_backend/core/tiling.py (1)
228-228: Remove unused variable assignmentThe
configvariable is created but never used.- config = TilingConfig(tile_size, overlap, blend_mode) + # Validate configuration + TilingConfig(tile_size, overlap, blend_mode)dream_layer_backend/tools/report_bundle.py (1)
11-11: Remove unused importsThe imports
osandcreate_schema_headerare not used in the code.-import os import shutil import zipfile from datetime import datetime from pathlib import Path from typing import List, Dict, Any, Optional -from .report_schema import validate_results_csv, create_schema_header, SCHEMA_VERSION +from .report_schema import validate_results_csv, SCHEMA_VERSIONAlso applies to: 18-18
dream_layer_backend/core/presets.py (3)
9-9: Remove unused importThe
osmodule is imported but never used.-import os from datetime import datetime
144-151: Consider using logging instead of print statementsUsing print statements for status messages is not ideal for a library module. Consider using the logging module for better control over output.
+import logging + +logger = logging.getLogger(__name__) + # ... in the class methods ... - print(f"Loaded {len(self.presets)} presets from {self.presets_file}") + logger.info(f"Loaded {len(self.presets)} presets from {self.presets_file}")
313-327: Simplify model key mapping logicThe conditional logic for mapping preset models to config could be simplified using a mapping dictionary.
# Apply preset models (only if not already set in config) + model_key_mapping = { + 'checkpoint': 'model_name', + 'vae_name': 'vae_name', + 'lora_name': 'lora_name', + 'controlnet_model': 'controlnet_model' + } + for key, value in preset.models.items(): - if key == 'checkpoint': - if 'model_name' not in updated_config: - updated_config['model_name'] = value - elif key == 'vae_name': - if 'vae_name' not in updated_config: - updated_config['vae_name'] = value - elif key == 'lora_name': - if 'lora_name' not in updated_config: - updated_config['lora_name'] = value - elif key == 'controlnet_model': - if 'controlnet_model' not in updated_config: - updated_config['controlnet_model'] = value - else: - if key not in updated_config: - updated_config[key] = value + config_key = model_key_mapping.get(key, key) + if config_key not in updated_config: + updated_config[config_key] = valuedream_layer_backend/tests/test_report_bundle.py (3)
11-11: Remove unused importsThe imports
patchandMagicMockfrom unittest.mock are not used.-from unittest.mock import patch, MagicMock - import pytest
154-154: Remove unused variable assignmentThe variable
resultis assigned but never used.- result = build_report_bundle(temp_run_dir, zip_path, custom_globs) + build_report_bundle(temp_run_dir, zip_path, custom_globs)
84-85: Use PIL to create actual image files for more realistic testsThe test creates empty files with
.touch()instead of actual images, which might not catch issues with image processing.# Create dummy image files - for img_path in ["output/image1.png", "output/image2.png", "grids/grid.png"]: - (run_dir / img_path).touch() + for img_path in ["output/image1.png", "output/image2.png", "grids/grid.png"]: + img = Image.new('RGB', (256, 256), color='red') + img.save(run_dir / img_path)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (20)
- .github/pull_request_template.md (1 hunks)
- .github/workflows/test.yml (1 hunks)
- .pre-commit-config.yaml (1 hunks)
- README.md (3 hunks)
- dream_layer_backend/core/__init__.py (1 hunks)
- dream_layer_backend/core/presets.py (1 hunks)
- dream_layer_backend/core/tiling.py (1 hunks)
- dream_layer_backend/metrics/__init__.py (1 hunks)
- dream_layer_backend/metrics/clip_score.py (1 hunks)
- dream_layer_backend/metrics/ssim_lpips.py (1 hunks)
- dream_layer_backend/presets/presets.json (1 hunks)
- dream_layer_backend/requirements.txt (1 hunks)
- dream_layer_backend/tests/test_presets_e2e.py (1 hunks)
- dream_layer_backend/tests/test_quality_metrics.py (1 hunks)
- dream_layer_backend/tests/test_report_bundle.py (1 hunks)
- dream_layer_backend/tests/test_tiling_blend.py (1 hunks)
- dream_layer_backend/tools/__init__.py (1 hunks)
- dream_layer_backend/tools/report_bundle.py (1 hunks)
- dream_layer_backend/tools/report_schema.py (1 hunks)
- pytest.ini (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (7)
dream_layer_backend/tests/test_presets_e2e.py (1)
dream_layer_backend/core/presets.py (16)
Preset (18-121), PresetManager (124-367), get_preset_manager (374-379), load_presets (382-385), save_presets (388-392), compute_preset_hash (395-398), create_preset_from_config (260-295), apply_preset_to_config (297-333), list_presets (241-243), get_preset (237-239), to_dict (75-86), update (102-116), add_preset (245-249), is_compatible_with (118-121), validate_preset (335-356), remove_preset (251-258)
dream_layer_backend/tests/test_quality_metrics.py (2)
dream_layer_backend/metrics/clip_score.py (6)
CLIPScorer (28-155), get_clip_scorer (158-162), clip_text_image_similarity (62-103), clip_text_image_similarity (165-186), model_hash (153-155), get_model_info (143-150)
dream_layer_backend/metrics/ssim_lpips.py (15)
SSIMScorer (35-106), LPIPSScorer (109-198), QualityMetrics (201-244), get_ssim_scorer (253-258), get_lpips_scorer (261-266), get_quality_metrics (269-274), compute_ssim (43-97), compute_ssim (277-292), compute_lpips (134-188), compute_lpips (295-312), compute_all_quality_metrics (315-330), get_info (99-106), get_info (190-198), compute_all_metrics (208-237), get_metrics_info (239-244)
dream_layer_backend/tools/report_bundle.py (1)
dream_layer_backend/tools/report_schema.py (2)
validate_results_csv (26-88), create_schema_header (118-125)
dream_layer_backend/tests/test_tiling_blend.py (1)
dream_layer_backend/core/tiling.py (8)
TilingConfig (16-45), tile_slices (48-82), create_blend_mask (85-152), blend_paste (155-201), process_tiled (204-317), calculate_optimal_tile_size (320-349), validate_tiling_config (352-407), to_dict (39-45)
dream_layer_backend/metrics/ssim_lpips.py (2)
ComfyUI/custom_nodes/facerestore_cf/facelib/detection/retinaface/retinaface.py (1)
transform (167-192)
dream_layer_backend/metrics/clip_score.py (1)
_load_model(42-60)
dream_layer_backend/metrics/clip_score.py (1)
dream_layer_backend/metrics/ssim_lpips.py (1)
_load_model(122-132)
dream_layer_backend/tests/test_report_bundle.py (2)
dream_layer_backend/tools/report_bundle.py (2)
build_report_bundle (21-102), validate_bundle (292-353)
dream_layer_backend/tools/report_schema.py (1)
validate_results_csv(26-88)
🪛 LanguageTool
.github/pull_request_template.md
[grammar] ~1-~1: There might be a mistake here.
Context: ## 🚀 Feature Summary This PR implements 4 key features for...
(QB_NEW_EN)
[grammar] ~7-~7: There might be a mistake here.
Context: ... ZIP timestamps ensure identical SHA256 for same content 4. CI/CD Integration -...
(QB_NEW_EN)
[grammar] ~10-~10: There might be a mistake here.
Context: ... heavy dependencies ## 🧪 Test Strategy ### Test Coverage - ✅ SSIM tests: Always ...
(QB_NEW_EN)
[grammar] ~12-~12: There might be a mistake here.
Context: ... ## 🧪 Test Strategy ### Test Coverage - ✅ SSIM tests: Always enabled (lightw...
(QB_NEW_EN)
[grammar] ~13-~13: There might be a mistake here.
Context: ...ed (lightweight scikit-image dependency) - ✅ CLIP tests: Auto-skip with `@pytes...
(QB_NEW_EN)
[grammar] ~14-~14: There might be a mistake here.
Context: ...s_torchwhen torch/transformers missing - ✅ **LPIPS tests**: Auto-skip with@pyte...
(QB_NEW_EN)
[grammar] ~15-~15: There might be a mistake here.
Context: ....mark.requires_lpips` when lpips missing - ✅ Fallback behavior: All tests verif...
(QB_NEW_EN)
[grammar] ~27-~27: There might be a mistake here.
Context: ...trics.py -v ``` ## 🔒 Determinism Notes - ZIP timestamps: Fixed to `(1980,1,1,0,0...
(QB_NEW_EN)
[grammar] ~29-~29: There might be a mistake here.
Context: ...1980,1,1,0,0,0)` for reproducible SHA256 - Bundle verification: Two identical run...
(QB_NEW_EN)
[grammar] ~30-~30: There might be a mistake here.
Context: ... identical runs produce identical hashes - Content integrity: SHA256 verification...
(QB_NEW_EN)
[grammar] ~33-~33: There might be a mistake here.
Context: ...cy ## 📦 Optional Dependencies Behavior | Dependency | Status | Test Behavior | |...
(QB_NEW_EN)
[grammar] ~35-~35: There might be a mistake here.
Context: ... | Dependency | Status | Test Behavior | |------------|--------|---------------| ...
(QB_NEW_EN)
[grammar] ~36-~36: There might be a mistake here.
Context: ... |------------|--------|---------------| | torch + transformers | Optional | CL...
(QB_NEW_EN)
[grammar] ~37-~37: There might be a mistake here.
Context: ...ers| Optional | CLIP tests auto-skip | |lpips` | Optional | LPIPS tests auto-...
(QB_NEW_EN)
[grammar] ~38-~38: There might be a mistake here.
Context: ...ps| Optional | LPIPS tests auto-skip | |scikit-image` | Required | SSIM tests...
(QB_NEW_EN)
[grammar] ~43-~43: There might be a mistake here.
Context: ...ashing. ## 🎯 Instructions to Reproduce ### 1. Test Heavy Dependency Skipping ```bash...
(QB_NEW_EN)
[grammar] ~78-~78: There might be a mistake here.
Context: ...ould be [None] " ``` ## 🔍 Code Quality - Pre-commit hooks: black, isort, flake8 ...
(QB_NEW_EN)
[grammar] ~80-~80: There might be a mistake here.
Context: ... isort, flake8 for consistent formatting - Type hints: Full type annotations for ...
(QB_NEW_EN)
[grammar] ~81-~81: There might be a mistake here.
Context: ...ull type annotations for maintainability - Error handling: Graceful fallbacks thr...
(QB_NEW_EN)
[grammar] ~82-~82: There might be a mistake here.
Context: ...ul fallbacks throughout metrics pipeline - Documentation: Comprehensive README up...
(QB_NEW_EN)
[grammar] ~85-~85: There might be a mistake here.
Context: ...E updates with examples ## 📋 Checklist - [x] Tests pass without heavy dependencies...
(QB_NEW_EN)
🪛 Ruff (0.12.2)
dream_layer_backend/tests/test_presets_e2e.py
5-5: json imported but unused
Remove unused import: json
(F401)
8-8: unittest.mock.patch imported but unused
Remove unused import
(F401)
8-8: unittest.mock.MagicMock imported but unused
Remove unused import
(F401)
161-161: Local variable preset is assigned to but never used
Remove assignment to unused variable preset
(F841)
dream_layer_backend/tests/test_quality_metrics.py
23-23: metrics.clip_score.get_clip_scorer imported but unused
Remove unused import
(F401)
23-23: metrics.clip_score.clip_text_image_similarity imported but unused
Remove unused import
(F401)
28-28: metrics.ssim_lpips.get_ssim_scorer imported but unused
Remove unused import
(F401)
28-28: metrics.ssim_lpips.get_lpips_scorer imported but unused
Remove unused import
(F401)
28-28: metrics.ssim_lpips.get_quality_metrics imported but unused
Remove unused import
(F401)
29-29: metrics.ssim_lpips.compute_ssim imported but unused
Remove unused import
(F401)
29-29: metrics.ssim_lpips.compute_lpips imported but unused
Remove unused import
(F401)
29-29: metrics.ssim_lpips.compute_all_quality_metrics imported but unused
Remove unused import
(F401)
30-30: metrics.ssim_lpips.SSIM_AVAILABLE imported but unused
Remove unused import
(F401)
30-30: metrics.ssim_lpips.LPIPS_AVAILABLE imported but unused
Remove unused import
(F401)
dream_layer_backend/tools/report_bundle.py
11-11: os imported but unused
Remove unused import: os
(F401)
18-18: .report_schema.create_schema_header imported but unused
Remove unused import: .report_schema.create_schema_header
(F401)
dream_layer_backend/tests/test_tiling_blend.py
8-8: unittest.mock.MagicMock imported but unused
Remove unused import
(F401)
8-8: unittest.mock.patch imported but unused
Remove unused import
(F401)
dream_layer_backend/metrics/ssim_lpips.py
8-8: typing.Tuple imported but unused
Remove unused import: typing.Tuple
(F401)
dream_layer_backend/core/tiling.py
228-228: Local variable config is assigned to but never used
Remove assignment to unused variable config
(F841)
314-314: Do not use bare except
(E722)
dream_layer_backend/tools/report_schema.py
8-8: json imported but unused
Remove unused import: json
(F401)
dream_layer_backend/core/presets.py
9-9: os imported but unused
Remove unused import: os
(F401)
dream_layer_backend/metrics/clip_score.py
8-8: typing.Tuple imported but unused
Remove unused import: typing.Tuple
(F401)
10-10: numpy imported but unused
Remove unused import: numpy
(F401)
20-20: transformers imported but unused; consider using importlib.util.find_spec to test for availability
(F401)
dream_layer_backend/tests/test_report_bundle.py
11-11: unittest.mock.patch imported but unused
Remove unused import
(F401)
11-11: unittest.mock.MagicMock imported but unused
Remove unused import
(F401)
154-154: Local variable result is assigned to but never used
Remove assignment to unused variable result
(F841)
🪛 actionlint (1.7.7)
.github/workflows/test.yml
20-20: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🪛 YAMLlint (1.37.1)
.github/workflows/test.yml
[error] 18-18: trailing spaces
(trailing-spaces)
[error] 23-23: trailing spaces
(trailing-spaces)
[error] 28-28: trailing spaces
(trailing-spaces)
[error] 34-34: trailing spaces
(trailing-spaces)
🪛 markdownlint-cli2 (0.17.2)
README.md
250-250: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
274-274: Multiple headings with the same content
(MD024, no-duplicate-heading)
326-326: Multiple headings with the same content
(MD024, no-duplicate-heading)
342-342: Multiple headings with the same content
(MD024, no-duplicate-heading)
🔇 Additional comments (2)
.github/pull_request_template.md (1)
1-94: Excellent modernization of the PR template with a focus on testing and determinism! The new template structure effectively emphasizes test-driven development, deterministic behavior, and CI/CD integration, which aligns perfectly with the engineering practices introduced in this PR. The comprehensive checklist and clear reproduction steps will help maintain code quality standards.
dream_layer_backend/presets/presets.json (1)
1-70: LGTM! Well-structured preset definitions. The JSON structure is valid and follows a consistent schema with appropriate fields for version control, timestamps, and preset configurations. The preset hashes appear to be deterministic SHA256 values, as expected.
- uses: actions/checkout@v4

- name: Set up Python ${{ matrix.python-version }}
  uses: actions/setup-python@v4
  with:
    python-version: ${{ matrix.python-version }}

- name: Install dependencies
  run: |
    python -m pip install --upgrade pip
    python -m pip install -r requirements.txt
🛠️ Refactor suggestion
Update GitHub Actions to latest versions for security and compatibility
The workflow is using outdated action versions that may have security vulnerabilities or compatibility issues with newer GitHub runners.
Apply this diff to update to the latest stable versions:
- - uses: actions/checkout@v4
+ - uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
- uses: actions/setup-python@v4
+ uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install -r requirements.txt
🧰 Tools
🪛 actionlint (1.7.7)
20-20: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🪛 YAMLlint (1.37.1)
[error] 18-18: trailing spaces
(trailing-spaces)
[error] 23-23: trailing spaces
(trailing-spaces)
[error] 28-28: trailing spaces
(trailing-spaces)
🤖 Prompt for AI Agents
In .github/workflows/test.yml around lines 17 to 28, the workflow references
older action versions; update the action references to the latest stable
releases by changing the uses lines to the current stable tags (e.g.,
actions/checkout@v4 and actions/setup-python@v4 or the specific latest patch
SHAs), keep the python-version matrix unchanged, and ensure the Install
dependencies step remains the same and runs successfully after the updates.
from scipy.ndimage import zoom
zoom_factors = [expected_height / tile_array.shape[0], expected_width / tile_array.shape[1]]
if len(tile_array.shape) == 3:
    zoom_factors.append(1)
tile_array = zoom(tile_array, zoom_factors, order=1)
🛠️ Refactor suggestion
Consider validating scipy import availability
The code conditionally imports scipy.ndimage.zoom but doesn't handle the ImportError if scipy is not installed.
else:
# Simple resize for numpy arrays
- from scipy.ndimage import zoom
- zoom_factors = [expected_height / tile_array.shape[0], expected_width / tile_array.shape[1]]
- if len(tile_array.shape) == 3:
- zoom_factors.append(1)
- tile_array = zoom(tile_array, zoom_factors, order=1)
+ try:
+ from scipy.ndimage import zoom
+ zoom_factors = [expected_height / tile_array.shape[0], expected_width / tile_array.shape[1]]
+ if len(tile_array.shape) == 3:
+ zoom_factors.append(1)
+ tile_array = zoom(tile_array, zoom_factors, order=1)
+ except ImportError:
+ logger.warning("scipy not available for resizing, using simple nearest neighbor")
+ # Fallback to simple nearest neighbor interpolation
+ # or skip resizing with a warning
+ pass

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In dream_layer_backend/core/tiling.py around lines 292 to 296, the code imports
scipy.ndimage.zoom unguarded; wrap the import in a try/except ImportError and
handle the failure by raising a clear RuntimeError (or ValueError) that explains
scipy is required (e.g., "scipy is required for tile resampling, install with
'pip install scipy'") or fall back to an alternative resampling implementation
(e.g., use PIL.Image.resize or numpy-based resizing) before attempting to call
zoom; ensure the rest of the code only calls zoom when the import succeeded.
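A minimal sketch of the PIL fallback the prompt mentions; the helper name and the assumption that `tile_array` is a uint8 H×W×C array are illustrative only:

```python
import numpy as np
from PIL import Image

def resize_tile(tile_array: np.ndarray, expected_width: int, expected_height: int) -> np.ndarray:
    """Resample a tile to the expected size, using scipy when present and PIL otherwise."""
    try:
        from scipy.ndimage import zoom
        factors = [expected_height / tile_array.shape[0], expected_width / tile_array.shape[1]]
        if tile_array.ndim == 3:
            factors.append(1)
        return zoom(tile_array, factors, order=1)
    except ImportError:
        # Round-trip through PIL; the default resampling filter is used here.
        resized = Image.fromarray(tile_array).resize((expected_width, expected_height))
        return np.asarray(resized)
```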
try:
    # Test if generate function returns PIL Image
    test_result = generate_fn(**{**gen_kwargs, 'crop': (0, 0, 1, 1)})
    if isinstance(test_result, Image.Image):
        return Image.fromarray(canvas)
except:
    pass
🛠️ Refactor suggestion
Improve exception handling specificity
Using a bare except clause can mask unexpected errors. Consider catching specific exceptions.
# Convert back to PIL Image if the generate function returns PIL Images
try:
# Test if generate function returns PIL Image
test_result = generate_fn(**{**gen_kwargs, 'crop': (0, 0, 1, 1)})
if isinstance(test_result, Image.Image):
return Image.fromarray(canvas)
- except:
+ except (TypeError, KeyError, AttributeError):
+ # generate_fn doesn't support crop parameter or other expected errors
pass
🧰 Tools
🪛 Ruff (0.12.2)
314-314: Do not use bare except
(E722)
🤖 Prompt for AI Agents
In dream_layer_backend/core/tiling.py around lines 309 to 316, the code uses a
bare `except:` which can swallow unexpected errors; replace it with a specific
exception handler (e.g., `except Exception as e:` or a narrower set like `except
(TypeError, ValueError, OSError) as e:`), log or handle the exception (include
the exception message) and only suppress known/expected failures while letting
truly unexpected errors propagate or be re-raised.
text_features = self.model.get_text_features(**inputs)
image_features = self.model.get_image_features(**inputs)
💡 Verification agent
🧩 Analysis chain
Verify tensor input format for CLIP model
The code passes both text and image inputs to both get_text_features and get_image_features, which might not be correct. Each method typically expects only its respective input type.
Apply this diff to fix the input tensors:

 # Get embeddings
 with torch.no_grad():
-    text_features = self.model.get_text_features(**inputs)
-    image_features = self.model.get_image_features(**inputs)
+    text_features = self.model.get_text_features(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask)
+    image_features = self.model.get_image_features(pixel_values=inputs.pixel_values)

Let me verify the correct API usage for CLIP model:
🏁 Script executed:
#!/bin/bash
# Search for CLIP model usage patterns in the codebase
ast-grep --pattern 'get_text_features($$$)'
# Also check for examples of get_image_features usage
ast-grep --pattern 'get_image_features($$$)'

Length of output: 92
🏁 Script executed:
#!/bin/bash
set -euxo pipefail
echo "Searching for clip_score.py files..."
find . -type f -name "clip_score.py" -print || true
echo
echo "Searching repository for get_text_features/get_image_features usages..."
rg -n --hidden --no-ignore -S "get_text_features|get_image_features" || true
# If the file exists at the reported path in the review, print it with line numbers
target="dream_layer_backend/metrics/clip_score.py"
if [ -f "$target" ]; then
echo
echo "Displaying $target (first 300 lines):"
nl -ba "$target" | sed -n '1,300p'
else
echo
echo "$target not found in workspace."
fi

Length of output: 1087
🏁 Script executed:
#!/bin/bash
set -u -o pipefail
target="dream_layer_backend/metrics/clip_score.py"
if [ -f "$target" ]; then
echo "---- Showing $target (up to 400 lines) ----"
sed -n '1,400p' "$target"
else
echo "$target not found"
fi
echo "---- Search for how 'inputs' is created and processor usage ----"
rg -n "inputs\s*=" dream_layer_backend/metrics/clip_score.py || true
rg -n "processor|CLIPProcessor|CLIPModel|return_tensors|pixel_values|input_ids" dream_layer_backend/metrics/clip_score.py || trueLength of output: 7276
Pass only text/image tensors to CLIP get_*_features (fix required)
The processor outputs are being expanded into both calls. Pass only text tensors to get_text_features and only image tensors to get_image_features.
- File: dream_layer_backend/metrics/clip_score.py — _process_batch, lines ~123–124
Apply patch:
- text_features = self.model.get_text_features(**inputs)
- image_features = self.model.get_image_features(**inputs)
+ text_features = self.model.get_text_features(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask)
+ image_features = self.model.get_image_features(pixel_values=inputs.pixel_values)
🤖 Prompt for AI Agents
In dream_layer_backend/metrics/clip_score.py around lines 123–124, the code
currently passes the full processor outputs into both
self.model.get_text_features and self.model.get_image_features; change this to
pass only the text-related tensors to get_text_features (e.g., input_ids,
attention_mask, token_type_ids if present) and only the image-related tensors to
get_image_features (e.g., pixel_values or tensor name produced for images).
Extract or construct two small dicts (text_inputs and image_inputs) from the
processor outputs and call the respective get_*_features with the appropriate
dict so each method receives only the tensors it expects.
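A minimal sketch of that split, assuming the processor was called with return_tensors="pt" and returned the usual CLIPProcessor keys (input_ids, attention_mask, pixel_values); the helper name encode_pairs is illustrative:

import torch

def encode_pairs(model, inputs):
    """Feed text-only and image-only tensors to the respective CLIP encoders."""
    text_inputs = {k: v for k, v in inputs.items() if k in ("input_ids", "attention_mask")}
    image_inputs = {k: v for k, v in inputs.items() if k == "pixel_values"}
    with torch.no_grad():
        text_features = model.get_text_features(**text_inputs)
        image_features = model.get_image_features(**image_inputs)
    return text_features, image_features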
def get_clip_scorer() -> Optional[CLIPScorer]:
    """Get a CLIP scorer instance."""
    if CLIP_AVAILABLE:
        return CLIPScorer()
    return None
🛠️ Refactor suggestion
Consider implementing singleton pattern for CLIPScorer
The get_clip_scorer() function creates a new instance every time it's called, which could lead to multiple model loads and memory inefficiency.
Apply this diff to implement a singleton pattern:

+# Global instance
+_clip_scorer: Optional[CLIPScorer] = None
+
 def get_clip_scorer() -> Optional[CLIPScorer]:
     """Get a CLIP scorer instance."""
+    global _clip_scorer
     if CLIP_AVAILABLE:
-        return CLIPScorer()
+        if _clip_scorer is None:
+            _clip_scorer = CLIPScorer()
+        return _clip_scorer
     return None
🤖 Prompt for AI Agents
In dream_layer_backend/metrics/clip_score.py around lines 158 to 162,
get_clip_scorer() currently returns a new CLIPScorer each call causing repeated
model loads; change it to use a module-level cached instance: add a private
variable (e.g. _clip_scorer: Optional[CLIPScorer] = None) and lazily instantiate
it inside get_clip_scorer() on first call, returning the cached instance
thereafter; if concurrency is a concern, guard instantiation with a simple
threading.Lock to ensure thread-safety; keep returning None when CLIP_AVAILABLE
is False.
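If scorers can be requested from multiple threads, the cached instance can be guarded with a lock. A sketch building on the CLIPScorer and CLIP_AVAILABLE names from this module (double-checked locking is optional for single-threaded use):

import threading
from typing import Optional

_clip_scorer: Optional["CLIPScorer"] = None
_clip_scorer_lock = threading.Lock()

def get_clip_scorer() -> Optional["CLIPScorer"]:
    """Return a shared CLIPScorer, loading the model at most once."""
    global _clip_scorer
    if not CLIP_AVAILABLE:
        return None
    if _clip_scorer is None:
        with _clip_scorer_lock:
            if _clip_scorer is None:  # re-check so concurrent callers load only once
                _clip_scorer = CLIPScorer()
    return _clip_scorer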
# Optional dependencies for quality metrics
# Uncomment the lines below to enable additional features:

# For SSIM (Structural Similarity Index)
# scikit-image>=0.19.0

# For LPIPS (Learned Perceptual Image Patch Similarity)
# lpips>=0.1.4
# torch>=1.9.0
# torchvision>=0.10.0

# For CLIP scoring
# transformers>=4.20.0
# ftfy>=6.1.0
# regex>=2022.1.18

# For tiling and image processing
# numpy>=1.21.0
# scipy>=1.7.0
💡 Verification agent
🧩 Analysis chain
Optional deps: ensure they match actual imports; if used by tiling/metrics at runtime, promote to required or extras
Given new features:
- metrics/clip_score.py likely needs transformers, torch, etc.
- metrics/ssim_lpips.py likely needs lpips and scikit-image.
- core/tiling.py may rely on numpy/scipy for blending (cosine/linear).
If these modules import heavy libs unguarded, commented deps will break at runtime. Either:
- Guard imports and degrade gracefully (return None), or
- Move these deps to an extras group (e.g., pip install .[metrics]) and document.
Run this to scan for imports and confirm whether they are optional (inside try/except) or unconditional:
If numpy/scipy are used in core/tiling.py without guards, consider promoting them from comments to required deps or creating a “tiling” extra.
I can prepare a pyproject.toml with extras_require:
- metrics = ["torch>=...", "torchvision>=...", "lpips>=...", "scikit-image>=...", "transformers>=...", "ftfy>=...", "regex>=..."]
- tiling = ["numpy>=...", "scipy>=..."]
Let me know and I’ll generate it alongside doc updates.
🏁 Script executed:
#!/bin/bash
rg -n "import (numpy|scipy|torch|torchvision|lpips|skimage|transformers|ftfy|regex)" -A2 -B1Length of output: 59076
Action required: promote numpy to required; move heavy metric deps to an extras group
Short summary: rg shows unguarded numpy imports in the backend (tiling & metrics) while heavy libs (torch/transformers/lpips) are guarded — the commented deps in requirements.txt will break at runtime for numpy.
Files to address:
- dream_layer_backend/core/tiling.py — line ~8: import numpy as np (unguarded)
- dream_layer_backend/metrics/clip_score.py — line ~10: import numpy as np (unguarded); lines ~18–22: try: import torch / transformers (guarded)
- dream_layer_backend/metrics/ssim_lpips.py — line ~10: import numpy as np (unguarded); lines ~27–30: try: import lpips (guarded); lines ~176–179: import torch used inside a function
- dream_layer_backend/tests/* (test_tiling_blend.py, test_quality_metrics.py) — import numpy in tests
- Make numpy (>=1.21.0) a required dependency in dream_layer_backend/requirements.txt (or pyproject) since multiple modules + tests import it unguarded.
- Move heavy optional packages to an extras group (e.g. [metrics] => torch, torchvision, lpips, transformers, ftfy, regex) and document installation (pip install .[metrics]).
- Or, if you prefer to keep them commented, add defensive guards around unguarded imports/use-sites and have functions gracefully degrade when optional backends are not installed.
I can generate a pyproject.toml with extras (metrics = [...], tiling = [...]) and update docs if you want — say “yes” and I’ll produce the changes.
🤖 Prompt for AI Agents
In dream_layer_backend/requirements.txt around lines 9–27, numpy is currently
commented out while several modules import it unguarded; add numpy>=1.21.0 as a
required dependency (uncomment or insert it into the main requirements list) and
remove/uncomment heavy metric libs instead into an extras group (e.g. create a
[metrics] extras list in pyproject.toml or an extras_require entry with torch,
torchvision, lpips, transformers, ftfy, regex) and update README to instruct
users to pip install .[metrics]; alternatively, if you prefer not to add extras
support, add defensive guards around all unguarded numpy imports in
dream_layer_backend/core/tiling.py, dream_layer_backend/metrics/* and tests so
code degrades gracefully when optional backends are missing.
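The graceful-degradation option looks roughly like this; a minimal sketch using scikit-image's structural_similarity as the optional backend (the helper name compute_ssim_or_none is illustrative, not the function in metrics/ssim_lpips.py):

import logging

logger = logging.getLogger(__name__)

try:
    from skimage.metrics import structural_similarity as _ssim
    SSIM_AVAILABLE = True
except ImportError:
    _ssim = None
    SSIM_AVAILABLE = False

def compute_ssim_or_none(img_a, img_b):
    """Return an SSIM score, or None when scikit-image is not installed."""
    if not SSIM_AVAILABLE:
        logger.warning("scikit-image not installed; skipping SSIM")
        return None
    return float(_ssim(img_a, img_b, channel_axis=-1))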
except Exception:
    pass
🛠️ Refactor suggestion
Improve exception handling in config loading
Catching Exception and silently passing can hide unexpected errors. Consider logging the exception for debugging.
-except Exception:
-    pass
+except (json.JSONDecodeError, IOError) as e:
+    # Log the error but continue with empty config_data
+    print(f"Warning: Failed to load config.json: {e}")
🤖 Prompt for AI Agents
In dream_layer_backend/tools/report_bundle.py around lines 212 to 213, the broad
"except Exception: pass" silently swallows errors during config loading; replace it with
"except Exception as e:" and log the exception (e.g., logger.exception or
logging.exception) including the traceback so failures are visible for
debugging, and only suppress the exception if you have a clear fallback path —
otherwise re-raise or return a controlled default.
# Set fixed timestamp and permissions for deterministic SHA256
info = zipf.getinfo(filename)
info.date_time = (1980, 1, 1, 0, 0, 0)  # Fixed date for determinism
info.external_attr = 0o644 << 16  # Fixed permissions
Incorrect usage of zipfile info modification
The code modifies the ZipInfo object after writing the file, which has no effect. The info should be created and modified before writing.
-# Use fixed permissions and timestamp for determinism
-zipf.write(
-    file_path,
-    filename,
-    compress_type=zipfile.ZIP_DEFLATED
-)
-
-# Set fixed timestamp and permissions for deterministic SHA256
-info = zipf.getinfo(filename)
-info.date_time = (1980, 1, 1, 0, 0, 0)  # Fixed date for determinism
-info.external_attr = 0o644 << 16  # Fixed permissions
+# Create ZipInfo with fixed timestamp and permissions for determinism
+info = zipfile.ZipInfo(filename=filename)
+info.date_time = (1980, 1, 1, 0, 0, 0)  # Fixed date for determinism
+info.external_attr = 0o644 << 16  # Fixed permissions
+info.compress_type = zipfile.ZIP_DEFLATED
+
+# Write file with fixed metadata
+with open(file_path, 'rb') as f:
+    zipf.writestr(info, f.read())
🤖 Prompt for AI Agents
In dream_layer_backend/tools/report_bundle.py around lines 269 to 272, the code
sets ZipInfo.date_time and external_attr after calling zipf.write/getinfo which
has no effect; create or obtain the ZipInfo for the target entry, set
info.date_time = (1980,1,1,0,0,0) and info.external_attr = 0o644 << 16 before
writing, then write the file using zipf.writestr(info, data) (or
zipf.writestr(info, file_bytes) if you read the file yourself) or construct
ZipInfo with the desired arcname and pass it to writestr/write so the
deterministic timestamp and permissions are applied.
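Once the write-through-ZipInfo fix is in place, determinism is easy to confirm by building the same run twice and comparing digests. A sketch; the build_report_bundle calls and the paths below are hypothetical placeholders:

import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256; identical bundles must hash identically."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage:
# build_report_bundle(Path("runs/run-001"), Path("/tmp/bundle_a.zip"))
# build_report_bundle(Path("runs/run-001"), Path("/tmp/bundle_b.zip"))
# assert sha256_of(Path("/tmp/bundle_a.zip")) == sha256_of(Path("/tmp/bundle_b.zip"))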
            validate_results_csv(temp_csv)
            csv_valid = True
        except Exception as e:
            csv_valid = False
            csv_error = str(e)
        finally:
            temp_csv.unlink(missing_ok=True)
    except Exception as e:
        csv_valid = False
        csv_error = str(e)
Undefined variable in exception handling
The variable csv_error is referenced on line 345 but may not be defined if an exception occurs before line 333.
 # Validate results.csv schema
+csv_error = None
 try:
     with zipf.open("results.csv") as csv_file:
🤖 Prompt for AI Agents
In dream_layer_backend/tools/report_bundle.py around lines 329 to 338, csv_error
(and csv_valid) may be referenced later even if an exception occurs before they
are assigned; initialize csv_error (e.g. to an empty string or None) and
csv_valid (e.g. to False) before the outer try block so they always exist, or
set them in every except branch prior to raising/returning; ensure
temp_csv.unlink still runs in finally and that any early returns still leave
csv_error defined.
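A minimal sketch of that initialization, using the same variable names as the snippet above (the body of the try block is elided and marked with a comment):

from pathlib import Path
from typing import Optional

csv_valid: bool = False
csv_error: Optional[str] = None
temp_csv: Optional[Path] = None
try:
    # extract results.csv from the bundle into temp_csv and validate it here
    csv_valid = True
except Exception as exc:
    csv_error = str(exc)
finally:
    if temp_csv is not None:
        temp_csv.unlink(missing_ok=True)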
f.seek(0)
next(f)  # Skip header
first_row = next(csv.DictReader(f))
return first_row.get('schema_version', "1.0")
Potential issue with CSV reader reuse after seeking
Creating a new DictReader after seeking might not work as expected. The file position is changed but a new reader is created without resetting the file object's state.
Apply this diff to fix the CSV reader usage:
if "schema_version" in fieldnames:
# Read first row to get schema version
- f.seek(0)
- next(f) # Skip header
- first_row = next(csv.DictReader(f))
+ for row in reader:
+ return row.get('schema_version', "1.0")
return first_row.get('schema_version', "1.0")Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In dream_layer_backend/tools/report_schema.py around lines 108 to 111, the CSV
handling after f.seek(0) incorrectly instantiates a new csv.DictReader and then
calls next(csv.DictReader(f)) which can mis-handle the file object's state;
instead, after seeking, create a single csv.DictReader instance from the file
(reader = csv.DictReader(f)), advance it with next(reader) to skip the header
row, then call next(reader) to get the first data row and return its
'schema_version' (with the same default "1.0"); ensure you use the same reader
object rather than re-creating the reader after seeking.
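A sketch of reading the version with a single DictReader, which consumes the header row itself, so no seek or manual skip is needed (the function name is illustrative):

import csv
from pathlib import Path

def read_schema_version(csv_path: Path, default: str = "1.0") -> str:
    """Return schema_version from the first data row, or a default."""
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)  # header row is consumed by the reader
        if reader.fieldnames and "schema_version" in reader.fieldnames:
            first_row = next(reader, None)
            if first_row is not None:
                return first_row.get("schema_version") or default
    return default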
Actionable comments posted: 4
🧹 Nitpick comments (4)
.pre-commit-config.yaml (4)
22-26: Avoid duplicative linting (flake8 + ruff) or scope flake8 to plugins only
Running both flake8 and ruff often duplicates checks and increases CI time. Either remove flake8 or keep it purely for plugins that ruff doesn't cover.
Option A (remove flake8):
   - repo: https://github.com/PyCQA/flake8
-    rev: 7.0.0
-    hooks:
-      - id: flake8
-        args: [--config=.flake8]
+    rev: 7.0.0
+    hooks: []

Option B (keep flake8 only for plugins): ensure .flake8 enables only plugin checks not yet covered by ruff, and disable overlapping rules. If helpful, I can draft a .flake8 and pyproject.toml/ruff config to de-duplicate.
Also applies to: 27-31
5-12: Add a few low-cost safety hooks and enforce LF endings
These pre-commit-hooks are cheap and catch common issues early. Also, enforce LF to prevent cross-OS churn.
       - id: trailing-whitespace
       - id: end-of-file-fixer
       - id: check-yaml
+      - id: check-json
+      - id: check-toml
+      - id: check-ast
       - id: check-added-large-files
       - id: check-merge-conflict
       - id: debug-statements
       - id: name-tests-test
-      - id: mixed-line-ending
+      - id: mixed-line-ending
+        args: ["--fix=lf"]
1-1: Exclude generated artifacts and bulky paths to speed up hooks
Pre-commit scans staged files, but patterns help avoid accidental staging and speed up path matching. Tweak as needed for your repo layout.
-repos:
+exclude: |
+  (?x)(
+    ^runs/|
+    ^dist/|
+    ^build/|
+    ^.venv/|
+    ^env/|
+    ^.mypy_cache/|
+    ^.pytest_cache/|
+    ^docs/_build/
+  )
+repos:
36-39: Tighten codespell to reduce false positives
A few args make codespell less noisy on repos with data, notebooks, and lockfiles.
   - repo: https://github.com/codespell-project/codespell
     rev: v2.3.0
     hooks:
       - id: codespell
+        args:
+          - --skip=.git,*.lock,*.csv,*.tsv,*.ipynb,*.svg,*.png,*.jpg,*.jpeg,*.gif,*.pdf
+          - --ignore-words-list=nd,crate,fo,te
+          - --quiet-level=2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .pre-commit-config.yaml (1 hunks)
- dream_layer_backend/.flake8 (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- dream_layer_backend/.flake8
.pre-commit-config.yaml
Outdated
  - repo: https://github.com/psf/black
    rev: 24.4.2
    hooks:
      - id: black
        args: [--line-length=120]
  - repo: https://github.com/PyCQA/isort
    rev: 5.13.2
💡 Verification agent
🧩 Analysis chain
Consider bumping hook versions to current stable releases
Several hooks (black, ruff-pre-commit, flake8, pyupgrade, codespell) have newer releases with bugfixes/perf improvements. Recommend bumping after a quick local run.
Also applies to: 27-33, 31-35, 36-39
🌐 Web query:
What are the latest stable versions of:
- black (pre-commit hook),
- ruff-pre-commit,
- flake8,
- pyupgrade,
- codespell,
- pre-commit-hooks
?
💡 Result:
Here are the latest stable releases I found (checked live; information current as of 2025-08-14):
- black (psf/black, pre-commit hook): 25.1.0 — released Jan 29, 2025. (github.com)
- ruff-pre-commit (astral-sh/ruff-pre-commit): v0.12.8 — released Aug 07, 2025. (github.com)
- flake8: 7.3.0 — uploaded Jun 20, 2025. (pypi.org)
- pyupgrade: 3.20.0 — uploaded May 23, 2025. (pypi.org)
- codespell: 2.4.1 — uploaded Jan 28, 2025. (pypi.org)
- pre-commit-hooks (pre-commit/pre-commit-hooks): 5.0.0 — uploaded Oct 05, 2024. (pypi.org)
If you want, I can:
- produce a ready-to-copy pre-commit-config.yaml snippet using these exact revs/tags, or
- re-check for any releases after Aug 14, 2025.
Citations:
- 1: https://github.com/psf/black/releases
- 2: https://github.com/astral-sh/ruff-pre-commit/releases
- 3: https://pypi.org/project/flake8/?utm_source=chatgpt.com
- 4: https://pypi.org/project/pyupgrade/?utm_source=chatgpt.com
- 5: https://pypi.org/project/codespell/?utm_source=chatgpt.com
- 6: https://pypi.org/project/pre-commit-hooks/?utm_source=chatgpt.com
Bump pre-commit hook versions to latest stable releases
Verified current stable releases as of 2025-08-14 — please bump these hooks and run the hooks locally before merging.
- File: .pre-commit-config.yaml
- Lines 13-19: black — change rev 24.4.2 → 25.1.0
- Lines 27-33: ruff-pre-commit — recommend v0.12.8
- Lines 31-35: flake8 — recommend 7.3.0
- Lines 36-39: pyupgrade — recommend 3.20.0; codespell — recommend 2.4.1; pre-commit-hooks — recommend 5.0.0
Updated snippet (lines 13-19):
  - repo: https://github.com/psf/black
    rev: 25.1.0
    hooks:
      - id: black
        args: [--line-length=120]
  - repo: https://github.com/PyCQA/isort
    rev: 5.13.2
Run a quick local test (pre-commit run --all-files) after updating to confirm there are no regressions.
Description
Implements four end-to-end features to improve reproducibility, high-res workflows, and quantitative evaluation:
- Report bundle: report.zip (includes results.csv, config.json, selected grids, README.txt) with schema validation and path rewrites.
- Presets: preset_name and preset_hash recorded in run config and results.csv.
- Tiling + blend: tiled processing with --tile-size, --tile-overlap, and --blend-mode (cosine/linear) for high-res workflows.
- Metrics: optional CLIP/SSIM/LPIPS scores (None when deps missing), tests gated via markers.
Changes Made
- tools/report_bundle.py + tools/report_schema.py (deterministic zip, schema checks, path rewrite, README).
- core/presets.py (create/select/save presets, stable hash, inject preset_name/preset_hash).
- core/tiling.py (tiled processing with --tile-size, --tile-overlap, --blend-mode {cosine,linear}; SSIM seam test; see the blend-weight sketch below).
- metrics/clip_score.py, metrics/ssim_lpips.py (batched CLIP, SSIM; optional LPIPS).
- Tests: test_report_bundle.py, test_presets_e2e.py, test_tiling_blend.py, test_quality_metrics.py.
- pytest.ini under [pytest] (markers recognized).
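For the blend modes above, the usual approach is a per-edge weight ramp across the overlap band; a minimal illustrative sketch of linear vs. cosine ramps (not the exact code in core/tiling.py):

import numpy as np

def blend_weights(overlap: int, mode: str = "cosine") -> np.ndarray:
    """Per-pixel weights rising from 0 to 1 across a tile's overlap band."""
    t = np.linspace(0.0, 1.0, overlap)
    if mode == "linear":
        return t
    if mode == "cosine":
        return 0.5 - 0.5 * np.cos(np.pi * t)  # smooth ease-in/ease-out ramp
    raise ValueError(f"unknown blend mode: {mode}")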
Evidence Required ✅
UI Screenshot
- clip_score/ssim/lpips columns
Generated Image
Logs