@seacoastcreates seacoastcreates commented Aug 8, 2025

Description

Captures wall time per image for the inference block and records device information such as GPU name and driver. The UI shows a compact badge like “2.4 s per image · A6000”. The CSV includes the same fields, and the variance tolerance is documented.

Changes Made

  • Add Timer to Inference Block
  • Add GPU Info via NVML (NVIDIA Only)
  • Output to CSV (with variance data)
  • UI Badge: e.g. "2.4 s per image · A6000"
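
A minimal sketch of the timing and GPU capture described above. Helper names are illustrative and pynvml behavior varies by version; this is not the PR's exact code:

import time

def timed_inference(run_node, *args, **kwargs):
    # Wall-clock timing around a node execution; perf_counter is
    # monotonic, so it is unaffected by system clock adjustments
    start = time.perf_counter()
    result = run_node(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed

def get_gpu_info():
    # NVIDIA-only, mirroring the NVML approach; falls back to "Unknown"
    # when pynvml is missing or no GPU is present
    try:
        import pynvml
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        name = pynvml.nvmlDeviceGetName(handle)
        driver = pynvml.nvmlSystemGetDriverVersion()
        # Older pynvml releases return bytes, newer ones return str
        name = name.decode() if isinstance(name, bytes) else name
        driver = driver.decode() if isinstance(driver, bytes) else driver
        return name, driver
    except Exception:
        return "Unknown", "Unknown"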

Evidence Required ✅

UI Screenshot

Generated Image

No API credits to generate image

Logs

Using output directory: /Users/kirby/Projects/DreamLayer/Dream_Layer_Resources/output

Starting Text2Image Handler Server...
Listening for requests at http://localhost:5001/api/txt2img
ControlNet endpoints available:
  - GET /api/controlnet/models
  - POST /api/upload-controlnet-image
  - GET /api/images/<filename>
 * Serving Flask app 'txt2img_server'
 * Debug mode: on
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://127.0.0.1:5001
Press CTRL+C to quit
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 129-299-331
127.0.0.1 - - [07/Aug/2025 20:47:30] "OPTIONS /api/txt2img HTTP/1.1" 200 -
127.0.0.1 - - [07/Aug/2025 20:55:34] "OPTIONS /api/txt2img/interrupt HTTP/1.1" 200 -
127.0.0.1 - - [07/Aug/2025 20:55:34] "POST /api/txt2img/interrupt HTTP/1.1" 200 -
127.0.0.1 - - [07/Aug/2025 20:55:40] "OPTIONS /api/txt2img HTTP/1.1" 200 -

Using output directory: /Users/kirby/Projects/DreamLayer/Dream_Layer_Resources/output

Using output directory: /Users/kirby/Projects/DreamLayer/Dream_Layer_Resources/output

Starting Text2Image Handler Server...
Listening for requests at http://localhost:5001/api/txt2img
ControlNet endpoints available:
  - GET /api/controlnet/models
  - POST /api/upload-controlnet-image
  - GET /api/images/<filename>
Data: {
  "prompt": "futuristic robot exploring alien planet, sci-fi concept art",
  "negative_prompt": "",
  "model_name": "dall-e-2",
  "sampler_name": "euler",
  "scheduler": "normal",
  "steps": 20,
  "cfg_scale": 7,
  "denoising_strength": 0.75,
  "width": 512,
  "height": 512,
  "batch_size": 4,
  "batch_count": 1,
  "seed": -1,
  "random_seed": true,
  "clip_skip": 1,
  "tiling": false,
  "tile_size": 512,
  "tile_overlap": 64,
  "hires_fix": false,
  "karras_sigmas": false,
  "lora": null,
  "restore_faces": false,
  "face_restoration_model": "codeformer",
  "codeformer_weight": 0.5,
  "gfpgan_weight": 0.5,
  "hires_fix_enabled": false,
  "hires_fix_upscale_method": "upscale-by",
  "hires_fix_upscale_factor": 2.5,
  "hires_fix_hires_steps": 15,
  "hires_fix_denoising_strength": 0.5,
  "hires_fix_resize_width": 4000,
  "hires_fix_resize_height": 4000,
  "hires_fix_upscaler": "4x-ultrasharp",
  "refiner_enabled": false,
  "refiner_model": "none",
  "refiner_switch_at": 0.8,
  "custom_workflow": null
}

Key Parameters:
--------------------
Prompt: futuristic robot exploring alien planet, sci-fi concept art
Negative Prompt: 
Batch Size: 4

🎮 ControlNet Data:
--------------------
ControlNet enabled: False
No ControlNet units found

🔄 Transforming txt2img workflow:
----------------------------------------
📊 Data keys: ['prompt', 'negative_prompt', 'model_name', 'sampler_name', 'scheduler', 'steps', 'cfg_scale', 'denoising_strength', 'width', 'height', 'batch_size', 'batch_count', 'seed', 'random_seed', 'clip_skip', 'tiling', 'tile_size', 'tile_overlap', 'hires_fix', 'karras_sigmas', 'lora', 'restore_faces', 'face_restoration_model', 'codeformer_weight', 'gfpgan_weight', 'hires_fix_enabled', 'hires_fix_upscale_method', 'hires_fix_upscale_factor', 'hires_fix_hires_steps', 'hires_fix_denoising_strength', 'hires_fix_resize_width', 'hires_fix_resize_height', 'hires_fix_upscaler', 'refiner_enabled', 'refiner_model', 'refiner_switch_at', 'custom_workflow']

Batch size: 4

Mapping sampler name: euler -> euler
🎨 Using closed-source model: dall-e-2

Using model: dall-e-2
🎯 Core settings: {'prompt': 'futuristic robot exploring alien planet, sci-fi concept art', 'negative_prompt': '', 'width': 512, 'height': 512, 'batch_size': 4, 'steps': 20, 'cfg_scale': 7.0, 'sampler_name': 'euler', 'scheduler': 'normal', 'seed': 1941936085, 'ckpt_name': 'dall-e-2', 'denoise': 1.0}
🎮 ControlNet data: {}
👤 Face Restoration data: {'restore_faces': False, 'face_restoration_model': 'codeformer', 'codeformer_weight': 0.5, 'gfpgan_weight': 0.5}
🧩 Tiling data: {'tiling': False, 'tile_size': 512, 'tile_overlap': 64}
🖼️ Hires.fix data: {'hires_fix': False, 'hires_fix_upscale_method': 'upscale-by', 'hires_fix_upscale_factor': 2.5, 'hires_fix_hires_steps': 15, 'hires_fix_denoising_strength': 0.5, 'hires_fix_resize_width': 4000, 'hires_fix_resize_height': 4000, 'hires_fix_upscaler': '4x-ultrasharp'}
🖌️ Refiner data: {'refiner_enabled': False, 'refiner_model': 'none', 'refiner_switch_at': 0.8}
🔧 Use ControlNet: False
🔧 Use LoRA: None
🔧 Use Face Restoration: False
🔧 Use Tiling: False
📄 Workflow request: {'generation_flow': 'txt2img', 'model_name': 'dalle', 'controlnet': False, 'lora': None}
✅ Workflow loaded successfully
[DEBUG] Found BFL_API_KEY: 7733695b...735e
[DEBUG] Found OPENAI_API_KEY: sk-proj-...SwYA
[DEBUG] No IDEOGRAM_API_KEY found in environment
[DEBUG] Total API keys loaded: 2
[DEBUG] Created new extra_data section
[DEBUG] Scanning workflow for API nodes...
[DEBUG] Found OpenAIDalle3 node - needs OPENAI_API_KEY
[DEBUG] needed_env_keys: {'OPENAI_API_KEY'}
[DEBUG] all_api_keys keys: dict_keys(['BFL_API_KEY', 'OPENAI_API_KEY'])
[DEBUG] Using OPENAI_API_KEY for api_key_comfy_org
[DEBUG] Injected api_key_comfy_org into workflow
[DEBUG] Final extra_data: {'api_key_comfy_org': 'sk-proj-...SwYA'}
✅ API keys injected
No valid custom workflow provided, using default workflow with overrides
✅ Core settings applied
✅ Workflow transformation complete
📋 Generated workflow: {
  "prompt": {
    "1": {
      "class_type": "OpenAIDalle3",
      "inputs": {
        "prompt": "futuristic robot exploring alien planet, sci-fi concept art",
        "style": "vivid",
        "quality": "hd",
        "size": "1024x1024",
        "revised_prompt": true,
        "seed": 1941936085,
        "batch_size": 4
      }
    },
    "2": {
      "class_type": "SaveImage",
      "inputs": {
        "images": [
          "1",
          0
        ],
        "filename_prefix": "DreamLayer_DALLE"
      }
    }
  },
  "meta": {
    "description": "DALL-E Core Generation Workflow",
    "model_options": {
      "dalle_3": "OpenAIDalle3",
      "dalle_2": "OpenAIDalle2"
    },
    "core_settings": {
      "prompt": "Main generation prompt",
      "style": "Image style (vivid, natural)",
      "quality": "Image quality (standard, hd)",
      "size": "Image dimensions",
      "revised_prompt": "Let OpenAI revise the prompt",
      "seed": 1941936085,
      "batch_size": 4
    },
    "style_options": [
      "vivid",
      "natural"
    ],
    "quality_options": [
      "standard",
      "hd"
    ],
    "size_options": [
      "256x256",
      "512x512",
      "1024x1024",
      "1792x1024",
      "1024x1792"
    ]
  },
  "extra_data": {
    "api_key_comfy_org": "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"
  }
}


Tests (Optional)

# Test results

Checklist

  • UI screenshot provided
  • Generated image provided
  • Logs provided
  • Tests added (optional)
  • Code follows project style
  • Self-review completed

Summary by Sourcery

Introduce end-to-end inference tracing by measuring per-node execution time, recording GPU details, storing the data in a CSV, and exposing an API and UI badge for average inference time and GPU identification.

New Features:

  • Add wall-clock timing around each inference node and record metrics to inference_trace.csv
  • Detect GPU name and driver via NVIDIA NVML and include them in the trace
  • Calculate and append the mean, standard deviation, and tolerance of inference times when all nodes complete (see the standard-library sketch after this list)
  • Expose a new /internal/inference/badge API endpoint that returns a compact timing badge with average time and GPU name
  • Add an InferenceBadge React component to fetch and display the timing badge in the UI
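
For reference, the mean/stddev/tolerance summary can be computed with the standard library alone; the values below are illustrative, and the PR itself uses pandas/numpy:

import statistics

times = [2.31, 2.44, 2.52, 2.38]   # example per-image wall times in seconds
mean = statistics.fmean(times)
stddev = statistics.pstdev(times)  # population stddev, matching numpy's np.std default
tolerance = stddev / mean          # relative tolerance (coefficient of variation)
print(f"{mean:.2f} s per image · tolerance {tolerance:.1%}")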

Build:

  • Bump @types/react dependency to ^18.3.23

Summary by CodeRabbit

  • New Features

    • Added an Inference Badge display in the frontend, showing average inference time per image and GPU information.
    • Introduced a new API endpoint to provide inference performance badge data.
    • Backend now logs inference timings and GPU details for performance tracking.
  • Bug Fixes

    • Improved error handling and reporting during inference execution.
  • Chores

    • Updated React type definitions in frontend dependencies.

…ndling with headers, error handling, and thread safety, Fix star imports and add error handling for NVML initialization, Fix the CSV reading to match the format written by execution.py, Return a structured error message or dictionary instead, Use file locking

sourcery-ai bot commented Aug 8, 2025

Reviewer's Guide

This PR instruments the inference pipeline to record per-image execution time and GPU metadata into a persistent CSV trace, exposes an internal endpoint to compute a compact time-and-GPU badge, and integrates a frontend badge component for real-time display.

Class diagram for InferenceBadge frontend component

classDiagram
    class InferenceBadge {
      - badge: string
      + useEffect()
      + setBadge()
      + render()
    }

Class diagram for backend inference timing and CSV trace

classDiagram
    class Execution {
      + execute()
      + get_gpu_info()
      + write_inference_trace()
    }
    class CSVTrace {
      + node_id
      + inference_time
      + gpu_name
      + gpu_driver
    }
    Execution -- CSVTrace : writes to

Class diagram for API badge endpoint

classDiagram
    class InternalRoutes {
      + get_inference_badge(request)
    }

File-Level Changes

Instrument inference execution to capture wall time and GPU info and append to CSV (ComfyUI/execution.py)
  • Add get_gpu_info() to retrieve GPU name and driver
  • Wrap get_output_data() with time.perf_counter() start/end
  • Implement thread-safe CSV writing with file truncation and headers
  • Compute and append mean, stddev, tolerance at end of execution list

Add internal API endpoint for inference badge (ComfyUI/api_server/routes/internal/internal_routes.py)
  • Create /internal/inference/badge route
  • Parse CSV, skip non-numeric rows, compute average time
  • Format badge string “X.XX s per image · GPU” and return as JSON

Expose badge in frontend and update types (dream_layer_frontend/package.json, dream_layer_frontend/src/components/InferenceBadge.tsx)
  • Add InferenceBadge.tsx component to fetch and display badge
  • Handle loading, error, and abort scenarios in fetch
  • Bump @types/react dependency version



coderabbitai bot commented Aug 8, 2025

Walkthrough

A new backend API endpoint and React component were added to provide real-time inference speed and GPU information as a badge. The backend now logs inference timing and GPU details to a CSV file, and exposes this data via an internal API. The frontend displays the badge by fetching from this endpoint. Minor dependency updates were also made.

Changes

Cohort / File(s) — Change Summary

Backend Inference Badge API (ComfyUI/api_server/routes/internal/internal_routes.py)
  Added a new GET endpoint /inference/badge that reads inference timing and GPU info from a CSV file and returns a formatted badge string in JSON.

Backend Inference Timing & GPU Logging (ComfyUI/execution.py)
  Integrated GPU info retrieval, per-inference timing measurement, logging to a capped CSV file, and enhanced error handling and reporting during execution.

Frontend Inference Badge Component (dream_layer_frontend/src/components/InferenceBadge.tsx)
  Introduced a new React component that fetches and displays the inference badge from the backend API, with loading, error, and abort handling.

Frontend Dependency Update (dream_layer_frontend/package.json)
  Updated @types/react in devDependencies from ^18.3.3 to ^18.3.23.

Sequence Diagram(s)

Inference Badge Fetch Flow

sequenceDiagram
    participant User
    participant InferenceBadge (React)
    participant API Server
    participant CSV File

    User ->> InferenceBadge: Load component
    InferenceBadge ->> API Server: GET /internal/inference/badge
    API Server ->> CSV File: Read inference_trace.csv
    CSV File -->> API Server: Return timing & GPU info
    API Server -->> InferenceBadge: JSON { badge: "..." }
    InferenceBadge -->> User: Display badge

Inference Execution Logging Flow

sequenceDiagram
    participant Execution Engine
    participant GPU
    participant CSV File

    Execution Engine ->> GPU: Query name & driver
    Execution Engine ->> Execution Engine: Run node, measure wall time
    Execution Engine ->> CSV File: Append timing, GPU info

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~18 minutes

Poem

🐇
A badge of speed upon my chest,
GPU and seconds put to the test.
With logs and stats, I proudly show
How fast my inference bunnies go!
Fetch, display, and all is seen—
The quickest rabbit ever been!


@sourcery-ai sourcery-ai bot left a comment

Hey @seacoastcreates - I've reviewed your changes and found some issues that need to be addressed.

Blocking issues:

  • function get_inference_badge is defined inside a function but never used
  • Variance calculation and CSV appending may cause file corruption due to incorrect file handle.

General comments:

  • The timing and CSV I/O logic is deeply embedded in the execute function—consider extracting it into a separate tracing service or helper that appends rows rather than rewriting the file on every node execution.
  • There’s an inconsistency between your CSV header (inference_time) and the badge endpoint’s field name (wall_time_sec), and the hardcoded inference_trace.csv path reduces flexibility—align these names and make the trace file location configurable.
  • The get_gpu_info() function is never actually called before writing traces, so gpu_name/gpu_driver may be undefined—invoke it once at startup (or cache its result) instead of inside the hot execution path.
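
A minimal way to take the NVML query out of the hot path, per the last point; a sketch assuming the PR's get_gpu_info() helper:

import functools

@functools.lru_cache(maxsize=1)
def get_gpu_info_cached():
    # The NVML query runs once; every later trace write reuses the result
    return get_gpu_info()

# In the execution path, replacing the per-node call:
# gpu_name, gpu_driver = get_gpu_info_cached()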

    else:
        return str(x)

def get_gpu_info():

issue: get_gpu_info() is defined but never called in the new code.

If you intend to record GPU info in the CSV, call this function and use its output. Otherwise, consider removing it as unused code.

Comment on lines +418 to +427
if execution_list.is_empty():
    import pandas as pd
    import numpy as np

    try:
        df = pd.read_csv(CSV_PATH, names=["node_id", "inference_time", "gpu_name", "gpu_driver"])
        mean = np.mean(df["inference_time"])
        stddev = np.std(df["inference_time"])
        tolerance = stddev / mean


issue (bug_risk): Variance calculation and CSV appending may cause file corruption due to incorrect file handle.

CSV_LOCK is a threading.Lock, not a file path, so passing it to open() will cause an exception. Use CSV_PATH in the open() call instead.

Comment on lines +68 to +77
@self.routes.get('/inference/badge')
async def get_inference_badge(request):
    csv_path = "inference_trace.csv"
    if not os.path.exists(csv_path):
        return web.json_response({"badge": "N/A"})

    times = []
    gpu_name = "Unknown"

    # Read CSV with explicit fieldnames to match writing format (no header)

suggestion (bug_risk): The badge endpoint averages all rows, including possible summary rows.

Explicitly filter out rows with node_id values like 'mean', 'stddev', or 'tolerance' to ensure summary statistics are not included in the average, rather than relying on catching ValueError.

Suggested implementation:

            # Read CSV with explicit fieldnames to match writing format (no header)
            with open(csv_path, newline='') as csvfile:
                reader = csv.reader(csvfile)
                for row in reader:
                    if len(row) < 3:
                        continue
                    node_id, time, gpu = row[0], row[1], row[2]
                    # Explicitly filter out summary/statistic rows
                    if node_id in ('mean', 'stddev', 'tolerance'):
                        continue
                    try:
                        node_id_int = int(node_id)
                        times.append(float(time))
                        gpu_name = gpu
                    except ValueError:
                        continue

Comment on lines +68 to +93
@self.routes.get('/inference/badge')
async def get_inference_badge(request):
    csv_path = "inference_trace.csv"
    if not os.path.exists(csv_path):
        return web.json_response({"badge": "N/A"})

    times = []
    gpu_name = "Unknown"

    # Read CSV with explicit fieldnames to match writing format (no header)
    with open(csv_path, newline='') as csvfile:
        reader = csv.DictReader(csvfile, fieldnames=["node_id", "wall_time_sec", "gpu_name", "gpu_driver"])
        for row in reader:
            # Skip summary rows if present
            try:
                times.append(float(row["wall_time_sec"]))
            except ValueError:
                continue
            gpu_name = row["gpu_name"]

    if not times:
        return web.json_response({"badge": "N/A"})

    avg_time = sum(times) / len(times)
    badge = f"{avg_time:.2f} s per image · {gpu_name}"
    return web.json_response({"badge": badge})

security (python.lang.maintainability.useless-inner-function): function get_inference_badge is defined inside a function but never used

Source: opengrep

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (4)
ComfyUI/execution.py (2)

360-365: elapsed_time computed but not recorded if write fails

elapsed_time is only passed to write_inference_trace; if that inner call raises the outer try swallows it and continues, losing the timing. Consider early return on failure or move timing logic outside the broad try-except.
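
A sketch of moving the timing outside the broad try/except, as this comment suggests (run_node and record_trace are hypothetical callables, not the PR's names):

import logging
import time

def run_node_with_timing(node_id, run_node, record_trace):
    # Timing wraps only the node call; the trace write gets its own
    # narrow except so a failed write cannot be silently swallowed
    # together with real execution errors
    start = time.perf_counter()
    output = run_node()  # execution errors propagate normally
    elapsed = time.perf_counter() - start
    try:
        record_trace(node_id, elapsed)
    except Exception:
        logging.exception("Trace write failed for node %s; timing lost", node_id)
    return output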


366-384: CSV write reads entire file each node – O(N²)

Reading and rewriting the whole CSV on every node execution is O(rows²). Append-only with f.seek(0,2) and a simple row counter (or rotate files) would scale better.
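
A sketch of the append-only alternative suggested here, assuming a module-level lock and a hypothetical row cap; not the PR's implementation:

import csv
import os
import threading

CSV_PATH = "inference_trace.csv"   # assumed trace location
CSV_LOCK = threading.Lock()        # created once at module import, not per node
MAX_ROWS = 10_000                  # hypothetical cap to bound file growth
_row_count = 0                     # note: resets on restart; a real cap would count existing rows

def append_trace(node_id, wall_time_sec, gpu_name, gpu_driver):
    global _row_count
    with CSV_LOCK:
        if _row_count >= MAX_ROWS:
            return  # or rotate the file instead of dropping rows
        write_header = not os.path.exists(CSV_PATH)
        with open(CSV_PATH, "a", newline="") as f:
            writer = csv.writer(f)
            if write_header:
                writer.writerow(["node_id", "inference_time", "gpu_name", "gpu_driver"])
            writer.writerow([node_id, f"{wall_time_sec:.4f}", gpu_name, gpu_driver])
        _row_count += 1

Each append is O(1), so the total cost over N node executions is O(N) rather than O(N²).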

dream_layer_frontend/src/components/InferenceBadge.tsx (1)

13-23: Surfacing backend errors to UI

Right now any non-Abort error logs to console and shows a generic “⚠️ Error”. Consider exposing the actual message in a tooltip for easier debugging.

ComfyUI/api_server/routes/internal/internal_routes.py (1)

78-93: Blocking file I/O inside aiohttp handler

open() and CSV parsing are synchronous; large trace files will block the event loop. Use aiofiles or run in a thread executor.
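
One way to keep the handler non-blocking, as suggested: push the synchronous parse onto a thread executor. _read_trace is a hypothetical stand-in for the endpoint's parsing loop:

import asyncio
import csv
import os
from aiohttp import web

def _read_trace(csv_path):
    # Synchronous CSV parse, safe to run off the event loop
    times, gpu_name = [], "Unknown"
    if not os.path.exists(csv_path):
        return times, gpu_name
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) < 3:
                continue
            if row[0] in ("node_id", "mean", "stddev", "tolerance"):
                continue  # skip header and summary rows
            try:
                times.append(float(row[1]))
                gpu_name = row[2]
            except ValueError:
                continue
    return times, gpu_name

async def get_inference_badge(request):
    loop = asyncio.get_running_loop()
    times, gpu_name = await loop.run_in_executor(None, _read_trace, "inference_trace.csv")
    if not times:
        return web.json_response({"badge": "N/A"})
    avg = sum(times) / len(times)
    return web.json_response({"badge": f"{avg:.2f} s per image · {gpu_name}"})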

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c99c4bc and 67946c4.

⛔ Files ignored due to path filters (1)
  • dream_layer_frontend/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (4)
  • ComfyUI/api_server/routes/internal/internal_routes.py (2 hunks)
  • ComfyUI/execution.py (6 hunks)
  • dream_layer_frontend/package.json (1 hunks)
  • dream_layer_frontend/src/components/InferenceBadge.tsx (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-07-14T22:55:51.063Z
Learnt from: rockerBOO
PR: DreamLayer-AI/DreamLayer#28
File: dream_layer_frontend/src/components/WorkflowCustomNode.tsx:88-106
Timestamp: 2025-07-14T22:55:51.063Z
Learning: In the DreamLayer frontend codebase, the team prefers to rely on TypeScript's type system for data validation rather than adding defensive programming checks, when the types are well-defined and data flow is controlled.

Applied to files:

  • dream_layer_frontend/package.json
📚 Learning: 2025-07-14T23:02:36.431Z
Learnt from: rockerBOO
PR: DreamLayer-AI/DreamLayer#28
File: dream_layer_frontend/package.json:44-44
Timestamp: 2025-07-14T23:02:36.431Z
Learning: xyflow/react version 12.8.2 is a valid and current version available on the npm registry as of July 2025, contrary to previous incorrect information suggesting it wasn't available.

Applied to files:

  • dream_layer_frontend/package.json
🪛 Ruff (0.12.2)

ComfyUI/api_server/routes/internal/internal_routes.py

67-67: Blank line contains whitespace (W293)

ComfyUI/execution.py

269-269: Blank line contains whitespace (W293)
272-272: Trailing whitespace (W291)
364-364: Local variable elapsed_time is assigned to but never used (F841)
366-366: Trailing whitespace (W291)
370-370: Blank line contains whitespace (W293)
391-391: Blank line contains whitespace (W293)
430-430: Undefined name gpu_name (F821)
430-430: Undefined name gpu_driver (F821)
431-431: Undefined name gpu_name (F821)
431-431: Undefined name gpu_driver (F821)
432-432: Undefined name gpu_name (F821)
432-432: Undefined name gpu_driver (F821)

🔇 Additional comments (2)
dream_layer_frontend/package.json (1)

69-71: Dev-dependency bump looks good

The upgrade to @types/react@^18.3.23 is safe and aligns with the current React 18 line.

ComfyUI/execution.py (1)

366-391: Lock is local and path variable misused

  1. CSV_LOCK = threading.Lock() is re-created for every node ⇒ no mutual exclusion across threads/processes.
  2. Inside the lock you do with open(CSV_LOCK, "a") (later) which passes the lock object as a file path — will raise TypeError.
  3. CSV_PATH / column names mismatch (inference_time vs wall_time_sec).
-CSV_LOCK = threading.Lock()
+CSV_LOCK = globals().setdefault("_INFERENCE_CSV_LOCK", threading.Lock())

...
-with open(CSV_LOCK, "a") as f:
+with open(CSV_PATH, "a") as f:

And keep field names consistent with the header written.

Likely an incorrect or invalid review comment.

Comment on lines +79 to +82
reader = csv.DictReader(csvfile, fieldnames=["node_id", "wall_time_sec", "gpu_name", "gpu_driver"])
for row in reader:
    # Skip summary rows if present
    try:

⚠️ Potential issue

Field-name mismatch with writer

Writer uses inference_time, reader expects wall_time_sec; average will always be empty when names differ. Align column names.

🤖 Prompt for AI Agents
In ComfyUI/api_server/routes/internal/internal_routes.py around lines 79 to 82,
the CSV reader uses the field name "wall_time_sec" while the writer uses
"inference_time", causing a mismatch and resulting in empty averages. To fix
this, update the reader's fieldnames list to use "inference_time" instead of
"wall_time_sec" so that the column names align correctly.

Comment on lines +270 to +280
def get_gpu_info():
    try:
        import pynvml
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        gpu_name = pynvml.nvmlDeviceGetName(handle).decode()
        gpu_driver = pynvml.nvmlSystemGetDriverVersion().decode()
    except Exception:
        gpu_name = "Unknown"
        gpu_driver = "Unknown"
    return gpu_name, gpu_driver

🛠️ Refactor suggestion

get_gpu_info() never used

The helper returns GPU data, but nothing assigns gpu_name, gpu_driver = get_gpu_info() before they are referenced later, leading to a NameError.

+            gpu_name, gpu_driver = get_gpu_info()

Place this immediately before start_time = time.perf_counter().

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Ruff (0.12.2)

272-272: Trailing whitespace (W291)

🤖 Prompt for AI Agents
In ComfyUI/execution.py around lines 270 to 280, the function get_gpu_info() is
defined but never called, causing gpu_name and gpu_driver to be undefined later
and resulting in a NameError. To fix this, add a call to get_gpu_info() and
assign its returned values to gpu_name and gpu_driver variables immediately
before the line where start_time = time.perf_counter() is set, ensuring these
variables are properly initialized before use.

Comment on lines +422 to +434
try:
    df = pd.read_csv(CSV_PATH, names=["node_id", "inference_time", "gpu_name", "gpu_driver"])
    mean = np.mean(df["inference_time"])
    stddev = np.std(df["inference_time"])
    tolerance = stddev / mean

    with CSV_LOCK:
        with open(CSV_LOCK, "a") as f:
            f.write(f"mean,{mean:.4f},{gpu_name},{gpu_driver}\n")
            f.write(f"stddev,{stddev:.4f},{gpu_name},{gpu_driver}\n")
            f.write(f"tolerance,{tolerance:.4f},{gpu_name},{gpu_driver}\n")
except Exception as e:
    logging.error(f"Failed to append variance to CSV: {e}")

⚠️ Potential issue

Undefined identifiers & summary write bug

gpu_name and gpu_driver used here are out of scope (see first comment) and will crash.
Additionally, with open(CSV_LOCK, "a") repeats the path error.

🧰 Tools
🪛 Ruff (0.12.2)

430-430: Undefined name gpu_name (F821)
430-430: Undefined name gpu_driver (F821)
431-431: Undefined name gpu_name (F821)
431-431: Undefined name gpu_driver (F821)
432-432: Undefined name gpu_name (F821)
432-432: Undefined name gpu_driver (F821)

🤖 Prompt for AI Agents
In ComfyUI/execution.py around lines 422 to 434, the variables gpu_name and
gpu_driver are used but not defined in this scope, causing a crash. Also, the
code mistakenly uses CSV_LOCK as the file path in the open() call instead of the
CSV_PATH. To fix this, ensure gpu_name and gpu_driver are properly retrieved or
passed into this block before use, and replace CSV_LOCK with CSV_PATH in the
open() call to append to the correct CSV file.
