
@nabinchha (Contributor) commented on Dec 23, 2025:

Closes #157 (Add OpenRouter as a default provider).

TODO:

  • Test with an API key

Default model configs after resetting with `data-designer config reset`:
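The screenshot itself isn't reproduced here. As a rough sketch only, assuming the reset defaults match the aliases and models exercised in the preview log below (field names mirror the model config blocks printed by the preview, not necessarily the on-disk config format), the reset maps one OpenRouter-hosted model to each of the four default aliases:

[
    {
        "alias": "openrouter-text",
        "provider": "openrouter",
        "model": "nvidia/nemotron-3-nano-30b-a3b"
    },
    {
        "alias": "openrouter-reasoning",
        "provider": "openrouter",
        "model": "openai/gpt-oss-20b"
    },
    {
        "alias": "openrouter-vision",
        "provider": "openrouter",
        "model": "nvidia/nemotron-nano-12b-v2-vl"
    },
    {
        "alias": "openrouter-embedding",
        "provider": "openrouter",
        "model": "openai/text-embedding-3-large"
    }
]

All four entries point at the 'openrouter' provider, so exercising them still requires an OpenRouter API key, hence the TODO above.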

@nabinchha nabinchha marked this pull request as ready for review December 23, 2025 03:31
@nabinchha (Contributor, Author) commented:

[10:15:14] [INFO] 🔁 Preview generation in progress
[10:15:14] [INFO] ✅ Validation passed
[10:15:14] [INFO] ⛓️ Sorting column configs into a Directed Acyclic Graph
[10:15:14] [INFO] 🩺 Running health checks for models...
[10:15:14] [INFO]   |-- 👀 Checking 'openai/text-embedding-3-large' in provider named 'openrouter' for model alias 'openrouter-embedding'...
[10:15:15] [INFO]   |-- ✅ Passed!
[10:15:15] [INFO]   |-- 👀 Checking 'nvidia/nemotron-3-nano-30b-a3b' in provider named 'openrouter' for model alias 'openrouter-text'...
[10:15:17] [INFO]   |-- ✅ Passed!
[10:15:17] [INFO]   |-- 👀 Checking 'openai/gpt-oss-20b' in provider named 'openrouter' for model alias 'openrouter-reasoning'...
[10:15:19] [INFO]   |-- ✅ Passed!
[10:15:19] [INFO]   |-- 👀 Checking 'nvidia/nemotron-nano-12b-v2-vl' in provider named 'openrouter' for model alias 'openrouter-vision'...
[10:15:20] [INFO]   |-- ✅ Passed!
[10:15:20] [INFO] 🌱 Sampling 10 records from seed dataset
[10:15:20] [INFO]   |-- seed dataset size: 10 records
[10:15:20] [INFO]   |-- sampling strategy: ordered
[10:15:20] [INFO] 🎲 Preparing samplers to generate 10 records across 1 columns
[10:15:20] [INFO] (💾 + 💾) Concatenating 2 datasets
[10:15:20] [INFO] Preparing llm-text column generation
[10:15:20] [INFO]   |-- column name: 'greetings'
[10:15:20] [INFO]   |-- model config:
{
    "alias": "openrouter-text",
    "model": "nvidia/nemotron-3-nano-30b-a3b",
    "inference_parameters": {
        "generation_type": "chat-completion",
        "max_parallel_requests": 4,
        "timeout": null,
        "extra_body": null,
        "temperature": 1.0,
        "top_p": 1.0,
        "max_tokens": null
    },
    "provider": "openrouter"
}
[10:15:20] [INFO] 🐙 Processing llm-text column 'greetings' with 4 concurrent workers
[10:15:29] [INFO] Preparing llm-text column generation
[10:15:29] [INFO]   |-- column name: 'summary'
[10:15:29] [INFO]   |-- model config:
{
    "alias": "openrouter-vision",
    "model": "nvidia/nemotron-nano-12b-v2-vl",
    "inference_parameters": {
        "generation_type": "chat-completion",
        "max_parallel_requests": 4,
        "timeout": null,
        "extra_body": null,
        "temperature": 0.85,
        "top_p": 0.95,
        "max_tokens": null
    },
    "provider": "openrouter"
}
[10:15:29] [INFO] 🐙 Processing llm-text column 'summary' with 4 concurrent workers
[10:15:55] [INFO] Preparing llm-text column generation
[10:15:55] [INFO]   |-- column name: 'greetings_reasoning'
[10:15:55] [INFO]   |-- model config:
{
    "alias": "openrouter-reasoning",
    "model": "openai/gpt-oss-20b",
    "inference_parameters": {
        "generation_type": "chat-completion",
        "max_parallel_requests": 4,
        "timeout": null,
        "extra_body": null,
        "temperature": 0.35,
        "top_p": 0.95,
        "max_tokens": null
    },
    "provider": "openrouter"
}
[10:15:55] [INFO] 🐙 Processing llm-text column 'greetings_reasoning' with 4 concurrent workers
[10:16:42] [INFO] Preparing embedding column generation
[10:16:42] [INFO]   |-- column name: 'greetings_embedding'
[10:16:42] [INFO]   |-- model config:
{
    "alias": "openrouter-embedding",
    "model": "openai/text-embedding-3-large",
    "inference_parameters": {
        "generation_type": "embedding",
        "max_parallel_requests": 4,
        "timeout": null,
        "extra_body": null,
        "encoding_format": "float",
        "dimensions": null
    },
    "provider": "openrouter"
}
[10:16:42] [INFO] 🐙 Processing embedding column 'greetings_embedding' with 4 concurrent workers
[10:16:45] [INFO] 📊 Model usage summary:
{
    "openai/text-embedding-3-large": {
        "token_usage": {
            "input_tokens": 1330,
            "output_tokens": 0,
            "total_tokens": 1330
        },
        "request_usage": {
            "successful_requests": 10,
            "failed_requests": 0,
            "total_requests": 10
        },
        "tokens_per_second": 15,
        "requests_per_minute": 7
    },
    "nvidia/nemotron-3-nano-30b-a3b": {
        "token_usage": {
            "input_tokens": 310,
            "output_tokens": 3316,
            "total_tokens": 3626
        },
        "request_usage": {
            "successful_requests": 10,
            "failed_requests": 0,
            "total_requests": 10
        },
        "tokens_per_second": 42,
        "requests_per_minute": 7
    },
    "openai/gpt-oss-20b": {
        "token_usage": {
            "input_tokens": 2153,
            "output_tokens": 20093,
            "total_tokens": 22246
        },
        "request_usage": {
            "successful_requests": 10,
            "failed_requests": 0,
            "total_requests": 10
        },
        "tokens_per_second": 263,
        "requests_per_minute": 7
    },
    "nvidia/nemotron-nano-12b-v2-vl": {
        "token_usage": {
            "input_tokens": 30748,
            "output_tokens": 8737,
            "total_tokens": 39485
        },
        "request_usage": {
            "successful_requests": 10,
            "failed_requests": 0,
            "total_requests": 10
        },
        "tokens_per_second": 467,
        "requests_per_minute": 7
    }
}
[10:16:45] [INFO] 🙈 Dropping columns: ['language']
[10:16:45] [INFO] 📐 Measuring dataset column statistics:
[10:16:45] [INFO]   |-- 🌱 column: 'uuid'
[10:16:45] [INFO]   |-- 🌱 column: 'image_filename'
[10:16:45] [INFO]   |-- 🌱 column: 'base64_image'
[10:16:45] [INFO]   |-- 🌱 column: 'page'
[10:16:45] [INFO]   |-- 🌱 column: 'options'
[10:16:45] [INFO]   |-- 🌱 column: 'source'
[10:16:45] [INFO]   |-- 📝 column: 'greetings'
[10:16:45] [INFO]   |-- 📝 column: 'greetings_reasoning'
[10:16:45] [INFO]   |-- 🧬 column: 'greetings_embedding'
[10:16:45] [INFO]   |-- 📝 column: 'summary'
[10:16:45] [INFO] 👏 Preview complete!
[screenshot of the preview output attached]

@nabinchha merged commit 3b4e296 into main on Jan 6, 2026, with 15 checks passed.
@nabinchha deleted the nmulepati/feat-157-add-openrouter-as-default-provider branch on January 6, 2026 at 17:22.
