Conversation

@doxav (Contributor) commented Jun 16, 2025

ADDED: multi-LLM support via LLMFactory (fully backward compatible), with a demonstration implementation in OptoPrimeMulti and an associated test

@chinganc (Member)
Can you change this PR to merge into the experimental branch?

@doxav doxav changed the base branch from main to experimental June 17, 2025 20:29
@chinganc (Member)
What is the expected way to call register_profile to modify the behavior of LLM (for default LLM applications)? Is the following the intended usage?

from opto.utils.llm import LLM, LLMFactory

LLMFactory.register_profile(new_backend_1, new_param_1)
LLMFactory.register_profile(new_backend_2, new_param_2)

llm_1 = LLM(profile=new_param_1)
llm_2 = LLM(profile=new_param_2)

@doxav (Contributor, Author) commented Jun 19, 2025

Correct Usage

# Register new profiles
LLMFactory.register_profile("custom_openai", "LiteLLM", model="gpt-4o", temperature=0.7, max_tokens=2000)
LLMFactory.register_profile("custom_claude", "LiteLLM", model="claude-3-5-sonnet-latest", temperature=0.3)
LLMFactory.register_profile("local_llama", "CustomLLM", model="llama-3.1-70b")

# Use the registered profiles
llm_1 = LLM(profile="custom_openai") 
llm_2 = LLM(profile="custom_claude")
llm_3 = LLM(profile="local_llama")
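For intuition, the factory behaves like a name-keyed registry: register_profile stores a backend name plus keyword defaults under a profile name, and LLM(profile=...) looks that entry up at construction time. Here is a minimal toy sketch of that pattern (the class body and the get_profile helper below are illustrative assumptions, not the actual opto.utils.llm implementation):

```python
# Toy registry-based factory illustrating the register/lookup pattern.
# This is a sketch, NOT the real opto.utils.llm.LLMFactory.
class LLMFactory:
    _profiles = {}

    @classmethod
    def register_profile(cls, name, backend, **params):
        # Store the backend name and keyword defaults under the profile name.
        # Registering an existing name overwrites the previous entry.
        cls._profiles[name] = (backend, params)

    @classmethod
    def get_profile(cls, name):
        # Look up a registered profile; raises KeyError for unknown names.
        return cls._profiles[name]


LLMFactory.register_profile("custom_openai", "LiteLLM",
                            model="gpt-4o", temperature=0.7)
backend, params = LLMFactory.get_profile("custom_openai")
print(backend, params["model"])  # LiteLLM gpt-4o
```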

Pre-defined Profiles Available

The code comes with these built-in profiles:

llm_default = LLM(profile="default")     # gpt-4o-mini
llm_premium = LLM(profile="premium")     # gpt-4  
llm_cheap = LLM(profile="cheap")         # gpt-4o-mini
llm_fast = LLM(profile="fast")           # gpt-3.5-turbo-mini
llm_reasoning = LLM(profile="reasoning") # o1-mini

You can override those built-in profiles:

LLMFactory.register_profile("default", "LiteLLM", model="gpt-4o", temperature=0.5)
LLMFactory.register_profile("premium", "LiteLLM", model="o1-preview", max_tokens=8000)
LLMFactory.register_profile("cheap", "LiteLLM", model="gpt-3.5-turbo", temperature=0.9)
LLMFactory.register_profile("fast", "LiteLLM", model="gpt-3.5-turbo", max_tokens=500)
LLMFactory.register_profile("reasoning", "LiteLLM", model="o1-preview")
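Because each profile name maps to a single registry entry, overriding a built-in profile is just re-registration: the later call replaces the stored settings, and LLM instances created afterwards pick up the new values. A toy dict-based sketch of that overwrite behavior (illustrative only, not the opto internals):

```python
# Toy profile registry showing that re-registering a name overwrites it.
# Illustrative only; the real factory lives in opto.utils.llm.
profiles = {}

def register_profile(name, backend, **params):
    profiles[name] = {"backend": backend, **params}

# Built-in registration, then a user override of the same name.
register_profile("default", "LiteLLM", model="gpt-4o-mini")
register_profile("default", "LiteLLM", model="gpt-4o", temperature=0.5)

print(profiles["default"]["model"])  # gpt-4o (the later registration wins)
```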

Examples with Different Backends

# Register custom profiles for different use cases
LLMFactory.register_profile("advanced_reasoning", "LiteLLM", model="o1-preview", max_tokens=4000)
LLMFactory.register_profile("claude_sonnet", "LiteLLM", model="claude-3-5-sonnet-latest", temperature=0.3)
LLMFactory.register_profile("custom_server", "CustomLLM", model="llama-3.1-8b")

# Use in different contexts
reasoning_llm = LLM(profile="advanced_reasoning")  # For complex reasoning
claude_llm = LLM(profile="claude_sonnet")          # For Claude responses
local_llm = LLM(profile="custom_server")           # For local deployment

# Single LLM optimizer with custom profile
optimizer1 = OptoPrime(parameters, llm=LLM(profile="advanced_reasoning"))

# Multi-LLM optimizer with multiple profiles
optimizer2 = OptoPrimeMulti(parameters, llm_profiles=["cheap", "premium", "claude_sonnet"], generation_technique="multi_llm")

@chinganc chinganc merged commit 2ff3c5a into AgentOpt:experimental Jun 24, 2025