OpenAI fixes #2
Conversation
… and configuration organization
- Modified docker-compose.yml to allow environment variables for `USE_OLLAMA` and `OPENAI_API_KEY`, enhancing flexibility in configuration.
- Updated the Dockerfile to copy the config directory, improving application structure and maintainability.
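The docker-compose change described above amounts to standard Compose variable interpolation. A minimal sketch (the service name and defaults here are illustrative, not taken from this repository's actual compose file):

```yaml
services:
  graphiti:
    build: .
    environment:
      # Values from the host shell override these; the :-default syntax
      # keeps a working fallback when the variable is unset.
      USE_OLLAMA: ${USE_OLLAMA:-true}
      OPENAI_API_KEY: ${OPENAI_API_KEY:-}
```

With this in place, `OPENAI_API_KEY=sk-... docker compose up` injects the key without editing the file.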
…nt variables
- Introduced a `from_yaml_and_env` method in `GraphitiEmbedderConfig` to load configuration from both YAML files and environment variables, enhancing flexibility and maintainability.
- Updated the Ollama and OpenAI configuration logic to use the new loading method, improving error handling and configuration management.
- Modified `from_cli_and_env` to start from the new YAML+env based configuration, so CLI arguments can still override as needed.
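The layering described in this commit (YAML defaults first, environment variables on top, CLI last) can be sketched as follows. This is a hypothetical illustration, not the PR's actual code: the class name is simplified, the fields are invented, and the YAML data is passed in as an already-parsed dict to keep the sketch self-contained.

```python
import os
from dataclasses import dataclass


@dataclass
class EmbedderConfig:
    # Invented example fields; the real GraphitiEmbedderConfig differs.
    provider: str = "openai"
    model: str = "text-embedding-3-small"

    @classmethod
    def from_yaml_and_env(cls, yaml_data: dict) -> "EmbedderConfig":
        # Layer 1: YAML values override the dataclass defaults.
        fields = cls.__dataclass_fields__
        cfg = cls(**{k: v for k, v in yaml_data.items() if k in fields})
        # Layer 2: environment variables override YAML.
        if os.getenv("USE_OLLAMA", "").lower() == "true":
            cfg.provider = "ollama"
        return cfg
```

A `from_cli_and_env` wrapper would then apply CLI arguments as the final layer on top of the object this returns.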
Temperature is now only set in LLMConfig if the OPENAI_ENABLE_TEMPERATURE environment variable is explicitly set to true. This allows for more control over when the temperature parameter is included in requests to OpenAI and Azure OpenAI clients.
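The opt-in behavior described above can be sketched like this; the helper name and argument shape are illustrative, not the actual `LLMConfig` code:

```python
import os


def build_llm_kwargs(model: str, temperature: float = 0.0) -> dict:
    """Build keyword arguments for an OpenAI/Azure OpenAI client call."""
    kwargs = {"model": model}
    # Only include `temperature` when explicitly opted in, so the API's
    # own default applies otherwise (some models reject the parameter).
    if os.getenv("OPENAI_ENABLE_TEMPERATURE", "").lower() == "true":
        kwargs["temperature"] = temperature
    return kwargs
```

Leaving the key out entirely, rather than sending a default value, is what prevents accidental overriding of the API's defaults.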
Updated GraphitiConfig to load the embedder configuration using from_yaml_and_env() instead of from_env(), aligning it with the LLM configuration method. This change ensures consistent configuration loading from both YAML files and environment variables.
Replaces ci-cd.yml and manual-test.yml with a simplified ci.yml workflow focused on code quality checks. Adds a Justfile to standardize development commands and automation tasks. This streamlines CI processes and improves local developer experience.
Allows the EMBEDDER_MODEL_NAME environment variable to override the model selection for OpenAI and Azure OpenAI providers, improving backward compatibility. Ollama provider is not affected by this change.
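The provider-scoped override reads as a small resolution function; the defaults table and function name below are assumptions for illustration, not the repository's actual code:

```python
import os

# Illustrative defaults; the real values live in the provider YAML files.
DEFAULT_EMBEDDER_MODELS = {
    "openai": "text-embedding-3-small",
    "azure_openai": "text-embedding-3-small",
    "ollama": "nomic-embed-text",
}


def resolve_embedder_model(provider: str) -> str:
    override = os.getenv("EMBEDDER_MODEL_NAME")
    # The override applies to OpenAI and Azure OpenAI only; Ollama keeps
    # its configured model, per the change described above.
    if override and provider in ("openai", "azure_openai"):
        return override
    return DEFAULT_EMBEDDER_MODELS[provider]
```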
Pull Request Overview
This pull request introduces significant improvements to configuration management and environment variable handling, with a primary focus on making the system more flexible and robust. The changes modernize how configuration is loaded and managed across different components.
- Updated embedder configuration to support both YAML and environment variables, aligning with LLM configuration patterns
- Enhanced Docker and dependency management with environment variable expansion and version updates
- Improved OpenAI client parameter handling with conditional temperature configuration
Reviewed Changes
Copilot reviewed 12 out of 13 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| src/config/embedder_config.py | Refactored to support YAML-based configuration with environment fallbacks |
| src/config/server_config.py | Updated to use unified YAML+env configuration approach |
| src/config/llm_config.py | Added conditional temperature parameter handling for OpenAI clients |
| src/ollama_client.py | Enhanced compatibility by accepting and ignoring unsupported parameters |
| pyproject.toml | Updated graphiti-core dependency to version 0.21.0 |
| docker-compose.yml | Added environment variable expansion for USE_OLLAMA and OPENAI_API_KEY |
| config/providers/openai.yml | Updated default models to gpt-5-mini/gpt-5-nano with increased token limits |
| Dockerfile | Added config directory copy to Docker image |
| Justfile | Added new build automation configuration file |
| .github/workflows/ | Simplified CI workflow and removed complex CI/CD pipeline |
This pull request introduces several improvements to configuration management and environment variable handling, with a focus on making the system more flexible and robust. The main changes include supporting YAML-based configuration for embedders, updating model defaults, and improving the way environment variables are handled for both Docker and runtime configuration. Additionally, there are enhancements to how optional parameters are managed for LLM clients and improvements to dependency versions.
Configuration and Environment Management Improvements:
- Embedder and server configuration now load from YAML files with environment variable overrides (src/config/embedder_config.py [1] [2] [3]; src/config/server_config.py [4] [5]).
- The Dockerfile and docker-compose.yml now use environment variable expansion for several keys, allowing easier overrides and more consistent configuration across environments. (Dockerfile [1]; docker-compose.yml [2])

Model and Dependency Updates:
- The default OpenAI models are now `gpt-5-mini` and `gpt-5-nano`, with a higher `max_tokens` value. (config/providers/openai.yml L7-R11)
- The `graphiti-core` dependency has been updated to version 0.21.0. (pyproject.toml L27-R27)

LLM Client Parameter Handling:
- The `temperature` parameter is now only set for OpenAI and Azure OpenAI clients if the `OPENAI_ENABLE_TEMPERATURE` environment variable is set to `"true"`, preventing accidental overriding of API defaults. (src/config/llm_config.py [1] [2])

Ollama Client Compatibility Enhancement:
- The Ollama client now accepts and ignores unsupported parameters (such as `reasoning`) in structured completion calls, improving compatibility with callers that may pass extra parameters. (src/ollama_client.py [1] [2])
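The accept-and-ignore pattern above can be sketched as a keyword-argument filter; the class, method, and parameter set here are illustrative stand-ins, not the actual `src/ollama_client.py` API:

```python
class OllamaClient:
    # Hypothetical whitelist of parameters the backend understands.
    SUPPORTED = {"model", "messages", "temperature", "format"}

    def structured_completion(self, **kwargs):
        # Silently drop parameters Ollama does not understand (such as
        # `reasoning`) so callers written against OpenAI-style clients
        # keep working without changes.
        accepted = {k: v for k, v in kwargs.items() if k in self.SUPPORTED}
        return accepted  # stand-in for the real completion call
```

Filtering at the boundary like this keeps the caller's code provider-agnostic, at the cost of silently ignoring typos in parameter names.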