A flexible command-line interface for seamlessly switching between multiple LLM providers. Chat with different AI models (Google Gemini and Mistral AI) through a unified CLI with provider selection at runtime.
This project provides an elegant abstraction layer for interacting with multiple Large Language Models (LLMs) through a simple command-line interface. Built with LangChain, it enables developers to test and compare different AI providers without changing code—just select your provider when launching the CLI.
- Multi-Provider Support: Seamlessly switch between Google Gemini and Mistral AI
- Provider Selection at Runtime: Choose your preferred LLM provider via interactive prompts
- Environment-Based Configuration: Secure API key management using `.env` files
- Configurable Models & Temperature: Support for custom model selection and temperature tuning
- Interactive Chat Interface: Clean, intuitive CLI for conversational AI interactions
- Modular Architecture: Well-organized codebase with separation of concerns
- LangChain - Framework for building LLM applications
- LangChain Google GenAI - Google Gemini integration
- LangChain Mistral AI - Mistral AI integration
- python-dotenv - Environment variable management
- Python 3.8+
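For reference, a minimal `requirements.txt` covering this stack might look like the following sketch; the package names are the official PyPI distributions, but exact version pins are an assumption left to the reader:

```
langchain
langchain-google-genai
langchain-mistralai
python-dotenv
```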
```
Multi-Model-CLI-Assistant/
├── main.py                  # Entry point for the CLI application
├── requirements.txt         # Python dependencies
├── run.sh                   # Convenience script to run the CLI
├── test.py                  # Test utilities
├── common_configs/          # Shared configuration module
│   ├── constants.py         # API keys and model configs (from .env)
│   ├── utils.py             # Utility functions (provider initialization)
│   └── __pycache__/
├── gen_ai/                  # AI/LLM configuration module
│   ├── llm_config.py        # LLM provider setup and chat functions
│   ├── prompts.py           # Prompt templates (extensible)
│   ├── __pycache__/
│   └── agent_config/        # Agent configuration (expandable)
│       ├── agent.py         # Agent logic (future enhancement)
│       └── tools.py         # Agent tools (future enhancement)
└── __pycache__/
```
- Python 3.8 or higher
- Google Gemini API key (from Google AI Studio)
- Mistral AI API key (from Mistral Console)
1. Clone the repository

   ```bash
   git clone <repository-url>
   cd Multi-Model-CLI-Assistant
   ```

2. Create a virtual environment

   ```bash
   python3 -m venv .venv
   source .venv/bin/activate
   ```

3. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

4. Configure environment variables

   Create a `.env` file in the project root:

   ```env
   GEMINI_API_KEY=your_gemini_api_key_here
   GEMINI_MODEL=gemini-pro          # or latest available model
   MISTRAL_API_KEY=your_mistral_api_key_here
   MISTRAL_MODEL=mistral-small      # or your preferred Mistral model
   ```
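For reference, `constants.py` can expose these values via python-dotenv. The sketch below is illustrative and assumes the variable names from the `.env` example above; the actual module may differ:

```python
# common_configs/constants.py -- illustrative sketch
import os
from dotenv import load_dotenv

load_dotenv()  # read .env from the project root

GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
GEMINI_MODEL = os.getenv("GEMINI_MODEL", "gemini-pro")
MISTRAL_API_KEY = os.getenv("MISTRAL_API_KEY")
MISTRAL_MODEL = os.getenv("MISTRAL_MODEL", "mistral-small")
```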
Option 1: Using the provided script

```bash
sh run.sh
```

Option 2: Direct Python execution

```bash
python3 -m main
```

Once launched, the CLI will prompt you to select a provider:
```
> Type '/exit' to exit the CLI
> Please choose your preferred provider:
1. Mistral
2. Gemini
>
```
Select your provider and start chatting:

```
You: > What is machine learning?
Model: > Machine learning is a subset of artificial intelligence that focuses on...
```
Exit the CLI:

```
You: > /exit
Powering Down CLI Agent, Bye!
```
- `constants.py`: Loads and exports API keys and model names from environment variables
- `utils.py`: Provides the `initialize_chat()` function for interactive provider selection (a sketch follows below)
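A minimal sketch of what `initialize_chat()` could look like, assuming it returns the chat wrapper for the selected provider; the real implementation may structure this differently:

```python
# common_configs/utils.py -- illustrative sketch
from gen_ai.llm_config import chat_mistral, chat_gemini

def initialize_chat():
    """Prompt the user for a provider and return its chat function."""
    print("> Type '/exit' to exit the CLI")
    print("> Please choose your preferred provider:")
    print("1. Mistral")
    print("2. Gemini")
    choice = input("> ").strip()
    return chat_mistral if choice == "1" else chat_gemini
```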
- `llm_config.py`:
  - `get_llm()`: Factory function for creating LLM instances
  - Instantiates both Mistral and Gemini models
  - Provides `chat_mistral()` and `chat_gemini()` wrapper functions (see the sketch after this list)
  - Temperature set to 0 for deterministic responses
- `prompts.py`: Extensible module for prompt templates (currently empty, ready for enhancement)
- `agent.py`: Reserved for agentic AI workflows
- `tools.py`: Reserved for tool/function calling capabilities
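The sketch below shows one way `llm_config.py` could wire this up using the real LangChain classes (`ChatGoogleGenerativeAI` and `ChatMistralAI`); the exact wiring is an assumption, not the project's verbatim code:

```python
# gen_ai/llm_config.py -- illustrative sketch
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_mistralai import ChatMistralAI
from common_configs.constants import (
    GEMINI_API_KEY, GEMINI_MODEL, MISTRAL_API_KEY, MISTRAL_MODEL,
)

def get_llm(provider, temperature=0, model=None):
    """Factory: return a configured LLM instance for the given provider."""
    if provider == "gemini":
        return ChatGoogleGenerativeAI(
            model=model or GEMINI_MODEL,
            google_api_key=GEMINI_API_KEY,
            temperature=temperature,
        )
    if provider == "mistral":
        return ChatMistralAI(
            model=model or MISTRAL_MODEL,
            mistral_api_key=MISTRAL_API_KEY,
            temperature=temperature,
        )
    raise ValueError(f"Unknown provider: {provider}")

def chat_gemini(prompt):
    # .invoke() returns an AIMessage; .content holds the text reply
    return get_llm("gemini").invoke(prompt).content

def chat_mistral(prompt):
    return get_llm("mistral").invoke(prompt).content
```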
```
main.py
└─> initialize_chat() [utils.py]
    └─> User selects provider (Mistral/Gemini)
        └─> get_llm() [llm_config.py]
            └─> Returns appropriate chat function
                └─> User sends prompts
                    └─> LLM response processed & displayed
```
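Put together, the `main.py` loop could be as simple as the following sketch, assuming `initialize_chat()` returns a chat callable as in the earlier sketches:

```python
# main.py -- illustrative sketch
from common_configs.utils import initialize_chat

def main():
    chat = initialize_chat()  # interactive provider selection
    while True:
        prompt = input("You: > ")
        if prompt.strip() == "/exit":
            print("Powering Down CLI Agent, Bye!")
            break
        print(f"Model: > {chat(prompt)}")

if __name__ == "__main__":
    main()
```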
1. Update `.env` with the new provider's credentials:

   ```env
   NEW_PROVIDER_API_KEY=your_key
   NEW_PROVIDER_MODEL=model_name
   ```

2. Extend `llm_config.py`:

   ```python
   from langchain_new_provider import ChatNewProvider

   def get_llm(provider, temperature=0, model=None):
       if provider == 'mistral':
           ...  # existing Mistral case
       elif provider == 'gemini':
           ...  # existing Gemini case
       # Add the new provider case:
       elif provider == 'new_provider':
           llm = ChatNewProvider(
               api_key=NEW_PROVIDER_API_KEY,
               model=model or DEFAULT_MODEL,
               temperature=temperature,
           )
   ```

3. Update `utils.py` to add the new provider to the selection options.
Modify temperature (randomness) in `llm_config.py`:

- `temperature=0`: Deterministic responses (current)
- `temperature=0.7`: Balanced creativity
- `temperature=1.0`: Maximum randomness
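For example, assuming the `get_llm()` signature sketched earlier, a more creative response could be requested at a hypothetical call site like this:

```python
# Request a less deterministic Gemini reply (illustrative usage)
llm = get_llm("gemini", temperature=0.7)
print(llm.invoke("Explain machine learning with a creative analogy").content)
```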