Multi-Model CLI Assistant

A flexible command-line interface for seamlessly switching between multiple LLM providers. Chat with different AI models (Google Gemini and Mistral AI) through a unified CLI with provider selection at runtime.

Overview

This project provides an elegant abstraction layer for interacting with multiple Large Language Models (LLMs) through a simple command-line interface. Built with LangChain, it enables developers to test and compare different AI providers without changing code—just select your provider when launching the CLI.

Features

  • Multi-Provider Support: Seamlessly switch between Google Gemini and Mistral AI
  • Provider Selection at Runtime: Choose your preferred LLM provider via interactive prompts
  • Environment-Based Configuration: Secure API key management using .env files
  • Configurable Models & Temperature: Support for custom model selection and temperature tuning
  • Interactive Chat Interface: Clean, intuitive CLI for conversational AI interactions
  • Modular Architecture: Well-organized codebase with separation of concerns

Tech Stack

Core Dependencies

  • LangChain (unified chat-model abstraction layer)
  • LangChain provider integrations for Google Gemini and Mistral AI
  • python-dotenv (or similar) for loading API keys and model names from .env

Language

  • Python 3.8+
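
For reference, the dependency file for this stack typically contains entries like the sketch below. The exact package names and versions are assumptions here; the repository's requirements.txt is authoritative.

    langchain
    langchain-google-genai
    langchain-mistralai
    python-dotenv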

Project Structure

Multi-Model-CLI-Assistant/
├── main.py                    # Entry point for the CLI application
├── requirements.txt           # Python dependencies
├── run.sh                     # Convenience script to run the CLI
├── test.py                    # Test utilities
├── common_configs/            # Shared configuration module
│   ├── constants.py          # API keys and model configs (from .env)
│   ├── utils.py              # Utility functions (provider initialization)
│   └── __pycache__/
├── gen_ai/                    # AI/LLM configuration module
│   ├── llm_config.py         # LLM provider setup and chat functions
│   ├── prompts.py            # Prompt templates (extensible)
│   ├── __pycache__/
│   └── agent_config/         # Agent configuration (expandable)
│       ├── agent.py          # Agent logic (future enhancement)
│       └── tools.py          # Agent tools (future enhancement)
└── __pycache__/

Installation

Prerequisites

  • Python 3.8 or later
  • API keys for Google Gemini and Mistral AI

Setup

  1. Clone the repository

    git clone <repository-url>
    cd Multi-Model-CLI-Assistant
  2. Create a virtual environment

    python3 -m venv .venv
    source .venv/bin/activate
  3. Install dependencies

    pip install -r requirements.txt
  4. Configure environment variables

    Create a .env file in the project root:

    GEMINI_API_KEY=your_gemini_api_key_here
    GEMINI_MODEL=gemini-pro  # or latest available model
    
    MISTRAL_API_KEY=your_mistral_api_key_here
    MISTRAL_MODEL=mistral-small  # or your preferred Mistral model

Usage

Quick Start

Option 1: Using the provided script

sh run.sh

Option 2: Direct Python execution

python3 -m main

Interactive Session

Once launched, the CLI will prompt you to select a provider:

> Type '/exit' to exit the CLI
> Please choose your preferred provider:
    1. Mistral
    2. Gemini
>

Select your provider and start chatting:

You: > What is machine learning?
Model: > Machine learning is a subset of artificial intelligence that focuses on...

Exit the CLI:

You: > /exit
Powering Down CLI Agent, Bye!

Architecture

Module Breakdown

common_configs/

  • constants.py: Loads and exports API keys and model names from environment variables
  • utils.py: Provides initialize_chat() function for interactive provider selection
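
As a rough illustration, constants.py can load these values with python-dotenv along the lines of the sketch below; variable names mirror the .env keys from the Installation section, and the actual module may differ.

    # constants.py (illustrative sketch, assuming python-dotenv)
    import os
    from dotenv import load_dotenv

    load_dotenv()  # read the .env file from the project root

    GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
    GEMINI_MODEL = os.getenv("GEMINI_MODEL", "gemini-pro")

    MISTRAL_API_KEY = os.getenv("MISTRAL_API_KEY")
    MISTRAL_MODEL = os.getenv("MISTRAL_MODEL", "mistral-small")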

gen_ai/

  • llm_config.py:
    • get_llm(): Factory function for creating LLM instances
    • Instantiates both Mistral and Gemini models
    • Provides chat_mistral() and chat_gemini() wrapper functions
    • Temperature set to 0 for deterministic responses
  • prompts.py: Extensible module for prompt templates (currently empty, ready for enhancement)
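
A minimal sketch of what such a factory typically looks like with LangChain's ChatMistralAI and ChatGoogleGenerativeAI classes is shown below; it is illustrative only, and the real llm_config.py may be structured differently.

    # llm_config.py (illustrative sketch)
    from langchain_google_genai import ChatGoogleGenerativeAI
    from langchain_mistralai import ChatMistralAI

    from common_configs.constants import (
        GEMINI_API_KEY, GEMINI_MODEL, MISTRAL_API_KEY, MISTRAL_MODEL,
    )

    def get_llm(provider, temperature=0, model=None):
        """Factory returning a configured chat model for the chosen provider."""
        if provider == "mistral":
            return ChatMistralAI(
                model=model or MISTRAL_MODEL,
                api_key=MISTRAL_API_KEY,
                temperature=temperature,
            )
        if provider == "gemini":
            return ChatGoogleGenerativeAI(
                model=model or GEMINI_MODEL,
                google_api_key=GEMINI_API_KEY,
                temperature=temperature,
            )
        raise ValueError(f"Unknown provider: {provider}")

    def chat_mistral(prompt):
        return get_llm("mistral", temperature=0).invoke(prompt).content

    def chat_gemini(prompt):
        return get_llm("gemini", temperature=0).invoke(prompt).content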

agent_config/ (Framework for future features)

  • agent.py: Reserved for agentic AI workflows
  • tools.py: Reserved for tool/function calling capabilities

Data Flow

main.py
  └─> initialize_chat() [utils.py]
      └─> User selects provider (Mistral/Gemini)
          └─> get_llm() [llm_config.py]
              └─> Returns appropriate chat function
                  └─> User sends prompts
                      └─> LLM response processed & displayed
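
In code, this flow corresponds to a loop along these lines (an illustrative sketch; names follow the module descriptions above and may not match main.py exactly):

    # main.py (illustrative sketch of the chat loop)
    from common_configs.utils import initialize_chat

    def main():
        chat_fn = initialize_chat()  # prompts for Mistral/Gemini, returns the matching chat function
        while True:
            prompt = input("You: > ")
            if prompt.strip() == "/exit":
                print("Powering Down CLI Agent, Bye!")
                break
            print(f"Model: > {chat_fn(prompt)}")

    if __name__ == "__main__":
        main()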

Development Guide

Adding a New LLM Provider

  1. Update .env with the new provider's credentials:

    NEW_PROVIDER_API_KEY=your_key
    NEW_PROVIDER_MODEL=model_name
  2. Extend llm_config.py:

    from langchain_new_provider import ChatNewProvider  # hypothetical provider package

    def get_llm(provider, temperature, model=None):
        if provider == 'mistral':
            ...  # existing Mistral branch
        elif provider == 'gemini':
            ...  # existing Gemini branch
        # Add the new provider case
        elif provider == 'new_provider':
            llm = ChatNewProvider(
                api_key=NEW_PROVIDER_API_KEY,
                model=model or NEW_PROVIDER_MODEL,
                temperature=temperature
            )
        return llm
  3. Update utils.py to add the new provider to selection options
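
For example, the provider menu in initialize_chat() might gain a third option along these lines (an illustrative sketch; chat_new_provider is a hypothetical wrapper, and the actual utils.py may differ):

    # utils.py (sketch): extend the interactive provider selection
    from gen_ai.llm_config import chat_mistral, chat_gemini, chat_new_provider  # chat_new_provider is hypothetical

    def initialize_chat():
        print("> Please choose your preferred provider:")
        print("    1. Mistral")
        print("    2. Gemini")
        print("    3. New Provider")  # new menu entry
        choice = input("> ").strip()
        if choice == "1":
            return chat_mistral
        if choice == "2":
            return chat_gemini
        if choice == "3":
            return chat_new_provider  # route to the new provider
        raise ValueError("Invalid selection")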

Customizing Behavior

Modify temperature (randomness) in llm_config.py:

  • temperature=0: Deterministic responses (current)
  • temperature=0.7: Balanced creativity
  • temperature=1.0: High randomness, more creative responses
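
Because get_llm() takes a temperature argument, changing it is a one-line edit wherever the model is created, e.g. (illustrative):

    llm = get_llm(provider="gemini", temperature=0.7)  # more creative responses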

About

This project aims to provide a CLI that supports multiple LLM providers, plus an agent that can perform tasks such as reading from and writing to files, similar to other agentic CLIs. It is still under development and will continue to be updated.
