INTERACT-LLM - Local Language Learning App

A second-language learning application powered by local LLMs via Ollama. This is a converted version of the original Google AI Studio prototype, now running completely offline with open-source language models.

Features

  • Fully Local: No API keys required, runs completely offline
  • Conversational Practice: Interactive chat sessions for language learning
  • Real-time Grammar Feedback: Get corrections and explanations for your mistakes
  • Progress Tracking: Monitor student performance and common error patterns
  • Customizable Lessons: Create lessons with specific scenarios and vocabulary
  • Multiple Language Support: Works with any language supported by your chosen LLM

Prerequisites

  1. Node.js (v18 or higher)
  2. Ollama - Local LLM runtime

Quick Start

1. Install Ollama

macOS/Linux:

curl -fsSL https://ollama.com/install.sh | sh

Windows: Download from https://ollama.com/download

2. Setup the Application

# Clone or extract the project
cd interact-llm

# Run the setup script (macOS/Linux)
chmod +x setup-ollama.sh
./setup-ollama.sh

# Or set up manually:
# 1. Start Ollama service
ollama serve

# 2. Pull a language model (in another terminal)
ollama pull llama3.2:3b

# 3. Copy environment configuration
cp .env.example .env

# 4. Install dependencies
npm install

3. Start the Application

npm run dev

Open http://localhost:5173 in your browser.

Recommended Models for Language Learning

Model        Size   Speed      Quality    Best For
llama3.2:3b  2GB    Fast       Good       General language learning, balanced performance
gemma2:2b    1.6GB  Very Fast  Good       Educational content, grammar checking
qwen2.5:3b   2GB    Fast       Excellent  Multilingual learning, Asian languages
mistral:7b   4GB    Medium     Very Good  Advanced conversations, nuanced feedback
phi3:mini    2GB    Very Fast  Good       Quick responses, basic conversations

Configuration

Edit the .env file to customize:

# Ollama API endpoint (default: http://localhost:11434)
VITE_OLLAMA_BASE_URL=http://localhost:11434

# Model to use (must be installed via ollama pull)
VITE_OLLAMA_MODEL=llama3.2:3b
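
For reference, here is a minimal sketch of how the app might read these variables. Vite exposes VITE_-prefixed variables on import.meta.env; the variable names and defaults come from this README, while the surrounding code is illustrative rather than the actual ollamaService.ts source.

// Illustrative sketch (TypeScript): reading the Ollama settings in a Vite app.
// The fallbacks mirror the defaults documented above.
const OLLAMA_BASE_URL: string =
  import.meta.env.VITE_OLLAMA_BASE_URL ?? 'http://localhost:11434';
const OLLAMA_MODEL: string =
  import.meta.env.VITE_OLLAMA_MODEL ?? 'llama3.2:3b';

export const ollamaConfig = { baseUrl: OLLAMA_BASE_URL, model: OLLAMA_MODEL };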

Switching Models

To use a different model:

  1. Pull the new model:

    ollama pull mistral:7b
  2. Update .env:

    VITE_OLLAMA_MODEL=mistral:7b
  3. Restart the development server

Architecture Changes from Original

What Changed

  • API Service: Replaced geminiService.ts with ollamaService.ts
  • Dependencies: Removed @google/genai, now using the native fetch API (see the sketch after this list)
  • Response Handling: Adapted to handle both JSON and plain text responses from different models
  • Error Handling: Added fallbacks for models that don't strictly follow JSON formatting
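
To illustrate the new approach, here is a minimal sketch of a fetch-based call to Ollama's /api/chat endpoint. The endpoint and request/response shape follow Ollama's documented HTTP API; the function and variable names are illustrative, not the actual ollamaService.ts code.

// Illustrative sketch (TypeScript): a non-streaming chat request to Ollama.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

async function askOllama(messages: ChatMessage[]): Promise<string> {
  const res = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama3.2:3b',
      messages,
      stream: false, // request one complete JSON response instead of chunks
    }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = await res.json();
  return data.message.content; // the assistant's reply text
}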

What Stayed the Same

  • All UI components and user experience
  • Lesson structure and progression system
  • Student progress tracking
  • Feedback mechanisms

Troubleshooting

Ollama Service Not Running

# Start Ollama service
ollama serve

# Check if running
curl http://localhost:11434/api/tags

Model Not Found

# List installed models
ollama list

# Pull the required model
ollama pull llama3.2:3b

Slow Responses

  • Try a smaller model (e.g., phi3:mini or gemma2:2b)
  • Ensure you have enough RAM (at least 8GB recommended)
  • Close other applications to free up memory

JSON Parsing Errors

Some models may not always return perfect JSON. The app includes fallback handling (sketched below), but for best results:

  • Use models known for good instruction following (llama3.2, gemma2)
  • Consider fine-tuning a model specifically for this task
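
A fallback along these lines (illustrative, not the exact code shipped in ollamaService.ts) can recover structured output when a model wraps its JSON in prose or code fences:

// Illustrative sketch (TypeScript): strict parse first, then try to
// extract the first {...} block from a prose-wrapped reply.
function parseModelJson<T>(raw: string): T | null {
  try {
    return JSON.parse(raw) as T;
  } catch {
    const match = raw.match(/\{[\s\S]*\}/); // outermost-looking JSON object
    if (match) {
      try {
        return JSON.parse(match[0]) as T;
      } catch {
        // fall through to the plain-text path
      }
    }
    return null; // caller treats the reply as plain text
  }
}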

Performance Tips

  1. Model Selection: Smaller models (2-3B parameters) often work better for structured tasks like language learning
  2. Context Length: Keep conversations to a reasonable length to maintain performance (see the sketch after this list)
  3. Hardware:
    • Minimum: 8GB RAM
    • Recommended: 16GB RAM, dedicated GPU (for faster inference)
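
For the context-length tip, one simple approach (illustrative; the app's actual strategy may differ) is to cap the history sent to the model while always keeping the system prompt:

// Illustrative sketch (TypeScript): keep the system prompt plus the
// last few turns so the prompt stays small and inference stays fast.
type Msg = { role: 'system' | 'user' | 'assistant'; content: string };

function trimHistory(messages: Msg[], maxTurns = 10): Msg[] {
  const system = messages.filter((m) => m.role === 'system');
  const rest = messages.filter((m) => m.role !== 'system');
  return [...system, ...rest.slice(-maxTurns)];
}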

Development

Project Structure

interact-llm/
├── components/          # React components
├── services/           
│   ├── ollamaService.ts # Ollama API integration (NEW)
│   ├── dataService.ts   # Data persistence
│   └── ...
├── types.ts            # TypeScript definitions
└── ...

Adding New Features

The OllamaService exposes the same interface as the original GeminiService (sketched below):

  • getChatResponse(lesson, message) - Get AI response
  • analyzeConversation(lesson, conversation) - Analyze performance
  • getPostConversationFeedback(lesson, conversation) - Generate feedback
  • getDetailedFeedback(lesson, conversation, feedback) - Elaborate on feedback
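
Sketched as a TypeScript interface, that contract might look like the following. The method names come from the list above; Lesson, ChatTurn, and Feedback are placeholder types standing in for the real definitions in types.ts.

// Interface sketch matching the methods listed above. The parameter and
// return types are placeholders; the real definitions live in types.ts.
interface Lesson { id: string; scenario: string; vocabulary: string[]; }
interface ChatTurn { role: 'user' | 'assistant'; content: string; }
interface Feedback { summary: string; corrections: string[]; }

interface LanguageTutorService {
  getChatResponse(lesson: Lesson, message: string): Promise<string>;
  analyzeConversation(lesson: Lesson, conversation: ChatTurn[]): Promise<Feedback>;
  getPostConversationFeedback(lesson: Lesson, conversation: ChatTurn[]): Promise<Feedback>;
  getDetailedFeedback(lesson: Lesson, conversation: ChatTurn[], feedback: Feedback): Promise<string>;
}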

Known Limitations

  1. JSON Consistency: Open-source models may not always return perfectly structured JSON
  2. Language Quality: Response quality varies by model and language
  3. Grammar Detection: Less sophisticated than commercial APIs for complex grammar rules
  4. Response Time: Depends on local hardware capabilities

Contributing

Feel free to contribute improvements, especially:

  • Model-specific prompting strategies
  • Better error handling for various models
  • Performance optimizations
  • Additional language-specific features

License

[Your License Here]

Acknowledgments

  • Original prototype built with Google AI Studio
  • Powered by Ollama and open-source LLMs
  • INTERACT-LLM research project
