FRIDA is a natural language command interpreter for robotics that converts human instructions into structured robot actions. It uses LLM-powered parsing with BAML (boundaryml.com) to reliably interpret commands and generate executable task plans.
Paper: Taming the LLM: Reliable Task Planning for Robotics Using Parsing and Grounding
DOI: 10.1007/978-3-032-09037-9_24
- Explanatory video demonstration of a task planning test using the command interpreter with the robot
- Video demonstration of the GPSR task during the Mexican Robotics Tournament 2025
- Video demonstration of the GPSR task during the RoboCup Competition 2025
- Code repository of the complete deployed code of the robot for all tasks of the competition
Recommended: Use the Hugging Face Space for the easiest way to try FRIDA without any setup.
Alternatively, an online Playground is also available.
FRIDA interprets natural language commands and converts them into structured action sequences. For example:
Input: "Get a sponge from the pantry and deliver it to Jane in the living room"
Output:
{
  "commands": [
    {"action": "go_to", "location_to_go": "pantry"},
    {"action": "pick_object", "object_to_pick": "sponge"},
    {"action": "go_to", "location_to_go": "living room"},
    {"action": "find_person_by_name", "name": "Jane"},
    {"action": "give_object"}
  ]
}
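As a rough illustration of how such a plan can be consumed (this sketch is not code from the repository), each action name can be dispatched to a robot primitive; the functions below are placeholders for whatever skill API the robot exposes:

```python
import json

# Placeholder robot primitives -- the real robot exposes its own skill API.
def go_to(location_to_go): print(f"navigating to {location_to_go}")
def pick_object(object_to_pick): print(f"picking up {object_to_pick}")
def find_person_by_name(name): print(f"searching for {name}")
def give_object(): print("handing the object over")

# Map the interpreter's action names to the primitives above.
ACTIONS = {
    "go_to": go_to,
    "pick_object": pick_object,
    "find_person_by_name": find_person_by_name,
    "give_object": give_object,
}

def execute_plan(plan_json: str) -> None:
    """Run each structured command in order, passing its fields as arguments."""
    for cmd in json.loads(plan_json)["commands"]:
        cmd = dict(cmd)             # copy so the parsed plan stays untouched
        action = cmd.pop("action")  # remaining keys are the action's arguments
        ACTIONS[action](**cmd)
```

The full robot-side integration is part of the deployed robot code linked above.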
Requirements:
- Python 3.8+
- Docker (for local model inference)
- Git (for submodules)
Setup:
- Clone the repository with submodules:
git submodule update --init --recursive --remote
- Install dependencies:
pip install -r requirements.txt
- Set up CommandGenerator (required for command generation):
cd dataset_generator/CommandGenerator
python -m venv venv
source venv/bin/activate
pip install .
athome-generator -d ../CompetitionTemplate
cd ../..
- Generate BAML client (a short usage sketch of the generated client follows these setup steps):
baml-cli generate --from command_interpreter/baml_src
- Configure environment (for API-based models):
cp .env.example .env
# Edit .env with your API keys
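After the baml-cli generate step, BAML writes a Python client package (typically `baml_client`) from the definitions in command_interpreter/baml_src. The sketch below only illustrates what calling that client can look like; the function name GenerateCommandList is a hypothetical placeholder and the exact import path depends on the BAML version:

```python
# Hedged sketch: GenerateCommandList and its argument are hypothetical;
# check command_interpreter/baml_src for the real function definitions.
from baml_client import b  # import path may differ slightly between BAML versions

def interpret(command: str):
    # Each BAML function becomes a typed Python call on the generated client.
    return b.GenerateCommandList(command)

if __name__ == "__main__":
    plan = interpret("Get a sponge from the pantry and deliver it to Jane in the living room")
    print(plan)  # a typed object mirroring the BAML output schema
```

In practice, the interactive interpreter below is the supported entry point and performs these calls for you.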
Run the interactive command interpreter:
python3 command_interpreter/interpreter.py

The interpreter supports:
- Natural language command input
- Model selection (press m)
- Random command generation (press g)
- Interactive execution of commands
FRIDA supports running a fine-tuned local model using Ollama. This allows you to use the system without API keys.
cd inference
./download-model.sh
This script:
- Downloads the fine-tuned model from Hugging Face
- Detects available Docker images
- Creates the Ollama model configuration
cd inference
./run-inference.sh
The script automatically:
- Detects your hardware (NVIDIA GPU, Apple Silicon, CPU)
- Configures Docker for optimal performance
- Starts the Ollama service on http://localhost:11434
Platform-specific behavior:
- macOS (Apple Silicon): Uses ARM64-optimized containers, CPU mode
- macOS (Intel): Uses x86_64 containers with emulation, CPU mode
- Linux (no GPU): CPU mode with host networking
- Linux (NVIDIA GPU): GPU acceleration with host networking (requires NVIDIA Container Toolkit)
- Windows: Standard port mapping, GPU support via Docker Desktop WSL2 backend
Once the inference server is running, the LOCAL_FINETUNED model will be available in the command interpreter. The model is accessible at http://localhost:11434.
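As a quick smoke test, you can query the running Ollama service directly over its REST API; the model name used below is an assumption, so substitute whatever `ollama list` reports inside the container:

```python
import requests

# The model name "frida" is an assumption -- use the name reported by
# `ollama list` inside the inference container.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "frida",
        "prompt": "Get a sponge from the pantry and deliver it to Jane in the living room",
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```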
For CPU-only mode using Docker Compose:
docker compose up

The project includes a web interface for interacting with the command interpreter.
- Install web dependencies:
pip install -r app-requirements.txt
- Configure environment:
cp .env.example .env
# Add your GOOGLE_API_KEY to .env
- Run the Flask application:
python main.py
- Access the interface:
Open http://localhost:8080 in your browser.
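If you prefer a scripted check over a browser, a plain HTTP request to the same address confirms the app is up (no specific Flask routes are assumed here):

```python
import requests

# Basic availability check against the Flask app started by `python main.py`.
resp = requests.get("http://localhost:8080")
print(resp.status_code)  # a 2xx status means the interface is reachable
```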
For a more advanced Next.js frontend with FastAPI backend, see the frontend/ and backend/ directories.
Generate training datasets for fine-tuning:
cd dataset_generator/
python3 structured_generator.py

The dataset will be generated in dataset_generator/dataset.json.
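Since the exact schema of the generated examples depends on the generator configuration, the snippet below only loads the file and prints its size and first entry as a sanity check (it assumes the file is a top-level JSON list):

```python
import json

# Peek at the generated dataset; adjust if the generator nests examples differently.
with open("dataset_generator/dataset.json") as f:
    dataset = json.load(f)

print(f"{len(dataset)} examples generated")
print(json.dumps(dataset[0], indent=2))  # show the first example
```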
Project structure:
frida-cortex/
├── command_interpreter/   # Core command interpretation logic
│   ├── baml_src/          # BAML model definitions
│   ├── interpreter.py     # Main CLI interpreter
│   └── ...
├── dataset_generator/     # Dataset generation tools
├── fine_tuning/           # Model fine-tuning scripts
├── inference/             # Local model inference setup
├── backend/               # FastAPI backend service
├── frontend/              # Next.js frontend application
└── main.py                # Flask web application
FRIDA supports multiple LLM providers:
- Local Fine-tuned: LOCAL_FINETUNED (requires local inference server)
- Google: GEMINI_PRO_2_5, GEMINI_FLASH_2_5
- OpenAI: OPENAI_GPT_4_1_MINI
- Anthropic: ANTHROPIC_CLAUDE_SONNET_4
- Meta: META_LLAMA_3_3_8B_IT_FREE, META_LLAMA_3_3_70B
Additional documentation:
- Fine-tuning Guide - How to fine-tune models for robot command interpretation
- Backend API - FastAPI backend documentation
- Frontend - Next.js frontend documentation
See the LICENSE file for details.
