A real-time voice translation web application built with Laravel, React, and Inertia.js. This application allows users to speak in one language and instantly hear the translation in another language, with support for English, Spanish, and French.
This project was created while recording a companion video tutorial.
- 🎤 Real-time Voice Recording: Record audio directly in the browser using the MediaRecorder API
- 🌍 Multi-language Support: Translate between English, Spanish, and French
- 🔄 Automatic Translation: Automatically translates and generates audio when recording stops
- 📊 Performance Benchmarking: Track individual API call timings (Whisper, GPT, TTS)
- 🎵 Audio Playback: Listen to translated audio directly in the browser
- 📜 Translation History: View and replay previous translations
- 🔀 Ultra-Low Latency TTS: Murf.ai Falcon engine with ~130ms time-to-first-audio
- 🎨 Modern UI: Built with React, Tailwind CSS, and Radix UI components
Tech stack:
- Laravel 12: PHP framework
- OpenAI Whisper API: Speech-to-text transcription
- OpenAI GPT API: Text translation (gpt-4o-mini)
- Murf.ai Falcon API: Ultra-low latency text-to-speech (~130ms TTFA)
- React 19: UI library
- Inertia.js v2: Server-driven single-page applications
- Tailwind CSS v4: Utility-first CSS framework
- TypeScript: Type-safe JavaScript
- Radix UI: Accessible component primitives
Requirements:
- PHP 8.2 or higher
- Composer
- Node.js 18+ and npm
- MySQL, PostgreSQL, or SQLite database
- OpenAI API key (for Whisper and GPT)
- Murf.ai API key (for Falcon TTS)
Installation:

```bash
git clone <repository-url>
cd laravel-murfai-falcon
composer install
npm install
```

Copy the `.env.example` file to `.env`:

```bash
cp .env.example .env
```

Generate the application key:

```bash
php artisan key:generate
```

Edit the `.env` file and add your API keys:
```env
# Application
APP_NAME="Voice Translation"
APP_URL=http://localhost:8000

# Database
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=voice_translation
DB_USERNAME=your_username
DB_PASSWORD=your_password

# OpenAI API
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL_WHISPER=whisper-1
OPENAI_MODEL_TRANSLATION=gpt-4o-mini
OPENAI_MODEL_TTS=tts-1

# Murf.ai Falcon API (for TTS)
MURF_API_KEY=your_murf_api_key_here
MURF_API_URL=https://global.api.murf.ai/v1
```
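Laravel convention is to surface these values to the application through `config/services.php` (present in the project structure below) rather than calling `env()` directly. The exact keys this project uses are not shown in this README, so the following is only a plausible sketch of that mapping:

```php
<?php

// config/services.php — sketch only; the actual keys in this project may differ.
return [
    'openai' => [
        'api_key'           => env('OPENAI_API_KEY'),
        'model_whisper'     => env('OPENAI_MODEL_WHISPER', 'whisper-1'),
        'model_translation' => env('OPENAI_MODEL_TRANSLATION', 'gpt-4o-mini'),
        'model_tts'         => env('OPENAI_MODEL_TTS', 'tts-1'),
    ],

    'murf' => [
        'api_key' => env('MURF_API_KEY'),
        'api_url' => env('MURF_API_URL', 'https://global.api.murf.ai/v1'),
    ],
];
```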
Run the migrations:

```bash
php artisan migrate
```

Create a symbolic link for public storage:

```bash
php artisan storage:link
```

Build the frontend assets. For development:

```bash
npm run dev
```

For production:

```bash
npm run build
```

You can use the convenient development script that runs everything:

```bash
composer run dev
```

Or manually:

```bash
# Terminal 1: Laravel server
php artisan serve

# Terminal 2: Vite dev server (if not using composer run dev)
npm run dev
```

This application uses Murf.ai Falcon for ultra-low latency text-to-speech (a call sketch follows the settings list below):
- Model: FALCON
- Format: MP3
- Sample Rate: 24000 Hz
- Channel: MONO
- Style: Conversation
- Time-to-first-audio: ~130ms
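As a rough illustration of how `MurfFalconService` might send these settings to the configured `MURF_API_URL` with Laravel's HTTP client. The endpoint path, auth header, payload field names, and response shape below are assumptions for illustration, not taken from the Murf.ai documentation; the config keys follow the `config/services.php` sketch above:

```php
<?php

use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Storage;

// Sketch only: endpoint path, header name, and field names are assumed.
function synthesizeWithFalcon(string $text, string $voiceId): string
{
    $response = Http::withHeaders([
            'api-key' => config('services.murf.api_key'), // header name is an assumption
        ])
        ->post(config('services.murf.api_url').'/speech/generate', [ // path is an assumption
            'text'        => $text,
            'voiceId'     => $voiceId,
            'model'       => 'FALCON',
            'format'      => 'MP3',
            'sampleRate'  => 24000,
            'channelType' => 'MONO',
            'style'       => 'Conversation',
        ])
        ->throw()
        ->json();

    // Assume the response points at the generated audio; persist it to public storage.
    $audio = Http::get($response['audioFile'])->body();
    $path  = 'translations/generated/'.uniqid().'.mp3';
    Storage::disk('public')->put($path, $audio);

    return $path;
}
```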
Murf Falcon API supports 13 languages with 18 dialects and 150+ voices:
| Language | Dialects/Variants |
|---|---|
| English | US/Canada, UK, Australia, India, Scottish |
| Spanish | Mexico, Spain |
| French | France |
| German | - |
| Italian | - |
| Hindi | - |
| Portuguese | Brazil |
| Dutch | - |
| Korean | - |
| Chinese | Mandarin |
| Bengali | - |
| Tamil | - |
| Polish | - |
Murf.ai documentation:
- Falcon Model Documentation - Ultra-low latency TTS model details
- Voice Library & Explorer - Preview and select from 150+ voices
- Full API Documentation - Complete API reference
Currently configured for:
- English (`en`) - Voice: `en-US-matthew`
- Spanish (`es`) - Voice: `es-ES-carla`
- French (`fr`) - Voice: `fr-FR-axel`
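Inside the TTS service this pairing presumably boils down to a small language-to-voice map; a hypothetical sketch (the constant name and its location are not taken from the repository):

```php
<?php

// Hypothetical voice map — MurfFalconService may store this differently.
const FALCON_VOICES = [
    'en' => 'en-US-matthew',
    'es' => 'es-ES-carla',
    'fr' => 'fr-FR-axel',
];
```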
Navigate to http://localhost:8000 in your browser.
You need to be authenticated to use the translation feature. Register or log in to your account.
To create a translation:
- Go to the Translations page (`/translations`)
- Select your source language (or use "Auto-detect")
- Select your target language
- Click "Start Recording"
- Speak into your microphone
- Click "Stop Recording" when finished
- The application will then automatically (see the sketch after this list):
  - Transcribe your speech using Whisper
  - Translate the text using GPT
  - Generate audio using the selected TTS provider
  - Display the results and play the translated audio
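A simplified sketch of how that server-side pipeline could be wired together in `TranslationController`. The controller, request, model, and service classes all appear in the project structure below, but the method names and field names used here are assumptions:

```php
<?php

namespace App\Http\Controllers;

use App\Contracts\TextToSpeechService;
use App\Http\Requests\StoreTranslationRequest;
use App\Models\Translation;
use App\Services\OpenAITranslationService;
use App\Services\OpenAIWhisperService;

class TranslationController extends Controller
{
    public function store(
        StoreTranslationRequest $request,
        OpenAIWhisperService $whisper,
        OpenAITranslationService $translator,
        TextToSpeechService $tts // resolved to MurfFalconService (or OpenAITTSService) by the container
    ) {
        // 1. Speech-to-text on the uploaded recording (method names are illustrative).
        $originalText = $whisper->transcribe($request->file('audio'), $request->input('source_language'));

        // 2. Translate the transcript into the target language.
        $translatedText = $translator->translate($originalText, $request->input('target_language'));

        // 3. Synthesize the translated text with the bound TTS provider.
        $audioPath = $tts->synthesize($translatedText, $request->input('target_language'));

        // 4. Persist and return the result (fields mirror the response shown further below).
        $translation = Translation::create([
            'original_text'   => $originalText,
            'translated_text' => $translatedText,
            'source_language' => $request->input('source_language'),
            'target_language' => $request->input('target_language'),
            'audio_path'      => $audioPath,
        ]);

        return response()->json(['success' => true, 'translation' => $translation]);
    }
}
```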
All translations are saved and displayed in the history section below the main interface. You can:
- View original and translated text
- See processing times and API benchmarks
- Replay audio from previous translations
All routes require authentication (auth middleware).
- `GET /translations` - Display the translation interface
- `POST /translations` - Create a new translation
- `GET /translations/{translation}` - Get a specific translation
- `GET /translations/{translation}/audio` - Serve the translated audio file
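In a standard Laravel application these endpoints would be registered roughly as follows in `routes/web.php`; the controller method names are assumptions:

```php
<?php

use App\Http\Controllers\TranslationController;
use Illuminate\Support\Facades\Route;

// Sketch of the route registration; method names (index, store, show, audio) are assumed.
Route::middleware(['auth'])->group(function () {
    Route::get('/translations', [TranslationController::class, 'index']);
    Route::post('/translations', [TranslationController::class, 'store']);
    Route::get('/translations/{translation}', [TranslationController::class, 'show']);
    Route::get('/translations/{translation}/audio', [TranslationController::class, 'audio']);
});
```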
Request body for `POST /translations`:

```json
{
  "audio": "<File>",
  "source_language": "auto|en|es|fr",
  "target_language": "en|es|fr"
}
```

Example response:

```json
{
  "success": true,
  "translation": {
    "id": 1,
    "original_text": "Hello, how are you?",
    "translated_text": "Hola, ¿cómo estás?",
    "source_language": "en",
    "target_language": "es",
    "audio_url": "http://localhost:8000/storage/translations/generated/...",
    "processing_time": 2345,
    "api_timings": {
      "transcribe": 850,
      "translate": 1200,
      "synthesize": 295
    },
    "created_at": "2025-01-15T10:30:00Z"
  }
}
```

The application tracks and displays individual API call timings:
- Whisper Time: Time taken for speech-to-text transcription
- GPT Time: Time taken for text translation
- TTS Time: Time taken for text-to-speech synthesis
- Total Processing Time: Sum of all operations
These metrics are displayed in the UI and stored in the database for each translation.
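One way such per-call timings can be collected on the server is to wrap each service call with `microtime()`; this is only a sketch of the idea, not necessarily how this project measures them:

```php
<?php

/**
 * Run a callable and record its duration in whole milliseconds,
 * producing values like the api_timings entries shown above.
 */
function measureMs(callable $operation, array &$timings, string $key): mixed
{
    $start = microtime(true);
    $result = $operation();
    $timings[$key] = (int) round((microtime(true) - $start) * 1000);

    return $result;
}

// Usage (service calls are illustrative):
// $timings    = [];
// $text       = measureMs(fn () => $whisper->transcribe($file), $timings, 'transcribe');
// $translated = measureMs(fn () => $translator->translate($text, 'es'), $timings, 'translate');
// $totalMs    = array_sum($timings);
```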
Project structure:

```
laravel-murfai-falcon/
├── app/
│   ├── Contracts/
│   │   └── TextToSpeechService.php
│   ├── Http/
│   │   ├── Controllers/
│   │   │   └── TranslationController.php
│   │   └── Requests/
│   │       └── StoreTranslationRequest.php
│   ├── Models/
│   │   └── Translation.php
│   ├── Providers/
│   │   └── AppServiceProvider.php
│   └── Services/
│       ├── MurfFalconService.php
│       ├── OpenAITranslationService.php
│       ├── OpenAITTSService.php
│       └── OpenAIWhisperService.php
├── config/
│   └── services.php
├── database/
│   └── migrations/
│       └── *_create_translations_table.php
├── resources/
│   ├── js/
│   │   ├── components/
│   │   │   └── translation-history.tsx
│   │   ├── hooks/
│   │   │   └── useAudioRecorder.ts
│   │   ├── pages/
│   │   │   └── translations/
│   │   │       └── index.tsx
│   │   ├── types/
│   │   │   └── translation.ts
│   │   └── lib/
│   │       └── api.ts
│   └── views/
│       └── app.blade.php
└── storage/
    └── app/
        └── public/
            └── translations/
                ├── original/
                └── generated/
```
Troubleshooting:
- Microphone not working: Ensure you've granted microphone permissions in your browser
- No audio file created: Check the browser console for errors and ensure the MediaRecorder API is supported
- OpenAI API errors: Verify your `OPENAI_API_KEY` is correct and has sufficient credits
- Murf.ai API errors: Verify your `MURF_API_KEY` is correct and the Falcon model is available on your plan
- Transcription fails: Ensure the audio file is clear and contains speech
- Audio files not accessible: Run `php artisan storage:link` to create the symbolic link
- Permission errors: Ensure the `storage/app/public` directory is writable
- Changes not reflected: Run `npm run build` or `npm run dev` to rebuild assets
- TypeScript errors: Run `npm run types` to check for type errors
Run the test suite:

```bash
php artisan test
```

Format the code:

```bash
# PHP
vendor/bin/pint

# JavaScript/TypeScript
npm run format
```

Lint the code:

```bash
# JavaScript/TypeScript
npm run lint
```

This project is open-sourced software licensed under the MIT license.
Contributions are welcome! Please feel free to submit a Pull Request.
For issues and questions, please open an issue on the repository.
