A comprehensive starter app demonstrating RunAnywhere SDK capabilities - privacy-first, on-device AI for iOS.
This starter app showcases all the core capabilities of the RunAnywhere SDK:
- 🤖 Chat (LLM) - On-device text generation with streaming support
- 🎤 Speech to Text (STT) - On-device speech recognition using Whisper
- 🔊 Text to Speech (TTS) - On-device voice synthesis using Piper
- 🎯 Voice Pipeline - Full voice agent: Speak → Transcribe → Generate → Speak
- iOS 17.0+ / macOS 14.0+
- Xcode 15.0+
- Swift 5.9+
```
open Swift-Starter-Example.xcodeproj
```

This project is pre-configured to fetch the RunAnywhere SDK directly from GitHub:

- Repository: https://github.com/RunanywhereAI/runanywhere-sdks
- Version: 0.16.0-test.39
The following SDK products are included:
- ✅ `RunAnywhere` - Core SDK (unified API for all AI capabilities)
- ✅ `RunAnywhereLlamaCPP` - LLM text generation backend
- ✅ `RunAnywhereONNX` - Speech-to-text, text-to-speech, VAD
When you open the project, Xcode will automatically fetch and resolve the packages from GitHub.
In Xcode:
- Select the project in the navigator
- Go to Signing & Capabilities
- Select your Team
- Update the Bundle Identifier if needed
Press Cmd + R to build and run on your device or simulator.
Note: The first build may take a few minutes as Xcode downloads the SDK and its dependencies from GitHub. For best AI inference performance, run on a physical device.
This app uses the RunAnywhere Swift SDK v0.16.0-test.39 from GitHub releases:
| Module | Import | Description |
|---|---|---|
| Core SDK | `import RunAnywhere` | Unified API for all AI capabilities |
| LlamaCPP | `import LlamaCPPRuntime` | LLM text generation backend |
| ONNX | `import ONNXRuntime` | STT/TTS/VAD via Sherpa-ONNX |
| Capability | Model | Size |
|---|---|---|
| LLM | SmolLM2 360M Instruct Q8_0 | ~400MB |
| STT | Sherpa Whisper Tiny (English) | ~75MB |
| TTS | Piper (US English - Lessac Medium) | ~65MB |
Models are downloaded on-demand and cached locally on the device.
```
Swift-Starter-Example/
├── Swift_Starter_ExampleApp.swift   # App entry point & SDK initialization
├── ContentView.swift                # Main content view wrapper
├── Info.plist                       # Privacy permissions (microphone)
├── Theme/
│   └── AppTheme.swift               # Colors, fonts, and styling
├── Services/
│   └── ModelService.swift           # AI model management
├── Views/
│   ├── HomeView.swift               # Home screen with feature cards
│   ├── ChatView.swift               # LLM chat interface
│   ├── SpeechToTextView.swift       # Speech recognition
│   ├── TextToSpeechView.swift       # Voice synthesis
│   └── VoicePipelineView.swift      # Voice agent pipeline
└── Components/
    ├── FeatureCard.swift            # Reusable feature card
    ├── ModelLoaderView.swift        # Model download/load UI
    ├── AudioVisualizer.swift        # Audio level visualization
    └── ChatMessageBubble.swift      # Chat message component
```
Quick usage examples:

```swift
import RunAnywhere
import LlamaCPPRuntime
import ONNXRuntime
// Initialize SDK (call once at app launch)
try RunAnywhere.initialize(environment: .development)
// Register backends
LlamaCPP.register() // For LLM text generation
ONNX.register()     // For STT, TTS, VAD
```
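In the starter app this wiring lives in Swift_Starter_ExampleApp.swift (the entry point shown in the project structure above). A minimal sketch of what it can look like in a SwiftUI App struct; the do/catch placement and error handling are illustrative, not the app's exact code:

```swift
import SwiftUI
import RunAnywhere
import LlamaCPPRuntime
import ONNXRuntime

@main
struct Swift_Starter_ExampleApp: App {
    init() {
        do {
            // Initialize the core SDK once, before any feature is used
            try RunAnywhere.initialize(environment: .development)
            // Register the backends this app relies on
            LlamaCPP.register()
            ONNX.register()
        } catch {
            // Illustrative only: a real app should surface this to the user
            print("SDK initialization failed: \(error)")
        }
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
```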
```swift
// Simple chat (blocking)
let response = try await RunAnywhere.chat("What is the capital of France?")
// Streaming generation with metrics
let result = try await RunAnywhere.generateStream(
prompt,
options: LLMGenerationOptions(maxTokens: 256, temperature: 0.8)
)
for try await token in result.stream {
print(token, terminator: "")
}
let metrics = try await result.result.value
print("Speed: \(metrics.tokensPerSecond) tok/s")// Load STT model (once)
```swift
// Load STT model (once)
try await RunAnywhere.loadSTTModel("sherpa-onnx-whisper-tiny.en")
// Transcribe audio (Data from microphone)
let text = try await RunAnywhere.transcribe(audioData)
```
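The transcribe call above takes audio as `Data` from the microphone. One way to capture that is an AVAudioEngine tap; this `MicRecorder` is a hypothetical sketch, and the sample format `RunAnywhere.transcribe` expects is not specified in this README (Whisper models conventionally take 16 kHz mono PCM, so real code would convert/resample before transcribing):

```swift
import AVFoundation

// Hypothetical helper that accumulates raw microphone samples into Data.
// Real code would also configure an AVAudioSession (.record category)
// and convert to the SDK's expected sample format before transcribing.
final class MicRecorder {
    private let engine = AVAudioEngine()
    private var samples = Data()

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0) // device-native format
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, _ in
            // Copy the first channel's float samples into the buffer
            guard let channel = buffer.floatChannelData?[0] else { return }
            let byteCount = Int(buffer.frameLength) * MemoryLayout<Float>.size
            self?.samples.append(Data(bytes: channel, count: byteCount))
        }
        engine.prepare()
        try engine.start()
    }

    func stop() -> Data {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
        defer { samples.removeAll() }
        return samples
    }
}
```

After `recorder.stop()`, the returned `Data` can be passed to `RunAnywhere.transcribe` as shown above.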
```swift
import AVFoundation

// Load TTS voice (once)
try await RunAnywhere.loadTTSVoice("vits-piper-en_US-lessac-medium")
// Synthesize speech
let output = try await RunAnywhere.synthesize(
"Hello, world!",
options: TTSOptions(rate: 1.0)
)
// Play audio
let player = try AVAudioPlayer(data: output.audioData)
player.play()
```
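These three capabilities chain into the voice pipeline the app demonstrates (Speak → Transcribe → Generate → Speak). A minimal sketch of one conversational turn, assuming the STT model and TTS voice were already loaded as shown above:

```swift
import AVFoundation
import RunAnywhere

// One turn of the voice agent: audio in → text → reply → audio out
func voiceTurn(micAudio: Data) async throws -> AVAudioPlayer {
    // 1. Transcribe the user's speech
    let userText = try await RunAnywhere.transcribe(micAudio)

    // 2. Generate a reply with the on-device LLM
    let reply = try await RunAnywhere.chat(userText)

    // 3. Synthesize and speak the reply
    let output = try await RunAnywhere.synthesize(reply, options: TTSOptions(rate: 1.0))
    let player = try AVAudioPlayer(data: output.audioData)
    player.play()
    // Return the player so the caller keeps it alive while audio plays
    return player
}
```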
To add the RunAnywhere SDK to a new Swift project:

- In Xcode: File → Add Package Dependencies...
- Enter: `https://github.com/RunanywhereAI/runanywhere-sdks`
- Select Exact Version: `0.16.0-test.39`
- Add all three products: `RunAnywhere`, `RunAnywhereLlamaCPP`, `RunAnywhereONNX`
Or declare it in a Package.swift manifest:

```swift
dependencies: [
.package(url: "https://github.com/RunanywhereAI/runanywhere-sdks", exact: "0.16.0-test.39")
],
targets: [
.target(
name: "YourApp",
dependencies: [
.product(name: "RunAnywhere", package: "runanywhere-sdks"),
.product(name: "RunAnywhereLlamaCPP", package: "runanywhere-sdks"),
.product(name: "RunAnywhereONNX", package: "runanywhere-sdks"),
]
),
]
```

The app requires microphone access for speech recognition. The Info.plist includes:
- `NSMicrophoneUsageDescription` - Required for recording audio
- `NSSpeechRecognitionUsageDescription` - Optional, for system speech recognition
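Declaring the key is not enough on its own; the app must also request microphone access at runtime before recording. A minimal sketch using the iOS 17+ AVAudioApplication API (on earlier deployment targets you would use the AVAudioSession equivalent):

```swift
import AVFoundation

// Prompt for microphone access before starting any recording (iOS 17+)
AVAudioApplication.requestRecordPermission { granted in
    if !granted {
        // Consider guiding the user to Settings if they decline
        print("Microphone permission denied")
    }
}
```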
If packages fail to resolve or the build breaks:

- In Xcode: File → Packages → Reset Package Caches
- Clean build: Product → Clean Build Folder (Cmd+Shift+K)
- Close and reopen the project
Ensure all three SDK products are added to your target:
- Select your target in Xcode
- Go to General → Frameworks, Libraries, and Embedded Content
- Verify `RunAnywhere`, `RunAnywhereLlamaCPP`, and `RunAnywhereONNX` are listed
Check network connectivity. Models are downloaded from:
- HuggingFace (LLM models)
- GitHub (RunanywhereAI/sherpa-onnx for STT/TTS models)
All AI processing happens entirely on-device. No data is ever sent to external servers. This ensures:
- ✅ Complete data privacy
- ✅ Offline functionality
- ✅ Low latency responses
- ✅ No API costs
MIT License - See LICENSE for details.