RunAnywhere Swift SDK Starter App

A comprehensive starter app demonstrating RunAnywhere SDK capabilities - privacy-first, on-device AI for iOS.


Features

This starter app showcases all the core capabilities of the RunAnywhere SDK:

  • 🤖 Chat (LLM) - On-device text generation with streaming support
  • 🎤 Speech to Text (STT) - On-device speech recognition using Whisper
  • 🔊 Text to Speech (TTS) - On-device voice synthesis using Piper
  • 🎯 Voice Pipeline - Full voice agent: Speak → Transcribe → Generate → Speak

Requirements

  • iOS 17.0+ / macOS 14.0+
  • Xcode 15.0+
  • Swift 5.9+

Getting Started

1. Open in Xcode

open Swift-Starter-Example.xcodeproj

2. SDK Package Dependencies (Pre-configured)

This project is pre-configured to fetch the RunAnywhere SDK directly from GitHub:

https://github.com/RunanywhereAI/runanywhere-sdks
Version: 0.16.0-test.39

The following SDK products are included:

  • ✅ RunAnywhere - Core SDK (unified API for all AI capabilities)
  • ✅ RunAnywhereLlamaCPP - LLM text generation backend
  • ✅ RunAnywhereONNX - Speech-to-text, text-to-speech, VAD

When you open the project, Xcode will automatically fetch and resolve the packages from GitHub.

3. Configure Signing

In Xcode:

  1. Select the project in the navigator
  2. Go to Signing & Capabilities
  3. Select your Team
  4. Update the Bundle Identifier if needed

4. Build and Run

Press Cmd + R to build and run on your device or simulator.

Note: The first build may take a few minutes as Xcode downloads the SDK and its dependencies from GitHub. For best AI inference performance, run on a physical device.

SDK Dependencies

This app uses the RunAnywhere Swift SDK v0.16.0-test.39 from GitHub releases:

| Module   | Import                  | Description                          |
|----------|-------------------------|--------------------------------------|
| Core SDK | import RunAnywhere      | Unified API for all AI capabilities  |
| LlamaCPP | import LlamaCPPRuntime  | LLM text generation backend          |
| ONNX     | import ONNXRuntime      | STT/TTS/VAD via Sherpa-ONNX          |

Models Used

| Capability | Model                             | Size   |
|------------|-----------------------------------|--------|
| LLM        | SmolLM2 360M Instruct Q8_0        | ~400MB |
| STT        | Sherpa Whisper Tiny (English)     | ~75MB  |
| TTS        | Piper (US English - Lessac Medium)| ~65MB  |

Models are downloaded on-demand and cached locally on the device.

Project Structure

Swift-Starter-Example/
├── Swift_Starter_ExampleApp.swift   # App entry point & SDK initialization
├── ContentView.swift                # Main content view wrapper
├── Info.plist                       # Privacy permissions (microphone)
├── Theme/
│   └── AppTheme.swift               # Colors, fonts, and styling
├── Services/
│   └── ModelService.swift           # AI model management
├── Views/
│   ├── HomeView.swift               # Home screen with feature cards
│   ├── ChatView.swift               # LLM chat interface
│   ├── SpeechToTextView.swift       # Speech recognition
│   ├── TextToSpeechView.swift       # Voice synthesis
│   └── VoicePipelineView.swift      # Voice agent pipeline
└── Components/
    ├── FeatureCard.swift            # Reusable feature card
    ├── ModelLoaderView.swift        # Model download/load UI
    ├── AudioVisualizer.swift        # Audio level visualization
    └── ChatMessageBubble.swift      # Chat message component

Usage Examples

Initialize the SDK

import RunAnywhere
import LlamaCPPRuntime
import ONNXRuntime

// Initialize SDK (call once at app launch)
try RunAnywhere.initialize(environment: .development)

// Register backends
LlamaCPP.register()  // For LLM text generation
ONNX.register()      // For STT, TTS, VAD
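
In a SwiftUI app this setup typically runs once at launch; the starter project does it in Swift_Starter_ExampleApp.swift. A minimal sketch of that pattern, using the same calls shown above (the type name StarterApp is a placeholder):

import SwiftUI
import RunAnywhere
import LlamaCPPRuntime
import ONNXRuntime

@main
struct StarterApp: App {
    init() {
        // Initialize the SDK and register backends once, before any view uses them.
        do {
            try RunAnywhere.initialize(environment: .development)
            LlamaCPP.register()  // LLM text generation
            ONNX.register()      // STT, TTS, VAD
        } catch {
            print("SDK initialization failed: \(error)")
        }
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}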

Text Generation (LLM)

// Simple chat (non-streaming)
let response = try await RunAnywhere.chat("What is the capital of France?")

// Streaming generation with metrics
let result = try await RunAnywhere.generateStream(
    prompt,
    options: LLMGenerationOptions(maxTokens: 256, temperature: 0.8)
)

for try await token in result.stream {
    print(token, terminator: "")
}

let metrics = try await result.result.value
print("Speed: \(metrics.tokensPerSecond) tok/s")

Speech to Text

// Load STT model (once)
try await RunAnywhere.loadSTTModel("sherpa-onnx-whisper-tiny.en")

// Transcribe audio (Data from microphone)
let text = try await RunAnywhere.transcribe(audioData)
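
The transcribe call above takes raw audio as Data; capturing it from the microphone is up to the app. A minimal AVAudioEngine sketch, not part of the SDK, and the sample format the STT model expects (e.g. 16 kHz mono PCM) is an assumption - check the SDK documentation:

import AVFoundation

// Records microphone input and accumulates raw Float32 samples into a Data buffer.
final class MicrophoneRecorder {
    private let engine = AVAudioEngine()
    private var samples = Data()

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, _ in
            // Append samples from the first channel. Production code would
            // convert to the sample rate and encoding the STT model expects.
            guard let channel = buffer.floatChannelData?[0] else { return }
            let count = Int(buffer.frameLength)
            self?.samples.append(Data(bytes: channel, count: count * MemoryLayout<Float>.size))
        }
        try engine.start()
    }

    func stop() -> Data {
        engine.stop()
        engine.inputNode.removeTap(onBus: 0)
        return samples
    }
}

Call start() to begin recording, then pass the Data returned by stop() to RunAnywhere.transcribe.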

Text to Speech

// Load TTS voice (once)
try await RunAnywhere.loadTTSVoice("vits-piper-en_US-lessac-medium")

// Synthesize speech
let output = try await RunAnywhere.synthesize(
    "Hello, world!",
    options: TTSOptions(rate: 1.0)
)

// Play audio (AVAudioPlayer requires import AVFoundation)
let player = try AVAudioPlayer(data: output.audioData)
player.play()
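
Voice Pipeline

The full voice agent simply chains the calls above. A simplified sketch that assumes the STT model and TTS voice are already loaded and audio capture is handled elsewhere (error handling omitted):

import AVFoundation

// 1. Transcribe the user's speech
let userText = try await RunAnywhere.transcribe(audioData)

// 2. Generate a reply with the on-device LLM
let reply = try await RunAnywhere.chat(userText)

// 3. Speak the reply
let speech = try await RunAnywhere.synthesize(reply, options: TTSOptions(rate: 1.0))
let player = try AVAudioPlayer(data: speech.audioData)
player.play()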

Adding the SDK to Your Own Project

To add the RunAnywhere SDK to a new Swift project:

Option 1: Xcode UI

  1. In Xcode: File → Add Package Dependencies...
  2. Enter: https://github.com/RunanywhereAI/runanywhere-sdks
  3. Select Exact Version: 0.16.0-test.39
  4. Add all three products: RunAnywhere, RunAnywhereLlamaCPP, RunAnywhereONNX

Option 2: Package.swift

dependencies: [
    .package(url: "https://github.com/RunanywhereAI/runanywhere-sdks", exact: "0.16.0-test.39")
],
targets: [
    .target(
        name: "YourApp",
        dependencies: [
            .product(name: "RunAnywhere", package: "runanywhere-sdks"),
            .product(name: "RunAnywhereLlamaCPP", package: "runanywhere-sdks"),
            .product(name: "RunAnywhereONNX", package: "runanywhere-sdks"),
        ]
    ),
]
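
The fragment above sits inside a full manifest. A minimal sketch of a complete Package.swift (package and target names are placeholders; platform versions taken from the Requirements section):

// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "YourApp",
    platforms: [.iOS(.v17), .macOS(.v14)],
    dependencies: [
        .package(url: "https://github.com/RunanywhereAI/runanywhere-sdks", exact: "0.16.0-test.39")
    ],
    targets: [
        .target(
            name: "YourApp",
            dependencies: [
                .product(name: "RunAnywhere", package: "runanywhere-sdks"),
                .product(name: "RunAnywhereLlamaCPP", package: "runanywhere-sdks"),
                .product(name: "RunAnywhereONNX", package: "runanywhere-sdks"),
            ]
        ),
    ]
)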

Privacy Permissions

The app requires microphone access for speech recognition. The Info.plist includes:

  • NSMicrophoneUsageDescription - Required for recording audio
  • NSSpeechRecognitionUsageDescription - Optional, for system speech recognition
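
Beyond the Info.plist entries, microphone access must be requested at runtime before recording. A minimal sketch using AVAudioApplication (available on iOS 17 / macOS 14):

import AVFoundation

// Returns true once the user has granted microphone access.
func ensureMicrophoneAccess() async -> Bool {
    switch AVAudioApplication.shared.recordPermission {
    case .granted:
        return true
    case .denied:
        return false
    case .undetermined:
        return await AVAudioApplication.requestRecordPermission()
    @unknown default:
        return false
    }
}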

Troubleshooting

Package Resolution Fails

  1. In Xcode: File → Packages → Reset Package Caches
  2. Clean build: Product → Clean Build Folder (Cmd+Shift+K)
  3. Close and reopen the project

Build Errors with SDK Imports

Ensure all three SDK products are added to your target:

  1. Select your target in Xcode
  2. Go to General → Frameworks, Libraries, and Embedded Content
  3. Verify: RunAnywhere, RunAnywhereLlamaCPP, RunAnywhereONNX

Models Not Downloading

Check network connectivity. Models are downloaded from:

  • HuggingFace (LLM models)
  • GitHub (RunanywhereAI/sherpa-onnx for STT/TTS models)

Privacy

All AI processing happens entirely on-device. No data is ever sent to external servers. This ensures:

  • ✅ Complete data privacy
  • ✅ Offline functionality
  • ✅ Low latency responses
  • ✅ No API costs

License

MIT License - See LICENSE for details.