- `speech_recognition` for voice input
- `pyttsx3` for voice output (text-to-speech)
- `openai` (or `ollama` + a local LLM like TinyLLaMA)
- Optional wake word: "Hey Buddy"
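If the dependencies aren't installed yet, `pip install SpeechRecognition pyttsx3 openai pyaudio` should cover them (these are the usual PyPI package names; `pyaudio` is what `speech_recognition` needs for microphone access).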
- Voice-activated assistant listens for "Hey Buddy"
- Transcribes commands and sends them to GPT or a local model
- Reads out the response
import speech_recognition as sr
import pyttsx3

# Set up the recognizer and TTS engine
recognizer = sr.Recognizer()
engine = pyttsx3.init()

# Wait for the wake word using recognizer.listen()
# On trigger, listen for the command and forward it to GPT or the local model
# Use pyttsx3 to speak the response aloud
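Filled in end to end, the loop might look something like the sketch below. It assumes the `openai` v1 Python client with `OPENAI_API_KEY` set in the environment, PyAudio for microphone access, and the free Google web recognizer (internet required) for transcription; the model name `gpt-4o-mini` and the "Yes?" acknowledgement are placeholder choices, not fixed parts of the design.

```python
import speech_recognition as sr
import pyttsx3
from openai import OpenAI

recognizer = sr.Recognizer()
engine = pyttsx3.init()
client = OpenAI()  # reads OPENAI_API_KEY from the environment


def speak(text):
    engine.say(text)
    engine.runAndWait()


def hear(source):
    # Capture one utterance and transcribe it with the free Google web recognizer
    audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio).lower()
    except (sr.UnknownValueError, sr.RequestError):
        return ""


with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    while True:
        # Keep listening until the wake word shows up in a transcription
        if "hey buddy" not in hear(source):
            continue
        speak("Yes?")
        command = hear(source)
        if not command:
            continue
        # Forward the command to the model and read the answer aloud
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": command}],
        ).choices[0].message.content
        speak(reply)
```

Matching the wake word against a full transcription keeps the dependency list short, at the cost of some latency compared with a dedicated hotword engine.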
- Use `ollama` to run a local model like `tinyllama`
- Replace the OpenAI API calls with subprocess or HTTP calls to `ollama` (see the sketch below)
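A minimal sketch of that swap, assuming Ollama is running locally on its default port (11434) and `tinyllama` has already been pulled:

```python
import requests


def ask_local_model(prompt, model="tinyllama"):
    # POST to Ollama's default REST endpoint; stream=False returns one JSON object
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]


# Drop-in replacement for the OpenAI call in the loop above:
# reply = ask_local_model(command)
```

Calling the CLI via subprocess (e.g. `ollama run tinyllama` with the prompt as an argument and the output captured) works too, but the HTTP endpoint avoids spawning a new process per request.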