An open-source AI-powered smart cane that fuses multi-modal perception with graph-gated agents to assist the visually impaired.
- Project Overview
- Key Features
- System Architecture
- Hardware Components
- Quick Start
- Directory Layout
- Models & Data
- Demo
- Performance Benchmarks
- Roadmap
- Contributing
- Community & Support
- License
- Acknowledgements
SmartifCane combines a lightweight visual-detection engine, graph-neural gates, and large-language-model prompts to deliver fine-grained spatial awareness and natural language feedback for outdoor and indoor navigation.
| Module | Keywords | Description |
|---|---|---|
| Visual Detection | Lightweight OD engine | Real-time obstacle & signage recognition |
| Semantic Voice | MoE Prompts + LLM | Personalized, multi-lingual scene speech |
| Emotional Aid | Emotion + Memory Bank | Tone adaptation & long-term user memory |
| Offline Mode | On-device tiny models | Core functions without Internet access |
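Offline Mode's promise is that core functions degrade gracefully rather than fail when connectivity drops. A minimal sketch of that fallback pattern is below; the function names (`cloud_llm`, `tiny_model`, `respond`) are illustrative stand-ins, not the project's actual API:

```python
def cloud_llm(text: str) -> str:
    # Stand-in for the remote LLM call (requires Internet access).
    return f"[cloud] rich answer to: {text}"

def tiny_model(text: str) -> str:
    # Stand-in for the on-device tiny model used in Offline Mode.
    return f"[local] brief answer to: {text}"

def respond(text: str, internet_available: bool) -> str:
    """Route to the cloud LLM when online; fall back to the local model offline."""
    try:
        if not internet_available:
            raise ConnectionError("no network")
        return cloud_llm(text)
    except ConnectionError:
        # Core functions keep working without Internet access.
        return tiny_model(text)

print(respond("what is ahead?", internet_available=False))
```

The same try/fallback shape also covers transient network errors mid-session, not just a fully offline start.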
```
Camera ──> Detection Engine ──> MSDN ROI ──> GNN Gate ──> Prompt Template
                                                │              │
                                                ▼              ▼
Mic/Audio ──> ASR ─────────────────────────────────> LLM (DeepSeek)
```
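The stages above can be sketched as a simple pipeline: detections from the camera are gated, then folded into a prompt template for the LLM. Everything here is a hypothetical stand-in (the detector output, the gating rule, and the template wording are assumptions, not the project's real implementation):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    box: tuple  # (x, y, w, h) region of interest

def detect_obstacles(frame) -> list[Detection]:
    """Stand-in for the lightweight OD engine: returns candidate detections."""
    # The real system would run an on-device object detector on the frame.
    return [Detection("crosswalk", 0.91, (10, 20, 64, 64)),
            Detection("pole", 0.42, (80, 5, 16, 120))]

def gnn_gate(detections: list[Detection], threshold: float = 0.5) -> list[Detection]:
    """Stand-in for the graph-gated filter: keeps detections the gate opens for."""
    # The actual GNN gate scores ROIs jointly; here we gate on confidence alone.
    return [d for d in detections if d.confidence >= threshold]

def build_prompt(detections: list[Detection]) -> str:
    """Fill a prompt template with the gated scene for the LLM."""
    scene = ", ".join(f"{d.label} ({d.confidence:.0%})" for d in detections)
    return f"Describe for a visually impaired user: {scene}"

frame = object()  # placeholder for a camera frame
prompt = build_prompt(gnn_gate(detect_obstacles(frame)))
print(prompt)
```

In the full system the resulting prompt, together with the ASR transcript of the user's speech, would be sent to the LLM for voice feedback; low-confidence detections (the "pole" above) are filtered out before prompting.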