RoadSafetyAI is a web application that provides expert-level road safety intervention recommendations. It is built on a Retrieval-Augmented Generation (RAG) architecture powered by Google's Gemini large language model.
Users can describe a road safety issue in detail—such as problems with road geometry, environmental conditions, or traffic behavior. The application analyzes this input, searches a curated knowledge base of road safety documents (hosted on Vertex AI Search), and synthesizes the retrieved information to recommend the most suitable interventions. Each recommendation is accompanied by a detailed rationale and direct citations to the source documents, ensuring that the advice is transparent, reliable, and evidence-based.
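To make the output shape concrete, a single recommendation returned to the UI might look roughly like the TypeScript type below; the name and fields are illustrative assumptions, not the application's actual schema.

```ts
// Hypothetical shape of one recommendation as rendered in the UI.
interface InterventionRecommendation {
  intervention: string; // e.g. "Install a raised pedestrian crossing"
  rationale: string;    // why this intervention suits the described issue
  citations: Array<{
    documentTitle: string; // source document in the knowledge base
    snippet: string;       // passage that supports the recommendation
  }>;
}
```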
This project is built with a modern, robust, and scalable tech stack:
- Frontend: Next.js (with App Router), React, TypeScript
- Styling: Tailwind CSS with ShadCN UI components for a clean and responsive interface.
- AI Orchestration: Google's Genkit is used to define, manage, and orchestrate the entire AI workflow, from model calls to tool usage (a minimal setup sketch follows this list).
- Language Model: Google's Gemini 2.5 Flash for its powerful reasoning and synthesis capabilities.
- Vector Search (RAG): Vertex AI Search hosts the specialized knowledge base of road safety documents, enabling fast and relevant semantic search.
- Deployment: The application is deployed and hosted on Firebase App Hosting.
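As a rough illustration of how Genkit, the googleAI plugin, and Gemini 2.5 Flash fit together, here is a minimal initialization sketch. The file path and option values are assumptions (based on a recent Genkit release), not the project's actual configuration.

```ts
// src/ai/genkit.ts (illustrative path): minimal Genkit setup with the googleAI plugin.
import { genkit } from 'genkit';
import { googleAI } from '@genkit-ai/googleai';

export const ai = genkit({
  // The plugin is expected to pick up GEMINI_API_KEY from the environment (see setup below).
  plugins: [googleAI()],
  // Default model for flows that do not specify one explicitly.
  model: googleAI.model('gemini-2.5-flash'),
});
```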
The application's core is a sophisticated "two-step" Retrieval-Augmented Generation (RAG) flow that ensures both speed and accuracy.
Step 1: Brainstorming Search Queries
- When a user submits a safety issue, a dedicated Genkit flow sends the description to the Gemini model.
- The model's first task is to brainstorm a list of 3-5 concise, keyword-focused search queries that are most likely to find relevant information in the knowledge base (a sketch of this step follows below).
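A minimal sketch of what this query-brainstorming step could look like as a Genkit flow is shown below; the flow name, schemas, and prompt wording are assumptions for illustration, not the project's actual code.

```ts
import { z } from 'genkit';
import { ai } from './genkit'; // the Genkit instance sketched above (illustrative path)

// Hypothetical flow: turn a free-text safety issue into a short list of search queries.
export const brainstormSearchQueries = ai.defineFlow(
  {
    name: 'brainstormSearchQueries',
    inputSchema: z.object({ issueDescription: z.string() }),
    outputSchema: z.object({ queries: z.array(z.string()) }),
  },
  async ({ issueDescription }) => {
    const { output } = await ai.generate({
      prompt:
        `Brainstorm 3-5 concise, keyword-focused search queries for a road safety ` +
        `knowledge base, based on the following issue:\n\n${issueDescription}`,
      output: { schema: z.object({ queries: z.array(z.string()) }) },
    });
    return { queries: output?.queries ?? [] };
  }
);
```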
Step 2: Parallel Search & Synthesis
- The application's backend receives this list of queries.
- It then executes all searches against the Vertex AI Search data store in parallel using `Promise.all()`, a critical optimization that significantly reduces latency.
- The aggregated search results (the "retrieved context") and the original issue description are then sent back to the Gemini model in a final call.
- In this final step, the model's sole job is to synthesize a high-quality answer: it analyzes the context, formulates the recommendations, writes the rationale, and provides citations, strictly following the formatting rules defined in the prompt (a sketch of this step follows the list).
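The parallel retrieval and final synthesis can be sketched as follows. The searchDataStore() helper stands in for the Vertex AI Search client call and is hypothetical, as are the function name and prompt wording; only the `Promise.all()` pattern reflects the description above.

```ts
import { ai } from './genkit'; // the Genkit instance sketched above (illustrative path)

// Hypothetical helper: in the real application this would wrap the Vertex AI Search
// client and return text snippets from the matching knowledge-base documents.
async function searchDataStore(query: string): Promise<string[]> {
  // ...query Vertex AI Search and map the results to snippets...
  return [];
}

export async function recommendInterventions(
  issueDescription: string,
  queries: string[],
): Promise<string> {
  // Run every brainstormed query in parallel rather than sequentially, so total
  // retrieval time is roughly that of the slowest single search.
  const resultsPerQuery = await Promise.all(queries.map((q) => searchDataStore(q)));
  const retrievedContext = resultsPerQuery.flat().join('\n---\n');

  // Final call: the model synthesizes recommendations grounded only in the retrieved context.
  const { text } = await ai.generate({
    prompt:
      `You are a road safety expert. Using ONLY the context below, recommend the most ` +
      `suitable interventions for the issue, with a rationale and citations to the ` +
      `source documents.\n\nIssue:\n${issueDescription}\n\nContext:\n${retrievedContext}`,
  });
  return text;
}
```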
This architecture ensures that the AI's recommendations are grounded in factual information from the knowledge base, providing reliable and actionable advice.
To run the project locally, install the dependencies:

```bash
npm install
```

Then set the following environment variables (for example in a `.env` file):

```env
GEMINI_API_KEY=<gemini_api_key>
RAG_DATASTORE_ID=<rag_datastore_id>
VERTEX_AI_PROJECT_ID=<vertex_ai_project_id>
VERTEX_AI_SEARCH_ENGINE_ID=<vertex_ai_search_engine_id>
```
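At runtime these variables would typically be read through process.env; a minimal, hypothetical helper for validating them might look like this (the helper and config object are illustrative, not part of the project).

```ts
// Hypothetical config helper: fail fast if a required variable is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

export const config = {
  geminiApiKey: requireEnv('GEMINI_API_KEY'),
  ragDatastoreId: requireEnv('RAG_DATASTORE_ID'),
  vertexAiProjectId: requireEnv('VERTEX_AI_PROJECT_ID'),
  vertexAiSearchEngineId: requireEnv('VERTEX_AI_SEARCH_ENGINE_ID'),
};
```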
This project was developed by Sameer Verma as part of a competition held by the Centre of Excellence for Road Safety (CoERS), IIT Madras in November 2025.
For any questions, feedback, or collaboration inquiries, please feel free to connect with the developer.
