A comprehensive educational platform that combines intelligent search capabilities with document-based Q&A functionality. This platform allows users to search the web for educational content and upload documents for AI-powered question answering.
- Multi-source Search: Integrates Wikipedia, Google Images, and YouTube videos
- Smart Filtering: Filter results by type (All, Text, Images, Videos)
- Dynamic Layout:
  - Text results: Full-width with enhanced Wikipedia content display
  - Image results: 2-column grid (6 images for Image toggle, 5 for All toggle)
  - Video results: 2-column grid with embedded YouTube videos
- Real-time Search: Instant search with loading states and error handling
- Responsive Design: Optimized for desktop and mobile devices
- Multi-format Support: PDF, images (JPG, PNG, GIF), and text files
- OCR Processing: Automatic text extraction from images using Tesseract.js
- AI-Powered Q&A: Google Gemini integration for intelligent question answering
- Context-Aware Responses: Answers based on uploaded document content
- Interactive Chat: Real-time conversation interface with chat history
- Next.js 15.0.4 - React framework with App Router
- React 19.0.0 - UI library with latest features
- TypeScript - Type-safe development
- Tailwind CSS - Utility-first CSS framework
- Framer Motion - Animation library
- React Icons - Icon components
- React Dropzone - File upload handling
- Next.js API Routes - Serverless API endpoints
- Google Generative AI (Gemini) - AI question answering
- OpenAI API - Additional AI capabilities
- Tesseract.js - OCR for image text extraction
- PDF.js - PDF text extraction
- Wikipedia API - Text content and summaries
- Google Images - Image search results
- YouTube/Piped API - Video content
- Unsplash - Fallback image service
LearnScope/
├── src/
│   ├── app/
│   │   ├── api/
│   │   │   ├── qa/route.ts         # Q&A API endpoint
│   │   │   └── search/route.ts     # Search API endpoint
│   │   ├── fonts/                  # Custom fonts (Geist)
│   │   ├── globals.css             # Global styles
│   │   ├── layout.js               # Root layout
│   │   └── page.js                 # Main page component
│   └── components/
│       ├── Search/
│       │   └── index.tsx           # Basic search component
│       ├── SearchWithSidebar/
│       │   └── index.tsx           # Advanced search with sidebar
│       ├── TopicUploader/
│       │   └── index.tsx           # Document upload & processing
│       └── UploadChat/
│           └── index.tsx           # Chat interface for Q&A
├── next.config.mjs                 # Next.js configuration
├── tailwind.config.js              # Tailwind CSS configuration
├── tsconfig.json                   # TypeScript configuration
└── package.json                    # Dependencies and scripts
- Node.js 18+
- npm, yarn, pnpm, or bun package manager
1. Clone the repository: `git clone <repository-url>`
2. Install dependencies: `npm install` (or `yarn install`, `pnpm install`, `bun install`)
3. Set up environment variables: create a `.env.local` file in the root directory containing `GEMINI_API_KEY=your_gemini_api_key_here` and `OPENAI_API_KEY=your_openai_api_key_here`
4. Run the development server: `npm run dev` (or `yarn dev`, `pnpm dev`, `bun dev`)
5. Open your browser and navigate to `http://localhost:3000`
- User Input: Enter search query in the sidebar
- API Processing:
  - Wikipedia API for text content
  - Google Images API for visual content
  - YouTube/Piped API for video content
- Result Processing: Filter and format results based on type
- Display: Render results in appropriate layout (grid/list)
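The API-processing step above can be sketched as a small aggregation helper. This is an illustrative sketch, not the project's actual code: `aggregateSearch`, `Fetcher`, and `SearchResult` are hypothetical names. The key idea is that the three sources are queried concurrently and a failure in one source does not sink the whole search.

```typescript
type ResultType = "text" | "image" | "video";

interface SearchResult {
  type: ResultType;
  title: string;
  url: string;
}

// Each source (Wikipedia, Google Images, YouTube/Piped) is one fetcher.
type Fetcher = (query: string) => Promise<SearchResult[]>;

async function aggregateSearch(
  query: string,
  fetchers: Fetcher[]
): Promise<SearchResult[]> {
  // Query all sources concurrently; allSettled keeps partial results
  // even when one upstream API is down.
  const settled = await Promise.allSettled(fetchers.map((f) => f(query)));
  // A rejected source simply contributes no results.
  return settled.flatMap((s) => (s.status === "fulfilled" ? s.value : []));
}
```

In the real route each `Fetcher` would wrap a `fetch` call to the corresponding external API before the results are filtered and rendered.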
- File Upload: Drag & drop or select files (PDF, images, text)
- Text Extraction:
  - PDF: Extract text using PDF.js
  - Images: OCR processing with Tesseract.js
  - Text files: Direct reading
- AI Processing: Send extracted text to Gemini API
- Q&A Interface: Interactive chat for asking questions
- Context-Aware Responses: AI answers based on document content
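The text-extraction step above amounts to routing each upload to a format-specific extractor. Here is a minimal sketch under assumed names (`pickExtractor` and `Extractors` are not the project's actual exports); the PDF.js and Tesseract.js wrappers are injected so the routing logic stays self-contained:

```typescript
type Extractor = (file: { name: string }) => Promise<string>;

interface Extractors {
  pdf: Extractor;   // e.g. a wrapper around PDF.js getDocument/getTextContent
  image: Extractor; // e.g. a wrapper around Tesseract.js recognize()
  text: Extractor;  // plain file read for .txt
}

// Route a file to the right extractor based on its extension.
function pickExtractor(name: string, ex: Extractors): Extractor {
  const ext = name.toLowerCase().split(".").pop() ?? "";
  if (ext === "pdf") return ex.pdf;
  if (["jpg", "jpeg", "png", "gif"].includes(ext)) return ex.image;
  return ex.text; // .txt and anything else readable as text
}
```

The extracted text is then sent as context to the Gemini API for question answering.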
- Search Input: Real-time search with Enter key support
- Filter Toggles:
  - All: Shows 5 images + all text/video results
  - Text: Full-width Wikipedia articles with enhanced formatting
  - Image: 2-column grid showing 6 images
  - Video: 2-column grid with embedded YouTube videos
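The image-count rules above (6 images on the Image toggle, 5 on All, none otherwise) can be captured in a small pure helper. This is a sketch of the rule, not the component's actual code:

```typescript
type Toggle = "all" | "text" | "image" | "video";

// Limits mirror the UI description: Image toggle shows 6, All shows 5.
function imageLimit(toggle: Toggle): number {
  if (toggle === "image") return 6; // dedicated 2-column image grid
  if (toggle === "all") return 5;   // mixed layout reserves room for text/video
  return 0;                         // Text/Video toggles hide images entirely
}

function visibleImages<T>(images: T[], toggle: Toggle): T[] {
  return images.slice(0, imageLimit(toggle));
}
```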
- File Drop Zone: Drag & drop interface with visual feedback
- Supported Formats: PDF, JPG, PNG, GIF, TXT
- Processing Status: Real-time upload and extraction progress
- Chat History: Conversation log with user questions and AI responses
| Variable | Description | Required |
|---|---|---|
| `GEMINI_API_KEY` | Google Gemini API key for Q&A | Yes |
| `OPENAI_API_KEY` | OpenAI API key (optional) | No |
1. Go to Google AI Studio (https://aistudio.google.com).
2. Sign in with your Google account.
3. Open the left sidebar → API Keys → Create API key.
4. Choose "Create API key in Google AI Studio" (no Cloud project needed) and copy the key.
5. Add it to `.env.local`: `GEMINI_API_KEY=your_gemini_api_key_here`
6. Restart the dev server.
Recommended model: `gemini-1.5-flash` (used by this project in `src/app/api/qa/route.ts`). It's fast and has a more generous free-tier RPM (requests per minute) limit than `gemini-1.5-pro`.
Notes:
- The free tier can return 503 "model overloaded" errors during traffic spikes; retry with exponential backoff.
- Keep prompts short and cache answers where possible to stay within limits.
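The retry-with-backoff advice can be implemented with a small generic wrapper. This is a sketch (`withBackoff` is not part of the project); in `src/app/api/qa/route.ts` it could wrap the Gemini `generateContent` call:

```typescript
// Retry an async call with exponential backoff, e.g. for transient 503s.
async function withBackoff<T>(
  call: () => Promise<T>,
  retries = 3,
  baseDelayMs = 500
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await call();
    } catch (err) {
      if (attempt >= retries) throw err; // give up after the last retry
      // Exponential backoff: 500 ms, 1 s, 2 s, ... before the next attempt.
      await new Promise((resolve) =>
        setTimeout(resolve, baseDelayMs * 2 ** attempt)
      );
    }
  }
}
```

Usage would look like `withBackoff(() => model.generateContent(prompt))`, so a single overloaded-model response no longer fails the whole Q&A request.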
The platform is fully responsive and optimized for:
- Desktop: Full sidebar + main content layout
- Tablet: Adaptive grid layouts
- Mobile: Stacked layout with collapsible sidebar
- Push code to GitHub
- Connect repository to Vercel
- Add environment variables in Vercel dashboard
- Deploy automatically
- Netlify: Static export with API routes
- Railway: Full-stack deployment
- Docker: Containerized deployment