Deployment/edge function release #77
Conversation
- Update applicant creation to require at least one of CV file, LinkedIn profile, or GitHub URL.
- Improve error handling and user feedback for missing data.
- Make the CV file optional in the submission form and adjust validation logic accordingly.
- Refactor data processing to handle cases where multiple data sources are provided.
- Update interfaces to reflect changes in the applicant data structure.
- Introduce a new batch processing script to handle multiple applicants from a CSV file.
- Define CSV format requirements, including optional fields for LinkedIn, GitHub, and CV files.
- Implement parallel processing, error handling, and progress tracking for applicant data.
- Add an example CSV file to demonstrate the expected format and usage.
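The batch script's parsing and "at least one data source" rule can be sketched as below. This is a minimal illustration, not the actual script: the field names (`name`, `email`, `linkedin`, `github`, `cv`) and the naive comma splitting are assumptions.

```typescript
// Hypothetical sketch of the CSV batch-processing validation described above.
// Column names are assumptions; real CSV parsing should use a proper parser
// that handles quoting and embedded commas.
interface ApplicantRow {
  name: string;
  email: string;
  linkedin?: string;
  github?: string;
  cv?: string;
}

function parseApplicantCsv(csv: string): ApplicantRow[] {
  const [headerLine, ...lines] = csv.trim().split("\n");
  const headers = headerLine.split(",").map((h) => h.trim());
  return lines.map((line) => {
    const cells = line.split(",").map((c) => c.trim());
    const row: Record<string, string> = {};
    // Skip empty cells so optional fields stay undefined.
    headers.forEach((h, i) => {
      if (cells[i]) row[h] = cells[i];
    });
    return row as unknown as ApplicantRow;
  });
}

// Each applicant must supply at least one of CV, LinkedIn, or GitHub.
function hasDataSource(row: ApplicantRow): boolean {
  return Boolean(row.linkedin || row.github || row.cv);
}
```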
- Introduce support for LinkedIn data processing alongside CV data.
- Update the `validateAndCleanCvData` function to accept a source parameter for better data tracking.
- Refactor the applicant interface to use `ProfileData` instead of `CvData` for improved consistency.
- Implement a new function `convertLinkedInApiToProfileData` to transform LinkedIn API responses into the application's profile format.
- Ensure backward compatibility by aliasing `ProfileData` as `CvData` for existing references.
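The shape of `convertLinkedInApiToProfileData` might look like the sketch below. Both interfaces here are assumptions for illustration; the real LinkedIn API response and the codebase's `ProfileData` will differ.

```typescript
// Hypothetical input/output shapes — not the actual interfaces in the PR.
interface LinkedInApiResponse {
  firstName: string;
  lastName: string;
  headline?: string;
  positions?: { title: string; companyName: string }[];
}

interface ProfileData {
  fullName: string;
  headline: string;
  experience: { role: string; company: string }[];
  source: "linkedin" | "cv"; // tracks which data source produced the profile
}

function convertLinkedInApiToProfileData(api: LinkedInApiResponse): ProfileData {
  return {
    fullName: `${api.firstName} ${api.lastName}`.trim(),
    headline: api.headline ?? "",
    experience: (api.positions ?? []).map((p) => ({
      role: p.title,
      company: p.companyName,
    })),
    source: "linkedin",
  };
}
```

Tagging every profile with a `source` field is what lets later analysis require multiple independent data sources.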
…e CSV format
- Modify CSV format requirements to include `cv` instead of `github` as an optional field.
- Update applicant submission logic to accept LinkedIn profile URLs alongside CV files.
- Refactor applicant data processing to handle LinkedIn URLs, including a new function for API integration.
- Enhance validation to ensure at least one of `linkedin` or `cv` is provided for each applicant.
- Improve the applicant form UI to allow input of LinkedIn URLs or file uploads.
- Introduce a new `linkedin-api.ts` module for LinkedIn data processing.
- Update `ProcessingLoader` to display LinkedIn progress and status.
- Modify applicant processing to include LinkedIn URL handling with progress tracking.
- Refactor existing functions to improve clarity and maintainability.
- Enhance error handling and user feedback for LinkedIn data retrieval.
- Update UI components to reflect changes in LinkedIn data processing.
- Introduce a new markdown file for debugging the unmask button issue on the /board page.
- Enhance logging in the `handleCreateCandidate` function to trace form state and validation.
- Update imports to reflect changes in the applicant interface structure.
- Refactor error handling and button state management for improved user feedback.
- Remove the deprecated CV interface and consolidate applicant-related types for consistency.
Feature/supabase setup
Feature/img to image
- Code CRUD operations for applicants table with JSONB data handling
- Write functions for querying applicants by workspace with proper filtering
- Implement applicant status updates and validation
- Create unit tests for all applicant database operations
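The workspace-filtered query from the task list above can be illustrated in-memory. The real implementation queries Supabase (with the applicant payload stored as JSONB); the field names `workspace_id`, `status`, and `data` here are assumptions.

```typescript
// Illustrative in-memory version of "query applicants by workspace with
// proper filtering". The production code would run this filter in the
// database instead.
interface Applicant {
  id: string;
  workspace_id: string;
  status: "new" | "processing" | "analyzed";
  data: Record<string, unknown>; // the JSONB applicant payload
}

function applicantsByWorkspace(
  applicants: Applicant[],
  workspaceId: string,
  status?: Applicant["status"]
): Applicant[] {
  return applicants.filter(
    (a) =>
      a.workspace_id === workspaceId &&
      (status === undefined || a.status === status)
  );
}
```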
…tus updates
- Introduce LinkedIn job tracking in the applicant interface, including job ID and status fields.
- Refactor applicant processing to handle LinkedIn data retrieval and status updates.
- Update `ProcessingLoader` to display LinkedIn processing status and progress.
- Implement error handling for LinkedIn job failures and provide user feedback.
- Enhance analysis logic to require multiple data sources for credibility checks.
- Introduce new functions for managing the LinkedIn job lifecycle and data processing.
IMPROVEMENTS:
- Cleaned up debug logs, replaced with production-ready monitoring logs
- Added bulk sync session management to prevent file processing overload
- Implemented intelligent file processing deferral for large operations (100+ candidates)
- Added background file processing with batching and rate limiting
- Enhanced the API response with detailed sync metrics (duration, batches, etc.)
- Created a database migration for bulk sync session management
- Removed temporary test files

RESULT:
- ✅ Pagination works perfectly (tested with 1000 candidates)
- ✅ No more 500 errors from simultaneous file processing
- ✅ Optimized performance for large sync operations
- ✅ Production-ready logging and monitoring
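The core of "background file processing with batching" is splitting a large candidate set into fixed-size chunks that can be processed with a delay between them. A minimal sketch, assuming a generic batching helper (the PR's actual batch size and scheduling are not shown here):

```typescript
// Split a work list into fixed-size batches so file processing can be
// rate-limited batch by batch instead of firing all requests at once.
function toBatches<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```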
- Renamed migration from 20241201000000 to 20250827000000
- Ensures the bulk file processing migration runs after users table creation
- Database reset now works correctly
IMPROVEMENTS:
- Increased retry attempts from 3 to 5 for rate limit handling
- Implemented gradual exponential backoff (3s, 4.5s, 6.75s, 10.1s, 15.2s)
- Added 1-second delays between pagination requests to prevent rate limits
- Enhanced error messages with better retry suggestions
- Applied rate limiting optimization to all sync operations (manual, auto, cron)

RESULT:
- ✅ Can now fetch 1000+ candidates without rate limit errors
- ✅ Intelligent retry mechanism for temporary rate limits
- ✅ Proactive rate limit prevention with request delays
- ✅ Better user feedback for rate limit situations
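The delay sequence above (3s, 4.5s, 6.75s, 10.1s, 15.2s) corresponds to a 3-second base delay with a 1.5x multiplier over 5 attempts. A sketch of that backoff, with hypothetical function names (the real sync code's structure is not shown in this PR text):

```typescript
// Gradual exponential backoff: 3000ms * 1.5^attempt for 5 attempts,
// matching the 3s, 4.5s, 6.75s, ~10.1s, ~15.2s sequence described above.
const MAX_RETRIES = 5;
const BASE_DELAY_MS = 3000;
const BACKOFF_FACTOR = 1.5;

function retryDelayMs(attempt: number): number {
  // attempt is 0-based: 0 -> 3000, 1 -> 4500, 2 -> 6750, ...
  return BASE_DELAY_MS * Math.pow(BACKOFF_FACTOR, attempt);
}

// Hypothetical wrapper: retry a rate-limited call with growing delays.
async function withRateLimitRetry<T>(fn: () => Promise<T>): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < MAX_RETRIES; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      await new Promise((resolve) => setTimeout(resolve, retryDelayMs(attempt)));
    }
  }
  throw lastError;
}
```

A production version would also check that the error is actually a rate-limit response (e.g. HTTP 429) before retrying, rather than retrying every failure.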
IMPROVEMENTS:
- Changed the confusing 'analyzed' count (score >= 30) to accurate AI analysis tracking
- 'AI analyzed' now shows candidates with ai_status === 'ready' and actual analysis data
- Added a 'complete data' metric for candidates with LinkedIn + CV (ready for analysis)
- Simplified the label to 'total' instead of 'candidates' for a cleaner UI
- Updated tooltips to be more descriptive and helpful

RESULT:
- ✅ 'Complete data' shows candidates ready for AI analysis (LinkedIn + CV)
- ✅ 'AI analyzed' shows candidates actually processed by AI
- ✅ Metrics now accurately reflect the actual processing status
- ✅ More intuitive user experience with clear data readiness indicators
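The corrected metrics can be expressed as two simple predicates over the candidate list. The `ai_status` and `analysis` checks follow the PR text; the `linkedin_data`/`cv_data` field names are assumptions for illustration.

```typescript
// Dashboard metric sketch: "AI analyzed" counts candidates the AI actually
// processed; "complete data" counts candidates with both LinkedIn and CV.
interface Candidate {
  ai_status?: string;
  analysis?: object;       // present once AI analysis has run
  linkedin_data?: object;  // assumed field name
  cv_data?: object;        // assumed field name
}

function computeMetrics(candidates: Candidate[]) {
  return {
    total: candidates.length,
    aiAnalyzed: candidates.filter(
      (c) => c.ai_status === "ready" && c.analysis != null
    ).length,
    completeData: candidates.filter(
      (c) => c.linkedin_data != null && c.cv_data != null
    ).length,
  };
}
```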
- Deleted the /api/cron/ashby-sync route as it's not actively used
- Cleaned up the codebase to remove unnecessary complexity
- Build now passes without TypeScript errors

The manual sync functionality in the ATS dashboard provides sufficient control for candidate syncing.
…ment Feature/ashby pagination enhancement
…g migration
- Created a comprehensive PRD for migrating Ashby file processing from the Next.js API to Supabase Edge Functions
- Added a detailed technical analysis of code migration requirements
- Documented the Option A status tracking strategy using the `ashby_candidates.file_processing_status` column
- Outlined a 5-phase implementation plan with clear acceptance criteria
- Analysis shows ~420 lines of complex webhook code can be replaced with a simpler edge function
- The solution addresses 500 error issues during bulk sync operations (1000+ candidates)

Next: Begin Phase 1 implementation (database schema updates)
✅ Implemented:
- Created the process-ashby-file edge function with full CV processing logic
- Added the file_processing_status column and a database migration
- Updated the database trigger to call the edge function via pg_net
- Added comprehensive documentation and a README
- Created test scripts for edge function validation

🔧 Architecture:
- Migrated 270+ lines from a Next.js API route to a Supabase Edge Function
- Replaced the webhook-based system with a database trigger + edge function
- Added status tracking (pending -> processing -> completed/failed)
- Improved error handling and logging throughout the pipeline

🐛 Known Issue - Local Development Only:
The trigger fires correctly and pg_net makes HTTP requests (confirmed via result codes), but requests from the PostgreSQL container to the edge function (127.0.0.1:54321) appear to be failing due to networking isolation in the local development environment.
- ✅ Edge function works perfectly (manual curl tests successful)
- ✅ Database trigger fires on INSERT/UPDATE operations
- ✅ pg_net extension can queue HTTP requests (returns success codes)
- ❌ HTTP requests from the trigger don't reach the edge function in local dev

This is likely a Docker networking issue where the PostgreSQL container cannot reach host.docker.internal:54321. The system should work correctly in production, where all services run in the same Supabase environment. Manual testing shows the full edge function pipeline works end-to-end when called directly.
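The status tracking described above (pending -> processing -> completed/failed) is a small state machine over the `file_processing_status` column. The status names follow the PR text; the transition rules themselves, including allowing re-queue after failure, are assumptions for illustration.

```typescript
// Sketch of the file_processing_status lifecycle guard. Status names come
// from the PR; the allowed-transition table is an illustrative assumption.
type FileStatus = "pending" | "processing" | "completed" | "failed";

const ALLOWED_TRANSITIONS: Record<FileStatus, FileStatus[]> = {
  pending: ["processing"],
  processing: ["completed", "failed"],
  completed: [],          // terminal state
  failed: ["pending"],    // assumed: allow re-queueing a failed file
};

function canTransition(from: FileStatus, to: FileStatus): boolean {
  return ALLOWED_TRANSITIONS[from].includes(to);
}
```

Guarding transitions like this prevents, for example, a retried trigger from marking an already-completed file as processing again.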
Documents the current local development networking issue where PostgreSQL container cannot reach edge function at 127.0.0.1:54321. All components work individually but Docker networking isolation prevents full integration testing locally.
Provides manual testing option to simulate database trigger → edge function flow using real candidate data when local networking prevents full integration testing.
Edge functions use JSR imports which TypeScript doesn't understand. Excluding supabase/functions directory from TypeScript compilation to prevent build errors while maintaining type safety for the main application.
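The exclusion described above would look roughly like this in `tsconfig.json` — a minimal sketch assuming a standard config layout; the project's actual file will contain more options:

```json
{
  "exclude": ["node_modules", "supabase/functions"]
}
```

This keeps `tsc` from trying to resolve the Deno/JSR imports used by the edge functions while the main Next.js application remains fully type-checked.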
Claude finished @HenryAllen04's task — PR Review: Edge Function Release

This is a comprehensive review of a large-scale migration to edge functions and significant architectural changes. Overall, this is solid work with thoughtful patterns, but there are some security and architectural concerns that should be addressed.

Code Quality & Architecture ✅

Strengths:
Security Concerns
| GitGuardian id | GitGuardian status | Secret | Commit | Filename | |
|---|---|---|---|---|---|
| 19399242 | Triggered | Supabase Service Role JWT | ccf005d | test-edge-function.sh | View secret |
| 19496380 | Triggered | JSON Web Token | d01e838 | docs/supabase_curl.mdc | View secret |
🛠 Guidelines to remediate hardcoded secrets
- Understand the implications of revoking this secret by investigating where it is used in your code.
- Replace and store your secrets safely, following best practices.
- Revoke and rotate these secrets.
- If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.
To avoid such incidents in the future, consider:
- following best practices for managing and storing secrets, including API keys and other credentials
- installing secret detection on pre-commit to catch secrets before they leave your machine and ease remediation.
- Change deploy.yml to use .env.prod instead of .env.local
- Add required environment variables to build-test.yml for the build process
- Ensures proper separation of dev/prod environments
- Add NEXT_PUBLIC_SUPABASE_URL, NEXT_PUBLIC_SUPABASE_ANON_KEY, SUPABASE_SERVICE_ROLE_KEY
- Fixes build error: "Your project's URL and API key are required to create a Supabase client"
- Change .env.production to .env.prod in .gitignore
- Aligns with the deploy.yml workflow configuration
- Ensures proper environment file handling for production
- Add NEXT_PUBLIC_SUPABASE_URL, NEXT_PUBLIC_SUPABASE_ANON_KEY, SUPABASE_SERVICE_ROLE_KEY to Dockerfile ARG and ENV
- Update deploy.yml to pass Supabase secrets as Docker build arguments
- Fixes Docker build error: "Your project's URL and API key are required to create a Supabase client"
- Add explicit checks for NEXT_PUBLIC_SUPABASE_URL and NEXT_PUBLIC_SUPABASE_ANON_KEY
- Add checks for SUPABASE_SERVICE_ROLE_KEY in the service role client
- Provides better error messages when environment variables are missing during build
- Should help diagnose Docker build issues with Supabase client creation
…orkflow
- Add all environment variables as build-args to the Docker build step
- Fixes "Missing Supabase environment variables. URL: false, Key: false" error
- Ensures the Docker build has access to the same secrets as the npm build step
Got build and review approved. Will rewrite git history to remove the old secrets.
Testing in Prod
Summary by cubic
Introduce Ashby ATS integration with paginated candidate sync, CV storage via Supabase, and a new ATS UI, plus processing APIs and Supabase tooling for an event-driven flow. This also preps the edge-function migration for file processing and updates docs, envs, and deploy setup.
New Features
Migration