A Rust-based cloud API for AI model inference, conversation management, and organization administration. Part of the NEAR AI platform alongside the Chat API.
Prerequisites:

- Rust (latest stable version)
- Docker & Docker Compose (for local development)
- PostgreSQL (for production or local testing without Docker)
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd cloud-api
  ```
- Start services with Docker Compose:

  ```bash
  docker-compose up -d
  ```

  This starts:
  - PostgreSQL database on port 5432
  - NEAR AI Cloud API on port 3000
- Run without Docker:

  ```bash
  make dev
  ```

  This automatically:
  - Runs all database migrations
  - Seeds the database with development data
  - Starts the API server on http://localhost:3000
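After either startup path, a quick reachability check confirms the server is listening. This is only a sketch: it probes the root URL and treats any HTTP response as "up", since the actual health route (if any) isn't documented here.

```shell
# Probe the local API server; any HTTP response (even 404) means it is listening.
API_URL="${API_URL:-http://localhost:3000}"
if curl -sS --max-time 5 "$API_URL" >/dev/null 2>&1; then
  echo "API responding at $API_URL"
else
  echo "API not responding at $API_URL (is 'make dev' running?)"
fi
```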
Running the tests requires:

- A running PostgreSQL database
- The database must be accessible with the credentials specified in the test configuration
```bash
# Run unit tests only
make test-unit

# Run integration/e2e tests only (requires database)
make test-integration

# Run both unit and integration tests
make test
```

Tests use the same environment variables as the main application (from `env.example`). If environment variables are not set, tests use sensible defaults for local testing.
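That fallback follows the usual shell default-expansion pattern; a minimal sketch (the variable names mirror the ones below, but the actual defaults live in the test code):

```shell
# Use the exported value if present, otherwise fall back to a default.
: "${DATABASE_HOST:=localhost}"
: "${DATABASE_PORT:=5432}"
echo "Tests will connect to ${DATABASE_HOST}:${DATABASE_PORT}"
```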
```bash
# Start test database
docker run --name test-postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=platform_api \
  -p 5432:5432 \
  -d postgres:latest

# Run tests with default values (or override with env vars)
make test-integration

# Or with custom database settings
DATABASE_HOST=localhost \
DATABASE_PORT=5432 \
DATABASE_NAME=platform_api \
DATABASE_USERNAME=postgres \
DATABASE_PASSWORD=postgres \
make test-integration
```

Set these environment variables before running tests:
```bash
export DATABASE_HOST=localhost
export DATABASE_PORT=5432
export DATABASE_NAME=platform_api_test  # Use a dedicated test database
export DATABASE_USERNAME=your_username
export DATABASE_PASSWORD=your_password
export DATABASE_MAX_CONNECTIONS=5
export DATABASE_TLS_ENABLED=false
```

Copy `env.example` to `.env` and configure your test database:
```bash
cp env.example .env
# Edit .env with your database credentials
make test-integration
```

The vLLM integration tests require a running vLLM instance. Configure it using environment variables:
```bash
# Configure vLLM endpoint
export VLLM_BASE_URL=http://localhost:8002
export VLLM_API_KEY=your_vllm_api_key_here   # Optional
export VLLM_TEST_TIMEOUT_SECS=30             # Optional

# Run vLLM integration tests
cargo test --test integration_tests
```

If not set, tests will use default values but may fail if vLLM is not running at the default URL.
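Before running these tests, it can help to confirm the endpoint is actually reachable. A sketch using curl; `/v1/models` is the standard OpenAI-compatible listing route that vLLM serves, so adjust it if your deployment differs:

```shell
# Probe the configured vLLM endpoint before running integration tests.
VLLM_BASE_URL="${VLLM_BASE_URL:-http://localhost:8002}"
if curl -fsS --max-time 5 "${VLLM_BASE_URL}/v1/models" >/dev/null 2>&1; then
  echo "vLLM reachable at ${VLLM_BASE_URL}"
else
  echo "vLLM NOT reachable at ${VLLM_BASE_URL} - integration tests may fail"
fi
```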
The application uses YAML configuration files located in the config/ directory.
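For orientation, a file in `config/` might look like the sketch below. The keys shown here are illustrative assumptions, not the actual schema; the real keys are defined by the application code.

```yaml
# config/default.yaml — hypothetical example; consult the code for the real schema
server:
  host: 0.0.0.0
  port: 3000
database:
  host: localhost
  port: 5432
```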
This project uses cargo-audit to check for security vulnerabilities in dependencies. The audit runs automatically:
- On every push/PR that modifies `Cargo.toml` or `Cargo.lock`
- Daily at midnight UTC to catch newly published advisories
Failed audits will block PRs. To run locally:
```bash
cargo install cargo-audit
cargo audit
```

Known advisories without available fixes are documented and ignored in `.cargo/audit.toml`.
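An ignore entry in that file follows cargo-audit's standard config format; the advisory ID below is a placeholder, not a real entry from this repository:

```toml
# .cargo/audit.toml — hypothetical example
[advisories]
# No patched version available yet; tracked in the issue tracker.
ignore = ["RUSTSEC-XXXX-NNNN"]
```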
Dependabot is enabled to automatically create PRs for:
- Cargo dependencies: Weekly updates for Rust crates
- GitHub Actions: Weekly updates for workflow actions
Before committing code:
Run all checks with a single command:

```bash
make preflight
```

This runs:
- Clippy linter (strict mode with `-D warnings`)
- Code formatting check and fix
- Unit tests
- Full build
Once all checks pass, you're ready to commit!
Interactive API documentation is available when running the server:
- Scalar UI: http://localhost:3000/docs - Modern, beautiful API documentation with interactive playground
- OpenAPI Spec: http://localhost:3000/api-docs/openapi.json - Machine-readable OpenAPI specification
The documentation is generated from Rust code using utoipa and served via Scalar for an enhanced developer experience.
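The machine-readable spec is handy for client generation or diffing between versions. A sketch that saves it locally, assuming the server from `make dev` is running:

```shell
# Download the generated OpenAPI spec from a locally running server.
SPEC_URL="http://localhost:3000/api-docs/openapi.json"
if curl -fsS --max-time 5 -o openapi.json "$SPEC_URL" 2>/dev/null; then
  echo "Spec saved to openapi.json"
else
  echo "Could not fetch $SPEC_URL (start the server first)"
fi
```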
Licensed under the PolyForm Strict License 1.0.0.