Reliability layer for API calls: retries, caching, dedup, circuit breakers.
- Product Hunt: ⭐ Support us on Product Hunt - Help us reach more developers!
- RapidAPI: Try ReliAPI on RapidAPI - no installation required, call the hosted API directly
- NPM Package: `reliapi-sdk` - `npm install reliapi-sdk`
- PyPI Package: `reliapi-sdk` - `pip install reliapi-sdk`
- Docker Image: `kikudoc/reliapi` - `docker pull kikudoc/reliapi`
- CLI Package: `reliapi-cli` - `pip install reliapi-cli`
- Retries with Backoff - Automatic retries with exponential backoff (see the conceptual sketch after this list)
- Circuit Breaker - Prevent cascading failures
- Caching - TTL cache for GET requests and LLM responses
- Idempotency - Request coalescing with idempotency keys
- Rate Limiting - Built-in rate limiting per tier
- LLM Proxy - Unified interface for OpenAI, Anthropic, Mistral
- Cost Control - Budget caps and cost estimation
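Retries with backoff and circuit breaking are easiest to see in code. The sketch below is a minimal, conceptual illustration of the two patterns in plain Python - it is not ReliAPI's implementation, and every name and default in it is illustrative.

```python
import random
import time

class CircuitOpenError(Exception):
    """Raised when the breaker is open and calls fail fast."""

class CircuitBreaker:
    """Opens after `max_failures` consecutive failures, then rejects
    calls until `reset_after` seconds pass (then allows one trial)."""

    def __init__(self, max_failures=5, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise CircuitOpenError("circuit open; failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit again
        return result

def retry_with_backoff(fn, attempts=4, base_delay=0.5, max_delay=8.0):
    """Retry `fn` with exponential backoff plus jitter between attempts."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the last error
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, delay / 2))

# Usage: breaker.call(lambda: retry_with_backoff(flaky_request))
```

ReliAPI applies these layers (plus caching and idempotent coalescing) on the proxy side, so clients get them without writing any of this code.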
Try ReliAPI directly on RapidAPI - no SDK installation needed. Just subscribe to the API and start making requests!
JavaScript/TypeScript:
```bash
npm install reliapi-sdk
```

```typescript
import { ReliAPI } from 'reliapi-sdk';

const client = new ReliAPI({
  baseUrl: 'https://api.reliapi.dev',
  apiKey: 'your-api-key'
});

// HTTP proxy with retries
const response = await client.proxyHttp({
  target: 'my-api',
  method: 'GET',
  path: '/users/123',
  cache: 300 // cache for 5 minutes
});

// LLM proxy with idempotency
const llmResponse = await client.proxyLlm({
  target: 'openai',
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
  idempotencyKey: 'unique-key-123'
});
```

Python:
```bash
pip install reliapi-sdk
```

```python
from reliapi_sdk import ReliAPI

client = ReliAPI(
    base_url="https://api.reliapi.dev",
    api_key="your-api-key"
)

# HTTP proxy with retries
response = client.proxy_http(
    target="my-api",
    method="GET",
    path="/users/123",
    cache=300  # cache for 5 minutes
)

# LLM proxy with idempotency
llm_response = client.proxy_llm(
    target="openai",
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    idempotency_key="unique-key-123"
)
```

CLI:

```bash
pip install reliapi-cli
```

```bash
# Check health
reli ping
# Make HTTP request
reli request --method GET --url https://api.example.com/users
# Make LLM request
reli llm --target openai --message "Hello, world!"
```

GitHub Action:

```yaml
- uses: KikuAI-Lab/reliapi@v1
  with:
    api-url: 'https://api.reliapi.dev'
    api-key: ${{ secrets.RELIAPI_KEY }}
    endpoint: '/proxy/http'
    method: 'POST'
    body: '{"target": "my-api", "method": "GET", "path": "/health"}'
```

API endpoints:

POST /proxy/http
Proxy any HTTP API with reliability layers.
POST /proxy/llm
Proxy LLM requests with idempotency, caching, and cost control.
GET /healthz
Health check endpoint for monitoring.
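Calling the service directly works the same way from any HTTP client. The sketch below posts the same request body as the GitHub Action example above, using Python's `requests`; the bearer-token `Authorization` header and the printed response shape are assumptions here, so check the API reference for the exact auth scheme.

```python
import requests

# Hedged sketch: the bearer-token header is an assumption, not a
# documented detail of the ReliAPI auth scheme.
resp = requests.post(
    "https://api.reliapi.dev/proxy/http",
    headers={"Authorization": "Bearer your-api-key"},  # assumed auth header
    json={
        "target": "my-api",  # upstream target configured in ReliAPI
        "method": "GET",
        "path": "/health",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```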
Self-host with Docker:

```bash
docker run -d -p 8000:8000 \
  -e REDIS_URL="redis://localhost:6379/0" \
  kikudoc/reliapi:latest
```

MIT License - see LICENSE for details.
- GitHub Issues: https://github.com/KikuAI-Lab/reliapi/issues
- Email: dev@kikuai.dev