A production-ready FastAPI microservice for OpenAI Agent Builder that provides prediction market analysis and decision-making capabilities.
- Python 3.11+
- Docker (for containerization)
- OpenAI API key (for production use)
- Fly.io or Render account (for deployment)
- Clone and set up:

  ```bash
  git clone <your-repo>
  cd prediction-market-watcher
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  pip install fastapi uvicorn pydantic pyyaml
  ```

- Configure environment:

  ```bash
  cp .env.example .env
  # Edit .env with your configuration
  ```

- Run locally:

  ```bash
  python pm_watcher_single.py api
  ```

- Test endpoints:

  ```bash
  curl http://127.0.0.1:8787/health
  ```
- `GET /health` - Service health status
- `POST /filter_evidence` - Filter and rank evidence items
- `POST /thesis` - Generate market analysis thesis
- `POST /decide` - Make trading decisions based on thesis
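The three POST endpoints are designed to be chained: filtered evidence feeds the thesis, and the thesis feeds the decision. The sketch below is a minimal Python client illustrating that flow. It assumes the request shapes from the curl examples that follow, that the `requests` package is installed, that `/filter_evidence` returns the kept items under an `"items"` key, and that `/thesis` returns an object matching the thesis shape expected by `/decide`; adjust the field access to the actual response bodies.

```python
"""Minimal end-to-end client sketch for the three analysis endpoints.

Assumptions (not guaranteed by this README): `pip install requests`,
/filter_evidence returns kept items under "items", and /thesis returns
an object in the shape /decide expects for its "thesis" field.
"""
import requests

BASE = "http://127.0.0.1:8787"

market = {
    "id": "market-123",
    "question": "Will event X happen?",
    "description": "Market description",
}
items = [{
    "url": "https://example.com/news",
    "title": "Market News",
    "summary": "Important market development",
    "timestamp": "2024-01-01T12:00:00Z",
    "relevance_score": 0.8,
}]

# 1. Filter and rank the raw evidence.
filtered = requests.post(f"{BASE}/filter_evidence",
                         json={"items": items, "market": market}).json()
# Assumed response key; fall back to the raw list if it differs.
evidence = filtered.get("items", items) if isinstance(filtered, dict) else items

# 2. Generate a thesis from the filtered evidence.
thesis = requests.post(f"{BASE}/thesis",
                       json={"market": market, "evidence": evidence}).json()

# 3. Turn the thesis into a decision (execution stays disabled by default).
decision = requests.post(f"{BASE}/decide",
                         json={"market": market, "thesis": thesis}).json()
print(decision)
```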
Filter evidence:

```bash
curl -X POST http://127.0.0.1:8787/filter_evidence \
  -H "Content-Type: application/json" \
  -d '{
    "items": [
      {
        "url": "https://example.com/news",
        "title": "Market News",
        "summary": "Important market development",
        "timestamp": "2024-01-01T12:00:00Z",
        "relevance_score": 0.8
      }
    ],
    "market": {
      "id": "market-123",
      "question": "Will event X happen?",
      "description": "Market description"
    }
  }'
```

Generate thesis:

```bash
curl -X POST http://127.0.0.1:8787/thesis \
  -H "Content-Type: application/json" \
  -d '{
    "market": {
      "id": "market-123",
      "question": "Will event X happen?",
      "description": "Market description"
    },
    "evidence": [
      {
        "url": "https://example.com/news",
        "title": "Market News",
        "summary": "Important market development",
        "timestamp": "2024-01-01T12:00:00Z",
        "relevance_score": 0.8
      }
    ]
  }'
```

Make decision:

```bash
curl -X POST http://127.0.0.1:8787/decide \
  -H "Content-Type: application/json" \
  -d '{
    "market": {
      "id": "market-123",
      "question": "Will event X happen?",
      "description": "Market description"
    },
    "thesis": {
      "p_star": 0.65,
      "confidence": 0.8,
      "rationale": "Analysis shows positive indicators",
      "supporting_sources": ["https://example.com/news"]
    }
  }'
```

Build and test the Docker image:

```bash
# Build image
docker build -t pm-watcher .

# Test locally
docker run -p 8787:8787 \
  -e LLM_BACKEND=mock \
  -e EXECUTION_ENABLED=false \
  pm-watcher

# Test with OpenAI
docker run -p 8787:8787 \
  -e LLM_BACKEND=openai \
  -e OPENAI_API_KEY=your_key_here \
  -e EXECUTION_ENABLED=false \
  pm-watcher
```
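The Dockerfile shipped in the repository is what `docker build` actually uses; purely for orientation, a minimal image for this service might look like the sketch below. The base image, dependency list, and start command are assumptions inferred from the quick-start steps, not a copy of the real file.

```dockerfile
# Illustrative sketch only -- the Dockerfile shipped in this repo is authoritative.
FROM python:3.11-slim

WORKDIR /app
COPY . .

# Same dependencies as the local quick-start install (assumption).
RUN pip install --no-cache-dir fastapi uvicorn pydantic pyyaml

ENV PORT=8787
EXPOSE 8787

# Start the API the same way as the local run command.
CMD ["python", "pm_watcher_single.py", "api"]
```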
Deploy to Fly.io:

- Install the Fly CLI:

  ```bash
  curl -L https://fly.io/install.sh | sh
  ```

- Log in and deploy:

  ```bash
  fly auth login
  cd deploy/
  fly deploy
  ```

- Set environment variables:

  ```bash
  fly secrets set OPENAI_API_KEY=your_key_here
  fly secrets set LLM_BACKEND=openai
  fly secrets set EXECUTION_ENABLED=false
  ```

- Check status:

  ```bash
  fly status
  fly logs
  ```
Deploy to Render:

- Connect the repository to your Render dashboard
- Create a new Web Service from the repository
- Configure environment variables:
  - `OPENAI_API_KEY`: your OpenAI API key
  - `LLM_BACKEND`: `openai`
  - `EXECUTION_ENABLED`: `false`
  - `PORT`: `8787`
- Deploy - Render will build and deploy automatically
In your OpenAI Agent Builder canvas, create three HTTP Action tools:
- Name: `filter_evidence`
- Description: "Filter and rank evidence items for market analysis"
- Method: POST
- URL: `https://your-deployed-url.com/filter_evidence`
- Headers: `Content-Type: application/json`
- Schema:

```json
{
"type": "object",
"properties": {
"items": {
"type": "array",
"items": {
"type": "object",
"properties": {
"url": {"type": "string"},
"title": {"type": "string"},
"summary": {"type": "string"},
"timestamp": {"type": "string"},
"relevance_score": {"type": "number"}
},
"required": ["url", "title", "summary"]
}
},
"market": {
"type": "object",
"properties": {
"id": {"type": "string"},
"question": {"type": "string"},
"description": {"type": "string"}
},
"required": ["id", "question"]
}
},
"required": ["items", "market"]
}
```

- Name: `generate_thesis`
- Description: "Generate market analysis thesis based on evidence"
- Method: POST
- URL: `https://your-deployed-url.com/thesis`
- Headers: `Content-Type: application/json`
- Schema:

```json
{
"type": "object",
"properties": {
"market": {
"type": "object",
"properties": {
"id": {"type": "string"},
"question": {"type": "string"},
"description": {"type": "string"}
},
"required": ["id", "question"]
},
"evidence": {
"type": "array",
"items": {
"type": "object",
"properties": {
"url": {"type": "string"},
"title": {"type": "string"},
"summary": {"type": "string"},
"timestamp": {"type": "string"},
"relevance_score": {"type": "number"}
},
"required": ["url", "title", "summary"]
}
}
},
"required": ["market", "evidence"]
}
```

- Name: `make_decision`
- Description: "Make trading decision based on market thesis"
- Method: POST
- URL: `https://your-deployed-url.com/decide`
- Headers: `Content-Type: application/json`
- Schema:

```json
{
"type": "object",
"properties": {
"market": {
"type": "object",
"properties": {
"id": {"type": "string"},
"question": {"type": "string"},
"description": {"type": "string"}
},
"required": ["id", "question"]
},
"thesis": {
"type": "object",
"properties": {
"p_star": {"type": "number"},
"confidence": {"type": "number"},
"rationale": {"type": "string"},
"supporting_sources": {
"type": "array",
"items": {"type": "string"}
}
},
"required": ["p_star", "confidence", "rationale", "supporting_sources"]
}
},
"required": ["market", "thesis"]
}
```

Add these instructions to your Agent Builder:

```text
You are a prediction market analysis agent. Use the following workflow:
1. **Filter Evidence**: Use the filter_evidence tool to process and rank evidence items
2. **Generate Thesis**: Use the generate_thesis tool to create market analysis
3. **Make Decision**: Use the make_decision tool to determine trading actions
Always present results using the ApprovalCard component for user review before any execution.
For each analysis:
- Clearly explain your reasoning
- Highlight key risk factors
- Provide confidence levels
- Show supporting evidence sources
```
The `ui/ApprovalCard.tsx` component provides a user interface for reviewing decisions:

```tsx
// Copy this component to your Agent Builder's custom components
// See ui/ApprovalCard.tsx for full implementation
```

The service is configured through the following environment variables:

| Variable | Description | Default | Required |
|---|---|---|---|
| `PORT` | Server port | `8787` | No |
| `LLM_BACKEND` | LLM provider (`openai`, `mock`) | `mock` | Yes |
| `OPENAI_API_KEY` | OpenAI API key | - | Yes (if `openai`) |
| `OPENAI_MODEL` | OpenAI model | `gpt-4` | No |
| `EXECUTION_ENABLED` | Enable trading execution | `false` | No |
| `LOG_LEVEL` | Logging level | `INFO` | No |
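For local runs these values live in the `.env` file created during setup. Below is a minimal sketch using the variable names and defaults from the table above; the repo's `.env.example` is the authoritative template.

```bash
# Local .env sketch -- the repo's .env.example is the authoritative template.
PORT=8787
# Use "openai" in production; "mock" needs no API key.
LLM_BACKEND=mock
# Required only when LLM_BACKEND=openai.
OPENAI_API_KEY=
OPENAI_MODEL=gpt-4
# Keep execution disabled until you explicitly need it.
EXECUTION_ENABLED=false
LOG_LEVEL=INFO
```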
- Always set `EXECUTION_ENABLED=false` initially
- Use `LLM_BACKEND=openai` for production
- Never commit API keys to version control
- Use environment variables for all secrets
- Enable HTTPS in production deployments
Run the smoke test script to verify all endpoints:

```bash
python scripts/smoke_test.py
```

- Health check: `curl http://localhost:8787/health`
- Filter evidence: see example requests above
- Generate thesis: see example requests above
- Make decision: see example requests above
The /health endpoint provides:
- Service status
- Current configuration
- Timestamp
- LLM backend status
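Concretely, a healthy instance might answer with something along these lines; the field names are an illustrative sketch based on the list above, not a guaranteed response schema.

```jsonc
// Illustrative only -- actual field names may differ in the running service.
{
  "status": "ok",
  "llm_backend": "mock",
  "execution_enabled": false,
  "timestamp": "2024-01-01T12:00:00Z"
}
```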
Logs include:
- Request/response details
- LLM provider interactions
- Error tracking
- Performance metrics
- API keys managed via environment variables
- CORS configured for web access
- Input validation on all endpoints
- Execution disabled by default
- No sensitive data in logs
Once deployed, visit https://your-url.com/docs for interactive API documentation.
- Import errors: Ensure all dependencies are installed
- Port conflicts: Change PORT environment variable
- API key errors: Verify OPENAI_API_KEY is set correctly
- Docker issues: Check Dockerfile and build logs
- Check logs: `fly logs` (Fly.io) or the Render dashboard
- Verify environment variables are set
- Test locally before deploying
- Use mock backend for testing
MIT License - see LICENSE file for details.