# Production Deployment

This is the recommended production deployment mode for the current repository.

## Recommendation

Use:

- split FastAPI services
- `systemd` as the process supervisor
- `nginx` as TLS terminator and reverse proxy
- the static frontend build served by `nginx`
- a Docker-based skill sandbox

This matches the current architecture better than a monolithic process and is lower-risk than introducing Kubernetes at the current stage.

## Why This Mode Fits Best

1. The repository already uses a split-service runtime model.
2. `runtime_service` is the correct control-plane entrypoint for starting and stopping Gateway subprocesses.
3. The Gateway is run-scoped and ephemeral, which fits `systemd` plus subprocess management better than forcing everything into a single service binary.
4. Skill execution has security requirements; Docker sandboxing is the practical production default.

## Service Layout

| Component | Bind |
|-----------|------|
| `agent_service` | `127.0.0.1:8000` |
| `trading_service` | `127.0.0.1:8001` |
| `news_service` | `127.0.0.1:8002` |
| `runtime_service` | `127.0.0.1:8003` |
| gateway websocket | spawned by `runtime_service` |
| `nginx` | `:80` / `:443` |

## Frontend

Recommended frontend mode:

```bash
cd frontend
npm ci
npm run build
```

Then point the `nginx` root at:

```text
/opt/bigtime/app/frontend/dist
```

This is preferred over running `backend.apps.frontend_service` in production, because static serving via `nginx` is simpler and more reliable.
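A `systemd` unit per service keeps supervision simple. The sketch below is a hypothetical unit for `agent_service`; the unit name, user, virtualenv path, and the `backend.apps.agent_service:app` module path are assumptions — adjust them to your checkout and the actual ASGI entrypoints.

```ini
# /etc/bigtime/systemd sketch — hypothetical unit; paths and module are assumptions.
# Install as /etc/systemd/system/bigtime-agent.service, then:
#   systemctl daemon-reload && systemctl enable --now bigtime-agent
[Unit]
Description=BigTime agent_service
After=network.target

[Service]
User=bigtime
WorkingDirectory=/opt/bigtime/app
# Shared environment file described in the Environment section below
EnvironmentFile=/etc/bigtime/bigtime.env
# Assumed entrypoint; bind matches the Service Layout table
ExecStart=/opt/bigtime/venv/bin/uvicorn backend.apps.agent_service:app --host 127.0.0.1 --port 8000
Restart=on-failure
RestartSec=3

[Install]
WantedBy=multi-user.target
```

Duplicate the unit for `trading_service`, `news_service`, and `runtime_service` with the matching ports; for `runtime_service`, keep a single worker as noted in the operational notes.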
## Environment

Create a shared environment file, for example:

```bash
sudo mkdir -p /etc/bigtime
sudo cp .env /etc/bigtime/bigtime.env
```

Required production settings:

```bash
AGENT_SERVICE_URL=http://127.0.0.1:8000
TRADING_SERVICE_URL=http://127.0.0.1:8001
NEWS_SERVICE_URL=http://127.0.0.1:8002
RUNTIME_SERVICE_URL=http://127.0.0.1:8003
SKILL_SANDBOX_MODE=docker
SKILL_SANDBOX_MEMORY_LIMIT=512m
SKILL_SANDBOX_CPU_LIMIT=1.0
SKILL_SANDBOX_NETWORK=none
SKILL_SANDBOX_TIMEOUT=60
```

Also supply the required market/model API keys in the same environment file or through your secret-management system.

## Data Persistence

Persist these paths on durable storage:

- `runs/`
- `logs/`, if you keep service logs on disk
- optional `.env`-backed secrets, which should not live inside the repo working tree

The key runtime sources of truth are:

- `runs//state/runtime_state.json`
- `runs//state/server_state.json`
- `runs//logs/gateway.log`

## nginx Pattern

Recommended routing:

- `/` -> static frontend
- `/api/runtime/*` -> `127.0.0.1:8003`
- `/api/dynamic-team/*` -> `127.0.0.1:8003`
- `/api/trading/*` -> `127.0.0.1:8001`
- `/api/news/*` -> `127.0.0.1:8002`
- `/api/*` -> `127.0.0.1:8000`
- `/ws` -> gateway websocket

The checked-in nginx config should be treated as a starting point, not a full multi-service production config.

## Operational Notes

- Use `workers=1` for `runtime_service` unless you deliberately redesign the runtime manager around multi-process coordination.
- Keep the other API services stateless and scale them separately if needed.
- Monitor:
  - `runtime_service`
  - the run-scoped `gateway.log`
  - Docker daemon health
- Rotate logs outside the app, e.g. with journald or logrotate.

## Best Next Step

Deploy with:

- `systemd` units from [deploy/systemd](deploy/systemd)
- `nginx` in front
- one VM first

Only move to containers/orchestration after the runtime/gateway operational behavior is stable in that simpler topology.
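The routing table above can be sketched as an `nginx` server block. This is a minimal sketch, not the checked-in config: the `server_name`, certificate paths, and the gateway upstream port are placeholders — the gateway websocket is run-scoped, so its real port must come from a fixed configuration or an include regenerated when a run starts.

```nginx
# Hypothetical server block; replace example.com, cert paths, and the /ws port.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Static frontend build
    root /opt/bigtime/app/frontend/dist;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # Longest-prefix matching sends these before the /api/ catch-all.
    location /api/runtime/      { proxy_pass http://127.0.0.1:8003; }
    location /api/dynamic-team/ { proxy_pass http://127.0.0.1:8003; }
    location /api/trading/      { proxy_pass http://127.0.0.1:8001; }
    location /api/news/         { proxy_pass http://127.0.0.1:8002; }
    location /api/              { proxy_pass http://127.0.0.1:8000; }

    # Gateway websocket — placeholder port; the upstream is run-scoped.
    location /ws {
        proxy_pass http://127.0.0.1:8100;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

The `Upgrade`/`Connection` headers are required for `nginx` to proxy websocket handshakes; without them the `/ws` route will fail even though the plain HTTP routes work.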