# Deployment Notes

This directory contains the current production-oriented deployment artifacts for the 大时代 frontend site and the live gateway process.

This deployment shape is narrower than the current application architecture. For the code-level architecture, see [docs/current-architecture.md](../docs/current-architecture.md). For the planned convergence work, see [docs/development-roadmap.md](../docs/development-roadmap.md).

## Contents

- [deploy/systemd/evotraders.service](./systemd/evotraders.service) - systemd unit for the long-running 大时代 gateway process
- [scripts/run_prod.sh](../scripts/run_prod.sh) - production launch script used by the systemd unit
- [deploy/nginx/bigtime.cillinn.com.conf](./nginx/bigtime.cillinn.com.conf) - HTTPS nginx config with WebSocket proxying
- [deploy/nginx/bigtime.cillinn.com.http.conf](./nginx/bigtime.cillinn.com.http.conf) - plain HTTP/static-site variant

## Deployment Topology Options

This directory documents two deployment topologies:

### 1. Compatibility Topology (backend.main) - CURRENT PRODUCTION DEFAULT

The checked-in production path uses the **compatibility gateway** (`backend.main`):

- nginx serves the built frontend from `/var/www/bigtime/current`
- public domain examples use `bigtime.cillinn.com`
- nginx proxies `/ws` to `127.0.0.1:8765`
- systemd runs `scripts/run_prod.sh`
- `scripts/run_prod.sh` starts `python3 -m backend.main` in live mode on `127.0.0.1:8765`

This is a **monolithic gateway** that embeds all services internally. It is the current production default for simplicity but does not expose the split FastAPI services directly.

**When to use**: Single-server deployments, simpler operational requirements, backwards compatibility with existing monitoring.
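The public surface of this topology is small enough to sketch: static files plus one proxied WebSocket path. The fragment below is illustrative only, not the contents of the checked-in config; see [deploy/nginx/bigtime.cillinn.com.conf](./nginx/bigtime.cillinn.com.conf) for the authoritative version (TLS settings omitted here).

```nginx
server {
    server_name bigtime.cillinn.com;

    # Built frontend served as static files
    root /var/www/bigtime/current;

    # WebSocket upgrade to the local compatibility gateway
    location /ws {
        proxy_pass http://127.0.0.1:8765;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```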
### 2. Preferred Topology (Split Services) - RECOMMENDED FOR NEW DEPLOYMENTS

The modern architecture exposes individual FastAPI services:

| Service | Port | Purpose |
|---------|------|---------|
| agent_service | 8000 | Control plane for workspaces, agents, skills |
| trading_service | 8001 | Read-only trading data APIs |
| news_service | 8002 | Read-only explain/news APIs |
| runtime_service | 8003 | Runtime lifecycle APIs |
| gateway | 8765 | WebSocket event channel |

**When to use**: Multi-service deployments, independent scaling needs, service-level monitoring, or when following the architecture documented in [docs/current-architecture.md](../docs/current-architecture.md).

To deploy in split-service mode, you would:

1. Deploy each service with its own systemd unit
2. Configure nginx to route `/api/*` to the appropriate service
3. Keep the WebSocket proxy to the gateway on port 8765
4. Set environment variables for service discovery:

```
TRADING_SERVICE_URL=http://localhost:8001
NEWS_SERVICE_URL=http://localhost:8002
RUNTIME_SERVICE_URL=http://localhost:8003
```

## Important Paths And Ports

- frontend root: `/var/www/bigtime/current`
- gateway bind: `127.0.0.1:8765`
- public WebSocket path: `/ws`
- working directory expected by systemd: `/root/code/evotraders`

## systemd

The current systemd unit:

- uses `WorkingDirectory=/root/code/evotraders`
- executes [scripts/run_prod.sh](../scripts/run_prod.sh)
- restarts automatically on failure

Enable and start:

```bash
sudo cp deploy/systemd/evotraders.service /etc/systemd/system/evotraders.service
sudo systemctl daemon-reload
sudo systemctl enable evotraders
sudo systemctl start evotraders
```

Check status and logs:

```bash
sudo systemctl status evotraders
journalctl -u evotraders -f
```

## nginx

The HTTPS nginx config does two things:

- redirects `http://bigtime.cillinn.com` to HTTPS
- proxies `/ws` to the local gateway process with WebSocket upgrade headers

Typical install flow:

```bash
sudo cp deploy/nginx/bigtime.cillinn.com.conf \
  /etc/nginx/sites-available/bigtime.cillinn.com.conf
sudo ln -s /etc/nginx/sites-available/bigtime.cillinn.com.conf /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
```

The checked-in TLS config expects Let's Encrypt assets at:

- `/etc/letsencrypt/live/bigtime.cillinn.com/fullchain.pem`
- `/etc/letsencrypt/live/bigtime.cillinn.com/privkey.pem`

## Environment Expectations

Before using the production scripts, ensure the runtime environment has:

- a usable Python environment
- backend dependencies installed from `requirements.txt`
- the package installed with `pip install -e .` or `uv pip install -e .`
- frontend dependencies installed with `npm ci`
- repo dependencies installed
- required market/model API keys
- any desired `TICKERS` override

Recommended production install sequence:

```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pip install -e .
cd frontend && npm ci && npm run build && cd ..
```
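The environment expectations above can be spot-checked before starting the service. The script below is a minimal preflight sketch, not part of the repo: the `backend` module name comes from this document (`python3 -m backend.main`), but the `.venv` and `frontend/dist` paths are assumptions about the build layout; adjust them to match yours.

```python
#!/usr/bin/env python3
"""Hypothetical preflight check for the install sequence above.

The checked paths (.venv, frontend/dist) are assumptions; adjust to
your actual layout before relying on this.
"""
import importlib.util
import shutil
from pathlib import Path


def preflight(repo_root: str = ".") -> list[str]:
    """Return a list of human-readable problems; an empty list means ready."""
    root = Path(repo_root)
    problems = []
    if shutil.which("python3") is None:
        problems.append("python3 not on PATH")
    if shutil.which("node") is None:
        problems.append("node not on PATH (needed for the frontend build)")
    if not (root / ".venv").is_dir():
        problems.append("no .venv directory (run: python3 -m venv .venv)")
    if importlib.util.find_spec("backend") is None:
        problems.append("backend package not importable (run: pip install -e .)")
    if not (root / "frontend" / "dist").is_dir():
        problems.append("no frontend build output (run: npm run build)")
    return problems


if __name__ == "__main__":
    issues = preflight()
    for issue in issues:
        print("missing:", issue)
    print(f"preflight: {len(issues)} issue(s) found")
```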
The production script currently sets:

```bash
PYTHONPATH=/root/code/evotraders/.pydeps:.
TICKERS=${TICKERS:-AAPL,MSFT,GOOGL,AMZN,NVDA,META,TSLA,AMD,NFLX,AVGO,PLTR,COIN}
```

It then launches the current compatibility gateway/runtime process:

```bash
python3 -m backend.main \
  --mode live \
  --config-name production \
  --host 127.0.0.1 \
  --port 8765 \
  --trigger-time now \
  --poll-interval 15
```

## Skill Sandbox Configuration

Production deployments should enable the Docker-based skill sandbox for security isolation:

```bash
# Install with sandbox support
pip install -e ".[docker-sandbox]"

# Verify the Docker daemon is running
docker info
```

Environment variables (set by `scripts/run_prod.sh` with defaults):

| Variable | Default | Description |
|----------|---------|-------------|
| `SKILL_SANDBOX_MODE` | `docker` | Sandbox mode: `none` \| `docker` \| `kubernetes` |
| `SKILL_SANDBOX_IMAGE` | `python:3.11-slim` | Docker image for sandbox |
| `SKILL_SANDBOX_MEMORY_LIMIT` | `512m` | Memory limit per skill execution |
| `SKILL_SANDBOX_CPU_LIMIT` | `1.0` | CPU limit per skill execution |
| `SKILL_SANDBOX_NETWORK` | `none` | Network mode: `none` \| `bridge` |
| `SKILL_SANDBOX_TIMEOUT` | `60` | Execution timeout in seconds |

**Security recommendation**: Always use `SKILL_SANDBOX_MODE=docker` in production. The `none` mode (direct execution) is for development only and displays a security warning.

## What This Deployment Does Not Yet Cover

The checked-in deployment artifacts do not currently document or automate:

- split FastAPI service deployment on `8000` to `8003`
- OpenClaw gateway deployment on `18789`
- database backup/retention workflows
- frontend build/publish steps
- secret management

If you move production fully to split-service mode, update this directory so it documents the new service topology explicitly instead of relying on the gateway-only path.
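When migrating to split-service mode, a quick sanity check is to probe the ports from the split-service table above. The sketch below only reports TCP reachability; it does not verify that the right service is actually behind each port, and the service-to-port mapping is taken straight from this document.

```python
#!/usr/bin/env python3
"""Probe the split-service ports from the table in this document.

A diagnostic sketch: reports whether anything accepts TCP on each
port, nothing more.
"""
import socket

# Service -> port mapping from the split-service table above.
EXPECTED = {
    "agent_service": 8000,
    "trading_service": 8001,
    "news_service": 8002,
    "runtime_service": 8003,
    "gateway": 8765,
}


def probe(host: str = "127.0.0.1", timeout: float = 1.0) -> dict[str, bool]:
    """Return {service: True} if something accepts TCP on its port."""
    status = {}
    for name, port in EXPECTED.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                status[name] = True
        except OSError:
            status[name] = False
    return status


if __name__ == "__main__":
    for name, up in probe().items():
        print(f"{name:<16} {'up' if up else 'DOWN'}")
```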