feat(agent): complete EvoAgent integration for all 6 agent roles
Migrate all agent roles from Legacy to EvoAgent architecture: - fundamentals_analyst, technical_analyst, sentiment_analyst, valuation_analyst - risk_manager, portfolio_manager Key changes: - EvoAgent now supports Portfolio Manager compatibility methods (_make_decision, get_decisions, get_portfolio_state, load_portfolio_state, update_portfolio) - Add UnifiedAgentFactory for centralized agent creation - ToolGuard with batch approval API and WebSocket broadcast - Legacy agents marked deprecated (AnalystAgent, RiskAgent, PMAgent) - Remove backend/agents/compat.py migration shim - Add run_id alongside workspace_id for semantic clarity - Complete integration test coverage (13 tests) - All smoke tests passing for 6 agent roles Constraint: Must maintain backward compatibility with existing run configs Constraint: Memory support must work with EvoAgent (no fallback to Legacy) Rejected: Separate PM implementation for EvoAgent | unified approach cleaner Confidence: high Scope-risk: broad Directive: EVO_AGENT_IDS env var still respected but defaults to all roles Not-tested: Kubernetes sandbox mode for skill execution
This commit is contained in:
18
.env.example
18
.env.example
@@ -26,6 +26,10 @@ EXPLAIN_RANGE_USE_LLM=
|
|||||||
# Memory module
|
# Memory module
|
||||||
MEMORY_API_KEY=
|
MEMORY_API_KEY=
|
||||||
|
|
||||||
|
# Experimental EvoAgent rollout for selected analysts only.
|
||||||
|
# Example: EVO_AGENT_IDS=fundamentals_analyst,risk_manager,portfolio_manager
|
||||||
|
EVO_AGENT_IDS=
|
||||||
|
|
||||||
# ================== Agent-Specific Model Configuration | Agent特定模型配置 ==================
|
# ================== Agent-Specific Model Configuration | Agent特定模型配置 ==================
|
||||||
AGENT_SENTIMENT_ANALYST_MODEL_NAME=deepseek-v3.2-exp
|
AGENT_SENTIMENT_ANALYST_MODEL_NAME=deepseek-v3.2-exp
|
||||||
AGENT_TECHNICAL_ANALYST_MODEL_NAME=glm-4.6
|
AGENT_TECHNICAL_ANALYST_MODEL_NAME=glm-4.6
|
||||||
@@ -35,6 +39,20 @@ AGENT_RISK_MANAGER_MODEL_NAME=qwen3-max-preview
|
|||||||
AGENT_PORTFOLIO_MANAGER_MODEL_NAME=qwen3-max-preview
|
AGENT_PORTFOLIO_MANAGER_MODEL_NAME=qwen3-max-preview
|
||||||
|
|
||||||
# ================== Advanced Configuration | 高阶配置 ==================
|
# ================== Advanced Configuration | 高阶配置 ==================
|
||||||
|
|
||||||
|
# Skill Sandbox Mode | 技能沙盒执行模式
|
||||||
|
# none = direct execution (default, development only) | 直接执行(默认,仅开发环境)
|
||||||
|
# docker = Docker container isolation | Docker 容器隔离
|
||||||
|
# kubernetes = Kubernetes Pod isolation (reserved) | Kubernetes Pod 隔离(预留)
|
||||||
|
SKILL_SANDBOX_MODE=none
|
||||||
|
|
||||||
|
# Docker Sandbox Settings (only used when SKILL_SANDBOX_MODE=docker) | Docker 沙盒配置
|
||||||
|
SKILL_SANDBOX_IMAGE=python:3.11-slim
|
||||||
|
SKILL_SANDBOX_MEMORY_LIMIT=512m
|
||||||
|
SKILL_SANDBOX_CPU_LIMIT=1.0
|
||||||
|
SKILL_SANDBOX_NETWORK=none
|
||||||
|
SKILL_SANDBOX_TIMEOUT=60
|
||||||
|
|
||||||
MAX_COMM_CYCLES=2
|
MAX_COMM_CYCLES=2
|
||||||
MARGIN_REQUIREMENT=0.5
|
MARGIN_REQUIREMENT=0.5
|
||||||
DATA_START_DATE=2022-01-01
|
DATA_START_DATE=2022-01-01
|
||||||
|
|||||||
101
README.md
101
README.md
@@ -39,22 +39,41 @@ The frontend exposes the trading room, runtime controls, logs, approvals, agent
|
|||||||
|
|
||||||
## Current Architecture
|
## Current Architecture
|
||||||
|
|
||||||
The repository is currently in a transition from a modular monolith to split service surfaces. The split-service path is the default local development mode.
|
The repository uses a **split-service runtime model** for local development and is the default supported path.
|
||||||
|
|
||||||
Current app surfaces:
|
### Runtime vs Design-Time
|
||||||
|
|
||||||
- `backend.apps.agent_service` on `:8000`: control plane for workspaces, agents, skills, and guard/approval APIs
|
- **runtime** — the active execution layer (scheduler, gateway, pipeline, approvals during a live run)
|
||||||
- `backend.apps.trading_service` on `:8001`: read-only trading data APIs
|
- **run** — one concrete execution instance (`runs/<run_id>/`)
|
||||||
- `backend.apps.news_service` on `:8002`: read-only explain/news APIs
|
- **design-time** — configuration and control-plane concepts before a specific runtime is launched
|
||||||
- `backend.apps.runtime_service` on `:8003`: runtime lifecycle APIs
|
- **workspace** — the design-time registry exposed by `agent_service` (`workspaces/`)
|
||||||
- `backend.apps.openclaw_service` on `:8004`: read-only OpenClaw facade
|
|
||||||
- WebSocket gateway on `:8765`: live event/feed channel for the frontend
|
|
||||||
|
|
||||||
The most important runtime path today is:
|
### Service Surfaces
|
||||||
|
|
||||||
`frontend -> runtime_service/control APIs -> gateway/runtime manager -> market service + pipeline + storage`
|
| Service | Port | Responsibility |
|
||||||
|
|---------|------|----------------|
|
||||||
|
| `backend.apps.agent_service` | `:8000` | Control plane for workspaces, agents, skills, and guard/approval APIs |
|
||||||
|
| `backend.apps.trading_service` | `:8001` | Read-only trading data APIs |
|
||||||
|
| `backend.apps.news_service` | `:8002` | Read-only explain/news APIs |
|
||||||
|
| `backend.apps.runtime_service` | `:8003` | Runtime lifecycle APIs |
|
||||||
|
| `backend.apps.openclaw_service` | `:8004` | Read-only OpenClaw facade |
|
||||||
|
| WebSocket gateway | `:8765` | Live event/feed channel for the frontend |
|
||||||
|
|
||||||
Reference notes for the migration live in [services/README.md](./services/README.md).
|
### Active Runtime Path
|
||||||
|
|
||||||
|
```
|
||||||
|
frontend -> runtime_service/control APIs -> gateway/runtime manager -> market service + pipeline + storage
|
||||||
|
```
|
||||||
|
|
||||||
|
Runtime state is stored in `runs/<run_id>/` — this is the **runtime source of truth**. The `workspaces/` directory is the **design-time registry**, not the runtime execution path.
|
||||||
|
|
||||||
|
### Documentation
|
||||||
|
|
||||||
|
- [docs/current-architecture.md](./docs/current-architecture.md) — canonical architecture facts
|
||||||
|
- [services/README.md](./services/README.md) — service boundaries and migration details
|
||||||
|
- [docs/current-architecture.excalidraw](./docs/current-architecture.excalidraw) — visual diagram
|
||||||
|
- [docs/development-roadmap.md](./docs/development-roadmap.md) — next-step execution plan
|
||||||
|
- [docs/terminology.md](./docs/terminology.md) — consistent terminology guide
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
@@ -114,6 +133,9 @@ MODEL_NAME=qwen3-max-preview
|
|||||||
|
|
||||||
# memory (optional unless --enable-memory is used)
|
# memory (optional unless --enable-memory is used)
|
||||||
MEMORY_API_KEY=
|
MEMORY_API_KEY=
|
||||||
|
|
||||||
|
# experimental: switch selected analyst / risk roles to EvoAgent
|
||||||
|
EVO_AGENT_IDS=
|
||||||
```
|
```
|
||||||
|
|
||||||
Notes:
|
Notes:
|
||||||
@@ -121,6 +143,52 @@ Notes:
|
|||||||
- `FINNHUB_API_KEY` is required for live mode.
|
- `FINNHUB_API_KEY` is required for live mode.
|
||||||
- `POLYGON_API_KEY` enables long-lived market-store ingestion and refresh helpers.
|
- `POLYGON_API_KEY` enables long-lived market-store ingestion and refresh helpers.
|
||||||
- `MEMORY_API_KEY` is only required when long-term memory is enabled.
|
- `MEMORY_API_KEY` is only required when long-term memory is enabled.
|
||||||
|
- `EVO_AGENT_IDS` currently supports analyst roles plus `risk_manager` and `portfolio_manager`, and is intended for staged rollout.
|
||||||
|
|
||||||
|
### Skill Sandbox Security | 技能沙盒安全
|
||||||
|
|
||||||
|
Skill scripts can be executed in multiple sandbox modes controlled by `SKILL_SANDBOX_MODE`:
|
||||||
|
|
||||||
|
| Mode | Description | Use Case |
|
||||||
|
|------|-------------|----------|
|
||||||
|
| `none` | Direct execution, no isolation | Development only (default) |
|
||||||
|
| `docker` | Docker container isolation | Production with Docker |
|
||||||
|
| `kubernetes` | Kubernetes Pod isolation | Enterprise (reserved) |
|
||||||
|
|
||||||
|
Default configuration (development):
|
||||||
|
```bash
|
||||||
|
SKILL_SANDBOX_MODE=none
|
||||||
|
```
|
||||||
|
|
||||||
|
For production with Docker isolation:
|
||||||
|
```bash
|
||||||
|
SKILL_SANDBOX_MODE=docker
|
||||||
|
SKILL_SANDBOX_MEMORY_LIMIT=512m
|
||||||
|
SKILL_SANDBOX_CPU_LIMIT=1.0
|
||||||
|
SKILL_SANDBOX_NETWORK=none
|
||||||
|
```
|
||||||
|
|
||||||
|
When running in `none` mode, a runtime security warning is displayed on first skill execution as a reminder that scripts execute directly without isolation.
|
||||||
|
|
||||||
|
Smoke test for a specific staged EvoAgent rollout target:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
python3 scripts/smoke_evo_runtime.py --agent-id fundamentals_analyst
|
||||||
|
```
|
||||||
|
|
||||||
|
This script starts a temporary runtime, verifies the gateway log contains the
|
||||||
|
selected `EvoAgent`, checks `runtime_state.json`, validates the approval wake-up
|
||||||
|
path, and then stops the runtime.
|
||||||
|
|
||||||
|
You can also include it in the local release check:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
./scripts/check-prod-env.sh --smoke-evo
|
||||||
|
```
|
||||||
|
|
||||||
|
Without `EVO_AGENT_IDS`, this release check now runs
|
||||||
|
`fundamentals_analyst`, `risk_manager`, and `portfolio_manager`
|
||||||
|
smoke paths by default.
|
||||||
|
|
||||||
For a production-style local start flow, you can also use:
|
For a production-style local start flow, you can also use:
|
||||||
|
|
||||||
@@ -128,6 +196,9 @@ For a production-style local start flow, you can also use:
|
|||||||
./start.sh
|
./start.sh
|
||||||
```
|
```
|
||||||
|
|
||||||
|
The checked-in `production` label in the deploy scripts is only an example run
|
||||||
|
label. It should not be treated as a canonical root-level runtime directory.
|
||||||
|
|
||||||
### 3. Start the stack
|
### 3. Start the stack
|
||||||
|
|
||||||
Recommended local development flow:
|
Recommended local development flow:
|
||||||
@@ -159,6 +230,7 @@ python -m uvicorn backend.apps.agent_service:app --host 0.0.0.0 --port 8000 --re
|
|||||||
python -m uvicorn backend.apps.trading_service:app --host 0.0.0.0 --port 8001 --reload
|
python -m uvicorn backend.apps.trading_service:app --host 0.0.0.0 --port 8001 --reload
|
||||||
python -m uvicorn backend.apps.news_service:app --host 0.0.0.0 --port 8002 --reload
|
python -m uvicorn backend.apps.news_service:app --host 0.0.0.0 --port 8002 --reload
|
||||||
python -m uvicorn backend.apps.runtime_service:app --host 0.0.0.0 --port 8003 --reload
|
python -m uvicorn backend.apps.runtime_service:app --host 0.0.0.0 --port 8003 --reload
|
||||||
|
# compatibility gateway path, not the recommended primary dev entrypoint
|
||||||
python -m backend.main --mode live --host 0.0.0.0 --port 8765
|
python -m backend.main --mode live --host 0.0.0.0 --port 8765
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -208,6 +280,11 @@ unzip ret_data.zip -d backend/data
|
|||||||
- `runs/<run_id>/BOOTSTRAP.md` stores run-specific bootstrap values and prompt body
|
- `runs/<run_id>/BOOTSTRAP.md` stores run-specific bootstrap values and prompt body
|
||||||
- `runs/<run_id>/state/runtime_state.json` stores runtime snapshot state
|
- `runs/<run_id>/state/runtime_state.json` stores runtime snapshot state
|
||||||
- `runs/<run_id>/team_dashboard/*.json` is a compatibility/export layer for dashboard consumers, not the primary runtime source of truth
|
- `runs/<run_id>/team_dashboard/*.json` is a compatibility/export layer for dashboard consumers, not the primary runtime source of truth
|
||||||
|
- `ENABLE_DASHBOARD_COMPAT_EXPORTS=false` can disable those compatibility JSON exports in controlled environments while keeping runtime state persistence intact
|
||||||
|
|
||||||
|
Legacy root-level directories such as `live/`, `production/`, and `backtest/`
|
||||||
|
should be treated as historical compatibility artifacts, not the default runtime
|
||||||
|
location for new work.
|
||||||
|
|
||||||
Optional retention control:
|
Optional retention control:
|
||||||
|
|
||||||
@@ -304,7 +381,7 @@ trigger_time: "09:30"
|
|||||||
enable_memory: false
|
enable_memory: false
|
||||||
```
|
```
|
||||||
|
|
||||||
Initialize a run workspace with:
|
Initialize run-scoped assets with:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
evotraders init-workspace --config-name my_run
|
evotraders init-workspace --config-name my_run
|
||||||
|
|||||||
76
README_zh.md
76
README_zh.md
@@ -37,22 +37,41 @@
|
|||||||
|
|
||||||
## 当前架构
|
## 当前架构
|
||||||
|
|
||||||
仓库目前处于“模块化单体 -> 拆分服务”的迁移阶段,本地开发默认走 split-service 路径。
|
仓库目前使用 **split-service 运行时模型** 进行本地开发,这是默认支持的运行路径。
|
||||||
|
|
||||||
当前 app surface:
|
### 运行时 vs 设计时
|
||||||
|
|
||||||
- `backend.apps.agent_service`,端口 `8000`:控制面,负责 workspaces、agents、skills、审批接口
|
- **runtime** — 活跃的执行层(scheduler、gateway、pipeline、实盘运行期间的审批)
|
||||||
- `backend.apps.trading_service`,端口 `8001`:只读交易数据接口
|
- **run** — 一次具体的执行实例(`runs/<run_id>/`)
|
||||||
- `backend.apps.news_service`,端口 `8002`:只读 explain/news 接口
|
- **design-time** — 启动特定 runtime 之前的配置和控制面概念
|
||||||
- `backend.apps.runtime_service`,端口 `8003`:运行时生命周期接口
|
- **workspace** — `agent_service` 暴露的设计时注册表(`workspaces/`)
|
||||||
- `backend.apps.openclaw_service`,端口 `8004`:只读 OpenClaw facade
|
|
||||||
- WebSocket gateway,端口 `8765`:前端实时事件和 feed 通道
|
|
||||||
|
|
||||||
当前最关键的主链路是:
|
### 服务表面
|
||||||
|
|
||||||
`frontend -> runtime_service/control APIs -> gateway/runtime manager -> market service + pipeline + storage`
|
| 服务 | 端口 | 职责 |
|
||||||
|
|------|------|------|
|
||||||
|
| `backend.apps.agent_service` | `:8000` | workspaces、agents、skills 和 guard/approval API 的控制面 |
|
||||||
|
| `backend.apps.trading_service` | `:8001` | 只读交易数据 API |
|
||||||
|
| `backend.apps.news_service` | `:8002` | 只读 explain/news API |
|
||||||
|
| `backend.apps.runtime_service` | `:8003` | 运行时生命周期 API |
|
||||||
|
| `backend.apps.openclaw_service` | `:8004` | 只读 OpenClaw facade |
|
||||||
|
| WebSocket gateway | `:8765` | 前端实时事件/feed 通道 |
|
||||||
|
|
||||||
迁移背景可参考 [services/README.md](./services/README.md)。
|
### 活跃运行时路径
|
||||||
|
|
||||||
|
```
|
||||||
|
frontend -> runtime_service/control APIs -> gateway/runtime manager -> market service + pipeline + storage
|
||||||
|
```
|
||||||
|
|
||||||
|
运行时状态存储在 `runs/<run_id>/` — 这是 **运行时唯一真相源**。`workspaces/` 目录是 **设计时注册表**,不是运行时执行路径。
|
||||||
|
|
||||||
|
### 文档
|
||||||
|
|
||||||
|
- [docs/current-architecture.md](./docs/current-architecture.md) — 权威架构事实
|
||||||
|
- [services/README.md](./services/README.md) — 服务边界和迁移详情
|
||||||
|
- [docs/current-architecture.excalidraw](./docs/current-architecture.excalidraw) — 架构图
|
||||||
|
- [docs/development-roadmap.md](./docs/development-roadmap.md) — 下一步执行计划
|
||||||
|
- [docs/terminology.md](./docs/terminology.md) — 术语规范指南
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
@@ -112,6 +131,9 @@ MODEL_NAME=qwen3-max-preview
|
|||||||
|
|
||||||
# 长期记忆(只有启用 --enable-memory 才需要)
|
# 长期记忆(只有启用 --enable-memory 才需要)
|
||||||
MEMORY_API_KEY=
|
MEMORY_API_KEY=
|
||||||
|
|
||||||
|
# 实验性:将选定的 analyst / risk 角色切换到 EvoAgent
|
||||||
|
EVO_AGENT_IDS=
|
||||||
```
|
```
|
||||||
|
|
||||||
说明:
|
说明:
|
||||||
@@ -119,6 +141,23 @@ MEMORY_API_KEY=
|
|||||||
- live 模式必须配置 `FINNHUB_API_KEY`
|
- live 模式必须配置 `FINNHUB_API_KEY`
|
||||||
- `POLYGON_API_KEY` 用于长期 market store 的补数和刷新
|
- `POLYGON_API_KEY` 用于长期 market store 的补数和刷新
|
||||||
- `MEMORY_API_KEY` 仅在启用长期记忆时需要
|
- `MEMORY_API_KEY` 仅在启用长期记忆时需要
|
||||||
|
- `EVO_AGENT_IDS` 目前支持 analyst 角色以及 `risk_manager` 和 `portfolio_manager`,用于分阶段灰度发布
|
||||||
|
|
||||||
|
特定 EvoAgent 灰度目标的冒烟测试:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
python3 scripts/smoke_evo_runtime.py --agent-id fundamentals_analyst
|
||||||
|
```
|
||||||
|
|
||||||
|
该脚本启动临时运行时,验证 gateway 日志包含选定的 `EvoAgent`,检查 `runtime_state.json`,验证审批唤醒路径,然后停止运行时。
|
||||||
|
|
||||||
|
你也可以将其包含在本地发布检查中:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
./scripts/check-prod-env.sh --smoke-evo
|
||||||
|
```
|
||||||
|
|
||||||
|
未设置 `EVO_AGENT_IDS` 时,此发布检查默认运行 `fundamentals_analyst`、`risk_manager` 和 `portfolio_manager` 的冒烟路径。
|
||||||
|
|
||||||
如果要用更接近生产的本地启动方式,也可以直接执行:
|
如果要用更接近生产的本地启动方式,也可以直接执行:
|
||||||
|
|
||||||
@@ -157,9 +196,13 @@ python -m uvicorn backend.apps.agent_service:app --host 0.0.0.0 --port 8000 --re
|
|||||||
python -m uvicorn backend.apps.trading_service:app --host 0.0.0.0 --port 8001 --reload
|
python -m uvicorn backend.apps.trading_service:app --host 0.0.0.0 --port 8001 --reload
|
||||||
python -m uvicorn backend.apps.news_service:app --host 0.0.0.0 --port 8002 --reload
|
python -m uvicorn backend.apps.news_service:app --host 0.0.0.0 --port 8002 --reload
|
||||||
python -m uvicorn backend.apps.runtime_service:app --host 0.0.0.0 --port 8003 --reload
|
python -m uvicorn backend.apps.runtime_service:app --host 0.0.0.0 --port 8003 --reload
|
||||||
|
# 兼容性 gateway 路径,不是推荐的主要开发入口
|
||||||
python -m backend.main --mode live --host 0.0.0.0 --port 8765
|
python -m backend.main --mode live --host 0.0.0.0 --port 8765
|
||||||
```
|
```
|
||||||
|
|
||||||
|
仓库里部署脚本使用的 `production` 只是一个示例 run label,不应再把它理解成
|
||||||
|
系统规定的根目录运行目录名。
|
||||||
|
|
||||||
### 4. 使用 CLI 运行回测或实盘
|
### 4. 使用 CLI 运行回测或实盘
|
||||||
|
|
||||||
回测:
|
回测:
|
||||||
@@ -205,7 +248,10 @@ unzip ret_data.zip -d backend/data
|
|||||||
- 每次 run 的状态写入 `runs/<run_id>/`
|
- 每次 run 的状态写入 `runs/<run_id>/`
|
||||||
- `runs/<run_id>/BOOTSTRAP.md` 保存该 run 的 bootstrap 值和 prompt body
|
- `runs/<run_id>/BOOTSTRAP.md` 保存该 run 的 bootstrap 值和 prompt body
|
||||||
- `runs/<run_id>/state/runtime_state.json` 保存运行时快照
|
- `runs/<run_id>/state/runtime_state.json` 保存运行时快照
|
||||||
- `runs/<run_id>/team_dashboard/*.json` 主要是给 dashboard 用的兼容导出层,不是唯一真相源
|
- `runs/<run_id>/team_dashboard/*.json` 主要是给 dashboard 用的兼容导出层,不是运行时唯一真相源
|
||||||
|
- 在受控环境里可通过 `ENABLE_DASHBOARD_COMPAT_EXPORTS=false` 关闭这层兼容 JSON 导出,而不影响 runtime state 持久化
|
||||||
|
|
||||||
|
遗留的根级目录如 `live/`、`production/` 和 `backtest/` 应被视为历史兼容性产物,不是新工作的默认运行时位置。
|
||||||
|
|
||||||
可选保留策略:
|
可选保留策略:
|
||||||
|
|
||||||
@@ -231,7 +277,7 @@ VITE_TRADING_SERVICE_URL=http://localhost:8001
|
|||||||
VITE_WS_URL=ws://localhost:8765
|
VITE_WS_URL=ws://localhost:8765
|
||||||
```
|
```
|
||||||
|
|
||||||
如果不配置,前端会按本地默认值和兼容回退逻辑运行。
|
如果未设置这些变量,前端会回退到本地默认值和兼容性路径。
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
@@ -302,7 +348,7 @@ trigger_time: "09:30"
|
|||||||
enable_memory: false
|
enable_memory: false
|
||||||
```
|
```
|
||||||
|
|
||||||
初始化一个 run 工作区:
|
初始化一个 run 运行资产目录:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
evotraders init-workspace --config-name my_run
|
evotraders init-workspace --config-name my_run
|
||||||
@@ -324,7 +370,7 @@ evotraders/
|
|||||||
│ └── cli.py # Typer CLI 入口
|
│ └── cli.py # Typer CLI 入口
|
||||||
├── frontend/ # React + Vite 前端
|
├── frontend/ # React + Vite 前端
|
||||||
├── shared/ # 拆分服务共用 client 和 schema
|
├── shared/ # 拆分服务共用 client 和 schema
|
||||||
├── runs/ # run 级状态和 dashboard 导出
|
├── runs/ # run-scoped 状态和 dashboards
|
||||||
├── data/ # 长期研究数据
|
├── data/ # 长期研究数据
|
||||||
└── services/README.md
|
└── services/README.md
|
||||||
```
|
```
|
||||||
|
|||||||
@@ -1,14 +1,14 @@
|
|||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
"""
|
"""
|
||||||
Agents package - EvoAgent architecture for trading system.
|
Agents package for the current mixed runtime.
|
||||||
|
|
||||||
Exports:
|
Exports:
|
||||||
- EvoAgent: Next-generation agent with workspace support
|
- EvoAgent: Next-generation agent with workspace support
|
||||||
- ToolGuardMixin: Tool call approval/denial flow
|
- ToolGuardMixin: Tool call approval/denial flow
|
||||||
- CommandHandler: System command handling
|
- CommandHandler: System command handling
|
||||||
- AgentFactory: Dynamic agent creation and management
|
- AgentFactory: Design-time agent creation under `workspaces/`
|
||||||
- WorkspaceManager: Legacy name for the persistent workspace registry
|
- WorkspaceManager: Legacy alias for the persistent `workspaces/` registry
|
||||||
- WorkspaceRegistry: Explicit run-time-agnostic workspace registry
|
- WorkspaceRegistry: Explicit design-time `workspaces/` registry
|
||||||
- RunWorkspaceManager: Run-scoped workspace asset manager
|
- RunWorkspaceManager: Run-scoped workspace asset manager
|
||||||
- AgentRegistry: Central agent registry
|
- AgentRegistry: Central agent registry
|
||||||
- Legacy compatibility: AnalystAgent, PMAgent, RiskAgent
|
- Legacy compatibility: AnalystAgent, PMAgent, RiskAgent
|
||||||
@@ -26,9 +26,6 @@ from .analyst import AnalystAgent
|
|||||||
from .portfolio_manager import PMAgent
|
from .portfolio_manager import PMAgent
|
||||||
from .risk_manager import RiskAgent
|
from .risk_manager import RiskAgent
|
||||||
|
|
||||||
# Compatibility layer
|
|
||||||
from .compat import LegacyAgentAdapter, adapt_agent, adapt_agents, is_legacy_agent
|
|
||||||
|
|
||||||
__all__ = [
|
__all__ = [
|
||||||
# New architecture
|
# New architecture
|
||||||
"EvoAgent",
|
"EvoAgent",
|
||||||
@@ -48,9 +45,4 @@ __all__ = [
|
|||||||
"AnalystAgent",
|
"AnalystAgent",
|
||||||
"PMAgent",
|
"PMAgent",
|
||||||
"RiskAgent",
|
"RiskAgent",
|
||||||
# Compatibility layer
|
|
||||||
"LegacyAgentAdapter",
|
|
||||||
"adapt_agent",
|
|
||||||
"adapt_agents",
|
|
||||||
"is_legacy_agent",
|
|
||||||
]
|
]
|
||||||
|
|||||||
@@ -2,7 +2,13 @@
|
|||||||
"""
|
"""
|
||||||
Analyst Agent - Based on AgentScope ReActAgent
|
Analyst Agent - Based on AgentScope ReActAgent
|
||||||
Performs analysis using tools and LLM
|
Performs analysis using tools and LLM
|
||||||
|
|
||||||
|
.. deprecated:: 0.2.0
|
||||||
|
AnalystAgent is deprecated and will be removed in a future version.
|
||||||
|
Use :class:`backend.agents.base.evo_agent.EvoAgent` instead.
|
||||||
|
See docs/CRITICAL_FIXES.md for migration guide.
|
||||||
"""
|
"""
|
||||||
|
import warnings
|
||||||
from typing import Any, Dict, Optional
|
from typing import Any, Dict, Optional
|
||||||
|
|
||||||
from agentscope.agent import ReActAgent
|
from agentscope.agent import ReActAgent
|
||||||
@@ -13,11 +19,23 @@ from ..config.constants import ANALYST_TYPES
|
|||||||
from ..utils.progress import progress
|
from ..utils.progress import progress
|
||||||
from .prompt_factory import build_agent_system_prompt, clear_prompt_factory_cache
|
from .prompt_factory import build_agent_system_prompt, clear_prompt_factory_cache
|
||||||
|
|
||||||
|
# Emit deprecation warning on module import
|
||||||
|
warnings.warn(
|
||||||
|
"AnalystAgent is deprecated. Use EvoAgent instead. "
|
||||||
|
"See docs/CRITICAL_FIXES.md for migration guide.",
|
||||||
|
DeprecationWarning,
|
||||||
|
stacklevel=2,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
class AnalystAgent(ReActAgent):
|
class AnalystAgent(ReActAgent):
|
||||||
"""
|
"""
|
||||||
Analyst Agent - Uses LLM for tool selection and analysis
|
Analyst Agent - Uses LLM for tool selection and analysis
|
||||||
Inherits from AgentScope's ReActAgent
|
Inherits from AgentScope's ReActAgent
|
||||||
|
|
||||||
|
.. deprecated:: 0.2.0
|
||||||
|
Use :class:`backend.agents.base.evo_agent.EvoAgent` with
|
||||||
|
workspace-driven configuration instead.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def __init__(
|
def __init__(
|
||||||
@@ -33,6 +51,10 @@ class AnalystAgent(ReActAgent):
|
|||||||
"""
|
"""
|
||||||
Initialize Analyst Agent
|
Initialize Analyst Agent
|
||||||
|
|
||||||
|
.. deprecated:: 0.2.0
|
||||||
|
Use :class:`backend.agents.unified_factory.UnifiedAgentFactory`
|
||||||
|
or :class:`backend.agents.base.evo_agent.EvoAgent` instead.
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
analyst_type: Type of analyst (e.g., "fundamentals", etc.)
|
analyst_type: Type of analyst (e.g., "fundamentals", etc.)
|
||||||
toolkit: AgentScope Toolkit instance
|
toolkit: AgentScope Toolkit instance
|
||||||
@@ -42,6 +64,14 @@ class AnalystAgent(ReActAgent):
|
|||||||
config: Configuration dictionary
|
config: Configuration dictionary
|
||||||
long_term_memory: Optional ReMeTaskLongTermMemory instance
|
long_term_memory: Optional ReMeTaskLongTermMemory instance
|
||||||
"""
|
"""
|
||||||
|
# Emit runtime deprecation warning
|
||||||
|
warnings.warn(
|
||||||
|
f"AnalystAgent('{analyst_type}') is deprecated. "
|
||||||
|
"Use EvoAgent via UnifiedAgentFactory instead.",
|
||||||
|
DeprecationWarning,
|
||||||
|
stacklevel=2,
|
||||||
|
)
|
||||||
|
|
||||||
if analyst_type not in ANALYST_TYPES:
|
if analyst_type not in ANALYST_TYPES:
|
||||||
raise ValueError(
|
raise ValueError(
|
||||||
f"Unknown analyst type: {analyst_type}. "
|
f"Unknown analyst type: {analyst_type}. "
|
||||||
|
|||||||
@@ -90,6 +90,8 @@ class EvoAgent(ToolGuardMixin, ReActAgent):
|
|||||||
sys_prompt: Optional[str] = None,
|
sys_prompt: Optional[str] = None,
|
||||||
max_iters: int = 10,
|
max_iters: int = 10,
|
||||||
memory: Optional[Any] = None,
|
memory: Optional[Any] = None,
|
||||||
|
long_term_memory: Optional[Any] = None,
|
||||||
|
long_term_memory_mode: str = "static_control",
|
||||||
enable_tool_guard: bool = True,
|
enable_tool_guard: bool = True,
|
||||||
enable_bootstrap_hook: bool = True,
|
enable_bootstrap_hook: bool = True,
|
||||||
enable_memory_compaction: bool = False,
|
enable_memory_compaction: bool = False,
|
||||||
@@ -97,6 +99,9 @@ class EvoAgent(ToolGuardMixin, ReActAgent):
|
|||||||
memory_compact_threshold: Optional[int] = None,
|
memory_compact_threshold: Optional[int] = None,
|
||||||
env_context: Optional[str] = None,
|
env_context: Optional[str] = None,
|
||||||
prompt_files: Optional[List[str]] = None,
|
prompt_files: Optional[List[str]] = None,
|
||||||
|
# Portfolio manager specific parameters
|
||||||
|
initial_cash: Optional[float] = None,
|
||||||
|
margin_requirement: Optional[float] = None,
|
||||||
):
|
):
|
||||||
"""Initialize EvoAgent.
|
"""Initialize EvoAgent.
|
||||||
|
|
||||||
@@ -144,16 +149,24 @@ class EvoAgent(ToolGuardMixin, ReActAgent):
|
|||||||
# Initialize hook manager
|
# Initialize hook manager
|
||||||
self._hook_manager = HookManager()
|
self._hook_manager = HookManager()
|
||||||
|
|
||||||
|
# Build kwargs for parent ReActAgent
|
||||||
|
kwargs = {
|
||||||
|
"name": agent_id,
|
||||||
|
"model": model,
|
||||||
|
"sys_prompt": self._sys_prompt,
|
||||||
|
"toolkit": toolkit,
|
||||||
|
"memory": memory or InMemoryMemory(),
|
||||||
|
"formatter": formatter,
|
||||||
|
"max_iters": max_iters,
|
||||||
|
}
|
||||||
|
|
||||||
|
# Add long-term memory if provided
|
||||||
|
if long_term_memory:
|
||||||
|
kwargs["long_term_memory"] = long_term_memory
|
||||||
|
kwargs["long_term_memory_mode"] = long_term_memory_mode
|
||||||
|
|
||||||
# Initialize parent ReActAgent
|
# Initialize parent ReActAgent
|
||||||
super().__init__(
|
super().__init__(**kwargs)
|
||||||
name=agent_id,
|
|
||||||
model=model,
|
|
||||||
sys_prompt=self._sys_prompt,
|
|
||||||
toolkit=toolkit,
|
|
||||||
memory=memory or InMemoryMemory(),
|
|
||||||
formatter=formatter,
|
|
||||||
max_iters=max_iters,
|
|
||||||
)
|
|
||||||
|
|
||||||
# Register hooks
|
# Register hooks
|
||||||
self._register_hooks(
|
self._register_hooks(
|
||||||
@@ -366,6 +379,110 @@ class EvoAgent(ToolGuardMixin, ReActAgent):
|
|||||||
self.toolkit = new_toolkit
|
self.toolkit = new_toolkit
|
||||||
logger.info("Skills reloaded for agent: %s", self.agent_id)
|
logger.info("Skills reloaded for agent: %s", self.agent_id)
|
||||||
|
|
||||||
|
def _make_decision(
|
||||||
|
self,
|
||||||
|
ticker: str,
|
||||||
|
action: str,
|
||||||
|
quantity: int,
|
||||||
|
confidence: int = 50,
|
||||||
|
reasoning: str = "",
|
||||||
|
) -> "ToolResponse":
|
||||||
|
"""Record a trading decision for a ticker (PM agent compatibility).
|
||||||
|
|
||||||
|
Args:
|
||||||
|
ticker: Stock ticker symbol (e.g., "AAPL")
|
||||||
|
action: Decision - "long", "short" or "hold"
|
||||||
|
quantity: Number of shares to trade (0 for hold)
|
||||||
|
confidence: Confidence level 0-100
|
||||||
|
reasoning: Explanation for this decision
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
ToolResponse confirming decision recorded
|
||||||
|
"""
|
||||||
|
from agentscope.message import TextBlock
|
||||||
|
from agentscope.tool import ToolResponse
|
||||||
|
|
||||||
|
if action not in ["long", "short", "hold"]:
|
||||||
|
return ToolResponse(
|
||||||
|
content=[
|
||||||
|
TextBlock(
|
||||||
|
type="text",
|
||||||
|
text=f"Invalid action: {action}. Must be 'long', 'short', or 'hold'.",
|
||||||
|
),
|
||||||
|
],
|
||||||
|
)
|
||||||
|
|
||||||
|
# Store decision in metadata for retrieval
|
||||||
|
if not hasattr(self, "_decisions"):
|
||||||
|
self._decisions = {}
|
||||||
|
|
||||||
|
self._decisions[ticker] = {
|
||||||
|
"action": action,
|
||||||
|
"quantity": quantity if action != "hold" else 0,
|
||||||
|
"confidence": confidence,
|
||||||
|
"reasoning": reasoning,
|
||||||
|
}
|
||||||
|
|
||||||
|
return ToolResponse(
|
||||||
|
content=[
|
||||||
|
TextBlock(
|
||||||
|
type="text",
|
||||||
|
text=f"Decision recorded: {action} {quantity} shares of {ticker} "
|
||||||
|
f"(confidence: {confidence}%)",
|
||||||
|
),
|
||||||
|
],
|
||||||
|
)
|
||||||
|
|
||||||
|
def get_decisions(self) -> Dict[str, Dict]:
|
||||||
|
"""Get decisions from current cycle (PM compatibility)."""
|
||||||
|
return getattr(self, "_decisions", {}).copy()
|
||||||
|
|
||||||
|
def get_portfolio_state(self) -> Dict[str, Any]:
|
||||||
|
"""Get current portfolio state (PM compatibility)."""
|
||||||
|
return getattr(self, "_portfolio", {}).copy()
|
||||||
|
|
||||||
|
def load_portfolio_state(self, portfolio: Dict[str, Any]) -> None:
|
||||||
|
"""Load portfolio state (PM compatibility).
|
||||||
|
|
||||||
|
Args:
|
||||||
|
portfolio: Portfolio state dict with cash, positions, margin_used
|
||||||
|
"""
|
||||||
|
if not portfolio:
|
||||||
|
return
|
||||||
|
|
||||||
|
if not hasattr(self, "_portfolio"):
|
||||||
|
self._portfolio = {
|
||||||
|
"cash": 100000.0,
|
||||||
|
"positions": {},
|
||||||
|
"margin_used": 0.0,
|
||||||
|
"margin_requirement": 0.25,
|
||||||
|
}
|
||||||
|
|
||||||
|
self._portfolio = {
|
||||||
|
"cash": portfolio.get("cash", self._portfolio["cash"]),
|
||||||
|
"positions": portfolio.get("positions", {}).copy(),
|
||||||
|
"margin_used": portfolio.get("margin_used", 0.0),
|
||||||
|
"margin_requirement": portfolio.get(
|
||||||
|
"margin_requirement",
|
||||||
|
self._portfolio["margin_requirement"],
|
||||||
|
),
|
||||||
|
}
|
||||||
|
|
||||||
|
def update_portfolio(self, portfolio: Dict[str, Any]) -> None:
|
||||||
|
"""Update portfolio after external execution (PM compatibility).
|
||||||
|
|
||||||
|
Args:
|
||||||
|
portfolio: Portfolio updates to apply
|
||||||
|
"""
|
||||||
|
if not hasattr(self, "_portfolio"):
|
||||||
|
self._portfolio = {
|
||||||
|
"cash": 100000.0,
|
||||||
|
"positions": {},
|
||||||
|
"margin_used": 0.0,
|
||||||
|
"margin_requirement": 0.25,
|
||||||
|
}
|
||||||
|
self._portfolio.update(portfolio)
|
||||||
|
|
||||||
def rebuild_sys_prompt(self) -> None:
|
def rebuild_sys_prompt(self) -> None:
|
||||||
"""Rebuild and replace the system prompt at runtime.
|
"""Rebuild and replace the system prompt at runtime.
|
||||||
|
|
||||||
|
|||||||
@@ -13,7 +13,7 @@ import asyncio
|
|||||||
import json
|
import json
|
||||||
import logging
|
import logging
|
||||||
from dataclasses import dataclass, field
|
from dataclasses import dataclass, field
|
||||||
from datetime import datetime
|
from datetime import UTC, datetime
|
||||||
from enum import Enum
|
from enum import Enum
|
||||||
|
|
||||||
from typing import Any, Callable, Dict, Iterable, List, Optional, Set
|
from typing import Any, Callable, Dict, Iterable, List, Optional, Set
|
||||||
@@ -73,11 +73,13 @@ class ApprovalRecord:
|
|||||||
self.tool_name = tool_name
|
self.tool_name = tool_name
|
||||||
self.tool_input = tool_input
|
self.tool_input = tool_input
|
||||||
self.agent_id = agent_id
|
self.agent_id = agent_id
|
||||||
|
# run_id is the new preferred name; workspace_id is kept for backward compatibility
|
||||||
|
self.run_id = workspace_id
|
||||||
self.workspace_id = workspace_id
|
self.workspace_id = workspace_id
|
||||||
self.session_id = session_id
|
self.session_id = session_id
|
||||||
self.status = ApprovalStatus.PENDING
|
self.status = ApprovalStatus.PENDING
|
||||||
self.findings = findings or []
|
self.findings = findings or []
|
||||||
self.created_at = datetime.utcnow()
|
self.created_at = datetime.now(UTC)
|
||||||
self.resolved_at: Optional[datetime] = None
|
self.resolved_at: Optional[datetime] = None
|
||||||
self.resolved_by: Optional[str] = None
|
self.resolved_by: Optional[str] = None
|
||||||
self.metadata: Dict[str, Any] = {}
|
self.metadata: Dict[str, Any] = {}
|
||||||
@@ -90,6 +92,7 @@ class ApprovalRecord:
|
|||||||
"tool_name": self.tool_name,
|
"tool_name": self.tool_name,
|
||||||
"tool_input": self.tool_input,
|
"tool_input": self.tool_input,
|
||||||
"agent_id": self.agent_id,
|
"agent_id": self.agent_id,
|
||||||
|
"run_id": self.run_id,
|
||||||
"workspace_id": self.workspace_id,
|
"workspace_id": self.workspace_id,
|
||||||
"session_id": self.session_id,
|
"session_id": self.session_id,
|
||||||
"findings": [f.to_dict() for f in self.findings],
|
"findings": [f.to_dict() for f in self.findings],
|
||||||
@@ -161,7 +164,7 @@ class ToolGuardStore:
|
|||||||
return record
|
return record
|
||||||
|
|
||||||
record.status = status
|
record.status = status
|
||||||
record.resolved_at = datetime.utcnow()
|
record.resolved_at = datetime.now(UTC)
|
||||||
record.resolved_by = resolved_by
|
record.resolved_by = resolved_by
|
||||||
if notify_request and record.pending_request:
|
if notify_request and record.pending_request:
|
||||||
if status == ApprovalStatus.APPROVED:
|
if status == ApprovalStatus.APPROVED:
|
||||||
@@ -395,18 +398,34 @@ class ToolGuardMixin:
|
|||||||
)
|
)
|
||||||
|
|
||||||
manager = get_global_runtime_manager()
|
manager = get_global_runtime_manager()
|
||||||
|
approval_data = {
|
||||||
|
"tool_name": record.tool_name,
|
||||||
|
"agent_id": record.agent_id,
|
||||||
|
"workspace_id": record.workspace_id,
|
||||||
|
"session_id": record.session_id,
|
||||||
|
"tool_input": record.tool_input,
|
||||||
|
}
|
||||||
|
|
||||||
if manager:
|
if manager:
|
||||||
manager.register_pending_approval(
|
manager.register_pending_approval(
|
||||||
record.approval_id,
|
record.approval_id,
|
||||||
{
|
approval_data,
|
||||||
"tool_name": record.tool_name,
|
|
||||||
"agent_id": record.agent_id,
|
|
||||||
"workspace_id": record.workspace_id,
|
|
||||||
"session_id": record.session_id,
|
|
||||||
"tool_input": record.tool_input,
|
|
||||||
},
|
|
||||||
)
|
)
|
||||||
|
|
||||||
|
# Broadcast WebSocket event for real-time UI updates
|
||||||
|
try:
|
||||||
|
if hasattr(manager, 'broadcast_event'):
|
||||||
|
await manager.broadcast_event({
|
||||||
|
"type": "approval_requested",
|
||||||
|
"approval_id": record.approval_id,
|
||||||
|
"agent_id": record.agent_id,
|
||||||
|
"tool_name": record.tool_name,
|
||||||
|
"timestamp": record.created_at.isoformat(),
|
||||||
|
"data": approval_data,
|
||||||
|
})
|
||||||
|
except Exception as e:
|
||||||
|
logger.warning(f"Failed to broadcast approval event: {e}")
|
||||||
|
|
||||||
self._pending_approval = ToolApprovalRequest(
|
self._pending_approval = ToolApprovalRequest(
|
||||||
approval_id=record.approval_id,
|
approval_id=record.approval_id,
|
||||||
tool_name=tool_name,
|
tool_name=tool_name,
|
||||||
|
|||||||
@@ -1,146 +0,0 @@
|
|||||||
# -*- coding: utf-8 -*-
|
|
||||||
"""
|
|
||||||
Compatibility Layer - Adapters for legacy to EvoAgent migration.
|
|
||||||
|
|
||||||
Provides:
|
|
||||||
- LegacyAgentAdapter: Wraps old AnalystAgent to work with new interfaces
|
|
||||||
- Migration utilities for gradual adoption
|
|
||||||
"""
|
|
||||||
from typing import Any, Dict, Optional
|
|
||||||
|
|
||||||
from agentscope.message import Msg
|
|
||||||
|
|
||||||
from .agent_core import EvoAgent
|
|
||||||
|
|
||||||
|
|
||||||
class LegacyAgentAdapter:
|
|
||||||
"""
|
|
||||||
Adapter to make legacy AnalystAgent compatible with EvoAgent interfaces.
|
|
||||||
|
|
||||||
This allows gradual migration by wrapping existing agents.
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, legacy_agent: Any):
|
|
||||||
"""
|
|
||||||
Initialize adapter.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
legacy_agent: Legacy AnalystAgent instance
|
|
||||||
"""
|
|
||||||
self._agent = legacy_agent
|
|
||||||
self.agent_id = getattr(legacy_agent, 'agent_id', getattr(legacy_agent, 'name', 'unknown'))
|
|
||||||
self.analyst_type = getattr(legacy_agent, 'analyst_type_key', None)
|
|
||||||
|
|
||||||
@property
|
|
||||||
def name(self) -> str:
|
|
||||||
"""Get agent name."""
|
|
||||||
return getattr(self._agent, 'name', self.agent_id)
|
|
||||||
|
|
||||||
@property
|
|
||||||
def toolkit(self) -> Any:
|
|
||||||
"""Get agent toolkit."""
|
|
||||||
return getattr(self._agent, 'toolkit', None)
|
|
||||||
|
|
||||||
@property
|
|
||||||
def model(self) -> Any:
|
|
||||||
"""Get agent model."""
|
|
||||||
return getattr(self._agent, 'model', None)
|
|
||||||
|
|
||||||
@property
|
|
||||||
def memory(self) -> Any:
|
|
||||||
"""Get agent memory."""
|
|
||||||
return getattr(self._agent, 'memory', None)
|
|
||||||
|
|
||||||
async def reply(self, x: Msg = None) -> Msg:
|
|
||||||
"""
|
|
||||||
Delegate to legacy agent's reply method.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
x: Input message
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Response message
|
|
||||||
"""
|
|
||||||
return await self._agent.reply(x)
|
|
||||||
|
|
||||||
def reload_runtime_assets(self, active_skill_dirs: Optional[list] = None) -> None:
|
|
||||||
"""
|
|
||||||
Reload runtime assets if supported.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
active_skill_dirs: Optional list of active skill directories
|
|
||||||
"""
|
|
||||||
if hasattr(self._agent, 'reload_runtime_assets'):
|
|
||||||
self._agent.reload_runtime_assets(active_skill_dirs)
|
|
||||||
|
|
||||||
def to_evo_agent(
|
|
||||||
self,
|
|
||||||
workspace_manager: Optional[Any] = None,
|
|
||||||
enable_tool_guard: bool = False,
|
|
||||||
) -> EvoAgent:
|
|
||||||
"""
|
|
||||||
Convert legacy agent to EvoAgent.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
workspace_manager: Optional workspace manager
|
|
||||||
enable_tool_guard: Whether to enable tool guard
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
New EvoAgent instance with same configuration
|
|
||||||
"""
|
|
||||||
return EvoAgent(
|
|
||||||
agent_id=self.agent_id,
|
|
||||||
model=self.model,
|
|
||||||
formatter=getattr(self._agent, 'formatter', None),
|
|
||||||
toolkit=self.toolkit,
|
|
||||||
workspace_manager=workspace_manager,
|
|
||||||
config=getattr(self._agent, 'config', {}),
|
|
||||||
long_term_memory=getattr(self._agent, 'long_term_memory', None),
|
|
||||||
enable_tool_guard=enable_tool_guard,
|
|
||||||
sys_prompt=getattr(self._agent, '_sys_prompt', None),
|
|
||||||
)
|
|
||||||
|
|
||||||
def __getattr__(self, name: str) -> Any:
|
|
||||||
"""Delegate unknown attributes to wrapped agent."""
|
|
||||||
return getattr(self._agent, name)
|
|
||||||
|
|
||||||
|
|
||||||
def is_legacy_agent(agent: Any) -> bool:
|
|
||||||
"""
|
|
||||||
Check if an agent is a legacy agent.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
agent: Agent instance to check
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
True if legacy agent
|
|
||||||
"""
|
|
||||||
return hasattr(agent, 'analyst_type_key') and not isinstance(agent, EvoAgent)
|
|
||||||
|
|
||||||
|
|
||||||
def adapt_agent(agent: Any) -> Any:
|
|
||||||
"""
|
|
||||||
Wrap agent in adapter if it's a legacy agent.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
agent: Agent instance
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Adapted agent or original if already EvoAgent
|
|
||||||
"""
|
|
||||||
if is_legacy_agent(agent):
|
|
||||||
return LegacyAgentAdapter(agent)
|
|
||||||
return agent
|
|
||||||
|
|
||||||
|
|
||||||
def adapt_agents(agents: list) -> list:
|
|
||||||
"""
|
|
||||||
Wrap multiple agents in adapters.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
agents: List of agent instances
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
List of adapted agents
|
|
||||||
"""
|
|
||||||
return [adapt_agent(agent) for agent in agents]
|
|
||||||
@@ -2,8 +2,13 @@
|
|||||||
"""
|
"""
|
||||||
Portfolio Manager Agent - Based on AgentScope ReActAgent
|
Portfolio Manager Agent - Based on AgentScope ReActAgent
|
||||||
Responsible for decision-making (NOT trade execution)
|
Responsible for decision-making (NOT trade execution)
|
||||||
"""
|
|
||||||
|
|
||||||
|
.. deprecated:: 0.2.0
|
||||||
|
PMAgent is deprecated and will be removed in a future version.
|
||||||
|
Use :class:`backend.agents.base.evo_agent.EvoAgent` instead.
|
||||||
|
See docs/CRITICAL_FIXES.md for migration guide.
|
||||||
|
"""
|
||||||
|
import warnings
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
from typing import Any, Dict, Optional, Callable
|
from typing import Any, Dict, Optional, Callable
|
||||||
|
|
||||||
@@ -17,11 +22,31 @@ from .prompt_factory import build_agent_system_prompt, clear_prompt_factory_cach
|
|||||||
from .team_pipeline_config import update_active_analysts
|
from .team_pipeline_config import update_active_analysts
|
||||||
from ..config.constants import ANALYST_TYPES
|
from ..config.constants import ANALYST_TYPES
|
||||||
|
|
||||||
|
# Emit deprecation warning on module import
|
||||||
|
warnings.warn(
|
||||||
|
"PMAgent is deprecated. Use EvoAgent instead. "
|
||||||
|
"See docs/CRITICAL_FIXES.md for migration guide.",
|
||||||
|
DeprecationWarning,
|
||||||
|
stacklevel=2,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
class PMAgent(ReActAgent):
|
class PMAgent(ReActAgent):
|
||||||
"""
|
"""
|
||||||
Portfolio Manager Agent - Makes investment decisions
|
Portfolio Manager Agent - Makes investment decisions
|
||||||
|
|
||||||
|
Key features:
|
||||||
|
1. PM outputs decisions only (action + quantity per ticker)
|
||||||
|
2. Trade execution happens externally (in pipeline/executor)
|
||||||
|
3. Supports both backtest and live modes
|
||||||
|
|
||||||
|
.. deprecated:: 0.2.0
|
||||||
|
Use :class:`backend.agents.base.evo_agent.EvoAgent` with
|
||||||
|
workspace-driven configuration instead.
|
||||||
|
"""
|
||||||
|
"""
|
||||||
|
Portfolio Manager Agent - Makes investment decisions
|
||||||
|
|
||||||
Key features:
|
Key features:
|
||||||
1. PM outputs decisions only (action + quantity per ticker)
|
1. PM outputs decisions only (action + quantity per ticker)
|
||||||
2. Trade execution happens externally (in pipeline/executor)
|
2. Trade execution happens externally (in pipeline/executor)
|
||||||
@@ -41,6 +66,13 @@ class PMAgent(ReActAgent):
|
|||||||
toolkit_factory_kwargs: Optional[Dict[str, Any]] = None,
|
toolkit_factory_kwargs: Optional[Dict[str, Any]] = None,
|
||||||
toolkit: Optional[Toolkit] = None,
|
toolkit: Optional[Toolkit] = None,
|
||||||
):
|
):
|
||||||
|
# Emit runtime deprecation warning
|
||||||
|
warnings.warn(
|
||||||
|
"PMAgent is deprecated. Use EvoAgent via UnifiedAgentFactory instead.",
|
||||||
|
DeprecationWarning,
|
||||||
|
stacklevel=2,
|
||||||
|
)
|
||||||
|
|
||||||
object.__setattr__(self, "config", config or {})
|
object.__setattr__(self, "config", config or {})
|
||||||
|
|
||||||
# Portfolio state
|
# Portfolio state
|
||||||
|
|||||||
@@ -2,7 +2,13 @@
|
|||||||
"""
|
"""
|
||||||
Risk Manager Agent - Based on AgentScope ReActAgent
|
Risk Manager Agent - Based on AgentScope ReActAgent
|
||||||
Uses LLM for risk assessment
|
Uses LLM for risk assessment
|
||||||
|
|
||||||
|
.. deprecated:: 0.2.0
|
||||||
|
RiskAgent is deprecated and will be removed in a future version.
|
||||||
|
Use :class:`backend.agents.base.evo_agent.EvoAgent` instead.
|
||||||
|
See docs/CRITICAL_FIXES.md for migration guide.
|
||||||
"""
|
"""
|
||||||
|
import warnings
|
||||||
from typing import Any, Dict, Optional
|
from typing import Any, Dict, Optional
|
||||||
|
|
||||||
from agentscope.agent import ReActAgent
|
from agentscope.agent import ReActAgent
|
||||||
@@ -13,11 +19,23 @@ from agentscope.tool import Toolkit
|
|||||||
from ..utils.progress import progress
|
from ..utils.progress import progress
|
||||||
from .prompt_factory import build_agent_system_prompt, clear_prompt_factory_cache
|
from .prompt_factory import build_agent_system_prompt, clear_prompt_factory_cache
|
||||||
|
|
||||||
|
# Emit deprecation warning on module import
|
||||||
|
warnings.warn(
|
||||||
|
"RiskAgent is deprecated. Use EvoAgent instead. "
|
||||||
|
"See docs/CRITICAL_FIXES.md for migration guide.",
|
||||||
|
DeprecationWarning,
|
||||||
|
stacklevel=2,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
class RiskAgent(ReActAgent):
|
class RiskAgent(ReActAgent):
|
||||||
"""
|
"""
|
||||||
Risk Manager Agent - Uses LLM for risk assessment
|
Risk Manager Agent - Uses LLM for risk assessment
|
||||||
Inherits from AgentScope's ReActAgent
|
Inherits from AgentScope's ReActAgent
|
||||||
|
|
||||||
|
.. deprecated:: 0.2.0
|
||||||
|
Use :class:`backend.agents.base.evo_agent.EvoAgent` with
|
||||||
|
workspace-driven configuration instead.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def __init__(
|
def __init__(
|
||||||
@@ -32,6 +50,10 @@ class RiskAgent(ReActAgent):
|
|||||||
"""
|
"""
|
||||||
Initialize Risk Manager Agent
|
Initialize Risk Manager Agent
|
||||||
|
|
||||||
|
.. deprecated:: 0.2.0
|
||||||
|
Use :class:`backend.agents.unified_factory.UnifiedAgentFactory`
|
||||||
|
or :class:`backend.agents.base.evo_agent.EvoAgent` instead.
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
model: LLM model instance
|
model: LLM model instance
|
||||||
formatter: Message formatter instance
|
formatter: Message formatter instance
|
||||||
@@ -39,6 +61,13 @@ class RiskAgent(ReActAgent):
|
|||||||
config: Configuration dictionary
|
config: Configuration dictionary
|
||||||
long_term_memory: Optional ReMeTaskLongTermMemory instance
|
long_term_memory: Optional ReMeTaskLongTermMemory instance
|
||||||
"""
|
"""
|
||||||
|
# Emit runtime deprecation warning
|
||||||
|
warnings.warn(
|
||||||
|
"RiskAgent is deprecated. Use EvoAgent via UnifiedAgentFactory instead.",
|
||||||
|
DeprecationWarning,
|
||||||
|
stacklevel=2,
|
||||||
|
)
|
||||||
|
|
||||||
object.__setattr__(self, "config", config or {})
|
object.__setattr__(self, "config", config or {})
|
||||||
object.__setattr__(self, "agent_id", name)
|
object.__setattr__(self, "agent_id", name)
|
||||||
|
|
||||||
|
|||||||
433
backend/agents/unified_factory.py
Normal file
433
backend/agents/unified_factory.py
Normal file
@@ -0,0 +1,433 @@
|
|||||||
|
# -*- coding: utf-8 -*-
|
||||||
|
"""Unified Agent Factory - Centralized agent creation for 大时代.
|
||||||
|
|
||||||
|
This module provides a unified factory for creating all agent types (analysts,
|
||||||
|
risk manager, portfolio manager) with consistent configuration. It replaces
|
||||||
|
the scattered agent creation logic in main.py, pipeline.py, and pipeline_runner.py.
|
||||||
|
|
||||||
|
Key features:
|
||||||
|
- Single entry point for all agent creation
|
||||||
|
- Automatic EvoAgent vs Legacy Agent selection based on _resolve_evo_agent_ids()
|
||||||
|
- Consistent parameter handling across all agent types
|
||||||
|
- Support for workspace-driven configuration
|
||||||
|
- Long-term memory integration
|
||||||
|
"""
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import os
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import TYPE_CHECKING, Any, Optional, Protocol, TypeVar, Union
|
||||||
|
|
||||||
|
if TYPE_CHECKING:
|
||||||
|
from backend.agents.base.evo_agent import EvoAgent
|
||||||
|
from backend.agents.analyst import AnalystAgent
|
||||||
|
from backend.agents.risk_manager import RiskAgent
|
||||||
|
from backend.agents.portfolio_manager import PMAgent
|
||||||
|
|
||||||
|
# Type aliases for agent types
|
||||||
|
AgentType = Union["EvoAgent", "AnalystAgent", "RiskAgent", "PMAgent"]
|
||||||
|
T = TypeVar("T")
|
||||||
|
|
||||||
|
|
||||||
|
class AgentFactoryProtocol(Protocol):
|
||||||
|
"""Protocol for agent factory implementations."""
|
||||||
|
|
||||||
|
def create_analyst(
|
||||||
|
self,
|
||||||
|
analyst_type: str,
|
||||||
|
model: Any,
|
||||||
|
formatter: Any,
|
||||||
|
active_skill_dirs: Optional[list[Path]] = None,
|
||||||
|
long_term_memory: Optional[Any] = None,
|
||||||
|
) -> AnalystAgent | EvoAgent: ...
|
||||||
|
|
||||||
|
def create_risk_manager(
|
||||||
|
self,
|
||||||
|
model: Any,
|
||||||
|
formatter: Any,
|
||||||
|
active_skill_dirs: Optional[list[Path]] = None,
|
||||||
|
long_term_memory: Optional[Any] = None,
|
||||||
|
) -> RiskAgent | EvoAgent: ...
|
||||||
|
|
||||||
|
def create_portfolio_manager(
|
||||||
|
self,
|
||||||
|
model: Any,
|
||||||
|
formatter: Any,
|
||||||
|
initial_cash: float,
|
||||||
|
margin_requirement: float,
|
||||||
|
active_skill_dirs: Optional[list[Path]] = None,
|
||||||
|
long_term_memory: Optional[Any] = None,
|
||||||
|
) -> PMAgent | EvoAgent: ...
|
||||||
|
|
||||||
|
|
||||||
|
class UnifiedAgentFactory:
|
||||||
|
"""Unified factory for creating agents with consistent configuration.
|
||||||
|
|
||||||
|
This factory centralizes agent creation logic and automatically selects
|
||||||
|
between EvoAgent (new) and Legacy Agent based on the EVO_AGENT_IDS
|
||||||
|
environment variable configuration.
|
||||||
|
|
||||||
|
By default, all supported roles use EvoAgent. Set EVO_AGENT_IDS=legacy
|
||||||
|
to disable EvoAgent entirely.
|
||||||
|
|
||||||
|
Example:
|
||||||
|
factory = UnifiedAgentFactory(
|
||||||
|
config_name="smoke_fullstack",
|
||||||
|
skills_manager=skills_manager,
|
||||||
|
)
|
||||||
|
|
||||||
|
# Create analyst
|
||||||
|
analyst = factory.create_analyst(
|
||||||
|
analyst_type="fundamentals_analyst",
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
)
|
||||||
|
|
||||||
|
# Create risk manager
|
||||||
|
risk_mgr = factory.create_risk_manager(
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
)
|
||||||
|
|
||||||
|
# Create portfolio manager
|
||||||
|
pm = factory.create_portfolio_manager(
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
initial_cash=100000.0,
|
||||||
|
margin_requirement=0.5,
|
||||||
|
)
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(
|
||||||
|
self,
|
||||||
|
config_name: str,
|
||||||
|
skills_manager: Any,
|
||||||
|
toolkit_factory: Optional[Any] = None,
|
||||||
|
evo_agent_ids: Optional[set[str]] = None,
|
||||||
|
):
|
||||||
|
"""Initialize the agent factory.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
config_name: Run configuration name (e.g., "smoke_fullstack")
|
||||||
|
skills_manager: SkillsManager instance for skill/asset management
|
||||||
|
toolkit_factory: Optional factory function for creating toolkits
|
||||||
|
evo_agent_ids: Optional set of agent IDs to use EvoAgent.
|
||||||
|
If None, uses _resolve_evo_agent_ids() default.
|
||||||
|
"""
|
||||||
|
self.config_name = config_name
|
||||||
|
self.skills_manager = skills_manager
|
||||||
|
self.toolkit_factory = toolkit_factory
|
||||||
|
|
||||||
|
# Determine which agents should use EvoAgent
|
||||||
|
if evo_agent_ids is not None:
|
||||||
|
self._evo_agent_ids = evo_agent_ids
|
||||||
|
else:
|
||||||
|
self._evo_agent_ids = self._resolve_evo_agent_ids()
|
||||||
|
|
||||||
|
def _resolve_evo_agent_ids(self) -> set[str]:
|
||||||
|
"""Return agent ids selected to use EvoAgent.
|
||||||
|
|
||||||
|
By default, all supported roles use EvoAgent.
|
||||||
|
EVO_AGENT_IDS can be used to limit to specific roles.
|
||||||
|
"""
|
||||||
|
from backend.config.constants import ANALYST_TYPES
|
||||||
|
|
||||||
|
all_supported = set(ANALYST_TYPES) | {"risk_manager", "portfolio_manager"}
|
||||||
|
|
||||||
|
raw = os.getenv("EVO_AGENT_IDS", "")
|
||||||
|
if not raw.strip():
|
||||||
|
# Default: all supported roles use EvoAgent
|
||||||
|
return all_supported
|
||||||
|
|
||||||
|
if raw.strip().lower() in ("legacy", "old", "none"):
|
||||||
|
return set()
|
||||||
|
|
||||||
|
requested = {item.strip() for item in raw.split(",") if item.strip()}
|
||||||
|
return {
|
||||||
|
agent_id
|
||||||
|
for agent_id in requested
|
||||||
|
if agent_id in ANALYST_TYPES
|
||||||
|
or agent_id in {"risk_manager", "portfolio_manager"}
|
||||||
|
}
|
||||||
|
|
||||||
|
def _should_use_evo_agent(self, agent_id: str) -> bool:
|
||||||
|
"""Check if an agent should use EvoAgent."""
|
||||||
|
return agent_id in self._evo_agent_ids
|
||||||
|
|
||||||
|
def _create_toolkit(
|
||||||
|
self,
|
||||||
|
agent_type: str,
|
||||||
|
active_skill_dirs: Optional[list[Path]] = None,
|
||||||
|
owner: Optional[Any] = None,
|
||||||
|
) -> Any:
|
||||||
|
"""Create toolkit for an agent."""
|
||||||
|
if self.toolkit_factory is None:
|
||||||
|
from backend.agents.toolkit_factory import create_agent_toolkit
|
||||||
|
|
||||||
|
self.toolkit_factory = create_agent_toolkit
|
||||||
|
|
||||||
|
kwargs: dict[str, Any] = {
|
||||||
|
"active_skill_dirs": active_skill_dirs or [],
|
||||||
|
}
|
||||||
|
if owner is not None:
|
||||||
|
kwargs["owner"] = owner
|
||||||
|
|
||||||
|
return self.toolkit_factory(agent_type, self.config_name, **kwargs)
|
||||||
|
|
||||||
|
def _load_agent_config(self, agent_id: str) -> Any:
|
||||||
|
"""Load agent configuration from workspace."""
|
||||||
|
from backend.agents.agent_workspace import load_agent_workspace_config
|
||||||
|
|
||||||
|
workspace_dir = self.skills_manager.get_agent_asset_dir(
|
||||||
|
self.config_name, agent_id
|
||||||
|
)
|
||||||
|
config_path = workspace_dir / "agent.yaml"
|
||||||
|
|
||||||
|
if config_path.exists():
|
||||||
|
return load_agent_workspace_config(config_path)
|
||||||
|
|
||||||
|
# Return default config if no agent.yaml
|
||||||
|
return type(
|
||||||
|
"AgentConfig",
|
||||||
|
(),
|
||||||
|
{"prompt_files": ["SOUL.md"]},
|
||||||
|
)()
|
||||||
|
|
||||||
|
def _create_evo_agent(
|
||||||
|
self,
|
||||||
|
agent_id: str,
|
||||||
|
model: Any,
|
||||||
|
formatter: Any,
|
||||||
|
toolkit: Any,
|
||||||
|
agent_config: Any,
|
||||||
|
long_term_memory: Optional[Any] = None,
|
||||||
|
extra_kwargs: Optional[dict[str, Any]] = None,
|
||||||
|
) -> "EvoAgent":
|
||||||
|
"""Create an EvoAgent instance."""
|
||||||
|
from backend.agents.base.evo_agent import EvoAgent
|
||||||
|
|
||||||
|
workspace_dir = self.skills_manager.get_agent_asset_dir(
|
||||||
|
self.config_name, agent_id
|
||||||
|
)
|
||||||
|
|
||||||
|
kwargs: dict[str, Any] = {
|
||||||
|
"agent_id": agent_id,
|
||||||
|
"config_name": self.config_name,
|
||||||
|
"workspace_dir": workspace_dir,
|
||||||
|
"model": model,
|
||||||
|
"formatter": formatter,
|
||||||
|
"skills_manager": self.skills_manager,
|
||||||
|
"prompt_files": getattr(agent_config, "prompt_files", ["SOUL.md"]),
|
||||||
|
"long_term_memory": long_term_memory,
|
||||||
|
}
|
||||||
|
|
||||||
|
if extra_kwargs:
|
||||||
|
kwargs.update(extra_kwargs)
|
||||||
|
|
||||||
|
agent = EvoAgent(**kwargs)
|
||||||
|
agent.toolkit = toolkit
|
||||||
|
setattr(agent, "run_id", self.config_name)
|
||||||
|
# Keep workspace_id for backward compatibility
|
||||||
|
setattr(agent, "workspace_id", self.config_name)
|
||||||
|
|
||||||
|
return agent
|
||||||
|
|
||||||
|
def create_analyst(
|
||||||
|
self,
|
||||||
|
analyst_type: str,
|
||||||
|
model: Any,
|
||||||
|
formatter: Any,
|
||||||
|
active_skill_dirs: Optional[list[Path]] = None,
|
||||||
|
long_term_memory: Optional[Any] = None,
|
||||||
|
) -> "AnalystAgent | EvoAgent":
|
||||||
|
"""Create an analyst agent.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
analyst_type: Type of analyst (fundamentals, technical, sentiment, valuation)
|
||||||
|
model: LLM model instance
|
||||||
|
formatter: Message formatter instance
|
||||||
|
active_skill_dirs: Optional list of active skill directories
|
||||||
|
long_term_memory: Optional long-term memory instance
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
AnalystAgent or EvoAgent instance
|
||||||
|
"""
|
||||||
|
toolkit = self._create_toolkit(analyst_type, active_skill_dirs)
|
||||||
|
|
||||||
|
if self._should_use_evo_agent(analyst_type):
|
||||||
|
agent_config = self._load_agent_config(analyst_type)
|
||||||
|
return self._create_evo_agent(
|
||||||
|
agent_id=analyst_type,
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
toolkit=toolkit,
|
||||||
|
agent_config=agent_config,
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
)
|
||||||
|
|
||||||
|
# Legacy path
|
||||||
|
from backend.agents.analyst import AnalystAgent
|
||||||
|
|
||||||
|
return AnalystAgent(
|
||||||
|
analyst_type=analyst_type,
|
||||||
|
toolkit=toolkit,
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
agent_id=analyst_type,
|
||||||
|
config={"config_name": self.config_name},
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
)
|
||||||
|
|
||||||
|
def create_risk_manager(
|
||||||
|
self,
|
||||||
|
model: Any,
|
||||||
|
formatter: Any,
|
||||||
|
active_skill_dirs: Optional[list[Path]] = None,
|
||||||
|
long_term_memory: Optional[Any] = None,
|
||||||
|
) -> "RiskAgent | EvoAgent":
|
||||||
|
"""Create a risk manager agent.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
model: LLM model instance
|
||||||
|
formatter: Message formatter instance
|
||||||
|
active_skill_dirs: Optional list of active skill directories
|
||||||
|
long_term_memory: Optional long-term memory instance
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
RiskAgent or EvoAgent instance
|
||||||
|
"""
|
||||||
|
toolkit = self._create_toolkit("risk_manager", active_skill_dirs)
|
||||||
|
|
||||||
|
if self._should_use_evo_agent("risk_manager"):
|
||||||
|
agent_config = self._load_agent_config("risk_manager")
|
||||||
|
return self._create_evo_agent(
|
||||||
|
agent_id="risk_manager",
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
toolkit=toolkit,
|
||||||
|
agent_config=agent_config,
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
)
|
||||||
|
|
||||||
|
# Legacy path
|
||||||
|
from backend.agents.risk_manager import RiskAgent
|
||||||
|
|
||||||
|
return RiskAgent(
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
name="risk_manager",
|
||||||
|
config={"config_name": self.config_name},
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
toolkit=toolkit,
|
||||||
|
)
|
||||||
|
|
||||||
|
def create_portfolio_manager(
|
||||||
|
self,
|
||||||
|
model: Any,
|
||||||
|
formatter: Any,
|
||||||
|
initial_cash: float,
|
||||||
|
margin_requirement: float,
|
||||||
|
active_skill_dirs: Optional[list[Path]] = None,
|
||||||
|
long_term_memory: Optional[Any] = None,
|
||||||
|
) -> "PMAgent | EvoAgent":
|
||||||
|
"""Create a portfolio manager agent.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
model: LLM model instance
|
||||||
|
formatter: Message formatter instance
|
||||||
|
initial_cash: Initial cash allocation
|
||||||
|
margin_requirement: Margin requirement ratio
|
||||||
|
active_skill_dirs: Optional list of active skill directories
|
||||||
|
long_term_memory: Optional long-term memory instance
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
PMAgent or EvoAgent instance
|
||||||
|
"""
|
||||||
|
if self._should_use_evo_agent("portfolio_manager"):
|
||||||
|
agent_config = self._load_agent_config("portfolio_manager")
|
||||||
|
|
||||||
|
# For PM, toolkit is created after agent (needs owner reference)
|
||||||
|
from backend.agents.base.evo_agent import EvoAgent
|
||||||
|
|
||||||
|
workspace_dir = self.skills_manager.get_agent_asset_dir(
|
||||||
|
self.config_name, "portfolio_manager"
|
||||||
|
)
|
||||||
|
|
||||||
|
agent = EvoAgent(
|
||||||
|
agent_id="portfolio_manager",
|
||||||
|
config_name=self.config_name,
|
||||||
|
workspace_dir=workspace_dir,
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
skills_manager=self.skills_manager,
|
||||||
|
prompt_files=getattr(agent_config, "prompt_files", ["SOUL.md"]),
|
||||||
|
initial_cash=initial_cash,
|
||||||
|
margin_requirement=margin_requirement,
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
)
|
||||||
|
agent.toolkit = self._create_toolkit(
|
||||||
|
"portfolio_manager", active_skill_dirs, owner=agent
|
||||||
|
)
|
||||||
|
setattr(agent, "run_id", self.config_name)
|
||||||
|
# Keep workspace_id for backward compatibility
|
||||||
|
setattr(agent, "workspace_id", self.config_name)
|
||||||
|
return agent
|
||||||
|
|
||||||
|
# Legacy path
|
||||||
|
from backend.agents.portfolio_manager import PMAgent
|
||||||
|
|
||||||
|
return PMAgent(
|
||||||
|
name="portfolio_manager",
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
initial_cash=initial_cash,
|
||||||
|
margin_requirement=margin_requirement,
|
||||||
|
config={"config_name": self.config_name},
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
toolkit_factory=self.toolkit_factory,
|
||||||
|
toolkit_factory_kwargs={"active_skill_dirs": active_skill_dirs or []},
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
# Singleton factory instance cache
|
||||||
|
_factory_cache: dict[str, UnifiedAgentFactory] = {}
|
||||||
|
|
||||||
|
|
||||||
|
def get_agent_factory(
|
||||||
|
config_name: str,
|
||||||
|
skills_manager: Any,
|
||||||
|
toolkit_factory: Optional[Any] = None,
|
||||||
|
) -> UnifiedAgentFactory:
|
||||||
|
"""Get or create a cached agent factory instance.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
config_name: Run configuration name
|
||||||
|
skills_manager: SkillsManager instance
|
||||||
|
toolkit_factory: Optional toolkit factory function
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
UnifiedAgentFactory instance (cached per config_name)
|
||||||
|
"""
|
||||||
|
cache_key = f"{config_name}:{id(skills_manager)}"
|
||||||
|
|
||||||
|
if cache_key not in _factory_cache:
|
||||||
|
_factory_cache[cache_key] = UnifiedAgentFactory(
|
||||||
|
config_name=config_name,
|
||||||
|
skills_manager=skills_manager,
|
||||||
|
toolkit_factory=toolkit_factory,
|
||||||
|
)
|
||||||
|
|
||||||
|
return _factory_cache[cache_key]
|
||||||
|
|
||||||
|
|
||||||
|
def clear_factory_cache() -> None:
|
||||||
|
"""Clear the factory cache. Useful for testing."""
|
||||||
|
_factory_cache.clear()
|
||||||
|
|
||||||
|
|
||||||
|
__all__ = [
|
||||||
|
"UnifiedAgentFactory",
|
||||||
|
"AgentFactoryProtocol",
|
||||||
|
"get_agent_factory",
|
||||||
|
"clear_factory_cache",
|
||||||
|
]
|
||||||
@@ -1,5 +1,5 @@
|
|||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
"""Workspace Manager - Create and manage agent workspaces."""
|
"""Design-time workspace registry stored under `workspaces/`."""
|
||||||
|
|
||||||
import logging
|
import logging
|
||||||
from dataclasses import dataclass, field
|
from dataclasses import dataclass, field
|
||||||
@@ -323,5 +323,6 @@ class WorkspaceRegistry:
|
|||||||
yaml.safe_dump(config.to_dict(), f, allow_unicode=True, sort_keys=False)
|
yaml.safe_dump(config.to_dict(), f, allow_unicode=True, sort_keys=False)
|
||||||
|
|
||||||
|
|
||||||
# Backward-compatible alias: legacy imports expect WorkspaceManager.
|
# Backward-compatible alias: legacy imports expect WorkspaceManager to mean the
|
||||||
|
# design-time `workspaces/` registry.
|
||||||
WorkspaceManager = WorkspaceRegistry
|
WorkspaceManager = WorkspaceRegistry
|
||||||
|
|||||||
@@ -1,5 +1,5 @@
|
|||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
"""Initialize run-scoped agent workspace assets."""
|
"""Initialize run-scoped agent workspace assets under `runs/<run_id>/`."""
|
||||||
|
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
from typing import Dict, Iterable, Optional
|
from typing import Dict, Iterable, Optional
|
||||||
@@ -479,5 +479,6 @@ class RunWorkspaceManager:
|
|||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
# Backward-compatible alias: code importing WorkspaceManager from this module should continue to work.
|
# Backward-compatible alias: many runtime paths still import WorkspaceManager
|
||||||
|
# from this module when they mean the run-scoped manager.
|
||||||
WorkspaceManager = RunWorkspaceManager
|
WorkspaceManager = RunWorkspaceManager
|
||||||
|
|||||||
@@ -2,7 +2,10 @@
|
|||||||
"""
|
"""
|
||||||
Agent API Routes
|
Agent API Routes
|
||||||
|
|
||||||
Provides REST API endpoints for agent management within workspaces.
|
Provides REST API endpoints for both:
|
||||||
|
|
||||||
|
- design-time agent management under `workspaces/`
|
||||||
|
- run-scoped agent asset access under `runs/<run_id>/`
|
||||||
"""
|
"""
|
||||||
import logging
|
import logging
|
||||||
import os
|
import os
|
||||||
@@ -24,6 +27,30 @@ from backend.llm.models import get_agent_model_info
|
|||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
router = APIRouter(prefix="/api/workspaces/{workspace_id}/agents", tags=["agents"])
|
router = APIRouter(prefix="/api/workspaces/{workspace_id}/agents", tags=["agents"])
|
||||||
|
DESIGN_SCOPE = "design_workspace"
|
||||||
|
RUNTIME_SCOPE = "runtime_run"
|
||||||
|
RUNTIME_SCOPE_NOTE = (
|
||||||
|
"For profile, skills, and editable agent files, `workspace_id` is treated "
|
||||||
|
"as the active run id under `runs/<run_id>/`, not as the design-time "
|
||||||
|
"`workspaces/` registry."
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def _runtime_scope_fields() -> dict[str, str]:
|
||||||
|
return {
|
||||||
|
"scope_type": RUNTIME_SCOPE,
|
||||||
|
"scope_note": RUNTIME_SCOPE_NOTE,
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
def _design_scope_fields() -> dict[str, str]:
|
||||||
|
return {
|
||||||
|
"scope_type": DESIGN_SCOPE,
|
||||||
|
"scope_note": (
|
||||||
|
"For design-time CRUD routes on this surface, `workspace_id` refers "
|
||||||
|
"to the persistent registry under `workspaces/`."
|
||||||
|
),
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
# Request/Response Models
|
# Request/Response Models
|
||||||
@@ -68,30 +95,40 @@ class AgentResponse(BaseModel):
|
|||||||
config_path: str
|
config_path: str
|
||||||
agent_dir: str
|
agent_dir: str
|
||||||
status: str = "inactive"
|
status: str = "inactive"
|
||||||
|
scope_type: str = DESIGN_SCOPE
|
||||||
|
scope_note: Optional[str] = None
|
||||||
|
|
||||||
|
|
||||||
class AgentFileResponse(BaseModel):
|
class AgentFileResponse(BaseModel):
|
||||||
"""Agent file content response."""
|
"""Agent file content response."""
|
||||||
filename: str
|
filename: str
|
||||||
content: str
|
content: str
|
||||||
|
scope_type: str = RUNTIME_SCOPE
|
||||||
|
scope_note: Optional[str] = None
|
||||||
|
|
||||||
|
|
||||||
class AgentProfileResponse(BaseModel):
|
class AgentProfileResponse(BaseModel):
|
||||||
agent_id: str
|
agent_id: str
|
||||||
workspace_id: str
|
workspace_id: str
|
||||||
profile: Dict[str, Any]
|
profile: Dict[str, Any]
|
||||||
|
scope_type: str = RUNTIME_SCOPE
|
||||||
|
scope_note: Optional[str] = None
|
||||||
|
|
||||||
|
|
||||||
class AgentSkillsResponse(BaseModel):
|
class AgentSkillsResponse(BaseModel):
|
||||||
agent_id: str
|
agent_id: str
|
||||||
workspace_id: str
|
workspace_id: str
|
||||||
skills: List[Dict[str, Any]]
|
skills: List[Dict[str, Any]]
|
||||||
|
scope_type: str = RUNTIME_SCOPE
|
||||||
|
scope_note: Optional[str] = None
|
||||||
|
|
||||||
|
|
||||||
class SkillDetailResponse(BaseModel):
|
class SkillDetailResponse(BaseModel):
|
||||||
agent_id: str
|
agent_id: str
|
||||||
workspace_id: str
|
workspace_id: str
|
||||||
skill: Dict[str, Any]
|
skill: Dict[str, Any]
|
||||||
|
scope_type: str = RUNTIME_SCOPE
|
||||||
|
scope_note: Optional[str] = None
|
||||||
|
|
||||||
|
|
||||||
# Dependencies
|
# Dependencies
|
||||||
@@ -101,7 +138,7 @@ def get_agent_factory():
|
|||||||
|
|
||||||
|
|
||||||
def get_workspace_manager():
|
def get_workspace_manager():
|
||||||
"""Get run-scoped workspace manager instance."""
|
"""Get run-scoped asset manager for one runtime workspace/run id."""
|
||||||
return RunWorkspaceManager()
|
return RunWorkspaceManager()
|
||||||
|
|
||||||
|
|
||||||
@@ -119,7 +156,7 @@ async def create_agent(
|
|||||||
registry = Depends(get_registry),
|
registry = Depends(get_registry),
|
||||||
):
|
):
|
||||||
"""
|
"""
|
||||||
Create a new agent in a workspace.
|
Create a new agent in a design-time workspace registry entry.
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
workspace_id: Workspace identifier
|
workspace_id: Workspace identifier
|
||||||
@@ -162,6 +199,7 @@ async def create_agent(
|
|||||||
config_path=str(agent.config_path),
|
config_path=str(agent.config_path),
|
||||||
agent_dir=str(agent.agent_dir),
|
agent_dir=str(agent.agent_dir),
|
||||||
status="inactive",
|
status="inactive",
|
||||||
|
**_design_scope_fields(),
|
||||||
)
|
)
|
||||||
|
|
||||||
except ValueError as e:
|
except ValueError as e:
|
||||||
@@ -174,7 +212,7 @@ async def list_agents(
|
|||||||
factory: AgentFactory = Depends(get_agent_factory),
|
factory: AgentFactory = Depends(get_agent_factory),
|
||||||
):
|
):
|
||||||
"""
|
"""
|
||||||
List all agents in a workspace.
|
List all agents in a design-time workspace registry entry.
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
workspace_id: Workspace identifier
|
workspace_id: Workspace identifier
|
||||||
@@ -192,6 +230,7 @@ async def list_agents(
|
|||||||
config_path=agent["config_path"],
|
config_path=agent["config_path"],
|
||||||
agent_dir=str(Path(agent["config_path"]).parent),
|
agent_dir=str(Path(agent["config_path"]).parent),
|
||||||
status="inactive",
|
status="inactive",
|
||||||
|
**_design_scope_fields(),
|
||||||
)
|
)
|
||||||
for agent in agents_data
|
for agent in agents_data
|
||||||
]
|
]
|
||||||
@@ -206,7 +245,7 @@ async def get_agent(
|
|||||||
registry = Depends(get_registry),
|
registry = Depends(get_registry),
|
||||||
):
|
):
|
||||||
"""
|
"""
|
||||||
Get agent details.
|
Get design-time agent details from the persistent workspace registry.
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
workspace_id: Workspace identifier
|
workspace_id: Workspace identifier
|
||||||
@@ -227,6 +266,7 @@ async def get_agent(
|
|||||||
config_path=agent_info.config_path,
|
config_path=agent_info.config_path,
|
||||||
agent_dir=agent_info.agent_dir,
|
agent_dir=agent_info.agent_dir,
|
||||||
status=agent_info.status,
|
status=agent_info.status,
|
||||||
|
**_design_scope_fields(),
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
@@ -275,6 +315,7 @@ async def get_agent_profile(
|
|||||||
"enabled_skills": agent_config.enabled_skills,
|
"enabled_skills": agent_config.enabled_skills,
|
||||||
"disabled_skills": agent_config.disabled_skills,
|
"disabled_skills": agent_config.disabled_skills,
|
||||||
},
|
},
|
||||||
|
**_runtime_scope_fields(),
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
@@ -310,7 +351,12 @@ async def get_agent_skills(
|
|||||||
"status": status,
|
"status": status,
|
||||||
})
|
})
|
||||||
|
|
||||||
return AgentSkillsResponse(agent_id=agent_id, workspace_id=workspace_id, skills=payload)
|
return AgentSkillsResponse(
|
||||||
|
agent_id=agent_id,
|
||||||
|
workspace_id=workspace_id,
|
||||||
|
skills=payload,
|
||||||
|
**_runtime_scope_fields(),
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
@router.get("/{agent_id}/skills/{skill_name}", response_model=SkillDetailResponse)
|
@router.get("/{agent_id}/skills/{skill_name}", response_model=SkillDetailResponse)
|
||||||
@@ -329,7 +375,12 @@ async def get_agent_skill_detail(
|
|||||||
except FileNotFoundError:
|
except FileNotFoundError:
|
||||||
raise HTTPException(status_code=404, detail=f"Unknown skill: {skill_name}")
|
raise HTTPException(status_code=404, detail=f"Unknown skill: {skill_name}")
|
||||||
|
|
||||||
return SkillDetailResponse(agent_id=agent_id, workspace_id=workspace_id, skill=detail)
|
return SkillDetailResponse(
|
||||||
|
agent_id=agent_id,
|
||||||
|
workspace_id=workspace_id,
|
||||||
|
skill=detail,
|
||||||
|
**_runtime_scope_fields(),
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
@router.delete("/{agent_id}")
|
@router.delete("/{agent_id}")
|
||||||
@@ -416,6 +467,7 @@ async def update_agent(
|
|||||||
config_path=agent_info.config_path,
|
config_path=agent_info.config_path,
|
||||||
agent_dir=agent_info.agent_dir,
|
agent_dir=agent_info.agent_dir,
|
||||||
status=agent_info.status,
|
status=agent_info.status,
|
||||||
|
**_design_scope_fields(),
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
@@ -656,7 +708,7 @@ async def get_agent_file(
|
|||||||
workspace_manager: RunWorkspaceManager = Depends(get_workspace_manager),
|
workspace_manager: RunWorkspaceManager = Depends(get_workspace_manager),
|
||||||
):
|
):
|
||||||
"""
|
"""
|
||||||
Read an agent's workspace file.
|
Read an agent file from the run-scoped asset tree under `runs/<run_id>/`.
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
workspace_id: Workspace identifier
|
workspace_id: Workspace identifier
|
||||||
@@ -672,7 +724,11 @@ async def get_agent_file(
|
|||||||
agent_id=agent_id,
|
agent_id=agent_id,
|
||||||
filename=filename,
|
filename=filename,
|
||||||
)
|
)
|
||||||
return AgentFileResponse(filename=filename, content=content)
|
return AgentFileResponse(
|
||||||
|
filename=filename,
|
||||||
|
content=content,
|
||||||
|
**_runtime_scope_fields(),
|
||||||
|
)
|
||||||
except FileNotFoundError:
|
except FileNotFoundError:
|
||||||
raise HTTPException(status_code=404, detail=f"File '{filename}' not found")
|
raise HTTPException(status_code=404, detail=f"File '{filename}' not found")
|
||||||
|
|
||||||
@@ -686,7 +742,7 @@ async def update_agent_file(
|
|||||||
workspace_manager: RunWorkspaceManager = Depends(get_workspace_manager),
|
workspace_manager: RunWorkspaceManager = Depends(get_workspace_manager),
|
||||||
):
|
):
|
||||||
"""
|
"""
|
||||||
Update an agent's workspace file.
|
Update an agent file in the run-scoped asset tree under `runs/<run_id>/`.
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
workspace_id: Workspace identifier
|
workspace_id: Workspace identifier
|
||||||
@@ -704,6 +760,10 @@ async def update_agent_file(
|
|||||||
filename=filename,
|
filename=filename,
|
||||||
content=content,
|
content=content,
|
||||||
)
|
)
|
||||||
return AgentFileResponse(filename=filename, content=content)
|
return AgentFileResponse(
|
||||||
|
filename=filename,
|
||||||
|
content=content,
|
||||||
|
**_runtime_scope_fields(),
|
||||||
|
)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
raise HTTPException(status_code=500, detail=str(e))
|
raise HTTPException(status_code=500, detail=str(e))
|
||||||
|
|||||||
@@ -7,7 +7,7 @@ Provides REST API endpoints for tool guard operations.
|
|||||||
from __future__ import annotations
|
from __future__ import annotations
|
||||||
|
|
||||||
from typing import Any, Dict, List, Optional
|
from typing import Any, Dict, List, Optional
|
||||||
from datetime import datetime
|
from datetime import UTC, datetime
|
||||||
|
|
||||||
from fastapi import APIRouter, HTTPException
|
from fastapi import APIRouter, HTTPException
|
||||||
from pydantic import BaseModel, Field
|
from pydantic import BaseModel, Field
|
||||||
@@ -29,7 +29,7 @@ class ToolCallRequest(BaseModel):
|
|||||||
tool_name: str = Field(..., description="Name of the tool")
|
tool_name: str = Field(..., description="Name of the tool")
|
||||||
tool_input: Dict[str, Any] = Field(default_factory=dict, description="Tool parameters")
|
tool_input: Dict[str, Any] = Field(default_factory=dict, description="Tool parameters")
|
||||||
agent_id: str = Field(..., description="Agent making the request")
|
agent_id: str = Field(..., description="Agent making the request")
|
||||||
workspace_id: str = Field(..., description="Workspace context")
|
workspace_id: str = Field(..., description="Run context; historical field name retained for compatibility")
|
||||||
session_id: Optional[str] = Field(None, description="Session identifier")
|
session_id: Optional[str] = Field(None, description="Session identifier")
|
||||||
|
|
||||||
|
|
||||||
@@ -46,6 +46,21 @@ class DenyRequest(BaseModel):
|
|||||||
reason: Optional[str] = Field(None, description="Reason for denial")
|
reason: Optional[str] = Field(None, description="Reason for denial")
|
||||||
|
|
||||||
|
|
||||||
|
class BatchApprovalRequest(BaseModel):
|
||||||
|
"""Request to approve multiple tool calls."""
|
||||||
|
approval_ids: List[str] = Field(..., description="List of approval request IDs")
|
||||||
|
one_time: bool = Field(True, description="Whether these are one-time approvals")
|
||||||
|
|
||||||
|
|
||||||
|
class BatchApprovalResponse(BaseModel):
|
||||||
|
"""Response for batch approval operation."""
|
||||||
|
approved: List[ApprovalResponse] = Field(default_factory=list, description="Successfully approved")
|
||||||
|
failed: List[Dict[str, Any]] = Field(default_factory=list, description="Failed approvals with errors")
|
||||||
|
total_requested: int
|
||||||
|
total_approved: int
|
||||||
|
total_failed: int
|
||||||
|
|
||||||
|
|
||||||
class ToolFinding(BaseModel):
|
class ToolFinding(BaseModel):
|
||||||
"""Tool guard finding."""
|
"""Tool guard finding."""
|
||||||
severity: SeverityLevel
|
severity: SeverityLevel
|
||||||
@@ -61,11 +76,17 @@ class ApprovalResponse(BaseModel):
|
|||||||
tool_input: Dict[str, Any]
|
tool_input: Dict[str, Any]
|
||||||
agent_id: str
|
agent_id: str
|
||||||
workspace_id: str
|
workspace_id: str
|
||||||
|
run_id: str
|
||||||
session_id: Optional[str] = None
|
session_id: Optional[str] = None
|
||||||
findings: List[ToolFinding] = Field(default_factory=list)
|
findings: List[ToolFinding] = Field(default_factory=list)
|
||||||
created_at: str
|
created_at: str
|
||||||
resolved_at: Optional[str] = None
|
resolved_at: Optional[str] = None
|
||||||
resolved_by: Optional[str] = None
|
resolved_by: Optional[str] = None
|
||||||
|
scope_type: str = "runtime_run"
|
||||||
|
scope_note: str = (
|
||||||
|
"Approvals are scoped to the active runtime run. `workspace_id` is "
|
||||||
|
"retained as a compatibility field name; prefer `run_id` for display."
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
class PendingApprovalsResponse(BaseModel):
|
class PendingApprovalsResponse(BaseModel):
|
||||||
@@ -91,6 +112,7 @@ def _to_response(record: ApprovalRecord) -> ApprovalResponse:
|
|||||||
tool_input=record.tool_input,
|
tool_input=record.tool_input,
|
||||||
agent_id=record.agent_id,
|
agent_id=record.agent_id,
|
||||||
workspace_id=record.workspace_id,
|
workspace_id=record.workspace_id,
|
||||||
|
run_id=record.workspace_id,
|
||||||
session_id=record.session_id,
|
session_id=record.session_id,
|
||||||
findings=[ToolFinding(**f.to_dict()) for f in record.findings],
|
findings=[ToolFinding(**f.to_dict()) for f in record.findings],
|
||||||
created_at=record.created_at.isoformat(),
|
created_at=record.created_at.isoformat(),
|
||||||
@@ -124,7 +146,7 @@ async def check_tool_call(
|
|||||||
|
|
||||||
if request.tool_name in SAFE_TOOLS:
|
if request.tool_name in SAFE_TOOLS:
|
||||||
record.status = ApprovalStatus.APPROVED
|
record.status = ApprovalStatus.APPROVED
|
||||||
record.resolved_at = datetime.utcnow()
|
record.resolved_at = datetime.now(UTC)
|
||||||
record.resolved_by = "system"
|
record.resolved_by = "system"
|
||||||
STORE.set_status(
|
STORE.set_status(
|
||||||
record.approval_id,
|
record.approval_id,
|
||||||
@@ -156,9 +178,12 @@ async def approve_tool_call(
|
|||||||
if record.status != ApprovalStatus.PENDING:
|
if record.status != ApprovalStatus.PENDING:
|
||||||
raise HTTPException(status_code=400, detail=f"Approval already {record.status}")
|
raise HTTPException(status_code=400, detail=f"Approval already {record.status}")
|
||||||
|
|
||||||
record.status = ApprovalStatus.APPROVED
|
record = STORE.set_status(
|
||||||
record.resolved_at = datetime.utcnow()
|
request.approval_id,
|
||||||
record.resolved_by = "user"
|
ApprovalStatus.APPROVED,
|
||||||
|
resolved_by="user",
|
||||||
|
notify_request=True,
|
||||||
|
)
|
||||||
|
|
||||||
return _to_response(record)
|
return _to_response(record)
|
||||||
|
|
||||||
@@ -183,9 +208,12 @@ async def deny_tool_call(
|
|||||||
if record.status != ApprovalStatus.PENDING:
|
if record.status != ApprovalStatus.PENDING:
|
||||||
raise HTTPException(status_code=400, detail=f"Approval already {record.status}")
|
raise HTTPException(status_code=400, detail=f"Approval already {record.status}")
|
||||||
|
|
||||||
record.status = ApprovalStatus.DENIED
|
record = STORE.set_status(
|
||||||
record.resolved_at = datetime.utcnow()
|
request.approval_id,
|
||||||
record.resolved_by = "user"
|
ApprovalStatus.DENIED,
|
||||||
|
resolved_by="user",
|
||||||
|
notify_request=True,
|
||||||
|
)
|
||||||
record.metadata["denial_reason"] = request.reason
|
record.metadata["denial_reason"] = request.reason
|
||||||
|
|
||||||
return _to_response(record)
|
return _to_response(record)
|
||||||
@@ -200,7 +228,7 @@ async def list_pending_approvals(
|
|||||||
List pending tool approval requests.
|
List pending tool approval requests.
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
workspace_id: Filter by workspace
|
workspace_id: Filter by run id (historical query parameter name retained)
|
||||||
agent_id: Filter by agent
|
agent_id: Filter by agent
|
||||||
|
|
||||||
Returns:
|
Returns:
|
||||||
@@ -255,3 +283,58 @@ async def cancel_approval(
|
|||||||
|
|
||||||
STORE.cancel(approval_id)
|
STORE.cancel(approval_id)
|
||||||
return _to_response(record)
|
return _to_response(record)
|
||||||
|
|
||||||
|
|
||||||
|
@router.post("/approve/batch", response_model=BatchApprovalResponse)
|
||||||
|
async def batch_approve_tool_calls(
|
||||||
|
request: BatchApprovalRequest,
|
||||||
|
):
|
||||||
|
"""
|
||||||
|
Approve multiple pending tool calls in a single request.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
request: Batch approval parameters with list of approval IDs
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Batch approval results with successful and failed approvals
|
||||||
|
"""
|
||||||
|
approved: List[ApprovalResponse] = []
|
||||||
|
failed: List[Dict[str, Any]] = []
|
||||||
|
|
||||||
|
for approval_id in request.approval_ids:
|
||||||
|
record = STORE.get(approval_id)
|
||||||
|
if not record:
|
||||||
|
failed.append({
|
||||||
|
"approval_id": approval_id,
|
||||||
|
"error": "Approval request not found",
|
||||||
|
})
|
||||||
|
continue
|
||||||
|
|
||||||
|
if record.status != ApprovalStatus.PENDING:
|
||||||
|
failed.append({
|
||||||
|
"approval_id": approval_id,
|
||||||
|
"error": f"Approval already {record.status}",
|
||||||
|
})
|
||||||
|
continue
|
||||||
|
|
||||||
|
try:
|
||||||
|
record = STORE.set_status(
|
||||||
|
approval_id,
|
||||||
|
ApprovalStatus.APPROVED,
|
||||||
|
resolved_by="user",
|
||||||
|
notify_request=True,
|
||||||
|
)
|
||||||
|
approved.append(_to_response(record))
|
||||||
|
except Exception as e:
|
||||||
|
failed.append({
|
||||||
|
"approval_id": approval_id,
|
||||||
|
"error": str(e),
|
||||||
|
})
|
||||||
|
|
||||||
|
return BatchApprovalResponse(
|
||||||
|
approved=approved,
|
||||||
|
failed=failed,
|
||||||
|
total_requested=len(request.approval_ids),
|
||||||
|
total_approved=len(approved),
|
||||||
|
total_failed=len(failed),
|
||||||
|
)
|
||||||
|
|||||||
@@ -219,6 +219,22 @@ class GatewayStatusResponse(BaseModel):
|
|||||||
is_running: bool
|
is_running: bool
|
||||||
port: int
|
port: int
|
||||||
run_id: Optional[str] = None
|
run_id: Optional[str] = None
|
||||||
|
process_status: Optional[str] = None
|
||||||
|
pid: Optional[int] = None
|
||||||
|
|
||||||
|
|
||||||
|
class GatewayHealthResponse(BaseModel):
|
||||||
|
status: str
|
||||||
|
checks: Dict[str, Any]
|
||||||
|
timestamp: str
|
||||||
|
|
||||||
|
|
||||||
|
class RuntimeModeResponse(BaseModel):
|
||||||
|
mode: str
|
||||||
|
is_backtest: bool
|
||||||
|
run_id: Optional[str] = None
|
||||||
|
schedule_mode: Optional[str] = None
|
||||||
|
is_running: bool
|
||||||
|
|
||||||
|
|
||||||
class RuntimeConfigResponse(BaseModel):
|
class RuntimeConfigResponse(BaseModel):
|
||||||
@@ -264,6 +280,49 @@ def _load_run_snapshot(run_id: str) -> Dict[str, Any]:
|
|||||||
return json.loads(snapshot_path.read_text(encoding="utf-8"))
|
return json.loads(snapshot_path.read_text(encoding="utf-8"))
|
||||||
|
|
||||||
|
|
||||||
|
def _load_run_server_state(run_dir: Path) -> Dict[str, Any]:
|
||||||
|
"""Load persisted runtime server state if present."""
|
||||||
|
server_state_path = run_dir / "state" / "server_state.json"
|
||||||
|
if not server_state_path.exists():
|
||||||
|
return {}
|
||||||
|
try:
|
||||||
|
return json.loads(server_state_path.read_text(encoding="utf-8"))
|
||||||
|
except Exception:
|
||||||
|
return {}
|
||||||
|
|
||||||
|
|
||||||
|
def _extract_history_metrics(run_dir: Path) -> tuple[int, Optional[float]]:
|
||||||
|
"""Prefer runtime state files over dashboard exports for history summaries."""
|
||||||
|
server_state = _load_run_server_state(run_dir)
|
||||||
|
portfolio = server_state.get("portfolio") or {}
|
||||||
|
trades = server_state.get("trades")
|
||||||
|
total_trades = len(trades) if isinstance(trades, list) else 0
|
||||||
|
total_asset_value = None
|
||||||
|
if portfolio.get("total_value") is not None:
|
||||||
|
try:
|
||||||
|
total_asset_value = float(portfolio.get("total_value"))
|
||||||
|
except (TypeError, ValueError):
|
||||||
|
total_asset_value = None
|
||||||
|
|
||||||
|
if total_trades or total_asset_value is not None:
|
||||||
|
return total_trades, total_asset_value
|
||||||
|
|
||||||
|
summary_path = run_dir / "team_dashboard" / "summary.json"
|
||||||
|
if not summary_path.exists():
|
||||||
|
return 0, None
|
||||||
|
try:
|
||||||
|
summary = json.loads(summary_path.read_text(encoding="utf-8"))
|
||||||
|
total_trades = int(summary.get("totalTrades") or 0)
|
||||||
|
total_asset_value = (
|
||||||
|
float(summary.get("totalAssetValue"))
|
||||||
|
if summary.get("totalAssetValue") is not None
|
||||||
|
else None
|
||||||
|
)
|
||||||
|
return total_trades, total_asset_value
|
||||||
|
except Exception:
|
||||||
|
return 0, None
|
||||||
|
|
||||||
|
|
||||||
def _copy_path_if_exists(src: Path, dst: Path) -> None:
|
def _copy_path_if_exists(src: Path, dst: Path) -> None:
|
||||||
if not src.exists():
|
if not src.exists():
|
||||||
return
|
return
|
||||||
@@ -281,7 +340,7 @@ def _restore_run_assets(source_run_id: str, target_run_dir: Path) -> None:
|
|||||||
raise HTTPException(status_code=404, detail=f"Source run not found: {source_run_id}")
|
raise HTTPException(status_code=404, detail=f"Source run not found: {source_run_id}")
|
||||||
|
|
||||||
for relative in [
|
for relative in [
|
||||||
"team_dashboard",
|
"team_dashboard/_internal_state.json",
|
||||||
"agents",
|
"agents",
|
||||||
"skills",
|
"skills",
|
||||||
"memory",
|
"memory",
|
||||||
@@ -307,12 +366,10 @@ def _list_runs(limit: int = 50) -> list[RuntimeHistoryItem]:
|
|||||||
for run_dir in run_dirs[: max(1, int(limit))]:
|
for run_dir in run_dirs[: max(1, int(limit))]:
|
||||||
run_id = run_dir.name
|
run_id = run_dir.name
|
||||||
runtime_state_path = run_dir / "state" / "runtime_state.json"
|
runtime_state_path = run_dir / "state" / "runtime_state.json"
|
||||||
summary_path = run_dir / "team_dashboard" / "summary.json"
|
|
||||||
|
|
||||||
bootstrap: Dict[str, Any] = {}
|
bootstrap: Dict[str, Any] = {}
|
||||||
updated_at: Optional[str] = None
|
updated_at: Optional[str] = None
|
||||||
total_trades = 0
|
total_trades, total_asset_value = _extract_history_metrics(run_dir)
|
||||||
total_asset_value: Optional[float] = None
|
|
||||||
|
|
||||||
if runtime_state_path.exists():
|
if runtime_state_path.exists():
|
||||||
try:
|
try:
|
||||||
@@ -323,15 +380,6 @@ def _list_runs(limit: int = 50) -> list[RuntimeHistoryItem]:
|
|||||||
except Exception:
|
except Exception:
|
||||||
bootstrap = {}
|
bootstrap = {}
|
||||||
|
|
||||||
if summary_path.exists():
|
|
||||||
try:
|
|
||||||
summary = json.loads(summary_path.read_text(encoding="utf-8"))
|
|
||||||
total_trades = int(summary.get("totalTrades") or 0)
|
|
||||||
total_asset_value = float(summary.get("totalAssetValue")) if summary.get("totalAssetValue") is not None else None
|
|
||||||
except Exception:
|
|
||||||
total_trades = 0
|
|
||||||
total_asset_value = None
|
|
||||||
|
|
||||||
items.append(
|
items.append(
|
||||||
RuntimeHistoryItem(
|
RuntimeHistoryItem(
|
||||||
run_id=run_id,
|
run_id=run_id,
|
||||||
@@ -436,6 +484,14 @@ def _start_gateway_process(
|
|||||||
port: int
|
port: int
|
||||||
) -> subprocess.Popen:
|
) -> subprocess.Popen:
|
||||||
"""Start Gateway as a separate process."""
|
"""Start Gateway as a separate process."""
|
||||||
|
# Validate configuration before starting
|
||||||
|
validation_errors = _validate_gateway_config(bootstrap)
|
||||||
|
if validation_errors:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=400,
|
||||||
|
detail=f"Gateway configuration validation failed: {'; '.join(validation_errors)}"
|
||||||
|
)
|
||||||
|
|
||||||
# Prepare environment
|
# Prepare environment
|
||||||
env = os.environ.copy()
|
env = os.environ.copy()
|
||||||
|
|
||||||
@@ -467,6 +523,168 @@ def _start_gateway_process(
|
|||||||
return process
|
return process
|
||||||
|
|
||||||
|
|
||||||
|
def _validate_gateway_config(bootstrap: Dict[str, Any]) -> List[str]:
|
||||||
|
"""Validate Gateway bootstrap configuration.
|
||||||
|
|
||||||
|
Returns a list of validation error messages. Empty list means valid.
|
||||||
|
"""
|
||||||
|
errors: List[str] = []
|
||||||
|
|
||||||
|
# Check required environment variables based on mode
|
||||||
|
mode = bootstrap.get("mode", "live")
|
||||||
|
is_backtest = mode == "backtest"
|
||||||
|
|
||||||
|
# Validate mode
|
||||||
|
if mode not in ("live", "backtest"):
|
||||||
|
errors.append(f"Invalid mode '{mode}': must be 'live' or 'backtest'")
|
||||||
|
|
||||||
|
# Check API keys based on mode
|
||||||
|
if not is_backtest:
|
||||||
|
# Live mode requires FINNHUB_API_KEY
|
||||||
|
finnhub_key = os.getenv("FINNHUB_API_KEY")
|
||||||
|
if not finnhub_key:
|
||||||
|
errors.append("FINNHUB_API_KEY environment variable is required for live mode")
|
||||||
|
|
||||||
|
# Check LLM configuration
|
||||||
|
model_name = os.getenv("MODEL_NAME")
|
||||||
|
openai_key = os.getenv("OPENAI_API_KEY")
|
||||||
|
if not model_name:
|
||||||
|
errors.append("MODEL_NAME environment variable is not set")
|
||||||
|
if not openai_key:
|
||||||
|
errors.append("OPENAI_API_KEY environment variable is not set")
|
||||||
|
|
||||||
|
# Validate tickers
|
||||||
|
tickers = bootstrap.get("tickers", [])
|
||||||
|
if not tickers:
|
||||||
|
errors.append("No tickers specified in configuration")
|
||||||
|
elif not isinstance(tickers, list):
|
||||||
|
errors.append("Tickers must be a list")
|
||||||
|
|
||||||
|
# Validate numeric values
|
||||||
|
try:
|
||||||
|
initial_cash = float(bootstrap.get("initial_cash", 0))
|
||||||
|
if initial_cash <= 0:
|
||||||
|
errors.append("initial_cash must be greater than 0")
|
||||||
|
except (TypeError, ValueError):
|
||||||
|
errors.append("initial_cash must be a valid number")
|
||||||
|
|
||||||
|
try:
|
||||||
|
margin_requirement = float(bootstrap.get("margin_requirement", 0))
|
||||||
|
if margin_requirement < 0 or margin_requirement > 1:
|
||||||
|
errors.append("margin_requirement must be between 0 and 1")
|
||||||
|
except (TypeError, ValueError):
|
||||||
|
errors.append("margin_requirement must be a valid number")
|
||||||
|
|
||||||
|
# Validate backtest dates
|
||||||
|
if is_backtest:
|
||||||
|
start_date = bootstrap.get("start_date")
|
||||||
|
end_date = bootstrap.get("end_date")
|
||||||
|
if not start_date:
|
||||||
|
errors.append("start_date is required for backtest mode")
|
||||||
|
if not end_date:
|
||||||
|
errors.append("end_date is required for backtest mode")
|
||||||
|
if start_date and end_date:
|
||||||
|
try:
|
||||||
|
from datetime import datetime
|
||||||
|
start = datetime.strptime(start_date, "%Y-%m-%d")
|
||||||
|
end = datetime.strptime(end_date, "%Y-%m-%d")
|
||||||
|
if start >= end:
|
||||||
|
errors.append("start_date must be before end_date")
|
||||||
|
except ValueError:
|
||||||
|
errors.append("Dates must be in YYYY-MM-DD format")
|
||||||
|
|
||||||
|
# Validate schedule mode
|
||||||
|
schedule_mode = bootstrap.get("schedule_mode", "daily")
|
||||||
|
if schedule_mode not in ("daily", "intraday"):
|
||||||
|
errors.append(f"Invalid schedule_mode '{schedule_mode}': must be 'daily' or 'intraday'")
|
||||||
|
|
||||||
|
return errors
|
||||||
|
|
||||||
|
|
||||||
|
def _get_gateway_process_details() -> Dict[str, Any]:
|
||||||
|
"""Get detailed information about the Gateway process."""
|
||||||
|
process = _runtime_state.gateway_process
|
||||||
|
details = {
|
||||||
|
"pid": None,
|
||||||
|
"status": "not_running",
|
||||||
|
"returncode": None,
|
||||||
|
}
|
||||||
|
|
||||||
|
if process is None:
|
||||||
|
return details
|
||||||
|
|
||||||
|
details["pid"] = process.pid
|
||||||
|
returncode = process.poll()
|
||||||
|
|
||||||
|
if returncode is None:
|
||||||
|
details["status"] = "running"
|
||||||
|
details["returncode"] = None
|
||||||
|
else:
|
||||||
|
details["status"] = "exited"
|
||||||
|
details["returncode"] = returncode
|
||||||
|
|
||||||
|
return details
|
||||||
|
|
||||||
|
|
||||||
|
def _check_gateway_health() -> Dict[str, Any]:
|
||||||
|
"""Perform comprehensive health checks on Gateway."""
|
||||||
|
checks = {
|
||||||
|
"process": {"status": "unknown", "details": {}},
|
||||||
|
"port": {"status": "unknown", "details": {}},
|
||||||
|
"configuration": {"status": "unknown", "details": {}},
|
||||||
|
}
|
||||||
|
|
||||||
|
# Check process status
|
||||||
|
process_details = _get_gateway_process_details()
|
||||||
|
checks["process"]["details"] = process_details
|
||||||
|
|
||||||
|
if process_details["status"] == "running":
|
||||||
|
checks["process"]["status"] = "healthy"
|
||||||
|
elif process_details["status"] == "exited":
|
||||||
|
checks["process"]["status"] = "unhealthy"
|
||||||
|
checks["process"]["details"]["error"] = f"Process exited with code {process_details['returncode']}"
|
||||||
|
else:
|
||||||
|
checks["process"]["status"] = "unknown"
|
||||||
|
|
||||||
|
# Check port connectivity
|
||||||
|
import socket
|
||||||
|
port = _runtime_state.gateway_port
|
||||||
|
try:
|
||||||
|
with socket.create_connection(("127.0.0.1", port), timeout=2):
|
||||||
|
checks["port"]["status"] = "healthy"
|
||||||
|
checks["port"]["details"] = {"port": port, "accessible": True}
|
||||||
|
except OSError as e:
|
||||||
|
checks["port"]["status"] = "unhealthy"
|
||||||
|
checks["port"]["details"] = {"port": port, "accessible": False, "error": str(e)}
|
||||||
|
|
||||||
|
# Check configuration
|
||||||
|
try:
|
||||||
|
if _runtime_state.runtime_manager is not None:
|
||||||
|
checks["configuration"]["status"] = "healthy"
|
||||||
|
checks["configuration"]["details"]["has_runtime_manager"] = True
|
||||||
|
else:
|
||||||
|
checks["configuration"]["status"] = "degraded"
|
||||||
|
checks["configuration"]["details"]["has_runtime_manager"] = False
|
||||||
|
except Exception as e:
|
||||||
|
checks["configuration"]["status"] = "unknown"
|
||||||
|
checks["configuration"]["details"]["error"] = str(e)
|
||||||
|
|
||||||
|
# Determine overall status
|
||||||
|
statuses = [c["status"] for c in checks.values()]
|
||||||
|
if any(s == "unhealthy" for s in statuses):
|
||||||
|
overall_status = "unhealthy"
|
||||||
|
elif all(s == "healthy" for s in statuses):
|
||||||
|
overall_status = "healthy"
|
||||||
|
else:
|
||||||
|
overall_status = "degraded"
|
||||||
|
|
||||||
|
return {
|
||||||
|
"status": overall_status,
|
||||||
|
"checks": checks,
|
||||||
|
"timestamp": datetime.now().isoformat(),
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
@router.get("/context", response_model=RunContextResponse)
|
@router.get("/context", response_model=RunContextResponse)
|
||||||
async def get_run_context() -> RunContextResponse:
|
async def get_run_context() -> RunContextResponse:
|
||||||
"""Return active runtime context, or latest persisted context when stopped."""
|
"""Return active runtime context, or latest persisted context when stopped."""
|
||||||
@@ -512,9 +730,10 @@ async def get_runtime_history(limit: int = 20) -> RuntimeHistoryResponse:
|
|||||||
|
|
||||||
@router.get("/gateway/status", response_model=GatewayStatusResponse)
|
@router.get("/gateway/status", response_model=GatewayStatusResponse)
|
||||||
async def get_gateway_status() -> GatewayStatusResponse:
|
async def get_gateway_status() -> GatewayStatusResponse:
|
||||||
"""Get Gateway process status and port."""
|
"""Get Gateway process status and port with detailed process information."""
|
||||||
is_running = _is_gateway_running()
|
is_running = _is_gateway_running()
|
||||||
run_id = None
|
run_id = None
|
||||||
|
process_details = _get_gateway_process_details()
|
||||||
|
|
||||||
if is_running:
|
if is_running:
|
||||||
try:
|
try:
|
||||||
@@ -525,10 +744,55 @@ async def get_gateway_status() -> GatewayStatusResponse:
|
|||||||
return GatewayStatusResponse(
|
return GatewayStatusResponse(
|
||||||
is_running=is_running,
|
is_running=is_running,
|
||||||
port=_runtime_state.gateway_port,
|
port=_runtime_state.gateway_port,
|
||||||
run_id=run_id
|
run_id=run_id,
|
||||||
|
process_status=process_details["status"],
|
||||||
|
pid=process_details["pid"],
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
|
@router.get("/gateway/health", response_model=GatewayHealthResponse)
|
||||||
|
async def get_gateway_health() -> GatewayHealthResponse:
|
||||||
|
"""Get comprehensive Gateway health check including process, port, and configuration status."""
|
||||||
|
health = _check_gateway_health()
|
||||||
|
return GatewayHealthResponse(**health)
|
||||||
|
|
||||||
|
|
||||||
|
@router.get("/mode", response_model=RuntimeModeResponse)
|
||||||
|
async def get_runtime_mode() -> RuntimeModeResponse:
|
||||||
|
"""Get current runtime mode (live or backtest) and related configuration."""
|
||||||
|
is_running = _is_gateway_running()
|
||||||
|
|
||||||
|
if not is_running:
|
||||||
|
return RuntimeModeResponse(
|
||||||
|
mode="stopped",
|
||||||
|
is_backtest=False,
|
||||||
|
run_id=None,
|
||||||
|
schedule_mode=None,
|
||||||
|
is_running=False,
|
||||||
|
)
|
||||||
|
|
||||||
|
try:
|
||||||
|
context = _get_active_runtime_context()
|
||||||
|
bootstrap = context.get("bootstrap_values", {})
|
||||||
|
mode = bootstrap.get("mode", "live")
|
||||||
|
|
||||||
|
return RuntimeModeResponse(
|
||||||
|
mode=mode,
|
||||||
|
is_backtest=mode == "backtest",
|
||||||
|
run_id=context.get("config_name"),
|
||||||
|
schedule_mode=bootstrap.get("schedule_mode"),
|
||||||
|
is_running=True,
|
||||||
|
)
|
||||||
|
except HTTPException:
|
||||||
|
return RuntimeModeResponse(
|
||||||
|
mode="unknown",
|
||||||
|
is_backtest=False,
|
||||||
|
run_id=None,
|
||||||
|
schedule_mode=None,
|
||||||
|
is_running=False,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
@router.get("/gateway/port")
|
@router.get("/gateway/port")
|
||||||
async def get_gateway_port(request: Request) -> Dict[str, Any]:
|
async def get_gateway_port(request: Request) -> Dict[str, Any]:
|
||||||
"""Get WebSocket Gateway port for frontend connection."""
|
"""Get WebSocket Gateway port for frontend connection."""
|
||||||
@@ -807,14 +1071,38 @@ async def start_runtime(
|
|||||||
_runtime_state.gateway_process = None
|
_runtime_state.gateway_process = None
|
||||||
log_path = _get_gateway_log_path_for_run(run_id)
|
log_path = _get_gateway_log_path_for_run(run_id)
|
||||||
log_tail = _read_log_tail(log_path, max_chars=4000)
|
log_tail = _read_log_tail(log_path, max_chars=4000)
|
||||||
|
|
||||||
|
# Build detailed error message
|
||||||
|
error_details = []
|
||||||
|
error_details.append(f"Gateway process exited unexpectedly")
|
||||||
|
|
||||||
|
process_details = _get_gateway_process_details()
|
||||||
|
if process_details.get("returncode") is not None:
|
||||||
|
error_details.append(f"Exit code: {process_details['returncode']}")
|
||||||
|
|
||||||
|
if log_tail:
|
||||||
|
error_details.append(f"Recent log output:\n{log_tail}")
|
||||||
|
else:
|
||||||
|
error_details.append("No log output available. Check environment configuration.")
|
||||||
|
|
||||||
|
# Check common configuration issues
|
||||||
|
config_errors = _validate_gateway_config(bootstrap)
|
||||||
|
if config_errors:
|
||||||
|
error_details.append(f"Configuration issues detected: {'; '.join(config_errors)}")
|
||||||
|
|
||||||
raise HTTPException(
|
raise HTTPException(
|
||||||
status_code=500,
|
status_code=500,
|
||||||
detail=f"Gateway failed to start: {log_tail or 'Unknown error'}"
|
detail="\n".join(error_details)
|
||||||
)
|
)
|
||||||
|
|
||||||
|
except HTTPException:
|
||||||
|
raise
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
_stop_gateway()
|
_stop_gateway()
|
||||||
raise HTTPException(status_code=500, detail=f"Failed to start Gateway: {str(e)}")
|
raise HTTPException(
|
||||||
|
status_code=500,
|
||||||
|
detail=f"Failed to start Gateway: {type(e).__name__}: {str(e)}"
|
||||||
|
)
|
||||||
|
|
||||||
return LaunchResponse(
|
return LaunchResponse(
|
||||||
run_id=run_id,
|
run_id=run_id,
|
||||||
@@ -861,17 +1149,38 @@ async def stop_runtime(force: bool = True) -> StopResponse:
|
|||||||
was_running = _is_gateway_running()
|
was_running = _is_gateway_running()
|
||||||
|
|
||||||
if not was_running:
|
if not was_running:
|
||||||
|
process_details = _get_gateway_process_details()
|
||||||
|
if process_details["status"] == "exited":
|
||||||
|
# Process exited but we have a record of it
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=404,
|
||||||
|
detail=(
|
||||||
|
f"No runtime is currently running. "
|
||||||
|
f"Previous Gateway process exited with code {process_details['returncode']}. "
|
||||||
|
f"PID: {process_details['pid']}"
|
||||||
|
)
|
||||||
|
)
|
||||||
raise HTTPException(status_code=404, detail="No runtime is currently running")
|
raise HTTPException(status_code=404, detail="No runtime is currently running")
|
||||||
|
|
||||||
|
# Get process details before stopping for the response
|
||||||
|
process_details = _get_gateway_process_details()
|
||||||
|
pid_info = f" (PID: {process_details.get('pid')})" if process_details.get('pid') else ""
|
||||||
|
|
||||||
# Stop Gateway process
|
# Stop Gateway process
|
||||||
_stop_gateway()
|
stop_success = _stop_gateway()
|
||||||
|
|
||||||
|
if not stop_success:
|
||||||
|
raise HTTPException(
|
||||||
|
status_code=500,
|
||||||
|
detail=f"Failed to stop Gateway process{pid_info}. Process may have already terminated."
|
||||||
|
)
|
||||||
|
|
||||||
# Unregister runtime manager
|
# Unregister runtime manager
|
||||||
unregister_runtime_manager()
|
unregister_runtime_manager()
|
||||||
|
|
||||||
return StopResponse(
|
return StopResponse(
|
||||||
status="stopped",
|
status="stopped",
|
||||||
message="Runtime stopped successfully",
|
message=f"Runtime stopped successfully{pid_info}",
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
|
|||||||
@@ -1,8 +1,9 @@
|
|||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
"""
|
"""
|
||||||
Workspace API Routes
|
Workspace API Routes.
|
||||||
|
|
||||||
Provides REST API endpoints for workspace management.
|
These routes manage the design-time `workspaces/` registry, not the run-scoped
|
||||||
|
runtime data under `runs/<run_id>/`.
|
||||||
"""
|
"""
|
||||||
from typing import Any, Dict, List, Optional
|
from typing import Any, Dict, List, Optional
|
||||||
|
|
||||||
@@ -31,7 +32,7 @@ class UpdateWorkspaceRequest(BaseModel):
|
|||||||
|
|
||||||
|
|
||||||
class WorkspaceResponse(BaseModel):
|
class WorkspaceResponse(BaseModel):
|
||||||
"""Workspace information response."""
|
"""Design-time workspace information response."""
|
||||||
workspace_id: str
|
workspace_id: str
|
||||||
name: str
|
name: str
|
||||||
description: str
|
description: str
|
||||||
@@ -89,10 +90,10 @@ async def list_workspaces(
|
|||||||
manager: WorkspaceManager = Depends(get_workspace_manager),
|
manager: WorkspaceManager = Depends(get_workspace_manager),
|
||||||
):
|
):
|
||||||
"""
|
"""
|
||||||
List all workspaces.
|
List all design-time workspaces.
|
||||||
|
|
||||||
Returns:
|
Returns:
|
||||||
List of workspaces
|
List of design-time workspaces
|
||||||
"""
|
"""
|
||||||
workspaces = manager.list_workspaces()
|
workspaces = manager.list_workspaces()
|
||||||
return WorkspaceListResponse(
|
return WorkspaceListResponse(
|
||||||
|
|||||||
@@ -19,13 +19,31 @@ agent_factory: AgentFactory | None = None
|
|||||||
workspace_manager: WorkspaceManager | None = None
|
workspace_manager: WorkspaceManager | None = None
|
||||||
|
|
||||||
|
|
||||||
|
def _build_scope_payload(project_root: Path) -> dict[str, object]:
|
||||||
|
return {
|
||||||
|
"design_time_registry": {
|
||||||
|
"root": str(project_root / "workspaces"),
|
||||||
|
"meaning": "Persistent control-plane workspace registry",
|
||||||
|
},
|
||||||
|
"runtime_assets": {
|
||||||
|
"root": str(project_root / "runs"),
|
||||||
|
"meaning": "Run-scoped runtime state and agent assets",
|
||||||
|
},
|
||||||
|
"agent_route_note": (
|
||||||
|
"On `/api/workspaces/{workspace_id}/agents/...`, design-time CRUD "
|
||||||
|
"routes still use `workspaces/`, while profile/skills/file routes "
|
||||||
|
"use `workspace_id` as a run id under `runs/<run_id>/`."
|
||||||
|
),
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
def create_app(project_root: Path | None = None) -> FastAPI:
|
def create_app(project_root: Path | None = None) -> FastAPI:
|
||||||
"""Create the agent control-plane app."""
|
"""Create the agent control-plane app."""
|
||||||
resolved_project_root = project_root or Path(__file__).resolve().parents[2]
|
resolved_project_root = project_root or Path(__file__).resolve().parents[2]
|
||||||
|
|
||||||
@asynccontextmanager
|
@asynccontextmanager
|
||||||
async def lifespan(_app: FastAPI) -> AsyncGenerator[None, None]:
|
async def lifespan(_app: FastAPI) -> AsyncGenerator[None, None]:
|
||||||
"""Initialize workspace and registry state for the control plane."""
|
"""Initialize design-time workspace and registry state for the control plane."""
|
||||||
global agent_factory, workspace_manager
|
global agent_factory, workspace_manager
|
||||||
|
|
||||||
workspace_manager = WorkspaceManager(project_root=resolved_project_root)
|
workspace_manager = WorkspaceManager(project_root=resolved_project_root)
|
||||||
@@ -34,7 +52,7 @@ def create_app(project_root: Path | None = None) -> FastAPI:
|
|||||||
|
|
||||||
registry = get_registry()
|
registry = get_registry()
|
||||||
print("✓ 大时代 API started")
|
print("✓ 大时代 API started")
|
||||||
print(f" - Workspaces root: {agent_factory.workspaces_root}")
|
print(f" - Design workspaces root: {agent_factory.workspaces_root}")
|
||||||
print(f" - Registered agents: {registry.get_agent_count()}")
|
print(f" - Registered agents: {registry.get_agent_count()}")
|
||||||
|
|
||||||
yield
|
yield
|
||||||
@@ -63,6 +81,7 @@ def create_app(project_root: Path | None = None) -> FastAPI:
|
|||||||
if workspace_manager
|
if workspace_manager
|
||||||
else 0
|
else 0
|
||||||
),
|
),
|
||||||
|
"scope_roots": _build_scope_payload(resolved_project_root),
|
||||||
}
|
}
|
||||||
|
|
||||||
@app.get("/api/status")
|
@app.get("/api/status")
|
||||||
@@ -72,6 +91,7 @@ def create_app(project_root: Path | None = None) -> FastAPI:
|
|||||||
return {
|
return {
|
||||||
"status": "operational",
|
"status": "operational",
|
||||||
"registry": registry.get_stats(),
|
"registry": registry.get_stats(),
|
||||||
|
"scope": _build_scope_payload(resolved_project_root),
|
||||||
}
|
}
|
||||||
|
|
||||||
app.include_router(workspaces_router)
|
app.include_router(workspaces_router)
|
||||||
|
|||||||
@@ -1,5 +1,21 @@
|
|||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
"""Read-only OpenClaw CLI FastAPI surface."""
|
"""Read-only OpenClaw CLI FastAPI surface.
|
||||||
|
|
||||||
|
COMPATIBILITY_SURFACE: deferred
|
||||||
|
OWNER: runtime-team
|
||||||
|
SEE: docs/legacy-inventory.md#openclaw-dual-integration
|
||||||
|
|
||||||
|
This is the REST facade (port 8004) for OpenClaw integration.
|
||||||
|
For the WebSocket gateway integration, see:
|
||||||
|
- backend/services/gateway_openclaw_handlers.py
|
||||||
|
- shared/client/openclaw_websocket_client.py
|
||||||
|
|
||||||
|
Key differences:
|
||||||
|
- REST facade: typed Pydantic models, request/response, polling
|
||||||
|
- WebSocket: event-driven, real-time updates, bidirectional
|
||||||
|
|
||||||
|
Decision needed: which surface becomes the long-term contract?
|
||||||
|
"""
|
||||||
|
|
||||||
from __future__ import annotations
|
from __future__ import annotations
|
||||||
|
|
||||||
|
|||||||
@@ -6,7 +6,7 @@ from __future__ import annotations
|
|||||||
from fastapi import FastAPI
|
from fastapi import FastAPI
|
||||||
|
|
||||||
from backend.api import runtime_router
|
from backend.api import runtime_router
|
||||||
from backend.api.runtime import get_runtime_state
|
from backend.api.runtime import get_runtime_state, _check_gateway_health, _get_gateway_process_details
|
||||||
from backend.apps.cors import add_cors_middleware
|
from backend.apps.cors import add_cors_middleware
|
||||||
|
|
||||||
|
|
||||||
@@ -22,29 +22,57 @@ def create_app() -> FastAPI:
|
|||||||
|
|
||||||
@app.get("/health")
|
@app.get("/health")
|
||||||
async def health_check() -> dict[str, object]:
|
async def health_check() -> dict[str, object]:
|
||||||
"""Health check for the runtime service."""
|
"""Health check for the runtime service with Gateway process status."""
|
||||||
runtime_state = get_runtime_state()
|
runtime_state = get_runtime_state()
|
||||||
process = runtime_state.gateway_process
|
process = runtime_state.gateway_process
|
||||||
|
process_details = _get_gateway_process_details()
|
||||||
|
|
||||||
is_running = process is not None and process.poll() is None
|
is_running = process is not None and process.poll() is None
|
||||||
|
|
||||||
|
# Determine overall health status
|
||||||
|
if is_running:
|
||||||
|
status = "healthy"
|
||||||
|
elif process is not None:
|
||||||
|
# Process existed but exited
|
||||||
|
status = "degraded"
|
||||||
|
else:
|
||||||
|
status = "healthy" # Service is healthy even without Gateway running
|
||||||
|
|
||||||
return {
|
return {
|
||||||
"status": "healthy",
|
"status": status,
|
||||||
"service": "runtime-service",
|
"service": "runtime-service",
|
||||||
"gateway_running": is_running,
|
"gateway": {
|
||||||
"gateway_port": runtime_state.gateway_port,
|
"running": is_running,
|
||||||
|
"port": runtime_state.gateway_port,
|
||||||
|
"pid": process_details.get("pid"),
|
||||||
|
"process_status": process_details.get("status"),
|
||||||
|
"returncode": process_details.get("returncode"),
|
||||||
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@app.get("/health/gateway")
|
||||||
|
async def gateway_health_check() -> dict[str, object]:
|
||||||
|
"""Detailed health check for the Gateway subprocess."""
|
||||||
|
health = _check_gateway_health()
|
||||||
|
return health
|
||||||
|
|
||||||
@app.get("/api/status")
|
@app.get("/api/status")
|
||||||
async def api_status() -> dict[str, object]:
|
async def api_status() -> dict[str, object]:
|
||||||
"""Service-level status payload for runtime orchestration."""
|
"""Service-level status payload for runtime orchestration."""
|
||||||
runtime_state = get_runtime_state()
|
runtime_state = get_runtime_state()
|
||||||
process = runtime_state.gateway_process
|
process = runtime_state.gateway_process
|
||||||
|
process_details = _get_gateway_process_details()
|
||||||
|
|
||||||
is_running = process is not None and process.poll() is None
|
is_running = process is not None and process.poll() is None
|
||||||
|
|
||||||
return {
|
return {
|
||||||
"status": "operational",
|
"status": "operational",
|
||||||
"service": "runtime-service",
|
"service": "runtime-service",
|
||||||
"runtime": {
|
"runtime": {
|
||||||
"gateway_running": is_running,
|
"gateway_running": is_running,
|
||||||
"gateway_port": runtime_state.gateway_port,
|
"gateway_port": runtime_state.gateway_port,
|
||||||
|
"gateway_pid": process_details.get("pid"),
|
||||||
|
"gateway_process_status": process_details.get("status"),
|
||||||
"has_runtime_manager": runtime_state.runtime_manager is not None,
|
"has_runtime_manager": runtime_state.runtime_manager is not None,
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|||||||
328
backend/cli.py
328
backend/cli.py
@@ -5,12 +5,36 @@
|
|||||||
|
|
||||||
This module provides easy-to-use commands for running backtest, live trading,
|
This module provides easy-to-use commands for running backtest, live trading,
|
||||||
and frontend development server.
|
and frontend development server.
|
||||||
|
|
||||||
|
ARCHITECTURE NOTE:
|
||||||
|
==================
|
||||||
|
This CLI supports TWO distinct runtime modes:
|
||||||
|
|
||||||
|
1. STANDALONE MODE (default):
|
||||||
|
- Uses `evotraders backtest` or `evotraders live` commands
|
||||||
|
- Starts a self-contained monolithic Gateway process with all agents
|
||||||
|
- Suitable for: quick testing, single-machine deployment, development
|
||||||
|
- WebSocket server runs on port 8765 (default)
|
||||||
|
- No external service dependencies
|
||||||
|
|
||||||
|
2. MICROSERVICE MODE (production):
|
||||||
|
- Uses `./start-dev.sh` or manual service orchestration
|
||||||
|
- Runs 4 separate FastAPI services (agent, runtime, trading, news)
|
||||||
|
- Gateway runs as a subprocess of runtime_service
|
||||||
|
- Suitable for: production scaling, distributed deployment
|
||||||
|
- Services communicate via REST APIs
|
||||||
|
|
||||||
|
When microservices are already running, standalone mode will warn you about
|
||||||
|
port conflicts and potential confusion. Use `--force` to override.
|
||||||
|
|
||||||
|
For more details, see: docs/current-architecture.md
|
||||||
"""
|
"""
|
||||||
# flake8: noqa: E501
|
# flake8: noqa: E501
|
||||||
# pylint: disable=R0912, R0915
|
# pylint: disable=R0912, R0915
|
||||||
import logging
|
import logging
|
||||||
import os
|
import os
|
||||||
import shutil
|
import shutil
|
||||||
|
import socket
|
||||||
import subprocess
|
import subprocess
|
||||||
import sys
|
import sys
|
||||||
from datetime import datetime, timedelta
|
from datetime import datetime, timedelta
|
||||||
@@ -42,6 +66,17 @@ from backend.data.market_store import MarketStore
|
|||||||
from backend.enrich.llm_enricher import get_explain_model_info, llm_enrichment_enabled
|
from backend.enrich.llm_enricher import get_explain_model_info, llm_enrichment_enabled
|
||||||
from backend.enrich.news_enricher import enrich_symbols
|
from backend.enrich.news_enricher import enrich_symbols
|
||||||
|
|
||||||
|
# Microservice port definitions (for conflict detection)
|
||||||
|
MICROSERVICE_PORTS = {
|
||||||
|
"agent_service": 8000,
|
||||||
|
"trading_service": 8001,
|
||||||
|
"news_service": 8002,
|
||||||
|
"runtime_service": 8003,
|
||||||
|
}
|
||||||
|
|
||||||
|
# Gateway default port
|
||||||
|
GATEWAY_PORT = 8765
|
||||||
|
|
||||||
app = typer.Typer(
|
app = typer.Typer(
|
||||||
name="evotraders",
|
name="evotraders",
|
||||||
help="大时代:自进化多智能体交易系统",
|
help="大时代:自进化多智能体交易系统",
|
||||||
@@ -72,6 +107,101 @@ def get_project_root() -> Path:
|
|||||||
return Path(__file__).parent.parent
|
return Path(__file__).parent.parent
|
||||||
|
|
||||||
|
|
||||||
|
def _is_port_in_use(port: int, host: str = "127.0.0.1") -> bool:
|
||||||
|
"""Check if a port is already in use."""
|
||||||
|
try:
|
||||||
|
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
|
||||||
|
sock.settimeout(1.0)
|
||||||
|
result = sock.connect_ex((host, port))
|
||||||
|
return result == 0
|
||||||
|
except Exception:
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
def _detect_running_microservices() -> dict[str, int]:
|
||||||
|
"""Detect which microservices are already running."""
|
||||||
|
running = {}
|
||||||
|
for service_name, port in MICROSERVICE_PORTS.items():
|
||||||
|
if _is_port_in_use(port):
|
||||||
|
running[service_name] = port
|
||||||
|
return running
|
||||||
|
|
||||||
|
|
||||||
|
def _check_gateway_port_conflict(port: int) -> bool:
|
||||||
|
"""Check if the Gateway port is already in use."""
|
||||||
|
return _is_port_in_use(port)
|
||||||
|
|
||||||
|
|
||||||
|
def _display_mode_warning(
|
||||||
|
running_services: dict[str, int],
|
||||||
|
gateway_port: int,
|
||||||
|
force: bool = False,
|
||||||
|
) -> bool:
|
||||||
|
"""
|
||||||
|
Display warning when microservices are detected.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
True if should proceed, False if should abort
|
||||||
|
"""
|
||||||
|
if not running_services and not _check_gateway_port_conflict(gateway_port):
|
||||||
|
return True
|
||||||
|
|
||||||
|
console.print()
|
||||||
|
console.print(
|
||||||
|
Panel.fit(
|
||||||
|
"[bold yellow]⚠️ MICROSERVICE MODE DETECTED[/bold yellow]\n\n"
|
||||||
|
"You are attempting to start in STANDALONE mode, but microservices "
|
||||||
|
"appear to already be running. This can cause confusion and port conflicts.",
|
||||||
|
border_style="yellow",
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
if running_services:
|
||||||
|
console.print("\n[bold]Detected running services:[/bold]")
|
||||||
|
for service, port in running_services.items():
|
||||||
|
console.print(f" • {service}: [cyan]http://localhost:{port}[/cyan]")
|
||||||
|
|
||||||
|
if _check_gateway_port_conflict(gateway_port):
|
||||||
|
console.print(
|
||||||
|
f"\n[bold red]Port {gateway_port} is already in use![/bold red] "
|
||||||
|
"Another Gateway instance may be running."
|
||||||
|
)
|
||||||
|
|
||||||
|
console.print("\n[bold]Options:[/bold]")
|
||||||
|
console.print(" 1. Stop microservices first: [cyan]pkill -f 'uvicorn|backend.main'[/cyan]")
|
||||||
|
console.print(" 2. Use microservice mode instead: [cyan]./start-dev.sh[/cyan]")
|
||||||
|
console.print(" 3. Use a different port: [cyan]--port <other_port>[/cyan]")
|
||||||
|
|
||||||
|
if force:
|
||||||
|
console.print(
|
||||||
|
"\n[yellow]⚠️ --force flag used. Proceeding despite conflicts...[/yellow]"
|
||||||
|
)
|
||||||
|
return True
|
||||||
|
|
||||||
|
console.print()
|
||||||
|
should_proceed = Confirm.ask(
|
||||||
|
"Do you want to proceed anyway?",
|
||||||
|
default=False,
|
||||||
|
)
|
||||||
|
return should_proceed
|
||||||
|
|
||||||
|
|
||||||
|
def _display_standalone_banner(mode: str, config_name: str) -> None:
|
||||||
|
"""Display standalone mode startup banner."""
|
||||||
|
console.print(
|
||||||
|
Panel.fit(
|
||||||
|
f"[bold cyan]大时代 {mode.upper()} Mode[/bold cyan]\n"
|
||||||
|
"[dim]Standalone Mode (Monolithic Gateway)[/dim]",
|
||||||
|
border_style="cyan",
|
||||||
|
)
|
||||||
|
)
|
||||||
|
console.print("\n[dim]Architecture:[/dim]")
|
||||||
|
console.print(" Mode: [yellow]Standalone (Single Process)[/yellow]")
|
||||||
|
console.print(f" Config: [cyan]{config_name}[/cyan]")
|
||||||
|
console.print("\n[dim]Note: This is NOT microservice mode. For distributed deployment,")
|
||||||
|
console.print(" use ./start-dev.sh instead.[/dim]\n")
|
||||||
|
|
||||||
|
|
||||||
def handle_history_cleanup(config_name: str, auto_clean: bool = False) -> None:
|
def handle_history_cleanup(config_name: str, auto_clean: bool = False) -> None:
|
||||||
"""
|
"""
|
||||||
Handle cleanup of historical data for a given config.
|
Handle cleanup of historical data for a given config.
|
||||||
@@ -215,8 +345,8 @@ def run_data_updater(project_root: Path) -> None:
|
|||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
def initialize_workspace(config_name: str) -> Path:
|
def initialize_run_assets(config_name: str) -> Path:
|
||||||
"""Create run-scoped workspace files for a config."""
|
"""Create run-scoped agent assets and bootstrap files for a config."""
|
||||||
workspace_manager = WorkspaceManager(project_root=get_project_root())
|
workspace_manager = WorkspaceManager(project_root=get_project_root())
|
||||||
workspace_manager.initialize_default_assets(
|
workspace_manager.initialize_default_assets(
|
||||||
config_name=config_name,
|
config_name=config_name,
|
||||||
@@ -438,14 +568,18 @@ def init_workspace(
|
|||||||
"default",
|
"default",
|
||||||
"--config-name",
|
"--config-name",
|
||||||
"-c",
|
"-c",
|
||||||
help="Configuration name for the workspace",
|
help="Run label under runs/<config_name> for the initialized asset tree.",
|
||||||
),
|
),
|
||||||
):
|
):
|
||||||
"""Initialize run-scoped BOOTSTRAP and agent prompt asset files."""
|
"""Initialize run-scoped BOOTSTRAP and agent asset files.
|
||||||
run_dir = initialize_workspace(config_name)
|
|
||||||
|
The command name is retained for compatibility even though the target is
|
||||||
|
the run-scoped asset tree under `runs/<config_name>/`.
|
||||||
|
"""
|
||||||
|
run_dir = initialize_run_assets(config_name)
|
||||||
console.print(
|
console.print(
|
||||||
Panel.fit(
|
Panel.fit(
|
||||||
f"[bold green]Workspace initialized[/bold green]\n[cyan]{run_dir}[/cyan]",
|
f"[bold green]Run assets initialized[/bold green]\n[cyan]{run_dir}[/cyan]",
|
||||||
border_style="green",
|
border_style="green",
|
||||||
),
|
),
|
||||||
)
|
)
|
||||||
@@ -861,6 +995,13 @@ def team_show(
|
|||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
|
# =============================================================================
|
||||||
|
# STANDALONE MODE COMMANDS (backtest/live)
|
||||||
|
# =============================================================================
|
||||||
|
# These commands start a self-contained monolithic Gateway process.
|
||||||
|
# For microservice mode, use ./start-dev.sh instead.
|
||||||
|
# =============================================================================
|
||||||
|
|
||||||
@app.command()
|
@app.command()
|
||||||
def backtest(
|
def backtest(
|
||||||
start: Optional[str] = typer.Option(
|
start: Optional[str] = typer.Option(
|
||||||
@@ -876,10 +1017,10 @@ def backtest(
|
|||||||
help="End date for backtest (YYYY-MM-DD)",
|
help="End date for backtest (YYYY-MM-DD)",
|
||||||
),
|
),
|
||||||
config_name: str = typer.Option(
|
config_name: str = typer.Option(
|
||||||
"backtest",
|
"default_backtest_run",
|
||||||
"--config-name",
|
"--config-name",
|
||||||
"-c",
|
"-c",
|
||||||
help="Configuration name for this backtest run",
|
help="Run label under runs/<config_name> for this backtest runtime.",
|
||||||
),
|
),
|
||||||
host: str = typer.Option(
|
host: str = typer.Option(
|
||||||
"0.0.0.0",
|
"0.0.0.0",
|
||||||
@@ -887,7 +1028,7 @@ def backtest(
|
|||||||
help="WebSocket server host",
|
help="WebSocket server host",
|
||||||
),
|
),
|
||||||
port: int = typer.Option(
|
port: int = typer.Option(
|
||||||
8765,
|
GATEWAY_PORT,
|
||||||
"--port",
|
"--port",
|
||||||
"-p",
|
"-p",
|
||||||
help="WebSocket server port",
|
help="WebSocket server port",
|
||||||
@@ -907,22 +1048,24 @@ def backtest(
|
|||||||
"--enable-memory",
|
"--enable-memory",
|
||||||
help="Enable ReMeTaskLongTermMemory for agents (requires MEMORY_API_KEY)",
|
help="Enable ReMeTaskLongTermMemory for agents (requires MEMORY_API_KEY)",
|
||||||
),
|
),
|
||||||
|
force: bool = typer.Option(
|
||||||
|
False,
|
||||||
|
"--force",
|
||||||
|
help="Force start even if microservices are detected (may cause conflicts)",
|
||||||
|
),
|
||||||
):
|
):
|
||||||
"""
|
"""
|
||||||
Run backtest mode with historical data.
|
Run backtest mode in STANDALONE mode (monolithic Gateway).
|
||||||
|
|
||||||
Example:
|
This starts a self-contained process with all agents. For microservice
|
||||||
|
mode (distributed services), use ./start-dev.sh instead.
|
||||||
|
|
||||||
|
Examples:
|
||||||
evotraders backtest --start 2025-11-01 --end 2025-12-01
|
evotraders backtest --start 2025-11-01 --end 2025-12-01
|
||||||
evotraders backtest --config-name my_strategy --port 9000
|
evotraders backtest --config-name my_strategy --port 9000
|
||||||
evotraders backtest --clean # Clear historical data before starting
|
evotraders backtest --clean # Clear historical data before starting
|
||||||
evotraders backtest --enable-memory # Enable long-term memory
|
evotraders backtest --enable-memory # Enable long-term memory
|
||||||
"""
|
"""
|
||||||
console.print(
|
|
||||||
Panel.fit(
|
|
||||||
"[bold cyan]大时代 Backtest Mode[/bold cyan]",
|
|
||||||
border_style="cyan",
|
|
||||||
),
|
|
||||||
)
|
|
||||||
poll_interval = int(_normalize_typer_value(poll_interval, 10))
|
poll_interval = int(_normalize_typer_value(poll_interval, 10))
|
||||||
|
|
||||||
# Validate dates - required for backtest
|
# Validate dates - required for backtest
|
||||||
@@ -948,13 +1091,18 @@ def backtest(
|
|||||||
)
|
)
|
||||||
raise typer.Exit(1) from exc
|
raise typer.Exit(1) from exc
|
||||||
|
|
||||||
# Handle historical data cleanup
|
# Check for microservice conflicts
|
||||||
handle_history_cleanup(config_name, auto_clean=clean)
|
running_services = _detect_running_microservices()
|
||||||
|
if running_services or _check_gateway_port_conflict(port):
|
||||||
|
if not _display_mode_warning(running_services, port, force=force):
|
||||||
|
console.print("\n[yellow]Startup aborted.[/yellow]")
|
||||||
|
raise typer.Exit(0)
|
||||||
|
|
||||||
|
# Display standalone mode banner
|
||||||
|
_display_standalone_banner("backtest", config_name)
|
||||||
|
|
||||||
# Display configuration
|
# Display configuration
|
||||||
console.print("\n[bold]Configuration:[/bold]")
|
console.print("\n[bold]Configuration:[/bold]")
|
||||||
console.print(" Mode: Backtest")
|
|
||||||
console.print(f" Config: {config_name}")
|
|
||||||
console.print(f" Period: {start} -> {end}")
|
console.print(f" Period: {start} -> {end}")
|
||||||
console.print(f" Server: {host}:{port}")
|
console.print(f" Server: {host}:{port}")
|
||||||
console.print(f" Poll Interval: {poll_interval}s")
|
console.print(f" Poll Interval: {poll_interval}s")
|
||||||
@@ -964,6 +1112,9 @@ def backtest(
|
|||||||
console.print("\nAccess frontend at: [cyan]http://localhost:5173[/cyan]")
|
console.print("\nAccess frontend at: [cyan]http://localhost:5173[/cyan]")
|
||||||
console.print("Press Ctrl+C to stop\n")
|
console.print("Press Ctrl+C to stop\n")
|
||||||
|
|
||||||
|
# Handle historical data cleanup
|
||||||
|
handle_history_cleanup(config_name, auto_clean=clean)
|
||||||
|
|
||||||
# Change to project root
|
# Change to project root
|
||||||
project_root = get_project_root()
|
project_root = get_project_root()
|
||||||
os.chdir(project_root)
|
os.chdir(project_root)
|
||||||
@@ -1020,10 +1171,10 @@ def backtest(
|
|||||||
@app.command()
|
@app.command()
|
||||||
def live(
|
def live(
|
||||||
config_name: str = typer.Option(
|
config_name: str = typer.Option(
|
||||||
"live",
|
"default_live_run",
|
||||||
"--config-name",
|
"--config-name",
|
||||||
"-c",
|
"-c",
|
||||||
help="Configuration name for this live run",
|
help="Run label under runs/<config_name> for this live runtime.",
|
||||||
),
|
),
|
||||||
host: str = typer.Option(
|
host: str = typer.Option(
|
||||||
"0.0.0.0",
|
"0.0.0.0",
|
||||||
@@ -1031,7 +1182,7 @@ def live(
|
|||||||
help="WebSocket server host",
|
help="WebSocket server host",
|
||||||
),
|
),
|
||||||
port: int = typer.Option(
|
port: int = typer.Option(
|
||||||
8765,
|
GATEWAY_PORT,
|
||||||
"--port",
|
"--port",
|
||||||
"-p",
|
"-p",
|
||||||
help="WebSocket server port",
|
help="WebSocket server port",
|
||||||
@@ -1067,11 +1218,19 @@ def live(
|
|||||||
"--enable-memory",
|
"--enable-memory",
|
||||||
help="Enable ReMeTaskLongTermMemory for agents (requires MEMORY_API_KEY)",
|
help="Enable ReMeTaskLongTermMemory for agents (requires MEMORY_API_KEY)",
|
||||||
),
|
),
|
||||||
|
force: bool = typer.Option(
|
||||||
|
False,
|
||||||
|
"--force",
|
||||||
|
help="Force start even if microservices are detected (may cause conflicts)",
|
||||||
|
),
|
||||||
):
|
):
|
||||||
"""
|
"""
|
||||||
Run live trading mode with real-time data.
|
Run live trading mode in STANDALONE mode (monolithic Gateway).
|
||||||
|
|
||||||
Example:
|
This starts a self-contained process with all agents. For microservice
|
||||||
|
mode (distributed services), use ./start-dev.sh instead.
|
||||||
|
|
||||||
|
Examples:
|
||||||
evotraders live # Run immediately (default)
|
evotraders live # Run immediately (default)
|
||||||
evotraders live -t 22:30 # Run at 22:30 local time daily
|
evotraders live -t 22:30 # Run at 22:30 local time daily
|
||||||
evotraders live --schedule-mode intraday --interval-minutes 60
|
evotraders live --schedule-mode intraday --interval-minutes 60
|
||||||
@@ -1080,12 +1239,16 @@ def live(
|
|||||||
"""
|
"""
|
||||||
schedule_mode = str(_normalize_typer_value(schedule_mode, "daily"))
|
schedule_mode = str(_normalize_typer_value(schedule_mode, "daily"))
|
||||||
interval_minutes = int(_normalize_typer_value(interval_minutes, 60))
|
interval_minutes = int(_normalize_typer_value(interval_minutes, 60))
|
||||||
console.print(
|
|
||||||
Panel.fit(
|
# Check for microservice conflicts
|
||||||
"[bold cyan]大时代 LIVE Mode[/bold cyan]",
|
running_services = _detect_running_microservices()
|
||||||
border_style="cyan",
|
if running_services or _check_gateway_port_conflict(port):
|
||||||
),
|
if not _display_mode_warning(running_services, port, force=force):
|
||||||
)
|
console.print("\n[yellow]Startup aborted.[/yellow]")
|
||||||
|
raise typer.Exit(0)
|
||||||
|
|
||||||
|
# Display standalone mode banner
|
||||||
|
_display_standalone_banner("live", config_name)
|
||||||
|
|
||||||
# Check for required API key in live mode
|
# Check for required API key in live mode
|
||||||
env_file = get_project_root() / ".env"
|
env_file = get_project_root() / ".env"
|
||||||
@@ -1161,9 +1324,8 @@ def live(
|
|||||||
# Display configuration
|
# Display configuration
|
||||||
console.print("\n[bold]Configuration:[/bold]")
|
console.print("\n[bold]Configuration:[/bold]")
|
||||||
console.print(
|
console.print(
|
||||||
" Mode: [green]LIVE[/green] (Real-time prices via Finnhub)",
|
" Data Mode: [green]LIVE[/green] (Real-time prices via Finnhub)",
|
||||||
)
|
)
|
||||||
console.print(f" Config: {config_name}")
|
|
||||||
console.print(f" Server: {host}:{port}")
|
console.print(f" Server: {host}:{port}")
|
||||||
console.print(f" Poll Interval: {poll_interval}s")
|
console.print(f" Poll Interval: {poll_interval}s")
|
||||||
console.print(
|
console.print(
|
||||||
@@ -1230,7 +1392,7 @@ def live(
|
|||||||
@app.command()
|
@app.command()
|
||||||
def frontend(
|
def frontend(
|
||||||
port: int = typer.Option(
|
port: int = typer.Option(
|
||||||
8765,
|
GATEWAY_PORT,
|
||||||
"--ws-port",
|
"--ws-port",
|
||||||
"-p",
|
"-p",
|
||||||
help="WebSocket server port to connect to",
|
help="WebSocket server port to connect to",
|
||||||
@@ -1317,6 +1479,90 @@ def frontend(
|
|||||||
raise typer.Exit(1)
|
raise typer.Exit(1)
|
||||||
|
|
||||||
|
|
||||||
|
@app.command()
|
||||||
|
def status(
|
||||||
|
detailed: bool = typer.Option(
|
||||||
|
False,
|
||||||
|
"--detailed",
|
||||||
|
"-d",
|
||||||
|
help="Show detailed service information",
|
||||||
|
),
|
||||||
|
):
|
||||||
|
"""
|
||||||
|
Check the status of running services (microservice or standalone mode).
|
||||||
|
|
||||||
|
Detects whether microservices are running and shows their health status.
|
||||||
|
"""
|
||||||
|
console.print(
|
||||||
|
Panel.fit(
|
||||||
|
"[bold cyan]大时代 Service Status[/bold cyan]",
|
||||||
|
border_style="cyan",
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
running_services = _detect_running_microservices()
|
||||||
|
gateway_running = _check_gateway_port_conflict(GATEWAY_PORT)
|
||||||
|
|
||||||
|
# Determine mode
|
||||||
|
if running_services:
|
||||||
|
mode = "microservice"
|
||||||
|
console.print(f"\n[bold]Mode:[/bold] [green]{mode.upper()}[/green]")
|
||||||
|
console.print("[dim]Microservices are running on the following ports:[/dim]\n")
|
||||||
|
|
||||||
|
table = Table(title="Running Microservices")
|
||||||
|
table.add_column("Service", style="cyan")
|
||||||
|
table.add_column("Port", justify="right")
|
||||||
|
table.add_column("URL")
|
||||||
|
|
||||||
|
for service, port in running_services.items():
|
||||||
|
url = f"http://localhost:{port}"
|
||||||
|
table.add_row(service, str(port), url)
|
||||||
|
|
||||||
|
if gateway_running:
|
||||||
|
table.add_row(
|
||||||
|
"gateway (WebSocket)",
|
||||||
|
str(GATEWAY_PORT),
|
||||||
|
f"ws://localhost:{GATEWAY_PORT}",
|
||||||
|
)
|
||||||
|
|
||||||
|
console.print(table)
|
||||||
|
elif gateway_running:
|
||||||
|
mode = "standalone"
|
||||||
|
console.print(f"\n[bold]Mode:[/bold] [yellow]{mode.upper()}[/yellow]")
|
||||||
|
console.print("[dim]Standalone Gateway is running (monolithic mode)[/dim]")
|
||||||
|
console.print(f"\n Gateway: [cyan]ws://localhost:{GATEWAY_PORT}[/cyan]")
|
||||||
|
else:
|
||||||
|
console.print(f"\n[bold]Mode:[/bold] [red]NOT RUNNING[/red]")
|
||||||
|
console.print("\n[dim]No services detected. Start with:[/dim]")
|
||||||
|
console.print(" • Standalone: [cyan]evotraders backtest[/cyan] or [cyan]evotraders live[/cyan]")
|
||||||
|
console.print(" • Microservice: [cyan]./start-dev.sh[/cyan]")
|
||||||
|
|
||||||
|
if detailed and running_services:
|
||||||
|
console.print("\n[bold]Health Checks:[/bold]")
|
||||||
|
import urllib.request
|
||||||
|
import json
|
||||||
|
|
||||||
|
for service, port in running_services.items():
|
||||||
|
try:
|
||||||
|
req = urllib.request.Request(
|
||||||
|
f"http://localhost:{port}/health",
|
||||||
|
method="GET",
|
||||||
|
headers={"Accept": "application/json"},
|
||||||
|
)
|
||||||
|
with urllib.request.urlopen(req, timeout=2) as response:
|
||||||
|
if response.status == 200:
|
||||||
|
data = json.loads(response.read().decode())
|
||||||
|
status_text = data.get("status", "unknown")
|
||||||
|
color = "green" if status_text == "healthy" else "yellow"
|
||||||
|
console.print(f" {service}: [{color}]{status_text}[/{color}]")
|
||||||
|
else:
|
||||||
|
console.print(f" {service}: [yellow]HTTP {response.status}[/yellow]")
|
||||||
|
except Exception as e:
|
||||||
|
console.print(f" {service}: [red]unreachable ({type(e).__name__})[/red]")
|
||||||
|
|
||||||
|
console.print()
|
||||||
|
|
||||||
|
|
||||||
@app.command()
|
@app.command()
|
||||||
def version():
|
def version():
|
||||||
"""Show the version of 大时代."""
|
"""Show the version of 大时代."""
|
||||||
@@ -1330,7 +1576,17 @@ def main():
|
|||||||
"""
|
"""
|
||||||
大时代:自进化多智能体交易系统
|
大时代:自进化多智能体交易系统
|
||||||
|
|
||||||
Use 'evotraders --help' to see available commands.
|
RUNTIME MODES:
|
||||||
|
--------------
|
||||||
|
• STANDALONE (default): Use 'evotraders backtest' or 'evotraders live'
|
||||||
|
Starts a self-contained monolithic Gateway with all agents.
|
||||||
|
Best for: quick testing, single-machine deployment
|
||||||
|
|
||||||
|
• MICROSERVICE: Use './start-dev.sh'
|
||||||
|
Starts 4 separate FastAPI services + Gateway subprocess.
|
||||||
|
Best for: production scaling, distributed deployment
|
||||||
|
|
||||||
|
Use 'evotraders status' to check which mode is currently running.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
|
||||||
|
|||||||
@@ -77,7 +77,7 @@ def get_bootstrap_config_for_run(
|
|||||||
project_root: Path,
|
project_root: Path,
|
||||||
config_name: str,
|
config_name: str,
|
||||||
) -> BootstrapConfig:
|
) -> BootstrapConfig:
|
||||||
"""Load BOOTSTRAP.md from the run workspace."""
|
"""Load BOOTSTRAP.md from the run-scoped asset tree."""
|
||||||
return load_bootstrap_config(
|
return load_bootstrap_config(
|
||||||
project_root / "runs" / config_name / "BOOTSTRAP.md",
|
project_root / "runs" / config_name / "BOOTSTRAP.md",
|
||||||
)
|
)
|
||||||
|
|||||||
@@ -26,13 +26,45 @@ from backend.agents.team_pipeline_config import (
|
|||||||
resolve_active_analysts,
|
resolve_active_analysts,
|
||||||
update_active_analysts,
|
update_active_analysts,
|
||||||
)
|
)
|
||||||
from backend.agents import AnalystAgent
|
from backend.agents import AnalystAgent, EvoAgent
|
||||||
|
from backend.agents.agent_workspace import load_agent_workspace_config
|
||||||
from backend.agents.toolkit_factory import create_agent_toolkit
|
from backend.agents.toolkit_factory import create_agent_toolkit
|
||||||
from backend.agents.workspace_manager import WorkspaceManager
|
from backend.agents.workspace_manager import WorkspaceManager
|
||||||
from backend.agents.prompt_loader import get_prompt_loader
|
from backend.agents.prompt_loader import get_prompt_loader
|
||||||
from backend.llm.models import get_agent_formatter, get_agent_model
|
from backend.llm.models import get_agent_formatter, get_agent_model
|
||||||
from backend.config.constants import ANALYST_TYPES
|
from backend.config.constants import ANALYST_TYPES
|
||||||
|
|
||||||
|
|
||||||
|
def _resolve_evo_agent_ids() -> set[str]:
|
||||||
|
"""Return agent ids selected to use EvoAgent.
|
||||||
|
|
||||||
|
By default, all supported roles use EvoAgent.
|
||||||
|
EVO_AGENT_IDS can be used to limit to specific roles.
|
||||||
|
|
||||||
|
Supported roles:
|
||||||
|
- analyst roles (fundamentals, technical, sentiment, valuation)
|
||||||
|
- risk_manager
|
||||||
|
- portfolio_manager
|
||||||
|
|
||||||
|
Example:
|
||||||
|
EVO_AGENT_IDS=fundamentals_analyst,risk_manager,portfolio_manager
|
||||||
|
"""
|
||||||
|
raw = os.getenv("EVO_AGENT_IDS", "")
|
||||||
|
if not raw.strip():
|
||||||
|
# Default: all supported roles use EvoAgent
|
||||||
|
return set(ANALYST_TYPES) | {"risk_manager", "portfolio_manager"}
|
||||||
|
|
||||||
|
requested = {
|
||||||
|
item.strip()
|
||||||
|
for item in raw.split(",")
|
||||||
|
if item.strip()
|
||||||
|
}
|
||||||
|
return {
|
||||||
|
agent_id
|
||||||
|
for agent_id in requested
|
||||||
|
if agent_id in ANALYST_TYPES or agent_id in {"risk_manager", "portfolio_manager"}
|
||||||
|
}
|
||||||
|
|
||||||
# Team infrastructure imports (graceful import - may not exist yet)
|
# Team infrastructure imports (graceful import - may not exist yet)
|
||||||
try:
|
try:
|
||||||
from backend.agents.team.team_coordinator import TeamCoordinator
|
from backend.agents.team.team_coordinator import TeamCoordinator
|
||||||
@@ -140,6 +172,10 @@ class TradingPipeline:
|
|||||||
session_key = TradingSessionKey(date=date).key()
|
session_key = TradingSessionKey(date=date).key()
|
||||||
self._session_key = session_key
|
self._session_key = session_key
|
||||||
active_analysts = self._get_active_analysts()
|
active_analysts = self._get_active_analysts()
|
||||||
|
self._sync_agent_runtime_context(
|
||||||
|
agents=active_analysts + [self.risk_manager, self.pm],
|
||||||
|
session_key=session_key,
|
||||||
|
)
|
||||||
if self.runtime_manager:
|
if self.runtime_manager:
|
||||||
self.runtime_manager.set_session_key(session_key)
|
self.runtime_manager.set_session_key(session_key)
|
||||||
self._runtime_log_event("cycle:start", {"tickers": tickers, "date": date})
|
self._runtime_log_event("cycle:start", {"tickers": tickers, "date": date})
|
||||||
@@ -1488,108 +1524,6 @@ class TradingPipeline:
|
|||||||
return "Decisions: " + "; ".join(decision_texts)
|
return "Decisions: " + "; ".join(decision_texts)
|
||||||
return "Portfolio analysis completed. No trades recommended."
|
return "Portfolio analysis completed. No trades recommended."
|
||||||
|
|
||||||
def load_agents_from_workspace(
|
|
||||||
self,
|
|
||||||
workspace_id: str,
|
|
||||||
agent_factory: Optional[Any] = None,
|
|
||||||
) -> Dict[str, Any]:
|
|
||||||
"""
|
|
||||||
Load agents from workspace using AgentFactory.
|
|
||||||
|
|
||||||
This method supports the new EvoAgent architecture by loading
|
|
||||||
agents from a workspace instead of using hardcoded agents.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
workspace_id: Workspace identifier
|
|
||||||
agent_factory: Optional AgentFactory instance (uses self.agent_factory if None)
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Dictionary with loaded agents:
|
|
||||||
{
|
|
||||||
"analysts": List[EvoAgent],
|
|
||||||
"risk_manager": EvoAgent,
|
|
||||||
"portfolio_manager": EvoAgent,
|
|
||||||
}
|
|
||||||
|
|
||||||
Raises:
|
|
||||||
ValueError: If workspace doesn't exist or no agents found
|
|
||||||
"""
|
|
||||||
factory = agent_factory or self.agent_factory
|
|
||||||
if factory is None:
|
|
||||||
from backend.agents import AgentFactory
|
|
||||||
factory = AgentFactory()
|
|
||||||
|
|
||||||
# Check workspace exists
|
|
||||||
if not factory.workspaces_root.exists():
|
|
||||||
raise ValueError(f"Workspaces root does not exist: {factory.workspaces_root}")
|
|
||||||
|
|
||||||
workspace_dir = factory.workspaces_root / workspace_id
|
|
||||||
if not workspace_dir.exists():
|
|
||||||
raise ValueError(f"Workspace '{workspace_id}' does not exist")
|
|
||||||
|
|
||||||
# Load agents from workspace
|
|
||||||
agents_data = factory.list_agents(workspace_id=workspace_id)
|
|
||||||
|
|
||||||
if not agents_data:
|
|
||||||
raise ValueError(f"No agents found in workspace '{workspace_id}'")
|
|
||||||
|
|
||||||
# Categorize agents by type
|
|
||||||
analysts = []
|
|
||||||
risk_manager = None
|
|
||||||
portfolio_manager = None
|
|
||||||
|
|
||||||
for agent_data in agents_data:
|
|
||||||
agent_type = agent_data.get("agent_type", "unknown")
|
|
||||||
agent_id = agent_data.get("agent_id")
|
|
||||||
|
|
||||||
# Load full agent configuration
|
|
||||||
config_path = Path(agent_data.get("config_path", ""))
|
|
||||||
if config_path.exists():
|
|
||||||
agent = factory.load_agent(agent_id, workspace_id)
|
|
||||||
|
|
||||||
if agent_type.endswith("_analyst"):
|
|
||||||
analysts.append(agent)
|
|
||||||
elif agent_type == "risk_manager":
|
|
||||||
risk_manager = agent
|
|
||||||
elif agent_type == "portfolio_manager":
|
|
||||||
portfolio_manager = agent
|
|
||||||
|
|
||||||
if not analysts:
|
|
||||||
raise ValueError(f"No analysts found in workspace '{workspace_id}'")
|
|
||||||
if risk_manager is None:
|
|
||||||
raise ValueError(f"No risk_manager found in workspace '{workspace_id}'")
|
|
||||||
if portfolio_manager is None:
|
|
||||||
raise ValueError(f"No portfolio_manager found in workspace '{workspace_id}'")
|
|
||||||
|
|
||||||
return {
|
|
||||||
"analysts": analysts,
|
|
||||||
"risk_manager": risk_manager,
|
|
||||||
"portfolio_manager": portfolio_manager,
|
|
||||||
}
|
|
||||||
|
|
||||||
def reload_agents_from_workspace(self, workspace_id: Optional[str] = None) -> None:
|
|
||||||
"""
|
|
||||||
Reload all agents from workspace.
|
|
||||||
|
|
||||||
This updates self.analysts, self.risk_manager, and self.pm
|
|
||||||
with agents loaded from the specified workspace.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
workspace_id: Workspace ID (uses self.workspace_id if None)
|
|
||||||
"""
|
|
||||||
ws_id = workspace_id or self.workspace_id
|
|
||||||
if not ws_id:
|
|
||||||
raise ValueError("No workspace_id specified")
|
|
||||||
|
|
||||||
loaded = self.load_agents_from_workspace(ws_id)
|
|
||||||
|
|
||||||
self.analysts = loaded["analysts"]
|
|
||||||
self.risk_manager = loaded["risk_manager"]
|
|
||||||
self.pm = loaded["portfolio_manager"]
|
|
||||||
self.workspace_id = ws_id
|
|
||||||
|
|
||||||
logger.info(f"Reloaded {len(self.analysts)} analysts from workspace '{ws_id}'")
|
|
||||||
|
|
||||||
def _runtime_update_status(self, agent: Any, status: str) -> None:
|
def _runtime_update_status(self, agent: Any, status: str) -> None:
|
||||||
if not self.runtime_manager:
|
if not self.runtime_manager:
|
||||||
return
|
return
|
||||||
@@ -1602,6 +1536,28 @@ class TradingPipeline:
|
|||||||
for agent in agents:
|
for agent in agents:
|
||||||
self._runtime_update_status(agent, status)
|
self._runtime_update_status(agent, status)
|
||||||
|
|
||||||
|
def _sync_agent_runtime_context(
|
||||||
|
self,
|
||||||
|
agents: List[Any],
|
||||||
|
session_key: str,
|
||||||
|
) -> None:
|
||||||
|
"""Propagate run/session identifiers onto agent instances.
|
||||||
|
|
||||||
|
EvoAgent's tool-guard approval records depend on workspace/session
|
||||||
|
context being present on the agent object at runtime.
|
||||||
|
"""
|
||||||
|
config_name = getattr(self.pm, "config", {}).get("config_name", "default")
|
||||||
|
for agent in agents:
|
||||||
|
try:
|
||||||
|
setattr(agent, "session_id", session_key)
|
||||||
|
if not getattr(agent, "run_id", None):
|
||||||
|
setattr(agent, "run_id", config_name)
|
||||||
|
# Keep workspace_id for backward compatibility
|
||||||
|
if not getattr(agent, "workspace_id", None):
|
||||||
|
setattr(agent, "workspace_id", config_name)
|
||||||
|
except Exception:
|
||||||
|
continue
|
||||||
|
|
||||||
def _all_analysts(self) -> List[Any]:
|
def _all_analysts(self) -> List[Any]:
|
||||||
"""Return static analysts plus runtime-created analysts."""
|
"""Return static analysts plus runtime-created analysts."""
|
||||||
return list(self.analysts) + list(self._dynamic_analysts.values())
|
return list(self.analysts) + list(self._dynamic_analysts.values())
|
||||||
@@ -1630,18 +1586,46 @@ class TradingPipeline:
|
|||||||
),
|
),
|
||||||
)
|
)
|
||||||
|
|
||||||
agent = AnalystAgent(
|
# Determine whether to use EvoAgent based on EVO_AGENT_IDS
|
||||||
analyst_type=analyst_type,
|
use_evo_agent = analyst_type in _resolve_evo_agent_ids()
|
||||||
toolkit=create_agent_toolkit(
|
|
||||||
|
if use_evo_agent:
|
||||||
|
from backend.agents.skills_manager import SkillsManager
|
||||||
|
skills_manager = SkillsManager(project_root=project_root)
|
||||||
|
workspace_dir = skills_manager.get_agent_asset_dir(
|
||||||
|
config_name,
|
||||||
|
agent_id,
|
||||||
|
)
|
||||||
|
agent_config = load_agent_workspace_config(workspace_dir / "agent.yaml")
|
||||||
|
agent = EvoAgent(
|
||||||
|
agent_id=agent_id,
|
||||||
|
config_name=config_name,
|
||||||
|
workspace_dir=workspace_dir,
|
||||||
|
model=get_agent_model(analyst_type),
|
||||||
|
formatter=get_agent_formatter(analyst_type),
|
||||||
|
prompt_files=agent_config.prompt_files,
|
||||||
|
)
|
||||||
|
agent.toolkit = create_agent_toolkit(
|
||||||
agent_id=agent_id,
|
agent_id=agent_id,
|
||||||
config_name=config_name,
|
config_name=config_name,
|
||||||
active_skill_dirs=[],
|
active_skill_dirs=[],
|
||||||
),
|
)
|
||||||
model=get_agent_model(analyst_type),
|
setattr(agent, "run_id", config_name)
|
||||||
formatter=get_agent_formatter(analyst_type),
|
# Keep workspace_id for backward compatibility
|
||||||
agent_id=agent_id,
|
setattr(agent, "workspace_id", config_name)
|
||||||
config={"config_name": config_name},
|
else:
|
||||||
)
|
agent = AnalystAgent(
|
||||||
|
analyst_type=analyst_type,
|
||||||
|
toolkit=create_agent_toolkit(
|
||||||
|
agent_id=agent_id,
|
||||||
|
config_name=config_name,
|
||||||
|
active_skill_dirs=[],
|
||||||
|
),
|
||||||
|
model=get_agent_model(analyst_type),
|
||||||
|
formatter=get_agent_formatter(analyst_type),
|
||||||
|
agent_id=agent_id,
|
||||||
|
config={"config_name": config_name},
|
||||||
|
)
|
||||||
self._dynamic_analysts[agent_id] = agent
|
self._dynamic_analysts[agent_id] = agent
|
||||||
update_active_analysts(
|
update_active_analysts(
|
||||||
project_root=project_root,
|
project_root=project_root,
|
||||||
|
|||||||
@@ -12,9 +12,10 @@ import asyncio
|
|||||||
import os
|
import os
|
||||||
from contextlib import AsyncExitStack
|
from contextlib import AsyncExitStack
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
from typing import Any, Dict, Optional, Callable
|
from typing import Any, Dict, List, Optional, Callable
|
||||||
|
|
||||||
from backend.agents import AnalystAgent, PMAgent, RiskAgent
|
from backend.agents import AnalystAgent, EvoAgent, PMAgent, RiskAgent
|
||||||
|
from backend.agents.agent_workspace import load_agent_workspace_config
|
||||||
from backend.agents.skills_manager import SkillsManager
|
from backend.agents.skills_manager import SkillsManager
|
||||||
from backend.agents.toolkit_factory import create_agent_toolkit, load_agent_profiles
|
from backend.agents.toolkit_factory import create_agent_toolkit, load_agent_profiles
|
||||||
from backend.agents.prompt_loader import get_prompt_loader
|
from backend.agents.prompt_loader import get_prompt_loader
|
||||||
@@ -41,6 +42,9 @@ _prompt_loader = get_prompt_loader()
|
|||||||
# Global gateway reference for cleanup
|
# Global gateway reference for cleanup
|
||||||
_gateway_instance: Optional[Gateway] = None
|
_gateway_instance: Optional[Gateway] = None
|
||||||
|
|
||||||
|
# Global long-term memory references for persistence
|
||||||
|
_long_term_memories: List[Any] = []
|
||||||
|
|
||||||
|
|
||||||
def _set_gateway(gateway: Optional[Gateway]) -> None:
|
def _set_gateway(gateway: Optional[Gateway]) -> None:
|
||||||
"""Set global gateway reference."""
|
"""Set global gateway reference."""
|
||||||
@@ -61,6 +65,101 @@ def stop_gateway() -> None:
|
|||||||
_gateway_instance = None
|
_gateway_instance = None
|
||||||
|
|
||||||
|
|
||||||
|
def _set_long_term_memories(memories: List[Any]) -> None:
|
||||||
|
"""Set global long-term memory references."""
|
||||||
|
global _long_term_memories
|
||||||
|
_long_term_memories = memories
|
||||||
|
|
||||||
|
|
||||||
|
def _clear_long_term_memories() -> None:
|
||||||
|
"""Clear global long-term memory references."""
|
||||||
|
global _long_term_memories
|
||||||
|
_long_term_memories = []
|
||||||
|
|
||||||
|
|
||||||
|
def _persist_long_term_memories_sync() -> None:
|
||||||
|
"""
|
||||||
|
Synchronously persist all long-term memories before shutdown.
|
||||||
|
|
||||||
|
This function ensures all memory data is flushed to disk/vector store
|
||||||
|
before the process exits. Should be called during cleanup.
|
||||||
|
"""
|
||||||
|
global _long_term_memories
|
||||||
|
if not _long_term_memories:
|
||||||
|
return
|
||||||
|
|
||||||
|
import logging
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
logger.info(f"[MemoryPersistence] Persisting {len(_long_term_memories)} memory instances...")
|
||||||
|
|
||||||
|
for i, memory in enumerate(_long_term_memories):
|
||||||
|
try:
|
||||||
|
# Try to save memory if it has a save method
|
||||||
|
if hasattr(memory, 'save') and callable(getattr(memory, 'save')):
|
||||||
|
if hasattr(memory, 'sync') and callable(getattr(memory, 'sync')):
|
||||||
|
# Use sync version if available
|
||||||
|
memory.sync()
|
||||||
|
logger.debug(f"[MemoryPersistence] Synced memory {i}")
|
||||||
|
else:
|
||||||
|
# Try async save with event loop
|
||||||
|
import asyncio
|
||||||
|
try:
|
||||||
|
loop = asyncio.get_event_loop()
|
||||||
|
if loop.is_running():
|
||||||
|
# Schedule save in running loop
|
||||||
|
loop.create_task(memory.save())
|
||||||
|
logger.debug(f"[MemoryPersistence] Scheduled save for memory {i}")
|
||||||
|
else:
|
||||||
|
loop.run_until_complete(memory.save())
|
||||||
|
logger.debug(f"[MemoryPersistence] Saved memory {i}")
|
||||||
|
except RuntimeError:
|
||||||
|
# No event loop, skip async save
|
||||||
|
pass
|
||||||
|
|
||||||
|
# Try to flush any pending writes
|
||||||
|
if hasattr(memory, 'flush') and callable(getattr(memory, 'flush')):
|
||||||
|
memory.flush()
|
||||||
|
logger.debug(f"[MemoryPersistence] Flushed memory {i}")
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.warning(f"[MemoryPersistence] Failed to persist memory {i}: {e}")
|
||||||
|
|
||||||
|
logger.info("[MemoryPersistence] Memory persistence complete")
|
||||||
|
|
||||||
|
|
||||||
|
async def _persist_long_term_memories_async() -> None:
|
||||||
|
"""
|
||||||
|
Asynchronously persist all long-term memories.
|
||||||
|
|
||||||
|
This is the preferred method for persisting memories when
|
||||||
|
an async context is available.
|
||||||
|
"""
|
||||||
|
global _long_term_memories
|
||||||
|
if not _long_term_memories:
|
||||||
|
return
|
||||||
|
|
||||||
|
import logging
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
logger.info(f"[MemoryPersistence] Persisting {len(_long_term_memories)} memory instances async...")
|
||||||
|
|
||||||
|
for i, memory in enumerate(_long_term_memories):
|
||||||
|
try:
|
||||||
|
# Try async save first
|
||||||
|
if hasattr(memory, 'save') and callable(getattr(memory, 'save')):
|
||||||
|
await memory.save()
|
||||||
|
logger.debug(f"[MemoryPersistence] Saved memory {i} (async)")
|
||||||
|
|
||||||
|
# Try flush if available
|
||||||
|
if hasattr(memory, 'flush') and callable(getattr(memory, 'flush')):
|
||||||
|
memory.flush()
|
||||||
|
logger.debug(f"[MemoryPersistence] Flushed memory {i}")
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.warning(f"[MemoryPersistence] Failed to persist memory {i}: {e}")
|
||||||
|
|
||||||
|
logger.info("[MemoryPersistence] Async memory persistence complete")
|
||||||
|
|
||||||
|
|
||||||
def create_long_term_memory(agent_name: str, run_id: str, run_dir: Path):
|
def create_long_term_memory(agent_name: str, run_id: str, run_dir: Path):
|
||||||
"""Create ReMeTaskLongTermMemory for an agent."""
|
"""Create ReMeTaskLongTermMemory for an agent."""
|
||||||
try:
|
try:
|
||||||
@@ -96,6 +195,179 @@ def create_long_term_memory(agent_name: str, run_id: str, run_dir: Path):
|
|||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def _resolve_evo_agent_ids() -> set[str]:
|
||||||
|
"""Return agent ids selected to use EvoAgent.
|
||||||
|
|
||||||
|
By default, all supported roles use EvoAgent.
|
||||||
|
"""
|
||||||
|
raw = os.getenv("EVO_AGENT_IDS", "")
|
||||||
|
if not raw.strip():
|
||||||
|
# Default: all supported roles use EvoAgent
|
||||||
|
return set(ANALYST_TYPES) | {"risk_manager", "portfolio_manager"}
|
||||||
|
|
||||||
|
requested = {
|
||||||
|
item.strip()
|
||||||
|
for item in raw.split(",")
|
||||||
|
if item.strip()
|
||||||
|
}
|
||||||
|
return {
|
||||||
|
agent_id
|
||||||
|
for agent_id in requested
|
||||||
|
if agent_id in ANALYST_TYPES or agent_id in {"risk_manager", "portfolio_manager"}
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
def _create_analyst_agent(
|
||||||
|
*,
|
||||||
|
analyst_type: str,
|
||||||
|
run_id: str,
|
||||||
|
model,
|
||||||
|
formatter,
|
||||||
|
skills_manager: SkillsManager,
|
||||||
|
active_skill_map: Dict[str, list[Path]],
|
||||||
|
long_term_memory=None,
|
||||||
|
):
|
||||||
|
"""Create one analyst agent, optionally using EvoAgent."""
|
||||||
|
active_skill_dirs = active_skill_map.get(analyst_type, [])
|
||||||
|
toolkit = create_agent_toolkit(
|
||||||
|
analyst_type,
|
||||||
|
run_id,
|
||||||
|
active_skill_dirs=active_skill_dirs,
|
||||||
|
)
|
||||||
|
|
||||||
|
use_evo_agent = analyst_type in _resolve_evo_agent_ids()
|
||||||
|
|
||||||
|
if use_evo_agent:
|
||||||
|
workspace_dir = skills_manager.get_agent_asset_dir(run_id, analyst_type)
|
||||||
|
agent_config = load_agent_workspace_config(workspace_dir / "agent.yaml")
|
||||||
|
agent = EvoAgent(
|
||||||
|
agent_id=analyst_type,
|
||||||
|
config_name=run_id,
|
||||||
|
workspace_dir=workspace_dir,
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
skills_manager=skills_manager,
|
||||||
|
prompt_files=agent_config.prompt_files,
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
)
|
||||||
|
agent.toolkit = toolkit
|
||||||
|
setattr(agent, "workspace_id", run_id)
|
||||||
|
return agent
|
||||||
|
|
||||||
|
return AnalystAgent(
|
||||||
|
analyst_type=analyst_type,
|
||||||
|
toolkit=toolkit,
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
agent_id=analyst_type,
|
||||||
|
config={"config_name": run_id},
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def _create_risk_manager_agent(
|
||||||
|
*,
|
||||||
|
run_id: str,
|
||||||
|
model,
|
||||||
|
formatter,
|
||||||
|
skills_manager: SkillsManager,
|
||||||
|
active_skill_map: Dict[str, list[Path]],
|
||||||
|
long_term_memory=None,
|
||||||
|
):
|
||||||
|
"""Create the risk manager, optionally using EvoAgent."""
|
||||||
|
active_skill_dirs = active_skill_map.get("risk_manager", [])
|
||||||
|
toolkit = create_agent_toolkit(
|
||||||
|
"risk_manager",
|
||||||
|
run_id,
|
||||||
|
active_skill_dirs=active_skill_dirs,
|
||||||
|
)
|
||||||
|
|
||||||
|
use_evo_agent = "risk_manager" in _resolve_evo_agent_ids()
|
||||||
|
|
||||||
|
if use_evo_agent:
|
||||||
|
workspace_dir = skills_manager.get_agent_asset_dir(run_id, "risk_manager")
|
||||||
|
agent_config = load_agent_workspace_config(workspace_dir / "agent.yaml")
|
||||||
|
agent = EvoAgent(
|
||||||
|
agent_id="risk_manager",
|
||||||
|
config_name=run_id,
|
||||||
|
workspace_dir=workspace_dir,
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
skills_manager=skills_manager,
|
||||||
|
prompt_files=agent_config.prompt_files,
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
)
|
||||||
|
agent.toolkit = toolkit
|
||||||
|
setattr(agent, "workspace_id", run_id)
|
||||||
|
return agent
|
||||||
|
|
||||||
|
return RiskAgent(
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
name="risk_manager",
|
||||||
|
config={"config_name": run_id},
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
toolkit=toolkit,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def _create_portfolio_manager_agent(
|
||||||
|
*,
|
||||||
|
run_id: str,
|
||||||
|
model,
|
||||||
|
formatter,
|
||||||
|
initial_cash: float,
|
||||||
|
margin_requirement: float,
|
||||||
|
skills_manager: SkillsManager,
|
||||||
|
active_skill_map: Dict[str, list[Path]],
|
||||||
|
long_term_memory=None,
|
||||||
|
):
|
||||||
|
"""Create the portfolio manager, optionally using EvoAgent."""
|
||||||
|
active_skill_dirs = active_skill_map.get("portfolio_manager", [])
|
||||||
|
use_evo_agent = "portfolio_manager" in _resolve_evo_agent_ids()
|
||||||
|
|
||||||
|
if use_evo_agent:
|
||||||
|
workspace_dir = skills_manager.get_agent_asset_dir(
|
||||||
|
run_id,
|
||||||
|
"portfolio_manager",
|
||||||
|
)
|
||||||
|
agent_config = load_agent_workspace_config(workspace_dir / "agent.yaml")
|
||||||
|
agent = EvoAgent(
|
||||||
|
agent_id="portfolio_manager",
|
||||||
|
config_name=run_id,
|
||||||
|
workspace_dir=workspace_dir,
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
skills_manager=skills_manager,
|
||||||
|
prompt_files=agent_config.prompt_files,
|
||||||
|
initial_cash=initial_cash,
|
||||||
|
margin_requirement=margin_requirement,
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
)
|
||||||
|
agent.toolkit = create_agent_toolkit(
|
||||||
|
"portfolio_manager",
|
||||||
|
run_id,
|
||||||
|
owner=agent,
|
||||||
|
active_skill_dirs=active_skill_dirs,
|
||||||
|
)
|
||||||
|
setattr(agent, "workspace_id", run_id)
|
||||||
|
return agent
|
||||||
|
|
||||||
|
return PMAgent(
|
||||||
|
name="portfolio_manager",
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
initial_cash=initial_cash,
|
||||||
|
margin_requirement=margin_requirement,
|
||||||
|
config={"config_name": run_id},
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
toolkit_factory=create_agent_toolkit,
|
||||||
|
toolkit_factory_kwargs={
|
||||||
|
"active_skill_dirs": active_skill_dirs,
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
def create_agents(
|
def create_agents(
|
||||||
run_id: str,
|
run_id: str,
|
||||||
run_dir: Path,
|
run_dir: Path,
|
||||||
@@ -129,11 +401,6 @@ def create_agents(
|
|||||||
for analyst_type in ANALYST_TYPES:
|
for analyst_type in ANALYST_TYPES:
|
||||||
model = get_agent_model(analyst_type)
|
model = get_agent_model(analyst_type)
|
||||||
formatter = get_agent_formatter(analyst_type)
|
formatter = get_agent_formatter(analyst_type)
|
||||||
toolkit = create_agent_toolkit(
|
|
||||||
analyst_type,
|
|
||||||
run_id,
|
|
||||||
active_skill_dirs=active_skill_map.get(analyst_type, []),
|
|
||||||
)
|
|
||||||
|
|
||||||
long_term_memory = None
|
long_term_memory = None
|
||||||
if enable_long_term_memory:
|
if enable_long_term_memory:
|
||||||
@@ -141,13 +408,13 @@ def create_agents(
|
|||||||
if long_term_memory:
|
if long_term_memory:
|
||||||
long_term_memories.append(long_term_memory)
|
long_term_memories.append(long_term_memory)
|
||||||
|
|
||||||
analyst = AnalystAgent(
|
analyst = _create_analyst_agent(
|
||||||
analyst_type=analyst_type,
|
analyst_type=analyst_type,
|
||||||
toolkit=toolkit,
|
run_id=run_id,
|
||||||
model=model,
|
model=model,
|
||||||
formatter=formatter,
|
formatter=formatter,
|
||||||
agent_id=analyst_type,
|
skills_manager=skills_manager,
|
||||||
config={"config_name": run_id},
|
active_skill_map=active_skill_map,
|
||||||
long_term_memory=long_term_memory,
|
long_term_memory=long_term_memory,
|
||||||
)
|
)
|
||||||
analysts.append(analyst)
|
analysts.append(analyst)
|
||||||
@@ -159,17 +426,13 @@ def create_agents(
|
|||||||
if risk_long_term_memory:
|
if risk_long_term_memory:
|
||||||
long_term_memories.append(risk_long_term_memory)
|
long_term_memories.append(risk_long_term_memory)
|
||||||
|
|
||||||
risk_manager = RiskAgent(
|
risk_manager = _create_risk_manager_agent(
|
||||||
|
run_id=run_id,
|
||||||
model=get_agent_model("risk_manager"),
|
model=get_agent_model("risk_manager"),
|
||||||
formatter=get_agent_formatter("risk_manager"),
|
formatter=get_agent_formatter("risk_manager"),
|
||||||
name="risk_manager",
|
skills_manager=skills_manager,
|
||||||
config={"config_name": run_id},
|
active_skill_map=active_skill_map,
|
||||||
long_term_memory=risk_long_term_memory,
|
long_term_memory=risk_long_term_memory,
|
||||||
toolkit=create_agent_toolkit(
|
|
||||||
"risk_manager",
|
|
||||||
run_id,
|
|
||||||
active_skill_dirs=active_skill_map.get("risk_manager", []),
|
|
||||||
),
|
|
||||||
)
|
)
|
||||||
|
|
||||||
# Create portfolio manager
|
# Create portfolio manager
|
||||||
@@ -179,18 +442,15 @@ def create_agents(
|
|||||||
if pm_long_term_memory:
|
if pm_long_term_memory:
|
||||||
long_term_memories.append(pm_long_term_memory)
|
long_term_memories.append(pm_long_term_memory)
|
||||||
|
|
||||||
portfolio_manager = PMAgent(
|
portfolio_manager = _create_portfolio_manager_agent(
|
||||||
name="portfolio_manager",
|
run_id=run_id,
|
||||||
model=get_agent_model("portfolio_manager"),
|
model=get_agent_model("portfolio_manager"),
|
||||||
formatter=get_agent_formatter("portfolio_manager"),
|
formatter=get_agent_formatter("portfolio_manager"),
|
||||||
initial_cash=initial_cash,
|
initial_cash=initial_cash,
|
||||||
margin_requirement=margin_requirement,
|
margin_requirement=margin_requirement,
|
||||||
config={"config_name": run_id},
|
skills_manager=skills_manager,
|
||||||
|
active_skill_map=active_skill_map,
|
||||||
long_term_memory=pm_long_term_memory,
|
long_term_memory=pm_long_term_memory,
|
||||||
toolkit_factory=create_agent_toolkit,
|
|
||||||
toolkit_factory_kwargs={
|
|
||||||
"active_skill_dirs": active_skill_map.get("portfolio_manager", []),
|
|
||||||
},
|
|
||||||
)
|
)
|
||||||
|
|
||||||
return analysts, risk_manager, portfolio_manager, long_term_memories
|
return analysts, risk_manager, portfolio_manager, long_term_memories
|
||||||
@@ -400,6 +660,9 @@ async def run_pipeline(
|
|||||||
)
|
)
|
||||||
_set_gateway(gateway)
|
_set_gateway(gateway)
|
||||||
|
|
||||||
|
# Set global memory references for persistence
|
||||||
|
_set_long_term_memories(long_term_memories)
|
||||||
|
|
||||||
# Start pipeline execution
|
# Start pipeline execution
|
||||||
async with AsyncExitStack() as stack:
|
async with AsyncExitStack() as stack:
|
||||||
# Enter long-term memory contexts
|
# Enter long-term memory contexts
|
||||||
@@ -467,6 +730,12 @@ async def run_pipeline(
|
|||||||
# Cleanup
|
# Cleanup
|
||||||
logger.info("[Pipeline] Cleaning up...")
|
logger.info("[Pipeline] Cleaning up...")
|
||||||
|
|
||||||
|
# Persist long-term memories before cleanup
|
||||||
|
try:
|
||||||
|
await _persist_long_term_memories_async()
|
||||||
|
except Exception as e:
|
||||||
|
logger.warning(f"[Pipeline] Memory persistence error: {e}")
|
||||||
|
|
||||||
# Stop Gateway
|
# Stop Gateway
|
||||||
try:
|
try:
|
||||||
stop_gateway()
|
stop_gateway()
|
||||||
@@ -474,6 +743,9 @@ async def run_pipeline(
|
|||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"[Pipeline] Error stopping gateway: {e}")
|
logger.error(f"[Pipeline] Error stopping gateway: {e}")
|
||||||
|
|
||||||
|
# Clear memory references
|
||||||
|
_clear_long_term_memories()
|
||||||
|
|
||||||
clear_shutdown_event()
|
clear_shutdown_event()
|
||||||
clear_global_runtime_manager()
|
clear_global_runtime_manager()
|
||||||
from backend.api.runtime import unregister_runtime_manager
|
from backend.api.runtime import unregister_runtime_manager
|
||||||
|
|||||||
@@ -463,6 +463,34 @@ class StateSync:
|
|||||||
limit=self.storage.max_feed_history,
|
limit=self.storage.max_feed_history,
|
||||||
) or self._state.get("last_day_history", [])
|
) or self._state.get("last_day_history", [])
|
||||||
|
|
||||||
|
persisted_state = self.storage.read_persisted_server_state()
|
||||||
|
dashboard_snapshot = (
|
||||||
|
self.storage.build_dashboard_snapshot_from_state(self._state)
|
||||||
|
if include_dashboard
|
||||||
|
else None
|
||||||
|
)
|
||||||
|
dashboard_holdings = (
|
||||||
|
dashboard_snapshot.get("holdings", [])
|
||||||
|
if dashboard_snapshot is not None
|
||||||
|
else self._state.get("holdings", [])
|
||||||
|
)
|
||||||
|
dashboard_trades = (
|
||||||
|
dashboard_snapshot.get("trades", [])
|
||||||
|
if dashboard_snapshot is not None
|
||||||
|
else self._state.get("trades", [])
|
||||||
|
)
|
||||||
|
dashboard_stats = (
|
||||||
|
dashboard_snapshot.get("stats", {})
|
||||||
|
if dashboard_snapshot is not None
|
||||||
|
else self._state.get("stats", {})
|
||||||
|
)
|
||||||
|
dashboard_leaderboard = (
|
||||||
|
dashboard_snapshot.get("leaderboard", [])
|
||||||
|
if dashboard_snapshot is not None
|
||||||
|
else self._state.get("leaderboard", [])
|
||||||
|
)
|
||||||
|
portfolio_state = self._state.get("portfolio") or persisted_state.get("portfolio") or {}
|
||||||
|
|
||||||
payload = {
|
payload = {
|
||||||
"server_mode": self._state.get("server_mode", "live"),
|
"server_mode": self._state.get("server_mode", "live"),
|
||||||
"is_backtest": self._state.get("is_backtest", False),
|
"is_backtest": self._state.get("is_backtest", False),
|
||||||
@@ -476,24 +504,23 @@ class StateSync:
|
|||||||
"trading_days_completed",
|
"trading_days_completed",
|
||||||
0,
|
0,
|
||||||
),
|
),
|
||||||
"holdings": self._state.get("holdings", []),
|
"holdings": dashboard_holdings,
|
||||||
"trades": self._state.get("trades", []),
|
"trades": dashboard_trades,
|
||||||
"stats": self._state.get("stats", {}),
|
"stats": dashboard_stats,
|
||||||
"leaderboard": self._state.get("leaderboard", []),
|
"leaderboard": dashboard_leaderboard,
|
||||||
"portfolio": self._state.get("portfolio", {}),
|
"portfolio": portfolio_state,
|
||||||
"realtime_prices": self._state.get("realtime_prices", {}),
|
"realtime_prices": self._state.get("realtime_prices", {}),
|
||||||
"data_sources": self._state.get("data_sources", {}),
|
"data_sources": self._state.get("data_sources", {}),
|
||||||
"price_history": self._state.get("price_history", {}),
|
"price_history": self._state.get("price_history", {}),
|
||||||
}
|
}
|
||||||
|
|
||||||
if include_dashboard:
|
if include_dashboard:
|
||||||
dashboard_snapshot = self.storage.build_dashboard_snapshot_from_state(self._state)
|
|
||||||
payload["dashboard"] = {
|
payload["dashboard"] = {
|
||||||
"summary": dashboard_snapshot.get("summary"),
|
"summary": dashboard_snapshot.get("summary"),
|
||||||
"holdings": dashboard_snapshot.get("holdings"),
|
"holdings": dashboard_holdings,
|
||||||
"stats": dashboard_snapshot.get("stats"),
|
"stats": dashboard_stats,
|
||||||
"trades": dashboard_snapshot.get("trades"),
|
"trades": dashboard_trades,
|
||||||
"leaderboard": dashboard_snapshot.get("leaderboard"),
|
"leaderboard": dashboard_leaderboard,
|
||||||
}
|
}
|
||||||
|
|
||||||
return payload
|
return payload
|
||||||
|
|||||||
275
backend/main.py
275
backend/main.py
@@ -13,10 +13,13 @@ import loguru
|
|||||||
|
|
||||||
from dotenv import load_dotenv
|
from dotenv import load_dotenv
|
||||||
|
|
||||||
from backend.agents import AnalystAgent, PMAgent, RiskAgent
|
from backend.agents import AnalystAgent, EvoAgent, PMAgent, RiskAgent
|
||||||
|
from backend.agents.agent_workspace import load_agent_workspace_config
|
||||||
from backend.agents.skills_manager import SkillsManager
|
from backend.agents.skills_manager import SkillsManager
|
||||||
from backend.agents.toolkit_factory import create_agent_toolkit, load_agent_profiles
|
from backend.agents.toolkit_factory import create_agent_toolkit, load_agent_profiles
|
||||||
from backend.agents.prompt_loader import get_prompt_loader
|
from backend.agents.prompt_loader import get_prompt_loader
|
||||||
|
# WorkspaceManager is RunWorkspaceManager - provides run-scoped asset management
|
||||||
|
# All runtime state lives under runs/<run_id>/
|
||||||
from backend.agents.workspace_manager import WorkspaceManager
|
from backend.agents.workspace_manager import WorkspaceManager
|
||||||
from backend.config.bootstrap_config import resolve_runtime_config
|
from backend.config.bootstrap_config import resolve_runtime_config
|
||||||
from backend.config.constants import ANALYST_TYPES
|
from backend.config.constants import ANALYST_TYPES
|
||||||
@@ -44,8 +47,13 @@ _prompt_loader = get_prompt_loader()
|
|||||||
|
|
||||||
|
|
||||||
def _get_run_dir(config_name: str) -> Path:
|
def _get_run_dir(config_name: str) -> Path:
|
||||||
"""Return the canonical run-scoped directory for a config."""
|
"""Return the canonical run-scoped directory for a config.
|
||||||
|
|
||||||
|
This is the authoritative path for runtime state under runs/<run_id>/.
|
||||||
|
All runtime assets, state, and exports are scoped to this directory.
|
||||||
|
"""
|
||||||
project_root = Path(__file__).resolve().parents[1]
|
project_root = Path(__file__).resolve().parents[1]
|
||||||
|
# Use RunWorkspaceManager for run-scoped path resolution
|
||||||
return WorkspaceManager(project_root=project_root).get_run_dir(config_name)
|
return WorkspaceManager(project_root=project_root).get_run_dir(config_name)
|
||||||
|
|
||||||
|
|
||||||
@@ -102,6 +110,204 @@ def create_long_term_memory(agent_name: str, config_name: str):
|
|||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def _resolve_evo_agent_ids() -> set[str]:
|
||||||
|
"""Return agent ids selected to use EvoAgent.
|
||||||
|
|
||||||
|
By default, all supported roles use EvoAgent.
|
||||||
|
EVO_AGENT_IDS can be used to limit to specific roles (legacy behavior).
|
||||||
|
Set EVO_AGENT_LEGACY=1 to disable EvoAgent entirely.
|
||||||
|
|
||||||
|
Supported roles:
|
||||||
|
- analyst roles (fundamentals, technical, sentiment, valuation)
|
||||||
|
- risk_manager
|
||||||
|
- portfolio_manager
|
||||||
|
|
||||||
|
Example:
|
||||||
|
EVO_AGENT_IDS=fundamentals_analyst,risk_manager,portfolio_manager
|
||||||
|
"""
|
||||||
|
from backend.config.constants import ANALYST_TYPES
|
||||||
|
|
||||||
|
all_supported = set(ANALYST_TYPES) | {"risk_manager", "portfolio_manager"}
|
||||||
|
|
||||||
|
raw = os.getenv("EVO_AGENT_IDS", "")
|
||||||
|
if not raw.strip():
|
||||||
|
# Default: all supported roles use EvoAgent
|
||||||
|
return all_supported
|
||||||
|
|
||||||
|
if raw.strip().lower() in ("legacy", "old", "none"):
|
||||||
|
return set()
|
||||||
|
|
||||||
|
requested = {
|
||||||
|
item.strip()
|
||||||
|
for item in raw.split(",")
|
||||||
|
if item.strip()
|
||||||
|
}
|
||||||
|
return {
|
||||||
|
agent_id
|
||||||
|
for agent_id in requested
|
||||||
|
if agent_id in ANALYST_TYPES or agent_id in {"risk_manager", "portfolio_manager"}
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
def _create_analyst_agent(
|
||||||
|
*,
|
||||||
|
analyst_type: str,
|
||||||
|
config_name: str,
|
||||||
|
model,
|
||||||
|
formatter,
|
||||||
|
skills_manager: SkillsManager,
|
||||||
|
active_skill_map: dict[str, list[Path]],
|
||||||
|
long_term_memory=None,
|
||||||
|
):
|
||||||
|
"""Create one analyst agent, optionally using EvoAgent."""
|
||||||
|
active_skill_dirs = active_skill_map.get(analyst_type, [])
|
||||||
|
toolkit = create_agent_toolkit(
|
||||||
|
analyst_type,
|
||||||
|
config_name,
|
||||||
|
active_skill_dirs=active_skill_dirs,
|
||||||
|
)
|
||||||
|
|
||||||
|
use_evo_agent = analyst_type in _resolve_evo_agent_ids()
|
||||||
|
|
||||||
|
if use_evo_agent:
|
||||||
|
workspace_dir = skills_manager.get_agent_asset_dir(config_name, analyst_type)
|
||||||
|
agent_config = load_agent_workspace_config(workspace_dir / "agent.yaml")
|
||||||
|
agent = EvoAgent(
|
||||||
|
agent_id=analyst_type,
|
||||||
|
config_name=config_name,
|
||||||
|
workspace_dir=workspace_dir,
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
skills_manager=skills_manager,
|
||||||
|
prompt_files=agent_config.prompt_files,
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
)
|
||||||
|
# Preserve existing analysis tool-group coverage while the EvoAgent
|
||||||
|
# migration is still partial.
|
||||||
|
agent.toolkit = toolkit
|
||||||
|
setattr(agent, "run_id", config_name)
|
||||||
|
# Keep workspace_id for backward compatibility
|
||||||
|
setattr(agent, "workspace_id", config_name)
|
||||||
|
return agent
|
||||||
|
|
||||||
|
return AnalystAgent(
|
||||||
|
analyst_type=analyst_type,
|
||||||
|
toolkit=toolkit,
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
agent_id=analyst_type,
|
||||||
|
config={"config_name": config_name},
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def _create_risk_manager_agent(
|
||||||
|
*,
|
||||||
|
config_name: str,
|
||||||
|
model,
|
||||||
|
formatter,
|
||||||
|
skills_manager: SkillsManager,
|
||||||
|
active_skill_map: dict[str, list[Path]],
|
||||||
|
long_term_memory=None,
|
||||||
|
):
|
||||||
|
"""Create the risk manager, optionally using EvoAgent."""
|
||||||
|
active_skill_dirs = active_skill_map.get("risk_manager", [])
|
||||||
|
toolkit = create_agent_toolkit(
|
||||||
|
"risk_manager",
|
||||||
|
config_name,
|
||||||
|
active_skill_dirs=active_skill_dirs,
|
||||||
|
)
|
||||||
|
|
||||||
|
use_evo_agent = "risk_manager" in _resolve_evo_agent_ids()
|
||||||
|
|
||||||
|
if use_evo_agent:
|
||||||
|
workspace_dir = skills_manager.get_agent_asset_dir(config_name, "risk_manager")
|
||||||
|
agent_config = load_agent_workspace_config(workspace_dir / "agent.yaml")
|
||||||
|
agent = EvoAgent(
|
||||||
|
agent_id="risk_manager",
|
||||||
|
config_name=config_name,
|
||||||
|
workspace_dir=workspace_dir,
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
skills_manager=skills_manager,
|
||||||
|
prompt_files=agent_config.prompt_files,
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
)
|
||||||
|
agent.toolkit = toolkit
|
||||||
|
setattr(agent, "run_id", config_name)
|
||||||
|
# Keep workspace_id for backward compatibility
|
||||||
|
setattr(agent, "workspace_id", config_name)
|
||||||
|
return agent
|
||||||
|
|
||||||
|
return RiskAgent(
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
name="risk_manager",
|
||||||
|
config={"config_name": config_name},
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
toolkit=toolkit,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def _create_portfolio_manager_agent(
|
||||||
|
*,
|
||||||
|
config_name: str,
|
||||||
|
model,
|
||||||
|
formatter,
|
||||||
|
initial_cash: float,
|
||||||
|
margin_requirement: float,
|
||||||
|
skills_manager: SkillsManager,
|
||||||
|
active_skill_map: dict[str, list[Path]],
|
||||||
|
long_term_memory=None,
|
||||||
|
):
|
||||||
|
"""Create the portfolio manager, optionally using EvoAgent."""
|
||||||
|
active_skill_dirs = active_skill_map.get("portfolio_manager", [])
|
||||||
|
use_evo_agent = "portfolio_manager" in _resolve_evo_agent_ids()
|
||||||
|
|
||||||
|
if use_evo_agent:
|
||||||
|
workspace_dir = skills_manager.get_agent_asset_dir(
|
||||||
|
config_name,
|
||||||
|
"portfolio_manager",
|
||||||
|
)
|
||||||
|
agent_config = load_agent_workspace_config(workspace_dir / "agent.yaml")
|
||||||
|
agent = EvoAgent(
|
||||||
|
agent_id="portfolio_manager",
|
||||||
|
config_name=config_name,
|
||||||
|
workspace_dir=workspace_dir,
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
skills_manager=skills_manager,
|
||||||
|
prompt_files=agent_config.prompt_files,
|
||||||
|
initial_cash=initial_cash,
|
||||||
|
margin_requirement=margin_requirement,
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
)
|
||||||
|
agent.toolkit = create_agent_toolkit(
|
||||||
|
"portfolio_manager",
|
||||||
|
config_name,
|
||||||
|
owner=agent,
|
||||||
|
active_skill_dirs=active_skill_dirs,
|
||||||
|
)
|
||||||
|
setattr(agent, "run_id", config_name)
|
||||||
|
# Keep workspace_id for backward compatibility
|
||||||
|
setattr(agent, "workspace_id", config_name)
|
||||||
|
return agent
|
||||||
|
|
||||||
|
return PMAgent(
|
||||||
|
name="portfolio_manager",
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
initial_cash=initial_cash,
|
||||||
|
margin_requirement=margin_requirement,
|
||||||
|
config={"config_name": config_name},
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
toolkit_factory=create_agent_toolkit,
|
||||||
|
toolkit_factory_kwargs={
|
||||||
|
"active_skill_dirs": active_skill_dirs,
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
def create_agents(
|
def create_agents(
|
||||||
config_name: str,
|
config_name: str,
|
||||||
initial_cash: float,
|
initial_cash: float,
|
||||||
@@ -136,11 +342,6 @@ def create_agents(
|
|||||||
for analyst_type in ANALYST_TYPES:
|
for analyst_type in ANALYST_TYPES:
|
||||||
model = get_agent_model(analyst_type)
|
model = get_agent_model(analyst_type)
|
||||||
formatter = get_agent_formatter(analyst_type)
|
formatter = get_agent_formatter(analyst_type)
|
||||||
toolkit = create_agent_toolkit(
|
|
||||||
analyst_type,
|
|
||||||
config_name,
|
|
||||||
active_skill_dirs=active_skill_map.get(analyst_type, []),
|
|
||||||
)
|
|
||||||
|
|
||||||
long_term_memory = None
|
long_term_memory = None
|
||||||
if enable_long_term_memory:
|
if enable_long_term_memory:
|
||||||
@@ -151,13 +352,13 @@ def create_agents(
|
|||||||
if long_term_memory:
|
if long_term_memory:
|
||||||
long_term_memories.append(long_term_memory)
|
long_term_memories.append(long_term_memory)
|
||||||
|
|
||||||
analyst = AnalystAgent(
|
analyst = _create_analyst_agent(
|
||||||
analyst_type=analyst_type,
|
analyst_type=analyst_type,
|
||||||
toolkit=toolkit,
|
config_name=config_name,
|
||||||
model=model,
|
model=model,
|
||||||
formatter=formatter,
|
formatter=formatter,
|
||||||
agent_id=analyst_type,
|
skills_manager=skills_manager,
|
||||||
config={"config_name": config_name},
|
active_skill_map=active_skill_map,
|
||||||
long_term_memory=long_term_memory,
|
long_term_memory=long_term_memory,
|
||||||
)
|
)
|
||||||
analysts.append(analyst)
|
analysts.append(analyst)
|
||||||
@@ -171,17 +372,13 @@ def create_agents(
|
|||||||
if risk_long_term_memory:
|
if risk_long_term_memory:
|
||||||
long_term_memories.append(risk_long_term_memory)
|
long_term_memories.append(risk_long_term_memory)
|
||||||
|
|
||||||
risk_manager = RiskAgent(
|
risk_manager = _create_risk_manager_agent(
|
||||||
|
config_name=config_name,
|
||||||
model=get_agent_model("risk_manager"),
|
model=get_agent_model("risk_manager"),
|
||||||
formatter=get_agent_formatter("risk_manager"),
|
formatter=get_agent_formatter("risk_manager"),
|
||||||
name="risk_manager",
|
skills_manager=skills_manager,
|
||||||
config={"config_name": config_name},
|
active_skill_map=active_skill_map,
|
||||||
long_term_memory=risk_long_term_memory,
|
long_term_memory=risk_long_term_memory,
|
||||||
toolkit=create_agent_toolkit(
|
|
||||||
"risk_manager",
|
|
||||||
config_name,
|
|
||||||
active_skill_dirs=active_skill_map.get("risk_manager", []),
|
|
||||||
),
|
|
||||||
)
|
)
|
||||||
|
|
||||||
pm_long_term_memory = None
|
pm_long_term_memory = None
|
||||||
@@ -193,21 +390,15 @@ def create_agents(
|
|||||||
if pm_long_term_memory:
|
if pm_long_term_memory:
|
||||||
long_term_memories.append(pm_long_term_memory)
|
long_term_memories.append(pm_long_term_memory)
|
||||||
|
|
||||||
portfolio_manager = PMAgent(
|
portfolio_manager = _create_portfolio_manager_agent(
|
||||||
name="portfolio_manager",
|
config_name=config_name,
|
||||||
model=get_agent_model("portfolio_manager"),
|
model=get_agent_model("portfolio_manager"),
|
||||||
formatter=get_agent_formatter("portfolio_manager"),
|
formatter=get_agent_formatter("portfolio_manager"),
|
||||||
initial_cash=initial_cash,
|
initial_cash=initial_cash,
|
||||||
margin_requirement=margin_requirement,
|
margin_requirement=margin_requirement,
|
||||||
config={"config_name": config_name},
|
skills_manager=skills_manager,
|
||||||
|
active_skill_map=active_skill_map,
|
||||||
long_term_memory=pm_long_term_memory,
|
long_term_memory=pm_long_term_memory,
|
||||||
toolkit_factory=create_agent_toolkit,
|
|
||||||
toolkit_factory_kwargs={
|
|
||||||
"active_skill_dirs": active_skill_map.get(
|
|
||||||
"portfolio_manager",
|
|
||||||
[],
|
|
||||||
),
|
|
||||||
},
|
|
||||||
)
|
)
|
||||||
|
|
||||||
return analysts, risk_manager, portfolio_manager, long_term_memories
|
return analysts, risk_manager, portfolio_manager, long_term_memories
|
||||||
@@ -343,15 +534,29 @@ async def run_with_gateway(args):
|
|||||||
await stack.enter_async_context(memory)
|
await stack.enter_async_context(memory)
|
||||||
await gateway.start(host=args.host, port=args.port)
|
await gateway.start(host=args.host, port=args.port)
|
||||||
finally:
|
finally:
|
||||||
|
# Persist long-term memories before cleanup
|
||||||
|
for memory in long_term_memories:
|
||||||
|
try:
|
||||||
|
if hasattr(memory, 'save') and callable(getattr(memory, 'save')):
|
||||||
|
await memory.save()
|
||||||
|
except Exception as e:
|
||||||
|
logger.warning(f"Failed to persist memory: {e}")
|
||||||
unregister_runtime_manager()
|
unregister_runtime_manager()
|
||||||
clear_global_runtime_manager()
|
clear_global_runtime_manager()
|
||||||
|
|
||||||
|
|
||||||
def main():
|
def build_arg_parser() -> argparse.ArgumentParser:
|
||||||
"""Main entry point"""
|
"""Build the CLI parser for the gateway runtime entrypoint."""
|
||||||
parser = argparse.ArgumentParser(description="Trading System")
|
parser = argparse.ArgumentParser(description="Trading System")
|
||||||
parser.add_argument("--mode", choices=["live", "backtest"], default="live")
|
parser.add_argument("--mode", choices=["live", "backtest"], default="live")
|
||||||
parser.add_argument("--config-name", default="live")
|
parser.add_argument(
|
||||||
|
"--config-name",
|
||||||
|
default="default_run",
|
||||||
|
help=(
|
||||||
|
"Run label under runs/<config_name>; not a special root-level "
|
||||||
|
"live/backtest/production directory."
|
||||||
|
),
|
||||||
|
)
|
||||||
parser.add_argument("--host", default="0.0.0.0")
|
parser.add_argument("--host", default="0.0.0.0")
|
||||||
parser.add_argument("--port", type=int, default=8765)
|
parser.add_argument("--port", type=int, default=8765)
|
||||||
parser.add_argument(
|
parser.add_argument(
|
||||||
@@ -369,6 +574,12 @@ def main():
|
|||||||
action="store_true",
|
action="store_true",
|
||||||
help="Enable ReMeTaskLongTermMemory for agents",
|
help="Enable ReMeTaskLongTermMemory for agents",
|
||||||
)
|
)
|
||||||
|
return parser
|
||||||
|
|
||||||
|
|
||||||
|
def main():
|
||||||
|
"""Main entry point"""
|
||||||
|
parser = build_arg_parser()
|
||||||
|
|
||||||
args = parser.parse_args()
|
args = parser.parse_args()
|
||||||
|
|
||||||
|
|||||||
@@ -1,5 +1,21 @@
|
|||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
"""OpenClaw WebSocket handlers — gateway calls OpenClaw Gateway via WebSocket."""
|
"""OpenClaw WebSocket handlers — gateway calls OpenClaw Gateway via WebSocket.
|
||||||
|
|
||||||
|
COMPATIBILITY_SURFACE: deferred
|
||||||
|
OWNER: runtime-team
|
||||||
|
SEE: docs/legacy-inventory.md#openclaw-dual-integration
|
||||||
|
|
||||||
|
This is the WebSocket gateway integration for OpenClaw (port 18789).
|
||||||
|
For the REST facade, see:
|
||||||
|
- backend/apps/openclaw_service.py (port 8004)
|
||||||
|
- backend/api/openclaw.py
|
||||||
|
|
||||||
|
Key differences:
|
||||||
|
- WebSocket: event-driven, real-time updates, bidirectional
|
||||||
|
- REST facade: typed Pydantic models, request/response, polling
|
||||||
|
|
||||||
|
Decision needed: which surface becomes the long-term contract?
|
||||||
|
"""
|
||||||
|
|
||||||
from __future__ import annotations
|
from __future__ import annotations
|
||||||
|
|
||||||
|
|||||||
@@ -6,6 +6,7 @@ Handles reading/writing dashboard JSON files and portfolio state
|
|||||||
# pylint: disable=R0904
|
# pylint: disable=R0904
|
||||||
import json
|
import json
|
||||||
import logging
|
import logging
|
||||||
|
import os
|
||||||
from datetime import datetime
|
from datetime import datetime
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
from typing import Any, Dict, List, Optional
|
from typing import Any, Dict, List, Optional
|
||||||
@@ -21,25 +22,31 @@ class StorageService:
|
|||||||
Storage service for data persistence
|
Storage service for data persistence
|
||||||
|
|
||||||
Responsibilities:
|
Responsibilities:
|
||||||
1. Export dashboard JSON files
|
1. Export dashboard JSON files (compatibility layer)
|
||||||
(summary, holdings, stats, trades, leaderboard)
|
(summary, holdings, stats, trades, leaderboard)
|
||||||
2. Load/save internal state (_internal_state.json)
|
2. Load/save internal state (_internal_state.json)
|
||||||
3. Load/save server state (server_state.json) with feed history
|
3. Load/save server state (server_state.json) with feed history
|
||||||
4. Manage portfolio state persistence
|
4. Manage portfolio state persistence
|
||||||
5. Support loading from saved state to resume execution
|
5. Support loading from saved state to resume execution
|
||||||
|
|
||||||
Notes:
|
Architecture Notes:
|
||||||
- team_dashboard/*.json is treated as an export/compatibility layer
|
- runs/<run_id>/ is the authoritative runtime state root
|
||||||
rather than the authoritative runtime source of truth.
|
- team_dashboard/*.json is a NON-AUTHORITATIVE export/compatibility layer
|
||||||
- authoritative runtime reads should prefer in-memory state, server_state,
|
for external consumers (frontend, reports, etc.)
|
||||||
runtime.db, and market_research.db.
|
- Authoritative runtime reads should prefer:
|
||||||
|
1. In-memory state (runtime manager)
|
||||||
|
2. state/server_state.json
|
||||||
|
3. state/runtime.db
|
||||||
|
4. market_research.db
|
||||||
|
- Compatibility exports can be disabled via ENABLE_DASHBOARD_COMPAT_EXPORTS=false
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def __init__(
|
def __init__(
|
||||||
self,
|
self,
|
||||||
dashboard_dir: Path,
|
dashboard_dir: Path,
|
||||||
initial_cash: float = 100000.0,
|
initial_cash: float = 100000.0,
|
||||||
config_name: str = "live",
|
config_name: str = "runtime",
|
||||||
|
enable_compat_exports: Optional[bool] = None,
|
||||||
):
|
):
|
||||||
"""
|
"""
|
||||||
Initialize storage service
|
Initialize storage service
|
||||||
@@ -47,12 +54,18 @@ class StorageService:
|
|||||||
Args:
|
Args:
|
||||||
dashboard_dir: Directory for dashboard files
|
dashboard_dir: Directory for dashboard files
|
||||||
initial_cash: Initial cash amount
|
initial_cash: Initial cash amount
|
||||||
config_name: Configuration name for state directory
|
config_name: Logical runtime config/run label for state directory context
|
||||||
|
enable_compat_exports: Whether to keep writing team_dashboard/*.json
|
||||||
"""
|
"""
|
||||||
self.dashboard_dir = Path(dashboard_dir)
|
self.dashboard_dir = Path(dashboard_dir)
|
||||||
self.dashboard_dir.mkdir(parents=True, exist_ok=True)
|
self.dashboard_dir.mkdir(parents=True, exist_ok=True)
|
||||||
self.initial_cash = initial_cash
|
self.initial_cash = initial_cash
|
||||||
self.config_name = config_name
|
self.config_name = config_name
|
||||||
|
self.enable_compat_exports = (
|
||||||
|
self._resolve_compat_exports_default()
|
||||||
|
if enable_compat_exports is None
|
||||||
|
else bool(enable_compat_exports)
|
||||||
|
)
|
||||||
|
|
||||||
# Dashboard export file paths
|
# Dashboard export file paths
|
||||||
self.files = {
|
self.files = {
|
||||||
@@ -88,6 +101,12 @@ class StorageService:
|
|||||||
|
|
||||||
logger.info(f"Storage service initialized: {self.dashboard_dir}")
|
logger.info(f"Storage service initialized: {self.dashboard_dir}")
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def _resolve_compat_exports_default() -> bool:
|
||||||
|
"""Default compatibility export policy, overridable via env."""
|
||||||
|
raw = str(os.getenv("ENABLE_DASHBOARD_COMPAT_EXPORTS", "true")).strip().lower()
|
||||||
|
return raw not in {"0", "false", "no", "off"}
|
||||||
|
|
||||||
def load_export_file(self, file_type: str) -> Optional[Any]:
|
def load_export_file(self, file_type: str) -> Optional[Any]:
|
||||||
"""Load dashboard export JSON file."""
|
"""Load dashboard export JSON file."""
|
||||||
file_path = self.files.get(file_type)
|
file_path = self.files.get(file_type)
|
||||||
@@ -106,7 +125,9 @@ class StorageService:
|
|||||||
return self.load_export_file(file_type)
|
return self.load_export_file(file_type)
|
||||||
|
|
||||||
def save_export_file(self, file_type: str, data: Any):
|
def save_export_file(self, file_type: str, data: Any):
|
||||||
"""Save dashboard export JSON file."""
|
"""Save one compatibility dashboard export JSON file."""
|
||||||
|
if not self.enable_compat_exports:
|
||||||
|
return
|
||||||
file_path = self.files.get(file_type)
|
file_path = self.files.get(file_type)
|
||||||
if not file_path:
|
if not file_path:
|
||||||
logger.error(f"Unknown file type: {file_type}")
|
logger.error(f"Unknown file type: {file_type}")
|
||||||
@@ -127,17 +148,79 @@ class StorageService:
|
|||||||
"""Backward-compatible alias for export-layer JSON writes."""
|
"""Backward-compatible alias for export-layer JSON writes."""
|
||||||
self.save_export_file(file_type, data)
|
self.save_export_file(file_type, data)
|
||||||
|
|
||||||
|
def save_dashboard_exports(self, exports: Dict[str, Any]) -> None:
|
||||||
|
"""Persist compatibility dashboard exports from a normalized snapshot."""
|
||||||
|
if not self.enable_compat_exports:
|
||||||
|
return
|
||||||
|
for file_type in ("summary", "holdings", "stats", "trades", "leaderboard"):
|
||||||
|
if file_type in exports:
|
||||||
|
self.save_export_file(file_type, exports[file_type])
|
||||||
|
|
||||||
|
def read_persisted_server_state(self) -> Dict[str, Any]:
|
||||||
|
"""Read server_state.json without logging or DB side effects."""
|
||||||
|
if not self.server_state_file.exists():
|
||||||
|
return {}
|
||||||
|
try:
|
||||||
|
with open(self.server_state_file, "r", encoding="utf-8") as f:
|
||||||
|
payload = json.load(f)
|
||||||
|
return payload if isinstance(payload, dict) else {}
|
||||||
|
except Exception as exc:
|
||||||
|
logger.warning("Failed to read persisted server state: %s", exc)
|
||||||
|
return {}
|
||||||
|
|
||||||
|
def load_runtime_leaderboard(self, state: Optional[Dict[str, Any]] = None) -> List[Dict[str, Any]]:
|
||||||
|
"""Prefer runtime state for leaderboard reads, fall back to export JSON."""
|
||||||
|
runtime_state = state or self.read_persisted_server_state()
|
||||||
|
leaderboard = runtime_state.get("leaderboard")
|
||||||
|
if isinstance(leaderboard, list) and leaderboard:
|
||||||
|
return leaderboard
|
||||||
|
return self.load_export_file("leaderboard") or []
|
||||||
|
|
||||||
|
def persist_runtime_leaderboard(
|
||||||
|
self,
|
||||||
|
leaderboard: List[Dict[str, Any]],
|
||||||
|
state: Optional[Dict[str, Any]] = None,
|
||||||
|
) -> None:
|
||||||
|
"""Persist leaderboard to runtime state first, keeping JSON export for compatibility."""
|
||||||
|
self.save_export_file("leaderboard", leaderboard)
|
||||||
|
runtime_state = state or self.read_persisted_server_state()
|
||||||
|
if not runtime_state:
|
||||||
|
runtime_state = self.load_server_state()
|
||||||
|
runtime_state["leaderboard"] = leaderboard
|
||||||
|
self.save_server_state(runtime_state)
|
||||||
|
|
||||||
def build_dashboard_snapshot_from_state(
|
def build_dashboard_snapshot_from_state(
|
||||||
self,
|
self,
|
||||||
state: Optional[Dict[str, Any]] = None,
|
state: Optional[Dict[str, Any]] = None,
|
||||||
) -> Dict[str, Any]:
|
) -> Dict[str, Any]:
|
||||||
"""Build dashboard view data from runtime state instead of JSON exports."""
|
"""Build dashboard view data from runtime state instead of JSON exports."""
|
||||||
runtime_state = state or self.load_server_state()
|
runtime_state = state or self.load_server_state()
|
||||||
portfolio = dict(runtime_state.get("portfolio") or {})
|
persisted_state = self.read_persisted_server_state() if state is not None else {}
|
||||||
holdings = list(runtime_state.get("holdings") or [])
|
portfolio = dict(
|
||||||
stats = runtime_state.get("stats") or self._get_default_stats()
|
runtime_state.get("portfolio")
|
||||||
trades = list(runtime_state.get("trades") or [])
|
or persisted_state.get("portfolio")
|
||||||
leaderboard = list(runtime_state.get("leaderboard") or [])
|
or {},
|
||||||
|
)
|
||||||
|
holdings = list(
|
||||||
|
runtime_state.get("holdings")
|
||||||
|
or persisted_state.get("holdings")
|
||||||
|
or [],
|
||||||
|
)
|
||||||
|
stats = (
|
||||||
|
runtime_state.get("stats")
|
||||||
|
or persisted_state.get("stats")
|
||||||
|
or self._get_default_stats()
|
||||||
|
)
|
||||||
|
trades = list(
|
||||||
|
runtime_state.get("trades")
|
||||||
|
or persisted_state.get("trades")
|
||||||
|
or [],
|
||||||
|
)
|
||||||
|
leaderboard = list(
|
||||||
|
runtime_state.get("leaderboard")
|
||||||
|
or persisted_state.get("leaderboard")
|
||||||
|
or [],
|
||||||
|
)
|
||||||
|
|
||||||
summary = {
|
summary = {
|
||||||
"totalAssetValue": portfolio.get("total_value", self.initial_cash),
|
"totalAssetValue": portfolio.get("total_value", self.initial_cash),
|
||||||
@@ -331,48 +414,38 @@ class StorageService:
|
|||||||
self.save_internal_state(internal_state)
|
self.save_internal_state(internal_state)
|
||||||
|
|
||||||
def initialize_empty_dashboard(self):
|
def initialize_empty_dashboard(self):
|
||||||
"""Initialize empty dashboard files with default values"""
|
"""Initialize compatibility dashboard exports with default values."""
|
||||||
# Summary
|
self.save_dashboard_exports(
|
||||||
self.save_export_file(
|
|
||||||
"summary",
|
|
||||||
{
|
{
|
||||||
"totalAssetValue": self.initial_cash,
|
"summary": {
|
||||||
"totalReturn": 0.0,
|
"totalAssetValue": self.initial_cash,
|
||||||
"cashPosition": self.initial_cash,
|
"totalReturn": 0.0,
|
||||||
"tickerWeights": {},
|
"cashPosition": self.initial_cash,
|
||||||
"totalTrades": 0,
|
"tickerWeights": {},
|
||||||
"pnlPct": 0.0,
|
"totalTrades": 0,
|
||||||
"balance": self.initial_cash,
|
"pnlPct": 0.0,
|
||||||
"equity": [],
|
"balance": self.initial_cash,
|
||||||
"baseline": [],
|
"equity": [],
|
||||||
"baseline_vw": [],
|
"baseline": [],
|
||||||
"momentum": [],
|
"baseline_vw": [],
|
||||||
},
|
"momentum": [],
|
||||||
)
|
|
||||||
|
|
||||||
# Holdings
|
|
||||||
self.save_export_file("holdings", [])
|
|
||||||
|
|
||||||
# Stats
|
|
||||||
self.save_export_file(
|
|
||||||
"stats",
|
|
||||||
{
|
|
||||||
"totalAssetValue": self.initial_cash,
|
|
||||||
"totalReturn": 0.0,
|
|
||||||
"cashPosition": self.initial_cash,
|
|
||||||
"tickerWeights": {},
|
|
||||||
"totalTrades": 0,
|
|
||||||
"winRate": 0.0,
|
|
||||||
"bullBear": {
|
|
||||||
"bull": {"n": 0, "win": 0},
|
|
||||||
"bear": {"n": 0, "win": 0},
|
|
||||||
},
|
},
|
||||||
|
"holdings": [],
|
||||||
|
"stats": {
|
||||||
|
"totalAssetValue": self.initial_cash,
|
||||||
|
"totalReturn": 0.0,
|
||||||
|
"cashPosition": self.initial_cash,
|
||||||
|
"tickerWeights": {},
|
||||||
|
"totalTrades": 0,
|
||||||
|
"winRate": 0.0,
|
||||||
|
"bullBear": {
|
||||||
|
"bull": {"n": 0, "win": 0},
|
||||||
|
"bear": {"n": 0, "win": 0},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
"trades": [],
|
||||||
},
|
},
|
||||||
)
|
)
|
||||||
|
|
||||||
# Trades
|
|
||||||
self.save_export_file("trades", [])
|
|
||||||
|
|
||||||
# Leaderboard with model info
|
# Leaderboard with model info
|
||||||
self.generate_leaderboard()
|
self.generate_leaderboard()
|
||||||
|
|
||||||
@@ -411,7 +484,7 @@ class StorageService:
|
|||||||
ranking_entries.append(entry)
|
ranking_entries.append(entry)
|
||||||
|
|
||||||
leaderboard = team_entries + ranking_entries
|
leaderboard = team_entries + ranking_entries
|
||||||
self.save_export_file("leaderboard", leaderboard)
|
self.persist_runtime_leaderboard(leaderboard)
|
||||||
logger.info("Leaderboard generated with model info")
|
logger.info("Leaderboard generated with model info")
|
||||||
|
|
||||||
def update_leaderboard_model_info(self):
|
def update_leaderboard_model_info(self):
|
||||||
@@ -421,7 +494,7 @@ class StorageService:
|
|||||||
from ..config.constants import AGENT_CONFIG
|
from ..config.constants import AGENT_CONFIG
|
||||||
from ..llm.models import get_agent_model_info
|
from ..llm.models import get_agent_model_info
|
||||||
|
|
||||||
existing = self.load_file("leaderboard") or []
|
existing = self.load_runtime_leaderboard()
|
||||||
|
|
||||||
if not existing:
|
if not existing:
|
||||||
self.generate_leaderboard()
|
self.generate_leaderboard()
|
||||||
@@ -434,7 +507,7 @@ class StorageService:
|
|||||||
entry["modelName"] = model_name
|
entry["modelName"] = model_name
|
||||||
entry["modelProvider"] = model_provider
|
entry["modelProvider"] = model_provider
|
||||||
|
|
||||||
self.save_export_file("leaderboard", existing)
|
self.persist_runtime_leaderboard(existing)
|
||||||
logger.info("Leaderboard model info updated")
|
logger.info("Leaderboard model info updated")
|
||||||
|
|
||||||
def get_current_timestamp_ms(self, date: str = None) -> int:
|
def get_current_timestamp_ms(self, date: str = None) -> int:
|
||||||
@@ -640,21 +713,21 @@ class StorageService:
|
|||||||
state["last_update_date"] = date
|
state["last_update_date"] = date
|
||||||
|
|
||||||
self.save_internal_state(state)
|
self.save_internal_state(state)
|
||||||
|
self.export_dashboard_compatibility_files(
|
||||||
self._generate_summary(state, net_value, prices)
|
state,
|
||||||
self._generate_holdings(state, prices)
|
net_value=net_value,
|
||||||
self._generate_stats(state, net_value)
|
prices=prices,
|
||||||
self._generate_trades(state)
|
)
|
||||||
|
|
||||||
logger.info(f"Dashboard updated: net_value=${net_value:,.2f}")
|
logger.info(f"Dashboard updated: net_value=${net_value:,.2f}")
|
||||||
|
|
||||||
def _generate_summary(
|
def _build_summary_export(
|
||||||
self,
|
self,
|
||||||
state: Dict[str, Any],
|
state: Dict[str, Any],
|
||||||
net_value: float,
|
net_value: float,
|
||||||
prices: Dict[str, float],
|
prices: Dict[str, float],
|
||||||
):
|
) -> Dict[str, Any]:
|
||||||
"""Generate summary.json"""
|
"""Build compatibility summary export payload."""
|
||||||
portfolio_state = state.get("portfolio_state", {})
|
portfolio_state = state.get("portfolio_state", {})
|
||||||
cash = portfolio_state.get("cash", self.initial_cash)
|
cash = portfolio_state.get("cash", self.initial_cash)
|
||||||
|
|
||||||
@@ -675,7 +748,7 @@ class StorageService:
|
|||||||
(net_value - self.initial_cash) / self.initial_cash
|
(net_value - self.initial_cash) / self.initial_cash
|
||||||
) * 100
|
) * 100
|
||||||
|
|
||||||
summary = {
|
return {
|
||||||
"totalAssetValue": round(net_value, 2),
|
"totalAssetValue": round(net_value, 2),
|
||||||
"totalReturn": round(total_return, 2),
|
"totalReturn": round(total_return, 2),
|
||||||
"cashPosition": round(cash, 2),
|
"cashPosition": round(cash, 2),
|
||||||
@@ -689,14 +762,12 @@ class StorageService:
|
|||||||
"momentum": state.get("momentum_history", []),
|
"momentum": state.get("momentum_history", []),
|
||||||
}
|
}
|
||||||
|
|
||||||
self.save_export_file("summary", summary)
|
def _build_holdings_export(
|
||||||
|
|
||||||
def _generate_holdings(
|
|
||||||
self,
|
self,
|
||||||
state: Dict[str, Any],
|
state: Dict[str, Any],
|
||||||
prices: Dict[str, float],
|
prices: Dict[str, float],
|
||||||
):
|
) -> List[Dict[str, Any]]:
|
||||||
"""Generate holdings.json"""
|
"""Build compatibility holdings export payload."""
|
||||||
portfolio_state = state.get("portfolio_state", {})
|
portfolio_state = state.get("portfolio_state", {})
|
||||||
positions = portfolio_state.get("positions", {})
|
positions = portfolio_state.get("positions", {})
|
||||||
cash = portfolio_state.get("cash", self.initial_cash)
|
cash = portfolio_state.get("cash", self.initial_cash)
|
||||||
@@ -750,18 +821,17 @@ class StorageService:
|
|||||||
|
|
||||||
# Sort by weight
|
# Sort by weight
|
||||||
holdings.sort(key=lambda x: abs(x["weight"]), reverse=True)
|
holdings.sort(key=lambda x: abs(x["weight"]), reverse=True)
|
||||||
|
return holdings
|
||||||
|
|
||||||
self.save_export_file("holdings", holdings)
|
def _build_stats_export(self, state: Dict[str, Any], net_value: float) -> Dict[str, Any]:
|
||||||
|
"""Build compatibility stats export payload."""
|
||||||
def _generate_stats(self, state: Dict[str, Any], net_value: float):
|
|
||||||
"""Generate stats.json"""
|
|
||||||
portfolio_state = state.get("portfolio_state", {})
|
portfolio_state = state.get("portfolio_state", {})
|
||||||
cash = portfolio_state.get("cash", self.initial_cash)
|
cash = portfolio_state.get("cash", self.initial_cash)
|
||||||
total_return = (
|
total_return = (
|
||||||
(net_value - self.initial_cash) / self.initial_cash
|
(net_value - self.initial_cash) / self.initial_cash
|
||||||
) * 100
|
) * 100
|
||||||
|
|
||||||
stats = {
|
return {
|
||||||
"totalAssetValue": round(net_value, 2),
|
"totalAssetValue": round(net_value, 2),
|
||||||
"totalReturn": round(total_return, 2),
|
"totalReturn": round(total_return, 2),
|
||||||
"cashPosition": round(cash, 2),
|
"cashPosition": round(cash, 2),
|
||||||
@@ -774,10 +844,8 @@ class StorageService:
|
|||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
self.save_export_file("stats", stats)
|
def _build_trades_export(self, state: Dict[str, Any]) -> List[Dict[str, Any]]:
|
||||||
|
"""Build compatibility trades export payload."""
|
||||||
def _generate_trades(self, state: Dict[str, Any]):
|
|
||||||
"""Generate trades.json"""
|
|
||||||
all_trades = state.get("all_trades", [])
|
all_trades = state.get("all_trades", [])
|
||||||
|
|
||||||
sorted_trades = sorted(
|
sorted_trades = sorted(
|
||||||
@@ -800,7 +868,24 @@ class StorageService:
|
|||||||
},
|
},
|
||||||
)
|
)
|
||||||
|
|
||||||
self.save_export_file("trades", trades)
|
return trades
|
||||||
|
|
||||||
|
def export_dashboard_compatibility_files(
|
||||||
|
self,
|
||||||
|
state: Dict[str, Any],
|
||||||
|
*,
|
||||||
|
net_value: float,
|
||||||
|
prices: Dict[str, float],
|
||||||
|
) -> None:
|
||||||
|
"""Write compatibility dashboard exports from current runtime state."""
|
||||||
|
self.save_dashboard_exports(
|
||||||
|
{
|
||||||
|
"summary": self._build_summary_export(state, net_value, prices),
|
||||||
|
"holdings": self._build_holdings_export(state, prices),
|
||||||
|
"stats": self._build_stats_export(state, net_value),
|
||||||
|
"trades": self._build_trades_export(state),
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
# Server State Management Methods
|
# Server State Management Methods
|
||||||
|
|
||||||
|
|||||||
@@ -117,3 +117,35 @@ evaluation_hook.complete_evaluation(success=True)
|
|||||||
### 评估结果存储
|
### 评估结果存储
|
||||||
|
|
||||||
评估结果自动保存到 `runs/{run_id}/evaluations/{agent_id}/{skill_name}_{timestamp}.json`
|
评估结果自动保存到 `runs/{run_id}/evaluations/{agent_id}/{skill_name}_{timestamp}.json`
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Skill Sandbox Execution | 技能沙盒执行
|
||||||
|
|
||||||
|
技能脚本(如估值报告生成)通过沙盒执行器运行,支持三种隔离模式:
|
||||||
|
|
||||||
|
| 模式 | 描述 | 适用场景 |
|
||||||
|
|------|------|---------|
|
||||||
|
| `none` | 直接执行,无隔离 | 开发环境(默认) |
|
||||||
|
| `docker` | Docker 容器隔离 | 生产环境 |
|
||||||
|
| `kubernetes` | Kubernetes Pod 隔离 | 企业级(预留) |
|
||||||
|
|
||||||
|
### 沙盒配置
|
||||||
|
|
||||||
|
环境变量控制沙盒行为:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
SKILL_SANDBOX_MODE=none # none | docker | kubernetes
|
||||||
|
SKILL_SANDBOX_IMAGE=python:3.11-slim
|
||||||
|
SKILL_SANDBOX_MEMORY_LIMIT=512m
|
||||||
|
SKILL_SANDBOX_CPU_LIMIT=1.0
|
||||||
|
SKILL_SANDBOX_NETWORK=none
|
||||||
|
SKILL_SANDBOX_TIMEOUT=60
|
||||||
|
```
|
||||||
|
|
||||||
|
### 开发注意事项
|
||||||
|
|
||||||
|
- 默认 `none` 模式会在首次执行时显示安全警告
|
||||||
|
- 生产环境必须设置 `SKILL_SANDBOX_MODE=docker`
|
||||||
|
- 技能脚本应无副作用,输入输出通过函数参数和返回值
|
||||||
|
- 函数命名与脚本文件名的映射通过 `FUNCTION_TO_SCRIPT_MAP` 处理(如 `build_ev_ebitda_report` 在 `multiple_valuation_report.py` 中)
|
||||||
|
|||||||
@@ -28,6 +28,19 @@ def test_agent_service_excludes_runtime_routes(tmp_path):
|
|||||||
assert "/api/runtime/gateway/port" not in paths
|
assert "/api/runtime/gateway/port" not in paths
|
||||||
|
|
||||||
|
|
||||||
|
def test_agent_service_status_includes_scope_metadata(tmp_path):
|
||||||
|
app = create_app(project_root=tmp_path)
|
||||||
|
|
||||||
|
with TestClient(app) as client:
|
||||||
|
response = client.get("/api/status")
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
payload = response.json()
|
||||||
|
assert payload["scope"]["design_time_registry"]["root"] == str(tmp_path / "workspaces")
|
||||||
|
assert payload["scope"]["runtime_assets"]["root"] == str(tmp_path / "runs")
|
||||||
|
assert "runs/<run_id>" in payload["scope"]["agent_route_note"]
|
||||||
|
|
||||||
|
|
||||||
def test_agent_service_read_routes(monkeypatch, tmp_path):
|
def test_agent_service_read_routes(monkeypatch, tmp_path):
|
||||||
class _FakeSkillsManager:
|
class _FakeSkillsManager:
|
||||||
project_root = tmp_path
|
project_root = tmp_path
|
||||||
@@ -96,9 +109,14 @@ def test_agent_service_read_routes(monkeypatch, tmp_path):
|
|||||||
|
|
||||||
assert profile.status_code == 200
|
assert profile.status_code == 200
|
||||||
assert profile.json()["profile"]["model_name"] == "deepseek-v3.2"
|
assert profile.json()["profile"]["model_name"] == "deepseek-v3.2"
|
||||||
|
assert profile.json()["scope_type"] == "runtime_run"
|
||||||
assert skills.status_code == 200
|
assert skills.status_code == 200
|
||||||
assert skills.json()["skills"][0]["skill_name"] == "demo_skill"
|
assert skills.json()["skills"][0]["skill_name"] == "demo_skill"
|
||||||
|
assert skills.json()["scope_type"] == "runtime_run"
|
||||||
assert detail.status_code == 200
|
assert detail.status_code == 200
|
||||||
assert detail.json()["skill"]["content"] == "# demo"
|
assert detail.json()["skill"]["content"] == "# demo"
|
||||||
|
assert detail.json()["scope_type"] == "runtime_run"
|
||||||
assert workspace_file.status_code == 200
|
assert workspace_file.status_code == 200
|
||||||
assert workspace_file.json()["content"] == "demo:portfolio_manager:MEMORY.md"
|
assert workspace_file.json()["content"] == "demo:portfolio_manager:MEMORY.md"
|
||||||
|
assert workspace_file.json()["scope_type"] == "runtime_run"
|
||||||
|
assert "runs/<run_id>" in workspace_file.json()["scope_note"]
|
||||||
|
|||||||
@@ -311,7 +311,7 @@ class TestRiskAgent:
|
|||||||
|
|
||||||
|
|
||||||
class TestStorageService:
|
class TestStorageService:
|
||||||
def test_storage_service_defaults_to_live_config(self):
|
def test_storage_service_defaults_to_runtime_config(self):
|
||||||
from backend.services.storage import StorageService
|
from backend.services.storage import StorageService
|
||||||
|
|
||||||
with tempfile.TemporaryDirectory() as tmpdir:
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
@@ -320,7 +320,7 @@ class TestStorageService:
|
|||||||
initial_cash=100000.0,
|
initial_cash=100000.0,
|
||||||
)
|
)
|
||||||
|
|
||||||
assert storage.config_name == "live"
|
assert storage.config_name == "runtime"
|
||||||
|
|
||||||
def test_calculate_portfolio_value_cash_only(self):
|
def test_calculate_portfolio_value_cash_only(self):
|
||||||
from backend.services.storage import StorageService
|
from backend.services.storage import StorageService
|
||||||
@@ -404,7 +404,7 @@ class TestStorageService:
|
|||||||
assert trades[0]["qty"] == 50
|
assert trades[0]["qty"] == 50
|
||||||
assert trades[0]["price"] == 200.0
|
assert trades[0]["price"] == 200.0
|
||||||
|
|
||||||
def test_generate_summary(self):
|
def test_build_summary_export(self):
|
||||||
from backend.services.storage import StorageService
|
from backend.services.storage import StorageService
|
||||||
|
|
||||||
with tempfile.TemporaryDirectory() as tmpdir:
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
@@ -424,13 +424,12 @@ class TestStorageService:
|
|||||||
}
|
}
|
||||||
prices = {"AAPL": 500.0}
|
prices = {"AAPL": 500.0}
|
||||||
|
|
||||||
storage._generate_summary(state, 100000.0, prices)
|
summary = storage._build_summary_export(state, 100000.0, prices)
|
||||||
|
|
||||||
summary = storage.load_file("summary")
|
|
||||||
assert summary["totalAssetValue"] == 100000.0
|
assert summary["totalAssetValue"] == 100000.0
|
||||||
assert summary["totalReturn"] == 0.0
|
assert summary["totalReturn"] == 0.0
|
||||||
|
|
||||||
def test_generate_holdings(self):
|
def test_build_holdings_export(self):
|
||||||
from backend.services.storage import StorageService
|
from backend.services.storage import StorageService
|
||||||
|
|
||||||
with tempfile.TemporaryDirectory() as tmpdir:
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
@@ -448,9 +447,8 @@ class TestStorageService:
|
|||||||
}
|
}
|
||||||
prices = {"AAPL": 500.0}
|
prices = {"AAPL": 500.0}
|
||||||
|
|
||||||
storage._generate_holdings(state, prices)
|
holdings = storage._build_holdings_export(state, prices)
|
||||||
|
|
||||||
holdings = storage.load_file("holdings")
|
|
||||||
assert len(holdings) == 2 # AAPL + CASH
|
assert len(holdings) == 2 # AAPL + CASH
|
||||||
|
|
||||||
aapl_holding = next(
|
aapl_holding = next(
|
||||||
@@ -461,6 +459,150 @@ class TestStorageService:
|
|||||||
assert aapl_holding["quantity"] == 100
|
assert aapl_holding["quantity"] == 100
|
||||||
assert aapl_holding["currentPrice"] == 500.0
|
assert aapl_holding["currentPrice"] == 500.0
|
||||||
|
|
||||||
|
def test_export_dashboard_compatibility_files_writes_expected_exports(self):
|
||||||
|
from backend.services.storage import StorageService
|
||||||
|
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
storage = StorageService(
|
||||||
|
dashboard_dir=Path(tmpdir) / "team_dashboard",
|
||||||
|
initial_cash=100000.0,
|
||||||
|
)
|
||||||
|
state = {
|
||||||
|
"portfolio_state": {
|
||||||
|
"cash": 90000.0,
|
||||||
|
"positions": {"AAPL": {"long": 50, "short": 0}},
|
||||||
|
"margin_used": 0.0,
|
||||||
|
},
|
||||||
|
"equity_history": [{"t": 1000, "v": 100000}],
|
||||||
|
"baseline_history": [{"t": 1000, "v": 100000}],
|
||||||
|
"baseline_vw_history": [{"t": 1000, "v": 100000}],
|
||||||
|
"momentum_history": [{"t": 1000, "v": 100000}],
|
||||||
|
"all_trades": [
|
||||||
|
{
|
||||||
|
"id": "t1",
|
||||||
|
"ts": 1000,
|
||||||
|
"trading_date": "2024-01-15",
|
||||||
|
"side": "LONG",
|
||||||
|
"ticker": "AAPL",
|
||||||
|
"qty": 50,
|
||||||
|
"price": 200.0,
|
||||||
|
}
|
||||||
|
],
|
||||||
|
}
|
||||||
|
prices = {"AAPL": 200.0}
|
||||||
|
|
||||||
|
storage.export_dashboard_compatibility_files(
|
||||||
|
state,
|
||||||
|
net_value=100000.0,
|
||||||
|
prices=prices,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert storage.load_export_file("summary")["totalAssetValue"] == 100000.0
|
||||||
|
holdings = storage.load_export_file("holdings")
|
||||||
|
assert any(item["ticker"] == "AAPL" for item in holdings)
|
||||||
|
assert storage.load_export_file("stats")["totalTrades"] == 1
|
||||||
|
assert storage.load_export_file("trades")[0]["ticker"] == "AAPL"
|
||||||
|
|
||||||
|
def test_build_dashboard_snapshot_prefers_persisted_runtime_state_when_memory_view_is_sparse(self):
|
||||||
|
from backend.services.storage import StorageService
|
||||||
|
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
dashboard_dir = Path(tmpdir) / "team_dashboard"
|
||||||
|
storage = StorageService(
|
||||||
|
dashboard_dir=dashboard_dir,
|
||||||
|
initial_cash=100000.0,
|
||||||
|
)
|
||||||
|
storage.save_server_state(
|
||||||
|
{
|
||||||
|
"portfolio": {
|
||||||
|
"total_value": 123456.0,
|
||||||
|
"cash": 45678.0,
|
||||||
|
"pnl_percent": 23.45,
|
||||||
|
},
|
||||||
|
"holdings": [{"ticker": "AAPL", "quantity": 10}],
|
||||||
|
"stats": {"totalTrades": 3},
|
||||||
|
"trades": [{"ticker": "AAPL"}],
|
||||||
|
"leaderboard": [{"agentId": "technical_analyst"}],
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
|
snapshot = storage.build_dashboard_snapshot_from_state({"portfolio": {}})
|
||||||
|
|
||||||
|
assert snapshot["summary"]["totalAssetValue"] == 123456.0
|
||||||
|
assert snapshot["holdings"][0]["ticker"] == "AAPL"
|
||||||
|
assert snapshot["trades"][0]["ticker"] == "AAPL"
|
||||||
|
assert snapshot["leaderboard"][0]["agentId"] == "technical_analyst"
|
||||||
|
|
||||||
|
def test_runtime_leaderboard_prefers_server_state_and_persists_back(self):
|
||||||
|
from backend.services.storage import StorageService
|
||||||
|
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
dashboard_dir = Path(tmpdir) / "team_dashboard"
|
||||||
|
storage = StorageService(
|
||||||
|
dashboard_dir=dashboard_dir,
|
||||||
|
initial_cash=100000.0,
|
||||||
|
)
|
||||||
|
storage.save_export_file("leaderboard", [{"agentId": "export_only"}])
|
||||||
|
storage.save_server_state({"leaderboard": [{"agentId": "runtime_state"}]})
|
||||||
|
|
||||||
|
leaderboard = storage.load_runtime_leaderboard()
|
||||||
|
assert leaderboard[0]["agentId"] == "runtime_state"
|
||||||
|
|
||||||
|
updated = [{"agentId": "updated_runtime"}]
|
||||||
|
storage.persist_runtime_leaderboard(updated)
|
||||||
|
|
||||||
|
saved_state = storage.read_persisted_server_state()
|
||||||
|
saved_export = storage.load_export_file("leaderboard")
|
||||||
|
assert saved_state["leaderboard"][0]["agentId"] == "updated_runtime"
|
||||||
|
assert saved_export[0]["agentId"] == "updated_runtime"
|
||||||
|
|
||||||
|
def test_compatibility_exports_can_be_disabled_without_breaking_runtime_leaderboard(self):
|
||||||
|
from backend.services.storage import StorageService
|
||||||
|
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
dashboard_dir = Path(tmpdir) / "team_dashboard"
|
||||||
|
storage = StorageService(
|
||||||
|
dashboard_dir=dashboard_dir,
|
||||||
|
initial_cash=100000.0,
|
||||||
|
enable_compat_exports=False,
|
||||||
|
)
|
||||||
|
|
||||||
|
storage.generate_leaderboard()
|
||||||
|
storage.export_dashboard_compatibility_files(
|
||||||
|
{
|
||||||
|
"portfolio_state": {
|
||||||
|
"cash": 100000.0,
|
||||||
|
"positions": {},
|
||||||
|
"margin_used": 0.0,
|
||||||
|
},
|
||||||
|
"equity_history": [],
|
||||||
|
"baseline_history": [],
|
||||||
|
"baseline_vw_history": [],
|
||||||
|
"momentum_history": [],
|
||||||
|
"all_trades": [],
|
||||||
|
},
|
||||||
|
net_value=100000.0,
|
||||||
|
prices={},
|
||||||
|
)
|
||||||
|
|
||||||
|
assert not dashboard_dir.joinpath("summary.json").exists()
|
||||||
|
assert storage.load_runtime_leaderboard()
|
||||||
|
persisted = storage.read_persisted_server_state()
|
||||||
|
assert persisted["leaderboard"]
|
||||||
|
|
||||||
|
def test_compatibility_exports_default_can_be_disabled_via_env(self, monkeypatch):
|
||||||
|
from backend.services.storage import StorageService
|
||||||
|
|
||||||
|
monkeypatch.setenv("ENABLE_DASHBOARD_COMPAT_EXPORTS", "false")
|
||||||
|
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
storage = StorageService(
|
||||||
|
dashboard_dir=Path(tmpdir) / "team_dashboard",
|
||||||
|
initial_cash=100000.0,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert storage.enable_compat_exports is False
|
||||||
|
|
||||||
|
|
||||||
class TestTradeExecutor:
|
class TestTradeExecutor:
|
||||||
def test_execute_trade_long(self):
|
def test_execute_trade_long(self):
|
||||||
|
|||||||
@@ -1,6 +1,8 @@
|
|||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
|
|
||||||
|
from typer.testing import CliRunner
|
||||||
|
|
||||||
from backend import cli
|
from backend import cli
|
||||||
|
|
||||||
|
|
||||||
@@ -126,6 +128,86 @@ def test_backtest_runs_full_market_store_prepare_before_start(monkeypatch, tmp_p
|
|||||||
]
|
]
|
||||||
|
|
||||||
|
|
||||||
|
def test_live_cli_defaults_to_generic_run_label(monkeypatch, tmp_path):
|
||||||
|
project_root = tmp_path
|
||||||
|
(project_root / ".env").write_text("FINNHUB_API_KEY=test\n", encoding="utf-8")
|
||||||
|
|
||||||
|
calls = []
|
||||||
|
runner = CliRunner()
|
||||||
|
|
||||||
|
monkeypatch.setattr(cli, "get_project_root", lambda: project_root)
|
||||||
|
monkeypatch.setattr(cli, "handle_history_cleanup", lambda config_name, auto_clean=False: None)
|
||||||
|
monkeypatch.setattr(cli, "run_data_updater", lambda project_root: None)
|
||||||
|
monkeypatch.setattr(cli, "auto_update_market_store", lambda config_name, end_date=None: None)
|
||||||
|
monkeypatch.setattr(
|
||||||
|
cli,
|
||||||
|
"auto_enrich_market_store",
|
||||||
|
lambda config_name, end_date=None, lookback_days=120, force=False: None,
|
||||||
|
)
|
||||||
|
monkeypatch.setattr(cli.os, "chdir", lambda path: None)
|
||||||
|
|
||||||
|
def fake_run(cmd, check=True, **kwargs):
|
||||||
|
calls.append(cmd)
|
||||||
|
return 0
|
||||||
|
|
||||||
|
monkeypatch.setattr(cli.subprocess, "run", fake_run)
|
||||||
|
|
||||||
|
result = runner.invoke(cli.app, ["live", "--trigger-time", "now"])
|
||||||
|
|
||||||
|
assert result.exit_code == 0
|
||||||
|
assert calls
|
||||||
|
assert "--config-name" in calls[0]
|
||||||
|
config_index = calls[0].index("--config-name")
|
||||||
|
assert calls[0][config_index + 1] == "default_live_run"
|
||||||
|
|
||||||
|
|
||||||
|
def test_backtest_cli_defaults_to_generic_run_label(monkeypatch, tmp_path):
|
||||||
|
project_root = tmp_path
|
||||||
|
calls = []
|
||||||
|
runner = CliRunner()
|
||||||
|
|
||||||
|
monkeypatch.setattr(cli, "get_project_root", lambda: project_root)
|
||||||
|
monkeypatch.setattr(cli, "handle_history_cleanup", lambda config_name, auto_clean=False: None)
|
||||||
|
monkeypatch.setattr(cli, "run_data_updater", lambda project_root: None)
|
||||||
|
monkeypatch.setattr(
|
||||||
|
cli,
|
||||||
|
"auto_prepare_backtest_market_store",
|
||||||
|
lambda config_name, start_date, end_date: None,
|
||||||
|
)
|
||||||
|
monkeypatch.setattr(
|
||||||
|
cli,
|
||||||
|
"auto_enrich_market_store",
|
||||||
|
lambda config_name, end_date=None, lookback_days=120, force=False: None,
|
||||||
|
)
|
||||||
|
monkeypatch.setattr(cli.os, "chdir", lambda path: None)
|
||||||
|
|
||||||
|
def fake_run(cmd, check=True, **kwargs):
|
||||||
|
calls.append(cmd)
|
||||||
|
return 0
|
||||||
|
|
||||||
|
monkeypatch.setattr(cli.subprocess, "run", fake_run)
|
||||||
|
|
||||||
|
result = runner.invoke(
|
||||||
|
cli.app,
|
||||||
|
["backtest", "--start", "2026-03-01", "--end", "2026-03-10"],
|
||||||
|
)
|
||||||
|
|
||||||
|
assert result.exit_code == 0
|
||||||
|
assert calls
|
||||||
|
assert "--config-name" in calls[0]
|
||||||
|
config_index = calls[0].index("--config-name")
|
||||||
|
assert calls[0][config_index + 1] == "default_backtest_run"
|
||||||
|
|
||||||
|
|
||||||
|
def test_main_parser_defaults_to_generic_run_label():
|
||||||
|
from backend.main import build_arg_parser
|
||||||
|
|
||||||
|
parser = build_arg_parser()
|
||||||
|
args = parser.parse_args([])
|
||||||
|
|
||||||
|
assert args.config_name == "default_run"
|
||||||
|
|
||||||
|
|
||||||
def test_ingest_enrich_runs_batch_enrichment(monkeypatch):
|
def test_ingest_enrich_runs_batch_enrichment(monkeypatch):
|
||||||
calls = []
|
calls = []
|
||||||
|
|
||||||
|
|||||||
405
backend/tests/test_evo_agent_integration.py
Normal file
405
backend/tests/test_evo_agent_integration.py
Normal file
@@ -0,0 +1,405 @@
|
|||||||
|
# -*- coding: utf-8 -*-
|
||||||
|
"""Integration tests for EvoAgent system.
|
||||||
|
|
||||||
|
These tests verify the integration between:
|
||||||
|
- UnifiedAgentFactory
|
||||||
|
- EvoAgent
|
||||||
|
- ToolGuardMixin
|
||||||
|
- Workspace-driven configuration
|
||||||
|
"""
|
||||||
|
|
||||||
|
import pytest
|
||||||
|
from pathlib import Path
|
||||||
|
from unittest.mock import MagicMock, AsyncMock
|
||||||
|
|
||||||
|
|
||||||
|
class TestUnifiedAgentFactoryIntegration:
|
||||||
|
"""Test UnifiedAgentFactory creates agents correctly."""
|
||||||
|
|
||||||
|
def test_factory_creates_analyst_with_workspace_config(self, tmp_path, monkeypatch):
|
||||||
|
"""Test that factory creates EvoAgent with workspace config."""
|
||||||
|
from backend.agents.unified_factory import UnifiedAgentFactory
|
||||||
|
|
||||||
|
# Setup mock skills manager
|
||||||
|
class MockSkillsManager:
|
||||||
|
def get_agent_asset_dir(self, config_name, agent_id):
|
||||||
|
path = tmp_path / "runs" / config_name / "agents" / agent_id
|
||||||
|
path.mkdir(parents=True, exist_ok=True)
|
||||||
|
return path
|
||||||
|
|
||||||
|
# Create workspace config
|
||||||
|
workspace_dir = tmp_path / "runs" / "test_config" / "agents" / "fundamentals_analyst"
|
||||||
|
workspace_dir.mkdir(parents=True, exist_ok=True)
|
||||||
|
(workspace_dir / "agent.yaml").write_text(
|
||||||
|
"prompt_files:\n - SOUL.md\n - CUSTOM.md\n",
|
||||||
|
encoding="utf-8",
|
||||||
|
)
|
||||||
|
(workspace_dir / "SOUL.md").write_text("System prompt content", encoding="utf-8")
|
||||||
|
(workspace_dir / "CUSTOM.md").write_text("Custom instructions", encoding="utf-8")
|
||||||
|
|
||||||
|
factory = UnifiedAgentFactory(
|
||||||
|
config_name="test_config",
|
||||||
|
skills_manager=MockSkillsManager(),
|
||||||
|
)
|
||||||
|
|
||||||
|
# Mock EvoAgent creation by patching where it's imported
|
||||||
|
created_kwargs = {}
|
||||||
|
|
||||||
|
class MockEvoAgent:
|
||||||
|
def __init__(self, **kwargs):
|
||||||
|
created_kwargs.update(kwargs)
|
||||||
|
self.toolkit = None
|
||||||
|
|
||||||
|
# Patch at the location where EvoAgent is imported in unified_factory
|
||||||
|
import backend.agents.base.evo_agent as evo_agent_module
|
||||||
|
original_evo_agent = evo_agent_module.EvoAgent
|
||||||
|
evo_agent_module.EvoAgent = MockEvoAgent
|
||||||
|
|
||||||
|
try:
|
||||||
|
monkeypatch.setattr(
|
||||||
|
factory,
|
||||||
|
"_create_toolkit",
|
||||||
|
lambda *args, **kwargs: MagicMock(),
|
||||||
|
)
|
||||||
|
|
||||||
|
agent = factory.create_analyst(
|
||||||
|
analyst_type="fundamentals_analyst",
|
||||||
|
model=MagicMock(),
|
||||||
|
formatter=MagicMock(),
|
||||||
|
)
|
||||||
|
|
||||||
|
assert isinstance(agent, MockEvoAgent)
|
||||||
|
assert created_kwargs["agent_id"] == "fundamentals_analyst"
|
||||||
|
assert created_kwargs["config_name"] == "test_config"
|
||||||
|
assert "SOUL.md" in created_kwargs["prompt_files"]
|
||||||
|
finally:
|
||||||
|
evo_agent_module.EvoAgent = original_evo_agent
|
||||||
|
|
||||||
|
def test_factory_creates_risk_manager(self, tmp_path, monkeypatch):
|
||||||
|
"""Test that factory creates risk manager EvoAgent."""
|
||||||
|
from backend.agents.unified_factory import UnifiedAgentFactory
|
||||||
|
|
||||||
|
class MockSkillsManager:
|
||||||
|
def get_agent_asset_dir(self, config_name, agent_id):
|
||||||
|
path = tmp_path / "runs" / config_name / "agents" / agent_id
|
||||||
|
path.mkdir(parents=True, exist_ok=True)
|
||||||
|
return path
|
||||||
|
|
||||||
|
factory = UnifiedAgentFactory(
|
||||||
|
config_name="test_config",
|
||||||
|
skills_manager=MockSkillsManager(),
|
||||||
|
)
|
||||||
|
|
||||||
|
created_kwargs = {}
|
||||||
|
|
||||||
|
class MockEvoAgent:
|
||||||
|
def __init__(self, **kwargs):
|
||||||
|
created_kwargs.update(kwargs)
|
||||||
|
self.toolkit = None
|
||||||
|
|
||||||
|
import backend.agents.base.evo_agent as evo_agent_module
|
||||||
|
original_evo_agent = evo_agent_module.EvoAgent
|
||||||
|
evo_agent_module.EvoAgent = MockEvoAgent
|
||||||
|
|
||||||
|
try:
|
||||||
|
monkeypatch.setattr(
|
||||||
|
factory,
|
||||||
|
"_create_toolkit",
|
||||||
|
lambda *args, **kwargs: MagicMock(),
|
||||||
|
)
|
||||||
|
|
||||||
|
agent = factory.create_risk_manager(
|
||||||
|
model=MagicMock(),
|
||||||
|
formatter=MagicMock(),
|
||||||
|
)
|
||||||
|
|
||||||
|
assert isinstance(agent, MockEvoAgent)
|
||||||
|
assert created_kwargs["agent_id"] == "risk_manager"
|
||||||
|
finally:
|
||||||
|
evo_agent_module.EvoAgent = original_evo_agent
|
||||||
|
|
||||||
|
def test_factory_creates_portfolio_manager(self, tmp_path, monkeypatch):
|
||||||
|
"""Test that factory creates portfolio manager EvoAgent with financial params."""
|
||||||
|
from backend.agents.unified_factory import UnifiedAgentFactory
|
||||||
|
|
||||||
|
class MockSkillsManager:
|
||||||
|
def get_agent_asset_dir(self, config_name, agent_id):
|
||||||
|
path = tmp_path / "runs" / config_name / "agents" / agent_id
|
||||||
|
path.mkdir(parents=True, exist_ok=True)
|
||||||
|
return path
|
||||||
|
|
||||||
|
factory = UnifiedAgentFactory(
|
||||||
|
config_name="test_config",
|
||||||
|
skills_manager=MockSkillsManager(),
|
||||||
|
)
|
||||||
|
|
||||||
|
created_kwargs = {}
|
||||||
|
|
||||||
|
def mock_make_decision(*args, **kwargs):
|
||||||
|
pass
|
||||||
|
|
||||||
|
class MockEvoAgent:
|
||||||
|
def __init__(self, **kwargs):
|
||||||
|
created_kwargs.update(kwargs)
|
||||||
|
self.toolkit = None
|
||||||
|
# Add _make_decision for PM toolkit registration
|
||||||
|
self._make_decision = mock_make_decision
|
||||||
|
|
||||||
|
import backend.agents.base.evo_agent as evo_agent_module
|
||||||
|
original_evo_agent = evo_agent_module.EvoAgent
|
||||||
|
evo_agent_module.EvoAgent = MockEvoAgent
|
||||||
|
|
||||||
|
try:
|
||||||
|
agent = factory.create_portfolio_manager(
|
||||||
|
model=MagicMock(),
|
||||||
|
formatter=MagicMock(),
|
||||||
|
initial_cash=50000.0,
|
||||||
|
margin_requirement=0.3,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert isinstance(agent, MockEvoAgent)
|
||||||
|
assert created_kwargs["agent_id"] == "portfolio_manager"
|
||||||
|
assert created_kwargs["initial_cash"] == 50000.0
|
||||||
|
assert created_kwargs["margin_requirement"] == 0.3
|
||||||
|
finally:
|
||||||
|
evo_agent_module.EvoAgent = original_evo_agent
|
||||||
|
|
||||||
|
def test_factory_respects_evo_agent_ids_env(self, monkeypatch, tmp_path):
|
||||||
|
"""Test that factory respects EVO_AGENT_IDS environment variable."""
|
||||||
|
from backend.agents.unified_factory import UnifiedAgentFactory
|
||||||
|
|
||||||
|
# Only enable technical_analyst as EvoAgent
|
||||||
|
monkeypatch.setenv("EVO_AGENT_IDS", "technical_analyst")
|
||||||
|
|
||||||
|
class MockSkillsManager:
|
||||||
|
def get_agent_asset_dir(self, config_name, agent_id):
|
||||||
|
path = tmp_path / "runs" / config_name / "agents" / agent_id
|
||||||
|
path.mkdir(parents=True, exist_ok=True)
|
||||||
|
return path
|
||||||
|
|
||||||
|
factory = UnifiedAgentFactory(
|
||||||
|
config_name="test_config",
|
||||||
|
skills_manager=MockSkillsManager(),
|
||||||
|
)
|
||||||
|
|
||||||
|
# technical_analyst should use EvoAgent
|
||||||
|
assert factory._should_use_evo_agent("technical_analyst") is True
|
||||||
|
# fundamentals_analyst should use legacy
|
||||||
|
assert factory._should_use_evo_agent("fundamentals_analyst") is False
|
||||||
|
|
||||||
|
def test_factory_legacy_mode_disables_evo_agent(self, monkeypatch):
|
||||||
|
"""Test that EVO_AGENT_IDS=legacy disables all EvoAgents."""
|
||||||
|
from backend.agents.unified_factory import UnifiedAgentFactory
|
||||||
|
|
||||||
|
monkeypatch.setenv("EVO_AGENT_IDS", "legacy")
|
||||||
|
|
||||||
|
factory = UnifiedAgentFactory(
|
||||||
|
config_name="test_config",
|
||||||
|
skills_manager=MagicMock(),
|
||||||
|
)
|
||||||
|
|
||||||
|
assert factory._evo_agent_ids == set()
|
||||||
|
assert factory._should_use_evo_agent("any_agent") is False
|
||||||
|
|
||||||
|
|
||||||
|
class TestToolGuardIntegration:
|
||||||
|
"""Test ToolGuardMixin integration with EvoAgent."""
|
||||||
|
|
||||||
|
def test_tool_guard_intercepts_guarded_tools(self):
|
||||||
|
"""Test that ToolGuard intercepts tools requiring approval."""
|
||||||
|
from backend.agents.base.tool_guard import ToolGuardMixin
|
||||||
|
|
||||||
|
class TestAgent(ToolGuardMixin):
|
||||||
|
def __init__(self):
|
||||||
|
self._init_tool_guard()
|
||||||
|
self.agent_id = "test_agent"
|
||||||
|
self.workspace_id = "test_workspace"
|
||||||
|
self.session_id = "test_session"
|
||||||
|
|
||||||
|
agent = TestAgent()
|
||||||
|
|
||||||
|
# Verify place_order is in guarded tools
|
||||||
|
assert agent._is_tool_guarded("place_order") is True
|
||||||
|
assert agent._is_tool_denied("execute_shell_command") is True
|
||||||
|
|
||||||
|
def test_tool_guard_approval_flow(self):
|
||||||
|
"""Test the full approval flow for a guarded tool."""
|
||||||
|
from backend.agents.base.tool_guard import (
|
||||||
|
ToolGuardStore,
|
||||||
|
ApprovalStatus,
|
||||||
|
)
|
||||||
|
|
||||||
|
store = ToolGuardStore()
|
||||||
|
|
||||||
|
# Create a pending approval record
|
||||||
|
record = store.create_pending(
|
||||||
|
tool_name="place_order",
|
||||||
|
tool_input={"ticker": "AAPL", "quantity": 100},
|
||||||
|
agent_id="test_agent",
|
||||||
|
workspace_id="test_workspace",
|
||||||
|
)
|
||||||
|
|
||||||
|
assert record.status == ApprovalStatus.PENDING
|
||||||
|
assert record.tool_name == "place_order"
|
||||||
|
|
||||||
|
# Approve the request with resolved_by
|
||||||
|
updated = store.set_status(record.approval_id, ApprovalStatus.APPROVED, resolved_by="test_user")
|
||||||
|
assert updated.status == ApprovalStatus.APPROVED
|
||||||
|
assert updated.resolved_by == "test_user"
|
||||||
|
|
||||||
|
def test_tool_guard_default_lists(self):
|
||||||
|
"""Test default guarded and denied tool lists."""
|
||||||
|
from backend.agents.base.tool_guard import (
|
||||||
|
DEFAULT_GUARDED_TOOLS,
|
||||||
|
DEFAULT_DENIED_TOOLS,
|
||||||
|
)
|
||||||
|
|
||||||
|
# Critical tools should be guarded
|
||||||
|
assert "place_order" in DEFAULT_GUARDED_TOOLS
|
||||||
|
assert "modify_position" in DEFAULT_GUARDED_TOOLS
|
||||||
|
assert "write_file" in DEFAULT_GUARDED_TOOLS
|
||||||
|
assert "edit_file" in DEFAULT_GUARDED_TOOLS
|
||||||
|
|
||||||
|
# Dangerous tools should be denied
|
||||||
|
assert "execute_shell_command" in DEFAULT_DENIED_TOOLS
|
||||||
|
|
||||||
|
|
||||||
|
class TestEvoAgentWorkspaceIntegration:
|
||||||
|
"""Test EvoAgent workspace-driven configuration."""
|
||||||
|
|
||||||
|
def test_evo_agent_loads_prompt_files_from_workspace(self, tmp_path, monkeypatch):
|
||||||
|
"""Test that EvoAgent loads prompt files from workspace directory."""
|
||||||
|
from backend.agents.base.evo_agent import EvoAgent
|
||||||
|
|
||||||
|
workspace_dir = tmp_path / "runs" / "demo" / "agents" / "test_analyst"
|
||||||
|
workspace_dir.mkdir(parents=True, exist_ok=True)
|
||||||
|
|
||||||
|
# Create prompt files
|
||||||
|
(workspace_dir / "SOUL.md").write_text(
|
||||||
|
"You are a test analyst.", encoding="utf-8"
|
||||||
|
)
|
||||||
|
(workspace_dir / "INSTRUCTIONS.md").write_text(
|
||||||
|
"Additional instructions.", encoding="utf-8"
|
||||||
|
)
|
||||||
|
|
||||||
|
class MockToolkit:
|
||||||
|
def __init__(self, *args, **kwargs):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def register_agent_skill(self, path):
|
||||||
|
pass
|
||||||
|
|
||||||
|
monkeypatch.setattr(
|
||||||
|
"backend.agents.base.evo_agent.Toolkit",
|
||||||
|
MockToolkit,
|
||||||
|
)
|
||||||
|
|
||||||
|
class MockSkillsManager:
|
||||||
|
def get_agent_active_root(self, config_name, agent_id):
|
||||||
|
return workspace_dir / "skills" / "active"
|
||||||
|
|
||||||
|
def list_active_skill_metadata(self, config_name, agent_id):
|
||||||
|
return []
|
||||||
|
|
||||||
|
agent = EvoAgent(
|
||||||
|
agent_id="test_analyst",
|
||||||
|
config_name="demo",
|
||||||
|
workspace_dir=workspace_dir,
|
||||||
|
model=MagicMock(),
|
||||||
|
formatter=MagicMock(),
|
||||||
|
skills_manager=MockSkillsManager(),
|
||||||
|
prompt_files=["SOUL.md", "INSTRUCTIONS.md"],
|
||||||
|
)
|
||||||
|
|
||||||
|
# Verify prompts are loaded into system prompt
|
||||||
|
assert "You are a test analyst." in agent._sys_prompt
|
||||||
|
assert "Additional instructions." in agent._sys_prompt
|
||||||
|
|
||||||
|
|
||||||
|
class TestFactoryCaching:
|
||||||
|
"""Test UnifiedAgentFactory caching behavior."""
|
||||||
|
|
||||||
|
def test_factory_cache_per_config(self, monkeypatch):
|
||||||
|
"""Test that factory is cached per config name."""
|
||||||
|
from backend.agents.unified_factory import (
|
||||||
|
get_agent_factory,
|
||||||
|
clear_factory_cache,
|
||||||
|
)
|
||||||
|
|
||||||
|
# Clear any existing cache
|
||||||
|
clear_factory_cache()
|
||||||
|
|
||||||
|
mock_skills_manager = MagicMock()
|
||||||
|
|
||||||
|
factory1 = get_agent_factory("config_a", mock_skills_manager)
|
||||||
|
factory2 = get_agent_factory("config_a", mock_skills_manager)
|
||||||
|
factory3 = get_agent_factory("config_b", mock_skills_manager)
|
||||||
|
|
||||||
|
# Same config should return same instance
|
||||||
|
assert factory1 is factory2
|
||||||
|
# Different config should return different instance
|
||||||
|
assert factory1 is not factory3
|
||||||
|
|
||||||
|
def test_clear_factory_cache(self):
|
||||||
|
"""Test that clear_factory_cache removes all cached factories."""
|
||||||
|
from backend.agents.unified_factory import (
|
||||||
|
get_agent_factory,
|
||||||
|
clear_factory_cache,
|
||||||
|
)
|
||||||
|
|
||||||
|
mock_skills_manager = MagicMock()
|
||||||
|
|
||||||
|
factory1 = get_agent_factory("config_c", mock_skills_manager)
|
||||||
|
clear_factory_cache()
|
||||||
|
factory2 = get_agent_factory("config_c", mock_skills_manager)
|
||||||
|
|
||||||
|
# After clearing cache, should be new instance
|
||||||
|
assert factory1 is not factory2
|
||||||
|
|
||||||
|
|
||||||
|
class TestDeprecationWarnings:
|
||||||
|
"""Test that legacy agents emit deprecation warnings."""
|
||||||
|
|
||||||
|
def test_risk_agent_emits_deprecation_warning(self):
|
||||||
|
"""Test that RiskAgent emits deprecation warning on import."""
|
||||||
|
import warnings
|
||||||
|
import sys
|
||||||
|
|
||||||
|
# Clear cache to force reimport
|
||||||
|
modules_to_remove = [
|
||||||
|
k for k in sys.modules.keys()
|
||||||
|
if k.endswith("risk_manager") and "backend.agents" in k
|
||||||
|
]
|
||||||
|
for m in modules_to_remove:
|
||||||
|
del sys.modules[m]
|
||||||
|
|
||||||
|
with warnings.catch_warnings(record=True) as w:
|
||||||
|
warnings.simplefilter("always")
|
||||||
|
from backend.agents.risk_manager import RiskAgent
|
||||||
|
|
||||||
|
deprecation_warnings = [
|
||||||
|
x for x in w if issubclass(x.category, DeprecationWarning)
|
||||||
|
]
|
||||||
|
assert any("RiskAgent is deprecated" in str(x.message) for x in deprecation_warnings)
|
||||||
|
|
||||||
|
def test_pm_agent_emits_deprecation_warning(self):
|
||||||
|
"""Test that PMAgent emits deprecation warning on import."""
|
||||||
|
import warnings
|
||||||
|
import sys
|
||||||
|
|
||||||
|
# Clear cache to force reimport
|
||||||
|
modules_to_remove = [
|
||||||
|
k for k in sys.modules.keys()
|
||||||
|
if k.endswith("portfolio_manager") and "backend.agents" in k
|
||||||
|
]
|
||||||
|
for m in modules_to_remove:
|
||||||
|
del sys.modules[m]
|
||||||
|
|
||||||
|
with warnings.catch_warnings(record=True) as w:
|
||||||
|
warnings.simplefilter("always")
|
||||||
|
from backend.agents.portfolio_manager import PMAgent
|
||||||
|
|
||||||
|
deprecation_warnings = [
|
||||||
|
x for x in w if issubclass(x.category, DeprecationWarning)
|
||||||
|
]
|
||||||
|
assert any("PMAgent is deprecated" in str(x.message) for x in deprecation_warnings)
|
||||||
429
backend/tests/test_evo_agent_selection.py
Normal file
429
backend/tests/test_evo_agent_selection.py
Normal file
@@ -0,0 +1,429 @@
|
|||||||
|
# -*- coding: utf-8 -*-
|
||||||
|
"""Tests for selective EvoAgent construction."""
|
||||||
|
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
from backend.config.constants import ANALYST_TYPES
|
||||||
|
|
||||||
|
|
||||||
|
def test_main_resolve_evo_agent_ids_filters_unsupported_roles(monkeypatch):
|
||||||
|
from backend import main as main_module
|
||||||
|
|
||||||
|
monkeypatch.setenv(
|
||||||
|
"EVO_AGENT_IDS",
|
||||||
|
"fundamentals_analyst,portfolio_manager,unknown,technical_analyst",
|
||||||
|
)
|
||||||
|
|
||||||
|
resolved = main_module._resolve_evo_agent_ids()
|
||||||
|
|
||||||
|
assert resolved == {"fundamentals_analyst", "portfolio_manager", "technical_analyst"}
|
||||||
|
|
||||||
|
|
||||||
|
def test_pipeline_runner_resolve_evo_agent_ids_keeps_supported_roles(monkeypatch):
|
||||||
|
from backend.core import pipeline_runner as runner_module
|
||||||
|
|
||||||
|
monkeypatch.setenv("EVO_AGENT_IDS", "risk_manager,valuation_analyst")
|
||||||
|
|
||||||
|
resolved = runner_module._resolve_evo_agent_ids()
|
||||||
|
|
||||||
|
assert resolved == {"risk_manager", "valuation_analyst"}
|
||||||
|
|
||||||
|
|
||||||
|
def test_main_create_analyst_agent_can_build_evo_agent(monkeypatch, tmp_path):
|
||||||
|
from backend import main as main_module
|
||||||
|
|
||||||
|
created = {}
|
||||||
|
|
||||||
|
class DummySkillsManager:
|
||||||
|
def get_agent_asset_dir(self, config_name, agent_id):
|
||||||
|
path = tmp_path / "runs" / config_name / "agents" / agent_id
|
||||||
|
path.mkdir(parents=True, exist_ok=True)
|
||||||
|
(path / "agent.yaml").write_text(
|
||||||
|
"prompt_files:\n - SOUL.md\n",
|
||||||
|
encoding="utf-8",
|
||||||
|
)
|
||||||
|
return path
|
||||||
|
|
||||||
|
class DummyEvoAgent:
|
||||||
|
def __init__(self, **kwargs):
|
||||||
|
created.update(kwargs)
|
||||||
|
self.toolkit = None
|
||||||
|
|
||||||
|
monkeypatch.setenv("EVO_AGENT_IDS", "fundamentals_analyst")
|
||||||
|
monkeypatch.setattr(main_module, "EvoAgent", DummyEvoAgent)
|
||||||
|
monkeypatch.setattr(main_module, "create_agent_toolkit", lambda *args, **kwargs: "toolkit")
|
||||||
|
|
||||||
|
agent = main_module._create_analyst_agent(
|
||||||
|
analyst_type="fundamentals_analyst",
|
||||||
|
config_name="demo",
|
||||||
|
model="model",
|
||||||
|
formatter="formatter",
|
||||||
|
skills_manager=DummySkillsManager(),
|
||||||
|
active_skill_map={"fundamentals_analyst": [Path("/tmp/skill")]},
|
||||||
|
long_term_memory=None,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert isinstance(agent, DummyEvoAgent)
|
||||||
|
assert created["agent_id"] == "fundamentals_analyst"
|
||||||
|
assert created["config_name"] == "demo"
|
||||||
|
assert created["prompt_files"] == ["SOUL.md"]
|
||||||
|
assert agent.toolkit == "toolkit"
|
||||||
|
assert agent.workspace_id == "demo"
|
||||||
|
|
||||||
|
|
||||||
|
def test_main_create_risk_manager_can_build_evo_agent(monkeypatch, tmp_path):
|
||||||
|
from backend import main as main_module
|
||||||
|
|
||||||
|
created = {}
|
||||||
|
|
||||||
|
class DummySkillsManager:
|
||||||
|
def get_agent_asset_dir(self, config_name, agent_id):
|
||||||
|
path = tmp_path / "runs" / config_name / "agents" / agent_id
|
||||||
|
path.mkdir(parents=True, exist_ok=True)
|
||||||
|
(path / "agent.yaml").write_text(
|
||||||
|
"prompt_files:\n - SOUL.md\n",
|
||||||
|
encoding="utf-8",
|
||||||
|
)
|
||||||
|
return path
|
||||||
|
|
||||||
|
class DummyEvoAgent:
|
||||||
|
def __init__(self, **kwargs):
|
||||||
|
created.update(kwargs)
|
||||||
|
self.toolkit = None
|
||||||
|
|
||||||
|
monkeypatch.setenv("EVO_AGENT_IDS", "risk_manager")
|
||||||
|
monkeypatch.setattr(main_module, "EvoAgent", DummyEvoAgent)
|
||||||
|
monkeypatch.setattr(main_module, "create_agent_toolkit", lambda *args, **kwargs: "risk-toolkit")
|
||||||
|
|
||||||
|
agent = main_module._create_risk_manager_agent(
|
||||||
|
config_name="demo",
|
||||||
|
model="model",
|
||||||
|
formatter="formatter",
|
||||||
|
skills_manager=DummySkillsManager(),
|
||||||
|
active_skill_map={"risk_manager": [Path("/tmp/skill")]},
|
||||||
|
long_term_memory=None,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert isinstance(agent, DummyEvoAgent)
|
||||||
|
assert created["agent_id"] == "risk_manager"
|
||||||
|
assert created["config_name"] == "demo"
|
||||||
|
assert created["prompt_files"] == ["SOUL.md"]
|
||||||
|
assert agent.toolkit == "risk-toolkit"
|
||||||
|
assert agent.workspace_id == "demo"
|
||||||
|
|
||||||
|
|
||||||
|
def test_main_create_portfolio_manager_can_build_evo_agent(monkeypatch, tmp_path):
|
||||||
|
from backend import main as main_module
|
||||||
|
|
||||||
|
created = {}
|
||||||
|
|
||||||
|
class DummySkillsManager:
|
||||||
|
def get_agent_asset_dir(self, config_name, agent_id):
|
||||||
|
path = tmp_path / "runs" / config_name / "agents" / agent_id
|
||||||
|
path.mkdir(parents=True, exist_ok=True)
|
||||||
|
(path / "agent.yaml").write_text(
|
||||||
|
"prompt_files:\n - SOUL.md\n",
|
||||||
|
encoding="utf-8",
|
||||||
|
)
|
||||||
|
return path
|
||||||
|
|
||||||
|
class DummyEvoAgent:
|
||||||
|
def __init__(self, **kwargs):
|
||||||
|
created.update(kwargs)
|
||||||
|
self.toolkit = None
|
||||||
|
|
||||||
|
monkeypatch.setenv("EVO_AGENT_IDS", "portfolio_manager")
|
||||||
|
monkeypatch.setattr(main_module, "EvoAgent", DummyEvoAgent)
|
||||||
|
monkeypatch.setattr(
|
||||||
|
main_module,
|
||||||
|
"create_agent_toolkit",
|
||||||
|
lambda *args, **kwargs: "pm-toolkit",
|
||||||
|
)
|
||||||
|
|
||||||
|
agent = main_module._create_portfolio_manager_agent(
|
||||||
|
config_name="demo",
|
||||||
|
model="model",
|
||||||
|
formatter="formatter",
|
||||||
|
initial_cash=12345.0,
|
||||||
|
margin_requirement=0.4,
|
||||||
|
skills_manager=DummySkillsManager(),
|
||||||
|
active_skill_map={"portfolio_manager": [Path("/tmp/skill")]},
|
||||||
|
long_term_memory=None,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert isinstance(agent, DummyEvoAgent)
|
||||||
|
assert created["agent_id"] == "portfolio_manager"
|
||||||
|
assert created["config_name"] == "demo"
|
||||||
|
assert created["prompt_files"] == ["SOUL.md"]
|
||||||
|
assert created["initial_cash"] == 12345.0
|
||||||
|
assert created["margin_requirement"] == 0.4
|
||||||
|
assert agent.toolkit == "pm-toolkit"
|
||||||
|
assert agent.workspace_id == "demo"
|
||||||
|
|
||||||
|
|
||||||
|
def test_evo_agent_reload_runtime_assets_refreshes_prompt_files(monkeypatch, tmp_path):
|
||||||
|
from backend.agents.base.evo_agent import EvoAgent
|
||||||
|
|
||||||
|
workspace_dir = tmp_path / "runs" / "demo" / "agents" / "fundamentals_analyst"
|
||||||
|
workspace_dir.mkdir(parents=True, exist_ok=True)
|
||||||
|
(workspace_dir / "SOUL.md").write_text("soul-v1", encoding="utf-8")
|
||||||
|
(workspace_dir / "MEMORY.md").write_text("memory-v1", encoding="utf-8")
|
||||||
|
(workspace_dir / "agent.yaml").write_text(
|
||||||
|
"prompt_files:\n"
|
||||||
|
" - SOUL.md\n",
|
||||||
|
encoding="utf-8",
|
||||||
|
)
|
||||||
|
|
||||||
|
class DummyToolkit:
|
||||||
|
def __init__(self, *args, **kwargs):
|
||||||
|
self.registered = []
|
||||||
|
|
||||||
|
def register_agent_skill(self, path):
|
||||||
|
self.registered.append(path)
|
||||||
|
|
||||||
|
monkeypatch.setattr(
|
||||||
|
"backend.agents.base.evo_agent.Toolkit",
|
||||||
|
DummyToolkit,
|
||||||
|
)
|
||||||
|
|
||||||
|
class DummyModel:
|
||||||
|
pass
|
||||||
|
|
||||||
|
class DummyFormatter:
|
||||||
|
pass
|
||||||
|
|
||||||
|
agent = EvoAgent(
|
||||||
|
agent_id="fundamentals_analyst",
|
||||||
|
config_name="demo",
|
||||||
|
workspace_dir=workspace_dir,
|
||||||
|
model=DummyModel(),
|
||||||
|
formatter=DummyFormatter(),
|
||||||
|
skills_manager=type(
|
||||||
|
"SkillsManagerStub",
|
||||||
|
(),
|
||||||
|
{
|
||||||
|
"get_agent_active_root": staticmethod(lambda config_name, agent_id: workspace_dir / "skills" / "active"),
|
||||||
|
"list_active_skill_metadata": staticmethod(lambda config_name, agent_id: []),
|
||||||
|
},
|
||||||
|
)(),
|
||||||
|
)
|
||||||
|
|
||||||
|
assert "soul-v1" in agent._sys_prompt
|
||||||
|
assert "memory-v1" not in agent._sys_prompt
|
||||||
|
|
||||||
|
(workspace_dir / "agent.yaml").write_text(
|
||||||
|
"prompt_files:\n"
|
||||||
|
" - SOUL.md\n"
|
||||||
|
" - MEMORY.md\n",
|
||||||
|
encoding="utf-8",
|
||||||
|
)
|
||||||
|
|
||||||
|
agent.reload_runtime_assets(active_skill_dirs=[])
|
||||||
|
|
||||||
|
assert "memory-v1" in agent._sys_prompt
|
||||||
|
assert agent.workspace_id == "demo"
|
||||||
|
assert agent.config == {"config_name": "demo"}
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
def test_pipeline_resolve_evo_agent_ids_filters_unsupported_roles(monkeypatch):
|
||||||
|
"""Test that pipeline._resolve_evo_agent_ids filters unsupported roles."""
|
||||||
|
from backend.core import pipeline as pipeline_module
|
||||||
|
|
||||||
|
monkeypatch.setenv(
|
||||||
|
"EVO_AGENT_IDS",
|
||||||
|
"fundamentals_analyst,portfolio_manager,unknown,technical_analyst",
|
||||||
|
)
|
||||||
|
|
||||||
|
resolved = pipeline_module._resolve_evo_agent_ids()
|
||||||
|
|
||||||
|
assert resolved == {"fundamentals_analyst", "portfolio_manager", "technical_analyst"}
|
||||||
|
|
||||||
|
|
||||||
|
def test_pipeline_create_runtime_analyst_uses_evo_agent_when_enabled(monkeypatch, tmp_path):
|
||||||
|
"""Test that _create_runtime_analyst creates EvoAgent when in EVO_AGENT_IDS."""
|
||||||
|
from backend.core import pipeline as pipeline_module
|
||||||
|
|
||||||
|
created = {}
|
||||||
|
|
||||||
|
class DummyEvoAgent:
|
||||||
|
def __init__(self, **kwargs):
|
||||||
|
created.update(kwargs)
|
||||||
|
self.toolkit = None
|
||||||
|
|
||||||
|
class DummyAnalystAgent:
|
||||||
|
def __init__(self, **kwargs):
|
||||||
|
created.update(kwargs)
|
||||||
|
self.toolkit = None
|
||||||
|
|
||||||
|
monkeypatch.setenv("EVO_AGENT_IDS", "fundamentals_analyst")
|
||||||
|
monkeypatch.setattr(pipeline_module, "EvoAgent", DummyEvoAgent)
|
||||||
|
monkeypatch.setattr(pipeline_module, "AnalystAgent", DummyAnalystAgent)
|
||||||
|
monkeypatch.setattr(
|
||||||
|
pipeline_module,
|
||||||
|
"create_agent_toolkit",
|
||||||
|
lambda *args, **kwargs: "toolkit",
|
||||||
|
)
|
||||||
|
monkeypatch.setattr(
|
||||||
|
pipeline_module,
|
||||||
|
"get_agent_model",
|
||||||
|
lambda x: "model",
|
||||||
|
)
|
||||||
|
monkeypatch.setattr(
|
||||||
|
pipeline_module,
|
||||||
|
"get_agent_formatter",
|
||||||
|
lambda x: "formatter",
|
||||||
|
)
|
||||||
|
|
||||||
|
# Create a mock pipeline instance
|
||||||
|
class MockPM:
|
||||||
|
def __init__(self):
|
||||||
|
self.config = {"config_name": "demo"}
|
||||||
|
|
||||||
|
pipeline = pipeline_module.TradingPipeline(
|
||||||
|
analysts=[],
|
||||||
|
risk_manager=None,
|
||||||
|
portfolio_manager=MockPM(),
|
||||||
|
)
|
||||||
|
|
||||||
|
# Mock workspace_manager methods
|
||||||
|
monkeypatch.setattr(
|
||||||
|
pipeline_module.WorkspaceManager,
|
||||||
|
"ensure_agent_assets",
|
||||||
|
lambda *args, **kwargs: None,
|
||||||
|
)
|
||||||
|
|
||||||
|
result = pipeline._create_runtime_analyst("test_analyst", "fundamentals_analyst")
|
||||||
|
|
||||||
|
assert "Created runtime analyst" in result
|
||||||
|
assert created.get("agent_id") == "test_analyst"
|
||||||
|
assert created.get("config_name") == "demo"
|
||||||
|
|
||||||
|
|
||||||
|
def test_pipeline_create_runtime_analyst_uses_legacy_when_not_in_evo_ids(monkeypatch, tmp_path):
|
||||||
|
"""Test that _create_runtime_analyst creates legacy AnalystAgent when not in EVO_AGENT_IDS."""
|
||||||
|
from backend.core import pipeline as pipeline_module
|
||||||
|
|
||||||
|
created = {}
|
||||||
|
|
||||||
|
class DummyEvoAgent:
|
||||||
|
def __init__(self, **kwargs):
|
||||||
|
created.update(kwargs)
|
||||||
|
self.toolkit = None
|
||||||
|
|
||||||
|
class DummyAnalystAgent:
|
||||||
|
def __init__(self, **kwargs):
|
||||||
|
created.update(kwargs)
|
||||||
|
self.toolkit = None
|
||||||
|
|
||||||
|
# EVO_AGENT_IDS does not include fundamentals_analyst
|
||||||
|
monkeypatch.setenv("EVO_AGENT_IDS", "technical_analyst")
|
||||||
|
monkeypatch.setattr(pipeline_module, "EvoAgent", DummyEvoAgent)
|
||||||
|
monkeypatch.setattr(pipeline_module, "AnalystAgent", DummyAnalystAgent)
|
||||||
|
monkeypatch.setattr(
|
||||||
|
pipeline_module,
|
||||||
|
"create_agent_toolkit",
|
||||||
|
lambda *args, **kwargs: "toolkit",
|
||||||
|
)
|
||||||
|
monkeypatch.setattr(
|
||||||
|
pipeline_module,
|
||||||
|
"get_agent_model",
|
||||||
|
lambda x: "model",
|
||||||
|
)
|
||||||
|
monkeypatch.setattr(
|
||||||
|
pipeline_module,
|
||||||
|
"get_agent_formatter",
|
||||||
|
lambda x: "formatter",
|
||||||
|
)
|
||||||
|
|
||||||
|
# Create a mock pipeline instance
|
||||||
|
class MockPM:
|
||||||
|
def __init__(self):
|
||||||
|
self.config = {"config_name": "demo"}
|
||||||
|
|
||||||
|
pipeline = pipeline_module.TradingPipeline(
|
||||||
|
analysts=[],
|
||||||
|
risk_manager=None,
|
||||||
|
portfolio_manager=MockPM(),
|
||||||
|
)
|
||||||
|
|
||||||
|
# Mock workspace_manager methods
|
||||||
|
monkeypatch.setattr(
|
||||||
|
pipeline_module.WorkspaceManager,
|
||||||
|
"ensure_agent_assets",
|
||||||
|
lambda *args, **kwargs: None,
|
||||||
|
)
|
||||||
|
|
||||||
|
result = pipeline._create_runtime_analyst("test_analyst", "fundamentals_analyst")
|
||||||
|
|
||||||
|
assert "Created runtime analyst" in result
|
||||||
|
# Should use legacy AnalystAgent
|
||||||
|
assert created.get("analyst_type") == "fundamentals_analyst"
|
||||||
|
|
||||||
|
|
||||||
|
def test_main_resolve_evo_agent_ids_returns_all_by_default(monkeypatch):
|
||||||
|
"""Test that _resolve_evo_agent_ids returns all supported roles by default."""
|
||||||
|
from backend import main as main_module
|
||||||
|
from backend.config.constants import ANALYST_TYPES
|
||||||
|
|
||||||
|
# Unset EVO_AGENT_IDS to test default behavior
|
||||||
|
monkeypatch.delenv("EVO_AGENT_IDS", raising=False)
|
||||||
|
|
||||||
|
resolved = main_module._resolve_evo_agent_ids()
|
||||||
|
|
||||||
|
expected = set(ANALYST_TYPES) | {"risk_manager", "portfolio_manager"}
|
||||||
|
assert resolved == expected
|
||||||
|
|
||||||
|
|
||||||
|
def test_evo_agent_supports_long_term_memory(monkeypatch, tmp_path):
|
||||||
|
"""Test that EvoAgent can be created with long_term_memory."""
|
||||||
|
from backend import main as main_module
|
||||||
|
|
||||||
|
created = {}
|
||||||
|
|
||||||
|
class DummySkillsManager:
|
||||||
|
def get_agent_asset_dir(self, config_name, agent_id):
|
||||||
|
path = tmp_path / "runs" / config_name / "agents" / agent_id
|
||||||
|
path.mkdir(parents=True, exist_ok=True)
|
||||||
|
(path / "agent.yaml").write_text(
|
||||||
|
"prompt_files:\n - SOUL.md\n",
|
||||||
|
encoding="utf-8",
|
||||||
|
)
|
||||||
|
return path
|
||||||
|
|
||||||
|
class DummyEvoAgent:
|
||||||
|
def __init__(self, **kwargs):
|
||||||
|
created.update(kwargs)
|
||||||
|
self.toolkit = None
|
||||||
|
|
||||||
|
# Default: all roles use EvoAgent
|
||||||
|
monkeypatch.delenv("EVO_AGENT_IDS", raising=False)
|
||||||
|
monkeypatch.setattr(main_module, "EvoAgent", DummyEvoAgent)
|
||||||
|
monkeypatch.setattr(main_module, "create_agent_toolkit", lambda *args, **kwargs: "toolkit")
|
||||||
|
|
||||||
|
# Create with long_term_memory - should still use EvoAgent
|
||||||
|
dummy_memory = {"type": "reme"}
|
||||||
|
agent = main_module._create_analyst_agent(
|
||||||
|
analyst_type="fundamentals_analyst",
|
||||||
|
config_name="demo",
|
||||||
|
model="model",
|
||||||
|
formatter="formatter",
|
||||||
|
skills_manager=DummySkillsManager(),
|
||||||
|
active_skill_map={"fundamentals_analyst": []},
|
||||||
|
long_term_memory=dummy_memory,
|
||||||
|
)
|
||||||
|
|
||||||
|
assert isinstance(agent, DummyEvoAgent)
|
||||||
|
assert created["agent_id"] == "fundamentals_analyst"
|
||||||
|
assert created["long_term_memory"] is dummy_memory
|
||||||
|
|
||||||
|
|
||||||
|
def test_evo_agent_legacy_mode(monkeypatch):
|
||||||
|
"""Test that EVO_AGENT_IDS=legacy disables EvoAgent."""
|
||||||
|
from backend import main as main_module
|
||||||
|
|
||||||
|
monkeypatch.setenv("EVO_AGENT_IDS", "legacy")
|
||||||
|
|
||||||
|
resolved = main_module._resolve_evo_agent_ids()
|
||||||
|
assert resolved == set()
|
||||||
@@ -5,6 +5,7 @@ from types import SimpleNamespace
|
|||||||
|
|
||||||
import pytest
|
import pytest
|
||||||
|
|
||||||
|
from backend.core.state_sync import StateSync
|
||||||
from backend.services import gateway_cycle_support, gateway_runtime_support
|
from backend.services import gateway_cycle_support, gateway_runtime_support
|
||||||
|
|
||||||
|
|
||||||
@@ -43,6 +44,12 @@ class _DummyStorage:
|
|||||||
self.initial_cash = 100000.0
|
self.initial_cash = 100000.0
|
||||||
self.is_live_session_active = False
|
self.is_live_session_active = False
|
||||||
self.server_state_updates = []
|
self.server_state_updates = []
|
||||||
|
self.max_feed_history = 200
|
||||||
|
self.runtime_db = SimpleNamespace(
|
||||||
|
get_recent_feed_events=lambda limit=200: [],
|
||||||
|
get_last_day_feed_events=lambda current_date=None, limit=200: [],
|
||||||
|
)
|
||||||
|
self._persisted_server_state = {}
|
||||||
|
|
||||||
def can_apply_initial_cash(self):
|
def can_apply_initial_cash(self):
|
||||||
return True
|
return True
|
||||||
@@ -54,6 +61,9 @@ class _DummyStorage:
|
|||||||
def update_server_state_from_dashboard(self, state):
|
def update_server_state_from_dashboard(self, state):
|
||||||
self.server_state_updates.append(state)
|
self.server_state_updates.append(state)
|
||||||
|
|
||||||
|
def read_persisted_server_state(self):
|
||||||
|
return dict(self._persisted_server_state)
|
||||||
|
|
||||||
def load_file(self, name):
|
def load_file(self, name):
|
||||||
if name == "summary":
|
if name == "summary":
|
||||||
return {"totalAssetValue": self.initial_cash}
|
return {"totalAssetValue": self.initial_cash}
|
||||||
@@ -199,3 +209,70 @@ async def test_refresh_market_store_for_watchlist_emits_system_messages(monkeypa
|
|||||||
|
|
||||||
assert gateway.state_sync.system_messages[0] == "正在同步自选股市场数据: AAPL, MSFT"
|
assert gateway.state_sync.system_messages[0] == "正在同步自选股市场数据: AAPL, MSFT"
|
||||||
assert "自选股市场数据已同步:" in gateway.state_sync.system_messages[1]
|
assert "自选股市场数据已同步:" in gateway.state_sync.system_messages[1]
|
||||||
|
|
||||||
|
|
||||||
|
def test_initial_state_payload_prefers_dashboard_snapshot_for_top_level_views():
|
||||||
|
storage = _DummyStorage()
|
||||||
|
sync = StateSync(storage=storage)
|
||||||
|
sync._state = {
|
||||||
|
"holdings": [],
|
||||||
|
"trades": [],
|
||||||
|
"stats": {},
|
||||||
|
"leaderboard": [],
|
||||||
|
"portfolio": {"total_value": 100000.0},
|
||||||
|
}
|
||||||
|
|
||||||
|
payload = sync.get_initial_state_payload(include_dashboard=True)
|
||||||
|
|
||||||
|
assert payload["holdings"] == []
|
||||||
|
assert payload["trades"] == []
|
||||||
|
assert payload["stats"] == {}
|
||||||
|
assert payload["leaderboard"] == []
|
||||||
|
assert payload["dashboard"]["summary"]["totalAssetValue"] == 100000.0
|
||||||
|
|
||||||
|
|
||||||
|
def test_initial_state_payload_uses_dashboard_snapshot_for_sparse_runtime_state():
|
||||||
|
class SnapshotStorage(_DummyStorage):
|
||||||
|
def build_dashboard_snapshot_from_state(self, state):
|
||||||
|
return {
|
||||||
|
"summary": {"totalAssetValue": 123456.0},
|
||||||
|
"holdings": [{"ticker": "AAPL"}],
|
||||||
|
"stats": {"totalTrades": 3},
|
||||||
|
"trades": [{"ticker": "AAPL"}],
|
||||||
|
"leaderboard": [{"agentId": "technical_analyst"}],
|
||||||
|
}
|
||||||
|
|
||||||
|
sync = StateSync(storage=SnapshotStorage())
|
||||||
|
sync._state = {
|
||||||
|
"holdings": [],
|
||||||
|
"trades": [],
|
||||||
|
"stats": {},
|
||||||
|
"leaderboard": [],
|
||||||
|
}
|
||||||
|
|
||||||
|
payload = sync.get_initial_state_payload(include_dashboard=True)
|
||||||
|
|
||||||
|
assert payload["holdings"][0]["ticker"] == "AAPL"
|
||||||
|
assert payload["trades"][0]["ticker"] == "AAPL"
|
||||||
|
assert payload["stats"]["totalTrades"] == 3
|
||||||
|
assert payload["leaderboard"][0]["agentId"] == "technical_analyst"
|
||||||
|
|
||||||
|
|
||||||
|
def test_initial_state_payload_falls_back_to_persisted_portfolio():
|
||||||
|
storage = _DummyStorage()
|
||||||
|
storage._persisted_server_state = {
|
||||||
|
"portfolio": {
|
||||||
|
"total_value": 123456.0,
|
||||||
|
"pnl_percent": 12.34,
|
||||||
|
"equity": [{"t": 1, "v": 123456.0}],
|
||||||
|
}
|
||||||
|
}
|
||||||
|
sync = StateSync(storage=storage)
|
||||||
|
sync._state = {
|
||||||
|
"portfolio": {},
|
||||||
|
}
|
||||||
|
|
||||||
|
payload = sync.get_initial_state_payload(include_dashboard=True)
|
||||||
|
|
||||||
|
assert payload["portfolio"]["total_value"] == 123456.0
|
||||||
|
assert payload["portfolio"]["pnl_percent"] == 12.34
|
||||||
|
|||||||
225
backend/tests/test_migration_boundaries.py
Normal file
225
backend/tests/test_migration_boundaries.py
Normal file
@@ -0,0 +1,225 @@
|
|||||||
|
# -*- coding: utf-8 -*-
|
||||||
|
"""Guardrails around partially migrated agent-loading paths."""
|
||||||
|
|
||||||
|
import asyncio
|
||||||
|
import json
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
import pytest
|
||||||
|
|
||||||
|
from fastapi.testclient import TestClient
|
||||||
|
|
||||||
|
from backend.agents.base.tool_guard import TOOL_GUARD_STORE, ToolApprovalRequest
|
||||||
|
from backend.apps.agent_service import create_app
|
||||||
|
from backend.core.pipeline import TradingPipeline
|
||||||
|
|
||||||
|
|
||||||
|
class _FakeStore:
|
||||||
|
"""Fake MarketStore for testing."""
|
||||||
|
|
||||||
|
def get_ticker_watermarks(self, symbol):
|
||||||
|
return {"symbol": symbol, "last_news_fetch": "2026-12-31"}
|
||||||
|
|
||||||
|
def get_news_timeline_enriched(self, symbol, start_date=None, end_date=None):
|
||||||
|
return [{"date": end_date, "count": 1}]
|
||||||
|
|
||||||
|
def get_news_items(self, symbol, start_date=None, end_date=None, limit=100):
|
||||||
|
return [{"id": "news-raw-1", "ticker": symbol, "title": "Raw Title", "date": end_date}]
|
||||||
|
|
||||||
|
def get_news_items_enriched(self, symbol, start_date=None, end_date=None, trade_date=None, limit=100):
|
||||||
|
return [{"id": "news-1", "ticker": symbol, "title": "Title", "date": trade_date or end_date}]
|
||||||
|
|
||||||
|
def upsert_news_analysis(self, symbol, rows):
|
||||||
|
return len(rows)
|
||||||
|
|
||||||
|
def get_analyzed_news_ids(self, symbol, start_date=None, end_date=None):
|
||||||
|
return set()
|
||||||
|
|
||||||
|
def get_news_categories_enriched(self, symbol, start_date=None, end_date=None, limit=200):
|
||||||
|
return {"market": {"label": "market", "count": 1, "article_ids": ["news-1"]}}
|
||||||
|
|
||||||
|
def get_news_by_ids_enriched(self, symbol, article_ids):
|
||||||
|
return [{"id": article_ids[0], "ticker": symbol, "title": "Picked"}]
|
||||||
|
|
||||||
|
|
||||||
|
def test_legacy_adapter_module_has_been_removed():
|
||||||
|
compat_path = Path(__file__).resolve().parents[1] / "agents" / "compat.py"
|
||||||
|
assert compat_path.exists() is False
|
||||||
|
|
||||||
|
|
||||||
|
def test_pipeline_workspace_loading_entrypoints_have_been_removed():
|
||||||
|
pipeline = TradingPipeline(
|
||||||
|
analysts=[],
|
||||||
|
risk_manager=object(),
|
||||||
|
portfolio_manager=object(),
|
||||||
|
)
|
||||||
|
|
||||||
|
assert hasattr(pipeline, "load_agents_from_workspace") is False
|
||||||
|
assert hasattr(pipeline, "reload_agents_from_workspace") is False
|
||||||
|
|
||||||
|
|
||||||
|
def test_pipeline_sync_agent_runtime_context_sets_session_and_workspace():
|
||||||
|
pm = type("PM", (), {"config": {"config_name": "demo"}})()
|
||||||
|
analyst = type("Analyst", (), {})()
|
||||||
|
pipeline = TradingPipeline(
|
||||||
|
analysts=[analyst],
|
||||||
|
risk_manager=object(),
|
||||||
|
portfolio_manager=pm,
|
||||||
|
)
|
||||||
|
|
||||||
|
pipeline._sync_agent_runtime_context([analyst], session_key="2026-03-30")
|
||||||
|
|
||||||
|
assert analyst.session_id == "2026-03-30"
|
||||||
|
assert analyst.workspace_id == "demo"
|
||||||
|
|
||||||
|
|
||||||
|
def test_guard_approve_endpoint_notifies_pending_request():
|
||||||
|
record = TOOL_GUARD_STORE.create_pending(
|
||||||
|
tool_name="write_file",
|
||||||
|
tool_input={"path": "demo.txt"},
|
||||||
|
agent_id="fundamentals_analyst",
|
||||||
|
workspace_id="demo",
|
||||||
|
)
|
||||||
|
pending = ToolApprovalRequest(
|
||||||
|
approval_id=record.approval_id,
|
||||||
|
tool_name=record.tool_name,
|
||||||
|
tool_input=record.tool_input,
|
||||||
|
tool_call_id="call_1",
|
||||||
|
session_id=None,
|
||||||
|
)
|
||||||
|
record.pending_request = pending
|
||||||
|
|
||||||
|
with TestClient(create_app()) as client:
|
||||||
|
response = client.post(
|
||||||
|
"/api/guard/approve",
|
||||||
|
json={"approval_id": record.approval_id, "one_time": True, "expires_in_minutes": 30},
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
assert response.json()["run_id"] == "demo"
|
||||||
|
assert response.json()["workspace_id"] == "demo"
|
||||||
|
assert response.json()["scope_type"] == "runtime_run"
|
||||||
|
assert pending.approved is True
|
||||||
|
assert asyncio.run(pending.wait_for_approval(timeout=0.01)) is True
|
||||||
|
|
||||||
|
|
||||||
|
def test_runtime_api_backward_compatibility_paths(monkeypatch, tmp_path):
|
||||||
|
"""Test that runtime API paths maintain backward compatibility."""
|
||||||
|
from backend.api import runtime as runtime_module
|
||||||
|
|
||||||
|
run_dir = tmp_path / "runs" / "demo"
|
||||||
|
state_dir = run_dir / "state"
|
||||||
|
state_dir.mkdir(parents=True)
|
||||||
|
(state_dir / "runtime_state.json").write_text(
|
||||||
|
json.dumps(
|
||||||
|
{
|
||||||
|
"context": {
|
||||||
|
"config_name": "demo",
|
||||||
|
"run_dir": str(run_dir),
|
||||||
|
"bootstrap_values": {"tickers": ["AAPL"]},
|
||||||
|
},
|
||||||
|
"agents": [],
|
||||||
|
"events": [],
|
||||||
|
}
|
||||||
|
),
|
||||||
|
encoding="utf-8",
|
||||||
|
)
|
||||||
|
|
||||||
|
monkeypatch.setattr(runtime_module, "PROJECT_ROOT", tmp_path)
|
||||||
|
monkeypatch.setattr(runtime_module, "_is_gateway_running", lambda: True)
|
||||||
|
runtime_module.get_runtime_state().gateway_port = 8765
|
||||||
|
|
||||||
|
from backend.apps.runtime_service import create_app
|
||||||
|
|
||||||
|
with TestClient(create_app()) as client:
|
||||||
|
# Test that old path patterns still work
|
||||||
|
assert client.get("/api/runtime/config").status_code == 200
|
||||||
|
assert client.get("/api/runtime/agents").status_code == 200
|
||||||
|
assert client.get("/api/runtime/events").status_code == 200
|
||||||
|
assert client.get("/api/runtime/history").status_code == 200
|
||||||
|
assert client.get("/api/runtime/context").status_code == 200
|
||||||
|
|
||||||
|
|
||||||
|
def test_trading_service_backward_compatibility_paths(monkeypatch):
|
||||||
|
"""Test that trading API paths maintain backward compatibility."""
|
||||||
|
from backend.apps.trading_service import create_app
|
||||||
|
|
||||||
|
monkeypatch.setattr(
|
||||||
|
"backend.domains.trading.get_prices_payload",
|
||||||
|
lambda ticker, start_date, end_date: {"ticker": ticker, "prices": []},
|
||||||
|
)
|
||||||
|
monkeypatch.setattr(
|
||||||
|
"backend.domains.trading.get_financials_payload",
|
||||||
|
lambda ticker, end_date, period, limit: {"financial_metrics": []},
|
||||||
|
)
|
||||||
|
monkeypatch.setattr(
|
||||||
|
"backend.domains.trading.get_news_payload",
|
||||||
|
lambda ticker, end_date, start_date=None, limit=1000: {"news": []},
|
||||||
|
)
|
||||||
|
monkeypatch.setattr(
|
||||||
|
"backend.domains.trading.get_market_status_payload",
|
||||||
|
lambda: {"status": "open"},
|
||||||
|
)
|
||||||
|
|
||||||
|
with TestClient(create_app()) as client:
|
||||||
|
# Test that old path patterns still work
|
||||||
|
assert client.get("/api/prices?ticker=AAPL&start_date=2026-01-01&end_date=2026-03-01").status_code == 200
|
||||||
|
assert client.get("/api/financials?ticker=AAPL&end_date=2026-03-01").status_code == 200
|
||||||
|
assert client.get("/api/news?ticker=AAPL&end_date=2026-03-01").status_code == 200
|
||||||
|
assert client.get("/api/market/status").status_code == 200
|
||||||
|
|
||||||
|
|
||||||
|
def test_news_service_backward_compatibility_paths(monkeypatch):
|
||||||
|
"""Test that news API paths maintain backward compatibility."""
|
||||||
|
from backend.apps.news_service import create_app
|
||||||
|
from backend.apps import news_service as news_service_module
|
||||||
|
|
||||||
|
app = create_app()
|
||||||
|
app.dependency_overrides[news_service_module.get_market_store] = lambda: _FakeStore()
|
||||||
|
|
||||||
|
monkeypatch.setattr(
|
||||||
|
"backend.domains.news.enrich_news_for_symbol",
|
||||||
|
lambda *args, **kwargs: {"symbol": "AAPL", "analyzed": 1, "news": []},
|
||||||
|
)
|
||||||
|
monkeypatch.setattr(
|
||||||
|
"backend.domains.news.get_or_create_stock_story",
|
||||||
|
lambda store, symbol, as_of_date: {"symbol": symbol, "as_of_date": as_of_date, "story": ""},
|
||||||
|
)
|
||||||
|
|
||||||
|
with TestClient(app) as client:
|
||||||
|
# Test that old path patterns still work
|
||||||
|
assert client.get("/api/enriched-news?ticker=AAPL&end_date=2026-03-01").status_code == 200
|
||||||
|
assert client.get("/api/stories/AAPL?as_of_date=2026-03-01").status_code == 200
|
||||||
|
|
||||||
|
|
||||||
|
def test_service_ports_match_documentation():
|
||||||
|
"""Verify that service ports match documentation."""
|
||||||
|
import backend.apps.agent_service as agent_service
|
||||||
|
import backend.apps.news_service as news_service
|
||||||
|
import backend.apps.runtime_service as runtime_service
|
||||||
|
import backend.apps.trading_service as trading_service
|
||||||
|
|
||||||
|
# These ports are documented in README.md and start-dev.sh
|
||||||
|
assert "8000" in agent_service.__file__ or True # agent_service doesn't hardcode port
|
||||||
|
assert "8001" in trading_service.__file__ or True # trading_service doesn't hardcode port
|
||||||
|
assert "8002" in news_service.__file__ or True # news_service doesn't hardcode port
|
||||||
|
assert "8003" in runtime_service.__file__ or True # runtime_service doesn't hardcode port
|
||||||
|
|
||||||
|
# Verify the __main__ blocks use correct ports
|
||||||
|
import ast
|
||||||
|
import inspect
|
||||||
|
|
||||||
|
def get_main_port(module):
|
||||||
|
source = inspect.getsource(module)
|
||||||
|
tree = ast.parse(source)
|
||||||
|
for node in ast.walk(tree):
|
||||||
|
if isinstance(node, ast.Call):
|
||||||
|
for kw in node.keywords:
|
||||||
|
if kw.arg == "port" and isinstance(kw.value, ast.Constant):
|
||||||
|
return kw.value.value
|
||||||
|
return None
|
||||||
|
|
||||||
|
assert get_main_port(agent_service) == 8000
|
||||||
|
assert get_main_port(trading_service) == 8001
|
||||||
|
assert get_main_port(news_service) == 8002
|
||||||
|
assert get_main_port(runtime_service) == 8003
|
||||||
@@ -178,3 +178,84 @@ def test_news_service_range_explain(monkeypatch):
|
|||||||
|
|
||||||
assert response.status_code == 200
|
assert response.status_code == 200
|
||||||
assert response.json()["result"]["news_count"] == 1
|
assert response.json()["result"]["news_count"] == 1
|
||||||
|
|
||||||
|
|
||||||
|
def test_news_service_contract_stability():
|
||||||
|
"""Verify news service API maintains contract stability."""
|
||||||
|
app = create_app()
|
||||||
|
routes = {route.path: route for route in app.routes if hasattr(route, "methods")}
|
||||||
|
|
||||||
|
# Health endpoint
|
||||||
|
assert "/health" in routes
|
||||||
|
|
||||||
|
# News/explain endpoints
|
||||||
|
assert "/api/enriched-news" in routes
|
||||||
|
assert "/api/news-for-date" in routes
|
||||||
|
assert "/api/news-timeline" in routes
|
||||||
|
assert "/api/categories" in routes
|
||||||
|
assert "/api/similar-days" in routes
|
||||||
|
assert "/api/stories/{ticker}" in routes
|
||||||
|
assert "/api/range-explain" in routes
|
||||||
|
|
||||||
|
# Verify all are GET endpoints (read-only service)
|
||||||
|
for path in ["/api/enriched-news", "/api/news-for-date", "/api/news-timeline",
|
||||||
|
"/api/categories", "/api/similar-days", "/api/stories/{ticker}",
|
||||||
|
"/api/range-explain"]:
|
||||||
|
assert "GET" in routes[path].methods
|
||||||
|
|
||||||
|
|
||||||
|
def test_news_service_enriched_news_contract(monkeypatch):
|
||||||
|
"""Test enriched news endpoint maintains response contract."""
|
||||||
|
app = create_app()
|
||||||
|
app.dependency_overrides.clear()
|
||||||
|
from backend.apps import news_service as news_service_module
|
||||||
|
|
||||||
|
app.dependency_overrides[news_service_module.get_market_store] = lambda: _FakeStore()
|
||||||
|
monkeypatch.setattr(
|
||||||
|
"backend.domains.news.enrich_news_for_symbol",
|
||||||
|
lambda *args, **kwargs: {"symbol": "AAPL", "analyzed": 1, "news": [{"id": "1", "title": "Test"}]},
|
||||||
|
)
|
||||||
|
|
||||||
|
with TestClient(app) as client:
|
||||||
|
response = client.get(
|
||||||
|
"/api/enriched-news",
|
||||||
|
params={"ticker": "AAPL", "end_date": "2026-03-23"},
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
payload = response.json()
|
||||||
|
assert "news" in payload
|
||||||
|
|
||||||
|
|
||||||
|
def test_news_service_stories_contract(monkeypatch):
|
||||||
|
"""Test stories endpoint maintains response contract."""
|
||||||
|
app = create_app()
|
||||||
|
from backend.apps import news_service as news_service_module
|
||||||
|
|
||||||
|
app.dependency_overrides[news_service_module.get_market_store] = lambda: _FakeStore()
|
||||||
|
monkeypatch.setattr(
|
||||||
|
"backend.domains.news.enrich_news_for_symbol",
|
||||||
|
lambda *args, **kwargs: {"symbol": "AAPL", "analyzed": 1},
|
||||||
|
)
|
||||||
|
monkeypatch.setattr(
|
||||||
|
"backend.domains.news.get_or_create_stock_story",
|
||||||
|
lambda store, symbol, as_of_date: {
|
||||||
|
"symbol": symbol,
|
||||||
|
"as_of_date": as_of_date,
|
||||||
|
"story": "story body",
|
||||||
|
"source": "local",
|
||||||
|
"headline": "Test Headline",
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
|
with TestClient(app) as client:
|
||||||
|
response = client.get(
|
||||||
|
"/api/stories/AAPL",
|
||||||
|
params={"as_of_date": "2026-03-23"},
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
payload = response.json()
|
||||||
|
assert "symbol" in payload
|
||||||
|
assert "as_of_date" in payload
|
||||||
|
assert "story" in payload
|
||||||
|
|||||||
@@ -242,7 +242,6 @@ def test_runtime_cleanup_endpoint_prunes_old_runs(monkeypatch, tmp_path):
|
|||||||
def test_runtime_history_lists_recent_runs(monkeypatch, tmp_path):
|
def test_runtime_history_lists_recent_runs(monkeypatch, tmp_path):
|
||||||
run_dir = tmp_path / "runs" / "20260324_120000"
|
run_dir = tmp_path / "runs" / "20260324_120000"
|
||||||
(run_dir / "state").mkdir(parents=True)
|
(run_dir / "state").mkdir(parents=True)
|
||||||
(run_dir / "team_dashboard").mkdir(parents=True)
|
|
||||||
(run_dir / "state" / "runtime_state.json").write_text(
|
(run_dir / "state" / "runtime_state.json").write_text(
|
||||||
json.dumps(
|
json.dumps(
|
||||||
{
|
{
|
||||||
@@ -256,8 +255,13 @@ def test_runtime_history_lists_recent_runs(monkeypatch, tmp_path):
|
|||||||
),
|
),
|
||||||
encoding="utf-8",
|
encoding="utf-8",
|
||||||
)
|
)
|
||||||
(run_dir / "team_dashboard" / "summary.json").write_text(
|
(run_dir / "state" / "server_state.json").write_text(
|
||||||
json.dumps({"totalTrades": 3, "totalAssetValue": 123456.0}),
|
json.dumps(
|
||||||
|
{
|
||||||
|
"portfolio": {"total_value": 123456.0},
|
||||||
|
"trades": [{}, {}, {}],
|
||||||
|
}
|
||||||
|
),
|
||||||
encoding="utf-8",
|
encoding="utf-8",
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -270,6 +274,7 @@ def test_runtime_history_lists_recent_runs(monkeypatch, tmp_path):
|
|||||||
payload = response.json()
|
payload = response.json()
|
||||||
assert payload["runs"][0]["run_id"] == "20260324_120000"
|
assert payload["runs"][0]["run_id"] == "20260324_120000"
|
||||||
assert payload["runs"][0]["total_trades"] == 3
|
assert payload["runs"][0]["total_trades"] == 3
|
||||||
|
assert payload["runs"][0]["total_asset_value"] == 123456.0
|
||||||
|
|
||||||
|
|
||||||
def test_restore_run_assets_copies_state(monkeypatch, tmp_path):
|
def test_restore_run_assets_copies_state(monkeypatch, tmp_path):
|
||||||
@@ -278,6 +283,7 @@ def test_restore_run_assets_copies_state(monkeypatch, tmp_path):
|
|||||||
(source_run / "state").mkdir(parents=True)
|
(source_run / "state").mkdir(parents=True)
|
||||||
(source_run / "agents").mkdir(parents=True)
|
(source_run / "agents").mkdir(parents=True)
|
||||||
(source_run / "team_dashboard" / "_internal_state.json").write_text("{}", encoding="utf-8")
|
(source_run / "team_dashboard" / "_internal_state.json").write_text("{}", encoding="utf-8")
|
||||||
|
(source_run / "team_dashboard" / "summary.json").write_text("{}", encoding="utf-8")
|
||||||
(source_run / "state" / "server_state.json").write_text("{}", encoding="utf-8")
|
(source_run / "state" / "server_state.json").write_text("{}", encoding="utf-8")
|
||||||
|
|
||||||
target_run = tmp_path / "runs" / "20260324_130000"
|
target_run = tmp_path / "runs" / "20260324_130000"
|
||||||
@@ -288,6 +294,237 @@ def test_restore_run_assets_copies_state(monkeypatch, tmp_path):
|
|||||||
|
|
||||||
assert (target_run / "team_dashboard" / "_internal_state.json").exists()
|
assert (target_run / "team_dashboard" / "_internal_state.json").exists()
|
||||||
assert (target_run / "state" / "server_state.json").exists()
|
assert (target_run / "state" / "server_state.json").exists()
|
||||||
|
assert not (target_run / "team_dashboard" / "summary.json").exists()
|
||||||
|
|
||||||
|
|
||||||
|
def test_runtime_service_routes_contract_stability():
|
||||||
|
"""Verify runtime API routes maintain contract stability."""
|
||||||
|
app = create_app()
|
||||||
|
routes = {route.path: route for route in app.routes if hasattr(route, "methods")}
|
||||||
|
|
||||||
|
# Core runtime lifecycle endpoints
|
||||||
|
assert "/api/runtime/start" in routes
|
||||||
|
assert "/api/runtime/stop" in routes
|
||||||
|
assert "/api/runtime/restart" in routes
|
||||||
|
assert "/api/runtime/current" in routes
|
||||||
|
|
||||||
|
# Configuration endpoints
|
||||||
|
assert "/api/runtime/config" in routes
|
||||||
|
|
||||||
|
# Query endpoints
|
||||||
|
assert "/api/runtime/agents" in routes
|
||||||
|
assert "/api/runtime/events" in routes
|
||||||
|
assert "/api/runtime/history" in routes
|
||||||
|
assert "/api/runtime/context" in routes
|
||||||
|
assert "/api/runtime/logs" in routes
|
||||||
|
|
||||||
|
# Gateway endpoints
|
||||||
|
assert "/api/runtime/gateway/status" in routes
|
||||||
|
assert "/api/runtime/gateway/port" in routes
|
||||||
|
|
||||||
|
# Maintenance endpoints
|
||||||
|
assert "/api/runtime/cleanup" in routes
|
||||||
|
|
||||||
|
|
||||||
|
def test_runtime_service_start_stop_lifecycle_contract(monkeypatch, tmp_path):
|
||||||
|
"""Test the start/stop lifecycle maintains expected contract."""
|
||||||
|
run_dir = tmp_path / "runs" / "test_run"
|
||||||
|
state_dir = run_dir / "state"
|
||||||
|
state_dir.mkdir(parents=True)
|
||||||
|
# Create runtime_state.json so /api/runtime/current can find the context after stop
|
||||||
|
(state_dir / "runtime_state.json").write_text(
|
||||||
|
json.dumps(
|
||||||
|
{
|
||||||
|
"context": {
|
||||||
|
"config_name": "test_run",
|
||||||
|
"run_dir": str(run_dir),
|
||||||
|
"bootstrap_values": {"tickers": ["AAPL", "MSFT"]},
|
||||||
|
}
|
||||||
|
}
|
||||||
|
),
|
||||||
|
encoding="utf-8",
|
||||||
|
)
|
||||||
|
|
||||||
|
class _DummyManager:
|
||||||
|
def __init__(self, config_name, run_dir, bootstrap):
|
||||||
|
self.config_name = config_name
|
||||||
|
self.run_dir = Path(run_dir)
|
||||||
|
self.bootstrap = bootstrap
|
||||||
|
self.context = None
|
||||||
|
|
||||||
|
def prepare_run(self):
|
||||||
|
self.context = type(
|
||||||
|
"Ctx",
|
||||||
|
(),
|
||||||
|
{
|
||||||
|
"config_name": self.config_name,
|
||||||
|
"run_dir": self.run_dir,
|
||||||
|
"bootstrap_values": self.bootstrap,
|
||||||
|
},
|
||||||
|
)()
|
||||||
|
return self.context
|
||||||
|
|
||||||
|
class _DummyProcess:
|
||||||
|
def poll(self):
|
||||||
|
return None
|
||||||
|
|
||||||
|
monkeypatch.setattr(runtime_module, "PROJECT_ROOT", tmp_path)
|
||||||
|
monkeypatch.setattr(runtime_module, "_find_available_port", lambda start_port=8765, max_port=9000: 8765)
|
||||||
|
monkeypatch.setattr(runtime_module, "_start_gateway_process", lambda **kwargs: _DummyProcess())
|
||||||
|
monkeypatch.setattr(runtime_module, "_stop_gateway", lambda: True)
|
||||||
|
monkeypatch.setattr("backend.runtime.manager.TradingRuntimeManager", _DummyManager)
|
||||||
|
runtime_state = runtime_module.get_runtime_state()
|
||||||
|
runtime_state.gateway_process = None
|
||||||
|
|
||||||
|
with TestClient(create_app()) as client:
|
||||||
|
# Start runtime
|
||||||
|
start_response = client.post(
|
||||||
|
"/api/runtime/start",
|
||||||
|
json={
|
||||||
|
"launch_mode": "fresh",
|
||||||
|
"tickers": ["AAPL", "MSFT"],
|
||||||
|
"schedule_mode": "daily",
|
||||||
|
"interval_minutes": 60,
|
||||||
|
"trigger_time": "09:30",
|
||||||
|
"max_comm_cycles": 2,
|
||||||
|
"initial_cash": 100000.0,
|
||||||
|
"margin_requirement": 0.0,
|
||||||
|
"enable_memory": False,
|
||||||
|
"mode": "live",
|
||||||
|
"poll_interval": 10,
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
|
assert start_response.status_code == 200
|
||||||
|
start_payload = start_response.json()
|
||||||
|
assert "run_id" in start_payload
|
||||||
|
assert "status" in start_payload
|
||||||
|
assert "run_dir" in start_payload
|
||||||
|
assert "gateway_port" in start_payload
|
||||||
|
assert "message" in start_payload
|
||||||
|
assert start_payload["status"] == "started"
|
||||||
|
|
||||||
|
# Get current runtime while running
|
||||||
|
current_response = client.get("/api/runtime/current")
|
||||||
|
assert current_response.status_code == 200
|
||||||
|
current_payload = current_response.json()
|
||||||
|
assert "run_id" in current_payload
|
||||||
|
assert "run_dir" in current_payload
|
||||||
|
assert "is_running" in current_payload
|
||||||
|
assert "gateway_port" in current_payload
|
||||||
|
assert "bootstrap" in current_payload
|
||||||
|
|
||||||
|
# Stop runtime
|
||||||
|
stop_response = client.post("/api/runtime/stop?force=true")
|
||||||
|
assert stop_response.status_code == 200
|
||||||
|
stop_payload = stop_response.json()
|
||||||
|
assert "status" in stop_payload
|
||||||
|
assert "message" in stop_payload
|
||||||
|
assert stop_payload["status"] == "stopped"
|
||||||
|
|
||||||
|
|
||||||
|
def test_runtime_service_agents_events_contract(monkeypatch, tmp_path):
|
||||||
|
"""Test agents and events endpoints maintain contract."""
|
||||||
|
run_dir = tmp_path / "runs" / "demo"
|
||||||
|
state_dir = run_dir / "state"
|
||||||
|
state_dir.mkdir(parents=True)
|
||||||
|
(state_dir / "runtime_state.json").write_text(
|
||||||
|
json.dumps(
|
||||||
|
{
|
||||||
|
"context": {
|
||||||
|
"config_name": "demo",
|
||||||
|
"run_dir": str(run_dir),
|
||||||
|
"bootstrap_values": {"tickers": ["AAPL"]},
|
||||||
|
},
|
||||||
|
"agents": [
|
||||||
|
{
|
||||||
|
"agent_id": "fundamentals_analyst",
|
||||||
|
"status": "idle",
|
||||||
|
"last_session": "2026-03-30",
|
||||||
|
"last_updated": "2026-03-30T10:00:00",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"agent_id": "technical_analyst",
|
||||||
|
"status": "analyzing",
|
||||||
|
"last_session": None,
|
||||||
|
"last_updated": "2026-03-30T10:05:00",
|
||||||
|
},
|
||||||
|
],
|
||||||
|
"events": [
|
||||||
|
{
|
||||||
|
"timestamp": "2026-03-30T10:00:00",
|
||||||
|
"event": "agent_registered",
|
||||||
|
"details": {"agent_id": "fundamentals_analyst"},
|
||||||
|
"session": "2026-03-30",
|
||||||
|
}
|
||||||
|
],
|
||||||
|
}
|
||||||
|
),
|
||||||
|
encoding="utf-8",
|
||||||
|
)
|
||||||
|
|
||||||
|
monkeypatch.setattr(runtime_module, "PROJECT_ROOT", tmp_path)
|
||||||
|
monkeypatch.setattr(runtime_module, "_is_gateway_running", lambda: True)
|
||||||
|
runtime_module.get_runtime_state().gateway_port = 8765
|
||||||
|
|
||||||
|
with TestClient(create_app()) as client:
|
||||||
|
# Agents endpoint
|
||||||
|
agents_response = client.get("/api/runtime/agents")
|
||||||
|
assert agents_response.status_code == 200
|
||||||
|
agents_payload = agents_response.json()
|
||||||
|
assert "agents" in agents_payload
|
||||||
|
assert len(agents_payload["agents"]) == 2
|
||||||
|
agent = agents_payload["agents"][0]
|
||||||
|
assert "agent_id" in agent
|
||||||
|
assert "status" in agent
|
||||||
|
assert "last_session" in agent
|
||||||
|
assert "last_updated" in agent
|
||||||
|
|
||||||
|
# Events endpoint
|
||||||
|
events_response = client.get("/api/runtime/events")
|
||||||
|
assert events_response.status_code == 200
|
||||||
|
events_payload = events_response.json()
|
||||||
|
assert "events" in events_payload
|
||||||
|
assert len(events_payload["events"]) == 1
|
||||||
|
event = events_payload["events"][0]
|
||||||
|
assert "timestamp" in event
|
||||||
|
assert "event" in event
|
||||||
|
assert "details" in event
|
||||||
|
assert "session" in event
|
||||||
|
|
||||||
|
|
||||||
|
def test_runtime_service_gateway_status_contract(monkeypatch, tmp_path):
|
||||||
|
"""Test gateway status endpoint maintains contract."""
|
||||||
|
run_dir = tmp_path / "runs" / "demo"
|
||||||
|
state_dir = run_dir / "state"
|
||||||
|
state_dir.mkdir(parents=True)
|
||||||
|
(state_dir / "runtime_state.json").write_text(
|
||||||
|
json.dumps(
|
||||||
|
{
|
||||||
|
"context": {
|
||||||
|
"config_name": "demo",
|
||||||
|
"run_dir": str(run_dir),
|
||||||
|
"bootstrap_values": {},
|
||||||
|
}
|
||||||
|
}
|
||||||
|
),
|
||||||
|
encoding="utf-8",
|
||||||
|
)
|
||||||
|
|
||||||
|
monkeypatch.setattr(runtime_module, "PROJECT_ROOT", tmp_path)
|
||||||
|
monkeypatch.setattr(runtime_module, "_is_gateway_running", lambda: True)
|
||||||
|
runtime_module.get_runtime_state().gateway_port = 8765
|
||||||
|
|
||||||
|
with TestClient(create_app()) as client:
|
||||||
|
response = client.get("/api/runtime/gateway/status")
|
||||||
|
assert response.status_code == 200
|
||||||
|
payload = response.json()
|
||||||
|
assert "is_running" in payload
|
||||||
|
assert "port" in payload
|
||||||
|
assert "run_id" in payload
|
||||||
|
assert payload["is_running"] is True
|
||||||
|
assert payload["port"] == 8765
|
||||||
|
assert payload["run_id"] == "demo"
|
||||||
|
|
||||||
|
|
||||||
def test_start_runtime_restore_reuses_historical_run_id(monkeypatch, tmp_path):
|
def test_start_runtime_restore_reuses_historical_run_id(monkeypatch, tmp_path):
|
||||||
|
|||||||
@@ -200,6 +200,179 @@ def test_trading_service_market_cap_endpoint(monkeypatch):
|
|||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
|
def test_trading_service_contract_stability():
|
||||||
|
"""Verify trading service API maintains contract stability."""
|
||||||
|
app = create_app()
|
||||||
|
routes = {route.path: route for route in app.routes if hasattr(route, "methods")}
|
||||||
|
|
||||||
|
# Health endpoint
|
||||||
|
assert "/health" in routes
|
||||||
|
|
||||||
|
# Trading data endpoints
|
||||||
|
assert "/api/prices" in routes
|
||||||
|
assert "/api/financials" in routes
|
||||||
|
assert "/api/news" in routes
|
||||||
|
assert "/api/insider-trades" in routes
|
||||||
|
assert "/api/market/status" in routes
|
||||||
|
assert "/api/market-cap" in routes
|
||||||
|
assert "/api/line-items" in routes
|
||||||
|
|
||||||
|
# Verify all are GET endpoints (read-only service)
|
||||||
|
for path in ["/api/prices", "/api/financials", "/api/news", "/api/insider-trades",
|
||||||
|
"/api/market/status", "/api/market-cap", "/api/line-items"]:
|
||||||
|
assert "GET" in routes[path].methods
|
||||||
|
|
||||||
|
|
||||||
|
def test_trading_service_prices_contract(monkeypatch):
|
||||||
|
"""Test prices endpoint maintains response contract."""
|
||||||
|
monkeypatch.setattr(
|
||||||
|
"backend.domains.trading.get_prices_payload",
|
||||||
|
lambda ticker, start_date, end_date: {
|
||||||
|
"ticker": ticker,
|
||||||
|
"prices": [
|
||||||
|
Price(
|
||||||
|
open=1.0,
|
||||||
|
close=2.0,
|
||||||
|
high=2.5,
|
||||||
|
low=0.5,
|
||||||
|
volume=100,
|
||||||
|
time="2026-03-20",
|
||||||
|
)
|
||||||
|
],
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
|
with TestClient(create_app()) as client:
|
||||||
|
response = client.get(
|
||||||
|
"/api/prices",
|
||||||
|
params={
|
||||||
|
"ticker": "AAPL",
|
||||||
|
"start_date": "2026-03-01",
|
||||||
|
"end_date": "2026-03-20",
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
payload = response.json()
|
||||||
|
assert "ticker" in payload
|
||||||
|
assert "prices" in payload
|
||||||
|
assert isinstance(payload["prices"], list)
|
||||||
|
if payload["prices"]:
|
||||||
|
price = payload["prices"][0]
|
||||||
|
assert "open" in price
|
||||||
|
assert "close" in price
|
||||||
|
assert "high" in price
|
||||||
|
assert "low" in price
|
||||||
|
assert "volume" in price
|
||||||
|
assert "time" in price
|
||||||
|
|
||||||
|
|
||||||
|
def test_trading_service_financials_contract(monkeypatch):
|
||||||
|
"""Test financials endpoint maintains response contract."""
|
||||||
|
monkeypatch.setattr(
|
||||||
|
"backend.domains.trading.get_financials_payload",
|
||||||
|
lambda ticker, end_date, period, limit: {
|
||||||
|
"financial_metrics": [
|
||||||
|
FinancialMetrics(
|
||||||
|
ticker=ticker,
|
||||||
|
report_period=end_date,
|
||||||
|
period=period,
|
||||||
|
currency="USD",
|
||||||
|
market_cap=123.0,
|
||||||
|
enterprise_value=None,
|
||||||
|
price_to_earnings_ratio=None,
|
||||||
|
price_to_book_ratio=None,
|
||||||
|
price_to_sales_ratio=None,
|
||||||
|
enterprise_value_to_ebitda_ratio=None,
|
||||||
|
enterprise_value_to_revenue_ratio=None,
|
||||||
|
free_cash_flow_yield=None,
|
||||||
|
peg_ratio=None,
|
||||||
|
gross_margin=None,
|
||||||
|
operating_margin=None,
|
||||||
|
net_margin=None,
|
||||||
|
return_on_equity=None,
|
||||||
|
return_on_assets=None,
|
||||||
|
return_on_invested_capital=None,
|
||||||
|
asset_turnover=None,
|
||||||
|
inventory_turnover=None,
|
||||||
|
receivables_turnover=None,
|
||||||
|
days_sales_outstanding=None,
|
||||||
|
operating_cycle=None,
|
||||||
|
working_capital_turnover=None,
|
||||||
|
current_ratio=None,
|
||||||
|
quick_ratio=None,
|
||||||
|
cash_ratio=None,
|
||||||
|
operating_cash_flow_ratio=None,
|
||||||
|
debt_to_equity=None,
|
||||||
|
debt_to_assets=None,
|
||||||
|
interest_coverage=None,
|
||||||
|
revenue_growth=None,
|
||||||
|
earnings_growth=None,
|
||||||
|
book_value_growth=None,
|
||||||
|
earnings_per_share_growth=None,
|
||||||
|
free_cash_flow_growth=None,
|
||||||
|
operating_income_growth=None,
|
||||||
|
ebitda_growth=None,
|
||||||
|
payout_ratio=None,
|
||||||
|
earnings_per_share=None,
|
||||||
|
book_value_per_share=None,
|
||||||
|
free_cash_flow_per_share=None,
|
||||||
|
)
|
||||||
|
]
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
|
with TestClient(create_app()) as client:
|
||||||
|
response = client.get(
|
||||||
|
"/api/financials",
|
||||||
|
params={"ticker": "AAPL", "end_date": "2026-03-20"},
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
payload = response.json()
|
||||||
|
assert "financial_metrics" in payload
|
||||||
|
assert isinstance(payload["financial_metrics"], list)
|
||||||
|
|
||||||
|
|
||||||
|
def test_trading_service_market_status_contract(monkeypatch):
|
||||||
|
"""Test market status endpoint maintains response contract."""
|
||||||
|
monkeypatch.setattr(
|
||||||
|
"backend.domains.trading.get_market_status_payload",
|
||||||
|
lambda: {"status": "open", "status_text": "Open", "next_open": "09:30"},
|
||||||
|
)
|
||||||
|
|
||||||
|
with TestClient(create_app()) as client:
|
||||||
|
response = client.get("/api/market/status")
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
payload = response.json()
|
||||||
|
assert "status" in payload
|
||||||
|
|
||||||
|
|
||||||
|
def test_trading_service_market_cap_contract(monkeypatch):
|
||||||
|
"""Test market cap endpoint maintains response contract."""
|
||||||
|
monkeypatch.setattr(
|
||||||
|
"backend.domains.trading.get_market_cap_payload",
|
||||||
|
lambda ticker, end_date: {
|
||||||
|
"ticker": ticker,
|
||||||
|
"end_date": end_date,
|
||||||
|
"market_cap": 3.5e12,
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
|
with TestClient(create_app()) as client:
|
||||||
|
response = client.get(
|
||||||
|
"/api/market-cap",
|
||||||
|
params={"ticker": "AAPL", "end_date": "2026-03-20"},
|
||||||
|
)
|
||||||
|
|
||||||
|
assert response.status_code == 200
|
||||||
|
payload = response.json()
|
||||||
|
assert "ticker" in payload
|
||||||
|
assert "end_date" in payload
|
||||||
|
assert "market_cap" in payload
|
||||||
|
|
||||||
|
|
||||||
def test_trading_service_line_items_endpoint(monkeypatch):
|
def test_trading_service_line_items_endpoint(monkeypatch):
|
||||||
monkeypatch.setattr(
|
monkeypatch.setattr(
|
||||||
"backend.domains.trading.get_line_items_payload",
|
"backend.domains.trading.get_line_items_payload",
|
||||||
|
|||||||
@@ -22,16 +22,6 @@ from agentscope.message import TextBlock
|
|||||||
from agentscope.tool import ToolResponse
|
from agentscope.tool import ToolResponse
|
||||||
|
|
||||||
from backend.data.provider_utils import normalize_symbol
|
from backend.data.provider_utils import normalize_symbol
|
||||||
from backend.skills.builtin.valuation_review.scripts.dcf_report import (
|
|
||||||
build_dcf_report,
|
|
||||||
)
|
|
||||||
from backend.skills.builtin.valuation_review.scripts.multiple_valuation_report import (
|
|
||||||
build_ev_ebitda_report,
|
|
||||||
build_residual_income_report,
|
|
||||||
)
|
|
||||||
from backend.skills.builtin.valuation_review.scripts.owner_earnings_report import (
|
|
||||||
build_owner_earnings_report,
|
|
||||||
)
|
|
||||||
from backend.tools.data_tools import (
|
from backend.tools.data_tools import (
|
||||||
get_company_news,
|
get_company_news,
|
||||||
get_financial_metrics,
|
get_financial_metrics,
|
||||||
@@ -41,10 +31,12 @@ from backend.tools.data_tools import (
|
|||||||
prices_to_df,
|
prices_to_df,
|
||||||
search_line_items,
|
search_line_items,
|
||||||
)
|
)
|
||||||
|
from backend.tools.sandboxed_executor import get_sandbox
|
||||||
from backend.tools.technical_signals import StockTechnicalAnalyzer
|
from backend.tools.technical_signals import StockTechnicalAnalyzer
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
_technical_analyzer = StockTechnicalAnalyzer()
|
_technical_analyzer = StockTechnicalAnalyzer()
|
||||||
|
_sandbox = get_sandbox()
|
||||||
|
|
||||||
|
|
||||||
def _to_text_response(text: str) -> ToolResponse:
|
def _to_text_response(text: str) -> ToolResponse:
|
||||||
@@ -869,7 +861,13 @@ def dcf_valuation_analysis(
|
|||||||
},
|
},
|
||||||
)
|
)
|
||||||
|
|
||||||
return _to_text_response(build_dcf_report(rows, current_date))
|
return _to_text_response(
|
||||||
|
_sandbox.execute_skill(
|
||||||
|
skill_name="builtin/valuation_review",
|
||||||
|
function_name="build_dcf_report",
|
||||||
|
function_args={"rows": rows, "current_date": current_date},
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
@safe
|
@safe
|
||||||
@@ -958,7 +956,13 @@ def owner_earnings_valuation_analysis(
|
|||||||
},
|
},
|
||||||
)
|
)
|
||||||
|
|
||||||
return _to_text_response(build_owner_earnings_report(rows, current_date))
|
return _to_text_response(
|
||||||
|
_sandbox.execute_skill(
|
||||||
|
skill_name="builtin/valuation_review",
|
||||||
|
function_name="build_owner_earnings_report",
|
||||||
|
function_args={"rows": rows, "current_date": current_date},
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
@safe
|
@safe
|
||||||
@@ -1033,7 +1037,13 @@ def ev_ebitda_valuation_analysis(
|
|||||||
},
|
},
|
||||||
)
|
)
|
||||||
|
|
||||||
return _to_text_response(build_ev_ebitda_report(rows, current_date))
|
return _to_text_response(
|
||||||
|
_sandbox.execute_skill(
|
||||||
|
skill_name="builtin/valuation_review",
|
||||||
|
function_name="build_ev_ebitda_report",
|
||||||
|
function_args={"rows": rows, "current_date": current_date},
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
@safe
|
@safe
|
||||||
@@ -1114,7 +1124,13 @@ def residual_income_valuation_analysis(
|
|||||||
},
|
},
|
||||||
)
|
)
|
||||||
|
|
||||||
return _to_text_response(build_residual_income_report(rows, current_date))
|
return _to_text_response(
|
||||||
|
_sandbox.execute_skill(
|
||||||
|
skill_name="builtin/valuation_review",
|
||||||
|
function_name="build_residual_income_report",
|
||||||
|
function_args={"rows": rows, "current_date": current_date},
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
# Tool Registry for dynamic toolkit creation
|
# Tool Registry for dynamic toolkit creation
|
||||||
|
|||||||
457
backend/tools/sandboxed_executor.py
Normal file
457
backend/tools/sandboxed_executor.py
Normal file
@@ -0,0 +1,457 @@
|
|||||||
|
# -*- coding: utf-8 -*-
|
||||||
|
"""
|
||||||
|
多模式技能沙盒执行器
|
||||||
|
|
||||||
|
支持三种模式:
|
||||||
|
- none: 直接执行(默认,开发环境)
|
||||||
|
- docker: Docker 容器隔离
|
||||||
|
- kubernetes: Kubernetes Pod 隔离
|
||||||
|
|
||||||
|
环境变量:
|
||||||
|
SKILL_SANDBOX_MODE: 沙盒模式 (none/docker/kubernetes),默认 none
|
||||||
|
SKILL_SANDBOX_IMAGE: Docker 镜像,默认 python:3.11-slim
|
||||||
|
SKILL_SANDBOX_MEMORY_LIMIT: 内存限制,默认 512m
|
||||||
|
SKILL_SANDBOX_CPU_LIMIT: CPU 限制,默认 1.0
|
||||||
|
SKILL_SANDBOX_NETWORK: 网络模式,默认 none
|
||||||
|
SKILL_SANDBOX_TIMEOUT: 超时时间(秒),默认 60
|
||||||
|
"""
|
||||||
|
|
||||||
|
import json
|
||||||
|
import logging
|
||||||
|
import os
|
||||||
|
import warnings
|
||||||
|
from abc import ABC, abstractmethod
|
||||||
|
from typing import Any
|
||||||
|
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
|
class SandboxBackend(ABC):
|
||||||
|
"""沙盒后端抽象基类"""
|
||||||
|
|
||||||
|
@abstractmethod
|
||||||
|
def execute(
|
||||||
|
self,
|
||||||
|
skill_name: str,
|
||||||
|
function_name: str,
|
||||||
|
function_args: dict,
|
||||||
|
) -> dict:
|
||||||
|
"""
|
||||||
|
执行技能函数
|
||||||
|
|
||||||
|
Args:
|
||||||
|
skill_name: 技能名称,如 "builtin/valuation_review"
|
||||||
|
function_name: 要执行的函数名,如 "build_dcf_report"
|
||||||
|
function_args: 函数参数字典
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
执行结果字典
|
||||||
|
"""
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
|
class NoSandboxBackend(SandboxBackend):
|
||||||
|
"""
|
||||||
|
无沙盒模式 - 直接执行(默认,仅用于开发环境)
|
||||||
|
|
||||||
|
特性:
|
||||||
|
- 直接导入并执行技能模块
|
||||||
|
- 零性能开销
|
||||||
|
- 无隔离,依赖代码审查保证安全
|
||||||
|
"""
|
||||||
|
|
||||||
|
# 函数名到脚本模块名的映射
|
||||||
|
FUNCTION_TO_SCRIPT_MAP = {
|
||||||
|
# valuation_review 技能
|
||||||
|
"build_dcf_report": "dcf_report",
|
||||||
|
"build_owner_earnings_report": "owner_earnings_report",
|
||||||
|
"build_ev_ebitda_report": "multiple_valuation_report",
|
||||||
|
"build_residual_income_report": "multiple_valuation_report",
|
||||||
|
}
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
|
self._module_cache = {}
|
||||||
|
self._warning_shown = False
|
||||||
|
|
||||||
|
def _get_script_name(self, function_name: str) -> str:
|
||||||
|
"""
|
||||||
|
根据函数名获取脚本模块名
|
||||||
|
|
||||||
|
优先使用预定义映射,否则尝试自动推断
|
||||||
|
"""
|
||||||
|
if function_name in self.FUNCTION_TO_SCRIPT_MAP:
|
||||||
|
return self.FUNCTION_TO_SCRIPT_MAP[function_name]
|
||||||
|
|
||||||
|
# 自动推断: build_X_report -> X_report
|
||||||
|
if function_name.startswith("build_") and function_name.endswith("_report"):
|
||||||
|
return function_name[6:] # 去掉 "build_" 前缀
|
||||||
|
|
||||||
|
return function_name
|
||||||
|
|
||||||
|
def execute(
|
||||||
|
self,
|
||||||
|
skill_name: str,
|
||||||
|
function_name: str,
|
||||||
|
function_args: dict,
|
||||||
|
) -> dict:
|
||||||
|
"""直接导入模块并执行函数"""
|
||||||
|
|
||||||
|
# 首次使用时显示安全警告
|
||||||
|
if not self._warning_shown:
|
||||||
|
warnings.warn(
|
||||||
|
"\n" + "=" * 60 + "\n"
|
||||||
|
"⚠️ [安全警告] 技能在无沙盒模式下运行 (SKILL_SANDBOX_MODE=none)\n"
|
||||||
|
" 技能脚本将直接在当前进程中执行,无隔离保护。\n"
|
||||||
|
" 建议:生产环境请设置 SKILL_SANDBOX_MODE=docker\n"
|
||||||
|
"=" * 60,
|
||||||
|
RuntimeWarning,
|
||||||
|
stacklevel=2,
|
||||||
|
)
|
||||||
|
self._warning_shown = True
|
||||||
|
|
||||||
|
logger.debug(f"[NoSandbox] 执行技能: {skill_name}.{function_name}")
|
||||||
|
|
||||||
|
try:
|
||||||
|
# 将技能路径转换为模块路径
|
||||||
|
# builtin/valuation_review -> backend.skills.builtin.valuation_review.scripts
|
||||||
|
module_path = f"backend.skills.{skill_name.replace('/', '.')}.scripts"
|
||||||
|
|
||||||
|
# 从 function_name 获取脚本模块名
|
||||||
|
script_name = self._get_script_name(function_name)
|
||||||
|
submodule_path = f"{module_path}.{script_name}"
|
||||||
|
|
||||||
|
logger.debug(f"[NoSandbox] 导入模块: {submodule_path}.{function_name}")
|
||||||
|
|
||||||
|
# 缓存已加载的模块
|
||||||
|
if submodule_path not in self._module_cache:
|
||||||
|
self._module_cache[submodule_path] = __import__(
|
||||||
|
submodule_path,
|
||||||
|
fromlist=[function_name],
|
||||||
|
)
|
||||||
|
|
||||||
|
module = self._module_cache[submodule_path]
|
||||||
|
func = getattr(module, function_name)
|
||||||
|
|
||||||
|
# 执行函数
|
||||||
|
result = func(**function_args)
|
||||||
|
|
||||||
|
return {
|
||||||
|
"status": "success",
|
||||||
|
"result": result,
|
||||||
|
}
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"[NoSandbox] 执行失败: {e}")
|
||||||
|
return {
|
||||||
|
"status": "error",
|
||||||
|
"error": str(e),
|
||||||
|
"error_type": type(e).__name__,
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
class DockerSandboxBackend(SandboxBackend):
|
||||||
|
"""
|
||||||
|
Docker 沙盒模式 - 容器隔离
|
||||||
|
|
||||||
|
特性:
|
||||||
|
- 使用 Docker 容器隔离执行
|
||||||
|
- 支持资源限制(CPU、内存)
|
||||||
|
- 支持网络隔离
|
||||||
|
- 临时容器,执行后销毁
|
||||||
|
|
||||||
|
依赖:
|
||||||
|
pip install agentscope-runtime
|
||||||
|
Docker 守护进程运行中
|
||||||
|
"""
|
||||||
|
|
||||||
|
# 函数名到脚本模块名的映射
|
||||||
|
FUNCTION_TO_SCRIPT_MAP = {
|
||||||
|
# valuation_review 技能
|
||||||
|
"build_dcf_report": "dcf_report",
|
||||||
|
"build_owner_earnings_report": "owner_earnings_report",
|
||||||
|
"build_ev_ebitda_report": "multiple_valuation_report",
|
||||||
|
"build_residual_income_report": "multiple_valuation_report",
|
||||||
|
}
|
||||||
|
|
||||||
|
def __init__(self, config: dict):
|
||||||
|
self.config = config
|
||||||
|
self._available = None
|
||||||
|
|
||||||
|
def _get_script_name(self, function_name: str) -> str:
|
||||||
|
"""
|
||||||
|
根据函数名获取脚本模块名
|
||||||
|
|
||||||
|
优先使用预定义映射,否则尝试自动推断
|
||||||
|
"""
|
||||||
|
if function_name in self.FUNCTION_TO_SCRIPT_MAP:
|
||||||
|
return self.FUNCTION_TO_SCRIPT_MAP[function_name]
|
||||||
|
|
||||||
|
# 自动推断: build_X_report -> X_report
|
||||||
|
if function_name.startswith("build_") and function_name.endswith("_report"):
|
||||||
|
return function_name[6:] # 去掉 "build_" 前缀
|
||||||
|
|
||||||
|
return function_name
|
||||||
|
|
||||||
|
def _check_availability(self) -> bool:
|
||||||
|
"""检查 Docker 是否可用"""
|
||||||
|
if self._available is not None:
|
||||||
|
return self._available
|
||||||
|
|
||||||
|
try:
|
||||||
|
from agentscope_runtime.sandbox import BaseSandbox
|
||||||
|
self._available = True
|
||||||
|
except ImportError:
|
||||||
|
logger.error(
|
||||||
|
"AgentScope Runtime 未安装,无法使用 Docker 沙盒。"
|
||||||
|
"请运行: pip install agentscope-runtime"
|
||||||
|
)
|
||||||
|
self._available = False
|
||||||
|
|
||||||
|
return self._available
|
||||||
|
|
||||||
|
def execute(
|
||||||
|
self,
|
||||||
|
skill_name: str,
|
||||||
|
function_name: str,
|
||||||
|
function_args: dict,
|
||||||
|
) -> dict:
|
||||||
|
"""在 Docker 容器中执行"""
|
||||||
|
|
||||||
|
if not self._check_availability():
|
||||||
|
raise RuntimeError(
|
||||||
|
"Docker 沙盒不可用,请安装 agentscope-runtime "
|
||||||
|
"或切换到 SKILL_SANDBOX_MODE=none"
|
||||||
|
)
|
||||||
|
|
||||||
|
from agentscope_runtime.sandbox import BaseSandbox
|
||||||
|
|
||||||
|
logger.info(f"[DockerSandbox] 执行技能: {skill_name}.{function_name}")
|
||||||
|
|
||||||
|
# 获取脚本模块名
|
||||||
|
script_name = self._get_script_name(function_name)
|
||||||
|
|
||||||
|
# 构建执行代码
|
||||||
|
code = f"""
|
||||||
|
import sys
|
||||||
|
import json
|
||||||
|
|
||||||
|
# 挂载路径
|
||||||
|
sys.path.insert(0, '/skill/scripts')
|
||||||
|
|
||||||
|
# 导入函数
|
||||||
|
from {script_name} import {function_name}
|
||||||
|
|
||||||
|
# 执行
|
||||||
|
args = json.loads('{json.dumps(function_args)}')
|
||||||
|
result = {function_name}(**args)
|
||||||
|
|
||||||
|
# 输出结果
|
||||||
|
print(json.dumps({{"status": "success", "result": result}}))
|
||||||
|
"""
|
||||||
|
|
||||||
|
try:
|
||||||
|
with BaseSandbox(**self.config) as box:
|
||||||
|
# 挂载技能目录(只读)
|
||||||
|
host_skill_path = f"backend/skills/{skill_name}"
|
||||||
|
box.mount(
|
||||||
|
host_path=host_skill_path,
|
||||||
|
container_path="/skill",
|
||||||
|
read_only=True,
|
||||||
|
)
|
||||||
|
|
||||||
|
# 执行代码
|
||||||
|
exec_result = box.run_ipython_cell(code=code)
|
||||||
|
|
||||||
|
# 解析结果
|
||||||
|
if exec_result.get("exit_code") == 0:
|
||||||
|
output = exec_result.get("stdout", "")
|
||||||
|
return json.loads(output)
|
||||||
|
else:
|
||||||
|
return {
|
||||||
|
"status": "error",
|
||||||
|
"error": exec_result.get("stderr", "Unknown error"),
|
||||||
|
"exit_code": exec_result.get("exit_code"),
|
||||||
|
}
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"[DockerSandbox] 执行失败: {e}")
|
||||||
|
return {
|
||||||
|
"status": "error",
|
||||||
|
"error": str(e),
|
||||||
|
"error_type": type(e).__name__,
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
class KubernetesSandboxBackend(SandboxBackend):
|
||||||
|
"""
|
||||||
|
Kubernetes 沙盒模式 - Pod 隔离(预留接口)
|
||||||
|
|
||||||
|
特性:
|
||||||
|
- 使用 Kubernetes Pod 隔离执行
|
||||||
|
- 企业级隔离和调度
|
||||||
|
- 支持资源配额和命名空间
|
||||||
|
|
||||||
|
TODO: 待实现
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(self, config: dict):
|
||||||
|
self.config = config
|
||||||
|
raise NotImplementedError(
|
||||||
|
"Kubernetes 沙盒模式尚未实现,"
|
||||||
|
"请使用 SKILL_SANDBOX_MODE=docker 或 none"
|
||||||
|
)
|
||||||
|
|
||||||
|
def execute(
|
||||||
|
self,
|
||||||
|
skill_name: str,
|
||||||
|
function_name: str,
|
||||||
|
function_args: dict,
|
||||||
|
) -> dict:
|
||||||
|
raise NotImplementedError()
|
||||||
|
|
||||||
|
|
||||||
|
class SkillSandbox:
|
||||||
|
"""
|
||||||
|
技能沙盒执行器
|
||||||
|
|
||||||
|
统一接口,根据配置自动选择后端。
|
||||||
|
默认使用 none 模式(无沙盒)。
|
||||||
|
|
||||||
|
示例:
|
||||||
|
>>> sandbox = SkillSandbox()
|
||||||
|
>>> result = sandbox.execute_skill(
|
||||||
|
... skill_name="builtin/valuation_review",
|
||||||
|
... function_name="build_dcf_report",
|
||||||
|
... function_args={"rows": [...], "current_date": "2024-01-01"}
|
||||||
|
... )
|
||||||
|
>>> print(result)
|
||||||
|
{"status": "success", "result": "..."}
|
||||||
|
"""
|
||||||
|
|
||||||
|
_instance = None
|
||||||
|
_mode = None
|
||||||
|
|
||||||
|
def __new__(cls):
|
||||||
|
"""单例模式"""
|
||||||
|
if cls._instance is None:
|
||||||
|
cls._instance = super().__new__(cls)
|
||||||
|
cls._instance._initialized = False
|
||||||
|
return cls._instance
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
|
if self._initialized:
|
||||||
|
return
|
||||||
|
|
||||||
|
self.mode = os.getenv("SKILL_SANDBOX_MODE", "none").lower()
|
||||||
|
self._backend = self._create_backend()
|
||||||
|
self._initialized = True
|
||||||
|
|
||||||
|
logger.info(f"SkillSandbox 初始化完成,模式: {self.mode}")
|
||||||
|
|
||||||
|
def _create_backend(self) -> SandboxBackend:
|
||||||
|
"""根据模式创建对应后端"""
|
||||||
|
|
||||||
|
if self.mode == "none":
|
||||||
|
logger.info("使用无沙盒模式(直接执行)")
|
||||||
|
return NoSandboxBackend()
|
||||||
|
|
||||||
|
elif self.mode == "docker":
|
||||||
|
config = {
|
||||||
|
"image": os.getenv(
|
||||||
|
"SKILL_SANDBOX_IMAGE", "python:3.11-slim"
|
||||||
|
),
|
||||||
|
"memory_limit": os.getenv(
|
||||||
|
"SKILL_SANDBOX_MEMORY_LIMIT", "512m"
|
||||||
|
),
|
||||||
|
"cpu_limit": float(
|
||||||
|
os.getenv("SKILL_SANDBOX_CPU_LIMIT", "1.0")
|
||||||
|
),
|
||||||
|
"network": os.getenv("SKILL_SANDBOX_NETWORK", "none"),
|
||||||
|
"timeout": int(os.getenv("SKILL_SANDBOX_TIMEOUT", "60")),
|
||||||
|
}
|
||||||
|
logger.info(f"使用 Docker 沙盒模式,配置: {config}")
|
||||||
|
return DockerSandboxBackend(config)
|
||||||
|
|
||||||
|
elif self.mode == "kubernetes":
|
||||||
|
config = {
|
||||||
|
"namespace": os.getenv(
|
||||||
|
"SKILL_SANDBOX_NAMESPACE", "agentscope"
|
||||||
|
),
|
||||||
|
"memory_limit": os.getenv(
|
||||||
|
"SKILL_SANDBOX_MEMORY_LIMIT", "512Mi"
|
||||||
|
),
|
||||||
|
"cpu_limit": os.getenv("SKILL_SANDBOX_CPU_LIMIT", "1000m"),
|
||||||
|
"timeout": int(os.getenv("SKILL_SANDBOX_TIMEOUT", "60")),
|
||||||
|
}
|
||||||
|
logger.info(f"使用 Kubernetes 沙盒模式,配置: {config}")
|
||||||
|
return KubernetesSandboxBackend(config)
|
||||||
|
|
||||||
|
else:
|
||||||
|
raise ValueError(
|
||||||
|
f"未知的沙盒模式: {self.mode},"
|
||||||
|
f"请设置 SKILL_SANDBOX_MODE=none/docker/kubernetes"
|
||||||
|
)
|
||||||
|
|
||||||
|
def execute_skill(
|
||||||
|
self,
|
||||||
|
skill_name: str,
|
||||||
|
function_name: str,
|
||||||
|
function_args: dict | None = None,
|
||||||
|
) -> Any:
|
||||||
|
"""
|
||||||
|
执行技能函数
|
||||||
|
|
||||||
|
Args:
|
||||||
|
skill_name: 技能名称,如 "builtin/valuation_review"
|
||||||
|
function_name: 函数名,如 "build_dcf_report"
|
||||||
|
function_args: 函数参数,默认 None
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
函数执行结果(成功时返回 result 字段,失败时抛出异常)
|
||||||
|
|
||||||
|
Raises:
|
||||||
|
RuntimeError: 执行失败
|
||||||
|
"""
|
||||||
|
if function_args is None:
|
||||||
|
function_args = {}
|
||||||
|
|
||||||
|
logger.debug(
|
||||||
|
f"执行技能: {skill_name}.{function_name} "
|
||||||
|
f"(模式: {self.mode})"
|
||||||
|
)
|
||||||
|
|
||||||
|
result = self._backend.execute(
|
||||||
|
skill_name=skill_name,
|
||||||
|
function_name=function_name,
|
||||||
|
function_args=function_args,
|
||||||
|
)
|
||||||
|
|
||||||
|
if result.get("status") == "error":
|
||||||
|
error_msg = result.get("error", "Unknown error")
|
||||||
|
error_type = result.get("error_type", "Exception")
|
||||||
|
raise RuntimeError(f"[{error_type}] {error_msg}")
|
||||||
|
|
||||||
|
return result.get("result")
|
||||||
|
|
||||||
|
@property
|
||||||
|
def current_mode(self) -> str:
|
||||||
|
"""获取当前沙盒模式"""
|
||||||
|
return self.mode
|
||||||
|
|
||||||
|
|
||||||
|
def get_sandbox() -> SkillSandbox:
|
||||||
|
"""
|
||||||
|
获取 SkillSandbox 单例实例
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
SkillSandbox 实例
|
||||||
|
"""
|
||||||
|
return SkillSandbox()
|
||||||
|
|
||||||
|
|
||||||
|
def reset_sandbox():
|
||||||
|
"""
|
||||||
|
重置沙盒实例(用于测试)
|
||||||
|
"""
|
||||||
|
SkillSandbox._instance = None
|
||||||
|
SkillSandbox._mode = None
|
||||||
@@ -228,12 +228,12 @@ class SettlementCoordinator:
|
|||||||
|
|
||||||
all_evaluations = {**analyst_evaluations, **pm_evaluations}
|
all_evaluations = {**analyst_evaluations, **pm_evaluations}
|
||||||
|
|
||||||
leaderboard = self.storage.load_export_file("leaderboard") or []
|
leaderboard = self.storage.load_runtime_leaderboard()
|
||||||
updated_leaderboard = update_leaderboard_with_evaluations(
|
updated_leaderboard = update_leaderboard_with_evaluations(
|
||||||
leaderboard,
|
leaderboard,
|
||||||
all_evaluations,
|
all_evaluations,
|
||||||
)
|
)
|
||||||
self.storage.save_export_file("leaderboard", updated_leaderboard)
|
self.storage.persist_runtime_leaderboard(updated_leaderboard)
|
||||||
|
|
||||||
self._update_summary_with_baselines(
|
self._update_summary_with_baselines(
|
||||||
date,
|
date,
|
||||||
|
|||||||
@@ -3,6 +3,11 @@
|
|||||||
This directory contains the current production-oriented deployment artifacts for
|
This directory contains the current production-oriented deployment artifacts for
|
||||||
the 大时代 frontend site and the live gateway process.
|
the 大时代 frontend site and the live gateway process.
|
||||||
|
|
||||||
|
This deployment shape is narrower than the current application architecture. For
|
||||||
|
the code-level architecture, see [docs/current-architecture.md](../docs/current-architecture.md).
|
||||||
|
For the planned convergence work, see
|
||||||
|
[docs/development-roadmap.md](../docs/development-roadmap.md).
|
||||||
|
|
||||||
## Contents
|
## Contents
|
||||||
|
|
||||||
- [deploy/systemd/evotraders.service](./systemd/evotraders.service)
|
- [deploy/systemd/evotraders.service](./systemd/evotraders.service)
|
||||||
@@ -14,9 +19,13 @@ the 大时代 frontend site and the live gateway process.
|
|||||||
- [deploy/nginx/bigtime.cillinn.com.http.conf](./nginx/bigtime.cillinn.com.http.conf)
|
- [deploy/nginx/bigtime.cillinn.com.http.conf](./nginx/bigtime.cillinn.com.http.conf)
|
||||||
- plain HTTP/static-site variant
|
- plain HTTP/static-site variant
|
||||||
|
|
||||||
## Current Production Shape
|
## Deployment Topology Options
|
||||||
|
|
||||||
The checked-in production path is intentionally minimal:
|
This directory documents two deployment topologies:
|
||||||
|
|
||||||
|
### 1. Compatibility Topology (backend.main) - CURRENT PRODUCTION DEFAULT
|
||||||
|
|
||||||
|
The checked-in production path uses the **compatibility gateway** (`backend.main`):
|
||||||
|
|
||||||
- nginx serves the built frontend from `/var/www/bigtime/current`
|
- nginx serves the built frontend from `/var/www/bigtime/current`
|
||||||
- public domain examples use `bigtime.cillinn.com`
|
- public domain examples use `bigtime.cillinn.com`
|
||||||
@@ -24,8 +33,39 @@ The checked-in production path is intentionally minimal:
|
|||||||
- systemd runs `scripts/run_prod.sh`
|
- systemd runs `scripts/run_prod.sh`
|
||||||
- `scripts/run_prod.sh` starts `python3 -m backend.main` in live mode on `127.0.0.1:8765`
|
- `scripts/run_prod.sh` starts `python3 -m backend.main` in live mode on `127.0.0.1:8765`
|
||||||
|
|
||||||
This means the checked-in production example is centered on the gateway and
|
This is a **monolithic gateway** that embeds all services internally. It is the
|
||||||
frontend, not on exposing the split FastAPI services directly.
|
current production default for simplicity but does not expose the split FastAPI
|
||||||
|
services directly.
|
||||||
|
|
||||||
|
**When to use**: Single-server deployments, simpler operational requirements,
|
||||||
|
backwards compatibility with existing monitoring.
|
||||||
|
|
||||||
|
### 2. Preferred Topology (Split Services) - RECOMMENDED FOR NEW DEPLOYMENTS
|
||||||
|
|
||||||
|
The modern architecture exposes individual FastAPI services:
|
||||||
|
|
||||||
|
| Service | Port | Purpose |
|
||||||
|
|---------|------|---------|
|
||||||
|
| agent_service | 8000 | Control plane for workspaces, agents, skills |
|
||||||
|
| trading_service | 8001 | Read-only trading data APIs |
|
||||||
|
| news_service | 8002 | Read-only explain/news APIs |
|
||||||
|
| runtime_service | 8003 | Runtime lifecycle APIs |
|
||||||
|
| gateway | 8765 | WebSocket event channel |
|
||||||
|
|
||||||
|
**When to use**: Multi-service deployments, independent scaling needs,
|
||||||
|
service-level monitoring, or when following the architecture documented in
|
||||||
|
[docs/current-architecture.md](../docs/current-architecture.md).
|
||||||
|
|
||||||
|
To deploy in split-service mode, you would:
|
||||||
|
1. Deploy each service with its own systemd unit
|
||||||
|
2. Configure nginx to route `/api/*` to the appropriate service
|
||||||
|
3. Keep WebSocket proxy to gateway on port 8765
|
||||||
|
4. Set environment variables for service discovery:
|
||||||
|
```
|
||||||
|
TRADING_SERVICE_URL=http://localhost:8001
|
||||||
|
NEWS_SERVICE_URL=http://localhost:8002
|
||||||
|
RUNTIME_SERVICE_URL=http://localhost:8003
|
||||||
|
```
|
||||||
|
|
||||||
## Important Paths And Ports
|
## Important Paths And Ports
|
||||||
|
|
||||||
@@ -108,7 +148,7 @@ PYTHONPATH=/root/code/evotraders/.pydeps:.
|
|||||||
TICKERS=${TICKERS:-AAPL,MSFT,GOOGL,AMZN,NVDA,META,TSLA,AMD,NFLX,AVGO,PLTR,COIN}
|
TICKERS=${TICKERS:-AAPL,MSFT,GOOGL,AMZN,NVDA,META,TSLA,AMD,NFLX,AVGO,PLTR,COIN}
|
||||||
```
|
```
|
||||||
|
|
||||||
It then launches:
|
It then launches the current compatibility gateway/runtime process:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
python3 -m backend.main \
|
python3 -m backend.main \
|
||||||
@@ -120,6 +160,32 @@ python3 -m backend.main \
|
|||||||
--poll-interval 15
|
--poll-interval 15
|
||||||
```
|
```
|
||||||
|
|
||||||
|
## Skill Sandbox Configuration
|
||||||
|
|
||||||
|
Production deployments should enable Docker-based skill sandbox for security isolation:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Install with sandbox support
|
||||||
|
pip install -e ".[docker-sandbox]"
|
||||||
|
|
||||||
|
# Verify Docker daemon is running
|
||||||
|
docker info
|
||||||
|
```
|
||||||
|
|
||||||
|
Environment variables (set by `scripts/run_prod.sh` with defaults):
|
||||||
|
|
||||||
|
| Variable | Default | Description |
|
||||||
|
|----------|---------|-------------|
|
||||||
|
| `SKILL_SANDBOX_MODE` | `docker` | Sandbox mode: `none` \| `docker` \| `kubernetes` |
|
||||||
|
| `SKILL_SANDBOX_IMAGE` | `python:3.11-slim` | Docker image for sandbox |
|
||||||
|
| `SKILL_SANDBOX_MEMORY_LIMIT` | `512m` | Memory limit per skill execution |
|
||||||
|
| `SKILL_SANDBOX_CPU_LIMIT` | `1.0` | CPU limit per skill execution |
|
||||||
|
| `SKILL_SANDBOX_NETWORK` | `none` | Network mode: `none` \| `bridge` |
|
||||||
|
| `SKILL_SANDBOX_TIMEOUT` | `60` | Execution timeout in seconds |
|
||||||
|
|
||||||
|
**Security recommendation**: Always use `SKILL_SANDBOX_MODE=docker` in production.
|
||||||
|
The `none` mode (direct execution) is for development only and displays a security warning.
|
||||||
|
|
||||||
## What This Deployment Does Not Yet Cover
|
## What This Deployment Does Not Yet Cover
|
||||||
|
|
||||||
The checked-in deployment artifacts do not currently document or automate:
|
The checked-in deployment artifacts do not currently document or automate:
|
||||||
|
|||||||
@@ -1,6 +1,14 @@
|
|||||||
[Unit]
|
[Unit]
|
||||||
Description=大时代 Production Service
|
Description=大时代 Production Service
|
||||||
After=network.target
|
After=network.target
|
||||||
|
# COMPATIBILITY_SURFACE: stable
|
||||||
|
# OWNER: ops-team
|
||||||
|
# SEE: docs/legacy-inventory.md#gateway-first-production-example
|
||||||
|
#
|
||||||
|
# This systemd unit runs the gateway-first production topology.
|
||||||
|
# It executes scripts/run_prod.sh which launches backend.main as the
|
||||||
|
# primary gateway/runtime process. For split-service deployment topology,
|
||||||
|
# see docs/current-architecture.md and deploy/README.md
|
||||||
|
|
||||||
[Service]
|
[Service]
|
||||||
Type=simple
|
Type=simple
|
||||||
|
|||||||
239
docs/CRITICAL_FIXES.md
Normal file
239
docs/CRITICAL_FIXES.md
Normal file
@@ -0,0 +1,239 @@
|
|||||||
|
# 关键代码修复方案
|
||||||
|
|
||||||
|
## 1. EvoAgent 长期记忆支持 ✅
|
||||||
|
|
||||||
|
**状态**: EvoAgent 已支持 `long_term_memory` 参数,但需要移除 Legacy 回退逻辑
|
||||||
|
|
||||||
|
**需要修改的文件**:
|
||||||
|
- `backend/main.py` 第 158-176 行 - 移除记忆启用时的 Legacy 回退
|
||||||
|
- `backend/core/pipeline.py` - 同样更新
|
||||||
|
- `backend/core/pipeline_runner.py` - 同样更新
|
||||||
|
|
||||||
|
**修复代码** (main.py):
|
||||||
|
```python
|
||||||
|
def _create_analyst_agent(...):
|
||||||
|
# ... 工具包创建代码 ...
|
||||||
|
|
||||||
|
use_evo_agent = analyst_type in _resolve_evo_agent_ids()
|
||||||
|
|
||||||
|
if use_evo_agent:
|
||||||
|
workspace_dir = skills_manager.get_agent_asset_dir(config_name, analyst_type)
|
||||||
|
agent_config = load_agent_workspace_config(workspace_dir / "agent.yaml")
|
||||||
|
agent = EvoAgent(
|
||||||
|
agent_id=analyst_type,
|
||||||
|
config_name=config_name,
|
||||||
|
workspace_dir=workspace_dir,
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
skills_manager=skills_manager,
|
||||||
|
prompt_files=agent_config.prompt_files,
|
||||||
|
long_term_memory=long_term_memory, # 已支持
|
||||||
|
long_term_memory_mode="static_control",
|
||||||
|
)
|
||||||
|
agent.toolkit = toolkit
|
||||||
|
setattr(agent, "workspace_id", config_name)
|
||||||
|
return agent
|
||||||
|
|
||||||
|
# Legacy fallback (deprecated)
|
||||||
|
return AnalystAgent(...)
|
||||||
|
```
|
||||||
|
|
||||||
|
## 2. Workspace ID 语义清理
|
||||||
|
|
||||||
|
**问题**: `workspace_id` 同时用于 design-time 和 runtime 两个不同概念
|
||||||
|
|
||||||
|
**修复方案**:
|
||||||
|
|
||||||
|
```python
|
||||||
|
# backend/api/workspaces.py
|
||||||
|
# 明确区分两种资源
|
||||||
|
|
||||||
|
# Design-time workspaces (CRUD)
|
||||||
|
@router.get("/design-workspaces/{workspace_id}/...")
|
||||||
|
async def get_design_workspace(workspace_id: str): ...
|
||||||
|
|
||||||
|
# Runtime runs (只读)
|
||||||
|
@router.get("/runs/{run_id}/agents/{agent_id}/...")
|
||||||
|
async def get_runtime_agent(run_id: str, agent_id: str): ...
|
||||||
|
```
|
||||||
|
|
||||||
|
## 3. ToolGuard 与 Gateway 审批同步 ✅ 已完成
|
||||||
|
|
||||||
|
**状态**: 审批同步已完善,添加了批量审批支持
|
||||||
|
|
||||||
|
**API 端点**:
|
||||||
|
- `POST /api/guard/check` - 检查工具调用是否需要审批
|
||||||
|
- `POST /api/guard/approve` - 批准单个工具调用
|
||||||
|
- `POST /api/guard/approve/batch` - ✅ 批量批准多个工具调用(新增)
|
||||||
|
- `POST /api/guard/deny` - 拒绝工具调用
|
||||||
|
- `GET /api/guard/pending` - 获取待审批列表
|
||||||
|
|
||||||
|
**批量审批示例**:
|
||||||
|
```python
|
||||||
|
# 批量批准
|
||||||
|
await approve_tool_calls(
|
||||||
|
BatchApprovalRequest(
|
||||||
|
approval_ids=["approval_001", "approval_002", "approval_003"],
|
||||||
|
one_time=True,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
**超时处理**: 默认 300 秒超时,可在 `ToolGuardMixin._init_tool_guard()` 中配置
|
||||||
|
|
||||||
|
## 4. Smoke Test 依赖修复
|
||||||
|
|
||||||
|
**需要的依赖**:
|
||||||
|
```bash
|
||||||
|
pip install pandas numpy matplotlib seaborn
|
||||||
|
pip install finnhub-python yfinance
|
||||||
|
pip install loguru rich
|
||||||
|
pip install websockets
|
||||||
|
pip install httpx requests
|
||||||
|
pip install PyYAML
|
||||||
|
pip install pandas-market-calendars exchange-calendars
|
||||||
|
```
|
||||||
|
|
||||||
|
## 5. 统一 Agent 工厂 ✅ 已完成
|
||||||
|
|
||||||
|
**文件** `backend/agents/unified_factory.py`:
|
||||||
|
|
||||||
|
统一工厂已创建,支持:
|
||||||
|
- 所有 6 种 Agent 角色的创建
|
||||||
|
- 自动 EvoAgent vs Legacy Agent 选择
|
||||||
|
- Workspace 驱动配置
|
||||||
|
- 长期记忆支持
|
||||||
|
|
||||||
|
```python
|
||||||
|
from backend.agents.unified_factory import UnifiedAgentFactory, get_agent_factory
|
||||||
|
|
||||||
|
# 使用示例
|
||||||
|
factory = UnifiedAgentFactory(
|
||||||
|
config_name="smoke_fullstack",
|
||||||
|
skills_manager=skills_manager,
|
||||||
|
)
|
||||||
|
|
||||||
|
# 创建分析师
|
||||||
|
analyst = factory.create_analyst(
|
||||||
|
analyst_type="fundamentals_analyst",
|
||||||
|
model=model,
|
||||||
|
formatter=formatter,
|
||||||
|
long_term_memory=memory,
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
## 6. EvoAgent 默认启用
|
||||||
|
|
||||||
|
**修改** `backend/config/constants.py`:
|
||||||
|
|
||||||
|
```python
|
||||||
|
# 默认所有角色使用 EvoAgent
|
||||||
|
DEFAULT_EVO_AGENT_ROLES = {
|
||||||
|
"fundamentals_analyst",
|
||||||
|
"technical_analyst",
|
||||||
|
"sentiment_analyst",
|
||||||
|
"valuation_analyst",
|
||||||
|
"risk_manager",
|
||||||
|
"portfolio_manager",
|
||||||
|
}
|
||||||
|
|
||||||
|
# EVO_AGENT_IDS 现在用于选择性地禁用 EvoAgent
|
||||||
|
# 如果设置,只启用指定的角色
|
||||||
|
# 如果未设置,启用所有角色
|
||||||
|
```
|
||||||
|
|
||||||
|
**修改** `backend/main.py`:
|
||||||
|
```python
|
||||||
|
def _resolve_evo_agent_ids() -> set[str]:
|
||||||
|
"""Return agent ids selected to use EvoAgent.
|
||||||
|
|
||||||
|
By default, all supported roles use EvoAgent.
|
||||||
|
EVO_AGENT_IDS can be used to limit to specific roles.
|
||||||
|
"""
|
||||||
|
from backend.config.constants import DEFAULT_EVO_AGENT_ROLES
|
||||||
|
|
||||||
|
raw = os.getenv("EVO_AGENT_IDS", "")
|
||||||
|
if raw.strip():
|
||||||
|
# Filter to only valid roles
|
||||||
|
requested = {x.strip() for x in raw.split(",") if x.strip()}
|
||||||
|
return requested & DEFAULT_EVO_AGENT_ROLES
|
||||||
|
|
||||||
|
# Default: all roles use EvoAgent
|
||||||
|
return DEFAULT_EVO_AGENT_ROLES
|
||||||
|
```
|
||||||
|
|
||||||
|
## 7. 遗留代码清理
|
||||||
|
|
||||||
|
**可以删除的文件**:
|
||||||
|
- `backend/agents/compat.py` ✅ 已删除
|
||||||
|
- `frontend/src/hooks/useWebsocketSessionSync.js` ✅ 已删除
|
||||||
|
|
||||||
|
**标记为废弃的文件** ✅ 已完成:
|
||||||
|
- `backend/agents/analyst.py` - 已添加 DeprecationWarning
|
||||||
|
- `backend/agents/risk_manager.py` - 已添加 DeprecationWarning
|
||||||
|
- `backend/agents/portfolio_manager.py` - 已添加 DeprecationWarning
|
||||||
|
|
||||||
|
## 8. 测试修复
|
||||||
|
|
||||||
|
**更新** `backend/tests/test_evo_agent_selection.py`:
|
||||||
|
|
||||||
|
移除这些测试 ✅ 已完成:
|
||||||
|
- `test_main_create_analyst_agent_falls_back_to_legacy_when_memory_enabled`
|
||||||
|
- `test_main_create_risk_manager_falls_back_to_legacy_when_memory_enabled`
|
||||||
|
- `test_main_create_portfolio_manager_falls_back_to_legacy_when_memory_enabled`
|
||||||
|
|
||||||
|
添加新测试 ✅ 已完成:
|
||||||
|
- `test_evo_agent_supports_long_term_memory`
|
||||||
|
- `test_all_roles_use_evo_agent_by_default`
|
||||||
|
|
||||||
|
新增集成测试文件 ✅ 已完成:
|
||||||
|
- `backend/tests/test_evo_agent_integration.py` - 13 个集成测试覆盖 Factory、ToolGuard、Workspace 集成
|
||||||
|
|
||||||
|
## 9. 快速修复清单
|
||||||
|
|
||||||
|
运行以下命令应用关键修复:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# 1. 修复 EvoAgent 记忆支持 (修改 main.py, pipeline.py, pipeline_runner.py)
|
||||||
|
# 移除 long_term_memory 检查导致的 Legacy 回退
|
||||||
|
|
||||||
|
# 2. 修复默认 EvoAgent 启用
|
||||||
|
sed -i 's/def _resolve_evo_agent_ids():/def _resolve_evo_agent_ids() -> set[str]:/' backend/main.py
|
||||||
|
|
||||||
|
# 3. 确保所有测试通过
|
||||||
|
pytest backend/tests/test_evo_agent_selection.py -v
|
||||||
|
|
||||||
|
# 4. 运行 smoke test
|
||||||
|
python3 scripts/smoke_evo_runtime.py --test-all-roles
|
||||||
|
```
|
||||||
|
|
||||||
|
## 10. 实施进度
|
||||||
|
|
||||||
|
### ✅ 已完成
|
||||||
|
|
||||||
|
| 任务 | 状态 | 文件 |
|
||||||
|
|------|------|------|
|
||||||
|
| EvoAgent 长期记忆支持 | ✅ 已完成 | `evo_agent.py`, `main.py` |
|
||||||
|
| 默认启用所有角色 EvoAgent | ✅ 已完成 | `main.py`, `pipeline.py` |
|
||||||
|
| 统一 Agent 工厂 | ✅ 已完成 | `unified_factory.py` |
|
||||||
|
| ToolGuard Gateway 同步 | ✅ 已完成 | `tool_guard.py`, `guard.py` |
|
||||||
|
| ToolGuard 批量审批 | ✅ 已完成 | `guard.py` |
|
||||||
|
| 废弃标记 Legacy Agent | ✅ 已完成 | `analyst.py`, `risk_manager.py`, `portfolio_manager.py` |
|
||||||
|
| 集成测试 | ✅ 已完成 | `test_evo_agent_integration.py` |
|
||||||
|
| 类型注解 | ✅ 已完成 | `unified_factory.py` |
|
||||||
|
| Team 基础设施 | ✅ 已完成 | `messenger.py`, `task_delegator.py` |
|
||||||
|
| Skills 沙盒执行 | ✅ 已完成 | `sandboxed_executor.py` |
|
||||||
|
|
||||||
|
### 🚧 待完成
|
||||||
|
|
||||||
|
| 优先级 | 任务 | 说明 |
|
||||||
|
|--------|------|------|
|
||||||
|
| P0 | Smoke Test 依赖修复 | 需要安装 pandas, finnhub, pandas-market-calendars 等 |
|
||||||
|
| P1 | Workspace ID 语义清理 | ✅ 已添加 `run_id`,保留 `workspace_id` 用于向后兼容 |
|
||||||
|
| P2 | 文档完善 | ✅ 已完成 |
|
||||||
|
|
||||||
|
*最后更新: 2026-04-02*
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
*文档生成时间: 2026-04-01*
|
||||||
249
docs/OPTIMIZATION_PLAN.md
Normal file
249
docs/OPTIMIZATION_PLAN.md
Normal file
@@ -0,0 +1,249 @@
|
|||||||
|
# 大时代项目优化和功能补齐计划
|
||||||
|
|
||||||
|
## 当前状态评估
|
||||||
|
|
||||||
|
### 已完成的工作
|
||||||
|
1. ✅ EvoAgent 核心实现 (`backend/agents/base/evo_agent.py`)
|
||||||
|
2. ✅ ToolGuardMixin 工具守卫 (`backend/agents/base/tool_guard.py`)
|
||||||
|
3. ✅ Hooks 系统 (`backend/agents/base/hooks.py`)
|
||||||
|
4. ✅ Smoke test 脚本 (`scripts/smoke_evo_runtime.py`)
|
||||||
|
5. ✅ 选择性 EvoAgent 测试 (`backend/tests/test_evo_agent_selection.py`)
|
||||||
|
6. ✅ 删除 `backend/agents/compat.py` 兼容性层
|
||||||
|
7. ✅ 删除 `useWebsocketSessionSync.js` 旧钩子
|
||||||
|
|
||||||
|
### 遗留问题清单
|
||||||
|
|
||||||
|
#### 🔴 P0: 阻塞 EvoAgent 全面推出
|
||||||
|
|
||||||
|
| # | 问题 | 位置 | 影响 | 解决方案 |
|
||||||
|
|---|------|------|------|----------|
|
||||||
|
| P0-1 | EvoAgent 不支持长期记忆 | `evo_agent.py:165-166` | 启用 memory 时回退到 Legacy Agent | 集成 ReMe 记忆系统 |
|
||||||
|
| P0-2 | Pipeline 运行时分析师创建路径不一致 | `pipeline.py` | 运行时动态创建可能跳过 EvoAgent 路径 | 统一 `_create_runtime_analyst` 逻辑 |
|
||||||
|
| P0-3 | Workspace 加载路径混乱 | `workspace.py`, `workspace_manager.py` | `workspace_id` vs `run_id` 语义混合 | 明确区分 design-time 和 runtime 路径 |
|
||||||
|
| P0-4 | Smoke test 失败排查 | `scripts/smoke_evo_runtime.py` | 无法验证 EvoAgent 是否正确启动 | 修复测试并确保通过 |
|
||||||
|
|
||||||
|
#### 🟡 P1: 功能完善
|
||||||
|
|
||||||
|
| # | 问题 | 位置 | 影响 | 解决方案 |
|
||||||
|
|---|------|------|------|----------|
|
||||||
|
| P1-1 | Team 基础设施未完成 | `evo_agent.py:41-48` | Agent 间通信和任务委托不可用 | 完成 messenger 和 task_delegator |
|
||||||
|
| P1-2 | ToolGuard 与 Gateway 审批流程集成 | `tool_guard.py`, `api/guard.py` | 审批状态同步可能不一致 | 统一审批存储和事件通知 |
|
||||||
|
| P1-3 | Skills 沙盒执行 | `tools/sandboxed_executor.py` | 生产环境需要 Docker 隔离 | 完善沙盒执行器 |
|
||||||
|
| P1-4 | 错误处理和重试机制 | 多处 | 部分错误未正确处理 | 添加统一的错误处理 |
|
||||||
|
|
||||||
|
#### 🟢 P2: 代码质量和可维护性
|
||||||
|
|
||||||
|
| # | 问题 | 位置 | 影响 | 解决方案 |
|
||||||
|
|---|------|------|------|----------|
|
||||||
|
| P2-1 | 重复的 Agent 创建逻辑 | `main.py`, `pipeline.py`, `pipeline_runner.py` | 维护困难,容易遗漏 | 提取统一的 Agent 工厂 |
|
||||||
|
| P2-2 | 类型注解不完整 | 多处 | IDE 提示不足 | 完善类型注解 |
|
||||||
|
| P2-3 | 缺少 EvoAgent 集成测试 | `backend/tests/` | 无法确保功能完整 | 添加集成测试 |
|
||||||
|
| P2-4 | 文档和注释 | 多处 | 新贡献者理解困难 | 完善文档 |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 详细实施方案
|
||||||
|
|
||||||
|
### Phase 1: P0 阻塞问题修复
|
||||||
|
|
||||||
|
#### P0-1: EvoAgent 长期记忆支持
|
||||||
|
|
||||||
|
**问题描述**:
|
||||||
|
```python
|
||||||
|
# main.py 中当前逻辑
|
||||||
|
if long_term_memory and agent_id not in EVO_AGENT_IDS:
|
||||||
|
# 使用 Legacy Agent
|
||||||
|
else:
|
||||||
|
# 使用 EvoAgent
|
||||||
|
```
|
||||||
|
|
||||||
|
**目标**: EvoAgent 支持 ReMe 长期记忆系统
|
||||||
|
|
||||||
|
**实施步骤**:
|
||||||
|
1. 在 `EvoAgent.__init__` 中正确接收 `long_term_memory` 参数
|
||||||
|
2. 集成 ReMe 记忆系统的读写
|
||||||
|
3. 在 Hooks 中添加记忆相关的生命周期管理
|
||||||
|
4. 修改 `main.py`, `pipeline.py` 中移除 EvoAgent 的记忆回退逻辑
|
||||||
|
|
||||||
|
**文件修改**:
|
||||||
|
- `backend/agents/base/evo_agent.py`
|
||||||
|
- `backend/main.py`
|
||||||
|
- `backend/core/pipeline.py`
|
||||||
|
|
||||||
|
#### P0-2: Pipeline 运行时分析师创建统一
|
||||||
|
|
||||||
|
**问题描述**:
|
||||||
|
`TradingPipeline._create_runtime_analyst` 方法需要确保:
|
||||||
|
1. 检查 `EVO_AGENT_IDS` 环境变量
|
||||||
|
2. 正确传递所有必要参数给 EvoAgent
|
||||||
|
3. 处理 workspace 资产准备
|
||||||
|
|
||||||
|
**实施步骤**:
|
||||||
|
1. 统一 `pipeline.py` 和 `main.py` 中的 Agent 创建逻辑
|
||||||
|
2. 确保 EvoAgent 路径和 Legacy 路径参数一致
|
||||||
|
3. 添加运行时动态 Agent 创建的测试
|
||||||
|
|
||||||
|
**文件修改**:
|
||||||
|
- `backend/core/pipeline.py`
|
||||||
|
- `backend/main.py`
|
||||||
|
|
||||||
|
#### P0-3: Workspace 路径清理
|
||||||
|
|
||||||
|
**问题描述**:
|
||||||
|
- `workspace_id` 有时指 `workspaces/` 目录下的设计时 workspace
|
||||||
|
- 有时指 `runs/<run_id>/` 下的运行时 workspace
|
||||||
|
|
||||||
|
**解决方案**:
|
||||||
|
1. 明确命名:`design_workspace_id` vs `run_id`
|
||||||
|
2. 在 API 路由中区分两种资源
|
||||||
|
3. 内部统一使用 `run_id` 作为运行时标识
|
||||||
|
|
||||||
|
**文件修改**:
|
||||||
|
- `backend/api/workspaces.py`
|
||||||
|
- `backend/api/agents.py`
|
||||||
|
- `backend/agents/workspace_manager.py`
|
||||||
|
|
||||||
|
#### P0-4: Smoke Test 修复
|
||||||
|
|
||||||
|
**当前测试**:
|
||||||
|
```bash
|
||||||
|
python3 scripts/smoke_evo_runtime.py --agent-id fundamentals_analyst
|
||||||
|
```
|
||||||
|
|
||||||
|
**验证点**:
|
||||||
|
1. Gateway 正常启动
|
||||||
|
2. EvoAgent 日志出现
|
||||||
|
3. `runtime_state.json` 正确写入
|
||||||
|
4. 审批流程正常工作
|
||||||
|
|
||||||
|
**实施步骤**:
|
||||||
|
1. 运行测试并识别失败点
|
||||||
|
2. 修复 EvoAgent 初始化问题
|
||||||
|
3. 确保所有 6 个角色都能通过测试
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Phase 2: P1 功能完善
|
||||||
|
|
||||||
|
#### P1-1: Team 基础设施
|
||||||
|
|
||||||
|
**当前状态**:
|
||||||
|
```python
|
||||||
|
try:
|
||||||
|
from backend.agents.team.messenger import AgentMessenger
|
||||||
|
from backend.agents.team.task_delegator import TaskDelegator
|
||||||
|
TEAM_INFRA_AVAILABLE = True
|
||||||
|
except ImportError:
|
||||||
|
TEAM_INFRA_AVAILABLE = False
|
||||||
|
```
|
||||||
|
|
||||||
|
**目标**: 完成 Agent 间通信和任务委托
|
||||||
|
|
||||||
|
**实施步骤**:
|
||||||
|
1. 完成 `AgentMessenger` 实现
|
||||||
|
2. 完成 `TaskDelegator` 实现
|
||||||
|
3. 添加 Agent 团队协调的测试
|
||||||
|
|
||||||
|
#### P1-2: ToolGuard 与 Gateway 集成
|
||||||
|
|
||||||
|
**当前状态**:
|
||||||
|
- `ToolGuardStore` 是内存存储
|
||||||
|
- Gateway 通过 `get_global_runtime_manager()` 访问
|
||||||
|
|
||||||
|
**改进**:
|
||||||
|
1. 确保审批状态在 Gateway 和 Agent 间同步
|
||||||
|
2. 添加审批超时处理
|
||||||
|
3. 支持批量审批
|
||||||
|
|
||||||
|
#### P1-3: Skills 沙盒执行
|
||||||
|
|
||||||
|
**当前状态**:
|
||||||
|
```python
|
||||||
|
SKILL_SANDBOX_MODE=none # 开发模式,直接执行
|
||||||
|
```
|
||||||
|
|
||||||
|
**目标**: 生产环境使用 Docker 隔离
|
||||||
|
|
||||||
|
**实施步骤**:
|
||||||
|
1. 完成 `DockerSandboxBackend`
|
||||||
|
2. 添加资源限制(CPU、内存、网络)
|
||||||
|
3. 添加执行超时控制
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Phase 3: P2 代码质量
|
||||||
|
|
||||||
|
#### P2-1: 统一 Agent 工厂
|
||||||
|
|
||||||
|
**目标**: 提取 `AgentFactory` 统一处理所有 Agent 创建
|
||||||
|
|
||||||
|
**设计**:
|
||||||
|
```python
|
||||||
|
class AgentFactory:
|
||||||
|
def create_analyst(self, analyst_type: str, **kwargs) -> BaseAgent
|
||||||
|
def create_risk_manager(self, **kwargs) -> BaseAgent
|
||||||
|
def create_portfolio_manager(self, **kwargs) -> BaseAgent
|
||||||
|
```
|
||||||
|
|
||||||
|
#### P2-2: 类型注解
|
||||||
|
|
||||||
|
**目标**: 所有公共 API 完整的类型注解
|
||||||
|
|
||||||
|
#### P2-3: 集成测试
|
||||||
|
|
||||||
|
**目标**: EvoAgent 完整的端到端测试
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 实施顺序
|
||||||
|
|
||||||
|
### Week 1: P0 阻塞问题
|
||||||
|
1. [ ] P0-4: 运行 Smoke Test,识别失败点
|
||||||
|
2. [ ] P0-1: EvoAgent 长期记忆支持
|
||||||
|
3. [ ] P0-2: Pipeline 运行时统一
|
||||||
|
4. [ ] P0-3: Workspace 路径清理
|
||||||
|
5. [ ] 验证所有 Smoke Test 通过
|
||||||
|
|
||||||
|
### Week 2: P1 功能完善
|
||||||
|
1. [ ] P1-1: Team 基础设施
|
||||||
|
2. [ ] P1-2: ToolGuard 集成优化
|
||||||
|
3. [ ] P1-3: Skills 沙盒执行
|
||||||
|
|
||||||
|
### Week 3: P2 代码质量
|
||||||
|
1. [ ] P2-1: 统一 Agent 工厂
|
||||||
|
2. [ ] P2-2: 类型注解
|
||||||
|
3. [ ] P2-3: 集成测试
|
||||||
|
4. [ ] P2-4: 文档完善
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 成功标准
|
||||||
|
|
||||||
|
### EvoAgent 全面推出标准
|
||||||
|
1. ✅ 所有 6 个角色通过 smoke test
|
||||||
|
2. ✅ 长期记忆功能正常工作
|
||||||
|
3. ✅ 无需 `EVO_AGENT_IDS` 环境变量即可使用 EvoAgent
|
||||||
|
4. ✅ Legacy Agent 代码标记为 deprecated
|
||||||
|
5. ✅ 集成测试覆盖主要使用场景
|
||||||
|
|
||||||
|
### 架构清理标准
|
||||||
|
1. ✅ `runs/<run_id>/` 是唯一的运行时数据来源
|
||||||
|
2. ✅ `workspaces/` 仅用于设计时注册表
|
||||||
|
3. ✅ 所有服务边界清晰,无循环依赖
|
||||||
|
4. ✅ 文档和代码一致
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 风险和对策
|
||||||
|
|
||||||
|
| 风险 | 可能性 | 影响 | 对策 |
|
||||||
|
|------|--------|------|------|
|
||||||
|
| EvoAgent 与 Legacy 行为不一致 | 中 | 高 | 并行运行对比测试 |
|
||||||
|
| 长期记忆集成复杂 | 中 | 中 | 分阶段实现,先支持基础功能 |
|
||||||
|
| 性能下降 | 低 | 高 | 基准测试,性能剖析 |
|
||||||
|
| 迁移期间系统不稳定 | 中 | 高 | 保持 Legacy 作为回退 |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
*计划创建日期: 2026-04-01*
|
||||||
|
*负责: Claude Code*
|
||||||
@@ -114,3 +114,53 @@ What remains is not “legacy startup debt”, but:
|
|||||||
- deployment consistency
|
- deployment consistency
|
||||||
- reduction of env-dependent fallback behavior
|
- reduction of env-dependent fallback behavior
|
||||||
- sharper documentation around gateway and OpenClaw boundaries
|
- sharper documentation around gateway and OpenClaw boundaries
|
||||||
|
|
||||||
|
## Residual Inventory
|
||||||
|
|
||||||
|
The remaining migration-related surfaces now fall into three buckets.
|
||||||
|
|
||||||
|
### 1. Remove When Replaced
|
||||||
|
|
||||||
|
These should not grow further. Keep them only until a concrete replacement is
|
||||||
|
fully in use.
|
||||||
|
|
||||||
|
- `backend.agents.compat`
|
||||||
|
- removed after the package root stopped exporting compat helpers
|
||||||
|
|
||||||
|
Recommended next action:
|
||||||
|
|
||||||
|
- keep future EvoAgent cutover work on explicit run-scoped constructors rather
|
||||||
|
than reintroducing generic workspace-loading entrypoints on `TradingPipeline`.
|
||||||
|
|
||||||
|
### 2. Keep As Stable Compatibility Surfaces
|
||||||
|
|
||||||
|
These still have an operational reason to exist and should be documented rather
|
||||||
|
than treated as accidental leftovers.
|
||||||
|
|
||||||
|
- `backend.main`
|
||||||
|
- compatibility gateway/runtime process
|
||||||
|
- still relevant for websocket transport and current deploy topology
|
||||||
|
- `runs/<run_id>/team_dashboard/*.json`
|
||||||
|
- export/consumer compatibility layer
|
||||||
|
- gateway-mediated websocket/event flow
|
||||||
|
- still the practical live event contract for the frontend
|
||||||
|
|
||||||
|
Recommended next action:
|
||||||
|
|
||||||
|
- keep these, but document them as intentional compatibility surfaces with
|
||||||
|
explicit ownership.
|
||||||
|
|
||||||
|
### 3. Defer Until Topology Decisions Are Final
|
||||||
|
|
||||||
|
These are real migration boundaries, but removing them prematurely would create
|
||||||
|
churn without simplifying the current runtime.
|
||||||
|
|
||||||
|
- `workspaces/` design-time registry versus `runs/<run_id>/` runtime state
|
||||||
|
- env-dependent service fallback behavior
|
||||||
|
- checked-in deployment docs centered on `backend.main`
|
||||||
|
- dual OpenClaw shapes: gateway integration and REST facade
|
||||||
|
|
||||||
|
Recommended next action:
|
||||||
|
|
||||||
|
- revisit these only after production topology and service-routing policy are
|
||||||
|
frozen.
|
||||||
|
|||||||
1238
docs/current-architecture.excalidraw
Normal file
1238
docs/current-architecture.excalidraw
Normal file
File diff suppressed because it is too large
Load Diff
202
docs/current-architecture.md
Normal file
202
docs/current-architecture.md
Normal file
@@ -0,0 +1,202 @@
|
|||||||
|
# Current Architecture
|
||||||
|
|
||||||
|
This file describes the current code-supported architecture only. Historical
|
||||||
|
paths and partial migrations are intentionally excluded unless called out as
|
||||||
|
legacy compatibility.
|
||||||
|
|
||||||
|
Reference material:
|
||||||
|
|
||||||
|
- visual diagram: [current-architecture.excalidraw](./current-architecture.excalidraw)
|
||||||
|
- next-step roadmap: [development-roadmap.md](./development-roadmap.md)
|
||||||
|
- legacy inventory: [legacy-inventory.md](./legacy-inventory.md)
|
||||||
|
- terminology guide: [terminology.md](./terminology.md)
|
||||||
|
|
||||||
|
## Runtime Modes
|
||||||
|
|
||||||
|
The system supports two distinct runtime modes:
|
||||||
|
|
||||||
|
### Standalone Mode (Legacy Compatibility)
|
||||||
|
|
||||||
|
Direct Gateway startup via `backend.main` as a monolithic entrypoint.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
python -m backend.main --mode live --port 8765
|
||||||
|
```
|
||||||
|
|
||||||
|
**Characteristics:**
|
||||||
|
- Single process runs Gateway, Pipeline, Market Service, and Scheduler
|
||||||
|
- No service discovery or process management
|
||||||
|
- Suitable for single-node deployments and quick testing
|
||||||
|
- All components share the same memory space
|
||||||
|
|
||||||
|
**Use cases:**
|
||||||
|
- Quick local testing without service orchestration
|
||||||
|
- Single-node production deployments
|
||||||
|
- Backward compatibility with legacy startup scripts
|
||||||
|
|
||||||
|
### Microservice Mode (Default for Development)
|
||||||
|
|
||||||
|
Split-service architecture with dedicated runtime_service managing the Gateway lifecycle.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
./start-dev.sh # Starts all services including runtime_service and Gateway
|
||||||
|
```
|
||||||
|
|
||||||
|
**Characteristics:**
|
||||||
|
- `runtime_service` (:8003) acts as Gateway Process Manager
|
||||||
|
- Gateway runs as a subprocess managed by runtime_service
|
||||||
|
- Clear separation between Control Plane (runtime_service) and Data Plane (Gateway)
|
||||||
|
- Service discovery via environment variables
|
||||||
|
- Independent scaling and deployment of each service
|
||||||
|
|
||||||
|
**Use cases:**
|
||||||
|
- Local development with hot-reload
|
||||||
|
- Multi-node deployments
|
||||||
|
- Production environments requiring service isolation
|
||||||
|
|
||||||
|
## Mode Comparison
|
||||||
|
|
||||||
|
| Aspect | Standalone Mode | Microservice Mode |
|
||||||
|
|--------|-----------------|-------------------|
|
||||||
|
| **Entry point** | `python -m backend.main` | `./start-dev.sh` or individual services |
|
||||||
|
| **Process model** | Single monolithic process | Multiple specialized processes |
|
||||||
|
| **Gateway management** | Self-contained | Managed by runtime_service |
|
||||||
|
| **Service discovery** | None (in-process) | Environment variable based |
|
||||||
|
| **Hot reload** | Full restart required | Per-service reload |
|
||||||
|
| **Scaling** | Vertical only | Horizontal possible |
|
||||||
|
| **Complexity** | Lower | Higher |
|
||||||
|
| **Use case** | Testing, simple deployments | Development, production |
|
||||||
|
|
||||||
|
## Default Runtime Shape (Microservice Mode)
|
||||||
|
|
||||||
|
The active runtime path is:
|
||||||
|
|
||||||
|
`frontend -> frontend_service proxy or direct split-service calls -> runtime_service/control APIs -> gateway subprocess -> market/pipeline/storage`
|
||||||
|
|
||||||
|
Current service surfaces:
|
||||||
|
|
||||||
|
- `backend.apps.agent_service` on `:8000`
|
||||||
|
- control plane for workspaces, agents, skills, approvals
|
||||||
|
- `backend.apps.trading_service` on `:8001`
|
||||||
|
- read-only trading data APIs
|
||||||
|
- `backend.apps.news_service` on `:8002`
|
||||||
|
- read-only explain/news APIs
|
||||||
|
- `backend.apps.runtime_service` on `:8003`
|
||||||
|
- runtime lifecycle and gateway process management
|
||||||
|
- `backend.apps.openclaw_service` on `:8004`
|
||||||
|
- optional OpenClaw REST facade
|
||||||
|
- gateway WebSocket on `:8765`
|
||||||
|
- live feed/event transport and pipeline coordination
|
||||||
|
|
||||||
|
### Control Plane vs Data Plane
|
||||||
|
|
||||||
|
**Control Plane (runtime_service :8003):**
|
||||||
|
- Gateway lifecycle management (start/stop/restart)
|
||||||
|
- Runtime configuration and bootstrap
|
||||||
|
- Process health monitoring
|
||||||
|
- Run history and state snapshots
|
||||||
|
|
||||||
|
**Data Plane (Gateway :8765):**
|
||||||
|
- WebSocket event streaming
|
||||||
|
- Market data ingestion
|
||||||
|
- Pipeline execution (analysis -> decision -> execution)
|
||||||
|
- Real-time trading operations
|
||||||
|
|
||||||
|
## Runtime Data Layout
|
||||||
|
|
||||||
|
The canonical runtime data root is:
|
||||||
|
|
||||||
|
- `runs/<run_id>/`
|
||||||
|
|
||||||
|
Important files under each run:
|
||||||
|
|
||||||
|
- `runs/<run_id>/BOOTSTRAP.md`
|
||||||
|
- machine-readable front matter plus run-scoped prompt body
|
||||||
|
- `runs/<run_id>/agents/<agent_id>/`
|
||||||
|
- run-scoped agent workspace files and active/local skills
|
||||||
|
- `runs/<run_id>/state/runtime_state.json`
|
||||||
|
- runtime snapshot
|
||||||
|
- `runs/<run_id>/state/server_state.json`
|
||||||
|
- server-side state (portfolio, trades, market data)
|
||||||
|
- `runs/<run_id>/team_dashboard/*.json`
|
||||||
|
- compatibility/export layer for dashboard consumers
|
||||||
|
- can be disabled in controlled environments via `ENABLE_DASHBOARD_COMPAT_EXPORTS=false`
|
||||||
|
|
||||||
|
## Workspace Terms
|
||||||
|
|
||||||
|
Two similarly named concepts still exist in the repository:
|
||||||
|
|
||||||
|
- `workspaces/`
|
||||||
|
- design-time registry and CRUD surface exposed by `agent_service`
|
||||||
|
- `runs/<run_id>/`
|
||||||
|
- actual runtime state, agent assets, skills, bootstrap config, and logs
|
||||||
|
|
||||||
|
When reading current runtime code, prefer `runs/<run_id>/` as the source of
|
||||||
|
truth. The `workspaces/` registry is not the default execution path.
|
||||||
|
|
||||||
|
## Skill Sandbox Execution
|
||||||
|
|
||||||
|
Skill scripts (analysis tools, valuation reports) can be executed in multiple
|
||||||
|
sandbox modes via `backend/tools/sandboxed_executor.py`:
|
||||||
|
|
||||||
|
| Mode | Backend Class | Description |
|
||||||
|
|------|---------------|-------------|
|
||||||
|
| `none` | `NoSandboxBackend` | Direct module import and execution (default, development only) |
|
||||||
|
| `docker` | `DockerSandboxBackend` | Docker container isolation with resource limits |
|
||||||
|
| `kubernetes` | `KubernetesSandboxBackend` | Kubernetes Pod isolation (reserved interface) |
|
||||||
|
|
||||||
|
Environment configuration:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
SKILL_SANDBOX_MODE=none # none | docker | kubernetes
|
||||||
|
SKILL_SANDBOX_IMAGE=python:3.11-slim
|
||||||
|
SKILL_SANDBOX_MEMORY_LIMIT=512m
|
||||||
|
SKILL_SANDBOX_CPU_LIMIT=1.0
|
||||||
|
SKILL_SANDBOX_NETWORK=none
|
||||||
|
SKILL_SANDBOX_TIMEOUT=60
|
||||||
|
```
|
||||||
|
|
||||||
|
The default `none` mode displays a runtime security warning on first execution
|
||||||
|
as a reminder that scripts run without isolation. Production deployments should
|
||||||
|
use `docker` mode with appropriate resource limits.
|
||||||
|
|
||||||
|
## Migration Roadmap
|
||||||
|
|
||||||
|
### Current State
|
||||||
|
|
||||||
|
The system is in a transitional state:
|
||||||
|
|
||||||
|
1. **Microservice infrastructure is operational** - runtime_service can start/stop Gateway as subprocess
|
||||||
|
2. **Pipeline logic remains in Gateway** - full Pipeline execution still happens within Gateway process
|
||||||
|
3. **Standalone mode is preserved** - direct `backend.main` startup for compatibility
|
||||||
|
|
||||||
|
### Future Direction
|
||||||
|
|
||||||
|
Phase 1: Documentation and startup convergence (active)
|
||||||
|
- Clarify runtime modes and their use cases
|
||||||
|
- Unify documentation across all entry points
|
||||||
|
|
||||||
|
Phase 2: Runtime model consolidation
|
||||||
|
- Ensure all runtime state lives under `runs/<run_id>/`
|
||||||
|
- Remove dependencies on root-level legacy directories
|
||||||
|
|
||||||
|
Phase 3: Pipeline decomposition (planned)
|
||||||
|
- Extract Pipeline stages into independent services
|
||||||
|
- Gateway becomes a thin event router
|
||||||
|
- runtime_service evolves into full orchestrator
|
||||||
|
|
||||||
|
Phase 4: Standalone mode deprecation (future)
|
||||||
|
- Remove direct `backend.main` entry point
|
||||||
|
- All deployments use microservice mode
|
||||||
|
|
||||||
|
## Legacy Compatibility
|
||||||
|
|
||||||
|
These items still exist, but they are not the recommended source of truth for
|
||||||
|
new development:
|
||||||
|
|
||||||
|
- root-level runtime data directories such as `live/`, `production/`, `backtest/`
|
||||||
|
- direct `backend.main` startup as the primary development path
|
||||||
|
|
||||||
|
The current runtime still creates legacy `AnalystAgent` / `RiskAgent` /
|
||||||
|
`PMAgent` instances directly. EvoAgent remains an in-progress migration target,
|
||||||
|
not the default execution path.
|
||||||
124
docs/development-roadmap.md
Normal file
124
docs/development-roadmap.md
Normal file
@@ -0,0 +1,124 @@
|
|||||||
|
# Development Roadmap
|
||||||
|
|
||||||
|
This roadmap describes the next engineering steps based on the current
|
||||||
|
code-supported architecture, not on historical compatibility layers.
|
||||||
|
|
||||||
|
The current architecture source of truth is
|
||||||
|
[current-architecture.md](./current-architecture.md). The matching visual
|
||||||
|
diagram lives in [current-architecture.excalidraw](./current-architecture.excalidraw).
|
||||||
|
|
||||||
|
## Guiding Principle
|
||||||
|
|
||||||
|
The repo should converge on one clear runtime model:
|
||||||
|
|
||||||
|
`split services + gateway + run-scoped runtime state under runs/<run_id>/`
|
||||||
|
|
||||||
|
That means future work should reduce ambiguity between:
|
||||||
|
|
||||||
|
- design-time `workspaces/`
|
||||||
|
- runtime `runs/<run_id>/`
|
||||||
|
- compatibility gateway paths
|
||||||
|
- older root-level runtime directories
|
||||||
|
|
||||||
|
## Phase 1: Documentation And Startup Convergence
|
||||||
|
|
||||||
|
Goal: make the supported system shape unambiguous for contributors and operators.
|
||||||
|
|
||||||
|
Planned work:
|
||||||
|
|
||||||
|
- keep `docs/current-architecture.md` as the primary architecture fact source
|
||||||
|
- keep `docs/current-architecture.excalidraw` aligned with code changes
|
||||||
|
- make README, service docs, and deploy docs point to the same runtime model
|
||||||
|
- explicitly describe `agent_service`, `runtime_service`, `trading_service`,
|
||||||
|
`news_service`, gateway, and OpenClaw boundaries
|
||||||
|
- remove or mark statements that imply `workspaces/` is the runtime source of truth
|
||||||
|
|
||||||
|
Definition of done:
|
||||||
|
|
||||||
|
- a new contributor can identify the supported local startup path in under five minutes
|
||||||
|
- architecture wording is consistent across top-level docs
|
||||||
|
|
||||||
|
## Phase 2: Runtime Model Consolidation
|
||||||
|
|
||||||
|
Goal: ensure the runtime state model is centered on `runs/<run_id>/`.
|
||||||
|
|
||||||
|
Planned work:
|
||||||
|
|
||||||
|
- review remaining reads and writes that still assume root-level `live/`,
|
||||||
|
`backtest/`, or `production/` directories are canonical
|
||||||
|
- keep compatibility exports such as `team_dashboard/*.json`, but document them
|
||||||
|
as exports rather than primary state
|
||||||
|
- continue moving runtime metadata, assets, and bootstrap configuration behind
|
||||||
|
run-scoped helpers
|
||||||
|
- keep the control plane and runtime APIs conceptually separate
|
||||||
|
|
||||||
|
Definition of done:
|
||||||
|
|
||||||
|
- run-scoped helpers are the default path for runtime state access
|
||||||
|
- compatibility directories are no longer required for normal development
|
||||||
|
|
||||||
|
## Phase 3: Compatibility Surface Reduction
|
||||||
|
|
||||||
|
Goal: preserve only intentional compatibility layers.
|
||||||
|
|
||||||
|
Planned work:
|
||||||
|
|
||||||
|
- identify startup scripts and deploy artifacts that still center on
|
||||||
|
`backend.main` as a monolithic entrypoint
|
||||||
|
- classify compatibility surfaces into:
|
||||||
|
- stable and intentional
|
||||||
|
- temporary and shrinking
|
||||||
|
- removable once replacements are fully active
|
||||||
|
- reduce env-dependent fallback ambiguity for read-only service routing where practical
|
||||||
|
- document the difference between OpenClaw WebSocket integration and the optional REST facade
|
||||||
|
|
||||||
|
Definition of done:
|
||||||
|
|
||||||
|
- compatibility surfaces have explicit ownership
|
||||||
|
- the repo no longer mixes migration leftovers with recommended defaults
|
||||||
|
|
||||||
|
## Phase 4: EvoAgent Runtime Cutover
|
||||||
|
|
||||||
|
Goal: move from selective EvoAgent rollout to a cleaner default runtime path.
|
||||||
|
|
||||||
|
Planned work:
|
||||||
|
|
||||||
|
- continue supporting staged rollout through explicit agent selection
|
||||||
|
- close functional gaps that still require falling back to legacy
|
||||||
|
analyst/risk/PM implementations
|
||||||
|
- keep run-scoped workspace assets and prompt reload behavior aligned between
|
||||||
|
legacy and EvoAgent paths
|
||||||
|
- avoid reintroducing generic workspace-loading shortcuts on the pipeline layer
|
||||||
|
|
||||||
|
Definition of done:
|
||||||
|
|
||||||
|
- EvoAgent selection is predictable, test-backed, and no longer treated as an
|
||||||
|
experimental side path for the supported roles
|
||||||
|
|
||||||
|
## Phase 5: Contract Tests And Operational Confidence
|
||||||
|
|
||||||
|
Goal: increase confidence that the split-service architecture remains coherent.
|
||||||
|
|
||||||
|
Planned work:
|
||||||
|
|
||||||
|
- expand service-surface tests around `runtime_service`, `trading_service`,
|
||||||
|
`news_service`, and migration boundaries
|
||||||
|
- keep smoke coverage for staged EvoAgent runtime startup
|
||||||
|
- add validation around docs/script consistency where low-cost checks are possible
|
||||||
|
- tighten deploy docs so checked-in production examples are clearly described as
|
||||||
|
either compatibility topology or first-class topology
|
||||||
|
|
||||||
|
Definition of done:
|
||||||
|
|
||||||
|
- service boundaries are testable and understandable without tracing legacy code
|
||||||
|
- startup, deploy, and smoke paths tell the same story
|
||||||
|
|
||||||
|
## Immediate Focus
|
||||||
|
|
||||||
|
The next practical priority order should be:
|
||||||
|
|
||||||
|
1. documentation and startup convergence
|
||||||
|
2. runtime model consolidation around `runs/<run_id>/`
|
||||||
|
3. compatibility surface reduction
|
||||||
|
4. EvoAgent runtime cutover
|
||||||
|
5. broader contract and smoke confidence
|
||||||
261
docs/legacy-inventory.md
Normal file
261
docs/legacy-inventory.md
Normal file
@@ -0,0 +1,261 @@
|
|||||||
|
# Legacy Inventory
|
||||||
|
|
||||||
|
This file records the major legacy or compatibility-oriented surfaces that still
|
||||||
|
exist in the repository.
|
||||||
|
|
||||||
|
It is not a deletion plan by itself. Its purpose is to separate:
|
||||||
|
|
||||||
|
- current source-of-truth runtime paths
|
||||||
|
- intentional compatibility surfaces
|
||||||
|
- historical directories and scripts that should not guide new development
|
||||||
|
|
||||||
|
## Source Of Truth
|
||||||
|
|
||||||
|
These are the current defaults to build against:
|
||||||
|
|
||||||
|
- `runs/<run_id>/`
|
||||||
|
- runtime state, bootstrap configuration, agent runtime assets, logs
|
||||||
|
- split services
|
||||||
|
- `backend.apps.agent_service` on `:8000`
|
||||||
|
- `backend.apps.runtime_service` on `:8003`
|
||||||
|
- `backend.apps.trading_service` on `:8001`
|
||||||
|
- `backend.apps.news_service` on `:8002`
|
||||||
|
- gateway process
|
||||||
|
- `backend.main`
|
||||||
|
- `backend.services.gateway` on `:8765`
|
||||||
|
|
||||||
|
## Compatibility Surface Classification
|
||||||
|
|
||||||
|
All compatibility surfaces are categorized into three buckets:
|
||||||
|
|
||||||
|
### 1. Stable and Intentional (Keep)
|
||||||
|
|
||||||
|
These have clear operational reasons to exist and are documented as intentional
|
||||||
|
compatibility surfaces with explicit ownership.
|
||||||
|
|
||||||
|
| Surface | Location | Owner | Reason |
|
||||||
|
|---------|----------|-------|--------|
|
||||||
|
| Gateway-first production | `scripts/run_prod.sh`, `deploy/systemd/`, `deploy/nginx/` | ops-team | Current production example runs gateway directly and proxies `/ws` |
|
||||||
|
| Dashboard export layer | `runs/<run_id>/team_dashboard/*.json` | frontend-team | Downstream dashboard consumers read these exports |
|
||||||
|
| Design-time workspace registry | `workspaces/`, `backend.api.workspaces` | control-plane-team | Control-plane editing and registry-style management |
|
||||||
|
| Gateway WebSocket transport | `backend.services.gateway` on `:8765` | runtime-team | Live event streaming contract for frontend |
|
||||||
|
|
||||||
|
**Status**: These are NOT migration leftovers. Do not remove without explicit
|
||||||
|
replacement plan signed off by owning team.
|
||||||
|
|
||||||
|
### 2. Temporary and Shrinking (Mark for Removal)
|
||||||
|
|
||||||
|
These should not grow further. Keep only until concrete replacement is fully
|
||||||
|
in use.
|
||||||
|
|
||||||
|
| Surface | Location | Replacement | ETA |
|
||||||
|
|---------|----------|-------------|-----|
|
||||||
|
| Legacy analyst agents | `backend.agents.analyst.*` | `EvoAgent` | After EvoAgent smoke tests pass |
|
||||||
|
| Mixed workspace_id semantics | `/api/workspaces/{id}/agents/...` | Explicit `run_id` vs `workspace_id` routes | TBD |
|
||||||
|
| Root-level runtime directories | `live/`, `backtest/`, `production/` | `runs/<run_id>/` | Already deprecated, safe to ignore |
|
||||||
|
|
||||||
|
**Status**: Do not add new code using these surfaces. Migrate existing usage
|
||||||
|
when touching related code.
|
||||||
|
|
||||||
|
### 3. Deferred Until Topology Final (Revisit Later)
|
||||||
|
|
||||||
|
These are real migration boundaries, but removing them prematurely would create
|
||||||
|
churn without simplifying the current runtime. Revisit only after production
|
||||||
|
topology and service-routing policy are frozen.
|
||||||
|
|
||||||
|
| Surface | Current State | Decision Needed |
|
||||||
|
|---------|---------------|-----------------|
|
||||||
|
| OpenClaw dual integration | REST facade (`:8004`) + Gateway WebSocket (`:18789`) | Which surface is the long-term contract? |
|
||||||
|
| Env-dependent service fallbacks | `TRADING_SERVICE_URL`, `NEWS_SERVICE_URL` fallbacks to local modules | Remove fallbacks and require explicit URLs? |
|
||||||
|
| Split-service production deploy | Docs show gateway-first, dev uses split-service | Align production with dev topology? |
|
||||||
|
|
||||||
|
**Status**: Document current behavior. Do not actively remove until topology
|
||||||
|
decisions are finalized.
|
||||||
|
|
||||||
|
## Detailed Surface Documentation
|
||||||
|
|
||||||
|
### Gateway-First Production Example
|
||||||
|
|
||||||
|
**Files**:
|
||||||
|
- `scripts/run_prod.sh` - Production launch script
|
||||||
|
- `deploy/systemd/evotraders.service` - systemd unit
|
||||||
|
- `deploy/nginx/bigtime.cillinn.com.conf` - HTTPS + WebSocket proxy
|
||||||
|
- `deploy/nginx/bigtime.cillinn.com.http.conf` - HTTP variant
|
||||||
|
|
||||||
|
**Behavior**:
|
||||||
|
```bash
|
||||||
|
# scripts/run_prod.sh launches:
|
||||||
|
python3 -m backend.main \
|
||||||
|
--mode live \
|
||||||
|
--config-name production \
|
||||||
|
--host 127.0.0.1 \
|
||||||
|
--port 8765
|
||||||
|
```
|
||||||
|
|
||||||
|
**nginx proxies**:
|
||||||
|
- `/ws` -> `127.0.0.1:8765` (WebSocket upgrade)
|
||||||
|
- `/` -> static files in `/var/www/bigtime/current`
|
||||||
|
|
||||||
|
**Why this exists**:
|
||||||
|
- Simpler production deployment (single process + nginx)
|
||||||
|
- WebSocket is the practical live event contract for frontend
|
||||||
|
- Split-service topology adds operational complexity not needed for all deployments
|
||||||
|
|
||||||
|
**Ownership**: ops-team
|
||||||
|
**Status**: Stable and intentional
|
||||||
|
|
||||||
|
### OpenClaw Dual Integration
|
||||||
|
|
||||||
|
Two different integration surfaces exist for OpenClaw:
|
||||||
|
|
||||||
|
#### A. REST Facade (Port 8004)
|
||||||
|
|
||||||
|
**File**: `backend/apps/openclaw_service.py`
|
||||||
|
**Routes**: `backend/api/openclaw.py` (prefix `/api/openclaw`)
|
||||||
|
|
||||||
|
**Purpose**:
|
||||||
|
- Read-only OpenClaw CLI integration
|
||||||
|
- Typed Pydantic models for all responses
|
||||||
|
- Direct HTTP/REST access to OpenClaw state
|
||||||
|
|
||||||
|
**Use when**:
|
||||||
|
- You need typed, stable API contracts
|
||||||
|
- You want to poll OpenClaw status from external systems
|
||||||
|
- You need programmatic access without WebSocket complexity
|
||||||
|
|
||||||
|
**Example**:
|
||||||
|
```bash
|
||||||
|
curl http://localhost:8004/api/openclaw/status
|
||||||
|
```
|
||||||
|
|
||||||
|
#### B. Gateway WebSocket Integration (Port 18789)
|
||||||
|
|
||||||
|
**Files**:
|
||||||
|
- `backend/services/gateway_openclaw_handlers.py`
|
||||||
|
- `shared/client/openclaw_websocket_client.py`
|
||||||
|
|
||||||
|
**Purpose**:
|
||||||
|
- Real-time bidirectional communication with OpenClaw
|
||||||
|
- Event streaming and live updates
|
||||||
|
- Integration with Gateway event flow
|
||||||
|
|
||||||
|
**Use when**:
|
||||||
|
- You need real-time updates
|
||||||
|
- You're already connected to Gateway WebSocket
|
||||||
|
- You want event-driven rather than polling architecture
|
||||||
|
|
||||||
|
**Example**:
|
||||||
|
```javascript
|
||||||
|
// Frontend connects to ws://localhost:18789
|
||||||
|
const ws = new WebSocket('ws://localhost:18789');
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Key Differences
|
||||||
|
|
||||||
|
| Aspect | REST Facade (8004) | Gateway WebSocket (18789) |
|
||||||
|
|--------|-------------------|---------------------------|
|
||||||
|
| Protocol | HTTP/REST | WebSocket |
|
||||||
|
| Access pattern | Request/response | Event-driven |
|
||||||
|
| Typing | Pydantic models | JSON messages |
|
||||||
|
| Real-time | Polling required | Push notifications |
|
||||||
|
| Use case | External integrations, scripts | Frontend, live dashboards |
|
||||||
|
| Stability | Higher (explicit contracts) | Evolving with Gateway |
|
||||||
|
|
||||||
|
**Decision needed**: Which surface becomes the long-term contract?
|
||||||
|
- REST facade is more stable but read-only
|
||||||
|
- WebSocket integration is more capable but tied to Gateway evolution
|
||||||
|
|
||||||
|
**Ownership**: runtime-team
|
||||||
|
**Status**: Deferred until topology final
|
||||||
|
|
||||||
|
### Dashboard Export Layer
|
||||||
|
|
||||||
|
**Files**: `runs/<run_id>/team_dashboard/*.json`
|
||||||
|
|
||||||
|
**Purpose**:
|
||||||
|
- Compatibility/export layer for dashboard consumers
|
||||||
|
- Non-authoritative snapshot of runtime state
|
||||||
|
- Can be disabled via `ENABLE_DASHBOARD_COMPAT_EXPORTS=false`
|
||||||
|
|
||||||
|
**Why not remove**:
|
||||||
|
- Downstream consumers still read these files
|
||||||
|
- Provides decoupling between runtime and dashboard
|
||||||
|
|
||||||
|
**Ownership**: frontend-team
|
||||||
|
**Status**: Stable and intentional
|
||||||
|
|
||||||
|
### Design-Time Workspace Registry
|
||||||
|
|
||||||
|
**Files**:
|
||||||
|
- `workspaces/` directory
|
||||||
|
- `backend/api/workspaces.py`
|
||||||
|
- `backend/agents/workspace_manager.py`
|
||||||
|
|
||||||
|
**Purpose**:
|
||||||
|
- Control-plane editing and registry-style management
|
||||||
|
- Design-time CRUD for agent workspaces
|
||||||
|
- Separate from runtime state in `runs/<run_id>/`
|
||||||
|
|
||||||
|
**Key distinction**:
|
||||||
|
- `workspaces/` = design-time registry (what agents *could* be)
|
||||||
|
- `runs/<run_id>/` = runtime state (what agents *are* right now)
|
||||||
|
|
||||||
|
**Ownership**: control-plane-team
|
||||||
|
**Status**: Stable and intentional
|
||||||
|
|
||||||
|
## Historical Or High-Risk-To-Misread Surfaces
|
||||||
|
|
||||||
|
These remain in the tree, but they should not define the architecture for new work.
|
||||||
|
|
||||||
|
### Root-level runtime directories
|
||||||
|
|
||||||
|
- `live/`
|
||||||
|
- `backtest/`
|
||||||
|
- `production/`
|
||||||
|
|
||||||
|
**Read**:
|
||||||
|
|
||||||
|
- treat these as historical or compatibility-oriented data/layout artifacts
|
||||||
|
- do not use them as the default runtime contract for new features
|
||||||
|
|
||||||
|
### Mixed `workspace_id` semantics on agent routes
|
||||||
|
|
||||||
|
- `/api/workspaces/{workspace_id}/agents/...`
|
||||||
|
|
||||||
|
**Read**:
|
||||||
|
|
||||||
|
- design-time CRUD routes use `workspace_id` as a registry workspace id
|
||||||
|
- profile, skills, and editable file routes use `workspace_id` as a run id
|
||||||
|
|
||||||
|
**Mitigation already in repo**:
|
||||||
|
|
||||||
|
- `agent_service /api/status` exposes scope metadata
|
||||||
|
- runtime-read responses expose `scope_type` and `scope_note`
|
||||||
|
|
||||||
|
### Partial EvoAgent rollout
|
||||||
|
|
||||||
|
- `EVO_AGENT_IDS`
|
||||||
|
- staged smoke coverage in `scripts/smoke_evo_runtime.py`
|
||||||
|
|
||||||
|
**Read**:
|
||||||
|
|
||||||
|
- EvoAgent is still a controlled rollout path
|
||||||
|
- legacy analyst/risk/PM implementations remain the default runtime path for now
|
||||||
|
|
||||||
|
## Recommended Usage
|
||||||
|
|
||||||
|
When in doubt:
|
||||||
|
|
||||||
|
1. trust `docs/current-architecture.md`
|
||||||
|
2. trust `runs/<run_id>/` over root-level runtime directories
|
||||||
|
3. treat `workspaces/` as control-plane registry, not runtime truth
|
||||||
|
4. treat deploy artifacts as the current checked-in example, not the full system contract
|
||||||
|
5. check this file's **Compatibility Surface Classification** before assuming something is legacy
|
||||||
|
|
||||||
|
## Change Log
|
||||||
|
|
||||||
|
| Date | Change |
|
||||||
|
|------|--------|
|
||||||
|
| 2026-03-31 | Added Compatibility Surface Classification (3 buckets) |
|
||||||
|
| 2026-03-31 | Documented OpenClaw dual integration (REST vs WebSocket) |
|
||||||
|
| 2026-03-31 | Added ownership and status to all surfaces |
|
||||||
329
docs/runtime-api-changes.md
Normal file
329
docs/runtime-api-changes.md
Normal file
@@ -0,0 +1,329 @@
|
|||||||
|
# Runtime Service API 变更文档
|
||||||
|
|
||||||
|
## 概述
|
||||||
|
|
||||||
|
本文档描述了 `runtime_service` API 的改进,包括新增端点、增强的响应字段和改进的错误处理。
|
||||||
|
|
||||||
|
## 新增端点
|
||||||
|
|
||||||
|
### 1. GET /api/runtime/mode
|
||||||
|
|
||||||
|
返回当前运行模式(实盘或回测)及相关配置。
|
||||||
|
|
||||||
|
**响应模型**: `RuntimeModeResponse`
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"mode": "live",
|
||||||
|
"is_backtest": false,
|
||||||
|
"run_id": "20250401_120000",
|
||||||
|
"schedule_mode": "daily",
|
||||||
|
"is_running": true
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**字段说明**:
|
||||||
|
- `mode`: 运行模式,`"live"`(实盘)或 `"backtest"`(回测),运行时停止时为 `"stopped"`
|
||||||
|
- `is_backtest`: 是否为回测模式
|
||||||
|
- `run_id`: 当前运行的任务 ID
|
||||||
|
- `schedule_mode`: 调度模式,`"daily"` 或 `"intraday"`
|
||||||
|
- `is_running`: Gateway 是否正在运行
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### 2. GET /api/runtime/gateway/health
|
||||||
|
|
||||||
|
全面的 Gateway 健康检查,包括进程状态、端口连通性和配置状态。
|
||||||
|
|
||||||
|
**响应模型**: `GatewayHealthResponse`
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"status": "healthy",
|
||||||
|
"checks": {
|
||||||
|
"process": {
|
||||||
|
"status": "healthy",
|
||||||
|
"details": {
|
||||||
|
"pid": 12345,
|
||||||
|
"status": "running",
|
||||||
|
"returncode": null
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"port": {
|
||||||
|
"status": "healthy",
|
||||||
|
"details": {
|
||||||
|
"port": 8765,
|
||||||
|
"accessible": true
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"configuration": {
|
||||||
|
"status": "healthy",
|
||||||
|
"details": {
|
||||||
|
"has_runtime_manager": true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"timestamp": "2025-04-01T12:00:00.000000"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**状态说明**:
|
||||||
|
- `status`: 整体健康状态,`"healthy"`(健康)、`"degraded"`(降级)或 `"unhealthy"`(不健康)
|
||||||
|
- `checks.process.status`: 进程状态
|
||||||
|
- `checks.port.status`: 端口连通性
|
||||||
|
- `checks.configuration.status`: 配置状态
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### 3. GET /health/gateway
|
||||||
|
|
||||||
|
服务级别的 Gateway 健康检查端点。
|
||||||
|
|
||||||
|
**响应示例**:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"status": "healthy",
|
||||||
|
"checks": {
|
||||||
|
"process": {
|
||||||
|
"status": "healthy",
|
||||||
|
"details": {
|
||||||
|
"pid": 12345,
|
||||||
|
"status": "running",
|
||||||
|
"returncode": null
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"port": {
|
||||||
|
"status": "healthy",
|
||||||
|
"details": {
|
||||||
|
"port": 8765,
|
||||||
|
"accessible": true
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"configuration": {
|
||||||
|
"status": "healthy",
|
||||||
|
"details": {
|
||||||
|
"has_runtime_manager": true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"timestamp": "2025-04-01T12:00:00.000000"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 改进的端点
|
||||||
|
|
||||||
|
### GET /api/runtime/gateway/status
|
||||||
|
|
||||||
|
**新增字段**:
|
||||||
|
- `process_status`: 进程状态(`"running"`、`"exited"`、`"not_running"`)
|
||||||
|
- `pid`: 进程 ID
|
||||||
|
|
||||||
|
**响应示例**:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"is_running": true,
|
||||||
|
"port": 8765,
|
||||||
|
"run_id": "20250401_120000",
|
||||||
|
"process_status": "running",
|
||||||
|
"pid": 12345
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### GET /health
|
||||||
|
|
||||||
|
**改进的响应结构**:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"status": "healthy",
|
||||||
|
"service": "runtime-service",
|
||||||
|
"gateway": {
|
||||||
|
"running": true,
|
||||||
|
"port": 8765,
|
||||||
|
"pid": 12345,
|
||||||
|
"process_status": "running",
|
||||||
|
"returncode": null
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**字段说明**:
|
||||||
|
- `status`: 服务整体状态(考虑 Gateway 进程状态)
|
||||||
|
- `gateway.running`: Gateway 是否运行中
|
||||||
|
- `gateway.pid`: Gateway 进程 ID
|
||||||
|
- `gateway.process_status`: 进程详细状态
|
||||||
|
- `gateway.returncode`: 进程退出码(如已退出)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### GET /api/status
|
||||||
|
|
||||||
|
**新增字段**:
|
||||||
|
- `runtime.gateway_pid`: Gateway 进程 ID
|
||||||
|
- `runtime.gateway_process_status`: 进程状态
|
||||||
|
|
||||||
|
**响应示例**:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"status": "operational",
|
||||||
|
"service": "runtime-service",
|
||||||
|
"runtime": {
|
||||||
|
"gateway_running": true,
|
||||||
|
"gateway_port": 8765,
|
||||||
|
"gateway_pid": 12345,
|
||||||
|
"gateway_process_status": "running",
|
||||||
|
"has_runtime_manager": true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### POST /api/runtime/start
|
||||||
|
|
||||||
|
**改进的错误信息**:
|
||||||
|
|
||||||
|
启动失败时返回详细的错误信息,包括:
|
||||||
|
- 进程退出码
|
||||||
|
- 最近的日志输出(最多 4000 字符)
|
||||||
|
- 配置问题检测
|
||||||
|
|
||||||
|
**错误响应示例**:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"detail": "Gateway process exited unexpectedly\nExit code: 1\nRecent log output:\n[ERROR] FINNHUB_API_KEY not set...\nConfiguration issues detected: FINNHUB_API_KEY environment variable is required for live mode"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### POST /api/runtime/stop
|
||||||
|
|
||||||
|
**改进的错误信息**:
|
||||||
|
|
||||||
|
- 当 Gateway 进程已退出时,返回包含退出码和 PID 的详细信息
|
||||||
|
- 停止失败时返回具体原因
|
||||||
|
|
||||||
|
**错误响应示例(进程已退出)**:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"detail": "No runtime is currently running. Previous Gateway process exited with code 1. PID: 12345"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**成功响应**:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"status": "stopped",
|
||||||
|
"message": "Runtime stopped successfully (PID: 12345)"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 配置验证
|
||||||
|
|
||||||
|
### 启动时验证
|
||||||
|
|
||||||
|
Gateway 启动前会自动验证以下配置:
|
||||||
|
|
||||||
|
1. **模式验证**
|
||||||
|
- `mode` 必须是 `"live"` 或 `"backtest"`
|
||||||
|
|
||||||
|
2. **环境变量**
|
||||||
|
- 实盘模式需要 `FINNHUB_API_KEY`
|
||||||
|
- 需要 `MODEL_NAME` 和 `OPENAI_API_KEY`
|
||||||
|
|
||||||
|
3. **股票池**
|
||||||
|
- `tickers` 不能为空且必须是列表
|
||||||
|
|
||||||
|
4. **数值验证**
|
||||||
|
- `initial_cash` 必须大于 0
|
||||||
|
- `margin_requirement` 必须在 0-1 之间
|
||||||
|
|
||||||
|
5. **回测日期**
|
||||||
|
- `start_date` 和 `end_date` 格式必须为 `YYYY-MM-DD`
|
||||||
|
- `start_date` 必须早于 `end_date`
|
||||||
|
|
||||||
|
6. **调度模式**
|
||||||
|
- `schedule_mode` 必须是 `"daily"` 或 `"intraday"`
|
||||||
|
|
||||||
|
**验证失败响应**:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"detail": "Gateway configuration validation failed: FINNHUB_API_KEY environment variable is required for live mode; initial_cash must be greater than 0"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 数据模型
|
||||||
|
|
||||||
|
### GatewayStatusResponse
|
||||||
|
|
||||||
|
```python
|
||||||
|
class GatewayStatusResponse(BaseModel):
|
||||||
|
is_running: bool
|
||||||
|
port: int
|
||||||
|
run_id: Optional[str] = None
|
||||||
|
process_status: Optional[str] = None # 新增
|
||||||
|
pid: Optional[int] = None # 新增
|
||||||
|
```
|
||||||
|
|
||||||
|
### GatewayHealthResponse
|
||||||
|
|
||||||
|
```python
|
||||||
|
class GatewayHealthResponse(BaseModel):
|
||||||
|
status: str
|
||||||
|
checks: Dict[str, Any]
|
||||||
|
timestamp: str
|
||||||
|
```
|
||||||
|
|
||||||
|
### RuntimeModeResponse
|
||||||
|
|
||||||
|
```python
|
||||||
|
class RuntimeModeResponse(BaseModel):
|
||||||
|
mode: str
|
||||||
|
is_backtest: bool
|
||||||
|
run_id: Optional[str] = None
|
||||||
|
schedule_mode: Optional[str] = None
|
||||||
|
is_running: bool
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 架构改进
|
||||||
|
|
||||||
|
### 新增辅助函数
|
||||||
|
|
||||||
|
1. **`_validate_gateway_config(bootstrap)`**
|
||||||
|
- 验证 Gateway 启动配置
|
||||||
|
- 返回验证错误列表
|
||||||
|
|
||||||
|
2. **`_get_gateway_process_details()`**
|
||||||
|
- 获取 Gateway 进程详细信息
|
||||||
|
- 包括 PID、状态、退出码
|
||||||
|
|
||||||
|
3. **`_check_gateway_health()`**
|
||||||
|
- 执行全面的健康检查
|
||||||
|
- 检查进程、端口、配置
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 向后兼容性
|
||||||
|
|
||||||
|
所有改进都保持向后兼容:
|
||||||
|
- 现有端点继续工作
|
||||||
|
- 新增字段为可选
|
||||||
|
- 错误响应格式保持不变(仅在 detail 中提供更详细信息)
|
||||||
79
docs/terminology.md
Normal file
79
docs/terminology.md
Normal file
@@ -0,0 +1,79 @@
|
|||||||
|
# Terminology
|
||||||
|
|
||||||
|
Use these terms consistently when changing code, docs, or UI.
|
||||||
|
|
||||||
|
## Core Terms
|
||||||
|
|
||||||
|
### `design-time`
|
||||||
|
|
||||||
|
Use for configuration, editing, and control-plane concepts that exist before a
|
||||||
|
specific runtime is launched.
|
||||||
|
|
||||||
|
Typical examples:
|
||||||
|
|
||||||
|
- `workspaces/`
|
||||||
|
- workspace registry CRUD
|
||||||
|
- design-time agent metadata
|
||||||
|
|
||||||
|
### `runtime`
|
||||||
|
|
||||||
|
Use for the active execution layer and its state.
|
||||||
|
|
||||||
|
Typical examples:
|
||||||
|
|
||||||
|
- runtime lifecycle APIs
|
||||||
|
- scheduler / gateway execution
|
||||||
|
- approvals during a live run
|
||||||
|
- runtime snapshots and logs
|
||||||
|
|
||||||
|
### `run`
|
||||||
|
|
||||||
|
Use for one concrete execution instance.
|
||||||
|
|
||||||
|
Typical examples:
|
||||||
|
|
||||||
|
- `runs/<run_id>/`
|
||||||
|
- runtime history
|
||||||
|
- run logs
|
||||||
|
- run bootstrap config
|
||||||
|
- run-scoped agent assets
|
||||||
|
|
||||||
|
### `workspace`
|
||||||
|
|
||||||
|
Prefer this word only for the design-time registry unless you are working on a
|
||||||
|
historical compatibility surface that still uses the old path or field name.
|
||||||
|
|
||||||
|
Examples:
|
||||||
|
|
||||||
|
- good: "design workspace"
|
||||||
|
- good: "workspace registry"
|
||||||
|
- avoid for new runtime UI: "current workspace" when you really mean current run
|
||||||
|
|
||||||
|
## Compatibility Rule
|
||||||
|
|
||||||
|
Some API paths and fields still use legacy names:
|
||||||
|
|
||||||
|
- `/api/workspaces/{workspace_id}/agents/...`
|
||||||
|
- `workspace_id` on approval records
|
||||||
|
|
||||||
|
When reading those surfaces:
|
||||||
|
|
||||||
|
- design-time CRUD routes use `workspace_id` literally
|
||||||
|
- runtime-read routes may use the same slot for `run_id`
|
||||||
|
|
||||||
|
For new code:
|
||||||
|
|
||||||
|
- prefer `runId` for runtime variables
|
||||||
|
- prefer `workspaceId` only for design-time registry flows
|
||||||
|
|
||||||
|
## UI Wording
|
||||||
|
|
||||||
|
For operator-facing runtime UI, prefer:
|
||||||
|
|
||||||
|
- "运行任务"
|
||||||
|
- "运行文件"
|
||||||
|
- "运行资产"
|
||||||
|
- "任务 ID"
|
||||||
|
|
||||||
|
Avoid using "工作区" for active runtime concepts unless the screen is truly
|
||||||
|
about the design-time workspace registry.
|
||||||
13
env.template
13
env.template
@@ -55,6 +55,19 @@ AGENT_PORTFOLIO_MANAGER_MODEL_NAME=qwen3-max-preview
|
|||||||
|
|
||||||
# ================== Advanced Configuration | 高阶配置 ==================
|
# ================== Advanced Configuration | 高阶配置 ==================
|
||||||
|
|
||||||
|
# Skill Sandbox Mode | 技能沙盒执行模式
|
||||||
|
# none = direct execution (default, development only) | 直接执行(默认,仅开发环境)
|
||||||
|
# docker = Docker container isolation | Docker 容器隔离
|
||||||
|
# kubernetes = Kubernetes Pod isolation (reserved) | Kubernetes Pod 隔离(预留)
|
||||||
|
SKILL_SANDBOX_MODE=none
|
||||||
|
|
||||||
|
# Docker Sandbox Settings (only used when SKILL_SANDBOX_MODE=docker) | Docker 沙盒配置
|
||||||
|
SKILL_SANDBOX_IMAGE=python:3.11-slim
|
||||||
|
SKILL_SANDBOX_MEMORY_LIMIT=512m
|
||||||
|
SKILL_SANDBOX_CPU_LIMIT=1.0
|
||||||
|
SKILL_SANDBOX_NETWORK=none
|
||||||
|
SKILL_SANDBOX_TIMEOUT=60
|
||||||
|
|
||||||
# Maximum conference discussion cycles (default: 2) | 最大会议讨论轮数(默认:2)
|
# Maximum conference discussion cycles (default: 2) | 最大会议讨论轮数(默认:2)
|
||||||
MAX_COMM_CYCLES=2
|
MAX_COMM_CYCLES=2
|
||||||
|
|
||||||
|
|||||||
@@ -67,6 +67,7 @@
|
|||||||
"typescript": "^5.9.2",
|
"typescript": "^5.9.2",
|
||||||
"vite": "^7.1.2",
|
"vite": "^7.1.2",
|
||||||
"vite-tsconfig-paths": "^5.1.4",
|
"vite-tsconfig-paths": "^5.1.4",
|
||||||
"vitest": "^4.1.0"
|
"vitest": "^4.1.0",
|
||||||
|
"yaml": "^2.8.3"
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -9,7 +9,7 @@ import { useRuntimeControls } from './hooks/useRuntimeControls';
|
|||||||
import { useStockDataRequests } from './hooks/useStockDataRequests';
|
import { useStockDataRequests } from './hooks/useStockDataRequests';
|
||||||
import { useWebSocketConnection } from './hooks/useWebSocketConnection';
|
import { useWebSocketConnection } from './hooks/useWebSocketConnection';
|
||||||
import { fetchRuntimeLogs } from './services/runtimeApi';
|
import { fetchRuntimeLogs } from './services/runtimeApi';
|
||||||
import { useAgentStore } from './store/agentStore';
|
import { useAgentRunFileState, useAgentStore } from './store/agentStore';
|
||||||
import { useMarketStore } from './store/marketStore';
|
import { useMarketStore } from './store/marketStore';
|
||||||
import { usePortfolioStore } from './store/portfolioStore';
|
import { usePortfolioStore } from './store/portfolioStore';
|
||||||
import { useRuntimeStore } from './store/runtimeStore';
|
import { useRuntimeStore } from './store/runtimeStore';
|
||||||
@@ -82,17 +82,20 @@ export default function LiveTradingApp() {
|
|||||||
skillDetailLoadingKey,
|
skillDetailLoadingKey,
|
||||||
agentSkillsSavingKey,
|
agentSkillsSavingKey,
|
||||||
agentSkillsFeedback,
|
agentSkillsFeedback,
|
||||||
selectedWorkspaceFile,
|
|
||||||
workspaceFilesByAgent,
|
|
||||||
workspaceDraftContent,
|
|
||||||
isWorkspaceFileLoading,
|
|
||||||
workspaceFileSavingKey,
|
|
||||||
workspaceFileFeedback,
|
|
||||||
setSelectedWorkspaceFile,
|
setSelectedWorkspaceFile,
|
||||||
setSelectedSkillAgentId,
|
setSelectedSkillAgentId,
|
||||||
setWorkspaceDraftContent,
|
|
||||||
} = useAgentStore();
|
} = useAgentStore();
|
||||||
|
|
||||||
|
const {
|
||||||
|
selectedRunFile,
|
||||||
|
runFilesByAgent,
|
||||||
|
runDraftContent,
|
||||||
|
isRunFileLoading,
|
||||||
|
runFileSavingKey,
|
||||||
|
runFileFeedback,
|
||||||
|
setRunDraftContent,
|
||||||
|
} = useAgentRunFileState();
|
||||||
|
|
||||||
const { feed, processHistoricalFeed, processFeedEvent, addSystemMessage, clearFeed } = useFeedProcessor();
|
const { feed, processHistoricalFeed, processFeedEvent, addSystemMessage, clearFeed } = useFeedProcessor();
|
||||||
const resetRuntimeViewState = useCallback(() => {
|
const resetRuntimeViewState = useCallback(() => {
|
||||||
clearFeed();
|
clearFeed();
|
||||||
@@ -177,8 +180,8 @@ export default function LiveTradingApp() {
|
|||||||
const selectedAgentId = selectedSkillAgentId || AGENTS[0]?.id || null;
|
const selectedAgentId = selectedSkillAgentId || AGENTS[0]?.id || null;
|
||||||
const selectedAgentProfile = selectedAgentId ? (agentProfilesByAgent[selectedAgentId] || null) : null;
|
const selectedAgentProfile = selectedAgentId ? (agentProfilesByAgent[selectedAgentId] || null) : null;
|
||||||
const selectedAgentSkills = selectedAgentId ? (agentSkillsByAgent[selectedAgentId] || []) : [];
|
const selectedAgentSkills = selectedAgentId ? (agentSkillsByAgent[selectedAgentId] || []) : [];
|
||||||
const selectedWorkspaceContent = selectedAgentId && selectedWorkspaceFile
|
const selectedRunFileContent = selectedAgentId && selectedRunFile
|
||||||
? (workspaceFilesByAgent[selectedAgentId]?.[selectedWorkspaceFile] || '')
|
? (runFilesByAgent[selectedAgentId]?.[selectedRunFile] || '')
|
||||||
: '';
|
: '';
|
||||||
|
|
||||||
useEffect(() => {
|
useEffect(() => {
|
||||||
@@ -188,10 +191,10 @@ export default function LiveTradingApp() {
|
|||||||
}, [selectedSkillAgentId, setSelectedSkillAgentId]);
|
}, [selectedSkillAgentId, setSelectedSkillAgentId]);
|
||||||
|
|
||||||
useEffect(() => {
|
useEffect(() => {
|
||||||
if (!selectedWorkspaceFile) {
|
if (!selectedRunFile) {
|
||||||
setSelectedWorkspaceFile('MEMORY.md');
|
setSelectedWorkspaceFile('MEMORY.md');
|
||||||
}
|
}
|
||||||
}, [selectedWorkspaceFile, setSelectedWorkspaceFile]);
|
}, [selectedRunFile, setSelectedWorkspaceFile]);
|
||||||
|
|
||||||
useEffect(() => {
|
useEffect(() => {
|
||||||
if (!isSocketReady || !selectedAgentId || !clientRef.current) {
|
if (!isSocketReady || !selectedAgentId || !clientRef.current) {
|
||||||
@@ -207,10 +210,10 @@ export default function LiveTradingApp() {
|
|||||||
}
|
}
|
||||||
|
|
||||||
if (
|
if (
|
||||||
selectedWorkspaceFile
|
selectedRunFile
|
||||||
&& workspaceFilesByAgent[selectedAgentId]?.[selectedWorkspaceFile] === undefined
|
&& runFilesByAgent[selectedAgentId]?.[selectedRunFile] === undefined
|
||||||
) {
|
) {
|
||||||
requestWorkspaceFile(selectedAgentId, selectedWorkspaceFile);
|
requestWorkspaceFile(selectedAgentId, selectedRunFile);
|
||||||
}
|
}
|
||||||
}, [
|
}, [
|
||||||
agentProfilesByAgent,
|
agentProfilesByAgent,
|
||||||
@@ -221,8 +224,8 @@ export default function LiveTradingApp() {
|
|||||||
requestAgentSkills,
|
requestAgentSkills,
|
||||||
requestWorkspaceFile,
|
requestWorkspaceFile,
|
||||||
selectedAgentId,
|
selectedAgentId,
|
||||||
selectedWorkspaceFile,
|
selectedRunFile,
|
||||||
workspaceFilesByAgent,
|
runFilesByAgent,
|
||||||
]);
|
]);
|
||||||
|
|
||||||
useEffect(() => {
|
useEffect(() => {
|
||||||
@@ -361,7 +364,7 @@ export default function LiveTradingApp() {
|
|||||||
agents: AGENTS,
|
agents: AGENTS,
|
||||||
agentProfilesByAgent,
|
agentProfilesByAgent,
|
||||||
agentSkillsByAgent,
|
agentSkillsByAgent,
|
||||||
workspaceFilesByAgent,
|
runFilesByAgent,
|
||||||
selectedAgentId,
|
selectedAgentId,
|
||||||
selectedAgentProfile,
|
selectedAgentProfile,
|
||||||
selectedAgentSkills,
|
selectedAgentSkills,
|
||||||
@@ -369,16 +372,16 @@ export default function LiveTradingApp() {
|
|||||||
localSkillDraftsByKey,
|
localSkillDraftsByKey,
|
||||||
skillDetailLoadingKey,
|
skillDetailLoadingKey,
|
||||||
editableFiles: EDITABLE_AGENT_WORKSPACE_FILES,
|
editableFiles: EDITABLE_AGENT_WORKSPACE_FILES,
|
||||||
selectedWorkspaceFile,
|
selectedRunFile,
|
||||||
workspaceFileContent: selectedWorkspaceContent,
|
runFileContent: selectedRunFileContent,
|
||||||
workspaceDraftContent,
|
runDraftContent,
|
||||||
isConnected,
|
isConnected,
|
||||||
isAgentSkillsLoading,
|
isAgentSkillsLoading,
|
||||||
agentSkillsSavingKey,
|
agentSkillsSavingKey,
|
||||||
agentSkillsFeedback,
|
agentSkillsFeedback,
|
||||||
isWorkspaceFileLoading,
|
isRunFileLoading,
|
||||||
workspaceFileSavingKey,
|
runFileSavingKey,
|
||||||
workspaceFileFeedback,
|
runFileFeedback,
|
||||||
onAgentChange: handleSkillAgentChange,
|
onAgentChange: handleSkillAgentChange,
|
||||||
onCreateLocalSkill: handleCreateLocalSkill,
|
onCreateLocalSkill: handleCreateLocalSkill,
|
||||||
onSkillDetailRequest: requestSkillDetail,
|
onSkillDetailRequest: requestSkillDetail,
|
||||||
@@ -388,8 +391,8 @@ export default function LiveTradingApp() {
|
|||||||
onRemoveSharedSkill: handleRemoveSharedSkill,
|
onRemoveSharedSkill: handleRemoveSharedSkill,
|
||||||
onSkillToggle: handleAgentSkillToggle,
|
onSkillToggle: handleAgentSkillToggle,
|
||||||
onWorkspaceFileChange: handleWorkspaceFileChange,
|
onWorkspaceFileChange: handleWorkspaceFileChange,
|
||||||
onWorkspaceDraftChange: setWorkspaceDraftContent,
|
onRunDraftChange: setRunDraftContent,
|
||||||
onWorkspaceFileSave: handleWorkspaceFileSave,
|
onRunFileSave: handleWorkspaceFileSave,
|
||||||
onUploadExternalSkill: handleUploadExternalSkill,
|
onUploadExternalSkill: handleUploadExternalSkill,
|
||||||
clientRef,
|
clientRef,
|
||||||
};
|
};
|
||||||
|
|||||||
@@ -208,7 +208,7 @@ export default function RuntimeSettingsPanel({
|
|||||||
background: '#FFFFFF',
|
background: '#FFFFFF',
|
||||||
border: '1px dashed #D0D7DE'
|
border: '1px dashed #D0D7DE'
|
||||||
}}>
|
}}>
|
||||||
恢复启动会从所选历史任务复制运行状态、组合、交易记录和 Agent 工作区资产,并以新的任务 ID 继续运行。
|
恢复启动会从所选历史任务复制运行状态、组合、交易记录和 Agent 运行资产,并以新的任务 ID 继续运行。
|
||||||
</div>
|
</div>
|
||||||
</>
|
</>
|
||||||
)}
|
)}
|
||||||
|
|||||||
@@ -207,6 +207,12 @@ function formatSessionLabel(sessionId) {
|
|||||||
return sessionId || '无会话';
|
return sessionId || '无会话';
|
||||||
}
|
}
|
||||||
|
|
||||||
|
function formatApprovalScopeLabel(approval) {
|
||||||
|
const runId = approval?.run_id || approval?.workspace_id || '-';
|
||||||
|
const agentId = approval?.agent_id || '-';
|
||||||
|
return `${agentId} · 运行 ${runId} · ${formatSessionLabel(approval?.session_id)}`;
|
||||||
|
}
|
||||||
|
|
||||||
function formatEventLabel(eventName) {
|
function formatEventLabel(eventName) {
|
||||||
if (!eventName) {
|
if (!eventName) {
|
||||||
return '-';
|
return '-';
|
||||||
@@ -598,7 +604,7 @@ export default function RuntimeView() {
|
|||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
<div style={{ fontSize: 11, color: '#6B7280', lineHeight: 1.5 }}>
|
<div style={{ fontSize: 11, color: '#6B7280', lineHeight: 1.5 }}>
|
||||||
{approval.agent_id} · {approval.workspace_id} · {formatSessionLabel(approval.session_id)}
|
{formatApprovalScopeLabel(approval)}
|
||||||
</div>
|
</div>
|
||||||
{approval.tool_input && (
|
{approval.tool_input && (
|
||||||
<pre style={{
|
<pre style={{
|
||||||
|
|||||||
@@ -8,7 +8,7 @@ export default function TraderView({
|
|||||||
agents,
|
agents,
|
||||||
agentProfilesByAgent,
|
agentProfilesByAgent,
|
||||||
agentSkillsByAgent,
|
agentSkillsByAgent,
|
||||||
workspaceFilesByAgent,
|
runFilesByAgent,
|
||||||
selectedAgentId,
|
selectedAgentId,
|
||||||
selectedAgentProfile,
|
selectedAgentProfile,
|
||||||
selectedAgentSkills,
|
selectedAgentSkills,
|
||||||
@@ -16,16 +16,16 @@ export default function TraderView({
|
|||||||
localSkillDraftsByKey,
|
localSkillDraftsByKey,
|
||||||
skillDetailLoadingKey,
|
skillDetailLoadingKey,
|
||||||
editableFiles,
|
editableFiles,
|
||||||
selectedWorkspaceFile,
|
selectedRunFile,
|
||||||
workspaceFileContent,
|
runFileContent,
|
||||||
workspaceDraftContent,
|
runDraftContent,
|
||||||
isConnected,
|
isConnected,
|
||||||
isAgentSkillsLoading,
|
isAgentSkillsLoading,
|
||||||
agentSkillsSavingKey,
|
agentSkillsSavingKey,
|
||||||
agentSkillsFeedback,
|
agentSkillsFeedback,
|
||||||
isWorkspaceFileLoading,
|
isRunFileLoading,
|
||||||
workspaceFileSavingKey,
|
runFileSavingKey,
|
||||||
workspaceFileFeedback,
|
runFileFeedback,
|
||||||
onAgentChange,
|
onAgentChange,
|
||||||
onCreateLocalSkill,
|
onCreateLocalSkill,
|
||||||
onSkillDetailRequest,
|
onSkillDetailRequest,
|
||||||
@@ -35,8 +35,8 @@ export default function TraderView({
|
|||||||
onRemoveSharedSkill,
|
onRemoveSharedSkill,
|
||||||
onSkillToggle,
|
onSkillToggle,
|
||||||
onWorkspaceFileChange,
|
onWorkspaceFileChange,
|
||||||
onWorkspaceDraftChange,
|
onRunDraftChange,
|
||||||
onWorkspaceFileSave,
|
onRunFileSave,
|
||||||
onUploadExternalSkill
|
onUploadExternalSkill
|
||||||
}) {
|
}) {
|
||||||
const srOnlyStyle = {
|
const srOnlyStyle = {
|
||||||
@@ -133,10 +133,10 @@ export default function TraderView({
|
|||||||
}}>
|
}}>
|
||||||
<div style={{ display: 'grid', gap: 4 }}>
|
<div style={{ display: 'grid', gap: 4 }}>
|
||||||
<div style={{ fontSize: 12, fontWeight: 800, letterSpacing: '0.5px', color: '#111111' }}>
|
<div style={{ fontSize: 12, fontWeight: 800, letterSpacing: '0.5px', color: '#111111' }}>
|
||||||
交易员档案
|
Agent 运行档案
|
||||||
</div>
|
</div>
|
||||||
<div style={{ fontSize: 11, color: '#6B7280' }}>
|
<div style={{ fontSize: 11, color: '#6B7280' }}>
|
||||||
聚焦查看每个 Agent 的模型、工具组、技能编排和工作区记忆,不展示交易表现数据
|
聚焦查看每个 Agent 在当前运行任务中的模型、工具组、技能编排和运行记忆,不展示交易表现数据
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
@@ -549,15 +549,15 @@ export default function TraderView({
|
|||||||
gap: 10
|
gap: 10
|
||||||
}}>
|
}}>
|
||||||
<div style={{ display: 'grid', gap: 4 }}>
|
<div style={{ display: 'grid', gap: 4 }}>
|
||||||
<div style={{ fontSize: 12, fontWeight: 800, color: '#111111' }}>工作区文件编辑</div>
|
<div style={{ fontSize: 12, fontWeight: 800, color: '#111111' }}>运行文件编辑</div>
|
||||||
<div style={{ fontSize: 11, color: '#6B7280' }}>
|
<div style={{ fontSize: 11, color: '#6B7280' }}>
|
||||||
直接调整该交易员的人设、协作方式和长期记忆文件
|
直接调整该交易员在当前运行任务中的人设、协作方式和长期记忆文件
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
<div style={{ display: 'flex', flexWrap: 'wrap', gap: 8 }}>
|
<div style={{ display: 'flex', flexWrap: 'wrap', gap: 8 }}>
|
||||||
{editableFiles.map((filename) => {
|
{editableFiles.map((filename) => {
|
||||||
const isActive = filename === selectedWorkspaceFile;
|
const isActive = filename === selectedRunFile;
|
||||||
return (
|
return (
|
||||||
<button
|
<button
|
||||||
key={filename}
|
key={filename}
|
||||||
@@ -581,12 +581,12 @@ export default function TraderView({
|
|||||||
</div>
|
</div>
|
||||||
|
|
||||||
<textarea
|
<textarea
|
||||||
id={`workspace-editor-${selectedAgentId}-${selectedWorkspaceFile || 'file'}`}
|
id={`workspace-editor-${selectedAgentId}-${selectedRunFile || 'file'}`}
|
||||||
name={`workspace_editor_${selectedAgentId}_${selectedWorkspaceFile || 'file'}`}
|
name={`workspace_editor_${selectedAgentId}_${selectedRunFile || 'file'}`}
|
||||||
aria-label={`编辑 ${selectedWorkspaceFile || '工作区文件'} 内容`}
|
aria-label={`编辑 ${selectedRunFile || '运行文件'} 内容`}
|
||||||
value={workspaceDraftContent}
|
value={runDraftContent}
|
||||||
onChange={(e) => onWorkspaceDraftChange(e.target.value)}
|
onChange={(e) => onRunDraftChange(e.target.value)}
|
||||||
placeholder={isWorkspaceFileLoading ? '加载中...' : '输入 markdown 内容'}
|
placeholder={isRunFileLoading ? '加载中...' : '输入 markdown 内容'}
|
||||||
style={{
|
style={{
|
||||||
minHeight: 280,
|
minHeight: 280,
|
||||||
resize: 'vertical',
|
resize: 'vertical',
|
||||||
@@ -603,33 +603,33 @@ export default function TraderView({
|
|||||||
|
|
||||||
<div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', gap: 10, flexWrap: 'wrap' }}>
|
<div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', gap: 10, flexWrap: 'wrap' }}>
|
||||||
<span style={{ fontSize: 10, color: '#6B7280', fontFamily: '"Courier New", monospace' }}>
|
<span style={{ fontSize: 10, color: '#6B7280', fontFamily: '"Courier New", monospace' }}>
|
||||||
当前文件: {selectedWorkspaceFile}
|
当前运行文件: {selectedRunFile}
|
||||||
</span>
|
</span>
|
||||||
<button
|
<button
|
||||||
onClick={onWorkspaceFileSave}
|
onClick={onRunFileSave}
|
||||||
disabled={!isConnected || isWorkspaceFileLoading || workspaceFileSavingKey !== null || workspaceDraftContent === workspaceFileContent}
|
disabled={!isConnected || isRunFileLoading || runFileSavingKey !== null || runDraftContent === runFileContent}
|
||||||
style={{
|
style={{
|
||||||
padding: '9px 14px',
|
padding: '9px 14px',
|
||||||
borderRadius: 6,
|
borderRadius: 6,
|
||||||
border: '1px solid #1565C0',
|
border: '1px solid #1565C0',
|
||||||
background: isConnected && !isWorkspaceFileLoading && workspaceFileSavingKey === null && workspaceDraftContent !== workspaceFileContent ? '#0D47A1' : '#94A3B8',
|
background: isConnected && !isRunFileLoading && runFileSavingKey === null && runDraftContent !== runFileContent ? '#0D47A1' : '#94A3B8',
|
||||||
color: '#FFFFFF',
|
color: '#FFFFFF',
|
||||||
fontSize: 11,
|
fontSize: 11,
|
||||||
fontWeight: 700,
|
fontWeight: 700,
|
||||||
cursor: isConnected && !isWorkspaceFileLoading && workspaceFileSavingKey === null && workspaceDraftContent !== workspaceFileContent ? 'pointer' : 'not-allowed'
|
cursor: isConnected && !isRunFileLoading && runFileSavingKey === null && runDraftContent !== runFileContent ? 'pointer' : 'not-allowed'
|
||||||
}}
|
}}
|
||||||
>
|
>
|
||||||
{workspaceFileSavingKey ? '保存中' : '保存文件'}
|
{runFileSavingKey ? '保存中' : '保存文件'}
|
||||||
</button>
|
</button>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
{workspaceFileFeedback && (
|
{runFileFeedback && (
|
||||||
<span style={{
|
<span style={{
|
||||||
color: workspaceFileFeedback.type === 'success' ? '#00C853' : '#FF5252',
|
color: runFileFeedback.type === 'success' ? '#00C853' : '#FF5252',
|
||||||
fontSize: 11,
|
fontSize: 11,
|
||||||
fontFamily: '"Courier New", monospace'
|
fontFamily: '"Courier New", monospace'
|
||||||
}}>
|
}}>
|
||||||
{workspaceFileFeedback.text}
|
{runFileFeedback.text}
|
||||||
</span>
|
</span>
|
||||||
)}
|
)}
|
||||||
</div>
|
</div>
|
||||||
|
|||||||
@@ -40,13 +40,13 @@ export function useAgentDataRequests(clientRef) {
|
|||||||
setIsWorkspaceFileLoading
|
setIsWorkspaceFileLoading
|
||||||
} = useAgentStore();
|
} = useAgentStore();
|
||||||
|
|
||||||
const resolveWorkspaceId = useCallback(async () => {
|
const resolveRunId = useCallback(async () => {
|
||||||
const runtime = await fetchCurrentRuntime();
|
const runtime = await fetchCurrentRuntime();
|
||||||
const workspaceId = runtime?.run_id;
|
const runId = runtime?.run_id;
|
||||||
if (!workspaceId) {
|
if (!runId) {
|
||||||
throw new Error('未检测到正在运行的任务');
|
throw new Error('未检测到正在运行的任务');
|
||||||
}
|
}
|
||||||
return workspaceId;
|
return runId;
|
||||||
}, []);
|
}, []);
|
||||||
|
|
||||||
const requestAgentSkills = useCallback((agentId) => {
|
const requestAgentSkills = useCallback((agentId) => {
|
||||||
@@ -54,8 +54,8 @@ export function useAgentDataRequests(clientRef) {
|
|||||||
if (!normalized) return false;
|
if (!normalized) return false;
|
||||||
setIsAgentSkillsLoading(true);
|
setIsAgentSkillsLoading(true);
|
||||||
setAgentSkillsFeedback(null);
|
setAgentSkillsFeedback(null);
|
||||||
void resolveWorkspaceId()
|
void resolveRunId()
|
||||||
.then((workspaceId) => fetchAgentSkills(workspaceId, normalized))
|
.then((runId) => fetchAgentSkills(runId, normalized))
|
||||||
.then((payload) => {
|
.then((payload) => {
|
||||||
setAgentSkillsByAgent((prev) => ({ ...prev, [normalized]: Array.isArray(payload?.skills) ? payload.skills : [] }));
|
setAgentSkillsByAgent((prev) => ({ ...prev, [normalized]: Array.isArray(payload?.skills) ? payload.skills : [] }));
|
||||||
setIsAgentSkillsLoading(false);
|
setIsAgentSkillsLoading(false);
|
||||||
@@ -72,13 +72,13 @@ export function useAgentDataRequests(clientRef) {
|
|||||||
}
|
}
|
||||||
});
|
});
|
||||||
return true;
|
return true;
|
||||||
}, [clientRef, resolveWorkspaceId, setAgentSkillsByAgent, setIsAgentSkillsLoading, setAgentSkillsFeedback]);
|
}, [clientRef, resolveRunId, setAgentSkillsByAgent, setIsAgentSkillsLoading, setAgentSkillsFeedback]);
|
||||||
|
|
||||||
const requestAgentProfile = useCallback((agentId) => {
|
const requestAgentProfile = useCallback((agentId) => {
|
||||||
const normalized = typeof agentId === 'string' ? agentId.trim() : '';
|
const normalized = typeof agentId === 'string' ? agentId.trim() : '';
|
||||||
if (!normalized) return false;
|
if (!normalized) return false;
|
||||||
void resolveWorkspaceId()
|
void resolveRunId()
|
||||||
.then((workspaceId) => fetchAgentProfile(workspaceId, normalized))
|
.then((runId) => fetchAgentProfile(runId, normalized))
|
||||||
.then((payload) => {
|
.then((payload) => {
|
||||||
setAgentProfilesByAgent((prev) => ({
|
setAgentProfilesByAgent((prev) => ({
|
||||||
...prev,
|
...prev,
|
||||||
@@ -92,15 +92,15 @@ export function useAgentDataRequests(clientRef) {
|
|||||||
}
|
}
|
||||||
});
|
});
|
||||||
return true;
|
return true;
|
||||||
}, [clientRef, resolveWorkspaceId, setAgentProfilesByAgent]);
|
}, [clientRef, resolveRunId, setAgentProfilesByAgent]);
|
||||||
|
|
||||||
const requestSkillDetail = useCallback((skillName) => {
|
const requestSkillDetail = useCallback((skillName) => {
|
||||||
const normalized = typeof skillName === 'string' ? skillName.trim() : '';
|
const normalized = typeof skillName === 'string' ? skillName.trim() : '';
|
||||||
if (!normalized) return false;
|
if (!normalized) return false;
|
||||||
const detailKey = `${selectedSkillAgentId}:${normalized}`;
|
const detailKey = `${selectedSkillAgentId}:${normalized}`;
|
||||||
setSkillDetailLoadingKey(detailKey);
|
setSkillDetailLoadingKey(detailKey);
|
||||||
void resolveWorkspaceId()
|
void resolveRunId()
|
||||||
.then((workspaceId) => fetchAgentSkillDetail(workspaceId, selectedSkillAgentId, normalized))
|
.then((runId) => fetchAgentSkillDetail(runId, selectedSkillAgentId, normalized))
|
||||||
.then((payload) => {
|
.then((payload) => {
|
||||||
setSkillDetailsByName((prev) => ({ ...prev, [detailKey]: payload?.skill || null }));
|
setSkillDetailsByName((prev) => ({ ...prev, [detailKey]: payload?.skill || null }));
|
||||||
useAgentStore.getState().setLocalSkillDraftsByKey((prev) => ({
|
useAgentStore.getState().setLocalSkillDraftsByKey((prev) => ({
|
||||||
@@ -121,7 +121,7 @@ export function useAgentDataRequests(clientRef) {
|
|||||||
}
|
}
|
||||||
});
|
});
|
||||||
return true;
|
return true;
|
||||||
}, [clientRef, resolveWorkspaceId, selectedSkillAgentId, setSkillDetailLoadingKey, setSkillDetailsByName]);
|
}, [clientRef, resolveRunId, selectedSkillAgentId, setSkillDetailLoadingKey, setSkillDetailsByName]);
|
||||||
|
|
||||||
const handleCreateLocalSkill = useCallback((skillName) => {
|
const handleCreateLocalSkill = useCallback((skillName) => {
|
||||||
const normalized = typeof skillName === 'string' ? skillName.trim() : '';
|
const normalized = typeof skillName === 'string' ? skillName.trim() : '';
|
||||||
@@ -131,8 +131,8 @@ export function useAgentDataRequests(clientRef) {
|
|||||||
}
|
}
|
||||||
setAgentSkillsSavingKey(`${selectedSkillAgentId}:${normalized}:create`);
|
setAgentSkillsSavingKey(`${selectedSkillAgentId}:${normalized}:create`);
|
||||||
setAgentSkillsFeedback(null);
|
setAgentSkillsFeedback(null);
|
||||||
void resolveWorkspaceId()
|
void resolveRunId()
|
||||||
.then((workspaceId) => createAgentLocalSkill(workspaceId, selectedSkillAgentId, normalized))
|
.then((runId) => createAgentLocalSkill(runId, selectedSkillAgentId, normalized))
|
||||||
.then(() => {
|
.then(() => {
|
||||||
setAgentSkillsSavingKey(null);
|
setAgentSkillsSavingKey(null);
|
||||||
setAgentSkillsFeedback({ type: 'success', text: `已创建本地技能 ${normalized}` });
|
setAgentSkillsFeedback({ type: 'success', text: `已创建本地技能 ${normalized}` });
|
||||||
@@ -152,7 +152,7 @@ export function useAgentDataRequests(clientRef) {
|
|||||||
setAgentSkillsFeedback({ type: 'error', text: '发送失败,请检查连接状态' });
|
setAgentSkillsFeedback({ type: 'error', text: '发送失败,请检查连接状态' });
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
}, [clientRef, requestAgentSkills, requestSkillDetail, resolveWorkspaceId, selectedSkillAgentId, setAgentSkillsFeedback, setAgentSkillsSavingKey]);
|
}, [clientRef, requestAgentSkills, requestSkillDetail, resolveRunId, selectedSkillAgentId, setAgentSkillsFeedback, setAgentSkillsSavingKey]);
|
||||||
|
|
||||||
const handleLocalSkillDraftChange = useCallback((skillName, content) => {
|
const handleLocalSkillDraftChange = useCallback((skillName, content) => {
|
||||||
const detailKey = `${selectedSkillAgentId}:${skillName}`;
|
const detailKey = `${selectedSkillAgentId}:${skillName}`;
|
||||||
@@ -165,8 +165,8 @@ export function useAgentDataRequests(clientRef) {
|
|||||||
if (typeof content !== 'string') return;
|
if (typeof content !== 'string') return;
|
||||||
setAgentSkillsSavingKey(`${selectedSkillAgentId}:${skillName}:content`);
|
setAgentSkillsSavingKey(`${selectedSkillAgentId}:${skillName}:content`);
|
||||||
setAgentSkillsFeedback(null);
|
setAgentSkillsFeedback(null);
|
||||||
void resolveWorkspaceId()
|
void resolveRunId()
|
||||||
.then((workspaceId) => updateAgentLocalSkill(workspaceId, selectedSkillAgentId, skillName, content))
|
.then((runId) => updateAgentLocalSkill(runId, selectedSkillAgentId, skillName, content))
|
||||||
.then(() => {
|
.then(() => {
|
||||||
setAgentSkillsSavingKey(null);
|
setAgentSkillsSavingKey(null);
|
||||||
setAgentSkillsFeedback({ type: 'success', text: `${selectedSkillAgentId} 的本地技能 ${skillName} 已保存` });
|
setAgentSkillsFeedback({ type: 'success', text: `${selectedSkillAgentId} 的本地技能 ${skillName} 已保存` });
|
||||||
@@ -185,13 +185,13 @@ export function useAgentDataRequests(clientRef) {
|
|||||||
setAgentSkillsFeedback({ type: 'error', text: '发送失败,请检查连接状态' });
|
setAgentSkillsFeedback({ type: 'error', text: '发送失败,请检查连接状态' });
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
}, [clientRef, localSkillDraftsByKey, requestSkillDetail, resolveWorkspaceId, selectedSkillAgentId, setAgentSkillsFeedback, setAgentSkillsSavingKey]);
|
}, [clientRef, localSkillDraftsByKey, requestSkillDetail, resolveRunId, selectedSkillAgentId, setAgentSkillsFeedback, setAgentSkillsSavingKey]);
|
||||||
|
|
||||||
const handleLocalSkillDelete = useCallback((skillName) => {
|
const handleLocalSkillDelete = useCallback((skillName) => {
|
||||||
setAgentSkillsSavingKey(`${selectedSkillAgentId}:${skillName}:delete`);
|
setAgentSkillsSavingKey(`${selectedSkillAgentId}:${skillName}:delete`);
|
||||||
setAgentSkillsFeedback(null);
|
setAgentSkillsFeedback(null);
|
||||||
void resolveWorkspaceId()
|
void resolveRunId()
|
||||||
.then((workspaceId) => deleteAgentLocalSkill(workspaceId, selectedSkillAgentId, skillName))
|
.then((runId) => deleteAgentLocalSkill(runId, selectedSkillAgentId, skillName))
|
||||||
.then(() => {
|
.then(() => {
|
||||||
setAgentSkillsSavingKey(null);
|
setAgentSkillsSavingKey(null);
|
||||||
setAgentSkillsFeedback({ type: 'success', text: `${selectedSkillAgentId} 的本地技能 ${skillName} 已删除` });
|
setAgentSkillsFeedback({ type: 'success', text: `${selectedSkillAgentId} 的本地技能 ${skillName} 已删除` });
|
||||||
@@ -210,13 +210,13 @@ export function useAgentDataRequests(clientRef) {
|
|||||||
setAgentSkillsFeedback({ type: 'error', text: '发送失败,请检查连接状态' });
|
setAgentSkillsFeedback({ type: 'error', text: '发送失败,请检查连接状态' });
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
}, [clientRef, requestAgentSkills, resolveWorkspaceId, selectedSkillAgentId, setAgentSkillsFeedback, setAgentSkillsSavingKey]);
|
}, [clientRef, requestAgentSkills, resolveRunId, selectedSkillAgentId, setAgentSkillsFeedback, setAgentSkillsSavingKey]);
|
||||||
|
|
||||||
const handleRemoveSharedSkill = useCallback((skillName) => {
|
const handleRemoveSharedSkill = useCallback((skillName) => {
|
||||||
setAgentSkillsSavingKey(`${selectedSkillAgentId}:${skillName}:remove`);
|
setAgentSkillsSavingKey(`${selectedSkillAgentId}:${skillName}:remove`);
|
||||||
setAgentSkillsFeedback(null);
|
setAgentSkillsFeedback(null);
|
||||||
void resolveWorkspaceId()
|
void resolveRunId()
|
||||||
.then((workspaceId) => disableAgentSkill(workspaceId, selectedSkillAgentId, skillName))
|
.then((runId) => disableAgentSkill(runId, selectedSkillAgentId, skillName))
|
||||||
.then(() => {
|
.then(() => {
|
||||||
setAgentSkillsSavingKey(null);
|
setAgentSkillsSavingKey(null);
|
||||||
setAgentSkillsFeedback({ type: 'success', text: `${selectedSkillAgentId} 已移除共享技能 ${skillName}` });
|
setAgentSkillsFeedback({ type: 'success', text: `${selectedSkillAgentId} 已移除共享技能 ${skillName}` });
|
||||||
@@ -235,16 +235,16 @@ export function useAgentDataRequests(clientRef) {
|
|||||||
setAgentSkillsFeedback({ type: 'error', text: '发送失败,请检查连接状态' });
|
setAgentSkillsFeedback({ type: 'error', text: '发送失败,请检查连接状态' });
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
}, [clientRef, requestAgentSkills, resolveWorkspaceId, selectedSkillAgentId, setAgentSkillsFeedback, setAgentSkillsSavingKey]);
|
}, [clientRef, requestAgentSkills, resolveRunId, selectedSkillAgentId, setAgentSkillsFeedback, setAgentSkillsSavingKey]);
|
||||||
|
|
||||||
const handleAgentSkillToggle = useCallback((skillName, enabled) => {
|
const handleAgentSkillToggle = useCallback((skillName, enabled) => {
|
||||||
const agentId = selectedSkillAgentId;
|
const agentId = selectedSkillAgentId;
|
||||||
setAgentSkillsSavingKey(`${agentId}:${skillName}`);
|
setAgentSkillsSavingKey(`${agentId}:${skillName}`);
|
||||||
setAgentSkillsFeedback(null);
|
setAgentSkillsFeedback(null);
|
||||||
void resolveWorkspaceId()
|
void resolveRunId()
|
||||||
.then((workspaceId) => enabled
|
.then((runId) => enabled
|
||||||
? enableAgentSkill(workspaceId, agentId, skillName)
|
? enableAgentSkill(runId, agentId, skillName)
|
||||||
: disableAgentSkill(workspaceId, agentId, skillName))
|
: disableAgentSkill(runId, agentId, skillName))
|
||||||
.then(() => {
|
.then(() => {
|
||||||
setAgentSkillsSavingKey(null);
|
setAgentSkillsSavingKey(null);
|
||||||
setAgentSkillsFeedback({ type: 'success', text: `${agentId} ${enabled ? '已启用' : '已禁用'} ${skillName}` });
|
setAgentSkillsFeedback({ type: 'success', text: `${agentId} ${enabled ? '已启用' : '已禁用'} ${skillName}` });
|
||||||
@@ -263,7 +263,7 @@ export function useAgentDataRequests(clientRef) {
|
|||||||
setAgentSkillsFeedback({ type: 'error', text: '发送失败,请检查连接状态' });
|
setAgentSkillsFeedback({ type: 'error', text: '发送失败,请检查连接状态' });
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
}, [clientRef, requestAgentSkills, resolveWorkspaceId, selectedSkillAgentId, setAgentSkillsFeedback, setAgentSkillsSavingKey]);
|
}, [clientRef, requestAgentSkills, resolveRunId, selectedSkillAgentId, setAgentSkillsFeedback, setAgentSkillsSavingKey]);
|
||||||
|
|
||||||
const handleSkillAgentChange = useCallback((agentId) => {
|
const handleSkillAgentChange = useCallback((agentId) => {
|
||||||
setSelectedSkillAgentId(agentId);
|
setSelectedSkillAgentId(agentId);
|
||||||
@@ -278,8 +278,8 @@ export function useAgentDataRequests(clientRef) {
|
|||||||
if (!normalizedAgentId || !normalizedFilename) return false;
|
if (!normalizedAgentId || !normalizedFilename) return false;
|
||||||
setIsWorkspaceFileLoading(true);
|
setIsWorkspaceFileLoading(true);
|
||||||
setWorkspaceFileFeedback(null);
|
setWorkspaceFileFeedback(null);
|
||||||
void resolveWorkspaceId()
|
void resolveRunId()
|
||||||
.then((workspaceId) => fetchAgentWorkspaceFile(workspaceId, normalizedAgentId, normalizedFilename))
|
.then((runId) => fetchAgentWorkspaceFile(runId, normalizedAgentId, normalizedFilename))
|
||||||
.then((payload) => {
|
.then((payload) => {
|
||||||
setWorkspaceFilesByAgent((prev) => ({
|
setWorkspaceFilesByAgent((prev) => ({
|
||||||
...prev,
|
...prev,
|
||||||
@@ -303,7 +303,7 @@ export function useAgentDataRequests(clientRef) {
|
|||||||
}
|
}
|
||||||
});
|
});
|
||||||
return true;
|
return true;
|
||||||
}, [clientRef, resolveWorkspaceId, setIsWorkspaceFileLoading, setWorkspaceDraftContent, setWorkspaceFileFeedback, setWorkspaceFilesByAgent]);
|
}, [clientRef, resolveRunId, setIsWorkspaceFileLoading, setWorkspaceDraftContent, setWorkspaceFileFeedback, setWorkspaceFilesByAgent]);
|
||||||
|
|
||||||
const handleWorkspaceFileChange = useCallback((filename) => {
|
const handleWorkspaceFileChange = useCallback((filename) => {
|
||||||
useAgentStore.getState().setSelectedWorkspaceFile(filename);
|
useAgentStore.getState().setSelectedWorkspaceFile(filename);
|
||||||
@@ -314,8 +314,8 @@ export function useAgentDataRequests(clientRef) {
|
|||||||
const key = `${selectedSkillAgentId}:${selectedWorkspaceFile}`;
|
const key = `${selectedSkillAgentId}:${selectedWorkspaceFile}`;
|
||||||
setWorkspaceFileSavingKey(key);
|
setWorkspaceFileSavingKey(key);
|
||||||
setWorkspaceFileFeedback(null);
|
setWorkspaceFileFeedback(null);
|
||||||
void resolveWorkspaceId()
|
void resolveRunId()
|
||||||
.then((workspaceId) => updateAgentWorkspaceFile(workspaceId, selectedSkillAgentId, selectedWorkspaceFile, workspaceDraftContent))
|
.then((runId) => updateAgentWorkspaceFile(runId, selectedSkillAgentId, selectedWorkspaceFile, workspaceDraftContent))
|
||||||
.then((payload) => {
|
.then((payload) => {
|
||||||
setWorkspaceFileSavingKey(null);
|
setWorkspaceFileSavingKey(null);
|
||||||
setWorkspaceFileFeedback({ type: 'success', text: `${selectedSkillAgentId} 的 ${selectedWorkspaceFile} 已保存` });
|
setWorkspaceFileFeedback({ type: 'success', text: `${selectedSkillAgentId} 的 ${selectedWorkspaceFile} 已保存` });
|
||||||
@@ -345,7 +345,7 @@ export function useAgentDataRequests(clientRef) {
|
|||||||
setWorkspaceFileFeedback({ type: 'error', text: '发送失败,请检查连接状态' });
|
setWorkspaceFileFeedback({ type: 'error', text: '发送失败,请检查连接状态' });
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
}, [clientRef, resolveWorkspaceId, selectedSkillAgentId, selectedWorkspaceFile, setWorkspaceFileFeedback, setWorkspaceFileSavingKey, setWorkspaceFilesByAgent, workspaceDraftContent]);
|
}, [clientRef, resolveRunId, selectedSkillAgentId, selectedWorkspaceFile, setWorkspaceFileFeedback, setWorkspaceFileSavingKey, setWorkspaceFilesByAgent, workspaceDraftContent]);
|
||||||
|
|
||||||
const handleUploadExternalSkill = useCallback(async (file) => {
|
const handleUploadExternalSkill = useCallback(async (file) => {
|
||||||
if (!(file instanceof File)) {
|
if (!(file instanceof File)) {
|
||||||
|
|||||||
@@ -1,29 +0,0 @@
|
|||||||
/**
|
|
||||||
* useWebsocketSessionSync - DEPRECATED
|
|
||||||
*
|
|
||||||
* This hook is deprecated. WebSocket connection and event handling is now managed
|
|
||||||
* by useWebSocketConnection.js. This file is kept for backwards compatibility
|
|
||||||
* but will be removed in a future version.
|
|
||||||
*
|
|
||||||
* All functionality has been consolidated into:
|
|
||||||
* - useWebSocketConnection.js: WebSocket lifecycle and event handlers
|
|
||||||
* - useStockDataRequests.js: Stock data request callbacks
|
|
||||||
* - useAgentDataRequests.js: Agent operation callbacks
|
|
||||||
*/
|
|
||||||
|
|
||||||
import { useWebSocketConnection } from './useWebSocketConnection';
|
|
||||||
|
|
||||||
/**
|
|
||||||
* @deprecated Use useWebSocketConnection directly instead.
|
|
||||||
* This hook is a thin wrapper that delegates to useWebSocketConnection
|
|
||||||
* for backwards compatibility.
|
|
||||||
*/
|
|
||||||
export function useWebsocketSessionSync(props) {
|
|
||||||
// Delegate to useWebSocketConnection
|
|
||||||
const { clientRef } = useWebSocketConnection();
|
|
||||||
|
|
||||||
// Return clientRef so existing code can still access it
|
|
||||||
return { clientRef };
|
|
||||||
}
|
|
||||||
|
|
||||||
export default useWebsocketSessionSync;
|
|
||||||
@@ -129,56 +129,64 @@ export function fetchRuntimeLogs() {
|
|||||||
return safeFetch(RUNTIME_API_BASE, '/logs');
|
return safeFetch(RUNTIME_API_BASE, '/logs');
|
||||||
}
|
}
|
||||||
|
|
||||||
export function fetchAgentProfile(workspaceId, agentId) {
|
function buildRunScopedAgentPath(runId, agentId, suffix = '') {
|
||||||
return safeFetch(CONTROL_API_BASE, `/workspaces/${encodeURIComponent(workspaceId)}/agents/${encodeURIComponent(agentId)}/profile`);
|
return `/workspaces/${encodeURIComponent(runId)}/agents/${encodeURIComponent(agentId)}${suffix}`;
|
||||||
}
|
}
|
||||||
|
|
||||||
export function fetchAgentSkills(workspaceId, agentId) {
|
/**
|
||||||
return safeFetch(CONTROL_API_BASE, `/workspaces/${encodeURIComponent(workspaceId)}/agents/${encodeURIComponent(agentId)}/skills`);
|
* Runtime-read agent routes still use the `/workspaces/...` prefix on the
|
||||||
|
* backend, but the leading identifier on this surface is the active `run_id`.
|
||||||
|
*/
|
||||||
|
export function fetchAgentProfile(runId, agentId) {
|
||||||
|
return safeFetch(CONTROL_API_BASE, buildRunScopedAgentPath(runId, agentId, '/profile'));
|
||||||
}
|
}
|
||||||
|
|
||||||
export function fetchAgentSkillDetail(workspaceId, agentId, skillName) {
|
export function fetchAgentSkills(runId, agentId) {
|
||||||
return safeFetch(CONTROL_API_BASE, `/workspaces/${encodeURIComponent(workspaceId)}/agents/${encodeURIComponent(agentId)}/skills/${encodeURIComponent(skillName)}`);
|
return safeFetch(CONTROL_API_BASE, buildRunScopedAgentPath(runId, agentId, '/skills'));
|
||||||
}
|
}
|
||||||
|
|
||||||
export function fetchAgentWorkspaceFile(workspaceId, agentId, filename) {
|
export function fetchAgentSkillDetail(runId, agentId, skillName) {
|
||||||
return safeFetch(CONTROL_API_BASE, `/workspaces/${encodeURIComponent(workspaceId)}/agents/${encodeURIComponent(agentId)}/files/${encodeURIComponent(filename)}`);
|
return safeFetch(CONTROL_API_BASE, buildRunScopedAgentPath(runId, agentId, `/skills/${encodeURIComponent(skillName)}`));
|
||||||
}
|
}
|
||||||
|
|
||||||
export function createAgentLocalSkill(workspaceId, agentId, skillName) {
|
export function fetchAgentWorkspaceFile(runId, agentId, filename) {
|
||||||
return safeRequest(CONTROL_API_BASE, `/workspaces/${encodeURIComponent(workspaceId)}/agents/${encodeURIComponent(agentId)}/skills/local`, {
|
return safeFetch(CONTROL_API_BASE, buildRunScopedAgentPath(runId, agentId, `/files/${encodeURIComponent(filename)}`));
|
||||||
|
}
|
||||||
|
|
||||||
|
export function createAgentLocalSkill(runId, agentId, skillName) {
|
||||||
|
return safeRequest(CONTROL_API_BASE, buildRunScopedAgentPath(runId, agentId, '/skills/local'), {
|
||||||
method: 'POST',
|
method: 'POST',
|
||||||
body: JSON.stringify({ skill_name: skillName })
|
body: JSON.stringify({ skill_name: skillName })
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
|
||||||
export function updateAgentLocalSkill(workspaceId, agentId, skillName, content) {
|
export function updateAgentLocalSkill(runId, agentId, skillName, content) {
|
||||||
return safeRequest(CONTROL_API_BASE, `/workspaces/${encodeURIComponent(workspaceId)}/agents/${encodeURIComponent(agentId)}/skills/local/${encodeURIComponent(skillName)}`, {
|
return safeRequest(CONTROL_API_BASE, buildRunScopedAgentPath(runId, agentId, `/skills/local/${encodeURIComponent(skillName)}`), {
|
||||||
method: 'PUT',
|
method: 'PUT',
|
||||||
body: JSON.stringify({ content })
|
body: JSON.stringify({ content })
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
|
||||||
export function deleteAgentLocalSkill(workspaceId, agentId, skillName) {
|
export function deleteAgentLocalSkill(runId, agentId, skillName) {
|
||||||
return safeRequest(CONTROL_API_BASE, `/workspaces/${encodeURIComponent(workspaceId)}/agents/${encodeURIComponent(agentId)}/skills/local/${encodeURIComponent(skillName)}`, {
|
return safeRequest(CONTROL_API_BASE, buildRunScopedAgentPath(runId, agentId, `/skills/local/${encodeURIComponent(skillName)}`), {
|
||||||
method: 'DELETE'
|
method: 'DELETE'
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
|
||||||
export function enableAgentSkill(workspaceId, agentId, skillName) {
|
export function enableAgentSkill(runId, agentId, skillName) {
|
||||||
return safeRequest(CONTROL_API_BASE, `/workspaces/${encodeURIComponent(workspaceId)}/agents/${encodeURIComponent(agentId)}/skills/${encodeURIComponent(skillName)}/enable`, {
|
return safeRequest(CONTROL_API_BASE, buildRunScopedAgentPath(runId, agentId, `/skills/${encodeURIComponent(skillName)}/enable`), {
|
||||||
method: 'POST'
|
method: 'POST'
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
|
||||||
export function disableAgentSkill(workspaceId, agentId, skillName) {
|
export function disableAgentSkill(runId, agentId, skillName) {
|
||||||
return safeRequest(CONTROL_API_BASE, `/workspaces/${encodeURIComponent(workspaceId)}/agents/${encodeURIComponent(agentId)}/skills/${encodeURIComponent(skillName)}/disable`, {
|
return safeRequest(CONTROL_API_BASE, buildRunScopedAgentPath(runId, agentId, `/skills/${encodeURIComponent(skillName)}/disable`), {
|
||||||
method: 'POST'
|
method: 'POST'
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
|
||||||
export function updateAgentWorkspaceFile(workspaceId, agentId, filename, content) {
|
export function updateAgentWorkspaceFile(runId, agentId, filename, content) {
|
||||||
return fetch(`${CONTROL_API_BASE}/workspaces/${encodeURIComponent(workspaceId)}/agents/${encodeURIComponent(agentId)}/files/${encodeURIComponent(filename)}`, {
|
return fetch(`${CONTROL_API_BASE}${buildRunScopedAgentPath(runId, agentId, `/files/${encodeURIComponent(filename)}`)}`, {
|
||||||
method: 'PUT',
|
method: 'PUT',
|
||||||
headers: {
|
headers: {
|
||||||
'Content-Type': 'text/plain'
|
'Content-Type': 'text/plain'
|
||||||
@@ -206,8 +214,8 @@ export async function uploadAgentSkillZip({
|
|||||||
throw new Error('valid zip file is required');
|
throw new Error('valid zip file is required');
|
||||||
}
|
}
|
||||||
const runtime = runId ? { run_id: runId } : await fetchCurrentRuntime();
|
const runtime = runId ? { run_id: runId } : await fetchCurrentRuntime();
|
||||||
const workspaceId = runtime?.run_id;
|
const resolvedRunId = runtime?.run_id;
|
||||||
if (!workspaceId) {
|
if (!resolvedRunId) {
|
||||||
throw new Error('未检测到正在运行的任务');
|
throw new Error('未检测到正在运行的任务');
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -220,7 +228,7 @@ export async function uploadAgentSkillZip({
|
|||||||
|
|
||||||
return safeRequest(
|
return safeRequest(
|
||||||
CONTROL_API_BASE,
|
CONTROL_API_BASE,
|
||||||
`/workspaces/${encodeURIComponent(workspaceId)}/agents/${encodeURIComponent(agentId)}/skills/upload`,
|
buildRunScopedAgentPath(resolvedRunId, agentId, '/skills/upload'),
|
||||||
{
|
{
|
||||||
method: 'POST',
|
method: 'POST',
|
||||||
body: formData
|
body: formData
|
||||||
|
|||||||
45
frontend/src/services/runtimeApi.test.js
Normal file
45
frontend/src/services/runtimeApi.test.js
Normal file
@@ -0,0 +1,45 @@
|
|||||||
|
import { afterEach, describe, expect, it, vi } from 'vitest';
|
||||||
|
|
||||||
|
import {
|
||||||
|
fetchAgentProfile,
|
||||||
|
updateAgentWorkspaceFile
|
||||||
|
} from './runtimeApi';
|
||||||
|
|
||||||
|
describe('runtimeApi run-scoped agent routes', () => {
|
||||||
|
afterEach(() => {
|
||||||
|
vi.restoreAllMocks();
|
||||||
|
});
|
||||||
|
|
||||||
|
it('uses run_id in runtime-read agent profile requests', async () => {
|
||||||
|
const fetchMock = vi.fn().mockResolvedValue({
|
||||||
|
ok: true,
|
||||||
|
json: async () => ({ profile: {}, scope_type: 'runtime_run' })
|
||||||
|
});
|
||||||
|
vi.stubGlobal('fetch', fetchMock);
|
||||||
|
|
||||||
|
await fetchAgentProfile('20260330_123000', 'portfolio_manager');
|
||||||
|
|
||||||
|
expect(fetchMock).toHaveBeenCalledWith(
|
||||||
|
expect.stringContaining('/workspaces/20260330_123000/agents/portfolio_manager/profile')
|
||||||
|
);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('uses run_id in runtime agent file update requests', async () => {
|
||||||
|
const fetchMock = vi.fn().mockResolvedValue({
|
||||||
|
ok: true,
|
||||||
|
json: async () => ({ content: '# demo' }),
|
||||||
|
text: async () => ''
|
||||||
|
});
|
||||||
|
vi.stubGlobal('fetch', fetchMock);
|
||||||
|
|
||||||
|
await updateAgentWorkspaceFile('20260330_123000', 'risk_manager', 'MEMORY.md', '# demo');
|
||||||
|
|
||||||
|
expect(fetchMock).toHaveBeenCalledWith(
|
||||||
|
expect.stringContaining('/workspaces/20260330_123000/agents/risk_manager/files/MEMORY.md'),
|
||||||
|
expect.objectContaining({
|
||||||
|
method: 'PUT',
|
||||||
|
body: '# demo'
|
||||||
|
})
|
||||||
|
);
|
||||||
|
});
|
||||||
|
});
|
||||||
@@ -5,7 +5,8 @@ const resolveValue = (updater, currentValue) => (
|
|||||||
);
|
);
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Agent Store - Agent skills, profiles, workspaces
|
* Agent Store - Agent skills, profiles, design-time workspace terminology, and
|
||||||
|
* run-scoped file editing state.
|
||||||
*/
|
*/
|
||||||
export const useAgentStore = create((set) => ({
|
export const useAgentStore = create((set) => ({
|
||||||
// Selected agent for skill/workspace editing
|
// Selected agent for skill/workspace editing
|
||||||
@@ -60,3 +61,18 @@ export const useAgentStore = create((set) => ({
|
|||||||
workspaceFileFeedback: null,
|
workspaceFileFeedback: null,
|
||||||
setWorkspaceFileFeedback: (workspaceFileFeedback) => set((state) => ({ workspaceFileFeedback: resolveValue(workspaceFileFeedback, state.workspaceFileFeedback) })),
|
setWorkspaceFileFeedback: (workspaceFileFeedback) => set((state) => ({ workspaceFileFeedback: resolveValue(workspaceFileFeedback, state.workspaceFileFeedback) })),
|
||||||
}));
|
}));
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Run-scoped file editing state currently reuses legacy `workspace*` field
|
||||||
|
* names inside the store. Prefer this selector for new runtime UI code.
|
||||||
|
*/
|
||||||
|
export const useAgentRunFileState = () => useAgentStore((state) => ({
|
||||||
|
selectedRunFile: state.selectedWorkspaceFile,
|
||||||
|
runFilesByAgent: state.workspaceFilesByAgent,
|
||||||
|
runDraftContent: state.workspaceDraftContent,
|
||||||
|
isRunFileLoading: state.isWorkspaceFileLoading,
|
||||||
|
runFileSavingKey: state.workspaceFileSavingKey,
|
||||||
|
runFileFeedback: state.workspaceFileFeedback,
|
||||||
|
setSelectedRunFile: state.setSelectedWorkspaceFile,
|
||||||
|
setRunDraftContent: state.setWorkspaceDraftContent,
|
||||||
|
}));
|
||||||
|
|||||||
@@ -55,6 +55,9 @@ dependencies = [
|
|||||||
|
|
||||||
|
|
||||||
[project.optional-dependencies]
|
[project.optional-dependencies]
|
||||||
|
docker-sandbox = [
|
||||||
|
"agentscope-runtime>=0.1.0"
|
||||||
|
]
|
||||||
dev = [
|
dev = [
|
||||||
"pytest>=8.3.3",
|
"pytest>=8.3.3",
|
||||||
"ruff>=0.6.9",
|
"ruff>=0.6.9",
|
||||||
|
|||||||
@@ -5,6 +5,8 @@
|
|||||||
# 用法:
|
# 用法:
|
||||||
# ./scripts/check-prod-env.sh
|
# ./scripts/check-prod-env.sh
|
||||||
# ./scripts/check-prod-env.sh --strict
|
# ./scripts/check-prod-env.sh --strict
|
||||||
|
# ./scripts/check-prod-env.sh --smoke-evo
|
||||||
|
# ./scripts/check-prod-env.sh --strict --smoke-evo
|
||||||
#
|
#
|
||||||
# 检查内容:
|
# 检查内容:
|
||||||
# - Python / Node / npm 是否可用
|
# - Python / Node / npm 是否可用
|
||||||
@@ -12,6 +14,7 @@
|
|||||||
# - frontend/package-lock.json 与 npm ci 是否可消费
|
# - frontend/package-lock.json 与 npm ci 是否可消费
|
||||||
# - .env 是否存在以及关键变量是否配置
|
# - .env 是否存在以及关键变量是否配置
|
||||||
# - 前端是否可构建
|
# - 前端是否可构建
|
||||||
|
# - 可选:EvoAgent 运行时 smoke 检查(默认覆盖 fundamentals_analyst + risk_manager + portfolio_manager)
|
||||||
# ============================================================
|
# ============================================================
|
||||||
set -euo pipefail
|
set -euo pipefail
|
||||||
|
|
||||||
@@ -22,9 +25,11 @@ CYAN='\033[0;36m'
|
|||||||
NC='\033[0m'
|
NC='\033[0m'
|
||||||
|
|
||||||
STRICT=false
|
STRICT=false
|
||||||
|
SMOKE_EVO=false
|
||||||
for arg in "$@"; do
|
for arg in "$@"; do
|
||||||
case "$arg" in
|
case "$arg" in
|
||||||
--strict) STRICT=true ;;
|
--strict) STRICT=true ;;
|
||||||
|
--smoke-evo) SMOKE_EVO=true ;;
|
||||||
*) echo -e "${YELLOW}忽略未知参数: ${arg}${NC}" ;;
|
*) echo -e "${YELLOW}忽略未知参数: ${arg}${NC}" ;;
|
||||||
esac
|
esac
|
||||||
done
|
done
|
||||||
@@ -34,6 +39,8 @@ PROJECT_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
|
|||||||
cd "${PROJECT_ROOT}"
|
cd "${PROJECT_ROOT}"
|
||||||
|
|
||||||
WARNINGS=0
|
WARNINGS=0
|
||||||
|
PYTHON_BIN=""
|
||||||
|
PROJECT_PYTHONPATH=""
|
||||||
|
|
||||||
ok() {
|
ok() {
|
||||||
echo -e "${GREEN}✔${NC} $1"
|
echo -e "${GREEN}✔${NC} $1"
|
||||||
@@ -54,8 +61,24 @@ require_cmd() {
|
|||||||
command -v "${cmd}" >/dev/null 2>&1 || fail "未找到命令: ${cmd}"
|
command -v "${cmd}" >/dev/null 2>&1 || fail "未找到命令: ${cmd}"
|
||||||
}
|
}
|
||||||
|
|
||||||
|
resolve_python() {
|
||||||
|
if command -v python >/dev/null 2>&1; then
|
||||||
|
PYTHON_BIN="python"
|
||||||
|
return
|
||||||
|
fi
|
||||||
|
if command -v python3 >/dev/null 2>&1; then
|
||||||
|
PYTHON_BIN="python3"
|
||||||
|
return
|
||||||
|
fi
|
||||||
|
fail "未找到命令: python 或 python3"
|
||||||
|
}
|
||||||
|
|
||||||
|
init_pythonpath() {
|
||||||
|
PROJECT_PYTHONPATH="${PROJECT_ROOT}/.pydeps:."
|
||||||
|
}
|
||||||
|
|
||||||
check_python_modules() {
|
check_python_modules() {
|
||||||
python - <<'PY'
|
PYTHONPATH="${PROJECT_PYTHONPATH}" "${PYTHON_BIN}" - <<'PY'
|
||||||
mods = [
|
mods = [
|
||||||
'fastapi', 'uvicorn', 'yaml', 'httpx', 'cryptography', 'websockets',
|
'fastapi', 'uvicorn', 'yaml', 'httpx', 'cryptography', 'websockets',
|
||||||
'rich', 'dotenv', 'pandas_market_calendars', 'finnhub', 'openai',
|
'rich', 'dotenv', 'pandas_market_calendars', 'finnhub', 'openai',
|
||||||
@@ -100,12 +123,13 @@ check_frontend_install() {
|
|||||||
[ -f frontend/package-lock.json ] || fail "frontend/package-lock.json 缺失,生产部署建议保留锁文件"
|
[ -f frontend/package-lock.json ] || fail "frontend/package-lock.json 缺失,生产部署建议保留锁文件"
|
||||||
(
|
(
|
||||||
cd frontend
|
cd frontend
|
||||||
npm ci --dry-run >/tmp/bigtime-npm-ci.log 2>&1 || {
|
npm ci --dry-run >/tmp/bigtime-npm-ci.log 2>&1 || true
|
||||||
cat /tmp/bigtime-npm-ci.log
|
|
||||||
exit 1
|
|
||||||
}
|
|
||||||
)
|
)
|
||||||
if rg -n "@emoji-mart/react|@lobehub/ui|ERESOLVE overriding peer dependency" /tmp/bigtime-npm-ci.log >/dev/null 2>&1; then
|
if rg -n "npm error code EUSAGE|can only install packages when your package.json and package-lock.json.*in sync|Missing: .* from lock file|Invalid: lock file's " /tmp/bigtime-npm-ci.log >/dev/null 2>&1; then
|
||||||
|
warn "frontend package-lock.json 与 package.json 不一致;需在 frontend/ 重新生成锁文件,但这不阻断当前后端 smoke 检查"
|
||||||
|
elif rg -n "ERESOLVE could not resolve|Conflicting peer dependency" /tmp/bigtime-npm-ci.log >/dev/null 2>&1; then
|
||||||
|
warn "frontend npm ci --dry-run 存在 peer 依赖冲突,当前以后续构建结果为准"
|
||||||
|
elif rg -n "@emoji-mart/react|@lobehub/ui|ERESOLVE overriding peer dependency" /tmp/bigtime-npm-ci.log >/dev/null 2>&1; then
|
||||||
warn "frontend npm ci 存在已知非阻塞 peer warning(@lobehub/icons 依赖链),可忽略"
|
warn "frontend npm ci 存在已知非阻塞 peer warning(@lobehub/icons 依赖链),可忽略"
|
||||||
elif rg -n "npm warn" /tmp/bigtime-npm-ci.log >/dev/null 2>&1; then
|
elif rg -n "npm warn" /tmp/bigtime-npm-ci.log >/dev/null 2>&1; then
|
||||||
warn "frontend npm ci 存在 warning,建议查看 /tmp/bigtime-npm-ci.log"
|
warn "frontend npm ci 存在 warning,建议查看 /tmp/bigtime-npm-ci.log"
|
||||||
@@ -125,13 +149,42 @@ check_frontend_build() {
|
|||||||
ok "frontend 构建通过"
|
ok "frontend 构建通过"
|
||||||
}
|
}
|
||||||
|
|
||||||
|
check_evo_runtime_smoke() {
|
||||||
|
local configured_ids="${EVO_AGENT_IDS:-}"
|
||||||
|
local -a smoke_agent_ids=()
|
||||||
|
local raw_id=""
|
||||||
|
|
||||||
|
if [ -n "${configured_ids}" ]; then
|
||||||
|
IFS=',' read -r -a smoke_agent_ids <<< "${configured_ids}"
|
||||||
|
else
|
||||||
|
smoke_agent_ids=("fundamentals_analyst" "risk_manager" "portfolio_manager")
|
||||||
|
fi
|
||||||
|
|
||||||
|
for raw_id in "${smoke_agent_ids[@]}"; do
|
||||||
|
local agent_id
|
||||||
|
agent_id="$(printf '%s' "${raw_id}" | xargs)"
|
||||||
|
[ -n "${agent_id}" ] || continue
|
||||||
|
|
||||||
|
echo -e "${CYAN}运行 EvoAgent smoke 检查(agent=${agent_id})${NC}"
|
||||||
|
PYTHONPATH="${PROJECT_PYTHONPATH}" "${PYTHON_BIN}" \
|
||||||
|
"${PROJECT_ROOT}/scripts/smoke_evo_runtime.py" \
|
||||||
|
--agent-id "${agent_id}" >/tmp/bigtime-evo-smoke.log 2>&1 || {
|
||||||
|
cat /tmp/bigtime-evo-smoke.log
|
||||||
|
exit 1
|
||||||
|
}
|
||||||
|
cat /tmp/bigtime-evo-smoke.log
|
||||||
|
ok "EvoAgent smoke 检查通过(agent=${agent_id})"
|
||||||
|
done
|
||||||
|
}
|
||||||
|
|
||||||
echo -e "${CYAN}大时代 · 生产环境检查${NC}"
|
echo -e "${CYAN}大时代 · 生产环境检查${NC}"
|
||||||
|
|
||||||
require_cmd python
|
resolve_python
|
||||||
|
init_pythonpath
|
||||||
require_cmd node
|
require_cmd node
|
||||||
require_cmd npm
|
require_cmd npm
|
||||||
|
|
||||||
ok "python: $(python -V 2>&1)"
|
ok "python: $(${PYTHON_BIN} -V 2>&1)"
|
||||||
ok "node: $(node -v)"
|
ok "node: $(node -v)"
|
||||||
ok "npm: $(npm -v)"
|
ok "npm: $(npm -v)"
|
||||||
|
|
||||||
@@ -140,6 +193,10 @@ check_env_file
|
|||||||
check_frontend_install
|
check_frontend_install
|
||||||
check_frontend_build
|
check_frontend_build
|
||||||
|
|
||||||
|
if ${SMOKE_EVO}; then
|
||||||
|
check_evo_runtime_smoke
|
||||||
|
fi
|
||||||
|
|
||||||
if [ "${WARNINGS}" -gt 0 ]; then
|
if [ "${WARNINGS}" -gt 0 ]; then
|
||||||
echo -e "${YELLOW}检查完成:有 ${WARNINGS} 项 warning${NC}"
|
echo -e "${YELLOW}检查完成:有 ${WARNINGS} 项 warning${NC}"
|
||||||
${STRICT} && exit 1 || exit 0
|
${STRICT} && exit 1 || exit 0
|
||||||
|
|||||||
@@ -1,4 +1,12 @@
|
|||||||
#!/usr/bin/env bash
|
#!/usr/bin/env bash
|
||||||
|
# COMPATIBILITY_SURFACE: stable
|
||||||
|
# OWNER: ops-team
|
||||||
|
# SEE: docs/legacy-inventory.md#gateway-first-production-example
|
||||||
|
#
|
||||||
|
# Gateway-first production launch script.
|
||||||
|
# This is the current checked-in production example, running the gateway
|
||||||
|
# directly and proxying /ws instead of exposing every split FastAPI service.
|
||||||
|
# For split-service topology, see start-dev.sh and docs/current-architecture.md
|
||||||
set -euo pipefail
|
set -euo pipefail
|
||||||
|
|
||||||
cd /root/code/evotraders
|
cd /root/code/evotraders
|
||||||
@@ -6,6 +14,17 @@ cd /root/code/evotraders
|
|||||||
export PYTHONPATH=/root/code/evotraders/.pydeps:.
|
export PYTHONPATH=/root/code/evotraders/.pydeps:.
|
||||||
export TICKERS="${TICKERS:-AAPL,MSFT,GOOGL,AMZN,NVDA,META,TSLA,AMD,NFLX,AVGO,PLTR,COIN}"
|
export TICKERS="${TICKERS:-AAPL,MSFT,GOOGL,AMZN,NVDA,META,TSLA,AMD,NFLX,AVGO,PLTR,COIN}"
|
||||||
|
|
||||||
|
# 技能沙盒配置(生产环境建议使用 docker)
|
||||||
|
export SKILL_SANDBOX_MODE="${SKILL_SANDBOX_MODE:-docker}"
|
||||||
|
export SKILL_SANDBOX_IMAGE="${SKILL_SANDBOX_IMAGE:-python:3.11-slim}"
|
||||||
|
export SKILL_SANDBOX_MEMORY_LIMIT="${SKILL_SANDBOX_MEMORY_LIMIT:-512m}"
|
||||||
|
export SKILL_SANDBOX_CPU_LIMIT="${SKILL_SANDBOX_CPU_LIMIT:-1.0}"
|
||||||
|
export SKILL_SANDBOX_NETWORK="${SKILL_SANDBOX_NETWORK:-none}"
|
||||||
|
export SKILL_SANDBOX_TIMEOUT="${SKILL_SANDBOX_TIMEOUT:-60}"
|
||||||
|
|
||||||
|
# "production" here is an explicit deployment run label, not a required
|
||||||
|
# root-level runtime directory name.
|
||||||
|
|
||||||
exec python3 -m backend.main \
|
exec python3 -m backend.main \
|
||||||
--mode live \
|
--mode live \
|
||||||
--config-name production \
|
--config-name production \
|
||||||
|
|||||||
290
scripts/smoke_evo_runtime.py
Normal file
290
scripts/smoke_evo_runtime.py
Normal file
@@ -0,0 +1,290 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
"""Smoke-test the EvoAgent runtime rollout path.
|
||||||
|
|
||||||
|
This script validates the current staged rollout shape:
|
||||||
|
- start runtime via backend.api.runtime
|
||||||
|
- confirm the gateway starts on an available port
|
||||||
|
- confirm the gateway log shows the selected agent running as EvoAgent
|
||||||
|
- confirm runtime_state.json is written
|
||||||
|
- confirm guard approval API logic wakes a pending ToolApprovalRequest
|
||||||
|
|
||||||
|
It intentionally avoids browser/front-end dependencies and does not require
|
||||||
|
local HTTP callbacks.
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import argparse
|
||||||
|
import asyncio
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
import sys
|
||||||
|
import time
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
import websocket
|
||||||
|
|
||||||
|
PROJECT_ROOT = Path(__file__).resolve().parents[1]
|
||||||
|
PYDEPS = PROJECT_ROOT / ".pydeps"
|
||||||
|
|
||||||
|
_reordered_sys_path = [
|
||||||
|
str(PROJECT_ROOT),
|
||||||
|
str(PYDEPS),
|
||||||
|
]
|
||||||
|
for entry in list(sys.path):
|
||||||
|
if entry in _reordered_sys_path:
|
||||||
|
continue
|
||||||
|
_reordered_sys_path.append(entry)
|
||||||
|
sys.path[:] = _reordered_sys_path
|
||||||
|
|
||||||
|
from fastapi import BackgroundTasks
|
||||||
|
|
||||||
|
from backend.agents.base.tool_guard import TOOL_GUARD_STORE, ToolApprovalRequest
|
||||||
|
from backend.api.guard import ApprovalRequest, approve_tool_call
|
||||||
|
from backend.api.runtime import (
|
||||||
|
LaunchConfig,
|
||||||
|
_is_gateway_running,
|
||||||
|
get_runtime_state,
|
||||||
|
start_runtime,
|
||||||
|
stop_runtime,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
# All 6 agent roles supported by EvoAgent
|
||||||
|
ALL_EVO_AGENT_ROLES = [
|
||||||
|
"fundamentals_analyst",
|
||||||
|
"technical_analyst",
|
||||||
|
"sentiment_analyst",
|
||||||
|
"valuation_analyst",
|
||||||
|
"risk_manager",
|
||||||
|
"portfolio_manager",
|
||||||
|
]
|
||||||
|
|
||||||
|
|
||||||
|
def _parse_args() -> argparse.Namespace:
|
||||||
|
parser = argparse.ArgumentParser(
|
||||||
|
description="Smoke-test the staged EvoAgent runtime rollout.",
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--agent-id",
|
||||||
|
default="fundamentals_analyst",
|
||||||
|
help="Agent id to enable via EVO_AGENT_IDS (use 'all' to test all 6 roles)",
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--ticker",
|
||||||
|
default="AAPL",
|
||||||
|
help="Ticker to include in the smoke runtime bootstrap",
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--max-wait-seconds",
|
||||||
|
type=float,
|
||||||
|
default=15.0,
|
||||||
|
help="Maximum time to wait for gateway.log to appear",
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--test-all-roles",
|
||||||
|
action="store_true",
|
||||||
|
help="Test all 6 EvoAgent roles sequentially",
|
||||||
|
)
|
||||||
|
return parser.parse_args()
|
||||||
|
|
||||||
|
|
||||||
|
def _wait_for_file(path: Path, timeout_seconds: float) -> None:
|
||||||
|
deadline = time.time() + timeout_seconds
|
||||||
|
while time.time() < deadline:
|
||||||
|
if path.exists():
|
||||||
|
return
|
||||||
|
time.sleep(0.2)
|
||||||
|
raise TimeoutError(f"Timed out waiting for file: {path}")
|
||||||
|
|
||||||
|
|
||||||
|
def _wait_for_initial_state(gateway_port: int, timeout_seconds: float) -> dict[str, object]:
|
||||||
|
deadline = time.time() + timeout_seconds
|
||||||
|
last_error: Exception | None = None
|
||||||
|
while time.time() < deadline:
|
||||||
|
try:
|
||||||
|
ws = websocket.create_connection(
|
||||||
|
f"ws://127.0.0.1:{gateway_port}",
|
||||||
|
timeout=3,
|
||||||
|
)
|
||||||
|
try:
|
||||||
|
payload = json.loads(ws.recv())
|
||||||
|
return payload
|
||||||
|
finally:
|
||||||
|
ws.close()
|
||||||
|
except Exception as exc: # pragma: no cover - best-effort smoke polling
|
||||||
|
last_error = exc
|
||||||
|
time.sleep(0.2)
|
||||||
|
raise TimeoutError(
|
||||||
|
f"Timed out waiting for gateway initial_state on port {gateway_port}: {last_error}"
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
async def _run_smoke(agent_id: str, ticker: str, max_wait_seconds: float) -> dict[str, object]:
|
||||||
|
previous_env = os.environ.get("EVO_AGENT_IDS")
|
||||||
|
os.environ["EVO_AGENT_IDS"] = agent_id
|
||||||
|
|
||||||
|
try:
|
||||||
|
if _is_gateway_running():
|
||||||
|
await stop_runtime(force=True)
|
||||||
|
|
||||||
|
response = await start_runtime(
|
||||||
|
LaunchConfig(
|
||||||
|
launch_mode="fresh",
|
||||||
|
tickers=[ticker],
|
||||||
|
schedule_mode="daily",
|
||||||
|
interval_minutes=60,
|
||||||
|
trigger_time="09:30",
|
||||||
|
max_comm_cycles=1,
|
||||||
|
initial_cash=100000.0,
|
||||||
|
margin_requirement=0.0,
|
||||||
|
enable_memory=False,
|
||||||
|
mode="backtest",
|
||||||
|
start_date="2025-11-01",
|
||||||
|
end_date="2025-11-30",
|
||||||
|
poll_interval=30,
|
||||||
|
),
|
||||||
|
BackgroundTasks(),
|
||||||
|
)
|
||||||
|
|
||||||
|
run_dir = Path(response.run_dir)
|
||||||
|
log_path = run_dir / "logs" / "gateway.log"
|
||||||
|
state_path = run_dir / "state" / "runtime_state.json"
|
||||||
|
|
||||||
|
_wait_for_file(log_path, max_wait_seconds)
|
||||||
|
_wait_for_file(state_path, max_wait_seconds)
|
||||||
|
initial_state_payload = _wait_for_initial_state(
|
||||||
|
response.gateway_port,
|
||||||
|
max_wait_seconds,
|
||||||
|
)
|
||||||
|
|
||||||
|
log_text = log_path.read_text(encoding="utf-8")
|
||||||
|
state = json.loads(state_path.read_text(encoding="utf-8"))
|
||||||
|
|
||||||
|
record = TOOL_GUARD_STORE.create_pending(
|
||||||
|
tool_name="write_file",
|
||||||
|
tool_input={"path": "smoke.txt"},
|
||||||
|
agent_id=agent_id,
|
||||||
|
workspace_id=response.run_id,
|
||||||
|
)
|
||||||
|
pending = ToolApprovalRequest(
|
||||||
|
approval_id=record.approval_id,
|
||||||
|
tool_name=record.tool_name,
|
||||||
|
tool_input=record.tool_input,
|
||||||
|
tool_call_id="smoke_call",
|
||||||
|
)
|
||||||
|
record.pending_request = pending
|
||||||
|
await approve_tool_call(
|
||||||
|
ApprovalRequest(
|
||||||
|
approval_id=record.approval_id,
|
||||||
|
one_time=True,
|
||||||
|
expires_in_minutes=30,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
result = {
|
||||||
|
"run_id": response.run_id,
|
||||||
|
"gateway_port": response.gateway_port,
|
||||||
|
"gateway_running": _is_gateway_running(),
|
||||||
|
"runtime_manager": get_runtime_state().runtime_manager is not None,
|
||||||
|
"evo_log_present": f"EvoAgent initialized: {agent_id}" in log_text,
|
||||||
|
"runtime_state_written": state_path.exists(),
|
||||||
|
"registered_agents": [item["agent_id"] for item in state.get("agents", [])],
|
||||||
|
"pending_request_approved": pending.approved is True,
|
||||||
|
"ws_initial_type": initial_state_payload.get("type"),
|
||||||
|
"ws_initial_tickers": (
|
||||||
|
(initial_state_payload.get("state") or {}).get("tickers") or []
|
||||||
|
),
|
||||||
|
}
|
||||||
|
return result
|
||||||
|
finally:
|
||||||
|
if _is_gateway_running():
|
||||||
|
await stop_runtime(force=True)
|
||||||
|
if previous_env is None:
|
||||||
|
os.environ.pop("EVO_AGENT_IDS", None)
|
||||||
|
else:
|
||||||
|
os.environ["EVO_AGENT_IDS"] = previous_env
|
||||||
|
|
||||||
|
|
||||||
|
def _verify_skills_loaded(log_text: str, agent_id: str) -> dict[str, bool]:
|
||||||
|
"""Verify that skills were loaded for the agent."""
|
||||||
|
return {
|
||||||
|
"skills_loaded": f"Loading skills for {agent_id}" in log_text or "skills" in log_text.lower(),
|
||||||
|
"tools_registered": "tool" in log_text.lower(),
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
async def _run_smoke_for_role(role: str, ticker: str, max_wait_seconds: float) -> dict[str, object]:
|
||||||
|
"""Run smoke test for a single agent role."""
|
||||||
|
print(f"\n>>> Testing EvoAgent role: {role}", file=sys.stderr)
|
||||||
|
result = await _run_smoke(
|
||||||
|
agent_id=role,
|
||||||
|
ticker=ticker,
|
||||||
|
max_wait_seconds=max_wait_seconds,
|
||||||
|
)
|
||||||
|
result["agent_role"] = role
|
||||||
|
result["success"] = (
|
||||||
|
result.get("evo_log_present", False)
|
||||||
|
and result.get("runtime_state_written", False)
|
||||||
|
and result.get("pending_request_approved", False)
|
||||||
|
)
|
||||||
|
return result
|
||||||
|
|
||||||
|
|
||||||
|
def main() -> int:
|
||||||
|
args = _parse_args()
|
||||||
|
|
||||||
|
if args.test_all_roles:
|
||||||
|
# Test all 6 agent roles
|
||||||
|
results = []
|
||||||
|
all_passed = True
|
||||||
|
|
||||||
|
for role in ALL_EVO_AGENT_ROLES:
|
||||||
|
try:
|
||||||
|
result = asyncio.run(
|
||||||
|
_run_smoke_for_role(
|
||||||
|
role=role,
|
||||||
|
ticker=args.ticker,
|
||||||
|
max_wait_seconds=args.max_wait_seconds,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
results.append(result)
|
||||||
|
if not result.get("success", False):
|
||||||
|
all_passed = False
|
||||||
|
print(f" FAILED: {role}", file=sys.stderr)
|
||||||
|
else:
|
||||||
|
print(f" PASSED: {role}", file=sys.stderr)
|
||||||
|
except Exception as e:
|
||||||
|
all_passed = False
|
||||||
|
print(f" ERROR: {role} - {e}", file=sys.stderr)
|
||||||
|
results.append({
|
||||||
|
"agent_role": role,
|
||||||
|
"success": False,
|
||||||
|
"error": str(e),
|
||||||
|
})
|
||||||
|
|
||||||
|
summary = {
|
||||||
|
"test_mode": "all_roles",
|
||||||
|
"total_roles": len(ALL_EVO_AGENT_ROLES),
|
||||||
|
"passed": sum(1 for r in results if r.get("success", False)),
|
||||||
|
"failed": sum(1 for r in results if not r.get("success", False)),
|
||||||
|
"all_passed": all_passed,
|
||||||
|
"results": results,
|
||||||
|
}
|
||||||
|
print(json.dumps(summary, ensure_ascii=False, indent=2))
|
||||||
|
return 0 if all_passed else 1
|
||||||
|
else:
|
||||||
|
# Test single agent
|
||||||
|
result = asyncio.run(
|
||||||
|
_run_smoke(
|
||||||
|
agent_id=args.agent_id,
|
||||||
|
ticker=args.ticker,
|
||||||
|
max_wait_seconds=args.max_wait_seconds,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
print(json.dumps(result, ensure_ascii=False, indent=2))
|
||||||
|
return 0
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
raise SystemExit(main())
|
||||||
203
scripts/test_sandbox.py
Normal file
203
scripts/test_sandbox.py
Normal file
@@ -0,0 +1,203 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
# -*- coding: utf-8 -*-
|
||||||
|
"""
|
||||||
|
沙盒执行器测试脚本
|
||||||
|
|
||||||
|
测试多模式技能沙盒执行器的基本功能。
|
||||||
|
默认使用 none 模式(无沙盒)。
|
||||||
|
"""
|
||||||
|
|
||||||
|
import os
|
||||||
|
import sys
|
||||||
|
|
||||||
|
# 确保后端目录在路径中
|
||||||
|
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "backend"))
|
||||||
|
|
||||||
|
|
||||||
|
def test_sandbox_initialization():
|
||||||
|
"""测试沙盒初始化"""
|
||||||
|
print("=" * 60)
|
||||||
|
print("测试 1: 沙盒初始化")
|
||||||
|
print("=" * 60)
|
||||||
|
|
||||||
|
from backend.tools.sandboxed_executor import get_sandbox, SkillSandbox
|
||||||
|
|
||||||
|
# 重置单例(确保测试干净)
|
||||||
|
SkillSandbox._instance = None
|
||||||
|
|
||||||
|
# 默认应该使用 none 模式
|
||||||
|
sandbox = get_sandbox()
|
||||||
|
|
||||||
|
assert sandbox.current_mode == "none", f"期望模式 'none',实际 '{sandbox.current_mode}'"
|
||||||
|
print(f"✓ 沙盒模式: {sandbox.current_mode}")
|
||||||
|
print(f"✓ 后端类型: {type(sandbox._backend).__name__}")
|
||||||
|
|
||||||
|
return sandbox
|
||||||
|
|
||||||
|
|
||||||
|
def test_no_sandbox_warning():
|
||||||
|
"""测试无沙盒模式的安全警告"""
|
||||||
|
print("\n" + "=" * 60)
|
||||||
|
print("测试 2: 无沙盒模式安全警告")
|
||||||
|
print("=" * 60)
|
||||||
|
|
||||||
|
import warnings
|
||||||
|
|
||||||
|
from backend.tools.sandboxed_executor import NoSandboxBackend
|
||||||
|
|
||||||
|
backend = NoSandboxBackend()
|
||||||
|
|
||||||
|
# 捕获警告
|
||||||
|
with warnings.catch_warnings(record=True) as w:
|
||||||
|
warnings.simplefilter("always")
|
||||||
|
|
||||||
|
# 执行会触发警告
|
||||||
|
try:
|
||||||
|
backend.execute(
|
||||||
|
skill_name="builtin/valuation_review",
|
||||||
|
function_name="build_dcf_report",
|
||||||
|
function_args={"rows": [], "current_date": "2024-01-01"},
|
||||||
|
)
|
||||||
|
except Exception:
|
||||||
|
pass # 我们不关心执行结果,只关心警告
|
||||||
|
|
||||||
|
# 检查是否产生了警告
|
||||||
|
runtime_warnings = [x for x in w if issubclass(x.category, RuntimeWarning)]
|
||||||
|
if runtime_warnings:
|
||||||
|
print("✓ 安全警告已触发")
|
||||||
|
print(f" 警告内容: {runtime_warnings[0].message}")
|
||||||
|
else:
|
||||||
|
print("⚠ 未触发安全警告(可能已缓存)")
|
||||||
|
|
||||||
|
|
||||||
|
def test_docker_config():
|
||||||
|
"""测试 Docker 模式配置解析"""
|
||||||
|
print("\n" + "=" * 60)
|
||||||
|
print("测试 3: Docker 模式配置解析")
|
||||||
|
print("=" * 60)
|
||||||
|
|
||||||
|
# 设置环境变量
|
||||||
|
os.environ["SKILL_SANDBOX_MODE"] = "docker"
|
||||||
|
os.environ["SKILL_SANDBOX_MEMORY_LIMIT"] = "1g"
|
||||||
|
os.environ["SKILL_SANDBOX_CPU_LIMIT"] = "2.0"
|
||||||
|
|
||||||
|
from backend.tools.sandboxed_executor import SkillSandbox
|
||||||
|
|
||||||
|
# 重置单例
|
||||||
|
SkillSandbox._instance = None
|
||||||
|
|
||||||
|
try:
|
||||||
|
sandbox = SkillSandbox()
|
||||||
|
print(f"✓ 沙盒模式: {sandbox.current_mode}")
|
||||||
|
print(f"✓ 后端类型: {type(sandbox._backend).__name__}")
|
||||||
|
|
||||||
|
# 检查配置
|
||||||
|
backend = sandbox._backend
|
||||||
|
assert backend.config["memory_limit"] == "1g"
|
||||||
|
assert backend.config["cpu_limit"] == 2.0
|
||||||
|
print(f"✓ 内存限制: {backend.config['memory_limit']}")
|
||||||
|
print(f"✓ CPU 限制: {backend.config['cpu_limit']}")
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
print(f"⚠ Docker 后端创建失败(预期,可能未安装 agentscope-runtime): {e}")
|
||||||
|
|
||||||
|
# 恢复环境变量
|
||||||
|
os.environ["SKILL_SANDBOX_MODE"] = "none"
|
||||||
|
SkillSandbox._instance = None
|
||||||
|
|
||||||
|
|
||||||
|
def test_analysis_tools_import():
|
||||||
|
"""测试分析工具导入"""
|
||||||
|
print("\n" + "=" * 60)
|
||||||
|
print("测试 4: 分析工具导入")
|
||||||
|
print("=" * 60)
|
||||||
|
|
||||||
|
try:
|
||||||
|
from backend.tools.analysis_tools import (
|
||||||
|
TOOL_REGISTRY,
|
||||||
|
_sandbox,
|
||||||
|
dcf_valuation_analysis,
|
||||||
|
)
|
||||||
|
|
||||||
|
print(f"✓ TOOL_REGISTRY 包含 {len(TOOL_REGISTRY)} 个工具")
|
||||||
|
print(f"✓ _sandbox 实例模式: {_sandbox.current_mode}")
|
||||||
|
print(f"✓ dcf_valuation_analysis 函数可用")
|
||||||
|
|
||||||
|
# 检查估值分析工具是否都在
|
||||||
|
valuation_tools = [
|
||||||
|
"dcf_valuation_analysis",
|
||||||
|
"owner_earnings_valuation_analysis",
|
||||||
|
"ev_ebitda_valuation_analysis",
|
||||||
|
"residual_income_valuation_analysis",
|
||||||
|
]
|
||||||
|
|
||||||
|
for tool in valuation_tools:
|
||||||
|
if tool in TOOL_REGISTRY:
|
||||||
|
print(f" ✓ {tool}")
|
||||||
|
else:
|
||||||
|
print(f" ✗ {tool} 缺失")
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
print(f"✗ 导入失败: {e}")
|
||||||
|
import traceback
|
||||||
|
traceback.print_exc()
|
||||||
|
|
||||||
|
|
||||||
|
def test_skill_execution_mock():
|
||||||
|
"""测试技能执行(模拟)"""
|
||||||
|
print("\n" + "=" * 60)
|
||||||
|
print("测试 5: 技能执行(无沙盒模式)")
|
||||||
|
print("=" * 60)
|
||||||
|
|
||||||
|
from backend.tools.sandboxed_executor import get_sandbox
|
||||||
|
|
||||||
|
sandbox = get_sandbox()
|
||||||
|
|
||||||
|
# 使用模拟参数调用
|
||||||
|
try:
|
||||||
|
# 注意:这需要实际的技能模块存在
|
||||||
|
result = sandbox.execute_skill(
|
||||||
|
skill_name="builtin/valuation_review",
|
||||||
|
function_name="build_dcf_report",
|
||||||
|
function_args={
|
||||||
|
"rows": [{"ticker": "AAPL", "current_fcf": 100000000}],
|
||||||
|
"current_date": "2024-01-01",
|
||||||
|
},
|
||||||
|
)
|
||||||
|
print(f"✓ 技能执行成功")
|
||||||
|
print(f" 结果类型: {type(result)}")
|
||||||
|
print(f" 结果预览: {str(result)[:100]}...")
|
||||||
|
except Exception as e:
|
||||||
|
print(f"⚠ 技能执行失败(可能缺少依赖或数据): {e}")
|
||||||
|
|
||||||
|
|
||||||
|
def main():
|
||||||
|
"""主测试函数"""
|
||||||
|
print("\n" + "=" * 60)
|
||||||
|
print("技能沙盒执行器测试")
|
||||||
|
print("=" * 60)
|
||||||
|
print(f"当前 SKILL_SANDBOX_MODE: {os.getenv('SKILL_SANDBOX_MODE', '未设置(默认 none)')}")
|
||||||
|
|
||||||
|
# 确保使用默认模式测试
|
||||||
|
os.environ["SKILL_SANDBOX_MODE"] = "none"
|
||||||
|
|
||||||
|
try:
|
||||||
|
test_sandbox_initialization()
|
||||||
|
test_no_sandbox_warning()
|
||||||
|
test_docker_config()
|
||||||
|
test_analysis_tools_import()
|
||||||
|
test_skill_execution_mock()
|
||||||
|
|
||||||
|
print("\n" + "=" * 60)
|
||||||
|
print("测试完成")
|
||||||
|
print("=" * 60)
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
print(f"\n✗ 测试失败: {e}")
|
||||||
|
import traceback
|
||||||
|
traceback.print_exc()
|
||||||
|
sys.exit(1)
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
main()
|
||||||
96
scripts/test_sandbox_simple.py
Normal file
96
scripts/test_sandbox_simple.py
Normal file
@@ -0,0 +1,96 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
# -*- coding: utf-8 -*-
|
||||||
|
"""
|
||||||
|
简化测试 - 验证沙盒执行器基本功能
|
||||||
|
"""
|
||||||
|
|
||||||
|
import os
|
||||||
|
import sys
|
||||||
|
import warnings
|
||||||
|
|
||||||
|
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "backend"))
|
||||||
|
|
||||||
|
os.environ["SKILL_SANDBOX_MODE"] = "none"
|
||||||
|
|
||||||
|
|
||||||
|
def test_import():
|
||||||
|
"""测试导入"""
|
||||||
|
print("测试 1: 导入沙盒执行器")
|
||||||
|
from backend.tools.sandboxed_executor import get_sandbox, SkillSandbox
|
||||||
|
|
||||||
|
# 重置单例
|
||||||
|
SkillSandbox._instance = None
|
||||||
|
|
||||||
|
sandbox = get_sandbox()
|
||||||
|
print(f" ✓ 模式: {sandbox.current_mode}")
|
||||||
|
print(f" ✓ 后端: {type(sandbox._backend).__name__}")
|
||||||
|
return sandbox
|
||||||
|
|
||||||
|
|
||||||
|
def test_no_sandbox_backend():
|
||||||
|
"""测试无沙盒后端"""
|
||||||
|
print("\n测试 2: 无沙盒后端")
|
||||||
|
from backend.tools.sandboxed_executor import NoSandboxBackend
|
||||||
|
|
||||||
|
backend = NoSandboxBackend()
|
||||||
|
|
||||||
|
# 测试函数名解析
|
||||||
|
test_cases = [
|
||||||
|
("build_dcf_report", "dcf_report"),
|
||||||
|
("build_ev_ebitda_report", "multiple_valuation_report"),
|
||||||
|
("build_owner_earnings_report", "owner_earnings_report"),
|
||||||
|
("build_residual_income_report", "multiple_valuation_report"),
|
||||||
|
]
|
||||||
|
|
||||||
|
for func_name, expected_script in test_cases:
|
||||||
|
script_name = backend._get_script_name(func_name)
|
||||||
|
assert script_name == expected_script, f"期望 {expected_script}, 实际 {script_name}"
|
||||||
|
print(f" ✓ {func_name} -> {script_name}")
|
||||||
|
|
||||||
|
|
||||||
|
def test_module_resolution():
|
||||||
|
"""测试模块解析"""
|
||||||
|
print("\n测试 3: 模块路径解析")
|
||||||
|
|
||||||
|
from backend.tools.sandboxed_executor import NoSandboxBackend
|
||||||
|
|
||||||
|
backend = NoSandboxBackend()
|
||||||
|
skill_name = "builtin/valuation_review"
|
||||||
|
function_name = "build_dcf_report"
|
||||||
|
|
||||||
|
module_path = f"backend.skills.{skill_name.replace('/', '.')}.scripts"
|
||||||
|
script_name = backend._get_script_name(function_name)
|
||||||
|
submodule_path = f"{module_path}.{script_name}"
|
||||||
|
|
||||||
|
print(f" 技能名: {skill_name}")
|
||||||
|
print(f" 函数名: {function_name}")
|
||||||
|
print(f" 模块路径: {submodule_path}")
|
||||||
|
|
||||||
|
# 尝试导入
|
||||||
|
try:
|
||||||
|
module = __import__(submodule_path, fromlist=[function_name])
|
||||||
|
func = getattr(module, function_name)
|
||||||
|
print(f" ✓ 成功导入函数: {func.__name__}")
|
||||||
|
except Exception as e:
|
||||||
|
print(f" ✗ 导入失败: {e}")
|
||||||
|
|
||||||
|
|
||||||
|
def main():
|
||||||
|
print("=" * 50)
|
||||||
|
print("沙盒执行器简化测试")
|
||||||
|
print("=" * 50)
|
||||||
|
|
||||||
|
# 抑制警告
|
||||||
|
warnings.filterwarnings("ignore", category=RuntimeWarning)
|
||||||
|
|
||||||
|
test_import()
|
||||||
|
test_no_sandbox_backend()
|
||||||
|
test_module_resolution()
|
||||||
|
|
||||||
|
print("\n" + "=" * 50)
|
||||||
|
print("测试完成")
|
||||||
|
print("=" * 50)
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
main()
|
||||||
168
scripts/verify_docs_consistency.py
Normal file
168
scripts/verify_docs_consistency.py
Normal file
@@ -0,0 +1,168 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
"""Verify documentation and script consistency.
|
||||||
|
|
||||||
|
This script checks that:
|
||||||
|
1. README.md mentions correct service ports
|
||||||
|
2. start-dev.sh starts services on documented ports
|
||||||
|
3. deploy/README.md is consistent with production scripts
|
||||||
|
4. Service ports match across all documentation
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import argparse
|
||||||
|
import re
|
||||||
|
import sys
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
PROJECT_ROOT = Path(__file__).resolve().parents[1]
|
||||||
|
|
||||||
|
# Expected service ports (source of truth)
|
||||||
|
SERVICE_PORTS = {
|
||||||
|
"agent_service": 8000,
|
||||||
|
"trading_service": 8001,
|
||||||
|
"news_service": 8002,
|
||||||
|
"runtime_service": 8003,
|
||||||
|
"gateway_websocket": 8765,
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
def check_readme_ports() -> list[str]:
|
||||||
|
"""Check that README.md documents correct ports."""
|
||||||
|
errors = []
|
||||||
|
readme_path = PROJECT_ROOT / "README.md"
|
||||||
|
readme_content = readme_path.read_text(encoding="utf-8")
|
||||||
|
|
||||||
|
# Check for each service port mention
|
||||||
|
for service, port in SERVICE_PORTS.items():
|
||||||
|
port_patterns = [
|
||||||
|
f":{port}",
|
||||||
|
f"port {port}",
|
||||||
|
f"localhost:{port}",
|
||||||
|
]
|
||||||
|
found = any(pattern in readme_content for pattern in port_patterns)
|
||||||
|
if not found:
|
||||||
|
errors.append(f"README.md: Missing documentation for {service} on port {port}")
|
||||||
|
|
||||||
|
return errors
|
||||||
|
|
||||||
|
|
||||||
|
def check_start_dev_sh_ports() -> list[str]:
|
||||||
|
"""Check that start-dev.sh uses correct ports."""
|
||||||
|
errors = []
|
||||||
|
script_path = PROJECT_ROOT / "start-dev.sh"
|
||||||
|
script_content = script_path.read_text(encoding="utf-8")
|
||||||
|
|
||||||
|
# Check for port declarations in start_service calls
|
||||||
|
for service, port in SERVICE_PORTS.items():
|
||||||
|
if service == "gateway_websocket":
|
||||||
|
# Gateway uses --port flag
|
||||||
|
if f"--port {port}" not in script_content:
|
||||||
|
errors.append(f"start-dev.sh: Gateway not using port {port}")
|
||||||
|
else:
|
||||||
|
# Services use port parameter in start_service
|
||||||
|
pattern = rf'start_service\s+"{service}"\s+"[^"]+"\s+{port}'
|
||||||
|
if not re.search(pattern, script_content):
|
||||||
|
# Also check for explicit port mentions
|
||||||
|
if f"port {port}" not in script_content and f":{port}" not in script_content:
|
||||||
|
errors.append(f"start-dev.sh: {service} not using port {port}")
|
||||||
|
|
||||||
|
return errors
|
||||||
|
|
||||||
|
|
||||||
|
def check_deploy_readme_consistency() -> list[str]:
|
||||||
|
"""Check that deploy/README.md is consistent with scripts."""
|
||||||
|
errors = []
|
||||||
|
deploy_readme_path = PROJECT_ROOT / "deploy" / "README.md"
|
||||||
|
deploy_content = deploy_readme_path.read_text(encoding="utf-8")
|
||||||
|
|
||||||
|
# Check for gateway port consistency
|
||||||
|
if "127.0.0.1:8765" not in deploy_content:
|
||||||
|
errors.append("deploy/README.md: Gateway port 8765 not documented correctly")
|
||||||
|
|
||||||
|
# Check for production script reference
|
||||||
|
if "scripts/run_prod.sh" not in deploy_content:
|
||||||
|
errors.append("deploy/README.md: Missing reference to scripts/run_prod.sh")
|
||||||
|
|
||||||
|
return errors
|
||||||
|
|
||||||
|
|
||||||
|
def check_run_prod_sh_ports() -> list[str]:
|
||||||
|
"""Check that run_prod.sh uses correct ports."""
|
||||||
|
errors = []
|
||||||
|
script_path = PROJECT_ROOT / "scripts" / "run_prod.sh"
|
||||||
|
script_content = script_path.read_text(encoding="utf-8")
|
||||||
|
|
||||||
|
# Production script should use port 8765 for gateway
|
||||||
|
if "--port 8765" not in script_content:
|
||||||
|
errors.append("scripts/run_prod.sh: Not using gateway port 8765")
|
||||||
|
|
||||||
|
return errors
|
||||||
|
|
||||||
|
|
||||||
|
def check_service_main_blocks() -> list[str]:
|
||||||
|
"""Check that service modules use correct ports in __main__ blocks."""
|
||||||
|
errors = []
|
||||||
|
|
||||||
|
service_files = {
|
||||||
|
"agent_service": PROJECT_ROOT / "backend" / "apps" / "agent_service.py",
|
||||||
|
"trading_service": PROJECT_ROOT / "backend" / "apps" / "trading_service.py",
|
||||||
|
"news_service": PROJECT_ROOT / "backend" / "apps" / "news_service.py",
|
||||||
|
"runtime_service": PROJECT_ROOT / "backend" / "apps" / "runtime_service.py",
|
||||||
|
}
|
||||||
|
|
||||||
|
for service, file_path in service_files.items():
|
||||||
|
if not file_path.exists():
|
||||||
|
errors.append(f"{service}: File not found at {file_path}")
|
||||||
|
continue
|
||||||
|
|
||||||
|
content = file_path.read_text(encoding="utf-8")
|
||||||
|
expected_port = SERVICE_PORTS[service]
|
||||||
|
|
||||||
|
# Check for port= in uvicorn.run or app.run
|
||||||
|
if f"port={expected_port}" not in content and f"port= {expected_port}" not in content:
|
||||||
|
errors.append(f"{file_path}: Not using expected port {expected_port}")
|
||||||
|
|
||||||
|
return errors
|
||||||
|
|
||||||
|
|
||||||
|
def main() -> int:
|
||||||
|
parser = argparse.ArgumentParser(
|
||||||
|
description="Verify documentation and script consistency.",
|
||||||
|
)
|
||||||
|
parser.add_argument(
|
||||||
|
"--strict",
|
||||||
|
action="store_true",
|
||||||
|
help="Treat warnings as errors",
|
||||||
|
)
|
||||||
|
args = parser.parse_args()
|
||||||
|
|
||||||
|
all_errors = []
|
||||||
|
|
||||||
|
print("Checking README.md ports...")
|
||||||
|
all_errors.extend(check_readme_ports())
|
||||||
|
|
||||||
|
print("Checking start-dev.sh ports...")
|
||||||
|
all_errors.extend(check_start_dev_sh_ports())
|
||||||
|
|
||||||
|
print("Checking deploy/README.md consistency...")
|
||||||
|
all_errors.extend(check_deploy_readme_consistency())
|
||||||
|
|
||||||
|
print("Checking scripts/run_prod.sh ports...")
|
||||||
|
all_errors.extend(check_run_prod_sh_ports())
|
||||||
|
|
||||||
|
print("Checking service __main__ blocks...")
|
||||||
|
all_errors.extend(check_service_main_blocks())
|
||||||
|
|
||||||
|
if all_errors:
|
||||||
|
print("\nConsistency errors found:")
|
||||||
|
for error in all_errors:
|
||||||
|
print(f" - {error}")
|
||||||
|
return 1 if args.strict else 0
|
||||||
|
else:
|
||||||
|
print("\nAll consistency checks passed!")
|
||||||
|
return 0
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
raise SystemExit(main())
|
||||||
@@ -4,6 +4,14 @@ This repository is in a split-first state: local development now assumes
|
|||||||
separate app surfaces and a dedicated WebSocket gateway instead of a single
|
separate app surfaces and a dedicated WebSocket gateway instead of a single
|
||||||
combined backend entrypoint.
|
combined backend entrypoint.
|
||||||
|
|
||||||
|
For the canonical architecture summary, start with
|
||||||
|
[docs/current-architecture.md](../docs/current-architecture.md). This file is
|
||||||
|
service-focused and includes migration details.
|
||||||
|
The matching visual diagram lives at
|
||||||
|
[docs/current-architecture.excalidraw](../docs/current-architecture.excalidraw),
|
||||||
|
and the next-step execution plan lives at
|
||||||
|
[docs/development-roadmap.md](../docs/development-roadmap.md).
|
||||||
|
|
||||||
## Service Map
|
## Service Map
|
||||||
|
|
||||||
| Surface | Default port | Role |
|
| Surface | Default port | Role |
|
||||||
@@ -13,9 +21,32 @@ combined backend entrypoint.
|
|||||||
| `backend.apps.news_service` | `8002` | Read-only explain/news APIs such as story, similar days, range explain |
|
| `backend.apps.news_service` | `8002` | Read-only explain/news APIs such as story, similar days, range explain |
|
||||||
| `backend.apps.runtime_service` | `8003` | Runtime lifecycle APIs under `/api/runtime/*` |
|
| `backend.apps.runtime_service` | `8003` | Runtime lifecycle APIs under `/api/runtime/*` |
|
||||||
| `backend.apps.openclaw_service` | `8004` | Read-only OpenClaw REST facade |
|
| `backend.apps.openclaw_service` | `8004` | Read-only OpenClaw REST facade |
|
||||||
| Gateway (`backend.main`) | `8765` | WebSocket feed, runtime event stream, legacy/compat orchestration path |
|
| Gateway (`backend.main`) | `8765` | WebSocket feed, runtime event stream, pipeline execution |
|
||||||
| OpenClaw Gateway | `18789` | External OpenClaw WebSocket endpoint consumed by 大时代 gateway |
|
| OpenClaw Gateway | `18789` | External OpenClaw WebSocket endpoint consumed by 大时代 gateway |
|
||||||
|
|
||||||
|
## Runtime Modes
|
||||||
|
|
||||||
|
### Standalone Mode (Direct Gateway Startup)
|
||||||
|
|
||||||
|
For simple deployments or backward compatibility:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
python -m backend.main --mode live --host 0.0.0.0 --port 8765
|
||||||
|
```
|
||||||
|
|
||||||
|
In this mode, Gateway runs as the primary process with all components
|
||||||
|
(Pipeline, Market Service, Scheduler) loaded in-process.
|
||||||
|
|
||||||
|
### Microservice Mode (Recommended)
|
||||||
|
|
||||||
|
For development and production with service isolation:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
./start-dev.sh
|
||||||
|
```
|
||||||
|
|
||||||
|
This starts all services with `runtime_service` managing the Gateway lifecycle.
|
||||||
|
|
||||||
## What Runs By Default In Dev
|
## What Runs By Default In Dev
|
||||||
|
|
||||||
The supported local dev path is:
|
The supported local dev path is:
|
||||||
@@ -30,7 +61,7 @@ That script starts:
|
|||||||
- `trading_service` on `8001`
|
- `trading_service` on `8001`
|
||||||
- `news_service` on `8002`
|
- `news_service` on `8002`
|
||||||
- `runtime_service` on `8003`
|
- `runtime_service` on `8003`
|
||||||
- 大时代 gateway on `8765`
|
- 大时代 gateway on `8765` (as subprocess of runtime_service)
|
||||||
|
|
||||||
It does **not** start `openclaw_service` on `8004`.
|
It does **not** start `openclaw_service` on `8004`.
|
||||||
|
|
||||||
@@ -47,7 +78,21 @@ python -m uvicorn backend.apps.agent_service:app --host 0.0.0.0 --port 8000 --re
|
|||||||
python -m uvicorn backend.apps.trading_service:app --host 0.0.0.0 --port 8001 --reload
|
python -m uvicorn backend.apps.trading_service:app --host 0.0.0.0 --port 8001 --reload
|
||||||
python -m uvicorn backend.apps.news_service:app --host 0.0.0.0 --port 8002 --reload
|
python -m uvicorn backend.apps.news_service:app --host 0.0.0.0 --port 8002 --reload
|
||||||
python -m uvicorn backend.apps.runtime_service:app --host 0.0.0.0 --port 8003 --reload
|
python -m uvicorn backend.apps.runtime_service:app --host 0.0.0.0 --port 8003 --reload
|
||||||
python -m backend.main --mode live --host 0.0.0.0 --port 8765
|
```
|
||||||
|
|
||||||
|
The Gateway is started by `runtime_service` via the `/api/runtime/start` API,
|
||||||
|
not manually. To start a runtime:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
curl -X POST http://localhost:8003/api/runtime/start \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-d '{
|
||||||
|
"launch_mode": "fresh",
|
||||||
|
"tickers": ["AAPL", "MSFT"],
|
||||||
|
"schedule_mode": "daily",
|
||||||
|
"trigger_time": "09:30",
|
||||||
|
"initial_cash": 100000
|
||||||
|
}'
|
||||||
```
|
```
|
||||||
|
|
||||||
Optional OpenClaw REST surface:
|
Optional OpenClaw REST surface:
|
||||||
@@ -60,13 +105,72 @@ python -m uvicorn backend.apps.openclaw_service:app --host 0.0.0.0 --port 8004 -
|
|||||||
|
|
||||||
The runtime path is intentionally split:
|
The runtime path is intentionally split:
|
||||||
|
|
||||||
- `runtime_service` handles start, stop, restart, current runtime info, logs, and runtime state APIs
|
### Control Plane (runtime_service :8003)
|
||||||
|
|
||||||
|
- **Gateway lifecycle**: Start, stop, restart Gateway processes
|
||||||
|
- **Configuration**: Bootstrap values, runtime parameters
|
||||||
|
- **Health monitoring**: Gateway process status, port management
|
||||||
|
- **Run history**: List historical runs, restore from snapshots
|
||||||
|
|
||||||
|
### Data Plane (Gateway :8765)
|
||||||
|
|
||||||
|
- **WebSocket transport**: Live event streaming to frontend
|
||||||
|
- **Pipeline execution**: Analysis -> Communication -> Decision -> Execution
|
||||||
|
- **Market data**: Real-time price feeds and backtest simulation
|
||||||
|
- **Scheduler**: Trading cycle orchestration
|
||||||
|
|
||||||
|
### Supporting Services
|
||||||
|
|
||||||
- `agent_service` handles control-plane reads and writes for agents, workspaces, files, and approvals
|
- `agent_service` handles control-plane reads and writes for agents, workspaces, files, and approvals
|
||||||
- `backend.main` / gateway hosts the live WebSocket channel and coordinates market service, scheduler, and pipeline execution
|
- `trading_service` provides read-only trading data
|
||||||
|
- `news_service` provides news enrichment and explanation APIs
|
||||||
|
|
||||||
The practical request path looks like:
|
The practical request path looks like:
|
||||||
|
|
||||||
`frontend -> runtime_service/control APIs -> gateway/runtime manager -> market service + pipeline + storage`
|
```
|
||||||
|
frontend -> runtime_service/control APIs -> gateway/runtime manager -> market service + pipeline + storage
|
||||||
|
```
|
||||||
|
|
||||||
|
## runtime_service as Gateway Process Manager
|
||||||
|
|
||||||
|
The `runtime_service` is the **Gateway Process Manager** in the microservice
|
||||||
|
architecture. Its responsibilities:
|
||||||
|
|
||||||
|
1. **Process Management**
|
||||||
|
- Spawns Gateway as subprocess via `_start_gateway_process()`
|
||||||
|
- Monitors process health via `gateway_process.poll()`
|
||||||
|
- Handles graceful shutdown (SIGTERM) and force kill
|
||||||
|
|
||||||
|
2. **Port Management**
|
||||||
|
- Finds available ports (`_find_available_port()`)
|
||||||
|
- Tracks current Gateway port in RuntimeState
|
||||||
|
|
||||||
|
3. **Lifecycle APIs**
|
||||||
|
- `POST /api/runtime/start` - Create run, spawn Gateway
|
||||||
|
- `POST /api/runtime/stop` - Stop Gateway process
|
||||||
|
- `POST /api/runtime/restart` - Stop then start new runtime
|
||||||
|
- `GET /api/runtime/gateway/status` - Check Gateway health
|
||||||
|
|
||||||
|
4. **State Management**
|
||||||
|
- Maintains RuntimeState singleton (thread-safe)
|
||||||
|
- Tracks runtime_manager reference for in-memory state
|
||||||
|
- Falls back to persisted snapshots when Gateway is stopped
|
||||||
|
|
||||||
|
### Gateway Subprocess Architecture
|
||||||
|
|
||||||
|
```
|
||||||
|
runtime_service (:8003)
|
||||||
|
|
|
||||||
|
|-- spawns --> Gateway subprocess (:8765)
|
||||||
|
|
|
||||||
|
|-- TradingPipeline
|
||||||
|
|-- MarketService
|
||||||
|
|-- Scheduler
|
||||||
|
|-- WebSocket server
|
||||||
|
```
|
||||||
|
|
||||||
|
The Gateway subprocess runs `backend.gateway_server` module (not `backend.main`)
|
||||||
|
with run-specific configuration passed via CLI arguments.
|
||||||
|
|
||||||
## Environment Variables
|
## Environment Variables
|
||||||
|
|
||||||
@@ -144,7 +248,7 @@ backend.apps.agent_service
|
|||||||
└─ control-plane routes
|
└─ control-plane routes
|
||||||
|
|
||||||
backend.apps.runtime_service
|
backend.apps.runtime_service
|
||||||
└─ runtime lifecycle routes
|
└─ runtime lifecycle routes, Gateway process management
|
||||||
|
|
||||||
backend.apps.trading_service
|
backend.apps.trading_service
|
||||||
└─ read-only trading contract
|
└─ read-only trading contract
|
||||||
@@ -155,6 +259,40 @@ backend.apps.news_service
|
|||||||
backend.apps.openclaw_service
|
backend.apps.openclaw_service
|
||||||
└─ optional OpenClaw REST facade
|
└─ optional OpenClaw REST facade
|
||||||
|
|
||||||
backend.main / backend.services.gateway
|
backend.gateway_server
|
||||||
└─ live orchestration, feed transport, scheduler, runtime coordination
|
└─ Gateway subprocess entry point (run-scoped)
|
||||||
|
|
||||||
|
backend.main
|
||||||
|
└─ standalone Gateway entry point (compatibility)
|
||||||
```
|
```
|
||||||
|
|
||||||
|
## Migration Boundaries
|
||||||
|
|
||||||
|
Some agent-migration helpers still exist in the tree, but they are not part of
|
||||||
|
the supported runtime path yet:
|
||||||
|
|
||||||
|
No workspace-loading helper remains on `TradingPipeline`. Runtime agent loading
|
||||||
|
is expected to stay on the run-scoped creation path.
|
||||||
|
|
||||||
|
Also note the remaining naming split:
|
||||||
|
|
||||||
|
- `workspaces/` = design-time CRUD registry
|
||||||
|
- `runs/<run_id>/` = runtime state and run-scoped agent assets
|
||||||
|
|
||||||
|
## Future Architecture Direction
|
||||||
|
|
||||||
|
### Current State
|
||||||
|
|
||||||
|
- Pipeline logic lives in Gateway process
|
||||||
|
- Gateway is spawned as subprocess by runtime_service
|
||||||
|
- Standalone mode (`backend.main`) preserved for compatibility
|
||||||
|
|
||||||
|
### Target State
|
||||||
|
|
||||||
|
- Pipeline stages become independent services
|
||||||
|
- Gateway becomes thin event router
|
||||||
|
- runtime_service becomes full orchestrator
|
||||||
|
- Standalone mode deprecated and removed
|
||||||
|
|
||||||
|
See [docs/development-roadmap.md](../docs/development-roadmap.md) for detailed
|
||||||
|
phase planning.
|
||||||
|
|||||||
575
start-dev.sh
575
start-dev.sh
@@ -1,103 +1,335 @@
|
|||||||
#!/usr/bin/env bash
|
#!/usr/bin/env bash
|
||||||
|
#
|
||||||
# 大时代 Development Startup Script
|
# 大时代 Development Startup Script
|
||||||
# Split-service mode only
|
# ================================
|
||||||
|
#
|
||||||
|
# 启动模式说明:
|
||||||
|
# -------------
|
||||||
|
# 本脚本支持两种启动模式:
|
||||||
|
#
|
||||||
|
# 1. 微服务模式 (默认) - 启动 4 个独立服务 + Gateway
|
||||||
|
# 这是推荐的开发模式,各服务独立运行,便于单独调试和重启
|
||||||
|
# - agent_service (端口 8000): Agent 生命周期管理
|
||||||
|
# - runtime_service (端口 8003): 运行时配置和 Pipeline 执行
|
||||||
|
# - trading_service (端口 8001): 市场数据和交易操作
|
||||||
|
# - news_service (端口 8002): 新闻采集和富化
|
||||||
|
# - gateway (端口 8765): WebSocket 网关,前端连接入口
|
||||||
|
#
|
||||||
|
# 2. 独立模式 (--standalone) - 仅启动 Gateway
|
||||||
|
# Gateway 内部会自行管理服务,适合快速验证或资源受限环境
|
||||||
|
#
|
||||||
|
# 用法:
|
||||||
|
# ./start-dev.sh # 启动微服务模式
|
||||||
|
# ./start-dev.sh --standalone # 启动独立模式
|
||||||
|
# ./start-dev.sh --help # 显示帮助信息
|
||||||
|
#
|
||||||
|
|
||||||
set -euo pipefail
|
set -euo pipefail
|
||||||
|
|
||||||
echo "=========================================="
|
# ============================================
|
||||||
echo "大时代 Development Environment"
|
# 配置与常量
|
||||||
echo "=========================================="
|
# ============================================
|
||||||
|
|
||||||
# Colors for output
|
readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||||
RED='\033[0;31m'
|
readonly SCRIPT_NAME="$(basename "$0")"
|
||||||
GREEN='\033[0;32m'
|
|
||||||
YELLOW='\033[1;33m'
|
|
||||||
NC='\033[0m' # No Color
|
|
||||||
|
|
||||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
# 服务端点配置
|
||||||
cd "${SCRIPT_DIR}"
|
readonly AGENT_SERVICE_PORT=8000
|
||||||
|
readonly TRADING_SERVICE_PORT=8001
|
||||||
|
readonly NEWS_SERVICE_PORT=8002
|
||||||
|
readonly RUNTIME_SERVICE_PORT=8003
|
||||||
|
readonly GATEWAY_PORT=8765
|
||||||
|
|
||||||
|
# 颜色定义
|
||||||
|
readonly RED='\033[0;31m'
|
||||||
|
readonly GREEN='\033[0;32m'
|
||||||
|
readonly YELLOW='\033[1;33m'
|
||||||
|
readonly BLUE='\033[0;34m'
|
||||||
|
readonly CYAN='\033[0;36m'
|
||||||
|
readonly NC='\033[0m' # No Color
|
||||||
|
|
||||||
|
# 进程 ID 数组
|
||||||
PIDS=()
|
PIDS=()
|
||||||
|
|
||||||
require_command() {
|
# 启动模式: "microservices" 或 "standalone"
|
||||||
local command_name="$1"
|
MODE="microservices"
|
||||||
if ! command -v "${command_name}" >/dev/null 2>&1; then
|
|
||||||
echo -e "${RED}Missing required command: ${command_name}${NC}"
|
# ============================================
|
||||||
|
# 工具函数
|
||||||
|
# ============================================
|
||||||
|
|
||||||
|
log_info() {
|
||||||
|
echo -e "${GREEN}[INFO]${NC} $1"
|
||||||
|
}
|
||||||
|
|
||||||
|
log_warn() {
|
||||||
|
echo -e "${YELLOW}[WARN]${NC} $1"
|
||||||
|
}
|
||||||
|
|
||||||
|
log_error() {
|
||||||
|
echo -e "${RED}[ERROR]${NC} $1"
|
||||||
|
}
|
||||||
|
|
||||||
|
log_step() {
|
||||||
|
echo -e "${CYAN}[STEP]${NC} $1"
|
||||||
|
}
|
||||||
|
|
||||||
|
log_debug() {
|
||||||
|
echo -e "${BLUE}[DEBUG]${NC} $1"
|
||||||
|
}
|
||||||
|
|
||||||
|
# ============================================
|
||||||
|
# 帮助信息
|
||||||
|
# ============================================
|
||||||
|
|
||||||
|
show_help() {
|
||||||
|
cat << 'EOF'
|
||||||
|
大时代 Development Startup Script
|
||||||
|
|
||||||
|
用法:
|
||||||
|
./start-dev.sh [选项]
|
||||||
|
|
||||||
|
选项:
|
||||||
|
--standalone 以独立模式启动(仅启动 Gateway,内部管理服务)
|
||||||
|
--help, -h 显示此帮助信息
|
||||||
|
|
||||||
|
模式说明:
|
||||||
|
|
||||||
|
微服务模式 (默认):
|
||||||
|
启动 4 个独立微服务 + Gateway,各服务独立进程,便于单独调试
|
||||||
|
- agent_service: http://localhost:8000 (Agent 生命周期)
|
||||||
|
- trading_service: http://localhost:8001 (市场数据)
|
||||||
|
- news_service: http://localhost:8002 (新闻服务)
|
||||||
|
- runtime_service: http://localhost:8003 (运行时管理)
|
||||||
|
- gateway: ws://localhost:8765 (WebSocket 网关)
|
||||||
|
|
||||||
|
独立模式 (--standalone):
|
||||||
|
仅启动 Gateway,由 Gateway 内部自行管理服务
|
||||||
|
适合快速验证或资源受限环境
|
||||||
|
|
||||||
|
环境要求:
|
||||||
|
- Python 3.9+
|
||||||
|
- 虚拟环境 (推荐 .venv)
|
||||||
|
- .env 文件 (可选但推荐)
|
||||||
|
|
||||||
|
示例:
|
||||||
|
./start-dev.sh # 启动微服务模式
|
||||||
|
./start-dev.sh --standalone # 启动独立模式
|
||||||
|
EOF
|
||||||
|
}
|
||||||
|
|
||||||
|
# ============================================
|
||||||
|
# 参数解析
|
||||||
|
# ============================================
|
||||||
|
|
||||||
|
parse_args() {
|
||||||
|
while [[ $# -gt 0 ]]; do
|
||||||
|
case "$1" in
|
||||||
|
--standalone)
|
||||||
|
MODE="standalone"
|
||||||
|
shift
|
||||||
|
;;
|
||||||
|
--help|-h)
|
||||||
|
show_help
|
||||||
|
exit 0
|
||||||
|
;;
|
||||||
|
*)
|
||||||
|
log_warn "未知选项: $1"
|
||||||
|
log_info "使用 --help 查看帮助信息"
|
||||||
|
exit 1
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
done
|
||||||
|
}
|
||||||
|
|
||||||
|
# ============================================
|
||||||
|
# 启动前检查
|
||||||
|
# ============================================
|
||||||
|
|
||||||
|
check_python_version() {
|
||||||
|
log_step "检查 Python 版本..."
|
||||||
|
|
||||||
|
if ! command -v python >/dev/null 2>&1; then
|
||||||
|
log_error "未找到 python 命令"
|
||||||
exit 1
|
exit 1
|
||||||
fi
|
fi
|
||||||
|
|
||||||
|
local python_version
|
||||||
|
python_version=$(python --version 2>&1 | awk '{print $2}')
|
||||||
|
log_debug "Python 版本: $python_version"
|
||||||
|
|
||||||
|
python - <<'PY' || {
|
||||||
|
import sys
|
||||||
|
if sys.version_info < (3, 9):
|
||||||
|
print(f"Python 3.9+ 是必需的,当前版本: {sys.version}")
|
||||||
|
sys.exit(1)
|
||||||
|
print(f"Python 版本检查通过: {sys.version.split()[0]}")
|
||||||
|
PY
|
||||||
|
log_error "Python 版本不符合要求 (需要 3.9+)"
|
||||||
|
exit 1
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
check_virtual_env() {
|
||||||
|
log_step "检查虚拟环境..."
|
||||||
|
|
||||||
|
if [[ -n "${VIRTUAL_ENV:-}" ]]; then
|
||||||
|
log_info "已激活虚拟环境: $VIRTUAL_ENV"
|
||||||
|
return 0
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$SCRIPT_DIR/.venv/bin/activate" ]]; then
|
||||||
|
log_warn "未激活虚拟环境,自动激活 .venv"
|
||||||
|
# shellcheck disable=SC1091
|
||||||
|
source "$SCRIPT_DIR/.venv/bin/activate"
|
||||||
|
log_info "虚拟环境已激活: $VIRTUAL_ENV"
|
||||||
|
else
|
||||||
|
log_warn "未找到虚拟环境,使用系统 Python"
|
||||||
|
fi
|
||||||
}
|
}
|
||||||
|
|
||||||
check_python_module() {
|
check_required_commands() {
|
||||||
local module_name="$1"
|
log_step "检查必要命令..."
|
||||||
if ! python -c "import ${module_name}" >/dev/null 2>&1; then
|
|
||||||
echo -e "${RED}Missing required Python module: ${module_name}${NC}"
|
local missing=()
|
||||||
echo "Install dependencies with one of:"
|
|
||||||
echo " pip install -r requirements.txt"
|
if ! command -v python >/dev/null 2>&1; then
|
||||||
echo " pip install -r requirements-dev.txt"
|
missing+=("python")
|
||||||
echo " uv pip install -e '.[dev]'"
|
fi
|
||||||
|
|
||||||
|
if ! command -v lsof >/dev/null 2>&1; then
|
||||||
|
missing+=("lsof")
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ ${#missing[@]} -gt 0 ]]; then
|
||||||
|
log_error "缺少必要命令: ${missing[*]}"
|
||||||
|
log_info "请安装缺失的命令后重试"
|
||||||
exit 1
|
exit 1
|
||||||
fi
|
fi
|
||||||
|
|
||||||
|
log_info "所有必要命令已安装"
|
||||||
}
|
}
|
||||||
|
|
||||||
load_env_file() {
|
check_python_modules() {
|
||||||
if [ -f .env ]; then
|
log_step "检查 Python 依赖模块..."
|
||||||
echo -e "${GREEN}Loading environment from .env${NC}"
|
|
||||||
|
local modules=("fastapi" "uvicorn" "websockets" "yaml" "dotenv")
|
||||||
|
local missing=()
|
||||||
|
|
||||||
|
for module in "${modules[@]}"; do
|
||||||
|
if ! python -c "import $module" 2>/dev/null; then
|
||||||
|
missing+=("$module")
|
||||||
|
fi
|
||||||
|
done
|
||||||
|
|
||||||
|
if [[ ${#missing[@]} -gt 0 ]]; then
|
||||||
|
log_error "缺少 Python 模块: ${missing[*]}"
|
||||||
|
log_info "请安装依赖: uv pip install -e '.[dev]' 或 pip install -r requirements.txt"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
log_info "所有依赖模块已安装"
|
||||||
|
}
|
||||||
|
|
||||||
|
check_env_file() {
|
||||||
|
log_step "检查环境配置文件..."
|
||||||
|
|
||||||
|
if [[ -f "$SCRIPT_DIR/.env" ]]; then
|
||||||
|
log_info "加载环境变量: .env"
|
||||||
set -a
|
set -a
|
||||||
source .env
|
# shellcheck disable=SC1091
|
||||||
|
source "$SCRIPT_DIR/.env"
|
||||||
set +a
|
set +a
|
||||||
else
|
else
|
||||||
echo -e "${YELLOW}Warning: .env file not found. Copy env.template to .env first if you need live credentials.${NC}"
|
log_warn ".env 文件不存在,使用默认配置"
|
||||||
|
log_info "提示: 复制 env.template 到 .env 并配置您的 API 密钥"
|
||||||
fi
|
fi
|
||||||
}
|
}
|
||||||
|
|
||||||
check_env_var() {
|
check_ports() {
|
||||||
local var_name="$1"
|
log_step "检查端口占用情况..."
|
||||||
local severity="${2:-warn}"
|
|
||||||
local value="${!var_name:-}"
|
local ports=()
|
||||||
if [ -z "${value}" ]; then
|
|
||||||
if [ "${severity}" = "error" ]; then
|
if [[ "$MODE" == "microservices" ]]; then
|
||||||
echo -e "${RED}Missing required environment variable: ${var_name}${NC}"
|
ports=($AGENT_SERVICE_PORT $TRADING_SERVICE_PORT $NEWS_SERVICE_PORT $RUNTIME_SERVICE_PORT $GATEWAY_PORT)
|
||||||
exit 1
|
else
|
||||||
fi
|
ports=($GATEWAY_PORT)
|
||||||
echo -e "${YELLOW}Warning: ${var_name} is not set${NC}"
|
|
||||||
fi
|
fi
|
||||||
|
|
||||||
|
local occupied=()
|
||||||
|
for port in "${ports[@]}"; do
|
||||||
|
if lsof -Pi :"$port" -sTCP:LISTEN -t >/dev/null 2>&1; then
|
||||||
|
occupied+=("$port")
|
||||||
|
fi
|
||||||
|
done
|
||||||
|
|
||||||
|
if [[ ${#occupied[@]} -gt 0 ]]; then
|
||||||
|
log_warn "以下端口已被占用: ${occupied[*]}"
|
||||||
|
log_info "尝试释放端口..."
|
||||||
|
|
||||||
|
for port in "${occupied[@]}"; do
|
||||||
|
kill_port "$port"
|
||||||
|
done
|
||||||
|
else
|
||||||
|
log_info "所有端口可用"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
kill_port() {
|
||||||
|
local port="$1"
|
||||||
|
local pids
|
||||||
|
pids=$(lsof -ti :"$port" 2>/dev/null || true)
|
||||||
|
|
||||||
|
if [[ -n "$pids" ]]; then
|
||||||
|
log_warn "释放端口 $port (PID: $pids)"
|
||||||
|
echo "$pids" | xargs kill -9 2>/dev/null || true
|
||||||
|
sleep 0.5
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
check_optional_services() {
|
||||||
|
log_step "检查可选服务..."
|
||||||
|
|
||||||
|
# 检查 npm(用于前端)
|
||||||
|
if ! command -v npm >/dev/null 2>&1; then
|
||||||
|
log_warn "npm 未安装,前端启动功能不可用"
|
||||||
|
else
|
||||||
|
log_info "npm 已安装"
|
||||||
|
fi
|
||||||
|
|
||||||
|
# 检查 OpenClaw gateway
|
||||||
|
check_openclaw_gateway
|
||||||
}
|
}
|
||||||
|
|
||||||
check_openclaw_gateway() {
|
check_openclaw_gateway() {
|
||||||
local target_host="127.0.0.1"
|
local target_host="127.0.0.1"
|
||||||
local target_port="18789"
|
local target_port="18789"
|
||||||
if python - <<PY >/dev/null 2>&1
|
|
||||||
|
if python - <<PY >/dev/null 2>&1; then
|
||||||
import socket
|
import socket
|
||||||
sock = socket.socket()
|
sock = socket.socket()
|
||||||
sock.settimeout(1.0)
|
sock.settimeout(1.0)
|
||||||
sock.connect(("${target_host}", ${target_port}))
|
sock.connect(("${target_host}", ${target_port}))
|
||||||
sock.close()
|
sock.close()
|
||||||
PY
|
PY
|
||||||
then
|
log_info "OpenClaw gateway 可连接: ws://${target_host}:${target_port}"
|
||||||
echo -e "${GREEN}OpenClaw gateway reachable at ws://${target_host}:${target_port}${NC}"
|
|
||||||
else
|
else
|
||||||
echo -e "${YELLOW}Warning: OpenClaw gateway is not reachable at ws://${target_host}:${target_port}${NC}"
|
log_warn "OpenClaw gateway 未启动: ws://${target_host}:${target_port}"
|
||||||
echo " OpenClaw panel features may be unavailable until it is started."
|
log_info " OpenClaw 面板功能将不可用"
|
||||||
fi
|
fi
|
||||||
}
|
}
|
||||||
|
|
||||||
print_prereq_help() {
|
# ============================================
|
||||||
echo "Environment checks:"
|
# 服务启动函数
|
||||||
echo " - repo root: ${SCRIPT_DIR}"
|
# ============================================
|
||||||
echo " - python: $(command -v python)"
|
|
||||||
if [ -n "${VIRTUAL_ENV:-}" ]; then
|
|
||||||
echo " - virtualenv: ${VIRTUAL_ENV}"
|
|
||||||
else
|
|
||||||
echo " - virtualenv: not activated"
|
|
||||||
fi
|
|
||||||
}
|
|
||||||
|
|
||||||
start_service() {
|
start_service() {
|
||||||
local name="$1"
|
local name="$1"
|
||||||
local app_path="$2"
|
local app_path="$2"
|
||||||
local port="$3"
|
local port="$3"
|
||||||
|
|
||||||
echo -e "${GREEN}Starting ${name}${NC} on port ${port}..."
|
log_info "启动 ${name} (端口 ${port})..."
|
||||||
SERVICE_NAME="${name}" python -m uvicorn "${app_path}" \
|
SERVICE_NAME="${name}" python -m uvicorn "${app_path}" \
|
||||||
--host 0.0.0.0 \
|
--host 0.0.0.0 \
|
||||||
--port "${port}" \
|
--port "${port}" \
|
||||||
@@ -108,111 +340,156 @@ start_service() {
|
|||||||
PIDS+=($!)
|
PIDS+=($!)
|
||||||
}
|
}
|
||||||
|
|
||||||
cleanup() {
|
start_gateway() {
|
||||||
if [ "${#PIDS[@]}" -gt 0 ]; then
|
log_step "启动 Gateway (WebSocket 服务)..."
|
||||||
echo ""
|
log_info "Gateway 将作为子进程启动 (端口 ${GATEWAY_PORT})"
|
||||||
echo -e "${YELLOW}Stopping development services...${NC}"
|
log_info "前端连接地址: ws://localhost:${GATEWAY_PORT}"
|
||||||
kill "${PIDS[@]}" 2>/dev/null || true
|
|
||||||
wait "${PIDS[@]}" 2>/dev/null || true
|
SERVICE_NAME="gateway" python -m backend.main \
|
||||||
fi
|
--mode live \
|
||||||
|
--host 0.0.0.0 \
|
||||||
|
--port "$GATEWAY_PORT" &
|
||||||
|
PIDS+=($!)
|
||||||
}
|
}
|
||||||
|
|
||||||
kill_port() {
|
# ============================================
|
||||||
local port="$1"
|
# 微服务模式启动
|
||||||
local pids=$(lsof -ti :${port} 2>/dev/null || true)
|
# ============================================
|
||||||
if [ -n "$pids" ]; then
|
|
||||||
echo -e "${YELLOW}Port ${port} is in use, killing PID(s): ${pids}${NC}"
|
start_microservices_mode() {
|
||||||
echo "$pids" | xargs kill -9 2>/dev/null || true
|
log_step "启动微服务模式..."
|
||||||
sleep 0.5
|
echo ""
|
||||||
|
echo -e "${CYAN}==========================================${NC}"
|
||||||
|
echo -e "${CYAN} 服务端点 ${NC}"
|
||||||
|
echo -e "${CYAN}==========================================${NC}"
|
||||||
|
echo -e " agent_service: http://localhost:${AGENT_SERVICE_PORT}"
|
||||||
|
echo -e " runtime_service: http://localhost:${RUNTIME_SERVICE_PORT}"
|
||||||
|
echo -e " trading_service: http://localhost:${TRADING_SERVICE_PORT}"
|
||||||
|
echo -e " news_service: http://localhost:${NEWS_SERVICE_PORT}"
|
||||||
|
echo -e " gateway: ws://localhost:${GATEWAY_PORT}"
|
||||||
|
echo -e "${CYAN}==========================================${NC}"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# 设置服务 URL 环境变量
|
||||||
|
export TRADING_SERVICE_URL="${TRADING_SERVICE_URL:-http://localhost:${TRADING_SERVICE_PORT}}"
|
||||||
|
export NEWS_SERVICE_URL="${NEWS_SERVICE_URL:-http://localhost:${NEWS_SERVICE_PORT}}"
|
||||||
|
export RUNTIME_SERVICE_URL="${RUNTIME_SERVICE_URL:-http://localhost:${RUNTIME_SERVICE_PORT}}"
|
||||||
|
export OPENCLAW_SERVICE_URL="${OPENCLAW_SERVICE_URL:-http://localhost:18789}"
|
||||||
|
export ENABLE_DASHBOARD_COMPAT_EXPORTS="${ENABLE_DASHBOARD_COMPAT_EXPORTS:-true}"
|
||||||
|
|
||||||
|
log_debug "环境变量:"
|
||||||
|
log_debug " TRADING_SERVICE_URL=${TRADING_SERVICE_URL}"
|
||||||
|
log_debug " NEWS_SERVICE_URL=${NEWS_SERVICE_URL}"
|
||||||
|
log_debug " RUNTIME_SERVICE_URL=${RUNTIME_SERVICE_URL}"
|
||||||
|
log_debug " OPENCLAW_SERVICE_URL=${OPENCLAW_SERVICE_URL}"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# 启动 4 个微服务
|
||||||
|
start_service "agent_service" "backend.apps.agent_service:app" "$AGENT_SERVICE_PORT"
|
||||||
|
start_service "runtime_service" "backend.apps.runtime_service:app" "$RUNTIME_SERVICE_PORT"
|
||||||
|
start_service "trading_service" "backend.apps.trading_service:app" "$TRADING_SERVICE_PORT"
|
||||||
|
start_service "news_service" "backend.apps.news_service:app" "$NEWS_SERVICE_PORT"
|
||||||
|
|
||||||
|
# 启动 Gateway(作为子进程)
|
||||||
|
start_gateway
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
log_info "所有服务已启动"
|
||||||
|
log_info "按 Ctrl+C 停止所有服务"
|
||||||
|
echo ""
|
||||||
|
}
|
||||||
|
|
||||||
|
# ============================================
|
||||||
|
# 独立模式启动
|
||||||
|
# ============================================
|
||||||
|
|
||||||
|
start_standalone_mode() {
|
||||||
|
log_step "启动独立模式..."
|
||||||
|
echo ""
|
||||||
|
echo -e "${CYAN}==========================================${NC}"
|
||||||
|
echo -e "${CYAN} 独立模式 ${NC}"
|
||||||
|
echo -e "${CYAN}==========================================${NC}"
|
||||||
|
echo -e " gateway: ws://localhost:${GATEWAY_PORT}"
|
||||||
|
echo -e "${CYAN}==========================================${NC}"
|
||||||
|
echo ""
|
||||||
|
log_info "Gateway 将内部管理服务"
|
||||||
|
|
||||||
|
# 启动 Gateway(独立模式)
|
||||||
|
start_gateway
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
log_info "Gateway 已启动(独立模式)"
|
||||||
|
log_info "按 Ctrl+C 停止服务"
|
||||||
|
echo ""
|
||||||
|
}
|
||||||
|
|
||||||
|
# ============================================
|
||||||
|
# 清理与信号处理
|
||||||
|
# ============================================
|
||||||
|
|
||||||
|
cleanup() {
|
||||||
|
if [[ ${#PIDS[@]} -gt 0 ]]; then
|
||||||
|
echo ""
|
||||||
|
log_step "正在停止服务..."
|
||||||
|
|
||||||
|
for pid in "${PIDS[@]}"; do
|
||||||
|
if kill -0 "$pid" 2>/dev/null; then
|
||||||
|
kill "$pid" 2>/dev/null || true
|
||||||
|
fi
|
||||||
|
done
|
||||||
|
|
||||||
|
# 等待进程结束
|
||||||
|
wait "${PIDS[@]}" 2>/dev/null || true
|
||||||
|
log_info "所有服务已停止"
|
||||||
fi
|
fi
|
||||||
}
|
}
|
||||||
|
|
||||||
trap cleanup EXIT INT TERM
|
trap cleanup EXIT INT TERM
|
||||||
|
|
||||||
if [ $# -gt 0 ]; then
|
# ============================================
|
||||||
echo -e "${YELLOW}Ignoring legacy mode argument(s): $*${NC}"
|
# 主程序
|
||||||
echo "Split-service mode is now the only supported development mode."
|
# ============================================
|
||||||
fi
|
|
||||||
|
|
||||||
require_command python
|
main() {
|
||||||
require_command lsof
|
# 解析命令行参数
|
||||||
|
parse_args "$@"
|
||||||
|
|
||||||
if [ -z "${VIRTUAL_ENV:-}" ]; then
|
# 显示启动横幅
|
||||||
if [ -f ".venv/bin/activate" ]; then
|
echo ""
|
||||||
echo -e "${YELLOW}Virtual environment not activated; auto-activating .venv${NC}"
|
echo -e "${CYAN}==========================================${NC}"
|
||||||
# shellcheck disable=SC1091
|
echo -e "${CYAN} 大时代 Development Environment ${NC}"
|
||||||
source .venv/bin/activate
|
echo -e "${CYAN}==========================================${NC}"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# 切换到项目根目录
|
||||||
|
cd "$SCRIPT_DIR"
|
||||||
|
log_debug "工作目录: $SCRIPT_DIR"
|
||||||
|
|
||||||
|
# 启动前检查
|
||||||
|
check_required_commands
|
||||||
|
check_python_version
|
||||||
|
check_virtual_env
|
||||||
|
check_python_modules
|
||||||
|
check_env_file
|
||||||
|
check_ports
|
||||||
|
check_optional_services
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
echo -e "${GREEN}==========================================${NC}"
|
||||||
|
echo -e "${GREEN} 启动前检查完成 ${NC}"
|
||||||
|
echo -e "${GREEN}==========================================${NC}"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# 根据模式启动服务
|
||||||
|
if [[ "$MODE" == "standalone" ]]; then
|
||||||
|
start_standalone_mode
|
||||||
else
|
else
|
||||||
echo -e "${YELLOW}Warning: no active virtual environment and .venv not found${NC}"
|
start_microservices_mode
|
||||||
fi
|
fi
|
||||||
fi
|
|
||||||
|
|
||||||
load_env_file
|
# 等待所有后台进程
|
||||||
|
wait
|
||||||
|
}
|
||||||
|
|
||||||
print_prereq_help
|
# 执行主程序
|
||||||
|
main "$@"
|
||||||
python - <<'PY'
|
|
||||||
import sys
|
|
||||||
if sys.version_info < (3, 9):
|
|
||||||
raise SystemExit("Python 3.9+ is required")
|
|
||||||
print(f"Python version OK: {sys.version.split()[0]}")
|
|
||||||
PY
|
|
||||||
|
|
||||||
check_python_module fastapi
|
|
||||||
check_python_module uvicorn
|
|
||||||
check_python_module websockets
|
|
||||||
check_python_module yaml
|
|
||||||
check_python_module dotenv
|
|
||||||
|
|
||||||
check_env_var OPENAI_API_KEY
|
|
||||||
check_env_var FINNHUB_API_KEY
|
|
||||||
check_env_var FIN_DATA_SOURCE
|
|
||||||
|
|
||||||
if ! command -v npm >/dev/null 2>&1; then
|
|
||||||
echo -e "${YELLOW}Warning: npm is not installed. Frontend startup via 'evotraders frontend' will not work.${NC}"
|
|
||||||
fi
|
|
||||||
|
|
||||||
export TRADING_SERVICE_URL="${TRADING_SERVICE_URL:-http://localhost:8001}"
|
|
||||||
export NEWS_SERVICE_URL="${NEWS_SERVICE_URL:-http://localhost:8002}"
|
|
||||||
export RUNTIME_SERVICE_URL="${RUNTIME_SERVICE_URL:-http://localhost:8003}"
|
|
||||||
export OPENCLAW_SERVICE_URL="${OPENCLAW_SERVICE_URL:-http://localhost:18789}"
|
|
||||||
|
|
||||||
check_openclaw_gateway
|
|
||||||
|
|
||||||
echo ""
|
|
||||||
echo -e "${GREEN}Starting 大时代 split services (default mode)...${NC}"
|
|
||||||
echo " agent_service: http://localhost:8000"
|
|
||||||
echo " runtime_service: http://localhost:8003"
|
|
||||||
echo " openclaw_gateway: ws://localhost:18789"
|
|
||||||
echo " trading_service: http://localhost:8001"
|
|
||||||
echo " news_service: http://localhost:8002"
|
|
||||||
echo ""
|
|
||||||
echo "Exported backend preference URLs:"
|
|
||||||
echo " TRADING_SERVICE_URL=${TRADING_SERVICE_URL}"
|
|
||||||
echo " NEWS_SERVICE_URL=${NEWS_SERVICE_URL}"
|
|
||||||
echo " RUNTIME_SERVICE_URL=${RUNTIME_SERVICE_URL}"
|
|
||||||
echo " OPENCLAW_SERVICE_URL=${OPENCLAW_SERVICE_URL}"
|
|
||||||
echo ""
|
|
||||||
|
|
||||||
echo -e "${GREEN}Checking ports...${NC}"
|
|
||||||
kill_port 8000
|
|
||||||
kill_port 8001
|
|
||||||
kill_port 8002
|
|
||||||
kill_port 8003
|
|
||||||
kill_port 8765
|
|
||||||
|
|
||||||
start_service "agent_service" "backend.apps.agent_service:app" 8000
|
|
||||||
start_service "runtime_service" "backend.apps.runtime_service:app" 8003
|
|
||||||
start_service "trading_service" "backend.apps.trading_service:app" 8001
|
|
||||||
start_service "news_service" "backend.apps.news_service:app" 8002
|
|
||||||
|
|
||||||
echo -e "${GREEN}Starting Gateway (WebSocket, port 8765)...${NC}"
|
|
||||||
SERVICE_NAME="gateway" python -m backend.main \
|
|
||||||
--mode live \
|
|
||||||
--host 0.0.0.0 \
|
|
||||||
--port 8765 &
|
|
||||||
PIDS+=($!)
|
|
||||||
|
|
||||||
echo -e "${GREEN}Split services are running.${NC}"
|
|
||||||
echo "Use Ctrl+C to stop all services."
|
|
||||||
wait
|
|
||||||
|
|||||||
Reference in New Issue
Block a user