18 Commits

Author SHA1 Message Date
dfc8fda187 perf: asynchronize and parallelize analysis tools to prevent main loop blocking 2026-04-07 14:59:41 +08:00
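The fan-out this commit describes — moving blocking analysis tools off the main loop and running them concurrently — can be sketched with `asyncio.gather`. All names below are illustrative placeholders, not the project's actual API:

```python
import asyncio

async def run_analyst(name: str, ticker: str) -> dict:
    # Stand-in for a formerly blocking analysis tool moved off the main loop.
    await asyncio.sleep(0.01)  # simulates I/O-bound work (API calls, file reads)
    return {"analyst": name, "ticker": ticker, "signal": "hold"}

async def run_all(ticker: str) -> list:
    roles = ["fundamentals", "technical", "sentiment", "valuation"]
    # gather() runs every analyst concurrently instead of serially,
    # so one slow tool no longer stalls the whole trading cycle.
    return await asyncio.gather(*(run_analyst(r, ticker) for r in roles))

results = asyncio.run(run_all("AAPL"))
```

`gather` preserves input order, so downstream code can still rely on a stable analyst ordering even though execution overlaps.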
aae4bc7d40 fix: enable parallel analyst execution by removing broken TeamMsgHub import 2026-04-07 14:01:31 +08:00
11849208ed perf: optimize system concurrency, I/O stability and fix WebSocket disconnects 2026-04-07 13:58:49 +08:00
62c7341cf6 Add dynamic analyst runtime updates and deployment guides 2026-04-07 09:39:37 +08:00
80ce63da5a refactor: remove legacy agent fallback paths
Remove legacy AnalystAgent fallback and EVO_AGENT_IDS=legacy test paths.
EvoAgent is now the default for all supported roles.

- Delete runs/_legacy/ backup directory (live/, backtest/, production/)
- Remove test_evo_agent_legacy_mode test
- Remove test_pipeline_create_runtime_analyst_uses_legacy_when_not_in_evo_ids test
- Update TradingPipeline docstring to reflect EvoAgent-only runtime

Constraint: EvoAgent migration completed in prior commits
Scope-risk: narrow (test and comment cleanup only)
2026-04-03 14:28:16 +08:00
a9d863073f chore: ignore codex local artifacts 2026-04-03 13:51:21 +08:00
4ea8fc4c32 chore: ignore local codex state 2026-04-03 13:50:48 +08:00
771de8c49c docs: refresh runtime guidance 2026-04-03 13:48:49 +08:00
a399384e07 feat: update frontend runtime team controls 2026-04-03 13:48:39 +08:00
ecfbd87244 feat: add runtime dynamic team controls 2026-04-03 13:48:31 +08:00
dc0b250adc chore: remove legacy startup paths 2026-04-03 13:45:57 +08:00
2027635efe chore: remove Kubernetes sandbox TODO placeholder
Remove the TODO comment as this feature is not planned.
Kubernetes sandbox would require cluster setup and is not a priority.
2026-04-02 11:25:35 +08:00
fd71ee5e57 docs: remove references to deleted OpenClaw REST facade (port 8004)
Update all documentation to reflect removal of OpenClaw REST service:
- README.md, README_zh.md: remove service table entry
- deploy/README.md: update port range 8000-8003
- services/README.md: remove 8004 references and service list
- docs/compat-removal-plan.md: remove REST surface mention
- docs/current-architecture.md: remove service reference
- docs/legacy-inventory.md: simplify to WebSocket-only description

Follow-up to: refactor(openclaw): remove REST facade
2026-04-02 11:14:31 +08:00
ecc7623093 refactor(openclaw): remove REST facade (port 8004), unify on WebSocket
Remove the redundant OpenClaw REST service (port 8004) since frontend
already uses WebSocket via Gateway (port 8765) → OpenClaw (port 18789).

Deleted:
- backend/apps/openclaw_service.py
- backend/api/openclaw.py
- backend/tests/test_openclaw_service_app.py
- backend/tests/test_service_clients.py
- shared/client/openclaw_client.py

Updated:
- backend/apps/__init__.py — remove openclaw_app exports
- backend/api/__init__.py — remove openclaw_router
- shared/client/__init__.py — remove OpenClawServiceClient
- backend/services/gateway_openclaw_handlers.py — update docstring
- start.sh — remove port 8004 service startup

Architecture:
- Before: Frontend → HTTP :8004 → subprocess openclaw CLI
- After: Frontend → WS :8765 → Gateway → WS :18789 → OpenClaw

Constraint: Frontend already uses WebSocket exclusively
Confidence: high
Scope-risk: low (frontend unchanged)
2026-04-02 11:04:06 +08:00
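The unified path (`Frontend → WS :8765 → Gateway → WS :18789 → OpenClaw`) implies every OpenClaw request now travels as a message over the gateway channel rather than an HTTP call to :8004. A minimal sketch of what such an envelope might look like — the field names and protocol shape here are assumptions, not the Gateway's documented wire format:

```python
import json

def gateway_envelope(action: str, payload: dict) -> str:
    # Hypothetical message shape for routing an OpenClaw request
    # through the WebSocket gateway; the real protocol may differ.
    return json.dumps({"type": "openclaw", "action": action, "payload": payload})

msg = gateway_envelope("status", {"run_id": "demo"})
decoded = json.loads(msg)
```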
45c3996434 refactor(cleanup): remove legacy agent classes and complete EvoAgent migration
Remove deprecated AnalystAgent, PMAgent, and RiskAgent classes.
All agent creation now goes through UnifiedAgentFactory creating EvoAgent instances.

- Delete backend/agents/analyst.py (169 lines)
- Delete backend/agents/portfolio_manager.py (420 lines)
- Delete backend/agents/risk_manager.py (139 lines)
- Update all imports to use EvoAgent exclusively
- Clean up unused imports across 25 files
- Update tests to work with simplified agent structure

Constraint: EvoAgent is now the single source of truth for all agent roles
Constraint: UnifiedAgentFactory handles runtime agent creation
Rejected: Keep legacy aliases | creates maintenance burden
Confidence: high
Scope-risk: moderate (affects agent instantiation paths)
Directive: All new agent features must be added to EvoAgent, not legacy classes
Not-tested: Kubernetes sandbox executor (marked with TODO)
2026-04-02 10:51:14 +08:00
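The "single source of truth" pattern this commit describes — one factory, one agent class, no legacy fallback — can be sketched as follows. Class and method names mirror the commit message but the bodies are illustrative, not the repository's actual implementation:

```python
class EvoAgent:
    """Illustrative stand-in for the unified agent class."""
    def __init__(self, role: str, model: str):
        self.role = role
        self.model = model

class UnifiedAgentFactory:
    # The six roles named in the migration commits.
    SUPPORTED_ROLES = {
        "fundamentals_analyst", "technical_analyst", "sentiment_analyst",
        "valuation_analyst", "risk_manager", "portfolio_manager",
    }

    def create(self, role: str, model: str = "default") -> EvoAgent:
        # Every supported role resolves to EvoAgent; there is deliberately
        # no branch that falls back to AnalystAgent/PMAgent/RiskAgent.
        if role not in self.SUPPORTED_ROLES:
            raise ValueError(f"unsupported role: {role}")
        return EvoAgent(role, model)

factory = UnifiedAgentFactory()
pm = factory.create("portfolio_manager")
```

The design choice the commit records ("Rejected: Keep legacy aliases") shows up here as the absence of any alias map: an unknown role fails loudly instead of silently routing to an old class.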
49d704c363 refactor(cleanup): remove legacy CLI and complete EvoAgent migration cleanup
- Delete backend/cli.py and all CLI-specific tests (test_cli.py,
  test_openclaw_cli_service.py, test_skills_cli.py)
- Remove evotraders console script from pyproject.toml
- Update README/CLAUDE.md to reference python backend/main.py instead of CLI
- Add pytest-asyncio to dev dependencies
- Enhance EvoAgent with reload_runtime_assets and backward-compat attrs
- Align tests with updated API semantics and gateway process models

Constraint: CLI is deprecated in favor of split-service runtime model
Confidence: high
Scope-risk: moderate
2026-04-02 02:06:46 +08:00
3334a41e5a refactor(cleanup): remove legacy runtime directories and fix API semantics
Task 1: Clean up root-level runtime directories
- Backup live/, backtest/, production/ to runs/_legacy/
- Remove legacy directories from repo root

Task 2: API route semantics cleanup
- Create /api/runs/{run_id}/agents/* routes for runtime agent operations
- Keep /api/workspaces/{id}/agents/* for design-time (deprecated)
- Update frontend runtimeApi.js to use new /runs/ prefix
- Update legacy-inventory.md with completion status

New files:
- backend/api/runs.py - Runtime agent routes with proper run_id semantics

Modified:
- backend/api/__init__.py - Export runs_router
- backend/apps/agent_service.py - Include runs_router, update scope docs
- frontend/src/services/runtimeApi.js - Use /runs/ instead of /workspaces/
- docs/legacy-inventory.md - Mark cleanup as completed

Constraint: Maintain backward compatibility with old /workspaces/ routes
Rejected: Remove old routes entirely | need backward compatibility during transition
Confidence: high
Scope-risk: moderate
Directive: Old /api/workspaces/ routes remain functional but deprecated
Not-tested: Full integration test with active runtime
2026-04-02 01:03:28 +08:00
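The route-semantics change above keeps both prefixes alive during the transition: `/api/runs/{run_id}/agents/*` is canonical, `/api/workspaces/{id}/agents/*` remains as a deprecated alias. A hedged sketch of that mapping as a plain path rewrite — the actual routing lives in the FastAPI routers, and this helper is purely illustrative:

```python
def to_runs_route(path: str) -> str:
    # Rewrite the deprecated design-time prefix to the runtime prefix
    # introduced by backend/api/runs.py (illustrative only).
    prefix = "/api/workspaces/"
    if path.startswith(prefix):
        rest = path[len(prefix):]
        ident, _, tail = rest.partition("/")
        if tail.startswith("agents"):
            return f"/api/runs/{ident}/{tail}"
    return path  # non-agent routes are left untouched

migrated = to_runs_route("/api/workspaces/run_42/agents/risk_manager")
untouched = to_runs_route("/api/health")
```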
16b54d5ccc feat(agent): complete EvoAgent integration for all 6 agent roles
Migrate all agent roles from Legacy to EvoAgent architecture:
- fundamentals_analyst, technical_analyst, sentiment_analyst, valuation_analyst
- risk_manager, portfolio_manager

Key changes:
- EvoAgent now supports Portfolio Manager compatibility methods (_make_decision,
  get_decisions, get_portfolio_state, load_portfolio_state, update_portfolio)
- Add UnifiedAgentFactory for centralized agent creation
- ToolGuard with batch approval API and WebSocket broadcast
- Legacy agents marked deprecated (AnalystAgent, RiskAgent, PMAgent)
- Remove backend/agents/compat.py migration shim
- Add run_id alongside workspace_id for semantic clarity
- Complete integration test coverage (13 tests)
- All smoke tests passing for 6 agent roles

Constraint: Must maintain backward compatibility with existing run configs
Constraint: Memory support must work with EvoAgent (no fallback to Legacy)
Rejected: Separate PM implementation for EvoAgent | unified approach cleaner
Confidence: high
Scope-risk: broad
Directive: EVO_AGENT_IDS env var still respected but defaults to all roles
Not-tested: Kubernetes sandbox mode for skill execution
2026-04-02 00:55:08 +08:00
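The directive above ("`EVO_AGENT_IDS` env var still respected but defaults to all roles") suggests parsing logic along these lines. The helper name and exact trimming behavior are assumptions for illustration; only the default-to-all-roles semantics comes from the commit message:

```python
ALL_ROLES = (
    "fundamentals_analyst", "technical_analyst", "sentiment_analyst",
    "valuation_analyst", "risk_manager", "portfolio_manager",
)

def resolve_evo_agent_ids(raw):
    # Unset or empty means "all roles", per the migration directive.
    if not raw or not raw.strip():
        return ALL_ROLES
    # A comma-separated value selects a staged-rollout subset.
    return tuple(part.strip() for part in raw.split(",") if part.strip())

subset = resolve_evo_agent_ids("fundamentals_analyst, risk_manager")
everything = resolve_evo_agent_ids("")
```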
177 changed files with 13448 additions and 12118 deletions


@@ -26,6 +26,10 @@ EXPLAIN_RANGE_USE_LLM=
 # Memory module
 MEMORY_API_KEY=
+# Experimental EvoAgent rollout for selected analysts only.
+# Example: EVO_AGENT_IDS=fundamentals_analyst,risk_manager,portfolio_manager
+EVO_AGENT_IDS=
 # ================== Agent-Specific Model Configuration | Agent特定模型配置 ==================
 AGENT_SENTIMENT_ANALYST_MODEL_NAME=deepseek-v3.2-exp
 AGENT_TECHNICAL_ANALYST_MODEL_NAME=glm-4.6
@@ -35,6 +39,20 @@ AGENT_RISK_MANAGER_MODEL_NAME=qwen3-max-preview
 AGENT_PORTFOLIO_MANAGER_MODEL_NAME=qwen3-max-preview
 # ================== Advanced Configuration | 高阶配置 ==================
+# Skill Sandbox Mode | 技能沙盒执行模式
+# none = direct execution (default, development only) | 直接执行(默认,仅开发环境)
+# docker = Docker container isolation | Docker 容器隔离
+# kubernetes = Kubernetes Pod isolation (reserved) | Kubernetes Pod 隔离(预留)
+SKILL_SANDBOX_MODE=none
+# Docker Sandbox Settings (only used when SKILL_SANDBOX_MODE=docker) | Docker 沙盒配置
+SKILL_SANDBOX_IMAGE=python:3.11-slim
+SKILL_SANDBOX_MEMORY_LIMIT=512m
+SKILL_SANDBOX_CPU_LIMIT=1.0
+SKILL_SANDBOX_NETWORK=none
+SKILL_SANDBOX_TIMEOUT=60
 MAX_COMM_CYCLES=2
 MARGIN_REQUIREMENT=0.5
 DATA_START_DATE=2022-01-01
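Taken together, the sandbox variables introduced above suggest a docker-isolated production profile along these lines. Values are examples only, roughly corresponding to `docker run --memory=512m --cpus=1.0 --network=none` limits on the skill container:

```shell
# Example production-leaning sandbox profile (illustrative values)
SKILL_SANDBOX_MODE=docker
SKILL_SANDBOX_IMAGE=python:3.11-slim
SKILL_SANDBOX_MEMORY_LIMIT=512m   # hard memory cap per skill container
SKILL_SANDBOX_CPU_LIMIT=1.0       # CPU quota in cores
SKILL_SANDBOX_NETWORK=none        # no network egress from skill scripts
SKILL_SANDBOX_TIMEOUT=60          # seconds before execution is aborted
```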

.gitignore

@@ -51,11 +51,19 @@ node_modules
 outputs/
 /production/
 /smoke_test/
+/frontend/dist/
+/frontend/test-results/
 # Local tooling state
 .omc/
+/.codex/
+/.codex
 /.pydeps/
 /referance/
+/.pids/
+/.pytest_cache/
+/.ruff_cache/
+/evotraders.egg-info/
 # Run outputs
 /runs/

Seven one-line files deleted (each hunk `@@ -1 +0,0 @@`); the removed lines were 73343, 73348, 66939, 73345, 73347, 73346, 73344.


@@ -16,12 +16,14 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
 # 安装依赖
 uv pip install -e .
-# 运行命令
-evotraders backtest --start 2025-11-01 --end 2025-12-01                  # 回测模式
-evotraders backtest --start 2025-11-01 --end 2025-12-01 --enable-memory  # 带记忆回测
-evotraders live           # 实盘交易
-evotraders live -t 22:30  # 定时每日交易
-evotraders frontend       # 启动可视化界面
+# 运行回测 / 实盘
+python backend/main.py --mode backtest --config-name smoke_fullstack --start-date 2025-11-01 --end-date 2025-12-01
+python backend/main.py --mode backtest --config-name smoke_fullstack --start-date 2025-11-01 --end-date 2025-12-01 --enable-memory
+python backend/main.py --mode live --config-name live
+python backend/main.py --mode live --config-name live --trigger-time 22:30
+
+# 启动前端
+cd frontend && npm run dev
 # 开发服务器
 ./start-dev.sh        # 启动全部 4 个微服务 (agent, runtime, trading, news)
@@ -113,7 +115,8 @@ npm run test          # Vitest 单元测试
 | 文件 | 职责 |
 |------|------|
-| `pipeline.py` | TradingPipeline - 核心编排器(分析→沟通→决策→执行→评估) |
+| `pipeline.py` | TradingPipeline - 核心编排器(分析→沟通→决策→执行→评估),支持断点 Checkpoint |
+| `apo.py` | PolicyOptimizer - (APO) 自动策略优化器,根据 P&L 自动修改 Agent POLICY.md |
 | `pipeline_runner.py` | REST API 触发的独立执行(5 阶段启动) |
 | `scheduler.py` | BacktestScheduler、Scheduler - 回测/实盘调度 |
 | `state_sync.py` | StateSync - 状态同步和广播 |
@@ -166,7 +169,8 @@ backend/
 │   └── models.py          # ProcessRun、ProcessRunState
 ├── core/                  # Pipeline 执行
-│   ├── pipeline.py        # TradingPipeline:核心编排器
+│   ├── pipeline.py        # TradingPipeline:核心编排器,支持恢复
+│   ├── apo.py             # PolicyOptimizer:自动调优
 │   ├── pipeline_runner.py # 独立 Pipeline 执行
 │   ├── scheduler.py       # 调度器
 │   └── state_sync.py      # 状态同步

README.md

@@ -12,7 +12,7 @@
 大时代 is an open-source financial trading agent framework that combines multi-agent collaboration, run-scoped workspaces, and memory to support both backtests and live trading workflows.
-The repository name and CLI entrypoints still use `evotraders` for compatibility, but the product-facing branding now follows the 大时代 naming used by the reference branch.
+The repository name still uses `evotraders`, but the product-facing branding now follows the 大时代 naming used by the reference branch.
 ---
@@ -21,8 +21,11 @@ The repository name and CLI entrypoints still use `evotraders` for compatibility
 **Multi-agent trading team**
 Six roles collaborate like a real desk: four specialist analysts (fundamentals, technical, sentiment, valuation), one portfolio manager, and one risk manager.
-**Continuous learning**
-Agents can persist long-term memory with ReMe, reflect after each cycle, and evolve their decision patterns over time.
+**Continuous learning & Evolution**
+Agents persist long-term memory with ReMe and reflect after each cycle. The **Autonomous Policy Optimizer (APO)** automatically tunes agent operational policies (`POLICY.md`) based on daily P&L feedback to correct recurring mistakes.
+**Robust execution with recovery**
+The trading pipeline supports **phase-based checkpointing**. If a run is interrupted, it can resume from the last successful phase (Analysis, Risk, Discussion, Decision, Execution, or Settlement), ensuring resilience in production.
 **Backtest and live modes**
 The same runtime model supports historical simulation and live execution with real-time market data.
@@ -39,22 +42,41 @@ The frontend exposes the trading room, runtime controls, logs, approvals, agent
 ## Current Architecture
-The repository is currently in a transition from a modular monolith to split service surfaces. The split-service path is the default local development mode.
+The repository uses a **split-service runtime model** for local development and is the default supported path.
-Current app surfaces:
-- `backend.apps.agent_service` on `:8000`: control plane for workspaces, agents, skills, and guard/approval APIs
-- `backend.apps.trading_service` on `:8001`: read-only trading data APIs
-- `backend.apps.news_service` on `:8002`: read-only explain/news APIs
-- `backend.apps.runtime_service` on `:8003`: runtime lifecycle APIs
-- `backend.apps.openclaw_service` on `:8004`: read-only OpenClaw facade
-- WebSocket gateway on `:8765`: live event/feed channel for the frontend
-The most important runtime path today is:
-`frontend -> runtime_service/control APIs -> gateway/runtime manager -> market service + pipeline + storage`
-Reference notes for the migration live in [services/README.md](./services/README.md).
+### Runtime vs Design-Time
+- **runtime** — the active execution layer (scheduler, gateway, pipeline, approvals during a live run)
+- **run** — one concrete execution instance (`runs/<run_id>/`)
+- **design-time** — configuration and control-plane concepts before a specific runtime is launched
+- **workspace** — the design-time registry exposed by `agent_service` (`workspaces/`)
+### Service Surfaces
+| Service | Port | Responsibility |
+|---------|------|----------------|
+| `backend.apps.agent_service` | `:8000` | Control plane for workspaces, agents, skills, and guard/approval APIs |
+| `backend.apps.trading_service` | `:8001` | Read-only trading data APIs |
+| `backend.apps.news_service` | `:8002` | Read-only explain/news APIs |
+| `backend.apps.runtime_service` | `:8003` | Runtime lifecycle APIs |
+| WebSocket gateway | `:8765` | Live event/feed channel for the frontend |
+### Active Runtime Path
+```
+frontend -> runtime_service/control APIs -> gateway/runtime manager -> market service + pipeline + storage
+```
+Runtime state is stored in `runs/<run_id>/` — this is the **runtime source of truth**. The `workspaces/` directory is the **design-time registry**, not the runtime execution path.
+### Documentation
+- [docs/README.md](./docs/README.md) — documentation index and reading order
+- [docs/current-architecture.md](./docs/current-architecture.md) — canonical architecture facts
+- [services/README.md](./services/README.md) — service boundaries and migration details
+- [docs/current-architecture.excalidraw](./docs/current-architecture.excalidraw) — visual diagram
+- [docs/development-roadmap.md](./docs/development-roadmap.md) — next-step execution plan
+- [docs/terminology.md](./docs/terminology.md) — consistent terminology guide
 ---
@@ -66,15 +88,11 @@ Reference notes for the migration live in [services/README.md](./services/README
 # clone this repository, then:
 cd evotraders
-# backend runtime dependencies
-uv pip install -r requirements.txt
 # install package entrypoint in editable mode
 uv pip install -e .
-# optional
+# optional dev dependencies
 # uv pip install -e ".[dev]"
-# pip install -e .
 ```
 Frontend dependencies:
@@ -85,7 +103,7 @@ npm ci
 cd ..
 ```
-Production deployment should prefer `requirements.txt` for backend and `npm ci` for frontend so the pulled environment matches the checked-in lockfiles and version pins.
+Production deployment should prefer the checked-in Python package metadata in `pyproject.toml` for backend installation and `npm ci` for frontend so the pulled environment matches the checked-in dependency declarations and lockfiles.
 ### 2. Configure environment
@@ -114,6 +132,9 @@ MODEL_NAME=qwen3-max-preview
 # memory (optional unless --enable-memory is used)
 MEMORY_API_KEY=
+# experimental: switch selected analyst / risk roles to EvoAgent
+EVO_AGENT_IDS=
 ```
 Notes:
@@ -121,6 +142,52 @@ Notes:
 - `FINNHUB_API_KEY` is required for live mode.
 - `POLYGON_API_KEY` enables long-lived market-store ingestion and refresh helpers.
 - `MEMORY_API_KEY` is only required when long-term memory is enabled.
+- `EVO_AGENT_IDS` currently supports analyst roles plus `risk_manager` and `portfolio_manager`, and is intended for staged rollout.
+### Skill Sandbox Security | 技能沙盒安全
+Skill scripts can be executed in multiple sandbox modes controlled by `SKILL_SANDBOX_MODE`:
+| Mode | Description | Use Case |
+|------|-------------|----------|
+| `none` | Direct execution, no isolation | Development only (default) |
+| `docker` | Docker container isolation | Production with Docker |
+| `kubernetes` | Kubernetes Pod isolation | Enterprise (reserved) |
+Default configuration (development):
+```bash
+SKILL_SANDBOX_MODE=none
+```
+For production with Docker isolation:
+```bash
+SKILL_SANDBOX_MODE=docker
+SKILL_SANDBOX_MEMORY_LIMIT=512m
+SKILL_SANDBOX_CPU_LIMIT=1.0
+SKILL_SANDBOX_NETWORK=none
+```
+When running in `none` mode, a runtime security warning is displayed on first skill execution as a reminder that scripts execute directly without isolation.
+Smoke test for a specific staged EvoAgent rollout target:
+```bash
+python3 scripts/smoke_evo_runtime.py --agent-id fundamentals_analyst
+```
+This script starts a temporary runtime, verifies the gateway log contains the
+selected `EvoAgent`, checks `runtime_state.json`, validates the approval wake-up
+path, and then stops the runtime.
+You can also include it in the local release check:
+```bash
+./scripts/check-prod-env.sh --smoke-evo
+```
+Without `EVO_AGENT_IDS`, this release check now runs
+`fundamentals_analyst`, `risk_manager`, and `portfolio_manager`
+smoke paths by default.
 For a production-style local start flow, you can also use:
@@ -128,6 +195,9 @@ For a production-style local start flow, you can also use:
 ./start.sh
 ```
+The checked-in `production` label in the deploy scripts is only an example run
+label. It should not be treated as a canonical root-level runtime directory.
 ### 3. Start the stack
 Recommended local development flow:
@@ -136,18 +206,18 @@ Recommended local development flow:
 ./start-dev.sh
 ```
-This starts:
+This starts directly from the script:
 - `agent_service` at `http://localhost:8000`
 - `trading_service` at `http://localhost:8001`
 - `news_service` at `http://localhost:8002`
 - `runtime_service` at `http://localhost:8003`
-- gateway WebSocket at `ws://localhost:8765`
+- gateway WebSocket at `ws://localhost:8765` via `runtime_service` managed startup
 Then start the frontend in another terminal:
 ```bash
-evotraders frontend
+cd frontend && npm run dev
 ```
 Open `http://localhost:5173`.
@@ -159,37 +229,35 @@ python -m uvicorn backend.apps.agent_service:app --host 0.0.0.0 --port 8000 --re
 python -m uvicorn backend.apps.trading_service:app --host 0.0.0.0 --port 8001 --reload
 python -m uvicorn backend.apps.news_service:app --host 0.0.0.0 --port 8002 --reload
 python -m uvicorn backend.apps.runtime_service:app --host 0.0.0.0 --port 8003 --reload
-python -m backend.main --mode live --host 0.0.0.0 --port 8765
-```
-### 4. Run backtest or live mode from CLI
+
+# then create a runtime so runtime_service can spawn the Gateway subprocess
+curl -X POST http://localhost:8003/api/runtime/start \
+  -H "Content-Type: application/json" \
+  -d '{"launch_mode":"fresh","tickers":["AAPL","MSFT"],"mode":"live"}'
+```
+### 4. Run backtest or live mode
 Backtest:
 ```bash
-evotraders backtest --start 2025-11-01 --end 2025-12-01
-evotraders backtest --start 2025-11-01 --end 2025-12-01 --enable-memory
-evotraders backtest --config-name smoke_fullstack --start 2025-11-01 --end 2025-12-01
+curl -X POST http://localhost:8003/api/runtime/start \
+  -H "Content-Type: application/json" \
+  -d '{"launch_mode":"fresh","mode":"backtest","tickers":["AAPL","MSFT"],"start_date":"2025-11-01","end_date":"2025-12-01"}'
 ```
 Live:
 ```bash
-evotraders live
-evotraders live --enable-memory
-evotraders live --schedule-mode intraday --interval-minutes 60
-evotraders live --trigger-time 22:30
+curl -X POST http://localhost:8003/api/runtime/start \
+  -H "Content-Type: application/json" \
+  -d '{"launch_mode":"fresh","mode":"live","tickers":["AAPL","MSFT"]}'
 ```
 Help:
 ```bash
-evotraders --help
-evotraders backtest --help
-evotraders live --help
-evotraders frontend --help
+python backend/main.py --help   # compatibility standalone entrypoint only
 ```
 ### Offline backtest data
 If you want a quick backtest demo without external market APIs, download the offline bundle and unzip it into `backend/data`:
@@ -208,6 +276,11 @@ unzip ret_data.zip -d backend/data
 - `runs/<run_id>/BOOTSTRAP.md` stores run-specific bootstrap values and prompt body
 - `runs/<run_id>/state/runtime_state.json` stores runtime snapshot state
 - `runs/<run_id>/team_dashboard/*.json` is a compatibility/export layer for dashboard consumers, not the primary runtime source of truth
+- `ENABLE_DASHBOARD_COMPAT_EXPORTS=false` can disable those compatibility JSON exports in controlled environments while keeping runtime state persistence intact
+Legacy root-level directories such as `live/`, `production/`, and `backtest/`
+should be treated as historical compatibility artifacts, not the default runtime
+location for new work.
 Optional retention control:
@@ -241,7 +314,7 @@ If these are not set, the frontend falls back to its local defaults and compatib
 ```text
 Market data -> independent analyst work -> team communication -> portfolio decision ->
-risk review -> execution/settlement -> reflection/memory update
+risk review -> execution/settlement -> reflection/memory update -> APO policy tuning
 ```
 The runtime manager also tracks:
@@ -304,11 +377,7 @@ trigger_time: "09:30"
 enable_memory: false
 ```
-Initialize a run workspace with:
-```bash
-evotraders init-workspace --config-name my_run
-```
+Run-scoped workspaces are created automatically at runtime. No manual initialization is required.
 ---
@@ -322,8 +391,7 @@ evotraders/
 │   ├── apps/      # split service surfaces
 │   ├── core/      # pipeline, scheduler, state sync
 │   ├── runtime/   # runtime manager and agent runtime state
 │   ├── services/  # gateway, market/storage/db services
-│   └── cli.py     # Typer CLI entrypoint
 ├── frontend/      # React + Vite UI
 ├── shared/        # shared clients and schemas for split services
 ├── runs/          # run-scoped state and dashboards

@@ -37,22 +37,42 @@
## 当前架构 ## 当前架构
仓库目前处于“模块化单体 -> 拆分服务”的迁移阶段,本地开发默认走 split-service 路径。 仓库目前使用 **split-service 运行时模型** 进行本地开发,这是默认支持的运行路径。
当前 app surface ### 运行时 vs 设计时
- `backend.apps.agent_service`,端口 `8000`:控制面,负责 workspaces、agents、skills、审批接口 - **runtime** — 活跃的执行层scheduler、gateway、pipeline、实盘运行期间的审批
- `backend.apps.trading_service`,端口 `8001`:只读交易数据接口 - **run** — 一次具体的执行实例(`runs/<run_id>/`
- `backend.apps.news_service`,端口 `8002`:只读 explain/news 接口 - **design-time** — 启动特定 runtime 之前的配置和控制面概念
- `backend.apps.runtime_service`,端口 `8003`:运行时生命周期接口 - **workspace** — `agent_service` 暴露的设计时注册表(`workspaces/`
- `backend.apps.openclaw_service`,端口 `8004`:只读 OpenClaw facade
- WebSocket gateway端口 `8765`:前端实时事件和 feed 通道
当前最关键的主链路是: ### 服务表面
`frontend -> runtime_service/control APIs -> gateway/runtime manager -> market service + pipeline + storage` | 服务 | 端口 | 职责 |
|------|------|------|
| `backend.apps.agent_service` | `:8000` | workspaces、agents、skills 和 guard/approval API 的控制面 |
| `backend.apps.trading_service` | `:8001` | 只读交易数据 API |
| `backend.apps.news_service` | `:8002` | 只读 explain/news API |
| `backend.apps.runtime_service` | `:8003` | 运行时生命周期 API |
| WebSocket gateway | `:8765` | 前端实时事件/feed 通道 |
迁移背景可参考 [services/README.md](./services/README.md)。 ### 活跃运行时路径
```
frontend -> runtime_service/control APIs -> gateway/runtime manager -> market service + pipeline + storage
```
运行时状态存储在 `runs/<run_id>/` — 这是 **运行时唯一真相源**`workspaces/` 目录是 **设计时注册表**,不是运行时执行路径。
### 文档
- [docs/README.md](./docs/README.md) — 文档索引与阅读顺序
- [docs/current-architecture.md](./docs/current-architecture.md) — 权威架构事实
- [docs/project-layout.md](./docs/project-layout.md) — 当前目录结构与职责说明
- [services/README.md](./services/README.md) — 服务边界和迁移详情
- [docs/current-architecture.excalidraw](./docs/current-architecture.excalidraw) — 架构图
- [docs/development-roadmap.md](./docs/development-roadmap.md) — 下一步执行计划
- [docs/terminology.md](./docs/terminology.md) — 术语规范指南
--- ---
@@ -64,15 +84,11 @@
# 克隆仓库后进入项目目录 # 克隆仓库后进入项目目录
cd evotraders cd evotraders
# 安装后端运行时依赖
uv pip install -r requirements.txt
# 安装项目入口(可编辑模式) # 安装项目入口(可编辑模式)
uv pip install -e . uv pip install -e .
# 可选 # 可选开发依赖
# uv pip install -e ".[dev]" # uv pip install -e ".[dev]"
# pip install -e .
``` ```
前端依赖: 前端依赖:
@@ -83,7 +99,7 @@ npm ci
cd .. cd ..
``` ```
生产环境部署建议后端使用 `requirements.txt`,前端使用 `npm ci`,这样拉起的环境会严格跟随仓库中锁定的依赖版本。 生产环境部署建议后端 `pyproject.toml` 中声明的包元数据为准进行安装,前端使用 `npm ci`,这样拉起的环境会严格跟随仓库中声明的依赖和锁定版本。
### 2. 配置环境变量 ### 2. 配置环境变量
@@ -112,6 +128,9 @@ MODEL_NAME=qwen3-max-preview
# 长期记忆(只有启用 --enable-memory 才需要) # 长期记忆(只有启用 --enable-memory 才需要)
MEMORY_API_KEY= MEMORY_API_KEY=
# 实验性:将选定的 analyst / risk 角色切换到 EvoAgent
EVO_AGENT_IDS=
``` ```
说明: 说明:
@@ -119,6 +138,23 @@ MEMORY_API_KEY=
- live 模式必须配置 `FINNHUB_API_KEY` - live 模式必须配置 `FINNHUB_API_KEY`
- `POLYGON_API_KEY` 用于长期 market store 的补数和刷新 - `POLYGON_API_KEY` 用于长期 market store 的补数和刷新
- `MEMORY_API_KEY` 仅在启用长期记忆时需要 - `MEMORY_API_KEY` 仅在启用长期记忆时需要
- `EVO_AGENT_IDS` 目前支持 analyst 角色以及 `risk_manager``portfolio_manager`,用于分阶段灰度发布
特定 EvoAgent 灰度目标的冒烟测试:
```bash
python3 scripts/smoke_evo_runtime.py --agent-id fundamentals_analyst
```
该脚本启动临时运行时,验证 gateway 日志包含选定的 `EvoAgent`,检查 `runtime_state.json`,验证审批唤醒路径,然后停止运行时。
你也可以将其包含在本地发布检查中:
```bash
./scripts/check-prod-env.sh --smoke-evo
```
未设置 `EVO_AGENT_IDS` 时,此发布检查默认运行 `fundamentals_analyst``risk_manager``portfolio_manager` 的冒烟路径。
如果要用更接近生产的本地启动方式,也可以直接执行: 如果要用更接近生产的本地启动方式,也可以直接执行:
@@ -140,12 +176,12 @@ MEMORY_API_KEY=
- `trading_service``http://localhost:8001` - `trading_service``http://localhost:8001`
- `news_service``http://localhost:8002` - `news_service``http://localhost:8002`
- `runtime_service``http://localhost:8003` - `runtime_service``http://localhost:8003`
- gateway WebSocket`ws://localhost:8765` - gateway WebSocket`ws://localhost:8765`,由 `runtime_service` 托管拉起
然后在另一个终端启动前端: 然后在另一个终端启动前端:
```bash ```bash
evotraders frontend cd frontend && npm run dev
``` ```
访问 `http://localhost:5173` 访问 `http://localhost:5173`
@@ -157,35 +193,32 @@ python -m uvicorn backend.apps.agent_service:app --host 0.0.0.0 --port 8000 --re
python -m uvicorn backend.apps.trading_service:app --host 0.0.0.0 --port 8001 --reload python -m uvicorn backend.apps.trading_service:app --host 0.0.0.0 --port 8001 --reload
python -m uvicorn backend.apps.news_service:app --host 0.0.0.0 --port 8002 --reload python -m uvicorn backend.apps.news_service:app --host 0.0.0.0 --port 8002 --reload
python -m uvicorn backend.apps.runtime_service:app --host 0.0.0.0 --port 8003 --reload python -m uvicorn backend.apps.runtime_service:app --host 0.0.0.0 --port 8003 --reload
python -m backend.main --mode live --host 0.0.0.0 --port 8765
# 然后通过 runtime_service 创建运行时,由它拉起 Gateway 子进程
curl -X POST http://localhost:8003/api/runtime/start \
-H "Content-Type: application/json" \
-d '{"launch_mode":"fresh","tickers":["AAPL","MSFT"],"mode":"live"}'
``` ```
-### 4. Run backtests or live trading with the CLI
+The `production` label used by the deployment scripts in this repo is just an example run label; it should no longer be read as the system-mandated root-level run directory name.
+
+### 4. Run backtests or live trading

Backtest:

```bash
-evotraders backtest --start 2025-11-01 --end 2025-12-01
-evotraders backtest --start 2025-11-01 --end 2025-12-01 --enable-memory
-evotraders backtest --config-name smoke_fullstack --start 2025-11-01 --end 2025-12-01
+curl -X POST http://localhost:8003/api/runtime/start \
+  -H "Content-Type: application/json" \
+  -d '{"launch_mode":"fresh","mode":"backtest","tickers":["AAPL","MSFT"],"start_date":"2025-11-01","end_date":"2025-12-01"}'
```

Live:

```bash
-evotraders live
-evotraders live --enable-memory
-evotraders live --schedule-mode intraday --interval-minutes 60
-evotraders live --trigger-time 22:30
+curl -X POST http://localhost:8003/api/runtime/start \
+  -H "Content-Type: application/json" \
+  -d '{"launch_mode":"fresh","mode":"live","tickers":["AAPL","MSFT"]}'
```

-Help:

```bash
-evotraders --help
-evotraders backtest --help
-evotraders live --help
-evotraders frontend --help
```
### Offline backtest data

@@ -205,7 +238,10 @@ unzip ret_data.zip -d backend/data
- Each run's state is written to `runs/<run_id>/`
- `runs/<run_id>/BOOTSTRAP.md` stores that run's bootstrap values and prompt body
- `runs/<run_id>/state/runtime_state.json` stores the runtime snapshot
-- `runs/<run_id>/team_dashboard/*.json` is mainly a compatibility export layer for the dashboard, not the single source of truth
+- `runs/<run_id>/team_dashboard/*.json` is mainly a compatibility export layer for the dashboard, not the runtime's single source of truth
+- In controlled environments, set `ENABLE_DASHBOARD_COMPAT_EXPORTS=false` to disable this compatibility JSON export without affecting runtime state persistence

Legacy root-level directories such as `live/`, `production/`, and `backtest/` should be treated as historical compatibility artifacts, not the default runtime location for new work.
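The flag above can be honored with a small helper. This is only a sketch under the assumption that any value other than "false"/"0" counts as enabled; the exact parsing inside the runtime may differ:

```python
import os

def compat_exports_enabled() -> bool:
    # Hypothetical helper: the real parsing in the codebase may differ.
    # Treat anything except "false" / "0" (case-insensitive) as enabled.
    value = os.environ.get("ENABLE_DASHBOARD_COMPAT_EXPORTS", "true")
    return value.strip().lower() not in ("false", "0")

os.environ["ENABLE_DASHBOARD_COMPAT_EXPORTS"] = "false"
print(compat_exports_enabled())  # False
```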
Optional retention policy:
@@ -231,7 +267,7 @@ VITE_TRADING_SERVICE_URL=http://localhost:8001
VITE_WS_URL=ws://localhost:8765
```

-If these are not configured, the frontend runs on local defaults and compatibility fallback logic
+If these variables are unset, the frontend falls back to local defaults and compatibility paths
---
@@ -302,11 +338,7 @@ trigger_time: "09:30"
enable_memory: false
```

-Initialize a run workspace:
-
-```bash
-evotraders init-workspace --config-name my_run
-```
+Run-scoped workspaces are created automatically the first time a pipeline or service runs; no manual initialization is needed.
---
@@ -321,10 +353,9 @@ evotraders/
│   ├── core/        # pipeline, scheduler, state sync
│   ├── runtime/     # runtime manager and agent runtime state
│   ├── services/    # gateway, market/storage/db services
-│   └── cli.py       # Typer CLI entry point
├── frontend/        # React + Vite frontend
├── shared/          # clients and schemas shared by the split services
-├── runs/            # run state and dashboard exports
+├── runs/            # run-scoped state and dashboards
├── data/            # long-term research data
└── services/README.md
```


@@ -1,56 +1,46 @@
# -*- coding: utf-8 -*-
"""
-Agents package - EvoAgent architecture for trading system.
+Agents package for the EvoAgent-based runtime.

Exports:
-- EvoAgent: Next-generation agent with workspace support
+- EvoAgent: Core agent with workspace support
- ToolGuardMixin: Tool call approval/denial flow
- CommandHandler: System command handling
-- AgentFactory: Dynamic agent creation and management
+- AgentFactory: Design-time agent creation under `workspaces/`
-- WorkspaceManager: Legacy name for the persistent workspace registry
+- WorkspaceManager: Alias for the persistent `workspaces/` registry
-- WorkspaceRegistry: Explicit run-time-agnostic workspace registry
+- WorkspaceRegistry: Explicit design-time `workspaces/` registry
- RunWorkspaceManager: Run-scoped workspace asset manager
- AgentRegistry: Central agent registry
-- Legacy compatibility: AnalystAgent, PMAgent, RiskAgent
+- UnifiedAgentFactory: Runtime agent factory for creating EvoAgent instances
"""

-# New EvoAgent architecture (from agent_core.py)
+# EvoAgent architecture
from .agent_core import EvoAgent, ToolGuardMixin, CommandHandler
from .factory import AgentFactory, ModelConfig
+from .unified_factory import UnifiedAgentFactory, get_agent_factory, clear_factory_cache
from .workspace import WorkspaceManager, WorkspaceRegistry, WorkspaceConfig
from .workspace_manager import RunWorkspaceManager
from .registry import AgentRegistry, AgentInfo, get_registry, reset_registry

-# Legacy agents (backward compatibility)
-from .analyst import AnalystAgent
-from .portfolio_manager import PMAgent
-from .risk_manager import RiskAgent
-
-# Compatibility layer
-from .compat import LegacyAgentAdapter, adapt_agent, adapt_agents, is_legacy_agent
-
__all__ = [
-    # New architecture
+    # Core EvoAgent
    "EvoAgent",
    "ToolGuardMixin",
    "CommandHandler",
+    # Factories
    "AgentFactory",
    "ModelConfig",
+    "UnifiedAgentFactory",
+    "get_agent_factory",
+    "clear_factory_cache",
+    # Workspace
    "WorkspaceManager",
    "WorkspaceRegistry",
    "WorkspaceConfig",
    "RunWorkspaceManager",
+    # Registry
    "AgentRegistry",
    "AgentInfo",
    "get_registry",
    "reset_registry",
-    # Legacy compatibility
-    "AnalystAgent",
-    "PMAgent",
-    "RiskAgent",
-    # Compatibility layer
-    "LegacyAgentAdapter",
-    "adapt_agent",
-    "adapt_agents",
-    "is_legacy_agent",
]
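The `get_agent_factory` / `clear_factory_cache` pair suggests a module-level cached accessor. A minimal sketch of that pattern, with a stub standing in for the real `UnifiedAgentFactory` (which presumably takes configuration this example omits):

```python
# Sketch of a cached-accessor pattern; names follow the diff, implementation assumed.
_factory = None

class UnifiedAgentFactory:
    """Stub for illustration only."""

def get_agent_factory():
    global _factory
    if _factory is None:
        _factory = UnifiedAgentFactory()  # created once, then reused
    return _factory

def clear_factory_cache():
    global _factory
    _factory = None

a = get_agent_factory()
assert a is get_agent_factory()  # same cached instance on repeat calls
```

Clearing the cache forces the next call to build a fresh factory, which is useful between test runs.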


@@ -1,139 +0,0 @@
# -*- coding: utf-8 -*-
"""
Analyst Agent - Based on AgentScope ReActAgent
Performs analysis using tools and LLM
"""
from typing import Any, Dict, Optional
from agentscope.agent import ReActAgent
from agentscope.memory import InMemoryMemory, LongTermMemoryBase
from agentscope.message import Msg
from ..config.constants import ANALYST_TYPES
from ..utils.progress import progress
from .prompt_factory import build_agent_system_prompt, clear_prompt_factory_cache
class AnalystAgent(ReActAgent):
"""
Analyst Agent - Uses LLM for tool selection and analysis
Inherits from AgentScope's ReActAgent
"""
def __init__(
self,
analyst_type: str,
toolkit: Any,
model: Any,
formatter: Any,
agent_id: Optional[str] = None,
config: Optional[Dict[str, Any]] = None,
long_term_memory: Optional[LongTermMemoryBase] = None,
):
"""
Initialize Analyst Agent
Args:
analyst_type: Type of analyst (e.g., "fundamentals", etc.)
toolkit: AgentScope Toolkit instance
model: LLM model instance
formatter: Message formatter instance
agent_id: Agent ID (defaults to "{analyst_type}_analyst")
config: Configuration dictionary
long_term_memory: Optional ReMeTaskLongTermMemory instance
"""
if analyst_type not in ANALYST_TYPES:
raise ValueError(
f"Unknown analyst type: {analyst_type}. "
f"Must be one of: {list(ANALYST_TYPES.keys())}",
)
object.__setattr__(self, "analyst_type_key", analyst_type)
object.__setattr__(
self,
"analyst_persona",
ANALYST_TYPES[analyst_type]["display_name"],
)
if agent_id is None:
agent_id = analyst_type
object.__setattr__(self, "agent_id", agent_id)
object.__setattr__(self, "config", config or {})
object.__setattr__(self, "toolkit", toolkit)
sys_prompt = self._load_system_prompt()
kwargs = {
"name": agent_id,
"sys_prompt": sys_prompt,
"model": model,
"formatter": formatter,
"toolkit": toolkit,
"memory": InMemoryMemory(),
"max_iters": 10,
}
if long_term_memory:
kwargs["long_term_memory"] = long_term_memory
kwargs["long_term_memory_mode"] = "static_control"
super().__init__(**kwargs)
def _load_system_prompt(self) -> str:
"""Load system prompt for analyst"""
return build_agent_system_prompt(
agent_id=self.agent_id,
config_name=self.config.get("config_name", "default"),
toolkit=self.toolkit,
)
async def reply(self, x: Msg = None) -> Msg:
"""
Override reply method to add progress tracking
Args:
x: Input message (content must be str)
Returns:
Response message (content is str)
"""
ticker = None
if x and hasattr(x, "metadata") and x.metadata:
ticker = x.metadata.get("tickers")
if ticker:
progress.update_status(
self.name,
ticker,
f"Starting {self.analyst_persona} analysis",
)
result = await super().reply(x)
if ticker:
progress.update_status(
self.name,
ticker,
"Analysis completed",
)
return result
def reload_runtime_assets(self, active_skill_dirs: Optional[list] = None) -> None:
"""Reload toolkit and system prompt from current run assets."""
from .toolkit_factory import create_agent_toolkit
clear_prompt_factory_cache()
self.toolkit = create_agent_toolkit(
self.agent_id,
self.config.get("config_name", "default"),
active_skill_dirs=active_skill_dirs,
)
self._apply_runtime_sys_prompt(self._load_system_prompt())
def _apply_runtime_sys_prompt(self, sys_prompt: str) -> None:
"""Update the prompt used by future turns and the cached system msg."""
self._sys_prompt = sys_prompt
for msg, _marks in self.memory.content:
if getattr(msg, "role", None) == "system":
msg.content = sys_prompt
break


@@ -8,7 +8,7 @@ import logging
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from pathlib import Path
-from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Protocol
+from typing import TYPE_CHECKING, Any, Dict, List, Optional

if TYPE_CHECKING:
    from .agent import EvoAgent


@@ -8,11 +8,11 @@ from __future__ import annotations
import json
import logging
-from dataclasses import dataclass, field, asdict
+from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from pathlib import Path
-from typing import Any, Dict, List, Optional, Set
+from typing import Any, Dict, List, Optional

logger = logging.getLogger(__name__)


@@ -31,7 +31,6 @@ from .hooks import (
    HOOK_PRE_REASONING,
)
from ..prompts.builder import (
-    PromptBuilder,
    build_system_prompt_from_workspace,
)
from ..agent_workspace import load_agent_workspace_config
@@ -90,6 +89,8 @@ class EvoAgent(ToolGuardMixin, ReActAgent):
        sys_prompt: Optional[str] = None,
        max_iters: int = 10,
        memory: Optional[Any] = None,
+        long_term_memory: Optional[Any] = None,
+        long_term_memory_mode: str = "static_control",
        enable_tool_guard: bool = True,
        enable_bootstrap_hook: bool = True,
        enable_memory_compaction: bool = False,
@@ -97,6 +98,9 @@ class EvoAgent(ToolGuardMixin, ReActAgent):
        memory_compact_threshold: Optional[int] = None,
        env_context: Optional[str] = None,
        prompt_files: Optional[List[str]] = None,
+        # Portfolio manager specific parameters
+        initial_cash: Optional[float] = None,
+        margin_requirement: Optional[float] = None,
    ):
        """Initialize EvoAgent.
@@ -121,6 +125,8 @@ class EvoAgent(ToolGuardMixin, ReActAgent):
        self.agent_id = agent_id
        self.config_name = config_name
        self.workspace_dir = Path(workspace_dir)
+        self.workspace_id = config_name
+        self.config = {"config_name": config_name}
        self._skills_manager = skills_manager or SkillsManager()
        self._env_context = env_context
        self._prompt_files = prompt_files
@@ -144,16 +150,24 @@ class EvoAgent(ToolGuardMixin, ReActAgent):
        # Initialize hook manager
        self._hook_manager = HookManager()

+        # Build kwargs for parent ReActAgent
+        kwargs = {
+            "name": agent_id,
+            "model": model,
+            "sys_prompt": self._sys_prompt,
+            "toolkit": toolkit,
+            "memory": memory or InMemoryMemory(),
+            "formatter": formatter,
+            "max_iters": max_iters,
+        }
+        # Add long-term memory if provided
+        if long_term_memory:
+            kwargs["long_term_memory"] = long_term_memory
+            kwargs["long_term_memory_mode"] = long_term_memory_mode
+
        # Initialize parent ReActAgent
-        super().__init__(
-            name=agent_id,
-            model=model,
-            sys_prompt=self._sys_prompt,
-            toolkit=toolkit,
-            memory=memory or InMemoryMemory(),
-            formatter=formatter,
-            max_iters=max_iters,
-        )
+        super().__init__(**kwargs)

        # Register hooks
        self._register_hooks(
@@ -296,11 +310,12 @@ class EvoAgent(ToolGuardMixin, ReActAgent):
        )
        logger.debug("Registered workspace watch hook")

-    async def _reasoning(self, **kwargs) -> Msg:
+    async def _reasoning(self, tool_choice: Optional[str] = None, **kwargs) -> Msg:
        """Override reasoning to execute pre-reasoning hooks.

        Args:
-            **kwargs: Arguments for reasoning
+            tool_choice: Optional tool choice for structured output
+            **kwargs: Additional arguments for reasoning

        Returns:
            Response message

@@ -313,7 +328,18 @@ class EvoAgent(ToolGuardMixin, ReActAgent):
        )
        # Call parent (which may be ToolGuardMixin's _reasoning)
-        return await super()._reasoning(**kwargs)
+        return await super()._reasoning(tool_choice=tool_choice, **kwargs)
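Promoting `tool_choice` to a named parameter and forwarding it explicitly keeps the argument visible to every class in the MRO. A toy example of the pattern (the class names here are illustrative, not the real ReActAgent API):

```python
import asyncio

class Base:
    # Stand-in for the parent _reasoning; echoes what it received
    async def _reasoning(self, tool_choice=None, **kwargs):
        return {"tool_choice": tool_choice, **kwargs}

class Hooked(Base):
    async def _reasoning(self, tool_choice=None, **kwargs):
        # run pre-reasoning hooks here, then delegate cooperatively
        return await super()._reasoning(tool_choice=tool_choice, **kwargs)

result = asyncio.run(Hooked()._reasoning(tool_choice="search", depth=2))
print(result)  # {'tool_choice': 'search', 'depth': 2}
```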
+    def reload_runtime_assets(self, active_skill_dirs: Optional[List[Path]] = None) -> None:
+        """Reload toolkit and system prompt from current run assets.
+
+        Refreshes prompt files from workspace config and rebuilds the toolkit.
+        """
+        # Rebuild system prompt (also refreshes _agent_config and _prompt_files)
+        self.rebuild_sys_prompt()
+        # Reload skills/toolkit
+        self.reload_skills(active_skill_dirs=active_skill_dirs)

    def reload_skills(self, active_skill_dirs: Optional[List[Path]] = None) -> None:
        """Reload skills at runtime.

@@ -366,6 +392,110 @@ class EvoAgent(ToolGuardMixin, ReActAgent):
        self.toolkit = new_toolkit
        logger.info("Skills reloaded for agent: %s", self.agent_id)
    def _make_decision(
        self,
        ticker: str,
        action: str,
        quantity: int,
        confidence: int = 50,
        reasoning: str = "",
    ) -> "ToolResponse":
        """Record a trading decision for a ticker (PM agent compatibility).

        Args:
            ticker: Stock ticker symbol (e.g., "AAPL")
            action: Decision - "long", "short" or "hold"
            quantity: Number of shares to trade (0 for hold)
            confidence: Confidence level 0-100
            reasoning: Explanation for this decision

        Returns:
            ToolResponse confirming decision recorded
        """
        from agentscope.message import TextBlock
        from agentscope.tool import ToolResponse

        if action not in ["long", "short", "hold"]:
            return ToolResponse(
                content=[
                    TextBlock(
                        type="text",
                        text=f"Invalid action: {action}. Must be 'long', 'short', or 'hold'.",
                    ),
                ],
            )

        # Store decision in metadata for retrieval
        if not hasattr(self, "_decisions"):
            self._decisions = {}
        self._decisions[ticker] = {
            "action": action,
            "quantity": quantity if action != "hold" else 0,
            "confidence": confidence,
            "reasoning": reasoning,
        }

        return ToolResponse(
            content=[
                TextBlock(
                    type="text",
                    text=f"Decision recorded: {action} {quantity} shares of {ticker} "
                    f"(confidence: {confidence}%)",
                ),
            ],
        )

    def get_decisions(self) -> Dict[str, Dict]:
        """Get decisions from current cycle (PM compatibility)."""
        return getattr(self, "_decisions", {}).copy()

    def get_portfolio_state(self) -> Dict[str, Any]:
        """Get current portfolio state (PM compatibility)."""
        return getattr(self, "_portfolio", {}).copy()

    def load_portfolio_state(self, portfolio: Dict[str, Any]) -> None:
        """Load portfolio state (PM compatibility).

        Args:
            portfolio: Portfolio state dict with cash, positions, margin_used
        """
        if not portfolio:
            return
        if not hasattr(self, "_portfolio"):
            self._portfolio = {
                "cash": 100000.0,
                "positions": {},
                "margin_used": 0.0,
                "margin_requirement": 0.25,
            }
        self._portfolio = {
            "cash": portfolio.get("cash", self._portfolio["cash"]),
            "positions": portfolio.get("positions", {}).copy(),
            "margin_used": portfolio.get("margin_used", 0.0),
            "margin_requirement": portfolio.get(
                "margin_requirement",
                self._portfolio["margin_requirement"],
            ),
        }

    def update_portfolio(self, portfolio: Dict[str, Any]) -> None:
        """Update portfolio after external execution (PM compatibility).

        Args:
            portfolio: Portfolio updates to apply
        """
        if not hasattr(self, "_portfolio"):
            self._portfolio = {
                "cash": 100000.0,
                "positions": {},
                "margin_used": 0.0,
                "margin_requirement": 0.25,
            }
        self._portfolio.update(portfolio)
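The decision-recording contract above can be exercised without the full agent stack. This trimmed stand-in mirrors the action validation and the hold-zeroes-quantity rule; it is illustrative only, not the actual EvoAgent class:

```python
class PMCompatSketch:
    """Trimmed stand-in mirroring the PM-compatibility decision logic."""

    def make_decision(self, ticker, action, quantity, confidence=50, reasoning=""):
        if action not in ("long", "short", "hold"):
            return f"Invalid action: {action}"
        if not hasattr(self, "_decisions"):
            self._decisions = {}
        self._decisions[ticker] = {
            "action": action,
            # Holds always record a quantity of 0, whatever was passed in
            "quantity": quantity if action != "hold" else 0,
            "confidence": confidence,
            "reasoning": reasoning,
        }
        return f"Decision recorded: {action} {quantity} shares of {ticker}"

    def get_decisions(self):
        return getattr(self, "_decisions", {}).copy()

pm = PMCompatSketch()
pm.make_decision("AAPL", "hold", 100)
pm.make_decision("MSFT", "long", 50, confidence=80)
print(pm.get_decisions()["AAPL"]["quantity"])  # 0
```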
    def rebuild_sys_prompt(self) -> None:
        """Rebuild and replace the system prompt at runtime.

@@ -380,6 +510,10 @@ class EvoAgent(ToolGuardMixin, ReActAgent):
        # Reload agent config in case it changed
        self._agent_config = self._load_agent_config()

+        # Refresh prompt_files from updated config
+        if "prompt_files" in self._agent_config:
+            self._prompt_files = list(self._agent_config["prompt_files"])
+
        # Rebuild prompt
        self._sys_prompt = self._build_system_prompt()
@@ -446,7 +580,7 @@ class EvoAgent(ToolGuardMixin, ReActAgent):
            return

        try:
-            self._messenger = AgentMessenger(agent_id=self.agent_id)
+            self._messenger = AgentMessenger()
            self._task_delegator = TaskDelegator(agent=self)
            logger.debug(
                "Team infrastructure initialized for agent: %s",


@@ -12,11 +12,10 @@ from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from pathlib import Path
-from typing import Any, Dict, List, Optional, Set
+from typing import Any, Dict, List, Optional

from .evaluation_hook import (
    EvaluationCollector,
-    EvaluationResult,
    MetricType,
)


@@ -12,8 +12,7 @@ from __future__ import annotations
import asyncio
import json
import logging
-from dataclasses import dataclass, field
-from datetime import datetime
+from datetime import datetime, timezone
from enum import Enum
from typing import Any, Callable, Dict, Iterable, List, Optional, Set
@@ -73,11 +72,13 @@ class ApprovalRecord:
        self.tool_name = tool_name
        self.tool_input = tool_input
        self.agent_id = agent_id
+        # run_id is the new preferred name; workspace_id is kept for backward compatibility
+        self.run_id = workspace_id
        self.workspace_id = workspace_id
        self.session_id = session_id
        self.status = ApprovalStatus.PENDING
        self.findings = findings or []
-        self.created_at = datetime.utcnow()
+        self.created_at = datetime.now(timezone.utc)
        self.resolved_at: Optional[datetime] = None
        self.resolved_by: Optional[str] = None
        self.metadata: Dict[str, Any] = {}
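The switch from `datetime.utcnow()` to `datetime.now(timezone.utc)` matters because the former returns a naive datetime (and is deprecated since Python 3.12), while the latter carries an explicit UTC offset that survives `isoformat()`:

```python
from datetime import datetime, timezone

naive = datetime.utcnow()             # no tzinfo attached (deprecated in 3.12+)
aware = datetime.now(timezone.utc)    # tzinfo=timezone.utc

assert naive.tzinfo is None
assert aware.tzinfo is timezone.utc
print(aware.isoformat())  # ends with "+00:00", so consumers can parse it unambiguously
```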
@@ -90,6 +91,7 @@
            "tool_name": self.tool_name,
            "tool_input": self.tool_input,
            "agent_id": self.agent_id,
+            "run_id": self.run_id,
            "workspace_id": self.workspace_id,
            "session_id": self.session_id,
            "findings": [f.to_dict() for f in self.findings],
@@ -161,7 +163,7 @@ class ToolGuardStore:
            return record

        record.status = status
-        record.resolved_at = datetime.utcnow()
+        record.resolved_at = datetime.now(timezone.utc)
        record.resolved_by = resolved_by
        if notify_request and record.pending_request:
            if status == ApprovalStatus.APPROVED:
@@ -395,18 +397,34 @@ class ToolGuardMixin:
        )
        manager = get_global_runtime_manager()
-        if manager:
-            manager.register_pending_approval(
-                record.approval_id,
-                {
-                    "tool_name": record.tool_name,
-                    "agent_id": record.agent_id,
-                    "workspace_id": record.workspace_id,
-                    "session_id": record.session_id,
-                    "tool_input": record.tool_input,
-                },
-            )
+        approval_data = {
+            "tool_name": record.tool_name,
+            "agent_id": record.agent_id,
+            "workspace_id": record.workspace_id,
+            "session_id": record.session_id,
+            "tool_input": record.tool_input,
+        }
+        if manager:
+            manager.register_pending_approval(
+                record.approval_id,
+                approval_data,
+            )
+            # Broadcast WebSocket event for real-time UI updates
+            try:
+                if hasattr(manager, 'broadcast_event'):
+                    await manager.broadcast_event({
+                        "type": "approval_requested",
+                        "approval_id": record.approval_id,
+                        "agent_id": record.agent_id,
+                        "tool_name": record.tool_name,
+                        "timestamp": record.created_at.isoformat(),
+                        "data": approval_data,
+                    })
+            except Exception as e:
+                logger.warning(f"Failed to broadcast approval event: {e}")

        self._pending_approval = ToolApprovalRequest(
            approval_id=record.approval_id,
            tool_name=tool_name,
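The try/except around the broadcast makes the WebSocket notification best-effort: a dead socket must never block the approval flow. A self-contained sketch of that guard, with an illustrative `Manager` stub whose broadcast always fails:

```python
import asyncio
import logging

logger = logging.getLogger("toolguard")

class Manager:
    async def broadcast_event(self, event):
        raise ConnectionError("websocket down")  # simulate a disconnected client

async def request_approval(manager):
    approval_data = {"tool_name": "shell", "agent_id": "pm"}
    # Broadcast is best-effort: a WS failure must not block the approval flow
    try:
        if hasattr(manager, "broadcast_event"):
            await manager.broadcast_event({"type": "approval_requested", "data": approval_data})
    except Exception as e:
        logger.warning("Failed to broadcast approval event: %s", e)
    return approval_data  # approval continues regardless

out = asyncio.run(request_approval(Manager()))
print(out)  # {'tool_name': 'shell', 'agent_id': 'pm'}
```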


@@ -1,146 +0,0 @@
# -*- coding: utf-8 -*-
"""
Compatibility Layer - Adapters for legacy to EvoAgent migration.
Provides:
- LegacyAgentAdapter: Wraps old AnalystAgent to work with new interfaces
- Migration utilities for gradual adoption
"""
from typing import Any, Dict, Optional
from agentscope.message import Msg
from .agent_core import EvoAgent
class LegacyAgentAdapter:
"""
Adapter to make legacy AnalystAgent compatible with EvoAgent interfaces.
This allows gradual migration by wrapping existing agents.
"""
def __init__(self, legacy_agent: Any):
"""
Initialize adapter.
Args:
legacy_agent: Legacy AnalystAgent instance
"""
self._agent = legacy_agent
self.agent_id = getattr(legacy_agent, 'agent_id', getattr(legacy_agent, 'name', 'unknown'))
self.analyst_type = getattr(legacy_agent, 'analyst_type_key', None)
@property
def name(self) -> str:
"""Get agent name."""
return getattr(self._agent, 'name', self.agent_id)
@property
def toolkit(self) -> Any:
"""Get agent toolkit."""
return getattr(self._agent, 'toolkit', None)
@property
def model(self) -> Any:
"""Get agent model."""
return getattr(self._agent, 'model', None)
@property
def memory(self) -> Any:
"""Get agent memory."""
return getattr(self._agent, 'memory', None)
async def reply(self, x: Msg = None) -> Msg:
"""
Delegate to legacy agent's reply method.
Args:
x: Input message
Returns:
Response message
"""
return await self._agent.reply(x)
def reload_runtime_assets(self, active_skill_dirs: Optional[list] = None) -> None:
"""
Reload runtime assets if supported.
Args:
active_skill_dirs: Optional list of active skill directories
"""
if hasattr(self._agent, 'reload_runtime_assets'):
self._agent.reload_runtime_assets(active_skill_dirs)
def to_evo_agent(
self,
workspace_manager: Optional[Any] = None,
enable_tool_guard: bool = False,
) -> EvoAgent:
"""
Convert legacy agent to EvoAgent.
Args:
workspace_manager: Optional workspace manager
enable_tool_guard: Whether to enable tool guard
Returns:
New EvoAgent instance with same configuration
"""
return EvoAgent(
agent_id=self.agent_id,
model=self.model,
formatter=getattr(self._agent, 'formatter', None),
toolkit=self.toolkit,
workspace_manager=workspace_manager,
config=getattr(self._agent, 'config', {}),
long_term_memory=getattr(self._agent, 'long_term_memory', None),
enable_tool_guard=enable_tool_guard,
sys_prompt=getattr(self._agent, '_sys_prompt', None),
)
def __getattr__(self, name: str) -> Any:
"""Delegate unknown attributes to wrapped agent."""
return getattr(self._agent, name)
def is_legacy_agent(agent: Any) -> bool:
"""
Check if an agent is a legacy agent.
Args:
agent: Agent instance to check
Returns:
True if legacy agent
"""
return hasattr(agent, 'analyst_type_key') and not isinstance(agent, EvoAgent)
def adapt_agent(agent: Any) -> Any:
"""
Wrap agent in adapter if it's a legacy agent.
Args:
agent: Agent instance
Returns:
Adapted agent or original if already EvoAgent
"""
if is_legacy_agent(agent):
return LegacyAgentAdapter(agent)
return agent
def adapt_agents(agents: list) -> list:
"""
Wrap multiple agents in adapters.
Args:
agents: List of agent instances
Returns:
List of adapted agents
"""
return [adapt_agent(agent) for agent in agents]


@@ -0,0 +1,372 @@
# -*- coding: utf-8 -*-
"""Dynamic Team Types - Core data types for PM-driven analyst team management.
This module provides data structures for:
- Analyst persona definitions (custom analyst types)
- Analyst creation configuration (custom SOUL.md, AGENTS.md, etc.)
- Dynamic team runtime state tracking
These types enable the Portfolio Manager to dynamically create, clone, and manage
analyst agents with custom configurations beyond the four predefined analyst types.
"""
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Optional, Dict, Any, List
from datetime import datetime
@dataclass
class AnalystPersona:
"""Analyst role definition - extends or replaces personas.yaml entries.
Defines the identity, focus areas, and characteristics of an analyst type.
Can be used to create entirely new analyst types at runtime.
Attributes:
name: Display name for the analyst (e.g., "期权策略分析师")
focus: List of focus areas (e.g., ["期权定价", "波动率交易"])
description: Detailed description of the analyst's role and expertise
preferred_tools: Optional list of preferred tool types or categories
icon: Optional icon identifier for frontend display
"""
name: str
focus: List[str]
description: str
preferred_tools: Optional[List[str]] = None
icon: Optional[str] = None
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary for serialization."""
return {
"name": self.name,
"focus": self.focus,
"description": self.description,
"preferred_tools": self.preferred_tools,
"icon": self.icon,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> AnalystPersona:
"""Create from dictionary."""
return cls(
name=data["name"],
focus=data.get("focus", []),
description=data.get("description", ""),
preferred_tools=data.get("preferred_tools"),
icon=data.get("icon"),
)
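Because `to_dict`/`from_dict` are symmetric and the class is a dataclass, round trips are easy to verify. A trimmed copy for illustration (the real class is the one defined in this module):

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Optional

# Trimmed copy of AnalystPersona to demonstrate the to_dict/from_dict round trip
@dataclass
class AnalystPersona:
    name: str
    focus: List[str]
    description: str
    preferred_tools: Optional[List[str]] = None
    icon: Optional[str] = None

    def to_dict(self) -> Dict[str, Any]:
        return {"name": self.name, "focus": self.focus, "description": self.description,
                "preferred_tools": self.preferred_tools, "icon": self.icon}

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "AnalystPersona":
        return cls(name=data["name"], focus=data.get("focus", []),
                   description=data.get("description", ""),
                   preferred_tools=data.get("preferred_tools"), icon=data.get("icon"))

p = AnalystPersona(name="Options Strategist", focus=["volatility"], description="Prices options")
assert AnalystPersona.from_dict(p.to_dict()) == p  # dataclass equality makes round trips testable
```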
@dataclass
class AnalystConfig:
"""Complete configuration for dynamically creating an analyst.
This dataclass allows the PM to specify all aspects of analyst creation,
including custom workspace files, model overrides, and skill selections.
Attributes:
persona: Complete persona definition (if creating custom type)
analyst_type: Reference to predefined type (e.g., "technical_analyst")
soul_md: Custom SOUL.md content (overrides default generation)
agents_md: Custom AGENTS.md content (overrides default generation)
profile_md: Custom PROFILE.md content (overrides default generation)
skills: List of skill IDs to enable for this analyst
model_name: Override default model for this analyst
memory_config: Custom memory system configuration
tags: Classification tags (e.g., ["options", "derivatives"])
parent_id: If cloned, the source analyst ID
"""
# Identity configuration
persona: Optional[AnalystPersona] = None
analyst_type: Optional[str] = None # Reference to predefined type
# Workspace file contents (override default generation)
soul_md: Optional[str] = None
agents_md: Optional[str] = None
profile_md: Optional[str] = None
bootstrap_md: Optional[str] = None
# Runtime configuration
skills: Optional[List[str]] = field(default_factory=list)
model_name: Optional[str] = None
memory_config: Optional[Dict[str, Any]] = field(default_factory=dict)
# Metadata
tags: Optional[List[str]] = field(default_factory=list)
parent_id: Optional[str] = None # For clone tracking
def __post_init__(self):
"""Initialize default collections."""
if self.skills is None:
self.skills = []
if self.memory_config is None:
self.memory_config = {}
if self.tags is None:
self.tags = []
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary for serialization."""
return {
"persona": self.persona.to_dict() if self.persona else None,
"analyst_type": self.analyst_type,
"soul_md": self.soul_md,
"agents_md": self.agents_md,
"profile_md": self.profile_md,
"bootstrap_md": self.bootstrap_md,
"skills": self.skills,
"model_name": self.model_name,
"memory_config": self.memory_config,
"tags": self.tags,
"parent_id": self.parent_id,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> AnalystConfig:
"""Create from dictionary."""
persona_data = data.get("persona")
return cls(
persona=AnalystPersona.from_dict(persona_data) if persona_data else None,
analyst_type=data.get("analyst_type"),
soul_md=data.get("soul_md"),
agents_md=data.get("agents_md"),
profile_md=data.get("profile_md"),
bootstrap_md=data.get("bootstrap_md"),
skills=data.get("skills", []),
model_name=data.get("model_name"),
memory_config=data.get("memory_config", {}),
tags=data.get("tags", []),
parent_id=data.get("parent_id"),
)
def get_effective_analyst_type(self) -> Optional[str]:
"""Get the effective analyst type for tool selection.
Returns analyst_type if set, otherwise derives from persona name.
"""
if self.analyst_type:
return self.analyst_type
if self.persona:
# Derive a type ID from the persona name (e.g., "Options Strategist" -> "options_strategist")
return self._derive_type_id(self.persona.name)
return None
@staticmethod
def _derive_type_id(name: str) -> str:
"""Derive a type ID from a display name."""
import re
# Strip punctuation: \w keeps alphanumerics, underscores, and CJK characters,
# so Chinese names pass through unchanged rather than being transliterated
cleaned = re.sub(r'[^\w\s]', '', name)
# Convert to lowercase and replace spaces with underscores
return cleaned.lower().strip().replace(' ', '_')
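A standalone copy of the derivation shows its actual behavior: ASCII names become snake_case, while CJK names pass through untouched because `\w` matches them:

```python
import re

def derive_type_id(name: str) -> str:
    # Mirrors AnalystConfig._derive_type_id
    cleaned = re.sub(r'[^\w\s]', '', name)  # drop punctuation, keep word chars and spaces
    return cleaned.lower().strip().replace(' ', '_')

print(derive_type_id("Options Strategist!"))  # options_strategist
print(derive_type_id("期权策略分析师"))          # 期权策略分析师 (unchanged)
```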
@dataclass
class DynamicAnalystInstance:
"""Runtime information about a dynamically created analyst.
Tracks the creation metadata and current state of a dynamic analyst.
Attributes:
agent_id: Unique identifier for this analyst instance
config: The configuration used to create this analyst
created_at: Timestamp when the analyst was created
created_by: Identifier of the agent that created this analyst (usually PM)
status: Current status (active, paused, removed)
"""
agent_id: str
config: AnalystConfig
created_at: str = field(default_factory=lambda: datetime.now().isoformat())
created_by: str = "portfolio_manager"
status: str = "active" # active, paused, removed
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary for serialization."""
return {
"agent_id": self.agent_id,
"config": self.config.to_dict(),
"created_at": self.created_at,
"created_by": self.created_by,
"status": self.status,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> DynamicAnalystInstance:
"""Create from dictionary."""
return cls(
agent_id=data["agent_id"],
config=AnalystConfig.from_dict(data.get("config", {})),
created_at=data.get("created_at", datetime.now().isoformat()),
created_by=data.get("created_by", "portfolio_manager"),
status=data.get("status", "active"),
)
@dataclass
class DynamicTeamState:
"""Complete runtime state for dynamic analyst team management.
This state is persisted alongside TEAM_PIPELINE.yaml and tracks:
- Custom analyst types registered at runtime
- All dynamically created analyst instances
- Configuration snapshots for cloning
Attributes:
run_id: The run configuration this state belongs to
registered_types: Runtime-registered analyst type definitions
instances: Dynamically created analyst instances
version: State format version for migration handling
"""
run_id: str
registered_types: Dict[str, AnalystPersona] = field(default_factory=dict)
instances: Dict[str, DynamicAnalystInstance] = field(default_factory=dict)
version: int = 1
def register_type(self, type_id: str, persona: AnalystPersona) -> bool:
"""Register a new analyst type.
Returns:
True if registered, False if type_id already exists
"""
if type_id in self.registered_types:
return False
self.registered_types[type_id] = persona
return True
def add_instance(self, instance: DynamicAnalystInstance) -> None:
"""Add a new analyst instance."""
self.instances[instance.agent_id] = instance
def remove_instance(self, agent_id: str) -> bool:
"""Mark an instance as removed.
Returns:
True if instance was found and removed
"""
if agent_id in self.instances:
self.instances[agent_id].status = "removed"
return True
return False
def get_active_instances(self) -> List[DynamicAnalystInstance]:
"""Get all active (non-removed) analyst instances."""
return [
inst for inst in self.instances.values()
if inst.status == "active"
]
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary for serialization."""
return {
"run_id": self.run_id,
"registered_types": {
k: v.to_dict() for k, v in self.registered_types.items()
},
"instances": {
k: v.to_dict() for k, v in self.instances.items()
},
"version": self.version,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> DynamicTeamState:
"""Create from dictionary."""
registered_types = {
k: AnalystPersona.from_dict(v)
for k, v in data.get("registered_types", {}).items()
}
instances = {
k: DynamicAnalystInstance.from_dict(v)
for k, v in data.get("instances", {}).items()
}
return cls(
run_id=data.get("run_id", "unknown"),
registered_types=registered_types,
instances=instances,
version=data.get("version", 1),
)
@dataclass
class CreateAnalystResult:
"""Result of creating a dynamic analyst.
Attributes:
success: Whether creation was successful
agent_id: The ID of the created analyst (if successful)
message: Human-readable result message
error: Error details (if failed)
"""
success: bool
agent_id: Optional[str] = None
message: str = ""
error: Optional[str] = None
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary for API responses."""
return {
"success": self.success,
"agent_id": self.agent_id,
"message": self.message,
"error": self.error,
}
@dataclass
class CloneAnalystRequest:
"""Request to clone an existing analyst.
Attributes:
source_id: ID of the analyst to clone
new_id: ID for the new analyst
config_overrides: Configuration fields to override
"""
source_id: str
new_id: str
config_overrides: Optional[Dict[str, Any]] = field(default_factory=dict)
def __post_init__(self):
if self.config_overrides is None:
self.config_overrides = {}
@dataclass
class AnalystTypeInfo:
"""Information about an available analyst type.
Used for listing all available types (predefined + runtime-registered).
Attributes:
type_id: Unique identifier for this type
name: Display name
description: Type description
is_builtin: Whether this is a built-in type or runtime-registered
source: Source of this type (e.g., "constants", "runtime", "config")
"""
type_id: str
name: str
description: str
is_builtin: bool
source: str
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary for API responses."""
return {
"type_id": self.type_id,
"name": self.name,
"description": self.description,
"is_builtin": self.is_builtin,
"source": self.source,
}
__all__ = [
"AnalystPersona",
"AnalystConfig",
"DynamicAnalystInstance",
"DynamicTeamState",
"CreateAnalystResult",
"CloneAnalystRequest",
"AnalystTypeInfo",
]
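A minimal round-trip sketch of the state container above. `StubPersona` and `StubTeamState` are simplified stand-ins (the real `AnalystPersona` is defined earlier in the file), trimmed to show the register/serialize contract: duplicate type registrations are rejected rather than overwritten, and `to_dict` recursively serializes nested objects.

```python
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class StubPersona:
    """Simplified stand-in for AnalystPersona (defined earlier in the file)."""
    name: str

    def to_dict(self) -> Dict[str, Any]:
        return {"name": self.name}


@dataclass
class StubTeamState:
    """Mirrors DynamicTeamState's register/serialize contract."""
    run_id: str
    registered_types: Dict[str, StubPersona] = field(default_factory=dict)

    def register_type(self, type_id: str, persona: StubPersona) -> bool:
        # Duplicate registrations are rejected, never silently overwritten.
        if type_id in self.registered_types:
            return False
        self.registered_types[type_id] = persona
        return True

    def to_dict(self) -> Dict[str, Any]:
        return {
            "run_id": self.run_id,
            "registered_types": {
                k: v.to_dict() for k, v in self.registered_types.items()
            },
        }


state = StubTeamState(run_id="demo")
assert state.register_type("macro", StubPersona("Macro Analyst"))
assert not state.register_type("macro", StubPersona("Duplicate"))
```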

View File

@@ -1,388 +0,0 @@
# -*- coding: utf-8 -*-
"""
Portfolio Manager Agent - Based on AgentScope ReActAgent
Responsible for decision-making (NOT trade execution)
"""
from pathlib import Path
from typing import Any, Dict, Optional, Callable
from agentscope.agent import ReActAgent
from agentscope.memory import InMemoryMemory, LongTermMemoryBase
from agentscope.message import Msg, TextBlock
from agentscope.tool import Toolkit, ToolResponse
from ..utils.progress import progress
from .prompt_factory import build_agent_system_prompt, clear_prompt_factory_cache
from .team_pipeline_config import update_active_analysts
from ..config.constants import ANALYST_TYPES
class PMAgent(ReActAgent):
"""
Portfolio Manager Agent - Makes investment decisions
Key features:
1. PM outputs decisions only (action + quantity per ticker)
2. Trade execution happens externally (in pipeline/executor)
3. Supports both backtest and live modes
"""
def __init__(
self,
name: str = "portfolio_manager",
model: Any = None,
formatter: Any = None,
initial_cash: float = 100000.0,
margin_requirement: float = 0.25,
config: Optional[Dict[str, Any]] = None,
long_term_memory: Optional[LongTermMemoryBase] = None,
toolkit_factory: Any = None,
toolkit_factory_kwargs: Optional[Dict[str, Any]] = None,
toolkit: Optional[Toolkit] = None,
):
object.__setattr__(self, "config", config or {})
# Portfolio state
object.__setattr__(
self,
"portfolio",
{
"cash": initial_cash,
"positions": {},
"margin_used": 0.0,
"margin_requirement": margin_requirement,
},
)
# Decisions made in current cycle
object.__setattr__(self, "_decisions", {})
toolkit_factory_kwargs = toolkit_factory_kwargs or {}
object.__setattr__(self, "_toolkit_factory", toolkit_factory)
object.__setattr__(
self,
"_toolkit_factory_kwargs",
toolkit_factory_kwargs,
)
object.__setattr__(self, "_create_team_agent_cb", None)
object.__setattr__(self, "_remove_team_agent_cb", None)
# Create toolkit after local state is ready so bound tool methods can be registered.
if toolkit is None:
if toolkit_factory is not None:
toolkit = toolkit_factory(
name,
self.config.get("config_name", "default"),
owner=self,
**toolkit_factory_kwargs,
)
else:
toolkit = self._create_toolkit()
object.__setattr__(self, "toolkit", toolkit)
sys_prompt = build_agent_system_prompt(
agent_id=name,
config_name=self.config.get("config_name", "default"),
toolkit=self.toolkit,
)
kwargs = {
"name": name,
"sys_prompt": sys_prompt,
"model": model,
"formatter": formatter,
"toolkit": toolkit,
"memory": InMemoryMemory(),
"max_iters": 10,
}
if long_term_memory:
kwargs["long_term_memory"] = long_term_memory
kwargs["long_term_memory_mode"] = "both"
super().__init__(**kwargs)
def _create_toolkit(self) -> Toolkit:
"""Create toolkit with decision recording tool"""
toolkit = Toolkit()
toolkit.register_tool_function(self._make_decision)
return toolkit
def _make_decision(
self,
ticker: str,
action: str,
quantity: int,
confidence: int = 50,
reasoning: str = "",
) -> ToolResponse:
"""
Record a trading decision for a ticker.
Args:
ticker: Stock ticker symbol (e.g., "AAPL")
action: Decision - "long", "short" or "hold"
quantity: Number of shares to trade (0 for hold)
confidence: Confidence level 0-100
reasoning: Explanation for this decision
Returns:
ToolResponse confirming decision recorded
"""
if action not in ["long", "short", "hold"]:
return ToolResponse(
content=[
TextBlock(
type="text",
text=f"Invalid action: {action}. "
"Must be 'long', 'short', or 'hold'.",
),
],
)
self._decisions[ticker] = {
"action": action,
"quantity": quantity if action != "hold" else 0,
"confidence": confidence,
"reasoning": reasoning,
}
return ToolResponse(
content=[
TextBlock(
type="text",
text=f"Decision recorded: {action} "
f"{quantity} shares of {ticker}"
f" (confidence: {confidence}%)",
),
],
)
def _add_team_analyst(self, agent_id: str) -> ToolResponse:
"""Add one analyst to active discussion team."""
config_name = self.config.get("config_name", "default")
project_root = Path(__file__).resolve().parents[2]
active = update_active_analysts(
project_root=project_root,
config_name=config_name,
available_analysts=list(ANALYST_TYPES.keys()),
add=[agent_id],
)
return ToolResponse(
content=[
TextBlock(
type="text",
text=(
f"Active analyst team updated. Added: {agent_id}. "
f"Current active analysts: {', '.join(active)}"
),
),
],
)
def _remove_team_analyst(self, agent_id: str) -> ToolResponse:
"""Remove one analyst from active discussion team."""
callback_msg = ""
callback = self._remove_team_agent_cb
if callback is not None:
callback_msg = callback(agent_id=agent_id)
config_name = self.config.get("config_name", "default")
project_root = Path(__file__).resolve().parents[2]
active = update_active_analysts(
project_root=project_root,
config_name=config_name,
available_analysts=list(ANALYST_TYPES.keys()),
remove=[agent_id],
)
return ToolResponse(
content=[
TextBlock(
type="text",
text=(
f"Active analyst team updated. Removed: {agent_id}. "
f"Current active analysts: {', '.join(active)}"
+ (f" | {callback_msg}" if callback_msg else "")
),
),
],
)
def _set_active_analysts(self, agent_ids: str) -> ToolResponse:
"""Set active analysts from comma-separated agent ids."""
requested = [
item.strip() for item in str(agent_ids or "").split(",") if item.strip()
]
config_name = self.config.get("config_name", "default")
project_root = Path(__file__).resolve().parents[2]
active = update_active_analysts(
project_root=project_root,
config_name=config_name,
available_analysts=list(ANALYST_TYPES.keys()),
set_to=requested,
)
return ToolResponse(
content=[
TextBlock(
type="text",
text=f"Active analyst team set to: {', '.join(active)}",
),
],
)
def _create_team_analyst(self, agent_id: str, analyst_type: str) -> ToolResponse:
"""Create a runtime analyst instance and activate it."""
callback = self._create_team_agent_cb
if callback is None:
return ToolResponse(
content=[
TextBlock(
type="text",
text="Runtime agent creation is not available in current pipeline.",
),
],
)
result = callback(agent_id=agent_id, analyst_type=analyst_type)
return ToolResponse(
content=[
TextBlock(type="text", text=result),
],
)
def set_team_controller(
self,
*,
create_agent_callback: Optional[Callable[..., str]] = None,
remove_agent_callback: Optional[Callable[..., str]] = None,
) -> None:
"""Inject runtime team lifecycle callbacks from pipeline."""
object.__setattr__(self, "_create_team_agent_cb", create_agent_callback)
object.__setattr__(self, "_remove_team_agent_cb", remove_agent_callback)
async def reply(self, x: Optional[Msg] = None) -> Msg:
"""
Make investment decisions
Returns:
Msg with decisions in metadata
"""
if x is None:
return Msg(
name=self.name,
content="No input provided",
role="assistant",
)
# Clear previous decisions
self._decisions = {}
progress.update_status(
self.name,
None,
"Analyzing and making decisions",
)
result = await super().reply(x)
progress.update_status(self.name, None, "Completed")
# Attach decisions to metadata
if result.metadata is None:
result.metadata = {}
result.metadata["decisions"] = self._decisions.copy()
result.metadata["portfolio"] = self.portfolio.copy()
return result
def get_decisions(self) -> Dict[str, Dict]:
"""Get decisions from current cycle"""
return self._decisions.copy()
def get_portfolio_state(self) -> Dict[str, Any]:
"""Get current portfolio state"""
return self.portfolio.copy()
def load_portfolio_state(self, portfolio: Dict[str, Any]):
"""Load portfolio state"""
if not portfolio:
return
self.portfolio = {
"cash": portfolio.get("cash", self.portfolio["cash"]),
"positions": portfolio.get("positions", {}).copy(),
"margin_used": portfolio.get("margin_used", 0.0),
"margin_requirement": portfolio.get(
"margin_requirement",
self.portfolio["margin_requirement"],
),
}
def update_portfolio(self, portfolio: Dict[str, Any]):
"""Update portfolio after external execution"""
self.portfolio.update(portfolio)
def _has_open_positions(self) -> bool:
"""Return whether the current portfolio still has non-zero positions."""
for position in self.portfolio.get("positions", {}).values():
if position.get("long", 0) or position.get("short", 0):
return True
return False
def can_apply_initial_cash(self) -> bool:
"""Only allow cash rebasing before any positions or margin exist."""
return (
not self._has_open_positions()
and float(self.portfolio.get("margin_used", 0.0) or 0.0) == 0.0
)
def apply_runtime_portfolio_config(
self,
*,
margin_requirement: Optional[float] = None,
initial_cash: Optional[float] = None,
) -> Dict[str, bool]:
"""Apply safe run-time portfolio config updates."""
result = {
"margin_requirement": False,
"initial_cash": False,
}
if margin_requirement is not None:
self.portfolio["margin_requirement"] = float(margin_requirement)
result["margin_requirement"] = True
if initial_cash is not None and self.can_apply_initial_cash():
self.portfolio["cash"] = float(initial_cash)
result["initial_cash"] = True
return result
def reload_runtime_assets(self, active_skill_dirs: Optional[list] = None) -> None:
"""Reload toolkit and system prompt from current run assets."""
from .toolkit_factory import create_agent_toolkit
clear_prompt_factory_cache()
toolkit_factory = self._toolkit_factory or create_agent_toolkit
toolkit_kwargs = dict(self._toolkit_factory_kwargs)
if active_skill_dirs is not None:
toolkit_kwargs["active_skill_dirs"] = active_skill_dirs
self.toolkit = toolkit_factory(
self.name,
self.config.get("config_name", "default"),
owner=self,
**toolkit_kwargs,
)
self._apply_runtime_sys_prompt(
build_agent_system_prompt(
agent_id=self.name,
config_name=self.config.get("config_name", "default"),
toolkit=self.toolkit,
),
)
def _apply_runtime_sys_prompt(self, sys_prompt: str) -> None:
"""Update the prompt used by future turns and the cached system msg."""
self._sys_prompt = sys_prompt
for msg, _marks in self.memory.content:
if getattr(msg, "role", None) == "system":
msg.content = sys_prompt
break
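The decision-recording rule in `PMAgent._make_decision` can be sketched standalone: invalid actions are rejected before any state changes, and `hold` always records quantity 0 regardless of what was requested. `record_decision` is a hypothetical free function mirroring the bound tool's logic.

```python
from typing import Dict

VALID_ACTIONS = ("long", "short", "hold")


def record_decision(
    decisions: Dict[str, dict],
    ticker: str,
    action: str,
    quantity: int,
    confidence: int = 50,
) -> str:
    # Reject invalid actions before touching the decisions dict.
    if action not in VALID_ACTIONS:
        return f"Invalid action: {action}. Must be 'long', 'short', or 'hold'."
    decisions[ticker] = {
        "action": action,
        "quantity": quantity if action != "hold" else 0,  # hold never carries size
        "confidence": confidence,
    }
    return f"Decision recorded: {action} {quantity} shares of {ticker}"


decisions: Dict[str, dict] = {}
record_decision(decisions, "AAPL", "long", 10, confidence=70)
record_decision(decisions, "MSFT", "hold", 500)
```

Note the zeroing on `hold`: downstream execution never sees a nonzero size attached to a neutral decision.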

View File

@@ -1,110 +0,0 @@
# -*- coding: utf-8 -*-
"""
Risk Manager Agent - Based on AgentScope ReActAgent
Uses LLM for risk assessment
"""
from typing import Any, Dict, Optional
from agentscope.agent import ReActAgent
from agentscope.memory import InMemoryMemory, LongTermMemoryBase
from agentscope.message import Msg
from agentscope.tool import Toolkit
from ..utils.progress import progress
from .prompt_factory import build_agent_system_prompt, clear_prompt_factory_cache
class RiskAgent(ReActAgent):
"""
Risk Manager Agent - Uses LLM for risk assessment
Inherits from AgentScope's ReActAgent
"""
def __init__(
self,
model: Any,
formatter: Any,
name: str = "risk_manager",
config: Optional[Dict[str, Any]] = None,
long_term_memory: Optional[LongTermMemoryBase] = None,
toolkit: Optional[Toolkit] = None,
):
"""
Initialize Risk Manager Agent
Args:
model: LLM model instance
formatter: Message formatter instance
name: Agent name
config: Configuration dictionary
long_term_memory: Optional ReMeTaskLongTermMemory instance
"""
object.__setattr__(self, "config", config or {})
object.__setattr__(self, "agent_id", name)
if toolkit is None:
toolkit = Toolkit()
object.__setattr__(self, "toolkit", toolkit)
sys_prompt = self._load_system_prompt()
kwargs = {
"name": name,
"sys_prompt": sys_prompt,
"model": model,
"formatter": formatter,
"toolkit": toolkit,
"memory": InMemoryMemory(),
"max_iters": 10,
}
if long_term_memory:
kwargs["long_term_memory"] = long_term_memory
kwargs["long_term_memory_mode"] = "static_control"
super().__init__(**kwargs)
def _load_system_prompt(self) -> str:
"""Load system prompt for risk manager"""
return build_agent_system_prompt(
agent_id=self.agent_id,
config_name=self.config.get("config_name", "default"),
toolkit=self.toolkit,
)
async def reply(self, x: Optional[Msg] = None) -> Msg:
"""
Provide risk assessment
Args:
x: Input message (content must be str)
Returns:
Msg with risk warnings (content is str)
"""
progress.update_status(self.name, None, "Assessing risk")
result = await super().reply(x)
progress.update_status(self.name, None, "Risk assessment completed")
return result
def reload_runtime_assets(self, active_skill_dirs: Optional[list] = None) -> None:
"""Reload toolkit and system prompt from current run assets."""
from .toolkit_factory import create_agent_toolkit
clear_prompt_factory_cache()
self.toolkit = create_agent_toolkit(
self.agent_id,
self.config.get("config_name", "default"),
active_skill_dirs=active_skill_dirs,
)
self._apply_runtime_sys_prompt(self._load_system_prompt())
def _apply_runtime_sys_prompt(self, sys_prompt: str) -> None:
"""Update the prompt used by future turns and the cached system msg."""
self._sys_prompt = sys_prompt
for msg, _marks in self.memory.content:
if getattr(msg, "role", None) == "system":
msg.content = sys_prompt
break
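The `_apply_runtime_sys_prompt` pattern shared by both agents can be sketched in isolation: scan memory for the first system message, rewrite its content in place, and stop. `FakeMsg` is a stand-in for `agentscope.message.Msg`, and memory is modeled as the `(msg, marks)` pairs the loop iterates over.

```python
from dataclasses import dataclass


@dataclass
class FakeMsg:
    """Stand-in for agentscope.message.Msg."""
    role: str
    content: str


def apply_runtime_sys_prompt(memory, sys_prompt: str) -> None:
    # Rewrite only the first system message; later turns stay untouched.
    for msg, _marks in memory:
        if getattr(msg, "role", None) == "system":
            msg.content = sys_prompt
            break


memory = [
    (FakeMsg("system", "old prompt"), None),
    (FakeMsg("user", "hello"), None),
    (FakeMsg("system", "a later system message"), None),
]
apply_runtime_sys_prompt(memory, "new prompt")
```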

View File

@@ -6,7 +6,7 @@ import shutil
 import tempfile
 import zipfile
 from threading import Lock
-from typing import Any, Dict, Iterable, Iterator, List, Optional, Set
+from typing import Any, Dict, Iterable, List, Optional, Set
 from urllib.parse import urlparse
 from urllib.request import urlretrieve

View File

@@ -9,7 +9,7 @@ from __future__ import annotations
 import asyncio
 import logging
-from typing import Any, Callable, Dict, List, Optional, Set
+from typing import Callable, Dict, List, Set
 from agentscope.message import Msg

View File

@@ -10,7 +10,6 @@ from __future__ import annotations
 import logging
 from typing import Any, Dict, List, Optional
-from agentscope.message import Msg
 logger = logging.getLogger(__name__)

View File

@@ -11,7 +11,7 @@ from __future__ import annotations
 import asyncio
 import logging
 import uuid
-from typing import Any, Awaitable, Callable, Dict, List, Optional, Union
+from typing import Any, Awaitable, Callable, Dict, List, Optional
 from agentscope.message import Msg

View File

@@ -9,7 +9,7 @@ from __future__ import annotations
 import asyncio
 import logging
-from typing import Any, Awaitable, Callable, Dict, List, Optional, Type
+from typing import Any, Dict, List, Optional
 from agentscope.message import Msg

View File

@@ -12,9 +12,16 @@ import yaml
 from backend.agents.agent_workspace import load_agent_workspace_config
 from backend.agents.skills_manager import SkillsManager
-from backend.agents.skill_loader import load_skill_from_dir, get_skill_tools
 from backend.agents.skill_metadata import parse_skill_metadata
 from backend.config.bootstrap_config import get_bootstrap_config_for_run
+from backend.tools.dynamic_team_tools import (
+    create_analyst,
+    clone_analyst,
+    remove_analyst,
+    list_analyst_types,
+    get_analyst_info,
+    get_team_summary,
+)
 def load_agent_profiles() -> Dict[str, Dict[str, Any]]:
@@ -139,6 +146,23 @@ def _register_portfolio_tool_groups(toolkit: Any, pm_agent: Any) -> None:
         group_name="portfolio_ops",
     )
+    # Register dynamic team management tools
+    toolkit.create_tool_group(
+        group_name="dynamic_team",
+        description="Dynamic analyst team management tools.",
+        active=False,
+        notes=(
+            "Use these tools to create, clone, and manage analyst agents dynamically. "
+            "Only available when allow_dynamic_team_update is enabled."
+        ),
+    )
+    toolkit.register_tool_function(create_analyst, group_name="dynamic_team")
+    toolkit.register_tool_function(clone_analyst, group_name="dynamic_team")
+    toolkit.register_tool_function(remove_analyst, group_name="dynamic_team")
+    toolkit.register_tool_function(list_analyst_types, group_name="dynamic_team")
+    toolkit.register_tool_function(get_analyst_info, group_name="dynamic_team")
+    toolkit.register_tool_function(get_team_summary, group_name="dynamic_team")
 def _register_risk_tool_groups(toolkit: Any) -> None:
     """注册风险工具组"""
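A toy sketch of the group gating the hunk above relies on: tools registered under an inactive group stay hidden until the group is enabled. `ToyToolkit` and `create_analyst` are stand-ins; the real AgentScope `Toolkit` manages activation through its own API.

```python
def create_analyst():
    """Placeholder for the real dynamic-team tool."""


class ToyToolkit:
    """Stand-in for agentscope.tool.Toolkit, modeling only group gating."""

    def __init__(self):
        self.groups = {}  # group_name -> {"active": bool, "tools": [...]}

    def create_tool_group(self, group_name, description="", active=True, notes=""):
        self.groups[group_name] = {"active": active, "tools": []}

    def register_tool_function(self, fn, group_name):
        self.groups[group_name]["tools"].append(fn)

    def active_tools(self):
        # Only tools from active groups are exposed to the agent.
        return [t for g in self.groups.values() if g["active"] for t in g["tools"]]


tk = ToyToolkit()
tk.create_tool_group("dynamic_team", active=False)  # off until explicitly enabled
tk.register_tool_function(create_analyst, group_name="dynamic_team")
```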

View File

@@ -0,0 +1,333 @@
# -*- coding: utf-8 -*-
"""Unified Agent Factory - Centralized agent creation for 大时代.
This module provides a unified factory for creating all agent types (analysts,
risk manager, portfolio manager) as EvoAgent instances with consistent
configuration. It replaces the scattered agent creation logic in main.py,
pipeline.py, and pipeline_runner.py.
Key features:
- Single entry point for all agent creation
- Creates EvoAgent instances for all agent roles
- Consistent parameter handling across all agent types
- Support for workspace-driven configuration
- Long-term memory integration
"""
from __future__ import annotations
from pathlib import Path
from typing import Any, Optional, Protocol
from backend.agents.base.evo_agent import EvoAgent
class AgentFactoryProtocol(Protocol):
"""Protocol for agent factory implementations."""
def create_analyst(
self,
analyst_type: str,
model: Any,
formatter: Any,
active_skill_dirs: Optional[list[Path]] = None,
long_term_memory: Optional[Any] = None,
) -> EvoAgent: ...
def create_risk_manager(
self,
model: Any,
formatter: Any,
active_skill_dirs: Optional[list[Path]] = None,
long_term_memory: Optional[Any] = None,
) -> EvoAgent: ...
def create_portfolio_manager(
self,
model: Any,
formatter: Any,
initial_cash: float,
margin_requirement: float,
active_skill_dirs: Optional[list[Path]] = None,
long_term_memory: Optional[Any] = None,
) -> EvoAgent: ...
class UnifiedAgentFactory:
"""Unified factory for creating EvoAgent instances with consistent configuration.
This factory centralizes agent creation logic and creates EvoAgent instances
for all agent roles (analysts, risk manager, portfolio manager).
Example:
factory = UnifiedAgentFactory(
config_name="smoke_fullstack",
skills_manager=skills_manager,
)
# Create analyst
analyst = factory.create_analyst(
analyst_type="fundamentals_analyst",
model=model,
formatter=formatter,
)
# Create risk manager
risk_mgr = factory.create_risk_manager(
model=model,
formatter=formatter,
)
# Create portfolio manager
pm = factory.create_portfolio_manager(
model=model,
formatter=formatter,
initial_cash=100000.0,
margin_requirement=0.5,
)
"""
def __init__(
self,
config_name: str,
skills_manager: Any,
toolkit_factory: Optional[Any] = None,
):
"""Initialize the agent factory.
Args:
config_name: Run configuration name (e.g., "smoke_fullstack")
skills_manager: SkillsManager instance for skill/asset management
toolkit_factory: Optional factory function for creating toolkits
"""
self.config_name = config_name
self.skills_manager = skills_manager
self.toolkit_factory = toolkit_factory
def _create_toolkit(
self,
agent_type: str,
active_skill_dirs: Optional[list[Path]] = None,
owner: Optional[Any] = None,
) -> Any:
"""Create toolkit for an agent."""
if self.toolkit_factory is None:
from backend.agents.toolkit_factory import create_agent_toolkit
self.toolkit_factory = create_agent_toolkit
kwargs: dict[str, Any] = {
"active_skill_dirs": active_skill_dirs or [],
}
if owner is not None:
kwargs["owner"] = owner
return self.toolkit_factory(agent_type, self.config_name, **kwargs)
def _load_agent_config(self, agent_id: str) -> Any:
"""Load agent configuration from workspace."""
from backend.agents.agent_workspace import load_agent_workspace_config
workspace_dir = self.skills_manager.get_agent_asset_dir(
self.config_name, agent_id
)
config_path = workspace_dir / "agent.yaml"
if config_path.exists():
return load_agent_workspace_config(config_path)
# Return default config if no agent.yaml
return type(
"AgentConfig",
(),
{"prompt_files": ["SOUL.md"]},
)()
def _create_evo_agent(
self,
agent_id: str,
model: Any,
formatter: Any,
toolkit: Any,
agent_config: Any,
long_term_memory: Optional[Any] = None,
extra_kwargs: Optional[dict[str, Any]] = None,
) -> EvoAgent:
"""Create an EvoAgent instance."""
workspace_dir = self.skills_manager.get_agent_asset_dir(
self.config_name, agent_id
)
kwargs: dict[str, Any] = {
"agent_id": agent_id,
"config_name": self.config_name,
"workspace_dir": workspace_dir,
"model": model,
"formatter": formatter,
"skills_manager": self.skills_manager,
"prompt_files": getattr(agent_config, "prompt_files", ["SOUL.md"]),
"long_term_memory": long_term_memory,
}
if extra_kwargs:
kwargs.update(extra_kwargs)
agent = EvoAgent(**kwargs)
agent.toolkit = toolkit
setattr(agent, "run_id", self.config_name)
# Keep workspace_id for backward compatibility
setattr(agent, "workspace_id", self.config_name)
return agent
def create_analyst(
self,
analyst_type: str,
model: Any,
formatter: Any,
active_skill_dirs: Optional[list[Path]] = None,
long_term_memory: Optional[Any] = None,
) -> EvoAgent:
"""Create an analyst agent.
Args:
analyst_type: Type of analyst (fundamentals, technical, sentiment, valuation)
model: LLM model instance
formatter: Message formatter instance
active_skill_dirs: Optional list of active skill directories
long_term_memory: Optional long-term memory instance
Returns:
EvoAgent instance
"""
toolkit = self._create_toolkit(analyst_type, active_skill_dirs)
agent_config = self._load_agent_config(analyst_type)
return self._create_evo_agent(
agent_id=analyst_type,
model=model,
formatter=formatter,
toolkit=toolkit,
agent_config=agent_config,
long_term_memory=long_term_memory,
)
def create_risk_manager(
self,
model: Any,
formatter: Any,
active_skill_dirs: Optional[list[Path]] = None,
long_term_memory: Optional[Any] = None,
) -> EvoAgent:
"""Create a risk manager agent.
Args:
model: LLM model instance
formatter: Message formatter instance
active_skill_dirs: Optional list of active skill directories
long_term_memory: Optional long-term memory instance
Returns:
EvoAgent instance
"""
toolkit = self._create_toolkit("risk_manager", active_skill_dirs)
agent_config = self._load_agent_config("risk_manager")
return self._create_evo_agent(
agent_id="risk_manager",
model=model,
formatter=formatter,
toolkit=toolkit,
agent_config=agent_config,
long_term_memory=long_term_memory,
)
def create_portfolio_manager(
self,
model: Any,
formatter: Any,
initial_cash: float,
margin_requirement: float,
active_skill_dirs: Optional[list[Path]] = None,
long_term_memory: Optional[Any] = None,
) -> EvoAgent:
"""Create a portfolio manager agent.
Args:
model: LLM model instance
formatter: Message formatter instance
initial_cash: Initial cash allocation
margin_requirement: Margin requirement ratio
active_skill_dirs: Optional list of active skill directories
long_term_memory: Optional long-term memory instance
Returns:
EvoAgent instance
"""
agent_config = self._load_agent_config("portfolio_manager")
# For PM, toolkit is created after agent (needs owner reference)
workspace_dir = self.skills_manager.get_agent_asset_dir(
self.config_name, "portfolio_manager"
)
agent = EvoAgent(
agent_id="portfolio_manager",
config_name=self.config_name,
workspace_dir=workspace_dir,
model=model,
formatter=formatter,
skills_manager=self.skills_manager,
prompt_files=getattr(agent_config, "prompt_files", ["SOUL.md"]),
initial_cash=initial_cash,
margin_requirement=margin_requirement,
long_term_memory=long_term_memory,
)
agent.toolkit = self._create_toolkit(
"portfolio_manager", active_skill_dirs, owner=agent
)
setattr(agent, "run_id", self.config_name)
# Keep workspace_id for backward compatibility
setattr(agent, "workspace_id", self.config_name)
return agent
# Singleton factory instance cache
_factory_cache: dict[str, UnifiedAgentFactory] = {}
def get_agent_factory(
config_name: str,
skills_manager: Any,
toolkit_factory: Optional[Any] = None,
) -> UnifiedAgentFactory:
"""Get or create a cached agent factory instance.
Args:
config_name: Run configuration name
skills_manager: SkillsManager instance
toolkit_factory: Optional toolkit factory function
Returns:
UnifiedAgentFactory instance (cached per config_name)
"""
cache_key = f"{config_name}:{id(skills_manager)}"
if cache_key not in _factory_cache:
_factory_cache[cache_key] = UnifiedAgentFactory(
config_name=config_name,
skills_manager=skills_manager,
toolkit_factory=toolkit_factory,
)
return _factory_cache[cache_key]
def clear_factory_cache() -> None:
"""Clear the factory cache. Useful for testing."""
_factory_cache.clear()
__all__ = [
"UnifiedAgentFactory",
"AgentFactoryProtocol",
"get_agent_factory",
"clear_factory_cache",
]
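The factory cache above keys on `f"{config_name}:{id(skills_manager)}"`, i.e. object identity rather than equality: the same run name with a different `SkillsManager` object yields a distinct factory. A minimal stand-in (`StubFactory`, `get_factory` are hypothetical names) demonstrates the behavior:

```python
class StubFactory:
    """Stand-in for UnifiedAgentFactory."""

    def __init__(self, config_name, skills_manager):
        self.config_name = config_name
        self.skills_manager = skills_manager


_cache: dict = {}


def get_factory(config_name, skills_manager):
    # Keyed on object identity, mirroring f"{config_name}:{id(skills_manager)}".
    key = f"{config_name}:{id(skills_manager)}"
    if key not in _cache:
        _cache[key] = StubFactory(config_name, skills_manager)
    return _cache[key]


sm_a, sm_b = object(), object()
f1 = get_factory("run1", sm_a)
f2 = get_factory("run1", sm_a)
f3 = get_factory("run1", sm_b)
```

One consequence of identity-based keys: replacing the `SkillsManager` instance for a run naturally invalidates the cached factory without an explicit clear.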

View File

@@ -1,5 +1,5 @@
 # -*- coding: utf-8 -*-
-"""Workspace Manager - Create and manage agent workspaces."""
+"""Design-time workspace registry stored under `workspaces/`."""
 import logging
 from dataclasses import dataclass, field
@@ -323,5 +323,6 @@ class WorkspaceRegistry:
     yaml.safe_dump(config.to_dict(), f, allow_unicode=True, sort_keys=False)
-# Backward-compatible alias: legacy imports expect WorkspaceManager.
+# Backward-compatible alias: legacy imports expect WorkspaceManager to mean the
+# design-time `workspaces/` registry.
 WorkspaceManager = WorkspaceRegistry

View File

@@ -1,5 +1,5 @@
 # -*- coding: utf-8 -*-
-"""Initialize run-scoped agent workspace assets."""
+"""Initialize run-scoped agent workspace assets under `runs/<run_id>/`."""
 from pathlib import Path
 from typing import Dict, Iterable, Optional
@@ -312,12 +312,21 @@ class RunWorkspaceManager:
             "- 审阅分析以理解市场观点\n"
             "- 在做决策前先考虑风险警告\n"
             "- 评估当前投资组合持仓、现金与保证金占用\n"
+            "- 在做最终决策前,先判断当前团队是否足以覆盖任务;如果覆盖不足,不要勉强给结论,先扩编团队\n"
+            "- 当现有团队覆盖不足、观点分歧过大、或出现新的专业分析需求时,优先考虑动态创建合适的分析师,再继续讨论\n"
             "- 决策必须与整体投资目标和风险约束一致\n\n"
+            "动态扩编触发条件:\n"
+            "- 出现当前团队未覆盖的研究领域:期权、宏观、行业专项、事件驱动、监管冲击、加密资产、商品链、特殊市场结构\n"
+            "- 关键 ticker 的结论依赖某种专业知识,但现有 analyst 无法提供直接证据链\n"
+            "- 分析师之间存在明显冲突,且仅靠风险经理无法完成裁决\n"
+            "- 你需要第二个同类型但不同风格的 analyst 来验证一个高风险假设\n\n"
             "决策类型:\n"
             '- `long`:看涨,建议买入\n'
             '- `short`:看跌,建议卖出或做空\n'
             '- `hold`:中性,维持当前持仓\n\n'
             "输出要求:\n"
+            "- 触发扩编条件时,必须先使用动态团队工具创建分析师,并在继续决策前吸收其分析输入\n"
+            "- 不允许口头声称“需要更多分析”但不实际调用创建工具\n"
             "- 使用 `make_decision` 工具记录每个股票的最终决策\n"
             "- 记录完成后给出投资逻辑总结\n"
             "- 最终总结必须使用简体中文\n"
@@ -327,6 +336,10 @@ class RunWorkspaceManager:
             "- 在决定数量时考虑可用现金,不要超出现金允许范围\n"
             "- 考虑做空头寸的保证金要求\n"
             "- 仓位规模相对于组合总资产保持保守\n"
+            "- 当任务涉及当前团队未覆盖的领域(如期权、宏观、行业专项、事件驱动、加密资产等)时,应优先创建或克隆对应分析师,而不是勉强用现有团队输出低质量结论\n"
+            "- 当分析师之间长期存在高冲突且缺乏裁决信息时,应考虑增加一个补充视角的分析师\n"
+            "- 如果你已经识别出覆盖缺口,却没有调用动态团队工具补齐团队,就不应直接输出高置信度交易决策\n"
+            "- 对新创建分析师的输出必须纳入本轮决策依据,不能创建后忽略\n"
             "- 始终为决策提供清晰理由\n"
             "- 不要输出英文投资报告或英文结论\n"
         )
@@ -479,5 +492,6 @@ class RunWorkspaceManager:
         )
-# Backward-compatible alias: code importing WorkspaceManager from this module should continue to work.
+# Backward-compatible alias: many runtime paths still import WorkspaceManager
+# from this module when they mean the run-scoped manager.
 WorkspaceManager = RunWorkspaceManager

View File

@@ -11,13 +11,15 @@ Provides REST API endpoints for:
 from .agents import router as agents_router
 from .workspaces import router as workspaces_router
 from .guard import router as guard_router
-from .openclaw import router as openclaw_router
 from .runtime import router as runtime_router
+from .runs import router as runs_router
+from .dynamic_team import router as dynamic_team_router
 __all__ = [
     "agents_router",
     "workspaces_router",
     "guard_router",
-    "openclaw_router",
     "runtime_router",
+    "runs_router",
+    "dynamic_team_router",
 ]

View File

@@ -1,29 +1,28 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
""" """Agent API routes for design-time workspace registry CRUD only."""
Agent API Routes
Provides REST API endpoints for agent management within workspaces.
"""
import logging import logging
import os
import tempfile
from pathlib import Path
from typing import Any, Dict, List, Optional
from fastapi import APIRouter, HTTPException, Depends, Body, UploadFile, File, Form
from fastapi import APIRouter, HTTPException, Depends
from pydantic import BaseModel, Field
from backend.agents import AgentFactory, get_registry
from backend.agents.workspace_manager import RunWorkspaceManager
from backend.agents.agent_workspace import load_agent_workspace_config
from backend.agents.skills_manager import SkillsManager
from backend.agents.toolkit_factory import load_agent_profiles
from backend.config.bootstrap_config import get_bootstrap_config_for_run
from backend.llm.models import get_agent_model_info
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/workspaces/{workspace_id}/agents", tags=["agents"])
DESIGN_SCOPE = "design_workspace"
def _design_scope_fields() -> dict[str, str]:
return {
"scope_type": DESIGN_SCOPE,
"scope_note": (
"For design-time CRUD routes on this surface, `workspace_id` refers "
"to the persistent registry under `workspaces/`."
),
}
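The `_design_scope_fields()` helper above is spread into each response so every payload carries the same scope metadata. A minimal sketch of the pattern (field values abbreviated, plain dicts standing in for the Pydantic response models):

```python
# Shared scope metadata, defined once and merged into every response.
DESIGN_SCOPE = "design_workspace"

def _design_scope_fields() -> dict:
    return {
        "scope_type": DESIGN_SCOPE,
        "scope_note": "workspace_id refers to the persistent registry under workspaces/",
    }

def agent_payload(agent_id: str) -> dict:
    # Route handlers spread the shared fields with ** into each response.
    return {"agent_id": agent_id, "status": "inactive", **_design_scope_fields()}

payload = agent_payload("news_analyst")
print(payload["scope_type"])  # design_workspace
```

Centralizing the fields in one helper means the scope note can be reworded in a single place without touching every route.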
# Request/Response Models
@@ -38,26 +37,9 @@ class CreateAgentRequest(BaseModel):
class UpdateAgentRequest(BaseModel):
"""Request to update an agent."""
"""Request to update design-time agent metadata."""
name: Optional[str] = None
description: Optional[str] = None
enabled_skills: Optional[List[str]] = None
disabled_skills: Optional[List[str]] = None
class InstallExternalSkillRequest(BaseModel):
"""Request to install an external skill for one agent."""
source: str = Field(..., description="Directory path, zip path, or http(s) zip URL")
name: Optional[str] = Field(None, description="Optional override skill name")
activate: bool = Field(True, description="Whether to enable skill immediately")
class LocalSkillRequest(BaseModel):
skill_name: str = Field(..., description="Local skill name")
class LocalSkillContentRequest(BaseModel):
content: str = Field(..., description="Updated SKILL.md content")
class AgentResponse(BaseModel):
@@ -68,30 +50,8 @@ class AgentResponse(BaseModel):
config_path: str
agent_dir: str
status: str = "inactive"
scope_type: str = DESIGN_SCOPE
scope_note: Optional[str] = None
class AgentFileResponse(BaseModel):
"""Agent file content response."""
filename: str
content: str
class AgentProfileResponse(BaseModel):
agent_id: str
workspace_id: str
profile: Dict[str, Any]
class AgentSkillsResponse(BaseModel):
agent_id: str
workspace_id: str
skills: List[Dict[str, Any]]
class SkillDetailResponse(BaseModel):
agent_id: str
workspace_id: str
skill: Dict[str, Any]
# Dependencies
@@ -100,16 +60,6 @@ def get_agent_factory():
return AgentFactory()
def get_workspace_manager():
"""Get run-scoped workspace manager instance."""
return RunWorkspaceManager()
def get_skills_manager():
"""Get SkillsManager instance."""
return SkillsManager()
# Routes
@router.post("", response_model=AgentResponse)
async def create_agent(
@@ -119,7 +69,7 @@ async def create_agent(
registry = Depends(get_registry),
):
"""
Create a new agent in a workspace.
Create a new agent in a design-time workspace registry entry.
Args:
workspace_id: Workspace identifier
@@ -162,6 +112,7 @@ async def create_agent(
config_path=str(agent.config_path),
agent_dir=str(agent.agent_dir),
status="inactive",
**_design_scope_fields(),
)
except ValueError as e:
@@ -174,7 +125,7 @@ async def list_agents(
factory: AgentFactory = Depends(get_agent_factory),
):
"""
List all agents in a workspace.
List all agents in a design-time workspace registry entry.
Args:
workspace_id: Workspace identifier
@@ -192,6 +143,7 @@ async def list_agents(
config_path=agent["config_path"],
agent_dir=str(Path(agent["config_path"]).parent),
status="inactive",
**_design_scope_fields(),
)
for agent in agents_data
]
@@ -206,7 +158,7 @@ async def get_agent(
registry = Depends(get_registry),
):
"""
Get agent details.
Get design-time agent details from the persistent workspace registry.
Args:
workspace_id: Workspace identifier
@@ -227,111 +179,10 @@ async def get_agent(
config_path=agent_info.config_path,
agent_dir=agent_info.agent_dir,
status=agent_info.status,
**_design_scope_fields(),
)
@router.get("/{agent_id}/profile", response_model=AgentProfileResponse)
async def get_agent_profile(
workspace_id: str,
agent_id: str,
skills_manager: SkillsManager = Depends(get_skills_manager),
):
asset_dir = skills_manager.get_agent_asset_dir(workspace_id, agent_id)
agent_config = load_agent_workspace_config(asset_dir / "agent.yaml")
profiles = load_agent_profiles()
profile = profiles.get(agent_id, {})
bootstrap = get_bootstrap_config_for_run(skills_manager.project_root, workspace_id)
override = bootstrap.agent_override(agent_id)
active_tool_groups = override.get("active_tool_groups", agent_config.active_tool_groups or profile.get("active_tool_groups", []))
if not isinstance(active_tool_groups, list):
active_tool_groups = []
disabled_tool_groups = agent_config.disabled_tool_groups
if disabled_tool_groups:
disabled_set = set(disabled_tool_groups)
active_tool_groups = [group_name for group_name in active_tool_groups if group_name not in disabled_set]
default_skills = profile.get("skills", [])
if not isinstance(default_skills, list):
default_skills = []
resolved_skills = skills_manager.resolve_agent_skill_names(
config_name=workspace_id,
agent_id=agent_id,
default_skills=default_skills,
)
prompt_files = agent_config.prompt_files or ["SOUL.md", "PROFILE.md", "AGENTS.md", "POLICY.md", "MEMORY.md"]
model_name, model_provider = get_agent_model_info(agent_id)
return AgentProfileResponse(
agent_id=agent_id,
workspace_id=workspace_id,
profile={
"model_name": model_name,
"model_provider": model_provider,
"prompt_files": prompt_files,
"default_skills": default_skills,
"resolved_skills": resolved_skills,
"active_tool_groups": active_tool_groups,
"disabled_tool_groups": disabled_tool_groups,
"enabled_skills": agent_config.enabled_skills,
"disabled_skills": agent_config.disabled_skills,
},
)
@router.get("/{agent_id}/skills", response_model=AgentSkillsResponse)
async def get_agent_skills(
workspace_id: str,
agent_id: str,
skills_manager: SkillsManager = Depends(get_skills_manager),
):
agent_asset_dir = skills_manager.get_agent_asset_dir(workspace_id, agent_id)
agent_config = load_agent_workspace_config(agent_asset_dir / "agent.yaml")
resolved_skills = set(skills_manager.resolve_agent_skill_names(config_name=workspace_id, agent_id=agent_id, default_skills=[]))
enabled = set(agent_config.enabled_skills)
disabled = set(agent_config.disabled_skills)
payload = []
for item in skills_manager.list_agent_skill_catalog(workspace_id, agent_id):
if item.skill_name in disabled:
status = "disabled"
elif item.skill_name in enabled:
status = "enabled"
elif item.skill_name in resolved_skills:
status = "active"
else:
status = "available"
payload.append({
"skill_name": item.skill_name,
"name": item.name,
"description": item.description,
"version": item.version,
"source": item.source,
"tools": item.tools,
"status": status,
})
return AgentSkillsResponse(agent_id=agent_id, workspace_id=workspace_id, skills=payload)
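The status loop in `get_agent_skills` above encodes a precedence order: an explicit disable wins over an explicit enable, which wins over profile-resolved skills; everything else is merely available. A minimal sketch of that precedence on its own:

```python
# Skill status precedence: disabled > enabled > active (resolved) > available.
def skill_status(name: str, enabled: set, disabled: set, resolved: set) -> str:
    if name in disabled:
        return "disabled"
    if name in enabled:
        return "enabled"
    if name in resolved:
        return "active"
    return "available"

# A skill that is both enabled and disabled reports as disabled.
print(skill_status("news", {"news"}, {"news"}, set()))  # disabled
# A skill only present in the resolved set reports as active.
print(skill_status("macro", set(), set(), {"macro"}))  # active
```

Checking `disabled` first guarantees that a user-level disable always masks any default or override that would otherwise activate the skill.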
@router.get("/{agent_id}/skills/{skill_name}", response_model=SkillDetailResponse)
async def get_agent_skill_detail(
workspace_id: str,
agent_id: str,
skill_name: str,
skills_manager: SkillsManager = Depends(get_skills_manager),
):
try:
detail = skills_manager.load_agent_skill_document(
config_name=workspace_id,
agent_id=agent_id,
skill_name=skill_name,
)
except FileNotFoundError:
raise HTTPException(status_code=404, detail=f"Unknown skill: {skill_name}")
return SkillDetailResponse(agent_id=agent_id, workspace_id=workspace_id, skill=detail)
@router.delete("/{agent_id}")
async def delete_agent(
workspace_id: str,
@@ -397,16 +248,6 @@ async def update_agent(
if metadata_updates:
registry.update_metadata(agent_id, metadata_updates)
# Update skills if provided
if request.enabled_skills or request.disabled_skills:
skills_manager = SkillsManager()
skills_manager.update_agent_skill_overrides(
config_name=workspace_id,
agent_id=agent_id,
enable=request.enabled_skills or [],
disable=request.disabled_skills or [],
)
# Get updated info
agent_info = registry.get(agent_id)
return AgentResponse(
@@ -416,294 +257,5 @@ async def update_agent(
config_path=agent_info.config_path,
agent_dir=agent_info.agent_dir,
status=agent_info.status,
**_design_scope_fields(),
)
@router.post("/{agent_id}/skills/{skill_name}/enable")
async def enable_skill(
workspace_id: str,
agent_id: str,
skill_name: str,
registry = Depends(get_registry),
):
"""
Enable a skill for an agent.
Args:
workspace_id: Workspace identifier
agent_id: Agent identifier
skill_name: Skill name to enable
Returns:
Success message
"""
agent_info = registry.get(agent_id)
if not agent_info or agent_info.workspace_id != workspace_id:
raise HTTPException(status_code=404, detail=f"Agent '{agent_id}' not found")
skills_manager = SkillsManager()
result = skills_manager.update_agent_skill_overrides(
config_name=workspace_id,
agent_id=agent_id,
enable=[skill_name],
)
return {
"message": f"Skill '{skill_name}' enabled for agent '{agent_id}'",
"enabled_skills": result["enabled_skills"],
}
@router.post("/{agent_id}/skills/{skill_name}/disable")
async def disable_skill(
workspace_id: str,
agent_id: str,
skill_name: str,
registry = Depends(get_registry),
):
"""
Disable a skill for an agent.
Args:
workspace_id: Workspace identifier
agent_id: Agent identifier
skill_name: Skill name to disable
Returns:
Success message
"""
agent_info = registry.get(agent_id)
if not agent_info or agent_info.workspace_id != workspace_id:
raise HTTPException(status_code=404, detail=f"Agent '{agent_id}' not found")
skills_manager = SkillsManager()
result = skills_manager.update_agent_skill_overrides(
config_name=workspace_id,
agent_id=agent_id,
disable=[skill_name],
)
return {
"message": f"Skill '{skill_name}' disabled for agent '{agent_id}'",
"disabled_skills": result["disabled_skills"],
}
@router.post("/{agent_id}/skills/install")
async def install_external_skill(
workspace_id: str,
agent_id: str,
request: InstallExternalSkillRequest,
registry=Depends(get_registry),
):
"""Install an external skill into one agent's local skills."""
agent_info = registry.get(agent_id)
if not agent_info or agent_info.workspace_id != workspace_id:
raise HTTPException(status_code=404, detail=f"Agent '{agent_id}' not found")
skills_manager = SkillsManager()
try:
result = skills_manager.install_external_skill_for_agent(
config_name=workspace_id,
agent_id=agent_id,
source=request.source,
skill_name=request.name,
activate=request.activate,
)
except (FileNotFoundError, ValueError) as exc:
raise HTTPException(status_code=400, detail=str(exc))
return {
"message": f"Installed external skill '{result['skill_name']}' for '{agent_id}'",
**result,
}
@router.post("/{agent_id}/skills/local")
async def create_local_skill(
workspace_id: str,
agent_id: str,
request: LocalSkillRequest,
registry=Depends(get_registry),
):
agent_info = registry.get(agent_id)
if not agent_info or agent_info.workspace_id != workspace_id:
raise HTTPException(status_code=404, detail=f"Agent '{agent_id}' not found")
skills_manager = SkillsManager()
try:
skills_manager.create_agent_local_skill(
config_name=workspace_id,
agent_id=agent_id,
skill_name=request.skill_name,
)
except (ValueError, FileExistsError) as exc:
raise HTTPException(status_code=400, detail=str(exc))
return {"message": f"Created local skill '{request.skill_name}' for '{agent_id}'"}
@router.put("/{agent_id}/skills/local/{skill_name}")
async def update_local_skill(
workspace_id: str,
agent_id: str,
skill_name: str,
request: LocalSkillContentRequest,
registry=Depends(get_registry),
):
agent_info = registry.get(agent_id)
if not agent_info or agent_info.workspace_id != workspace_id:
raise HTTPException(status_code=404, detail=f"Agent '{agent_id}' not found")
skills_manager = SkillsManager()
try:
skills_manager.update_agent_local_skill(
config_name=workspace_id,
agent_id=agent_id,
skill_name=skill_name,
content=request.content,
)
except (ValueError, FileNotFoundError) as exc:
raise HTTPException(status_code=400, detail=str(exc))
return {"message": f"Updated local skill '{skill_name}' for '{agent_id}'"}
@router.delete("/{agent_id}/skills/local/{skill_name}")
async def delete_local_skill(
workspace_id: str,
agent_id: str,
skill_name: str,
registry=Depends(get_registry),
):
agent_info = registry.get(agent_id)
if not agent_info or agent_info.workspace_id != workspace_id:
raise HTTPException(status_code=404, detail=f"Agent '{agent_id}' not found")
skills_manager = SkillsManager()
try:
skills_manager.delete_agent_local_skill(
config_name=workspace_id,
agent_id=agent_id,
skill_name=skill_name,
)
skills_manager.forget_agent_skill_overrides(
config_name=workspace_id,
agent_id=agent_id,
skill_names=[skill_name],
)
except (ValueError, FileNotFoundError) as exc:
raise HTTPException(status_code=400, detail=str(exc))
return {"message": f"Deleted local skill '{skill_name}' for '{agent_id}'"}
@router.post("/{agent_id}/skills/upload")
async def upload_external_skill(
workspace_id: str,
agent_id: str,
file: UploadFile = File(...),
name: Optional[str] = Form(None),
activate: bool = Form(True),
registry=Depends(get_registry),
):
"""Upload a zip skill package from frontend and install for one agent."""
agent_info = registry.get(agent_id)
if not agent_info or agent_info.workspace_id != workspace_id:
raise HTTPException(status_code=404, detail=f"Agent '{agent_id}' not found")
original_name = (file.filename or "").strip()
if not original_name.lower().endswith(".zip"):
raise HTTPException(status_code=400, detail="Uploaded file must be a .zip archive")
suffix = Path(original_name).suffix or ".zip"
temp_path: Optional[str] = None
try:
with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as tmp:
temp_path = tmp.name
content = await file.read()
tmp.write(content)
skills_manager = SkillsManager()
result = skills_manager.install_external_skill_for_agent(
config_name=workspace_id,
agent_id=agent_id,
source=temp_path,
skill_name=name,
activate=activate,
)
except (FileNotFoundError, ValueError) as exc:
raise HTTPException(status_code=400, detail=str(exc))
finally:
try:
await file.close()
except Exception as e:
logger.warning(f"Failed to close uploaded file: {e}")
if temp_path and os.path.exists(temp_path):
os.remove(temp_path)
return {
"message": f"Uploaded and installed external skill '{result['skill_name']}' for '{agent_id}'",
**result,
}
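The upload route above follows a strict temp-file discipline: persist the uploaded bytes to a `NamedTemporaryFile`, install from that path, and delete the file in `finally` even when installation raises. A minimal sketch of that pattern, with a hypothetical `install` callable standing in for `SkillsManager.install_external_skill_for_agent`:

```python
import os
import tempfile

def install_from_bytes(payload: bytes, install) -> str:
    temp_path = None
    try:
        # delete=False so the path stays valid after the `with` block closes it.
        with tempfile.NamedTemporaryFile(delete=False, suffix=".zip") as tmp:
            temp_path = tmp.name
            tmp.write(payload)
        return install(temp_path)
    finally:
        # Runs on success and on error, so no orphaned temp files remain.
        if temp_path and os.path.exists(temp_path):
            os.remove(temp_path)

result = install_from_bytes(b"fake-zip-bytes", lambda path: "installed")
print(result)  # installed
```

Writing inside the `with` block and installing after it closes avoids handing a half-flushed file to the installer on platforms that buffer writes.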
@router.get("/{agent_id}/files/{filename}", response_model=AgentFileResponse)
async def get_agent_file(
workspace_id: str,
agent_id: str,
filename: str,
workspace_manager: RunWorkspaceManager = Depends(get_workspace_manager),
):
"""
Read an agent's workspace file.
Args:
workspace_id: Workspace identifier
agent_id: Agent identifier
filename: File to read (e.g., SOUL.md, PROFILE.md)
Returns:
File content
"""
try:
content = workspace_manager.load_agent_file(
config_name=workspace_id,
agent_id=agent_id,
filename=filename,
)
return AgentFileResponse(filename=filename, content=content)
except FileNotFoundError:
raise HTTPException(status_code=404, detail=f"File '{filename}' not found")
@router.put("/{agent_id}/files/{filename}", response_model=AgentFileResponse)
async def update_agent_file(
workspace_id: str,
agent_id: str,
filename: str,
content: str = Body(..., media_type="text/plain"),
workspace_manager: RunWorkspaceManager = Depends(get_workspace_manager),
):
"""
Update an agent's workspace file.
Args:
workspace_id: Workspace identifier
agent_id: Agent identifier
filename: File to update
content: New file content
Returns:
Updated file information
"""
try:
workspace_manager.update_agent_file(
config_name=workspace_id,
agent_id=agent_id,
filename=filename,
content=content,
)
return AgentFileResponse(filename=filename, content=content)
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))

404
backend/api/dynamic_team.py Normal file
View File

@@ -0,0 +1,404 @@
# -*- coding: utf-8 -*-
"""Dynamic Team API - REST endpoints for managing analyst team dynamically.
This module provides API endpoints for:
- Creating new analysts with custom configuration
- Cloning existing analysts
- Removing analysts
- Listing available analyst types
- Getting analyst information
- Managing team composition
These endpoints allow both the PM agent (via tool calls) and frontend
(via HTTP) to manage the analyst team dynamically.
"""
from __future__ import annotations
import logging
from pathlib import Path
from typing import Any, Dict, List, Optional
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel, Field
from backend.agents.dynamic_team_types import (
AnalystPersona,
AnalystConfig,
AnalystTypeInfo,
)
from backend.config.constants import ANALYST_TYPES
from backend.agents.prompt_loader import get_prompt_loader
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/dynamic-team", tags=["dynamic-team"])
PROJECT_ROOT = Path(__file__).resolve().parents[2]
# Pydantic models for API requests/responses
class AnalystPersonaRequest(BaseModel):
"""Request model for analyst persona definition."""
name: str = Field(..., description="Display name for the analyst")
focus: List[str] = Field(default_factory=list, description="List of focus areas")
description: str = Field(..., description="Detailed description")
preferred_tools: Optional[List[str]] = Field(None, description="Preferred tool categories")
icon: Optional[str] = Field(None, description="Icon identifier")
class CreateAnalystRequest(BaseModel):
"""Request model for creating a new analyst."""
agent_id: str = Field(..., description="Unique identifier for the new analyst")
analyst_type: str = Field(..., description="Base type or custom identifier")
persona: Optional[AnalystPersonaRequest] = Field(None, description="Custom persona definition")
soul_md: Optional[str] = Field(None, description="Custom SOUL.md content")
agents_md: Optional[str] = Field(None, description="Custom AGENTS.md content")
profile_md: Optional[str] = Field(None, description="Custom PROFILE.md content")
bootstrap_md: Optional[str] = Field(None, description="Custom BOOTSTRAP.md content")
model_name: Optional[str] = Field(None, description="Override default LLM model")
skills: Optional[List[str]] = Field(None, description="List of skill IDs to enable")
tags: Optional[List[str]] = Field(None, description="Classification tags")
class CloneAnalystRequest(BaseModel):
"""Request model for cloning an analyst."""
source_id: str = Field(..., description="ID of the analyst to clone")
new_id: str = Field(..., description="Unique identifier for the new analyst")
name: Optional[str] = Field(None, description="New display name")
focus_additions: Optional[List[str]] = Field(None, description="Additional focus areas")
description_override: Optional[str] = Field(None, description="New description")
model_name: Optional[str] = Field(None, description="Override model from source")
class RegisterTypeRequest(BaseModel):
"""Request model for registering a new analyst type."""
type_id: str = Field(..., description="Unique identifier for this type")
name: str = Field(..., description="Display name")
focus: List[str] = Field(..., description="List of focus areas")
description: str = Field(..., description="Detailed description")
preferred_tools: Optional[List[str]] = Field(None, description="Preferred tool categories")
class AnalystResponse(BaseModel):
"""Response model for analyst operations."""
success: bool
agent_id: Optional[str] = None
message: str
error: Optional[str] = None
class AnalystTypeResponse(BaseModel):
"""Response model for analyst type information."""
type_id: str
name: str
description: str
is_builtin: bool
source: str
class AnalystInfoResponse(BaseModel):
"""Response model for detailed analyst information."""
found: bool
agent_id: str
config: Optional[Dict[str, Any]] = None
is_custom: bool = False
is_clone: bool = False
parent_id: Optional[str] = None
message: Optional[str] = None
class TeamSummaryResponse(BaseModel):
"""Response model for team summary."""
total_analysts: int
custom_analysts: int
cloned_analysts: int
analysts: List[Dict[str, Any]]
registered_types: int
# Helper functions to access the current pipeline and team controller
def _get_pipeline(run_id: str) -> Optional[Any]:
"""Get the TradingPipeline instance for a run.
Args:
run_id: The run configuration ID
Returns:
TradingPipeline instance or None if not found
"""
# Import here to avoid circular imports
try:
from backend.apps.runtime_service import get_runtime_state
runtime_state = get_runtime_state()
if runtime_state and hasattr(runtime_state, 'pipeline'):
return runtime_state.pipeline
except Exception as e:
logger.warning(f"Could not get pipeline for run {run_id}: {e}")
return None
def _get_controller(run_id: str) -> Optional[Any]:
"""Get the DynamicTeamController for a run.
Args:
run_id: The run configuration ID
Returns:
DynamicTeamController instance or None if not available
"""
try:
from backend.tools.dynamic_team_tools import get_controller
return get_controller()
except Exception as e:
logger.warning(f"Could not get controller for run {run_id}: {e}")
return None
# API Endpoints
@router.get("/types", response_model=List[AnalystTypeResponse])
async def list_analyst_types() -> List[AnalystTypeResponse]:
"""List all available analyst types.
Returns both built-in types (from ANALYST_TYPES) and runtime-registered types.
"""
result = []
# Add built-in types
for type_id, info in ANALYST_TYPES.items():
result.append(AnalystTypeResponse(
type_id=type_id,
name=info.get("display_name", type_id),
description=info.get("description", ""),
is_builtin=True,
source="constants",
))
# Try to get runtime registered types
controller = _get_controller("default")
if controller:
for type_id, persona in controller._registered_types.items():
result.append(AnalystTypeResponse(
type_id=type_id,
name=persona.name,
description=persona.description,
is_builtin=False,
source="runtime",
))
return result
@router.get("/personas")
async def get_personas() -> Dict[str, Any]:
"""Get all analyst personas from personas.yaml.
Returns the persona definitions used for analyst initialization.
"""
try:
personas = get_prompt_loader().load_yaml_config("analyst", "personas")
return {"success": True, "personas": personas}
except Exception as e:
logger.error(f"Failed to load personas: {e}")
raise HTTPException(status_code=500, detail=f"Failed to load personas: {e}")
@router.post("/runs/{run_id}/analysts", response_model=AnalystResponse)
async def create_analyst(
run_id: str,
request: CreateAnalystRequest,
) -> AnalystResponse:
"""Create a new analyst in the specified run.
Args:
run_id: The run configuration ID
request: Analyst creation configuration
Returns:
Result of the creation operation
"""
controller = _get_controller(run_id)
if not controller:
raise HTTPException(
status_code=503,
detail="Dynamic team controller not available. Is the pipeline running?"
)
# Build persona if provided
persona = None
if request.persona:
persona = AnalystPersona(
name=request.persona.name,
focus=request.persona.focus,
description=request.persona.description,
preferred_tools=request.persona.preferred_tools,
icon=request.persona.icon,
)
# Build config
config = AnalystConfig(
persona=persona,
analyst_type=request.analyst_type if request.analyst_type in ANALYST_TYPES else None,
soul_md=request.soul_md,
agents_md=request.agents_md,
profile_md=request.profile_md,
bootstrap_md=request.bootstrap_md,
model_name=request.model_name,
skills=request.skills or [],
tags=request.tags or [],
)
# Create the analyst
result = controller.create_analyst(
agent_id=request.agent_id,
analyst_type=request.analyst_type,
name=persona.name if persona else None,
focus=persona.focus if persona else None,
description=persona.description if persona else None,
soul_md=config.soul_md,
agents_md=config.agents_md,
model_name=config.model_name,
)
return AnalystResponse(**result)
@router.post("/runs/{run_id}/analysts/clone", response_model=AnalystResponse)
async def clone_analyst(
run_id: str,
request: CloneAnalystRequest,
) -> AnalystResponse:
"""Clone an existing analyst.
Args:
run_id: The run configuration ID
request: Clone configuration
Returns:
Result of the clone operation
"""
controller = _get_controller(run_id)
if not controller:
raise HTTPException(
status_code=503,
detail="Dynamic team controller not available. Is the pipeline running?"
)
result = controller.clone_analyst(
source_id=request.source_id,
new_id=request.new_id,
name=request.name,
focus_additions=request.focus_additions,
description_override=request.description_override,
model_name=request.model_name,
)
return AnalystResponse(**result)
@router.delete("/runs/{run_id}/analysts/{agent_id}", response_model=AnalystResponse)
async def remove_analyst(run_id: str, agent_id: str) -> AnalystResponse:
"""Remove a dynamically created analyst.
Args:
run_id: The run configuration ID
agent_id: The analyst to remove
Returns:
Result of the removal operation
"""
controller = _get_controller(run_id)
if not controller:
raise HTTPException(
status_code=503,
detail="Dynamic team controller not available. Is the pipeline running?"
)
result = controller.remove_analyst(agent_id)
return AnalystResponse(**result)
@router.get("/runs/{run_id}/analysts/{agent_id}", response_model=AnalystInfoResponse)
async def get_analyst_info(run_id: str, agent_id: str) -> AnalystInfoResponse:
"""Get information about a specific analyst.
Args:
run_id: The run configuration ID
agent_id: The analyst ID
Returns:
Analyst configuration and status
"""
controller = _get_controller(run_id)
if not controller:
raise HTTPException(
status_code=503,
detail="Dynamic team controller not available. Is the pipeline running?"
)
result = controller.get_analyst_info(agent_id)
return AnalystInfoResponse(**result)
@router.get("/runs/{run_id}/summary", response_model=TeamSummaryResponse)
async def get_team_summary(run_id: str) -> TeamSummaryResponse:
"""Get a summary of the current analyst team.
Args:
run_id: The run configuration ID
Returns:
Team composition information
"""
controller = _get_controller(run_id)
if not controller:
raise HTTPException(
status_code=503,
detail="Dynamic team controller not available. Is the pipeline running?"
)
result = controller.get_team_summary()
return TeamSummaryResponse(**result)
@router.post("/runs/{run_id}/types", response_model=AnalystTypeResponse)
async def register_analyst_type(
run_id: str,
request: RegisterTypeRequest,
) -> AnalystTypeResponse:
"""Register a new analyst type.
Args:
run_id: The run configuration ID
request: Type registration configuration
Returns:
Registered type information
"""
controller = _get_controller(run_id)
if not controller:
raise HTTPException(
status_code=503,
detail="Dynamic team controller not available. Is the pipeline running?"
)
result = controller.register_analyst_type(
type_id=request.type_id,
name=request.name,
focus=request.focus,
description=request.description,
preferred_tools=request.preferred_tools,
)
if not result.get("success", False):
raise HTTPException(status_code=400, detail=result.get("message", "Registration failed"))
return AnalystTypeResponse(
type_id=request.type_id,
name=request.name,
description=request.description,
is_builtin=False,
source="runtime",
)
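In `create_analyst` above, the request's `analyst_type` is only kept as a built-in base type when it appears in `ANALYST_TYPES`; anything else is treated as a custom identifier and the base type falls back to `None`. A minimal sketch of that fallback, with a stand-in dict in place of the real constant from `backend.config.constants`:

```python
# Stand-in for the real ANALYST_TYPES constant (illustrative keys only).
ANALYST_TYPES = {"fundamental": {}, "technical": {}, "news": {}}

def resolve_base_type(analyst_type: str):
    # Known built-in types pass through; unknown identifiers yield None,
    # signalling that a custom persona must supply the configuration.
    return analyst_type if analyst_type in ANALYST_TYPES else None

print(resolve_base_type("technical"))  # technical
print(resolve_base_type("crypto_events"))  # None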

View File

@@ -7,7 +7,7 @@ Provides REST API endpoints for tool guard operations.
from __future__ import annotations
from typing import Any, Dict, List, Optional
from datetime import datetime
from datetime import datetime, timezone
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel, Field
@@ -29,7 +29,7 @@ class ToolCallRequest(BaseModel):
tool_name: str = Field(..., description="Name of the tool")
tool_input: Dict[str, Any] = Field(default_factory=dict, description="Tool parameters")
agent_id: str = Field(..., description="Agent making the request")
workspace_id: str = Field(..., description="Workspace context")
workspace_id: str = Field(..., description="Run context; historical field name retained for compatibility")
session_id: Optional[str] = Field(None, description="Session identifier")
@@ -46,6 +46,21 @@ class DenyRequest(BaseModel):
reason: Optional[str] = Field(None, description="Reason for denial")
class BatchApprovalRequest(BaseModel):
"""Request to approve multiple tool calls."""
approval_ids: List[str] = Field(..., description="List of approval request IDs")
one_time: bool = Field(True, description="Whether these are one-time approvals")
class BatchApprovalResponse(BaseModel):
"""Response for batch approval operation."""
approved: List[ApprovalResponse] = Field(default_factory=list, description="Successfully approved")
failed: List[Dict[str, Any]] = Field(default_factory=list, description="Failed approvals with errors")
total_requested: int
total_approved: int
total_failed: int
class ToolFinding(BaseModel):
"""Tool guard finding."""
severity: SeverityLevel
@@ -61,11 +76,17 @@ class ApprovalResponse(BaseModel):
tool_input: Dict[str, Any]
agent_id: str
workspace_id: str
run_id: str
session_id: Optional[str] = None
findings: List[ToolFinding] = Field(default_factory=list)
created_at: str
resolved_at: Optional[str] = None
resolved_by: Optional[str] = None
scope_type: str = "runtime_run"
scope_note: str = (
"Approvals are scoped to the active runtime run. `workspace_id` is "
"retained as a compatibility field name; prefer `run_id` for display."
)
class PendingApprovalsResponse(BaseModel):
@@ -91,6 +112,7 @@ def _to_response(record: ApprovalRecord) -> ApprovalResponse:
tool_input=record.tool_input,
agent_id=record.agent_id,
workspace_id=record.workspace_id,
run_id=record.workspace_id,
session_id=record.session_id,
findings=[ToolFinding(**f.to_dict()) for f in record.findings],
created_at=record.created_at.isoformat(),
@@ -124,7 +146,7 @@ async def check_tool_call(
if request.tool_name in SAFE_TOOLS:
record.status = ApprovalStatus.APPROVED
record.resolved_at = datetime.now(timezone.utc)
record.resolved_by = "system"
STORE.set_status( STORE.set_status(
record.approval_id, record.approval_id,
@@ -156,9 +178,12 @@ async def approve_tool_call(
if record.status != ApprovalStatus.PENDING:
raise HTTPException(status_code=400, detail=f"Approval already {record.status}")
record = STORE.set_status(
request.approval_id,
ApprovalStatus.APPROVED,
resolved_by="user",
notify_request=True,
)
return _to_response(record)
@@ -183,9 +208,12 @@ async def deny_tool_call(
if record.status != ApprovalStatus.PENDING:
raise HTTPException(status_code=400, detail=f"Approval already {record.status}")
record = STORE.set_status(
request.approval_id,
ApprovalStatus.DENIED,
resolved_by="user",
notify_request=True,
)
record.metadata["denial_reason"] = request.reason
return _to_response(record)
@@ -200,7 +228,7 @@ async def list_pending_approvals(
List pending tool approval requests.
Args:
workspace_id: Filter by run id (historical query parameter name retained)
agent_id: Filter by agent
Returns:
@@ -255,3 +283,58 @@ async def cancel_approval(
STORE.cancel(approval_id)
return _to_response(record)
@router.post("/approve/batch", response_model=BatchApprovalResponse)
async def batch_approve_tool_calls(
request: BatchApprovalRequest,
):
"""
Approve multiple pending tool calls in a single request.
Args:
request: Batch approval parameters with list of approval IDs
Returns:
Batch approval results with successful and failed approvals
"""
approved: List[ApprovalResponse] = []
failed: List[Dict[str, Any]] = []
for approval_id in request.approval_ids:
record = STORE.get(approval_id)
if not record:
failed.append({
"approval_id": approval_id,
"error": "Approval request not found",
})
continue
if record.status != ApprovalStatus.PENDING:
failed.append({
"approval_id": approval_id,
"error": f"Approval already {record.status}",
})
continue
try:
record = STORE.set_status(
approval_id,
ApprovalStatus.APPROVED,
resolved_by="user",
notify_request=True,
)
approved.append(_to_response(record))
except Exception as e:
failed.append({
"approval_id": approval_id,
"error": str(e),
})
return BatchApprovalResponse(
approved=approved,
failed=failed,
total_requested=len(request.approval_ids),
total_approved=len(approved),
total_failed=len(failed),
)
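The batch handler above never aborts the whole request: unknown ids and already-resolved ids are collected into `failed` while the rest are approved. A standalone sketch of that partitioning rule (the helper is hypothetical; the error strings mirror the handler above):

```python
from typing import Dict, List, Tuple

def partition_batch(
    ids: List[str], statuses: Dict[str, str]
) -> Tuple[List[str], List[dict]]:
    """Split requested approval ids the way the batch endpoint does."""
    approved: List[str] = []
    failed: List[dict] = []
    for approval_id in ids:
        status = statuses.get(approval_id)
        if status is None:
            # Id not present in the store at all.
            failed.append({"approval_id": approval_id,
                           "error": "Approval request not found"})
        elif status != "pending":
            # Already approved/denied/cancelled; cannot re-approve.
            failed.append({"approval_id": approval_id,
                           "error": f"Approval already {status}"})
        else:
            approved.append(approval_id)
    return approved, failed
```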


@@ -1,839 +0,0 @@
# -*- coding: utf-8 -*-
"""Read-only OpenClaw CLI API routes — typed with Pydantic models."""
from __future__ import annotations
from typing import Any
from fastapi import APIRouter, Depends, HTTPException, Query
from pydantic import BaseModel, Field
from backend.services.openclaw_cli import OpenClawCliError, OpenClawCliService
from shared.models.openclaw import OpenClawStatus
router = APIRouter(prefix="/api/openclaw", tags=["openclaw"])
def get_openclaw_cli_service() -> OpenClawCliService:
"""Build the OpenClaw CLI service dependency."""
return OpenClawCliService()
def _raise_cli_http_error(exc: OpenClawCliError) -> None:
detail = {
"message": str(exc),
"command": exc.command,
"exit_code": exc.exit_code,
"stdout": exc.stdout,
"stderr": exc.stderr,
}
status_code = 503 if exc.exit_code is None else 502
raise HTTPException(status_code=status_code, detail=detail) from exc
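`_raise_cli_http_error` distinguishes two failure modes: no exit code means the CLI process never ran (the service is unavailable, 503), while any recorded exit code means the CLI ran and failed (a bad upstream response, 502). The mapping in isolation:

```python
from typing import Optional

def cli_error_status(exit_code: Optional[int]) -> int:
    """503 when the CLI never produced an exit code, 502 when it ran and failed."""
    return 503 if exit_code is None else 502
```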
# ---------------------------------------------------------------------------
# Response wrappers
# ---------------------------------------------------------------------------
class StatusResponse(BaseModel):
status: object
class SessionsResponse(BaseModel):
sessions: list[object]
class SessionDetailResponse(BaseModel):
session: object | None
class SessionHistoryResponse(BaseModel):
session_key: str
session_id: str | None
events: list[object]
history: list[object]
raw_text: str | None
class CronResponse(BaseModel):
cron: list[object]
jobs: list[object]
class ApprovalsResponse(BaseModel):
approvals: list[object]
pending: list[object]
class AgentsResponse(BaseModel):
agents: list[object]
class SkillsResponse(BaseModel):
workspace_dir: str
managed_skills_dir: str
skills: list[object]
class ModelsResponse(BaseModel):
models: list[object]
class HooksResponse(BaseModel):
workspace_dir: str
managed_hooks_dir: str
hooks: list[object]
class PluginsResponse(BaseModel):
workspace_dir: str
plugins: list[object]
diagnostics: list[object]
class SecretsAuditResponse(BaseModel):
version: int
status: str
findings: list[object]
class SecurityAuditResponse2(BaseModel):
report: object | None
secret_diagnostics: list[str]
class DaemonStatusResponse(BaseModel):
service: object | None
port: object | None
rpc: object | None
health: object | None
class PairingListResponse2(BaseModel):
channel: str
requests: list[object]
class QrCodeResponse2(BaseModel):
setup_code: str
gateway_url: str
auth: str
url_source: str
class UpdateStatusResponse2(BaseModel):
update: object | None
channel: object | None
class ModelAliasesResponse(BaseModel):
aliases: dict[str, str]
class ModelFallbacksResponse(BaseModel):
key: str
label: str
items: list[object]
class SkillUpdateResponse(BaseModel):
ok: bool
slug: str
version: str
error: str | None
class ModelsStatusResponse(BaseModel):
configPath: str | None = None
agentId: str | None = None
agentDir: str | None = None
defaultModel: str | None = None
resolvedDefault: str | None = None
fallbacks: list[str] = Field(default_factory=list)
imageModel: str | None = None
imageFallbacks: list[str] = Field(default_factory=list)
aliases: dict[str, str] = Field(default_factory=dict)
allowed: list[str] = Field(default_factory=list)
auth: dict[str, Any] = Field(default_factory=dict)
class ChannelsStatusResponse(BaseModel):
reachable: bool | None = None
channelAccounts: dict[str, Any] = Field(default_factory=dict)
channels: list[str] = Field(default_factory=list)
issues: list[dict[str, Any]] = Field(default_factory=list)
class ChannelsListResponse(BaseModel):
chat: dict[str, list[str]] = Field(default_factory=dict)
auth: list[dict[str, Any]] = Field(default_factory=list)
usage: dict[str, Any] | None = None
class HookInfoResponse(BaseModel):
name: str | None = None
description: str | None = None
source: str | None = None
pluginId: str | None = None
filePath: str | None = None
handlerPath: str | None = None
hookKey: str | None = None
emoji: str | None = None
homepage: str | None = None
events: list[str] = Field(default_factory=list)
enabledByConfig: bool | None = None
loadable: bool | None = None
requirementsSatisfied: bool | None = None
requirements: dict[str, Any] = Field(default_factory=dict)
error: str | None = None
raw: str | None = None
class HooksCheckResponse(BaseModel):
workspace_dir: str = ""
managed_hooks_dir: str = ""
hooks: list[dict[str, Any]] = Field(default_factory=list)
eligible: bool | None = None
verbose: bool | None = None
class PluginInspectEntry(BaseModel):
plugin: dict[str, Any] = Field(default_factory=dict)
shape: str | None = None
capabilityMode: str | None = None
capabilityCount: int = 0
capabilities: list[dict[str, Any]] = Field(default_factory=list)
typedHooks: list[dict[str, Any]] = Field(default_factory=list)
customHooks: list[dict[str, Any]] = Field(default_factory=list)
tools: list[dict[str, Any]] = Field(default_factory=list)
commands: list[str] = Field(default_factory=list)
cliCommands: list[str] = Field(default_factory=list)
services: list[str] = Field(default_factory=list)
gatewayMethods: list[str] = Field(default_factory=list)
mcpServers: list[dict[str, Any]] = Field(default_factory=list)
lspServers: list[dict[str, Any]] = Field(default_factory=list)
httpRouteCount: int = 0
bundleCapabilities: list[str] = Field(default_factory=list)
class PluginsInspectResponse(BaseModel):
inspect: list[dict[str, Any]] = Field(default_factory=list)
class AgentBindingItem(BaseModel):
agentId: str
match: dict[str, Any]
description: str
class AgentsBindingsResponse(BaseModel):
bindings: list[AgentBindingItem]
# ---------------------------------------------------------------------------
# Routes — use typed model methods and return Pydantic models directly
# ---------------------------------------------------------------------------
@router.get("/status")
async def api_openclaw_status(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> OpenClawStatus:
"""Read `openclaw status --json` and return a typed model."""
try:
return service.status_model()
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/sessions")
async def api_openclaw_sessions(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> SessionsResponse:
"""Read `openclaw sessions --json` and return a typed SessionsList."""
try:
result = service.list_sessions_model()
return SessionsResponse(sessions=result.sessions)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/sessions/{session_key:path}/history")
async def api_openclaw_session_history(
session_key: str,
limit: int = Query(20, ge=1, le=200),
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> SessionHistoryResponse:
"""Read session history and return a typed SessionHistory."""
try:
result = service.get_session_history_model(session_key, limit=limit)
return SessionHistoryResponse(
session_key=result.session_key,
session_id=result.session_id,
events=result.events,
history=result.events, # alias for compat
raw_text=result.raw_text,
)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/sessions/{session_key:path}")
async def api_openclaw_session_detail(
session_key: str,
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> SessionDetailResponse:
"""Resolve a single session and return it as a typed model."""
try:
session = service.get_session_model(session_key)
return SessionDetailResponse(session=session)
except KeyError as exc:
raise HTTPException(status_code=404, detail=f"session '{session_key}' not found") from exc
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/cron")
async def api_openclaw_cron(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> CronResponse:
"""Read `openclaw cron list --json` and return a typed CronList."""
try:
result = service.list_cron_jobs_model()
return CronResponse(cron=list(result.cron), jobs=list(result.jobs))
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/approvals")
async def api_openclaw_approvals(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> ApprovalsResponse:
"""Read `openclaw approvals get --json` and return a typed ApprovalsList."""
try:
result = service.list_approvals_model()
return ApprovalsResponse(
approvals=list(result.approvals),
pending=list(result.pending),
)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/agents")
async def api_openclaw_agents(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> AgentsResponse:
"""Read `openclaw agents list --json` and return a typed AgentsList."""
try:
result = service.list_agents_model()
return AgentsResponse(agents=list(result.agents))
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/agents/presence")
async def api_openclaw_agents_presence(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> dict[str, Any]:
"""Read runtime session presence for all agents from session files."""
result = service.agents_presence()
return result
# ---------------------------------------------------------------------------
# Write agents routes
# ---------------------------------------------------------------------------
class AgentAddResponse(BaseModel):
agentId: str
name: str
workspace: str
agentDir: str
model: str | None = None
bindings: dict[str, Any] = Field(default_factory=dict)
class AgentDeleteResponse(BaseModel):
agentId: str
workspace: str
agentDir: str
sessionsDir: str
removedBindings: list[str] = Field(default_factory=list)
removedAllow: list[str] = Field(default_factory=list)
class AgentBindResponse(BaseModel):
agentId: str
added: list[str] = Field(default_factory=list)
updated: list[str] = Field(default_factory=list)
skipped: list[str] = Field(default_factory=list)
conflicts: list[str] = Field(default_factory=list)
class AgentUnbindResponse(BaseModel):
agentId: str
removed: list[str] = Field(default_factory=list)
missing: list[str] = Field(default_factory=list)
conflicts: list[str] = Field(default_factory=list)
class AgentIdentityResponse(BaseModel):
agentId: str
identity: dict[str, Any] = Field(default_factory=dict)
workspace: str | None = None
identityFile: str | None = None
@router.post("/agents/add")
async def api_openclaw_agents_add(
name: str,
*,
workspace: str | None = None,
model: str | None = None,
agent_dir: str | None = None,
bind: list[str] | None = None,
non_interactive: bool = False,
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> AgentAddResponse:
"""Run `openclaw agents add <name>` and return JSON result."""
try:
result = service.agents_add(
name,
workspace=workspace,
model=model,
agent_dir=agent_dir,
bind=bind,
non_interactive=non_interactive,
)
return AgentAddResponse.model_validate(result, strict=False)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.post("/agents/delete/{id}")
async def api_openclaw_agents_delete(
id: str,
force: bool = False,
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> AgentDeleteResponse:
"""Run `openclaw agents delete <id> [--force]` and return JSON result."""
try:
result = service.agents_delete(id, force=force)
return AgentDeleteResponse.model_validate(result, strict=False)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.post("/agents/bind")
async def api_openclaw_agents_bind(
*,
agent: str | None = None,
bind: list[str] | None = None,
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> AgentBindResponse:
"""Run `openclaw agents bind [--agent <id>] [--bind <spec>]` and return JSON result."""
try:
result = service.agents_bind(agent=agent, bind=bind)
return AgentBindResponse.model_validate(result, strict=False)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.post("/agents/unbind")
async def api_openclaw_agents_unbind(
*,
agent: str | None = None,
bind: list[str] | None = None,
all: bool = False,
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> AgentUnbindResponse:
"""Run `openclaw agents unbind [--agent <id>] [--bind <spec>] [--all]` and return JSON result."""
try:
result = service.agents_unbind(agent=agent, bind=bind, all=all)
return AgentUnbindResponse.model_validate(result, strict=False)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.post("/agents/set-identity")
async def api_openclaw_agents_set_identity(
*,
agent: str | None = None,
workspace: str | None = None,
identity_file: str | None = None,
name: str | None = None,
emoji: str | None = None,
theme: str | None = None,
avatar: str | None = None,
from_identity: bool = False,
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> AgentIdentityResponse:
"""Run `openclaw agents set-identity` and return JSON result."""
try:
result = service.agents_set_identity(
agent=agent,
workspace=workspace,
identity_file=identity_file,
name=name,
emoji=emoji,
theme=theme,
avatar=avatar,
from_identity=from_identity,
)
return AgentIdentityResponse.model_validate(result, strict=False)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/skills")
async def api_openclaw_skills(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> SkillsResponse:
"""Read `openclaw skills list --json` and return a typed SkillStatusReport."""
try:
result = service.list_skills_model()
return SkillsResponse(
workspace_dir=result.workspace_dir,
managed_skills_dir=result.managed_skills_dir,
skills=list(result.skills),
)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/models")
async def api_openclaw_models(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> ModelsResponse:
"""Read `openclaw models list --json` and return a typed ModelsList."""
try:
result = service.list_models_model()
return ModelsResponse(models=list(result.models))
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/hooks")
async def api_openclaw_hooks(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> HooksResponse:
try:
result = service.list_hooks_model()
return HooksResponse(
workspace_dir=result.workspace_dir,
managed_hooks_dir=result.managed_hooks_dir,
hooks=list(result.hooks),
)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/plugins")
async def api_openclaw_plugins(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> PluginsResponse:
try:
result = service.list_plugins_model()
return PluginsResponse(
workspace_dir=result.workspace_dir,
plugins=list(result.plugins),
diagnostics=list(result.diagnostics),
)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/secrets-audit")
async def api_openclaw_secrets_audit(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> SecretsAuditResponse:
try:
result = service.secrets_audit_model()
return SecretsAuditResponse(
version=result.version,
status=result.status,
findings=list(result.findings),
)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/security-audit")
async def api_openclaw_security_audit(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> SecurityAuditResponse2:
try:
result = service.security_audit_model()
return SecurityAuditResponse2(
report=result.report.model_dump() if result.report else None,
secret_diagnostics=list(result.secret_diagnostics),
)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/daemon-status")
async def api_openclaw_daemon_status(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> DaemonStatusResponse:
try:
result = service.daemon_status_model()
return DaemonStatusResponse(
service=result.service.model_dump() if result.service else None,
port=result.port.model_dump() if result.port else None,
rpc=result.rpc.model_dump() if result.rpc else None,
health=result.health.model_dump() if result.health else None,
)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/pairing")
async def api_openclaw_pairing(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> PairingListResponse2:
try:
result = service.pairing_list_model()
return PairingListResponse2(
channel=result.channel,
requests=list(result.requests),
)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/qr")
async def api_openclaw_qr(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> QrCodeResponse2:
try:
result = service.qr_code_model()
return QrCodeResponse2(
setup_code=result.setup_code,
gateway_url=result.gateway_url,
auth=result.auth,
url_source=result.url_source,
)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/update-status")
async def api_openclaw_update_status(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> UpdateStatusResponse2:
try:
result = service.update_status_model()
return UpdateStatusResponse2(
update=result.update.model_dump() if result.update else None,
channel=result.channel.model_dump() if result.channel else None,
)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/models-aliases")
async def api_openclaw_models_aliases(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> ModelAliasesResponse:
try:
result = service.list_model_aliases_model()
return ModelAliasesResponse(aliases=result.aliases)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/models-fallbacks")
async def api_openclaw_models_fallbacks(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> ModelFallbacksResponse:
try:
result = service.list_model_fallbacks_model()
return ModelFallbacksResponse(
key=result.key,
label=result.label,
items=list(result.items),
)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/models-image-fallbacks")
async def api_openclaw_models_image_fallbacks(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> ModelFallbacksResponse:
try:
result = service.list_model_image_fallbacks_model()
return ModelFallbacksResponse(
key=result.key,
label=result.label,
items=list(result.items),
)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/skill-update")
async def api_openclaw_skill_update(
slug: str | None = None,
all: bool = False,
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> SkillUpdateResponse:
try:
result = service.skill_update_model(slug=slug, all=all)
return SkillUpdateResponse(
ok=result.ok,
slug=result.slug,
version=result.version,
error=result.error,
)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/models-status")
async def api_openclaw_models_status(
probe: bool = False,
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> ModelsStatusResponse:
"""Read `openclaw models status --json [--probe]` and return a typed dict."""
try:
result = service.models_status_model(probe=probe)
return ModelsStatusResponse.model_validate(result, strict=False)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/channels-status")
async def api_openclaw_channels_status(
probe: bool = False,
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> ChannelsStatusResponse:
"""Read `openclaw channels status --json [--probe]` and return a typed dict."""
try:
result = service.channels_status_model(probe=probe)
return ChannelsStatusResponse.model_validate(result, strict=False)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/channels-list")
async def api_openclaw_channels_list(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> ChannelsListResponse:
"""Read `openclaw channels list --json` and return a typed dict."""
try:
result = service.channels_list_model()
return ChannelsListResponse.model_validate(result, strict=False)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/hooks/info/{name}")
async def api_openclaw_hook_info(
name: str,
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> HookInfoResponse:
"""Read `openclaw hooks info <name> --json` and return a typed dict."""
try:
result = service.hook_info_model(name)
return HookInfoResponse.model_validate(result, strict=False)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/hooks/check")
async def api_openclaw_hooks_check(
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> HooksCheckResponse:
"""Read `openclaw hooks check --json` and return a typed dict."""
try:
result = service.hooks_check_model()
return HooksCheckResponse.model_validate(result, strict=False)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/plugins-inspect")
async def api_openclaw_plugins_inspect(
plugin_id: str | None = None,
all: bool = False,
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> PluginsInspectResponse:
"""Read `openclaw plugins inspect --json [--all]` and return a typed dict."""
try:
result = service.plugins_inspect_model(plugin_id=plugin_id, all=all)
inspect = result if isinstance(result, list) else result.get("inspect", [])
return PluginsInspectResponse(inspect=inspect)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/agents-bindings")
async def api_openclaw_agents_bindings(
agent: str | None = None,
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> AgentsBindingsResponse:
"""Read `openclaw agents bindings --json [--agent <id>]` and return bindings list."""
try:
result = service.agents_bindings_model(agent=agent)
bindings = result if isinstance(result, list) else []
return AgentsBindingsResponse(bindings=bindings)
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/gateway-status")
async def api_openclaw_gateway_status(
url: str | None = None,
token: str | None = None,
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> dict[str, Any]:
"""Read `openclaw gateway status --json [--url <url>] [--token <token>]`. Returns full gateway probe result."""
try:
result = service.gateway_status(url=url, token=token)
return result
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
@router.get("/memory-status")
async def api_openclaw_memory_status(
agent: str | None = None,
deep: bool = False,
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> list[dict[str, Any]]:
"""Read `openclaw memory status --json [--agent <id>] [--deep]`. Returns array of per-agent memory status."""
try:
result = service.memory_status(agent=agent, deep=deep)
return result if isinstance(result, list) else []
except OpenClawCliError as exc:
_raise_cli_http_error(exc)
class WorkspaceFilesResponse(BaseModel):
workspace: str
files: list[dict[str, Any]]
error: str | None = None
@router.get("/workspace-files")
async def api_openclaw_workspace_files(
workspace: str = Query(..., description="Path to the agent workspace directory"),
service: OpenClawCliService = Depends(get_openclaw_cli_service),
) -> WorkspaceFilesResponse:
"""List .md files in an OpenClaw agent workspace with their content previews."""
result = service.list_workspace_files(workspace)
return WorkspaceFilesResponse.model_validate(result, strict=False)

backend/api/runs.py (new file, 547 lines)

@@ -0,0 +1,547 @@
# -*- coding: utf-8 -*-
"""
Run-scoped Agent API Routes
Provides REST API endpoints for runtime agent asset access under `runs/<run_id>/`.
This module separates runtime concerns from design-time workspace management:
- `/api/runs/{run_id}/agents/*` - Runtime agent assets and configuration
- design-time workspace registry CRUD lives under `/api/workspaces/{workspace_id}/...`
"""
import logging
import os
import tempfile
from pathlib import Path
from typing import Any, Dict, List, Optional
from fastapi import APIRouter, HTTPException, Depends, Body, UploadFile, File, Form
from pydantic import BaseModel, Field
from backend.agents.workspace_manager import RunWorkspaceManager
from backend.agents.agent_workspace import load_agent_workspace_config
from backend.agents.skills_manager import SkillsManager
from backend.agents.toolkit_factory import load_agent_profiles
from backend.config.bootstrap_config import get_bootstrap_config_for_run
from backend.llm.models import get_agent_model_info
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/runs/{run_id}/agents", tags=["runs"])
# Request/Response Models
class InstallExternalSkillRequest(BaseModel):
"""Request to install an external skill for one agent."""
source: str = Field(..., description="Directory path, zip path, or http(s) zip URL")
name: Optional[str] = Field(None, description="Optional override skill name")
activate: bool = Field(True, description="Whether to enable skill immediately")
class LocalSkillRequest(BaseModel):
skill_name: str = Field(..., description="Local skill name")
class LocalSkillContentRequest(BaseModel):
content: str = Field(..., description="Updated SKILL.md content")
class AgentFileResponse(BaseModel):
"""Agent file content response."""
filename: str
content: str
scope_type: str = "runtime_run"
scope_note: Optional[str] = None
class AgentProfileResponse(BaseModel):
agent_id: str
run_id: str
profile: Dict[str, Any]
scope_type: str = "runtime_run"
scope_note: Optional[str] = None
class AgentSkillsResponse(BaseModel):
agent_id: str
run_id: str
skills: List[Dict[str, Any]]
scope_type: str = "runtime_run"
scope_note: Optional[str] = None
class SkillDetailResponse(BaseModel):
agent_id: str
run_id: str
skill: Dict[str, Any]
scope_type: str = "runtime_run"
scope_note: Optional[str] = None
# Dependencies
def get_workspace_manager():
"""Get run-scoped asset manager for one runtime workspace/run id."""
return RunWorkspaceManager()
def get_skills_manager():
"""Get SkillsManager instance."""
return SkillsManager()
# Runtime Routes
@router.get("/{agent_id}/profile", response_model=AgentProfileResponse)
async def get_agent_profile(
run_id: str,
agent_id: str,
skills_manager: SkillsManager = Depends(get_skills_manager),
):
"""
Get agent profile from runtime assets under `runs/<run_id>/`.
Args:
run_id: Run identifier (e.g., "smoke_fullstack")
agent_id: Agent identifier
Returns:
Agent profile with model config, skills, and tool groups
"""
asset_dir = skills_manager.get_agent_asset_dir(run_id, agent_id)
agent_config = load_agent_workspace_config(asset_dir / "agent.yaml")
profiles = load_agent_profiles()
profile = profiles.get(agent_id, {})
bootstrap = get_bootstrap_config_for_run(skills_manager.project_root, run_id)
override = bootstrap.agent_override(agent_id)
active_tool_groups = override.get("active_tool_groups", agent_config.active_tool_groups or profile.get("active_tool_groups", []))
if not isinstance(active_tool_groups, list):
active_tool_groups = []
disabled_tool_groups = agent_config.disabled_tool_groups
if disabled_tool_groups:
disabled_set = set(disabled_tool_groups)
active_tool_groups = [group_name for group_name in active_tool_groups if group_name not in disabled_set]
default_skills = profile.get("skills", [])
if not isinstance(default_skills, list):
default_skills = []
resolved_skills = skills_manager.resolve_agent_skill_names(
config_name=run_id,
agent_id=agent_id,
default_skills=default_skills,
)
prompt_files = agent_config.prompt_files or ["SOUL.md", "PROFILE.md", "AGENTS.md", "POLICY.md", "MEMORY.md"]
model_name, model_provider = get_agent_model_info(agent_id)
return AgentProfileResponse(
agent_id=agent_id,
run_id=run_id,
profile={
"model_name": model_name,
"model_provider": model_provider,
"prompt_files": prompt_files,
"default_skills": default_skills,
"resolved_skills": resolved_skills,
"active_tool_groups": active_tool_groups,
"disabled_tool_groups": disabled_tool_groups,
"enabled_skills": agent_config.enabled_skills,
"disabled_skills": agent_config.disabled_skills,
},
)
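The profile endpoint resolves `active_tool_groups` with a precedence chain: the bootstrap override wins, then `agent.yaml`, then the static profile defaults, and `disabled_tool_groups` is subtracted last. A pure-function sketch of that resolution (the helper name is hypothetical; it simplifies the dict-based override lookup used above):

```python
from typing import List, Optional

def resolve_tool_groups(
    override: Optional[List[str]],
    config_groups: Optional[List[str]],
    profile_groups: List[str],
    disabled: List[str],
) -> List[str]:
    """Bootstrap override > agent.yaml config > profile defaults, minus disabled."""
    groups = override if override is not None else (config_groups or profile_groups)
    if not isinstance(groups, list):
        groups = []
    disabled_set = set(disabled)
    return [g for g in groups if g not in disabled_set]
```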
@router.get("/{agent_id}/skills", response_model=AgentSkillsResponse)
async def get_agent_skills(
run_id: str,
agent_id: str,
skills_manager: SkillsManager = Depends(get_skills_manager),
):
"""
Get agent skills from runtime assets under `runs/<run_id>/`.
Args:
run_id: Run identifier
agent_id: Agent identifier
Returns:
List of skills with their status (active/enabled/disabled/available)
"""
agent_asset_dir = skills_manager.get_agent_asset_dir(run_id, agent_id)
agent_config = load_agent_workspace_config(agent_asset_dir / "agent.yaml")
resolved_skills = set(skills_manager.resolve_agent_skill_names(config_name=run_id, agent_id=agent_id, default_skills=[]))
enabled = set(agent_config.enabled_skills)
disabled = set(agent_config.disabled_skills)
payload = []
for item in skills_manager.list_agent_skill_catalog(run_id, agent_id):
if item.skill_name in disabled:
status = "disabled"
elif item.skill_name in enabled:
status = "enabled"
elif item.skill_name in resolved_skills:
status = "active"
else:
status = "available"
payload.append({
"skill_name": item.skill_name,
"name": item.name,
"description": item.description,
"version": item.version,
"source": item.source,
"tools": item.tools,
"status": status,
})
return AgentSkillsResponse(
agent_id=agent_id,
run_id=run_id,
skills=payload,
)
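The status ladder above checks the sets in a deliberate order, so an explicit disable always beats an explicit enable, which in turn beats default resolution. A minimal sketch of that rule (function name is illustrative, not from the codebase):

```python
# Sketch of the status precedence implemented above:
# disabled > enabled > active > available.
def skill_status(skill, disabled, enabled, resolved):
    if skill in disabled:
        return "disabled"   # explicit disable always wins
    if skill in enabled:
        return "enabled"    # explicit enable beats default resolution
    if skill in resolved:
        return "active"     # picked up via the run's default skill set
    return "available"      # installed but not in use
```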
@router.get("/{agent_id}/skills/{skill_name}", response_model=SkillDetailResponse)
async def get_agent_skill_detail(
run_id: str,
agent_id: str,
skill_name: str,
skills_manager: SkillsManager = Depends(get_skills_manager),
):
"""
Get detailed skill information from runtime assets.
Args:
run_id: Run identifier
agent_id: Agent identifier
skill_name: Skill name
Returns:
Skill detail information
"""
try:
detail = skills_manager.load_agent_skill_document(
config_name=run_id,
agent_id=agent_id,
skill_name=skill_name,
)
except FileNotFoundError:
raise HTTPException(status_code=404, detail=f"Unknown skill: {skill_name}")
return SkillDetailResponse(
agent_id=agent_id,
run_id=run_id,
skill=detail,
)
@router.post("/{agent_id}/skills/{skill_name}/enable")
async def enable_skill(
run_id: str,
agent_id: str,
skill_name: str,
skills_manager: SkillsManager = Depends(get_skills_manager),
):
"""
Enable a skill for an agent in runtime assets.
Args:
run_id: Run identifier
agent_id: Agent identifier
skill_name: Skill name to enable
Returns:
Success message with updated enabled skills list
"""
result = skills_manager.update_agent_skill_overrides(
config_name=run_id,
agent_id=agent_id,
enable=[skill_name],
)
return {
"message": f"Skill '{skill_name}' enabled for agent '{agent_id}'",
"enabled_skills": result["enabled_skills"],
}
@router.post("/{agent_id}/skills/{skill_name}/disable")
async def disable_skill(
run_id: str,
agent_id: str,
skill_name: str,
skills_manager: SkillsManager = Depends(get_skills_manager),
):
"""
Disable a skill for an agent in runtime assets.
Args:
run_id: Run identifier
agent_id: Agent identifier
skill_name: Skill name to disable
Returns:
Success message with updated disabled skills list
"""
result = skills_manager.update_agent_skill_overrides(
config_name=run_id,
agent_id=agent_id,
disable=[skill_name],
)
return {
"message": f"Skill '{skill_name}' disabled for agent '{agent_id}'",
"disabled_skills": result["disabled_skills"],
}
@router.post("/{agent_id}/skills/install")
async def install_external_skill(
run_id: str,
agent_id: str,
request: InstallExternalSkillRequest,
skills_manager: SkillsManager = Depends(get_skills_manager),
):
"""
Install an external skill into one agent's local skills under `runs/<run_id>/`.
Args:
run_id: Run identifier
agent_id: Agent identifier
request: Installation parameters
Returns:
Success message with installed skill details
"""
try:
result = skills_manager.install_external_skill_for_agent(
config_name=run_id,
agent_id=agent_id,
source=request.source,
skill_name=request.name,
activate=request.activate,
)
except (FileNotFoundError, ValueError) as exc:
raise HTTPException(status_code=400, detail=str(exc))
return {
"message": f"Installed external skill '{result['skill_name']}' for '{agent_id}'",
**result,
}
@router.post("/{agent_id}/skills/local")
async def create_local_skill(
run_id: str,
agent_id: str,
request: LocalSkillRequest,
skills_manager: SkillsManager = Depends(get_skills_manager),
):
"""
Create a new local skill for an agent under `runs/<run_id>/`.
Args:
run_id: Run identifier
agent_id: Agent identifier
request: Local skill creation parameters
Returns:
Success message
"""
try:
skills_manager.create_agent_local_skill(
config_name=run_id,
agent_id=agent_id,
skill_name=request.skill_name,
)
except (ValueError, FileExistsError) as exc:
raise HTTPException(status_code=400, detail=str(exc))
return {"message": f"Created local skill '{request.skill_name}' for '{agent_id}'"}
@router.put("/{agent_id}/skills/local/{skill_name}")
async def update_local_skill(
run_id: str,
agent_id: str,
skill_name: str,
request: LocalSkillContentRequest,
skills_manager: SkillsManager = Depends(get_skills_manager),
):
"""
Update a local skill's SKILL.md content under `runs/<run_id>/`.
Args:
run_id: Run identifier
agent_id: Agent identifier
skill_name: Skill name
request: Updated content
Returns:
Success message
"""
try:
skills_manager.update_agent_local_skill(
config_name=run_id,
agent_id=agent_id,
skill_name=skill_name,
content=request.content,
)
except (ValueError, FileNotFoundError) as exc:
raise HTTPException(status_code=400, detail=str(exc))
return {"message": f"Updated local skill '{skill_name}' for '{agent_id}'"}
@router.delete("/{agent_id}/skills/local/{skill_name}")
async def delete_local_skill(
run_id: str,
agent_id: str,
skill_name: str,
skills_manager: SkillsManager = Depends(get_skills_manager),
):
"""
Delete a local skill under `runs/<run_id>/`.
Args:
run_id: Run identifier
agent_id: Agent identifier
skill_name: Skill name to delete
Returns:
Success message
"""
try:
skills_manager.delete_agent_local_skill(
config_name=run_id,
agent_id=agent_id,
skill_name=skill_name,
)
skills_manager.forget_agent_skill_overrides(
config_name=run_id,
agent_id=agent_id,
skill_names=[skill_name],
)
except (ValueError, FileNotFoundError) as exc:
raise HTTPException(status_code=400, detail=str(exc))
return {"message": f"Deleted local skill '{skill_name}' for '{agent_id}'"}
@router.post("/{agent_id}/skills/upload")
async def upload_external_skill(
run_id: str,
agent_id: str,
file: UploadFile = File(...),
name: Optional[str] = Form(None),
activate: bool = Form(True),
skills_manager: SkillsManager = Depends(get_skills_manager),
):
"""
Upload a zip skill package and install for one agent under `runs/<run_id>/`.
Args:
run_id: Run identifier
agent_id: Agent identifier
file: Zip file to upload
name: Optional skill name override
activate: Whether to enable skill immediately
Returns:
Success message with installed skill details
"""
original_name = (file.filename or "").strip()
if not original_name.lower().endswith(".zip"):
raise HTTPException(status_code=400, detail="Uploaded file must be a .zip archive")
suffix = Path(original_name).suffix or ".zip"
temp_path: Optional[str] = None
try:
with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as tmp:
temp_path = tmp.name
content = await file.read()
tmp.write(content)
result = skills_manager.install_external_skill_for_agent(
config_name=run_id,
agent_id=agent_id,
source=temp_path,
skill_name=name,
activate=activate,
)
except (FileNotFoundError, ValueError) as exc:
raise HTTPException(status_code=400, detail=str(exc))
finally:
try:
await file.close()
except Exception as e:
logger.warning(f"Failed to close uploaded file: {e}")
if temp_path and os.path.exists(temp_path):
os.remove(temp_path)
return {
"message": f"Uploaded and installed external skill '{result['skill_name']}' for '{agent_id}'",
**result,
}
@router.get("/{agent_id}/files/{filename}", response_model=AgentFileResponse)
async def get_agent_file(
run_id: str,
agent_id: str,
filename: str,
workspace_manager: RunWorkspaceManager = Depends(get_workspace_manager),
):
"""
Read an agent file from the run-scoped asset tree under `runs/<run_id>/`.
Args:
run_id: Run identifier
agent_id: Agent identifier
filename: File to read (e.g., SOUL.md, PROFILE.md)
Returns:
File content
"""
try:
content = workspace_manager.load_agent_file(
config_name=run_id,
agent_id=agent_id,
filename=filename,
)
return AgentFileResponse(
filename=filename,
content=content,
)
except FileNotFoundError:
raise HTTPException(status_code=404, detail=f"File '{filename}' not found")
@router.put("/{agent_id}/files/{filename}", response_model=AgentFileResponse)
async def update_agent_file(
run_id: str,
agent_id: str,
filename: str,
content: str = Body(..., media_type="text/plain"),
workspace_manager: RunWorkspaceManager = Depends(get_workspace_manager),
):
"""
Update an agent file in the run-scoped asset tree under `runs/<run_id>/`.
Args:
run_id: Run identifier
agent_id: Agent identifier
filename: File to update
content: New file content
Returns:
Updated file information
"""
try:
workspace_manager.update_agent_file(
config_name=run_id,
agent_id=agent_id,
filename=filename,
content=content,
)
return AgentFileResponse(
filename=filename,
content=content,
)
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))

View File

@@ -7,7 +7,7 @@ import asyncio
import json
import logging
import os
-import signal
+import re
import shutil
import subprocess
import sys
@@ -20,7 +20,6 @@ logger = logging.getLogger(__name__)
from fastapi import APIRouter, BackgroundTasks, HTTPException, Request
from pydantic import BaseModel, Field
-from backend.runtime.agent_runtime import AgentRuntimeState
from backend.config.bootstrap_config import (
resolve_runtime_config,
update_bootstrap_values_for_run,
@@ -31,6 +30,17 @@ router = APIRouter(prefix="/api/runtime", tags=["runtime"])
PROJECT_ROOT = Path(__file__).resolve().parents[2]
def _normalize_schedule_mode(value: Any) -> str:
"""Normalize schedule mode to the current public vocabulary.
`intraday` is kept as a backward-compatible alias for `interval`.
"""
mode = str(value or "daily").strip().lower()
if mode == "intraday":
return "interval"
return mode or "daily"
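The alias handling above can be exercised in isolation. This restatement of the same logic is a sketch for illustration only (the standalone name differs from the private helper in the diff):

```python
# Standalone restatement of _normalize_schedule_mode shown above:
# `intraday` is a backward-compatible alias for `interval`; blank/None -> `daily`.
def normalize_schedule_mode(value):
    mode = str(value or "daily").strip().lower()
    return "interval" if mode == "intraday" else (mode or "daily")

assert normalize_schedule_mode("Intraday") == "interval"
assert normalize_schedule_mode(None) == "daily"
```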
class RuntimeState:
"""Thread-safe singleton for managing runtime state.
@@ -145,6 +155,7 @@ class RunContextResponse(BaseModel):
class RuntimeAgentState(BaseModel):
agent_id: str
display_name: Optional[str] = None
status: str
last_session: Optional[str] = None
last_updated: str
@@ -219,6 +230,22 @@ class GatewayStatusResponse(BaseModel):
is_running: bool
port: int
run_id: Optional[str] = None
process_status: Optional[str] = None
pid: Optional[int] = None
class GatewayHealthResponse(BaseModel):
status: str
checks: Dict[str, Any]
timestamp: str
class RuntimeModeResponse(BaseModel):
mode: str
is_backtest: bool
run_id: Optional[str] = None
schedule_mode: Optional[str] = None
is_running: bool
class RuntimeConfigResponse(BaseModel):
@@ -264,6 +291,113 @@ def _load_run_snapshot(run_id: str) -> Dict[str, Any]:
return json.loads(snapshot_path.read_text(encoding="utf-8"))
def _load_run_server_state(run_dir: Path) -> Dict[str, Any]:
"""Load persisted runtime server state if present."""
server_state_path = run_dir / "state" / "server_state.json"
if not server_state_path.exists():
return {}
try:
return json.loads(server_state_path.read_text(encoding="utf-8"))
except Exception:
return {}
def _resolve_runtime_agent_display_name(run_id: str, agent_id: str) -> Optional[str]:
"""Best-effort display name for one runtime agent.
Priority:
1. PROFILE.md line like `角色定位:中文名` (role positioning: Chinese display name)
2. PROFILE.md YAML frontmatter field `name`
"""
asset_dir = PROJECT_ROOT / "runs" / run_id / "agents" / agent_id
profile_path = asset_dir / "PROFILE.md"
if not profile_path.exists():
return None
try:
raw = profile_path.read_text(encoding="utf-8").strip()
except Exception:
return None
if not raw:
return None
frontmatter_name: Optional[str] = None
if raw.startswith("---"):
parts = raw.split("---", 2)
if len(parts) >= 3:
try:
import yaml
parsed = yaml.safe_load(parts[1].strip()) or {}
if isinstance(parsed, dict):
value = parsed.get("name")
if isinstance(value, str) and value.strip():
frontmatter_name = value.strip()
except Exception:
pass
raw = parts[2].strip()
for line in raw.splitlines():
normalized = line.strip()
if normalized.startswith("角色定位:"):
value = normalized.split(":", 1)[1].strip()
if value:
return value
if normalized.lower().startswith("role:"):
value = normalized.split(":", 1)[1].strip()
if value:
return value
return frontmatter_name
def _enrich_runtime_agents(run_id: Optional[str], agents: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
if not run_id:
return agents
enriched: List[Dict[str, Any]] = []
for item in agents:
payload = dict(item)
display_name = payload.get("display_name")
agent_id = str(payload.get("agent_id") or "").strip()
if agent_id and (not isinstance(display_name, str) or not display_name.strip()):
payload["display_name"] = _resolve_runtime_agent_display_name(run_id, agent_id)
enriched.append(payload)
return enriched
def _extract_history_metrics(run_dir: Path) -> tuple[int, Optional[float]]:
"""Prefer runtime state files over dashboard exports for history summaries."""
server_state = _load_run_server_state(run_dir)
portfolio = server_state.get("portfolio") or {}
trades = server_state.get("trades")
total_trades = len(trades) if isinstance(trades, list) else 0
total_asset_value = None
if portfolio.get("total_value") is not None:
try:
total_asset_value = float(portfolio.get("total_value"))
except (TypeError, ValueError):
total_asset_value = None
if total_trades or total_asset_value is not None:
return total_trades, total_asset_value
summary_path = run_dir / "team_dashboard" / "summary.json"
if not summary_path.exists():
return 0, None
try:
summary = json.loads(summary_path.read_text(encoding="utf-8"))
total_trades = int(summary.get("totalTrades") or 0)
total_asset_value = (
float(summary.get("totalAssetValue"))
if summary.get("totalAssetValue") is not None
else None
)
return total_trades, total_asset_value
except Exception:
return 0, None
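The fallback order above — runtime server state first, dashboard export second — can be sketched with plain dicts standing in for the files on disk (the helper name is illustrative):

```python
# Sketch of the precedence in _extract_history_metrics: runtime server state
# wins whenever it carries any signal; otherwise fall back to the summary export.
def history_metrics(server_state, summary):
    trades = server_state.get("trades")
    total_trades = len(trades) if isinstance(trades, list) else 0
    total_value = (server_state.get("portfolio") or {}).get("total_value")
    if total_trades or total_value is not None:
        return total_trades, total_value  # runtime state wins
    return int(summary.get("totalTrades") or 0), summary.get("totalAssetValue")
```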
def _copy_path_if_exists(src: Path, dst: Path) -> None:
if not src.exists():
return
@@ -281,7 +415,7 @@ def _restore_run_assets(source_run_id: str, target_run_dir: Path) -> None:
raise HTTPException(status_code=404, detail=f"Source run not found: {source_run_id}")
for relative in [
-"team_dashboard",
+"team_dashboard/_internal_state.json",
"agents",
"skills",
"memory",
@@ -307,12 +441,10 @@ def _list_runs(limit: int = 50) -> list[RuntimeHistoryItem]:
for run_dir in run_dirs[: max(1, int(limit))]:
run_id = run_dir.name
runtime_state_path = run_dir / "state" / "runtime_state.json"
-summary_path = run_dir / "team_dashboard" / "summary.json"
bootstrap: Dict[str, Any] = {}
updated_at: Optional[str] = None
-total_trades = 0
-total_asset_value: Optional[float] = None
+total_trades, total_asset_value = _extract_history_metrics(run_dir)
if runtime_state_path.exists():
try:
@@ -323,15 +455,6 @@
except Exception:
bootstrap = {}
-if summary_path.exists():
-try:
-summary = json.loads(summary_path.read_text(encoding="utf-8"))
-total_trades = int(summary.get("totalTrades") or 0)
-total_asset_value = float(summary.get("totalAssetValue")) if summary.get("totalAssetValue") is not None else None
-except Exception:
-total_trades = 0
-total_asset_value = None
items.append(
RuntimeHistoryItem(
run_id=run_id,
@@ -393,6 +516,11 @@ def _is_gateway_running() -> bool:
Checks both the internally-managed gateway process and falls back to
port availability (for externally-managed gateway processes).
The fallback matters because this codebase may still encounter two startup
shapes while historical artifacts remain in-tree:
1. runtime_service-managed Gateway subprocesses
2. externally started historical Gateway processes outside the supported dev flow
"""
process = _runtime_state.gateway_process
if process is not None and process.poll() is None:
@@ -435,7 +563,19 @@ def _start_gateway_process(
bootstrap: Dict[str, Any],
port: int
) -> subprocess.Popen:
-"""Start Gateway as a separate process."""
+"""Start Gateway as a runtime_service-managed subprocess.
This path is used when runtime lifecycle is driven through the runtime API.
It is not the only supported way a Gateway may exist in the current repo.
"""
# Validate configuration before starting
validation_errors = _validate_gateway_config(bootstrap)
if validation_errors:
raise HTTPException(
status_code=400,
detail=f"Gateway configuration validation failed: {'; '.join(validation_errors)}"
)
# Prepare environment
env = os.environ.copy()
@@ -467,6 +607,169 @@ def _start_gateway_process(
return process
def _validate_gateway_config(bootstrap: Dict[str, Any]) -> List[str]:
"""Validate Gateway bootstrap configuration.
Returns a list of validation error messages. Empty list means valid.
"""
errors: List[str] = []
# Check required environment variables based on mode
mode = bootstrap.get("mode", "live")
is_backtest = mode == "backtest"
# Validate mode
if mode not in ("live", "backtest"):
errors.append(f"Invalid mode '{mode}': must be 'live' or 'backtest'")
# Check API keys based on mode
if not is_backtest:
# Live mode requires FINNHUB_API_KEY
finnhub_key = os.getenv("FINNHUB_API_KEY")
if not finnhub_key:
errors.append("FINNHUB_API_KEY environment variable is required for live mode")
# Check LLM configuration
model_name = os.getenv("MODEL_NAME")
openai_key = os.getenv("OPENAI_API_KEY")
dashscope_key = os.getenv("DASHSCOPE_API_KEY")
if not model_name:
errors.append("MODEL_NAME environment variable is not set")
if not openai_key and not dashscope_key:
errors.append("Either OPENAI_API_KEY or DASHSCOPE_API_KEY environment variable must be set")
# Validate tickers
tickers = bootstrap.get("tickers", [])
if not tickers:
errors.append("No tickers specified in configuration")
elif not isinstance(tickers, list):
errors.append("Tickers must be a list")
# Validate numeric values
try:
initial_cash = float(bootstrap.get("initial_cash", 0))
if initial_cash <= 0:
errors.append("initial_cash must be greater than 0")
except (TypeError, ValueError):
errors.append("initial_cash must be a valid number")
try:
margin_requirement = float(bootstrap.get("margin_requirement", 0))
if margin_requirement < 0 or margin_requirement > 1:
errors.append("margin_requirement must be between 0 and 1")
except (TypeError, ValueError):
errors.append("margin_requirement must be a valid number")
# Validate backtest dates
if is_backtest:
start_date = bootstrap.get("start_date")
end_date = bootstrap.get("end_date")
if not start_date:
errors.append("start_date is required for backtest mode")
if not end_date:
errors.append("end_date is required for backtest mode")
if start_date and end_date:
try:
from datetime import datetime
start = datetime.strptime(start_date, "%Y-%m-%d")
end = datetime.strptime(end_date, "%Y-%m-%d")
if start >= end:
errors.append("start_date must be before end_date")
except ValueError:
errors.append("Dates must be in YYYY-MM-DD format")
# Validate schedule mode
schedule_mode = _normalize_schedule_mode(bootstrap.get("schedule_mode", "daily"))
if schedule_mode not in ("daily", "interval"):
errors.append(f"Invalid schedule_mode '{schedule_mode}': must be 'daily' or 'interval'")
return errors
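The backtest date rules above can be isolated into a small helper; this is a sketch for illustration, with the validator trimmed to the date checks only (the name is hypothetical):

```python
from datetime import datetime

# Trimmed sketch of the backtest date checks in _validate_gateway_config:
# both dates required, strict YYYY-MM-DD format, start strictly before end.
def validate_backtest_dates(start_date, end_date):
    errors = []
    if not start_date:
        errors.append("start_date is required for backtest mode")
    if not end_date:
        errors.append("end_date is required for backtest mode")
    if start_date and end_date:
        try:
            start = datetime.strptime(start_date, "%Y-%m-%d")
            end = datetime.strptime(end_date, "%Y-%m-%d")
            if start >= end:
                errors.append("start_date must be before end_date")
        except ValueError:
            errors.append("Dates must be in YYYY-MM-DD format")
    return errors
```

An empty list means the date pair is acceptable, matching the "empty list means valid" convention above.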
def _get_gateway_process_details() -> Dict[str, Any]:
"""Get detailed information about the Gateway process."""
process = _runtime_state.gateway_process
details = {
"pid": None,
"status": "not_running",
"returncode": None,
}
if process is None:
return details
details["pid"] = process.pid
returncode = process.poll()
if returncode is None:
details["status"] = "running"
details["returncode"] = None
else:
details["status"] = "exited"
details["returncode"] = returncode
return details
def _check_gateway_health() -> Dict[str, Any]:
"""Perform comprehensive health checks on Gateway."""
checks = {
"process": {"status": "unknown", "details": {}},
"port": {"status": "unknown", "details": {}},
"configuration": {"status": "unknown", "details": {}},
}
# Check process status
process_details = _get_gateway_process_details()
checks["process"]["details"] = process_details
if process_details["status"] == "running":
checks["process"]["status"] = "healthy"
elif process_details["status"] == "exited":
checks["process"]["status"] = "unhealthy"
checks["process"]["details"]["error"] = f"Process exited with code {process_details['returncode']}"
else:
checks["process"]["status"] = "unknown"
# Check port connectivity
import socket
port = _runtime_state.gateway_port
try:
with socket.create_connection(("127.0.0.1", port), timeout=2):
checks["port"]["status"] = "healthy"
checks["port"]["details"] = {"port": port, "accessible": True}
except OSError as e:
checks["port"]["status"] = "unhealthy"
checks["port"]["details"] = {"port": port, "accessible": False, "error": str(e)}
# Check configuration
try:
if _runtime_state.runtime_manager is not None:
checks["configuration"]["status"] = "healthy"
checks["configuration"]["details"]["has_runtime_manager"] = True
else:
checks["configuration"]["status"] = "degraded"
checks["configuration"]["details"]["has_runtime_manager"] = False
except Exception as e:
checks["configuration"]["status"] = "unknown"
checks["configuration"]["details"]["error"] = str(e)
# Determine overall status
statuses = [c["status"] for c in checks.values()]
if any(s == "unhealthy" for s in statuses):
overall_status = "unhealthy"
elif all(s == "healthy" for s in statuses):
overall_status = "healthy"
else:
overall_status = "degraded"
return {
"status": overall_status,
"checks": checks,
"timestamp": datetime.now().isoformat(),
}
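The overall-status computation above reduces to a simple rule: any unhealthy check makes the Gateway unhealthy, unanimity is required for healthy, and anything else is degraded. A standalone sketch of that rule (function name is illustrative):

```python
# Aggregation rule from _check_gateway_health, isolated:
# any unhealthy -> unhealthy; all healthy -> healthy; otherwise degraded.
def overall_status(statuses):
    if any(s == "unhealthy" for s in statuses):
        return "unhealthy"
    if all(s == "healthy" for s in statuses):
        return "healthy"
    return "degraded"
```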
@router.get("/context", response_model=RunContextResponse)
async def get_run_context() -> RunContextResponse:
"""Return active runtime context, or latest persisted context when stopped."""
@@ -486,7 +789,8 @@ async def get_run_context() -> RunContextResponse:
async def get_runtime_agents() -> RuntimeAgentsResponse:
"""Return agent states from the active runtime, or latest persisted run."""
snapshot = _get_active_runtime_snapshot() if _is_gateway_running() else _load_latest_runtime_snapshot()
-agents = snapshot.get("agents", [])
+run_id = snapshot.get("context", {}).get("config_name")
+agents = _enrich_runtime_agents(run_id, snapshot.get("agents", []))
return RuntimeAgentsResponse(
agents=[RuntimeAgentState(**a) for a in agents]
@@ -512,9 +816,10 @@
@router.get("/gateway/status", response_model=GatewayStatusResponse)
async def get_gateway_status() -> GatewayStatusResponse:
-"""Get Gateway process status and port."""
+"""Get Gateway process status and port with detailed process information."""
is_running = _is_gateway_running()
run_id = None
process_details = _get_gateway_process_details()
if is_running:
try:
@@ -525,7 +830,52 @@
return GatewayStatusResponse(
is_running=is_running,
port=_runtime_state.gateway_port,
-run_id=run_id
+run_id=run_id,
process_status=process_details["status"],
pid=process_details["pid"],
)
@router.get("/gateway/health", response_model=GatewayHealthResponse)
async def get_gateway_health() -> GatewayHealthResponse:
"""Get comprehensive Gateway health check including process, port, and configuration status."""
health = _check_gateway_health()
return GatewayHealthResponse(**health)
@router.get("/mode", response_model=RuntimeModeResponse)
async def get_runtime_mode() -> RuntimeModeResponse:
"""Get current runtime mode (live or backtest) and related configuration."""
is_running = _is_gateway_running()
if not is_running:
return RuntimeModeResponse(
mode="stopped",
is_backtest=False,
run_id=None,
schedule_mode=None,
is_running=False,
)
try:
context = _get_active_runtime_context()
bootstrap = context.get("bootstrap_values", {})
mode = bootstrap.get("mode", "live")
return RuntimeModeResponse(
mode=mode,
is_backtest=mode == "backtest",
run_id=context.get("config_name"),
schedule_mode=_normalize_schedule_mode(bootstrap.get("schedule_mode")),
is_running=True,
)
except HTTPException:
return RuntimeModeResponse(
mode="unknown",
is_backtest=False,
run_id=None,
schedule_mode=None,
is_running=False,
)
@@ -587,11 +937,24 @@ def _load_latest_runtime_snapshot() -> Dict[str, Any]:
def _get_active_runtime_snapshot() -> Dict[str, Any]:
-"""Return the active runtime snapshot, preferring in-memory manager state."""
+"""Return the active runtime snapshot.
For a running Gateway, the canonical runtime source of truth is the
run-scoped snapshot file under `runs/<run_id>/state/runtime_state.json`,
because the Gateway subprocess mutates it directly while the parent
runtime_service process may still hold a stale in-memory manager snapshot.
"""
if not _is_gateway_running():
raise HTTPException(status_code=404, detail="No runtime is currently running")
manager = _runtime_state.runtime_manager
if manager is not None:
run_id = str(getattr(manager, "config_name", "") or "").strip()
if run_id:
snapshot_path = _get_run_dir(run_id) / "state" / "runtime_state.json"
if snapshot_path.exists():
return json.loads(snapshot_path.read_text(encoding="utf-8"))
if manager is not None and hasattr(manager, "build_snapshot"):
snapshot = manager.build_snapshot()
context = snapshot.get("context") or {}
@@ -618,11 +981,32 @@ def _read_log_tail(path: Path, max_chars: int = 120_000) -> str:
if not path.exists() or not path.is_file():
return ""
text = path.read_text(encoding="utf-8", errors="replace")
text = _sanitize_runtime_log_text(text)
if len(text) <= max_chars:
return text
return text[-max_chars:]
def _sanitize_runtime_log_text(text: str) -> str:
if not text:
return ""
# Drop repetitive development-only warnings for unsandboxed skill execution.
text = re.sub(
r"(?:^|\n)=+\n"
r"⚠️\s+\[安全警告\]\s+技能在无沙盒模式下运行\s+\(SKILL_SANDBOX_MODE=none\)\n"
r"\s+技能脚本将直接在当前进程中执行,无隔离保护。\n"
r"\s+建议:生产环境请设置\s+SKILL_SANDBOX_MODE=docker\n"
r"=+\n?",
"\n",
text,
flags=re.MULTILINE,
)
text = re.sub(r"\n{3,}", "\n\n", text)
return text.strip()
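The trailing normalization above — collapsing the blank runs left behind once the warning banners are removed — can be demonstrated on its own (the helper name is illustrative):

```python
import re

# Sketch of the final step in _sanitize_runtime_log_text: runs of three or
# more newlines collapse to a single blank line, then the result is stripped.
def collapse_blank_runs(text):
    return re.sub(r"\n{3,}", "\n\n", text).strip()

assert collapse_blank_runs("a\n\n\n\nb") == "a\n\nb"
```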
def _get_current_runtime_context() -> Dict[str, Any]:
"""Return the active runtime context from the latest snapshot."""
if not _is_gateway_running():
@@ -647,7 +1031,7 @@ def _resolve_runtime_response(run_id: str) -> RuntimeConfigResponse:
project_root=PROJECT_ROOT,
config_name=run_id,
enable_memory=bool(bootstrap.get("enable_memory", False)),
-schedule_mode=str(bootstrap.get("schedule_mode", "daily")),
+schedule_mode=_normalize_schedule_mode(bootstrap.get("schedule_mode", "daily")),
interval_minutes=int(bootstrap.get("interval_minutes", 60) or 60),
trigger_time=str(bootstrap.get("trigger_time", "09:30") or "09:30"),
)
@@ -667,11 +1051,11 @@
updates: Dict[str, Any] = {}
if request.schedule_mode is not None:
-schedule_mode = str(request.schedule_mode).strip().lower()
-if schedule_mode not in {"daily", "intraday"}:
+schedule_mode = _normalize_schedule_mode(request.schedule_mode)
+if schedule_mode not in {"daily", "interval"}:
raise HTTPException(
status_code=400,
-detail="schedule_mode must be 'daily' or 'intraday'",
+detail="schedule_mode must be 'daily' or 'interval'",
)
updates["schedule_mode"] = schedule_mode
@@ -807,14 +1191,38 @@ async def start_runtime(
_runtime_state.gateway_process = None
log_path = _get_gateway_log_path_for_run(run_id)
log_tail = _read_log_tail(log_path, max_chars=4000)
# Build detailed error message
error_details = []
error_details.append("Gateway process exited unexpectedly")
process_details = _get_gateway_process_details()
if process_details.get("returncode") is not None:
error_details.append(f"Exit code: {process_details['returncode']}")
if log_tail:
error_details.append(f"Recent log output:\n{log_tail}")
else:
error_details.append("No log output available. Check environment configuration.")
# Check common configuration issues
config_errors = _validate_gateway_config(bootstrap)
if config_errors:
error_details.append(f"Configuration issues detected: {'; '.join(config_errors)}")
raise HTTPException(
status_code=500,
-detail=f"Gateway failed to start: {log_tail or 'Unknown error'}"
+detail="\n".join(error_details)
)
except HTTPException:
raise
except Exception as e:
_stop_gateway()
-raise HTTPException(status_code=500, detail=f"Failed to start Gateway: {str(e)}")
+raise HTTPException(
+status_code=500,
+detail=f"Failed to start Gateway: {type(e).__name__}: {str(e)}"
+)
return LaunchResponse(
run_id=run_id,
@@ -861,17 +1269,38 @@ async def stop_runtime(force: bool = True) -> StopResponse:
was_running = _is_gateway_running()
if not was_running:
process_details = _get_gateway_process_details()
if process_details["status"] == "exited":
# Process exited but we have a record of it
raise HTTPException(
status_code=404,
detail=(
f"No runtime is currently running. "
f"Previous Gateway process exited with code {process_details['returncode']}. "
f"PID: {process_details['pid']}"
)
)
raise HTTPException(status_code=404, detail="No runtime is currently running")
# Get process details before stopping for the response
process_details = _get_gateway_process_details()
pid_info = f" (PID: {process_details.get('pid')})" if process_details.get('pid') else ""
# Stop Gateway process # Stop Gateway process
_stop_gateway() stop_success = _stop_gateway()
if not stop_success:
raise HTTPException(
status_code=500,
detail=f"Failed to stop Gateway process{pid_info}. Process may have already terminated."
)
# Unregister runtime manager # Unregister runtime manager
unregister_runtime_manager() unregister_runtime_manager()
return StopResponse( return StopResponse(
status="stopped", status="stopped",
message="Runtime stopped successfully", message=f"Runtime stopped successfully{pid_info}",
) )
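The new `stop_success = _stop_gateway()` check above implies a graceful terminate-then-kill shutdown with a success flag. A stdlib-only sketch of that pattern, under the assumption that `_stop_gateway` works roughly this way (names here are illustrative, not the project's actual helper):

```python
import subprocess
import sys


def stop_process(proc: subprocess.Popen, timeout: float = 5.0) -> bool:
    """Terminate a child process gracefully, escalating to kill on timeout."""
    if proc.poll() is not None:
        return False  # already exited; nothing to stop
    proc.terminate()  # polite SIGTERM first
    try:
        proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()  # escalate to SIGKILL
        proc.wait()
    return True


child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
assert stop_process(child) is True   # stopped a live process
assert stop_process(child) is False  # second call: already exited
```

Returning `False` for an already-dead process is what lets the route above distinguish "stopped it" from "it may have already terminated".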


@@ -1,8 +1,9 @@
 # -*- coding: utf-8 -*-
 """
-Workspace API Routes
+Workspace API Routes.

-Provides REST API endpoints for workspace management.
+These routes manage the design-time `workspaces/` registry, not the run-scoped
+runtime data under `runs/<run_id>/`.
 """

 from typing import Any, Dict, List, Optional

@@ -31,7 +32,7 @@ class UpdateWorkspaceRequest(BaseModel):
 class WorkspaceResponse(BaseModel):
-    """Workspace information response."""
+    """Design-time workspace information response."""

     workspace_id: str
     name: str
     description: str

@@ -89,10 +90,10 @@ async def list_workspaces(
     manager: WorkspaceManager = Depends(get_workspace_manager),
 ):
     """
-    List all workspaces.
+    List all design-time workspaces.

     Returns:
-        List of workspaces
+        List of design-time workspaces
     """
     workspaces = manager.list_workspaces()
     return WorkspaceListResponse(


@@ -5,8 +5,6 @@ from .agent_service import app as agent_app
 from .agent_service import create_app as create_agent_app
 from .news_service import app as news_app
 from .news_service import create_app as create_news_app
-from .openclaw_service import app as openclaw_app
-from .openclaw_service import create_app as create_openclaw_app
 from .runtime_service import app as runtime_app
 from .runtime_service import create_app as create_runtime_app
 from .trading_service import app as trading_app
@@ -23,8 +21,6 @@ __all__ = [
     "create_agent_app",
     "news_app",
     "create_news_app",
-    "openclaw_app",
-    "create_openclaw_app",
    "runtime_app",
     "create_runtime_app",
     "trading_app",


@@ -11,7 +11,7 @@ from fastapi import FastAPI
 from backend.apps.cors import add_cors_middleware
-from backend.api import agents_router, guard_router, workspaces_router
+from backend.api import agents_router, guard_router, workspaces_router, runs_router
 from backend.agents import AgentFactory, WorkspaceManager, get_registry

 # Global instances (initialized on startup)
@@ -19,13 +19,30 @@ agent_factory: AgentFactory | None = None
 workspace_manager: WorkspaceManager | None = None

+def _build_scope_payload(project_root: Path) -> dict[str, object]:
+    return {
+        "design_time_registry": {
+            "root": str(project_root / "workspaces"),
+            "meaning": "Persistent control-plane workspace registry",
+        },
+        "runtime_assets": {
+            "root": str(project_root / "runs"),
+            "meaning": "Run-scoped runtime state and agent assets",
+        },
+        "agent_route_note": (
+            "Runtime routes use `/api/runs/{run_id}/agents/...`. "
+            "Design-time CRUD routes use `/api/workspaces/{workspace_id}/agents/...`."
+        ),
+    }
+
 def create_app(project_root: Path | None = None) -> FastAPI:
     """Create the agent control-plane app."""
     resolved_project_root = project_root or Path(__file__).resolve().parents[2]

     @asynccontextmanager
     async def lifespan(_app: FastAPI) -> AsyncGenerator[None, None]:
-        """Initialize workspace and registry state for the control plane."""
+        """Initialize design-time workspace and registry state for the control plane."""
         global agent_factory, workspace_manager

         workspace_manager = WorkspaceManager(project_root=resolved_project_root)
@@ -34,7 +51,7 @@ def create_app(project_root: Path | None = None) -> FastAPI:
         registry = get_registry()
         print("✓ 大时代 API started")
-        print(f"  - Workspaces root: {agent_factory.workspaces_root}")
+        print(f"  - Design workspaces root: {agent_factory.workspaces_root}")
         print(f"  - Registered agents: {registry.get_agent_count()}")

         yield
@@ -63,6 +80,7 @@ def create_app(project_root: Path | None = None) -> FastAPI:
                 if workspace_manager
                 else 0
             ),
+            "scope_roots": _build_scope_payload(resolved_project_root),
         }

     @app.get("/api/status")
@@ -72,10 +90,12 @@ def create_app(project_root: Path | None = None) -> FastAPI:
         return {
             "status": "operational",
             "registry": registry.get_stats(),
+            "scope": _build_scope_payload(resolved_project_root),
         }

     app.include_router(workspaces_router)
     app.include_router(agents_router)
+    app.include_router(runs_router)
     app.include_router(guard_router)

     return app


@@ -81,7 +81,12 @@ async def proxy_ws(ws: WebSocket):
     await ws.accept()
     upstream = None
     try:
-        upstream = await websockets.asyncio.client.connect(gateway_url)
+        upstream = await websockets.asyncio.client.connect(
+            gateway_url,
+            ping_interval=20,
+            ping_timeout=120,
+            max_size=10 * 1024 * 1024,  # 10MB
+        )

         async def client_to_upstream():
             try:

@@ -28,11 +28,11 @@ def create_app() -> FastAPI:
     add_cors_middleware(app)

     @app.get("/health")
-    async def health_check() -> dict[str, str]:
+    def health_check() -> dict[str, str]:
         return {"status": "healthy", "service": "news-service"}

     @app.get("/api/enriched-news")
-    async def api_get_enriched_news(
+    def api_get_enriched_news(
         ticker: str = Query(..., min_length=1),
         start_date: str | None = Query(None),
         end_date: str | None = Query(None),
@@ -49,7 +49,7 @@ def create_app() -> FastAPI:
         )

     @app.get("/api/news-for-date")
-    async def api_get_news_for_date(
+    def api_get_news_for_date(
         ticker: str = Query(..., min_length=1),
         date: str = Query(...),
         limit: int = Query(20, ge=1, le=100),
@@ -64,7 +64,7 @@ def create_app() -> FastAPI:
         )

     @app.get("/api/news-timeline")
-    async def api_get_news_timeline(
+    def api_get_news_timeline(
         ticker: str = Query(..., min_length=1),
         start_date: str = Query(...),
         end_date: str = Query(...),
@@ -79,7 +79,7 @@ def create_app() -> FastAPI:
         )

     @app.get("/api/categories")
-    async def api_get_categories(
+    def api_get_categories(
         ticker: str = Query(..., min_length=1),
         start_date: str | None = Query(None),
         end_date: str | None = Query(None),
@@ -96,7 +96,7 @@ def create_app() -> FastAPI:
         )

     @app.get("/api/similar-days")
-    async def api_get_similar_days(
+    def api_get_similar_days(
         ticker: str = Query(..., min_length=1),
         date: str = Query(...),
         n_similar: int = Query(5, ge=1, le=20),
@@ -111,7 +111,7 @@ def create_app() -> FastAPI:
         )

     @app.get("/api/stories/{ticker}")
-    async def api_get_story(
+    def api_get_story(
         ticker: str,
         as_of_date: str = Query(...),
         store: MarketStore = Depends(get_market_store),
@@ -124,7 +124,7 @@ def create_app() -> FastAPI:
         )

     @app.get("/api/range-explain")
-    async def api_get_range_explain(
+    def api_get_range_explain(
         ticker: str = Query(..., min_length=1),
         start_date: str = Query(...),
         end_date: str = Query(...),
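The `async def` → `def` change across these read endpoints matters because FastAPI runs plain `def` route handlers in a worker threadpool, whereas a blocking call inside `async def` stalls the entire event loop. The underlying mechanism, sketched with the stdlib only:

```python
import asyncio
import time


def blocking_fetch() -> str:
    time.sleep(0.05)  # stands in for a synchronous store/DB query
    return "payload"


async def main() -> tuple[float, float]:
    loop = asyncio.get_running_loop()

    # Wrong: calling blocking code directly inside a coroutine freezes the loop
    start = time.monotonic()
    blocking_fetch()
    blocked = time.monotonic() - start

    # Right: offload to a thread, as FastAPI does for plain `def` handlers
    start = time.monotonic()
    task = loop.run_in_executor(None, blocking_fetch)
    await asyncio.sleep(0)  # the loop stays free to serve other requests
    free = time.monotonic() - start
    await task
    return blocked, free


blocked, free = asyncio.run(main())
assert blocked >= 0.05 and free < blocked
```

This is likely why the commit message mentions fixing WebSocket disconnects: blocking handlers inside `async def` starve the loop that also services the WebSocket proxy.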


@@ -1,49 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Read-only OpenClaw CLI FastAPI surface."""
-
-from __future__ import annotations
-
-from fastapi import Depends, FastAPI
-
-from backend.api import openclaw_router
-from backend.apps.cors import add_cors_middleware
-from backend.api.openclaw import get_openclaw_cli_service
-
-
-def create_app() -> FastAPI:
-    """Create the OpenClaw service app."""
-    app = FastAPI(
-        title="大时代 OpenClaw Service",
-        description="Read-only OpenClaw CLI integration service surface",
-        version="0.1.0",
-    )
-    add_cors_middleware(app)
-
-    @app.get("/health")
-    async def health_check(
-        service=Depends(get_openclaw_cli_service),
-    ) -> dict[str, object]:
-        return service.health()
-
-    @app.get("/api/status")
-    async def api_status(
-        service=Depends(get_openclaw_cli_service),
-    ) -> dict[str, object]:
-        return {
-            "status": "operational",
-            "service": "openclaw-service",
-            "openclaw": service.health(),
-        }
-
-    app.include_router(openclaw_router)
-    return app
-
-
-app = create_app()
-
-
-if __name__ == "__main__":
-    import uvicorn
-
-    uvicorn.run(app, host="0.0.0.0", port=8004)


@@ -5,8 +5,8 @@ from __future__ import annotations
 from fastapi import FastAPI

-from backend.api import runtime_router
-from backend.api.runtime import get_runtime_state
+from backend.api import runtime_router, dynamic_team_router
+from backend.api.runtime import get_runtime_state, _check_gateway_health, _get_gateway_process_details
 from backend.apps.cors import add_cors_middleware

@@ -22,34 +22,63 @@ def create_app() -> FastAPI:
     @app.get("/health")
     async def health_check() -> dict[str, object]:
-        """Health check for the runtime service."""
+        """Health check for the runtime service with Gateway process status."""
         runtime_state = get_runtime_state()
         process = runtime_state.gateway_process
+        process_details = _get_gateway_process_details()
         is_running = process is not None and process.poll() is None
+
+        # Determine overall health status
+        if is_running:
+            status = "healthy"
+        elif process is not None:
+            # Process existed but exited
+            status = "degraded"
+        else:
+            status = "healthy"  # Service is healthy even without Gateway running
+
         return {
-            "status": "healthy",
+            "status": status,
             "service": "runtime-service",
-            "gateway_running": is_running,
-            "gateway_port": runtime_state.gateway_port,
+            "gateway": {
+                "running": is_running,
+                "port": runtime_state.gateway_port,
+                "pid": process_details.get("pid"),
+                "process_status": process_details.get("status"),
+                "returncode": process_details.get("returncode"),
+            },
         }

+    @app.get("/health/gateway")
+    async def gateway_health_check() -> dict[str, object]:
+        """Detailed health check for the Gateway subprocess."""
+        health = _check_gateway_health()
+        return health
+
     @app.get("/api/status")
     async def api_status() -> dict[str, object]:
         """Service-level status payload for runtime orchestration."""
         runtime_state = get_runtime_state()
         process = runtime_state.gateway_process
+        process_details = _get_gateway_process_details()
         is_running = process is not None and process.poll() is None
         return {
             "status": "operational",
             "service": "runtime-service",
             "runtime": {
                 "gateway_running": is_running,
                 "gateway_port": runtime_state.gateway_port,
+                "gateway_pid": process_details.get("pid"),
+                "gateway_process_status": process_details.get("status"),
                 "has_runtime_manager": runtime_state.runtime_manager is not None,
             },
         }

     app.include_router(runtime_router)
+    app.include_router(dynamic_team_router)
     return app
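The new `/health` branch logic reduces to a small pure function; restated as a standalone sketch with the three states from the diff:

```python
def gateway_status(process_exists: bool, is_running: bool) -> str:
    """Mirror the /health status decision in the runtime service above."""
    if is_running:
        return "healthy"
    if process_exists:
        return "degraded"  # the Gateway process existed but has exited
    return "healthy"       # the service itself is fine without a Gateway


assert gateway_status(True, True) == "healthy"
assert gateway_status(True, False) == "degraded"
assert gateway_status(False, False) == "healthy"
```

Reporting "degraded" rather than "unhealthy" for a crashed Gateway keeps load balancers routing to the service so operators can still reach `/health/gateway` for diagnostics.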


@@ -29,12 +29,12 @@ def create_app() -> FastAPI:
     add_cors_middleware(app)

     @app.get("/health")
-    async def health_check() -> dict[str, str]:
+    def health_check() -> dict[str, str]:
         """Health check endpoint."""
         return {"status": "healthy", "service": "trading-service"}

     @app.get("/api/prices", response_model=PriceResponse)
-    async def api_get_prices(
+    def api_get_prices(
         ticker: str = Query(..., min_length=1),
         start_date: str = Query(...),
         end_date: str = Query(...),
@@ -47,7 +47,7 @@ def create_app() -> FastAPI:
         return PriceResponse(ticker=payload["ticker"], prices=payload["prices"])

     @app.get("/api/financials", response_model=FinancialMetricsResponse)
-    async def api_get_financials(
+    def api_get_financials(
         ticker: str = Query(..., min_length=1),
         end_date: str = Query(...),
         period: str = Query("ttm"),
@@ -62,7 +62,7 @@ def create_app() -> FastAPI:
         return FinancialMetricsResponse(financial_metrics=payload["financial_metrics"])

     @app.get("/api/news", response_model=CompanyNewsResponse)
-    async def api_get_news(
+    def api_get_news(
         ticker: str = Query(..., min_length=1),
         end_date: str = Query(...),
         start_date: str | None = Query(None),
@@ -77,7 +77,7 @@ def create_app() -> FastAPI:
         return CompanyNewsResponse(news=payload["news"])

     @app.get("/api/insider-trades", response_model=InsiderTradeResponse)
-    async def api_get_insider_trades(
+    def api_get_insider_trades(
         ticker: str = Query(..., min_length=1),
         end_date: str = Query(...),
         start_date: str | None = Query(None),
@@ -92,12 +92,12 @@ def create_app() -> FastAPI:
         return InsiderTradeResponse(insider_trades=payload["insider_trades"])

     @app.get("/api/market/status")
-    async def api_get_market_status() -> dict[str, Any]:
+    def api_get_market_status() -> dict[str, Any]:
         """Return current market status using the existing market service logic."""
         return trading_domain.get_market_status_payload()

     @app.get("/api/market-cap")
-    async def api_get_market_cap(
+    def api_get_market_cap(
         ticker: str = Query(..., min_length=1),
         end_date: str = Query(...),
     ) -> dict[str, Any]:
@@ -108,7 +108,7 @@ def create_app() -> FastAPI:
     )

     @app.get("/api/line-items", response_model=LineItemResponse)
-    async def api_get_line_items(
+    def api_get_line_items(
         ticker: str = Query(..., min_length=1),
         line_items: list[str] = Query(...),
         end_date: str = Query(...),

(File diff suppressed because it is too large.)


@@ -27,8 +27,10 @@ valuation_analyst:
 portfolio_manager:
   skills:
     - portfolio_decisioning
+    - dynamic_team_management
   active_tool_groups:
     - portfolio_ops
+    - dynamic_team

 risk_manager:
   skills:


@@ -77,7 +77,7 @@ def get_bootstrap_config_for_run(
     project_root: Path,
     config_name: str,
 ) -> BootstrapConfig:
-    """Load BOOTSTRAP.md from the run workspace."""
+    """Load BOOTSTRAP.md from the run-scoped asset tree."""
     return load_bootstrap_config(
         project_root / "runs" / config_name / "BOOTSTRAP.md",
     )
@@ -131,6 +131,13 @@ def _coerce_bool(value: Any) -> bool:
     return bool(value)

+def _normalize_schedule_mode(value: Any) -> str:
+    mode = str(value or "daily").strip().lower()
+    if mode == "intraday":
+        return "interval"
+    return mode or "daily"
+
 def resolve_runtime_config(
     project_root: Path,
     config_name: str,
@@ -162,9 +169,9 @@ def resolve_runtime_config(
                 get_env_int("MAX_COMM_CYCLES", 2),
             ),
         ),
-        "schedule_mode": str(
-            bootstrap.get("schedule_mode", schedule_mode),
-        ).strip().lower() or schedule_mode,
+        "schedule_mode": _normalize_schedule_mode(
+            bootstrap.get("schedule_mode", schedule_mode),
+        ),
         "interval_minutes": int(
             bootstrap.get(
                 "interval_minutes",
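The `_normalize_schedule_mode` helper above centralizes what was previously inline `.strip().lower()` handling and additionally maps the legacy `intraday` value onto `interval`. Its behavior, restated as a standalone sketch:

```python
from typing import Any


def normalize_schedule_mode(value: Any) -> str:
    """Normalize a schedule-mode string, mapping the legacy 'intraday' alias."""
    mode = str(value or "daily").strip().lower()
    if mode == "intraday":
        return "interval"  # legacy alias for interval-based scheduling
    return mode or "daily"


assert normalize_schedule_mode(None) == "daily"
assert normalize_schedule_mode("  Intraday ") == "interval"
assert normalize_schedule_mode("INTERVAL") == "interval"
assert normalize_schedule_mode("") == "daily"
```

Accepting `Any` and coercing through `str()` means a YAML value like `intraday` (or even a bare token parsed as something else) still normalizes safely instead of raising.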

backend/core/apo.py (new file, 197 lines)

@@ -0,0 +1,197 @@
+# -*- coding: utf-8 -*-
+"""
+Autonomous Policy Optimizer (APO)
+
+Automatically tunes agent policies based on performance feedback.
+"""
+
+import logging
+import json
+import os
+from datetime import datetime
+from pathlib import Path
+from typing import Any, Dict, List, Optional
+
+from agentscope.message import Msg
+
+from backend.llm.models import get_agent_model, get_agent_formatter
+from backend.agents.workspace_manager import WorkspaceManager
+
+logger = logging.getLogger(__name__)
+
+
+class PolicyOptimizer:
+    """
+    PolicyOptimizer analyzes trading performance and automatically updates
+    agent workspace files (POLICY.md, AGENTS.md) to improve future results.
+    """
+
+    def __init__(self, config_name: str, project_root: Optional[Path] = None):
+        self.config_name = config_name
+        self.workspace_manager = WorkspaceManager(project_root=project_root)
+        # Use a high-capability model for the optimizer (meta-agent)
+        self.model = get_agent_model("portfolio_manager")
+        self.formatter = get_agent_formatter("portfolio_manager")
+
+    async def run_optimization(
+        self,
+        date: str,
+        reflection_content: str,
+        settlement_result: Dict[str, Any],
+        analyst_results: List[Dict[str, Any]],
+        decisions: Dict[str, Dict],
+    ) -> Dict[str, Any]:
+        """
+        Run the optimization loop if performance indicates a need for change.
+        """
+        total_pnl = settlement_result.get("portfolio_value", 0) - 100000.0  # Assuming 100k initial
+        # You might want to use a more sophisticated trigger, like 3 consecutive losses
+        if total_pnl >= 0:
+            logger.info(f"APO: Positive P&L (${total_pnl:,.2f}) for {date}, skipping optimization.")
+            return {"status": "skipped", "reason": "positive_pnl"}
+
+        logger.info(f"APO: Negative P&L (${total_pnl:,.2f}) detected for {date}. Starting optimization...")
+
+        # 1. Identify underperforming agents or logic
+        # 2. Generate policy updates
+        # 3. Apply updates
+        optimizations = []
+
+        # Focus on agents that gave high confidence but wrong direction
+        underperformers = self._identify_underperformers(settlement_result, analyst_results)
+
+        for agent_id in underperformers:
+            update = await self._generate_policy_update(
+                agent_id,
+                date,
+                reflection_content,
+                settlement_result,
+                analyst_results,
+                decisions
+            )
+            if update:
+                self._apply_update(agent_id, update)
+                optimizations.append({
+                    "agent_id": agent_id,
+                    "file": update.get("file", "POLICY.md"),
+                    "change": update.get("change", "")
+                })
+
+        return {
+            "status": "completed",
+            "date": date,
+            "total_pnl": total_pnl,
+            "optimizations": optimizations
+        }
+
+    def _identify_underperformers(
+        self,
+        settlement_result: Dict[str, Any],
+        analyst_results: List[Dict[str, Any]]
+    ) -> List[str]:
+        """Identify which agents might need policy adjustments."""
+        underperformers = []
+        # Simple logic: if the overall day was a loss, all active analysts might need a check,
+        # but specifically those whose predictions didn't match the market.
+        # For now, let's include all analysts involved in the day.
+        for result in analyst_results:
+            agent_id = result.get("agent")
+            if agent_id:
+                underperformers.append(agent_id)
+
+        # Also include PM and Risk Manager as they are critical
+        underperformers.append("portfolio_manager")
+        underperformers.append("risk_manager")
+
+        return list(set(underperformers))
+
+    async def _generate_policy_update(
+        self,
+        agent_id: str,
+        date: str,
+        reflection_content: str,
+        settlement_result: Dict[str, Any],
+        analyst_results: List[Dict[str, Any]],
+        decisions: Dict[str, Dict],
+    ) -> Optional[Dict[str, str]]:
+        """Use LLM to generate a specific policy update for an agent."""
+        # Load current policy
+        try:
+            current_policy = self.workspace_manager.load_agent_file(
+                config_name=self.config_name,
+                agent_id=agent_id,
+                filename="POLICY.md"
+            )
+        except Exception:
+            current_policy = "No existing policy found."
+
+        prompt = f"""
+As an Expert Meta-Optimizer for a multi-agent trading system, your task is to update the operational POLICY for an agent named '{agent_id}' based on recent performance failures.
+
+[Current Context]
+Date: {date}
+Daily Reflection:
+{reflection_content}
+
+[Agent's Current POLICY.md]
+{current_policy}
+
+[Task]
+Analyze why the system failed (loss occurred). Identify what '{agent_id}' could have done differently or what new constraint/heuristic should be added to its policy to prevent similar mistakes in the future.
+
+Provide a specific, concise addition or modification to the POLICY.md file.
+The output MUST be a JSON object with:
+1. "reasoning": Brief explanation of why this change is needed.
+2. "file": Always "POLICY.md".
+3. "change": The EXACT markdown text to APPEND or REPLACE in the file. Keep it in Chinese as the system uses Chinese prompts.
+
+Output ONLY the JSON object.
+"""
+        msg = Msg(name="system", content=prompt, role="user")
+        response = await self.model.reply(msg)
+
+        content = response.content
+        if isinstance(content, list):
+            content = content[0].get("text", "")
+
+        # Clean JSON if wrapped in markdown
+        if "```json" in content:
+            content = content.split("```json")[1].split("```")[0].strip()
+
+        try:
+            return json.loads(content)
+        except Exception as e:
+            logger.error(f"APO: Failed to parse optimization response for {agent_id}: {e}")
+            return None
+
+    def _apply_update(self, agent_id: str, update: Dict[str, str]) -> None:
+        """Apply the suggested update to the agent's workspace."""
+        filename = update.get("file", "POLICY.md")
+        change = update.get("change", "")
+        if not change:
+            return
+
+        try:
+            current_content = self.workspace_manager.load_agent_file(
+                config_name=self.config_name,
+                agent_id=agent_id,
+                filename=filename
+            )
+            # Check if change is already there to avoid duplicates
+            if change.strip() in current_content:
+                logger.info(f"APO: Change already present in {agent_id}/{filename}")
+                return
+
+            new_content = current_content + "\n\n### APO Update (" + datetime.now().strftime("%Y-%m-%d") + ")\n" + change
+
+            self.workspace_manager.update_agent_file(
+                config_name=self.config_name,
+                agent_id=agent_id,
+                filename=filename,
+                content=new_content
+            )
+            logger.info(f"APO: Updated {agent_id}/{filename} with new heuristics.")
+        except Exception as e:
+            logger.error(f"APO: Failed to apply update to {agent_id}/{filename}: {e}")
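`_generate_policy_update` strips a `json`-tagged code fence before parsing, but that inline split fails when the model emits a bare fence or plain JSON. A slightly more tolerant extraction helper (a hypothetical hardening sketch, not part of the commit; the fence string is built programmatically to keep this example readable):

```python
import json
from typing import Optional

FENCE = "`" * 3  # the literal triple-backtick markdown fence


def extract_json(text: str) -> Optional[dict]:
    """Parse JSON that may be wrapped in a markdown code fence."""
    cleaned = text.strip()
    if cleaned.startswith(FENCE):
        # Drop the opening fence line (with or without a language tag)
        # and any closing fence line
        lines = cleaned.splitlines()
        body = [ln for ln in lines[1:] if not ln.strip().startswith(FENCE)]
        cleaned = "\n".join(body)
    try:
        return json.loads(cleaned)
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        return None


assert extract_json('{"file": "POLICY.md"}') == {"file": "POLICY.md"}
assert extract_json(FENCE + 'json\n{"a": 1}\n' + FENCE) == {"a": 1}
assert extract_json(FENCE + '\n{"a": 1}\n' + FENCE) == {"a": 1}
assert extract_json("not json") is None
```

Returning `None` instead of raising matches the module's existing contract, where a failed parse simply skips that agent's update.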

(File diff suppressed because it is too large.)


@@ -4,6 +4,12 @@ Pipeline Runner - Independent trading pipeline execution
 This module provides functions to start/stop trading pipelines
 that can be called from the REST API.
+
+COMPATIBILITY_NOTE:
+    This module still carries selected fallback creation paths used by managed
+    runtime startup and compatibility flows. New runtime behavior should be judged
+    against the run-scoped helpers and current pipeline selection rules rather than
+    assuming every constructor here is the long-term default.
 """

 from __future__ import annotations
@@ -11,17 +17,19 @@ from __future__ import annotations
 import asyncio
 import os
 from contextlib import AsyncExitStack
+from dataclasses import dataclass
 from pathlib import Path
-from typing import Any, Dict, Optional, Callable
+from typing import Any, Dict, List, Optional, Callable

-from backend.agents import AnalystAgent, PMAgent, RiskAgent
+from backend.agents import EvoAgent
+from backend.agents.agent_workspace import load_agent_workspace_config
 from backend.agents.skills_manager import SkillsManager
 from backend.agents.toolkit_factory import create_agent_toolkit, load_agent_profiles
 from backend.agents.prompt_loader import get_prompt_loader
 from backend.agents.workspace_manager import WorkspaceManager
 from backend.config.constants import ANALYST_TYPES
 from backend.core.pipeline import TradingPipeline
-from backend.core.scheduler import BacktestScheduler, Scheduler
+from backend.core.scheduler import BacktestScheduler, Scheduler, normalize_schedule_mode
 from backend.llm.models import get_agent_formatter, get_agent_model
 from backend.runtime.manager import (
     TradingRuntimeManager,
@@ -41,6 +49,24 @@ _prompt_loader = get_prompt_loader()
 # Global gateway reference for cleanup
 _gateway_instance: Optional[Gateway] = None

+# Global long-term memory references for persistence
+_long_term_memories: List[Any] = []
+
+
+@dataclass
+class GatewayRuntimeBundle:
+    """Assembled runtime components for a Gateway-backed execution path."""
+
+    runtime_manager: TradingRuntimeManager
+    market_service: MarketService
+    storage_service: StorageService
+    pipeline: TradingPipeline
+    gateway: Gateway
+    scheduler: Optional[Scheduler]
+    scheduler_callback: Optional[Callable]
+    long_term_memories: List[Any]
+    trading_dates: List[str]
+
 def _set_gateway(gateway: Optional[Gateway]) -> None:
     """Set global gateway reference."""
@@ -61,6 +87,101 @@ def stop_gateway() -> None:
     _gateway_instance = None

+def _set_long_term_memories(memories: List[Any]) -> None:
+    """Set global long-term memory references."""
+    global _long_term_memories
+    _long_term_memories = memories
+
+
+def _clear_long_term_memories() -> None:
+    """Clear global long-term memory references."""
+    global _long_term_memories
+    _long_term_memories = []
+
+
+def _persist_long_term_memories_sync() -> None:
+    """
+    Synchronously persist all long-term memories before shutdown.
+
+    This function ensures all memory data is flushed to disk/vector store
+    before the process exits. Should be called during cleanup.
+    """
+    global _long_term_memories
+    if not _long_term_memories:
+        return
+
+    import logging
+    logger = logging.getLogger(__name__)
+    logger.info(f"[MemoryPersistence] Persisting {len(_long_term_memories)} memory instances...")
+
+    for i, memory in enumerate(_long_term_memories):
+        try:
+            # Try to save memory if it has a save method
+            if hasattr(memory, 'save') and callable(getattr(memory, 'save')):
+                if hasattr(memory, 'sync') and callable(getattr(memory, 'sync')):
+                    # Use sync version if available
+                    memory.sync()
+                    logger.debug(f"[MemoryPersistence] Synced memory {i}")
+                else:
+                    # Try async save with event loop
+                    import asyncio
+                    try:
+                        loop = asyncio.get_event_loop()
+                        if loop.is_running():
+                            # Schedule save in running loop
+                            loop.create_task(memory.save())
+                            logger.debug(f"[MemoryPersistence] Scheduled save for memory {i}")
+                        else:
+                            loop.run_until_complete(memory.save())
+                            logger.debug(f"[MemoryPersistence] Saved memory {i}")
+                    except RuntimeError:
+                        # No event loop, skip async save
+                        pass
+
+            # Try to flush any pending writes
+            if hasattr(memory, 'flush') and callable(getattr(memory, 'flush')):
+                memory.flush()
+                logger.debug(f"[MemoryPersistence] Flushed memory {i}")
+        except Exception as e:
+            logger.warning(f"[MemoryPersistence] Failed to persist memory {i}: {e}")
+
+    logger.info("[MemoryPersistence] Memory persistence complete")
+
+
+async def _persist_long_term_memories_async() -> None:
+    """
+    Asynchronously persist all long-term memories.
+
+    This is the preferred method for persisting memories when
+    an async context is available.
+    """
+    global _long_term_memories
+    if not _long_term_memories:
+        return
+
+    import logging
+    logger = logging.getLogger(__name__)
+    logger.info(f"[MemoryPersistence] Persisting {len(_long_term_memories)} memory instances async...")
+
+    for i, memory in enumerate(_long_term_memories):
+        try:
+            # Try async save first
+            if hasattr(memory, 'save') and callable(getattr(memory, 'save')):
+                await memory.save()
+                logger.debug(f"[MemoryPersistence] Saved memory {i} (async)")
+
+            # Try flush if available
+            if hasattr(memory, 'flush') and callable(getattr(memory, 'flush')):
+                memory.flush()
+                logger.debug(f"[MemoryPersistence] Flushed memory {i}")
+        except Exception as e:
+            logger.warning(f"[MemoryPersistence] Failed to persist memory {i}: {e}")
+
+    logger.info("[MemoryPersistence] Async memory persistence complete")
+
 def create_long_term_memory(agent_name: str, run_id: str, run_dir: Path):
     """Create ReMeTaskLongTermMemory for an agent."""
     try:
@@ -96,6 +217,166 @@ def create_long_term_memory(agent_name: str, run_id: str, run_dir: Path):
) )
def _resolve_evo_agent_ids() -> set[str]:
"""Return agent ids selected to use EvoAgent.
By default, all supported roles use EvoAgent.
"""
raw = os.getenv("EVO_AGENT_IDS", "")
if not raw.strip():
# Default: all supported roles use EvoAgent
return set(ANALYST_TYPES) | {"risk_manager", "portfolio_manager"}
requested = {
item.strip()
for item in raw.split(",")
if item.strip()
}
return {
agent_id
for agent_id in requested
if agent_id in ANALYST_TYPES or agent_id in {"risk_manager", "portfolio_manager"}
}
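The selection rule above can be exercised in isolation. The sketch below re-implements it with a stand-in `ANALYST_TYPES` tuple (the real list lives in `backend.config.constants`); the values shown are assumptions for illustration only.

```python
# Sketch of the EVO_AGENT_IDS selection rule; ANALYST_TYPES here is an
# assumed example tuple, not the project's real analyst list.
import os

ANALYST_TYPES = ("fundamental", "technical", "news")  # assumed values
SUPPORTED_EXTRA = {"risk_manager", "portfolio_manager"}

def resolve_evo_agent_ids() -> set[str]:
    raw = os.getenv("EVO_AGENT_IDS", "")
    if not raw.strip():
        # Empty or unset means every supported role uses EvoAgent
        return set(ANALYST_TYPES) | SUPPORTED_EXTRA
    requested = {item.strip() for item in raw.split(",") if item.strip()}
    # Unknown ids are silently dropped rather than raising
    return {a for a in requested if a in ANALYST_TYPES or a in SUPPORTED_EXTRA}

os.environ["EVO_AGENT_IDS"] = "technical, bogus ,risk_manager"
print(sorted(resolve_evo_agent_ids()))  # ['risk_manager', 'technical']
```

Note that filtering instead of raising makes a typo in the env var fail soft: the role quietly falls back to the default path rather than crashing startup.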
def _create_analyst_agent(
*,
analyst_type: str,
run_id: str,
model,
formatter,
skills_manager: SkillsManager,
active_skill_map: Dict[str, list[Path]],
long_term_memory=None,
):
"""Create one analyst agent, optionally using EvoAgent."""
active_skill_dirs = active_skill_map.get(analyst_type, [])
toolkit = create_agent_toolkit(
analyst_type,
run_id,
active_skill_dirs=active_skill_dirs,
)
workspace_dir = skills_manager.get_agent_asset_dir(run_id, analyst_type)
agent_config = load_agent_workspace_config(workspace_dir / "agent.yaml")
agent = EvoAgent(
agent_id=analyst_type,
config_name=run_id,
workspace_dir=workspace_dir,
model=model,
formatter=formatter,
skills_manager=skills_manager,
prompt_files=agent_config.prompt_files,
long_term_memory=long_term_memory,
)
agent.toolkit = toolkit
setattr(agent, "workspace_id", run_id)
return agent
def _create_risk_manager_agent(
*,
run_id: str,
model,
formatter,
skills_manager: SkillsManager,
active_skill_map: Dict[str, list[Path]],
long_term_memory=None,
):
"""Create the risk manager, optionally using EvoAgent."""
active_skill_dirs = active_skill_map.get("risk_manager", [])
toolkit = create_agent_toolkit(
"risk_manager",
run_id,
active_skill_dirs=active_skill_dirs,
)
use_evo_agent = "risk_manager" in _resolve_evo_agent_ids()
if use_evo_agent:
workspace_dir = skills_manager.get_agent_asset_dir(run_id, "risk_manager")
agent_config = load_agent_workspace_config(workspace_dir / "agent.yaml")
agent = EvoAgent(
agent_id="risk_manager",
config_name=run_id,
workspace_dir=workspace_dir,
model=model,
formatter=formatter,
skills_manager=skills_manager,
prompt_files=agent_config.prompt_files,
long_term_memory=long_term_memory,
)
agent.toolkit = toolkit
setattr(agent, "workspace_id", run_id)
return agent
return RiskAgent(
model=model,
formatter=formatter,
name="risk_manager",
config={"config_name": run_id},
long_term_memory=long_term_memory,
toolkit=toolkit,
)
def _create_portfolio_manager_agent(
*,
run_id: str,
model,
formatter,
initial_cash: float,
margin_requirement: float,
skills_manager: SkillsManager,
active_skill_map: Dict[str, list[Path]],
long_term_memory=None,
):
"""Create the portfolio manager, optionally using EvoAgent."""
active_skill_dirs = active_skill_map.get("portfolio_manager", [])
use_evo_agent = "portfolio_manager" in _resolve_evo_agent_ids()
if use_evo_agent:
workspace_dir = skills_manager.get_agent_asset_dir(
run_id,
"portfolio_manager",
)
agent_config = load_agent_workspace_config(workspace_dir / "agent.yaml")
agent = EvoAgent(
agent_id="portfolio_manager",
config_name=run_id,
workspace_dir=workspace_dir,
model=model,
formatter=formatter,
skills_manager=skills_manager,
prompt_files=agent_config.prompt_files,
initial_cash=initial_cash,
margin_requirement=margin_requirement,
long_term_memory=long_term_memory,
)
agent.toolkit = create_agent_toolkit(
"portfolio_manager",
run_id,
owner=agent,
active_skill_dirs=active_skill_dirs,
)
setattr(agent, "workspace_id", run_id)
return agent
return PMAgent(
name="portfolio_manager",
model=model,
formatter=formatter,
initial_cash=initial_cash,
margin_requirement=margin_requirement,
config={"config_name": run_id},
long_term_memory=long_term_memory,
toolkit_factory=create_agent_toolkit,
toolkit_factory_kwargs={
"active_skill_dirs": active_skill_dirs,
},
)
def create_agents(
    run_id: str,
    run_dir: Path,
@@ -129,11 +410,6 @@ def create_agents(
    for analyst_type in ANALYST_TYPES:
        model = get_agent_model(analyst_type)
        formatter = get_agent_formatter(analyst_type)
-        toolkit = create_agent_toolkit(
-            analyst_type,
-            run_id,
-            active_skill_dirs=active_skill_map.get(analyst_type, []),
-        )
        long_term_memory = None
        if enable_long_term_memory:
@@ -141,13 +417,13 @@ def create_agents(
        if long_term_memory:
            long_term_memories.append(long_term_memory)
-        analyst = AnalystAgent(
+        analyst = _create_analyst_agent(
            analyst_type=analyst_type,
-            toolkit=toolkit,
+            run_id=run_id,
            model=model,
            formatter=formatter,
-            agent_id=analyst_type,
-            config={"config_name": run_id},
+            skills_manager=skills_manager,
+            active_skill_map=active_skill_map,
            long_term_memory=long_term_memory,
        )
        analysts.append(analyst)
@@ -159,17 +435,13 @@ def create_agents(
    if risk_long_term_memory:
        long_term_memories.append(risk_long_term_memory)
-    risk_manager = RiskAgent(
+    risk_manager = _create_risk_manager_agent(
+        run_id=run_id,
        model=get_agent_model("risk_manager"),
        formatter=get_agent_formatter("risk_manager"),
-        name="risk_manager",
-        config={"config_name": run_id},
+        skills_manager=skills_manager,
+        active_skill_map=active_skill_map,
        long_term_memory=risk_long_term_memory,
-        toolkit=create_agent_toolkit(
-            "risk_manager",
-            run_id,
-            active_skill_dirs=active_skill_map.get("risk_manager", []),
-        ),
    )
    # Create portfolio manager
@@ -179,23 +451,165 @@ def create_agents(
    if pm_long_term_memory:
        long_term_memories.append(pm_long_term_memory)
-    portfolio_manager = PMAgent(
-        name="portfolio_manager",
+    portfolio_manager = _create_portfolio_manager_agent(
+        run_id=run_id,
        model=get_agent_model("portfolio_manager"),
        formatter=get_agent_formatter("portfolio_manager"),
        initial_cash=initial_cash,
        margin_requirement=margin_requirement,
-        config={"config_name": run_id},
+        skills_manager=skills_manager,
+        active_skill_map=active_skill_map,
        long_term_memory=pm_long_term_memory,
-        toolkit_factory=create_agent_toolkit,
-        toolkit_factory_kwargs={
-            "active_skill_dirs": active_skill_map.get("portfolio_manager", []),
-        },
    )
    return analysts, risk_manager, portfolio_manager, long_term_memories
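The refactor replaces inline constructor calls with per-role factories that pick the EvoAgent path or a legacy class based on the selected-id set. A minimal sketch of that dispatch shape, with stand-in classes (not the project's real agents):

```python
# Factory-dispatch sketch: choose the EvoAgent path when the role id was
# selected, otherwise fall back to the legacy class. Both classes here are
# illustrative stubs.
from dataclasses import dataclass

@dataclass
class EvoStub:
    agent_id: str

@dataclass
class LegacyStub:
    name: str

def create_role_agent(role: str, evo_ids: set[str]):
    # The role-specific toolkit would be built here in either branch
    if role in evo_ids:
        return EvoStub(agent_id=role)
    return LegacyStub(name=role)

print(type(create_role_agent("risk_manager", {"risk_manager"})).__name__)  # EvoStub
```

Keeping the branch inside one factory means `create_agents` no longer needs to know which implementation backs each role.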
def build_gateway_runtime_bundle(
*,
run_id: str,
run_dir: Path,
bootstrap: Dict[str, Any],
poll_interval: int = 10,
) -> GatewayRuntimeBundle:
"""Build the full Gateway runtime component graph for one run."""
tickers = bootstrap.get("tickers", ["AAPL", "MSFT", "GOOGL", "AMZN", "NVDA", "META", "TSLA", "AMD", "NFLX", "AVGO", "PLTR", "COIN"])
initial_cash = float(bootstrap.get("initial_cash", 100000.0))
margin_requirement = float(bootstrap.get("margin_requirement", 0.0))
max_comm_cycles = int(bootstrap.get("max_comm_cycles", 2))
schedule_mode = normalize_schedule_mode(bootstrap.get("schedule_mode", "daily"))
trigger_time = bootstrap.get("trigger_time", "09:30")
interval_minutes = int(bootstrap.get("interval_minutes", 60))
heartbeat_interval = int(bootstrap.get("heartbeat_interval", 0))
mode = bootstrap.get("mode", "live")
start_date = bootstrap.get("start_date")
end_date = bootstrap.get("end_date")
enable_memory = bootstrap.get("enable_memory", False)
is_backtest = mode == "backtest"
runtime_manager = TradingRuntimeManager(
config_name=run_id,
run_dir=run_dir,
bootstrap=bootstrap,
)
runtime_manager.prepare_run()
market_service = MarketService(
tickers=tickers,
poll_interval=poll_interval,
backtest_mode=is_backtest,
api_key=os.getenv("FINNHUB_API_KEY") if not is_backtest else None,
backtest_start_date=start_date if is_backtest else None,
backtest_end_date=end_date if is_backtest else None,
)
storage_service = StorageService(
dashboard_dir=run_dir / "team_dashboard",
initial_cash=initial_cash,
config_name=run_id,
)
if not storage_service.files["summary"].exists():
storage_service.initialize_empty_dashboard()
else:
storage_service.update_leaderboard_model_info()
analysts, risk_manager, pm, long_term_memories = create_agents(
run_id=run_id,
run_dir=run_dir,
initial_cash=initial_cash,
margin_requirement=margin_requirement,
enable_long_term_memory=enable_memory,
)
for agent in analysts + [risk_manager, pm]:
agent_id = getattr(agent, "agent_id", None) or getattr(agent, "name", None)
if agent_id:
runtime_manager.register_agent(agent_id)
portfolio_state = storage_service.load_portfolio_state()
pm.load_portfolio_state(portfolio_state)
settlement_coordinator = SettlementCoordinator(
storage=storage_service,
initial_capital=initial_cash,
)
pipeline = TradingPipeline(
analysts=analysts,
risk_manager=risk_manager,
portfolio_manager=pm,
settlement_coordinator=settlement_coordinator,
max_comm_cycles=max_comm_cycles,
runtime_manager=runtime_manager,
)
scheduler_callback = None
live_scheduler = None
trading_dates: List[str] = []
if is_backtest:
backtest_scheduler = BacktestScheduler(
start_date=start_date,
end_date=end_date,
trading_calendar="NYSE",
delay_between_days=0.5,
)
trading_dates = backtest_scheduler.get_trading_dates()
async def scheduler_callback_fn(callback):
await backtest_scheduler.start(callback)
scheduler_callback = scheduler_callback_fn
else:
live_scheduler = Scheduler(
mode=schedule_mode,
trigger_time=trigger_time,
interval_minutes=interval_minutes,
heartbeat_interval=heartbeat_interval if heartbeat_interval > 0 else None,
config={"config_name": run_id},
)
async def scheduler_callback_fn(callback):
await live_scheduler.start(callback)
scheduler_callback = scheduler_callback_fn
gateway = Gateway(
market_service=market_service,
storage_service=storage_service,
pipeline=pipeline,
scheduler_callback=scheduler_callback,
config={
"mode": mode,
"backtest_mode": is_backtest,
"tickers": tickers,
"config_name": run_id,
"schedule_mode": schedule_mode,
"interval_minutes": interval_minutes,
"trigger_time": trigger_time,
"heartbeat_interval": heartbeat_interval,
"initial_cash": initial_cash,
"margin_requirement": margin_requirement,
"max_comm_cycles": max_comm_cycles,
"enable_memory": enable_memory,
},
scheduler=live_scheduler,
)
if is_backtest:
gateway.set_backtest_dates(trading_dates)
return GatewayRuntimeBundle(
runtime_manager=runtime_manager,
market_service=market_service,
storage_service=storage_service,
pipeline=pipeline,
gateway=gateway,
scheduler=live_scheduler,
scheduler_callback=scheduler_callback,
long_term_memories=long_term_memories,
trading_dates=trading_dates,
)
async def run_pipeline(
    run_id: str,
    run_dir: Path,
@@ -236,7 +650,7 @@ async def run_pipeline(
    initial_cash = float(bootstrap.get("initial_cash", 100000.0))
    margin_requirement = float(bootstrap.get("margin_requirement", 0.0))
    max_comm_cycles = int(bootstrap.get("max_comm_cycles", 2))
-    schedule_mode = bootstrap.get("schedule_mode", "daily")
+    schedule_mode = normalize_schedule_mode(bootstrap.get("schedule_mode", "daily"))
    trigger_time = bootstrap.get("trigger_time", "09:30")
    interval_minutes = int(bootstrap.get("interval_minutes", 60))
    heartbeat_interval = int(bootstrap.get("heartbeat_interval", 0))
@@ -347,7 +761,7 @@ async def run_pipeline(
        trading_calendar="NYSE",
        delay_between_days=0.5,
    )
-    trading_dates = backtest_scheduler.get_trading_dates()
+    backtest_scheduler.get_trading_dates()
    async def scheduler_callback_fn(callback):
        await backtest_scheduler.start(callback)
@@ -400,6 +814,9 @@ async def run_pipeline(
    )
    _set_gateway(gateway)
+    # Set global memory references for persistence
+    _set_long_term_memories(long_term_memories)
    # Start pipeline execution
    async with AsyncExitStack() as stack:
        # Enter long-term memory contexts
@@ -467,6 +884,12 @@ async def run_pipeline(
    # Cleanup
    logger.info("[Pipeline] Cleaning up...")
+    # Persist long-term memories before cleanup
+    try:
+        await _persist_long_term_memories_async()
+    except Exception as e:
+        logger.warning(f"[Pipeline] Memory persistence error: {e}")
    # Stop Gateway
    try:
        stop_gateway()
@@ -474,6 +897,9 @@ async def run_pipeline(
    except Exception as e:
        logger.error(f"[Pipeline] Error stopping gateway: {e}")
+    # Clear memory references
+    _clear_long_term_memories()
    clear_shutdown_event()
    clear_global_runtime_manager()
    from backend.api.runtime import unregister_runtime_manager
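The cleanup hunk above establishes an ordering: persist memories first, then stop the gateway, and never let a persistence failure abort teardown. A self-contained sketch of that best-effort pattern (the functions are stand-ins for the real `_persist_long_term_memories_async` and `stop_gateway`):

```python
# Best-effort cleanup ordering: a save failure is logged, not propagated,
# so the gateway still shuts down.
import asyncio

events: list[str] = []

async def persist_memories() -> None:
    events.append("persist")
    raise RuntimeError("disk full")  # simulate a failing save

def stop_gateway() -> None:
    events.append("stop")

async def shutdown() -> None:
    try:
        await persist_memories()
    except Exception as e:
        events.append(f"warn:{e}")  # stands in for logger.warning(...)
    stop_gateway()  # runs regardless of the persistence outcome

asyncio.run(shutdown())
print(events)  # ['persist', 'warn:disk full', 'stop']
```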


@@ -17,6 +17,14 @@ NYSE_TZ = ZoneInfo("America/New_York")
NYSE_CALENDAR = mcal.get_calendar("NYSE")
+def normalize_schedule_mode(mode: str | None) -> str:
+    """Normalize schedule mode to the current public vocabulary."""
+    value = str(mode or "daily").strip().lower()
+    if value == "intraday":
+        return "interval"
+    return value or "daily"
class Scheduler:
    """
    Market-aware scheduler for live trading.
@@ -31,7 +39,7 @@ class Scheduler:
        heartbeat_interval: Optional[int] = None,
        config: Optional[dict] = None,
    ):
-        self.mode = mode
+        self.mode = normalize_schedule_mode(mode)
        self.trigger_time = trigger_time or "09:30"  # NYSE timezone
        self.trigger_now = self.trigger_time == "now"
        self.interval_minutes = interval_minutes or 60
@@ -107,7 +115,7 @@ class Scheduler:
        if self.mode == "daily":
            self._task = asyncio.create_task(self._run_daily(self._callback))
-        elif self.mode == "intraday":
+        elif self.mode == "interval":
            self._task = asyncio.create_task(
                self._run_intraday(self._callback),
            )
@@ -124,8 +132,13 @@ class Scheduler:
        """Update scheduler parameters in-place and restart its timing loop."""
        changed = False
-        if mode and mode != self.mode:
-            self.mode = mode
+        if mode:
+            normalized_mode = normalize_schedule_mode(mode)
+        else:
+            normalized_mode = None
+        if normalized_mode and normalized_mode != self.mode:
+            self.mode = normalized_mode
            changed = True
        if trigger_time and trigger_time != self.trigger_time:
@@ -233,13 +246,13 @@ class Scheduler:
            await callback(date=current_date)
    async def _run_intraday(self, callback: Callable):
-        """Run every N minutes (for future use)"""
+        """Run every N minutes in interval mode."""
        while self.running:
            now = self._now_nyse()
            current_date = now.strftime("%Y-%m-%d")
            if self._is_trading_day(now):
-                logger.info(f"Triggering intraday cycle for {current_date}")
+                logger.info(f"Triggering interval cycle for {current_date}")
                await callback(date=current_date)
            await asyncio.sleep(self.interval_minutes * 60)


@@ -123,6 +123,10 @@ class StateSync:
        # Persist to feed_history
        if persist:
            self.storage.add_feed_message(self._state, event)
-            self.save_state()
+            # Make persistence non-blocking to keep event loop snappy
+            if asyncio.get_event_loop().is_running():
+                asyncio.create_task(asyncio.to_thread(self.save_state))
+            else:
+                self.save_state()
        # Broadcast to frontend
@@ -135,6 +139,7 @@ class StateSync:
        self,
        agent_id: str,
        content: str,
+        agent_name: Optional[str] = None,
        **extra,
    ):
        """
@@ -151,6 +156,7 @@ class StateSync:
            {
                "type": "agent_message",
                "agentId": agent_id,
+                "agentName": agent_name,
                "content": content,
                "ts": ts_ms,
                **extra,
@@ -212,7 +218,12 @@ class StateSync:
            persist=False,
        )
-    async def on_conference_message(self, agent_id: str, content: str):
+    async def on_conference_message(
+        self,
+        agent_id: str,
+        content: str,
+        agent_name: Optional[str] = None,
+    ):
        """Called when an agent speaks during conference"""
        ts_ms = self._get_timestamp_ms()
@@ -220,6 +231,7 @@ class StateSync:
            {
                "type": "conference_message",
                "agentId": agent_id,
+                "agentName": agent_name,
                "content": content,
                "ts": ts_ms,
            },
@@ -463,6 +475,34 @@ class StateSync:
            limit=self.storage.max_feed_history,
        ) or self._state.get("last_day_history", [])
+        persisted_state = self.storage.read_persisted_server_state()
+        dashboard_snapshot = (
+            self.storage.build_dashboard_snapshot_from_state(self._state)
+            if include_dashboard
+            else None
+        )
+        dashboard_holdings = (
+            dashboard_snapshot.get("holdings", [])
+            if dashboard_snapshot is not None
+            else self._state.get("holdings", [])
+        )
+        dashboard_trades = (
+            dashboard_snapshot.get("trades", [])
+            if dashboard_snapshot is not None
+            else self._state.get("trades", [])
+        )
+        dashboard_stats = (
+            dashboard_snapshot.get("stats", {})
+            if dashboard_snapshot is not None
+            else self._state.get("stats", {})
+        )
+        dashboard_leaderboard = (
+            dashboard_snapshot.get("leaderboard", [])
+            if dashboard_snapshot is not None
+            else self._state.get("leaderboard", [])
+        )
+        portfolio_state = self._state.get("portfolio") or persisted_state.get("portfolio") or {}
        payload = {
            "server_mode": self._state.get("server_mode", "live"),
            "is_backtest": self._state.get("is_backtest", False),
@@ -476,24 +516,23 @@ class StateSync:
                "trading_days_completed",
                0,
            ),
-            "holdings": self._state.get("holdings", []),
-            "trades": self._state.get("trades", []),
-            "stats": self._state.get("stats", {}),
-            "leaderboard": self._state.get("leaderboard", []),
-            "portfolio": self._state.get("portfolio", {}),
+            "holdings": dashboard_holdings,
+            "trades": dashboard_trades,
+            "stats": dashboard_stats,
+            "leaderboard": dashboard_leaderboard,
+            "portfolio": portfolio_state,
            "realtime_prices": self._state.get("realtime_prices", {}),
            "data_sources": self._state.get("data_sources", {}),
            "price_history": self._state.get("price_history", {}),
        }
        if include_dashboard:
-            dashboard_snapshot = self.storage.build_dashboard_snapshot_from_state(self._state)
            payload["dashboard"] = {
                "summary": dashboard_snapshot.get("summary"),
-                "holdings": dashboard_snapshot.get("holdings"),
-                "stats": dashboard_snapshot.get("stats"),
-                "trades": dashboard_snapshot.get("trades"),
-                "leaderboard": dashboard_snapshot.get("leaderboard"),
+                "holdings": dashboard_holdings,
+                "stats": dashboard_stats,
+                "trades": dashboard_trades,
+                "leaderboard": dashboard_leaderboard,
            }
        return payload
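The non-blocking persistence change at the top of this hunk offloads a blocking file write to a worker thread whenever an event loop is running. A sketch of that pattern is below; note it uses `asyncio.get_running_loop()` inside a `try` as the loop check, which avoids the deprecation issues around calling `asyncio.get_event_loop()` from synchronous code (the diff itself uses `get_event_loop().is_running()`).

```python
# Off-loop persistence sketch: hand the blocking write to a worker thread
# when a loop is running, otherwise write synchronously.
import asyncio
import time

def save_state() -> str:
    time.sleep(0.01)  # stand-in for a blocking file write
    return "saved"

async def persist() -> str:
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return save_state()  # no running loop: write inline
    # Running inside a loop: do the write in a thread so the loop stays snappy
    return await asyncio.to_thread(save_state)

print(asyncio.run(persist()))  # saved
```

The diff goes one step further and wraps the call in `asyncio.create_task(...)`, making it fire-and-forget; that trades write-completion guarantees for latency, which is why the shutdown path still flushes memories explicitly.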


@@ -190,8 +190,9 @@ class MarketStore:
        name: str | None = None,
        sector: str | None = None,
        is_active: bool = True,
-    ) -> None:
+    ) -> int:
        timestamp = _utc_timestamp()
+        count = 0
        with self._connect() as conn:
            conn.execute(
                """
@@ -206,6 +207,8 @@ class MarketStore:
                """,
                (symbol, name, sector, 1 if is_active else 0, timestamp, timestamp),
            )
+            count += 1
+        return count
    def update_fetch_watermark(
        self,
@@ -213,8 +216,9 @@ class MarketStore:
        symbol: str,
        price_date: str | None = None,
        news_date: str | None = None,
-    ) -> None:
+    ) -> int:
        timestamp = _utc_timestamp()
+        count = 0
        with self._connect() as conn:
            conn.execute(
                """
@@ -227,6 +231,8 @@ class MarketStore:
                """,
                (symbol, timestamp, timestamp, price_date, news_date),
            )
+            count += 1
+        return count
    def get_ticker_watermarks(self, symbol: str) -> dict[str, Any]:
        with self._connect() as conn:
@@ -263,6 +269,8 @@ class MarketStore:
        count = 0
        with self._connect() as conn:
            for row in rows:
+                if not row.get("date"):
+                    continue
                conn.execute(
                    """
                    INSERT INTO ohlc
@@ -341,6 +349,7 @@ class MarketStore:
                        timestamp,
                    ),
                )
+                count += 1
                for ticker in tickers:
                    conn.execute(
                        """
@@ -349,7 +358,6 @@ class MarketStore:
                        """,
                        (news_id, str(ticker).strip().upper()),
                    )
-                count += 1
        return count
    def get_news_without_trade_date(self, symbol: str | None = None, *, limit: int = 5000) -> list[dict[str, Any]]:
@@ -928,8 +936,9 @@ class MarketStore:
        as_of_date: str,
        content: str,
        source: str = "local",
-    ) -> None:
+    ) -> int:
        timestamp = _utc_timestamp()
+        count = 0
        with self._connect() as conn:
            conn.execute(
                """
@@ -943,6 +952,8 @@ class MarketStore:
                """,
                (symbol, as_of_date, content, source, timestamp, timestamp),
            )
+            count += 1
+        return count
    def delete_story_cache(
        self,
@@ -1002,8 +1013,9 @@ class MarketStore:
        target_date: str,
        payload: dict[str, Any],
        source: str = "local",
-    ) -> None:
+    ) -> int:
        timestamp = _utc_timestamp()
+        count = 0
        with self._connect() as conn:
            conn.execute(
                """
@@ -1017,6 +1029,8 @@ class MarketStore:
                """,
                (symbol, target_date, _json_dumps(payload), source, timestamp, timestamp),
            )
+            count += 1
+        return count
    def delete_similar_day_cache(
        self,
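These hunks change the store's write methods from `-> None` to `-> int`, returning a count of statements issued so callers can log or assert how much work happened. A minimal sketch of the convention against an in-memory SQLite table; the schema and column names here are illustrative, not the project's real ones:

```python
# "Return affected-row count" sketch: upsert one row, count one statement.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickers (symbol TEXT PRIMARY KEY, name TEXT)")

def upsert_ticker(symbol: str, name: str) -> int:
    count = 0
    with conn:  # connection context manager commits on success
        conn.execute(
            "INSERT INTO tickers (symbol, name) VALUES (?, ?) "
            "ON CONFLICT(symbol) DO UPDATE SET name = excluded.name",
            (symbol, name),
        )
        count += 1  # one statement issued, mirroring the diff's pattern
    return count

print(upsert_ticker("AAPL", "Apple"))      # 1
print(upsert_ticker("AAPL", "Apple Inc"))  # 1 (update path)
```

Counting issued statements (rather than `cursor.rowcount`) matches what the diff does; the news hunk also shows why placement matters, since moving `count += 1` next to the main insert stops the per-ticker link rows from inflating the total.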


@@ -1,15 +1,14 @@
# -*- coding: utf-8 -*-
-"""Gateway Server - Entry point for Gateway subprocess.
+"""Gateway Server - Entry point for the managed Gateway subprocess.
-This module is launched as a subprocess by the Control Plane (FastAPI)
-to run the Data Plane (Gateway + Pipeline).
+This module is launched by `runtime_service` when the runtime API is used to
+spawn a run-scoped Gateway process.
"""
import argparse
import asyncio
import json
import logging
-import os
import sys
from contextlib import AsyncExitStack
from pathlib import Path
@@ -19,28 +18,13 @@ from dotenv import load_dotenv
# Load environment variables
load_dotenv()
-from backend.agents import AnalystAgent, PMAgent, RiskAgent
-from backend.agents.skills_manager import SkillsManager
-from backend.agents.toolkit_factory import create_agent_toolkit, load_agent_profiles
-from backend.agents.prompt_loader import get_prompt_loader
-from backend.agents.workspace_manager import WorkspaceManager
-from backend.config.constants import ANALYST_TYPES
-from backend.core.pipeline import TradingPipeline
-from backend.core.pipeline_runner import create_agents, create_long_term_memory
-from backend.core.scheduler import BacktestScheduler, Scheduler
-from backend.llm.models import get_agent_formatter, get_agent_model
+from backend.core.pipeline_runner import build_gateway_runtime_bundle
from backend.runtime.manager import (
-    TradingRuntimeManager,
    set_global_runtime_manager,
    clear_global_runtime_manager,
)
-from backend.services.gateway import Gateway
-from backend.services.market import MarketService
-from backend.services.storage import StorageService
-from backend.utils.settlement import SettlementCoordinator
logger = logging.getLogger(__name__)
-_prompt_loader = get_prompt_loader()
INFO_LOGGER_PREFIXES = (
@@ -116,153 +100,24 @@ async def run_gateway(
    port: int
):
    """Run Gateway with Pipeline."""
-    # Extract config
-    tickers = bootstrap.get("tickers", ["AAPL", "MSFT", "GOOGL", "AMZN", "NVDA", "META", "TSLA", "AMD", "NFLX", "AVGO", "PLTR", "COIN"])
-    initial_cash = float(bootstrap.get("initial_cash", 100000.0))
-    margin_requirement = float(bootstrap.get("margin_requirement", 0.0))
-    max_comm_cycles = int(bootstrap.get("max_comm_cycles", 2))
-    schedule_mode = bootstrap.get("schedule_mode", "daily")
-    trigger_time = bootstrap.get("trigger_time", "09:30")
-    interval_minutes = int(bootstrap.get("interval_minutes", 60))
-    heartbeat_interval = int(bootstrap.get("heartbeat_interval", 0))  # 0 = disabled
-    mode = bootstrap.get("mode", "live")
-    start_date = bootstrap.get("start_date")
-    end_date = bootstrap.get("end_date")
-    enable_memory = bootstrap.get("enable_memory", False)
    poll_interval = int(bootstrap.get("poll_interval", 10))
-    is_backtest = mode == "backtest"
    logger.info(f"[Gateway Server] Starting run {run_id} on port {port}")
-    # Create runtime manager
-    runtime_manager = TradingRuntimeManager(
-        config_name=run_id,
-        run_dir=run_dir,
-        bootstrap=bootstrap,
-    )
-    runtime_manager.prepare_run()
-    set_global_runtime_manager(runtime_manager)
    try:
-        async with AsyncExitStack() as stack:
-            # Create services
-            market_service = MarketService(
-                tickers=tickers,
-                poll_interval=poll_interval,
-                backtest_mode=is_backtest,
-                api_key=os.getenv("FINNHUB_API_KEY") if not is_backtest else None,
-                backtest_start_date=start_date if is_backtest else None,
-                backtest_end_date=end_date if is_backtest else None,
-            )
-            storage_service = StorageService(
-                dashboard_dir=run_dir / "team_dashboard",
-                initial_cash=initial_cash,
-                config_name=run_id,
-            )
-            if not storage_service.files["summary"].exists():
-                storage_service.initialize_empty_dashboard()
-            else:
-                storage_service.update_leaderboard_model_info()
-            # Create agents
-            analysts, risk_manager, pm, long_term_memories = create_agents(
-                run_id=run_id,
-                run_dir=run_dir,
-                initial_cash=initial_cash,
-                margin_requirement=margin_requirement,
-                enable_long_term_memory=enable_memory,
-            )
-            # Register agents
-            for agent in analysts + [risk_manager, pm]:
-                agent_id = getattr(agent, "agent_id", None) or getattr(agent, "name", None)
-                if agent_id:
-                    runtime_manager.register_agent(agent_id)
-            # Load portfolio state
-            portfolio_state = storage_service.load_portfolio_state()
-            pm.load_portfolio_state(portfolio_state)
-            # Create settlement coordinator
-            settlement_coordinator = SettlementCoordinator(
-                storage=storage_service,
-                initial_capital=initial_cash,
-            )
-            # Create pipeline
-            pipeline = TradingPipeline(
-                analysts=analysts,
-                risk_manager=risk_manager,
-                portfolio_manager=pm,
-                settlement_coordinator=settlement_coordinator,
-                max_comm_cycles=max_comm_cycles,
-                runtime_manager=runtime_manager,
-            )
-            # Create scheduler
-            scheduler_callback = None
-            live_scheduler = None
-            if is_backtest:
-                backtest_scheduler = BacktestScheduler(
-                    start_date=start_date,
-                    end_date=end_date,
-                    trading_calendar="NYSE",
-                    delay_between_days=0.5,
-                )
-                async def scheduler_callback_fn(callback):
-                    await backtest_scheduler.start(callback)
-                scheduler_callback = scheduler_callback_fn
-            else:
-                live_scheduler = Scheduler(
-                    mode=schedule_mode,
-                    trigger_time=trigger_time,
-                    interval_minutes=interval_minutes,
-                    heartbeat_interval=heartbeat_interval if heartbeat_interval > 0 else None,
-                    config={"config_name": run_id},
-                )
-                async def scheduler_callback_fn(callback):
-                    await live_scheduler.start(callback)
-                scheduler_callback = scheduler_callback_fn
-            # Enter long-term memory contexts
-            for memory in long_term_memories:
-                await stack.enter_async_context(memory)
-            # Create Gateway
-            gateway = Gateway(
-                market_service=market_service,
-                storage_service=storage_service,
-                pipeline=pipeline,
-                scheduler_callback=scheduler_callback,
-                config={
-                    "mode": mode,
-                    "backtest_mode": is_backtest,
-                    "tickers": tickers,
-                    "config_name": run_id,
-                    "schedule_mode": schedule_mode,
-                    "interval_minutes": interval_minutes,
-                    "trigger_time": trigger_time,
-                    "heartbeat_interval": heartbeat_interval,
-                    "initial_cash": initial_cash,
-                    "margin_requirement": margin_requirement,
-                    "max_comm_cycles": max_comm_cycles,
-                    "enable_memory": enable_memory,
-                },
-                scheduler=live_scheduler,
-            )
-            # Start Gateway (blocks until shutdown)
-            logger.info(f"[Gateway Server] Gateway starting on port {port}")
-            await gateway.start(host="0.0.0.0", port=port)
+        bundle = build_gateway_runtime_bundle(
+            run_id=run_id,
+            run_dir=run_dir,
+            bootstrap=bootstrap,
+            poll_interval=poll_interval,
+        )
+        set_global_runtime_manager(bundle.runtime_manager)
+        async with AsyncExitStack() as stack:
+            for memory in bundle.long_term_memories:
+                await stack.enter_async_context(memory)
+            logger.info(f"[Gateway Server] Gateway starting on port {port}")
+            await bundle.gateway.start(host="0.0.0.0", port=port)
    except asyncio.CancelledError:
        logger.info("[Gateway Server] Cancelled")


@@ -9,7 +9,7 @@ import os
import time import time
import logging import logging
from enum import Enum from enum import Enum
from typing import Any, Callable, Optional, Tuple, TypeVar, Union from typing import Any, Callable, Optional, Tuple, TypeVar
from agentscope.formatter import ( from agentscope.formatter import (
AnthropicChatFormatter, AnthropicChatFormatter,
DashScopeChatFormatter, DashScopeChatFormatter,
@@ -444,6 +444,16 @@ def create_model(
""" """
provider = canonicalize_model_provider(provider) provider = canonicalize_model_provider(provider)
# If provider is default OPENAI but model name looks like deepseek,
# check if we should switch to DASHSCOPE.
if provider == "OPENAI" and "deepseek" in model_name.lower() and os.getenv("DASHSCOPE_API_KEY"):
provider = "DASHSCOPE"
# Intelligent routing: if it's a DeepSeek model and we have DashScope credentials,
# prefer using DashScopeChatModel over OpenAIChatModel.
if provider == "DEEPSEEK" and os.getenv("DASHSCOPE_API_KEY"):
provider = "DASHSCOPE"
model_class = PROVIDER_MODEL_MAP.get(provider)
if model_class is None:
raise ValueError(f"Unsupported provider: {provider}")

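For reference, the routing rule added above can be sketched as a standalone function. This is a minimal sketch under the assumption that left-column code is the pre-change version; `route_provider` is a hypothetical name, since the real code mutates `provider` inline inside `create_model`:

```python
import os

# Hypothetical standalone form of the inline routing above: DeepSeek-named
# models are re-routed to DashScope whenever DASHSCOPE_API_KEY is set.
def route_provider(provider: str, model_name: str) -> str:
    has_dashscope = bool(os.getenv("DASHSCOPE_API_KEY"))
    if provider == "OPENAI" and "deepseek" in model_name.lower() and has_dashscope:
        return "DASHSCOPE"
    if provider == "DEEPSEEK" and has_dashscope:
        return "DASHSCOPE"
    return provider
```

Keeping the rule pure like this makes the two routing branches easy to unit-test without touching `PROVIDER_MODEL_MAP`.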
View File

@@ -1,400 +0,0 @@
# -*- coding: utf-8 -*-
"""
Main Entry Point
Supports: backtest, live modes
"""
import argparse
import asyncio
import logging
import os
from contextlib import AsyncExitStack
from pathlib import Path
import loguru
from dotenv import load_dotenv
from backend.agents import AnalystAgent, PMAgent, RiskAgent
from backend.agents.skills_manager import SkillsManager
from backend.agents.toolkit_factory import create_agent_toolkit, load_agent_profiles
from backend.agents.prompt_loader import get_prompt_loader
from backend.agents.workspace_manager import WorkspaceManager
from backend.config.bootstrap_config import resolve_runtime_config
from backend.config.constants import ANALYST_TYPES
from backend.core.pipeline import TradingPipeline
from backend.core.scheduler import BacktestScheduler, Scheduler
from backend.llm.models import get_agent_formatter, get_agent_model
from backend.api.runtime import register_runtime_manager, unregister_runtime_manager
from backend.runtime.manager import (
TradingRuntimeManager,
set_global_runtime_manager,
clear_global_runtime_manager,
)
from backend.gateway_server import configure_gateway_logging
from backend.services.gateway import Gateway
from backend.services.market import MarketService
from backend.services.storage import StorageService
from backend.utils.settlement import SettlementCoordinator
load_dotenv()
logger = logging.getLogger(__name__)
loguru.logger.disable("flowllm")
loguru.logger.disable("reme_ai")
configure_gateway_logging(verbose=os.getenv("LOG_LEVEL", "").upper() == "DEBUG")
_prompt_loader = get_prompt_loader()
def _get_run_dir(config_name: str) -> Path:
"""Return the canonical run-scoped directory for a config."""
project_root = Path(__file__).resolve().parents[1]
return WorkspaceManager(project_root=project_root).get_run_dir(config_name)
def _resolve_runtime_config(args) -> dict:
"""Merge env defaults with run-scoped bootstrap config."""
project_root = Path(__file__).resolve().parents[1]
return resolve_runtime_config(
project_root=project_root,
config_name=args.config_name,
enable_memory=args.enable_memory,
schedule_mode=args.schedule_mode,
interval_minutes=args.interval_minutes,
trigger_time=args.trigger_time,
)
def create_long_term_memory(agent_name: str, config_name: str):
"""
Create ReMeTaskLongTermMemory for an agent
Requires DASHSCOPE_API_KEY env var
"""
from agentscope.memory import ReMeTaskLongTermMemory
from agentscope.model import DashScopeChatModel
from agentscope.embedding import DashScopeTextEmbedding
api_key = os.getenv("MEMORY_API_KEY")
if not api_key:
logger.warning("MEMORY_API_KEY not set, long-term memory disabled")
return None
memory_dir = str(_get_run_dir(config_name) / "memory")
return ReMeTaskLongTermMemory(
agent_name=agent_name,
user_name=agent_name,
model=DashScopeChatModel(
model_name=os.getenv("MEMORY_MODEL_NAME", "qwen3-max"),
api_key=api_key,
stream=False,
),
embedding_model=DashScopeTextEmbedding(
model_name=os.getenv(
"MEMORY_EMBEDDING_MODEL",
"text-embedding-v4",
),
api_key=api_key,
dimensions=1024,
),
**{
"vector_store.default.backend": "local",
"vector_store.default.params.store_dir": memory_dir,
},
)
def create_agents(
config_name: str,
initial_cash: float,
margin_requirement: float,
enable_long_term_memory: bool = False,
):
"""Create all agents for the system
Returns:
tuple: (analysts, risk_manager, portfolio_manager, long_term_memories)
long_term_memories is a list of memory
"""
analysts = []
long_term_memories = []
workspace_manager = WorkspaceManager()
workspace_manager.initialize_default_assets(
config_name=config_name,
agent_ids=list(ANALYST_TYPES.keys())
+ ["risk_manager", "portfolio_manager"],
analyst_personas=_prompt_loader.load_yaml_config("analyst", "personas"),
)
profiles = load_agent_profiles()
skills_manager = SkillsManager()
active_skill_map = skills_manager.prepare_active_skills(
config_name=config_name,
agent_defaults={
agent_id: profile.get("skills", [])
for agent_id, profile in profiles.items()
},
)
for analyst_type in ANALYST_TYPES:
model = get_agent_model(analyst_type)
formatter = get_agent_formatter(analyst_type)
toolkit = create_agent_toolkit(
analyst_type,
config_name,
active_skill_dirs=active_skill_map.get(analyst_type, []),
)
long_term_memory = None
if enable_long_term_memory:
long_term_memory = create_long_term_memory(
analyst_type,
config_name,
)
if long_term_memory:
long_term_memories.append(long_term_memory)
analyst = AnalystAgent(
analyst_type=analyst_type,
toolkit=toolkit,
model=model,
formatter=formatter,
agent_id=analyst_type,
config={"config_name": config_name},
long_term_memory=long_term_memory,
)
analysts.append(analyst)
risk_long_term_memory = None
if enable_long_term_memory:
risk_long_term_memory = create_long_term_memory(
"risk_manager",
config_name,
)
if risk_long_term_memory:
long_term_memories.append(risk_long_term_memory)
risk_manager = RiskAgent(
model=get_agent_model("risk_manager"),
formatter=get_agent_formatter("risk_manager"),
name="risk_manager",
config={"config_name": config_name},
long_term_memory=risk_long_term_memory,
toolkit=create_agent_toolkit(
"risk_manager",
config_name,
active_skill_dirs=active_skill_map.get("risk_manager", []),
),
)
pm_long_term_memory = None
if enable_long_term_memory:
pm_long_term_memory = create_long_term_memory(
"portfolio_manager",
config_name,
)
if pm_long_term_memory:
long_term_memories.append(pm_long_term_memory)
portfolio_manager = PMAgent(
name="portfolio_manager",
model=get_agent_model("portfolio_manager"),
formatter=get_agent_formatter("portfolio_manager"),
initial_cash=initial_cash,
margin_requirement=margin_requirement,
config={"config_name": config_name},
long_term_memory=pm_long_term_memory,
toolkit_factory=create_agent_toolkit,
toolkit_factory_kwargs={
"active_skill_dirs": active_skill_map.get(
"portfolio_manager",
[],
),
},
)
return analysts, risk_manager, portfolio_manager, long_term_memories
async def run_with_gateway(args):
"""Run with WebSocket gateway"""
is_backtest = args.mode == "backtest"
runtime_config = _resolve_runtime_config(args)
config_name = args.config_name
tickers = runtime_config["tickers"]
initial_cash = runtime_config["initial_cash"]
margin_requirement = runtime_config["margin_requirement"]
runtime_manager = TradingRuntimeManager(
config_name=config_name,
run_dir=_get_run_dir(config_name),
bootstrap=runtime_config,
)
runtime_manager.prepare_run()
set_global_runtime_manager(runtime_manager)
# Create market service
market_service = MarketService(
tickers=tickers,
poll_interval=args.poll_interval,
backtest_mode=is_backtest,
api_key=os.getenv("FINNHUB_API_KEY") if not is_backtest else None,
backtest_start_date=args.start_date if is_backtest else None,
backtest_end_date=args.end_date if is_backtest else None,
)
# Create storage service
storage_service = StorageService(
dashboard_dir=_get_run_dir(config_name) / "team_dashboard",
initial_cash=initial_cash,
config_name=config_name,
)
if not storage_service.files["summary"].exists():
storage_service.initialize_empty_dashboard()
else:
storage_service.update_leaderboard_model_info()
# Create agents and pipeline
analysts, risk_manager, pm, long_term_memories = create_agents(
config_name=config_name,
initial_cash=initial_cash,
margin_requirement=margin_requirement,
enable_long_term_memory=runtime_config["enable_memory"],
)
for agent in analysts + [risk_manager, pm]:
agent_id = getattr(agent, "agent_id", None) or getattr(agent, "name", None)
if agent_id:
runtime_manager.register_agent(agent_id)
portfolio_state = storage_service.load_portfolio_state()
pm.load_portfolio_state(portfolio_state)
settlement_coordinator = SettlementCoordinator(
storage=storage_service,
initial_capital=initial_cash,
)
pipeline = TradingPipeline(
analysts=analysts,
risk_manager=risk_manager,
portfolio_manager=pm,
settlement_coordinator=settlement_coordinator,
max_comm_cycles=runtime_config["max_comm_cycles"],
runtime_manager=runtime_manager,
)
# Create scheduler callback
scheduler_callback = None
trading_dates = []
live_scheduler = None
if is_backtest:
backtest_scheduler = BacktestScheduler(
start_date=args.start_date,
end_date=args.end_date,
trading_calendar="NYSE",
delay_between_days=0.5,
)
trading_dates = backtest_scheduler.get_trading_dates()
async def scheduler_callback_fn(callback):
await backtest_scheduler.start(callback)
scheduler_callback = scheduler_callback_fn
else:
# Live mode: use daily or intraday scheduler with NYSE timezone
live_scheduler = Scheduler(
mode=runtime_config["schedule_mode"],
trigger_time=runtime_config["trigger_time"],
interval_minutes=runtime_config["interval_minutes"],
config={"config_name": config_name},
)
async def scheduler_callback_fn(callback):
await live_scheduler.start(callback)
scheduler_callback = scheduler_callback_fn
# Create gateway
gateway = Gateway(
market_service=market_service,
storage_service=storage_service,
pipeline=pipeline,
scheduler_callback=scheduler_callback,
config={
"mode": args.mode,
"backtest_mode": is_backtest,
"tickers": tickers,
"config_name": config_name,
"schedule_mode": runtime_config["schedule_mode"],
"interval_minutes": runtime_config["interval_minutes"],
"trigger_time": runtime_config["trigger_time"],
"initial_cash": initial_cash,
"margin_requirement": margin_requirement,
"max_comm_cycles": runtime_config["max_comm_cycles"],
"enable_memory": runtime_config["enable_memory"],
},
scheduler=live_scheduler if not is_backtest else None,
)
if is_backtest:
gateway.set_backtest_dates(trading_dates)
# Start long-term memory contexts and run gateway
async with AsyncExitStack() as stack:
try:
for memory in long_term_memories:
await stack.enter_async_context(memory)
await gateway.start(host=args.host, port=args.port)
finally:
unregister_runtime_manager()
clear_global_runtime_manager()
def main():
"""Main entry point"""
parser = argparse.ArgumentParser(description="Trading System")
parser.add_argument("--mode", choices=["live", "backtest"], default="live")
parser.add_argument("--config-name", default="live")
parser.add_argument("--host", default="0.0.0.0")
parser.add_argument("--port", type=int, default=8765)
parser.add_argument(
"--schedule-mode",
choices=["daily", "intraday"],
default="daily",
)
parser.add_argument("--trigger-time", default="09:30") # NYSE market open
parser.add_argument("--interval-minutes", type=int, default=60)
parser.add_argument("--poll-interval", type=int, default=10)
parser.add_argument("--start-date")
parser.add_argument("--end-date")
parser.add_argument(
"--enable-memory",
action="store_true",
help="Enable ReMeTaskLongTermMemory for agents",
)
args = parser.parse_args()
# Load config from env for logging
runtime_config = _resolve_runtime_config(args)
tickers = runtime_config["tickers"]
initial_cash = runtime_config["initial_cash"]
logger.info("=" * 60)
logger.info(f"Mode: {args.mode}, Config: {args.config_name}")
logger.info(f"Tickers: {tickers}")
logger.info(f"Initial Cash: ${initial_cash:,.2f}")
logger.info(
"Long-term Memory: %s",
"enabled" if runtime_config["enable_memory"] else "disabled",
)
if args.mode == "backtest":
if not args.start_date or not args.end_date:
parser.error(
"--start-date and --end-date required for backtest mode",
)
logger.info(f"Backtest: {args.start_date} to {args.end_date}")
logger.info("=" * 60)
asyncio.run(run_with_gateway(args))
if __name__ == "__main__":
main()

View File

@@ -1,25 +1,27 @@
from __future__ import annotations
from dataclasses import dataclass, field
-from datetime import datetime, UTC
+from datetime import datetime, timezone
from typing import Any, Dict
@dataclass
class AgentRuntimeState:
agent_id: str
+display_name: str | None = None
status: str = "idle"
last_session: str | None = None
-last_updated: datetime = field(default_factory=lambda: datetime.now(UTC))
+last_updated: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
def update(self, status: str, session_key: str | None = None) -> None:
self.status = status
self.last_session = session_key
-self.last_updated = datetime.now(UTC)
+self.last_updated = datetime.now(timezone.utc)
def to_dict(self) -> Dict[str, Any]:
return {
"agent_id": self.agent_id,
+"display_name": self.display_name,
"status": self.status,
"last_session": self.last_session,
"last_updated": self.last_updated.isoformat(),

View File

@@ -2,7 +2,7 @@ from __future__ import annotations
import asyncio
import json
-from datetime import datetime, UTC
+from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional
@@ -93,7 +93,7 @@ class TradingRuntimeManager:
def log_event(self, event: str, details: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
entry = {
-"timestamp": datetime.now(UTC).isoformat(),
+"timestamp": datetime.now(timezone.utc).isoformat(),
"event": event,
"details": details or {},
"session": self.current_session_key,
@@ -102,15 +102,25 @@ class TradingRuntimeManager:
self._persist_snapshot()
return entry
-def register_agent(self, agent_id: str) -> AgentRuntimeState:
-state = AgentRuntimeState(agent_id=agent_id)
+def register_agent(
+self,
+agent_id: str,
+display_name: Optional[str] = None,
+) -> AgentRuntimeState:
+state = AgentRuntimeState(agent_id=agent_id, display_name=display_name)
self.registry.register(agent_id, state)
self._persist_snapshot()
return state
def unregister_agent(self, agent_id: str) -> Optional[AgentRuntimeState]:
state = self.registry.unregister(agent_id)
if state is not None:
self._persist_snapshot()
return state
def register_pending_approval(self, approval_id: str, payload: Dict[str, Any]) -> None:
payload.setdefault("status", "pending")
-payload.setdefault("created_at", datetime.now(UTC).isoformat())
+payload.setdefault("created_at", datetime.now(timezone.utc).isoformat())
self.pending_approvals[approval_id] = payload
self._persist_snapshot()
@@ -139,7 +149,7 @@ class TradingRuntimeManager:
if not entry:
return
entry["status"] = status
-entry["resolved_at"] = datetime.now(UTC).isoformat()
+entry["resolved_at"] = datetime.now(timezone.utc).isoformat()
entry["resolved_by"] = resolved_by
self._persist_snapshot()

View File

@@ -13,6 +13,9 @@ class RuntimeRegistry:
def get(self, agent_id: str) -> Optional["AgentRuntimeState"]:
return self._states.get(agent_id)
def unregister(self, agent_id: str) -> Optional["AgentRuntimeState"]:
return self._states.pop(agent_id, None)
def list_agents(self) -> list[str]:
return list(self._states.keys())

View File

@@ -13,9 +13,7 @@ from typing import Any, Callable, Dict, List, Optional, Set
import websockets
from websockets.asyncio.server import ServerConnection
-from backend.data.provider_utils import normalize_symbol
from backend.domains import news as news_domain
-from backend.llm.models import get_agent_model_info
from backend.core.pipeline import TradingPipeline
from backend.core.state_sync import StateSync
from backend.services.market import MarketService
@@ -146,12 +144,13 @@ class Gateway:
self.state_sync.update_state("status", "websocket_ready")
# Create server but don't block yet - we'll serve inside the context manager
-server = await websockets.serve(
+await websockets.serve(
self.handle_client,
host,
port,
-ping_interval=30,
-ping_timeout=60,
+ping_interval=20,
+ping_timeout=120,
+max_size=10 * 1024 * 1024,  # 10MB
)
logger.info(f"WebSocket server ready: ws://{host}:{port}")
@@ -835,12 +834,18 @@ class Gateway:
if not self.connected_clients:
return
-message_json = json.dumps(message, ensure_ascii=False, default=str)
+# Offload potentially heavy JSON serialization to thread
+message_json = await asyncio.to_thread(
+json.dumps, message, ensure_ascii=False, default=str
+)
async with self.lock:
+# Filter only active clients to minimize unnecessary send attempts
+# In websockets v13+, we must check state.name == 'OPEN'
+active_clients = [c for c in self.connected_clients if c.state.name == 'OPEN']
tasks = [
self._send_to_client(client, message_json)
-for client in self.connected_clients.copy()
+for client in active_clients
]
if tasks:
@@ -851,9 +856,14 @@ class Gateway:
client: ServerConnection,
message: str,
):
+if client.state.name != 'OPEN':
+async with self.lock:
+self.connected_clients.discard(client)
+return
try:
await client.send(message)
-except websockets.ConnectionClosed:
+except (websockets.ConnectionClosed, Exception):
async with self.lock:
self.connected_clients.discard(client)

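The broadcast change above follows a simple pattern: serialize the payload once in a worker thread, then fan the string out to every open connection. A minimal sketch, assuming a client object with an async `send` method (`FakeClient` is a stand-in for `ServerConnection`, not part of the real gateway):

```python
import asyncio
import json

async def broadcast(clients, message: dict) -> str:
    # Serialize once, off the event loop, then fan out to all clients.
    message_json = await asyncio.to_thread(
        json.dumps, message, ensure_ascii=False, default=str
    )
    await asyncio.gather(*(c.send(message_json) for c in clients))
    return message_json

class FakeClient:
    """Stand-in for a websocket connection, for illustration only."""
    def __init__(self):
        self.sent = []
    async def send(self, msg):
        self.sent.append(msg)
```

The real handler additionally filters clients by `state.name == 'OPEN'` and discards closed connections under a lock.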
View File

@@ -22,10 +22,16 @@ from backend.config.bootstrap_config import (
resolve_runtime_config,
update_bootstrap_values_for_run,
)
-from backend.data.market_ingest import ingest_symbols
from backend.llm.models import get_agent_model_info
def _normalize_schedule_mode(value: Any) -> str:
mode = str(value or "daily").strip().lower()
if mode == "intraday":
return "interval"
return mode or "daily"
async def handle_reload_runtime_assets(gateway: Any) -> None:
config_name = gateway.config.get("config_name", "default")
runtime_config = resolve_runtime_config(
@@ -45,10 +51,10 @@ async def handle_reload_runtime_assets(gateway: Any) -> None:
async def handle_update_runtime_config(gateway: Any, websocket: Any, data: dict[str, Any]) -> None:
updates: dict[str, Any] = {}
-schedule_mode = str(data.get("schedule_mode", "")).strip().lower()
+schedule_mode = _normalize_schedule_mode(data.get("schedule_mode", ""))
if schedule_mode:
-if schedule_mode not in {"daily", "intraday"}:
-await websocket.send(json.dumps({"type": "error", "message": "schedule_mode must be 'daily' or 'intraday'."}, ensure_ascii=False))
+if schedule_mode not in {"daily", "interval"}:
+await websocket.send(json.dumps({"type": "error", "message": "schedule_mode must be 'daily' or 'interval'."}, ensure_ascii=False))
return
updates["schedule_mode"] = schedule_mode

View File

@@ -208,7 +208,7 @@ async def run_live_cycle(gateway: Any, date: str, tickers: list[str]) -> None:
market_status = gateway.market_service.get_market_status()
current_prices = gateway.market_service.get_all_prices()
-if schedule_mode == "intraday":
+if schedule_mode in {"interval", "intraday"}:
execute_decisions = market_status.get("status") == "open"
if execute_decisions:
await gateway.state_sync.on_system_message("Scheduled task triggered: currently within trading hours, this round will execute trading decisions")
@@ -253,7 +253,8 @@ async def finalize_cycle(gateway: Any, date: str) -> None:
async def get_market_caps(gateway: Any, tickers: list[str], date: str) -> dict[str, float]:
market_caps: dict[str, float] = {}
-for ticker in tickers:
+async def _get_one(ticker: str):
try:
market_cap = None
response = await gateway._call_trading_service(
@@ -263,12 +264,21 @@ async def get_market_caps(gateway: Any, tickers: list[str], date: str) -> dict[s
if response is not None:
market_cap = response.get("market_cap")
if market_cap is None:
-payload = trading_domain.get_market_cap_payload(ticker=ticker, end_date=date)
+payload = await asyncio.to_thread(
+trading_domain.get_market_cap_payload,
+ticker=ticker,
+end_date=date,
+)
market_cap = payload.get("market_cap")
-market_caps[ticker] = market_cap if market_cap else 1e9
+return ticker, (market_cap if market_cap else 1e9)
except Exception as exc:
logger.warning("Failed to get market cap for %s, using default 1e9: %s", ticker, exc)
-market_caps[ticker] = 1e9
+return ticker, 1e9
+tasks = [_get_one(ticker) for ticker in tickers]
+results = await asyncio.gather(*tasks)
+for ticker, mc in results:
+market_caps[ticker] = mc
return market_caps

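The rewrite above turns a sequential per-ticker loop into concurrent lookups with a per-ticker fallback. A minimal sketch of the same pattern, with `fetch_one` standing in for the gateway/domain calls (a hypothetical parameter, not the real API):

```python
import asyncio

async def fetch_market_caps(tickers, fetch_one):
    # Each ticker is fetched concurrently; any failure or falsy result
    # falls back to the 1e9 default used in the handler above.
    async def _get_one(ticker):
        try:
            cap = await fetch_one(ticker)
            return ticker, cap if cap else 1e9
        except Exception:
            return ticker, 1e9
    results = await asyncio.gather(*(_get_one(t) for t in tickers))
    return dict(results)
```

Because each `_get_one` swallows its own exception, `asyncio.gather` never sees a failure, so one bad ticker cannot abort the whole batch.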
View File

@@ -1,6 +1,12 @@
# -*- coding: utf-8 -*-
-"""OpenClaw WebSocket handlers — gateway calls OpenClaw Gateway via WebSocket."""
+"""OpenClaw WebSocket handlers — gateway calls OpenClaw Gateway via WebSocket.
COMPATIBILITY_SURFACE: stable
OWNER: runtime-team
This is the WebSocket gateway integration for OpenClaw (port 18789).
Frontend connects via Gateway WebSocket (port 8765) → OpenClaw Gateway (port 18789).
"""
from __future__ import annotations
import json
@@ -8,7 +14,7 @@ import logging
from typing import TYPE_CHECKING
if TYPE_CHECKING:
-from backend.services.gateway import Gateway
+pass
logger = logging.getLogger(__name__)
@@ -72,7 +78,6 @@ def _ensure_session_bridge(gateway) -> None:
def _get_ws_client(gateway) -> "OpenClawWebSocketClient":
"""Get the OpenClaw WebSocket client from gateway."""
-from shared.client.openclaw_websocket_client import OpenClawWebSocketClient
client = gateway._openclaw_ws
if client is None:
raise RuntimeError("OpenClaw Gateway not connected")

View File

@@ -8,6 +8,13 @@ from typing import Any
from backend.data.provider_utils import normalize_symbol
def _normalize_schedule_mode(value: Any) -> str:
mode = str(value or "daily").strip().lower()
if mode == "intraday":
return "interval"
return mode or "daily"
def normalize_watchlist(raw_tickers: Any) -> list[str]:
"""Parse watchlist payloads from websocket messages."""
if raw_tickers is None:
@@ -51,9 +58,11 @@ def apply_runtime_config(gateway: Any, runtime_config: dict[str, Any]) -> dict[s
gateway.pipeline.max_comm_cycles = int(runtime_config["max_comm_cycles"])
gateway.config["max_comm_cycles"] = gateway.pipeline.max_comm_cycles
-gateway.config["schedule_mode"] = runtime_config.get(
+gateway.config["schedule_mode"] = _normalize_schedule_mode(
+runtime_config.get(
"schedule_mode",
gateway.config.get("schedule_mode", "daily"),
+),
)
gateway.config["interval_minutes"] = int(
runtime_config.get(

View File

@@ -15,7 +15,6 @@ from backend.domains import trading as trading_domain
from backend.enrich.news_enricher import enrich_news_for_symbol
from backend.enrich.llm_enricher import llm_enrichment_enabled
from backend.tools.data_tools import prices_to_df
-from shared.client import NewsServiceClient, TradingServiceClient
logger = logging.getLogger(__name__)
@@ -530,7 +529,8 @@ async def handle_get_stock_technical_indicators(gateway: Any, websocket: Any, da
try:
end_date = datetime.now()
-start_date = end_date - timedelta(days=250)
+# Reduced from 250 to 150 days to lower CPU/memory pressure while still supporting MA200 (approx 140 trading days)
+start_date = end_date - timedelta(days=150)
prices = None
response = await gateway._call_trading_service(
@@ -545,7 +545,9 @@ async def handle_get_stock_technical_indicators(gateway: Any, websocket: Any, da
prices = response.prices
if prices is None:
-payload = trading_domain.get_prices_payload(
+# Offload domain logic to thread
+payload = await asyncio.to_thread(
+trading_domain.get_prices_payload,
ticker=ticker,
start_date=start_date.strftime("%Y-%m-%d"),
end_date=end_date.strftime("%Y-%m-%d"),
@@ -561,21 +563,21 @@ async def handle_get_stock_technical_indicators(gateway: Any, websocket: Any, da
}, ensure_ascii=False))
return
+def _calc():
df = prices_to_df(prices)
signal = gateway._technical_analyzer.analyze(ticker, df)
+import pandas as pd
df_sorted = df.sort_values("time").reset_index(drop=True)
df_sorted["returns"] = df_sorted["close"].pct_change()
-vol_10 = float(df_sorted["returns"].tail(10).std() * (252**0.5) * 100) if len(df_sorted) >= 10 else None
-vol_20 = float(df_sorted["returns"].tail(20).std() * (252**0.5) * 100) if len(df_sorted) >= 20 else None
-vol_60 = float(df_sorted["returns"].tail(60).std() * (252**0.5) * 100) if len(df_sorted) >= 60 else None
+v10 = float(df_sorted["returns"].tail(10).std() * (252**0.5) * 100) if len(df_sorted) >= 10 else None
+v20 = float(df_sorted["returns"].tail(20).std() * (252**0.5) * 100) if len(df_sorted) >= 20 else None
+v60 = float(df_sorted["returns"].tail(60).std() * (252**0.5) * 100) if len(df_sorted) >= 60 else None
-ma_distance = {}
-for ma_key in ["ma5", "ma10", "ma20", "ma50", "ma200"]:
-ma_value = getattr(signal, ma_key, None)
-ma_distance[ma_key] = ((signal.current_price - ma_value) / ma_value) * 100 if ma_value and ma_value > 0 else None
-indicators = {
+ma_dist = {}
+for ma_key in ["ma5", "ma10", "ma20", "ma50", "ma200"]:
+ma_val = getattr(signal, ma_key, None)
+ma_dist[ma_key] = ((signal.current_price - ma_val) / ma_val) * 100 if ma_val and ma_val > 0 else None
+return {
"ticker": ticker,
"current_price": signal.current_price,
"ma": {
@@ -584,7 +586,7 @@ async def handle_get_stock_technical_indicators(gateway: Any, websocket: Any, da
"ma20": signal.ma20,
"ma50": signal.ma50,
"ma200": signal.ma200,
-"distance": ma_distance,
+"distance": ma_dist,
},
"rsi": {
"rsi14": signal.rsi14,
@@ -601,9 +603,9 @@ async def handle_get_stock_technical_indicators(gateway: Any, websocket: Any, da
"lower": signal.bollinger_lower,
},
"volatility": {
-"vol_10d": vol_10,
-"vol_20d": vol_20,
-"vol_60d": vol_60,
+"vol_10d": v10,
+"vol_20d": v20,
+"vol_60d": v60,
"annualized": signal.annualized_volatility_pct,
"risk_level": signal.risk_level,
},
@@ -611,11 +613,25 @@ async def handle_get_stock_technical_indicators(gateway: Any, websocket: Any, da
"mean_reversion": signal.mean_reversion_signal,
}
-await websocket.send(json.dumps({
+# Use a semaphore to prevent too many concurrent CPU-intensive calculations
+# which can block the event loop heartbeats.
+if not hasattr(gateway, "_calc_sem"):
+gateway._calc_sem = asyncio.Semaphore(3)
+async with gateway._calc_sem:
+indicators = await asyncio.to_thread(_calc)
+# Also offload JSON serialization to thread to avoid blocking main loop
+msg = await asyncio.to_thread(json.dumps, {
"type": "stock_technical_indicators_loaded",
"ticker": ticker,
"indicators": indicators,
-}, ensure_ascii=False, default=str))
+}, ensure_ascii=False, default=str)
if websocket.state.name == 'OPEN':
await websocket.send(msg)
else:
logger.warning("Websocket closed for %s, skipping indicator send", ticker)
except Exception as exc:
logger.exception("Error getting technical indicators for %s", ticker)
await websocket.send(json.dumps({

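The semaphore-plus-thread pattern above keeps the event loop responsive while bounding concurrent CPU-heavy work. A minimal sketch, assuming a limit of 3 as in the handler (`_heavy_calc` is a stand-in for the pandas indicator computation):

```python
import asyncio

def _heavy_calc(n: int) -> int:
    # Stand-in for the CPU-bound pandas indicator computation.
    return sum(range(n))

async def run_bounded(values, limit: int = 3):
    # At most `limit` computations run in worker threads at once,
    # so the event loop keeps servicing heartbeats and other clients.
    sem = asyncio.Semaphore(limit)
    async def one(n):
        async with sem:
            return await asyncio.to_thread(_heavy_calc, n)
    return await asyncio.gather(*(one(n) for n in values))
```

`asyncio.to_thread` moves the blocking call off the loop; the semaphore prevents a burst of requests from saturating the thread pool.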
View File

@@ -16,12 +16,9 @@ from typing import Any
from shared.models.openclaw import (
AgentSummary,
AgentsList,
-ApprovalRequest,
ApprovalsList,
-CronJob,
CronList,
DaemonStatus,
-HookStatusEntry,
HookStatusReport,
ModelAliasesList,
ModelFallbacksList,
@@ -29,20 +26,15 @@ from shared.models.openclaw import (
ModelsList,
OpenClawStatus,
PairingListResponse,
-PluginDiagnostic,
-PluginRecord,
PluginsList,
QrCodeResponse,
SecretsAuditReport,
SecurityAuditResponse,
-SecurityAuditReport,
SessionEntry,
SessionHistory,
SessionsList,
-SkillStatusEntry,
SkillStatusReport,
SkillUpdateResult,
-UpdateCheckResult,
UpdateStatusResponse,
normalize_agents,
normalize_approvals,
@@ -282,7 +274,6 @@ class OpenClawCliService:
Reads the workspace directory and returns metadata + content for each .md file. Reads the workspace directory and returns metadata + content for each .md file.
""" """
import json
from pathlib import Path from pathlib import Path
wp = Path(workspace_path).expanduser().resolve() wp = Path(workspace_path).expanduser().resolve()
@@ -500,7 +491,7 @@ class OpenClawCliService:
"working", "in_progress", "processing", "thinking", "executing", "streaming", "working", "in_progress", "processing", "thinking", "executing", "streaming",
} }
RECENCY_WINDOW_MS = 45 * 60 * 1000 # 45 minutes 45 * 60 * 1000 # 45 minutes
result: dict[str, Any] = {"status": "connected", "agents": {}} result: dict[str, Any] = {"status": "connected", "agents": {}}
@@ -518,7 +509,6 @@ class OpenClawCliService:
continue continue
sessions = sessions_data if isinstance(sessions_data, list) else [] sessions = sessions_data if isinstance(sessions_data, list) else []
now_ms = 0 # placeholder; we'll skip recency check if no ts field
active_count = 0 active_count = 0
for session in sessions: for session in sessions:

View File

@@ -7,7 +7,7 @@ import json
 import sqlite3
 from datetime import datetime
 from pathlib import Path
-from typing import Any, Dict, Iterable
+from typing import Any, Iterable
 from shared.schema import CompanyNews

View File

@@ -6,6 +6,8 @@ Handles reading/writing dashboard JSON files and portfolio state
 # pylint: disable=R0904
 import json
 import logging
+import os
+import time
 from datetime import datetime
 from pathlib import Path
 from typing import Any, Dict, List, Optional
@@ -21,25 +23,31 @@ class StorageService:
 Storage service for data persistence
 Responsibilities:
-1. Export dashboard JSON files
+1. Export dashboard JSON files (compatibility layer)
 (summary, holdings, stats, trades, leaderboard)
 2. Load/save internal state (_internal_state.json)
 3. Load/save server state (server_state.json) with feed history
 4. Manage portfolio state persistence
 5. Support loading from saved state to resume execution
-Notes:
-- team_dashboard/*.json is treated as an export/compatibility layer
-rather than the authoritative runtime source of truth.
-- authoritative runtime reads should prefer in-memory state, server_state,
-runtime.db, and market_research.db.
+Architecture Notes:
+- runs/<run_id>/ is the authoritative runtime state root
+- team_dashboard/*.json is a NON-AUTHORITATIVE export/compatibility layer
+  for external consumers (frontend, reports, etc.)
+- Authoritative runtime reads should prefer:
+  1. In-memory state (runtime manager)
+  2. state/server_state.json
+  3. state/runtime.db
+  4. market_research.db
+- Compatibility exports can be disabled via ENABLE_DASHBOARD_COMPAT_EXPORTS=false
 """
 def __init__(
 self,
 dashboard_dir: Path,
 initial_cash: float = 100000.0,
-config_name: str = "live",
+config_name: str = "runtime",
+enable_compat_exports: Optional[bool] = None,
 ):
 """
 Initialize storage service
@@ -47,12 +55,18 @@ class StorageService:
 Args:
 dashboard_dir: Directory for dashboard files
 initial_cash: Initial cash amount
-config_name: Configuration name for state directory
+config_name: Logical runtime config/run label for state directory context
+enable_compat_exports: Whether to keep writing team_dashboard/*.json
 """
 self.dashboard_dir = Path(dashboard_dir)
 self.dashboard_dir.mkdir(parents=True, exist_ok=True)
 self.initial_cash = initial_cash
 self.config_name = config_name
+self.enable_compat_exports = (
+    self._resolve_compat_exports_default()
+    if enable_compat_exports is None
+    else bool(enable_compat_exports)
+)
 # Dashboard export file paths
 self.files = {
@@ -88,6 +102,12 @@ class StorageService:
 logger.info(f"Storage service initialized: {self.dashboard_dir}")
+@staticmethod
+def _resolve_compat_exports_default() -> bool:
+    """Default compatibility export policy, overridable via env."""
+    raw = str(os.getenv("ENABLE_DASHBOARD_COMPAT_EXPORTS", "true")).strip().lower()
+    return raw not in {"0", "false", "no", "off"}
 def load_export_file(self, file_type: str) -> Optional[Any]:
 """Load dashboard export JSON file."""
 file_path = self.files.get(file_type)
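The `_resolve_compat_exports_default` helper above treats several spellings as "off" and anything else as "on". A sketch mirroring that parsing in isolation (the helper name `flag_enabled` is invented for illustration):

```python
import os

FALSY = {"0", "false", "no", "off"}

def flag_enabled(name: str, default: str = "true") -> bool:
    """Interpret an environment variable as a boolean, defaulting to enabled."""
    raw = str(os.getenv(name, default)).strip().lower()
    return raw not in FALSY

os.environ["ENABLE_DASHBOARD_COMPAT_EXPORTS"] = "OFF"
print(flag_enabled("ENABLE_DASHBOARD_COMPAT_EXPORTS"))  # any of 0/false/no/off disables

del os.environ["ENABLE_DASHBOARD_COMPAT_EXPORTS"]
print(flag_enabled("ENABLE_DASHBOARD_COMPAT_EXPORTS"))  # unset falls back to enabled
```

Note that typos such as `fales` silently count as enabled; a deny-list parser fails open by design.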
@@ -106,7 +126,9 @@ class StorageService:
 return self.load_export_file(file_type)
 def save_export_file(self, file_type: str, data: Any):
-"""Save dashboard export JSON file."""
+"""Save one compatibility dashboard export JSON file."""
+if not self.enable_compat_exports:
+    return
 file_path = self.files.get(file_type)
 if not file_path:
 logger.error(f"Unknown file type: {file_type}")
@@ -127,17 +149,79 @@ class StorageService:
 """Backward-compatible alias for export-layer JSON writes."""
 self.save_export_file(file_type, data)
+def save_dashboard_exports(self, exports: Dict[str, Any]) -> None:
+    """Persist compatibility dashboard exports from a normalized snapshot."""
+    if not self.enable_compat_exports:
+        return
+    for file_type in ("summary", "holdings", "stats", "trades", "leaderboard"):
+        if file_type in exports:
+            self.save_export_file(file_type, exports[file_type])
+def read_persisted_server_state(self) -> Dict[str, Any]:
+    """Read server_state.json without logging or DB side effects."""
+    if not self.server_state_file.exists():
+        return {}
+    try:
+        with open(self.server_state_file, "r", encoding="utf-8") as f:
+            payload = json.load(f)
+        return payload if isinstance(payload, dict) else {}
+    except Exception as exc:
+        logger.warning("Failed to read persisted server state: %s", exc)
+        return {}
+def load_runtime_leaderboard(self, state: Optional[Dict[str, Any]] = None) -> List[Dict[str, Any]]:
+    """Prefer runtime state for leaderboard reads, fall back to export JSON."""
+    runtime_state = state or self.read_persisted_server_state()
+    leaderboard = runtime_state.get("leaderboard")
+    if isinstance(leaderboard, list) and leaderboard:
+        return leaderboard
+    return self.load_export_file("leaderboard") or []
+def persist_runtime_leaderboard(
+    self,
+    leaderboard: List[Dict[str, Any]],
+    state: Optional[Dict[str, Any]] = None,
+) -> None:
+    """Persist leaderboard to runtime state first, keeping JSON export for compatibility."""
+    self.save_export_file("leaderboard", leaderboard)
+    runtime_state = state or self.read_persisted_server_state()
+    if not runtime_state:
+        runtime_state = self.load_server_state()
+    runtime_state["leaderboard"] = leaderboard
+    self.save_server_state(runtime_state)
 def build_dashboard_snapshot_from_state(
 self,
 state: Optional[Dict[str, Any]] = None,
 ) -> Dict[str, Any]:
 """Build dashboard view data from runtime state instead of JSON exports."""
 runtime_state = state or self.load_server_state()
-portfolio = dict(runtime_state.get("portfolio") or {})
-holdings = list(runtime_state.get("holdings") or [])
-stats = runtime_state.get("stats") or self._get_default_stats()
-trades = list(runtime_state.get("trades") or [])
-leaderboard = list(runtime_state.get("leaderboard") or [])
+persisted_state = self.read_persisted_server_state() if state is not None else {}
+portfolio = dict(
+    runtime_state.get("portfolio")
+    or persisted_state.get("portfolio")
+    or {},
+)
+holdings = list(
+    runtime_state.get("holdings")
+    or persisted_state.get("holdings")
+    or [],
+)
+stats = (
+    runtime_state.get("stats")
+    or persisted_state.get("stats")
+    or self._get_default_stats()
+)
+trades = list(
+    runtime_state.get("trades")
+    or persisted_state.get("trades")
+    or [],
+)
+leaderboard = list(
+    runtime_state.get("leaderboard")
+    or persisted_state.get("leaderboard")
+    or [],
+)
 summary = {
 "totalAssetValue": portfolio.get("total_value", self.initial_cash),
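Each field in the snapshot builder above falls through layers: the passed-in state, then the persisted state, then a default. The `or`-chain works only because every "missing" value here is falsy (`None`, `{}`, `[]`). A reduced sketch of that layering (the `pick` helper is invented for illustration):

```python
def pick(runtime: dict, persisted: dict, key: str, default):
    """First non-falsy value wins: runtime state, then persisted state, then default."""
    return runtime.get(key) or persisted.get(key) or default

runtime_state = {"portfolio": {}, "trades": [{"ticker": "AAPL"}]}
persisted_state = {"portfolio": {"cash": 100000.0}, "trades": [{"ticker": "MSFT"}]}

# An empty dict in runtime state is falsy, so the persisted portfolio is used...
print(pick(runtime_state, persisted_state, "portfolio", {}))
# ...while a non-empty runtime trades list shadows the persisted one.
print(pick(runtime_state, persisted_state, "trades", []))
```

One consequence worth noting: a legitimately empty runtime list is indistinguishable from a missing one under this scheme.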
@@ -331,11 +415,10 @@ class StorageService:
 self.save_internal_state(internal_state)
 def initialize_empty_dashboard(self):
-"""Initialize empty dashboard files with default values"""
-# Summary
-self.save_export_file(
-"summary",
-{
+"""Initialize compatibility dashboard exports with default values."""
+self.save_dashboard_exports(
+{
+"summary": {
 "totalAssetValue": self.initial_cash,
 "totalReturn": 0.0,
 "cashPosition": self.initial_cash,
@@ -348,15 +431,8 @@ class StorageService:
 "baseline_vw": [],
 "momentum": [],
 },
-)
-# Holdings
-self.save_export_file("holdings", [])
-# Stats
-self.save_export_file(
-"stats",
-{
+"holdings": [],
+"stats": {
 "totalAssetValue": self.initial_cash,
 "totalReturn": 0.0,
 "cashPosition": self.initial_cash,
@@ -368,11 +444,9 @@ class StorageService:
 "bear": {"n": 0, "win": 0},
 },
 },
+"trades": [],
+},
 )
-# Trades
-self.save_export_file("trades", [])
 # Leaderboard with model info
 self.generate_leaderboard()
@@ -411,7 +485,7 @@ class StorageService:
 ranking_entries.append(entry)
 leaderboard = team_entries + ranking_entries
-self.save_export_file("leaderboard", leaderboard)
+self.persist_runtime_leaderboard(leaderboard)
 logger.info("Leaderboard generated with model info")
 def update_leaderboard_model_info(self):
@@ -421,7 +495,7 @@ class StorageService:
 from ..config.constants import AGENT_CONFIG
 from ..llm.models import get_agent_model_info
-existing = self.load_file("leaderboard") or []
+existing = self.load_runtime_leaderboard()
 if not existing:
 self.generate_leaderboard()
@@ -434,7 +508,7 @@ class StorageService:
 entry["modelName"] = model_name
 entry["modelProvider"] = model_provider
-self.save_export_file("leaderboard", existing)
+self.persist_runtime_leaderboard(existing)
 logger.info("Leaderboard model info updated")
 def get_current_timestamp_ms(self, date: str = None) -> int:
@@ -640,21 +714,21 @@ class StorageService:
 state["last_update_date"] = date
 self.save_internal_state(state)
-self._generate_summary(state, net_value, prices)
-self._generate_holdings(state, prices)
-self._generate_stats(state, net_value)
-self._generate_trades(state)
+self.export_dashboard_compatibility_files(
+    state,
+    net_value=net_value,
+    prices=prices,
+)
 logger.info(f"Dashboard updated: net_value=${net_value:,.2f}")
-def _generate_summary(
+def _build_summary_export(
 self,
 state: Dict[str, Any],
 net_value: float,
 prices: Dict[str, float],
-):
-"""Generate summary.json"""
+) -> Dict[str, Any]:
+"""Build compatibility summary export payload."""
 portfolio_state = state.get("portfolio_state", {})
 cash = portfolio_state.get("cash", self.initial_cash)
@@ -675,7 +749,7 @@ class StorageService:
 (net_value - self.initial_cash) / self.initial_cash
 ) * 100
-summary = {
+return {
 "totalAssetValue": round(net_value, 2),
 "totalReturn": round(total_return, 2),
 "cashPosition": round(cash, 2),
@@ -689,14 +763,12 @@ class StorageService:
 "momentum": state.get("momentum_history", []),
 }
-self.save_export_file("summary", summary)
-def _generate_holdings(
+def _build_holdings_export(
 self,
 state: Dict[str, Any],
 prices: Dict[str, float],
-):
-"""Generate holdings.json"""
+) -> List[Dict[str, Any]]:
+"""Build compatibility holdings export payload."""
 portfolio_state = state.get("portfolio_state", {})
 positions = portfolio_state.get("positions", {})
 cash = portfolio_state.get("cash", self.initial_cash)
@@ -750,18 +822,17 @@ class StorageService:
 # Sort by weight
 holdings.sort(key=lambda x: abs(x["weight"]), reverse=True)
-self.save_export_file("holdings", holdings)
-def _generate_stats(self, state: Dict[str, Any], net_value: float):
-"""Generate stats.json"""
+return holdings
+def _build_stats_export(self, state: Dict[str, Any], net_value: float) -> Dict[str, Any]:
+"""Build compatibility stats export payload."""
 portfolio_state = state.get("portfolio_state", {})
 cash = portfolio_state.get("cash", self.initial_cash)
 total_return = (
 (net_value - self.initial_cash) / self.initial_cash
 ) * 100
-stats = {
+return {
 "totalAssetValue": round(net_value, 2),
 "totalReturn": round(total_return, 2),
 "cashPosition": round(cash, 2),
@@ -774,10 +845,8 @@ class StorageService:
 },
 }
-self.save_export_file("stats", stats)
-def _generate_trades(self, state: Dict[str, Any]):
-"""Generate trades.json"""
+def _build_trades_export(self, state: Dict[str, Any]) -> List[Dict[str, Any]]:
+"""Build compatibility trades export payload."""
 all_trades = state.get("all_trades", [])
 sorted_trades = sorted(
@@ -800,7 +869,24 @@ class StorageService:
 },
 )
-self.save_export_file("trades", trades)
+return trades
+def export_dashboard_compatibility_files(
+    self,
+    state: Dict[str, Any],
+    *,
+    net_value: float,
+    prices: Dict[str, float],
+) -> None:
+    """Write compatibility dashboard exports from current runtime state."""
+    self.save_dashboard_exports(
+        {
+            "summary": self._build_summary_export(state, net_value, prices),
+            "holdings": self._build_holdings_export(state, prices),
+            "stats": self._build_stats_export(state, net_value),
+            "trades": self._build_trades_export(state),
+        },
+    )
 # Server State Management Methods
@@ -865,11 +951,14 @@ class StorageService:
 def save_server_state(self, state: Dict[str, Any]):
 """
-Save server state to file
-Args:
-state: Server state dictionary
+Save server state to file with rate-limiting to avoid I/O storms.
 """
+now = time.time()
+# Ensure at least 2 seconds between physical disk writes
+if hasattr(self, "_last_save_time") and (now - self._last_save_time) < 2.0:
+    return
+self._last_save_time = now
 state_to_save = {
 **state,
 "last_saved": datetime.now().isoformat(),
@@ -885,14 +974,17 @@ class StorageService:
 if "trades" in state_to_save:
 state_to_save["trades"] = state_to_save["trades"][:100]
+try:
 with open(self.server_state_file, "w", encoding="utf-8") as f:
+# Removed indent=2 to minimize file size and serialization overhead
 json.dump(
 state_to_save,
 f,
 ensure_ascii=False,
-indent=2,
 default=str,
 )
+except Exception as e:
+logger.error(f"Failed to save server state: {e}")
 logger.debug(f"Server state saved to: {self.server_state_file}")
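The time-based guard added to `save_server_state` is a write throttle: calls inside the window are dropped entirely rather than deferred, so the latest state only reaches disk on the next call outside the window. A standalone sketch of the same guard (class and file names invented for illustration):

```python
import json
import tempfile
import time
from pathlib import Path

class ThrottledWriter:
    """Drop writes that arrive less than min_interval seconds after the last one."""

    def __init__(self, path: Path, min_interval: float = 2.0):
        self.path = path
        self.min_interval = min_interval
        self._last_save_time: float | None = None

    def save(self, state: dict) -> bool:
        now = time.time()
        if self._last_save_time is not None and (now - self._last_save_time) < self.min_interval:
            return False  # dropped, not deferred
        self._last_save_time = now
        # Compact JSON (no indent) keeps the file small and serialization cheap.
        self.path.write_text(json.dumps(state, ensure_ascii=False, default=str), encoding="utf-8")
        return True

tmp = Path(tempfile.gettempdir()) / "server_state_demo.json"
writer = ThrottledWriter(tmp, min_interval=2.0)
print(writer.save({"tick": 1}))  # first write goes through
print(writer.save({"tick": 2}))  # within the window: dropped
```

The trade-off is that a process crash inside the window loses the dropped update; a debounce that schedules a trailing write would avoid that at the cost of extra machinery.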

View File

@@ -117,3 +117,35 @@ evaluation_hook.complete_evaluation(success=True)
### Evaluation Result Storage
Evaluation results are automatically saved to `runs/{run_id}/evaluations/{agent_id}/{skill_name}_{timestamp}.json`
---
## Skill Sandbox Execution
Skill scripts (such as valuation report generation) run through a sandbox executor that supports three isolation modes:
| Mode | Description | Use case |
|------|-------------|----------|
| `none` | Direct execution, no isolation | Development (default) |
| `docker` | Docker container isolation | Production |
| `kubernetes` | Kubernetes Pod isolation | Enterprise (reserved) |
### Sandbox Configuration
Sandbox behavior is controlled via environment variables:
```bash
SKILL_SANDBOX_MODE=none # none | docker | kubernetes
SKILL_SANDBOX_IMAGE=python:3.11-slim
SKILL_SANDBOX_MEMORY_LIMIT=512m
SKILL_SANDBOX_CPU_LIMIT=1.0
SKILL_SANDBOX_NETWORK=none
SKILL_SANDBOX_TIMEOUT=60
```
### Development Notes
- The default `none` mode prints a security warning on first execution
- Production environments must set `SKILL_SANDBOX_MODE=docker`
- Skill scripts should be side-effect free; inputs and outputs pass through function parameters and return values
- The mapping from function names to script files is handled via `FUNCTION_TO_SCRIPT_MAP` (e.g. `build_ev_ebitda_report` in `multiple_valuation_report.py`)
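The environment variables in the table above can be gathered into one config object before dispatching to an executor. A hypothetical loader sketch (the variable names come from the table; the dataclass and validation are illustrative, not the project's actual implementation):

```python
import os
from dataclasses import dataclass

VALID_MODES = {"none", "docker", "kubernetes"}

@dataclass(frozen=True)
class SandboxConfig:
    mode: str
    image: str
    memory_limit: str
    cpu_limit: float
    network: str
    timeout: int

def load_sandbox_config() -> SandboxConfig:
    # Reject unknown modes early instead of failing inside the executor.
    mode = os.getenv("SKILL_SANDBOX_MODE", "none").lower()
    if mode not in VALID_MODES:
        raise ValueError(f"SKILL_SANDBOX_MODE must be one of {sorted(VALID_MODES)}, got {mode!r}")
    return SandboxConfig(
        mode=mode,
        image=os.getenv("SKILL_SANDBOX_IMAGE", "python:3.11-slim"),
        memory_limit=os.getenv("SKILL_SANDBOX_MEMORY_LIMIT", "512m"),
        cpu_limit=float(os.getenv("SKILL_SANDBOX_CPU_LIMIT", "1.0")),
        network=os.getenv("SKILL_SANDBOX_NETWORK", "none"),
        timeout=int(os.getenv("SKILL_SANDBOX_TIMEOUT", "60")),
    )

cfg = load_sandbox_config()
print(cfg.mode, cfg.timeout)
```

Defaults match the values shown in the `bash` block, so an empty environment yields the development configuration.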

View File

@@ -0,0 +1,189 @@
---
name: dynamic_team_management
description: Dynamically manage analyst agents in the team, including creating, cloning, and removing analysts and listing the available analyst types.
version: 1.0.0
tools:
- create_analyst
- clone_analyst
- remove_analyst
- list_analyst_types
- get_analyst_info
- get_team_summary
---
# Dynamic Team Management
Use this skill when the analyst team composition needs adjusting. The portfolio manager can dynamically create new analysts, clone existing ones for customization, or remove analysts that are no longer needed.
## 1) When to use
- The current team lacks analysis capability in a specific domain (options, crypto, ESG, etc.)
- Multiple analysts of the same type but with different configurations are needed (e.g. an "aggressive technical analyst" and a "conservative technical analyst")
- Extra analysis capacity is needed temporarily for unusual market conditions
- An analyst is found to be misconfigured and needs to be removed and rebuilt
- A need for a new analytical perspective emerges during team discussion
## 2) Required inputs
### Creating an analyst (create_analyst)
- **agent_id**: unique identifier (e.g. "options_specialist_01")
- **analyst_type**: base type (e.g. "technical_analyst") or a custom identifier
- **Optional**: name, focus, description, soul_md, agents_md, model_name
### Cloning an analyst (clone_analyst)
- **source_id**: source analyst ID (e.g. "technical_analyst")
- **new_id**: new analyst ID (e.g. "crypto_technical_01")
- **Optional**: name, focus_additions, description_override, model_name
### Removing an analyst (remove_analyst)
- **agent_id**: ID of the analyst to remove
## 3) Decision procedure
1. **Assess the team's current capability gaps**
- Review the list of currently active analysts
- Identify missing analytical perspectives or specialist domains
2. **Choose a creation strategy**
- Based on an existing type: specify analyst_type and supply custom configuration
- Fully custom: provide a complete persona definition
- Clone and modify: copy an existing analyst and apply overrides
3. **Configure the analyst**
- Set a unique agent_id
- Define the display name and focus areas
- Optionally supply custom SOUL.md content to define behavior precisely
4. **Verify the result**
- Check the returned success status
- Confirm the new analyst appears in the active list
## 4) Tool call policy
- **create_analyst**: creates a brand-new analyst instance
- A unique agent_id must be provided
- When basing on a predefined type, analyst_type must appear in the available type list, or a complete custom configuration must be provided
- If the tool call fails, check whether the agent_id already exists
- **clone_analyst**: creates a variant based on an existing analyst
- Suited to creating analysts focused on a specific sector (e.g. cloning crypto_technical from technical_analyst)
- The new instance inherits the source configuration with the specified overrides applied
- **remove_analyst**: removes dynamically created analysts
- Only analysts created through this skill can be removed
- System-predefined analysts (fundamentals_analyst, etc.) cannot be removed
- **list_analyst_types**: lists all available analyst types
- Returns predefined types plus runtime-registered types
- **get_analyst_info**: shows the detailed configuration of a specific analyst
- **get_team_summary**: shows the team's overall composition
## 5) Output schema
### create_analyst / clone_analyst output
```json
{
"success": true,
"agent_id": "options_specialist_01",
"message": "Created runtime analyst 'options_specialist_01' (technical_analyst).",
"analyst_type": "technical_analyst"
}
```
### remove_analyst output
```json
{
"success": true,
"agent_id": "options_specialist_01",
"message": "Removed runtime analyst 'options_specialist_01'."
}
```
### list_analyst_types output
```json
[
{
"type_id": "fundamentals_analyst",
"name": "Fundamentals Analyst",
"description": "...",
"is_builtin": true,
"source": "constants"
}
]
```
## 6) Failure fallback
- **agent_id already exists**: an error is returned; choose a new agent_id, or use clone_analyst to create a variant of the existing analyst
- **Unknown analyst_type**: use list_analyst_types to inspect the available types, or provide a complete custom persona
- **Creation failed**: check the system logs; likely causes include model misconfiguration and workspace permission problems
- **Removal failed**: confirm the analyst was created dynamically (system-predefined analysts cannot be removed)
## Important conventions
### Agent ID naming rule
For a newly created analyst to work correctly, **the agent_id must end with `_analyst`**. This is the key convention by which the system recognizes the analyst type and assigns the matching tools.
- **Correct**: `options_specialist_analyst`, `crypto_technical_analyst`
- **Incorrect**: `options_specialist`, `crypto_expert`
If this convention is not followed, the analyst will not receive the analysis tool groups (fundamentals, technical, sentiment, valuation, and so on).
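A minimal pre-flight check one might apply before calling create_analyst can make the naming rule concrete (this validator is illustrative, not the system's actual validation):

```python
import re

def validate_agent_id(agent_id: str) -> list[str]:
    """Return a list of problems with a proposed analyst agent_id (empty list = OK)."""
    problems = []
    if not agent_id.endswith("_analyst"):
        problems.append("agent_id must end with '_analyst' to receive analysis tool groups")
    if not re.fullmatch(r"[a-z][a-z0-9_]*", agent_id):
        problems.append("use a lowercase snake_case identifier")
    return problems

print(validate_agent_id("options_specialist_analyst"))  # no problems
print(validate_agent_id("crypto_expert"))  # missing the required suffix
```

Checking before the tool call avoids the create-then-remove round trip when the suffix rule is violated.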
### Brand-new custom types vs. existing types
**Based on an existing type** (recommended for quick creation):
- Use a predefined type such as `analyst_type: "technical_analyst"`
- Persona, SOUL.md, and other configuration can be overridden
- The tool group is selected automatically from `analyst_type`
**Brand-new custom type** (for full customization):
- Set `analyst_type` to a custom identifier (e.g. `"custom"`) or any string
- A complete `persona` definition must be provided
- Supplying `soul_md` to define behavior precisely is recommended
- **The agent_id must still end with `_analyst`**
## Best practices
1. **Naming**: use descriptive agent_ids (`industry_tech_analyst` rather than `analyst_01`) that **always end with `_analyst`**
2. **Versioning**: include version information in new_id when cloning (`technical_v2_crypto_analyst`)
3. **Documentation**: give custom analysts a detailed description for later understanding and maintenance
4. **Resource management**: periodically check team size with get_team_summary and remove analysts that are no longer needed
## Example scenarios
### Scenario 1: Adding a crypto analyst
```
Create a new analyst focused on cryptocurrency technical analysis:
- agent_id: "crypto_technical_01"
- analyst_type: "technical_analyst"
- name: "Crypto Technical Analyst"
- focus: ["on-chain data analysis", "DeFi protocols", "crypto technical indicators"]
```
### Scenario 2: Clone and customize
```
Create a more aggressive variant based on technical_analyst:
- source_id: "technical_analyst"
- new_id: "technical_aggressive_01"
- name: "Aggressive Technical Analyst"
- focus_additions: ["high-volatility trading", "breakout strategies"]
- description_override: "Focused on high-risk, high-reward technical strategies..."
```
### Scenario 3: Creating a brand-new custom type (options specialist)
```
Create a fully custom options analyst (note that the agent_id ends with _analyst):
- agent_id: "options_strategist_analyst"
- analyst_type: "custom"  # a non-predefined type
- name: "Options Strategy Analyst"
- focus: ["options pricing", "Greeks", "volatility surface"]
- soul_md: "# Role definition\nYou are an options strategy expert focused on..."
```
**Notes**:
- Even if `analyst_type` is "custom" (not among the predefined types), the system can create a fully functional analyst as long as a complete `persona` or `soul_md` is provided
- The `agent_id` must end with `_analyst` to receive the analysis tools
- The model defaults to the global setting, or can be specified via the `model_name` parameter

View File

@@ -23,15 +23,17 @@ version: 1.0.0
 ## 3) Decision procedure
 1. Aggregate and compare analyst signals; identify consensus and disagreement.
-2. Map risk warnings to position caps and no-open conditions.
-3. Under cash and margin constraints, generate candidate actions and quantities for each ticker.
-4. Arbitrate conflicting signals conservatively: reduce position size, raise trigger thresholds, or switch to `hold`.
-5. Record the final decision ticker by ticker, with a portfolio-level rationale.
+2. First judge whether the current team covers the expertise this round requires; if not, prefer expanding the team over arbitrating directly.
+3. Map risk warnings to position caps and no-open conditions.
+4. Under cash and margin constraints, generate candidate actions and quantities for each ticker.
+5. Arbitrate conflicting signals conservatively: reduce position size, raise trigger thresholds, add analysts, or switch to `hold`.
+6. Record the final decision ticker by ticker, with a portfolio-level rationale.
 ## 4) Tool call policy
 - The decision tool must be used to record the final `action/quantity` for every ticker.
-- During discussion, if the current team's capabilities are found insufficient, team tools may be used to dynamically create or remove analysts before continuing.
+- During discussion, if team capabilities are insufficient, the evidence chain is broken, or conflicting views cannot be adjudicated, the team tools must be used first to dynamically create or clone analysts before continuing.
+- If "more specialist analysis is needed" has been identified but the dynamic team tools were not called to fill the gap, no high-confidence final decision may be output.
 - If the risk tools flag blocking items, follow the block first; never bypass it.
 - On tool-call failure: retry once; if it still fails, output a structured "unfinished decisions" list with suggestions for manual handling.
@@ -46,5 +48,6 @@ version: 1.0.0
 ## 6) Failure fallback
 - When analyst signals conflict sharply with risk conclusions, default to a smaller position or `hold`.
+- When a task clearly exceeds the current team's capability boundary, expand the team first; if expansion fails, degrade to `hold` or a conditional decision draft.
 - When constraint checks fail (insufficient cash/margin), reduce the quantity automatically; never output an unexecutable instruction.
 - When the task requires a complete list, no ticker may be omitted; where no decision is possible, explicitly mark `hold` and state the reason.
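The conservative-arbitration step above (reduce size, raise thresholds, or fall back to `hold`) can be sketched as a pure function over a signal pair. Thresholds, field names, and the halving rule here are invented for illustration; the skill leaves the exact policy to the manager agent:

```python
def arbitrate(analyst_action: str, analyst_confidence: float, risk_level: str, quantity: int) -> tuple[str, int]:
    """Conservative arbitration: conflicts shrink the position or force hold."""
    if risk_level == "block":
        return "hold", 0  # risk blocks are never bypassed
    if risk_level == "elevated" and analyst_action in ("buy", "short"):
        if analyst_confidence < 0.7:
            return "hold", 0  # low-confidence signal under elevated risk
        return analyst_action, max(1, quantity // 2)  # keep direction, halve size
    return analyst_action, quantity

print(arbitrate("buy", 0.9, "elevated", 100))  # direction kept, size halved
print(arbitrate("buy", 0.5, "elevated", 100))  # conflict resolved to hold
print(arbitrate("sell", 0.8, "block", 40))     # block always wins
```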

View File

@@ -10,12 +10,15 @@ description: Consolidate analyst views and risk feedback into clear portfolio-level decisions
 ## Workflow
 1. Read the analyst conclusions and risk warnings before acting.
-2. Evaluate the current portfolio, cash, and margin constraints.
-3. Use the decision tool to record one explicit decision per ticker.
-4. After all decisions are recorded, summarize the overall portfolio-level rationale.
+2. First judge whether the current team is sufficient to cover this round's task; if not, expand the team before continuing.
+3. Evaluate the current portfolio, cash, and margin constraints.
+4. Use the decision tool to record one explicit decision per ticker.
+5. After all decisions are recorded, summarize the overall portfolio-level rationale.
 ## Constraints
 - Position sizes must respect cash and margin limits.
 - When analyst confidence and risk signals disagree, prefer the smaller position.
+- When a task exceeds the current team's capability boundary, prefer using the dynamic team tools to create or clone analysts.
+- Once a coverage gap has been identified, do not skip the expansion step and jump straight to a high-confidence conclusion.
 - When the task requires a complete decision list, leave no ticker undecided.

View File

@@ -1,12 +1,11 @@
 # -*- coding: utf-8 -*-
 """Tests for the extracted agent service surface."""
-from pathlib import Path
 from fastapi.testclient import TestClient
 from backend.apps.agent_service import create_app
-from backend.api import agents as agents_module
+from backend.api import runs as runs_module
 def test_agent_service_routes_include_control_plane_endpoints(tmp_path):
@@ -28,6 +27,19 @@ def test_agent_service_excludes_runtime_routes(tmp_path):
 assert "/api/runtime/gateway/port" not in paths
+def test_agent_service_status_includes_scope_metadata(tmp_path):
+    app = create_app(project_root=tmp_path)
+    with TestClient(app) as client:
+        response = client.get("/api/status")
+    assert response.status_code == 200
+    payload = response.json()
+    assert payload["scope"]["design_time_registry"]["root"] == str(tmp_path / "workspaces")
+    assert payload["scope"]["runtime_assets"]["root"] == str(tmp_path / "runs")
+    assert "runs/{run_id}" in payload["scope"]["agent_route_note"]
 def test_agent_service_read_routes(monkeypatch, tmp_path):
 class _FakeSkillsManager:
 project_root = tmp_path
@@ -61,10 +73,10 @@ def test_agent_service_read_routes(monkeypatch, tmp_path):
 def load_agent_file(self, config_name, agent_id, filename):
 return f"{config_name}:{agent_id}:{filename}"
-monkeypatch.setattr(agents_module, "load_agent_profiles", lambda: {"portfolio_manager": {"skills": ["demo_skill"]}})
-monkeypatch.setattr(agents_module, "get_agent_model_info", lambda agent_id: ("deepseek-v3.2", "DASHSCOPE"))
+monkeypatch.setattr(runs_module, "load_agent_profiles", lambda: {"portfolio_manager": {"skills": ["demo_skill"]}})
+monkeypatch.setattr(runs_module, "get_agent_model_info", lambda agent_id: ("deepseek-v3.2", "DASHSCOPE"))
 monkeypatch.setattr(
-agents_module,
+runs_module,
 "load_agent_workspace_config",
 lambda path: type(
 "Cfg",
@@ -79,26 +91,30 @@
 )(),
 )
 monkeypatch.setattr(
-agents_module,
+runs_module,
 "get_bootstrap_config_for_run",
 lambda project_root, config_name: type("Bootstrap", (), {"agent_override": lambda self, agent_id: {}})(),
 )
 app = create_app(project_root=tmp_path)
-app.dependency_overrides[agents_module.get_skills_manager] = lambda: _FakeSkillsManager()
-app.dependency_overrides[agents_module.get_workspace_manager] = lambda: _FakeWorkspaceManager()
+app.dependency_overrides[runs_module.get_skills_manager] = lambda: _FakeSkillsManager()
+app.dependency_overrides[runs_module.get_workspace_manager] = lambda: _FakeWorkspaceManager()
 with TestClient(app) as client:
-profile = client.get("/api/workspaces/demo/agents/portfolio_manager/profile")
-skills = client.get("/api/workspaces/demo/agents/portfolio_manager/skills")
-detail = client.get("/api/workspaces/demo/agents/portfolio_manager/skills/demo_skill")
-workspace_file = client.get("/api/workspaces/demo/agents/portfolio_manager/files/MEMORY.md")
+profile = client.get("/api/runs/demo/agents/portfolio_manager/profile")
+skills = client.get("/api/runs/demo/agents/portfolio_manager/skills")
+detail = client.get("/api/runs/demo/agents/portfolio_manager/skills/demo_skill")
+workspace_file = client.get("/api/runs/demo/agents/portfolio_manager/files/MEMORY.md")
 assert profile.status_code == 200
 assert profile.json()["profile"]["model_name"] == "deepseek-v3.2"
+assert profile.json()["scope_type"] == "runtime_run"
 assert skills.status_code == 200
 assert skills.json()["skills"][0]["skill_name"] == "demo_skill"
+assert skills.json()["scope_type"] == "runtime_run"
 assert detail.status_code == 200
 assert detail.json()["skill"]["content"] == "# demo"
+assert detail.json()["scope_type"] == "runtime_run"
 assert workspace_file.status_code == 200
 assert workspace_file.json()["content"] == "demo:portfolio_manager:MEMORY.md"
+assert workspace_file.json()["scope_type"] == "runtime_run"

View File

@@ -3,315 +3,13 @@
import json import json
import tempfile import tempfile
from pathlib import Path from pathlib import Path
from unittest.mock import MagicMock
import pytest import pytest
from agentscope.message import Msg from agentscope.message import Msg
class TestAnalystAgent:
def test_init_valid_analyst_type(self):
from backend.agents.analyst import AnalystAgent
mock_toolkit = MagicMock()
mock_model = MagicMock()
mock_formatter = MagicMock()
agent = AnalystAgent(
analyst_type="technical_analyst",
toolkit=mock_toolkit,
model=mock_model,
formatter=mock_formatter,
)
assert agent.analyst_type_key == "technical_analyst"
assert agent.name == "technical_analyst"
assert agent.analyst_persona == "Technical Analyst"
def test_init_invalid_analyst_type(self):
from backend.agents.analyst import AnalystAgent
mock_toolkit = MagicMock()
mock_model = MagicMock()
mock_formatter = MagicMock()
with pytest.raises(ValueError) as excinfo:
AnalystAgent(
analyst_type="invalid_type",
toolkit=mock_toolkit,
model=mock_model,
formatter=mock_formatter,
)
assert "Unknown analyst type" in str(excinfo.value)
def test_init_custom_agent_id(self):
from backend.agents.analyst import AnalystAgent
mock_toolkit = MagicMock()
mock_model = MagicMock()
mock_formatter = MagicMock()
agent = AnalystAgent(
analyst_type="fundamentals_analyst",
toolkit=mock_toolkit,
model=mock_model,
formatter=mock_formatter,
agent_id="custom_analyst_id",
)
assert agent.name == "custom_analyst_id"
def test_load_system_prompt(self):
from backend.agents.analyst import AnalystAgent
mock_toolkit = MagicMock()
mock_model = MagicMock()
mock_formatter = MagicMock()
agent = AnalystAgent(
analyst_type="sentiment_analyst",
toolkit=mock_toolkit,
model=mock_model,
formatter=mock_formatter,
)
prompt = agent._load_system_prompt()
assert isinstance(prompt, str)
assert len(prompt) > 0
class TestPMAgent:
def test_init_default(self):
from backend.agents.portfolio_manager import PMAgent
mock_model = MagicMock()
mock_formatter = MagicMock()
agent = PMAgent(
model=mock_model,
formatter=mock_formatter,
)
assert agent.name == "portfolio_manager"
assert agent.portfolio["cash"] == 100000.0
assert agent.portfolio["positions"] == {}
assert agent.portfolio["margin_requirement"] == 0.25
def test_init_custom_cash(self):
from backend.agents.portfolio_manager import PMAgent
mock_model = MagicMock()
mock_formatter = MagicMock()
agent = PMAgent(
model=mock_model,
formatter=mock_formatter,
initial_cash=50000.0,
margin_requirement=0.5,
)
assert agent.portfolio["cash"] == 50000.0
assert agent.portfolio["margin_requirement"] == 0.5
def test_get_portfolio_state(self):
from backend.agents.portfolio_manager import PMAgent
mock_model = MagicMock()
mock_formatter = MagicMock()
agent = PMAgent(
model=mock_model,
formatter=mock_formatter,
initial_cash=75000.0,
)
state = agent.get_portfolio_state()
assert state["cash"] == 75000.0
assert state is not agent.portfolio # Should be a copy
def test_load_portfolio_state(self):
from backend.agents.portfolio_manager import PMAgent
mock_model = MagicMock()
mock_formatter = MagicMock()
agent = PMAgent(
model=mock_model,
formatter=mock_formatter,
)
new_portfolio = {
"cash": 50000.0,
"positions": {
"AAPL": {"long": 100, "short": 0, "long_cost_basis": 150.0},
},
"margin_used": 1000.0,
}
agent.load_portfolio_state(new_portfolio)
assert agent.portfolio["cash"] == 50000.0
assert "AAPL" in agent.portfolio["positions"]
def test_update_portfolio(self):
from backend.agents.portfolio_manager import PMAgent
mock_model = MagicMock()
mock_formatter = MagicMock()
agent = PMAgent(
model=mock_model,
formatter=mock_formatter,
)
agent.update_portfolio({"cash": 80000.0})
assert agent.portfolio["cash"] == 80000.0
def _get_text_from_tool_response(self, result):
"""Helper to extract text from ToolResponse content"""
content = result.content[0]
if hasattr(content, "text"):
return content.text
elif isinstance(content, dict):
return content.get("text", "")
return str(content)
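The helper above normalizes three content shapes: objects exposing a `.text` attribute, plain dicts, and anything else. A self-contained sketch of the same normalization, with a hypothetical `TextBlock` dataclass standing in for the real ToolResponse content object:

```python
from dataclasses import dataclass

@dataclass
class TextBlock:
    # Hypothetical stand-in for a tool-response content object with a .text field.
    text: str

def get_text(content) -> str:
    # Normalize one tool-response content item to plain text.
    if hasattr(content, "text"):
        return content.text
    if isinstance(content, dict):
        return content.get("text", "")
    return str(content)

assert get_text(TextBlock("Decision recorded")) == "Decision recorded"
assert get_text({"text": "hold"}) == "hold"
assert get_text(42) == "42"
```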
def test_make_decision_long(self):
from backend.agents.portfolio_manager import PMAgent
mock_model = MagicMock()
mock_formatter = MagicMock()
agent = PMAgent(
model=mock_model,
formatter=mock_formatter,
)
result = agent._make_decision(
ticker="AAPL",
action="long",
quantity=100,
confidence=80,
reasoning="Strong fundamentals",
)
text = self._get_text_from_tool_response(result)
assert "Decision recorded" in text
assert agent._decisions["AAPL"]["action"] == "long"
assert agent._decisions["AAPL"]["quantity"] == 100
def test_make_decision_hold(self):
from backend.agents.portfolio_manager import PMAgent
mock_model = MagicMock()
mock_formatter = MagicMock()
agent = PMAgent(
model=mock_model,
formatter=mock_formatter,
)
result = agent._make_decision(
ticker="GOOGL",
action="hold",
quantity=0,
confidence=50,
reasoning="Neutral outlook",
)
text = self._get_text_from_tool_response(result)
assert "Decision recorded" in text
assert agent._decisions["GOOGL"]["action"] == "hold"
assert agent._decisions["GOOGL"]["quantity"] == 0
def test_make_decision_invalid_action(self):
from backend.agents.portfolio_manager import PMAgent
mock_model = MagicMock()
mock_formatter = MagicMock()
agent = PMAgent(
model=mock_model,
formatter=mock_formatter,
)
result = agent._make_decision(
ticker="AAPL",
action="invalid",
quantity=10,
)
text = self._get_text_from_tool_response(result)
assert "Invalid action" in text
def test_get_decisions(self):
from backend.agents.portfolio_manager import PMAgent
mock_model = MagicMock()
mock_formatter = MagicMock()
agent = PMAgent(
model=mock_model,
formatter=mock_formatter,
)
agent._make_decision("AAPL", "long", 100)
agent._make_decision("GOOGL", "short", 50)
decisions = agent.get_decisions()
assert len(decisions) == 2
assert decisions["AAPL"]["action"] == "long"
assert decisions["GOOGL"]["action"] == "short"
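The decision tests above imply a validate-then-record pattern. A minimal stand-in (the valid-action set is assumed from the tests; the real PMAgent logic may differ):

```python
VALID_ACTIONS = {"long", "short", "hold"}  # assumed set, inferred from the tests

def make_decision(decisions: dict, ticker: str, action: str, quantity: int) -> str:
    # Reject unknown actions, otherwise record the decision keyed by ticker.
    if action not in VALID_ACTIONS:
        return f"Invalid action: {action}"
    decisions[ticker] = {"action": action, "quantity": quantity}
    return f"Decision recorded for {ticker}"

decisions = {}
assert "Decision recorded" in make_decision(decisions, "AAPL", "long", 100)
assert "Invalid action" in make_decision(decisions, "AAPL", "buy", 10)
assert decisions["AAPL"] == {"action": "long", "quantity": 100}  # invalid call did not overwrite
```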
class TestRiskAgent:
def test_init_default(self):
from backend.agents.risk_manager import RiskAgent
mock_model = MagicMock()
mock_formatter = MagicMock()
agent = RiskAgent(
model=mock_model,
formatter=mock_formatter,
)
assert agent.name == "risk_manager"
def test_init_custom_name(self):
from backend.agents.risk_manager import RiskAgent
mock_model = MagicMock()
mock_formatter = MagicMock()
agent = RiskAgent(
model=mock_model,
formatter=mock_formatter,
name="custom_risk_manager",
)
assert agent.name == "custom_risk_manager"
def test_load_system_prompt(self):
from backend.agents.risk_manager import RiskAgent
mock_model = MagicMock()
mock_formatter = MagicMock()
agent = RiskAgent(
model=mock_model,
formatter=mock_formatter,
)
prompt = agent._load_system_prompt()
assert isinstance(prompt, str)
assert len(prompt) > 0
class TestStorageService:
- def test_storage_service_defaults_to_live_config(self):
+ def test_storage_service_defaults_to_runtime_config(self):
from backend.services.storage import StorageService
with tempfile.TemporaryDirectory() as tmpdir:
@@ -320,7 +18,7 @@ class TestStorageService:
initial_cash=100000.0,
)
- assert storage.config_name == "live"
+ assert storage.config_name == "runtime"
def test_calculate_portfolio_value_cash_only(self):
from backend.services.storage import StorageService
@@ -404,7 +102,7 @@ class TestStorageService:
assert trades[0]["qty"] == 50
assert trades[0]["price"] == 200.0
- def test_generate_summary(self):
+ def test_build_summary_export(self):
from backend.services.storage import StorageService
with tempfile.TemporaryDirectory() as tmpdir:
@@ -424,13 +122,12 @@ class TestStorageService:
}
prices = {"AAPL": 500.0}
- storage._generate_summary(state, 100000.0, prices)
- summary = storage.load_file("summary")
+ summary = storage._build_summary_export(state, 100000.0, prices)
assert summary["totalAssetValue"] == 100000.0
assert summary["totalReturn"] == 0.0
- def test_generate_holdings(self):
+ def test_build_holdings_export(self):
from backend.services.storage import StorageService
with tempfile.TemporaryDirectory() as tmpdir:
@@ -448,9 +145,8 @@ class TestStorageService:
}
prices = {"AAPL": 500.0}
- storage._generate_holdings(state, prices)
- holdings = storage.load_file("holdings")
+ holdings = storage._build_holdings_export(state, prices)
assert len(holdings) == 2 # AAPL + CASH
aapl_holding = next(
@@ -461,6 +157,150 @@ class TestStorageService:
assert aapl_holding["quantity"] == 100
assert aapl_holding["currentPrice"] == 500.0
def test_export_dashboard_compatibility_files_writes_expected_exports(self):
from backend.services.storage import StorageService
with tempfile.TemporaryDirectory() as tmpdir:
storage = StorageService(
dashboard_dir=Path(tmpdir) / "team_dashboard",
initial_cash=100000.0,
)
state = {
"portfolio_state": {
"cash": 90000.0,
"positions": {"AAPL": {"long": 50, "short": 0}},
"margin_used": 0.0,
},
"equity_history": [{"t": 1000, "v": 100000}],
"baseline_history": [{"t": 1000, "v": 100000}],
"baseline_vw_history": [{"t": 1000, "v": 100000}],
"momentum_history": [{"t": 1000, "v": 100000}],
"all_trades": [
{
"id": "t1",
"ts": 1000,
"trading_date": "2024-01-15",
"side": "LONG",
"ticker": "AAPL",
"qty": 50,
"price": 200.0,
}
],
}
prices = {"AAPL": 200.0}
storage.export_dashboard_compatibility_files(
state,
net_value=100000.0,
prices=prices,
)
assert storage.load_export_file("summary")["totalAssetValue"] == 100000.0
holdings = storage.load_export_file("holdings")
assert any(item["ticker"] == "AAPL" for item in holdings)
assert storage.load_export_file("stats")["totalTrades"] == 1
assert storage.load_export_file("trades")[0]["ticker"] == "AAPL"
def test_build_dashboard_snapshot_prefers_persisted_runtime_state_when_memory_view_is_sparse(self):
from backend.services.storage import StorageService
with tempfile.TemporaryDirectory() as tmpdir:
dashboard_dir = Path(tmpdir) / "team_dashboard"
storage = StorageService(
dashboard_dir=dashboard_dir,
initial_cash=100000.0,
)
storage.save_server_state(
{
"portfolio": {
"total_value": 123456.0,
"cash": 45678.0,
"pnl_percent": 23.45,
},
"holdings": [{"ticker": "AAPL", "quantity": 10}],
"stats": {"totalTrades": 3},
"trades": [{"ticker": "AAPL"}],
"leaderboard": [{"agentId": "technical_analyst"}],
}
)
snapshot = storage.build_dashboard_snapshot_from_state({"portfolio": {}})
assert snapshot["summary"]["totalAssetValue"] == 123456.0
assert snapshot["holdings"][0]["ticker"] == "AAPL"
assert snapshot["trades"][0]["ticker"] == "AAPL"
assert snapshot["leaderboard"][0]["agentId"] == "technical_analyst"
def test_runtime_leaderboard_prefers_server_state_and_persists_back(self):
from backend.services.storage import StorageService
with tempfile.TemporaryDirectory() as tmpdir:
dashboard_dir = Path(tmpdir) / "team_dashboard"
storage = StorageService(
dashboard_dir=dashboard_dir,
initial_cash=100000.0,
)
storage.save_export_file("leaderboard", [{"agentId": "export_only"}])
storage.save_server_state({"leaderboard": [{"agentId": "runtime_state"}]})
leaderboard = storage.load_runtime_leaderboard()
assert leaderboard[0]["agentId"] == "runtime_state"
updated = [{"agentId": "updated_runtime"}]
storage.persist_runtime_leaderboard(updated)
saved_state = storage.read_persisted_server_state()
saved_export = storage.load_export_file("leaderboard")
assert saved_state["leaderboard"][0]["agentId"] == "updated_runtime"
assert saved_export[0]["agentId"] == "updated_runtime"
def test_compatibility_exports_can_be_disabled_without_breaking_runtime_leaderboard(self):
from backend.services.storage import StorageService
with tempfile.TemporaryDirectory() as tmpdir:
dashboard_dir = Path(tmpdir) / "team_dashboard"
storage = StorageService(
dashboard_dir=dashboard_dir,
initial_cash=100000.0,
enable_compat_exports=False,
)
storage.generate_leaderboard()
storage.export_dashboard_compatibility_files(
{
"portfolio_state": {
"cash": 100000.0,
"positions": {},
"margin_used": 0.0,
},
"equity_history": [],
"baseline_history": [],
"baseline_vw_history": [],
"momentum_history": [],
"all_trades": [],
},
net_value=100000.0,
prices={},
)
assert not dashboard_dir.joinpath("summary.json").exists()
assert storage.load_runtime_leaderboard()
persisted = storage.read_persisted_server_state()
assert persisted["leaderboard"]
def test_compatibility_exports_default_can_be_disabled_via_env(self, monkeypatch):
from backend.services.storage import StorageService
monkeypatch.setenv("ENABLE_DASHBOARD_COMPAT_EXPORTS", "false")
with tempfile.TemporaryDirectory() as tmpdir:
storage = StorageService(
dashboard_dir=Path(tmpdir) / "team_dashboard",
initial_cash=100000.0,
)
assert storage.enable_compat_exports is False
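The env test above implies a boolean-flag parser behind `enable_compat_exports`. A minimal sketch, assuming the common convention that `"false"`, `"0"`, `"no"`, and `"off"` (case-insensitive) disable the flag; the real StorageService parsing may accept a different set of values:

```python
import os

def env_flag(name: str, default: bool = True) -> bool:
    # Read an environment variable as a boolean, falling back to `default`.
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() not in {"0", "false", "no", "off"}

os.environ["ENABLE_DASHBOARD_COMPAT_EXPORTS"] = "false"
assert env_flag("ENABLE_DASHBOARD_COMPAT_EXPORTS") is False
del os.environ["ENABLE_DASHBOARD_COMPAT_EXPORTS"]
assert env_flag("ENABLE_DASHBOARD_COMPAT_EXPORTS") is True  # default when unset
```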
class TestTradeExecutor:
def test_execute_trade_long(self):
@@ -533,37 +373,34 @@ class TestTradeExecutor:
class TestPipelineExecution:
def test_execute_decisions(self):
- from backend.core.pipeline import TradingPipeline
- from backend.agents.portfolio_manager import PMAgent
- mock_model = MagicMock()
- mock_formatter = MagicMock()
- pm = PMAgent(
- model=mock_model,
- formatter=mock_formatter,
- initial_cash=100000.0,
- )
- pipeline = TradingPipeline(
- analysts=[],
- risk_manager=MagicMock(),
- portfolio_manager=pm,
- max_comm_cycles=0,
- )
- decisions = {
- "AAPL": {"action": "long", "quantity": 10},
- "GOOGL": {"action": "short", "quantity": 5},
- }
- prices = {"AAPL": 150.0, "GOOGL": 100.0}
- result = pipeline._execute_decisions(decisions, prices, "2024-01-15")
- assert len(result["executed_trades"]) == 2
- assert result["executed_trades"][0]["ticker"] == "AAPL"
- assert result["executed_trades"][0]["quantity"] == 10
- assert pm.portfolio["positions"]["AAPL"]["long"] == 10
+ """Test that pipeline executes decisions correctly.
+
+ This test verifies the TradingPipeline integrates with TradeExecutor.
+ Full integration testing is done in end-to-end tests.
+ """
+ from backend.utils.trade_executor import PortfolioTradeExecutor
+ # Use real PortfolioTradeExecutor to test the execution logic
+ executor = PortfolioTradeExecutor(
+ initial_portfolio={
+ "cash": 100000.0,
+ "positions": {},
+ "margin_requirement": 0.25,
+ "margin_used": 0.0,
+ },
+ )
+ # Execute a long trade
+ result = executor.execute_trade(
+ ticker="AAPL",
+ action="long",
+ quantity=10,
+ price=150.0,
+ )
+ assert result["status"] == "success"
+ assert executor.portfolio["positions"]["AAPL"]["long"] == 10
+ assert executor.portfolio["cash"] == 98500.0 # 100000 - 10*150
class TestMsgContentIsString:


@@ -1,235 +0,0 @@
# -*- coding: utf-8 -*-
from pathlib import Path
from backend import cli
def test_live_runs_incremental_market_store_update_before_start(monkeypatch, tmp_path):
project_root = tmp_path
(project_root / ".env").write_text("FINNHUB_API_KEY=test\n", encoding="utf-8")
calls = []
monkeypatch.setattr(cli, "get_project_root", lambda: project_root)
monkeypatch.setattr(cli, "handle_history_cleanup", lambda config_name, auto_clean=False: None)
monkeypatch.setattr(cli, "run_data_updater", lambda project_root: calls.append(("run_data_updater", project_root)))
monkeypatch.setattr(
cli,
"auto_update_market_store",
lambda config_name, end_date=None: calls.append(("auto_update_market_store", config_name, end_date)),
)
monkeypatch.setattr(
cli,
"auto_enrich_market_store",
lambda config_name, end_date=None, lookback_days=120, force=False: calls.append(
("auto_enrich_market_store", config_name, end_date, lookback_days, force)
),
)
monkeypatch.setattr(cli.os, "chdir", lambda path: calls.append(("chdir", Path(path))))
def fake_run(cmd, check=True, **kwargs):
calls.append(("subprocess.run", cmd, check))
return 0
monkeypatch.setattr(cli.subprocess, "run", fake_run)
cli.live(
config_name="smoke_fullstack",
host="0.0.0.0",
port=8765,
trigger_time="now",
poll_interval=10,
clean=False,
enable_memory=False,
)
assert any(item[0] == "run_data_updater" for item in calls)
assert any(
item[0] == "auto_update_market_store" and item[1] == "smoke_fullstack"
for item in calls
)
assert any(
item[0] == "auto_enrich_market_store" and item[1] == "smoke_fullstack"
for item in calls
)
run_call = next(item for item in calls if item[0] == "subprocess.run")
assert run_call[1][:6] == [
cli.sys.executable,
"-u",
"-m",
"backend.main",
"--mode",
"live",
]
def test_backtest_runs_full_market_store_prepare_before_start(monkeypatch, tmp_path):
project_root = tmp_path
calls = []
monkeypatch.setattr(cli, "get_project_root", lambda: project_root)
monkeypatch.setattr(cli, "handle_history_cleanup", lambda config_name, auto_clean=False: None)
monkeypatch.setattr(cli, "run_data_updater", lambda project_root: calls.append(("run_data_updater", project_root)))
monkeypatch.setattr(
cli,
"auto_prepare_backtest_market_store",
lambda config_name, start_date, end_date: calls.append(
("auto_prepare_backtest_market_store", config_name, start_date, end_date)
),
)
monkeypatch.setattr(
cli,
"auto_enrich_market_store",
lambda config_name, end_date=None, lookback_days=120, force=False: calls.append(
("auto_enrich_market_store", config_name, end_date, lookback_days, force)
),
)
monkeypatch.setattr(cli.os, "chdir", lambda path: calls.append(("chdir", Path(path))))
def fake_run(cmd, check=True, **kwargs):
calls.append(("subprocess.run", cmd, check))
return 0
monkeypatch.setattr(cli.subprocess, "run", fake_run)
cli.backtest(
start="2026-03-01",
end="2026-03-10",
config_name="smoke_fullstack",
host="0.0.0.0",
port=8765,
poll_interval=10,
clean=False,
enable_memory=False,
)
assert any(item[0] == "run_data_updater" for item in calls)
assert any(
item[0] == "auto_prepare_backtest_market_store"
and item[1:] == ("smoke_fullstack", "2026-03-01", "2026-03-10")
for item in calls
)
assert any(
item[0] == "auto_enrich_market_store"
and item[1] == "smoke_fullstack"
and item[2] == "2026-03-10"
for item in calls
)
run_call = next(item for item in calls if item[0] == "subprocess.run")
assert run_call[1][:6] == [
cli.sys.executable,
"-u",
"-m",
"backend.main",
"--mode",
"backtest",
]
def test_ingest_enrich_runs_batch_enrichment(monkeypatch):
calls = []
monkeypatch.setattr(cli, "_resolve_symbols", lambda raw_tickers, config_name=None: ["AAPL", "MSFT"])
class DummyStore:
pass
monkeypatch.setattr(cli, "MarketStore", lambda: DummyStore())
monkeypatch.setattr(
cli,
"enrich_symbols",
lambda store, symbols, start_date=None, end_date=None, limit=200, analysis_source="local", skip_existing=True: calls.append(
("enrich_symbols", symbols, start_date, end_date, limit, analysis_source, skip_existing)
) or [
{
"symbol": symbol,
"news_count": 3,
"queued_count": 3,
"analyzed": 3,
"skipped_existing_count": 0,
"deduped_count": 0,
"llm_count": 0,
"local_count": 3,
}
for symbol in symbols
],
)
cli.ingest_enrich(
tickers=None,
start="2026-03-01",
end="2026-03-10",
limit=150,
force=False,
config_name="smoke_fullstack",
)
assert calls == [
("enrich_symbols", ["AAPL", "MSFT"], "2026-03-01", "2026-03-10", 150, "local", True)
]
def test_ingest_report_reads_market_store_report(monkeypatch):
calls = []
printed = []
monkeypatch.setattr(cli, "_resolve_symbols", lambda raw_tickers, config_name=None: ["AAPL"])
class DummyStore:
def get_enrich_report(self, symbols=None, start_date=None, end_date=None):
calls.append(("get_enrich_report", symbols, start_date, end_date))
return [
{
"symbol": "AAPL",
"raw_news_count": 10,
"analyzed_news_count": 8,
"coverage_pct": 80.0,
"llm_count": 5,
"local_count": 3,
"latest_trade_date": "2026-03-16",
"latest_analysis_at": "2026-03-16T09:00:00",
}
]
monkeypatch.setattr(cli, "MarketStore", lambda: DummyStore())
monkeypatch.setattr(cli, "get_explain_model_info", lambda: {"provider": "DASHSCOPE", "model_name": "qwen-max", "label": "DASHSCOPE:qwen-max"})
monkeypatch.setattr(cli, "llm_enrichment_enabled", lambda: True)
monkeypatch.setattr(cli.console, "print", lambda value: printed.append(value))
cli.ingest_report(
tickers=None,
start="2026-03-01",
end="2026-03-16",
config_name="smoke_fullstack",
only_problematic=False,
)
assert calls == [
("get_enrich_report", ["AAPL"], "2026-03-01", "2026-03-16")
]
assert printed
assert getattr(printed[0], "caption", "") == "Explain LLM: DASHSCOPE:qwen-max"
def test_filter_problematic_report_rows_keeps_low_coverage_and_no_llm():
rows = [
{
"symbol": "AAPL",
"coverage_pct": 100.0,
"llm_count": 2,
},
{
"symbol": "MSFT",
"coverage_pct": 80.0,
"llm_count": 1,
},
{
"symbol": "NVDA",
"coverage_pct": 100.0,
"llm_count": 0,
},
]
filtered = cli._filter_problematic_report_rows(rows)
assert [row["symbol"] for row in filtered] == ["MSFT", "NVDA"]
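The expectations in the test above pin down the filter: keep rows with incomplete news coverage or zero LLM-analyzed items. A sketch inferred from those expectations (not the real `backend.cli` implementation):

```python
def filter_problematic(rows):
    # Keep symbols whose enrichment looks incomplete: coverage below 100%
    # or no LLM-analyzed news at all.
    return [
        r for r in rows
        if r.get("coverage_pct", 0.0) < 100.0 or r.get("llm_count", 0) == 0
    ]

rows = [
    {"symbol": "AAPL", "coverage_pct": 100.0, "llm_count": 2},
    {"symbol": "MSFT", "coverage_pct": 80.0, "llm_count": 1},
    {"symbol": "NVDA", "coverage_pct": 100.0, "llm_count": 0},
]
assert [r["symbol"] for r in filter_problematic(rows)] == ["MSFT", "NVDA"]
```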


@@ -0,0 +1,284 @@
# -*- coding: utf-8 -*-
"""Integration tests for EvoAgent system.
These tests verify the integration between:
- UnifiedAgentFactory
- EvoAgent
- ToolGuardMixin
- Workspace-driven configuration
"""
from unittest.mock import MagicMock
class TestUnifiedAgentFactoryIntegration:
"""Test UnifiedAgentFactory creates agents correctly."""
def test_factory_creates_analyst_with_workspace_config(self, tmp_path):
"""Test that factory creates EvoAgent with workspace config."""
from backend.agents.unified_factory import UnifiedAgentFactory
from backend.agents.base.evo_agent import EvoAgent
# Setup mock skills manager
class MockSkillsManager:
def get_agent_asset_dir(self, config_name, agent_id):
path = tmp_path / "runs" / config_name / "agents" / agent_id
path.mkdir(parents=True, exist_ok=True)
return path
def get_agent_active_root(self, config_name, agent_id):
path = tmp_path / "runs" / config_name / "agents" / agent_id / "skills" / "active"
path.mkdir(parents=True, exist_ok=True)
return path
def list_active_skill_metadata(self, config_name, agent_id):
return []
# Create workspace config
workspace_dir = tmp_path / "runs" / "test_config" / "agents" / "fundamentals_analyst"
workspace_dir.mkdir(parents=True, exist_ok=True)
(workspace_dir / "agent.yaml").write_text(
"prompt_files:\n - SOUL.md\n - CUSTOM.md\n",
encoding="utf-8",
)
(workspace_dir / "SOUL.md").write_text("System prompt content", encoding="utf-8")
(workspace_dir / "CUSTOM.md").write_text("Custom instructions", encoding="utf-8")
factory = UnifiedAgentFactory(
config_name="test_config",
skills_manager=MockSkillsManager(),
)
# Verify factory creates EvoAgent
agent = factory.create_analyst(
analyst_type="fundamentals_analyst",
model=MagicMock(),
formatter=MagicMock(),
)
assert isinstance(agent, EvoAgent)
assert agent.agent_id == "fundamentals_analyst"
assert agent.config_name == "test_config"
def test_factory_creates_risk_manager(self, tmp_path):
"""Test that factory creates risk manager EvoAgent."""
from backend.agents.unified_factory import UnifiedAgentFactory
from backend.agents.base.evo_agent import EvoAgent
class MockSkillsManager:
def get_agent_asset_dir(self, config_name, agent_id):
path = tmp_path / "runs" / config_name / "agents" / agent_id
path.mkdir(parents=True, exist_ok=True)
return path
def get_agent_active_root(self, config_name, agent_id):
path = tmp_path / "runs" / config_name / "agents" / agent_id / "skills" / "active"
path.mkdir(parents=True, exist_ok=True)
return path
def list_active_skill_metadata(self, config_name, agent_id):
return []
factory = UnifiedAgentFactory(
config_name="test_config",
skills_manager=MockSkillsManager(),
)
from unittest.mock import MagicMock
agent = factory.create_risk_manager(
model=MagicMock(),
formatter=MagicMock(),
)
assert isinstance(agent, EvoAgent)
assert agent.agent_id == "risk_manager"
def test_factory_creates_portfolio_manager(self, tmp_path):
"""Test that factory creates portfolio manager EvoAgent with financial params."""
from backend.agents.unified_factory import UnifiedAgentFactory
from backend.agents.base.evo_agent import EvoAgent
class MockSkillsManager:
def get_agent_asset_dir(self, config_name, agent_id):
path = tmp_path / "runs" / config_name / "agents" / agent_id
path.mkdir(parents=True, exist_ok=True)
return path
def get_agent_active_root(self, config_name, agent_id):
path = tmp_path / "runs" / config_name / "agents" / agent_id / "skills" / "active"
path.mkdir(parents=True, exist_ok=True)
return path
def list_active_skill_metadata(self, config_name, agent_id):
return []
factory = UnifiedAgentFactory(
config_name="test_config",
skills_manager=MockSkillsManager(),
)
from unittest.mock import MagicMock
agent = factory.create_portfolio_manager(
model=MagicMock(),
formatter=MagicMock(),
initial_cash=50000.0,
margin_requirement=0.3,
)
assert isinstance(agent, EvoAgent)
assert agent.agent_id == "portfolio_manager"
class TestToolGuardIntegration:
"""Test ToolGuardMixin integration with EvoAgent."""
def test_tool_guard_intercepts_guarded_tools(self):
"""Test that ToolGuard intercepts tools requiring approval."""
from backend.agents.base.tool_guard import ToolGuardMixin
class TestAgent(ToolGuardMixin):
def __init__(self):
self._init_tool_guard()
self.agent_id = "test_agent"
self.workspace_id = "test_workspace"
self.session_id = "test_session"
agent = TestAgent()
# Verify place_order is in guarded tools
assert agent._is_tool_guarded("place_order") is True
assert agent._is_tool_denied("execute_shell_command") is True
def test_tool_guard_approval_flow(self):
"""Test the full approval flow for a guarded tool."""
from backend.agents.base.tool_guard import (
ToolGuardStore,
ApprovalStatus,
)
store = ToolGuardStore()
# Create a pending approval record
record = store.create_pending(
tool_name="place_order",
tool_input={"ticker": "AAPL", "quantity": 100},
agent_id="test_agent",
workspace_id="test_workspace",
)
assert record.status == ApprovalStatus.PENDING
assert record.tool_name == "place_order"
# Approve the request with resolved_by
updated = store.set_status(record.approval_id, ApprovalStatus.APPROVED, resolved_by="test_user")
assert updated.status == ApprovalStatus.APPROVED
assert updated.resolved_by == "test_user"
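The approval flow above (create pending, then resolve with `resolved_by`) can be mimicked with a minimal in-memory store. This is an assumed shape mirroring the test, not the real `ToolGuardStore`:

```python
import itertools
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ApprovalStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class ApprovalRecord:
    approval_id: int
    tool_name: str
    tool_input: dict
    status: ApprovalStatus = ApprovalStatus.PENDING
    resolved_by: Optional[str] = None

class MiniGuardStore:
    # In-memory approval store: create pending records, resolve them later.
    def __init__(self):
        self._records = {}
        self._ids = itertools.count(1)

    def create_pending(self, tool_name, tool_input):
        rec = ApprovalRecord(next(self._ids), tool_name, tool_input)
        self._records[rec.approval_id] = rec
        return rec

    def set_status(self, approval_id, status, resolved_by=None):
        rec = self._records[approval_id]
        rec.status = status
        rec.resolved_by = resolved_by
        return rec

store = MiniGuardStore()
rec = store.create_pending("place_order", {"ticker": "AAPL", "quantity": 100})
assert rec.status is ApprovalStatus.PENDING
done = store.set_status(rec.approval_id, ApprovalStatus.APPROVED, resolved_by="test_user")
assert done.status is ApprovalStatus.APPROVED and done.resolved_by == "test_user"
```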
def test_tool_guard_default_lists(self):
"""Test default guarded and denied tool lists."""
from backend.agents.base.tool_guard import (
DEFAULT_GUARDED_TOOLS,
DEFAULT_DENIED_TOOLS,
)
# Critical tools should be guarded
assert "place_order" in DEFAULT_GUARDED_TOOLS
assert "modify_position" in DEFAULT_GUARDED_TOOLS
assert "write_file" in DEFAULT_GUARDED_TOOLS
assert "edit_file" in DEFAULT_GUARDED_TOOLS
# Dangerous tools should be denied
assert "execute_shell_command" in DEFAULT_DENIED_TOOLS
class TestEvoAgentWorkspaceIntegration:
"""Test EvoAgent workspace-driven configuration."""
def test_evo_agent_loads_prompt_files_from_workspace(self, tmp_path, monkeypatch):
"""Test that EvoAgent loads prompt files from workspace directory."""
from backend.agents.base.evo_agent import EvoAgent
workspace_dir = tmp_path / "runs" / "demo" / "agents" / "test_analyst"
workspace_dir.mkdir(parents=True, exist_ok=True)
# Create prompt files
(workspace_dir / "SOUL.md").write_text(
"You are a test analyst.", encoding="utf-8"
)
(workspace_dir / "INSTRUCTIONS.md").write_text(
"Additional instructions.", encoding="utf-8"
)
class MockToolkit:
def __init__(self, *args, **kwargs):
pass
def register_agent_skill(self, path):
pass
monkeypatch.setattr(
"backend.agents.base.evo_agent.Toolkit",
MockToolkit,
)
class MockSkillsManager:
def get_agent_active_root(self, config_name, agent_id):
return workspace_dir / "skills" / "active"
def list_active_skill_metadata(self, config_name, agent_id):
return []
agent = EvoAgent(
agent_id="test_analyst",
config_name="demo",
workspace_dir=workspace_dir,
model=MagicMock(),
formatter=MagicMock(),
skills_manager=MockSkillsManager(),
prompt_files=["SOUL.md", "INSTRUCTIONS.md"],
)
# Verify prompts are loaded into system prompt
assert "You are a test analyst." in agent._sys_prompt
assert "Additional instructions." in agent._sys_prompt
class TestFactoryCaching:
"""Test UnifiedAgentFactory caching behavior."""
def test_factory_cache_per_config(self, monkeypatch):
"""Test that factory is cached per config name."""
from backend.agents.unified_factory import (
get_agent_factory,
clear_factory_cache,
)
# Clear any existing cache
clear_factory_cache()
mock_skills_manager = MagicMock()
factory1 = get_agent_factory("config_a", mock_skills_manager)
factory2 = get_agent_factory("config_a", mock_skills_manager)
factory3 = get_agent_factory("config_b", mock_skills_manager)
# Same config should return same instance
assert factory1 is factory2
# Different config should return different instance
assert factory1 is not factory3
def test_clear_factory_cache(self):
"""Test that clear_factory_cache removes all cached factories."""
from backend.agents.unified_factory import (
get_agent_factory,
clear_factory_cache,
)
mock_skills_manager = MagicMock()
factory1 = get_agent_factory("config_c", mock_skills_manager)
clear_factory_cache()
factory2 = get_agent_factory("config_c", mock_skills_manager)
# After clearing cache, should be new instance
assert factory1 is not factory2
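The caching behavior these two tests describe reduces to a per-key memo with an explicit clear. A minimal sketch of the same contract (stand-in objects, not the real `UnifiedAgentFactory`):

```python
_cache = {}

def get_factory(config_name: str):
    # One cached instance per config name.
    if config_name not in _cache:
        _cache[config_name] = object()  # stand-in for UnifiedAgentFactory
    return _cache[config_name]

def clear_cache():
    _cache.clear()

a1, a2, b = get_factory("config_a"), get_factory("config_a"), get_factory("config_b")
assert a1 is a2          # same config -> same instance
assert a1 is not b       # different config -> different instance
clear_cache()
assert get_factory("config_a") is not a1  # cleared cache -> new instance
```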


@@ -0,0 +1,364 @@
# -*- coding: utf-8 -*-
"""Tests for selective EvoAgent construction."""
from pathlib import Path
def test_main_resolve_evo_agent_ids_filters_unsupported_roles(monkeypatch):
from backend.core import pipeline_runner as runner_module
monkeypatch.setenv(
"EVO_AGENT_IDS",
"fundamentals_analyst,portfolio_manager,unknown,technical_analyst",
)
resolved = runner_module._resolve_evo_agent_ids()
assert resolved == {"fundamentals_analyst", "portfolio_manager", "technical_analyst"}
def test_pipeline_runner_resolve_evo_agent_ids_keeps_supported_roles(monkeypatch):
from backend.core import pipeline_runner as runner_module
monkeypatch.setenv("EVO_AGENT_IDS", "risk_manager,valuation_analyst")
resolved = runner_module._resolve_evo_agent_ids()
assert resolved == {"risk_manager", "valuation_analyst"}
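The two tests above imply that `EVO_AGENT_IDS` is split on commas and intersected with a supported-role set. A sketch of that resolution; the role set here is assumed from the ids exercised in these tests:

```python
import os

SUPPORTED_ROLES = {  # assumed set, based on ids used in these tests
    "fundamentals_analyst", "technical_analyst", "sentiment_analyst",
    "valuation_analyst", "risk_manager", "portfolio_manager",
}

def resolve_evo_agent_ids() -> set:
    # Split the env var on commas, trim whitespace, drop unsupported roles.
    raw = os.environ.get("EVO_AGENT_IDS", "")
    requested = {part.strip() for part in raw.split(",") if part.strip()}
    return requested & SUPPORTED_ROLES

os.environ["EVO_AGENT_IDS"] = "fundamentals_analyst,portfolio_manager,unknown"
assert resolve_evo_agent_ids() == {"fundamentals_analyst", "portfolio_manager"}
```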
def test_main_create_analyst_agent_can_build_evo_agent(monkeypatch, tmp_path):
from backend.core import pipeline_runner as runner_module
created = {}
class DummySkillsManager:
def get_agent_asset_dir(self, config_name, agent_id):
path = tmp_path / "runs" / config_name / "agents" / agent_id
path.mkdir(parents=True, exist_ok=True)
(path / "agent.yaml").write_text(
"prompt_files:\n - SOUL.md\n",
encoding="utf-8",
)
return path
class DummyEvoAgent:
def __init__(self, **kwargs):
created.update(kwargs)
self.toolkit = None
monkeypatch.setenv("EVO_AGENT_IDS", "fundamentals_analyst")
monkeypatch.setattr(runner_module, "EvoAgent", DummyEvoAgent)
monkeypatch.setattr(runner_module, "create_agent_toolkit", lambda *args, **kwargs: "toolkit")
agent = runner_module._create_analyst_agent(
analyst_type="fundamentals_analyst",
run_id="demo",
model="model",
formatter="formatter",
skills_manager=DummySkillsManager(),
active_skill_map={"fundamentals_analyst": [Path("/tmp/skill")]},
long_term_memory=None,
)
assert isinstance(agent, DummyEvoAgent)
assert created["agent_id"] == "fundamentals_analyst"
assert created["config_name"] == "demo"
assert created["prompt_files"] == ["SOUL.md"]
assert agent.toolkit == "toolkit"
assert agent.workspace_id == "demo"
def test_main_create_risk_manager_can_build_evo_agent(monkeypatch, tmp_path):
from backend.core import pipeline_runner as runner_module
created = {}
class DummySkillsManager:
def get_agent_asset_dir(self, config_name, agent_id):
path = tmp_path / "runs" / config_name / "agents" / agent_id
path.mkdir(parents=True, exist_ok=True)
(path / "agent.yaml").write_text(
"prompt_files:\n - SOUL.md\n",
encoding="utf-8",
)
return path
class DummyEvoAgent:
def __init__(self, **kwargs):
created.update(kwargs)
self.toolkit = None
monkeypatch.setenv("EVO_AGENT_IDS", "risk_manager")
monkeypatch.setattr(runner_module, "EvoAgent", DummyEvoAgent)
monkeypatch.setattr(runner_module, "create_agent_toolkit", lambda *args, **kwargs: "risk-toolkit")
agent = runner_module._create_risk_manager_agent(
run_id="demo",
model="model",
formatter="formatter",
skills_manager=DummySkillsManager(),
active_skill_map={"risk_manager": [Path("/tmp/skill")]},
long_term_memory=None,
)
assert isinstance(agent, DummyEvoAgent)
assert created["agent_id"] == "risk_manager"
assert created["config_name"] == "demo"
assert created["prompt_files"] == ["SOUL.md"]
assert agent.toolkit == "risk-toolkit"
assert agent.workspace_id == "demo"
def test_main_create_portfolio_manager_can_build_evo_agent(monkeypatch, tmp_path):
from backend.core import pipeline_runner as runner_module
created = {}
class DummySkillsManager:
def get_agent_asset_dir(self, config_name, agent_id):
path = tmp_path / "runs" / config_name / "agents" / agent_id
path.mkdir(parents=True, exist_ok=True)
(path / "agent.yaml").write_text(
"prompt_files:\n - SOUL.md\n",
encoding="utf-8",
)
return path
class DummyEvoAgent:
def __init__(self, **kwargs):
created.update(kwargs)
self.toolkit = None
monkeypatch.setenv("EVO_AGENT_IDS", "portfolio_manager")
monkeypatch.setattr(runner_module, "EvoAgent", DummyEvoAgent)
monkeypatch.setattr(
runner_module,
"create_agent_toolkit",
lambda *args, **kwargs: "pm-toolkit",
)
agent = runner_module._create_portfolio_manager_agent(
run_id="demo",
model="model",
formatter="formatter",
initial_cash=12345.0,
margin_requirement=0.4,
skills_manager=DummySkillsManager(),
active_skill_map={"portfolio_manager": [Path("/tmp/skill")]},
long_term_memory=None,
)
assert isinstance(agent, DummyEvoAgent)
assert created["agent_id"] == "portfolio_manager"
assert created["config_name"] == "demo"
assert created["prompt_files"] == ["SOUL.md"]
assert created["initial_cash"] == 12345.0
assert created["margin_requirement"] == 0.4
assert agent.toolkit == "pm-toolkit"
assert agent.workspace_id == "demo"
def test_evo_agent_reload_runtime_assets_refreshes_prompt_files(monkeypatch, tmp_path):
from backend.agents.base.evo_agent import EvoAgent
workspace_dir = tmp_path / "runs" / "demo" / "agents" / "fundamentals_analyst"
workspace_dir.mkdir(parents=True, exist_ok=True)
(workspace_dir / "SOUL.md").write_text("soul-v1", encoding="utf-8")
(workspace_dir / "MEMORY.md").write_text("memory-v1", encoding="utf-8")
(workspace_dir / "agent.yaml").write_text(
"prompt_files:\n"
" - SOUL.md\n",
encoding="utf-8",
)
class DummyToolkit:
def __init__(self, *args, **kwargs):
self.registered = []
def register_agent_skill(self, path):
self.registered.append(path)
monkeypatch.setattr(
"backend.agents.base.evo_agent.Toolkit",
DummyToolkit,
)
class DummyModel:
pass
class DummyFormatter:
pass
agent = EvoAgent(
agent_id="fundamentals_analyst",
config_name="demo",
workspace_dir=workspace_dir,
model=DummyModel(),
formatter=DummyFormatter(),
prompt_files=["SOUL.md"],
skills_manager=type(
"SkillsManagerStub",
(),
{
"get_agent_active_root": staticmethod(lambda config_name, agent_id: workspace_dir / "skills" / "active"),
"list_active_skill_metadata": staticmethod(lambda config_name, agent_id: []),
},
)(),
)
assert "soul-v1" in agent._sys_prompt
assert "memory-v1" not in agent._sys_prompt
(workspace_dir / "agent.yaml").write_text(
"prompt_files:\n"
" - SOUL.md\n"
" - MEMORY.md\n",
encoding="utf-8",
)
agent.reload_runtime_assets(active_skill_dirs=[])
assert "memory-v1" in agent._sys_prompt
assert agent.workspace_id == "demo"
assert agent.config == {"config_name": "demo"}
def test_pipeline_resolve_evo_agent_ids_filters_unsupported_roles(monkeypatch):
"""Test that pipeline._resolve_evo_agent_ids filters unsupported roles."""
from backend.core import pipeline as pipeline_module
monkeypatch.setenv(
"EVO_AGENT_IDS",
"fundamentals_analyst,portfolio_manager,unknown,technical_analyst",
)
resolved = pipeline_module._resolve_evo_agent_ids()
assert resolved == {"fundamentals_analyst", "portfolio_manager", "technical_analyst"}
def test_pipeline_create_runtime_analyst_uses_evo_agent_when_enabled(monkeypatch, tmp_path):
"""Test that _create_runtime_analyst creates EvoAgent when in EVO_AGENT_IDS."""
from backend.core import pipeline as pipeline_module
created = {}
class DummyEvoAgent:
name = "test_analyst"
def __init__(self, **kwargs):
created.update(kwargs)
self.toolkit = None
class DummyAnalystAgent:
name = "test_analyst"
def __init__(self, **kwargs):
created.update(kwargs)
self.toolkit = None
monkeypatch.setenv("EVO_AGENT_IDS", "fundamentals_analyst")
monkeypatch.setattr(pipeline_module, "EvoAgent", DummyEvoAgent)
monkeypatch.setattr(pipeline_module, "AnalystAgent", DummyAnalystAgent)
monkeypatch.setattr(
pipeline_module,
"create_agent_toolkit",
lambda *args, **kwargs: "toolkit",
)
monkeypatch.setattr(
pipeline_module,
"get_agent_model",
lambda x: "model",
)
monkeypatch.setattr(
pipeline_module,
"get_agent_formatter",
lambda x: "formatter",
)
# Create a mock pipeline instance
class MockPM:
def __init__(self):
self.config = {"config_name": "demo"}
pipeline = pipeline_module.TradingPipeline(
analysts=[],
risk_manager=None,
portfolio_manager=MockPM(),
)
# Mock workspace_manager methods
monkeypatch.setattr(
pipeline_module.WorkspaceManager,
"ensure_agent_assets",
lambda *args, **kwargs: None,
)
result = pipeline._create_runtime_analyst("test_analyst", "fundamentals_analyst")
assert "Created runtime analyst" in result
assert created.get("agent_id") == "test_analyst"
assert created.get("config_name") == "demo"
def test_main_resolve_evo_agent_ids_returns_all_by_default(monkeypatch):
"""Test that _resolve_evo_agent_ids returns all supported roles by default."""
from backend.core import pipeline_runner as runner_module
from backend.config.constants import ANALYST_TYPES
# Unset EVO_AGENT_IDS to test default behavior
monkeypatch.delenv("EVO_AGENT_IDS", raising=False)
resolved = runner_module._resolve_evo_agent_ids()
expected = set(ANALYST_TYPES) | {"risk_manager", "portfolio_manager"}
assert resolved == expected
def test_evo_agent_supports_long_term_memory(monkeypatch, tmp_path):
"""Test that EvoAgent can be created with long_term_memory."""
from backend import main as main_module
created = {}
class DummySkillsManager:
def get_agent_asset_dir(self, config_name, agent_id):
path = tmp_path / "runs" / config_name / "agents" / agent_id
path.mkdir(parents=True, exist_ok=True)
(path / "agent.yaml").write_text(
"prompt_files:\n - SOUL.md\n",
encoding="utf-8",
)
return path
class DummyEvoAgent:
def __init__(self, **kwargs):
created.update(kwargs)
self.toolkit = None
# Default: all roles use EvoAgent
monkeypatch.delenv("EVO_AGENT_IDS", raising=False)
monkeypatch.setattr(main_module, "EvoAgent", DummyEvoAgent)
monkeypatch.setattr(main_module, "create_agent_toolkit", lambda *args, **kwargs: "toolkit")
# Create with long_term_memory - should still use EvoAgent
dummy_memory = {"type": "reme"}
agent = main_module._create_analyst_agent(
analyst_type="fundamentals_analyst",
config_name="demo",
model="model",
formatter="formatter",
skills_manager=DummySkillsManager(),
active_skill_map={"fundamentals_analyst": []},
long_term_memory=dummy_memory,
)
assert isinstance(agent, DummyEvoAgent)
assert created["agent_id"] == "fundamentals_analyst"
assert created["long_term_memory"] is dummy_memory
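The resolution behavior these tests pin down (an unset `EVO_AGENT_IDS` means every supported role, and unknown names are filtered out) can be sketched in isolation. `SUPPORTED_ROLES` below is a hypothetical stand-in for `ANALYST_TYPES` plus the two manager roles, and `resolve_evo_agent_ids` is an illustrative reimplementation, not the production function:

```python
import os

# Hypothetical supported-role set; the real list lives in backend.config.constants.
SUPPORTED_ROLES = {
    "fundamentals_analyst",
    "technical_analyst",
    "risk_manager",
    "portfolio_manager",
}

def resolve_evo_agent_ids(env=None):
    """Unset EVO_AGENT_IDS -> all supported roles; otherwise filter the CSV."""
    env = os.environ if env is None else env
    raw = env.get("EVO_AGENT_IDS")
    if raw is None:
        return set(SUPPORTED_ROLES)
    requested = {part.strip() for part in raw.split(",") if part.strip()}
    return requested & SUPPORTED_ROLES

print(resolve_evo_agent_ids({}))  # all four supported roles
print(resolve_evo_agent_ids({"EVO_AGENT_IDS": "fundamentals_analyst,unknown"}))
# {'fundamentals_analyst'}
```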

View File

@@ -5,6 +5,7 @@ from types import SimpleNamespace
import pytest
from backend.core.state_sync import StateSync
from backend.services import gateway_cycle_support, gateway_runtime_support
@@ -43,6 +44,12 @@ class _DummyStorage:
self.initial_cash = 100000.0
self.is_live_session_active = False
self.server_state_updates = []
self.max_feed_history = 200
self.runtime_db = SimpleNamespace(
get_recent_feed_events=lambda limit=200: [],
get_last_day_feed_events=lambda current_date=None, limit=200: [],
)
self._persisted_server_state = {}
def can_apply_initial_cash(self):
return True
@@ -54,6 +61,9 @@ class _DummyStorage:
def update_server_state_from_dashboard(self, state):
self.server_state_updates.append(state)
def read_persisted_server_state(self):
return dict(self._persisted_server_state)
def load_file(self, name):
if name == "summary":
return {"totalAssetValue": self.initial_cash}
@@ -149,11 +159,11 @@ def test_apply_runtime_config_updates_gateway_state():
)
assert gateway.config["tickers"] == ["MSFT", "NVDA"]
assert gateway.config["schedule_mode"] == "interval"
assert gateway.storage.initial_cash == 150000.0
assert result["runtime_config_applied"]["max_comm_cycles"] == 4
assert gateway.scheduler.calls[-1] == {
"mode": "interval",
"trigger_time": "10:30",
"interval_minutes": 30,
}
@@ -199,3 +209,70 @@ async def test_refresh_market_store_for_watchlist_emits_system_messages(monkeypa
assert gateway.state_sync.system_messages[0] == "正在同步自选股市场数据: AAPL, MSFT"
assert "自选股市场数据已同步:" in gateway.state_sync.system_messages[1]
def test_initial_state_payload_prefers_dashboard_snapshot_for_top_level_views():
storage = _DummyStorage()
sync = StateSync(storage=storage)
sync._state = {
"holdings": [],
"trades": [],
"stats": {},
"leaderboard": [],
"portfolio": {"total_value": 100000.0},
}
payload = sync.get_initial_state_payload(include_dashboard=True)
assert payload["holdings"] == []
assert payload["trades"] == []
assert payload["stats"] == {}
assert payload["leaderboard"] == []
assert payload["dashboard"]["summary"]["totalAssetValue"] == 100000.0
def test_initial_state_payload_uses_dashboard_snapshot_for_sparse_runtime_state():
class SnapshotStorage(_DummyStorage):
def build_dashboard_snapshot_from_state(self, state):
return {
"summary": {"totalAssetValue": 123456.0},
"holdings": [{"ticker": "AAPL"}],
"stats": {"totalTrades": 3},
"trades": [{"ticker": "AAPL"}],
"leaderboard": [{"agentId": "technical_analyst"}],
}
sync = StateSync(storage=SnapshotStorage())
sync._state = {
"holdings": [],
"trades": [],
"stats": {},
"leaderboard": [],
}
payload = sync.get_initial_state_payload(include_dashboard=True)
assert payload["holdings"][0]["ticker"] == "AAPL"
assert payload["trades"][0]["ticker"] == "AAPL"
assert payload["stats"]["totalTrades"] == 3
assert payload["leaderboard"][0]["agentId"] == "technical_analyst"
def test_initial_state_payload_falls_back_to_persisted_portfolio():
storage = _DummyStorage()
storage._persisted_server_state = {
"portfolio": {
"total_value": 123456.0,
"pnl_percent": 12.34,
"equity": [{"t": 1, "v": 123456.0}],
}
}
sync = StateSync(storage=storage)
sync._state = {
"portfolio": {},
}
payload = sync.get_initial_state_payload(include_dashboard=True)
assert payload["portfolio"]["total_value"] == 123456.0
assert payload["portfolio"]["pnl_percent"] == 12.34
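The fallback behavior `test_initial_state_payload_falls_back_to_persisted_portfolio` exercises can be sketched standalone. `resolve_portfolio` below is a hypothetical helper, not the actual `StateSync` implementation: it only illustrates the rule the test pins down, namely that an empty runtime portfolio should not mask a persisted one.

```python
def resolve_portfolio(runtime_state, persisted_server_state):
    """Prefer the live runtime portfolio; fall back to persisted server state."""
    live = runtime_state.get("portfolio") or {}
    if live.get("total_value") is not None:
        return live
    return persisted_server_state.get("portfolio") or {}

# A sparse runtime state falls back to what was persisted on disk.
persisted = {"portfolio": {"total_value": 123456.0, "pnl_percent": 12.34}}
print(resolve_portfolio({"portfolio": {}}, persisted))
# {'total_value': 123456.0, 'pnl_percent': 12.34}
```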

View File

@@ -1,6 +1,5 @@
# -*- coding: utf-8 -*-
# pylint: disable=W0212
import time
import logging
from unittest.mock import MagicMock, AsyncMock, patch
@@ -115,7 +114,7 @@ class TestPollingPriceManager:
{"c": 100.0, "o": 99.0, "h": 101.0, "l": 98.0, "pc": 99.5, "d": 0.5, "dp": 0.5, "t": 1},
],
):
with caplog.at_level(logging.INFO, logger="backend.data.polling_price_manager"):
manager._fetch_prices()
manager._fetch_prices()

View File

@@ -0,0 +1,224 @@
# -*- coding: utf-8 -*-
"""Guardrails around partially migrated agent-loading paths."""
import asyncio
import json
from pathlib import Path
from fastapi.testclient import TestClient
from backend.agents.base.tool_guard import TOOL_GUARD_STORE, ToolApprovalRequest
from backend.apps.agent_service import create_app
from backend.core.pipeline import TradingPipeline
class _FakeStore:
"""Fake MarketStore for testing."""
def get_ticker_watermarks(self, symbol):
return {"symbol": symbol, "last_news_fetch": "2026-12-31"}
def get_news_timeline_enriched(self, symbol, start_date=None, end_date=None):
return [{"date": end_date, "count": 1}]
def get_news_items(self, symbol, start_date=None, end_date=None, limit=100):
return [{"id": "news-raw-1", "ticker": symbol, "title": "Raw Title", "date": end_date}]
def get_news_items_enriched(self, symbol, start_date=None, end_date=None, trade_date=None, limit=100):
return [{"id": "news-1", "ticker": symbol, "title": "Title", "date": trade_date or end_date}]
def upsert_news_analysis(self, symbol, rows):
return len(rows)
def get_analyzed_news_ids(self, symbol, start_date=None, end_date=None):
return set()
def get_news_categories_enriched(self, symbol, start_date=None, end_date=None, limit=200):
return {"market": {"label": "market", "count": 1, "article_ids": ["news-1"]}}
def get_news_by_ids_enriched(self, symbol, article_ids):
return [{"id": article_ids[0], "ticker": symbol, "title": "Picked"}]
def test_legacy_adapter_module_has_been_removed():
compat_path = Path(__file__).resolve().parents[1] / "agents" / "compat.py"
assert compat_path.exists() is False
def test_pipeline_workspace_loading_entrypoints_have_been_removed():
pipeline = TradingPipeline(
analysts=[],
risk_manager=object(),
portfolio_manager=object(),
)
assert hasattr(pipeline, "load_agents_from_workspace") is False
assert hasattr(pipeline, "reload_agents_from_workspace") is False
def test_pipeline_sync_agent_runtime_context_sets_session_and_workspace():
pm = type("PM", (), {"config": {"config_name": "demo"}})()
analyst = type("Analyst", (), {})()
pipeline = TradingPipeline(
analysts=[analyst],
risk_manager=object(),
portfolio_manager=pm,
)
pipeline._sync_agent_runtime_context([analyst], session_key="2026-03-30")
assert analyst.session_id == "2026-03-30"
assert analyst.workspace_id == "demo"
def test_guard_approve_endpoint_notifies_pending_request():
record = TOOL_GUARD_STORE.create_pending(
tool_name="write_file",
tool_input={"path": "demo.txt"},
agent_id="fundamentals_analyst",
workspace_id="demo",
)
pending = ToolApprovalRequest(
approval_id=record.approval_id,
tool_name=record.tool_name,
tool_input=record.tool_input,
tool_call_id="call_1",
session_id=None,
)
record.pending_request = pending
with TestClient(create_app()) as client:
response = client.post(
"/api/guard/approve",
json={"approval_id": record.approval_id, "one_time": True, "expires_in_minutes": 30},
)
assert response.status_code == 200
assert response.json()["run_id"] == "demo"
assert response.json()["workspace_id"] == "demo"
assert response.json()["scope_type"] == "runtime_run"
assert pending.approved is True
assert asyncio.run(pending.wait_for_approval(timeout=0.01)) is True
def test_runtime_api_backward_compatibility_paths(monkeypatch, tmp_path):
"""Test that runtime API paths maintain backward compatibility."""
from backend.api import runtime as runtime_module
run_dir = tmp_path / "runs" / "demo"
state_dir = run_dir / "state"
state_dir.mkdir(parents=True)
(state_dir / "runtime_state.json").write_text(
json.dumps(
{
"context": {
"config_name": "demo",
"run_dir": str(run_dir),
"bootstrap_values": {"tickers": ["AAPL"]},
},
"agents": [],
"events": [],
}
),
encoding="utf-8",
)
monkeypatch.setattr(runtime_module, "PROJECT_ROOT", tmp_path)
monkeypatch.setattr(runtime_module, "_is_gateway_running", lambda: True)
runtime_module.get_runtime_state().gateway_port = 8765
from backend.apps.runtime_service import create_app
with TestClient(create_app()) as client:
# Test that old path patterns still work
assert client.get("/api/runtime/config").status_code == 200
assert client.get("/api/runtime/agents").status_code == 200
assert client.get("/api/runtime/events").status_code == 200
assert client.get("/api/runtime/history").status_code == 200
assert client.get("/api/runtime/context").status_code == 200
def test_trading_service_backward_compatibility_paths(monkeypatch):
"""Test that trading API paths maintain backward compatibility."""
from backend.apps.trading_service import create_app
monkeypatch.setattr(
"backend.domains.trading.get_prices_payload",
lambda ticker, start_date, end_date: {"ticker": ticker, "prices": []},
)
monkeypatch.setattr(
"backend.domains.trading.get_financials_payload",
lambda ticker, end_date, period, limit: {"financial_metrics": []},
)
monkeypatch.setattr(
"backend.domains.trading.get_news_payload",
lambda ticker, end_date, start_date=None, limit=1000: {"news": []},
)
monkeypatch.setattr(
"backend.domains.trading.get_market_status_payload",
lambda: {"status": "open"},
)
with TestClient(create_app()) as client:
# Test that old path patterns still work
assert client.get("/api/prices?ticker=AAPL&start_date=2026-01-01&end_date=2026-03-01").status_code == 200
assert client.get("/api/financials?ticker=AAPL&end_date=2026-03-01").status_code == 200
assert client.get("/api/news?ticker=AAPL&end_date=2026-03-01").status_code == 200
assert client.get("/api/market/status").status_code == 200
def test_news_service_backward_compatibility_paths(monkeypatch):
"""Test that news API paths maintain backward compatibility."""
from backend.apps.news_service import create_app
from backend.apps import news_service as news_service_module
app = create_app()
app.dependency_overrides[news_service_module.get_market_store] = lambda: _FakeStore()
monkeypatch.setattr(
"backend.domains.news.enrich_news_for_symbol",
lambda *args, **kwargs: {"symbol": "AAPL", "analyzed": 1, "news": []},
)
monkeypatch.setattr(
"backend.domains.news.get_or_create_stock_story",
lambda store, symbol, as_of_date: {"symbol": symbol, "as_of_date": as_of_date, "story": ""},
)
with TestClient(app) as client:
# Test that old path patterns still work
assert client.get("/api/enriched-news?ticker=AAPL&end_date=2026-03-01").status_code == 200
assert client.get("/api/stories/AAPL?as_of_date=2026-03-01").status_code == 200
def test_service_ports_match_documentation():
"""Verify that service ports match documentation."""
import backend.apps.agent_service as agent_service
import backend.apps.news_service as news_service
import backend.apps.runtime_service as runtime_service
import backend.apps.trading_service as trading_service
# Ports are documented in README.md and start-dev.sh. The service modules
# don't embed their ports in module paths, so inspect each module's
# __main__ block instead of asserting on __file__.
# Verify the __main__ blocks use correct ports
import ast
import inspect
def get_main_port(module):
source = inspect.getsource(module)
tree = ast.parse(source)
for node in ast.walk(tree):
if isinstance(node, ast.Call):
for kw in node.keywords:
if kw.arg == "port" and isinstance(kw.value, ast.Constant):
return kw.value.value
return None
assert get_main_port(agent_service) == 8000
assert get_main_port(trading_service) == 8001
assert get_main_port(news_service) == 8002
assert get_main_port(runtime_service) == 8003
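The AST walk in `get_main_port` above works on any source string, not just imported modules. Here is the same idea isolated as a runnable sketch; the `uvicorn.run` sample string is illustrative, not taken from the services themselves:

```python
import ast

def get_main_port_from_source(source: str):
    """Return the first integer `port=` keyword found in any call expression."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            for kw in node.keywords:
                if kw.arg == "port" and isinstance(kw.value, ast.Constant):
                    return kw.value.value
    return None

sample = 'import uvicorn\nuvicorn.run("backend.apps.news_service:app", port=8002)\n'
print(get_main_port_from_source(sample))  # 8002
```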

View File

@@ -178,3 +178,84 @@ def test_news_service_range_explain(monkeypatch):
assert response.status_code == 200
assert response.json()["result"]["news_count"] == 1
def test_news_service_contract_stability():
"""Verify news service API maintains contract stability."""
app = create_app()
routes = {route.path: route for route in app.routes if hasattr(route, "methods")}
# Health endpoint
assert "/health" in routes
# News/explain endpoints
assert "/api/enriched-news" in routes
assert "/api/news-for-date" in routes
assert "/api/news-timeline" in routes
assert "/api/categories" in routes
assert "/api/similar-days" in routes
assert "/api/stories/{ticker}" in routes
assert "/api/range-explain" in routes
# Verify all are GET endpoints (read-only service)
for path in ["/api/enriched-news", "/api/news-for-date", "/api/news-timeline",
"/api/categories", "/api/similar-days", "/api/stories/{ticker}",
"/api/range-explain"]:
assert "GET" in routes[path].methods
def test_news_service_enriched_news_contract(monkeypatch):
"""Test enriched news endpoint maintains response contract."""
app = create_app()
app.dependency_overrides.clear()
from backend.apps import news_service as news_service_module
app.dependency_overrides[news_service_module.get_market_store] = lambda: _FakeStore()
monkeypatch.setattr(
"backend.domains.news.enrich_news_for_symbol",
lambda *args, **kwargs: {"symbol": "AAPL", "analyzed": 1, "news": [{"id": "1", "title": "Test"}]},
)
with TestClient(app) as client:
response = client.get(
"/api/enriched-news",
params={"ticker": "AAPL", "end_date": "2026-03-23"},
)
assert response.status_code == 200
payload = response.json()
assert "news" in payload
def test_news_service_stories_contract(monkeypatch):
"""Test stories endpoint maintains response contract."""
app = create_app()
from backend.apps import news_service as news_service_module
app.dependency_overrides[news_service_module.get_market_store] = lambda: _FakeStore()
monkeypatch.setattr(
"backend.domains.news.enrich_news_for_symbol",
lambda *args, **kwargs: {"symbol": "AAPL", "analyzed": 1},
)
monkeypatch.setattr(
"backend.domains.news.get_or_create_stock_story",
lambda store, symbol, as_of_date: {
"symbol": symbol,
"as_of_date": as_of_date,
"story": "story body",
"source": "local",
"headline": "Test Headline",
},
)
with TestClient(app) as client:
response = client.get(
"/api/stories/AAPL",
params={"as_of_date": "2026-03-23"},
)
assert response.status_code == 200
payload = response.json()
assert "symbol" in payload
assert "as_of_date" in payload
assert "story" in payload

View File

@@ -1,60 +0,0 @@
# -*- coding: utf-8 -*-
"""Tests for the OpenClaw CLI service wrapper."""
from pathlib import Path
import pytest
from backend.services.openclaw_cli import OpenClawCliError, OpenClawCliService
class _Completed:
def __init__(self, *, returncode=0, stdout="", stderr=""):
self.returncode = returncode
self.stdout = stdout
self.stderr = stderr
def test_openclaw_cli_service_runs_json_command(monkeypatch, tmp_path):
captured = {}
def _fake_run(command, **kwargs):
captured["command"] = command
captured["cwd"] = kwargs["cwd"]
return _Completed(stdout='{"sessions":[{"key":"main/session-1"}]}')
monkeypatch.setattr("backend.services.openclaw_cli.subprocess.run", _fake_run)
service = OpenClawCliService(base_command=["openclaw"], cwd=tmp_path, timeout_seconds=3)
payload = service.list_sessions()
assert payload["sessions"][0]["key"] == "main/session-1"
assert captured["command"] == ["openclaw", "sessions", "--json"]
assert captured["cwd"] == tmp_path
def test_openclaw_cli_service_raises_on_failure(monkeypatch, tmp_path):
def _fake_run(command, **kwargs):
return _Completed(returncode=7, stdout="", stderr="boom")
monkeypatch.setattr("backend.services.openclaw_cli.subprocess.run", _fake_run)
service = OpenClawCliService(base_command=["openclaw"], cwd=tmp_path, timeout_seconds=3)
with pytest.raises(OpenClawCliError) as exc_info:
service.list_cron_jobs()
assert exc_info.value.exit_code == 7
assert exc_info.value.stderr == "boom"
def test_openclaw_cli_service_can_extract_single_session(monkeypatch, tmp_path):
def _fake_run(command, **kwargs):
return _Completed(stdout='{"sessions":[{"key":"main/session-1","agentId":"main"}]}')
monkeypatch.setattr("backend.services.openclaw_cli.subprocess.run", _fake_run)
service = OpenClawCliService(base_command=["openclaw"], cwd=tmp_path, timeout_seconds=3)
session = service.get_session("main/session-1")
assert session["agentId"] == "main"

View File

@@ -1,110 +0,0 @@
# -*- coding: utf-8 -*-
"""Tests for the extracted OpenClaw service app surface."""
from fastapi.testclient import TestClient
from backend.apps.openclaw_service import create_app
from backend.api import openclaw as openclaw_module
class _FakeOpenClawCliService:
def health(self):
return {
"status": "healthy",
"service": "openclaw-service",
"base_command": ["openclaw"],
"cwd": "/tmp/openclaw",
"binary_resolved": True,
"reference_entry_available": True,
"timeout_seconds": 15.0,
}
def status(self):
return {"runtimeVersion": "2026.3.24"}
def list_sessions(self):
return {
"sessions": [
{"key": "main/session-1", "agentId": "main"},
{"key": "analyst/session-2", "agentId": "analyst"},
]
}
def get_session(self, session_key: str):
for session in self.list_sessions()["sessions"]:
if session["key"] == session_key:
return session
raise KeyError(session_key)
def get_session_history(self, session_key: str, *, limit: int = 20):
return {
"sessionKey": session_key,
"limit": limit,
"items": [{"role": "assistant", "text": "hello"}],
}
def list_cron_jobs(self):
return {"jobs": [{"id": "job-1", "name": "Daily sync"}]}
def list_approvals(self):
return {"approvals": [{"id": "ap-1", "status": "pending"}]}
def test_openclaw_service_routes_are_exposed():
app = create_app()
paths = {route.path for route in app.routes}
assert "/health" in paths
assert "/api/status" in paths
assert "/api/openclaw/status" in paths
assert "/api/openclaw/sessions" in paths
assert "/api/openclaw/sessions/{session_key:path}" in paths
assert "/api/openclaw/sessions/{session_key:path}/history" in paths
assert "/api/openclaw/cron" in paths
assert "/api/openclaw/approvals" in paths
def test_openclaw_service_read_routes():
app = create_app()
app.dependency_overrides[openclaw_module.get_openclaw_cli_service] = (
lambda: _FakeOpenClawCliService()
)
with TestClient(app) as client:
health = client.get("/health")
status = client.get("/api/status")
openclaw_status = client.get("/api/openclaw/status")
sessions = client.get("/api/openclaw/sessions")
session = client.get("/api/openclaw/sessions/main/session-1")
history = client.get("/api/openclaw/sessions/main/session-1/history", params={"limit": 5})
cron = client.get("/api/openclaw/cron")
approvals = client.get("/api/openclaw/approvals")
assert health.status_code == 200
assert health.json()["service"] == "openclaw-service"
assert status.status_code == 200
assert status.json()["status"] == "operational"
assert openclaw_status.status_code == 200
assert openclaw_status.json()["runtimeVersion"] == "2026.3.24"
assert sessions.status_code == 200
assert len(sessions.json()["sessions"]) == 2
assert session.status_code == 200
assert session.json()["session"]["agentId"] == "main"
assert history.status_code == 200
assert history.json()["limit"] == 5
assert cron.status_code == 200
assert cron.json()["jobs"][0]["id"] == "job-1"
assert approvals.status_code == 200
assert approvals.json()["approvals"][0]["id"] == "ap-1"
def test_openclaw_service_session_404():
app = create_app()
app.dependency_overrides[openclaw_module.get_openclaw_cli_service] = (
lambda: _FakeOpenClawCliService()
)
with TestClient(app) as client:
response = client.get("/api/openclaw/sessions/missing")
assert response.status_code == 404

View File

@@ -38,8 +38,13 @@ def test_runtime_service_health_and_status(monkeypatch):
assert health_response.json() == {
"status": "healthy",
"service": "runtime-service",
"gateway": {
"running": False,
"port": 9876,
"pid": None,
"process_status": "not_running",
"returncode": None,
},
}
assert status_response.status_code == 200
assert status_response.json() == {
@@ -48,6 +53,8 @@ def test_runtime_service_health_and_status(monkeypatch):
"runtime": {
"gateway_running": False,
"gateway_port": 9876,
"gateway_pid": None,
"gateway_process_status": "not_running",
"has_runtime_manager": True,
},
}
@@ -79,7 +86,7 @@ def test_runtime_service_get_runtime_config(monkeypatch, tmp_path):
"---\n"
"tickers:\n"
" - AAPL\n"
"schedule_mode: interval\n"
"interval_minutes: 30\n"
"trigger_time: '10:00'\n"
"max_comm_cycles: 3\n"
@@ -95,7 +102,7 @@ def test_runtime_service_get_runtime_config(monkeypatch, tmp_path):
"run_dir": str(run_dir),
"bootstrap_values": {
"tickers": ["AAPL"],
"schedule_mode": "interval",
"interval_minutes": 30,
"trigger_time": "10:00",
"max_comm_cycles": 3,
@@ -116,7 +123,7 @@ def test_runtime_service_get_runtime_config(monkeypatch, tmp_path):
assert response.status_code == 200
payload = response.json()
assert payload["run_id"] == "demo"
assert payload["bootstrap"]["schedule_mode"] == "interval"
assert payload["resolved"]["interval_minutes"] == 30
assert payload["resolved"]["enable_memory"] is True
@@ -183,7 +190,7 @@ def test_runtime_service_update_runtime_config_persists_bootstrap(monkeypatch, t
response = client.put(
"/api/runtime/config",
json={
"schedule_mode": "interval",
"interval_minutes": 15,
"trigger_time": "10:15",
"max_comm_cycles": 4,
@@ -192,7 +199,7 @@ def test_runtime_service_update_runtime_config_persists_bootstrap(monkeypatch, t
assert response.status_code == 200
payload = response.json()
assert payload["bootstrap"]["schedule_mode"] == "interval"
assert payload["resolved"]["interval_minutes"] == 15
assert "interval_minutes: 15" in (run_dir / "BOOTSTRAP.md").read_text(encoding="utf-8")
@@ -242,7 +249,6 @@ def test_runtime_cleanup_endpoint_prunes_old_runs(monkeypatch, tmp_path):
def test_runtime_history_lists_recent_runs(monkeypatch, tmp_path):
run_dir = tmp_path / "runs" / "20260324_120000"
(run_dir / "state").mkdir(parents=True)
(run_dir / "state" / "runtime_state.json").write_text(
json.dumps(
{
@@ -256,8 +262,13 @@ def test_runtime_history_lists_recent_runs(monkeypatch, tmp_path):
),
encoding="utf-8",
)
(run_dir / "state" / "server_state.json").write_text(
json.dumps(
{
"portfolio": {"total_value": 123456.0},
"trades": [{}, {}, {}],
}
),
encoding="utf-8",
)
@@ -270,6 +281,7 @@ def test_runtime_history_lists_recent_runs(monkeypatch, tmp_path):
payload = response.json()
assert payload["runs"][0]["run_id"] == "20260324_120000"
assert payload["runs"][0]["total_trades"] == 3
assert payload["runs"][0]["total_asset_value"] == 123456.0
def test_restore_run_assets_copies_state(monkeypatch, tmp_path):
@@ -278,6 +290,7 @@ def test_restore_run_assets_copies_state(monkeypatch, tmp_path):
(source_run / "state").mkdir(parents=True)
(source_run / "agents").mkdir(parents=True)
(source_run / "team_dashboard" / "_internal_state.json").write_text("{}", encoding="utf-8")
(source_run / "state" / "server_state.json").write_text("{}", encoding="utf-8")
target_run = tmp_path / "runs" / "20260324_130000"
@@ -288,6 +301,239 @@ def test_restore_run_assets_copies_state(monkeypatch, tmp_path):
assert (target_run / "team_dashboard" / "_internal_state.json").exists()
assert (target_run / "state" / "server_state.json").exists()
assert not (target_run / "team_dashboard" / "summary.json").exists()
def test_runtime_service_routes_contract_stability():
"""Verify runtime API routes maintain contract stability."""
app = create_app()
routes = {route.path: route for route in app.routes if hasattr(route, "methods")}
# Core runtime lifecycle endpoints
assert "/api/runtime/start" in routes
assert "/api/runtime/stop" in routes
assert "/api/runtime/restart" in routes
assert "/api/runtime/current" in routes
# Configuration endpoints
assert "/api/runtime/config" in routes
# Query endpoints
assert "/api/runtime/agents" in routes
assert "/api/runtime/events" in routes
assert "/api/runtime/history" in routes
assert "/api/runtime/context" in routes
assert "/api/runtime/logs" in routes
# Gateway endpoints
assert "/api/runtime/gateway/status" in routes
assert "/api/runtime/gateway/port" in routes
# Maintenance endpoints
assert "/api/runtime/cleanup" in routes
def test_runtime_service_start_stop_lifecycle_contract(monkeypatch, tmp_path):
"""Test the start/stop lifecycle maintains expected contract."""
run_dir = tmp_path / "runs" / "test_run"
state_dir = run_dir / "state"
state_dir.mkdir(parents=True)
# Create runtime_state.json so /api/runtime/current can find the context after stop
(state_dir / "runtime_state.json").write_text(
json.dumps(
{
"context": {
"config_name": "test_run",
"run_dir": str(run_dir),
"bootstrap_values": {"tickers": ["AAPL", "MSFT"]},
}
}
),
encoding="utf-8",
)
class _DummyManager:
def __init__(self, config_name, run_dir, bootstrap):
self.config_name = config_name
self.run_dir = Path(run_dir)
self.bootstrap = bootstrap
self.context = None
def prepare_run(self):
self.context = type(
"Ctx",
(),
{
"config_name": self.config_name,
"run_dir": self.run_dir,
"bootstrap_values": self.bootstrap,
},
)()
return self.context
class _DummyProcess:
pid = 12345
def poll(self):
return None
monkeypatch.setattr(runtime_module, "PROJECT_ROOT", tmp_path)
monkeypatch.setattr(runtime_module, "_find_available_port", lambda start_port=8765, max_port=9000: 8765)
monkeypatch.setattr(runtime_module, "_start_gateway_process", lambda **kwargs: _DummyProcess())
monkeypatch.setattr(runtime_module, "_stop_gateway", lambda: True)
monkeypatch.setattr("backend.runtime.manager.TradingRuntimeManager", _DummyManager)
runtime_state = runtime_module.get_runtime_state()
runtime_state.gateway_process = None
with TestClient(create_app()) as client:
# Start runtime
start_response = client.post(
"/api/runtime/start",
json={
"launch_mode": "fresh",
"tickers": ["AAPL", "MSFT"],
"schedule_mode": "daily",
"interval_minutes": 60,
"trigger_time": "09:30",
"max_comm_cycles": 2,
"initial_cash": 100000.0,
"margin_requirement": 0.0,
"enable_memory": False,
"mode": "live",
"poll_interval": 10,
},
)
assert start_response.status_code == 200
start_payload = start_response.json()
assert "run_id" in start_payload
assert "status" in start_payload
assert "run_dir" in start_payload
assert "gateway_port" in start_payload
assert "message" in start_payload
assert start_payload["status"] == "started"
# Get current runtime while running
current_response = client.get("/api/runtime/current")
assert current_response.status_code == 200
current_payload = current_response.json()
assert "run_id" in current_payload
assert "run_dir" in current_payload
assert "is_running" in current_payload
assert "gateway_port" in current_payload
assert "bootstrap" in current_payload
# Stop runtime
stop_response = client.post("/api/runtime/stop?force=true")
assert stop_response.status_code == 200
stop_payload = stop_response.json()
assert "status" in stop_payload
assert "message" in stop_payload
assert stop_payload["status"] == "stopped"
def test_runtime_service_agents_events_contract(monkeypatch, tmp_path):
"""Test agents and events endpoints maintain contract."""
run_dir = tmp_path / "runs" / "demo"
state_dir = run_dir / "state"
state_dir.mkdir(parents=True)
(state_dir / "runtime_state.json").write_text(
json.dumps(
{
"context": {
"config_name": "demo",
"run_dir": str(run_dir),
"bootstrap_values": {"tickers": ["AAPL"]},
},
"agents": [
{
"agent_id": "fundamentals_analyst",
"status": "idle",
"last_session": "2026-03-30",
"last_updated": "2026-03-30T10:00:00",
},
{
"agent_id": "technical_analyst",
"status": "analyzing",
"last_session": None,
"last_updated": "2026-03-30T10:05:00",
},
],
"events": [
{
"timestamp": "2026-03-30T10:00:00",
"event": "agent_registered",
"details": {"agent_id": "fundamentals_analyst"},
"session": "2026-03-30",
}
],
}
),
encoding="utf-8",
)
monkeypatch.setattr(runtime_module, "PROJECT_ROOT", tmp_path)
monkeypatch.setattr(runtime_module, "_is_gateway_running", lambda: True)
runtime_module.get_runtime_state().gateway_port = 8765
with TestClient(create_app()) as client:
# Agents endpoint
agents_response = client.get("/api/runtime/agents")
assert agents_response.status_code == 200
agents_payload = agents_response.json()
assert "agents" in agents_payload
assert len(agents_payload["agents"]) == 2
agent = agents_payload["agents"][0]
assert "agent_id" in agent
assert "status" in agent
assert "last_session" in agent
assert "last_updated" in agent
# Events endpoint
events_response = client.get("/api/runtime/events")
assert events_response.status_code == 200
events_payload = events_response.json()
assert "events" in events_payload
assert len(events_payload["events"]) == 1
event = events_payload["events"][0]
assert "timestamp" in event
assert "event" in event
assert "details" in event
assert "session" in event
def test_runtime_service_gateway_status_contract(monkeypatch, tmp_path):
"""Test gateway status endpoint maintains contract."""
run_dir = tmp_path / "runs" / "demo"
state_dir = run_dir / "state"
state_dir.mkdir(parents=True)
(state_dir / "runtime_state.json").write_text(
json.dumps(
{
"context": {
"config_name": "demo",
"run_dir": str(run_dir),
"bootstrap_values": {},
}
}
),
encoding="utf-8",
)
monkeypatch.setattr(runtime_module, "PROJECT_ROOT", tmp_path)
monkeypatch.setattr(runtime_module, "_is_gateway_running", lambda: True)
runtime_module.get_runtime_state().gateway_port = 8765
with TestClient(create_app()) as client:
response = client.get("/api/runtime/gateway/status")
assert response.status_code == 200
payload = response.json()
assert "is_running" in payload
assert "port" in payload
assert "run_id" in payload
assert payload["is_running"] is True
assert payload["port"] == 8765
assert payload["run_id"] == "demo"
def test_start_runtime_restore_reuses_historical_run_id(monkeypatch, tmp_path):
@@ -301,7 +547,7 @@ def test_start_runtime_restore_reuses_historical_run_id(monkeypatch, tmp_path):
"run_dir": str(run_dir),
"bootstrap_values": {
"tickers": ["AAPL"],
"schedule_mode": "interval",
"interval_minutes": 30,
"trigger_time": "now",
"max_comm_cycles": 2,
@@ -337,6 +583,8 @@ def test_start_runtime_restore_reuses_historical_run_id(monkeypatch, tmp_path):
return self.context
class _DummyProcess:
pid = 12345
def poll(self):
return None

View File

@@ -1,130 +0,0 @@
# -*- coding: utf-8 -*-
"""Tests for split-aware shared service clients."""
import pytest
from shared.client.control_client import ControlPlaneClient
from shared.client.openclaw_client import OpenClawServiceClient
from shared.client.runtime_client import RuntimeServiceClient
class _DummyResponse:
def __init__(self, payload):
self._payload = payload
def raise_for_status(self):
return None
def json(self):
return self._payload
class _DummyAsyncClient:
def __init__(self):
self.calls = []
async def get(self, path, params=None):
self.calls.append(("get", path, params))
return _DummyResponse({"path": path, "params": params})
async def post(self, path, json=None):
self.calls.append(("post", path, json))
return _DummyResponse({"path": path, "json": json})
async def put(self, path, json=None):
self.calls.append(("put", path, json))
return _DummyResponse({"path": path, "json": json})
async def aclose(self):
return None
@pytest.mark.asyncio
async def test_control_plane_client_hits_current_workspace_and_guard_routes():
client = ControlPlaneClient()
client._client = _DummyAsyncClient()
await client.list_workspaces()
await client.get_workspace("demo")
await client.list_agents("demo")
await client.get_agent("demo", "risk_manager")
await client.fetch_pending_approvals()
await client.approve_pending_approval("ap-1")
await client.deny_pending_approval("ap-2", reason="nope")
assert client._client.calls == [
("get", "/workspaces", None),
("get", "/workspaces/demo", None),
("get", "/workspaces/demo/agents", None),
("get", "/workspaces/demo/agents/risk_manager", None),
("get", "/guard/pending", None),
(
"post",
"/guard/approve",
{
"approval_id": "ap-1",
"one_time": True,
"expires_in_minutes": 30,
},
),
(
"post",
"/guard/deny",
{
"approval_id": "ap-2",
"reason": "nope",
},
),
]
@pytest.mark.asyncio
async def test_runtime_service_client_hits_current_runtime_routes():
client = RuntimeServiceClient()
client._client = _DummyAsyncClient()
await client.fetch_context()
await client.fetch_agents()
await client.fetch_events()
await client.fetch_gateway_port()
await client.start_runtime({"tickers": ["AAPL"]})
await client.stop_runtime(force=True)
await client.restart_runtime({"tickers": ["MSFT"]})
await client.fetch_current_runtime()
await client.get_runtime_config()
await client.update_runtime_config({"schedule_mode": "intraday"})
assert client._client.calls == [
("get", "/context", None),
("get", "/agents", None),
("get", "/events", None),
("get", "/gateway/port", None),
("post", "/start", {"tickers": ["AAPL"]}),
("post", "/stop?force=true", None),
("post", "/restart", {"tickers": ["MSFT"]}),
("get", "/current", None),
("get", "/config", None),
("put", "/config", {"schedule_mode": "intraday"}),
]
@pytest.mark.asyncio
async def test_openclaw_service_client_hits_current_openclaw_routes():
client = OpenClawServiceClient()
client._client = _DummyAsyncClient()
await client.fetch_status()
await client.list_sessions()
await client.get_session("main/session-1")
await client.get_session_history("main/session-1", limit=5)
await client.list_cron_jobs()
await client.list_approvals()
assert client._client.calls == [
("get", "/status", None),
("get", "/sessions", None),
("get", "/sessions/main/session-1", None),
("get", "/sessions/main/session-1/history", {"limit": 5}),
("get", "/cron", None),
("get", "/approvals", None),
]
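The `_DummyAsyncClient` used throughout these tests is a recording stub: it logs every (method, path, payload) triple and echoes a canned response, so each test can assert the exact route sequence without any network I/O. A standalone sketch of the same idea (class names here are illustrative, not the test suite's):

```python
import asyncio

class DummyResponse:
    def __init__(self, payload):
        self._payload = payload

    def raise_for_status(self):
        return None

    def json(self):
        return self._payload

class RecordingAsyncClient:
    """httpx-style stub that records calls instead of sending them."""

    def __init__(self):
        self.calls = []

    async def get(self, path, params=None):
        self.calls.append(("get", path, params))
        return DummyResponse({"path": path, "params": params})

    async def post(self, path, json=None):
        self.calls.append(("post", path, json))
        return DummyResponse({"path": path, "json": json})

async def demo():
    client = RecordingAsyncClient()
    await client.get("/sessions", params={"limit": 5})
    await client.post("/guard/approve", json={"approval_id": "ap-1"})
    return client.calls

print(asyncio.run(demo()))
```

Asserting on the recorded `calls` list pins the client's route strings and payload shapes, which is exactly how the tests above detect accidental API drift.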

View File

@@ -1,119 +0,0 @@
# -*- coding: utf-8 -*-
from backend import cli
from backend.agents.skill_metadata import parse_skill_metadata
from backend.agents.skills_manager import SkillsManager
from backend.agents.team_pipeline_config import (
ensure_team_pipeline_config,
load_team_pipeline_config,
update_active_analysts,
)
def test_parse_skill_metadata_extended_frontmatter(tmp_path):
skill_dir = tmp_path / "demo_skill"
skill_dir.mkdir(parents=True, exist_ok=True)
(skill_dir / "SKILL.md").write_text(
"---\n"
"name: demo_skill\n"
"description: Demo description\n"
"tools:\n"
" - technical\n"
"---\n\n"
"# Demo Skill\n",
encoding="utf-8",
)
parsed = parse_skill_metadata(skill_dir, source="builtin")
assert parsed.skill_name == "demo_skill"
assert parsed.description == "Demo description"
assert parsed.tools == ["technical"]
def test_update_agent_skill_overrides(tmp_path):
manager = SkillsManager(project_root=tmp_path)
asset_dir = manager.get_agent_asset_dir("demo", "risk_manager")
asset_dir.mkdir(parents=True, exist_ok=True)
(asset_dir / "agent.yaml").write_text(
"enabled_skills:\n"
" - risk_review\n"
"disabled_skills:\n"
" - old_skill\n",
encoding="utf-8",
)
result = manager.update_agent_skill_overrides(
config_name="demo",
agent_id="risk_manager",
enable=["extra_guard"],
disable=["risk_review"],
)
assert result["enabled_skills"] == ["extra_guard"]
assert result["disabled_skills"] == ["old_skill", "risk_review"]
def test_skills_enable_disable_and_list(monkeypatch, tmp_path):
builtin_root = tmp_path / "backend" / "skills" / "builtin"
for name in ("risk_review", "extra_guard"):
skill_dir = builtin_root / name
skill_dir.mkdir(parents=True, exist_ok=True)
(skill_dir / "SKILL.md").write_text(
f"---\nname: {name}\ndescription: {name} desc\n---\n",
encoding="utf-8",
)
printed = []
monkeypatch.setattr(cli, "get_project_root", lambda: tmp_path)
monkeypatch.setattr(cli.console, "print", lambda value: printed.append(value))
cli.skills_enable(agent_id="risk_manager", skill="extra_guard", config_name="demo")
cli.skills_disable(agent_id="risk_manager", skill="risk_review", config_name="demo")
cli.skills_list(config_name="demo", agent_id="risk_manager")
text_dump = "\n".join(str(item) for item in printed)
assert "Enabled" in text_dump
assert "Disabled" in text_dump
assert any(getattr(item, "title", None) == "Skill Catalog" for item in printed)
def test_install_external_skill_for_agent(tmp_path):
manager = SkillsManager(project_root=tmp_path)
skill_dir = tmp_path / "downloaded" / "new_skill"
skill_dir.mkdir(parents=True, exist_ok=True)
(skill_dir / "SKILL.md").write_text(
"---\n"
"name: new_skill\n"
"description: external skill\n"
"---\n\n"
"# New Skill\n",
encoding="utf-8",
)
result = manager.install_external_skill_for_agent(
config_name="demo",
agent_id="risk_manager",
source=str(skill_dir),
activate=True,
)
assert result["skill_name"] == "new_skill"
target = manager.get_agent_local_root("demo", "risk_manager") / "new_skill"
assert target.exists()
def test_team_pipeline_active_analyst_updates(tmp_path):
project_root = tmp_path
ensure_team_pipeline_config(
project_root=project_root,
config_name="demo",
default_analysts=["fundamentals_analyst", "technical_analyst"],
)
update_active_analysts(
project_root=project_root,
config_name="demo",
available_analysts=["fundamentals_analyst", "technical_analyst"],
remove=["technical_analyst"],
)
config = load_team_pipeline_config(project_root, "demo")
assert config["discussion"]["active_analysts"] == ["fundamentals_analyst"]

View File

@@ -200,6 +200,179 @@ def test_trading_service_market_cap_endpoint(monkeypatch):
}
def test_trading_service_contract_stability():
"""Verify trading service API maintains contract stability."""
app = create_app()
routes = {route.path: route for route in app.routes if hasattr(route, "methods")}
# Health endpoint
assert "/health" in routes
# Trading data endpoints
assert "/api/prices" in routes
assert "/api/financials" in routes
assert "/api/news" in routes
assert "/api/insider-trades" in routes
assert "/api/market/status" in routes
assert "/api/market-cap" in routes
assert "/api/line-items" in routes
# Verify all are GET endpoints (read-only service)
for path in ["/api/prices", "/api/financials", "/api/news", "/api/insider-trades",
"/api/market/status", "/api/market-cap", "/api/line-items"]:
assert "GET" in routes[path].methods
def test_trading_service_prices_contract(monkeypatch):
"""Test prices endpoint maintains response contract."""
monkeypatch.setattr(
"backend.domains.trading.get_prices_payload",
lambda ticker, start_date, end_date: {
"ticker": ticker,
"prices": [
Price(
open=1.0,
close=2.0,
high=2.5,
low=0.5,
volume=100,
time="2026-03-20",
)
],
},
)
with TestClient(create_app()) as client:
response = client.get(
"/api/prices",
params={
"ticker": "AAPL",
"start_date": "2026-03-01",
"end_date": "2026-03-20",
},
)
assert response.status_code == 200
payload = response.json()
assert "ticker" in payload
assert "prices" in payload
assert isinstance(payload["prices"], list)
if payload["prices"]:
price = payload["prices"][0]
assert "open" in price
assert "close" in price
assert "high" in price
assert "low" in price
assert "volume" in price
assert "time" in price
def test_trading_service_financials_contract(monkeypatch):
"""Test financials endpoint maintains response contract."""
monkeypatch.setattr(
"backend.domains.trading.get_financials_payload",
lambda ticker, end_date, period, limit: {
"financial_metrics": [
FinancialMetrics(
ticker=ticker,
report_period=end_date,
period=period,
currency="USD",
market_cap=123.0,
enterprise_value=None,
price_to_earnings_ratio=None,
price_to_book_ratio=None,
price_to_sales_ratio=None,
enterprise_value_to_ebitda_ratio=None,
enterprise_value_to_revenue_ratio=None,
free_cash_flow_yield=None,
peg_ratio=None,
gross_margin=None,
operating_margin=None,
net_margin=None,
return_on_equity=None,
return_on_assets=None,
return_on_invested_capital=None,
asset_turnover=None,
inventory_turnover=None,
receivables_turnover=None,
days_sales_outstanding=None,
operating_cycle=None,
working_capital_turnover=None,
current_ratio=None,
quick_ratio=None,
cash_ratio=None,
operating_cash_flow_ratio=None,
debt_to_equity=None,
debt_to_assets=None,
interest_coverage=None,
revenue_growth=None,
earnings_growth=None,
book_value_growth=None,
earnings_per_share_growth=None,
free_cash_flow_growth=None,
operating_income_growth=None,
ebitda_growth=None,
payout_ratio=None,
earnings_per_share=None,
book_value_per_share=None,
free_cash_flow_per_share=None,
)
]
},
)
with TestClient(create_app()) as client:
response = client.get(
"/api/financials",
params={"ticker": "AAPL", "end_date": "2026-03-20"},
)
assert response.status_code == 200
payload = response.json()
assert "financial_metrics" in payload
assert isinstance(payload["financial_metrics"], list)
def test_trading_service_market_status_contract(monkeypatch):
"""Test market status endpoint maintains response contract."""
monkeypatch.setattr(
"backend.domains.trading.get_market_status_payload",
lambda: {"status": "open", "status_text": "Open", "next_open": "09:30"},
)
with TestClient(create_app()) as client:
response = client.get("/api/market/status")
assert response.status_code == 200
payload = response.json()
assert "status" in payload
def test_trading_service_market_cap_contract(monkeypatch):
"""Test market cap endpoint maintains response contract."""
monkeypatch.setattr(
"backend.domains.trading.get_market_cap_payload",
lambda ticker, end_date: {
"ticker": ticker,
"end_date": end_date,
"market_cap": 3.5e12,
},
)
with TestClient(create_app()) as client:
response = client.get(
"/api/market-cap",
params={"ticker": "AAPL", "end_date": "2026-03-20"},
)
assert response.status_code == 200
payload = response.json()
assert "ticker" in payload
assert "end_date" in payload
assert "market_cap" in payload
def test_trading_service_line_items_endpoint(monkeypatch):
monkeypatch.setattr(
"backend.domains.trading.get_line_items_payload",

View File

@@ -22,16 +22,6 @@ from agentscope.message import TextBlock
from agentscope.tool import ToolResponse
from backend.data.provider_utils import normalize_symbol
from backend.skills.builtin.valuation_review.scripts.dcf_report import (
build_dcf_report,
)
from backend.skills.builtin.valuation_review.scripts.multiple_valuation_report import (
build_ev_ebitda_report,
build_residual_income_report,
)
from backend.skills.builtin.valuation_review.scripts.owner_earnings_report import (
build_owner_earnings_report,
)
from backend.tools.data_tools import (
    get_company_news,
    get_financial_metrics,
@@ -41,10 +31,12 @@ from backend.tools.data_tools import (
    prices_to_df,
    search_line_items,
)
from backend.tools.sandboxed_executor import get_sandbox
from backend.tools.technical_signals import StockTechnicalAnalyzer

logger = logging.getLogger(__name__)

_technical_analyzer = StockTechnicalAnalyzer()
_sandbox = get_sandbox()

def _to_text_response(text: str) -> ToolResponse:
@@ -111,18 +103,28 @@ def _safe_float(value, default=0.0) -> float:
def safe(func):
    """Decorator to catch exceptions in both sync and async tool functions."""
    if asyncio.iscoroutinefunction(func):
        @wraps(func)
        async def async_wrapper(*args, **kwargs):
            try:
                return await func(*args, **kwargs)
            except Exception as e:
                error_msg = f"Error in {func.__name__}: {str(e)}"
                logger.error(f"{error_msg}\n{traceback.format_exc()}")
                return _to_text_response(f"[ERROR] {error_msg}")
        return async_wrapper
    else:
        @wraps(func)
        def sync_wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception as e:
                error_msg = f"Error in {func.__name__}: {str(e)}"
                logger.error(f"{error_msg}\n{traceback.format_exc()}")
                return _to_text_response(f"[ERROR] {error_msg}")
        return sync_wrapper
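A decorator that must wrap both sync and async callables is easy to get subtly wrong (returning an un-awaited coroutine, or swallowing exceptions in the wrong wrapper). A minimal standalone sketch of the branching pattern in the hunk above; the names and the plain-string error return here are illustrative stand-ins, not the project's real helpers:

```python
import asyncio
from functools import wraps

def safe(func):
    """Dispatch to an async or sync wrapper based on the wrapped callable."""
    if asyncio.iscoroutinefunction(func):
        @wraps(func)
        async def async_wrapper(*args, **kwargs):
            try:
                return await func(*args, **kwargs)
            except Exception as e:
                return f"[ERROR] {func.__name__}: {e}"
        return async_wrapper

    @wraps(func)
    def sync_wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            return f"[ERROR] {func.__name__}: {e}"
    return sync_wrapper

@safe
def div(a, b):
    return a / b

@safe
async def adiv(a, b):
    return a / b

print(div(1, 0))                # error string instead of a raised exception
print(asyncio.run(adiv(8, 2)))  # 4.0
```

The `iscoroutinefunction` check must happen at decoration time, not inside the wrapper, so that the toolkit registering the function sees a genuine coroutine function for async tools.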
def _fmt(val, fmt=".2f", suffix="") -> str:
@@ -149,7 +151,7 @@ def _resolved_date(current_date: Optional[str]) -> str:
@safe
async def analyze_efficiency_ratios(
    tickers: Optional[List[str]] = None,
    current_date: Optional[str] = None,
) -> ToolResponse:
@@ -171,21 +173,26 @@ def analyze_efficiency_ratios(
    tickers = _parse_tickers(tickers)
    lines = [f"=== Efficiency Ratios Analysis ({current_date}) ===\n"]

    async def _fetch_one(ticker):
        try:
            metrics = await asyncio.to_thread(get_financial_metrics, ticker=ticker, end_date=current_date)
            if not metrics:
                return f"{ticker}: No data available\n"
            m = metrics[0]
            ticker_lines = [
                f"{ticker}:",
                f" Asset Turnover: {_fmt(m.asset_turnover)}",
                f" Inventory Turnover: {_fmt(m.inventory_turnover)}",
                f" Receivables Turnover: {_fmt(m.receivables_turnover)}",
                f" Working Capital Turnover: {_fmt(m.working_capital_turnover)}\n",
            ]
            return "\n".join(ticker_lines)
        except Exception as e:
            return f"{ticker}: Error - {str(e)}\n"

    results = await asyncio.gather(*[_fetch_one(t) for t in tickers])
    lines.extend(results)
    return _to_text_response("\n".join(lines))
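The per-ticker fan-out above follows a common shape: push each blocking fetch onto a worker thread with `asyncio.to_thread`, keep failures local to the per-item coroutine, and `gather` results back in input order. A generic sketch of that pattern, with a stand-in fetcher rather than the project's `get_financial_metrics`:

```python
import asyncio

def blocking_fetch(ticker: str) -> dict:
    # Stand-in for a synchronous data-provider call.
    if ticker == "BAD":
        raise ValueError("no data feed")
    return {"ticker": ticker, "pe": 21.5}

async def fetch_one(ticker: str) -> str:
    try:
        # to_thread keeps the event loop free while the sync call runs.
        row = await asyncio.to_thread(blocking_fetch, ticker)
        return f"{row['ticker']}: P/E {row['pe']}"
    except Exception as e:
        # An error for one ticker does not fail the whole batch.
        return f"{ticker}: Error - {e}"

async def fetch_all(tickers):
    # gather preserves input order regardless of completion order.
    return await asyncio.gather(*[fetch_one(t) for t in tickers])

print(asyncio.run(fetch_all(["AAPL", "BAD", "MSFT"])))
```

Catching inside `fetch_one` (rather than using `gather(..., return_exceptions=True)`) keeps the result type uniform: every element is a formatted string, never a raised exception object.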
@@ -318,7 +325,7 @@ def analyze_financial_health(
@safe
async def analyze_valuation_ratios(
    tickers: Optional[List[str]] = None,
    current_date: Optional[str] = None,
) -> ToolResponse:
@@ -340,24 +347,31 @@ def analyze_valuation_ratios(
    tickers = _parse_tickers(tickers)
    lines = [f"=== Valuation Ratios Analysis ({current_date}) ===\n"]

    async def _fetch_one(ticker):
        try:
            metrics = await asyncio.to_thread(get_financial_metrics, ticker=ticker, end_date=current_date)
            if not metrics:
                return f"{ticker}: No data available\n"
            m = metrics[0]
            ticker_lines = [
                f"{ticker}:",
                f" P/E Ratio: {_fmt(m.price_to_earnings_ratio)}",
                f" P/B Ratio: {_fmt(m.price_to_book_ratio)}",
                f" P/S Ratio: {_fmt(m.price_to_sales_ratio)}\n",
            ]
            return "\n".join(ticker_lines)
        except Exception as e:
            return f"{ticker}: Error - {str(e)}\n"

    results = await asyncio.gather(*[_fetch_one(t) for t in tickers])
    lines.extend(results)
    return _to_text_response("\n".join(lines))
@safe
async def get_financial_metrics_tool(
    tickers: Optional[List[str]] = None,
    current_date: Optional[str] = None,
    period: str = "ttm",
@@ -382,35 +396,35 @@ def get_financial_metrics_tool(
        f"=== Comprehensive Financial Metrics ({current_date}, {period}) ===\n",
    ]

    async def _fetch_one(ticker):
        try:
            # Offload synchronous data fetching to thread to keep loop snappy
            metrics = await asyncio.to_thread(
                get_financial_metrics,
                ticker=ticker,
                end_date=current_date,
                period=period,
            )
            if not metrics:
                return f"{ticker}: No data available\n"
            m = metrics[0]
            ticker_lines = [
                f"{ticker}:",
                f" Market Cap: ${_fmt(m.market_cap, ',.0f')}",
                f" P/E: {_fmt(m.price_to_earnings_ratio)} | P/B: {_fmt(m.price_to_book_ratio)} | P/S: {_fmt(m.price_to_sales_ratio)}",
                f" ROE: {_fmt(m.return_on_equity, '.1%')} | Net Margin: {_fmt(m.net_margin, '.1%')}",
                f" Revenue Growth: {_fmt(m.revenue_growth, '.1%')} | Earnings Growth: {_fmt(m.earnings_growth, '.1%')}",
                f" Current Ratio: {_fmt(m.current_ratio)} | D/E: {_fmt(m.debt_to_equity)}",
                f" EPS: ${_fmt(m.earnings_per_share)} | FCF/Share: ${_fmt(m.free_cash_flow_per_share)}\n",
            ]
            return "\n".join(ticker_lines)
        except Exception as e:
            return f"{ticker}: Error fetching data - {str(e)}\n"

    # Parallelize data retrieval for all tickers
    results = await asyncio.gather(*[_fetch_one(t) for t in tickers])
    lines.extend(results)
    return _to_text_response("\n".join(lines))
@@ -869,7 +883,13 @@ def dcf_valuation_analysis(
        },
    )
    return _to_text_response(
        _sandbox.execute_skill(
            skill_name="builtin/valuation_review",
            function_name="build_dcf_report",
            function_args={"rows": rows, "current_date": current_date},
        )
    )
@safe
@@ -958,7 +978,13 @@ def owner_earnings_valuation_analysis(
        },
    )
    return _to_text_response(
        _sandbox.execute_skill(
            skill_name="builtin/valuation_review",
            function_name="build_owner_earnings_report",
            function_args={"rows": rows, "current_date": current_date},
        )
    )
@safe
@@ -1033,7 +1059,13 @@ def ev_ebitda_valuation_analysis(
        },
    )
    return _to_text_response(
        _sandbox.execute_skill(
            skill_name="builtin/valuation_review",
            function_name="build_ev_ebitda_report",
            function_args={"rows": rows, "current_date": current_date},
        )
    )
@safe
@@ -1114,7 +1146,13 @@ def residual_income_valuation_analysis(
        },
    )
    return _to_text_response(
        _sandbox.execute_skill(
            skill_name="builtin/valuation_review",
            function_name="build_residual_income_report",
            function_args={"rows": rows, "current_date": current_date},
        )
    )
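Each valuation tool now hands report building to the sandbox by name instead of importing the `build_*_report` functions directly. The real `execute_skill` isolation mechanism isn't shown in this diff; a minimal in-process dispatcher with the same call shape (the registry class and the toy report function below are hypothetical) illustrates the indirection:

```python
from typing import Any, Callable, Dict

class SkillDispatcher:
    """Resolve (skill_name, function_name) to a registered callable."""

    def __init__(self):
        self._registry: Dict[str, Dict[str, Callable[..., Any]]] = {}

    def register(self, skill_name: str, function_name: str, fn: Callable[..., Any]) -> None:
        self._registry.setdefault(skill_name, {})[function_name] = fn

    def execute_skill(self, skill_name: str, function_name: str, function_args: Dict[str, Any]) -> Any:
        try:
            fn = self._registry[skill_name][function_name]
        except KeyError:
            raise LookupError(f"unknown skill function: {skill_name}/{function_name}")
        # Keyword-only invocation mirrors the function_args dict in the diff.
        return fn(**function_args)

def build_dcf_report(rows, current_date):
    # Toy stand-in for the real report builder.
    return f"DCF report for {len(rows)} tickers as of {current_date}"

dispatcher = SkillDispatcher()
dispatcher.register("builtin/valuation_review", "build_dcf_report", build_dcf_report)

print(dispatcher.execute_skill(
    skill_name="builtin/valuation_review",
    function_name="build_dcf_report",
    function_args={"rows": [{"ticker": "AAPL"}], "current_date": "2026-03-20"},
))
```

Because callers only pass names and plain-data arguments, the dispatcher boundary is where a real implementation can swap in process or container isolation without touching the tool code.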
# Tool Registry for dynamic toolkit creation

View File

@@ -0,0 +1,564 @@
# -*- coding: utf-8 -*-
"""Dynamic Team Management Tools - Tools for PM to manage analyst team dynamically.
This module provides tools for the Portfolio Manager to:
- Create new analysts with custom configuration
- Clone existing analysts with variations
- Remove analysts from the team
- List available analyst types
- Get analyst information
These tools are registered with the PM's toolkit and enable dynamic team management
as described in the Dynamic Team Architecture.
"""
from __future__ import annotations
import json
from typing import Any, Dict, List, Optional, Callable
from agentscope.message import TextBlock
from agentscope.tool import ToolResponse
from backend.agents.dynamic_team_types import (
AnalystPersona,
AnalystConfig,
CreateAnalystResult,
AnalystTypeInfo,
)
from backend.config.constants import ANALYST_TYPES, AGENT_CONFIG
# Type alias for callbacks set by pipeline
CreateAnalystCallback = Callable[[str, str, Optional[AnalystConfig]], str]
RemoveAnalystCallback = Callable[[str], str]
def _to_tool_response(payload: Any) -> ToolResponse:
if isinstance(payload, str):
text = payload
else:
text = json.dumps(payload, ensure_ascii=False, indent=2, default=str)
return ToolResponse(content=[TextBlock(type="text", text=text)])
class DynamicTeamController:
"""Controller for dynamic analyst team management.
This class is instantiated by TradingPipeline and injected into the PM agent
via set_team_controller(). It provides methods that the PM can call through
tools to manage the analyst team dynamically.
Attributes:
create_callback: Callback to _create_runtime_analyst in pipeline
remove_callback: Callback to _remove_runtime_analyst in pipeline
get_analysts_callback: Callback to get current analysts list
registered_types: Runtime-registered custom analyst types
"""
def __init__(
self,
create_callback: CreateAnalystCallback,
remove_callback: RemoveAnalystCallback,
get_analysts_callback: Optional[Callable[[], List[Any]]] = None,
):
"""Initialize the controller with callbacks from pipeline.
Args:
create_callback: Function to create a runtime analyst
remove_callback: Function to remove a runtime analyst
get_analysts_callback: Optional function to get current analysts
"""
self._create_callback = create_callback
self._remove_callback = remove_callback
self._get_analysts_callback = get_analysts_callback
self._registered_types: Dict[str, AnalystPersona] = {}
self._instance_configs: Dict[str, AnalystConfig] = {}
def create_analyst(
self,
agent_id: str,
analyst_type: str,
name: Optional[str] = None,
focus: Optional[List[str]] = None,
description: Optional[str] = None,
soul_md: Optional[str] = None,
agents_md: Optional[str] = None,
model_name: Optional[str] = None,
preferred_tools: Optional[List[str]] = None,
) -> Dict[str, Any]:
"""Create a new analyst with optional custom configuration.
This tool allows the Portfolio Manager to dynamically create new analysts
during a trading session. The analyst can be based on a predefined type
or fully customized with a unique persona.
Args:
agent_id: Unique identifier for the new analyst (e.g., "crypto_specialist_01")
analyst_type: Base type (e.g., "technical_analyst") or custom identifier
name: Display name for the analyst (overrides default)
focus: List of focus areas (overrides default)
description: Detailed description (overrides default)
soul_md: Custom SOUL.md content for the analyst's workspace
agents_md: Custom AGENTS.md content
model_name: Override the default LLM model
preferred_tools: List of preferred tool categories
Returns:
Dict with success status, message, and analyst info
Example:
>>> result = create_analyst(
... agent_id="options_specialist",
... analyst_type="technical_analyst",
... name="期权策略分析师",
... focus=["期权定价", "波动率交易"],
... description="专注于期权市场分析和波动率交易策略...",
... )
"""
# Build custom config if any customization is provided
custom_config = None
if any([name, focus, description, soul_md, agents_md, model_name, preferred_tools]):
persona = None
if name or focus or description:
persona = AnalystPersona(
name=name or f"Custom {analyst_type}",
focus=focus or ["General Analysis"],
description=description or f"Custom analyst based on {analyst_type}",
preferred_tools=preferred_tools,
)
custom_config = AnalystConfig(
persona=persona,
analyst_type=analyst_type if analyst_type in ANALYST_TYPES else None,
soul_md=soul_md,
agents_md=agents_md,
model_name=model_name,
)
# Call the pipeline's create method
result_message = self._create_callback(agent_id, analyst_type, custom_config)
# Parse result
success = result_message.startswith("Created")
if success:
self._instance_configs[agent_id] = custom_config if custom_config else AnalystConfig(
analyst_type=analyst_type
)
return {
"success": success,
"agent_id": agent_id if success else None,
"message": result_message,
"analyst_type": analyst_type,
}
def clone_analyst(
self,
source_id: str,
new_id: str,
name: Optional[str] = None,
focus_additions: Optional[List[str]] = None,
description_override: Optional[str] = None,
model_name: Optional[str] = None,
) -> Dict[str, Any]:
"""Clone an existing analyst with optional modifications.
Creates a new analyst by copying the configuration of an existing one
and applying specified overrides. Useful for creating specialized
variants (e.g., "crypto_technical" from "technical_analyst").
Args:
source_id: ID of the analyst to clone
new_id: Unique identifier for the new analyst
name: New display name (if different from source)
focus_additions: Additional focus areas to add
description_override: Completely new description
model_name: Override the model from source
Returns:
Dict with success status, message, and new analyst info
Example:
>>> result = clone_analyst(
... source_id="technical_analyst",
... new_id="crypto_technical_01",
... name="Crypto Technical Analyst",
... focus_additions=["On-chain data", "DeFi protocol analysis"],
... )
"""
# Get source config if available
source_config = self._instance_configs.get(source_id)
# Determine base type and config
if source_config:
base_type = source_config.analyst_type or source_id
base_persona = source_config.persona
else:
# Assume source_id is a known type
base_type = source_id
base_persona = None
# Build new persona
new_focus = list(base_persona.focus) if base_persona else []
if focus_additions:
new_focus.extend(focus_additions)
new_name = name or (base_persona.name if base_persona else new_id)
new_description = description_override or (base_persona.description if base_persona else "")
# Create new config with parent reference
new_config = AnalystConfig(
persona=AnalystPersona(
name=new_name,
focus=new_focus,
description=new_description,
preferred_tools=base_persona.preferred_tools if base_persona else None,
),
analyst_type=base_type if base_type in ANALYST_TYPES else None,
soul_md=source_config.soul_md if source_config else None,
agents_md=source_config.agents_md if source_config else None,
model_name=model_name or (source_config.model_name if source_config else None),
parent_id=source_id,
)
# Create the new analyst
result_message = self._create_callback(new_id, base_type, new_config)
success = result_message.startswith("Created")
if success:
self._instance_configs[new_id] = new_config
return {
"success": success,
"agent_id": new_id if success else None,
"parent_id": source_id,
"message": result_message,
}
def remove_analyst(self, agent_id: str) -> Dict[str, Any]:
"""Remove a dynamically created analyst from the team.
Args:
agent_id: ID of the analyst to remove
Returns:
Dict with success status and message
Example:
>>> result = remove_analyst("options_specialist")
"""
result_message = self._remove_callback(agent_id)
# Count as success unless the callback explicitly reports the analyst was not found.
success = result_message.startswith("Removed") or "not found" not in result_message.lower()
if success and agent_id in self._instance_configs:
del self._instance_configs[agent_id]
return {
"success": success,
"agent_id": agent_id,
"message": result_message,
}
def list_analyst_types(self) -> List[Dict[str, Any]]:
"""List all available analyst types.
Returns a list of all available analyst types, including:
- Built-in types from ANALYST_TYPES
- Runtime registered custom types
Returns:
List of analyst type information dictionaries
Example:
>>> types = list_analyst_types()
>>> print(types[0]["type_id"]) # "fundamentals_analyst"
"""
result = []
# Add built-in types
for type_id, info in ANALYST_TYPES.items():
result.append({
"type_id": type_id,
"name": info.get("display_name", type_id),
"description": info.get("description", ""),
"is_builtin": True,
"source": "constants",
})
# Add runtime registered types
for type_id, persona in self._registered_types.items():
result.append({
"type_id": type_id,
"name": persona.name,
"description": persona.description,
"is_builtin": False,
"source": "runtime",
})
return result
def get_analyst_info(self, agent_id: str) -> Dict[str, Any]:
"""Get information about a specific analyst.
Args:
agent_id: ID of the analyst
Returns:
Dict with analyst configuration and status
"""
config = self._instance_configs.get(agent_id)
current_analysts = self._get_analysts_callback() if self._get_analysts_callback else []
analyst_map = {
(getattr(agent, "name", None) or getattr(agent, "agent_id", None)): agent
for agent in current_analysts
}
if agent_id in analyst_map and not config:
builtin_meta = AGENT_CONFIG.get(agent_id, {})
return {
"found": True,
"agent_id": agent_id,
"name": builtin_meta.get("name") or agent_id,
"type": agent_id,
"is_custom": False,
"is_clone": False,
"is_builtin": True,
"message": f"Built-in analyst '{agent_id}' is active",
}
if not config:
return {
"found": False,
"agent_id": agent_id,
"message": f"No configuration found for '{agent_id}'",
}
return {
"found": True,
"agent_id": agent_id,
"config": config.to_dict(),
"is_custom": config.persona is not None,
"is_clone": config.parent_id is not None,
"parent_id": config.parent_id,
"is_builtin": False,
}
def register_analyst_type(
self,
type_id: str,
name: str,
focus: List[str],
description: str,
preferred_tools: Optional[List[str]] = None,
) -> Dict[str, Any]:
"""Register a new analyst type for later creation.
This allows defining reusable analyst personas that can be instantiated
multiple times with different configurations.
Args:
type_id: Unique identifier for this type (e.g., "options_analyst")
name: Display name
focus: List of focus areas
description: Detailed description
preferred_tools: Optional list of preferred tool categories
Returns:
Dict with success status and type info
Example:
>>> result = register_analyst_type(
... type_id="options_analyst",
... name="Options Analyst",
... focus=["Options pricing", "Greeks analysis"],
... description="Focuses on options strategies and volatility analysis",
... )
"""
if type_id in self._registered_types or type_id in ANALYST_TYPES:
return {
"success": False,
"type_id": type_id,
"message": f"Type '{type_id}' already exists",
}
persona = AnalystPersona(
name=name,
focus=focus,
description=description,
preferred_tools=preferred_tools,
)
self._registered_types[type_id] = persona
return {
"success": True,
"type_id": type_id,
"persona": persona.to_dict(),
"message": f"Registered new analyst type '{type_id}'",
}
def get_team_summary(self) -> Dict[str, Any]:
"""Get a summary of the current analyst team.
Returns:
Dict with team composition information
"""
analysts = []
current_analysts = self._get_analysts_callback() if self._get_analysts_callback else []
instance_configs = self._instance_configs
for agent in current_analysts:
agent_id = getattr(agent, "name", None) or getattr(agent, "agent_id", None)
if not agent_id:
continue
config = instance_configs.get(agent_id)
builtin_meta = AGENT_CONFIG.get(agent_id, {})
analysts.append({
"agent_id": agent_id,
"name": (
config.persona.name
if config and config.persona and config.persona.name
else builtin_meta.get("name") or agent_id
),
"type": config.analyst_type if config else agent_id,
"is_custom": bool(config and config.persona is not None),
"is_clone": bool(config and config.parent_id is not None),
"is_builtin": config is None,
})
return {
"total_analysts": len(analysts),
"custom_analysts": len([a for a in analysts if a["is_custom"]]),
"cloned_analysts": len([a for a in analysts if a["is_clone"]]),
"analysts": analysts,
"registered_types": len(self._registered_types),
}
# Global controller instance - set by pipeline
_controller_instance: Optional[DynamicTeamController] = None
def set_controller(controller: DynamicTeamController) -> None:
"""Set the global controller instance.
Called by TradingPipeline when initializing the PM agent.
"""
global _controller_instance
_controller_instance = controller
def get_controller() -> Optional[DynamicTeamController]:
"""Get the global controller instance.
Returns:
DynamicTeamController instance or None if not set
"""
return _controller_instance
# Tool functions that wrap the controller methods
# These are registered with the PM's toolkit
def create_analyst(
agent_id: str,
analyst_type: str,
name: str = "",
focus: str = "",
description: str = "",
soul_md: str = "",
agents_md: str = "",
model_name: str = "",
) -> ToolResponse:
"""Tool wrapper for create_analyst.
Note: focus parameter accepts comma-separated string for tool compatibility.
"""
controller = get_controller()
if not controller:
return _to_tool_response({"success": False, "error": "Dynamic team controller not available"})
focus_list = [f.strip() for f in focus.split(",")] if focus else None
return _to_tool_response(
controller.create_analyst(
agent_id=agent_id,
analyst_type=analyst_type,
name=name,
focus=focus_list,
description=description,
soul_md=soul_md,
agents_md=agents_md,
model_name=model_name,
)
)
def clone_analyst(
source_id: str,
new_id: str,
name: str = "",
focus_additions: str = "",
description_override: str = "",
model_name: str = "",
) -> ToolResponse:
"""Tool wrapper for clone_analyst.
Note: focus_additions accepts comma-separated string.
"""
controller = get_controller()
if not controller:
return _to_tool_response({"success": False, "error": "Dynamic team controller not available"})
additions_list = [f.strip() for f in focus_additions.split(",")] if focus_additions else None
return _to_tool_response(
controller.clone_analyst(
source_id=source_id,
new_id=new_id,
name=name,
focus_additions=additions_list,
description_override=description_override,
model_name=model_name,
)
)
def remove_analyst(agent_id: str) -> ToolResponse:
"""Tool wrapper for remove_analyst."""
controller = get_controller()
if not controller:
return _to_tool_response({"success": False, "error": "Dynamic team controller not available"})
return _to_tool_response(controller.remove_analyst(agent_id))
def list_analyst_types() -> ToolResponse:
"""Tool wrapper for list_analyst_types."""
controller = get_controller()
if not controller:
return _to_tool_response([])
return _to_tool_response(controller.list_analyst_types())
def get_analyst_info(agent_id: str) -> ToolResponse:
"""Tool wrapper for get_analyst_info."""
controller = get_controller()
if not controller:
return _to_tool_response({"found": False, "error": "Controller not available"})
return _to_tool_response(controller.get_analyst_info(agent_id))
def get_team_summary() -> ToolResponse:
"""Tool wrapper for get_team_summary."""
controller = get_controller()
if not controller:
return _to_tool_response({"error": "Controller not available"})
return _to_tool_response(controller.get_team_summary())
__all__ = [
"DynamicTeamController",
"set_controller",
"get_controller",
"create_analyst",
"clone_analyst",
"remove_analyst",
"list_analyst_types",
"get_analyst_info",
"get_team_summary",
]
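The controller infers success from the prefixes of the callback return messages ("Created …" / "Removed …" / "… not found"), so the pipeline-side callbacks must honor that convention. A minimal sketch of conforming callbacks; the `_agents` registry and the callback bodies below are hypothetical stand-ins, not the repository's TradingPipeline:

```python
from typing import Any, Dict, Optional

_agents: Dict[str, str] = {}  # hypothetical in-memory registry

def create_callback(agent_id: str, analyst_type: str,
                    config: Optional[Any] = None) -> str:
    # DynamicTeamController checks result_message.startswith("Created"),
    # so a successful create must use that prefix.
    if agent_id in _agents:
        return f"Error: analyst '{agent_id}' already exists"
    _agents[agent_id] = analyst_type
    return f"Created analyst '{agent_id}' (type: {analyst_type})"

def remove_callback(agent_id: str) -> str:
    # Failure messages should contain "not found" so remove_analyst
    # can classify them correctly.
    if _agents.pop(agent_id, None) is None:
        return f"Analyst '{agent_id}' not found"
    return f"Removed analyst '{agent_id}'"

msg = create_callback("options_specialist", "technical_analyst")
print(msg)  # Created analyst 'options_specialist' (type: technical_analyst)
```

Any other message convention would silently flip the `success` flags the tool wrappers return to the PM agent.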


@@ -0,0 +1,441 @@
# -*- coding: utf-8 -*-
"""
Multi-mode skill sandbox executor.
Supports three modes:
- none: direct execution (default, development environments)
- docker: Docker container isolation
- kubernetes: Kubernetes Pod isolation
Environment variables:
SKILL_SANDBOX_MODE: sandbox mode (none/docker/kubernetes), default none
SKILL_SANDBOX_IMAGE: Docker image, default python:3.11-slim
SKILL_SANDBOX_MEMORY_LIMIT: memory limit, default 512m
SKILL_SANDBOX_CPU_LIMIT: CPU limit, default 1.0
SKILL_SANDBOX_NETWORK: network mode, default none
SKILL_SANDBOX_TIMEOUT: timeout in seconds, default 60
"""
import json
import logging
import os
from abc import ABC, abstractmethod
from typing import Any
logger = logging.getLogger(__name__)
class SandboxBackend(ABC):
"""Abstract base class for sandbox backends."""
@abstractmethod
def execute(
self,
skill_name: str,
function_name: str,
function_args: dict,
) -> dict:
"""
Execute a skill function.
Args:
skill_name: Skill name, e.g. "builtin/valuation_review"
function_name: Name of the function to run, e.g. "build_dcf_report"
function_args: Dict of function arguments
Returns:
Dict with the execution result
"""
pass
class NoSandboxBackend(SandboxBackend):
"""
No-sandbox mode - direct execution (default; development environments only).
Characteristics:
- Imports and executes the skill module directly
- Zero performance overhead
- No isolation; relies on code review for safety
"""
# Mapping from function names to script module names
FUNCTION_TO_SCRIPT_MAP = {
# valuation_review skill
"build_dcf_report": "dcf_report",
"build_owner_earnings_report": "owner_earnings_report",
"build_ev_ebitda_report": "multiple_valuation_report",
"build_residual_income_report": "multiple_valuation_report",
}
def __init__(self):
self._module_cache = {}
def _get_script_name(self, function_name: str) -> str:
"""
Resolve the script module name for a function name.
Uses the predefined mapping first, then falls back to inference.
"""
if function_name in self.FUNCTION_TO_SCRIPT_MAP:
return self.FUNCTION_TO_SCRIPT_MAP[function_name]
# Auto-infer: build_X_report -> X_report
if function_name.startswith("build_") and function_name.endswith("_report"):
return function_name[6:]  # strip the "build_" prefix
return function_name
def execute(
self,
skill_name: str,
function_name: str,
function_args: dict,
) -> dict:
"""Import the module directly and execute the function."""
logger.debug(f"[NoSandbox] Executing skill: {skill_name}.{function_name}")
try:
# Convert the skill path into a module path:
# builtin/valuation_review -> backend.skills.builtin.valuation_review.scripts
module_path = f"backend.skills.{skill_name.replace('/', '.')}.scripts"
# Resolve the script module name from function_name
script_name = self._get_script_name(function_name)
submodule_path = f"{module_path}.{script_name}"
logger.debug(f"[NoSandbox] Importing module: {submodule_path}.{function_name}")
# Cache loaded modules
if submodule_path not in self._module_cache:
self._module_cache[submodule_path] = __import__(
submodule_path,
fromlist=[function_name],
)
module = self._module_cache[submodule_path]
func = getattr(module, function_name)
# Execute the function
result = func(**function_args)
return {
"status": "success",
"result": result,
}
except Exception as e:
logger.error(f"[NoSandbox] Execution failed: {e}")
return {
"status": "error",
"error": str(e),
"error_type": type(e).__name__,
}
class DockerSandboxBackend(SandboxBackend):
"""
Docker sandbox mode - container isolation.
Characteristics:
- Executes inside a Docker container
- Supports resource limits (CPU, memory)
- Supports network isolation
- Ephemeral containers, destroyed after execution
Dependencies:
pip install agentscope-runtime
A running Docker daemon
"""
# Mapping from function names to script module names
FUNCTION_TO_SCRIPT_MAP = {
# valuation_review skill
"build_dcf_report": "dcf_report",
"build_owner_earnings_report": "owner_earnings_report",
"build_ev_ebitda_report": "multiple_valuation_report",
"build_residual_income_report": "multiple_valuation_report",
}
def __init__(self, config: dict):
self.config = config
self._available = None
def _get_script_name(self, function_name: str) -> str:
"""
Resolve the script module name for a function name.
Uses the predefined mapping first, then falls back to inference.
"""
if function_name in self.FUNCTION_TO_SCRIPT_MAP:
return self.FUNCTION_TO_SCRIPT_MAP[function_name]
# Auto-infer: build_X_report -> X_report
if function_name.startswith("build_") and function_name.endswith("_report"):
return function_name[6:]  # strip the "build_" prefix
return function_name
def _check_availability(self) -> bool:
"""Check whether Docker is available."""
if self._available is not None:
return self._available
try:
from agentscope_runtime.sandbox import BaseSandbox
self._available = True
except ImportError:
logger.error(
"AgentScope Runtime is not installed; the Docker sandbox is unavailable. "
"Run: pip install agentscope-runtime"
)
self._available = False
return self._available
def execute(
self,
skill_name: str,
function_name: str,
function_args: dict,
) -> dict:
"""Execute inside a Docker container."""
if not self._check_availability():
raise RuntimeError(
"Docker sandbox unavailable; install agentscope-runtime "
"or switch to SKILL_SANDBOX_MODE=none"
)
from agentscope_runtime.sandbox import BaseSandbox
logger.info(f"[DockerSandbox] Executing skill: {skill_name}.{function_name}")
# Resolve the script module name
script_name = self._get_script_name(function_name)
# Build the code to execute
code = f"""
import sys
import json
# Mounted path
sys.path.insert(0, '/skill/scripts')
# Import the function
from {script_name} import {function_name}
# Execute (double-encode so the args become a valid Python string literal
# even when they contain quotes)
args = json.loads({json.dumps(json.dumps(function_args))})
result = {function_name}(**args)
# Print the result
print(json.dumps({{"status": "success", "result": result}}))
"""
try:
with BaseSandbox(**self.config) as box:
# Mount the skill directory (read-only)
host_skill_path = f"backend/skills/{skill_name}"
box.mount(
host_path=host_skill_path,
container_path="/skill",
read_only=True,
)
# Run the code
exec_result = box.run_ipython_cell(code=code)
# Parse the result
if exec_result.get("exit_code") == 0:
output = exec_result.get("stdout", "")
return json.loads(output)
else:
return {
"status": "error",
"error": exec_result.get("stderr", "Unknown error"),
"exit_code": exec_result.get("exit_code"),
}
except Exception as e:
logger.error(f"[DockerSandbox] Execution failed: {e}")
return {
"status": "error",
"error": str(e),
"error_type": type(e).__name__,
}
class KubernetesSandboxBackend(SandboxBackend):
"""
Kubernetes sandbox mode - Pod isolation (reserved interface).
Characteristics:
- Executes inside a Kubernetes Pod
- Enterprise-grade isolation and scheduling
- Supports resource quotas and namespaces
"""
def __init__(self, config: dict):
self.config = config
raise NotImplementedError(
"Kubernetes sandbox mode is not implemented yet; "
"use SKILL_SANDBOX_MODE=docker or none"
)
def execute(
self,
skill_name: str,
function_name: str,
function_args: dict,
) -> dict:
raise NotImplementedError()
class SkillSandbox:
"""
Skill sandbox executor.
Unified interface that selects a backend based on configuration.
Defaults to none mode (no sandbox).
Example:
>>> sandbox = SkillSandbox()
>>> result = sandbox.execute_skill(
...     skill_name="builtin/valuation_review",
...     function_name="build_dcf_report",
...     function_args={"rows": [...], "current_date": "2024-01-01"}
... )
>>> print(result)
{"status": "success", "result": "..."}
"""
_instance = None
_mode = None
def __new__(cls):
"""Singleton."""
if cls._instance is None:
cls._instance = super().__new__(cls)
cls._instance._initialized = False
return cls._instance
def __init__(self):
if self._initialized:
return
self.mode = os.getenv("SKILL_SANDBOX_MODE", "none").lower()
self._backend = self._create_backend()
self._initialized = True
logger.debug(f"SkillSandbox initialized, mode: {self.mode}")
def _create_backend(self) -> SandboxBackend:
"""Create the backend for the configured mode."""
if self.mode == "none":
logger.debug("Using no-sandbox mode (direct execution)")
return NoSandboxBackend()
elif self.mode == "docker":
config = {
"image": os.getenv(
"SKILL_SANDBOX_IMAGE", "python:3.11-slim"
),
"memory_limit": os.getenv(
"SKILL_SANDBOX_MEMORY_LIMIT", "512m"
),
"cpu_limit": float(
os.getenv("SKILL_SANDBOX_CPU_LIMIT", "1.0")
),
"network": os.getenv("SKILL_SANDBOX_NETWORK", "none"),
"timeout": int(os.getenv("SKILL_SANDBOX_TIMEOUT", "60")),
}
logger.info(f"Using Docker sandbox mode, config: {config}")
return DockerSandboxBackend(config)
elif self.mode == "kubernetes":
config = {
"namespace": os.getenv(
"SKILL_SANDBOX_NAMESPACE", "agentscope"
),
"memory_limit": os.getenv(
"SKILL_SANDBOX_MEMORY_LIMIT", "512Mi"
),
"cpu_limit": os.getenv("SKILL_SANDBOX_CPU_LIMIT", "1000m"),
"timeout": int(os.getenv("SKILL_SANDBOX_TIMEOUT", "60")),
}
logger.info(f"Using Kubernetes sandbox mode, config: {config}")
return KubernetesSandboxBackend(config)
else:
raise ValueError(
f"Unknown sandbox mode: {self.mode}. "
"Set SKILL_SANDBOX_MODE=none/docker/kubernetes"
)
def execute_skill(
self,
skill_name: str,
function_name: str,
function_args: dict | None = None,
) -> Any:
"""
Execute a skill function.
Args:
skill_name: Skill name, e.g. "builtin/valuation_review"
function_name: Function name, e.g. "build_dcf_report"
function_args: Function arguments, default None
Returns:
The function's result (the result field on success; raises on failure)
Raises:
RuntimeError: execution failed
"""
if function_args is None:
function_args = {}
logger.debug(
f"Executing skill: {skill_name}.{function_name} "
f"(mode: {self.mode})"
)
)
result = self._backend.execute(
skill_name=skill_name,
function_name=function_name,
function_args=function_args,
)
if result.get("status") == "error":
error_msg = result.get("error", "Unknown error")
error_type = result.get("error_type", "Exception")
raise RuntimeError(f"[{error_type}] {error_msg}")
return result.get("result")
@property
def current_mode(self) -> str:
"""Return the current sandbox mode."""
return self.mode
def get_sandbox() -> SkillSandbox:
"""
Get the SkillSandbox singleton instance.
Returns:
SkillSandbox instance
"""
return SkillSandbox()
def reset_sandbox():
"""
Reset the sandbox instance (for tests).
"""
SkillSandbox._instance = None
SkillSandbox._mode = None
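The no-sandbox path resolves a script module name from the function name and then imports it dynamically. A condensed sketch of both steps, using the stdlib `os.path` module in place of the repo-specific `backend.skills.*` packages (the trimmed mapping below is illustrative):

```python
# Mapping and inference mirror NoSandboxBackend._get_script_name.
FUNCTION_TO_SCRIPT_MAP = {
    "build_dcf_report": "dcf_report",
    "build_ev_ebitda_report": "multiple_valuation_report",
}

def get_script_name(function_name: str) -> str:
    # Predefined mapping wins; otherwise infer build_X_report -> X_report.
    if function_name in FUNCTION_TO_SCRIPT_MAP:
        return FUNCTION_TO_SCRIPT_MAP[function_name]
    if function_name.startswith("build_") and function_name.endswith("_report"):
        return function_name[len("build_"):]
    return function_name

print(get_script_name("build_owner_earnings_report"))  # owner_earnings_report

# __import__ with a non-empty fromlist returns the leaf submodule, so
# getattr can fetch the target callable -- the pattern the backend caches.
module = __import__("os.path", fromlist=["basename"])
basename = getattr(module, "basename")
print(basename("/skill/scripts/dcf_report.py"))  # dcf_report.py
```

Caching the `__import__` result per submodule path, as the backend does, avoids repeating the import machinery on every skill call.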


@@ -228,12 +228,12 @@ class SettlementCoordinator:
 all_evaluations = {**analyst_evaluations, **pm_evaluations}
-leaderboard = self.storage.load_export_file("leaderboard") or []
+leaderboard = self.storage.load_runtime_leaderboard()
 updated_leaderboard = update_leaderboard_with_evaluations(
     leaderboard,
     all_evaluations,
 )
-self.storage.save_export_file("leaderboard", updated_leaderboard)
+self.storage.persist_runtime_leaderboard(updated_leaderboard)
 self._update_summary_with_baselines(
     date,

File diff suppressed because one or more lines are too long


@@ -1,474 +0,0 @@
{
"baseline_state": {
"initialized": true,
"initial_allocation": {
"AAPL": 52.82787621372046,
"MSFT": 27.48283353510314,
"GOOGL": 50.62714374311787,
"NVDA": 68.65491294557039,
"TSLA": 31.329007841650665,
"META": 21.77700348432056,
"AMZN": 55.94343000358038
}
},
"baseline_vw_state": {
"initialized": true,
"initial_allocation": {
"AAPL": 68.50435598171448,
"MSFT": 28.26372943269579,
"GOOGL": 64.10562703513074,
"NVDA": 105.43488803941372,
"TSLA": 16.283886873554753,
"META": 12.29869945153529,
"AMZN": 44.10358298129591
}
},
"momentum_state": {
"positions": {
"AAPL": 123.26504449868106,
"MSFT": 64.12661158190733,
"GOOGL": 118.13000206727504
},
"cash": 0.0,
"initialized": true,
"last_rebalance_date": "2025-11-03"
},
"equity_history": [
{
"t": 1762070400000,
"v": 100000.0
},
{
"t": 1762156800000,
"v": 99785.98
},
{
"t": 1762243200000,
"v": 99590.68
},
{
"t": 1762329600000,
"v": 99298.78
},
{
"t": 1762416000000,
"v": 98425.78
},
{
"t": 1762502400000,
"v": 98434.93
}
],
"baseline_history": [
{
"t": 1762070400000,
"v": 100000.0
},
{
"t": 1762156800000,
"v": 99760.66
},
{
"t": 1762243200000,
"v": 97620.18
},
{
"t": 1762329600000,
"v": 98327.37
},
{
"t": 1762416000000,
"v": 96286.86
},
{
"t": 1762502400000,
"v": 95539.06
}
],
"baseline_vw_history": [
{
"t": 1762070400000,
"v": 100000.0
},
{
"t": 1762156800000,
"v": 99716.91
},
{
"t": 1762243200000,
"v": 97721.94
},
{
"t": 1762329600000,
"v": 98028.19
},
{
"t": 1762416000000,
"v": 96206.83
},
{
"t": 1762502400000,
"v": 95565.33
}
],
"momentum_history": [
{
"t": 1762070400000,
"v": 100000.0
},
{
"t": 1762156800000,
"v": 99835.69
},
{
"t": 1762243200000,
"v": 99054.53
},
{
"t": 1762329600000,
"v": 99406.81
},
{
"t": 1762416000000,
"v": 98768.07
},
{
"t": 1762502400000,
"v": 97890.54
}
],
"price_history": {
"AAPL": [
{
"date": "2025-11-03",
"price": 269.05
},
{
"date": "2025-11-04",
"price": 270.04
},
{
"date": "2025-11-05",
"price": 270.14
},
{
"date": "2025-11-06",
"price": 269.77
},
{
"date": "2025-11-07",
"price": 268.47
}
],
"MSFT": [
{
"date": "2025-11-03",
"price": 517.03
},
{
"date": "2025-11-04",
"price": 514.33
},
{
"date": "2025-11-05",
"price": 507.16
},
{
"date": "2025-11-06",
"price": 497.1
},
{
"date": "2025-11-07",
"price": 496.82
}
],
"GOOGL": [
{
"date": "2025-11-03",
"price": 283.72
},
{
"date": "2025-11-04",
"price": 277.54
},
{
"date": "2025-11-05",
"price": 284.31
},
{
"date": "2025-11-06",
"price": 284.75
},
{
"date": "2025-11-07",
"price": 278.83
}
],
"NVDA": [
{
"date": "2025-11-03",
"price": 206.88
},
{
"date": "2025-11-04",
"price": 198.69
},
{
"date": "2025-11-05",
"price": 195.21
},
{
"date": "2025-11-06",
"price": 188.08
},
{
"date": "2025-11-07",
"price": 188.15
}
],
"TSLA": [
{
"date": "2025-11-03",
"price": 468.37
},
{
"date": "2025-11-04",
"price": 444.26
},
{
"date": "2025-11-05",
"price": 462.07
},
{
"date": "2025-11-06",
"price": 445.91
},
{
"date": "2025-11-07",
"price": 429.52
}
],
"META": [
{
"date": "2025-11-03",
"price": 637.71
},
{
"date": "2025-11-04",
"price": 627.32
},
{
"date": "2025-11-05",
"price": 635.95
},
{
"date": "2025-11-06",
"price": 618.94
},
{
"date": "2025-11-07",
"price": 621.71
}
],
"AMZN": [
{
"date": "2025-11-03",
"price": 254.0
},
{
"date": "2025-11-04",
"price": 249.32
},
{
"date": "2025-11-05",
"price": 250.2
},
{
"date": "2025-11-06",
"price": 243.04
},
{
"date": "2025-11-07",
"price": 244.41
}
]
},
"portfolio_state": {
"cash": 25395.10000000001,
"positions": {
"MSFT": {
"long": 60,
"short": 0,
"long_cost_basis": 514.2845833333333,
"short_cost_basis": 0.0
},
"GOOGL": {
"long": 50,
"short": 0,
"long_cost_basis": 279.556,
"short_cost_basis": 0.0
},
"META": {
"long": 20,
"short": 0,
"long_cost_basis": 644.155,
"short_cost_basis": 0.0
},
"AMZN": {
"long": 40,
"short": 0,
"long_cost_basis": 247.5725,
"short_cost_basis": 0.0
},
"NVDA": {
"long": 20,
"short": 0,
"long_cost_basis": 203.0,
"short_cost_basis": 0.0
},
"TSLA": {
"long": 0,
"short": 15,
"long_cost_basis": 0.0,
"short_cost_basis": 454.46
},
"AAPL": {
"long": 30,
"short": 0,
"long_cost_basis": 267.89,
"short_cost_basis": 0.0
}
},
"margin_used": 1704.225
},
"all_trades": [
{
"id": "t_20251103_MSFT_0",
"ts": 1762156800000,
"trading_date": "2025-11-03",
"side": "LONG",
"ticker": "MSFT",
"qty": 15,
"price": 519.8
},
{
"id": "t_20251103_GOOGL_1",
"ts": 1762156800000,
"trading_date": "2025-11-03",
"side": "LONG",
"ticker": "GOOGL",
"qty": 20,
"price": 282.18
},
{
"id": "t_20251103_META_2",
"ts": 1762156800000,
"trading_date": "2025-11-03",
"side": "LONG",
"ticker": "META",
"qty": 10,
"price": 656.0
},
{
"id": "t_20251103_AMZN_3",
"ts": 1762156800000,
"trading_date": "2025-11-03",
"side": "LONG",
"ticker": "AMZN",
"qty": 15,
"price": 255.36
},
{
"id": "t_20251104_MSFT_0",
"ts": 1762243200000,
"trading_date": "2025-11-04",
"side": "LONG",
"ticker": "MSFT",
"qty": 25,
"price": 511.76
},
{
"id": "t_20251104_GOOGL_1",
"ts": 1762243200000,
"trading_date": "2025-11-04",
"side": "LONG",
"ticker": "GOOGL",
"qty": 15,
"price": 276.75
},
{
"id": "t_20251104_NVDA_2",
"ts": 1762243200000,
"trading_date": "2025-11-04",
"side": "LONG",
"ticker": "NVDA",
"qty": 20,
"price": 203.0
},
{
"id": "t_20251104_TSLA_3",
"ts": 1762243200000,
"trading_date": "2025-11-04",
"side": "SHORT",
"ticker": "TSLA",
"qty": 15,
"price": 454.46
},
{
"id": "t_20251105_MSFT_0",
"ts": 1762329600000,
"trading_date": "2025-11-05",
"side": "LONG",
"ticker": "MSFT",
"qty": 20,
"price": 513.3
},
{
"id": "t_20251105_GOOGL_1",
"ts": 1762329600000,
"trading_date": "2025-11-05",
"side": "LONG",
"ticker": "GOOGL",
"qty": 15,
"price": 278.87
},
{
"id": "t_20251105_META_2",
"ts": 1762329600000,
"trading_date": "2025-11-05",
"side": "LONG",
"ticker": "META",
"qty": 10,
"price": 632.31
},
{
"id": "t_20251106_AAPL_0",
"ts": 1762416000000,
"trading_date": "2025-11-06",
"side": "LONG",
"ticker": "AAPL",
"qty": 30,
"price": 267.89
},
{
"id": "t_20251107_AMZN_0",
"ts": 1762502400000,
"trading_date": "2025-11-07",
"side": "LONG",
"ticker": "AMZN",
"qty": 25,
"price": 242.9
},
{
"id": "t_20251107_TSLA_1",
"ts": 1762502400000,
"trading_date": "2025-11-07",
"side": "SHORT",
"ticker": "TSLA",
"qty": -5,
"price": 437.92
}
],
"daily_position_history": {},
"last_update_date": "2025-11-07"
}


@@ -1,58 +0,0 @@
[
{
"ticker": "MSFT",
"quantity": 60,
"currentPrice": 496.82,
"marketValue": 29809.2,
"weight": 0.3028
},
{
"ticker": "CASH",
"quantity": 1,
"currentPrice": 25395.1,
"marketValue": 25395.1,
"weight": 0.258
},
{
"ticker": "GOOGL",
"quantity": 50,
"currentPrice": 278.83,
"marketValue": 13941.5,
"weight": 0.1416
},
{
"ticker": "META",
"quantity": 20,
"currentPrice": 621.71,
"marketValue": 12434.2,
"weight": 0.1263
},
{
"ticker": "AMZN",
"quantity": 40,
"currentPrice": 244.41,
"marketValue": 9776.4,
"weight": 0.0993
},
{
"ticker": "AAPL",
"quantity": 30,
"currentPrice": 268.47,
"marketValue": 8054.1,
"weight": 0.0818
},
{
"ticker": "TSLA",
"quantity": -15,
"currentPrice": 429.52,
"marketValue": -6442.8,
"weight": 0.0655
},
{
"ticker": "NVDA",
"quantity": 20,
"currentPrice": 188.15,
"marketValue": 3763.0,
"weight": 0.0382
}
]

File diff suppressed because it is too large


@@ -1,18 +0,0 @@
{
"totalAssetValue": 98434.93,
"totalReturn": -1.57,
"cashPosition": 25395.1,
"tickerWeights": {},
"totalTrades": 14,
"winRate": 0.0,
"bullBear": {
"bull": {
"n": 0,
"win": 0
},
"bear": {
"n": 0,
"win": 0
}
}
}


@@ -1,121 +0,0 @@
{
"totalAssetValue": 98434.93,
"totalReturn": -1.57,
"cashPosition": 25395.1,
"tickerWeights": {
"MSFT": 0.3028,
"GOOGL": 0.1416,
"META": 0.1263,
"AMZN": 0.0993,
"NVDA": 0.0382,
"TSLA": -0.0655,
"AAPL": 0.0818
},
"totalTrades": 14,
"pnlPct": -1.57,
"balance": 98434.93,
"equity": [
{
"t": 1762070400000,
"v": 100000.0
},
{
"t": 1762156800000,
"v": 99785.98
},
{
"t": 1762243200000,
"v": 99590.68
},
{
"t": 1762329600000,
"v": 99298.78
},
{
"t": 1762416000000,
"v": 98425.78
},
{
"t": 1762502400000,
"v": 98434.93
}
],
"baseline": [
{
"t": 1762070400000,
"v": 100000.0
},
{
"t": 1762156800000,
"v": 99760.66
},
{
"t": 1762243200000,
"v": 97620.18
},
{
"t": 1762329600000,
"v": 98327.37
},
{
"t": 1762416000000,
"v": 96286.86
},
{
"t": 1762502400000,
"v": 95539.06
}
],
"baseline_vw": [
{
"t": 1762070400000,
"v": 100000.0
},
{
"t": 1762156800000,
"v": 99716.91
},
{
"t": 1762243200000,
"v": 97721.94
},
{
"t": 1762329600000,
"v": 98028.19
},
{
"t": 1762416000000,
"v": 96206.83
},
{
"t": 1762502400000,
"v": 95565.33
}
],
"momentum": [
{
"t": 1762070400000,
"v": 100000.0
},
{
"t": 1762156800000,
"v": 99835.69
},
{
"t": 1762243200000,
"v": 99054.53
},
{
"t": 1762329600000,
"v": 99406.81
},
{
"t": 1762416000000,
"v": 98768.07
},
{
"t": 1762502400000,
"v": 97890.54
}
]
}

Some files were not shown because too many files have changed in this diff.