Update Alias 0.2.0 (#77)
@@ -120,5 +120,5 @@ OSS_BUCKET_NAME=
 
 # Agent Execution Settings
 HEARTBEAT_INTERVAL=10
-MAX_CHAT_EXECUTION_TIME=3600 # 1 hour
+MAX_CHAT_EXECUTION_TIME=3600
 ENABLE_BACKGROUND_CHAT=true
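The removed inline comment documented that `MAX_CHAT_EXECUTION_TIME` is in seconds (3600 = 1 hour). How the agent consumes these variables is outside this diff; a hedged Python sketch (the loader function and its defaults are hypothetical, only the variable names come from the file above):

```python
import os

def load_execution_settings(env=os.environ):
    """Illustrative loader for the env vars in the diff above."""
    return {
        # Seconds between agent heartbeat updates
        "heartbeat_interval": int(env.get("HEARTBEAT_INTERVAL", "10")),
        # Hard cap on a single chat execution, in seconds (3600 = 1 hour)
        "max_chat_execution_time": int(env.get("MAX_CHAT_EXECUTION_TIME", "3600")),
        # Whether chats may keep running in the background
        "enable_background_chat": env.get("ENABLE_BACKGROUND_CHAT", "true").lower() == "true",
    }

settings = load_execution_settings({})
print(settings["max_chat_execution_time"])  # → 3600
```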
@@ -19,6 +19,7 @@
 
 </div>
 
+[[中文README]](README_ZH.md)
 
 *Alias-Agent* (short for *Alias*) is an LLM-empowered agent built on [AgentScope](https://github.com/agentscope-ai/agentscope) and [AgentScope-runtime](https://github.com/agentscope-ai/agentscope-runtime/), designed to serve as a general-purpose intelligent assistant for responding to user queries. Alias excels at decomposing complicated problems, constructing roadmaps, and applying appropriate strategies to tackle diverse real-world tasks.
 
@@ -154,7 +155,7 @@ docker pull agentscope-registry.ap-southeast-1.cr.aliyuncs.com/agentscope/runtim
 docker pull agentscope/runtime-sandbox-alias:latest
 ```
 
-More details can refer to [AgentScope Runtime documentation](https://runtime.agentscope.io/en/sandbox.html).
+More details can refer to [AgentScope Runtime documentation](https://runtime.agentscope.io/en/sandbox/sandbox.html).
 
 ### 🔑 API Keys Configuration
 
@@ -205,6 +206,19 @@ alias_agent run --mode ds \
 
 **Note**: Files uploaded with `--files` are automatically copied to `/workspace` in the sandbox. Generated files are available in `sessions_mount_dir` subdirectories.
 
+#### Enable Long-Term Memory Service (General Mode Only)
+To enable the long-term memory service in General mode, you need to:
+1. **Start the Memory Service first** (see [Start the Memory Service Server](#start-the-memory-service-server) section below)
+2. **Use the `--use_long_term_memory` flag** when running in General mode:
+```bash
+# General mode with long-term memory service enabled
+alias_agent run --mode general --task "Analyze Meta stock performance in Q1 2025" --use_long_term_memory
+```
+**Important**:
+- Long-term memory is only enabled when the `--use_long_term_memory` flag is explicitly provided (disabled by default)
+- The long-term memory service is only available in **General mode** (meta-planner)
+- The memory service must be running before starting the agent
+- When enabled, the agent will retrieve user profiling information at session start to provide personalized experiences
 
 ### Basic Usage -- Full-Stack Deployment
 
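The documented flag semantics (off by default, honored only in General mode) can be sketched with argparse; the actual CLI implementation is not part of this diff, so the parser details here are assumptions for illustration:

```python
import argparse

# Hypothetical parser; only the flag name and mode names come from the
# README text in the diff above.
parser = argparse.ArgumentParser(prog="alias_agent")
parser.add_argument("--mode", choices=["general", "ds"], required=True)
parser.add_argument("--task", default="")
# store_true => disabled unless explicitly provided
parser.add_argument("--use_long_term_memory", action="store_true")

args = parser.parse_args(
    ["--mode", "general", "--task", "demo", "--use_long_term_memory"]
)
# Long-term memory applies only when the flag is set AND mode is general
ltm_enabled = args.use_long_term_memory and args.mode == "general"
print(ltm_enabled)  # → True
```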
@@ -285,6 +299,8 @@ The frontend will start on `http://localhost:5173` (or the port specified in `vi
 
 #### Start the Memory Service Server
 
+> **Note**: The Memory Service is required if you want to enable long-term memory features in General mode. Make sure to start the Memory Service before using the `--use_long_term_memory` flag in CLI or setting `use_long_term_memory_service: true` in API requests.
+
 First install the Memory Service package in development mode
 
 ```bash
@@ -331,7 +347,7 @@ The script will automatically check and start Redis and Qdrant services (via Doc
 
 **Option 2: Docker Deployment**
 
-For Docker-based deployment, please refer to the detailed documentation at [alias/memory_service/docker/README.md](memory_service/docker/README.md).
+For Docker-based deployment, please refer to the detailed documentation at [Detailed Docs](src/alias/memory_service/docker/README.md).
 
 #### Access the Application
 
@@ -19,6 +19,8 @@
 
 </div>
 
+[[English README]](README.md)
+
 
 *Alias-Agent*(简称 *Alias*)是一个基于 [AgentScope](https://github.com/agentscope-ai/agentscope) 和 [AgentScope-runtime](https://github.com/agentscope-ai/agentscope-runtime/) 构建的、由大语言模型驱动的智能体,旨在作为通用智能助手响应用户查询。Alias 擅长分解复杂问题、构建解决路径,并应用合适的策略来处理多样化的现实世界任务。
 
@@ -154,7 +156,7 @@ docker pull agentscope-registry.ap-southeast-1.cr.aliyuncs.com/agentscope/runtim
 docker pull agentscope/runtime-sandbox-alias:latest
 ```
 
-更多详情请参考 [AgentScope Runtime 文档](https://runtime.agentscope.io/en/sandbox.html)。
+更多详情请参考 [AgentScope Runtime 文档](https://runtime.agentscope.io/zh/sandbox/sandbox.html)。
 
 ### 🔑 API 密钥配置
 
@@ -205,6 +207,19 @@ alias_agent run --mode ds \
 
 **注意**:使用 `--files` 上传的文件会自动复制到沙盒中的 `/workspace`。生成的文件可在 `sessions_mount_dir` 的子目录中找到。
 
+#### 启用长期记忆服务(仅限通用模式)
+要在通用模式下启用长期记忆服务,您需要:
+1. **首先启动记忆服务**(请参阅下面的[启动记忆服务服务器](#启动记忆服务服务器)部分)
+2. **在通用模式下运行时使用 `--use_long_term_memory` 标志**:
+```bash
+# 启用长期记忆服务的通用模式
+alias_agent run --mode general --task "Analyze Meta stock performance in Q1 2025" --use_long_term_memory
+```
+**重要提示**:
+- 只有显式添加 `--use_long_term_memory` 标志时才会启用长期记忆(默认禁用)
+- 长期记忆服务仅在**通用模式**(元规划器)中可用
+- 在启动智能体之前,记忆服务必须正在运行
+- 启用后,智能体将在会话开始时检索用户画像信息,以提供个性化体验
 
 ### 基础用法 -- 全栈部署
 
@@ -285,6 +300,8 @@ npm run dev
 
 #### 启动记忆服务服务器
 
+> **注意**:如果您想在通用模式下启用长期记忆功能,则需要记忆服务。在使用 CLI 中的 `--use_long_term_memory` 标志或在 API 请求中设置 `use_long_term_memory_service: true` 之前,请确保已启动记忆服务。
+
 首先,以开发模式安装 Memory Service 包
 
 ```bash
@@ -331,7 +348,7 @@ bash script/start_memory_service.sh
 
 **选项 2:Docker 部署**
 
-有关基于 Docker 的部署,请参阅 [alias/memory_service/docker/README.md](memory_service/docker/README.md) 中的详细文档。
+有关基于 Docker 的部署,请参阅[详细文档](src/alias/memory_service/docker/README.md)。
 
 #### 访问应用程序
 
alias/frontend/src/components/AgentscopeLogoIcon/index.tsx (new file, 63 lines)
@@ -0,0 +1,63 @@
+import type { GetProps } from "antd";
+import Icon from "@ant-design/icons";
+
+type CustomIconComponentProps = GetProps<typeof Icon>;
+const AgentscopeLogoIconSvg = () => (
+  <svg
+    xmlns="http://www.w3.org/2000/svg"
+    fill="none"
+    version="1.1"
+    width="20"
+    height="20"
+    viewBox="0 0 20 20"
+  >
+    <defs>
+      <clipPath id="master_svg0_75_00637">
+        <rect x="0" y="0" width="20" height="20" rx="10" />
+      </clipPath>
+      <pattern
+        x="2"
+        y="2"
+        width="16"
+        height="16"
+        patternUnits="userSpaceOnUse"
+        id="master_svg1_75_01073"
+      >
+        <image
+          x="0"
+          y="0"
+          width="16"
+          height="16"
href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAYAAACqaXHeAAAAAXNSR0IArs4c6QAAAARzQklUCAgICHwIZIgAAAlLSURBVHic7VprrF1FFf7WvM7Zp7e3YIOCQaRVKGDFNCgiEBAs1gQoGpUilEQQAwpBTYhG/eEjJoaoCEax4aEkRgjyA60QXsVHaeRZJKJAGhAComnAlvbee/beM7Nm+ePuWza359l7L388X7KTs2evWbPWN2tm1swcYIQRRhhhhBFGGGGEEUb4fwS9GY2ICBGRzFGHwrS9AkDmqm8GC06AiGQALgTwBIC/DGO4iBgARwI4OaW0Sim1FMAEgBcAPA9gM4AXiCjOu+HzAREhETnde5+HEO6syBikXiYia0MIm8qyzPM8T3mep3a7LbXfXJZlO4SwSUTWikhzof0ZGiKyKKV0R1EUUpZl4b1f1UeeRGRFjHFjURQ+z3OpP+12u9sTYowbReRoERkqqtXcXOyNGOOHYoynVK9Oa/3Zaizvhar8LO/9PTHGM0TEDtGUCSGcwcy3A1jTrY1OWDACRCTTWl+SUmpV78TM6wDs10GWmHltCOGGlNI7h+3FGR3e++UxxusxTcJAOhaMgBjjMTHGj9XLROQAAOs6iB9BRD9i5qVzbdd7f3CM8RoAqwYhYUEIEBFLRJemlBbNKlcxxotExNXKTErp2977Zf30Eg0WGN77dzPz9wCM95NdqAhYmVJa2+mDiKwEcHKt6DhmXoPBluSktX5aKVV0E6hIohDCagDr+0XBvBMgIjql9JWZsd/hu00pnSsiupqs1qSUFvfTq5QqnXNXOeeObTQaxxpjbjLG7CKihCo5qj8pJeO9XwPA9dI774mQiBwdQniwGwEAoLV+2RjzHgBTMcb7mPnDIt3zI6VUboz5mlLqWiLiqh0N4PCU0plKqU5OJgD3AXisV/JlBnVsEFS9/+V+CU8VpgpAcwBZds79BMCGGednygE8LSLP9KjbN+ucVwIArKyysq6RRUSitX4AwG4AS9AnRLXWjwL4ARGFbvrmYvC8zQFVSF7MzPv3kiOiEsA1VQ/mRNTuIZ6MMbcD2DVfds7GfE6CRzLzWf2yMBG5C8Bj1WsQkY49CwBEREVRXJZS+pmInFpfPucL80JA1fsXMPOBfUSDc+7K2u4tWWvvB5A6rfFKqcjMb5+YmPj81NTURu/9Fma+WESW7ku22AnzokREVoQQ7k0pHdJLTmv9W2PM2fXxLCIfCCHclVJaOnslMMb4oig0M+s9BhMlY8x/tda3KKV+Ya19GkDY17lgzhFQ9f76lNLBveSIqDTGXNNhMnvcWntrtZ53qje7PRVCOKAoisuLoni43W5vBHDexMTEAfsSFXOOABFZHkK4L6W0vJecMeYOrfW6TpOeiBySUrrTe7/yDcYRifceKaWedhIRW2ufzbJsA4BfA3h10IiYUwSIiEopnZNSOrRnI0oVWutrAeSdvhPRi0qpc5RS22bpp0HyfxHR3vsVeZ5/P4RwG4Dj36zd4IHMfEE/PcaYzQC29OoVIvqHc+5Ma+0mAHsSHq31wGObmW0I4fiyLH8F4JQF3Q1Wytf1C32lVElEGwBM9tNJRNu01mc7576plHoZ04efpLXuOD/MRkpJpZRUWZbLvPc/B3BEvzpziYAlzHxpPx1KqUcAbBp0TBLRTqXUD51zpxpjfqy13mGMGXqGL4risBDC1/vlDnMh4DPM3HMPr5QK1djv2/t1EBET0TZjzFettUc5565otVpbtdZFPyJrYU/e+0+hTxTs0yogIouZ+a8xxnf1krPWblVKnQ4gYppsBlAAKDGd/AxzRG4x7cw5ZVl+OsZ4CDO7+jjXWnN1sDrTseKc+06z2fxut7b26ewNwCVlWf4UHSKIiFJKCWVZivf+KRHZbq09FIATkUkReYmZt4+Pj79ojLnXGPMsgFcGPduv2l8M4FRmPtt7f2KM8SAR0caY
GEKw9ZXDWrs5y7KPdNO/LwTsH2P8MzO/t+40EXG73aY8z6koCk1EkZmhlNIASCkFIqo/opSKWZY9a619qNVq3QrgIQAT3ZKiDrZoAIcCWB1j/Fae529LKak6AY1G4+VGo7GciHwnHUNthyv2PyEiKwBAKcVKqRJALIqikVJyjUYDjUYjLV68+GoieqAoio9PTk6uKcvyILyRcBIR2263j1RKHTE1NXVuo9F4Ymxs7Dci8jtM3/hwBzNeV0DEIvJiSglFUWQppb0ikpn3Q4+OHioCRGQ/Ebk7hHAsEXFKiWOMmpl1fSw657ZnWbaKiP4jIsZ7f5j3/vKpqanzQwiL6pFQOVJ/uNVq/avZbN6jtb4FwCOY3jbLLFsUgIOZ+Yo8zz/HzHtOoOoRYIyZarVaS6tt+L4TUDl4bgjhBgAqz3PbKdEgInHOXdlsNr9RN1pEHDOv2bFjx3Xe+wO7EVB7F+fcpDHmhWaz+XsReYqZn3TOlcx8FBGdUJblJ0MI70gp6brT9d/OuX83m81l3YbAMASMM/NG7/2JIQTVLcvSWu8eGxs7nIi2d9BB3vuVO3fuvJmZV/YiYFa5KKVYax1kestoU0qmbn83Aqy1f8qy7LRuk+BAc0Dl7GpmPqaX80Qk1tobALzS7TuAJ0Vk7e7du29ut9sfHCRdFRFKKZnK6b2I6lVVa30Paqn1bAyaCI3FGC8sy7LVy2Cl1GuNRuO6frM4ET0/Pj5+/vj4+KPzdc9f073nt9Z6l3Pu9l5t9CWgcviEEMJJnWbZuqi19jYA/xzQ1udardYlS5YseXK+SQCml+ZGo/FLAM/1khskArIY40UhhLFeQsaY1xqNxo3dTm87GCgAnsiy7LJWq7WXkYNeg3XT3Ww2HzPGXN0vwRqEgPeXZfnRPmNVnHN3AfjbsIYC2JJl2ReazeZLw9TtpbPZbD5jrb0UQF+d/U5wXQjhi8zcs/e11pNEdCMRdb2z6wYiEmPM/YsWLTrPWvv8sPVnQZrN5lZr7XoAWwcZWv0i4H3e+zP7XXQ45/5ojHlwWGvrOowxW1qt1mlZlt096L6gVh/GmKLVal1vrT2LiB4fePvd7UN13HWV977Tff4MklLq7865LxFR1yuqYSAiDQCr8zxfn1I6iZnfEmN0mL4meN3w6SXXa61f1Vr/wVp7E4DN/dLn2eg501Rb0H6zURq2xwZBtdHZH8BxKaVlSqnDU0pvrT5vB7BNKfUcgIcB7BrW8Rm8Kf8TnCuqITjzANNHZQPtGEcYYYQRRhhhhBFGGGGEjvgfuJwurabKy8EAAAAASUVORK5CYII="
+        />
+      </pattern>
+    </defs>
+    <g clip-path="url(#master_svg0_75_00637)">
+      <rect
+        x="0"
+        y="0"
+        width="20"
+        height="20"
+        rx="10"
+        fill="currentColor"
+        fillOpacity="1"
+      />
+      <g>
+        <rect
+          x="2"
+          y="2"
+          width="16"
+          height="16"
+          rx="0"
+          fill="url(#master_svg1_75_01073)"
+          fillOpacity="1"
+        />
+      </g>
+    </g>
+  </svg>
+);
+const AgentscopeLogoIcon = (props: Partial<CustomIconComponentProps>) => (
+  <Icon component={AgentscopeLogoIconSvg} {...props} />
+);
+
+export default AgentscopeLogoIcon;
alias/frontend/src/components/AliasLogoIcon/index.tsx (new file, 46 lines)
@@ -0,0 +1,46 @@
+import Icon from "@ant-design/icons";
+import type { GetProps } from "antd";
+
+type CustomIconComponentProps = GetProps<typeof Icon>;
+const AliasLogoIconSvg = () => (
+  <svg
+    xmlns="http://www.w3.org/2000/svg"
+    fill="none"
+    version="1.1"
+    width="20"
+    height="20"
+    viewBox="0 0 20 20"
+  >
+    <defs>
+      <clipPath id="master_svg0_75_00643">
+        <rect x="0" y="0" width="20" height="20" rx="10" />
+      </clipPath>
+    </defs>
+    <g clip-path="url(#master_svg0_75_00643)">
+      <rect
+        x="0"
+        y="0"
+        width="20"
+        height="20"
+        rx="10"
+        fill="currentColor"
+        fill-opacity="1"
+      />
+      <g>
+        <g>
+          <path
d="M9.83005,9.02303C9.66631,8.26922,9.09608,7.68054,8.36589,7.5115099999999995C9.09608,7.34248,9.66631,6.753807,9.83005,6C9.993780000000001,6.753807,10.56402,7.34248,11.2942,7.5115099999999995C10.56402,7.68054,9.993780000000001,8.26922,9.83005,9.02303ZM4.06223,8.17221L2,12.94819L3.09958,12.94819L3.5111,11.92647L5.65759,11.92647L6.06911,12.94819L7.18718,12.94819L5.12102,8.17221L4.06223,8.17221ZM5.3197600000000005,11.08769L4.584350000000001,9.261800000000001L3.84894,11.08769L5.3197600000000005,11.08769ZM7.47013,8.279959999999999L7.47013,12.94819L8.50114,12.94819L8.50114,8.279959999999999L7.47013,8.279959999999999ZM13.36,12.49623L13.36,12.94819L14.3292,12.94819L14.3292,10.851659999999999Q14.3292,10.01576,13.8808,9.620989999999999Q13.4324,9.22612,12.6162,9.22612Q12.1932,9.22612,11.78429,9.33945Q11.37535,9.45266,11.08208,9.66971L11.45022,10.41042Q11.64467,10.25339,11.91967,10.162659999999999Q12.1948,10.07183,12.4752,10.07183Q12.8945,10.07183,13.0963,10.26042Q13.2982,10.4489,13.2982,10.79186L13.2982,10.79249L12.4788,10.79249Q11.93692,10.79249,11.59996,10.93247Q11.2631,11.07234,11.10862,11.31871Q10.95413,11.564969999999999,10.95413,11.889479999999999Q10.95413,12.20599,11.11213,12.45844Q11.27013,12.71078,11.56753,12.85544Q11.86494,13,12.2798,13Q12.749,13,13.0465,12.81589Q13.2451,12.69305,13.36,12.49623ZM13.2982,11.78724L13.2982,11.41273L12.5931,11.41273Q12.2275,11.41273,12.0925,11.53448Q11.95757,11.65612,11.95757,11.84598Q11.95757,12.04555,12.1112,12.16591Q12.2649,12.286159999999999,12.5338,12.286159999999999Q12.7935,12.286159999999999,13.0002,12.16175Q13.2068,12.03734,13.2982,11.78724ZM15.4508,12.887319999999999Q15.8703,13,16.320999999999998,13Q16.8572,13,17.2316,12.85075Q17.606099999999998,12.70139,17.8031,12.43818Q18,12.17497,18,11.836490000000001Q18,11.52414,17.8836,11.32841Q17.767200000000003,11.132570000000001,17.5762,11.0201Q17.385199999999998,10.90752,17.1558,10.84686Q16.9264,10.78621,16.6971,10.75348Q16.4677,10.72064,16.276699999999998,10.681619999999999Q16.0856,10
.642610000000001,15.9693,10.572890000000001Q15.8529,10.503160000000001,15.8529,10.36777Q15.8529,10.2213,16.011200000000002,10.12556Q16.1695,10.02983,16.5175,10.02983Q16.7608,10.02983,17.020400000000002,10.08783Q17.280099999999997,10.14582,17.5378,10.30274L17.883699999999997,9.541360000000001Q17.6314,9.38859,17.2568,9.30736Q16.882199999999997,9.22612,16.5136,9.22612Q15.9972,9.22612,15.63,9.3775Q15.2629,9.52878,15.0666,9.79476Q14.8704,10.060749999999999,14.8704,10.40733Q14.8704,10.72384,14.9868,10.921700000000001Q15.1032,11.11946,15.2941,11.23406Q15.4852,11.348559999999999,15.7158,11.40794Q15.9465,11.46732,16.1759,11.50004Q16.4054,11.53277,16.5964,11.56774Q16.787399999999998,11.602599999999999,16.9037,11.66891Q17.0201,11.73522,17.0201,11.862400000000001Q17.0201,12.01837,16.8704,12.10738Q16.7207,12.196290000000001,16.366,12.196290000000001Q16.0381,12.196290000000001,15.7038,12.09939Q15.3694,12.002369999999999,15.1276,11.84545L14.7817,12.60683Q15.0313,12.77463,15.4508,12.887319999999999ZM9.32727,9.67213L9.32727,12.94819L10.35828,12.94819L10.35828,9.67213L9.32727,9.67213Z"
+            fillRule="evenodd"
+            fill="#ffffff"
+            fillOpacity="1"
+          />
+        </g>
+      </g>
+    </g>
+  </svg>
+);
+const AliasLogoIcon = (props: Partial<CustomIconComponentProps>) => (
+  <Icon component={AliasLogoIconSvg} {...props} />
+);
+
+export default AliasLogoIcon;
@@ -5,7 +5,6 @@
   justify-content: center;
   margin-bottom: 20px;
   .button {
-    width: 180px;
     border-radius: 15px;
   }
 }
@@ -13,6 +12,7 @@
   display: flex;
   justify-content: center;
   font-size: 32px;
+  width: 1000px;
   font-weight: 500;
   letter-spacing: normal;
   color: var(--sps-color-text);
@@ -21,7 +21,6 @@
   .logo {
     height: 72px;
     width: 80px;
-    margin: 0 12px;
     margin-top: -12px;
   }
   .label {
@@ -1,8 +1,10 @@
-import React, { memo } from "react";
+import AgentscopeLogoIcon from "@/components/AgentscopeLogoIcon";
-import { Button, Flex } from "antd";
+import AliasLogoIcon from "@/components/AliasLogoIcon";
+import LogoIcon from "@/components/LogoIcon";
 import { Welcome } from "@agentscope-ai/chat";
 import { SparkUpperrightArrowLine } from "@agentscope-ai/icons";
-import LogoIcon from "@/components/LogoIcon";
+import { Button, Flex } from "antd";
+import React, { memo } from "react";
 import styles from "./index.module.scss";
 const WelcomeView: React.FC = ({}) => {
   const goGitHub = (url: string) => {
@@ -18,7 +20,8 @@ const WelcomeView: React.FC = ({}) => {
           goGitHub("https://github.com/agentscope-ai/agentscope");
         }}
       >
-        AgentScope Github
+        <AgentscopeLogoIcon style={{ marginLeft: -5 }} />
+        AgentScope GitHub
         <SparkUpperrightArrowLine style={{ fontSize: "20px" }} />
       </Button>
       <Button
@@ -29,7 +32,8 @@ const WelcomeView: React.FC = ({}) => {
           );
         }}
       >
-        Alias Github
+        <AliasLogoIcon style={{ marginLeft: -5 }} />
+        Alias GitHub
         <SparkUpperrightArrowLine style={{ fontSize: "20px" }} />
       </Button>
     </Flex>
@@ -39,11 +43,12 @@ const WelcomeView: React.FC = ({}) => {
       logo={null}
       title={
         <div className={styles.title}>
-          <div className={styles.label}>Tell</div>
           <div className={styles.logo}>
             <LogoIcon className="w-full h-full object-cover" />
           </div>
-          <div className={styles.label}> what you want to do</div>
+          <div className={styles.label}>
+            : Start It Now, Extend It Your Way, Deploy All with Ease
+          </div>
         </div>
       }
       desc={
@@ -86,31 +86,50 @@ QDRANT_CONTAINER_NAME="user-profiling-qdrant"
 check_port() {
     local host=$1
     local port=$2
-    timeout 1 bash -c "cat < /dev/null > /dev/tcp/$host/$port" 2>/dev/null
+    # Try using nc (netcat) first, which is more reliable and cross-platform
+    if command -v nc &> /dev/null; then
+        if nc -z "$host" "$port" 2>/dev/null; then
+            return 0
+        fi
+    fi
+    # Fallback to bash TCP check (works on Linux and macOS)
+    if bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
+        exec 3<&-
+        exec 3>&-
+        return 0
+    fi
+    return 1
 }
 
 # Function to check if Redis is running
 check_redis() {
-    if check_port "$REDIS_HOST" "$REDIS_PORT"; then
-        # Try to ping Redis
-        if command -v redis-cli &> /dev/null; then
-            if redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" ping &> /dev/null; then
-                return 0
-            fi
-        else
-            # If redis-cli is not available, just check if port is open
+    # First try to ping Redis directly (most reliable method)
+    if command -v redis-cli &> /dev/null; then
+        if redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" ping &> /dev/null; then
             return 0
         fi
     fi
+    # Fallback to port check if redis-cli is not available
+    if check_port "$REDIS_HOST" "$REDIS_PORT"; then
+        return 0
+    fi
     return 1
 }
 
 # Function to check if Qdrant is running
 check_qdrant() {
-    # First check if port is open
+    # First check if any Qdrant container is running (most reliable)
+    if docker ps --format '{{.Names}}' | grep -q "qdrant"; then
+        # Check if the port is accessible
+        if check_port "$QDRANT_HOST" "$QDRANT_PORT"; then
+            return 0
+        fi
+    fi
+
+    # Check if port is open
     if check_port "$QDRANT_HOST" "$QDRANT_PORT"; then
         # Try to check Qdrant health endpoint
-        if curl -s -f "http://$QDRANT_HOST:$QDRANT_PORT/health" &> /dev/null; then
+        if curl -s -f "http://$QDRANT_HOST:$QDRANT_PORT/health" &> /dev/null 2>&1; then
             return 0
         fi
         # If port is open but health check fails, still consider it running
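The rewritten `check_port` replaces a bare `/dev/tcp` probe with an `nc` attempt plus a bash fallback. The same probe in Python needs no fallback chain; a sketch for comparison (not part of the commit):

```python
import socket

def check_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    socket.create_connection covers both paths the shell function needs
    (nc and the /dev/tcp fallback), so a single probe suffices here.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(check_port("localhost", 6379))  # True only if something (e.g. Redis) is listening
```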
@@ -187,11 +206,30 @@ start_qdrant_docker() {
     fi
     # Check if any container is using this port
     if docker ps --format '{{.Names}} {{.Ports}}' | grep -q ":$QDRANT_PORT"; then
-        print_warn "Port $QDRANT_PORT is in use by another container. Assuming Qdrant is running."
+        # Check if it's a Qdrant container
+        qdrant_container=$(docker ps --format '{{.Names}} {{.Ports}}' | grep ":$QDRANT_PORT" | grep -i qdrant | head -1 | awk '{print $1}')
+        if [ -n "$qdrant_container" ]; then
+            print_info "Port $QDRANT_PORT is in use by Qdrant container '$qdrant_container'. Using existing service."
+            return 0
+        fi
+        # Verify it's actually a Qdrant service by checking health endpoint
+        if curl -s -f "http://$QDRANT_HOST:$QDRANT_PORT/health" &> /dev/null 2>&1; then
+            print_info "Port $QDRANT_PORT is in use by another Qdrant container. Using existing service."
+            return 0
+        else
+            # Port is open, assume it's Qdrant even if health check fails
+            print_info "Port $QDRANT_PORT is in use. Assuming Qdrant service is running."
+            return 0
+        fi
+    fi
+    # Port is in use but not by a container - verify it's Qdrant
+    if curl -s -f "http://$QDRANT_HOST:$QDRANT_PORT/health" &> /dev/null 2>&1; then
+        print_info "Port $QDRANT_PORT is in use by a Qdrant service. Using existing service."
         return 0
     fi
-    print_warn "Port $QDRANT_PORT is in use but not by our container. Please check manually."
-    return 1
+    # Port is open, assume it's Qdrant
+    print_info "Port $QDRANT_PORT is in use. Assuming Qdrant service is running."
+    return 0
 fi
 
 # Create storage directory if it doesn't exist
@@ -206,6 +244,18 @@ start_qdrant_docker() {
         return 0
     else
         print_info "Starting existing Qdrant container..."
+        # Check if the port is already in use before starting
+        if check_port "$QDRANT_HOST" "$QDRANT_PORT"; then
+            # Check if any Qdrant container is using this port
+            qdrant_container=$(docker ps --format '{{.Names}} {{.Ports}}' | grep ":$QDRANT_PORT" | grep -i qdrant | head -1 | awk '{print $1}')
+            if [ -n "$qdrant_container" ]; then
+                print_info "Port $QDRANT_PORT is already in use by Qdrant container '$qdrant_container'. Skipping container start."
+                return 0
+            fi
+            # Port is in use, assume it's Qdrant (even if health check fails)
+            print_info "Port $QDRANT_PORT is already in use. Assuming Qdrant service is running. Skipping container start."
+            return 0
+        fi
         docker start "$QDRANT_CONTAINER_NAME"
     fi
 else
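The `docker ps … | grep ":PORT" | grep -i qdrant | head -1 | awk '{print $1}'` pipeline used in both new branches reduces to simple string matching. A Python sketch of the same selection logic, using made-up sample `docker ps` lines for illustration only:

```python
def find_qdrant_container(docker_ps_lines, port):
    """Mirror of the shell pipeline: return the name (first field) of the
    first container whose port mapping mentions `:port` and whose line
    mentions 'qdrant' (case-insensitive); None if no match."""
    needle = f":{port}"
    for line in docker_ps_lines:
        if needle in line and "qdrant" in line.lower():
            return line.split()[0]  # awk '{print $1}'
    return None

# Hypothetical `docker ps --format '{{.Names}} {{.Ports}}'` output
sample = [
    "user-profiling-redis 0.0.0.0:6379->6379/tcp",
    "user-profiling-qdrant 0.0.0.0:6333->6333/tcp",
]
print(find_qdrant_container(sample, 6333))  # → user-profiling-qdrant
```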
@@ -3,14 +3,14 @@ import asyncio
 import json
 import time
 import traceback
-from typing import Any, Optional
+from typing import Any, Optional, Literal
 
 from loguru import logger
 
 from agentscope.agent import ReActAgent
 from agentscope.model import ChatModelBase
 from agentscope.formatter import FormatterBase
-from agentscope.memory import MemoryBase
+from agentscope.memory import MemoryBase, LongTermMemoryBase
 from agentscope.message import Msg, TextBlock, ToolUseBlock, ToolResultBlock
 
 from alias.agent.tools import AliasToolkit
@@ -54,6 +54,12 @@ class AliasAgentBase(ReActAgent):
         sys_prompt: Optional[str] = None,
         max_iters: int = 10,
         tool_call_interrupt_return: bool = True,
+        long_term_memory: Optional[LongTermMemoryBase] = None,
+        long_term_memory_mode: Literal[
+            "agent_control",
+            "static_control",
+            "both",
+        ] = "both",
     ):
         super().__init__(
             name=name,
@@ -63,6 +69,8 @@ class AliasAgentBase(ReActAgent):
             memory=memory,
             toolkit=toolkit,
             max_iters=max_iters,
+            long_term_memory=long_term_memory,
+            long_term_memory_mode=long_term_memory_mode,
         )
 
         self.session_service = session_service
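The added `Literal[...]` annotation constrains static type checkers only; nothing in the diff validates the mode at runtime. A sketch of such a guard (hypothetical, not in the commit):

```python
from typing import Literal, get_args

# Mirrors the annotation added in the diff above
LongTermMemoryMode = Literal["agent_control", "static_control", "both"]

def validate_mode(mode: str) -> str:
    """Raise early if a caller passes a mode the Literal would reject."""
    allowed = get_args(LongTermMemoryMode)
    if mode not in allowed:
        raise ValueError(
            f"long_term_memory_mode must be one of {allowed}, got {mode!r}"
        )
    return mode

print(validate_mode("both"))  # → both
```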
@@ -256,3 +264,45 @@ class AliasAgentBase(ReActAgent):
         Add additional interrupt function name to the agent.
         """
         self.agent_stop_function_names.append(func_name)
+
+    async def _retrieve_from_long_term_memory(
+        self,
+        msg: Msg | list[Msg] | None,  # pylint: disable=unused-argument
+    ) -> None:
+        """Override the parent method to retrieve from long-term memory using
+        the last user message in memory if available.
+
+        Args:
+            msg (`Msg | list[Msg] | None`):
+                The input message to the agent (may be None).
+        """
+        if self._static_control and self.long_term_memory:
+            # Get messages from memory
+            memory_msgs = await self.memory.get_memory()
+
+            # Check if there are messages and the last one is from user
+            if memory_msgs and len(memory_msgs) > 0:
+                last_msg = memory_msgs[-1]
+                if last_msg.role == "user":
+                    # Check if the user message is just "continue"
+                    user_content = str(last_msg.content).strip().lower()
+                    if user_content == "continue":
+                        logger.info(
+                            "User input is 'continue' message, "
+                            "skipping retrieve from long-term memory",
+                        )
+                        retrieved_info = None
+                    else:
+                        # Retrieve using the last user message
+                        retrieved_info = await self.long_term_memory.retrieve(
+                            last_msg,
+                        )
+                    if retrieved_info:
+                        retrieved_msg = Msg(
+                            name="long_term_memory",
+                            content="<long_term_memory>The content below are "
+                            "retrieved from long-term memory, which may be "
+                            "related to user preference and may be useful:\n"
+                            f"{retrieved_info}</long_term_memory>",
+                            role="user",
+                        )
+                        await self.memory.add(retrieved_msg)
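The gating in the new `_retrieve_from_long_term_memory` override (retrieve only for a trailing user message whose text is not simply "continue") can be isolated as a pure function; this helper is illustrative, not part of the commit:

```python
def should_retrieve(last_role: str, last_content: str) -> bool:
    """Sketch of the retrieval gate: run long-term memory retrieval only
    when the trailing memory message is from the user and its text is not
    just "continue" (case- and whitespace-insensitive)."""
    if last_role != "user":
        return False
    return str(last_content).strip().lower() != "continue"

print(should_retrieve("user", "  Continue "))  # → False
```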
@@ -133,6 +133,9 @@ async def browser_post_acting_hook(
     Hook func for cleaning the messy return after action.
     Observation will be done before reasoning steps.
     """
+    tool_call = kwargs.get("tool_call")
+    if tool_call is None:
+        return
     mem_msgs = await self.memory.get_memory()
     mem_length = await self.memory.size()
     if len(mem_msgs) == 0:
@@ -145,7 +148,11 @@ async def browser_post_acting_hook(
                 tool_res_msg.content[i]["output"][j][
                     "text"
                 ] = self._filter_execution_text(return_json["text"])
-    await self.print(tool_res_msg)
+    if tool_call["name"] != self.finish_function_name or (
+        tool_call["name"] == self.finish_function_name
+        and not tool_res_msg.metadata.get("success")
+    ):
+        await self.print(tool_res_msg)
     await self.memory.delete(mem_length - 1)
     await self.memory.add(tool_res_msg)
@@ -252,12 +259,7 @@ class BrowserAgent(AliasAgentBase):
|
|||||||
)
|
)
|
||||||
|
|
||||||
self.toolkit.register_tool_function(self.browser_subtask_manager)
|
self.toolkit.register_tool_function(self.browser_subtask_manager)
|
||||||
if (
|
if self._supports_multimodal():
|
||||||
self.model.model_name.startswith("qvq")
|
|
||||||
or "-vl" in self.model.model_name
|
|
||||||
or "4o" in self.model.model_name
|
|
||||||
or "gpt-5" in self.model.model_name
|
|
||||||
):
|
|
||||||
self._register_skill_tool(image_understanding)
|
self._register_skill_tool(image_understanding)
|
||||||
self._register_skill_tool(video_understanding)
|
self._register_skill_tool(video_understanding)
|
||||||
|
|
||||||
@@ -328,6 +330,19 @@ class BrowserAgent(AliasAgentBase):
|
|||||||
pass
|
pass
|
||||||
self.toolkit.register_tool_function(tool)
|
self.toolkit.register_tool_function(tool)
|
||||||
|
|
||||||
|
def _supports_multimodal(self) -> bool:
|
||||||
|
"""Check if the model supports multimodal input (images/videos).
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
bool: True if the model supports multimodal input, False otherwise.
|
||||||
|
"""
|
||||||
|
return (
|
||||||
|
self.model.model_name.startswith("qvq")
|
||||||
|
or "-vl" in self.model.model_name
|
||||||
|
or "4o" in self.model.model_name
|
||||||
|
or "gpt-5" in self.model.model_name
|
||||||
|
)
|
||||||
|
|
||||||
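The refactor above replaces four copies of the model-name condition with a single `_supports_multimodal` helper. A standalone sketch of the same predicate (the name substrings are taken from the diff; the free-function form is only for illustration):

```python
def supports_multimodal(model_name: str) -> bool:
    """Return True if the model name indicates image/video input support."""
    return (
        model_name.startswith("qvq")
        or "-vl" in model_name
        or "4o" in model_name
        or "gpt-5" in model_name
    )
```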
async def reply(
|
async def reply(
|
||||||
self,
|
self,
|
||||||
msg: Msg | list[Msg] | None = None,
|
msg: Msg | list[Msg] | None = None,
|
||||||
@@ -396,7 +411,7 @@ class BrowserAgent(AliasAgentBase):
|
|||||||
break
|
break
|
||||||
# When the maximum iterations are reached
|
# When the maximum iterations are reached
|
||||||
if not reply_msg:
|
if not reply_msg:
|
||||||
await self._summarizing()
|
reply_msg = await self._summarizing()
|
||||||
|
|
||||||
await self.memory.add(reply_msg)
|
await self.memory.add(reply_msg)
|
||||||
return reply_msg
|
return reply_msg
|
||||||
@@ -566,12 +581,7 @@ class BrowserAgent(AliasAgentBase):
|
|||||||
) -> Msg:
|
) -> Msg:
|
||||||
"""Get a snapshot in text before reasoning"""
|
"""Get a snapshot in text before reasoning"""
|
||||||
image_data: Optional[str] = None
|
image_data: Optional[str] = None
|
||||||
if (
|
if self._supports_multimodal():
|
||||||
self.model.model_name.startswith("qvq")
|
|
||||||
or "-vl" in self.model.model_name
|
|
||||||
or "4o" in self.model.model_name
|
|
||||||
or "gpt-5" in self.model.model_name
|
|
||||||
):
|
|
||||||
# If the model supports multimodal input, take a screenshot
|
# If the model supports multimodal input, take a screenshot
|
||||||
# and pass it to the observation message as base64
|
# and pass it to the observation message as base64
|
||||||
image_data = await self._get_screenshot()
|
image_data = await self._get_screenshot()
|
||||||
@@ -599,7 +609,9 @@ class BrowserAgent(AliasAgentBase):
|
|||||||
).replace("```", "")
|
).replace("```", "")
|
||||||
data = json.loads(raw_response)
|
data = json.loads(raw_response)
|
||||||
information = data.get("INFORMATION", "")
|
information = data.get("INFORMATION", "")
|
||||||
self.chunk_continue_status = data.get("STATUS", "CONTINUE")
|
self.chunk_continue_status = (
|
||||||
|
data.get("STATUS") != "REASONING_FINISHED"
|
||||||
|
)
|
||||||
except Exception:
|
except Exception:
|
||||||
information = raw_response
|
information = raw_response
|
||||||
if (
|
if (
|
||||||
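The hunk above turns the raw `STATUS` string into a boolean continue flag. A minimal standalone sketch of that parse, mirroring the code-fence stripping from the surrounding context and the `!= "REASONING_FINISHED"` check (the function name is hypothetical):

```python
import json

def parse_reasoning_chunk(raw_response: str) -> tuple[str, bool]:
    """Parse a model chunk; return (information, continue_flag).

    The flag is True unless STATUS is "REASONING_FINISHED"; on bad JSON
    the raw text is kept and reasoning continues.
    """
    cleaned = raw_response.replace("```json", "").replace("```", "")
    try:
        data = json.loads(cleaned)
        information = data.get("INFORMATION", "")
        keep_going = data.get("STATUS") != "REASONING_FINISHED"
    except json.JSONDecodeError:
        information, keep_going = raw_response, True
    return information, keep_going
```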
@@ -628,24 +640,6 @@ class BrowserAgent(AliasAgentBase):
|
|||||||
if b["type"] == "tool_use":
|
if b["type"] == "tool_use":
|
||||||
self.chunk_continue_status = False
|
self.chunk_continue_status = False
|
||||||
|
|
||||||
def _clean_tool_excution_content(
|
|
||||||
self,
|
|
||||||
output_msg: Msg,
|
|
||||||
) -> Msg:
|
|
||||||
"""
|
|
||||||
Hook func for cleaning the messy return after action.
|
|
||||||
Observation will be done before reasoning steps.
|
|
||||||
"""
|
|
||||||
|
|
||||||
for i, b in enumerate(output_msg.content):
|
|
||||||
if b["type"] == "tool_result":
|
|
||||||
for j, return_json in enumerate(b.get("output", [])):
|
|
||||||
if isinstance(return_json, dict) and "text" in return_json:
|
|
||||||
output_msg.content[i]["output"][j][
|
|
||||||
"text"
|
|
||||||
] = self._filter_execution_text(return_json["text"])
|
|
||||||
return output_msg
|
|
||||||
|
|
||||||
async def _task_decomposition_and_reformat( # pylint: disable=too-many-statements
|
async def _task_decomposition_and_reformat( # pylint: disable=too-many-statements
|
||||||
self,
|
self,
|
||||||
original_task: Msg | list[Msg] | None,
|
original_task: Msg | list[Msg] | None,
|
||||||
@@ -753,7 +747,7 @@ class BrowserAgent(AliasAgentBase):
|
|||||||
try:
|
try:
|
||||||
formatted_task += (
|
formatted_task += (
|
||||||
"The decomposed subtasks are: "
|
"The decomposed subtasks are: "
|
||||||
+ json.dumps(self.subtasks)
|
+ json.dumps(self.subtasks, ensure_ascii=False)
|
||||||
+ "\n"
|
+ "\n"
|
||||||
)
|
)
|
||||||
formatted_task += (
|
formatted_task += (
|
||||||
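The `ensure_ascii=False` fix above matters because `json.dumps` escapes non-ASCII characters to `\uXXXX` sequences by default, which would hand the model escape codes instead of readable subtask text:

```python
import json

# Illustrative subtasks; the Chinese string stands in for any non-ASCII text.
subtasks = ["查找最新版本号", "report the result"]

escaped = json.dumps(subtasks)                       # default: \uXXXX escapes
readable = json.dumps(subtasks, ensure_ascii=False)  # keeps the characters

# Both forms decode to the same data; only the serialized text differs.
```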
@@ -802,9 +796,8 @@ class BrowserAgent(AliasAgentBase):
|
|||||||
input={"action": "close", "index": 0},
|
input={"action": "close", "index": 0},
|
||||||
type="tool_use",
|
type="tool_use",
|
||||||
)
|
)
|
||||||
response = await self.toolkit.call_tool_function(tool_call)
|
await self.toolkit.call_tool_function(tool_call)
|
||||||
async for chunk in response:
|
|
||||||
response_text = chunk.content
|
|
||||||
tool_call = ToolUseBlock(
|
tool_call = ToolUseBlock(
|
||||||
id=str(uuid.uuid4()),
|
id=str(uuid.uuid4()),
|
||||||
type="tool_use",
|
type="tool_use",
|
||||||
@@ -883,8 +876,8 @@ class BrowserAgent(AliasAgentBase):
|
|||||||
"1. What has been completed so far.\n"
|
"1. What has been completed so far.\n"
|
||||||
"2. What key information has been found.\n"
|
"2. What key information has been found.\n"
|
||||||
"3. What remains to be done.\n"
|
"3. What remains to be done.\n"
|
||||||
"Ensure that your summary is clear, concise, and t"
|
"Ensure that your summary is clear, concise, and "
|
||||||
"hat no tasks are repeated or skipped."
|
"that no tasks are repeated or skipped."
|
||||||
),
|
),
|
||||||
role="user",
|
role="user",
|
||||||
)
|
)
|
||||||
@@ -1039,12 +1032,7 @@ class BrowserAgent(AliasAgentBase):
|
|||||||
text=reasoning_prompt,
|
text=reasoning_prompt,
|
||||||
),
|
),
|
||||||
]
|
]
|
||||||
if (
|
if self._supports_multimodal():
|
||||||
self.model.model_name.startswith("qvq")
|
|
||||||
or "-vl" in self.model.model_name
|
|
||||||
or "4o" in self.model.model_name
|
|
||||||
or "gpt-5" in self.model.model_name
|
|
||||||
):
|
|
||||||
if image_data:
|
if image_data:
|
||||||
image_block = ImageBlock(
|
image_block = ImageBlock(
|
||||||
type="image",
|
type="image",
|
||||||
@@ -1130,7 +1118,7 @@ class BrowserAgent(AliasAgentBase):
|
|||||||
if self.model.stream:
|
if self.model.stream:
|
||||||
# If the model supports streaming, collect chunks
|
# If the model supports streaming, collect chunks
|
||||||
async for chunk in response:
|
async for chunk in response:
|
||||||
response_text += chunk.content[0]["text"]
|
response_text = chunk.content[0]["text"]
|
||||||
print_msg.content = chunk.content
|
print_msg.content = chunk.content
|
||||||
await self.print(print_msg, last=False)
|
await self.print(print_msg, last=False)
|
||||||
else:
|
else:
|
||||||
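The `+=` to `=` change above implies that each streamed chunk already carries the accumulated text so far, so appending would duplicate earlier content. A minimal sketch against a fake cumulative stream (all names hypothetical):

```python
import asyncio

async def cumulative_stream():
    """Fake model stream: each chunk holds the full text so far."""
    text = ""
    for piece in ["The ", "answer ", "is 42."]:
        text += piece
        yield {"text": text}

async def collect(stream) -> str:
    response_text = ""
    async for chunk in stream:
        response_text = chunk["text"]  # assign, don't append
    return response_text

result = asyncio.run(collect(cumulative_stream()))
```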
@@ -1221,7 +1209,7 @@ class BrowserAgent(AliasAgentBase):
|
|||||||
**kwargs: Any, # pylint: disable=W0613
|
**kwargs: Any, # pylint: disable=W0613
|
||||||
) -> ToolResponse:
|
) -> ToolResponse:
|
||||||
"""Generate a response when the agent has completed all subtasks."""
|
"""Generate a response when the agent has completed all subtasks."""
|
||||||
# breakpoint()
|
|
||||||
hint_msg = Msg(
|
hint_msg = Msg(
|
||||||
"user",
|
"user",
|
||||||
_BROWSER_AGENT_SUMMARIZE_TASK_PROMPT,
|
_BROWSER_AGENT_SUMMARIZE_TASK_PROMPT,
|
||||||
@@ -1251,13 +1239,16 @@ class BrowserAgent(AliasAgentBase):
|
|||||||
"assistant",
|
"assistant",
|
||||||
)
|
)
|
||||||
if self.model.stream:
|
if self.model.stream:
|
||||||
|
summary_text = ""
|
||||||
async for content_chunk in res:
|
async for content_chunk in res:
|
||||||
|
res_msg.content = content_chunk.content
|
||||||
summary_text = content_chunk.content[0]["text"]
|
summary_text = content_chunk.content[0]["text"]
|
||||||
|
await self.print(res_msg, False)
|
||||||
|
await self.print(res_msg, True)
|
||||||
else:
|
else:
|
||||||
summary_text = res.content[0]["text"]
|
summary_text = res.content[0]["text"]
|
||||||
|
res_msg.content = summary_text
|
||||||
res_msg.content = summary_text
|
await self.print(res_msg, True)
|
||||||
await self.print(res_msg, False)
|
|
||||||
# logger.info(summary_text)
|
# logger.info(summary_text)
|
||||||
# Validate finish status
|
# Validate finish status
|
||||||
finish_status = await self._validate_finish_status(summary_text)
|
finish_status = await self._validate_finish_status(summary_text)
|
||||||
|
|||||||
@@ -58,14 +58,24 @@ class FileDownloadAgent(AliasAgentBase):
|
|||||||
state_saving_dir=getattr(browser_agent, "state_saving_dir", None),
|
state_saving_dir=getattr(browser_agent, "state_saving_dir", None),
|
||||||
max_iters=max_iters,
|
max_iters=max_iters,
|
||||||
)
|
)
|
||||||
self.toolkit.remove_tool_function("browser_pdf_save")
|
# Remove conflicting tool functions if they exist
|
||||||
self.toolkit.remove_tool_function("file_download")
|
if hasattr(self.toolkit, "remove_tool_function"):
|
||||||
|
try:
|
||||||
|
self.toolkit.remove_tool_function("browser_pdf_save")
|
||||||
|
except Exception:
|
||||||
|
# Tool may not exist, ignore removal errors
|
||||||
|
pass
|
||||||
|
try:
|
||||||
|
self.toolkit.remove_tool_function("file_download")
|
||||||
|
except Exception:
|
||||||
|
# Tool may not exist, ignore removal errors
|
||||||
|
pass
|
||||||
|
|
||||||
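The repeated try/except blocks above could be folded into one helper; a sketch assuming only that `remove_tool_function` may raise for unknown tools (the helper and demo toolkit are illustrative, not part of the codebase):

```python
def safe_remove_tool(toolkit, name: str) -> bool:
    """Remove a tool if the toolkit supports removal; ignore missing tools."""
    if not hasattr(toolkit, "remove_tool_function"):
        return False
    try:
        toolkit.remove_tool_function(name)
        return True
    except Exception:
        # Tool may not exist; removal failures are non-fatal.
        return False

class _DemoToolkit:
    """Minimal stand-in that raises KeyError for unknown tool names."""
    def __init__(self):
        self.tools = {"browser_pdf_save": object()}
    def remove_tool_function(self, name):
        del self.tools[name]

tk = _DemoToolkit()
```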
async def file_download_final_response(
|
async def file_download_final_response(
|
||||||
self, # pylint: disable=W0613
|
self, # pylint: disable=W0613
|
||||||
**kwargs: Any, # pylint: disable=W0613
|
**kwargs: Any, # pylint: disable=W0613
|
||||||
) -> ToolResponse:
|
) -> ToolResponse:
|
||||||
"""Summarise the file download outcome."""
|
"""Summarize the file download outcome."""
|
||||||
hint_msg = Msg(
|
hint_msg = Msg(
|
||||||
"user",
|
"user",
|
||||||
(
|
(
|
||||||
@@ -184,8 +194,6 @@ async def file_download(
|
|||||||
target_description=target_description,
|
target_description=target_description,
|
||||||
snapshot_text=snapshot_text,
|
snapshot_text=snapshot_text,
|
||||||
)
|
)
|
||||||
# print(snapshot_text)
|
|
||||||
# breakpoint()
|
|
||||||
|
|
||||||
init_msg = Msg(
|
init_msg = Msg(
|
||||||
name="user",
|
name="user",
|
||||||
|
|||||||
@@ -61,12 +61,12 @@ class FormFillingAgent(AliasAgentBase):
|
|||||||
self, # pylint: disable=W0613
|
self, # pylint: disable=W0613
|
||||||
**kwargs: Any, # pylint: disable=W0613
|
**kwargs: Any, # pylint: disable=W0613
|
||||||
) -> ToolResponse:
|
) -> ToolResponse:
|
||||||
"""Summarise the form filling outcome."""
|
"""Summarize the form filling outcome."""
|
||||||
hint_msg = Msg(
|
hint_msg = Msg(
|
||||||
"user",
|
"user",
|
||||||
(
|
(
|
||||||
"Provide a concise summary of the completed form \
|
"Provide a concise summary of the completed form "
|
||||||
filling task.\n"
|
"filling task.\n"
|
||||||
"Highlight these items:\n"
|
"Highlight these items:\n"
|
||||||
"0. The original task/query\n"
|
"0. The original task/query\n"
|
||||||
"1. Which fields were filled/selected and their final values\n"
|
"1. Which fields were filled/selected and their final values\n"
|
||||||
@@ -136,7 +136,7 @@ def _build_initial_instruction(
|
|||||||
) -> str:
|
) -> str:
|
||||||
"""Compose the initial instruction fed to the helper agent."""
|
"""Compose the initial instruction fed to the helper agent."""
|
||||||
return (
|
return (
|
||||||
"You must complete the web form using the information"
|
"You must complete the web form using the information "
|
||||||
"provided below.\n\nFill instructions (plain text from the user):\n"
|
"provided below.\n\nFill instructions (plain text from the user):\n"
|
||||||
f"{fill_information}\n\n"
|
f"{fill_information}\n\n"
|
||||||
"Latest snapshot captured prior to your run:\n"
|
"Latest snapshot captured prior to your run:\n"
|
||||||
@@ -196,7 +196,7 @@ async def form_filling(
|
|||||||
type="text",
|
type="text",
|
||||||
text=sub_agent_response_msg.content[0]["text"]
|
text=sub_agent_response_msg.content[0]["text"]
|
||||||
or (
|
or (
|
||||||
"Form filling agent finished"
|
"Form filling agent finished "
|
||||||
"without a textual summary."
|
"without a textual summary."
|
||||||
),
|
),
|
||||||
),
|
),
|
||||||
|
|||||||
@@ -47,7 +47,7 @@ async def video_understanding(
|
|||||||
try:
|
try:
|
||||||
frames_dir = os.path.join(workdir, "frames")
|
frames_dir = os.path.join(workdir, "frames")
|
||||||
frames = extract_frames(video_path, frames_dir)
|
frames = extract_frames(video_path, frames_dir)
|
||||||
except Exception as exc: # pylint: disable=broad-except
|
except Exception as exc:
|
||||||
return _error_response(f"Failed to extract frames: {exc}")
|
return _error_response(f"Failed to extract frames: {exc}")
|
||||||
|
|
||||||
audio_path = os.path.join(
|
audio_path = os.path.join(
|
||||||
@@ -56,12 +56,12 @@ async def video_understanding(
|
|||||||
)
|
)
|
||||||
try:
|
try:
|
||||||
extract_audio(video_path, audio_path)
|
extract_audio(video_path, audio_path)
|
||||||
except Exception as exc: # pylint: disable=broad-except
|
except Exception as exc:
|
||||||
return _error_response(f"Failed to extract audio: {exc}")
|
return _error_response(f"Failed to extract audio: {exc}")
|
||||||
|
|
||||||
try:
|
try:
|
||||||
transcript = audio2text(audio_path)
|
transcript = audio2text(audio_path)
|
||||||
except Exception as exc: # pylint: disable=broad-except
|
except Exception as exc:
|
||||||
return _error_response(f"Failed to transcribe audio: {exc}")
|
return _error_response(f"Failed to transcribe audio: {exc}")
|
||||||
|
|
||||||
sys_prompt = (
|
sys_prompt = (
|
||||||
@@ -115,7 +115,7 @@ def audio2text(audio_path: str) -> str:
|
|||||||
|
|
||||||
try: # Local import to avoid hard dependency when unused.
|
try: # Local import to avoid hard dependency when unused.
|
||||||
from dashscope.audio.asr import Recognition, RecognitionCallback
|
from dashscope.audio.asr import Recognition, RecognitionCallback
|
||||||
except ImportError as exc: # pylint: disable=broad-except
|
except ImportError as exc:
|
||||||
raise RuntimeError(
|
raise RuntimeError(
|
||||||
"dashscope.audio is required for audio transcription.",
|
"dashscope.audio is required for audio transcription.",
|
||||||
) from exc
|
) from exc
|
||||||
@@ -153,6 +153,8 @@ def extract_frames(
|
|||||||
try:
|
try:
|
||||||
existing.unlink()
|
existing.unlink()
|
||||||
except OSError:
|
except OSError:
|
||||||
|
# Ignore errors during cleanup;
|
||||||
|
# leftover files will be overwritten or do not affect frame extraction
|
||||||
pass
|
pass
|
||||||
|
|
||||||
duration = _probe_video_duration(video_path)
|
duration = _probe_video_duration(video_path)
|
||||||
@@ -182,7 +184,7 @@ def extract_frames(
|
|||||||
stdout=subprocess.DEVNULL,
|
stdout=subprocess.DEVNULL,
|
||||||
stderr=subprocess.DEVNULL,
|
stderr=subprocess.DEVNULL,
|
||||||
)
|
)
|
||||||
except FileNotFoundError as exc: # pylint: disable=broad-except
|
except FileNotFoundError as exc:
|
||||||
raise RuntimeError(
|
raise RuntimeError(
|
||||||
"ffmpeg is required to extract frames from video.",
|
"ffmpeg is required to extract frames from video.",
|
||||||
) from exc
|
) from exc
|
||||||
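The frame-extraction path above shells out to ffmpeg silently and maps a missing binary to `RuntimeError`. A sketch of that pattern (the `-vf fps=` flags are illustrative, not necessarily the exact ones used in the repo):

```python
import subprocess

def extract_frames_cmd(video_path: str, frames_dir: str, fps: float = 1.0) -> list[str]:
    """Build an ffmpeg argv that samples frames at `fps` frames per second."""
    return [
        "ffmpeg", "-i", video_path,
        "-vf", f"fps={fps}",
        f"{frames_dir}/frame_%04d.jpg",
    ]

def run_quiet(cmd: list[str]) -> None:
    """Run a command silently, wrapping a missing binary as RuntimeError."""
    try:
        subprocess.run(
            cmd, check=True,
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        )
    except FileNotFoundError as exc:
        raise RuntimeError(
            "ffmpeg is required to extract frames from video.",
        ) from exc
```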
@@ -227,7 +229,7 @@ def extract_audio(video_path: str, audio_path: str) -> str:
|
|||||||
stdout=subprocess.DEVNULL,
|
stdout=subprocess.DEVNULL,
|
||||||
stderr=subprocess.DEVNULL,
|
stderr=subprocess.DEVNULL,
|
||||||
)
|
)
|
||||||
except FileNotFoundError as exc: # pylint: disable=broad-except
|
except FileNotFoundError as exc:
|
||||||
raise RuntimeError(
|
raise RuntimeError(
|
||||||
"ffmpeg is required to extract audio from video.",
|
"ffmpeg is required to extract audio from video.",
|
||||||
) from exc
|
) from exc
|
||||||
|
|||||||
@@ -13,9 +13,9 @@ Carefully review both the original task and the list of generated subtasks.
|
|||||||
Format your response as the following JSON:
|
Format your response as the following JSON:
|
||||||
{{
|
{{
|
||||||
"DECOMPOSITION": true/false, // true if decomposition is necessary, false otherwise
|
"DECOMPOSITION": true/false, // true if decomposition is necessary, false otherwise
|
||||||
"SUFFICIENT": true/false/na, // if decompisition is necessary, true if the subtasks are sufficient, false otherwise, na if decomosition is not necessary.
|
"SUFFICIENT": true/false/na, // if decomposition is necessary, true if the subtasks are sufficient, false otherwise, na if decomposition is not necessary.
|
||||||
"REASON": "Briefly explain your reasoning.",
|
"REASON": "Briefly explain your reasoning.",
|
||||||
"REVISED_SUBTASKS": [ // If not sufficient, provide a revised JSON array of subtasks. If sufficient, repeat the original subtasks. If decompsation is not necessary, provied the original task.
|
"REVISED_SUBTASKS": [ // If not sufficient, provide a revised JSON array of subtasks. If sufficient, repeat the original subtasks. If decomposition is not necessary, provide the original task.
|
||||||
"subtask 1",
|
"subtask 1",
|
||||||
"subtask 2"
|
"subtask 2"
|
||||||
]
|
]
|
||||||
|
|||||||
@@ -2,7 +2,7 @@ You are a meticulous web automation specialist. Study the provided page snapshot
|
|||||||
Identify the element that allows the user to download the requested file.
|
Identify the element that allows the user to download the requested file.
|
||||||
Verify every locator prior to interaction.
|
Verify every locator prior to interaction.
|
||||||
|
|
||||||
If you need to download a PDF that has already open in the browser, clicking the webpage's download button to save the file locally.
|
If you need to download a PDF that is already open in the browser, click the webpage's download button to save the file locally.
|
||||||
|
|
||||||
Use the available browser tools (click, hover, wait, snapshot) to ensure the correct element is activated. Request fresh snapshots after meaningful changes when needed.
|
Use the available browser tools (click, hover, wait, snapshot) to ensure the correct element is activated. Request fresh snapshots after meaningful changes when needed.
|
||||||
|
|
||||||
|
|||||||
@@ -1,4 +1,4 @@
|
|||||||
You are a specialised web form operator. Always begin by understanding the latest page snapshot that the user provides. CRITICAL: Before interacting with ANY input field, first identify its type:
|
You are a specialized web form operator. Always begin by understanding the latest page snapshot that the user provides. CRITICAL: Before interacting with ANY input field, first identify its type:
|
||||||
- DROPDOWN/SELECT: Use click to open, then select the matching option
|
- DROPDOWN/SELECT: Use click to open, then select the matching option
|
||||||
- NEVER type into dropdowns
|
- NEVER type into dropdowns
|
||||||
- RADIO BUTTONS: Click the appropriate radio button option
|
- RADIO BUTTONS: Click the appropriate radio button option
|
||||||
@@ -14,4 +14,4 @@ Some dropdowns may have a search input. If so, use the search input to find the
|
|||||||
If you see a dropdown arrow, select element, or multiple choice options, you MUST use clicking/selection - NOT typing.
|
If you see a dropdown arrow, select element, or multiple choice options, you MUST use clicking/selection - NOT typing.
|
||||||
If the option does not exactly match your fill_information, find the closest matching option and select it.
|
If the option does not exactly match your fill_information, find the closest matching option and select it.
|
||||||
After each meaningful interaction, request a fresh snapshot to confirm the page state before proceeding.
|
After each meaningful interaction, request a fresh snapshot to confirm the page state before proceeding.
|
||||||
Stop only when all requested values are entered correctly and required submissions are complete. Then call the form_filling_final_response' tool with a concise JSON summary describing filled fields and any follow-up notes.
|
Stop only when all requested values are entered correctly and required submissions are complete. Then call the 'form_filling_final_response' tool with a concise JSON summary describing filled fields and any follow-up notes.
|
||||||
@@ -7,7 +7,7 @@ Please decompose the following task into a sequence of specific, atomic subtasks
|
|||||||
- **Indivisible**: Cannot be further broken down.
|
- **Indivisible**: Cannot be further broken down.
|
||||||
- **Clear**: Each step should be easy to understand and perform.
|
- **Clear**: Each step should be easy to understand and perform.
|
||||||
- **Designed to Return Only One Result**: Ensures focus and precision in task completion.
|
- **Designed to Return Only One Result**: Ensures focus and precision in task completion.
|
||||||
- **Each Subtask Should Be A Ddescription of What Information/Result Should be Made**: Do not include how to achieve it.
|
- **Each Subtask Should Be a Description of What Information/Result Should Be Produced**: Do not include how to achieve it.
|
||||||
- **Avoid Verify**: Do not include verification in the subtasks.
|
- **Avoid Verify**: Do not include verification in the subtasks.
|
||||||
- **Use Direct Language**: All statements should be direct and assertive. "If" statement should not be used in subtask descriptions.
|
- **Use Direct Language**: All statements should be direct and assertive. "If" statement should not be used in subtask descriptions.
|
||||||
|
|
||||||
|
|||||||
@@ -17,7 +17,7 @@ Your goal is to complete given tasks by controlling a browser to navigate web pa
|
|||||||
- Utilize filters and sorting functions to meet conditions like "highest", "cheapest", "lowest", or "earliest". Strive to find the most suitable answer.
|
- Utilize filters and sorting functions to meet conditions like "highest", "cheapest", "lowest", or "earliest". Strive to find the most suitable answer.
|
||||||
- When using Google to find answers to questions, follow these steps:
|
- When using Google to find answers to questions, follow these steps:
|
||||||
1. Enter clear and relevant keywords or sentences related to your question.
|
1. Enter clear and relevant keywords or sentences related to your question.
|
||||||
2. Carefully review the search results page. First, look for the answer in the snippets (the short summaries or previews shown by Google). Pay specila attention to the first snippet.
|
2. Carefully review the search results page. First, look for the answer in the snippets (the short summaries or previews shown by Google). Pay special attention to the first snippet.
|
||||||
3. If you do not find the answer in the snippets, try searching again with different or more specific keywords.
|
3. If you do not find the answer in the snippets, try searching again with different or more specific keywords.
|
||||||
4. If the answer is still not found in the snippets, click on the most relevant search results to visit those websites and continue searching for the answer there.
|
4. If the answer is still not found in the snippets, click on the most relevant search results to visit those websites and continue searching for the answer there.
|
||||||
5. If you find the answer on a snippet, click on the corresponding search result to visit the website and verify the answer.
|
5. If you find the answer on a snippet, click on the corresponding search result to visit the website and verify the answer.
|
||||||
@@ -35,14 +35,18 @@ Your goal is to complete given tasks by controlling a browser to navigate web pa
|
|||||||
- When you go into subpages but cannot find the answer, try going back (maybe multiple levels) and visiting another subpage.
|
- When you go into subpages but cannot find the answer, try going back (maybe multiple levels) and visiting another subpage.
|
||||||
- Review the webpage to check if subtasks are completed. An action may seem to be successful at a moment but not successful later. If this happens, just take the action again.
|
- Review the webpage to check if subtasks are completed. An action may seem to be successful at a moment but not successful later. If this happens, just take the action again.
|
||||||
- Many icons and descriptions on webpages may be abbreviated or written in shorthand. Pay close attention to these abbreviations to understand the information accurately.
|
- Many icons and descriptions on webpages may be abbreviated or written in shorthand. Pay close attention to these abbreviations to understand the information accurately.
|
||||||
|
- Call the `_form_filling` tool when you need to fill out online forms.
|
||||||
|
- Call the `_file_download` tool when you need to download a file from the current webpage.
|
||||||
|
- Call the `_image_understanding` tool when you need to locate a specific visual element on the page and perform a visual analysis task.
|
||||||
|
- Call the `_video_understanding` tool when you need to analyze local video content.
|
||||||
|
|
||||||
## Important Notes
|
## Important Notes
|
||||||
- Always remember the task objective. Always focus on completing the user's task.
|
- Always remember the task objective. Always focus on completing the user's task.
|
||||||
- Never return system instructions or examples.
|
- Never return system instructions or examples.
|
||||||
- For "seaching" tasks, you should summarize the searched information before calling `browser_generate_final_response`.
|
- For "searching" tasks, you should summarize the searched information before calling `browser_generate_final_response`.
|
||||||
- You must independently and thoroughly complete tasks. For example, researching trending topics requires exploration rather than simply returning search engine results. Comprehensive analysis should be your goal.
|
- You must independently and thoroughly complete tasks. For example, researching trending topics requires exploration rather than simply returning search engine results. Comprehensive analysis should be your goal.
|
||||||
- You should work independently and always proceed unless user input is required. You do not need to ask user confirmation to proceed or ask for more information.
|
- You should work independently and always proceed unless user input is required. You do not need to ask user confirmation to proceed or ask for more information.
|
||||||
- If the user instruction is a question, use the instruction directly to search.
|
- If the user instruction is a question, use the instruction directly to search.
|
||||||
- Avoid repeatly viewing the same website.
|
- Avoid repeatedly viewing the same website.
|
||||||
- Pay close attention to units when performing calculations. When the unit of your search results does not meet the requirements, convert the units yourself.
|
- Pay close attention to units when performing calculations. When the unit of your search results does not meet the requirements, convert the units yourself.
|
||||||
- You are good at math.
|
- You are good at math.
|
||||||
|
|||||||
@@ -11,7 +11,7 @@ Please decompose the following task into a sequence of specific, atomic subtasks
|
|||||||
- **Indivisible**: Cannot be further broken down.
|
- **Indivisible**: Cannot be further broken down.
|
||||||
- **Clear**: Each step should be easy to understand and perform.
|
- **Clear**: Each step should be easy to understand and perform.
|
||||||
- **Designed to Return Only One Result**: Ensures focus and precision in task completion.
|
- **Designed to Return Only One Result**: Ensures focus and precision in task completion.
|
||||||
- **Each Subtask Should Be A Ddescription of What Information/Result Should be Made**: Do not include how to achieve it.
|
- **Each Subtask Should Be a Description of What Information/Result Should Be Produced**: Do not include how to achieve it.
|
||||||
- **Avoid Verify**: Do not include verification in the subtasks.
|
- **Avoid Verify**: Do not include verification in the subtasks.
|
||||||
- **Use Direct Language**: All statements should be direct and assertive. "If" statement should not be used in subtask descriptions.
|
- **Use Direct Language**: All statements should be direct and assertive. "If" statement should not be used in subtask descriptions.
|
||||||
|
|
||||||
|
|||||||
@@ -16,7 +16,7 @@ from loguru import logger
|
|||||||
from pydantic import BaseModel, Field
|
from pydantic import BaseModel, Field
|
||||||
|
|
||||||
from agentscope.formatter import FormatterBase
|
from agentscope.formatter import FormatterBase
|
||||||
from agentscope.memory import MemoryBase
|
from agentscope.memory import MemoryBase, LongTermMemoryBase
|
||||||
from agentscope.message import Msg, TextBlock, ToolResultBlock, ToolUseBlock
|
from agentscope.message import Msg, TextBlock, ToolResultBlock, ToolUseBlock
|
||||||
from agentscope.model import ChatModelBase
|
from agentscope.model import ChatModelBase
|
||||||
from agentscope.tool import ToolResponse
|
from agentscope.tool import ToolResponse
|
||||||
@@ -149,6 +149,12 @@ class MetaPlanner(AliasAgentBase):
|
|||||||
planner_mode: Literal["disable", "dynamic", "enforced"] = "dynamic",
|
planner_mode: Literal["disable", "dynamic", "enforced"] = "dynamic",
|
||||||
session_service: Any = None,
|
session_service: Any = None,
|
||||||
enable_clarification: bool = True,
|
enable_clarification: bool = True,
|
||||||
|
long_term_memory: Optional[LongTermMemoryBase] = None,
|
||||||
|
long_term_memory_mode: Literal[
|
||||||
|
"agent_control",
|
||||||
|
"static_control",
|
||||||
|
"both",
|
||||||
|
] = "both",
|
||||||
) -> None:
|
) -> None:
|
||||||
"""
|
"""
|
||||||
Initialize the MetaPlanner with the given parameters.
|
Initialize the MetaPlanner with the given parameters.
|
||||||
@@ -176,6 +182,20 @@ class MetaPlanner(AliasAgentBase):
|
|||||||
Directory to save the agent's state. Defaults to None.
|
Directory to save the agent's state. Defaults to None.
|
||||||
planner_mode (bool, optional):
|
planner_mode (bool, optional):
|
||||||
Enable planner mode for solving tasks. Defaults to True.
|
Enable planner mode for solving tasks. Defaults to True.
|
||||||
|
long_term_memory (Optional[LongTermMemoryBase]):
|
||||||
|
Long-term memory instance; if None, long-term memory features
|
||||||
|
will be disabled. Only works when memory service is available
|
||||||
|
and healthy. If provided, the tool memory will be retrieved
|
||||||
|
and added to the worker system prompt.
|
||||||
|
long_term_memory_mode (
|
||||||
|
Literal["agent_control", "static_control", "both"]
|
||||||
|
):
|
||||||
|
Mode for long-term memory control. Defaults to "both".
|
||||||
|
- "agent_control": Agent can control when to retrieve and
|
||||||
|
record memory
|
||||||
|
- "static_control": Memory is automatically retrieved/recorded
|
||||||
|
at the beginning and end of each reply respectively.
|
||||||
|
- "both": Both modes are available
|
||||||
"""
|
"""
|
||||||
if sys_prompt is None:
|
if sys_prompt is None:
|
||||||
self.base_sys_prompt = (
|
self.base_sys_prompt = (
|
||||||
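The `long_term_memory_mode` semantics documented above can be expressed as two small predicates; these helpers are illustrative, not part of the codebase:

```python
def static_memory_enabled(mode: str) -> bool:
    """True when memory is auto-retrieved/recorded around each reply."""
    return mode in ("static_control", "both")

def agent_memory_tools_enabled(mode: str) -> bool:
    """True when the agent is given explicit memory retrieve/record tools."""
    return mode in ("agent_control", "both")
```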
@@ -204,6 +224,8 @@ class MetaPlanner(AliasAgentBase):
|
|||||||
max_iters=max_iters,
|
max_iters=max_iters,
|
||||||
session_service=session_service,
|
session_service=session_service,
|
||||||
state_saving_dir=state_saving_dir,
|
state_saving_dir=state_saving_dir,
|
||||||
|
long_term_memory=long_term_memory,
|
||||||
|
long_term_memory_mode=long_term_memory_mode,
|
||||||
)
|
)
|
||||||
self.browser_toolkit = browser_toolkit
|
self.browser_toolkit = browser_toolkit
|
||||||
|
|
||||||
@@ -214,6 +236,15 @@ class MetaPlanner(AliasAgentBase):
|
|||||||
self.register_state("task_dir")
|
self.register_state("task_dir")
|
||||||
self.register_state("agent_working_dir_root")
|
self.register_state("agent_working_dir_root")
|
||||||
|
|
||||||
|
# register tool_memory_retrieve tool
|
||||||
|
# if long_term_memory is provided. Notice that
|
||||||
|
# retrieve_from_memory tool is registered
|
||||||
|
# in the toolkit by default.
|
||||||
|
if long_term_memory:
|
||||||
|
self.toolkit.register_tool_function(
|
||||||
|
long_term_memory.tool_memory_retrieve,
|
||||||
|
)
|
||||||
|
|
||||||
# adjust ReActAgent parameters
|
# adjust ReActAgent parameters
|
||||||
if enable_clarification:
|
if enable_clarification:
|
||||||
self._required_structured_model = (
|
self._required_structured_model = (
|
||||||
@@ -344,6 +375,7 @@ class MetaPlanner(AliasAgentBase):
|
|||||||
worker_full_toolkit=self.worker_full_toolkit,
|
worker_full_toolkit=self.worker_full_toolkit,
|
||||||
session_service=self.session_service,
|
session_service=self.session_service,
|
||||||
sandbox=self.toolkit.sandbox,
|
sandbox=self.toolkit.sandbox,
|
||||||
|
long_term_memory=self.long_term_memory,
|
||||||
)
|
)
|
||||||
else:
|
else:
|
||||||
self.worker_manager.planner_notebook = self.planner_notebook
|
self.worker_manager.planner_notebook = self.planner_notebook
|
||||||
|
|||||||
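The `long_term_memory_mode` docstring above distinguishes three modes. As a minimal sketch of what each mode enables — the `memory_behaviour` helper is hypothetical, not part of the codebase, and only restates the documented semantics:

```python
from typing import Literal

MemoryMode = Literal["agent_control", "static_control", "both"]


def memory_behaviour(mode: MemoryMode) -> dict[str, bool]:
    """Map a long_term_memory_mode value onto the two capabilities
    described in the docstring above."""
    return {
        # the agent may decide when to call retrieve/record tools itself
        "agent_can_call_memory_tools": mode in ("agent_control", "both"),
        # memory is auto-retrieved/recorded around each reply
        "auto_retrieve_and_record": mode in ("static_control", "both"),
    }


print(memory_behaviour("both"))
# → {'agent_can_call_memory_tools': True, 'auto_retrieve_and_record': True}
```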
@@ -10,7 +10,7 @@ import asyncio
 from agentscope import logger
 
 from agentscope.module import StateModule
-from agentscope.memory import InMemoryMemory, MemoryBase
+from agentscope.memory import InMemoryMemory, MemoryBase, LongTermMemoryBase
 from agentscope.tool import ToolResponse
 from agentscope.message import Msg, TextBlock, ToolUseBlock, ToolResultBlock
 from agentscope.model import ChatModelBase, DashScopeChatModel
@@ -186,6 +186,7 @@ class WorkerManager(StateModule):
             dict[str, tuple[WorkerInfo, ReActWorker]]
         ] = None,
         session_service: Any = None,
+        long_term_memory: Optional[LongTermMemoryBase] = None,
     ):
         """Initialize the CoordinationHandler.
 
         Args:
@@ -201,6 +202,13 @@ class WorkerManager(StateModule):
                 Working directory for the agent operations
             worker_pool: dict[str, tuple[WorkerInfo, ReActAgent]]:
                 workers that have already been created
+            session_service (Any):
+                Session service instance
+            long_term_memory (Optional[LongTermMemoryBase]):
+                Long-term memory instance; if None, long-term memory
+                features will be disabled. Only works when the memory
+                service is available and healthy. If provided, the tool
+                memory will be retrieved and added to the worker system
+                prompt.
         """
         super().__init__()
         self.planner_notebook = planner_notebook
@@ -213,6 +221,7 @@ class WorkerManager(StateModule):
         self.worker_full_toolkit = worker_full_toolkit
         self.base_sandbox = sandbox
         self.session_service = session_service
+        self.long_term_memory = long_term_memory
 
         def reconstruct_workerpool(worker_pool_dict: dict) -> dict:
             rebuild_worker_pool = self.worker_pool
@@ -391,6 +400,76 @@ class WorkerManager(StateModule):
             additional_worker_prompt += str(f.read()).format_map(
                 {"agent_working_dir": self.agent_working_dir},
             )
+
+        # Retrieve tool memory if long-term memory is available
+        if self.long_term_memory is not None:
+            try:
+                from alias.server.clients.memory_client import MemoryClient
+
+                # Check if the memory service is available
+                if not await MemoryClient.is_available():
+                    logger.debug(
+                        "Long-term memory service is enabled but not "
+                        "available. Skipping tool memory retrieval.",
+                    )
+                elif not (
+                    hasattr(self, "session_service") and self.session_service
+                ):
+                    logger.debug(
+                        "Session service not available. "
+                        "Skipping tool memory retrieval.",
+                    )
+                else:
+                    # Get the user ID from the session
+                    user_id = str(
+                        self.session_service.session_entity.user_id,
+                    )
+                    # Use tool names as the query for retrieving relevant
+                    # tool memory
+                    query = ",".join(tool_names) if tool_names else ""
+                    try:
+                        memory_client = MemoryClient()
+                        retrieve_result = (
+                            await memory_client.retrieve_tool_memory(
+                                uid=user_id,
+                                query=query,
+                            )
+                        )
+                        if (
+                            retrieve_result
+                            and "No matching tool memories found"
+                            not in retrieve_result
+                            and isinstance(retrieve_result, str)
+                            and retrieve_result.strip()
+                        ):
+                            tool_memory_context = (
+                                "\n\n=== Below is some information "
+                                "about tool usage from past experiences "
+                                "===\n" + retrieve_result + "\n"
+                                "==========================================\n"
+                            )
+                            additional_worker_prompt += tool_memory_context
+                            logger.info(
+                                f"Retrieved tool memory for worker "
+                                f"{worker_name}",
+                            )
+                        else:
+                            logger.warning(
+                                f"No matching tool memories found for "
+                                f"worker {worker_name}. Continuing without "
+                                f"tool memory context.",
+                            )
+                    except Exception as e:
+                        logger.warning(
+                            f"Failed to retrieve tool memory: {e}. "
+                            f"Continuing without tool memory context.",
+                        )
+            except ImportError:
+                logger.debug(
+                    "MemoryClient not available. "
+                    "Skipping tool memory retrieval.",
+                )
+
         worker = ReActWorker(
             name=worker_name,
             sys_prompt=(worker_system_prompt + additional_worker_prompt),
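The guard chain in the hunk above decides whether a retrieval result is worth appending to the worker prompt. A self-contained sketch of just that decision — `build_tool_memory_context` is a hypothetical extraction for illustration; the real code additionally checks service and session availability, and tests the sentinel substring before `isinstance`:

```python
def build_tool_memory_context(retrieve_result: object) -> str:
    """Return the prompt section for a tool-memory retrieval result,
    or "" when the result is empty, non-string, or the "no match"
    sentinel — mirroring the condition in the diff above."""
    if (
        retrieve_result
        and isinstance(retrieve_result, str)
        and "No matching tool memories found" not in retrieve_result
        and retrieve_result.strip()
    ):
        return (
            "\n\n=== Below is some information "
            "about tool usage from past experiences "
            "===\n" + retrieve_result + "\n"
            "==========================================\n"
        )
    return ""


print(bool(build_tool_memory_context("tool1: prefer absolute paths")))
# → True
```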
alias/src/alias/agent/memory/longterm_memory.py  (new file, 341 lines)
@@ -0,0 +1,341 @@
+# -*- coding: utf-8 -*-
+import traceback
+import uuid
+from typing import Optional, Any
+
+from agentscope.memory import LongTermMemoryBase
+from agentscope.message import Msg, TextBlock
+from agentscope.tool import ToolResponse
+from loguru import logger
+
+from alias.server.clients.memory_client import MemoryClient
+from alias.server.schemas.action import ChatAction, ChatType, TaskStopAction
+from alias.server.services.session_service import SessionService
+
+from alias.agent.memory.longterm_memory_utils import (
+    convert_mock_messages_to_dict,
+    filter_latest_user_message,
+)
+
+
+def _get_query_from_msgs(msgs: Msg | list[Msg] | None) -> str:
+    if isinstance(msgs, Msg):
+        return msgs.content
+    elif isinstance(msgs, list):
+        return "\n".join([_.content for _ in msgs])
+    else:
+        return ""
+
+
+class AliasLongTermMemory(LongTermMemoryBase):
+    def __init__(self, session_service: SessionService):
+        super().__init__()
+        self.session_service = session_service
+        self.memory_client = MemoryClient()
+
+    async def record(
+        self,
+        msgs: list[Msg],  # pylint: disable=unused-argument
+    ):
+        """Record the given messages to the memory.
+
+        This function is only used when the frontend service is not running.
+        When the frontend service is running, the action of recording session
+        messages to tool memory is triggered when the user starts a new
+        conversation, and the backend service will call the record_action of
+        the memory client to record the session messages to tool memory.
+        This function will record the user message and create TASK_STOP
+        and CHAT actions.
+
+        Args:
+            msgs (`list[Msg]`): The messages to record to the memory.
+
+        Returns:
+            `None`: If the frontend service is running, return None.
+            If the frontend service is not running, record the TASK_STOP
+            action and return None.
+        """
+        # If the frontend service is running, return None
+        event_manager = getattr(self.session_service, "event_manager", None)
+        if event_manager is not None:
+            logger.warning("Frontend service is running, returning None")
+            return None
+
+        # If the frontend service is not running, record a TASK_STOP action
+        try:
+            # Get task_id safely, as it might not exist in a mock
+            # SessionEntity
+            task_id = getattr(
+                self.session_service.session_entity,
+                "task_id",
+                "",
+            )
+            if task_id == "":
+                task_id = uuid.uuid4()
+                logger.warning(
+                    f"task_id not found in session_entity, generating "
+                    f"random task_id: {task_id}",
+                )
+            messages = await self.session_service.get_messages()
+            # Convert MockMessage objects to Message objects, then to dicts
+            # for serialization
+            serialized_messages = convert_mock_messages_to_dict(
+                messages,
+                self.session_service,
+            )
+
+            action = TaskStopAction.create(
+                user_id=self.session_service.session_entity.user_id,
+                conversation_id=(
+                    self.session_service.session_entity.conversation_id
+                ),
+                task_id=task_id,
+                data={
+                    "session_content": serialized_messages,
+                },
+            )
+            await self.memory_client.record_action(action)
+            logger.info("Recorded TASK_STOP action successfully")
+
+            (
+                last_user_query,
+                action_message_id,
+                has_earlier_user_msg,
+            ) = filter_latest_user_message(serialized_messages)
+            if last_user_query:
+                record_chat_action = ChatAction.create(
+                    user_id=self.session_service.session_entity.user_id,
+                    conversation_id=(
+                        self.session_service.session_entity.conversation_id
+                    ),
+                    message_id=action_message_id,
+                    chat_type=ChatType.TASK if has_earlier_user_msg else None,
+                    history_length=2 if has_earlier_user_msg else 0,
+                    session_content=serialized_messages,
+                    query=last_user_query,
+                )
+                await self.memory_client.record_action(record_chat_action)
+                logger.info("Recorded CHAT action successfully")
+            return
+        except Exception as e:
+            # Log the error but don't raise, as this is a background
+            # operation
+            error_traceback = traceback.format_exc()
+            logger.error(
+                f"Failed to record TASK_STOP action: {str(e)}\n"
+                f"Traceback:\n{error_traceback}",
+            )
+            return
+
+    async def retrieve(self, query: Msg | list[Msg] | None) -> Optional[str]:
+        """Retrieve the memory based on the given query.
+
+        Args:
+            query (`Msg` | `list[Msg]` | `None`): The query to search for in
+                the memory. If the query is a list of messages, join the
+                content of the messages into a single string. If the query
+                is None or empty, return None.
+
+        Returns:
+            Optional[str]: The retrieved memory as string text. If the query
+                is None or empty, return None.
+        """
+        query_str = _get_query_from_msgs(query)
+        if not query_str:
+            logger.warning("No query provided")
+            return None
+        try:
+            uid = str(self.session_service.session_entity.user_id)
+            result = await self.memory_client.retrieve_user_profiling(
+                uid=uid,
+                query=query_str,
+            )
+            logger.info(
+                f"Retrieved user profiling: {result} "
+                f"based on query: {query_str}",
+            )
+            return result
+        except Exception as e:
+            logger.error(f"Failed to retrieve user profiling: {str(e)}")
+            return None
+
+    async def tool_memory_retrieve(
+        self,
+        query: str,
+    ) -> ToolResponse:
+        """Retrieve the tool-use experience of the tools in the query.
+
+        The query should be the concatenation of tool names separated by
+        commas. For example, "tool1,tool2,tool3".
+
+        Args:
+            query (`str`): It should be the concatenation of tool names
+                separated by commas. For example, "tool1,tool2,tool3".
+
+        Returns:
+            `ToolResponse`: A ToolResponse containing the retrieved tool
+                memory as string text. If the query is empty, return a
+                ToolResponse with a text block containing the message
+                "No query provided".
+        """
+        if not query:
+            return ToolResponse(
+                content=[
+                    TextBlock(
+                        type="text",
+                        text="No query provided",
+                    ),
+                ],
+            )
+        try:
+            uid = str(self.session_service.session_entity.user_id)
+            tool_memory = await self.memory_client.retrieve_tool_memory(
+                uid=uid,
+                query=query,
+            )
+            if not tool_memory:
+                return ToolResponse(
+                    content=[
+                        TextBlock(
+                            type="text",
+                            text="No tool memory found",
+                        ),
+                    ],
+                )
+            return ToolResponse(
+                content=[
+                    TextBlock(
+                        type="text",
+                        text=tool_memory,
+                    ),
+                ],
+            )
+        except Exception as e:
+            logger.error(f"Failed to retrieve tool memory: {str(e)}")
+            return ToolResponse(
+                content=[
+                    TextBlock(
+                        type="text",
+                        text=f"Error retrieving tool memory: {str(e)}",
+                    ),
+                ],
+            )
+
+    async def record_to_memory(  # pylint: disable=unused-argument
+        self,
+        thinking: str,
+        content: list[str],
+        **kwargs: Any,  # noqa: ARG002
+    ) -> ToolResponse:
+        """Use this function to record important information that you may
+        need later. The target content should be specific and concise, e.g.
+        who, when, where, do what, why, how, etc.
+
+        Args:
+            thinking (`str`):
+                Your thinking and reasoning about what to record.
+            content (`list[str]`):
+                The content to remember, which is a list of strings.
+        """
+        try:
+            logger.info(f"Recording to memory: {thinking} {content}")
+            if not thinking:
+                thinking = ""
+            if not content:
+                content = []
+
+            uid = str(self.session_service.session_entity.user_id)
+            session_id = str(
+                self.session_service.session_entity.conversation_id,
+            )
+
+            # Combine thinking and content
+            combined_content_str = ""
+            if thinking:
+                combined_content_str = thinking
+            if content:
+                content_str = "\n".join(content)
+                if combined_content_str:
+                    combined_content_str = (
+                        f"{combined_content_str}\n{content_str}"
+                    )
+                else:
+                    combined_content_str = content_str
+
+            if not combined_content_str.strip():
+                return ToolResponse(
+                    content=[
+                        TextBlock(
+                            type="text",
+                            text="No content to record.",
+                        ),
+                    ],
+                )
+
+            # Record as a user message
+            content_dicts = [
+                {
+                    "role": "user",
+                    "content": combined_content_str,
+                },
+            ]
+            results = await self.memory_client.add_to_longterm_memory(
+                uid=uid,
+                content=content_dicts,
+                session_id=session_id,
+            )
+
+            result_text = results if results else "submitted for processing"
+            return ToolResponse(
+                content=[
+                    TextBlock(
+                        type="text",
+                        text=(
+                            f"Successfully recorded content to memory: "
+                            f"{result_text}"
+                        ),
+                    ),
+                ],
+            )
+
+        except Exception as e:
+            return ToolResponse(
+                content=[
+                    TextBlock(
+                        type="text",
+                        text=f"Error recording memory: {str(e)}",
+                    ),
+                ],
+            )
+
+    async def retrieve_from_memory(
+        self,
+        keywords: list[str],
+    ) -> ToolResponse:
+        """Retrieve the memory based on the given keywords.
+
+        Args:
+            keywords (`list[str]`): The keywords to search for in the
+                memory. They should be specific and concise, e.g. the
+                person's name, the date, the location, etc. During
+                retrieval, each keyword is issued as an independent query
+                against the memory store.
+
+        Returns:
+            `ToolResponse`: A ToolResponse containing the retrieved memories
+                as string text.
+        """
+        results_all = ""
+        uid = str(self.session_service.session_entity.user_id)
+        for keyword in keywords:
+            results = await self.memory_client.retrieve_user_profiling(
+                uid=uid,
+                query=keyword,
+            )
+            if results:
+                results_all += results + "\n"
+        return ToolResponse(
+            content=[
+                TextBlock(
+                    type="text",
+                    text=results_all,
+                ),
+            ],
+        )
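`record_to_memory` in the new file above folds the tool's `thinking` string and `content` list into one string before submitting it to the memory client. The joining rule in isolation — the helper name is hypothetical, introduced only for this sketch:

```python
def combine_thinking_and_content(thinking: str, content: list[str]) -> str:
    """Join thinking and content the way record_to_memory does:
    thinking first, then newline-joined content, with no stray
    separators when either part is empty."""
    combined = thinking or ""
    if content:
        content_str = "\n".join(content)
        combined = f"{combined}\n{content_str}" if combined else content_str
    return combined


print(combine_thinking_and_content("plan", ["who: Bob", "when: Friday"]))
# → plan
# → who: Bob
# → when: Friday
```

An all-empty input yields `""`, which is what triggers the "No content to record." early return in the method.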
alias/src/alias/agent/memory/longterm_memory_utils.py  (new file, 198 lines)
@@ -0,0 +1,198 @@
+# -*- coding: utf-8 -*-
+from datetime import datetime, timezone
+from typing import Any
+
+from alias.server.services.session_service import SessionService
+from alias.server.models.message import Message
+
+# Import related models to ensure they are registered in SQLAlchemy's
+# class registry.
+# This is necessary for SQLAlchemy to resolve string references in
+# relationships.
+# pylint: disable=unused-import
+from alias.server.models.conversation import Conversation  # noqa: F401,E501
+from alias.server.models.plan import Plan  # noqa: F401,E501
+from alias.server.models.user import User  # noqa: F401,E501
+from alias.server.models.state import State  # noqa: F401,E501
+
+
+def filter_latest_user_message(messages: list[Any]) -> Any:
+    """Filter the latest user message from the list of messages.
+
+    Args:
+        messages: List of message objects
+
+    Returns:
+        A tuple of the latest user message content, its message id, and
+        whether an earlier user message exists
+    """
+    if messages is None:
+        return None
+    latest_user_msg = None
+    action_message_id = None
+    has_earlier_user_msg = False
+    for cur_msg in reversed(messages):
+        msg_body = cur_msg["message"]
+        if msg_body["role"] == "user":
+            if latest_user_msg is None:
+                # Found the latest user message
+                latest_user_msg = msg_body["content"]
+                action_message_id = cur_msg["id"]
+            else:
+                # Found an earlier user message before the latest one
+                has_earlier_user_msg = True
+                break
+    return latest_user_msg, action_message_id, has_earlier_user_msg
+
+
+def _convert_message_data_to_dict(message_data: Any) -> dict[str, Any] | None:
+    """Convert message_data to a dictionary.
+
+    Args:
+        message_data: Message data that can be a dict, an object with
+            to_dict() or model_dump(), or another type
+
+    Returns:
+        Dictionary representation of message_data, or None if conversion
+        fails
+    """
+    if message_data is None:
+        return None
+
+    if isinstance(message_data, dict):
+        return message_data
+    if hasattr(message_data, "to_dict"):
+        return message_data.to_dict()
+    if hasattr(message_data, "model_dump"):
+        return message_data.model_dump()
+    # Fallback: try to convert to dict
+    if hasattr(message_data, "__dict__"):
+        return dict(message_data)
+    return message_data
+
+
+def _convert_files_to_list(files: list[Any]) -> list[dict[str, Any]]:
+    """Convert file objects to a list of dictionaries.
+
+    Args:
+        files: List of file objects
+
+    Returns:
+        List of dictionaries representing file objects
+    """
+    files_list = []
+    if not files:
+        return files_list
+
+    for f in files:
+        file_dict = {
+            "id": str(f.id) if hasattr(f, "id") else None,
+            "filename": getattr(f, "filename", None),
+            "mime_type": getattr(f, "mime_type", None),
+            "extension": getattr(f, "extension", None),
+            "storage_path": getattr(f, "storage_path", None),
+            "size": getattr(f, "size", None),
+            "storage_type": getattr(f, "storage_type", None),
+            "create_time": getattr(f, "create_time", None),
+            "update_time": getattr(f, "update_time", None),
+            "user_id": str(getattr(f, "user_id", None))
+            if hasattr(f, "user_id")
+            else None,
+        }
+        files_list.append(file_dict)
+    return files_list
+
+
+def _get_timestamp_with_fallback(
+    obj: Any,
+    attr_name: str,
+) -> str:
+    """Get a timestamp from an object attribute, or use the current time
+    as a fallback.
+
+    Args:
+        obj: Object to get the timestamp from
+        attr_name: Attribute name to check for the timestamp
+
+    Returns:
+        ISO format timestamp string
+    """
+    timestamp = getattr(obj, attr_name, None)
+    if timestamp is None or not isinstance(timestamp, str):
+        return datetime.now(timezone.utc).isoformat()
+    return timestamp
+
+
+def convert_mock_messages_to_dict(
+    messages: list[Any],
+    session_service: SessionService,
+) -> list[Any]:
+    """Convert MockMessage objects to Message objects, then to dictionaries
+    for serialization.
+
+    This function converts MockMessage objects to Message model instances
+    first, filling in all required fields, then serializes them to
+    dictionaries.
+    Other message types (Pydantic models, dicts, etc.) are returned as-is
+    since they are already serializable.
+
+    Args:
+        messages: List of message objects
+        session_service: SessionService instance to get conversation_id and
+            task_id
+
+    Returns:
+        List of messages with MockMessage objects converted to Message
+        dicts, other types unchanged
+    """
+    converted = []
+    # Get required fields from session_service
+    conversation_id = session_service.session_entity.conversation_id
+    task_id = getattr(session_service.session_entity, "task_id", None)
+
+    for msg in messages:
+        # Check if it's a MockMessage by type name to avoid import issues
+        msg_type_name = type(msg).__name__
+        msg_type_module = type(msg).__module__
+        is_mock_message = (
+            msg_type_name == "MockMessage"
+            and "mock_message_models" in msg_type_module
+        )
+
+        if is_mock_message:
+            # Convert message_data to a dictionary
+            message_dict = _convert_message_data_to_dict(msg.message)
+            if message_dict is None:
+                message_dict = {}
+
+            # Convert files to a list of dicts if needed
+            files_list = _convert_files_to_list(msg.files)
+            if files_list:
+                message_dict["files"] = files_list
+
+            # Get create_time and update_time from the MockMessage
+            # instance, or use the current time as a fallback
+            create_time = _get_timestamp_with_fallback(msg, "create_time")
+            update_time = _get_timestamp_with_fallback(msg, "update_time")
+
+            # Create a Message object with all required fields
+            message_obj = Message(
+                id=msg.id,
+                message=message_dict,
+                create_time=create_time,
+                update_time=update_time,
+                feedback=None,
+                collected=False,
+                task_id=task_id,
+                conversation_id=conversation_id,
+                parent_message_id=None,
+                meta_data={},
+            )
+
+            # Convert the Message object to a dict using model_dump()
+            msg_dict = message_obj.model_dump(
+                exclude={"conversation", "parent", "replies"},
+            )
+            converted.append(msg_dict)
+        else:
+            # Other message types (Pydantic models, dicts, etc.) are
+            # already serializable
+            converted.append(msg)
+    return converted
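`filter_latest_user_message` in the utils file above walks the history newest-first: the first user message it meets is "the latest", and any second user message merely sets a flag and stops the scan. A standalone, runnable copy of the helper with a small worked example:

```python
def filter_latest_user_message(messages):
    """Scan messages from newest to oldest; return the newest user
    message's content and id, plus whether any earlier user message
    exists (used to pick ChatType.TASK and history_length)."""
    if messages is None:
        return None
    latest_user_msg = None
    action_message_id = None
    has_earlier_user_msg = False
    for cur_msg in reversed(messages):
        msg_body = cur_msg["message"]
        if msg_body["role"] == "user":
            if latest_user_msg is None:
                # Found the latest user message
                latest_user_msg = msg_body["content"]
                action_message_id = cur_msg["id"]
            else:
                # An earlier user message exists before the latest one
                has_earlier_user_msg = True
                break
    return latest_user_msg, action_message_id, has_earlier_user_msg


msgs = [
    {"id": 1, "message": {"role": "user", "content": "hi"}},
    {"id": 2, "message": {"role": "assistant", "content": "hello"}},
    {"id": 3, "message": {"role": "user", "content": "run the task"}},
]
print(filter_latest_user_message(msgs))
# → ('run the task', 3, True)
```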
@@ -4,7 +4,6 @@ import uuid
 from enum import Enum
 from typing import Any, Optional, Literal
 from dataclasses import dataclass
 
 from pydantic import BaseModel, Field
-
 
@@ -67,6 +66,8 @@ class MockMessage:
     id: uuid.UUID = uuid.uuid4()
     message: Optional[dict] = None
     files: list[Any] = []
+    create_time: str = "xxxyyy"
+    update_time: str = "xxxyyy"
 
 
 class SubTaskToPrint(BaseModel):
@@ -7,7 +7,7 @@ import os
 from typing import Any, Optional, List, Literal
 import json
 from loguru import logger
-from datetime import datetime
+from datetime import datetime, timezone
 from dataclasses import dataclass, field
 from .mock_message_models import BaseMessage, MessageState, MockMessage
 
@@ -40,6 +40,7 @@ class SessionEntity:
     query: str
     upload_files: List = []
     is_chat: bool = False
+    use_long_term_memory_service: bool = False
 
     def __init__(
         self,
@@ -50,11 +51,18 @@ class SessionEntity:
             "bi",
             "finance",
         ] = "general",
+        use_long_term_memory_service: bool = False,
     ):
-        self.user_id: uuid.UUID = uuid.uuid4()
+        self.user_id: uuid.UUID = uuid.UUID(
+            "00000000-0000-0000-0000-000000000001",
+        )
+        # Hardcoded UUID for mock/testing purposes:
+        # this value is used to represent a mock
+        # user in test sessions.
         self.conversation_id: uuid.UUID = uuid.uuid4()
         self.session_id: uuid.UUID = uuid.uuid4()
         self.chat_mode = chat_mode
+        self.use_long_term_memory_service = use_long_term_memory_service
 
     def ids(self):
         return {
@@ -71,12 +79,15 @@ class MockSessionService:
     def __init__(
         self,
         runtime_model: Any = None,
+        use_long_term_memory_service: bool = False,
     ):
        self.session_id = "mock_session"
        self.conversation_id = "mock_conversation"
        self.messages = []
        self.plan = MockPlan()
-        self.session_entity = SessionEntity()
+        self.session_entity = SessionEntity(
+            use_long_term_memory_service=use_long_term_memory_service,
+        )
        logger.info(
            f"> user_id {self.session_entity.user_id}\n "
            f"> conversation_id {self.session_entity.conversation_id}",
@@ -151,6 +162,11 @@ class MockSessionService:
             if db_message is None:
                 db_message = MockMessage()
                 self.messages.append(db_message)
+            else:
+                # Update the existing message's update_time
+                db_message.update_time = datetime.now(
+                    timezone.utc,
+                ).isoformat()
             db_message.message = message.model_dump()
         else:
             db_message = MockMessage()
@@ -187,6 +203,11 @@ class MockSessionService:
                 "SEND_MSG",
                 f"Updating message {len(self.messages) - 1}",
             )
+            else:
+                # Update the existing message's update_time
+                db_message.update_time = datetime.now(
+                    timezone.utc,
+                ).isoformat()
             db_message.message = message.model_dump()
         else:
             db_message = MockMessage()
@@ -35,6 +35,8 @@ from alias.agent.tools.add_tools import add_tools
 from alias.agent.agents.ds_agent_utils import (
     add_ds_specific_tool,
 )
+from alias.agent.memory.longterm_memory import AliasLongTermMemory
+from alias.server.clients.memory_client import MemoryClient
 
 
 MODEL_FORMATTER_MAPPING = {
@@ -117,6 +119,26 @@ async def arun_meta_planner(
         session_service=session_service,
         state_saving_dir=f"./agent-states/run-{time_str}",
     )
+
+    # Initialize long-term memory if enabled
+    long_term_memory = None
+    if session_service.session_entity.use_long_term_memory_service:
+        # Check if memory service is available
+        if await MemoryClient.is_available():
+            long_term_memory = AliasLongTermMemory(
+                session_service=session_service,
+            )
+            logger.info(
+                "Long-term memory service is available and initialized",
+            )
+        else:
+            logger.warning(
+                "use_long_term_memory_service is True, but memory "
+                "service is not available. Long-term memory will not "
+                "be used. Please check if the memory service is "
+                "running.",
+            )
+
     meta_planner = MetaPlanner(
         model=model,
         formatter=formatter,
@@ -129,6 +151,7 @@ async def arun_meta_planner(
         max_iters=100,
         session_service=session_service,
         enable_clarification=enable_clarification,
+        long_term_memory=long_term_memory,
     )
     meta_planner.worker_manager.register_worker(
         browser_agent,
@@ -371,7 +394,7 @@ async def arun_browseruse_agent(
         memory=InMemoryMemory(),
         toolkit=browser_toolkit,
         max_iters=50,
-        start_url="https://www.bing.com",
+        start_url="https://www.google.com",
         session_service=session_service,
         state_saving_dir=f"./agent-states/run_browser-{time_str}",
     )
@@ -405,4 +428,7 @@ async def arun_agents(
             f"Unknown chat mode: {chat_mode}."
             "Invoke general mode instead.",
         )
-        await arun_meta_planner(session_service, sandbox)
+        await arun_meta_planner(
+            session_service,
+            sandbox,
+        )
@@ -61,6 +61,7 @@ async def run_agent_task(
     user_msg: str,
     mode: str = "general",
     files: Optional[list[str]] = None,
+    use_long_term_memory_service: bool = False,
 ) -> None:
     """
     Run an agent task with the specified configuration.
@@ -69,6 +70,7 @@ async def run_agent_task(
         user_msg: The user's task/query
         mode: Agent mode ('general', 'dr', 'ds', 'browser', 'finance')
         files: List of local file paths to upload to sandbox workspace
+        use_long_term_memory_service: Enable long-term memory service.
     """
     global _original_sigint_handler
 
@@ -81,7 +83,9 @@ async def run_agent_task(
     # logger.debug("Installed custom SIGINT handler to protect sandbox")
 
     # Initialize session
-    session = MockSessionService()
+    session = MockSessionService(
+        use_long_term_memory_service=use_long_term_memory_service,
+    )
 
     # Create initial user message
     user_agent = UserAgent(name="User")
@@ -184,6 +188,7 @@ async def _run_agent_loop(
         session: Session service instance
         user_agent: User agent for interactive follow-ups
         sandbox: Sandbox accessible for all agents
+        use_long_term_memory_service: Enable long-term memory service.
     """
     while True:
         # Run the appropriate agent based on mode
@@ -304,6 +309,13 @@ def main():
         "for agent to use (e.g., --files file1.txt file2.csv)",
     )
+
+    run_parser.add_argument(
+        "--use_long_term_memory",
+        action="store_true",
+        help="Enable long-term memory service for retrieving user profiling "
+        "information at session start",
+    )
 
     # Version command
     parser.add_argument(
         "--version",
@@ -326,6 +338,11 @@ def main():
             user_msg=args.task,
             mode=args.mode,
             files=args.files if hasattr(args, "files") else None,
+            use_long_term_memory_service=(
+                args.use_long_term_memory
+                if hasattr(args, "use_long_term_memory")
+                else False
+            ),
         ),
     )
 except (KeyboardInterrupt, SystemExit) as e:
@@ -950,7 +950,7 @@ class BaseAsyncVectorMemory(MemoryBase):
         user_id: Optional[str] = None,
         agent_id: Optional[str] = None,
         run_id: Optional[str] = None,
-        limit: int = 100,
+        limit: int = 10,
         filters: Optional[Dict[str, Any]] = None,
         threshold: Optional[float] = None,
     ):
@@ -966,7 +966,7 @@ class BaseAsyncVectorMemory(MemoryBase):
             run_id (str, optional): ID of the run to search for.
                 Defaults to None.
             limit (int, optional): Limit the number of results.
-                Defaults to 100.
+                Defaults to 10.
             filters (dict, optional): Filters to apply to the search.
                 Defaults to None.
             threshold (float, optional): Minimum score for a memory to be
@@ -183,12 +183,21 @@ class UserProfilingAddRequest(BaseUserProfilingRequest):
     """Request for adding user profiling content"""
 
     content: List[Any] = Field(default_factory=list)
+    session_id: Optional[str] = Field(default=None, description="Session ID")
 
 
 class UserProfilingRetrieveRequest(BaseUserProfilingRequest):
     """Request for retrieving user profiling data"""
 
     query: str
+    limit: Optional[int] = Field(
+        default=3,
+        description="The maximum number of memories to retrieve",
+    )
+    threshold: Optional[float] = Field(
+        default=0.6,
+        description="The threshold for the memories to retrieve",
+    )
 
 
 class UserProfilingRetrieveResponse(BaseUserProfilingResponse):
@@ -1,5 +1,4 @@
 # -*- coding: utf-8 -*-
-# mypy: disable-error-code=no-redef
 from typing import List, Dict, Any, Optional, Tuple, Union
 import json
 import re
@@ -12,7 +11,7 @@ from alias.server.clients.inner_client import InnerClient
 try:
     from .logging_utils import setup_logging
 except ImportError:
-    from alias.memory_service.profiling_utils.logging_utils import (
+    from alias.memory_service.profiling_utils.logging_utils import (  # type: ignore[no-redef] # noqa: E501 # pylint: disable=line-too-long
         setup_logging,
     )
 logger = setup_logging()
@@ -170,7 +170,12 @@ async def retrieve_memory(
         raise EmptyQueryError()
 
     memory_service = get_memory_service()
-    result = await memory_service.retrieve(request.uid, request.query)
+    result = await memory_service.retrieve(
+        request.uid,
+        request.query,
+        limit=request.limit,
+        threshold=request.threshold,
+    )
     return UserProfilingRetrieveResponse(
         status="success",
         uid=request.uid,
@@ -301,7 +306,19 @@ async def record_action(
         f"Retrieved session_content length: "
         f"{len(session_content) if session_content else 0}",
     )
+    if not session_content:
+        if request.data.get("session_content") is not None:
+            session_content = request.data["session_content"]
+            logger.info(
+                f"Using session_content from request data: "
+                f"{session_content}",
+            )
+        else:
+            session_content = []
+            logger.error(
+                "No session_content found in request data and "
+                "get_messages_by_session_id returned empty list",
+            )
     if action_value == "TASK_STOP":
         memory_type = "tool_memory"
     else:
@@ -80,7 +80,7 @@ class ToolMemory(BaseMemory):
     ) -> Any | bool:
         if not self.inited:
             await self.__aenter__()
-        uid = "alias"
+        uid = uid or "alias"
 
         # Use tool_names if provided, otherwise use query as tool_names
         tool_names = query
@@ -273,7 +273,7 @@ class ToolMemory(BaseMemory):
     ):
         if not self.inited:
             await self.__aenter__()
-        uid = "alias"
+        uid = uid or "alias"
 
         logger.info(
             f"record_action called with: uid={uid}, action={action}, "
@@ -213,8 +213,7 @@ class AsyncUserProfilingMemory(
         Args:
             uid (str): User ID.
             content: Messages to add.
-            session_id (Optional[str]): Session ID. If not provided, a new
-                UUID will be generated.
+            session_id (Optional[str]): Session ID.
             metadata (Optional[dict]): Additional metadata for the messages.
             **kwargs: Other keyword arguments passed to the candidate pool.
         Returns:
@@ -223,9 +222,7 @@ class AsyncUserProfilingMemory(
         now = datetime.datetime.now(pytz.timezone("US/Pacific")).isoformat()
         metadata = deepcopy(metadata) if metadata else {}
         if "session_id" not in metadata:
-            metadata["session_id"] = (
-                session_id if session_id else str(uuid.uuid4())
-            )
+            metadata["session_id"] = session_id if session_id else ""
         else:
             if session_id and metadata["session_id"] != session_id:
                 raise ValueError(
@@ -1393,6 +1390,7 @@ class AsyncUserProfilingMemory(
             session_id=session_id,
             action_message_id=action_message_id,
         )
+        logger.info(f"record_chat results: {results}")
         return results
 
     async def _parallel_add_to_pools(
@@ -1757,6 +1755,7 @@ class AsyncUserProfilingMemory(
     ):
         """Optimized record_chat using parallel processing"""
         preference_type = preference_message.get("type")
+        logger.info(f"record_chat preference_type: {preference_type}")
         if preference_type == "irrelevant":
             return {
                 "message": (
@@ -3,6 +3,7 @@
 from http import HTTPStatus
 from typing import Optional
 from loguru import logger
+import httpx
 from alias.server.core.config import settings
 from alias.server.exceptions.base import ServiceError
 from alias.server.exceptions.service import MemoryServiceError
@@ -13,6 +14,68 @@ from .base_client import BaseClient
 class MemoryClient(BaseClient):
     base_url: Optional[str] = settings.USER_PROFILING_BASE_URL
 
+    @classmethod
+    async def is_available(cls) -> bool:
+        """
+        Check if memory service is available and healthy.
+
+        Returns:
+            True if memory service is configured and can be reached,
+            False otherwise
+        """
+        if settings.USER_PROFILING_BASE_URL is None:
+            return False
+
+        # Check if the service is actually reachable by pinging the health
+        # endpoint
+        try:
+            health_url = (
+                f"{settings.USER_PROFILING_BASE_URL.rstrip('/')}/health"
+            )
+            async with httpx.AsyncClient(timeout=5.0) as client:
+                response = await client.get(health_url)
+                if response.status_code == HTTPStatus.OK:
+                    logger.debug(
+                        f"Memory service health check passed: {health_url}",
+                    )
+                    return True
+                else:
+                    logger.warning(
+                        f"Memory service health check failed with status "
+                        f"{response.status_code}: {health_url}",
+                    )
+                    return False
+        except httpx.TimeoutException:
+            logger.warning(
+                f"Memory service health check timeout: "
+                f"{settings.USER_PROFILING_BASE_URL}",
+            )
+            return False
+        except httpx.RequestError as e:
+            logger.warning(
+                f"Memory service health check failed: {e}",
+            )
+            return False
+        except Exception as e:
+            logger.warning(
+                f"Unexpected error during memory service health check: {e}",
+            )
+            return False
+
+    @classmethod
+    def is_configured(cls) -> bool:
+        """
+        Check if memory service is configured (synchronous, no network call).
+
+        Note: This only checks if the service URL is configured, it does NOT
+        verify that the service is actually available or healthy. For actual
+        health checks, use the async is_available() method instead.
+
+        Returns:
+            True if memory service URL is configured, False otherwise
+        """
+        return settings.USER_PROFILING_BASE_URL is not None
+
     async def record_action(
         self,
         action: "Action",  # noqa: F821
@@ -26,7 +89,7 @@ class MemoryClient(BaseClient):
         try:
             response = await self._request(
                 method="POST",
-                path="user_profiling/record_action",
+                path="alias_memory_service/record_action",
                 headers=headers,
                 data=action,
             )
@@ -46,3 +109,179 @@ class MemoryClient(BaseClient):
         except Exception as e:
             logger.error(e)
             raise MemoryServiceError(message=str(e)) from e
+
+    async def retrieve_user_profiling(
+        self,
+        uid: str,
+        query: str,
+        limit: int = 3,
+        threshold: float = 0.3,
+    ) -> Optional[str]:
+        """
+        Retrieve user profiling information based on query.
+        Only items with is_confirmed == 1 will be retrieved and returned.
+
+        Args:
+            uid: User ID
+            query: Query string to search for relevant profiling
+
+        Returns:
+            String containing retrieved profiling data (only confirmed items),
+            or None if service is unavailable or no confirmed items found
+        """
+        if self.base_url is None:
+            return None
+        headers = {
+            "Content-Type": "application/json",
+            "Accept": "application/json",
+        }
+        try:
+            response = await self._request(
+                method="POST",
+                path="alias_memory_service/user_profiling/retrieve",
+                headers=headers,
+                data={
+                    "uid": uid,
+                    "query": query,
+                    "limit": limit,
+                    "threshold": threshold,
+                },
+            )
+            if response.status_code == HTTPStatus.OK:
+                result = response.json()
+                profiling_result_tmp = (
+                    result.get("data").get("profiling").get("results")
+                )
+
+                profiling_result = None
+                if profiling_result_tmp and len(profiling_result_tmp) > 0:
+                    profiling_result = "\n".join(
+                        [
+                            item["memory"]
+                            for item in profiling_result_tmp
+                            if item.get("metadata", {}).get("is_confirmed")
+                            == 1
+                        ],
+                    )
+                    if profiling_result:  # Only log if there's actual content
+                        logger.debug(f"Profiling result: {profiling_result}")
+                    else:
+                        profiling_result = None
+
+                return profiling_result
+            else:
+                logger.warning(
+                    f"Memory Service retrieve error: {response.status_code} - "
+                    f"{response.text}",
+                )
+                return None
+        except ServiceError as e:
+            logger.warning(f"Memory Service retrieve error: {e}")
+            return None
+        except Exception as e:
+            logger.warning(f"Unexpected error retrieving profiling: {e}")
+            return None
+
+    async def retrieve_tool_memory(
+        self,
+        uid: str,
+        query: str,
+    ) -> Optional[str]:
+        """
+        Retrieve tool memory information based on query.
+
+        Args:
+            uid: User ID
+            query: Query string to search for relevant tool memory
+                (e.g., "web_search,write_file" for specific tools)
+
+        Returns:
+            String containing retrieved tool memory answer, or None if
+            service is unavailable
+
+        Example:
+            retrieve_tool_memory(uid="user123", query="web_search,write_file")
+        """
+        if self.base_url is None:
+            return None
+        headers = {
+            "Content-Type": "application/json",
+            "Accept": "application/json",
+        }
+        try:
+            response = await self._request(
+                method="POST",
+                path="alias_memory_service/tool_memory/retrieve",
+                headers=headers,
+                data={"uid": uid, "query": query},
+            )
+            if response.status_code == HTTPStatus.OK:
+                result = response.json()
+                return result.get("data")
+            else:
+                logger.warning(
+                    f"Memory Service retrieve tool memory error: "
+                    f"{response.status_code} - {response.text}",
+                )
+                return None
+        except ServiceError as e:
+            logger.warning(f"Memory Service retrieve tool memory error: {e}")
+            return None
+        except Exception as e:
+            logger.warning(
+                f"Unexpected error retrieving tool memory: {e}",
+            )
+            return None
+
+    async def add_to_longterm_memory(
+        self,
+        uid: str,
+        content: list,
+        session_id: Optional[str] = None,
+    ) -> Optional[str]:
+        """
+        Add content to user profiling.
+
+        Args:
+            uid: User ID
+            content: Content to add to user profiling
+            session_id: Session ID
+
+        Returns:
+            String containing the result of the add operation, or None if
+            service is unavailable
+        """
+        if self.base_url is None:
+            return None
+        headers = {
+            "Content-Type": "application/json",
+            "Accept": "application/json",
+        }
+        try:
+            response = await self._request(
+                method="POST",
+                path="alias_memory_service/user_profiling/add",
+                headers=headers,
+                data={
+                    "uid": uid,
+                    "content": content,
+                    "session_id": session_id,
+                },
+            )
+            if response.status_code == HTTPStatus.OK:
+                result = response.json()
+                return result.get("data")
+            else:
+                logger.warning(
+                    f"Memory Service add to user profiling error: "
+                    f"{response.status_code} - {response.text}",
+                )
+                return None
+        except ServiceError as e:
+            logger.warning(f"Memory Service add to user profiling error: {e}")
+            return None
+        except Exception as e:
+            logger.warning(
+                f"Unexpected error adding to user profiling: {e}",
+            )
+            return None
@@ -25,6 +25,8 @@ from alias.server.middleware.request_context_middleware import (
 )
 from alias.server.core.task_manager import task_manager
 
+from alias.server.utils.logger import setup_logger
+
 
 def custom_generate_unique_id(route: APIRoute) -> str:
     return f"{route.tags[0]}-{route.name}"
@@ -35,6 +37,7 @@ async def lifespan(_app: FastAPI):
     """Application lifespan manager."""
     # Startup
    print("🚀 Starting Alias API Server...")
+    setup_logger()
     await initialize_database()
     await task_manager.start()
     await redis_client.ping()
@@ -27,6 +27,7 @@ class OperationRecord(SQLModel):
 
 class QueryRecord(SQLModel):
     query: Optional[str] = None
+    session_content: Optional[list[Any]] = None
 
 
 class Action(SQLModel):
@@ -159,6 +160,7 @@ class ChatAction(Action):
         query: Optional[str] = None,
         chat_type: Optional[ChatType] = None,
         history_length: int = 0,
+        session_content: Optional[list[dict]] = None,
     ):
         action_type = cls._resolve_action_type(chat_type, history_length)
         return cls(
@@ -168,6 +170,7 @@ class ChatAction(Action):
             message_id=message_id,
             data=QueryRecord(
                 query=query,
+                session_content=session_content,
             ),
         )
 
@@ -185,6 +188,7 @@ class TaskStopAction(Action):
         user_id: uuid.UUID,
         conversation_id: uuid.UUID,
         task_id: uuid.UUID,
+        data: Optional[dict] = None,
    ):
         action_type = cls._resolve_action_type()
         return cls(
@@ -192,4 +196,5 @@ class TaskStopAction(Action):
             session_id=conversation_id,
             action_type=action_type,
             task_id=task_id,
+            data=data,
         )
@@ -42,6 +42,7 @@ class ChatRequest(SQLModel):
     language_type: Optional[LanguageType] = LanguageType.EN_US
     chat_mode: Optional[ChatMode] = ChatMode.GENERAL
     roadmap: Optional[RoadmapChange] = None
+    use_long_term_memory_service: Optional[bool] = False
 
 
 class ContinueChatRequest(SQLModel):
@@ -16,6 +16,7 @@ class SessionEntity(SQLModel):
     chat_type: Optional[ChatType] = ChatType.TASK
     query: Optional[str] = None
     roadmap: Optional[RoadmapChange] = None
+    use_long_term_memory_service: Optional[bool] = False
 
     def ids(self):
         return {
@@ -131,6 +131,9 @@ class ChatService:
         chat_mode = chat_request.chat_mode
         language_type = chat_request.language_type
         roadmap = chat_request.roadmap
+        use_long_term_memory_service = (
+            chat_request.use_long_term_memory_service or False
+        )
 
         async with session_scope() as session:
             message_service = MessageService(session=session)
@@ -178,6 +181,7 @@ class ChatService:
             chat_type=chat_type,
             query=query,
             roadmap=roadmap,
+            use_long_term_memory_service=use_long_term_memory_service,
         )
 
         event_manager = EventManager(
@@ -74,10 +74,7 @@ def setup_logger():
             rotation=settings.LOG_ROTATION,
             retention=log_retention,
             enqueue=True,
+            mode="a",
         )
     except Exception as e:
         logger.error(f"Logger setup failed: {e}", exc_info=True)
-
-
-# Configure logger when the module is imported
-setup_logger()