Fix in learn to ask (#107)

chenyushuo
2026-01-20 14:13:20 +08:00
committed by GitHub
parent b843abea93
commit 311ddfff46
2 changed files with 2 additions and 2 deletions

@@ -289,6 +289,8 @@ Also, make sure to update the `model_path` in `tuner/learn_to_ask/main.py` to po
> 🔗 Learn more about Tinker Backend: [Tinker Backend Documentation](https://agentscope-ai.github.io/Trinity-RFT/en/main/tutorial/example_tinker_backend.html)
> In the provided example, training is configured for 4 epochs. When using Tinker, total token consumption is approximately 112 million tokens, for an estimated cost of about 18 USD.
### Launch Training
```bash
python tuner/learn_to_ask/main.py

@@ -249,7 +249,6 @@ if __name__ == "__main__":
temperature=1.0,
tensor_parallel_size=1,
inference_engine_num=4,
reasoning_parser=None,
)
aux_models = {
AUXILIARY_MODEL_NAME: TunerModelConfig(
@@ -259,7 +258,6 @@ if __name__ == "__main__":
temperature=0.7,
tensor_parallel_size=2,
inference_engine_num=1,
reasoning_parser=None,
),
}
algorithm = AlgorithmConfig(
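For context, the model configuration in `tuner/learn_to_ask/main.py` after this commit reads roughly as follows. This is a sketch reconstructed from the diff context only: `TunerModelConfig`, `AlgorithmConfig`, and `AUXILIARY_MODEL_NAME` come from the Trinity-RFT example and their full signatures are assumed, not shown here. The commit's change is simply the removal of the two `reasoning_parser=None` arguments.

```python
# Sketch of the post-commit configuration (assumed imports/signatures
# from the Trinity-RFT example; not a complete, runnable script).
main_model = TunerModelConfig(
    temperature=1.0,
    tensor_parallel_size=1,
    inference_engine_num=4,
    # reasoning_parser=None  <- removed by this commit
)
aux_models = {
    AUXILIARY_MODEL_NAME: TunerModelConfig(
        temperature=0.7,
        tensor_parallel_size=2,
        inference_engine_num=1,
        # reasoning_parser=None  <- removed by this commit
    ),
}
algorithm = AlgorithmConfig(
    # ... unchanged by this commit
)
```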