If the deployment environment is Linux, installing fast_tokenizer is recommended: it provides more efficient text processing and further improves serving performance. Installation on Windows is not supported yet and is planned for the next release.

```shell
pip install fast-tokenizer-python
```
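To confirm the wheel installed correctly and that PaddleNLP actually picks up the fast implementation, something like the sketch below can be used. This is a hedged sketch, not taken from this section: the importable module name `fast_tokenizer`, the `use_fast` flag of `paddlenlp.transformers.AutoTokenizer.from_pretrained` (spelled `use_faster` in some older PaddleNLP releases), and the model name `"ernie-3.0-medium-zh"` are assumptions based on PaddleNLP's public documentation.

```python
# Hedged sketch: verify the fast_tokenizer install and ask PaddleNLP for the fast
# (C++) tokenizer. Assumptions, not from this section: the package installed by
# `pip install fast-tokenizer-python` is importable as `fast_tokenizer`, and
# AutoTokenizer accepts `use_fast=True` (`use_faster=True` in older releases).
import fast_tokenizer
print("fast_tokenizer loaded from:", fast_tokenizer.__file__)

from paddlenlp.transformers import AutoTokenizer

# Request the fast tokenizer implementation for an ERNIE model as an example.
tokenizer = AutoTokenizer.from_pretrained("ernie-3.0-medium-zh", use_fast=True)
print(tokenizer("fast_tokenizer is recommended for Linux deployments"))
```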
Due to limited staffing, fast-tokenizer is no longer being updated; contributions from developers are welcome. https://github.com/PaddlePaddle/Paddle...
1. I installed FastChat using Method 2 (from source).
2. I executed `python3 -m fastchat.model.apply_delta --base /path/to/llama-13b --target /output/path/to/vicuna-13b --delta lmsys/vicuna-13b-delta-v0` (see the sketch after this list for what this step does conceptually).
3. The following error was raised, so I checked the transformers project (https://github....
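For context on step 2: conceptually, apply_delta loads the base LLaMA weights, adds the published delta tensors onto them, and saves the merged model to the target directory. The sketch below only illustrates that idea with plain transformers/torch calls; it is not FastChat's actual implementation, the placeholder paths are the same ones used in the command above, and it assumes the base and delta checkpoints share parameter names and shapes.

```python
# Hedged illustration of the delta-weight merge that `fastchat.model.apply_delta`
# performs; this is not FastChat's code. The paths are the placeholders from the
# command above, and identical parameter names/shapes are assumed for base and delta.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("/path/to/llama-13b", torch_dtype=torch.float16)
delta = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-13b-delta-v0", torch_dtype=torch.float16)

# Add each delta tensor onto the corresponding base weight in place.
delta_state = delta.state_dict()
with torch.no_grad():
    for name, param in base.state_dict().items():
        param += delta_state[name]

# Save the merged weights plus the delta checkpoint's tokenizer to the target directory.
base.save_pretrained("/output/path/to/vicuna-13b")
AutoTokenizer.from_pretrained("lmsys/vicuna-13b-delta-v0").save_pretrained("/output/path/to/vicuna-13b")
```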