if model_size == "faster-whisper-large-v3-turbo-ct2":
    model_path = "tools/asr/models/faster-whisper-large-v3-turbo-ct2"
if language == "auto":
    language = None  # leave the language unset; the model outputs the most probable language itself
print("loading faster whisper model:", model_size, model_path)
...
Passing "faster-whisper-large-v3-turbo" as the model size fails with: Invalid model size 'faster-whisper-large-v3-turbo', expected one of: tiny.en, tiny, base.en, base, small.en, small, medium.en, medium, large-v1, large-v2, large-v3, large, distil-large-v2, distil-medium.en, distil-small.en, distil-large-v3 ...
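One way around this error, sketched below: faster-whisper rejects unrecognized *size names*, but `WhisperModel` also accepts a local CTranslate2 model directory, so a converted turbo model can be loaded by path instead of by name. The `resolve_model` helper, the `models_root` default, and the directory layout are illustrative assumptions, not part of the library.

```python
import os

# Recognized size aliases, per the error message above.
VALID_SIZES = {
    "tiny.en", "tiny", "base.en", "base", "small.en", "small",
    "medium.en", "medium", "large-v1", "large-v2", "large-v3", "large",
    "distil-large-v2", "distil-medium.en", "distil-small.en", "distil-large-v3",
}

def resolve_model(model_size: str, models_root: str = "tools/asr/models") -> str:
    """Return a recognized size alias, or a local CT2 model directory as a fallback."""
    if model_size in VALID_SIZES:
        return model_size
    # Hypothetical fallback: a locally converted model such as
    # tools/asr/models/faster-whisper-large-v3-turbo-ct2
    local = os.path.join(models_root, model_size)
    if os.path.isdir(local):
        return local
    raise ValueError(f"unknown model size and no local model directory: {model_size}")

# With faster-whisper installed, the resolved value can be passed directly:
#   model = WhisperModel(resolve_model(model_size), device="cuda", compute_type="float16")
```

Passing a directory path sidesteps the alias check entirely, so no change to the library's size list is needed.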
Supports the new large-v3-turbo model. The VAD filter now runs 3x faster on CPU. Feature extraction is now 3x faster. Added log_progress to WhisperModel.transcribe to print transcription progress. Added a multilingual transcription option that allows transcribing multilingual audio; note that the large models can already code-switch, so this benefits the medium and smaller models most. WhisperModel...
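A minimal sketch of how the two new transcribe options above might be used, assuming faster-whisper >= 1.1.0 is installed; the function name, the CPU/int8 settings, and the audio path are illustrative assumptions.

```python
def transcribe_with_progress(audio_path: str, model_name: str = "large-v3-turbo"):
    """Transcribe one file, printing progress and allowing mixed-language audio."""
    # Deferred import so the sketch can be read/loaded without the package installed.
    from faster_whisper import WhisperModel

    model = WhisperModel(model_name, device="cpu", compute_type="int8")
    segments, info = model.transcribe(
        audio_path,
        log_progress=True,   # added in 1.1.0: print transcription progress
        multilingual=True,   # added in 1.1.0: transcribe multilingual audio
    )
    # segments is produced lazily; materialize (start, end, text) tuples.
    return [(s.start, s.end, s.text) for s in segments], info

# Usage (requires a real audio file):
#   results, info = transcribe_with_progress("audio.mp3")
```

Per the changelog note above, `multilingual` matters most for medium and smaller models, since the large models already handle code-switching.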
Generate bilingual subtitles with one click based on Faster-whisper and modelscope; an offline large-model bilingual subtitle generator. Modelscope_Faster_Whisper_Multi_Subtitle/test_turbo.py at main · v3u
Whisper-large-v3-turbo is an efficient automatic speech recognition model by OpenAI, featuring 809 million parameters and significantly faster than its predecessor, Whisper large-v3. (GitHub: inferless/Whisper-large-v3-turbo)
faster_whisper_GUI/config.py (1 change: 1 addition & 0 deletions):

@@ -127,6 +127,7 @@
     "large-v1",
     "large-v2",
     "large-v3",
+    "large-v3-turbo",
     "distil-large-v3",
     "distil-large-v2",
     "distil-medium.en",

18 changes: ...
test_turbo.py, utils.py, 生成英文配音.bat ("generate English dubbing"), 运行.bat ("run"), README, MIT license. Modelscope_Faster_Whisper_Multi_Subtitle: generate bilingual subtitles with one click based on Faster-whisper and modelscope, using offline large models ...
Faster-whisper:

from faster_whisper import WhisperModel

model_size = "large-v3"

# Run on GPU with FP16
model = WhisperModel(model_size, device="cuda", compute_type="float16")

# or run on GPU with INT8
# model = WhisperModel(model_size, device="cuda", compute_type="int8_float16")
...
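The snippet above typically continues by calling transcribe and iterating the lazily produced segments; a sketch wrapped in a helper function (the function name and beam size are illustrative assumptions, and `language=None` corresponds to the 'auto' case handled earlier).

```python
def detect_and_transcribe(audio_path: str, model) -> None:
    """Transcribe one file with a faster-whisper WhisperModel, reporting the detected language."""
    # With language=None, faster-whisper auto-detects the spoken language.
    segments, info = model.transcribe(audio_path, beam_size=5, language=None)
    print(f"Detected language '{info.language}' with probability {info.language_probability:.2f}")
    # segments is a generator: transcription actually runs while iterating.
    for segment in segments:
        print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")

# Usage (requires a loaded model and a real audio file):
#   detect_and_transcribe("audio.mp3", model)
```

Because the segment generator is lazy, nothing is transcribed until the loop consumes it, which keeps memory use low on long recordings.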
Upgraded faster-whisper to version 1.1.0; for details see: https://github.com/SYSTRAN/faster-whisper/releases
Multilingual mode; large-v3-turbo model support ...
Fixed WhisperX-related bugs; fixed UI issues with the VAD threshold option.
Note: the installer is too large to upload as a single file, so the release now ships as a split self-extracting archive; ffmpeg must be installed. 0.8.1 Changes Upgrade faster...