Faster-Whisper large-v3 tuning dimensions: performance (batch size, learning rate, epochs), hardware (GPU type, memory utilization), and data (data augmentation, dataset quality). In practical tuning, a Python script along these lines may be used: import torch; from transformers import WhisperForCon
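As a minimal sketch of the tuning dimensions listed above, the knobs can be collected into one config object. All parameter names here are illustrative assumptions, not the project's actual API:

```python
# Hypothetical tuning configuration for a Whisper fine-tuning run.
# Field names are illustrative assumptions, not a real library API.
from dataclasses import dataclass

@dataclass
class TuningConfig:
    # Performance knobs
    batch_size: int = 16
    learning_rate: float = 1e-5
    epochs: int = 3
    # Hardware knobs (illustrative)
    gpu_type: str = "A100"
    memory_utilization: float = 0.9   # target fraction of GPU memory
    # Data knobs
    data_augmentation: bool = True
    dataset_quality_threshold: float = 0.8  # hypothetical filter score

    def steps_per_epoch(self, num_samples: int) -> int:
        """Optimizer steps per epoch at this batch size (ceiling division)."""
        return (num_samples + self.batch_size - 1) // self.batch_size

cfg = TuningConfig()
print(cfg.steps_per_epoch(10_000))  # 625
```

Grouping the knobs this way makes a sweep over batch size or learning rate a matter of constructing a few configs rather than editing the training script.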
Generate bilingual subtitles with one click based on Faster-Whisper and ModelScope: a bilingual subtitle generator built on an offline large model, now integrating faster-whisper-large-v3-turbo-ct2 · v3ucn/Modelscope_Faster_Whis
Setup: choco install ffmpeg; download whisper-large-v3-turbo via python3 download_model.py; then ollama run qwen2:7b and ollama serve. Currently supported bilingual subtitle pairs: Chinese-English, English-Chinese, Japanese-Chinese, and Korean-Chinese. Special thanks to the following projects and contributors.
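The setup steps above can be sketched as a scripted sequence. The commands come from the snippet itself; the helper function and its dry-run behavior are assumptions for illustration:

```python
# Scripted version of the setup steps; the commands are taken from the
# text above, the run_setup() helper itself is hypothetical.
import subprocess

SETUP_STEPS = [
    ["choco", "install", "ffmpeg"],     # install ffmpeg (Windows/Chocolatey)
    ["python3", "download_model.py"],   # download whisper-large-v3-turbo
    ["ollama", "run", "qwen2:7b"],      # pull/run the translation LLM
    ["ollama", "serve"],                # start the local ollama server
]

def run_setup(dry_run: bool = True) -> None:
    """Execute the setup steps in order; with dry_run just print them."""
    for cmd in SETUP_STEPS:
        if dry_run:
            print("would run:", " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)

run_setup(dry_run=True)
```

Note that ollama serve blocks, so in a real script it would be started in the background before the subtitle pipeline runs.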
Note that the command above used the original Whisper implementation; now add the --whisper_implementation faster-whisper flag to use the improved faster-whisper backend: python cli.py --whisper_implementation faster-whisper --model large-v2 --vad silero-vad --language Japanese --output_dir d:/whisper_model d:/Downloads/test.mp4 The program returns: Running whisper ...
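Because the flags in the scraped text had lost their spacing, the invocation can be rebuilt unambiguously as an argument list (the flag names and values are taken verbatim from the snippet; running it via subprocess is just one way to launch it):

```python
# Reconstructed cli.py invocation as an explicit argument list, so each
# flag/value pair is unambiguous. Flags are as given in the text above.
import subprocess

args = [
    "python", "cli.py",
    "--whisper_implementation", "faster-whisper",
    "--model", "large-v2",
    "--vad", "silero-vad",
    "--language", "Japanese",
    "--output_dir", "d:/whisper_model",
    "d:/Downloads/test.mp4",
]
print(" ".join(args))
# subprocess.run(args, check=True)  # uncomment to actually run
```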
The latest PotPlayer beta adds the faster-large-v3-turbo model, combining quality and speed. On top of that, it can call an online ChatGPT-compatible large-model translation API, which greatly relieves the GPU pressure of the earlier locally deployed large-model approach and lets even low- and mid-range machines do efficient, high-quality speech translation. PotPlayer website: https://potplayer.tv PotPlayer beta download: https://www.videohelp....
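A "ChatGPT-compatible" translation API just means an OpenAI-style /chat/completions endpoint. A minimal sketch of building such a request follows; the base URL, model name, and system prompt are placeholders (here pointed at a local ollama server as an assumption), not PotPlayer's actual implementation:

```python
# Sketch of an OpenAI-style /chat/completions translation request.
# base_url, model, and the prompt are illustrative assumptions.
import json
import urllib.request

def build_translation_request(text: str,
                              base_url: str = "http://localhost:11434/v1",
                              model: str = "qwen2:7b") -> urllib.request.Request:
    """Build (but do not send) a chat-completions request translating text."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Translate the subtitle line into Chinese."},
            {"role": "user", "content": text},
        ],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_translation_request("Hello, world")
print(req.full_url)
```

Any server that speaks this request shape (ollama, vLLM, a hosted API) can be plugged into a player or subtitle tool that expects a ChatGPT-compatible endpoint.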
Sharing an open-source tool for extracting transcripts from video and audio. Today I'm sharing a free, open-source transcript extraction tool. Source: https://github.com/v3ucn/Modelscope_Faster_Whisper_Multi_Subtitle Based on the open-source code, I built a - posted by Ai剪辑助手 on Douyin, 2024-11-06.
https://huggingface.co/guillaumekln/faster-whisper-large-v2 It is recommended to download only the faster-whisper-large-v2 model, i.e. the second version of the large model: since faster-whisper is already faster than whisper, the advantage of using the large model becomes all the more pronounced. Place the model in the faster-whisper directory under the models folder; the final directory structure looks like this: ...
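The directory layout described above can be sketched with pathlib. The models/faster-whisper path comes from the text; the subfolder name and helper function are assumptions, and the files inside the model folder are elided:

```python
# Sketch of the expected models directory layout. The models/faster-whisper
# path is from the text; prepare_model_dir() itself is hypothetical.
import tempfile
from pathlib import Path

def prepare_model_dir(root: str) -> Path:
    """Create models/faster-whisper/faster-whisper-large-v2 under root."""
    model_dir = (Path(root) / "models" / "faster-whisper"
                 / "faster-whisper-large-v2")
    model_dir.mkdir(parents=True, exist_ok=True)
    return model_dir

d = prepare_model_dir(tempfile.mkdtemp())
print(d)
```

The downloaded model files (model.bin, config.json, tokenizer files, etc.) would then be placed inside that innermost folder.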
Generate bilingual subtitles with one click based on Faster-Whisper and ModelScope (offline large model) - Modelscope_Faster_Whisper_Multi_Subtitle/faster-whisper-large-v3-turb
modified the milestones: v0.10.2, v0.10.3, v0.11.0 on Apr 19, 2024; codingl2k1 mentioned this on Apr 29, 2024; qinxuye closed this as completed in #1402 on May 7, 2024.