Cloned the codebase with `git clone` from https://github.com/suno-ai/bark, created a Python 3.9 env, and ran `pip install .`, then tested basic functionality as follows:

```python
from transformers import AutoProcessor, BarkModel

hgmodelname = "suno/bark-small"
processor = AutoProcessor.from_pretrained(hgmodelname)
model = BarkModel.from_pretr...
```
```python
from bark import SAMPLE_RATE, generate_audio, preload_models
from IPython.display import Audio

# download and load all models
preload_models(
    text_use_small=True,
    coarse_use_small=True,
    fine_use_gpu=False,
    fine_use_small=True,
)

# generate audio from text
...
```
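Once `generate_audio` returns an array of float samples, it can be written to disk without extra dependencies. A minimal sketch using only the standard-library `wave` module; the 24 kHz rate, the int16 conversion, and the placeholder sine tone are assumptions standing in for Bark's actual output (`write_wav` is a hypothetical helper, not part of `bark`):

```python
import array
import math
import wave

def write_wav(path, samples, sample_rate=24000):
    """Write float samples in [-1, 1] to a 16-bit mono WAV file."""
    # Clip to [-1, 1] and scale to int16 PCM.
    pcm = array.array("h", (int(max(-1.0, min(1.0, s)) * 32767) for s in samples))
    with wave.open(path, "wb") as f:
        f.setnchannels(1)        # mono
        f.setsampwidth(2)        # 16-bit samples
        f.setframerate(sample_rate)
        f.writeframes(pcm.tobytes())

# Placeholder 1-second 440 Hz tone standing in for generate_audio() output.
tone = [0.1 * math.sin(2 * math.pi * 440 * n / 24000) for n in range(24000)]
write_wav("out.wav", tone)
```

In a real run you would pass the array returned by `generate_audio(...)` and Bark's `SAMPLE_RATE` instead of the placeholder tone.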
```python
    TextToSpeechService class.

    Args:
        device (str, optional): The device to be used for the model, either
            "cuda" if a GPU is available or "cpu". Defaults to "cuda" if
            available, otherwise "cpu".
    """
    self.device = device
    self.processor = AutoProcessor.from_pretrained("suno/bark-small")
    self....
```
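The device default the docstring describes can be computed with a small helper. A sketch (`pick_device` is a hypothetical name; it falls back to CPU when PyTorch isn't installed at all):

```python
def pick_device():
    """Return "cuda" when a GPU is visible to PyTorch, otherwise "cpu"."""
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        # torch not installed: only CPU inference is possible
        return "cpu"

device = pick_device()
```

The result would then be passed as the `device` argument when constructing the service.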
Added another simple option: setting the env var SUNO_USE_SMALL_MODELS=True loads the smaller models, which will probably fit on an 8 GB card. We haven't implemented quantization yet. As for requirements, it would be great if people with the relevant cards could confirm (since it also depends on e.g. bf16 su...
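The flag has to be in the environment before `bark` loads its checkpoints, so set it at the top of the script. A sketch (assuming, as described above, that the variable is read at model-load time):

```python
import os

# Must be set before preload_models()/generate_audio() pull the checkpoints.
os.environ["SUNO_USE_SMALL_MODELS"] = "True"

# Imported afterwards, so the flag is visible to bark:
# from bark import preload_models
# preload_models()
```

Exporting it in the shell (`SUNO_USE_SMALL_MODELS=True python script.py`) achieves the same thing.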