(.venv) potapov-m1:ai muzhig$ CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DLLAMA_METAL=on" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python
Using pip 23.3.1 from /Users/muzhig/PycharmProjects/ai/.venv/lib/python3.10/site-packages/pip (python 3.10)...
🐛 Describe the bug

I have been trying to JIT trace Whisper using this code:

from transformers import WhisperProcessor, WhisperForConditionalGeneration
from datasets import load_dataset
import torch

# model_name = "whisper-large-v2"
mode...
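The truncated snippet above can be sketched end to end. This is a minimal, hedged illustration, not the reporter's exact code: it uses a randomly initialized, deliberately tiny `WhisperConfig` (all the layer/head sizes below are assumptions chosen so the sketch runs without downloading a checkpoint) and traces only the encoder, since `generate()` contains data-dependent control flow that `torch.jit.trace` cannot capture.

```python
import torch
from transformers import WhisperConfig, WhisperForConditionalGeneration

# Tiny random-weight model so the sketch is self-contained (no checkpoint
# download). The sizes here are illustrative assumptions, not whisper-large-v2.
# torchscript=True makes forward() return plain tuples instead of ModelOutput
# dicts, which is what torch.jit.trace expects.
config = WhisperConfig(
    d_model=64,
    encoder_layers=2,
    decoder_layers=2,
    encoder_attention_heads=2,
    decoder_attention_heads=2,
    encoder_ffn_dim=128,
    decoder_ffn_dim=128,
    torchscript=True,
)
model = WhisperForConditionalGeneration(config)
model.eval()  # disable dropout so the trace check sees deterministic outputs

# Whisper consumes 30 s of audio as an 80-bin log-mel spectrogram:
# shape (batch, num_mel_bins, 3000).
dummy_features = torch.randn(1, config.num_mel_bins, 3000)

# Trace the purely feed-forward encoder; the conv front end halves the
# 3000 input frames to 1500 positions.
traced_encoder = torch.jit.trace(model.model.encoder, dummy_features)

hidden = traced_encoder(dummy_features)[0]
print(hidden.shape)  # torch.Size([1, 1500, 64])
```

The same pattern applies to a real checkpoint by loading it with `from_pretrained(..., torchscript=True)` instead of building a config by hand.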
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9
llama_model_loader: loaded meta data with 20 key-value pairs and 291 tensors from ../text-ge...
path.dirname(__file__), "user-provided-file.txt"))

Windows programs without a console give no errors

For debugging purposes, remove --windows-disable-console, or use the options --windows-force-stdout-spec and --windows-force-stderr-spec with paths as documented for --windows-onefile-tempdir-...