I can install everything fine, but when I run the ingest.py file I get a "Segmentation fault (core dumped)" error, right after it prints "Using embedded DuckDB with persistence: data will be stored in: db".

Expected behavior
I assumed it would take a few seconds to complete ...
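A crash at that point typically happens inside a native extension (here, the embedded DuckDB engine), so the interpreter dies with no Python traceback at all. A minimal sketch, assuming the crash is reachable from ingest.py, is to enable faulthandler before anything else so the Python call stack is still printed when SIGSEGV arrives:

```python
# Hedged sketch: enable faulthandler at the very top of ingest.py so a native
# crash (SIGSEGV, SIGABRT, ...) still dumps the Python call stack to stderr.
import faulthandler

faulthandler.enable()              # installs handlers for SIGSEGV, SIGFPE, SIGABRT, SIGBUS
assert faulthandler.is_enabled()   # sanity check that the handler is active

# ... the rest of ingest.py (the DuckDB/embedding imports, etc.) follows here
```

The dump identifies which native call the interpreter was inside, which narrows the fault down to a specific library.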
I tried installing via the CLI as well as from .git. nvcc is included in PATH, I can run nvidia-smi, and the CUDA version is 11.7.

(textgen) [root@pve0 llama.cpp]# ./main -m ../text-generation-webui/models/Marx-3B-V2-Q4_1-GGUF.gguf -n 128
Segmentation fault (core dumped)
(textgen) [root@pve0 llama...
Segmentation fault (core dumped)

Steps To Reproduce
I have a flake-based system. I tried installing tabby with the following variants:

tabby
(tabby.override { acceleration = "cuda"; })
((tabby.override { acceleration = "cuda"; }).overrideAttrs (oldAttrs: rec { version = "0.15.0";...
I am also using libllama.so built from the latest llama.cpp source, so I can debug it with gdb.

AMD Ryzen 5 3600 6-Core Processor + RX 580 4 GB
Vendor ID:           AuthenticAMD
Model name:          AMD Ryzen 5 3600 6-Core Processor
CPU family:          23
Model:               113
Thread(s) per core:  2
Core(s)...
However, a segmentation fault occurred when Context Shifting erased tokens, i.e.:

[Context Shifting: Erased 49 tokens at position 2719]

The stack trace recorded by systemd is very short:

PID: 384139 (koboldcpp-linux)
UID: 1000 (tuantran1632001)
GID: 1000 (tuantran1632001)
Signal: 11 (SEGV)...
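When systemd only captures a short stack, it helps to make sure a full core file can actually be written before reproducing the crash. A minimal sketch of raising the soft core-dump limit from a Python launcher (this only raises the soft limit up to whatever the hard limit already permits):

```python
# Hedged sketch: raise the soft RLIMIT_CORE to the hard limit so the kernel
# is allowed to write a full core file for the next crash in this process
# tree. Child processes (e.g. a koboldcpp binary we exec) inherit the limit.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))
print(resource.getrlimit(resource.RLIMIT_CORE))
```

With the limit raised, the full dump should then be retrievable for inspection in gdb instead of only the truncated systemd summary.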
Any updates on this? Stable Diffusion and LLM training (like LLaMa and Mistral) work without a hitch with the usual ROCm PyTorch installations, but this just exits with "Aborted (core dumped)" no matter what I try. PyTorch+ROCm-5.4.2 fails to see my 7900XTX in the UI, then tries to use DML, ...
Segmentation fault

Current thread 0x000000020330bac0 (most recent call first):
  File "/Users/lama/workspace/arrow-new/python/pyarrow/tests/test_dataset.py", line 5645 in test_make_write_options_error
  File "/Users/lama/anaconda3/envs/pyarrow-dev-310/lib/python3.10/site-packages/_pytest/python....
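The "Current thread ... (most recent call first)" block is faulthandler output (pytest enables faulthandler by default), printed at the instant the native code crashed. The same style of dump can be produced on demand, which is useful for confirming what the format looks like and where the interpreter is at any point; a minimal sketch:

```python
# Hedged sketch: produce a faulthandler-style stack dump on demand.
# dump_traceback() needs a file object with a real file descriptor,
# so we use a temporary file rather than io.StringIO.
import faulthandler
import tempfile

with tempfile.TemporaryFile(mode="w+") as f:
    faulthandler.dump_traceback(file=f, all_threads=True)
    f.seek(0)
    dump = f.read()

# The dump has the same shape as the crash report above:
#   Current thread 0x... (most recent call first):
#     File "...", line ... in ...
print(dump)
```

In a crash like this one, the top frame points at the test (test_make_write_options_error) whose call into the native pyarrow code triggered the fault.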