packages/flash_attn/flash_attn_interface.py", line 10, in <module>
    import flash_attn_2_cuda as flash_attn_cuda
ImportError: /home/apus/mambaforge/envs/Qwen/lib/python3.11/site-packages/flash_attn_2_cuda.cpython-311-x86_64-linux-gnu.so: undefined symbol: _ZN3c104cuda9SetDeviceEi
#1061...
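The undefined symbol (_ZN3c104cuda9SetDeviceEi) usually indicates that the compiled flash_attn_2_cuda extension was built against a different PyTorch/CUDA combination than the one in the active environment. A minimal diagnostic sketch, assuming only torch and flash-attn are installed, that prints the build the extension must match:

```python
# Hedged diagnostic sketch: print the torch build the extension must match,
# then try to load the compiled extension directly.
import torch

print("torch:", torch.__version__)            # flash-attn must be built against this
print("torch built with CUDA:", torch.version.cuda)

try:
    import flash_attn_2_cuda                  # the .so that raises the undefined symbol
    print("loaded from:", flash_attn_2_cuda.__file__)
except ImportError as exc:
    # An ABI mismatch like this is typically fixed by reinstalling flash-attn
    # built against the currently installed torch (e.g. pip install flash-attn
    # --no-build-isolation), or by matching the torch version it was built for.
    print("import failed:", exc)
```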
I found I was unable to import flash_attn_cuda after running python setup.py install.

--- details ---
I ran python setup.py install with a prefix pointing to the root dir of flash-attention. I also set PYTHONPATH=$PWD, i.e. the absolute path of the root dir of flash-attention. Any...
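When installing with a custom prefix plus PYTHONPATH, it is easy to end up importing a copy of the package that does not contain the compiled extension. A small sketch, standard library only, to confirm which flash_attn Python actually resolves:

```python
# Sketch: show which flash_attn (if any) is resolved from sys.path.
import importlib.util
import sys

spec = importlib.util.find_spec("flash_attn")
print("flash_attn resolved to:", spec.origin if spec else "not found")

# If this points at the source checkout rather than the installed package,
# the compiled flash_attn_2_cuda extension may be missing from that location.
for entry in sys.path:
    print("sys.path:", entry)
```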
import torch.nn.functional as F
from torch import nn

try:
    import xformers.ops
    MEM_EFFICIENT_ATTN = True
except ImportError:
    MEM_EFFICIENT_ATTN = False


class AttentionBlock(nn.Module):
    """
    An attention block that allows spatial positions to attend to each other.

    Originally ported...
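For context, a minimal sketch of how such a MEM_EFFICIENT_ATTN flag is typically consumed downstream; the attention function below is illustrative only and is not the original AttentionBlock implementation:

```python
# Illustrative sketch only: use xformers' memory-efficient attention when
# available, otherwise fall back to plain softmax attention in PyTorch.
import torch

try:
    import xformers.ops
    MEM_EFFICIENT_ATTN = True
except ImportError:
    MEM_EFFICIENT_ATTN = False

def attention(q, k, v):
    # q, k, v: (batch, seq_len, heads, head_dim), the layout xformers expects
    if MEM_EFFICIENT_ATTN:
        return xformers.ops.memory_efficient_attention(q, k, v)
    scale = q.shape[-1] ** -0.5
    # move heads next to batch for matmul-based attention, then move them back
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    weights = torch.softmax((q @ k.transpose(-2, -1)) * scale, dim=-1)
    return (weights @ v).transpose(1, 2)
```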
env: CUDA 12.3, PyTorch 2.2.2

Failed to import transformers.models.qwen2.modeling_qwen2 because of the following error (look up to see its traceback):
/mnt/pfs/zhangfan/system/anaconda/envs/swift/lib/python3.10/site-packages/flash_attn-2...
py3.8

nero-dv commented May 3, 2024
Please add the results of the following commands after piping them to files: pip freeze > out.txt, echo $PATH > path.txt, and uname -a. It seems that there is no flash_attn.flash_attention module after flas...
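If shell redirection is awkward (for example on Windows), a small standard-library-only sketch that collects the same diagnostics:

```python
# Sketch: gather the same diagnostics (pip freeze, PATH, platform info) from Python.
import os
import platform
import subprocess

with open("out.txt", "w") as fh:                     # ~ pip freeze > out.txt
    fh.write(subprocess.run(["pip", "freeze"], capture_output=True, text=True).stdout)

with open("path.txt", "w") as fh:                    # ~ echo $PATH > path.txt
    fh.write(os.environ.get("PATH", "") + "\n")

print(platform.uname())                              # ~ uname -a
```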
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync
Using xformers cross attention
[Prompt Server] web root: D:\comfyui\ComfyUI\web
Adding extra search path checkpoints path/to/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path configs path/to/stable-diffusion-webui...
Reminder
I have read the README and searched the existing issues.

Reproduction
(base) root@I19c2837ff800901ccf:/hy-tmp/LLaMA-Factory-main/src# CUDA_VISIBLE_DEVICES=0,1,2,3 python3.10 api.py \
    --model_name_or_path ../model/qwen/Qwen1.5-72...
Thanks for sharing your amazing work; I was excited to give it a try. I followed the steps and built the kernel package in /models/csrc/, but when I run the code I get an error saying the package does not exist. I am not sure if I am missing anything in between. Should...
During training I kept running into this problem. Since my setup uses CUDA 11.7, I downgraded both the torch version and the flash-attn version, and the problem was solved. Currently: torch==1.13.1, flash-attn==2.3, tokenizers==0.11.4 ...
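A quick sketch to confirm that the installed versions match a pinned set like the one above; the pins come from this report, not an official compatibility matrix, and the distribution may be registered as either flash_attn or flash-attn:

```python
# Sketch: compare installed package versions against the pins reported above.
from importlib.metadata import PackageNotFoundError, version

def installed(*names):
    # try alternate spellings of the distribution name
    for name in names:
        try:
            return version(name)
        except PackageNotFoundError:
            continue
    return None

pins = {
    ("torch",): "1.13.1",
    ("flash-attn", "flash_attn"): "2.3",
    ("tokenizers",): "0.11.4",
}
for names, want in pins.items():
    have = installed(*names)
    ok = have is not None and have.startswith(want)
    print(f"{names[0]}: installed={have}, pinned~={want}, {'OK' if ok else 'mismatch'}")
```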
C:\Users\QK\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-segment-anything-2\sam2\modeling\sam\transformer.py:20: UserWarning: Flash Attention is disabled as it requires a GPU with Ampere (8.0) CUDA capability.
    OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = get_sdpa_settings()...
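A short sketch, assuming PyTorch is installed, to check whether the local GPU meets the Ampere (compute capability 8.0) requirement mentioned in the warning:

```python
# Sketch: report the GPU's compute capability relative to the 8.0 (Ampere)
# threshold that the Flash Attention warning above refers to.
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    name = torch.cuda.get_device_name(0)
    print(f"{name}: compute capability {major}.{minor}")
    if major >= 8:
        print("Meets the Ampere (8.0) requirement; Flash Attention can be enabled.")
    else:
        print("Below 8.0; expect the math/memory-efficient SDPA kernels instead.")
else:
    print("No CUDA device visible.")
```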