Although GPT4All shows me the card under Application General Settings > Device, every time I load a model it falls back to the CPU with the message "GPU loading failed (Out of VRAM?)". However, I am not us
Depends on whether you have a GPU cluster and 64 GB or so of RAM to run anything comparable at a reasonable speed. You also have to factor in the impact on your electric bill. There are a ton of smaller models that can run relatively efficiently. ...
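For the "will it fit in VRAM?" question, a rough back-of-envelope estimate helps. This is a sketch under stated assumptions (4-bit quantized weights and a ~20% allowance for KV cache and activations are my guesses, not numbers from this thread; the function name is hypothetical):

```python
def estimate_vram_gb(params_billion, bits_per_weight=4, overhead=1.2):
    """Approximate memory needed to load a model's weights, in GB.

    Assumptions (illustrative only): weights quantized to
    `bits_per_weight`, plus `overhead` multiplier for KV cache
    and activations.
    """
    bytes_for_weights = params_billion * 1e9 * bits_per_weight / 8
    return bytes_for_weights * overhead / 1e9

# Ballpark figures for common model sizes at 4-bit quantization:
for size in (7, 13, 70):
    print(f"{size}B @ 4-bit: ~{estimate_vram_gb(size):.1f} GB")
```

So a 7B model at 4-bit lands around 4 GB, which is why a card with 4 GB of VRAM or less can still trigger "Out of VRAM" fallbacks, while 70B-class models realistically need multiple GPUs or a lot of system RAM.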