🚀 Feature

Support Tensor indexing after an ellipsis: right now the following code fails:

    @torch.jit.script
    def zero_diag(inputs: torch.Tensor) -> torch.Tensor:
        dim = inputs.shape[-1]
        inputs[..., torch.range(0, dim), torch.range(0, dim)] = 0
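In eager mode (without `@torch.jit.script`) this kind of advanced indexing after an ellipsis already works; a minimal sketch of the intended behavior, using `torch.arange` (the non-deprecated, end-exclusive counterpart of `torch.range`):

```python
import torch

def zero_diag(inputs: torch.Tensor) -> torch.Tensor:
    # Zero the diagonal of the last two dimensions for every leading batch index.
    dim = inputs.shape[-1]
    idx = torch.arange(dim)       # end-exclusive, unlike the deprecated torch.range
    inputs[..., idx, idx] = 0     # advanced indexing after an ellipsis (eager mode only)
    return inputs

x = torch.ones(2, 4, 4)
print(zero_diag(x)[0])            # 4x4 block of ones with a zeroed diagonal
```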
Import from Safetensors

See the guide on importing models for more information.

Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the llama3.2 model:

    ollama pull llama3.2

Create a Modelfile: ...
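A minimal Modelfile in the spirit of the Ollama documentation (the persona and parameter value here are illustrative, not taken from the snippet above):

```
FROM llama3.2

# set a sampling parameter (higher is more creative, lower is more coherent)
PARAMETER temperature 1

# set the system message that customizes the prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```

The customized model could then be created and run with something like `ollama create mario -f ./Modelfile` followed by `ollama run mario`.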
[--force-safetensors] [--no_use_fast] [--use_flash_attention_2] [--use_eager_attention] [--torch-compile] [--load-in-4bit] [--use_double_quant] [--compute_dtype COMPUTE_DTYPE] [--quant_type QUANT_TYPE] [--flash-attn] [--threads THREADS] [--threads-batch THREADS_BATCH] [...
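These look like model-loading flags from a text-generation-webui-style launcher; a hypothetical invocation combining a few of them (the `server.py` script name and the `bfloat16` value are assumptions, not part of the listing above) might be:

```
python server.py --load-in-4bit --use_double_quant --compute_dtype bfloat16 --threads 8
```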
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries - Mamba + Tensor Parallel Support (#1184) · EleutherAI/gpt-neox@277141e
The topic of this issue has changed. Original content: Currently pytorch has torch.expand and torch.expand_like, which allow expanding the shape of a tensor to a fully specified shape (directly as in expand or indirectly as in expand_like...
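A short sketch of the existing behavior the issue refers to, using `Tensor.expand` for the fully specified shape and `Tensor.expand_as` for the indirect form (the latter appears to be what the snippet calls expand_like):

```python
import torch

x = torch.tensor([[1.0], [2.0]])   # shape (2, 1)
target = torch.zeros(2, 3)

a = x.expand(2, 3)                 # fully specified target shape; returns a view, no copy
b = x.expand_as(target)            # shape taken indirectly from another tensor

print(a.shape, b.shape)            # torch.Size([2, 3]) torch.Size([2, 3])
```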
Summary

This PR adds support for IP Adapter safetensor files for direct usage inside InvokeAI.

TEST

You can download the Composition Adapters, which weren't previously supported in Invoke, and try th...
The TensorRT detector is able to run on x86 hosts that have an Nvidia GPU which supports the 12.x series of CUDA libraries. The minimum driver version on the host system must be `>=525.60.13`. The GPU must also support a Compute Capability of `5.0` or greater. This generally correlat...
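A quick way to check the driver version and Compute Capability on the host (the `compute_cap` query field is only present in newer nvidia-smi builds, which the `>=525` driver requirement already implies):

```
nvidia-smi --query-gpu=name,driver_version,compute_cap --format=csv
```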
    git clone https://github.com/aymericdamien/TensorFlow-Examples

To run them, you also need the latest version of TensorFlow. To install it:

    pip install tensorflow

or (with GPU support):

    pip install tensorflow_gpu

For more details about TensorFlow installation, you can check TensorFlow Installation ...
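Once installed, a quick sanity check (the GPU listing call assumes TensorFlow 2.x):

```python
import tensorflow as tf

print(tf.__version__)                            # installed TensorFlow version
print(tf.config.list_physical_devices('GPU'))    # should list your GPU for a GPU-enabled build
```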
Additional context

The to_numpy function seems to be here: https://github.com/pytorch/pytorch/blob/master/torch/csrc/utils/tensor_numpy.cpp#L159 and the function that decides the output np.dtype seems to be here: https://github.com/pytorch/pytorch/blob/master/torch/csrc/utils/tensor_numpy.cpp#L267...
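That dtype mapping is what makes `Tensor.numpy()` hand back a NumPy array whose dtype mirrors the tensor's dtype; a small sketch of the observable behavior:

```python
import torch

t = torch.ones(3, dtype=torch.float32)
print(t.numpy().dtype)      # float32 — the torch dtype is mapped to the matching NumPy dtype

t64 = torch.ones(3, dtype=torch.int64)
print(t64.numpy().dtype)    # int64
```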