In addition to expressing pure maps, you can use fast collective communication operations between devices:

```python
from functools import partial
from jax import lax

@partial(pmap, axis_name='i')
def normalize(x):
  return x / lax.psum(x, 'i')

print(normalize(jnp.arange(4.)))
# prints [0.         0.16666667 0.33333334 0.5       ]
```
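As a plain-Python sanity check (no accelerator or `pmap` needed), `lax.psum(x, 'i')` over the mapped axis behaves like summing the per-device values, so `normalize` divides each element by the total:

```python
# Single-device sketch of what the pmapped normalize above computes.
# sum(x) stands in for lax.psum(x, 'i'), the sum across all mapped devices.
x = [0.0, 1.0, 2.0, 3.0]
total = sum(x)
normalized = [v / total for v in x]
print(normalized)
# [0.0, 0.16666666666666666, 0.3333333333333333, 0.5]
```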
|            | Linux x86_64 | Linux aarch64 | Mac x86_64 | Mac ARM | Windows x86_64 | Windows WSL2 x86_64 |
|------------|--------------|---------------|------------|---------|----------------|---------------------|
| CPU        | yes          | yes           | yes        | yes     | yes            | yes                 |
| NVIDIA GPU | yes          | yes           | no         | n/a     | no             | experimental        |
| Google TPU | yes          | n/a           | n/a        | n/a     | n/a            | n/a                 |
| AMD GPU    | experimental | no            | no         | n/a     | no             | no                  |
| Apple GPU  | n/a          | no            | experimental | experimental | n/a      | n/a                 |
- Linux, x86-64
- Mac, Intel
- Mac, ARM
- Windows, x86-64 (experimental)

Other operating systems and architectures require building from source. Trying to pip install on other operating systems and architectures may lead to `jaxlib` not being installed alongside `jax`, although `jax` may successfully install (but fail at runtime).
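On a supported platform, installation is typically a single pip command. This is a sketch based on current JAX packaging; the `cuda12` extra name in particular may differ across JAX versions:

```shell
# CPU-only install on a supported platform
pip install -U jax

# NVIDIA GPU install (the "cuda12" extra may vary by JAX version)
pip install -U "jax[cuda12]"
```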