Swappiness = 60 is the default for Proxmox, and I have a couple of systems running with SSDs that haven't needed any real tuning to resolve wear issues. One system has been in production use for nearly 20,000 hours and I'm only seeing about 5% wearout. Curious to know what's causing such...
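For reference, swappiness is persisted via sysctl. A minimal sketch of a drop-in file; the filename 99-swappiness.conf and the value 10 are illustrative choices, not a recommendation from this thread:

```
# /etc/sysctl.d/99-swappiness.conf (hypothetical filename)
# Proxmox ships with vm.swappiness=60; a lower value makes the kernel
# prefer reclaiming page cache over swapping dirty pages to the SSD.
vm.swappiness=10
```

Apply with `sysctl --system` (or reboot), and check the live value with `sysctl vm.swappiness`.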
Hi guys, I have a bunch of Proxmox servers all running new MX500 SSDs using ZFS; all is going well and performance is amazing. With this said, though, I...
devices:
  - /dev/apex_0:/dev/apex_0   # passes a PCIe Coral
volumes:
  - /etc/localtime:/etc/localtime:ro
  - /home/docker/frigate:/config
  - /media/frigate:/media/frigate
  - type: tmpfs   # Optional: 1GB of memory, reduces SSD/SD card wear
    target: /tmp/cache
    tmpfs:
      size: 1000000000
ports:
  - "5000:5000"
  - "8554:8554" # R...
(I thought SSDs did that silently, until wearout, with a different SMART attribute indicating the percentage.) It doesn't appear to be service impacting, but I'm not finding good info via Google. Has anybody else seen this, or is this an obvious question for Crucial? ...
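For what it's worth, on Crucial/Micron drives the wearout percentage can be derived from SMART attribute 202 (Percent_Lifetime_Remain). A minimal sketch, assuming that attribute name; the sample line below is fabricated rather than read from a real device:

```shell
# On a real host (needs root and smartmontools installed):
#   smartctl -A /dev/sda | awk '/Percent_Lifetime_Remain/ {print "wearout: " (100 - $4) "%"}'
# Self-contained demo on a fabricated `smartctl -A` output line:
sample='202 Percent_Lifetime_Remain 0x0030   095   095   001    Old_age   Offline      -       5'
echo "$sample" | awk '{print "wearout: " (100 - $4) "%"}'   # normalized value 095 -> 5% worn
```

Column 4 is the normalized "remaining life" value, so 100 minus it matches the wearout figure Proxmox displays.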
is_ssdlike($type)) { # if we have an SSD, we try to get the wearout indicator

aaron (Proxmox Staff Member), Feb 24, 2023, #19: hmm... Your diff has parts not marked as changed that I cannot find. Can you please diff it against the git...
Free space will not be partitioned, so run "cfdisk" on the first SSD to create a partition, then create a regular, non-mirrored LVM-Thin datastore. Performance and wear level will be OK. You lose, of course, ZFS file-system integrity in case of a crash or power loss; if you need these, you...
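The steps above can be sketched as follows. The device /dev/sdb1, VG name ssd1, pool name data, and storage ID local-ssd1 are all hypothetical; the real commands need root on the host, so they are shown commented out here:

```shell
VG=ssd1; POOL=data                               # hypothetical names
# pvcreate /dev/sdb1                             # initialize the new cfdisk partition
# vgcreate "$VG" /dev/sdb1                       # single-disk VG (no mirror)
# lvcreate -l 100%FREE --thinpool "$POOL" "$VG"  # thin pool over all free space
# pvesm add lvmthin local-ssd1 --vgname "$VG" --thinpool "$POOL" --content images,rootdir
echo "creates thin pool $VG/$POOL on /dev/sdb1"
```

`pvesm add lvmthin` registers the pool as a Proxmox storage; the same can be done in the GUI under Datacenter > Storage.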