Recommendations given to me:

> Given your described file sizes of 20-60 MB, a 1M record size would likely be appropriate here. This would stretch out the metadata-to-data ratio quite a bit. If you've got data on the pool already, you can run zdb -LbbbA -U /data/zfs/...
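As a minimal sketch of that advice, assuming a pool named `tank` with a dataset `tank/media` (both hypothetical, as is the cache-file path, since the quoted one is truncated):

```sh
# Use 1M records for the dataset holding the large (20-60 MB) files;
# recordsize is a standard ZFS dataset property.
zfs set recordsize=1M tank/media

# Dump block statistics for an existing pool (the quoted zdb invocation);
# -U points zdb at an alternate pool cache file.
zdb -LbbbA -U /data/zfs/zpool.cache tank
```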
Calculate the memory requirement as follows: each in-core deduplication table (DDT) entry is approximately 320 bytes, so multiply the number of allocated blocks by 320. Here's an example using the data from the zdb output in Listing 1: In-core DDT size (1.02M) x 320 = 326.4 MB of memory is required.
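The same arithmetic can be scripted as a quick sanity check; the block count below is the example figure from Listing 1, and in practice you would take it from the DDT summary in your own zdb output:

```sh
# Approximate in-core DDT footprint: allocated blocks x ~320 bytes/entry.
# 1,020,000 is the example block count (1.02M) from Listing 1.
blocks=1020000
bytes_per_entry=320
echo "$(( blocks * bytes_per_entry )) bytes"   # 326400000 bytes = 326.4 MB
```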
"resolved": "https://registry.npmjs.org/to-fast-properties/-/to-fast-properties-2.0.0.tgz", "integrity": "sha1-3F5pjL0HkmW8c+A3doGk5Og/YW4=" } } }, "@mrmlnc/readdir-enhanced": { "version": "2.2.1", "resolved": "https://registry.npmjs.org/@mrmlnc/readdir-enhanced/-/readd...