
> Dedup isn't worth it

To add to that, ZFS dedup is a lie and you should forget its existence unless you have a very specific scenario of being a SAN with a massive amount of RAM, and even then, you had better be damn sure.
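To put numbers on "massive amount of RAM": a commonly cited ballpark is roughly 320 bytes of dedup-table entry per unique block (the real way to measure is to simulate with `zdb -S poolname` against an existing pool). A back-of-the-envelope sketch, using a hypothetical 10 TiB pool and the default 128 KiB recordsize:

```shell
# Rough DDT RAM estimate -- the pool size, recordsize, and
# ~320 bytes/entry figure are ballpark assumptions, not exact.
pool_bytes=$((10 * 1024 * 1024 * 1024 * 1024))   # 10 TiB of unique data
record_bytes=$((128 * 1024))                     # 128 KiB recordsize
entry_bytes=320                                  # approx. per DDT entry

blocks=$((pool_bytes / record_bytes))
ddt_bytes=$((blocks * entry_bytes))
echo "$((ddt_bytes / 1024 / 1024 / 1024)) GiB of DDT"   # prints "25 GiB of DDT"
```

So even a modest pool can want tens of GiB just to keep the dedup table resident, which is why it falls apart outside the big-SAN scenario.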

I really wish ZFS had either an option to store the dedup table on an NVMe device like Optane, or an offline deduplication job.



It does have the former, these days - the "allocation_classes" feature lets you make "special" vdevs the permanent home of certain subsets of data, including a way to say "store the dedup table there".

Now, that becomes the only place those entries are stored, so you'd best make it redundant if you don't want a single failing NVMe to take your pool with it, but the feature is there.
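For anyone wanting to try it, a sketch of what the commands look like on OpenZFS 0.8+ (pool layout and device names below are hypothetical placeholders):

```shell
# Create a pool whose dedup table lives on a mirrored pair of
# NVMe devices, via the "dedup" allocation class vdev.
zpool create tank raidz2 sda sdb sdc sdd \
    dedup mirror nvme0n1 nvme1n1

# Or add a dedup vdev to an existing pool -- mirrored, since
# losing that vdev means losing the pool.
zpool add tank dedup mirror nvme0n1 nvme1n1
```

A "special" vdev (same syntax, `special` instead of `dedup`) will also hold the dedup table along with other metadata if no dedicated dedup vdev exists.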

The latter I would predict seeing approximately when the sun burns out, on ZFS. It _really_ doesn't like the idea of data changing locations retroactively.


Thanks for this. I completely missed this feature in the run-up to 0.8.

I'm going to have to do some test setups with this.



