It turns out that http://docs.oracle.com/cd/E36784_01/html/E36835/gkknx.html is right.
When a file is written, the data is compressed, encrypted, and the checksum is verified. Then, the data is deduplicated, if possible.
My assumption about the random file was incorrect. It seems that ZFS aborts compression if it cannot achieve a certain minimum compression ratio.
Quote from https://wiki.illumos.org/display/illumos/LZ4+Compression:
Another particular thing to note is that LZ4's performance on incompressible data is very high. It achieves this by incorporating an "early abort" mechanism which will trigger if LZ4 can't meet the expected minimum compression ratio (12.5% on ZFS).
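One way to see this early abort in practice is to write incompressible and compressible data into a dataset with compression=lz4 and watch the compressratio property (a rough sketch; the dataset name tank/lz4test is just an example, not from my setup):

# dataset with lz4 compression (example name)
zfs create -o compression=lz4 tank/lz4test

# incompressible data: lz4 should abort early and store the blocks uncompressed
dd if=/dev/urandom of=/tank/lz4test/random.bin bs=1M count=100
sync
zfs get compressratio tank/lz4test   # stays at about 1.00x

# compressible data for comparison
dd if=/dev/zero of=/tank/lz4test/zeros.bin bs=1M count=100
sync
zfs get compressratio tank/lz4test   # now clearly above 1.00x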
For testing I created a text file from my filesystem with find / >> tree.txt.
After copying the file to both datasets, zpool get dedupratio returned:
NAME  PROPERTY    VALUE  SOURCE
tank  dedupratio  1.00x  -
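For reference, the whole test was roughly the following (dataset names and the exact compression settings are placeholders reconstructed from the scenario, not copied verbatim from my pool):

# two dedup-enabled datasets with different compression algorithms
zfs create -o dedup=on -o compression=off tank/ds-nocomp
zfs create -o dedup=on -o compression=lz4 tank/ds-lz4

# the compressible test file
find / >> tree.txt

# identical file contents written to both datasets
cp tree.txt /tank/ds-nocomp/
cp tree.txt /tank/ds-lz4/

# dedupratio stays at 1.00x because the on-disk blocks differ after compression
zpool get dedupratio tank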
Dedup really is the last step in this write chain. Choosing different compression algorithms for the datasets will result in a poor dedup ratio!
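So if blocks are supposed to dedup across datasets, the compression property has to match on all of them. Checking and aligning it is straightforward (dataset names as in the sketch above; note that changing the property only affects newly written blocks):

zfs get compression tank/ds-nocomp tank/ds-lz4
zfs set compression=lz4 tank/ds-nocomp
# existing data keeps its old encoding until it is rewritten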
Unfortunately, my ZoL version does not support encryption. But it seems that encrypting datasets differently could also ruin dedup. Info on encryption: https://docs.oracle.com/cd/E53394_01/html/E54801/gkkih.html
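The reasoning would be the same as with compression: if two datasets are encrypted with different keys, identical plaintext blocks end up as different ciphertext on disk, so dedup presumably cannot match them. On platforms that do support ZFS encryption, the first thing to compare would be the encryption setup of the datasets involved (dataset names here are placeholders):

zfs get encryption tank/ds1 tank/ds2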