Ebiggers/libdeflate: Heavily optimized DEFLATE/zlib/gzip library

by xioxox on 8/26/23, 7:40 AM with 12 comments

by powturbo on 8/27/23, 8:42 AM

You can use TurboBench [1] to benchmark libdeflate against zlib, igzip, zlib-ng and others.

Download TurboBench from Releases [2]

Here are some benchmarks:

- https://github.com/zlib-ng/zlib-ng/issues/1486

- https://github.com/powturbo/TurboBench/issues/43

[1] https://github.com/powturbo/TurboBench

[2] https://github.com/powturbo/TurboBench/releases

by mxmlnkn on 8/26/23, 7:59 AM

Based on my benchmarks, ISA-L/igzip is more than twice(!) as fast as libdeflate and zlib for decompression. I'm almost enamored with ISA-L because of its speed. And yes, it works on AMD, and it also ships assembly code for ARM, so it probably works there as well.
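For reference, a minimal sketch of what one-shot gzip decompression looks like through ISA-L's igzip API (isal_inflate_init / isal_inflate from igzip_lib.h); the buffer handling and error checks are illustrative assumptions, not code from the thread.

```c
/* Illustrative sketch: one-shot gzip decompression with ISA-L's igzip API.
 * Assumes the whole gzip member is in `in` and `out` is large enough. */
#include <igzip_lib.h>   /* header name may vary with the install layout */
#include <stdint.h>
#include <stddef.h>

int decompress_gzip(const uint8_t *in, size_t in_len,
                    uint8_t *out, size_t out_cap, size_t *out_len)
{
    struct inflate_state state;

    isal_inflate_init(&state);
    state.crc_flag  = ISAL_GZIP;      /* parse gzip header/trailer, verify CRC */
    state.next_in   = (uint8_t *)in;
    state.avail_in  = in_len;
    state.next_out  = out;
    state.avail_out = out_cap;

    if (isal_inflate(&state) != ISAL_DECOMP_OK)
        return -1;

    *out_len = state.total_out;
    return 0;
}
```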

For parallelized decompression of gzip, I recommend my own tool, rapidgzip. I have measured up to 10 GB/s decompression bandwidth with it (>20 GB/s if an index already exists). I'm currently working on integrating ISA-L into rapidgzip to cover even more special cases and hope to release version 0.9.0 in the next few days. It should bring another +30-100% performance boost in many cases, thanks to ISA-L.

by xioxox on 8/26/23, 8:15 AM

One nice thing about this is that the compression ratio seems quite a bit higher than standard gzip's. That is very useful if you need compatibility with the gzip/zlib formats (e.g. PNG files). For my use case, the compressed output was 8% smaller with libdeflate at level 12 than with gzip at level 9, and compression was also a lot faster on a single thread. It would be nice to see a comparison over a larger number of inputs, however.
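For context, a minimal sketch of level-12 gzip compression through libdeflate's one-shot in-memory API (libdeflate_alloc_compressor, libdeflate_gzip_compress_bound, libdeflate_gzip_compress from libdeflate.h); the sample input and error handling are illustrative, not taken from the thread.

```c
/* Illustrative sketch: gzip-compress a buffer at libdeflate's maximum
 * level 12 using its one-shot in-memory API. */
#include <libdeflate.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *input = "example data to compress";   /* placeholder input */
    size_t in_nbytes = strlen(input);

    struct libdeflate_compressor *c = libdeflate_alloc_compressor(12);
    if (c == NULL)
        return 1;

    /* Worst-case output size for this input length. */
    size_t out_cap = libdeflate_gzip_compress_bound(c, in_nbytes);
    void *out = malloc(out_cap);

    /* Returns the compressed size, or 0 if the output buffer was too small. */
    size_t out_nbytes = libdeflate_gzip_compress(c, input, in_nbytes, out, out_cap);
    if (out_nbytes != 0)
        printf("compressed %zu -> %zu bytes\n", in_nbytes, out_nbytes);

    free(out);
    libdeflate_free_compressor(c);
    return out_nbytes != 0 ? 0 : 1;
}
```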