Description
Hi hi,
I've got a project that compresses large-ish chunks (4-20 KB per chunk) as single lz4 blocks. While profiling, I found that when the chunks are sparsely populated (i.e. they contain many long runs of null bytes; these are disk blocks, so they're commonly sparse), decoding performance is really poor.
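For context, the usage pattern is roughly the sketch below, using the v4 block API (the data here is just a stand-in for a real disk block):

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/pierrec/lz4/v4"
)

func main() {
	// A sparse 16 KiB disk block: a little real data, then a long run of nulls.
	chunk := make([]byte, 16*1024)
	copy(chunk, []byte("some filesystem metadata"))

	// Compress the chunk as a single lz4 block.
	comp := make([]byte, lz4.CompressBlockBound(len(chunk)))
	n, err := lz4.CompressBlock(chunk, comp)
	if err != nil {
		log.Fatal(err)
	}
	comp = comp[:n]

	// Decompress it again; this is the hot path where the slowdown shows up.
	out := make([]byte, len(chunk))
	m, err := lz4.UncompressBlock(comp, out)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(m == len(chunk), bytes.Equal(out[:m], chunk))
}
```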
Looking into the situation more, I found that the cause is that the hand-coded assembly routines are tuned for quite small literal and match lengths: they use loops that copy up to 8 bytes at a time. When that strategy hits a long run of, say, null bytes (in this case the literal is a single null byte and the match ends up being around 16k), it's massively slow compared to what the built-in copy does.
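To give a rough sense of the gap, here's a standalone illustration (not code from either repo) comparing an 8-byte-stride copy loop, similar in spirit to what the assembly does for matches, against a single built-in copy over a 16k run:

```go
package main

import (
	"fmt"
	"time"
)

// copyLoop8 mimics the assembly's strategy of moving a long run
// eight bytes per loop iteration.
func copyLoop8(dst, src []byte) {
	for len(src) >= 8 {
		copy(dst[:8], src[:8])
		dst = dst[8:]
		src = src[8:]
	}
	copy(dst, src)
}

func main() {
	src := make([]byte, 16*1024) // e.g. a 16k match of null bytes
	dst := make([]byte, len(src))

	start := time.Now()
	for i := 0; i < 10000; i++ {
		copyLoop8(dst, src)
	}
	fmt.Println("8-byte loop:   ", time.Since(start))

	start = time.Now()
	for i := 0; i < 10000; i++ {
		copy(dst, src) // one memmove for the whole run
	}
	fmt.Println("built-in copy: ", time.Since(start))
}
```

A real benchmark would use testing.B, but the ratio between the two is the point: per-iteration loop overhead dominates once the run gets long.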
It being a slower time of year, I decided to dig in and see if I could solve the issue. After reading the assembly and the Go version, and after many profiling sessions, I found that as of Go 1.20 the Go version is faster than the hand-coded assembly in all cases, provided we apply one tweak to speed up copying short byte sequences (1 to 8 bytes).
I pulled the code out of lz4, profiled and optimized it, and released the result as https://github.com/lab47/lz4decode.
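For what it's worth, the short-copy tweak is essentially the standard "copy a fixed wide chunk when there's slack" trick; a simplified sketch of the shape (the real code in lz4decode differs in the details):

```go
// copyShort copies n (1 <= n <= 8) bytes from src to dst.
// When both buffers have at least 8 bytes available, a fixed 8-byte copy
// can be lowered to a single wide move instead of a length-dependent
// memmove call. The extra 8-n bytes written past n are scratch; in the
// decoder they get overwritten by the next piece of output anyway.
func copyShort(dst, src []byte, n int) {
	if len(dst) >= 8 && len(src) >= 8 {
		copy(dst[:8], src[:8])
		return
	}
	// Near the end of a buffer, fall back to the exact length.
	copy(dst[:n], src[:n])
}
```

The caller still advances its output position by n, not by 8, so the over-copy never leaks into the decoded result.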
I'm more than happy to port the code back into this package; please just let me know!