I'm pleased to announce pelican_precompress 1.1.1, which now features faster compression throughput!
Special thanks to Ryan Castellucci for contributing the patch that made this possible. He increased throughput by using the built-in multiprocessing module. In addition, if compressed files have already been generated, their contents are now verified by quickly decompressing them instead of blindly re-compressing and overwriting them. Brilliant!
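The two ideas can be sketched roughly like this. This is a hypothetical illustration using `multiprocessing.Pool` and the standard-library `gzip` module, not the plugin's actual code: an existing `.gz` file is verified by decompressing it (cheap) and is only rewritten when its contents don't match (expensive).

```python
import gzip
import multiprocessing
from pathlib import Path


def compress_file(path: Path) -> None:
    """Gzip *path*, skipping the work if an up-to-date .gz already exists."""
    destination = path.with_suffix(path.suffix + '.gz')
    data = path.read_bytes()
    if destination.exists():
        # Decompressing is much faster than compressing, so verify the
        # existing file first and overwrite only on a mismatch.
        if gzip.decompress(destination.read_bytes()) == data:
            return
    destination.write_bytes(gzip.compress(data))


def compress_all(paths):
    # Spread the CPU-bound compression work across all available cores.
    with multiprocessing.Pool() as pool:
        pool.map(compress_file, list(paths))
```

The per-file function stays independent of any shared state, which is what makes it safe to fan out across worker processes with `Pool.map`.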
In addition to compressing across multiple CPU cores, there's an option to skip files smaller than a certain size; the default minimum is 20 bytes.
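As a sketch of how that might look in your Pelican settings file, assuming the setting is named `PRECOMPRESS_MIN_SIZE` (check the plugin's README for the exact name):

```python
# pelicanconf.py
# Skip files under 100 bytes; compressing tiny files rarely pays off.
PRECOMPRESS_MIN_SIZE = 100
```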
If you're wondering "Wait, what happened to 1.1.0?" there's a very simple explanation: I automated the release process for 1.1.0 and it didn't work when I tried running it. Rather than manually intervening, I fixed the problem and released 1.1.1.
If you've already installed the plugin, you can upgrade it using pip:
python -m pip install pelican_precompress --upgrade
Further reading: multiprocessing