Realistically, especially when considering the inherent noise in the original image, I’d settle for lossy compression with COMPRESS=JPEG JPEG_QUALITY=90, which could reduce file size fairly quickly to 16% of the original.
But if lossless compression is a hard requirement, I’d probably go with COMPRESS=LZW PREDICTOR=2. However, I would want to verify that any downstream tools would support this sort of compression.
UPDATE: When saving with JPEG compression, further speed and size improvements are gained by adding PHOTOMETRIC=YCBCR, which stores the image in the YCbCr color space and compresses even better. I’ve added new rows to the table for YCBCR, as well as for .jp2 formats.
GDAL offers several forms of lossy and lossless compression for the TIFF file format. To compare the options, I took an original 387MB uncompressed RGB TIFF image, consisting of a black-and-white 1968 aerial photo with color pen markings, and used gdal_translate (via QGIS) to save it with various compression options. Here are the results:
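For reference, the command-line equivalent of what I did through QGIS looks roughly like this (filenames are placeholders; each -co flag is a GDAL creation option matching a row in the table below):

```shell
# Rough command-line equivalent of the QGIS export (placeholder filenames).
# Each -co flag is a GDAL creation option, matching the table rows below.
gdal_translate -of GTiff \
  -co COMPRESS=JPEG -co JPEG_QUALITY=75 \
  original.tif jpeg_q75.tif
```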
lossy? | size (mb) | percent of original size | writetime (sec) | compression |
---|---|---|---|---|
y | 21 | 5.4% | 8 | compress=jpeg jpeg_quality=90 photometric=ycbcr |
y | 34 | 8.8% | 11 | compress=jpeg jpeg_quality=75 |
y | 40 | 10.3% | 11 | compress=jpeg jpeg_quality=80 |
y | 48 | 12.4% | 12 | compress=jpeg jpeg_quality=85 |
y | 59 | 15.2% | 42 | jp2 quality=90 |
y | 63 | 16.3% | 13 | compress=jpeg jpeg_quality=90 |
y | 72 | 18.6% | 13 | compress=jpeg jpeg_quality=100 photometric=ycbcr |
y | 92 | 23.8% | 17 | compress=jpeg jpeg_quality=95 |
n | 125 | 32.3% | 41 | jp2 quality=100 reversible=yes |
n | 163 | 42.1% | 117 | compress=lzma |
n | 172 | 44.4% | 62 | compress=deflate zlevel=9 predictor=2 (horizontal differencing) |
y | 175 | 45.2% | 25 | compress=jpeg jpeg_quality=100 |
n | 176 | 45.5% | 11 | compress=lzw predictor=2 (horizontal differencing) |
n | 243 | 62.8% | 24 | compress=deflate zlevel=9 predictor=1 (no predictor) |
n | 270 | 69.8% | 62 | compress=zstd |
n | 274 | 70.8% | 11 | compress=lzw predictor=1 (no predictor) |
n | 384 | 99.2% | 7 | compress=packbits |
n | 387 | 100.0% | n/a | ORIGINAL FILE |
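The .jp2 rows were written with a JPEG 2000 driver rather than GTiff; assuming GDAL’s JP2OpenJPEG driver, the lossy and lossless variants would look roughly like this (placeholder filenames):

```shell
# Lossy JPEG 2000 at ~90% quality (JP2OpenJPEG driver assumed):
gdal_translate -of JP2OpenJPEG -co QUALITY=90 original.tif q90.jp2

# Lossless (reversible) JPEG 2000:
gdal_translate -of JP2OpenJPEG -co QUALITY=100 -co REVERSIBLE=YES \
  original.tif lossless.jp2
```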
Of the lossless compressions, LZMA achieved the greatest reduction in file size, at just 42.1% of the original. However, it took ten times as long to compress as LZW, which produced about the same file size. For lossless compression (at least for these particular types of images), I’d probably go with COMPRESS=LZW PREDICTOR=2 and be done in a fraction of the time.
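As a sketch (placeholder filenames), that lossless option is:

```shell
# Lossless LZW with horizontal differencing (PREDICTOR=2);
# placeholder filenames.
gdal_translate -of GTiff \
  -co COMPRESS=LZW -co PREDICTOR=2 \
  original.tif lzw_pred2.tif
```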
If you want the smallest file size, obviously a lossy compression such as JPEG will give the greatest reduction in size, but also the greatest reduction in quality. So ask yourself: how much quality is needed? Without magnification, even JPEG_QUALITY=75 is nearly indistinguishable from the original:
JPEG_QUALITY=75
ORIGINAL
However, zooming in 8x and increasing the contrast +50, we can start to see JPEG artifacts, which nearly disappear around JPEG_QUALITY=90:
JPEG_QUALITY | magnified 8x, contrast +50 |
---|---|
75 | |
85 | |
90 | |
95 | |
100 | |
ORIGINAL | |
Given the inherent noise that already exists in the uncompressed image, I don’t think JPEG_QUALITY=90 would noticeably decrease the quality of the image in any realistic way. Personally, I’d settle for lossy compression with COMPRESS=JPEG JPEG_QUALITY=90, which reduces the file fairly quickly to 16% of its original size, or even 5.4% when using the PHOTOMETRIC=YCBCR option.
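That recommended combination, as a sketch with placeholder filenames:

```shell
# My pick: lossy JPEG-in-TIFF at quality 90, stored in the YCbCr
# color space (placeholder filenames):
gdal_translate -of GTiff \
  -co COMPRESS=JPEG -co JPEG_QUALITY=90 -co PHOTOMETRIC=YCBCR \
  original.tif jpeg_q90_ycbcr.tif
```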