See also: Rules and Conditions

Latest News

28.08.2020 Leaderboards updated, sample results for Test 4 (blocks) added
30.07.2020 API for Test 4 is defined
30.07.2020 Sample leaderboards for publicly available compressors are posted
30.06.2020 40% of Tests 1-4 available for download
17.06.2020 The competition has officially started!

Schedule

19.06.2020 Start accepting submissions
30.06.2020 Test set fixed; 40% of the test set available to participants
About every month Leaderboards updated
20.11.2020 Deadline for new and updated submissions
27.11.2020 Deadline for settling any technical issues affecting submitted compressors
15.12.2020 Winners and results announced; test sets fully disclosed

Leaderboards

The leaderboard tables below contain results for competition submissions and selected publicly available compressors.

The names of the compressors that were submitted are marked with bold font.

Statistics for the publicly available compressors may be used to estimate how other compressors would perform on our hardware; they are provided for reference only.

Caution on comparability: where possible, we set compressor options to use a single thread; however, some programs might (and some did) use multiple threads. We did not fine-tune presets to fit the speed limits as tightly as possible, so the compressors are not aligned by speed. These results therefore SHOULD NOT be used to draw conclusions such as “compressor X is better than compressor Y”.

Notes on table titles:

  • HCR stands for “High compression ratio”.
  • “Full” means the entire test (1 GB), “open part” means 400 MB data available to participants.

Compressors that came close to fitting into a given speed category but did not make it are placed at the bottom of the corresponding table. Submissions that did not fully comply with the rules (in particular, the rule that every compressor must correctly decode the compressed files for all four tests) are also placed at the bottom.

Since the August update, the c_size column has been replaced with the c_full_size column, which takes the size of the decompressor into account.

Some presets for publicly available compressors were optimised since the initial publication of the leaderboards.

All compressors present in the tables were tested under Windows 10 x64 with the hardware described in Test Hardware.

Test 1. Rapid compression, full

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes f score
1 lzturbo 1.2 -32 -p0 -b1000 19.94 3.06 23.00 311,178,609 337.24
2 Zstd 1.4.5 --single-thread -7 22.84 1.78 24.62 330,564,256 356.96
3 brotli 1.0.0 -q 5 33.14 2.27 35.41 343,087,319 380.77
4 gzip_libdeflate 1.6 -5 13.23 2.02 15.25 369,822,755 387.09
5 lz4 1.9.2 -5 23.57 0.96 24.53 426,323,640 451.81
- nanozip 0.09 -cd -m26g -p1 -t1 -nm 38.16 8.25 46.41 293,227,964 347.89
- pglz 26.37 4.07 30.44 329,518,476 364.03

Test 1. Rapid compression, open part

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes f score
1 lzturbo 1.2 -32 -p0 -b1000 8.15 1.23 9.38 125,251,035 135.86
2 Zstd 1.4.5 --single-thread -7 9.15 0.72 9.87 133,165,513 143.76
3 brotli 1.0.0 -q 5 13.39 0.92 14.31 137,812,787 153.04
4 gzip_libdeflate 1.6 -5 5.28 0.81 6.09 148,231,580 155.13
5 lz4 1.9.2 -5 9.41 0.39 9.80 171,100,478 181.29
- nanozip 0.09 -cd -m26g -p1 -t1 -nm 13.82 3.26 17.08 118,871,836 139.21

Test 1. Balanced compression, full

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes
1 mcm 0.84 -t11 196 185 381 195,096,485
2 bcm 1.51 -9 150 164 314 196,989,433
3 bsc 3.1.0 -b1024 -T -e2 143 86 229 199,216,374
4 rings 2.5 -m8 -t1 177 142 319 205,785,705
5 m99 0.7 -t1 -b1000000000 106 57 163 210,299,469
6 ppmd variant J, may 10 2006 -m256 -o8 -r0 134 137 271 218,278,764
7 ccm 1.30c Apr 24 2008 7 175 191 366 227,197,017
8 Bzip2 1.0.8, 13-jul-2019 -9 -k 63 27 90 272,713,185
9 Zstd 1.4.5 --single-thread -17 353 2 355 281,342,860
10 winrar 5.91 -m3 -md1g -mt1 281 5 286 300,191,840
11 zpaq 7.15, aug 17 2016 -m2 -t1 255 11 266 321,480,300
12 gzip_libdeflate 1.6 -12 139 2 141 346,062,965
- lzuf2 155 9 164 358,990,846

Test 1. Balanced compression, open part

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes
1 mcm 0.84 -t11 80 75 155 79,756,874
2 bcm 1.51 -9 56 57 113 81,343,464
3 bsc 3.1.0 -b1024 -T -e2 54 32 86 82,840,896
4 rings 2.5 -m8 -t1 70 52 122 83,381,001
5 m99 0.7 -t1 -b1000000000 40 19 58 86,883,307
6 ppmd variant J, may 10 2006 -m256 -o8 -r0 54 54 108 87,754,662
7 ccm 1.30c Apr 24 2008 7 70 76 146 91,372,976
8 Bzip2 1.0.8, 13-jul-2019 -9 -k 26 11 37 109,554,631
9 Zstd 1.4.5 --single-thread -17 142 1 143 113,540,069
10 winrar 5.91 -m3 -md1g -mt1 108 2 110 120,921,076
11 zpaq 7.15, aug 17 2016 -m2 -t1 102 4 106 129,324,085
12 gzip_libdeflate 1.6 -12 55 1 56 138,704,753

Test 1. High compression ratio, full

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes
1 mcm 0.84 -x11 504 477 981 184,532,057
2 nanozip 0.09 -cc -m26g -p1 -t1 -nm 650 643 1293 187,223,505
3 ppmonstr var. J, feb 16 2006 -m1271 -o16 -r1 1113 1123 2236 199,664,296
4 zcm 0.93 -m8 -t1 308 317 625 200,066,323
5 ppmd_sh sh9 48 4095 1 585 593 1178 206,770,871
6 zpaq 7.15, aug 17 2016 -m4 -t1 676 703 1379 214,101,714
7 razor 1.03.7 2018-03-22 -d 1000M 3198 14 3212 219,542,238
8 lzturbo 1.2 -49 -p0 -b1000 1538 12 1550 241,638,250
9 lzpm 1264 14 1278 255,746,916
10 Zstd 1.4.5 --single-thread --ultra -22 765 2 767 257,812,534
11 winrar 5.91 -m5 -md1g -mt1 1019 6 1025 289,531,925
12 Nakamichi-Kaidanji 914 2 916 372,215,967
- zpaq 7.15, aug 17 2016 -m5 -t1 2083 2110 4193 203,939,131

Test 1. High compression ratio, open part

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes
1 mcm 0.84 -x11 203 192 395 75,893,850
2 nanozip 0.09 -cc -m26g -p1 -t1 -nm 263 259 522 76,789,812
3 ppmonstr var. J, feb 16 2006 -m1271 -o16 -r1 438 442 880 80,615,006
4 zcm 0.93 -m8 -t1 122 124 246 81,216,150
5 ppmd_sh sh9 48 4095 1 194 198 392 83,925,095
6 zpaq 7.15, aug 17 2016 -m4 -t1 271 282 553 86,272,447
7 razor 1.03.7 2018-03-22 -d 1000M 1082 5 1087 90,463,230
8 lzturbo 1.2 -49 -p0 -b1000 515 5 520 99,566,326
9 Zstd 1.4.5 --single-thread --ultra -22 304 1 305 104,366,649
10 winrar 5.91 -m5 -md1g -mt1 399 2 401 116,766,795
- zpaq 7.15, aug 17 2016 -m5 -t1 835 844 1679 82,199,115

Test 2. Rapid compression, full

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes f score
1 QLIC2 ver. 2 7.27 6.46 13.73 437,239,860 457.43
2 IZ 0.1 4.52 4.25 8.77 527,820,747 540.84

Test 2. Rapid compression, open part

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes f score
1 QLIC2 ver. 2 2.86 2.57 5.43 177,083,745 185.08
2 IZ 0.1 1.80 1.69 3.49 213,041,455 218.22

Test 2. Balanced compression, full

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes
1 zcm 0.93 -m8 -t1 125 100 225 421,677,821
2 bmf 2.01 64 45 109 424,927,877
3 bim 0.03 63 70 133 429,160,045
4 lsp 31 46 77 447,648,511
5 JPEG-LS 1 36 38 74 472,434,607
6 bcm -9 90 107 197 476,127,173
7 winrar 5.91 -m5 -mt1 268 11 279 572,478,511
8 lzpm 343 35 378 592,471,529
- lzuf2 168 62 230 809,898,756

Test 2. Balanced compression, open part

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes
1 zcm 0.93 -m8 -t1 50 40 90 171,294,134
2 bmf 2.01 26 18 44 172,379,801
3 bim 0.03 25 28 53 174,021,455
4 lsp 13 19 32 180,396,533
5 JPEG-LS 1 15 15 30 192,068,888
6 winrar 5.91 -m5 -mt1 107 4 111 232,459,815

Test 2. High compression ratio, full

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes
1 bmf 2.01 -S 844 729 1573 386,901,589
2 nanozip 0.09 -cc -m1g -p1 -t1 -nm 313 285 598 451,944,839

Test 2. High compression ratio, open part

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes
1 bmf 2.01 -S 341 294 635 157,323,773
2 nanozip 0.09 -cc -m1g -p1 -t1 -nm 106 100 206 181,047,302

Test 3. Rapid compression, full

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes f score
1 brotli 1.0.0 -q 5 21.73 2.11 23.84 199,453,458 225.40
2 lzturbo 1.2 -32 -p0 -b1000 13.95 2.27 16.22 209,295,975 227.79
3 Zstd 1.4.5 --single-thread -7 10.97 1.46 12.43 215,787,321 229.68
4 nanozip 0.09 -cd -m26g -p1 -t1 -nm 15.17 4.68 19.85 224,528,769 249.06
5 gzip_libdeflate 1.6 -5 7.40 1.84 9.24 241,933,394 253.01
6 lz4 1.9.2 -5 10.28 0.92 11.20 309,169,499 321.29

Test 3. Rapid compression, open part

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes f score
1 brotli 1.0.0 -q 5 8.92 0.86 9.78 82,263,709 92.90
2 Zstd 1.4.5 --single-thread -7 4.54 0.59 5.13 88,783,601 94.50
3 lzturbo 1.2 -32 -p0 -b1000 5.96 0.95 6.91 88,497,333 96.36
4 gzip_libdeflate 1.6 -5 3.02 0.74 3.76 98,734,701 103.23
5 nanozip 0.09 -cd -m26g -p1 -t1 -nm 5.99 1.91 7.90 95,246,107 105.06
6 lz4 1.9.2 -5 4.16 0.36 4.52 125,837,899 130.72

Test 3. Balanced compression, full

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes
1 zcm 0.93 -m8 -t1 178 173 351 132,638,786
2 winrar 5.91 -m5 -md1g -mt1 183 4 187 147,841,445
3 ccm 1.30c Apr 24 2008 7 147 150 297 155,044,668
4 Zstd 1.4.5 --single-thread --ultra -22 371 2 373 168,006,947
5 bsc 3.1.0 -b1024 -T -e2 83 67 150 176,174,938
6 bcm 1.51 -9 96 122 218 177,974,488
7 rings 2.5 -m8 -t1 83 65 148 195,553,332
8 m99 0.7 -t1 -b1000000000 226 47 273 200,893,449
9 ppmd variant J, may 10 2006 -m256 -o8 -r0 102 107 209 208,062,705
10 gzip_libdeflate 1.6 -12 138 2 140 227,903,254
11 Bzip2 1.0.8, 13-jul-2019 -9 -k 50 20 70 227,879,673
12 zpaq 7.15, aug 17 2016 -m2 -t1 164 8 172 244,607,627

Test 3. Balanced compression, open part

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes
1 zcm 0.93 -m8 -t1 73 72 145 57,358,062
2 winrar 5.91 -m5 -md1g -mt1 61 2 63 64,228,928
3 ccm 1.30c Apr 24 2008 7 58 60 118 64,739,052
4 Zstd 1.4.5 --single-thread --ultra -22 147 1 148 70,397,038
5 bcm 1.51 -9 36 45 81 76,917,108
6 bsc 3.1.0 -b1024 -T -e2 31 26 57 76,935,444
7 rings 2.5 -m8 -t1 34 27 61 82,611,990
8 ppmd variant J, may 10 2006 -m256 -o8 -r0 42 44 86 85,088,671
9 m99 0.7 -t1 -b1000000000 50 17 67 86,556,429
10 gzip_libdeflate 1.6 -12 55 1 56 92,841,024
11 Bzip2 1.0.8, 13-jul-2019 -9 -k 19 8 27 92,871,584
12 zpaq 7.15, aug 17 2016 -m2 -t1 67 3 70 101,474,810

Test 3. High compression ratio, full

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes
1 ppmonstr var. J, feb 16 2006 -m1271 -o16 -r1 1162 1168 2330 118,674,642
2 razor 1.03.7 2018-03-22 -d 1000M 1427 9 1436 120,177,277
3 zpaq 7.15, aug 17 2016 -m5 -t1 1855 1866 3721 128,706,164
4 lzturbo 1.2 -49 -p0 -b1000 771 8 779 134,334,658
5 zpaq 7.15, aug 17 2016 -m4 -t1 556 571 1127 159,158,626
6 lzpm 1414 11 1425 176,573,931
7 ppmd_sh sh9 48 4095 1 799 806 1805 184,750,286

Test 3. High compression ratio, open part

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes
1 ppmonstr var. J, feb 16 2006 -m1271 -o16 -r1 449 452 901 49,707,367
2 razor 1.03.7 2018-03-22 -d 1000M 537 4 541 52,939,847
3 zpaq 7.15, aug 17 2016 -m5 -t1 725 728 1453 54,933,927
4 lzturbo 1.2 -49 -p0 -b1000 276 3 279 59,128,518
5 zpaq 7.15, aug 17 2016 -m4 -t1 224 230 454 66,494,053
6 ppmd_sh sh9 48 4095 1 256 258 514 76,872,052

Test 4. Rapid compression, full

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes f score
1 Zstd 1.4.5 7 11.27 1.22 12.49 280,441,918 294.15
2 zlib 1.2.11 5 18.57 2.97 21.54 297,616,492 322.13

Test 4. Rapid compression, open part

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes f score
1 Zstd 1.4.5 7 4.58 0.49 5.07 113,130,548 118.69
2 zlib 1.2.11 5 7.52 1.20 8.72 120,164,594 130.08

Test 4. Balanced compression, full

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes
1 Zstd 1.4.5 22 272 1 273 270,469,364
2 zlib 1.2.11 9 135 2 137 292,899,838

Test 4. Balanced compression, open part

Place Name Ver Preset c_time, s d_time, s Full time, s c_full_size, bytes
1 Zstd 1.4.5 22 109 0 109 109,435,025
2 zlib 1.2.11 9 55 1 56 118,338,974

Notes:

  • mcm 0.84 froze while decoding Test 3 data with both the -t11 and -x11 presets
  • nanozip 0.09 with -cc -m26g -p1 -t1 -nm preset failed to correctly decode Test 3 data
  • Zstd was modified for Test 4 to comply with our API: the functions ZSTD_createCCtx, ZSTD_compressCCtx, ZSTD_createDCtx and ZSTD_decompressDCtx from the zstd API were used; it was compiled with the x86_64-w64-mingw32-gcc compiler, and the number in the Preset column was passed as the compression-level argument to ZSTD_compressCCtx
  • zlib was modified for Test 4 to comply with our API: the functions compress2 and uncompress from the zlib API were used; it was compiled with the x86_64-w64-mingw32-gcc compiler, and the number in the Preset column was passed as the compression-level argument to compress2

Goals

  • Facilitate research and development of new and existing algorithms for universal lossless data compression.
  • Bring attention to lossless data compression, spur activity in the field and encourage more researchers and practitioners to pursue it.

We conceived this competition as a long-term project with annual events. Participation is free. Winners will receive substantial monetary prizes and formal awards; the total prize pool this year is 50,000 EUR.

Scope

Motivation

In this competition we concentrate on the advantages of algorithms and their implementations for universal lossless data compression, rather than tuning for certain data types. We test compressors under the following scenarios:

  • Qualitative-data compression. This year we use text data.
  • Quantitative-data compression. The test set for this year contains images, most of which are photographic.
  • Mixed-data compression. This year our focus is on slightly preprocessed executable files (we removed incompressible chunks).
  • Small-block-data compression. We use small blocks of textual and mixed data to evaluate how compressors behave when the data size is severely limited, such as in block-storage systems.

To align compressors by speed and consider practical use cases, we impose speed limits to separate each test into three subcategories: rapid compression, balanced compression and high compression ratio. All told, the result is 12 categories and leaderboards for this competition, as the table below illustrates.

We offer the following awards for each category by compressor rank:

  • First place: 3,000 EUR and award certificate
  • Second place: 1,000 EUR and award certificate
  • Third place: honorable mention certificate (no monetary prize)

We’re also reserving 2,000 EUR for additional rewards in case of ties or other difficult situations.

This competition therefore has 12 separate categories and leaderboards, each with its own prizes. Creating and submitting a universal solution is unnecessary.

Time limits (compression + decompression, seconds) by test and category:

Test 1. Qualitative data (text only). Size = 1 GB.
  • T1 Rapid compression: < 40
  • T1 Balanced compression: < 400
  • T1 High compression ratio: < 4000
Test 2. Quantitative data (images only). Size = approximately 1 GB.
  • T2 Rapid compression: < 40
  • T2 Balanced compression: < 400
  • T2 High compression ratio: < 4000
Test 3. Mixed data (preprocessed executable files). Size = 1 GB.
  • T3 Rapid compression: < 40
  • T3 Balanced compression: < 400
  • T3 High compression ratio: < 4000
Test 4. Block compression (mixed Test 1 and Test 3 data). Size = approximately 1 GB in 32 KiB blocks to be compressed independently so as to allow random-access decompression.
  • T4 Rapid compression: < 40
  • T4 Balanced compression: < 400
  • T4 High compression ratio: < 4000
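For clarity, the time limits above can be expressed as a small classifier (a sketch; the function name is illustrative and not part of the competition tooling):

```python
def speed_category(c_time: float, d_time: float) -> str:
    """Map a compressor's total run time (seconds) to its speed category.

    Limits follow the competition table: rapid < 40 s, balanced < 400 s,
    high compression ratio < 4000 s (compression + decompression).
    """
    total = c_time + d_time
    if total < 40:
        return "Rapid compression"
    if total < 400:
        return "Balanced compression"
    if total < 4000:
        return "High compression ratio"
    return "Over limit"

# Example: Zstd -7 on Test 1 (full) took 22.84 s + 1.78 s
print(speed_category(22.84, 1.78))  # → Rapid compression
```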

Ranking

In all cases we add the decompressor size to the compressed-data size and call the sum the “compressed-data full size”:

c_full_size = c_size + c_decompressor_size.

For rapid-compression categories we rank compressors by the following formula:

f = c_time + A × d_time + B × c_full_size,

where

  • A is a coefficient that defines the relative importance of decompression speed versus compression speed,
  • B is a coefficient that defines the relative importance of the compressed-data full size versus processing time, and
  • c_time and d_time are measured in seconds, and c_full_size in bytes.

This year we use the formula with the following coefficients:

f = c_time + 2 × d_time + (1 / 1,000,000) × c_full_size.

Within each category, we compute f for every compressor, sort the values in ascending order and assign each one a rank equal to its position in the list. Therefore, first place goes to the compressor with the smallest f value and so on.
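As an illustration, this year's formula and ranking can be reproduced directly from the published numbers (a sketch in Python; the sample entries are taken from the Test 1 rapid-compression full-size table above):

```python
A, B = 2, 1 / 1_000_000  # this year's coefficients

def f_score(c_time: float, d_time: float, c_full_size: int) -> float:
    """f = c_time + A * d_time + B * c_full_size (times in s, size in bytes)."""
    return c_time + A * d_time + B * c_full_size

# (name, c_time, d_time, c_full_size) from the Test 1 rapid (full) leaderboard
entries = [
    ("lzturbo", 19.94, 3.06, 311_178_609),
    ("Zstd",    22.84, 1.78, 330_564_256),
    ("brotli",  33.14, 2.27, 343_087_319),
]

# Smallest f wins: sort ascending and assign places by position
ranking = sorted(entries, key=lambda e: f_score(*e[1:]))
for place, (name, *args) in enumerate(ranking, 1):
    print(place, name, round(f_score(*args), 2))
# prints:
# 1 lzturbo 337.24
# 2 Zstd 356.96
# 3 brotli 380.77
```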

For the balanced-compression and high-compression-ratio categories, we order results by compressed-data full size only. Thus, among all compressors that meet the compression + decompression time limit in a given category, the winner is the one that produces the smallest output data.

For the block-compression test we measure compression and decompression times with initialization procedures running only once before compression or decompression of the first block. Input and output data are stored in RAM.
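The block-compression scheme described above can be sketched as follows, with Python's zlib standing in for a submitted codec (the 32 KiB block size and the one-time initialization outside the per-block work follow the description; the helper names are illustrative):

```python
import zlib

BLOCK = 32 * 1024  # 32 KiB blocks, compressed independently for random access


def compress_blocks(data: bytes, level: int = 5) -> list[bytes]:
    """Split data into 32 KiB blocks and compress each one independently.

    Any one-time codec initialization would happen once, before the first
    block; here zlib.compress needs no shared state between blocks.
    """
    return [zlib.compress(data[i:i + BLOCK], level)
            for i in range(0, len(data), BLOCK)]


def decompress_block(compressed: list[bytes], index: int) -> bytes:
    """Random access: decode a single block without touching the others."""
    return zlib.decompress(compressed[index])


# Stand-in for the in-RAM test data
data = bytes(range(256)) * 1024  # 256 KiB → 8 independent blocks
compressed = compress_blocks(data)
assert decompress_block(compressed, 3) == data[3 * BLOCK:4 * BLOCK]
```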

Who Can Submit

Developers and owners of new or existing compression software (compressors) may compete, either as individuals or groups. Participation is free!

This competition is sponsored by Huawei. The test method and result validity are the sole responsibility of the organizer (“we” in this text), the Graphics & Media Lab of the Faculty of Computational Mathematics and Cybernetics at Moscow State University.