CouchDB
COUCHDB-1120

Snappy compression (databases, view indexes) + keeping doc bodies as ejson binaries

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.2
    • Component/s: Database Core
    • Labels:
      None
    • Environment:

      trunk

      Description

      The branch at:

      https://github.com/fdmanana/couchdb/compare/snappy

      is an experiment which adds snappy compression to database files and view index files. Snappy is a very fast compressor/decompressor developed and used by Google [1] - even for small data chunks like 100Kb it can be 2 orders of magnitude faster than zlib or Erlang's term_to_binary compression level 1. Somewhere at [1] there are benchmark results published by Google that compare against zlib's deflate, Erlang's term_to_binary compression, lzo, etc.

      Even small objects like database headers or btree nodes still get smaller after compressing them with snappy; see the shell session at [2].
      Besides the compression, this branch also keeps the document bodies (#doc.body fields) as binaries (snappy compressed ejson binaries) and only converts them back to ejson when absolutely needed (done by couch_doc:to_json_obj/2, for example). This is similar to COUCHDB-1092, but since the bodies here are compressed EJSON binaries, it doesn't suffer from the same issue Paul identified before (which could be fixed without many changes) - on reads we decompress and still do the binary_to_term/1 + ?JSON_ENCODE calls as before.
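The lazy-decoding scheme can be sketched in Python (a rough analogy, not CouchDB code; zlib stands in for snappy, which is not in the stdlib, and pickle stands in for Erlang's term_to_binary - all names here are illustrative):

```python
import pickle
import zlib

def make_body(ejson):
    """Serialize and compress the body once, at write time."""
    return zlib.compress(pickle.dumps(ejson))

def to_json_obj(body):
    """Decompress and decode only when the EJSON is actually needed."""
    return pickle.loads(zlib.decompress(body))

doc = {"_id": "doc1", "value": 42}
body = make_body(doc)  # passed around as an opaque binary until read
assert to_json_obj(body) == doc
```

The point is that the body travels through the write path as an opaque compressed binary and is only expanded on reads.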

      It also prepares the document summaries before sending the documents to the updater, so that we avoid copying EJSON terms and move this task outside of the updater to add more parallelism to concurrent updates.

      I made some tests, comparing trunk before and after the JSON parser NIF was added, against this snappy branch.
      I created databases with 1 000 000 documents of 4Kb each. The document template is this one: http://friendpaste.com/qdfyId8w1C5vkxROc5Thf

      The databases have this design document:

      {
        "_id": "_design/test",
        "language": "javascript",
        "views": {
          "simple": {
            "map": "function(doc) { emit(doc.data5.float1, [doc.strings[2], doc.strings[10]]); }"
          }
        }
      }

      == Results with trunk ==

      database file size after compaction: 7.5 Gb
      view index file size after compaction: 257 Mb

        • Before JSON nif:
          $ time curl 'http://localhost:5985/trunk_db_1m/_design/test/_view/simple?limit=1'
          {"total_rows": ...}

      real 58m28.599s
      user 0m0.036s
      sys 0m0.056s

        • After JSON nif:
          fdmanana 12:45:55 /opt/couchdb > time curl 'http://localhost:5985/trunk_db_1m/_design/test/_view/simple?limit=1'
          {"total_rows": ...}

      real 51m14.738s
      user 0m0.040s
      sys 0m0.044s

      == Results with the snappy branch ==

      database file size after compaction: 3.2 Gb (vs 7.5 Gb on trunk)
      view index file size after compaction: 100 Mb (vs 257 Mb on trunk)

        • Before JSON nif:
          $ time curl 'http://localhost:5984/snappy_db_1m/_design/test/_view/simple?limit=1'
          {"total_rows": ...}

      real 32m29.854s
      user 0m0.008s
      sys 0m0.052s

        • After JSON nif:
          fdmanana 15:40:39 /opt/couchdb > time curl 'http://localhost:5984/snappy_db_1m/_design/test/_view/simple?limit=1'
          {"total_rows": ...}

      real 18m39.240s
      user 0m0.012s
      sys 0m0.020s

      A writes-only relaximation test also shows a significant improvement in write response times / throughput:

      http://graphs.mikeal.couchone.com/#/graph/698bf36b6c64dbd19aa2bef63405480d

      These results are also in a file in this branch [3].

      It seems clear that this, together with Paul's JSON parser NIF, has a very good impact on the view indexer, besides the big disk space savings and better write throughput.

      Some potential issues:

      • Snappy is C++, and so is the NIF [4] - however a C++ compiler is common and part of most development environments (gcc, xcode, etc.)
      • Not sure if snappy builds on Windows - it might, since it doesn't seem to depend on fancy libraries, just stdc++ and the STL
      • Requires OTP R13B04 or higher. If built/running on R13B03 or below, it simply doesn't do any compression at all, just like current releases. However, with 2 servers running this branch, one on R14 and the other on R13B01 for example, the second server will not be able to read database files created by the server with R14 - it will get an exception with the atom 'snappy_nif_not_loaded'. This is easy to catch and use for printing a nice and explicit error message telling the user to use a more recent OTP release.
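The fallback-and-error behaviour in that last bullet could be sketched like this (hypothetical Python analogy, not the actual on-disk format; zlib/pickle stand in for snappy/term_to_binary, and the tag byte is purely an illustrative device):

```python
import pickle
import zlib

# A tag byte marks whether a term was compressed, so a runtime
# without compressor support can detect data it cannot read and
# fail loudly instead of crashing.
SNAPPY_AVAILABLE = False  # stands in for the NIF failing to load

def encode_term(term):
    raw = pickle.dumps(term)
    if SNAPPY_AVAILABLE:
        return b"\x01" + zlib.compress(raw)
    return b"\x00" + raw  # no compression, like current releases

def decode_term(blob):
    tag, payload = blob[:1], blob[1:]
    if tag == b"\x00":
        return pickle.loads(payload)
    if not SNAPPY_AVAILABLE:
        raise RuntimeError(
            "snappy_nif_not_loaded: upgrade to a more recent OTP release")
    return pickle.loads(zlib.decompress(payload))

assert decode_term(encode_term([1, "a"])) == [1, "a"]
```

An older server reading compressed data hits the explicit error, which is the easy-to-catch condition described above.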

      The upgrade of databases and view indexes from previous releases is done on compaction - I made just a few tests with database files by hand; this surely needs to be better tested.

      Finally, the branch is still in the development phase, but maybe not far from completion; consider this ticket just as a way to share some results and get some feedback.

      [1] - http://code.google.com/p/snappy/
      [2] - http://friendpaste.com/45AOdi9MkFrS4BPsov7Lg8
      [3] - https://github.com/fdmanana/couchdb/blob/b8f806e41727ba18ed6143cee31a3242e024ab2c/snappy-couch-tests.txt
      [4] - https://github.com/fdmanana/snappy-erlang-nif/

      Attachment: snappy.patch (13 kB) - Paul Joseph Davis

        Activity

        Paul Joseph Davis added a comment -

        You made a comment at one point about your editor wanting to indent inside the extern block. I waffled a bit on whether to go with just two defines but ended up hedging my bets and going with the standard header pattern.

        Filipe Manana added a comment -

        Ok, if it's that ugly for you, I'm fine with it, go ahead and commit.

        Just one question. Why is this added?

        +#ifdef __cplusplus
        +#define BEGIN_C extern "C" {
        +#define END_C }
        +#else
        +#define BEGIN_C
        +#define END_C
        +#endif

        It's C++ code. Is it bad to assume that a C++ compiler is used to compile this file?

        Paul Joseph Davis added a comment -

        Yeah, that C++ source is still gnarly. I've attached a cleanup patch to get things a bit cleaner.

        Same exact functionality, I've just reformatted things so that I won't rage if I ever have to look at this again.

        I would've just committed that, but it's a decent-sized diff, so I figured I'd ask someone to glance at it before I do.

        Filipe Manana added a comment -

        Applied to trunk. Thanks everyone.

        Filipe Manana added a comment -

        After chatting a bit with Paul Davis on IRC: even with compression disabled, the view indexer gets about the same benefit:

        $ time curl http://localhost:5984/snappy_complex_keys/_design/test/_view/view1?limit=1
        {"total_rows":551200,"offset":0,"rows":[
        {"id":"00d49881-7bcf-4c3d-a65d-e44435eeb513","key":["dwarf","assassin",2,1.1],"value":[
        {"x":174347.18,"y":127272.8},
        {"x":35179.93,"y":41550.55},
        {"x":157014.38,"y":172052.63},
        {"x":116185.83,"y":69871.73},
        {"x":153746.28,"y":190006.59}
        ]}
        ]}

        real 13m18.866s
        user 0m0.012s
        sys 0m0.020s

        This is for the database with 551 200 documents (each with a size of about 1 Kb).

        For much larger databases, the compression has a bigger positive impact, since the OS will likely be able to cache more disk pages and there is less data to write to the index file. I've observed this for very large databases/indexes.

        Norman Barker added a comment -

        I was referring to the trade-off between access speed (snappy vs gzip) and file size. It works well.

        Filipe Manana added a comment -

        I've created another branch on top of the previous one which makes compression optional and also adds the possibility to use deflate (zlib) compression:

        https://github.com/fdmanana/couchdb/compare/file_compression

        By default, snappy compression is enabled. The compression is configured in the [couchdb] section: https://github.com/fdmanana/couchdb/compare/file_compression#diff-4
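Assuming the option name used in that diff, the local.ini fragment would look roughly like this (check the linked diff for the exact key and accepted values):

```ini
[couchdb]
; assumed values: none | snappy | deflate_1 .. deflate_9
file_compression = snappy
```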

        For those interested, after checking out snappy from google code ( http://code.google.com/p/snappy/ ), one can run some benchmark tests to compare snappy against zlib, lzo and other algorithms. This is done by running:

        $ ./snappy_unittest -run_microbenchmarks=false --zlib --lzo testdata/*

        Output example at http://friendpaste.com/7YVC8jImnY2GbnOLJvce6x

        The tests I presented before, as well as the following ones, show that snappy has a positive impact on the database read/write performance and view indexer performance.

        Here are a few more tests against 2 different databases/views.

        • Database with 551 200 documents, each with a size of about 1 Kb

        Database created with:

        $ ./seatoncouch.rb --host localhost --port 5984 --docs 551200 --threads 20 --db-name complex_keys \
        --bulk-batch 100 --doc-tpl complex_keys.tpl

        The document template can be found here: http://friendpaste.com/1cRdpfPyzWzoQKo8zb3fky

        The database has the following design document:

        {
          "_id": "_design/test",
          "language": "javascript",
          "views": {
            "view1": {
              "map": "function(doc) { emit([doc.type, doc.category, doc.level, doc.ratio], doc.nested.coords); }"
            },
            "view2": {
              "map": "function(doc) { emit(doc._id, {type: doc.type, cat: doc.category, level: doc.level, ratio: doc.ratio}); }"
            }
          }
        }

        • trunk

        database file size after compaction: 1592 Mb
        view file size after compaction: 520 Mb

        view index build time:

        $ time curl http://localhost:5984/trunk_complex_keys/_design/test/_view/view1?limit=1
        {"total_rows":551200,"offset":0,"rows":[
        {"id":"00d49881-7bcf-4c3d-a65d-e44435eeb513","key":["dwarf","assassin",2,1.1],"value":[
        {"x":174347.18,"y":127272.8},
        {"x":35179.93,"y":41550.55},
        {"x":157014.38,"y":172052.63},
        {"x":116185.83,"y":69871.73},
        {"x":153746.28,"y":190006.59}
        ]}
        ]}

        real 25m47.395s
        user 0m0.008s
        sys 0m0.040s

        • branch file_compression with snappy compression enabled

        database file size after compaction: 1215 Mb
        view file size after compaction: 155 Mb

        view index build time:

        $ time curl http://localhost:5985/snappy_complex_keys/_design/test/_view/view1?limit=1
        {"total_rows":551200,"offset":0,"rows":[
        {"id":"00d49881-7bcf-4c3d-a65d-e44435eeb513","key":["dwarf","assassin",2,1.1],"value":[
        {"x":174347.18,"y":127272.8},
        {"x":35179.93,"y":41550.55},
        {"x":157014.38,"y":172052.63},
        {"x":116185.83,"y":69871.73},
        {"x":153746.28,"y":190006.59}
        ]}
        ]}

        real 12m11.829s
        user 0m0.004s
        sys 0m0.020s

        • branch file_compression with deflate compression level 1 enabled

        database file size after compaction: 1097 Mb
        view file size after compaction: 123 Mb

        view index build time:

        $ time curl http://localhost:5985/deflate1_complex_keys/_design/test/_view/view1?limit=1
        {"total_rows":551200,"offset":0,"rows":[
        {"id":"00d49881-7bcf-4c3d-a65d-e44435eeb513","key":["dwarf","assassin",2,1.1],"value":[
        {"x":174347.18,"y":127272.8},
        {"x":35179.93,"y":41550.55},
        {"x":157014.38,"y":172052.63},
        {"x":116185.83,"y":69871.73},
        {"x":153746.28,"y":190006.59}
        ]}
        ]}

        real 19m32.945s
        user 0m0.000s
        sys 0m0.036s

        • branch file_compression with deflate compression level 9 enabled

        database file size after compaction: 1092 Mb
        view file size after compaction: 118 Mb

        view index build time:

        $ time curl http://localhost:5985/deflate9_complex_keys/_design/test/_view/view1?limit=1
        {"total_rows":551200,"offset":0,"rows":[
        {"id":"00d49881-7bcf-4c3d-a65d-e44435eeb513","key":["dwarf","assassin",2,1.1],"value":[
        {"x":174347.18,"y":127272.8},
        {"x":35179.93,"y":41550.55},
        {"x":157014.38,"y":172052.63},
        {"x":116185.83,"y":69871.73},
        {"x":153746.28,"y":190006.59}
        ]}
        ]}

        real 21m50.390s
        user 0m0.012s
        sys 0m0.036s

        • Benoit's warlogs database ( https://warlogs.upondata.com/warlogs ), 391 835 documents

        • trunk

        database file size after compaction: 1090 Mb
        view file size after compaction: 1.3 Gb

        view index build time:

        $ time curl http://localhost:5984/warlogs_trunk/_design/warlogs/_view/by_date?limit=1
        {"total_rows":391832,"offset":0,"rows":[
        {"id":"0104D7FC-0219-4C16-8531-97C60A59C70C","key":["2004-01","0104D7FC-0219-4C16-8531-97C60A59C70C"],"value":null}
        ]}

        real 44m10.341s
        user 0m0.021s
        sys 0m0.050s

        • branch file_compression with snappy compression enabled

        database file size after compaction: 843 Mb
        view file size after compaction: 616 Mb

        view index build time:

        $ time curl http://localhost:5985/warlogs_snappy/_design/warlogs/_view/by_date?limit=1
        {"total_rows":391832,"offset":0,"rows":[
        {"id":"0104D7FC-0219-4C16-8531-97C60A59C70C","key":["2004-01","0104D7FC-0219-4C16-8531-97C60A59C70C"],"value":null}
        ]}

        real 32m53.988s
        user 0m0.036s
        sys 0m0.024s

        • branch file_compression with deflate compression level 1 enabled

        database file size after compaction: 803 Mb
        view file size after compaction: 459 Mb

        view index build time:

        time curl http://localhost:5985/warlogs_deflate1/_design/warlogs/_view/by_date?limit=1
        {"total_rows":391832,"offset":0,"rows":[
        {"id":"0104D7FC-0219-4C16-8531-97C60A59C70C","key":["2004-01","0104D7FC-0219-4C16-8531-97C60A59C70C"],"value":null}
        ]}

        real 43m1.360s
        user 0m0.008s
        sys 0m0.068s

        • branch file_compression with deflate compression level 9 enabled

        database file size after compaction: 798 Mb
        view file size after compaction: 435 Mb

        view index build time:

        $ time curl http://localhost:5985/warlogs_deflate9/_design/warlogs/_view/by_date?limit=1
        {"total_rows":391832,"offset":0,"rows":[
        {"id":"0104D7FC-0219-4C16-8531-97C60A59C70C","key":["2004-01","0104D7FC-0219-4C16-8531-97C60A59C70C"],"value":null}
        ]}

        real 47m10.841s
        user 0m0.032s
        sys 0m0.060s

        I made several relaximation read-and-write tests, as well as write-only tests (as shown in the very first comment), and found that both reads and writes get a positive impact:

        1) relaximation reads and writes test, 1 Kb documents (snappy compression vs trunk):

        http://graphs.mikeal.couchone.com/#/graph/698bf36b6c64dbd19aa2bef6340655b7

        (The 1 Kb document can be found here: http://friendpaste.com/28yMMCXn5Dd0EPFpryrvMt)

        2) relaximation reads and writes test, 2.5 Kb documents (snappy compression vs trunk):

        http://graphs.mikeal.couchone.com/#/graph/698bf36b6c64dbd19aa2bef634064d2d

        (The 2.5 Kb document can be found here: http://friendpaste.com/24dnXQT8FZ2gqGLI8571oV)

        3) relaximation reads and writes test, 8 Kb documents (snappy compression vs trunk):

        http://graphs.mikeal.couchone.com/#/graph/698bf36b6c64dbd19aa2bef634064f56

        (The 8 Kb document can be found here: http://friendpaste.com/2QN9TrpJA8476VzLzX0imy)

        4) relaximation reads and writes test, 8 Kb documents (deflate level 1 compression vs trunk):

        http://graphs.mikeal.couchone.com/#/graph/698bf36b6c64dbd19aa2bef634065e8f

        With deflate level 1 compression, reads seem to stay about the same, while writes are still slightly better compared to trunk.

        I've also observed that on my machine, decompressing 100Kb with snappy is very fast, taking between 100us and 150us, while zlib takes about ten times longer for the same amount of data (which confirms the published snappy benchmarks).
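That measurement is easy to reproduce in spirit. Snappy isn't in the Python stdlib, so this rough sketch only times zlib decompression of a ~100 Kb payload; absolute numbers are machine-dependent, and a snappy binding (e.g. python-snappy) could be timed the same way for comparison:

```python
import time
import zlib

# Build a ~100 Kb repetitive, JSON-ish payload and time decompression.
data = (b'{"key": "value", "n": 12345} ' * 4000)[:100_000]
compressed = zlib.compress(data, 1)

t0 = time.perf_counter()
for _ in range(100):
    out = zlib.decompress(compressed)
elapsed_us = (time.perf_counter() - t0) / 100 * 1e6

assert out == data  # sanity: the roundtrip is lossless
print(f"zlib level 1: ~{elapsed_us:.0f} us per 100 Kb decompress")
```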

        Filipe Manana added a comment -

        Norman, what tradeoff are you talking about? I haven't seen document read or write performance drop, nor a performance drop in the view indexer.

        Norman Barker added a comment -

        Checking out and building NIFs is handled by rebar, so that would handle snappy, but that requires CouchDB to move to a rebar structure.

        For our use cases (millions of small docs generated quickly) this is working well. I like (6), since for use cases more suited to archival, picking gzip for storage would be even better; for us, snappy gives a good tradeoff.

        Paul Joseph Davis added a comment -

        As to files included as a dep, we already have precedent from every dep in that we don't attempt to maintain a direct checkout of their tarball or VCS checkout. I would say we either make it an external dep (which I'm not advocating, because it doesn't seem to be widely distributed just yet) or we go ahead and move things to fit into our build system. Their configure.ac doesn't look like it'd be a huge issue to pull into ours, so I'm not overly concerned there (though it's not been tried yet, so maybe it's harder than I expect).

        Not sure about the exact changes required in couch for this; I was going through commits and saw some stuff changing around, so it was hard to get a handle on what the exact update was after pulling it in as a dep.

        Filipe Manana added a comment -

        Thanks Paul.

        I'm not sure what is preferable: to keep all the snappy files (autotools files, unit test files, etc.) without any modifications, or to strip it down to just the bare essentials and update Couch's configure.ac. The former probably makes it easier to maintain, while the latter makes our autotools configuration more complex. I really have no strong opinion on this.

        I also think the API changes to couch_file you're referring to are from an old commit; they haven't been present for a while now.

        Good catch on the C++ exceptions bubbling out of the NIF!

        Brian Mitchell added a comment -

        I've found the patch very useful in my limited testing. Right now I have a rather large dataset (tens of TB to start) that I'm going to be dealing with, and the compression is an extremely valuable improvement (CPU time being important, but a good trade-off by my measurements). I'd be happy to work on getting more testing done on this patch if there are points to evaluate more closely.

        I don't have any points for review on the code but it certainly worked. Paul makes important points that might help this move to production worthy status (C++ exceptions bringing things down is definitely not acceptable).

        Hide
        Paul Joseph Davis added a comment -

        Feedback:

        1. Neat idea
        2. Holy cow this is a super huge change set. Can we get this broken up into a series of patches?
        3. Get rid of the snappy build system
        4. When pulling trunk into your feature branches, please use rebase instead of merge.
        5. Compression and decompression is happening in lots of places and it confuses me.
        6. I'm suddenly wondering if we shouldn't consider making snappy a generalized term compression library.
        7. I'm not a super huge fan of the API change to couch_file. Instead of having it as append_term(Fd, Term, Compress), it'd probably be a lot more future proof to do append_term(Fd, Term, [{compress, true}]) or [compress] or [compressed] or [compress | {compress, N}], like term_to_binary.
        8. People don't generally indent source code in an 'extern "C" {}' block.
        9. You need to be catching all exceptions and not just out-of-mem exceptions. Letting a C++ exception bubble out of the C API is not good.
        10. There's a std::bad_alloc you might be able to use instead of the OutOfMem class.
        11. And some other things.

        As to number 6, I was reading couch_util:compress/decompress and I realized that you're calling term_to_binary and then sending the result to the compression algorithm. I wonder if we couldn't do three things to make that a bit better. First, make the app name something like couchcomp, which completely encapsulates the compression/decompression logic. Second, make the NIF do the term_to_binary or equivalent. Third, it looks like Snappy can't do iterative (streaming) compression like zlib can; I wonder if that might be an issue.

        And probably some more stuff.

        Filipe Manana added a comment -

        Hi Alex, thanks.

        I can't give a precise estimate. It's a proposal, with all the code now complete and exact details on how the testing was done and how to reproduce those tests, but as you can see there has been no community response to it so far.
        I'll wait some more time for feedback before taking any action.

        Alex Koshelev added a comment -

        Hi, Filipe!

        Do you have an estimate of when this ticket can be resolved and the changes landed in trunk?

        Thanks for the great work!


          People

          • Assignee: Filipe Manana
          • Reporter: Filipe Manana
          • Votes: 5
          • Watchers: 3
