Cassandra / CASSANDRA-3003

Trunk single-pass streaming doesn't handle large rows correctly


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Urgent
    • Resolution: Fixed
    • Fix Version: 1.0.0
    • Component: None
    • Severity: Critical

    Description

For normal column families, trunk streaming always buffers the whole row into memory. It uses

        ColumnFamily.serializer().deserializeColumns(in, cf, true, true);

      on the input bytes.
      We must avoid this for rows that don't fit in the inMemoryLimit.
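
      For illustration, here is a minimal sketch (plain java.io, all names hypothetical; not the actual streaming code) contrasting the buffering pattern with a chunked copy that holds only a small fixed buffer no matter how large the row is:

        import java.io.DataInputStream;
        import java.io.DataOutputStream;
        import java.io.IOException;

        public final class RowCopy {
            // Problematic pattern: materialize the entire serialized row in
            // memory before writing it out. A huge row means a huge array,
            // and anything past 2 GB cannot be allocated at all.
            static void copyBuffered(DataInputStream in, DataOutputStream out) throws IOException {
                long rowSize = in.readLong();          // assumed row length prefix
                byte[] row = new byte[(int) rowSize];
                in.readFully(row);
                out.writeLong(rowSize);
                out.write(row);
            }

            // Streaming pattern: relay the row through a fixed 64 KB buffer,
            // so memory use is constant regardless of row size.
            static void copyStreaming(DataInputStream in, DataOutputStream out) throws IOException {
                long rowSize = in.readLong();
                out.writeLong(rowSize);
                byte[] buf = new byte[64 * 1024];
                long remaining = rowSize;
                while (remaining > 0) {
                    int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
                    if (n < 0) throw new IOException("unexpected end of stream");
                    out.write(buf, 0, n);
                    remaining -= n;
                }
            }
        }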

Note that for regular column families, for a given row, there is actually no need to even recreate the bloom filter or column index, nor to deserialize the columns. It is enough to read the key and row size to feed the index writer, and then simply dump the rest to disk directly. This would make streaming more efficient, avoid a lot of object creation, and avoid the pitfall of big rows.
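
      A minimal sketch of that pass-through idea (the key/row framing and the index-writer interface below are hypothetical stand-ins for the real SSTable layout): only the key and row size are parsed, and the row body is relayed verbatim.

        import java.io.DataInputStream;
        import java.io.DataOutputStream;
        import java.io.IOException;

        public final class PassThroughWriter {
            /** Hypothetical stand-in for the component recording (key, offset) index entries. */
            interface IndexWriter {
                void append(byte[] key, long dataFileOffset) throws IOException;
            }

            /** Appends one streamed row to the data file without deserializing its columns;
             *  returns the offset where the next row will start. */
            static long appendRow(DataInputStream in, DataOutputStream out,
                                  IndexWriter indexWriter, long dataFileOffset) throws IOException {
                int keyLength = in.readUnsignedShort();   // assumed key framing
                byte[] key = new byte[keyLength];
                in.readFully(key);
                long rowSize = in.readLong();             // assumed row length prefix

                indexWriter.append(key, dataFileOffset);  // index needs only key + position

                out.writeShort(keyLength);
                out.write(key);
                out.writeLong(rowSize);

                // Bloom filter, column index and columns pass through untouched.
                byte[] buf = new byte[64 * 1024];
                long remaining = rowSize;
                while (remaining > 0) {
                    int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
                    if (n < 0) throw new IOException("truncated row");
                    out.write(buf, 0, n);
                    remaining -= n;
                }
                return dataFileOffset + 2 + keyLength + 8 + rowSize;
            }
        }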

Counter column families are unfortunately trickier, because each column needs to be deserialized (to mark it as 'fromRemote'). However, we don't need the double pass of LazilyCompactedRow for that: we can simply use an SSTableIdentityIterator and deserialize/reserialize the input as it comes.
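
      A minimal sketch of that single-pass relay (the column framing and the FROM_REMOTE bit below are hypothetical stand-ins for the real counter column serialization): each column is deserialized just long enough to flip its remote flag, then written straight back out, so no second pass and no whole-row buffering is needed.

        import java.io.DataInputStream;
        import java.io.DataOutputStream;
        import java.io.IOException;

        public final class CounterRelay {
            private static final byte FROM_REMOTE = 0x01;  // hypothetical flag bit

            /** Relays columnCount counter columns, marking each one as coming
             *  from a remote node before re-serializing it. */
            static void relayColumns(DataInputStream in, DataOutputStream out, int columnCount)
                    throws IOException {
                for (int i = 0; i < columnCount; i++) {
                    int nameLength = in.readUnsignedShort();
                    byte[] name = new byte[nameLength];
                    in.readFully(name);
                    byte flags = in.readByte();
                    long value = in.readLong();

                    flags |= FROM_REMOTE;   // the one mutation streaming must apply

                    out.writeShort(nameLength);
                    out.write(name);
                    out.writeByte(flags);
                    out.writeLong(value);
                }
            }
        }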

      Attachments

1. v3003-v4.txt (11 kB, Yuki Morishita)
        2. ASF.LICENSE.NOT.GRANTED--3003-v2.txt (10 kB, Yuki Morishita)
        3. ASF.LICENSE.NOT.GRANTED--3003-v1.txt (9 kB, Yuki Morishita)
        4. 3003-v5.txt (11 kB, Yuki Morishita)
        5. 3003-v3.txt (11 kB, Yuki Morishita)


People

            Assignee: Yuki Morishita (yukim)
            Reporter: Sylvain Lebresne (slebresne)
            Votes: 0
            Watchers: 1
