Cassandra / CASSANDRA-6721

READ-STAGE: IllegalArgumentException when re-reading wide row immediately upon creation


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Normal
    • Resolution: Not A Problem
    • Environment: Windows 7 x64 dual core, 8GB memory, single Cassandra node, Java 1.7.0_45
    • Severity: Normal
    Description

  In my test case, I am writing a wide row to one table, ordering the columns in reverse chronological order (newest to oldest) by insertion time. A simplified version of the schema:
      CREATE TABLE IF NOT EXISTS sr (
          s BIGINT, p INT, l BIGINT, ec TEXT,
          createDate TIMESTAMP, k BIGINT, properties TEXT,
          PRIMARY KEY ((s, p, l), createDate, ec)
      ) WITH CLUSTERING ORDER BY (createDate DESC)
        AND compression = {'sstable_compression': 'LZ4Compressor'};
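      For reference, the writes and the subsequent full-partition read in the test presumably look something like the following (the literal values are illustrative, not taken from the test itself; the partition key 5:1:6 matches the row named in the compaction log message quoted further down):

      ```sql
      -- Illustrative write; the test inserts millions of such rows into one
      -- partition, stored newest-first per the CLUSTERING ORDER BY above.
      INSERT INTO sr (s, p, l, ec, createDate, k, properties)
      VALUES (5, 1, 6, 'event-a', '2014-02-15 12:00:00', 42, '...');

      -- Full-partition read, issued immediately afterward, which times out
      -- on the client while the server logs the exception below.
      SELECT * FROM sr WHERE s = 5 AND p = 1 AND l = 6;
      ```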

      Intermittently, after inserting 1,000,000, 10,000,000, or more rows, when my test immediately turns around and tries to read this partition in its entirety, the client times out on the read and the Cassandra log looks like the following:

      java.lang.RuntimeException: java.lang.IllegalArgumentException
      at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1935)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
      at java.lang.Thread.run(Unknown Source)
      Caused by: java.lang.IllegalArgumentException
      at java.nio.Buffer.limit(Unknown Source)
      at org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:55)
      at org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:64)
      at org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:82)
      at org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
      at org.apache.cassandra.db.marshal.AbstractType$3.compare(AbstractType.java:77)
      at org.apache.cassandra.db.marshal.AbstractType$3.compare(AbstractType.java:74)
      at org.apache.cassandra.utils.MergeIterator$Candidate.compareTo(MergeIterator.java:152)
      at org.apache.cassandra.utils.MergeIterator$Candidate.compareTo(MergeIterator.java:129)
      at java.util.PriorityQueue.siftUpComparable(Unknown Source)
      at java.util.PriorityQueue.siftUp(Unknown Source)
      at java.util.PriorityQueue.offer(Unknown Source)
      at java.util.PriorityQueue.add(Unknown Source)
      at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:90)
      at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
      at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
      at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
      at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
      at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
      at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
      at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1560)
      at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1379)
      at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
      at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
      at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1396)
      at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
      ... 3 more

      I have seen the same failure whether I use the LZ4Compressor or the SnappyCompressor, so it is not dependent on the choice of compression.
      When compression is disabled, the log is similar, differing slightly in the details. The exception is then:
      java.io.IOError: java.io.IOException: mmap segment underflow; remaining is 10778639 but 876635247 requested

      At least in this no-compression case, although the read test failed when run immediately after the data was written, running just the read tests again later succeeded. This suggests the problem lies with a cached version of the data, and that the underlying file itself is not corrupted.

      The attached 2014-02-15 and 2014-02-17-21-05 files show the initial failure with LZ4Compressor. The 2014-02-17-22-05 file shows the log from the uncompressed test.
      In all of these, the log includes the message
      CompactionController.java (line 192) Compacting large row testdb/sr:5:1:6 (1079784915 bytes) incrementally.

      As it turns out, this may be coincidental: I may be seeing the same issue on a table with narrow rows and a large number of composite primary keys. See the attached log 2014-02-18-13-45.

      Attachments

        1. 2014-02-15.txt
          21 kB
          Bill Mitchell
        2. 2014-02-17-21-05.txt
          19 kB
          Bill Mitchell
        3. 2014-02-17-22-05.txt
          14 kB
          Bill Mitchell
        4. 2014-02-18-13-45.txt
          9 kB
          Bill Mitchell


          People

            Assignee: Unassigned
            Reporter: Bill Mitchell (wtmitchell3)
            Votes: 0
            Watchers: 2
