Apache Cassandra / CASSANDRA-12618

Out of memory bug with one insert


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Urgent
    • Resolution: Fixed
    • Fix Version/s: 2.2.9, 3.0.10, 3.10
    • Component/s: None
    • Labels: None
    • Severity: Critical

    Description

      Executing an INSERT built with QueryBuilder in the Java driver produces an OutOfMemoryError on the server.

      Given a table with a list<text> column like this:

      CREATE TABLE keyspace_name.table_name (
          pk uuid,
          mylist list<text>,
          PRIMARY KEY (pk)
      );

      One can build an INSERT with QueryBuilder like this:

          Statement insert = QueryBuilder.insertInto(keyspace, table)
                  .value("pk", UUID.randomUUID())
                  .value("mylist", "blabla");
          session.execute(insert);

      This simply binds a String where a List<String> is expected.
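
      For reference, here is the reproducer as a single self-contained class (a sketch assuming Java driver 3.x on the classpath and a node listening on localhost; keyspace and table names match the DDL above, and the class name mirrors the attached project):

          import java.util.UUID;

          import com.datastax.driver.core.Cluster;
          import com.datastax.driver.core.Session;
          import com.datastax.driver.core.Statement;
          import com.datastax.driver.core.querybuilder.QueryBuilder;

          public class EvilQuery {
              public static void main(String[] args) {
                  try (Cluster cluster = Cluster.builder().addContactPoint("localhost").build();
                       Session session = cluster.connect()) {
                      Statement insert = QueryBuilder.insertInto("keyspace_name", "table_name")
                              .value("pk", UUID.randomUUID())
                              .value("mylist", "blabla"); // String bound to a list<text> column
                      session.execute(insert);           // brings the node down with an OOM
                  }
              }
          }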

      I enabled tracing on the Cassandra node; the relevant output follows:

      TRACE [SharedPool-Worker-2] 2016-09-06 19:24:37,964 Message.java:506 - Received: QUERY INSERT INTO test_evil.table_aa (pk,mylist) VALUES (?,?);[pageSize = 5000], v=4
      DEBUG [SharedPool-Worker-2] 2016-09-06 19:24:37,967 ListSerializer.java:92 - deseiralizign a ByteBuffer in List lenght: 1651269986
      TRACE [EXPIRING-MAP-REAPER:1] 2016-09-06 19:24:38,319 ExpiringMap.java:102 - Expired 0 entries
      TRACE [SharedPool-Worker-2] 2016-09-06 19:24:38,320 Tracing.java:182 - request complete
      TRACE [Service Thread] 2016-09-06 19:24:38,321 GCInspector.java:286 - ParNew GC in 41ms.  CMS Old Gen: 0 -> 9337048; Par Eden Space: 525391216 -> 0; Par Survivor Space: 18860168 -> 30843544
      TRACE [Service Thread] 2016-09-06 19:24:38,322 GCInspector.java:286 - ConcurrentMarkSweep GC in 148ms.  CMS Old Gen: 9337048 -> 37098008; Par Survivor Space: 30843544 -> 0
      TRACE [Service Thread] 2016-09-06 19:24:38,322 GCInspector.java:286 - ConcurrentMarkSweep GC in 0ms.  CMS Old Gen: 37098008 -> 36063280; 
      ERROR [SharedPool-Worker-2] 2016-09-06 19:24:38,323 JVMStabilityInspector.java:140 - JVM state determined to be unstable.  Exiting forcefully due to:
      java.lang.OutOfMemoryError: Java heap space
      	at java.util.ArrayList.<init>(ArrayList.java:152) ~[na:1.8.0_101]
      	at org.apache.cassandra.serializers.ListSerializer.deserializeForNativeProtocol(ListSerializer.java:93) ~[main/:na]
      	at org.apache.cassandra.cql3.Lists$Value.fromSerialized(Lists.java:137) ~[main/:na]
      	at org.apache.cassandra.cql3.Lists$Marker.bind(Lists.java:242) ~[main/:na]
      	at org.apache.cassandra.cql3.Lists$Setter.execute(Lists.java:295) ~[main/:na]
      	at org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:94) ~[main/:na]
      	at org.apache.cassandra.cql3.statements.ModificationStatement.addUpdates(ModificationStatement.java:676) ~[main/:na]
      	at org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:616) ~[main/:na]
      	at org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:429) ~[main/:na]
      	at org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:417) ~[main/:na]
      	at com.stratio.cassandra.lucene.IndexQueryHandler.execute(IndexQueryHandler.java:181) ~[cassandra-lucene-index-plugin-3.7.2-RC1-SNAPSHOT.jar:na]
      	at com.stratio.cassandra.lucene.IndexQueryHandler.processStatement(IndexQueryHandler.java:155) ~[cassandra-lucene-index-plugin-3.7.2-RC1-SNAPSHOT.jar:na]
      	at com.stratio.cassandra.lucene.IndexQueryHandler.process(IndexQueryHandler.java:129) ~[cassandra-lucene-index-plugin-3.7.2-RC1-SNAPSHOT.jar:na]
      	at org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115) ~[main/:na]
      	at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) [main/:na]
      	at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) [main/:na]
      	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.36.Final.jar:4.0.36.Final]
      	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292) [netty-all-4.0.36.Final.jar:4.0.36.Final]
      	at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32) [netty-all-4.0.36.Final.jar:4.0.36.Final]
      	at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283) [netty-all-4.0.36.Final.jar:4.0.36.Final]
      	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_101]
      	at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) [main/:na]
      	at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [main/:na]
      	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
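
      Note the bogus list length in the trace: 1651269986 is 0x626C6162, i.e. the first four bytes of the UTF-8 encoding of "blabla" ("blab") read as a big-endian int. The deserializer treats the start of the string as the element count and tries to allocate an ArrayList of that size. A quick check:

          import java.nio.ByteBuffer;
          import java.nio.charset.StandardCharsets;

          public class LengthCheck {
              public static void main(String[] args) {
                  // First four bytes of "blabla" ("blab") read as a big-endian int
                  ByteBuffer value = ByteBuffer.wrap("blabla".getBytes(StandardCharsets.UTF_8));
                  System.out.println(value.getInt()); // 1651269986 == 0x626C6162
              }
          }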
      

      I think QueryBuilder sends the query as a prepared statement, INSERT INTO test_evil.table_aa (pk, mylist) VALUES (?, ?);, and the server side does not validate that the bound values have the correct types.
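
      A guard along these lines in the list deserializer would catch this before allocation (just a sketch to illustrate the idea; the method name and exception choice are mine, not the actual fix):

          import java.nio.ByteBuffer;

          public class CollectionSizeGuard {
              // In protocol v3+ every list element carries at least a 4-byte
              // length header, so a sane element count can never exceed the
              // number of remaining bytes divided by 4.
              static int readAndCheckCollectionSize(ByteBuffer input) {
                  int n = input.getInt();
                  if (n < 0 || n > input.remaining() / 4)
                      throw new IllegalArgumentException(
                              "Invalid collection size " + n + " for "
                              + input.remaining() + " remaining bytes");
                  return n;
              }
          }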

      I have tested the same insert with plain CQL in cqlsh and it works correctly, so I think the problem lies only in the validation of values bound to prepared statements (see the example below).
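
      For comparison, with literal values the server type-checks the statement while parsing it, so the plain-CQL insert has to use a proper list literal; a bare string in place of the list is rejected before execution rather than crashing the node:

          INSERT INTO keyspace_name.table_name (pk, mylist) VALUES (uuid(), ['blabla']);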

      I have attached a Maven project to help reproduce the issue.

      To generate the jar: 'mvn clean compile assembly:single'
      To execute it: 'java -jar target/EvilQuery-1.0-SNAPSHOT-jar-with-dependencies.jar -host localhost -keyspace keyspace_name -table table_name'

      Attachments

        1. EvilQuery.tar.gz (12 kB, Eduardo Alonso de Blas)


      People

        Assignee: Benjamin Lerer (blerer)
        Reporter: Eduardo Alonso de Blas (ealonsodb)
        Authors: Benjamin Lerer
        Reviewers: Alex Petrov
        Votes: 1
        Watchers: 6
