SPARK-21248

Flaky test: o.a.s.sql.kafka010.KafkaSourceSuite.assign from specific offsets (failOnDataLoss: true)

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.3.0
    • Fix Version/s: 2.3.0
    • Component/s: Structured Streaming
    • Labels:
      None

      Description

      org.scalatest.exceptions.TestFailedException: Stream Thread Died: null
        java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
        scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
        scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
        scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:201)
        org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
        org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:92)
        org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:76)
        org.apache.spark.sql.execution.streaming.state.StateStoreCoordinatorRef.deactivateInstances(StateStoreCoordinator.scala:108)
        org.apache.spark.sql.streaming.StreamingQueryManager.notifyQueryTermination(StreamingQueryManager.scala:335)

      == Progress ==
        AssertOnQuery(<condition>, )
        CheckAnswer: [-20],[-21],[-22],[0],[1],[2],[11],[12],[22]
        StopStream
        StartStream(ProcessingTime(0),org.apache.spark.util.SystemClock@6c63901,Map())
        CheckAnswer: [-20],[-21],[-22],[0],[1],[2],[11],[12],[22]
        AddKafkaData(topics = Set(topic-7), data = WrappedArray(30, 31, 32, 33, 34), message = )
        CheckAnswer: [-20],[-21],[-22],[0],[1],[2],[11],[12],[22],[30],[31],[32],[33],[34]
        StopStream

      == Stream ==
      Output Mode: Append
      Stream state: not started
      Thread state: dead

      java.lang.InterruptedException
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
        at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:201)
        at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
        at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:92)
        at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:76)
        at org.apache.spark.sql.execution.streaming.state.StateStoreCoordinatorRef.deactivateInstances(StateStoreCoordinator.scala:108)
        at org.apache.spark.sql.streaming.StreamingQueryManager.notifyQueryTermination(StreamingQueryManager.scala:335)
        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:375)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:211)

      == Sink ==
      0: [-20] [-21] [-22] [22] [11] [12] [0] [1] [2]
      1: [30]
      2: [33] [31] [32] [34]

      == Plan ==

      See https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-sbt-hadoop-2.6/3173/testReport/junit/org.apache.spark.sql.kafka010/KafkaSourceSuite/assign_from_specific_offsets__failOnDataLoss__true_/
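      The failure mode visible in the trace: the stream thread is parked in AbstractQueuedSynchronizer.tryAcquireSharedNanos (a blocking RPC await inside ThreadUtils.awaitResult, reached from notifyQueryTermination) when it gets interrupted, so it dies with an InterruptedException instead of shutting down cleanly. A minimal Java sketch of that generic JVM pattern, not Spark's actual shutdown code (the class name InterruptDemo is hypothetical):

      ```java
      import java.util.concurrent.CountDownLatch;
      import java.util.concurrent.TimeUnit;

      public class InterruptDemo {
          public static void main(String[] args) throws Exception {
              // A latch that is never counted down, standing in for an RPC
              // reply that has not arrived yet.
              CountDownLatch reply = new CountDownLatch(1);

              Thread streamThread = new Thread(() -> {
                  try {
                      // Blocks inside AbstractQueuedSynchronizer.tryAcquireSharedNanos,
                      // the same frame as ThreadUtils.awaitResult in the trace above.
                      reply.await(30, TimeUnit.SECONDS);
                  } catch (InterruptedException e) {
                      // The interrupt lands while the thread is blocked waiting.
                      System.out.println("caught InterruptedException");
                  }
              });

              streamThread.start();
              Thread.sleep(200);        // give the thread time to block
              streamThread.interrupt(); // the test harness stopping the query mid-await
              streamThread.join();
          }
      }
      ```

      Running it prints `caught InterruptedException`, mirroring how the await in the trace is aborted when the stream thread is interrupted.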


            People

            • Assignee: Shixiong Zhu (zsxwing)
            • Reporter: Shixiong Zhu (zsxwing)
            • Votes: 0
            • Watchers: 2
