Resolution: Not A Problem
Affects Version/s: 2.0.0
Fix Version/s: None
Apache Spark 2.0.0, Kafka 0.10 for Scala 2.11
We have a Spark Streaming application reading records from Kafka 0.10.
Some tasks fail with the following error:
"java.lang.AssertionError: assertion failed: Failed to get records for (...) after polling for 512"
The first attempt fails and the second attempt (the retry) completes successfully; we see this pattern for many tasks in our logs. These failures and retries consume resources.
A similar case with a stack trace is described here:
Here is the line from the stack trace where the error is raised:
We tried several values for "spark.streaming.kafka.consumer.poll.ms" (2, 5, 10, 30 and 60 seconds); the error appeared in every case except the last one. Moreover, increasing the timeout also increased the total Spark stage duration.
In other words, increasing "spark.streaming.kafka.consumer.poll.ms" reduced the number of task failures, but at the cost of longer stage durations. So it is bad for performance when processing data streams.
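For reference, a minimal sketch of how we set this timeout (the application name and the 60-second value are from our tests; other settings are omitted):

```scala
import org.apache.spark.SparkConf

// Sketch: set the Kafka consumer poll timeout before creating the
// StreamingContext. 60000 ms was the only value in our tests for which
// the assertion error did not appear.
val conf = new SparkConf()
  .setAppName("KafkaStreamingApp") // hypothetical app name
  .set("spark.streaming.kafka.consumer.poll.ms", "60000")
```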
We suspect a bug in CachedKafkaConsumer (and/or other related classes) that inhibits the reading process.