Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 2.0.0
- Fix Version/s: None
- Component/s: None
Description
I use Kafka 0.10.1.1 and Java code with the following dependencies:
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>0.10.1.1</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>0.10.1.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.11</artifactId>
<version>2.0.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
<version>2.0.0</version>
</dependency>
The code tries to read a topic starting from explicit offsets. The topic has 4 partitions whose logs begin somewhere before offset 585000 and end after 674000, so I wanted to read all partitions starting at offset 585000:
Map<TopicPartition, Long> fromOffsets = new HashMap<>();
fromOffsets.put(new TopicPartition(topic, 0), 585000L);
fromOffsets.put(new TopicPartition(topic, 1), 585000L);
fromOffsets.put(new TopicPartition(topic, 2), 585000L);
fromOffsets.put(new TopicPartition(topic, 3), 585000L);
Using 5-second batches:
jssc = new JavaStreamingContext(conf, Durations.seconds(5));
The code immediately throws:
Beginning offset 585000 is after the ending offset 584464 for topic commerce_item_expectation partition 1
This does not make sense: 584464 is where this topic/partition starts, not where it ends.
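As a sanity check before starting the stream, one can ask the brokers for the real per-partition ranges (via `KafkaConsumer#beginningOffsets` / `endOffsets`, available since kafka-clients 0.10.1) and clamp the requested starting offsets to the log start. A minimal, dependency-free sketch of just the clamping step, with plain partition numbers standing in for `TopicPartition` and hypothetical beginning offsets in place of a live broker lookup:

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetClamp {
    // Raise each requested starting offset to at least the partition's
    // actual beginning offset, so a request below the log start (e.g.
    // after retention deleted old segments) cannot be out of range.
    // In a real job the `beginning` map would come from
    // KafkaConsumer#beginningOffsets(Collection<TopicPartition>).
    static Map<Integer, Long> clamp(Map<Integer, Long> requested,
                                    Map<Integer, Long> beginning) {
        Map<Integer, Long> out = new HashMap<>();
        for (Map.Entry<Integer, Long> e : requested.entrySet()) {
            long earliest = beginning.getOrDefault(e.getKey(), 0L);
            out.put(e.getKey(), Math.max(e.getValue(), earliest));
        }
        return out;
    }

    public static void main(String[] args) {
        // Request offset 585000 on all 4 partitions, as in the report.
        Map<Integer, Long> requested = new HashMap<>();
        for (int p = 0; p < 4; p++) requested.put(p, 585000L);
        // Hypothetical log-start offsets; partition 1 begins at 584464.
        Map<Integer, Long> beginning = new HashMap<>();
        beginning.put(0, 580000L);
        beginning.put(1, 584464L);
        beginning.put(2, 590000L);
        beginning.put(3, 580000L);
        Map<Integer, Long> clamped = clamp(requested, beginning);
        System.out.println(clamped.get(1)); // 585000: already past log start
        System.out.println(clamped.get(2)); // 590000: raised to log start
    }
}
```

In this scenario the requested 585000 is already past the reported beginning of 584464 on partition 1, so clamping would not change it; the failure above therefore looks like a problem with how the ending offset is computed rather than with the request.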
I use this as a base: https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html
But I use a direct stream:
KafkaUtils.createDirectStream(
    jssc,
    LocationStrategies.PreferConsistent(),
    ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams, fromOffsets)
);
Attachments
Issue Links
- duplicates SPARK-20036: impossible to read a whole kafka topic using kafka 0.10 and spark 2.0.0 (Resolved)