Details
Type: Improvement
Status: Resolved
Priority: Major
Resolution: Duplicate
Affects Version/s: 3.1.0
Fix Version/s: None
Component/s: None
Description
When we use Spark Streaming to consume records from Kafka, the generated KafkaRDD's partition count equals the Kafka topic's partition count, so the streaming task cannot use more CPU cores unless we increase the topic's partition count, and we cannot increase it indefinitely.
I think we could split one Kafka partition into multiple KafkaRDD partitions, with the split factor configurable, so the streaming task can use more CPU cores (see the sketch below).
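For reference, the duplicate issue SPARK-23541 addressed this for the Structured Streaming Kafka source via the minPartitions option, which splits topic-partition offset ranges into more Spark partitions than the topic has. A minimal Scala sketch of that usage; the broker address and topic name are placeholders:
{code:scala}
import org.apache.spark.sql.SparkSession

object KafkaMinPartitionsExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("KafkaMinPartitionsExample")
      .getOrCreate()

    val df = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
      .option("subscribe", "my-topic")                     // placeholder topic
      // Ask the source for at least 64 Spark partitions even if the topic
      // has fewer Kafka partitions; offset ranges are divided accordingly.
      .option("minPartitions", "64")
      .load()

    val query = df.writeStream
      .format("console")
      .start()

    query.awaitTermination()
  }
}
{code}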
Attachments
Issue Links
- duplicates SPARK-23541: Allow Kafka source to read data with greater parallelism than the number of topic-partitions (Resolved)