Details

Type: Bug
Status: Resolved
Priority: Major
Resolution: Invalid
Affects Version/s: 2.4.5
Fix Version/s: None
Component/s: None
Description
I have a Spark Streaming application with Kafka.
Here are the parameters:
Kafka partitions = 500
batch time = 60 s
--conf spark.streaming.backpressure.enabled=true
--conf spark.streaming.kafka.maxRatePerPartition=2500
expected maximum input size per batch = 500 * 60 * 2500 = 75,000,000 records
However, the input size becomes 160,000,000 after some batches.
Can anyone tell me the reason?
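
For context, below is a minimal sketch of how these settings are typically wired into a direct Kafka stream on Spark 2.4.x, with the expected per-batch ceiling noted in a comment; the application name, broker address, topic name, and group id are placeholders and are not taken from this report.

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object RateCapSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("rate-cap-sketch") // placeholder name
      // Settings from the report: backpressure on, 2500 records/sec/partition cap.
      .set("spark.streaming.backpressure.enabled", "true")
      .set("spark.streaming.kafka.maxRatePerPartition", "2500")

    // 60-second batches, as in the report.
    val ssc = new StreamingContext(conf, Seconds(60))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "broker:9092",      // placeholder
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "rate-cap-sketch",  // placeholder
      "auto.offset.reset"  -> "latest"
    )

    // 500-partition topic; the topic name is a placeholder.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("events"), kafkaParams))

    // With these settings the per-batch ceiling should be
    //   500 partitions * 60 s * 2500 records/s/partition = 75,000,000 records.
    stream.foreachRDD(rdd => println(s"batch record count: ${rdd.count()}"))

    ssc.start()
    ssc.awaitTermination()
  }
}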