Beam / BEAM-8207

KafkaIOITs generate different hashes each run, sometimes dropping records



    • Type: Bug
    • Status: Triage Needed
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: io-java-kafka, testing
    • Labels:


      While adapting Java's KafkaIOIT to work with a large dataset generated by a SyntheticSource, I ran into a problem. I want to push 100M records through a Kafka topic, verify data correctness, and at the same time check the performance of KafkaIO.Write and KafkaIO.Read.
      To perform the tests I'm using a Kafka cluster on Kubernetes from the Beam repo (here).
      The expected behavior is that the records are first generated deterministically (using hashes of list positions as Random seeds) and then written to Kafka; this concludes the write pipeline.
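      The deterministic generation step can be sketched in plain Java (a minimal sketch; the class and method names here, such as `recordAt`, are illustrative and not the actual SyntheticSource API):

```java
import java.util.Arrays;
import java.util.Random;

public class DeterministicRecords {
    // Generate the payload for a given list position deterministically:
    // the position's hash seeds a Random, so every run yields identical bytes.
    static byte[] recordAt(long position, int size) {
        Random random = new Random(Long.hashCode(position));
        byte[] payload = new byte[size];
        random.nextBytes(payload);
        return payload;
    }

    public static void main(String[] args) {
        // The same position always produces the same payload.
        byte[] a = recordAt(42L, 8);
        byte[] b = recordAt(42L, 8);
        System.out.println(Arrays.equals(a, b)); // prints "true"
    }
}
```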
      As for reading and correctness checking: the data is read from the topic, decoded into String representations, and then a hash code of the whole PCollection is calculated (for details, check KafkaIOIT.java).
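      Since Kafka delivers records in no guaranteed order, a hash of the whole PCollection has to be order-insensitive. One common approach, shown here as a plain-Java sketch (the `hashAll` helper is illustrative, not the actual hashing function used in KafkaIOIT), is to combine per-record hashes with a commutative operation such as addition:

```java
import java.util.List;

public class UnorderedHash {
    // Combine per-record hashes with addition, a commutative operation,
    // so the result does not depend on the order records arrive in.
    static long hashAll(List<String> records) {
        long combined = 0L;
        for (String r : records) {
            combined += r.hashCode();
        }
        return combined;
    }
}
```

      With such a combiner, two runs that read the same set of records produce the same hash regardless of delivery order, so a hash that differs between runs points at missing or duplicated records rather than reordering.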
      During the testing I ran into several problems:
      1. When all the records are read from the Kafka topic, the hash is different on each run.
      2. Sometimes not all the records are read, and the Dataflow job waits for input indefinitely, occasionally throwing exceptions.
      I believe there are two possible causes of this behavior: either there is something wrong with the Kafka cluster configuration, or KafkaIO behaves erratically on high data volumes, duplicating and/or dropping records.
      The second option seems troubling, and I would be grateful for help with the first.




            • Assignee:
              mwalenia Michal Walenia
            • Votes: 0
            • Watchers: 1


              • Created: