Apache Hudi / HUDI-5686

Missing records when HoodieDeltaStreamer is run in continuous mode


Details

    Description

      See issue https://github.com/apache/hudi/issues/7757 for more details.

      Description of the issue:
      If the HoodieDeltaStreamer is forcefully terminated before a commit instant's state reaches `COMPLETED`, the commit is left in either the `REQUESTED` or `INFLIGHT` state. When the HoodieDeltaStreamer is rerun, the first successful commit writes the first batch of records into the Hudi table. However, in the subsequent commit, the changes committed by the previous commit disappear. This causes the loss of the entire batch of data written by the first commit after restart.
      I observed this problem when HoodieDeltaStreamer runs in continuous mode and the job gets resubmitted after the AM container is killed, for example due to loss of nodes or a node becoming unhealthy. The issue is not limited to continuous mode; it can happen any time a Hudi write is terminated before the instant is marked `COMPLETE`.

      How to reproduce the issue:

      1. Run HoodieDeltaStreamer and `yarn kill` the job before the commit instant reaches the `COMPLETE` state. Note the number of records after the last successful commit (say 100).
      2. Upon re-submission of HoodieDeltaStreamer, 2 new instants are created (1 completed commit and 1 completed rollback). Note the number of delta changes consumed in this run (say 10 new record keys) and the total number of records in the Hudi table (110 unique records).
      3. On the next run, wait until Hudi completes the commit (assume it received 5 records) and check the count of unique records in the Hudi table (it was observed to be 105). The delta records consumed in step 2 are entirely lost.
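The record counts in the steps above can be checked with a short sketch (the counts are the example values from the report, not fixed properties of the bug):

```python
# Record counts from the repro steps above.
records_after_kill = 100        # step 1: records after the last successful commit
batch_after_restart = 10        # step 2: new record keys consumed after restart
expected_after_restart = records_after_kill + batch_after_restart  # 110, matches step 2

next_batch = 5                  # step 3: records in the following commit
expected_total = expected_after_restart + next_batch  # 115 records expected
observed_total = 105            # step 3: unique records actually observed

# The shortfall equals the batch written by the first commit after restart,
# i.e. the step-2 batch is entirely lost.
lost_records = expected_total - observed_total
```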

      Reason:
      Suppose Hudi is running and its timeline looks like the one below, and you kill the job:

      1. C1.commit.requested
      2. C1.inflight
      3. C1.commit
      4. C2.commit.requested
      5. C2.inflight
      6. C2.commit
      7. C3.commit.requested

      Upon re-submission, after 1 commit cycle the timeline looks like:

      1. C1.commit.requested
      2. C1.inflight
      3. C1.commit
      4. C2.commit.requested
      5. C2.inflight
      6. C2.commit
      7. R1.rollback.requested
      8. R1.rollback.inflight
      9. R1.rollback
      10. C4.commit.requested
      11. C4.inflight
      12. C4.commit

      The next commit cycle loads R1.rollback as the latest instant in the timeline, so the new incoming records get UPSERTed against C2.commit rather than C4.commit. This is because the rollback's timestamp is chronologically greater than that of the commit that triggered it (i.e., in the above example R1 > C4). This creates a cascading data-loss effect while the Kafka consumer offsets keep moving ahead.
      Refer to the commit timeline snapshot attached to GitHub issue 7757.
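The ordering problem can be sketched with a toy timeline model (hypothetical timestamps and field names, not Hudi's actual `HoodieTimeline` API), showing how a naive "latest instant" lookup resolves to the rollback instead of C4:

```python
# Illustrative model of the timeline ordering described above. Timestamps
# are made up so that R1 sorts after C4, as claimed in the report.
from collections import namedtuple

Instant = namedtuple("Instant", ["timestamp", "action"])

timeline = [
    Instant("001", "commit"),    # C1
    Instant("002", "commit"),    # C2
    # C3 was killed mid-flight and rolled back on restart
    Instant("004", "commit"),    # C4: first commit after restart
    Instant("005", "rollback"),  # R1: rollback of C3, timestamp > C4
]

# A naive lookup of the latest instant by timestamp resolves to the
# rollback R1 rather than C4, so the writer merges against the pre-C4
# state (C2) and C4's batch is effectively dropped.
latest = max(timeline, key=lambda i: i.timestamp)

# Restricting the lookup to commit actions would instead yield C4:
latest_commit = max((i for i in timeline if i.action == "commit"),
                    key=lambda i: i.timestamp)
```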


            People

              pushpavanthar Purushotham Pushpavanthar
              codope Sagar Sumit
