Apache Hudi / HUDI-4992

Spark Row-writing Bulk Insert produces incorrect Bloom Filter metadata


Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.12.0
    • Fix Version/s: 0.12.1
    • Component/s: None

    Description

      While troubleshooting a duplicates issue with Abhishek Modi from Notion, we found that the min/max record key stats are currently being persisted incorrectly into the Parquet metadata, leading to duplicate records being produced in their pipeline after the initial bulk insert.
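To illustrate why incorrect min/max record key stats translate into duplicates: during index lookup, Hudi prunes files whose persisted [min, max] key range cannot contain an incoming record key; if the persisted range is wrong, the file holding the record is skipped, the record is treated as a new insert, and a duplicate is written. The following is a minimal, self-contained sketch of that pruning logic — the class and method names are hypothetical, not Hudi's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not Hudi's real classes): file pruning based on
// min/max record-key stats persisted in each Parquet file's footer.
public class KeyRangePruning {

    // Hypothetical per-file stats, standing in for the min/max record
    // keys Hudi stores in Parquet footer metadata.
    record FileStats(String file, String minKey, String maxKey) {}

    // A file may contain the key only if it lies within [minKey, maxKey].
    static boolean mayContain(FileStats s, String key) {
        return key.compareTo(s.minKey()) >= 0 && key.compareTo(s.maxKey()) <= 0;
    }

    // Files that must be checked (e.g. via bloom filter) for the key;
    // all others are pruned outright.
    static List<String> candidateFiles(List<FileStats> stats, String key) {
        List<String> out = new ArrayList<>();
        for (FileStats s : stats) {
            if (mayContain(s, key)) {
                out.add(s.file());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        String key = "k"; // record key that actually lives in f1.parquet

        // Correct stats: the file's true key range is ["a", "m"].
        FileStats correct = new FileStats("f1.parquet", "a", "m");
        // Buggy stats (this issue): a too-narrow range was persisted.
        FileStats wrong = new FileStats("f1.parquet", "a", "c");

        // With correct stats the file is a candidate; with the buggy
        // stats it is pruned, so an upsert of "k" inserts a duplicate.
        System.out.println(candidateFiles(List.of(correct), key)); // [f1.parquet]
        System.out.println(candidateFiles(List.of(wrong), key));   // []
    }
}
```

Min/max pruning is only a pre-filter ahead of the bloom filter check, which is why corrupt range stats silently bypass the index rather than failing loudly.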

      People

        Assignee: Alexey Kudinkin
        Reporter: Alexey Kudinkin
        Votes: 0
        Watchers: 1
