
PIG-3059: Global configurable minimum 'bad record' thresholds

    Details

    • Type: New Feature
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 0.11
    • Fix Version/s: None
    • Component/s: impl
    • Labels: None

      Description

      See PIG-2614.

      Pig dies when one record in a LOAD of a billion records fails to parse. This is almost certainly not the desired behavior. elephant-bird and some other storage UDFs have minimum thresholds in terms of percent and count that must be exceeded before a job will fail outright.

      We need these limits to be configurable globally in Pig. I've come to realize what a major problem Pig's crashing on bad records is for new users, and I believe this feature can greatly improve Pig.

      An example configuration would look like:

      pig.storage.bad.record.threshold=0.01
      pig.storage.bad.record.min=100

      A thorough discussion of this issue is available here: http://www.quora.com/Big-Data/In-Big-Data-ETL-how-many-records-are-an-acceptable-loss
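
      For illustration only, here is a minimal sketch (not Pig's actual API; the class and method names are hypothetical) of how a loader might apply these two properties, following the elephant-bird semantics in which both the absolute count and the fraction of bad records must be exceeded before the job fails outright:

      import java.util.Properties;

      // Hypothetical helper, shown only to illustrate the proposed semantics.
      public class BadRecordPolicy {
          private final double threshold; // pig.storage.bad.record.threshold
          private final long min;         // pig.storage.bad.record.min

          private long total = 0;
          private long bad = 0;

          public BadRecordPolicy(Properties props) {
              this.threshold = Double.parseDouble(
                  props.getProperty("pig.storage.bad.record.threshold", "0.01"));
              this.min = Long.parseLong(
                  props.getProperty("pig.storage.bad.record.min", "100"));
          }

          // Call once per record read; pass true if the record failed to parse.
          public void recordSeen(boolean failedToParse) {
              total++;
              if (failedToParse) {
                  bad++;
              }
          }

          // Fail only when BOTH limits are exceeded: the absolute number of bad
          // records and their fraction of all records seen so far.
          public boolean shouldFailJob() {
              return bad > min && (double) bad / total > threshold;
          }
      }

      Under such a scheme, a loader would skip and count unparseable records while shouldFailJob() stays false, and only abort the job once both limits are crossed.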

      Attachments

      1. PIG-3059.patch (24 kB), Cheolsoo Park
      2. PIG-3059-2.patch (33 kB), Cheolsoo Park
      3. avro_test_files-2.tar.gz (20 kB), Cheolsoo Park

      People

      • Assignee: Cheolsoo Park
      • Reporter: Russell Jurney
      • Votes: 0
      • Watchers: 6
