Pig / PIG-250

Pig is broken with speculative execution

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.1.0
    • Component/s: None
    • Labels: None

      Description

      If I have speculative execution turned on, the following script fails:

      a = load 'studenttab20m' as (name, age, gpa);
      b = load 'votertab10k' as (name, age, registration, contributions);
      c = filter a by age < '50';
      d = filter b by age < '50';
      e = cogroup c by (name, age), d by (name, age) parallel 10;
      f = foreach e generate flatten(c), flatten(d) parallel 10;
      g = group f by registration parallel 10;
      h = foreach g generate group, SUM(f.d::contributions) parallel 10;
      i = order h by ($1, $0);
      store i into 'out';

      I traced this to the fact that the first MR job produces one or more empty outputs from the reducer. This happened on the reducers that happened to have a second, speculative task running.

      I am not sure yet what the underlying issue is, and I am working with the Hadoop folks to investigate. Until it is resolved, I would like to turn speculative execution off.
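
      For context, the interim measure described above amounts to forcing the speculative-execution flag off on every JobConf that Pig submits. The sketch below is illustrative only and is not the committed patch; the class and method names are hypothetical, and "mapred.speculative.execution" is assumed to be the single (pre-split) Hadoop property of this era, which could equally be set cluster-wide in hadoop-site.xml.

      import org.apache.hadoop.mapred.JobConf;

      public class DisableSpeculation {
          // Force speculative execution off for a Pig-generated job, regardless of
          // what hadoop-site.xml says. "mapred.speculative.execution" is the single
          // boolean flag used before Hadoop split it into separate map-side and
          // reduce-side properties.
          public static JobConf withoutSpeculation(JobConf job) {
              job.setBoolean("mapred.speculative.execution", false);
              return job;
          }
      }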

      Attachments

      1. PIG-250.patch
         1 kB
         Olga Natkovich
      2. PIG-250_v2.patch
         2 kB
         Olga Natkovich

        Activity

        Olga Natkovich added a comment -

        Latest patch committed. Thanks, Arun, for figuring out this issue!

        Alan Gates added a comment -

        +1

        Olga Natkovich added a comment -

        Arun helped diagnose the problem. It turned out that we were not passing the right directory for the task to write to, so all task attempts (primary and speculative) were writing to the same place and overwriting each other's output.

        The attached patch fixes the issue and also reverses the prior change that disabled speculative execution. With this change, speculative execution defaults to whatever Hadoop's default is (currently on) and is controlled through configuration (hadoop-site.xml).

        I tested that speculative execution works now and that I can control it from hadoop-site.xml.

        Thanks, Arun.

        One of the committers - please review.
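
        To make the failure mode concrete: with speculative execution, two attempts of the same reduce task can run concurrently, so each attempt has to write under its own attempt-scoped scratch directory, with only the winning attempt's output promoted when the task commits. The sketch below is a rough illustration under the old mapred API, assuming the era's "mapred.output.dir" and "mapred.task.id" (task attempt id) properties; the directory layout and method names are illustrative, not Pig's actual code.

        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.mapred.JobConf;

        public class AttemptScopedOutput {
            // Pre-patch behaviour: every attempt, primary or speculative,
            // resolves to the same final path, so concurrent attempts
            // overwrite each other's output.
            public static Path sharedOutput(JobConf job) {
                return new Path(job.get("mapred.output.dir"));
            }

            // Post-patch behaviour: scope the working output by the task
            // attempt id so concurrent attempts never collide; the framework
            // promotes the winner's directory when the task commits.
            public static Path attemptScopedOutput(JobConf job) {
                String attemptId = job.get("mapred.task.id");
                return new Path(job.get("mapred.output.dir"), "_temporary/" + attemptId);
            }
        }

        Per the comment above, the committed patch also drops the earlier blanket disable, so whether speculation runs at all is left to hadoop-site.xml.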

        Alan Gates added a comment -

        +1

        Arun C Murthy added a comment -

        +1


          People

          • Assignee: Olga Natkovich
          • Reporter: Olga Natkovich
          • Votes: 0
          • Watchers: 0
