Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.7.0
    • Fix Version/s: 0.10.0
    • Component/s: impl
    • Labels: None
    • Hadoop Flags: Reviewed

Description

We can optimize the limit operation by stopping early in PigRecordReader. In general, we need a way to communicate between PigRecordReader and the execution pipeline: POLimit could instruct PigRecordReader that it already has enough records and should stop feeding more data.
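The mechanism described above amounts to a shared stop flag between the two components: the limit operator raises it once it has produced enough records, and the record reader checks it before fetching the next input record. The Java sketch below is illustrative only; StopSignal, LimitAwareReader, and LimitOperator are hypothetical stand-ins for that communication channel, not Pig's actual PigRecordReader and POLimit classes or the interface introduced by the attached patches.

import java.util.Iterator;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch only; names do not correspond to Pig's real classes.

/** Shared signal the limit operator uses to tell the reader to stop early. */
class StopSignal {
    private final AtomicBoolean stopped = new AtomicBoolean(false);
    void requestStop()      { stopped.set(true); }
    boolean stopRequested() { return stopped.get(); }
}

/** Stand-in for a record reader that consults the signal before each record. */
class LimitAwareReader {
    private final Iterator<String> source;
    private final StopSignal signal;

    LimitAwareReader(List<String> records, StopSignal signal) {
        this.source = records.iterator();
        this.signal = signal;
    }

    /** Returns null once the input is exhausted or the limit operator has had enough. */
    String next() {
        if (signal.stopRequested() || !source.hasNext()) {
            return null;            // stop feeding data early
        }
        return source.next();
    }
}

/** Stand-in for a limit operator: counts records and raises the stop signal at the limit. */
class LimitOperator {
    private final long limit;
    private final StopSignal signal;
    private long seen = 0;

    LimitOperator(long limit, StopSignal signal) {
        this.limit = limit;
        this.signal = signal;
    }

    /** Passes a record through; once `limit` records are seen, tells the reader to stop. */
    boolean accept(String record) {
        if (seen >= limit) {
            return false;
        }
        seen++;
        if (seen >= limit) {
            signal.requestStop();   // the reader will not fetch any further input
        }
        return true;
    }
}

public class EarlyStopLimitDemo {
    public static void main(String[] args) {
        StopSignal signal = new StopSignal();
        LimitAwareReader reader =
            new LimitAwareReader(List.of("a", "b", "c", "d", "e"), signal);
        LimitOperator limit = new LimitOperator(2, signal);

        String rec;
        while ((rec = reader.next()) != null) {
            if (limit.accept(rec)) {
                System.out.println("emitted: " + rec);
            }
        }
        // Only "a" and "b" are emitted; "c".."e" are never read from the input.
    }
}

In this toy setup the reader stops after the limit is hit, so the remaining input records are never deserialized, which is the saving the issue is after.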

Attachments

    1. PIG-1270-1.patch (12 kB, Daniel Dai)
    2. PIG-1270-2.patch (12 kB, Daniel Dai)
    3. PIG-1270-3.patch (10 kB, Min Zhou)
    4. PIG-1270-4.patch (10 kB, Daniel Dai)

Activity

    • Daniel Dai created issue
    • Daniel Dai made changes: Attachment PIG-1270-1.patch [ 12492146 ]
    • Daniel Dai made changes: Attachment PIG-1270-2.patch [ 12492577 ]
    • Min Zhou made changes: Attachment PIG-1270-3.patch [ 12504993 ]
    • Daniel Dai made changes: Fix Version/s 0.10 [ 12316246 ]
    • Daniel Dai made changes: Attachment PIG-1270-4.patch [ 12519563 ]
    • Daniel Dai made changes: Status Open [ 1 ] → Resolved [ 5 ]; Hadoop Flags Reviewed [ 10343 ]; Resolution Fixed [ 1 ]
    • Daniel Dai made changes: Status Resolved [ 5 ] → Closed [ 6 ]

People

    • Assignee: Daniel Dai
    • Reporter: Daniel Dai
    • Votes: 1
    • Watchers: 7
