Apache Arrow / ARROW-13611

[C++] Scanning datasets does not enforce back pressure


Description

      I have a simple test case where I scan the batches of a 4GB dataset and print out the currently used memory:

      import pyarrow as pa
      import pyarrow.dataset as ds
      
      dataset = ds.dataset('/home/pace/dev/data/dataset/csv/5_big', format='csv')
      num_rows = 0
      for batch in dataset.to_batches():
          print(pa.total_allocated_bytes())
          num_rows += batch.num_rows
      
      print(num_rows)
      

      In pyarrow 3.0.0 this consumes just over 5MB. In pyarrow 4.0.0 and 5.0.0 this consumes multiple GB of RAM.


People

  Assignee: Weston Pace (westonpace)
  Reporter: Weston Pace (westonpace)
  Votes: 0
  Watchers: 4

              Dates

                Created:
                Updated:
                Resolved:

Time Tracking

  Estimated: Not Specified
  Remaining: 0h
  Logged: 4h 50m