Apache Arrow
ARROW-14727

[R] Excessive memory usage on Windows


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 6.0.0
    • Fix Version/s: None
    • Component/s: R
    • Labels: None

    Description

      I have the following workflow, which worked on Arrow 5.0 on Windows 10 with R 4.1.2:

      open_dataset(path) %>%
        select(i, j) %>%
        collect()
      

      The dataset in path is partitioned by i and j, with 16 partitions in total and 5 million rows in each partition; besides the partitioning columns, the dataset has several regular columns (i.e. columns present in every partition). The entire dataset can be read into memory on my 16GB machine, resulting in an R data.frame of around 3GB. However, on Arrow 6.0 the same operation fails and R runs out of memory. Interestingly, this still works:

      open_dataset(path) %>%
        select(i, j, x) %>%
        collect()
      

      where x is a regular column.
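      For reference, a dataset with the shape described above can be generated roughly like this (a sketch: the regular columns x and y and their types are made up, only the partitioning layout and row counts match my data, and building it in one go needs a few GB of RAM):

      library(arrow)
      library(dplyr)

      # 16 partitions: 4 values of i times 4 values of j, 5 million rows each
      df <- expand.grid(i = 1:4, j = 1:4) %>%
        tidyr::uncount(5e6) %>%
        mutate(x = rnorm(n()), y = sample(letters, n(), replace = TRUE))

      # Hive-style partitioned Parquet dataset under `path`
      write_dataset(df, path, partitioning = c("i", "j"))
      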

      I cannot reproduce the issue on Linux. Measuring actual memory consumption with GNU time (--format=%Mmax), I get very similar figures for the first pipeline on both 5.0 and 6.0. The same is true for the second pipeline, which of course consumes slightly more memory, as expected. On Windows I don’t know of a simple way to measure maximum memory consumption, but eyeballing it in Process Explorer, Arrow 5.0 needs around 0.5GB for the first example, while on Arrow 6.0 my 16GB machine becomes unresponsive and starts swapping; depending on the circumstances, other apps might crash before R crashes with this error:

      terminate called after throwing an instance of 'std::bad_alloc'
        what():  std::bad_alloc 

      With the second example, both versions consume roughly the same amount of memory.
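      For completeness, this is essentially how I measured it on Linux (a minimal sketch; pipeline.R is a placeholder file name, and path needs to point at the actual dataset):

      # pipeline.R (placeholder name); run under GNU time as:
      #   /usr/bin/time --format=%Mmax Rscript pipeline.R
      # %M reports the maximum resident set size in kilobytes.
      library(arrow)
      library(dplyr)

      path <- "path/to/dataset"  # adjust to the actual dataset location

      res <- open_dataset(path) %>%
        select(i, j) %>%
        collect()
      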

      Using the new features in Arrow 6.0, this doesn’t work on Windows either; memory consumption shoots up into the tens of GBs:

      open_dataset(path) %>%
        distinct(i, j) %>%
        collect()
      

      Meanwhile, this works, needing under 1GB of memory:

      open_dataset(path) %>%
        distinct(i, j, x) %>%
        collect()
      

      These last two examples work without any issue on Linux and, as expected, consume significantly less memory than the select-then-collect examples.
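      As a workaround until this is fixed, the distinct combinations of the partitioning columns can be obtained by including one regular column in the query and de-duplicating again after collecting (a sketch based on the observations above):

      # distinct() with a regular column included stays under 1GB, so do the
      # final reduction to (i, j) locally after collecting
      open_dataset(path) %>%
        distinct(i, j, x) %>%
        collect() %>%
        distinct(i, j)
      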


People

    Assignee: Unassigned
    Reporter: András Svraka (svraka)
