An InMemoryRelation is created when you cache a table. If the table is large, either because it has a very large number of columns, or a very large number of partitions (in the file-split sense, not the "table partition" sense), or both, caching it causes an immense amount of memory to be used on the driver.
The reason is that it uses an accumulator to collect statistics about each partition and, instead of summarizing the data on the driver, keeps every entry in memory.
I'm attaching a script I used to create a parquet file with 20,000 columns and a single row, which I then copied 500 times so I'd have 500 partitions.
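The script itself is attached rather than inlined; a minimal sketch of what it does, assuming a Spark 1.6-style `sqlContext` in spark-shell (the column names and output path are placeholders, not the ones from the attached script):

```scala
// Sketch: build a single-row DataFrame with 20,000 integer columns
// and write it out as parquet. The output path is a placeholder.
import org.apache.spark.sql.functions.lit

val cols = (1 to 20000).map(i => lit(i).as(s"col_$i"))
val wide = sqlContext.range(1).select(cols: _*)
wide.write.parquet("/tmp/wide_table")
// Copying the resulting parquet file 500 times yields 500 file splits,
// i.e. 500 partitions when the directory is read back.
```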
When doing the following:
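The snippet in question is in the attachment; it boils down to caching the wide table and forcing it to materialize with a count (the path is a placeholder):

```scala
// Read the wide parquet table, cache it, and force materialization.
// The count builds the InMemoryRelation and its per-partition stats.
val df = sqlContext.read.parquet("/tmp/wide_table")
df.cache()
df.count()
```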
Everything works fine in both Spark 1.6 and 2.0. (It's super slow with the settings I used, but it works.)
I ran spark-shell like this:
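The exact command line isn't reproduced here; an invocation along these lines, with placeholder memory settings, matches the setup (a small driver heap makes the stats growth visible sooner):

```shell
# Sketch of the spark-shell invocation; the master and memory size
# are placeholders, not the exact values from the original run.
./bin/spark-shell --master local[4] --driver-memory 2g
```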
You'll see the results in the screenshot example_1.6_pre_patch.png. After 40 partitions were processed, there were 40 GenericInternalRow objects with 100,000 items each (5 stat info fields * 20,000 columns). So, memory usage was:
(Note: a boxed Integer is ~20 bytes, a boxed Long ~24 bytes.)
If I waited until the end, there would be 500 partitions, requiring roughly 5 GB of memory just to hold the stats.
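As a back-of-the-envelope check (the ~100 bytes per entry is my assumption, covering the object reference plus the boxed Integer/Long stat fields and row overhead):

```scala
// Rough driver-memory estimate for the accumulated stats.
val partitions = 500L
val itemsPerPartition = 100000L // 5 stat fields * 20,000 columns
val bytesPerItem = 100L         // assumed: reference + boxed field + overhead
val totalBytes = partitions * itemsPerPartition * bytesPerItem
// 5,000,000,000 bytes, i.e. on the order of 5 GB held on the driver
```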
I'm also attaching a patch, made on top of 1.6, that uses just a long accumulator to capture the table size; with that patch, memory usage on the driver doesn't keep growing. Note also that the patch multiplies the column size by the row count, which I think addresses a separate bug in the existing code (those stats should cover the whole batch, not just a single row, right?). I also added example_1.6_post_patch.png to show the InMemoryRelation with the patch.
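The patch itself is attached; the idea, sketched with the Spark 1.6 accumulator API (the names here are illustrative, not the actual patch code):

```scala
// Instead of shipping a full stats row per partition back to the driver,
// accumulate a single running total of the cached batch sizes.
val totalSize = sc.accumulator(0L, "in-memory table size")

// Inside the per-batch loop on the executors, something like:
//   totalSize += columnSizeInBytes * rowCountInBatch
// so the driver only ever holds one Long, not one row per partition.
```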
I also applied a very similar patch on top of Spark 2.0, but there things blow up even more spectacularly when I try to run the count on the cached table. It starts with this error:
That error repeats over and over, making the output unreadable, so I just killed the shell. I believe the same fix should work there too, but I can't be sure, because the test fails for what appear to be unrelated reasons.