As a user, I can use `spark.read.format("binaryFile").load(path).filter($"status.length" < 100000000L)` to load only files smaller than 1e8 bytes. With this predicate pushed down, Spark shouldn't even read files bigger than 1e8 bytes in this case.
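A minimal sketch of the intended usage, assuming the nested `status` struct schema (with a `length` field) referenced in the description, and with `/path/to/files` as a placeholder input path:

```scala
import org.apache.spark.sql.SparkSession

object BinaryFileFilterExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("binaryFile-length-filter")
      .getOrCreate()
    import spark.implicits._

    // Keep only files smaller than 1e8 bytes. With predicate pushdown,
    // the binaryFile source should skip larger files at scan time rather
    // than reading their contents and filtering afterwards.
    val smallFiles = spark.read.format("binaryFile")
      .load("/path/to/files") // placeholder path
      .filter($"status.length" < 100000000L)

    println(s"files under 1e8 bytes: ${smallFiles.count()}")

    spark.stop()
  }
}
```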
- is blocked by SPARK-25348 Data source for binary files (Resolved)
- is related to SPARK-25558 Pushdown predicates for nested fields in DataSource Strategy (Resolved)