We have Parquet data written by Spark 1.6 that produces errors when read with Spark 2.0.1.
The code above fails; the stack trace is attached.
If an integer is used, explicit partition discovery succeeds.
The action succeeds. Additionally, if 'partitionBy' is used instead of explicit partition-path writes, partition discovery succeeds.
Question: Is the first example a reasonable use case? PartitioningUtils seems to default to IntegerType unless the partition value exceeds the integer type's range, in which case it falls back to LongType, so mixed partition values can yield conflicting inferred types.
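To illustrate the suspected behavior, here is a minimal sketch (plain Python, not Spark's actual PartitioningUtils code; the function names and INT_MAX constant are illustrative assumptions) of how inferring the narrowest numeric type per partition directory can produce conflicting column types across partitions:

```python
# Simplified sketch of per-partition type inference. Each partition
# directory value is inferred independently; values that fit in a
# 32-bit signed int become "integer", larger ones become "long".

INT_MAX = 2**31 - 1   # largest 32-bit signed integer
INT_MIN = -2**31

def infer_partition_type(raw: str) -> str:
    """Infer a column type from a single partition-path value,
    preferring the narrowest numeric type."""
    try:
        value = int(raw)
    except ValueError:
        return "string"
    return "integer" if INT_MIN <= value <= INT_MAX else "long"

def inferred_types(partition_values) -> set:
    """Collect the type inferred for each partition directory.
    More than one element means the inferences conflict, which is
    the kind of mismatch that can break partition discovery."""
    return {infer_partition_type(v) for v in partition_values}

# A table with partitions id=20161006 and id=3000000000 gets two
# different inferred types: the first fits in an int, the second
# only in a long.
print(inferred_types(["20161006", "3000000000"]))
```

Under this model, a long-typed partition column whose values mostly fit in an int is inferred as IntegerType for some directories and LongType for others, which would explain why the integer-only case succeeds while the mixed case fails.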