When using "extreme" values in the partition column (such as randomly generated Long values), the subtraction of the bounds can overflow, leading to the following warning message:
When this happens, no data is read from the table.
This happens because of the following check in org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala:
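For illustration, here is a minimal JVM sketch (written in Java, which shares Scala's 64-bit Long semantics; this is not the actual Spark source) of why that check misfires: with extreme bounds, `upperBound - lowerBound` wraps around to a negative number, so the comparison against the requested partition count silently fails.

```java
public class PartitionCheckOverflow {
    public static void main(String[] args) {
        // Extreme bounds, e.g. from a randomly generated Long partition column.
        long lowerBound = Long.MIN_VALUE / 2;
        long upperBound = Long.MAX_VALUE / 2 + 10;
        int requestedPartitions = 4;

        // The true range exceeds Long.MAX_VALUE, so this subtraction overflows
        // and wraps to a negative value.
        long diff = upperBound - lowerBound;
        System.out.println(diff < 0);                    // true: overflow happened

        // The range check therefore fails even though upperBound > lowerBound,
        // and the partition count gets clobbered instead of being kept as-is.
        System.out.println(diff >= requestedPartitions); // false
    }
}
```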
The funny thing is that the code does worry about overflow just a few lines later:
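That later worry concerns the stride computation: dividing each bound by the partition count *before* subtracting keeps every intermediate value within Long range. A sketch of the idea (the shape is assumed here, not quoted verbatim from Spark):

```java
public class StrideOverflow {
    public static void main(String[] args) {
        long lowerBound = Long.MIN_VALUE / 2;
        long upperBound = Long.MAX_VALUE / 2 + 10;
        int numPartitions = 4;

        // Subtract-then-divide: the subtraction overflows first, so the
        // resulting stride is negative.
        long naive = (upperBound - lowerBound) / numPartitions;
        System.out.println(naive < 0); // true

        // Divide-then-subtract: each intermediate stays in range, at the
        // cost of a little roundoff.
        long safe = upperBound / numPartitions - lowerBound / numPartitions;
        System.out.println(safe > 0);  // true
    }
}
```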
A better check would be:
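One way to express such a check (a hedged sketch with a hypothetical helper, not a patch against Spark) is to detect the overflow explicitly, for instance with `Math.subtractExact`, and keep the requested partition count whenever the range is too wide to even measure in a Long:

```java
public class SafePartitionCheck {
    // Hypothetical helper: clamp the partition count without being fooled by
    // Long overflow in (upperBound - lowerBound).
    static int safeNumPartitions(long lowerBound, long upperBound, int requested) {
        long diff;
        try {
            diff = Math.subtractExact(upperBound, lowerBound);
        } catch (ArithmeticException e) {
            // Range is wider than Long.MAX_VALUE, so it certainly spans at
            // least `requested` partitions.
            return requested;
        }
        return diff >= requested ? requested : (int) diff;
    }

    public static void main(String[] args) {
        // Extreme bounds no longer collapse the partition count.
        System.out.println(safeNumPartitions(Long.MIN_VALUE / 2, Long.MAX_VALUE / 2 + 10, 4)); // 4
        // Narrow ranges are still reduced, as before.
        System.out.println(safeNumPartitions(0, 2, 4)); // 2
    }
}
```

An equivalent alternative is to do the comparison in an unsigned or arbitrary-precision domain (e.g. `BigInteger`) so the subtraction simply cannot overflow.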