Details
- Type: Bug
- Status: Resolved
- Priority: Blocker
- Resolution: Duplicate
Description
We have had problems where values of partitioning columns are not correctly cast to the expected Spark SQL values based on their data types. Let's make sure we do this casting correctly for both Hive's partitions and HadoopFSRelation's partitions.
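To illustrate the issue described above: partition values are encoded as strings in directory paths (e.g. `/table/part=123/`), so before they can be compared against typed predicates they must be cast to the column's declared type. The sketch below is a hypothetical, simplified illustration of that casting step in plain Python, not Spark's actual implementation; the function name and type labels are assumptions made for the example.

```python
from decimal import Decimal
from datetime import date

def cast_partition_value(raw: str, data_type: str):
    """Cast a raw partition-path string to a typed value.

    Hypothetical sketch: without this cast, comparisons happen on
    strings, so e.g. "9" > "10" lexicographically -- the kind of
    wrong result the linked decimal-comparison issue describes.
    """
    if data_type == "int":
        return int(raw)
    elif data_type == "decimal":
        return Decimal(raw)
    elif data_type == "date":
        return date.fromisoformat(raw)
    else:
        # Unknown or string-typed columns keep the raw value.
        return raw

# Typed comparison now behaves correctly:
# cast as ints, 9 < 10; compared as strings, "9" > "10".
```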
Issue Links
- contains: SPARK-5456 Decimal Type comparison issue (Resolved)
- duplicates: SPARK-7790 when use dynamic partitions, the partition string can be wrong without looking at the type (Resolved)