Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version/s: 4.5.2
- Component/s: None
- Labels: None
Description
When loading a Spark DataFrame from a Phoenix table with a 'DECIMAL' column, the underlying precision and scale aren't carried forward to Spark.
The Spark Catalyst schema converter should load these from the underlying column. They appear to be exposed in the ResultSetMetaData, but it would be cleaner if there were a way to expose them through ColumnInfo. A sketch of the kind of mapping involved is below.
I'm not sure whether Pig has the same issue, but I suspect it may.
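For illustration only, here is a minimal sketch (not the actual phoenix-spark converter) of how a Catalyst schema mapping could carry a DECIMAL column's precision and scale from JDBC ResultSetMetaData into Spark's DecimalType instead of dropping them. The object and method names (DecimalSchemaSketch, toCatalystType, schemaFor) are hypothetical.
{code:scala}
import java.sql.{Connection, ResultSetMetaData, Types}
import org.apache.spark.sql.types._

object DecimalSchemaSketch {

  // Map one column of ResultSetMetaData to a Catalyst DataType,
  // preserving DECIMAL precision/scale rather than using a default.
  def toCatalystType(md: ResultSetMetaData, col: Int): DataType =
    md.getColumnType(col) match {
      case Types.DECIMAL | Types.NUMERIC =>
        // Carry forward the column's declared precision and scale.
        DecimalType(md.getPrecision(col), md.getScale(col))
      case Types.INTEGER => IntegerType
      case Types.BIGINT  => LongType
      case Types.VARCHAR => StringType
      case other =>
        throw new IllegalArgumentException(s"Unhandled JDBC type $other")
    }

  // Build a StructType for a table from prepared-statement metadata.
  // (Phoenix's JDBC driver exposes metadata without executing the query;
  // other drivers may require executing it first.)
  def schemaFor(conn: Connection, table: String): StructType = {
    val md = conn.prepareStatement(s"SELECT * FROM $table").getMetaData
    StructType((1 to md.getColumnCount).map { i =>
      StructField(
        md.getColumnName(i),
        toCatalystType(md, i),
        md.isNullable(i) != ResultSetMetaData.columnNoNulls)
    })
  }
}
{code}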
Attachments
Issue Links
- links to