Phoenix / PHOENIX-2288

Phoenix-Spark: PDecimal precision and scale aren't carried through to Spark DataFrame


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 4.5.2
    • Fix Version/s: 4.7.0
    • Component/s: None
    • Labels: None

    Description

      When loading a Spark DataFrame from a Phoenix table with a DECIMAL column, the underlying precision and scale aren't carried forward to Spark.

      The Spark catalyst schema converter should load these from the underlying column. They appear to be exposed in the ResultSetMetaData, but if there were a way to expose them through ColumnInfo, it would be cleaner.
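      A minimal sketch of the kind of mapping in question, assuming a plain JDBC ResultSetMetaData is available at schema-inference time; catalystType is a hypothetical helper for illustration, not existing connector code:

        import java.sql.{ResultSetMetaData, Types}
        import org.apache.spark.sql.types.{DataType, DecimalType, StringType}

        // Hypothetical helper: map a JDBC column to a Catalyst type,
        // carrying the declared DECIMAL precision and scale through to Spark
        // instead of falling back to a default precision.
        def catalystType(md: ResultSetMetaData, col: Int): DataType =
          md.getColumnType(col) match {
            case Types.DECIMAL | Types.NUMERIC =>
              DecimalType(md.getPrecision(col), md.getScale(col))
            case _ =>
              StringType // other type mappings elided for brevity
          }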

      I'm not sure whether Pig has the same issue, but I suspect it may.

      Attachments

        1. PHOENIX-2288.patch
          15 kB
          Josh Mahonin
        2. PHOENIX-2288-v2.patch
          17 kB
          Josh Mahonin
        3. PHOENIX-2288-v3.patch
          16 kB
          Josh Mahonin



            People

              Assignee: Josh Mahonin (jmahonin)
              Reporter: Josh Mahonin (jmahonin)
              Votes: 0
              Watchers: 5
