Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.2.2, 2.3.2, 2.4.1
    • Fix Version/s: 2.3.3, 2.4.1, 3.0.0
    • Component/s: SQL
    • Labels: None
    • Environment:
      PostgreSQL 10.4, 9.6.9

Description

Consider the following table definition:

create table test1
(
   v  numeric[],
   d  numeric
);

insert into test1 values('{1111.222,2222.332}', 222.4555);
      

When reading the table into a DataFrame, I get the following schema:

root
 |-- v: array (nullable = true)
 |    |-- element: decimal(0,0) (containsNull = true)
 |-- d: decimal(38,18) (nullable = true)
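
For reference, the schema above comes from a plain JDBC read along the lines of the sketch below; the connection URL, database name, and credentials are placeholders, not from the report:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("decimal-array-repro")
  .master("local[*]")
  .getOrCreate()

// Placeholder connection details; substitute your own environment.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://localhost:5432/testdb")
  .option("dbtable", "test1")
  .option("user", "postgres")
  .option("password", "postgres")
  .load()

df.printSchema() // the array element type is inferred as decimal(0,0)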

Notice that neither column declares a precision or scale, yet the array element came back as decimal(0,0), while the plain numeric column fell back to the default decimal(38,18).

Later, when I actually read the DataFrame's contents, I get the following error:

java.lang.IllegalArgumentException: requirement failed: Decimal precision 4 exceeds max precision 0
        at scala.Predef$.require(Predef.scala:224)
        at org.apache.spark.sql.types.Decimal.set(Decimal.scala:114)
        at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:453)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$16$$anonfun$apply$6$$anonfun$apply$7.apply(JdbcUtils.scala:474)
        ...

In this case I would expect the array elements to get the default decimal(38,18) type as well, and the read to succeed without error.
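
For affected versions, one possible workaround (untested, and not from the report) is to cast the array to an explicitly sized numeric type on the PostgreSQL side, so the JDBC driver reports a concrete precision and scale for the elements:

// Sketch of a workaround; the subquery, the cast, and the connection
// options are assumptions, reusing the placeholders from the sketch above.
val fixed = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://localhost:5432/testdb")
  .option("dbtable", "(select v::numeric(38,18)[] as v, d from test1) as t")
  .option("user", "postgres")
  .option("password", "postgres")
  .load()

fixed.printSchema() // the element should now come back as decimal(38,18)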


People

    • Assignee: alsh (Oleksii Shkarupin)
    • Reporter: alsh (Oleksii Shkarupin)
    • Votes: 0
    • Watchers: 2
