OpenJPA / OPENJPA-213

@Column with precision and scale should result in NUMERIC(precision, scale)

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.9.7, 1.1.0
    • Fix Version/s: 1.3.0, 2.0.0-M2
    • Component/s: jpa
    • Labels:
      None

      Description

      @Column provides precision and scale attributes, but there's no (easy) way to figure out how they affect OpenJPA's behavior, if at all. It looks like OpenJPA reads the type of a persistent field, and when it's double it maps it to DOUBLE in Derby regardless of the other attributes. When precision and scale are specified, the generated DDL should use NUMERIC(precision, scale) or its synonym, DECIMAL(precision, scale).
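      A minimal sketch of the mapping being requested; the entity and field names (Invoice, amount) are illustrative, not from this issue:

```java
import java.math.BigDecimal;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Invoice {
    @Id
    private long id;

    // With schema generation, the request is that this field produce a
    // NUMERIC(10, 2) column (or the synonymous DECIMAL(10, 2)) rather
    // than DOUBLE.
    @Column(precision = 10, scale = 2)
    private BigDecimal amount;
}
```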

        Issue Links

          Activity

          Patrick Linskey added a comment -

          If the JPA2 spec ends up addressing schema generation, this might be a candidate for a TCK test.

          Michael Dick added a comment -

          I'm not sure I agree with the description of the problem.

          I've been basing my assumptions on the conversion tables found at http://java.sun.com/j2se/1.5.0/docs/guide/jdbc/getstart/mapping.html#1004791

          The tables there indicate that a java.lang.Double should be mapped to DOUBLE, not NUMERIC or DECIMAL. If NUMERIC or DECIMAL is desired then the entity should use a variable of type java.math.BigDecimal.

          The way the problem description is worded, we'd be changing the rules if precision and scale were specified in an annotation. It becomes a question of which is more important: the type of the variable or the annotations around it. An argument can be made for either side, but I'm inclined to side with the type of the variable trumping the annotations. I believe the language in the spec supports this interpretation too:

          From section 9.1.5:

          int precision (Optional): The precision for a decimal (exact numeric) column. Applies only if a decimal column is used. Default: 0 (value must be set by developer).
          int scale (Optional): The scale for a decimal (exact numeric) column. Applies only if a decimal column is used. Default: 0.

          Assuming that is the correct approach, there is still a problem with DB2 and Derby where the mapping tool creates a DOUBLE column for BigDecimals instead of a NUMERIC column. I'll use this JIRA to fix the problem with DB2 and Derby.

          Craig L Russell added a comment -

          In general, an annotation on a persistent field should override the type of the field, and orm metadata should override the annotation.

          So I agree with the plaintiff that if OpenJPA generates columns, the annotation should be consulted to establish the column metadata in the database.

          Absent any annotation or orm metadata, I agree that the jdbc mapping is reasonable. But if the user specifies a mapping, I believe it should override the jdbc defaults.

          Dan Mihai Dumitriu added a comment -

          When using BigDecimal, presumably one is trying to get arbitrary precision, in our case for currency values. Mapping it simply as FLOAT doesn't work; I get rounding errors all the time.

          Is there any workaround for this? Can I modify the DDL manually? Does OpenJPA extract the doubleValue() from the BigDecimal, or store it as a String?
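          The rounding problem described above is easy to reproduce with plain Java: a double accumulates binary rounding error on decimal amounts, while a BigDecimal with an explicit scale stays exact, which is the behavior a NUMERIC(precision, scale) column preserves on the database side. A self-contained sketch:

```java
import java.math.BigDecimal;

public class CurrencyPrecision {
    public static void main(String[] args) {
        // Summing 0.10 ten times with double accumulates rounding error,
        // because 0.1 has no exact binary representation.
        double d = 0.0;
        for (int i = 0; i < 10; i++) {
            d += 0.1;
        }
        System.out.println(d); // 0.9999999999999999, not 1.0

        // BigDecimal with scale 2 stays exact.
        BigDecimal b = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            b = b.add(new BigDecimal("0.10"));
        }
        System.out.println(b); // 1.00
    }
}
```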

          Tamas Sandor added a comment -

          I'm using openjpa-1.1.0-r422266:656510 on DB2 and SQLSERVER for testing.
          Unfortunately the mapping tool still creates a DOUBLE column on DB2 and FLOAT(32) on SQLSERVER for BigDecimal columns with precision.
          Is there any patch or progress on this issue?

          Michael Dick added a comment -

          Attaching a patch. The patch only addresses the "first" part of the fix, i.e. BigDecimal will now be mapped to NUMERIC instead of DOUBLE.

          The second part of the fix is to promote a field of type Double to NUMERIC if scale or precision is specified on the annotation.
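          The promotion rule described above can be sketched as a small decision function; the method and parameter names are illustrative, not OpenJPA's actual API:

```java
import java.sql.Types;

public class ColumnTypeSketch {
    // Hypothetical sketch of the promotion rule: a field that would
    // default to DOUBLE is promoted to NUMERIC when the @Column
    // annotation carries a non-zero precision or scale.
    static int pickJdbcType(int defaultType, int precision, int scale) {
        if (defaultType == Types.DOUBLE && (precision > 0 || scale > 0)) {
            return Types.NUMERIC;
        }
        return defaultType;
    }

    public static void main(String[] args) {
        // No precision/scale: keep the JDBC default for the field type.
        System.out.println(pickJdbcType(Types.DOUBLE, 0, 0) == Types.DOUBLE);
        // precision=10, scale=2: promote to NUMERIC(10, 2).
        System.out.println(pickJdbcType(Types.DOUBLE, 10, 2) == Types.NUMERIC);
    }
}
```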

          chunlinyao added a comment -

          DB2 and Derby still map BigDecimal to DOUBLE. Although this patch maps BigDecimal to Types.NUMERIC, the AbstractDB2Dictionary.java file sets numericTypeName to DOUBLE in its constructor.
          I don't know why it is defined as DOUBLE. @curtisr7 tried to fix it in OPENJPA-1224 but finally rolled back the changes.

          Rick Curtis added a comment -

          Per one of my last comments in OPENJPA-1224:
          Rick Curtis submitted changeset 822288 to trunk in openjpa (3 files) - 06/Oct/09 15:14
          OPENJPA-1224: backing out changes while investigating a test regression.

          It looks like this change regressed some of our internal tests and I backed the change out. I'd suggest posting a question to the users mailing list with details on what you are hitting, as opposed to posting comments to a JIRA that was closed over 4 years ago.

          Thanks,
          Rick


            People

            • Assignee:
              Michael Dick
              Reporter:
              Jacek Laskowski
            • Votes:
              1
              Watchers:
              3
