I'm not sure I agree with the description of the problem.
I've been basing my assumptions on the conversion tables found at http://java.sun.com/j2se/1.5.0/docs/guide/jdbc/getstart/mapping.html#1004791
The tables there indicate that a java.lang.Double should be mapped to DOUBLE, not NUMERIC or DECIMAL. If NUMERIC or DECIMAL is desired, the entity should use a field of type java.math.BigDecimal.
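To illustrate, here is a hypothetical entity sketch (the class and field names are made up, and the @Column annotation is shown in a comment so the snippet stays self-contained without a JPA provider on the classpath). Per those mapping tables, the Java field type drives the default SQL column type:

```java
import java.math.BigDecimal;

// Hypothetical entity: the field's Java type, not an annotation,
// determines the default column type under the JDBC mapping tables.
public class PriceRecord {
    // java.lang.Double -> DOUBLE (an approximate numeric column);
    // precision/scale would not apply to this field.
    Double approximatePrice;

    // java.math.BigDecimal -> NUMERIC/DECIMAL (an exact numeric column);
    // @Column(precision = 10, scale = 2) would apply to this field.
    BigDecimal exactPrice;
}
```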
The way the problem description is worded, we'd be changing the rules whenever precision and scale are specified in an annotation. It becomes a question of which is more important: the type of the field or the annotations around it. An argument can be made for either side, but I'm inclined to let the type of the field trump the annotations. I believe the language in the spec supports this interpretation too:
From section 9.1.5:
    int precision - (Optional) The precision for a decimal (exact numeric) column.
                    (Applies only if a decimal column is used.)
                    Default: 0 (Value must be set by developer.)
    int scale     - (Optional) The scale for a decimal (exact numeric) column.
                    (Applies only if a decimal column is used.)
                    Default: 0
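For context on the precision/scale terminology in that spec excerpt, here is a quick sketch of what the two terms mean for an exact numeric value, using java.math.BigDecimal (precision is the total number of significant digits, scale is the number of digits to the right of the decimal point):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class PrecisionScaleDemo {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("12345.67");
        // 7 significant digits total, 2 of them after the decimal point
        System.out.println(price.precision()); // 7
        System.out.println(price.scale());     // 2

        // A NUMERIC(p, 2) column holds at most 2 fractional digits;
        // setScale models the rounding side of fitting a value into it.
        BigDecimal rounded =
            new BigDecimal("12345.678").setScale(2, RoundingMode.HALF_UP);
        System.out.println(rounded); // 12345.68
    }
}
```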
Assuming that is the correct approach, there is still a problem with DB2 and Derby: the mapping tool creates a DOUBLE column for BigDecimal fields instead of a NUMERIC column. I'll use this JIRA to fix that problem with DB2 and Derby.