Details
- Type: Improvement
- Status: Open
- Priority: Minor
- Resolution: Unresolved
- Affects Version/s: 10.4.1.3
- Fix Version/s: None
- Urgency: Normal
Description
If you constrain a column to be both "not null" and "unique", then Derby creates an old-style unique index on the column. If you later remove the "not null" constraint, Derby converts the index to a new-style unique-with-duplicate-nulls index.
However, if you constrain a column to be "unique" and only later add a "not null" constraint, then Derby does not convert the unique-with-duplicate-nulls index into an old-style unique index.
So the order in which you add these constraints affects the resulting index metadata. I don't know whether this difference affects the optimizer's decisions.
Here is a script which shows this behavior:
drop table v;
0 rows inserted/updated/deleted
ij> create table v
(
a int,
unique ( a )
);
0 rows inserted/updated/deleted
ij> drop table w;
0 rows inserted/updated/deleted
ij> create table w
(
m int not null,
unique ( m )
);
0 rows inserted/updated/deleted
ij> select cast( t.tableName as varchar( 10 )), cast (c.conglomeratename as varchar( 30 )), c.descriptor
from sys.sysconglomerates c, sys.systables t
where c.tableid = t.tableid
and ( t.tablename = 'V' or t.tablename='W')
and c.isconstraint
;
1 |2 |DESCRIPTOR
---------------------------------------------------------
V |SQL080417133332230 |UNIQUE WITH DU&
W |SQL080417133332330 |UNIQUE BTREE (&
2 rows selected
ij> alter table v
alter column a not null
;
0 rows inserted/updated/deleted
ij> alter table w
alter column m null
;
0 rows inserted/updated/deleted
ij> select cast( t.tableName as varchar( 10 )), cast (c.conglomeratename as varchar( 30 )), c.descriptor
from sys.sysconglomerates c, sys.systables t
where c.tableid = t.tableid
and ( t.tablename = 'V' or t.tablename='W')
and c.isconstraint
;
1 |2 |DESCRIPTOR
---------------------------------------------------------
V |SQL080417133332230 |UNIQUE WITH DU&
W |SQL080417133332330 |UNIQUE WITH DU&
2 rows selected
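A possible workaround (an untested sketch; the table and constraint names here are hypothetical, and it assumes the unique constraint was given an explicit name so it can be dropped) is to drop and re-add the unique constraint after the column becomes "not null". Since a unique constraint added to a column that is already "not null" gets an old-style unique index, re-creating the constraint should rebuild the index in the old style:

```sql
-- Sketch only: name the constraint so it can be dropped later.
create table v2 ( a int, constraint v2_u unique ( a ) );

-- Make the column non-nullable first...
alter table v2 alter column a not null;

-- ...then drop and re-add the constraint, so Derby builds a fresh
-- index against a "not null" column (expected: old-style UNIQUE BTREE
-- rather than UNIQUE WITH DUPLICATE NULLS).
alter table v2 drop constraint v2_u;
alter table v2 add constraint v2_u unique ( a );
```

Re-running the SYSCONGLOMERATES query above against V2 should show whether the rebuilt descriptor matches the old-style index.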