The new logic in the patch looks correct to me, but the writeRDBNAM() method is a bit too aggressive about allocating byte arrays:
+ int len = currentManager.getByteLength(rdbnam);
The UTF-8 CCSID manager will allocate a new byte array in order to find the length, and then throw it away.
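To illustrate the pattern (a hypothetical sketch, not Derby's actual Utf8CcsidManager code): a length-only query that goes through String.getBytes() materializes the whole encoded array just to read its length.

```java
import java.nio.charset.StandardCharsets;

public class ByteLengthSketch {
    // Hypothetical sketch of how a UTF-8 CCSID manager might compute
    // the byte length: getBytes() allocates a full byte array that is
    // measured once and immediately discarded.
    static int getByteLength(String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        // "n\u00e9" encodes to 3 UTF-8 bytes (1 + 2); the array exists
        // only long enough to be measured.
        System.out.println(getByteLength("n\u00e9")); // prints 3
    }
}
```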
+ /* Initialize the buffer at MAX_NAME - max length in bytes for RDBNAM */
+ ByteBuffer buff = ByteBuffer.allocate(CodePoint.MAX_NAME);
This allocates a byte buffer of length 255 on every call, and after
DERBY-4805 increases MAX_NAME to ~64K, that per-call allocation is probably going to hurt.
+ /* Convert the RDBNAM into bytes using the current CCSID */
+ currentManager.convertFromJavaString(rdbnam, buff);
Here, the UTF-8 CCSID manager will allocate a new byte array again, holding the contents of the string, before it puts it into the ByteBuffer.
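The pattern being described looks roughly like this (again a sketch of the assumed internals, not the actual Utf8CcsidManager source): encode into a temporary array, then copy that array into the caller's ByteBuffer.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ConvertSketch {
    // Hypothetical sketch: the string is encoded into a throwaway
    // byte array, whose contents are then copied into the buffer.
    static void convertFromJavaString(String s, ByteBuffer buff) {
        byte[] tmp = s.getBytes(StandardCharsets.UTF_8); // temporary array
        buff.put(tmp);
    }

    public static void main(String[] args) {
        ByteBuffer buff = ByteBuffer.allocate(255);
        convertFromJavaString("MYDB", buff);
        System.out.println(buff.position()); // prints 4
    }
}
```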
+ /* Get the byte array out of the byte buffer */
+ int bytesLen = buff.position();
+ byte[] rdbBytes = new byte[bytesLen];
And here yet another byte array is allocated before it's sent to DDMWriter.
I think if writeRDBNAM() used writeScalarPaddedString() instead of writeScalarPaddedBytes(), we could avoid some of these byte array allocations. We wouldn't avoid all of them, because writeScalarPaddedString() still calls getByteLength() and convertFromJavaString() internally, but at least the MAX_NAME ByteBuffer and the final rdbBytes array (allocations 2 and 4 above) would go away.
As a possible future improvement, we may also consider changing Utf8CcsidManager's implementation of convertFromJavaString(String,ByteBuffer) and getByteLength() so that they don't create any byte arrays internally.
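One way to get there with plain java.nio (a sketch of the idea, not a proposed patch): a CharsetEncoder can encode straight into the target ByteBuffer with no intermediate array, and for UTF-8 the byte length can be computed arithmetically from the chars without materializing the encoded bytes at all.

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.StandardCharsets;

public class NoAllocSketch {
    // Encode directly into the caller's buffer; no temporary byte
    // array. (CharsetEncoder instances are not thread-safe, so a real
    // implementation would need one per writer or per thread.)
    static void convertFromJavaString(String s, ByteBuffer buff) {
        CharsetEncoder enc = StandardCharsets.UTF_8.newEncoder();
        enc.encode(CharBuffer.wrap(s), buff, true);
        enc.flush(buff);
    }

    // Compute the UTF-8 byte length by walking the chars; nothing is
    // allocated.
    static int getByteLength(String s) {
        int len = 0;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c < 0x80) {
                len += 1;                       // ASCII: 1 byte
            } else if (c < 0x800) {
                len += 2;                       // 2-byte sequence
            } else if (Character.isHighSurrogate(c)) {
                len += 4;                       // surrogate pair: 4 bytes
                i++;                            // skip the low surrogate
            } else {
                len += 3;                       // 3-byte sequence
            }
        }
        return len;
    }

    public static void main(String[] args) {
        ByteBuffer buff = ByteBuffer.allocate(255);
        convertFromJavaString("MYDB", buff);
        // position matches the arithmetically computed length
        System.out.println(buff.position() + " " + getByteLength("MYDB"));
    }
}
```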