When a Utf8 instance is about to receive new data (e.g. in BinaryDecoder), Utf8::setByteLength is invoked to ensure the backing byte array has sufficient capacity.
However, the required size is compared against the instance's logical length rather than the actual length of the existing backing byte array.
This causes needless reallocation of the backing byte array: if you read a 10-byte string, then an 8-byte string, then a 9-byte string, the third read allocates a new backing array even though the instance already has a 10-byte array at its disposal.
At a minimum, we should replace the comparison against the logical length with a comparison against the backing array's actual length.
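The change can be sketched roughly as below. This is a hypothetical, simplified model of the class, not Avro's actual Utf8 source: the field names, the no-copy growth, and the omission of content preservation are all assumptions made for illustration.

```java
// Simplified sketch of the proposed comparison fix.
// Field and method names are assumed, not copied from Avro's Utf8.
public class Utf8Sketch {
  private byte[] bytes = new byte[0];
  private int length; // logical length; may be smaller than bytes.length

  public Utf8Sketch setByteLength(int newLength) {
    // Buggy form compared the logical length:
    //   if (this.length < newLength) { bytes = new byte[newLength]; }
    // which reallocates whenever newLength exceeds the *previous string's*
    // length, even if the array itself is already big enough.
    // Compare against the actual capacity instead:
    if (bytes.length < newLength) {
      bytes = new byte[newLength];
    }
    this.length = newLength;
    return this;
  }

  public byte[] getBytes() { return bytes; }
  public int getByteLength() { return length; }
}
```

With this version, the 10-byte / 8-byte / 9-byte sequence from above reuses the 10-byte array for the third read instead of allocating a fresh 9-byte one.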
We may also wish to consider a maximum retained size for the Utf8 instance: if an allocation exceeds this limit, we drop the oversized backing array the next time a resize requests a length below the limit, so we are not forced to keep memory sized for the largest Utf8 ever encountered.
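One way the cap could work is sketched below. The threshold value, its name, and the shrink-on-next-small-resize policy are all assumptions for illustration, not an existing Avro feature:

```java
// Hypothetical sketch of the retained-size cap.
// MAX_RETAINED and all names here are assumed, not part of Avro.
public class CappedUtf8Sketch {
  private static final int MAX_RETAINED = 1 << 20; // assumed 1 MiB cap
  private byte[] bytes = new byte[0];
  private int length;

  public CappedUtf8Sketch setByteLength(int newLength) {
    if (bytes.length < newLength) {
      // Grow as usual when capacity is insufficient.
      bytes = new byte[newLength];
    } else if (bytes.length > MAX_RETAINED && newLength <= MAX_RETAINED) {
      // A previous read pushed us past the cap; now that a smaller
      // length is requested, shrink so the oversized array can be GC'd.
      bytes = new byte[newLength];
    }
    this.length = newLength;
    return this;
  }

  public int capacity() { return bytes.length; }
  public int getByteLength() { return length; }
}
```

The shrink happens lazily on the next small resize rather than eagerly after the large read, matching the behavior described above: the large array stays available while large values are being decoded, but is not pinned forever.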