Hadoop Common / HADOOP-302

class Text (replacement for class UTF8) was: HADOOP-136



    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.5.0
    • Component/s: io
    • Labels: None


      Just to verify: which length-encoding scheme are we using for class Text (aka LargeUTF8)?

      a) The "UTF-8/Lucene" scheme? (highest bit of each byte is an extension bit, which I think is what Doug is describing in his last comment) or
      b) the record-IO scheme in o.a.h.record.Utils.java:readInt
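
      For concreteness, a minimal sketch of scheme (a): each byte carries seven
      payload bits, and the high bit is set whenever another byte follows.
      Class and method names here are illustrative, not a proposed API.

        import java.io.DataInput;
        import java.io.DataOutput;
        import java.io.IOException;

        public class LuceneStyleVInt {

          // Emit seven bits at a time, low-order group first, setting the
          // high (continuation) bit on every byte except the last.
          public static void writeVInt(DataOutput out, int i) throws IOException {
            while ((i & ~0x7F) != 0) {
              out.writeByte((i & 0x7F) | 0x80);
              i >>>= 7;
            }
            out.writeByte(i);
          }

          // Accumulate seven bits per byte until a byte arrives with the
          // continuation bit clear.
          public static int readVInt(DataInput in) throws IOException {
            byte b = in.readByte();
            int i = b & 0x7F;
            for (int shift = 7; (b & 0x80) != 0; shift += 7) {
              b = in.readByte();
              i |= (b & 0x7F) << shift;
            }
            return i;
          }
        }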

      Either way, note that:

      1. UTF8.java and its successor Text.java need to read the length in two ways:
      1a. consume 1+ bytes from a DataInput, and
      1b. parse the length within a byte array at a given offset
      (1b is used for the "WritableComparator optimized for UTF8 keys").

      o.a.h.record.Utils only supports the DataInput mode.
      It is not clear to me what the best way is to extend this Utils code to support both reading modes (one possible shape is sketched below).
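
      One possible shape for such an extension, assuming the scheme-(a)
      encoding sketched above: keep the existing DataInput reader and add a
      byte-array overload that decodes at an offset. The signature below is
      hypothetical, not existing o.a.h.record.Utils API.

        // Mode 1b: decode the value starting at 'offset' in 'bytes',
        // without a stream and without consuming anything.
        public static int readVInt(byte[] bytes, int offset) {
          int value = 0;
          for (int shift = 0; ; shift += 7, offset++) {
            byte b = bytes[offset];
            value |= (b & 0x7F) << shift;
            if ((b & 0x80) == 0) {
              return value;   // continuation bit clear: this was the last byte
            }
          }
        }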

      2. Methods like UTF8's WritableComparator need to be low-overhead; in particular, there should be no object allocation.
      For the byte-array case, the varlen-reader utility needs to be extended to return both
      the decoded length and the number of bytes the encoding occupies
      (so that the caller can do offset += encodedLength; see the helper sketched below).
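
      One allocation-free way to get both numbers, again assuming scheme (a):
      instead of returning a pair object, expose a second method that reports
      how many bytes the encoding occupies. The helper name is hypothetical.

        // Number of bytes the encoding at 'offset' occupies, found by
        // scanning continuation bits. A comparator can then advance with:
        //   int len = readVInt(bytes, offset);
        //   offset += vIntSize(bytes, offset);
        public static int vIntSize(byte[] bytes, int offset) {
          int n = 1;
          while ((bytes[offset++] & 0x80) != 0) {
            n++;
          }
          return n;
        }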

      3. A String length is never negative, so the encoding does not need to handle (small) negative integers.

      4. One advantage of a) is that it is standard (or at least well-known and natural) and there are no magic constants (like -120, -121, -124).
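
      To make that contrast concrete, here is a rough sketch of the scheme-(b)
      flavor, where the first byte is either the value itself or a magic marker
      giving the count of trailing bytes. The constants below are made up for
      illustration and are not the actual o.a.h.record.Utils values.

        // Illustrative only: a first byte >= -120 is the value itself;
        // otherwise -121 means 1 trailing big-endian byte, ... -124 means 4.
        public static int readIntWithMagic(DataInput in) throws IOException {
          byte first = in.readByte();
          if (first >= -120) {
            return first;               // small value stored inline
          }
          int nBytes = -120 - first;    // decode the magic length marker
          int value = 0;
          for (int k = 0; k < nBytes; k++) {
            value = (value << 8) | (in.readByte() & 0xFF);
          }
          return value;
        }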


        Attachments

        1. VInt.patch (12 kB) by Hairong Kuang
        2. textwrap.patch (1 kB) by Hairong Kuang
        3. text.patch (28 kB) by Hairong Kuang

              Assignee: Hairong Kuang
              Reporter: Michel Tourn