Hadoop Common / HADOOP-136

Overlong UTF8's not handled well


Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Duplicate
    • Affects Version/s: 0.2.0
    • Fix Version/s: 0.6.0
    • Component/s: io
    • Labels: None

    Description

      When we feed an overlong string to the UTF8 constructor, three suboptimal things happen.

      First, we truncate to 0xffff/3 characters on the assumption that every character takes three bytes in UTF8. This can truncate strings that would actually have fit, and it can also be over-optimistic, since some characters encode as four bytes in UTF8.

      Second, the code doesn't actually handle four-byte characters.

      Third, there's a behavioral discontinuity. If the string is "discovered" to be overlong by the arbitrary limit described above, we truncate with a log message; otherwise we signal a RuntimeException. One feels that both forms of truncation should be treated alike. However, this third issue is concealed by the second: the exception will never be thrown, because UTF8.utf8Length can't return more than three times the length of its input.
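The varying byte widths behind the first and second points can be checked directly. The snippet below uses standard UTF-8 (the class name is illustrative; Hadoop's UTF8 class may use a modified encoding internally):

```java
import java.nio.charset.StandardCharsets;

public class Utf8Widths {
    public static void main(String[] args) {
        // Standard UTF-8 byte widths range from 1 to 4 per character,
        // so a fixed 0xffff/3 character cutoff both truncates all-ASCII
        // strings that would have fit and under-counts supplementary
        // (four-byte) characters.
        String[] samples = {
            "A",                                     // ASCII: 1 byte
            "\u00e9",                                // Latin-1 supplement: 2 bytes
            "\u4e2d",                                // CJK: 3 bytes
            new String(Character.toChars(0x1F600))   // supplementary plane: 4 bytes
        };
        for (String s : samples) {
            System.out.println(s.getBytes(StandardCharsets.UTF_8).length);
        }
    }
}
```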

      I would recommend changing UTF8.utf8Length to let its caller know how many characters of the input string will actually fit if there's an overflow [perhaps by returning the negative of that number], and doing the truncation accurately as needed.
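A minimal sketch of that recommendation, assuming a 0xffff-byte limit and the bracketed negative-return convention (the class name and MAX_BYTES constant are hypothetical, not the actual Hadoop code):

```java
public class Utf8LengthSketch {
    static final int MAX_BYTES = 0xffff; // assumed on-the-wire limit

    // Returns the UTF-8 byte length of s; if the encoding would exceed
    // MAX_BYTES, returns the negative of the number of Java chars that
    // do fit, so the caller can truncate accurately.
    static int utf8Length(String s) {
        int bytes = 0;
        for (int i = 0; i < s.length(); i++) {
            int cp = s.codePointAt(i);
            int len;
            if (cp < 0x80)          len = 1;
            else if (cp < 0x800)    len = 2;
            else if (cp < 0x10000)  len = 3;
            else                    len = 4; // supplementary characters counted correctly
            if (bytes + len > MAX_BYTES) {
                return -i; // only the first i chars fit
            }
            bytes += len;
            if (cp >= 0x10000) i++; // skip the low surrogate of a pair
        }
        return bytes;
    }

    public static void main(String[] args) {
        System.out.println(utf8Length("hello"));             // fits: byte length
        System.out.println(utf8Length("x".repeat(70000)));   // overflow: negative char count
    }
}
```

The caller would then truncate to the returned character count instead of applying the blanket 0xffff/3 cutoff, making both overflow paths behave alike.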

      -dk

      Attachments

        1. largeutf8.patch (13 kB, Michel Tourn)


            People

              Assignee: Hairong Kuang
              Reporter: Dick King
              Votes: 0
              Watchers: 1
