HIVE-11672

Hive Streaming API handles bucketing incorrectly


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Duplicate
    • Affects Version/s: 1.2.1
    • Fix Version/s: None
    • Component/s: HCatalog, Hive, Transactions
    • Labels: None

    Description

      The Hive Streaming API allows clients to obtain a random bucket and then insert data into it. However, this leads to incorrect bucketing, as Hive expects data to be distributed into buckets based on a hash function applied to the bucket key. Today the data is inserted into whichever bucket the client happened to get, and clients have no way of:

      1. Knowing what bucket a row (tuple) belongs to
      2. Asking for a specific bucket

      There are optimizations such as Sort Merge Join and Bucket Map Join that rely on the data being correctly distributed across buckets, and these will produce incorrect read results if the data is not distributed correctly.
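
      For illustration, bucket assignment roughly works by hashing the bucket key and taking the result modulo the number of buckets. The sketch below is a simplified stand-in (the class name, and the use of Java's String.hashCode instead of Hive's own type-aware hash, are assumptions for illustration, not Hive's actual internals):

          // Illustrative sketch: how a row's bucket id is derived from its bucket key.
          // Hive's real implementation uses its own type-aware hash in the serde layer;
          // the class name and String.hashCode here are assumptions for illustration.
          public final class BucketingSketch {

              // Map a bucket-key hash code to a bucket id in [0, numBuckets).
              static int bucketFor(int keyHashCode, int numBuckets) {
                  // Mask the sign bit so negative hash codes still map to a valid bucket.
                  return (keyHashCode & Integer.MAX_VALUE) % numBuckets;
              }

              public static void main(String[] args) {
                  int numBuckets = 4;
                  String bucketKey = "customer_42";   // value of the CLUSTERED BY column
                  int bucket = bucketFor(bucketKey.hashCode(), numBuckets);
                  System.out.println(bucketKey + " -> bucket " + bucket);
                  // A client that picks a bucket at random will usually land on a
                  // different bucket than this, which is the mismatch described above.
              }
          }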

      There are two obvious design choices:

      1. Hive Streaming API should fix this internally by distributing the data correctly
      2. Hive Streaming API should expose data distribution scheme to the clients and allow them to distribute the data correctly

      The first option means every client thread writes to many buckets, producing many small files in each bucket and keeping too many connections open; this does not seem feasible. The second option pushes more functionality into the client of the Hive Streaming API, but it can maintain high throughput and write good-sized ORC files. This option seems preferable.
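
      As a rough sketch of what option 2 could look like, the API might expose the table's bucketing scheme so a client can route each row to its correct bucket before writing. The interfaces below are hypothetical; none of these names or signatures exist in the current Streaming API:

          import java.util.List;

          // Hypothetical: describes the table's bucketing so a client can compute
          // the target bucket for a row using the same hash Hive uses internally.
          interface BucketingScheme {
              int numBuckets();
              int bucketFor(List<Object> bucketKeyValues);
          }

          // Hypothetical: a writer that accepts the bucket id chosen by the client.
          interface BucketedStreamingWriter {
              void write(int bucketId, byte[] row);
          }

          class ClientSideRouting {
              // The client computes the bucket first, then writes into exactly that
              // bucket, so each bucket file only receives correctly hashed rows.
              static void writeRow(BucketingScheme scheme, BucketedStreamingWriter writer,
                                   List<Object> bucketKeyValues, byte[] row) {
                  int bucket = scheme.bucketFor(bucketKeyValues);
                  writer.write(bucket, row);
              }
          }

      A client that partitions its input stream by bucket id upstream can then keep one writer per bucket it actually touches, preserving throughput and producing well-sized ORC files per bucket.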

Attachments

Issue Links

Activity

People

    Assignee: Roshan Naik (roshan_naik)
    Reporter: Raj Bains (rbains)
    Votes: 0
    Watchers: 8

Dates

    Created:
    Updated:
    Resolved: