• Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.7.0
    • Component/s: Query Processor
    • Labels:
    • Hadoop Flags:


      Concurrency model for Hive:

      Currently, Hive does not provide a good concurrency model. The only guarantee provided in the case of concurrent readers and writers is that
      a reader will not see a mix of partial data from the old version (before the write) and partial data from the new version (after the write).
      This has come across as a big problem, especially for background processes performing maintenance operations.

      The following possible solutions come to mind.

      1. Locks: Acquire read/write locks - they can be acquired at the beginning of the query, or the write locks can be delayed until the move
      task (when the directory is actually moved). Care needs to be taken to avoid deadlocks.
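      As a rough sketch of option 1 (hypothetical names, not actual Hive classes): per-table read/write locks, always acquired in sorted table-name order so that two queries can never wait on each other in a cycle, with the exclusive lock optionally delayed until the move task.

```java
import java.util.Map;
import java.util.SortedSet;
import java.util.TreeMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of option 1: per-table shared/exclusive locks.
// Deadlock avoidance: callers pass tables as a SortedSet, so locks are
// always taken in the same (sorted) order.
public class TableLockManager {
    private final Map<String, ReentrantReadWriteLock> locks = new TreeMap<>();

    private synchronized ReentrantReadWriteLock lockFor(String table) {
        return locks.computeIfAbsent(table, t -> new ReentrantReadWriteLock());
    }

    // A query takes shared locks on every table it reads, in sorted order.
    public void lockForRead(SortedSet<String> tables) {
        for (String t : tables) lockFor(t).readLock().lock();
    }

    public void unlockRead(SortedSet<String> tables) {
        for (String t : tables) lockFor(t).readLock().unlock();
    }

    // The exclusive lock can be deferred until the move task, shrinking
    // the window during which readers are blocked.
    public boolean tryLockForWrite(String table) {
        return lockFor(table).writeLock().tryLock();
    }

    public void unlockWrite(String table) {
        lockFor(table).writeLock().unlock();
    }
}
```

      With ReentrantReadWriteLock, a writer's tryLock fails while any reader holds the shared lock, which is the behavior a move task would need before swapping the directory.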

      2. Versioning: The writer can create a new version if the current version is being read. Note that this is not equivalent to snapshots:
      the old version can only be accessed by the current readers, and will be deleted once they have all finished.
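      Option 2 amounts to reference counting: each reader pins the version that was current when it started, and a version's directory is deleted only once it is no longer current and its reader count drops to zero. A minimal sketch, with illustrative names (none of this is Hive API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of option 2: copy-on-write versioning with
// reference counting. Readers pin a version; a superseded version is
// deleted when its last reader finishes.
public class VersionedTable {
    private int current = 0;                                      // version new readers see
    private final Map<Integer, Integer> readers = new HashMap<>(); // version -> active readers

    // A reader pins the current version for the duration of its query.
    public synchronized int openForRead() {
        readers.merge(current, 1, Integer::sum);
        return current;
    }

    public synchronized void closeRead(int version) {
        if (readers.merge(version, -1, Integer::sum) == 0 && version != current) {
            readers.remove(version);
            deleteVersionDir(version);   // last reader gone, version superseded
        }
    }

    // A writer publishes a new version; existing readers keep the old one.
    public synchronized void publishNewVersion() {
        int old = current;
        current = old + 1;
        if (readers.getOrDefault(old, 0) == 0) {
            deleteVersionDir(old);       // nobody was reading the old version
        }
    }

    protected void deleteVersionDir(int version) {
        // In a real system this would remove the version's directory on disk.
    }
}
```

      This matches the text above: the old version is visible only to readers that already held it, not to anyone starting later, so it is not a general snapshot mechanism.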


      1. hive_leases.txt
        3 kB
        Prasad Chakka
      2. hive.1293.1.patch
        87 kB
        Namit Jain
      3. hive.1293.2.patch
        89 kB
        Namit Jain
      4. hive.1293.3.patch
        99 kB
        Namit Jain
      5. hive.1293.4.patch
        100 kB
        Namit Jain
      6. hive.1293.5.patch
        101 kB
        Namit Jain
      7. hive.1293.6.patch
        117 kB
        Namit Jain
      8. hive.1293.7.patch
        122 kB
        Namit Jain
      9. hive.1293.8.patch
        121 kB
        Namit Jain
      10. hive.1293.9.patch
        122 kB
        Namit Jain



            • Assignee:
              Namit Jain
            • Votes:
              0
            • Watchers:
              21


              • Created: