Hive / HIVE-3118

Automatically invoke .hiverc init script when running hiveserver instance.


Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: Server Infrastructure
    • Labels: None

    Description

      When using Hive with Microsoft PowerPivot as a visualization tool (connected via the HiveODBC driver), the following problems occur:

      1) Multiple instances of the same set of MapReduce jobs are spawned (one after another) for a single query.
      For example: select * from tweetsdata;
      ---------------------------------------------------------------------------
      Hive history file=/tmp/hadoop/hive_job_log_hadoop_201206121337_1423188701.txt
      Total MapReduce jobs = 2
      Launching Job 1 out of 2
      Launching Job 2 out of 2
      OK
      Total MapReduce jobs = 2
      Launching Job 1 out of 2
      Launching Job 2 out of 2
      OK
      Hive history file=/tmp/hadoop/hive_job_log_hadoop_201206121337_1423188701.txt
      Total MapReduce jobs = 2
      ....
      // Multiple instances of the same MapReduce jobs (with the same HDFS read/write values)
      ---------------------------------------------------------------------------

      2) A UDF defined before executing the query is no longer recognized from the second set of MapReduce jobs onward (i.e. for any instance after the following line, which indicates a new server instance has started):
      Hive history file=/tmp/hadoop/hive_job_log_hadoop_201206121337_1423188701.txt
      ---------------------------------------------------------------------------
      Error:
      java.lang.RuntimeException: failed to evaluate: <unbound>=Class.forName("retweetlink");
      ---------------------------------------------------------------------------

      So it would be a good idea to invoke the .hiverc init script for server instances: UDFs could then be defined in the .hiverc script, and each time a new hiveserver instance starts, the script would be executed before any MapReduce job runs.

      This is needed because the problem cannot be worked around by repeating the statements on the client side, since a single query (the select in this case) is executed across different server instances.
      ---------------------------------------------------------------------------
      add jar /usr/local/hadoop/src/retweetlink1.jar;
      create temporary function link as 'retweetlink';
      select link(tweet), count(*) as countlink from tweetsdata where tweet like '%RT%' group by link(tweet);
      ---------------------------------------------------------------------------
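
      If hiveserver sourced an init script at session start, the first two statements above could live in that script instead. A minimal sketch of such a .hiverc follows; the location is an assumption, mirroring the Hive CLI's existing behavior of reading a .hiverc from the conf directory or the user's home directory:
      ---------------------------------------------------------------------------
      -- Contents of .hiverc, to be executed automatically whenever a new
      -- hiveserver session starts, before any client query is run:
      add jar /usr/local/hadoop/src/retweetlink1.jar;
      create temporary function link as 'retweetlink';
      ---------------------------------------------------------------------------
      With this in place, the client would only need to submit the select statement itself, and the UDF would be available in every server instance.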

          People

            Assignee: Unassigned
            Reporter: Sreenath Menon (sreenathmenon)
            Votes: 1
            Watchers: 6
