Kylin / KYLIN-1021

upload dependent jars of kylin to HDFS and set tmpjars


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: v1.0
    • Fix Version/s: v1.2, v1.4.0
    • Component/s: None
    • Labels: None
    • Flags: Patch

    Description

      As Shaofeng Shi said on the mailing list: regarding your question about the jar files being located on local disk instead of HDFS, yes, the hadoop/hive/hbase jars should exist on local disk on each machine of the hadoop cluster, at the same locations; Kylin will not upload those jars. Please check and ensure the consistency of your hadoop cluster.

      However, our hadoop cluster is managed by a hadoop administrator, and we have no permission to log in to those machines. Even if we did, copying all the files to hundreds of machines would be a painful job (I do not know of any tool that does this well).

      By the way, I could not find any documentation about your approach (if you have a document, please point me to it)...

      I changed my source code to create a directory under the Kylin working directory (kylin.hdfs.working.dir/kylin_metadata) and upload all the jars into it if the directory is empty (this only happens the first time) when submitting a MapReduce job, and then set those HDFS locations in the tmpjars property of the MapReduce job (just as Kylin sets tmpfiles before submitting a job). This is automated and makes deploying Kylin easier.
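      The step above can be sketched as follows. TmpJarsHelper and buildTmpJars are illustrative names, not Kylin's actual code: the real patch would upload each jar with Hadoop's FileSystem.copyFromLocalFile and pass the result to Configuration.set("tmpjars", ...); the sketch below shows only the path-joining step, which needs no Hadoop dependency.

      ```java
      import java.util.Arrays;
      import java.util.List;
      import java.util.stream.Collectors;

      // Hypothetical sketch of the described approach: after uploading the
      // dependent jars into the Kylin working directory on HDFS (done once,
      // when the directory is empty), join their HDFS paths into the
      // comma-separated value that Hadoop expects in the "tmpjars" property.
      public class TmpJarsHelper {

          // Build the "tmpjars" value from an HDFS base directory and jar names.
          public static String buildTmpJars(String hdfsJarDir, List<String> jarNames) {
              return jarNames.stream()
                      .map(name -> hdfsJarDir + "/" + name)
                      .collect(Collectors.joining(","));
          }

          public static void main(String[] args) {
              // In the real job-submission path this value would be set via
              // conf.set("tmpjars", value) before the MapReduce job starts;
              // Hadoop then ships the jars to each task's classpath.
              String value = buildTmpJars("hdfs:///kylin/kylin_metadata/jars",
                      Arrays.asList("hive-exec.jar", "hbase-client.jar"));
              System.out.println(value);
          }
      }
      ```

      Joining the uploaded paths into one comma-separated string mirrors how Kylin already handles tmpfiles, so deployment no longer requires copying jars onto every cluster node.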


          People

            Assignee: liyang.gmt8@gmail.com liyang
            Reporter: feng_xiao_yu fengYu
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated:
              Resolved: