Hadoop Common
HADOOP-3182

JobClient creates submitJobDir with SYSTEM_DIR_PERMISSION (rwx-wx-wx)

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.16.2
    • Fix Version/s: 0.16.3
    • Component/s: None
    • Labels: None
    • Release Note:
      Changed <job-dir> from 733 to 777, so that a shared JobTracker can be started by a non-superuser account.

      Description

      JobClient creates submitJobDir with SYSTEM_DIR_PERMISSION (rwx-wx-wx), which causes problems when sharing a cluster.
      Consider the case where userA starts the JobTracker/TaskTrackers and userB submits a job to this cluster. When userB creates submitJobDir, it is created with rwx-wx-wx, which cannot be read by the TaskTrackers started by userA.
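
      For context, below is a minimal sketch of the client-side directory creation being discussed, assuming the FileSystem/FsPermission API of that era; the class and method names are hypothetical, not the actual JobClient code.

        import java.io.IOException;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.fs.permission.FsPermission;

        public class SubmitJobDirSketch {
          // Create <mapred-sys-dir>/<jobId> and open it up to 777 so that a
          // TaskTracker started under a different account can read the job files.
          public static Path createSubmitJobDir(Configuration conf, Path sysDir, String jobId)
              throws IOException {
            Path submitJobDir = new Path(sysDir, jobId);
            FileSystem fs = submitJobDir.getFileSystem(conf);
            fs.mkdirs(submitJobDir);
            fs.setPermission(submitJobDir, new FsPermission((short) 0777));
            return submitJobDir;
          }
        }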

      Attachments

      1. 3182_20080408.patch
        1 kB
        Tsz Wo Nicholas Sze
      2. 3182_20080408_0.16.patch
        1 kB
        Tsz Wo Nicholas Sze
      3. 3182_20080408.patch
        1 kB
        Tsz Wo Nicholas Sze
      4. patch-3182.txt
        3 kB
        Amareshwari Sriramadasu
      5. HADOOP-3182_2_20080410.patch
        3 kB
        Arun C Murthy
      6. HADOOP-3182_2_20080410_0.16.patch
        3 kB
        Tsz Wo Nicholas Sze

        Activity

        Tsz Wo Nicholas Sze added a comment -

        Setting submitJobDir to 777 will work for this problem. The permission settings for the map/reduce files/dirs are not clear at this point. I will write down a specification of the permission requirements.

        Doug Cutting added a comment -

        Should the JobClient ask the JobTracker to create this directory for it, rather than creating it itself?

        Tsz Wo Nicholas Sze added a comment -

        > Should the JobClient ask the JobTracker to create this directory for it, rather than creating it itself?
        In the current implementation, the job-dir is created by the user (not the JobTracker), and then the user puts the job-related files like the jobconf and job.jar in the job-dir.

        If the JobTracker creates and owns this dir, we need to provide some way (maybe RPC) for the user to send the jobconf, job.jar, etc. to the JobTracker.

        Tsz Wo Nicholas Sze added a comment -

        After changing the job-dir permission to 777, I tried the following:

        • Start NN, DN as nn_account (superuser by definition)
        • Start JT, TT as jt_account (non-superuser)
        • Run wordcount as user_account (non-superuser)

        Below are observations of the file/dir creation and their permission settings.

        Step 0: During JobTracker startup

        mapred-sys-dir (delete-mkdirs-setPerm as jt_account)
        733

        Step 1: Client-side job submission (JobClient.submitJob)

        job-dir (aka submitJobDir or mapred-sys-dir/jobId, mkdirs-setPerm as user_account)
        (was 733) 777

        job-dir/job.jar (may not exist, create-setPerm by JobClient)
        job-dir/job.split (create-setPerm by JobClient)
        644

        job-dir/job.xml (aka jobconf, create-setPerm by JobClient; should it be visible on the JobTracker web page? It is visible now.)
        644

        Miscellaneous dirs: currently using the default permission, i.e. the umask. What is the correct permission for them?

        • filesDir = new Path(submitJobDir, "files");
        • archivesDir = new Path(submitJobDir, "archives");
        • libjarsDir = new Path(submitJobDir, "libjars");

        Step 2: JobTracker-side job submission (JobTracker.submitJob)

        job-output-dir (aka mapred.output.dir)
        job-history-dir
        What are the correct permissions???
        They may be created by auto-mkdir in the JobTracker as user_account

        • For example the first file created under job-output-dir in wordcount is job-output-dir/_logs/history/hostname_1207356629390_job_200804041750_0001_username_wordcount. Note that job-output-dir/_logs/history is the job-history-dir.
        • This file is created as user_account
          at org.apache.hadoop.mapred.JobHistory$JobInfo.logSubmitted(JobHistory.java:413)
          at org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:194)
          at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:1751)

        Task does mkdirs on job-output-dir as user_account
        at org.apache.hadoop.mapred.Task.moveTaskOutputs(Task.java:594)
        at org.apache.hadoop.mapred.Task.saveTaskOutput(Task.java:554)
        at org.apache.hadoop.mapred.JobTracker$TaskCommitQueue.run(JobTracker.java:2208)

        • In Task.moveTaskOutputs(...), is finalOutputPath a constant? If yes, it should not be computed again and again in Task.moveTaskOutputs(...).

        job-temp-dir (aka _temporary, created in JobInProgress.<init> as user_account)
        What is the correct permission???

        Still need to look at the directory clean-up and the case in LocalJobRunner.

        Tsz Wo Nicholas Sze added a comment -

        Below is some code found in TaskTracker:

          private void localizeJob(TaskInProgress tip) throws IOException {
            ...
            FileStatus status[] = fileSystem.listStatus(new Path(jobFile).getParent());
            ...
            for(FileStatus stat : status) {
              if (stat.getPath().toString().contains("job.xml")) {
                ...
              }
              if (stat.getPath().toString().contains("job.jar")) {
                ...
              }
            }
            ...
          }
        

        Why does it first get all the FileStatus entries and then do a linear search for the paths containing "job.xml" and "job.jar"? It seems to me that the paths can be constructed directly as <job-dir>/job.xml and <job-dir>/job.jar.
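
        Below is a minimal sketch of the direct construction suggested above; the wrapper class is hypothetical and only illustrates the idea.

          import org.apache.hadoop.fs.Path;

          class JobPathSketch {
            // Build the two paths directly from the job directory instead of
            // listing it and scanning every FileStatus for a matching name.
            static Path[] jobFiles(String jobFile) {
              Path jobDir = new Path(jobFile).getParent();
              return new Path[] { new Path(jobDir, "job.xml"), new Path(jobDir, "job.jar") };
            }
          }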

        Tsz Wo Nicholas Sze added a comment - edited

        Below is the clean-up trace for wordcount:

        Step c1: Task.saveTaskOutput

        • FileSystem.delete <job-output-dir>/_temporary/_task_200804071355_0001_r_000000_0 by JobTracker as user_account
          at org.apache.hadoop.mapred.Task.saveTaskOutput(Task.java:557)
          at org.apache.hadoop.mapred.JobTracker$TaskCommitQueue.run(JobTracker.java:2208)

        Step c2: JobInProgress.garbageCollect

        • FileSystem.delete <job-dir> by JobTracker as user_account
          at org.apache.hadoop.mapred.JobInProgress.garbageCollect(JobInProgress.java:1637)
          at org.apache.hadoop.mapred.JobInProgress.isJobComplete(JobInProgress.java:1396)
          at org.apache.hadoop.mapred.JobInProgress.completedTask(JobInProgress.java:1357)
          at org.apache.hadoop.mapred.JobInProgress.updateTaskStatus(JobInProgress.java:565)
          at org.apache.hadoop.mapred.JobTracker$TaskCommitQueue.run(JobTracker.java:2270)
          <job-dir> is obtained by new Path(profile.getJobFile()).getParent()
        • FileSystem.delete <job-dir> again by JobTracker as user_account
          at org.apache.hadoop.mapred.JobInProgress.garbageCollect(JobInProgress.java:1642)
          at org.apache.hadoop.mapred.JobInProgress.isJobComplete(JobInProgress.java:1396)
          at org.apache.hadoop.mapred.JobInProgress.completedTask(JobInProgress.java:1357)
          at org.apache.hadoop.mapred.JobInProgress.updateTaskStatus(JobInProgress.java:565)
          at org.apache.hadoop.mapred.JobTracker$TaskCommitQueue.run(JobTracker.java:2270)
          <job-dir> is obtained by new Path(conf.getSystemDir(), jobId)

        Question: Are new Path(profile.getJobFile()).getParent() and new Path(conf.getSystemDir(), jobId) supposed to be different?

        • FileUtil.fullyDelete <job-output-dir>/_temporary as user_account
          at org.apache.hadoop.mapred.JobInProgress.garbageCollect(JobInProgress.java:1650)
          at org.apache.hadoop.mapred.JobInProgress.isJobComplete(JobInProgress.java:1396)
          at org.apache.hadoop.mapred.JobInProgress.completedTask(JobInProgress.java:1357)
          at org.apache.hadoop.mapred.JobInProgress.updateTaskStatus(JobInProgress.java:565)
          at org.apache.hadoop.mapred.JobTracker$TaskCommitQueue.run(JobTracker.java:2270)

        Question: What is the usage of FileUtil.fullyDelete? Is it the same as FileSystem.delete(path, recursive=true)?

        Tsz Wo Nicholas Sze added a comment -

        > What is the usage of FileUtil.fullyDelete? Is it the same as FileSystem.delete(path, recursive=true)?
        Created HADOOP-3202 for deprecating FileUtil.fullyDelete(FileSystem fs, Path dir).
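
        For reference, a minimal sketch of the equivalence in question; the helper method is hypothetical.

          import java.io.IOException;

          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.FileUtil;
          import org.apache.hadoop.fs.Path;

          class DeleteSketch {
            // Remove dir and everything under it; the call proposed for deprecation
            // and the recursive FileSystem.delete are expected to behave the same way.
            static void cleanUp(FileSystem fs, Path dir) throws IOException {
              FileUtil.fullyDelete(fs, dir);   // method proposed for deprecation in HADOOP-3202
              // equivalent call:
              // fs.delete(dir, true);
            }
          }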

        Tsz Wo Nicholas Sze added a comment -

        > Why does it first get all the FileStatus entries and then do a linear search for the paths containing "job.xml" and "job.jar"? It seems to me that the paths can be constructed directly as <job-dir>/job.xml and <job-dir>/job.jar
        See also HADOOP-3203.

        Tsz Wo Nicholas Sze added a comment -

        Where should <job-history-dir> be? Found some code in org.apache.hadoop.mapred.JobHistory.JobInfo:

            public static void logSubmitted(String jobId, JobConf jobConf,
                                            String jobConfPath, long submitTime) {
              ...
              // find user log directory
              Path outputPath = FileOutputFormat.getOutputPath(jobConf);
              userLogDir = jobConf.get("hadoop.job.history.user.location",
                  outputPath == null ? null : outputPath.toString());
              if ("none".equals(userLogDir)) {
                userLogDir = null;
              }
              if (userLogDir != null) {
                userLogDir = userLogDir + "/_logs/history";
              }
              ...
            }
        

        What is the usage of the jobconf property "hadoop.job.history.user.location"?
        Cannot find any other code using it or any description about it.

        Tsz Wo Nicholas Sze added a comment -

        3182_20080408.patch: use 777 for <job-dir>.

        Also tried randomwriter; the behavior is similar.

        Tsz Wo Nicholas Sze added a comment -

        3182_20080408_0.16.patch: for 0.16

        Tsz Wo Nicholas Sze added a comment -

        3182_20080408.patch works for both 0.17 and 0.18

        Amareshwari Sriramadasu added a comment -

        What is the usage of the jobconf property "hadoop.job.history.user.location"? Cannot find any other code using it or any description about it.

        You can find the description in hadoop-default.xml.

        <property>
          <name>hadoop.job.history.user.location</name>
          <value></value>
          <description> User can specify a location to store the history files of 
          a particular job. If nothing is specified, the logs are stored in 
          output directory. The files are stored in "_logs/history/" in the directory.
          User can stop logging by giving the value "none". 
          </description>
        </property>
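
        Based on that description, a small hypothetical usage sketch (the directory path below is made up):

          import org.apache.hadoop.mapred.JobConf;

          class HistoryLocationSketch {
            static JobConf configure() {
              JobConf jobConf = new JobConf();
              // Store this job's history files under a user-chosen directory;
              // they end up under <location>/_logs/history/.
              jobConf.set("hadoop.job.history.user.location", "/user/someuser/job-history");
              // Or disable user-side history logging entirely:
              // jobConf.set("hadoop.job.history.user.location", "none");
              return jobConf;
            }
          }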
        
        Amareshwari Sriramadasu added a comment - edited

        job-output-dir, job-history-dir and job-temp-dir can have permissions of 755.

        Are new Path(profile.getJobFile()).getParent() and new Path(conf.getSystemDir(), jobId) supposed to be different?

        Yes. It looks like they are the same directories. One of the deletes can be removed.
        To be specific, fs.delete(new Path(profile.getJobFile()).getParent(), true); can be removed.

        What is the usage of FileUtil.fullyDelete? Is it the same as FileSystem.delete(path, recursive=true)?

        Yes they are the same.

        job-dir/job.xml (aka jobconf, create-setPerm by JobClient; should it be visible on the JobTracker web page? It is visible now.)

        Yes. It should be visible.

        Amareshwari Sriramadasu added a comment -

        In Task.moveTaskOutputs(...), is finalOutputPath a constant?

        No, it is calculated from the parameters to moveTaskOutputs(), which is called recursively.

        Miscellaneous dirs: currently using the default permission, i.e. the umask. What is the correct permission for them?

        The miscellaneous directories (submitJobDir/files, submitJobDir/archives, submitJobDir/libjars) should have the same permissions as submitJobDir. The patch should address this as well.
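
        A minimal sketch of what that could look like follows; the helper and the hard-coded 777 value are assumptions for illustration, not the contents of the actual patch.

          import java.io.IOException;

          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.fs.permission.FsPermission;

          class MiscDirsSketch {
            // Create files/, archives/ and libjars/ under the submission directory
            // with the same permission as submitJobDir itself.
            static void createMiscDirs(FileSystem fs, Path submitJobDir) throws IOException {
              FsPermission perm = new FsPermission((short) 0777);
              for (String name : new String[] { "files", "archives", "libjars" }) {
                Path dir = new Path(submitJobDir, name);
                fs.mkdirs(dir);
                fs.setPermission(dir, perm);
              }
            }
          }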

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12379695/3182_20080408_0.16.patch
        against trunk revision 645773.

        @author +1. The patch does not contain any @author tags.

        tests included +1. The patch appears to include 3 new or modified tests.

        patch -1. The patch command could not apply the patch.

        Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2188/console

        This message is automatically generated.

        Tsz Wo Nicholas Sze added a comment -

        Forgot to do the trick for submitting patches for two branches. Uploading the same file again.

        Tsz Wo Nicholas Sze added a comment -

        Thank you for your answers, Amareshwari. Let's work on the miscellaneous dirs in another issue.

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12379755/3182_20080408.patch
        against trunk revision 645773.

        @author +1. The patch does not contain any @author tags.

        tests included +1. The patch appears to include 3 new or modified tests.

        javadoc +1. The javadoc tool did not generate any warning messages.

        javac +1. The applied patch does not generate any new javac compiler warnings.

        release audit +1. The applied patch does not generate any new release audit warnings.

        findbugs +1. The patch does not introduce any new Findbugs warnings.

        core tests +1. The patch passed core unit tests.

        contrib tests +1. The patch passed contrib unit tests.

        Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2190/testReport/
        Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2190/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2190/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2190/console

        This message is automatically generated.

        Owen O'Malley added a comment -

        +1 for the 777 on the directory for now.

        In the medium term we need to stop the job client from doing anything to the system dir. The client should just submit via RPC, and the JobTracker should be the one writing it to HDFS.

        Amareshwari Sriramadasu added a comment -

        I think the same problem will still be there if users use the -files, -libjars and -archives options.
        Cancelling the patch to make the miscellaneous directories have the same permissions as submitJobDir.

        Amareshwari Sriramadasu added a comment -

        Here is a patch doing the permission changes for the miscellaneous directories.

        Arun C Murthy added a comment -

        Nicholas, have you tested with -files/-archives and made sure they work? Thanks!

        Arun C Murthy added a comment -

        Amareshwari's patch is almost there... we do need to change permissions of the directories inside jobSubmissionDir to be safe.

        The only nit is that we need to move SYSTEM_DIR_PERMISSION to JobTracker since it's only used there...

        Arun C Murthy added a comment -

        Updated patch ...

        Tsz Wo Nicholas Sze added a comment -

        HADOOP-3182_2_20080410_0.16.patch: for 0.16

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12379870/HADOOP-3182_2_20080410.patch
        against trunk revision 645773.

        @author +1. The patch does not contain any @author tags.

        tests included +1. The patch appears to include 3 new or modified tests.

        javadoc +1. The javadoc tool did not generate any warning messages.

        javac +1. The applied patch does not generate any new javac compiler warnings.

        release audit +1. The applied patch does not generate any new release audit warnings.

        findbugs +1. The patch does not introduce any new Findbugs warnings.

        core tests +1. The patch passed core unit tests.

        contrib tests +1. The patch passed contrib unit tests.

        Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2199/testReport/
        Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2199/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2199/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2199/console

        This message is automatically generated.

        Arun C Murthy added a comment -

        I just committed this to trunk, branch-0.17 & branch-0.16. Thanks, Nicholas & Amareshwari; also Mahadev for helping to validate this!

        Hudson added a comment -

        Integrated in Hadoop-trunk #457 (See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/457/ )

          People

          • Assignee: Tsz Wo Nicholas Sze
          • Reporter: Lohit Vijayarenu
          • Votes: 0
          • Watchers: 3
