Hadoop Common / HADOOP-9253

Capture ulimit info in the logs at service start time

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.1.1, 2.0.2-alpha
    • Fix Version/s: 1.2.0, 0.23.7, 2.1.0-beta
    • Component/s: None
    • Labels: None

      Description

      The output of "ulimit -a" is helpful while debugging issues on the system.
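The idea can be sketched in a few lines of shell; this is only an illustration, with a made-up log path standing in for the daemon's real .out file:

```shell
#!/bin/sh
# Sketch only: $log stands in for the daemon's .out file (path is hypothetical).
log=/tmp/demo-start.out

: > "$log"                  # start scripts truncate the out file on each start
echo "ulimit -a" >> "$log"  # label the block so it is easy to find later
ulimit -a >> "$log" 2>&1    # record the effective limits for later debugging

head "$log"                 # echo the first lines back to the console
```

With the limits recorded in the .out file, they can be checked after the fact when a daemon hits "too many open files" or similar errors.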

      1. HADOOP-9253.branch-1.patch
        0.5 kB
        Arpit Gupta
      2. HADOOP-9253.patch
        1 kB
        Arpit Gupta
      3. HADOOP-9253.branch-1.patch
        1 kB
        Arpit Gupta
      4. HADOOP-9253.branch-1.patch
        1 kB
        Arpit Gupta
      5. HADOOP-9253.patch
        2 kB
        Arpit Gupta


          Activity

          Matt Foley added a comment -

          Closed upon release of Hadoop 1.2.0.

          Suresh Srinivas added a comment (edited) -

          Marked it as resolved.

          Thomas Graves added a comment -

          Can this jira be moved back to resolved then? Looks like work was done in HADOOP-9379.

          Junping Du added a comment -

          Yes, it is much cleaner. Thanks, all, for the quick response.

          Suresh Srinivas added a comment -

          Junping, can you use the patch from HADOOP-9379 and see if it addresses your issue?

          Arpit Gupta added a comment -

          I logged HADOOP-9379 and uploaded a patch which captures the ulimit info after the head statement so console output is cleaner.

          Junping Du added a comment -

          Hi guys, I tried to start a cluster on branch-1 today but found a lot of info from "uname -a" printed to the console; I wish to get cleaner output, as before. Am I missing something?

          Arpit Gupta added a comment -

          Right, if the head cmd was run before the ulimit info was captured, then it would only be in the log and not in the terminal.

          Alejandro Abdelnur added a comment -

          Arpit, the info showing up in the logs is fine, showing up in the terminal is not.

          Arpit Gupta added a comment -

          @Alejandro

          Another thing we could do is capture the ulimit info after the head cmd. That way users still get to see the info. Let me know and I can generate a new patch.
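The proposed reordering (which, per the thread, is what HADOOP-9379 ended up doing) can be sketched as follows; the log path here is a made-up stand-in for the daemon's real .out file:

```shell
#!/bin/sh
# Sketch only (hypothetical path): print the head of the log first, then
# append the ulimit info, so it reaches the .out file but not the terminal.
log=/tmp/demo-clean.out
echo "daemon starting" > "$log"

head "$log"                                      # console: only the daemon's own output
echo "ulimit -a for user $(id -un)" >> "$log"    # appended after the head call ...
ulimit -a >> "$log" 2>&1                         # ... so it lands in the log only
```

The limits are still captured for later debugging, but the console output at start time stays clean.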

          Arpit Gupta added a comment -

          @Alejandro

          I added head -30 because, as Andy suggested, with the new information captured in the logs we might miss some info in case of errors. Granted, the user can still open the .out file and look at it, but I felt this would somewhat preserve the behavior we had before.

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-0.23-Build #520 (See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/520/)
          HADOOP-9253. Capture ulimit info in the logs at service start time. (Arpit Gupta via tgraves) (Revision 1444082)

          Result = SUCCESS
          tgraves : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1444082
          Files :

          • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
          • /hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemon.sh
          Alejandro Abdelnur added a comment -

          Now, when starting the cluster you get on the console

          ulimit -a for user tucu
          core file size          (blocks, -c) 0
          data seg size           (kbytes, -d) unlimited
          file size               (blocks, -f) unlimited
          max locked memory       (kbytes, -l) unlimited
          max memory size         (kbytes, -m) unlimited
          open files                      (-n) 256
          pipe size            (512 bytes, -p) 1
          stack size              (kbytes, -s) 8192
          cpu time               (seconds, -t) unlimited
          max user processes              (-u) 709
          virtual memory          (kbytes, -v) unlimited
          

          The OUT file is created on every start and that is the only thing you get, on every service.

          I think we should remove the head -30.

          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #1338 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1338/)
          HADOOP-9253. Capture ulimit info in the logs at service start time. Contributed by Arpit Gupta. (Revision 1443517)

          Result = FAILURE
          suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443517
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
          • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemon.sh
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #1310 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1310/)
          HADOOP-9253. Capture ulimit info in the logs at service start time. Contributed by Arpit Gupta. (Revision 1443517)

          Result = FAILURE
          suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443517
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
          • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemon.sh
          Hudson added a comment -

          Integrated in Hadoop-Yarn-trunk #121 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/121/)
          HADOOP-9253. Capture ulimit info in the logs at service start time. Contributed by Arpit Gupta. (Revision 1443517)

          Result = SUCCESS
          suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443517
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
          • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemon.sh
          Hudson added a comment -

          Integrated in Hadoop-trunk-Commit #3339 (See https://builds.apache.org/job/Hadoop-trunk-Commit/3339/)
          HADOOP-9253. Capture ulimit info in the logs at service start time. Contributed by Arpit Gupta. (Revision 1443517)

          Result = SUCCESS
          suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443517
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
          • /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemon.sh
          Suresh Srinivas added a comment -

          Committed the patch to trunk, branch-1 and branch-2.

          Thank you Arpit.

          Suresh Srinivas added a comment -

          Folks, any further comments?

          I am +1 for this patch. I will commit this by next Monday/Tuesday, if there are no further comments.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12567274/HADOOP-9253.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2120//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2120//console

          This message is automatically generated.

          Arpit Gupta added a comment -

          updated trunk patch

          Arpit Gupta added a comment -

          updated branch-1 patch

          Arpit Gupta added a comment -

          @Andy

          Sounds good. I will change the head to print 30 lines; since I have added about 17 lines, in case of an error we will still see at least 10+ lines' worth of error log.

          I will post an update to the trunk and the branch-1 patch.

          Andy Isaacson added a comment -

          head "$log" is something that existed before and hence I left it as is.

          Previously it made sense since $log was probably only a few lines long. Now that your code is changing $log to be guaranteed to be more than 10 lines long, please adjust the head command as appropriate.

          The reason for using head here is, there may be a few lines of output in the log that would be helpful for debugging. But it's also possible that the log has thousands of lines of errors which would not be helpful. With head you get the first few errors and avoid potentially dumping MBs of errors to the terminal. Please preserve that behavior. Since you're adding 17 lines of output, perhaps add 17 lines to the number that head will print.
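Andy's sizing argument can be seen with a toy log (the file name is made up): a few error lines plus the appended ulimit block fit inside head -n 30, whereas a plain head (10 lines) would truncate the limits:

```shell
#!/bin/sh
# Toy demonstration of why the head count must grow with the appended output.
log=/tmp/demo-head.out
printf 'error: %s\n' 1 2 3 > "$log"  # a few startup errors, as nohup might leave
ulimit -a >> "$log" 2>&1             # roughly another screenful of limit lines

head "$log" | wc -l                  # default head: first 10 lines only
head -n 30 "$log" | wc -l            # 30 covers the errors plus the full block
```

Either way, head still bounds the console output, so a log with thousands of error lines can never flood the terminal.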

          Arpit Gupta added a comment -

          it's unclear why to write ulimit to $log at all

          This is being added so we can debug issues related to the limits set for the user. Thus we capture it in the log so the user can refer to it at a later time.

          2. If writing ulimit to $log, why use head to truncate the output

          head "$log"
          

          is something that existed before, and hence I left it as is. I can certainly change it to -20, but as you mention, if there are errors in the nohup command they will be logged to this file as well, so printing 20 lines might not help in that case.

          Andy Isaacson added a comment -

          I am not quite sure I understand what you are referring to. The log file that is being printed to the console should never have any leftover contents, as the start command overwrites it.

          Your patch has:

          +++ hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
          @@ -154,7 +154,11 @@ case $startStop in
                 ;;
               esac
               echo $! > $pid
          -    sleep 1; head "$log"
          +    sleep 1
          +    # capture the ulimit output
          +    echo "ulimit -a" >> $log
          +    ulimit -a >> $log 2>&1
          +    head "$log"
          

          The file $log might be empty, or it might have some content from the 'nohup' command line a few lines up. Regardless, your patch then adds two commands (echo, then ulimit) that >> append to $log. Together those will append 17 lines of output to $log.

          Then you use head to print out the first 10 lines of $log. These 10 lines might include some errors or warning messages from nohup, and then a few lines of the 17 that were printed by ulimit.

          So I have two feedback items: 1. it's unclear why to write ulimit to $log at all. Why not just write ulimit output directly to console? 2. If writing ulimit to $log, why use head to truncate the output? At least change the head command to print the entire expected output, head -20 or similar.

          Arpit Gupta added a comment -

          @Harsh
          I have updated the patch to handle a secure datanode startup. I tested on secure and unsecured clusters and the appropriate info was captured. Let me know if the approach looks good and I will provide a similar patch for trunk.

          @Andy
          I am not quite sure I understand what you are referring to. The log file that is being printed to the console should never have any leftover contents, as the start command overwrites it.

          nohup nice -n $HADOOP_NICENESS "$HADOOP_PREFIX"/bin/hadoop --config $HADOOP_CONF_DIR $command "$@" > "$log" 2>&1 < /dev/null &
          

          But if you think the problem still exists, we can open another JIRA for it.

          Andy Isaacson added a comment -

          It's pretty odd to append to $log using >> and then print only the beginning of $log using head. This results in the output duplicating the previous stanza's leftover contents of $log.

          Arpit Gupta added a comment -

          Does this also work in context of a secure DN startup? Does the logged ulimit reflect the actual JVM's instead of the wrapper's?

          Good point, let me test this out and see what it will log.

          Harsh J added a comment -

          PAM is what applies the limits (pam_limits.so), and PAM is configurable in how it applies to the various scenarios it needs to cover. See http://linux.die.net/man/8/pam_limits, which also states that things may be changed to suit different needs in an environment (i.e., "For the services you need resources limits (login for example) put a the following line in /etc/pam.d/login as the last line for that service (usually after the pam_unix session line)").

          We had been battling limit-induced issues at customers for almost the whole of 2011, since at that time the config had to be manual, but Apache Bigtop and other tools have solved it for us now by placing limits files during installation.

          This is still good to go, though; I just want to make sure we aren't writing out wrong values in any special case (such as a secure DN startup, particularly).

          Chris Nauroth added a comment -

          the former is more truthful in the face of PAM config oddities.

          Harsh, can you give more details on this, or feel free to link to an external page that explains it? I'm not familiar with how PAM can interfere with the ulimit values seen by the launching user. Thanks!

          Harsh J added a comment -

          Also to note, it's better to rely on the proc-fs limits file than the ulimit command run by the launching user: the former is more truthful in the face of PAM config oddities.

          Harsh J added a comment -

          Worth adding, although from my experience we probably just want -u and -n. Isn't it generally easier to just check /proc/PID/limits, though?

          Does this also work in context of a secure DN startup? Does the logged ulimit reflect the actual JVM's instead of the wrapper's?
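Harsh's proc-fs alternative can be sketched as below. This is a Linux-only illustration: /proc/PID/limits reports the effective limits of the running process itself (e.g. the actual JVM) rather than the wrapper shell's, and the grep patterns assume the usual limits-file labels:

```shell
#!/bin/sh
# Linux-only sketch: inspect a live process's effective limits via proc-fs.
# $$ (this shell) stands in for the daemon's real PID.
pid=$$
# The two limits most often implicated per the comment (-n and -u equivalents):
grep -E 'Max open files|Max processes' "/proc/$pid/limits"
```

Because the file belongs to the target process, it stays correct even when the daemon is launched through a privileged wrapper, as in a secure DN startup.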

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12566634/HADOOP-9253.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2096//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2096//console

          This message is automatically generated.

          Arpit Gupta added a comment -

          The following will be captured in the .out file:

          ulimit -a
          core file size          (blocks, -c) 0
          data seg size           (kbytes, -d) unlimited
          file size               (blocks, -f) unlimited
          max locked memory       (kbytes, -l) unlimited
          max memory size         (kbytes, -m) unlimited
          open files                      (-n) 1000000
          pipe size            (512 bytes, -p) 1
          stack size              (kbytes, -s) 8192
          cpu time               (seconds, -t) unlimited
          max user processes              (-u) 709
          virtual memory          (kbytes, -v) unlimited
          

            People

            • Assignee: Arpit Gupta
            • Reporter: Arpit Gupta
            • Votes: 0
            • Watchers: 11
