Hadoop HDFS
HDFS-2014

bin/hdfs no longer works from a source checkout

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 0.23.0
    • Fix Version/s: 0.23.0
    • Component/s: scripts
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      bin/hdfs now appears to depend on ../libexec, which doesn't exist inside of a source checkout:

      todd@todd-w510:~/git/hadoop-hdfs$ ./bin/hdfs namenode
      ./bin/hdfs: line 22: /home/todd/git/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or directory
      ./bin/hdfs: line 138: cygpath: command not found
      ./bin/hdfs: line 161: exec: : not found

      Attachments

      1. HDFS-2014.patch
        5 kB
        Eric Yang
      2. HDFS-2014-1.patch
        6 kB
        Eric Yang
      3. HDFS-2014-2.patch
        6 kB
        Eric Yang


          Activity

          philo vivero added a comment -

          This always evaluates to true and always executes the code inside the "if". Why even have the if/fi? Just clobber the CLASSPATH and be done with it.

          if $cygwin; then
            CLASSPATH=`cygpath -p -w "$CLASSPATH"`
          fi

          Owen O'Malley added a comment -

          Sorry about that. On the other hand, having different layouts for the different deployment vehicles is pretty broken too.

          Aaron T. Myers added a comment -

          Owen told me verbally in his cube. Add Owen for comment.

          I've filed HDFS-2045 to discuss this issue. Hopefully Owen can comment there. In the absence of a compelling reason to keep it as-is, I'm inclined to change it back to the previous behavior.

          Todd Lipcon added a comment -

          It's unfortunate that all of these behavioral changes for non-RPM users were put in under the umbrella of "RPM support" tickets.

          Packaging should be external and not affect people who prefer to install via tarballs or other means.

          Eric Yang added a comment -

          Owen told me verbally in his cube. Add Owen for comment.

          Aaron T. Myers added a comment -

          @Eric: ah, sorry, didn't realize that was consciously decided elsewhere. Where did Owen say that?

          Eric Yang added a comment -

          Aaron, HADOOP_COMMON_HOME and HADOOP_HDFS_HOME are the parameters that Owen proposed to remove for production deployment, to prevent admin/user errors in setting those environment variables. Hence, the build directory artifact is not supposed to interact with the HADOOP_COMMON_HOME environment variable. This change requires admins to use ant bin-package and untar the artifacts into a common directory to run.

          I will change the JIRA to your recommendation.

          Aaron T. Myers added a comment -

          Eric, I totally agree. The observation I'm making is that the `bin-package' targets don't seem to work as-is.

          It used to be that you could do the following:

          1. Run `ant bin-package' in your hadoop-common checkout.
          2. Set HADOOP_COMMON_HOME to the built directory of hadoop-common.
          3. Run `ant bin-package' in your hadoop-hdfs checkout.
          4. Set HADOOP_HDFS_HOME to the built directory of hadoop-hdfs.
          5. Set PATH to have HADOOP_HDFS_HOME/bin and HADOOP_COMMON_HOME/bin on it.
          6. Run `hdfs'.

          This no longer works since hdfs-config.sh is looking in HADOOP_COMMON_HOME/bin/ for hadoop-config.sh, but it's being placed in HADOOP_COMMON_HOME/libexec.

          I realize this is somewhat different from the original observation in this JIRA, but the two are highly related (hdfs-config.sh looking in the wrong place for hadoop-config.sh). Feel free to say this should be filed as a new JIRA.

          As an aside, the title of this JIRA should probably be changed to something like "bin/hdfs no longer works from a source checkout."
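
          (For reference, the workflow listed above as a shell session; the checkout and built-directory paths are illustrative, not taken from an actual build.)

          # run `ant bin-package' in each checkout, then point the *_HOME variables
          # at the built directories and put both bin directories on the PATH
          (cd ~/git/hadoop-common && ant bin-package)
          export HADOOP_COMMON_HOME=~/git/hadoop-common/build/hadoop-common-0.23.0-SNAPSHOT
          (cd ~/git/hadoop-hdfs && ant bin-package)
          export HADOOP_HDFS_HOME=~/git/hadoop-hdfs/build/hadoop-hdfs-0.23.0-SNAPSHOT
          export PATH=$HADOOP_HDFS_HOME/bin:$HADOOP_COMMON_HOME/bin:$PATH
          hdfs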

          Eric Yang added a comment -

          Aaron, the developer setup is to set HADOOP_COMMON_HOME to the source code directory rather than the built artifacts directory; hdfs-config.sh will then pick up HADOOP_COMMON_HOME/bin/hadoop-config.sh instead of build/hadoop-common-x.y.z/libexec/hadoop-config.sh. Hope this helps.
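
          (For reference, the developer setup described above as shell commands; the checkout paths are illustrative.)

          # point HADOOP_COMMON_HOME at the common *source* checkout, not the built
          # artifacts, so hdfs-config.sh finds $HADOOP_COMMON_HOME/bin/hadoop-config.sh
          export HADOOP_COMMON_HOME=~/git/hadoop-common
          export HADOOP_HDFS_HOME=~/git/hadoop-hdfs
          export PATH=$HADOOP_HDFS_HOME/bin:$HADOOP_COMMON_HOME/bin:$PATH
          hdfs namenode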

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #686 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/686/)
          HDFS-2014. Change HDFS scripts to work in developer environment post RPM packaging changes. Contributed by Eric Yang.

          suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1130843
          Files :

          • /hadoop/hdfs/trunk/bin/stop-balancer.sh
          • /hadoop/hdfs/trunk/bin/stop-secure-dns.sh
          • /hadoop/hdfs/trunk/bin/start-balancer.sh
          • /hadoop/hdfs/trunk/bin/start-secure-dns.sh
          • /hadoop/hdfs/trunk/bin/distribute-exclude.sh
          • /hadoop/hdfs/trunk/bin/hdfs
          • /hadoop/hdfs/trunk/bin/hdfs-config.sh
          • /hadoop/hdfs/trunk/bin/start-dfs.sh
          • /hadoop/hdfs/trunk/bin/refresh-namenodes.sh
          • /hadoop/hdfs/trunk/bin/stop-dfs.sh
          • /hadoop/hdfs/trunk/CHANGES.txt
          Aaron T. Myers added a comment -

          Is there some corresponding Common patch which needs to go in as well? bin/hdfs still isn't working for me with a fresh checkout. Running "ant bin-package" in Common results in hadoop-config.sh being put in $HADOOP_COMMON_HOME/libexec, but hdfs-config.sh isn't looking for it in there, so I'm getting:

          $ hdfs
          Hadoop common not found.
          
          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12481293/HDFS-2014-2.patch
          against trunk revision 1130734.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 4 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/693//testReport/
          Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/693//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/693//console

          This message is automatically generated.

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #707 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/707/)
          HDFS-2014. Change HDFS scripts to work in developer environment post RPM packaging changes. Contributed by Eric Yang.

          suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1130843
          Files :

          • /hadoop/hdfs/trunk/bin/stop-balancer.sh
          • /hadoop/hdfs/trunk/bin/stop-secure-dns.sh
          • /hadoop/hdfs/trunk/bin/start-balancer.sh
          • /hadoop/hdfs/trunk/bin/start-secure-dns.sh
          • /hadoop/hdfs/trunk/bin/distribute-exclude.sh
          • /hadoop/hdfs/trunk/bin/hdfs
          • /hadoop/hdfs/trunk/bin/hdfs-config.sh
          • /hadoop/hdfs/trunk/bin/start-dfs.sh
          • /hadoop/hdfs/trunk/bin/refresh-namenodes.sh
          • /hadoop/hdfs/trunk/bin/stop-dfs.sh
          • /hadoop/hdfs/trunk/CHANGES.txt
          Suresh Srinivas added a comment -

          I have committed the change. Thank you Eric.

          Suresh Srinivas added a comment -

          +1 for the patch. I am going to commit it. If there are any further comments, they can be addressed in a new jira.

          Eric Yang added a comment -

          Changed the if statement to be more consistent in checking for the location of hadoop-config.sh.

          Suresh Srinivas added a comment -

          hdfs-config.sh - you may want to check for the existence of the file instead of the directory for HADOOP_COMMON_HOME and HADOOP_HOME.
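
          (For illustration, the kind of check being suggested -- test for the script file itself with -f rather than for its parent directory with -d; the paths are illustrative, not the actual patch.)

          if [ -f "${HADOOP_COMMON_HOME}/libexec/hadoop-config.sh" ]; then
            . "${HADOOP_COMMON_HOME}/libexec/hadoop-config.sh"
          elif [ -f "${HADOOP_HOME}/libexec/hadoop-config.sh" ]; then
            . "${HADOOP_HOME}/libexec/hadoop-config.sh"
          fi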

          Bharath Mundlapudi added a comment -

          I have tested this patch on a few cases like hdfs format and upgrade. This patch works. Without this patch users will run into issues on trunk. Can someone commit this patch if there are no further comments?

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12481182/HDFS-2014-1.patch
          against trunk revision 1130339.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 4 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
          org.apache.hadoop.hdfs.TestHDFSTrash

          +1 contrib tests. The patch passed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/682//testReport/
          Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/682//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/682//console

          This message is automatically generated.

          Eric Yang added a comment -

          Restore HADOOP_HDFS_HOME for the developer setup.

          Todd Lipcon added a comment -

          Actually, this still has an issue in that webapps are not located correctly.

          bin/hdfs is looking at $HADOOP_PREFIX/build/webapps, which is pointing to COMMON_HOME/build/webapps, rather than HDFS_HOME/build/webapps.
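
          (Illustrative only: the kind of change implied here -- look for the built webapps under HADOOP_HDFS_HOME rather than HADOOP_PREFIX before adding the parent directory to the classpath; this is not the actual patch.)

          if [ -d "${HADOOP_HDFS_HOME}/build/webapps" ]; then
            CLASSPATH=${CLASSPATH}:${HADOOP_HDFS_HOME}/build
          fi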

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12480972/HDFS-2014.patch
          against trunk revision 1128987.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/666//testReport/
          Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/666//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/666//console

          This message is automatically generated.

          Todd Lipcon added a comment -

          Looks good to me. Owen, want to take a look as well?

          Eric Yang added a comment -

          Restore HADOOP_COMMON_HOME and environment-based control. This was removed per Owen's request to make the system more secure by not depending on environment variables.

          This revision checks whether $HADOOP_PREFIX/libexec/hadoop-config.sh exists. If libexec/hadoop-config.sh exists, it is used; if it does not exist, the script falls back to the developer environment setup.
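
          (A minimal sketch of the check described above, assuming the same variable names; the committed patch may differ in detail.)

          if [ -e "${HADOOP_PREFIX}/libexec/hadoop-config.sh" ]; then
            # packaged layout: use the installed libexec/hadoop-config.sh
            . "${HADOOP_PREFIX}/libexec/hadoop-config.sh"
          else
            # developer environment: fall back to the common checkout's bin/hadoop-config.sh
            . "${HADOOP_COMMON_HOME}/bin/hadoop-config.sh"
          fi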

          Eli Collins added a comment -

          It's very useful for developers to be able to run the software without having to build/install a tarball. E.g. if I just want to make a one-line change to the code I can recompile, which is much faster than re-building/unpacking a tarball. For example, here's how my dev script uses the hdfs command today: https://github.com/elicollins/hadoop-dev/blob/master/bin/datanode.

          Allen Wittenauer added a comment -

          Why should a source checkout be expected to work like a binary distribution? It is fairly common that one does a build and an install in order for something to work. I think it is realistic to expect Hadoop to drift towards that as well.

          Eric Yang added a comment -

          How about this?

          if [ -d "$bin"/../libexec ]; then
            . $bin/../libexec/hadoop-config.sh
          elif [ -e "${HADOOP_HDFS_HOME}"/bin/hadoop-config.sh ]; then
            . "$HADOOP_HDFS_HOME"/bin/hadoop-config.sh
          else
            echo "Hadoop common not found."
            exit
          fi
          
          Todd Lipcon added a comment -

          It also expects to find libexec/hadoop-config.sh in the hdfs checkout, which isn't correct either.


            People

            • Assignee: Eric Yang
            • Reporter: Todd Lipcon
            • Votes: 0
            • Watchers: 9
