Hadoop HDFS / HDFS-4109

Formatting HDFS running into errors :( - Many thanks


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Invalid
    • Affects Version/s: 0.20.2
    • Fix Version/s: None
    • Component/s: hdfs-client
    • Environment:

      Windows 7
      Cygwin installed
      downloaded hadoop-0.20.2.tar (apparently works best with Win 7?)

    • Target Version/s:
    • Tags:
      help bigdata hadoop format file system

      Description

      Hi,

      I am trying to format the Hadoop file system with:

      bin/hadoop namenode -format

      But I received this error in Cygwin:

      /home/anjames/bin/../conf/hadoop-env.sh: line 8: $’\r’: command not found
      /home/anjames/bin/../conf/hadoop-env.sh: line 14: $’\r’: command not found
      /home/anjames/bin/../conf/hadoop-env.sh: line 17: $’\r’: command not found
      /home/anjames/bin/../conf/hadoop-env.sh: line 25: $’\r’: command not found
      /bin/java; No such file or directoryjre7
      /bin/java; No such file or directoryjre7
      /bin/java; cannot execute: No such file or directory
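[Editor's note, not part of the original report: the repeated `$'\r': command not found` messages are the classic sign that hadoop-env.sh was saved with Windows (CRLF) line endings, which Cygwin's bash does not strip. A minimal sketch of one common way to remove the carriage returns, demonstrated on a throwaway file (the real target would be conf/hadoop-env.sh):]

```shell
# Simulate a script saved with Windows (CRLF) line endings.
printf 'export FOO=bar\r\n' > /tmp/env-demo.sh

# Strip the trailing carriage return from every line
# (running dos2unix on the file would do the same job).
sed -i 's/\r$//' /tmp/env-demo.sh

# Confirm no carriage-return characters remain.
grep -q "$(printf '\r')" /tmp/env-demo.sh && echo "still CRLF" || echo "clean"
```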

I had previously modified the following conf files in the cygwin/home/anjames directory:
      1. core-site.xml
      2. mapred-site.xml
3. hdfs-site.xml
4. hadoop-env.sh

- I updated this file using the instructions: "uncomment the JAVA_HOME export command, and set the path to your Java home (typically C:/Program Files/Java/{java-home})"

i.e. in the "hadoop-env.sh" file, I removed the "#" in front of the JAVA_HOME line and changed the path as follows:

      export JAVA_HOME=C:\Progra~1\Java\jre7
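[Editor's note, an aside not from the original report: in a POSIX shell an unquoted backslash is an escape character, so a path written as `C:\Progra~1\Java\jre7` loses its backslashes when assigned without quotes. A minimal sketch of the difference:]

```shell
# Unquoted: the shell consumes each backslash as an escape character.
unquoted=C:\Progra~1\Java\jre7
printf '%s\n' "$unquoted"    # C:Progra~1Javajre7

# Single quotes preserve the backslashes...
quoted='C:\Progra~1\Java\jre7'
printf '%s\n' "$quoted"      # C:\Progra~1\Java\jre7

# ...and Cygwin also accepts forward slashes, which sidesteps the issue.
forward='C:/Progra~1/Java/jre7'
printf '%s\n' "$forward"     # C:/Progra~1/Java/jre7
```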

      The hadoop-env.sh file is now:

      ----------------------------------------------------------------

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.
export JAVA_HOME=C:\Progra~1\Java\jre7 ###<----- uncommented and revised line

# Extra Java CLASSPATH elements. Optional.
# export HADOOP_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000

# Extra Java runtime options. Empty by default.
# export HADOOP_OPTS=-server

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export JAVA_HOME=C:\Progra~1\Java\jre7
HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS

# Extra ssh options. Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"

# Where log files are stored. $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

# File naming remote slave hosts. $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves

# host:path where hadoop code should be rsync'd from. Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop

# Seconds to sleep between slave commands. Unset by default. This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1

# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/var/hadoop/pids

# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER

# The scheduling priority for daemon processes. See 'man nice'.
# export HADOOP_NICENESS=10

      ------------------------------

I'm trying to get back into the swing of programming with a Big Data Analytics course, so any help is much appreciated; it's been a while. Many thanks.

    Attachments

    Activity

    People

    • Assignee: Unassigned
    • Reporter: anjames ajames
    • Votes: 0
    • Watchers: 2

    Dates

    • Created:
    • Updated:
    • Resolved: