Details
-------
Type: Bug
Status: Resolved
Priority: Trivial
Resolution: Not A Problem
My laptop:
---------
Ubuntu 11.10 Oneiric Ocelot
Hadoop-0.20.2-cdh3u2
Apache Whirr 0.5.0-cdh3u2
=========================================
whirr config:
-------------
32-bit Ubuntu 10.04 LTS instance
whirr.hardware-id=m1.small
whirr.image-id=us-east-1/ami-6936fb00
whirr.location-id=us-east-1
whirr.hadoop.install-function=install_cdh_hadoop
whirr.hadoop.configure-function=configure_cdh_hadoop
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,2 hadoop-datanode+hadoop-tasktracker
whirr.provider=aws-ec2
whirr.identity=${env:AWS_ACCESS_KEY_ID}
whirr.credential=${env:AWS_SECRET_ACCESS_KEY}
whirr.private-key-file=${sys:user.home}/.ssh/id_rsa_whirr
whirr.public-key-file=${sys:user.home}/.ssh/id_rsa_whirr.pub
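For reference, a cluster defined by a properties file like the one above is brought up and torn down with Whirr's CLI. A minimal sketch (the file name hadoop.properties is an assumption, not taken from this report, and the commands need live AWS credentials, so they are illustration only):

```shell
# Launch the cluster described in hadoop.properties (assumed file name).
# Requires AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY in the environment,
# per the whirr.identity / whirr.credential lines above.
whirr launch-cluster --config hadoop.properties

# Tear the cluster down again when finished.
whirr destroy-cluster --config hadoop.properties
```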
Description
-----------
JAVA_HOME is not set on the master node in EC2 after launching the cluster. After ssh-ing into the master node as root, I had to edit the .bashrc file and source it. See below for sample output:
sri@PeriyaData:~$ ssh -i ~/.ssh/id_rsa_whirr jtv@ec2-174-129-76-176.compute-1.amazonaws.com
The authenticity of host 'ec2-174-129-76-176.compute-1.amazonaws.com (174.129.76.176)' can't be established.
RSA key fingerprint is bd:ba:56:2b:a1:2f:8e:c8:d1:5c:94:23:f7:1a:d2:c0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-174-129-76-176.compute-1.amazonaws.com,174.129.76.176' (RSA) to the list of known hosts.
Linux ip-10-72-231-30 2.6.32-318-ec2 #38-Ubuntu SMP Thu Sep 1 17:54:33 UTC 2011 i686 GNU/Linux
Ubuntu 10.04.3 LTS
Welcome to Ubuntu!
- Documentation: https://help.ubuntu.com/
System information as of Wed Dec 7 21:18:45 UTC 2011
System load: 0.01 Processes: 66
Usage of /: 9.8% of 9.84GB Users logged in: 0
Memory usage: 14% IP address for eth0: 10.72.231.30
Swap usage: 0%
Graph this data and manage this system at https://landscape.canonical.com/
---------------------------------------------------------------------
At the moment, only the core of the system is installed. To tune the
system to your needs, you can choose to install one or more
predefined collections of software by running the following
command:
sudo tasksel --section server
---------------------------------------------------------------------
Get cloud support with Ubuntu Advantage Cloud Guest
http://www.ubuntu.com/business/services/cloud
Last login: Wed Dec 7 21:15:08 2011 from 108-90-42-72.lightspeed.sntcca.sbcglobal.net
jtv@ip-10-72-231-30:~$ java -version
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) Client VM (build 20.1-b02, mixed mode, sharing)
jtv@ip-10-72-231-30:~$ sudo su
root@ip-10-72-231-30:/home/users/jtv#
root@ip-10-72-231-30:/home/users/jtv#
root@ip-10-72-231-30:/home/users/jtv# hadoop
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
namenode -format format the DFS filesystem
secondarynamenode run the DFS secondary namenode
namenode run the DFS namenode
datanode run a DFS datanode
dfsadmin run a DFS admin client
mradmin run a Map-Reduce admin client
fsck run a DFS filesystem checking utility
fs run a generic filesystem user client
balancer run a cluster balancing utility
jobtracker run the MapReduce job Tracker node
pipes run a Pipes job
tasktracker run a MapReduce task Tracker node
job manipulate MapReduce jobs
queue get information regarding JobQueues
version print the version
jar <jar> run a jar file
distcp <srcurl> <desturl> copy file or directories recursively
archive -archiveName NAME <src>* <dest> create a hadoop archive
daemonlog get/set the log level for each daemon
or
CLASSNAME run the class named CLASSNAME
Most commands print help when invoked w/o parameters.
root@ip-10-72-231-30:/home/users/jtv# hadoop version
Error: JAVA_HOME is not set.
root@ip-10-72-231-30:/home/users/jtv#
*****LAST FEW LINES OF BASHRC FILE ***********
# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
#if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
#  . /etc/bash_completion
#fi
export HADOOP_HOME=/usr/local/hadoop-0.20.2
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
------------------------------------------------------------------------
root@ip-10-72-231-30:/home/users/jtv# cd /usr/lib/jvm
root@ip-10-72-231-30:/usr/lib/jvm# ls -l
total 4
lrwxrwxrwx 1 root root 19 2011-12-07 21:10 java-6-sun -> java-6-sun-1.6.0.26
drwxr-xr-x 8 root root 4096 2011-12-07 21:10 java-6-sun-1.6.0.26
root@ip-10-72-231-30:/usr/lib/jvm#
FIX: I added a line "export JAVA_HOME=/usr/lib/jvm/java-6-sun" in .bashrc and sourced it.
root@ip-10-72-231-30:/usr/lib/jvm# source ~/.bashrc
root@ip-10-72-231-30:/usr/lib/jvm# hadoop version
Hadoop 0.20.2
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707
Compiled by chrisdo on Fri Feb 19 08:07:34 UTC 2010
root@ip-10-72-231-30:/usr/lib/jvm#
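Note that exporting JAVA_HOME from root's .bashrc only fixes interactive shells for that one user; Hadoop's own launcher scripts read it from hadoop-env.sh (on CDH3, /etc/hadoop/conf/hadoop-env.sh), so setting it there is the more durable workaround. A minimal sketch of that idea, using a mktemp stand-in for hadoop-env.sh so the commands are self-contained (on a real node, append to the actual file instead):

```shell
# Stand-in for /etc/hadoop/conf/hadoop-env.sh so this sketch runs anywhere;
# replace with the real path on an actual cluster node.
HADOOP_ENV=$(mktemp)

# Export JAVA_HOME, matching the java-6-sun symlink listed in /usr/lib/jvm above.
echo 'export JAVA_HOME=/usr/lib/jvm/java-6-sun' >> "$HADOOP_ENV"

# Hadoop's launcher scripts source hadoop-env.sh; simulate that and verify.
. "$HADOOP_ENV"
echo "JAVA_HOME=$JAVA_HOME"
```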