Description
With Kerberos enabled, any job that takes S3 files as input or output fails.
It can be reproduced easily with the wordcount example shipped in hadoop-examples.jar and a public S3 file:
/opt/hadoop/bin/hadoop --config /opt/hadoop/conf/ jar /opt/hadoop/hadoop-examples-1.0.0.jar wordcount s3n://ubikodpublic/test out01
This fails during job submission with:
12/08/10 12:40:19 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 192 for hadoop on 10.85.151.233:9000
12/08/10 12:40:19 INFO security.TokenCache: Got dt for hdfs://aws04.machine.com:9000/mapred/staging/hadoop/.staging/job_201208101229_0004;uri=10.85.151.233:9000;t.service=10.85.151.233:9000
12/08/10 12:40:19 INFO mapred.JobClient: Cleaning up the staging area hdfs://aws04.machine.com:9000/mapred/staging/hadoop/.staging/job_201208101229_0004
java.lang.IllegalArgumentException: java.net.UnknownHostException: ubikodpublic
        at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:293)
        at org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:317)
        at org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:189)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:92)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:79)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:197)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:252)
<SNIP>
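Per the stack trace, the job fails before it even runs: with security enabled, TokenCache walks every input/output filesystem to collect delegation tokens, and FileSystem.getCanonicalServiceName() for an s3n:// URI hands the bucket name to SecurityUtil, which tries to resolve it as a host:port token service. A minimal sketch of that call path follows, assuming a Hadoop 1.0.0 client, the same cluster configuration as in the report (including any S3 credentials), and reusing the bucket from the command above; the class name is illustrative, not part of Hadoop.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative repro class: triggers the same token-service lookup that
// fails in the trace above, without submitting a full MapReduce job.
public class S3TokenServiceRepro {
    public static void main(String[] args) throws Exception {
        // Picks up the (secure) cluster configuration from the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = new Path("s3n://ubikodpublic/test").getFileSystem(conf);
        // TokenCache.obtainTokensForNamenodesInternal() reaches this call for
        // every filesystem used by the job. For an s3n:// URI the authority is
        // the bucket name, which SecurityUtil.buildTokenService() attempts to
        // resolve as a host, yielding
        // java.lang.IllegalArgumentException: java.net.UnknownHostException: ubikodpublic
        System.out.println(fs.getCanonicalServiceName());
    }
}

This suggests the bug is in token collection treating every job filesystem as if it could issue delegation tokens, rather than anything specific to wordcount.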