Details

Type: Bug
Status: Resolved
Priority: Major
Resolution: Not A Problem
Affects Version/s: 0.23.1, 2.0.0-alpha
Fix Version/s: None
Component/s: None

Environment:
namenode:1 (IP:10.18.40.154)
datanode:3 (IP:10.18.40.154,10.18.40.102,10.18.52.55)

HOST-10-18-40-154:/home/APril20/install/hadoop/namenode/bin # ./hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 129238446080 (120.36 GB)
Present Capacity: 51742765056 (48.19 GB)
DFS Remaining: 49548591104 (46.15 GB)
DFS Used: 2194173952 (2.04 GB)
DFS Used%: 4.24%
Under replicated blocks: 14831
Blocks with corrupt replicas: 1
Missing blocks: 100
-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)

Live datanodes:
Name: 10.18.40.102:50010 (10.18.40.102)
Hostname: linux.site
Decommission Status : Normal
Configured Capacity: 22765834240 (21.2 GB)
DFS Used: 634748928 (605.34 MB)
Non DFS Used: 1762299904 (1.64 GB)
DFS Remaining: 20368785408 (18.97 GB)
DFS Used%: 2.79%
DFS Remaining%: 89.47%
Last contact: Fri Apr 27 10:35:57 IST 2012

Name: 10.18.40.154:50010 (HOST-10-18-40-154)
Hostname: HOST-10-18-40-154
Decommission Status : Normal
Configured Capacity: 23259897856 (21.66 GB)
DFS Used: 812396544 (774.76 MB)
Non DFS Used: 8297279488 (7.73 GB)
DFS Remaining: 14150221824 (13.18 GB)
DFS Used%: 3.49%
DFS Remaining%: 60.84%
Last contact: Fri Apr 27 10:35:58 IST 2012

Name: 10.18.52.55:50010 (10.18.52.55)
Hostname: HOST-10-18-52-55
Decommission Status : Normal
Configured Capacity: 83212713984 (77.5 GB)
DFS Used: 747028480 (712.42 MB)
Non DFS Used: 67436101632 (62.8 GB)
DFS Remaining: 15029583872 (14 GB)
DFS Used%: 0.9%
DFS Remaining%: 18.06%
Last contact: Fri Apr 27 10:35:58 IST 2012
Description
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
java.io.IOException: File /user/root/lwr/test31.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1259)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1916)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:472)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:292)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42602)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:428)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:905)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1684)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1205)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1682)
i:4284
at org.apache.hadoop.ipc.Client.call(Client.java:1159)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:185)
at $Proxy9.addBlock(Unknown Source)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:84)
at $Proxy9.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:295)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1097)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:973)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)
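As an aside, the two log4j warnings at the top of this output only mean the client JVM found no log4j configuration on its classpath; they are unrelated to the write failure. A minimal log4j.properties on the client classpath (stock log4j 1.x settings, nothing specific to this issue) silences them and makes the client-side HDFS logs visible:

    log4j.rootLogger=INFO, console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c{2}: %m%n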
Test case:
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class Write1 {
    /**
     * @param args
     * @throws Exception
     */
    public static void main(String[] args) throws Exception {
        String hdfsFile = "/user/root/lwr/test31.txt";
        byte[] writeBuff = new byte[1024 * 1024];
        int i = 0;
        DistributedFileSystem dfs = new DistributedFileSystem();
        Configuration conf = new Configuration();
        // Force a tiny 512-byte block size, so every 512-byte write fills a
        // whole block and the loop below triggers ~100,000 block allocations.
        conf.setLong("dfs.blocksize", 512);
        // Other settings tried (left disabled):
        // conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 512);
        // conf.setLong(DFSConfigKeys.DFS_REPLICATION_KEY, 2);
        // conf.setInt("dfs.replication", 3);
        dfs.initialize(URI.create("hdfs://10.18.40.154:9000"), conf);
        // dfs.delete(new Path(hdfsFile));
        try {
            FSDataOutputStream out1 = dfs.create(new Path(hdfsFile));
            for (i = 0; i < 100000; i++) {
                out1.write(writeBuff, 0, 512);
            }
            out1.hsync();
            out1.close();
            /*
            // Alternative repro path via append:
            FSDataOutputStream out = dfs.append(new Path(hdfsFile), 4096);
            out.write(writeBuff, 0, 512 * 1024);
            out.hsync();
            out.close();
            */
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            // The counter shows how many 512-byte writes succeeded before the failure.
            System.out.println("i:" + i);
            System.out.println("end!");
        }
    }
}
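Given the Not A Problem resolution, a plausible reading is that the 512-byte dfs.blocksize is what makes this test pathological: the loop forces roughly 100,000 separate block allocations for under 50 MB of data on a 3-node cluster. Below is a hedged sketch (the class name Write1Sane is mine, the NameNode URI and file path are reused from the test above) of the same write with the stock 0.23/2.0 default block size of 64 MB, under which the entire file fits in a single block:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class Write1Sane {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Default block size in 0.23/2.0 is 64 MB; the 100,000 * 512 B
            // (~48.8 MB) written below now needs one block, not ~100,000.
            conf.setLong("dfs.blocksize", 64L * 1024 * 1024);
            DistributedFileSystem dfs = new DistributedFileSystem();
            dfs.initialize(URI.create("hdfs://10.18.40.154:9000"), conf);
            byte[] writeBuff = new byte[1024 * 1024];
            FSDataOutputStream out = dfs.create(new Path("/user/root/lwr/test31.txt"));
            for (int i = 0; i < 100000; i++) {
                out.write(writeBuff, 0, 512);
            }
            out.hsync();
            out.close();
            dfs.close();
        }
    }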