Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 0.23.0
- Fix Version/s: None
- Component/s: None
Description
Steps to reproduce:
- Format the DFS.
- Start it.
- Put a file without specifying a destination.
$ hadoop fs -ls
ls: `.': No such file or directory
$ hadoop fs -put /etc/passwd
$ hadoop fs -ls
Found 1 items
-rw-r--r--   1 kihwal supergroup       2076 2011-11-04 10:37 .._COPYING_
The namenode log:
ugi=kihwal ip=/127.0.0.1 cmd=create src=/user/kihwal/.._COPYING_ dst=null perm=kihwal:supergroup:rw-r--r--
BLOCK* NameSystem.allocateBlock: /user/kihwal/.._COPYING_. BP-221429388-10.74.90.166-1320420960536 blk_1038813851536531761_1001{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:50010|RBW]]}
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_1038813851536531761_1001{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:50010|RBW]]} size 0
DIR* NameSystem.completeFile: file /user/kihwal/.._COPYING_ is closed by DFSClient_NONMAPREDUCE_809910865_1
ugi=kihwal ip=/127.0.0.1 cmd=rename src=/user/kihwal/.._COPYING_ dst=/user/kihwal perm=kihwal:supergroup:rwxr-xr-x
It ends up creating a file with the wrong name: .._COPYING_ instead of passwd.
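The misnamed file is consistent with the temporary-file suffix being appended to an implicit destination of ".". A minimal sketch of that failure mode (hypothetical names, not the actual FsShell implementation):

```java
// Hypothetical illustration of the suffix bug; CopyingSuffixDemo and
// tempName are assumptions for this sketch, not real FsShell code.
public class CopyingSuffixDemo {
    // FsShell writes uploads to a temporary "<dst>._COPYING_" file,
    // then renames it to the final name once the copy completes.
    static final String COPYING_SUFFIX = "._COPYING_";

    // Naively appending the suffix to whatever destination string was
    // resolved: if the destination defaulted to ".", the result is
    // ".._COPYING_", which resolves to a file literally named
    // ".._COPYING_" in the user's home directory.
    static String tempName(String dst) {
        return dst + COPYING_SUFFIX;
    }

    public static void main(String[] args) {
        System.out.println(tempName("."));       // .._COPYING_
        System.out.println(tempName("passwd"));  // passwd._COPYING_
    }
}
```

With a real destination ("passwd") the temporary name is sensible; with the defaulted "." it matches the bogus name seen in the listing and namenode log above.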
Attachments
Issue Links
- is duplicated by HADOOP-8131: FsShell put doesn't correctly handle a non-existent dir (Resolved)