Affects Version/s: 0.7.2
Fix Version/s: None
I get a different response from dfs -ls depending on whether the
argument is a directory or a wildcard. The wildcard form omits the
summary line that reports how many files were found.
It did indeed cause a problem with my scripts, though it was easy to
filter out. If printing the summary line sometimes and not others is the
desired behavior, that's fine; it just doesn't seem like that was the
intention, so I pointed it out. My script now handles both forms. I do
rely on the fact that the third column of the output is the file size.
Perhaps I shouldn't be doing this, but I am writing automated Python
scripts to drive Hadoop, so it is useful to check file sizes. If you were
to provide a format string that let me place the items in a specific
layout, you would be free to change the default output whenever, and
however, you wanted.
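The size check described above can be made robust to both output forms by skipping the summary line when it appears. A minimal sketch in Python; the exact column layout and the sample lines in the docstring are assumptions for illustration, not output captured from a real cluster:

```python
def file_sizes(ls_output):
    """Parse `bin/hadoop dfs -ls` output into a {path: size} dict.

    Skips the optional "Found N items" summary line, so the same code
    accepts both the directory form (with summary) and the wildcard
    form (without it). Assumes the size is the third whitespace-
    separated column, e.g. (hypothetical):

        Found 2 items
        /user/me/a.txt r3 1024
        /user/me/b.txt r3 2048
    """
    sizes = {}
    for line in ls_output.splitlines():
        # Summary line appears only for directory listings; skip it.
        if not line.strip() or line.startswith("Found "):
            continue
        cols = line.split()
        if len(cols) < 3:
            continue  # ignore lines that don't fit the assumed layout
        sizes[cols[0]] = int(cols[2])
    return sizes
```

Filtering by prefix rather than counting lines means the script no longer cares which of the two forms it was given.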
|Component/s||dfs [ 12310710 ]|
|Status||Open [ 1 ]||Resolved [ 5 ]|
|Status||Resolved [ 5 ]||Closed [ 6 ]|
|Resolution||Fixed [ 1 ]|
[ What's your command line?
If you use a command like "bin/hadoop dfs -ls *.txt", the shell will substitute *.txt with the files that match the pattern in your current directory before hadoop ever sees it.
You can try quoting your pattern string:
bin/hadoop dfs -ls "*.txt" ]
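The shell-expansion behavior described in the comment can be reproduced locally: Python's glob module applies the same pathname-matching rules the shell uses, so this sketch (with made-up file names in a temporary directory) shows why an unquoted pattern never reaches hadoop as a pattern:

```python
import glob
import os
import tempfile

# Create a local directory with two .txt files (names are made up).
d = tempfile.mkdtemp()
for name in ("notes.txt", "data.txt"):
    open(os.path.join(d, name), "w").close()

# Unquoted on a command line, the shell expands the pattern against the
# LOCAL directory, exactly as glob does here; hadoop would receive these
# local names instead of the literal pattern.
expanded = sorted(glob.glob(os.path.join(d, "*.txt")))

# Quoting the pattern ("*.txt") suppresses that expansion, so the
# literal string is passed through for hadoop to match against HDFS.
```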
|Field||Original Value||New Value|
|Assignee||Jiang Lei [ jianglei ]|