Description
Here is the bug report from Michael Stack:
Here I'm listing a BUCKET directory that was copied up using 'hadoop
fs', then rmr'ing it and then listing again:
stack@bregeon:~/checkouts/hadoop$ ./bin/hadoop fs -fs s3://ID:SECRET@BUCKET -ls /fromfile
Found 2 items
/fromfile/diff.txt <r 1> 591
/fromfile/x.js <r 1> 2477
stack@bregeon:~/checkouts/hadoop$ ./bin/hadoop fs -fs s3://ID:SECRET@BUCKET -rmr /fromfile
Deleted /fromfile
stack@bregeon:~/checkouts/hadoop$ ./bin/hadoop fs -fs s3://ID:SECRET@BUCKET -ls /fromfile
Found 0 items
The '0 items' is odd because listing my BUCKET with a tool other
than 'hadoop fs' (e.g. the hanzo webs python scripts) shows:
stack@bregeon:~/checkouts/hadoop.trunk$ s3ls BUCKET
%2F
%2Ffromfile%2F.diff.txt.crc
%2Ffromfile%2F.x.js.crc
%2Ffromfile%2Fdiff.txt
%2Ffromfile%2Fx.js
block_-5013142890590722396
block_5832002498000415319
block_6889488315428893905
block_9120115089645350905
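The %2F entries above look like the original paths with each '/' percent-encoded, which would mean the S3 filesystem stores whole paths as flat, encoded keys (an inference from the listing, not confirmed against the Hadoop source). A quick check of that encoding:

```python
from urllib.parse import quote

# Percent-encoding the Hadoop-side paths (safe="" so '/' becomes %2F)
# reproduces the key names that s3ls reported.
for path in ["/fromfile/diff.txt", "/fromfile/x.js"]:
    print(quote(path, safe=""))
# %2Ffromfile%2Fdiff.txt
# %2Ffromfile%2Fx.js
```

That matches the s3ls output exactly, including the bare "%2F" key for the root directory itself.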
It's all still there. I can subsequently do the likes of the
following:
stack@bregeon:~/checkouts/hadoop$ ./bin/hadoop fs -fs s3://ID:SECRET@BUCKET -rmr /fromfile/diff.txt
... and the delete will succeed; looking at the bucket with alternate
tools shows that the key has actually been removed, and so on up the hierarchy.