Specifically, if a user:
- Creates an empty directory with hadoop fs -mkdir s3a://bucket/path
- Copies data into that directory with another tool, e.g. the AWS CLI.
- Tries to access the data in that directory with any Hadoop software.
Then the last step fails: the fake empty-directory object that S3A wrote in the first step causes S3A (listStatus() etc.) to keep treating the directory as empty, even though the second step populated it with data.
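The snippet below is a minimal sketch of the same sequence against the Hadoop FileSystem API, under the assumption that steps 1 and 3 run through S3A while step 2 happens outside Hadoop; the bucket, path, and file names are placeholders, and the external copy is only indicated in a comment.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3AEmptyDirRepro {
  public static void main(String[] args) throws Exception {
    // Placeholder bucket/path, for illustration only.
    Path dir = new Path("s3a://bucket/path");

    Configuration conf = new Configuration();
    FileSystem fs = dir.getFileSystem(conf);

    // Step 1: create the directory. S3A writes a zero-byte "fake directory"
    // marker object to represent the (empty) directory.
    fs.mkdirs(dir);

    // Step 2 happens outside Hadoop, e.g.:
    //   aws s3 cp localfile s3://bucket/path/part-00000
    // The external tool does not remove the marker written in step 1.

    // Step 3: list the directory through S3A. Because the fake directory
    // marker is still present, the listing may come back empty even though
    // the external copy added objects under the prefix.
    FileStatus[] children = fs.listStatus(dir);
    System.out.println("Entries visible to S3A: " + children.length);
  }
}
```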
I wanted to document this behavior for users. We may end up marking this as won't-fix ("by design"). It may also be worth brainstorming solutions and/or a config option to change the behavior, if people care.