Regarding the org.apache.hadoop.fs.azurenative classes
- keys like "fs.azure.buffer.dir" need to be pulled out and made constants; embedding string literals inline is something the main codebase is slowly moving away from. Some of the code already does this, but not all of it.
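A minimal sketch of the pattern, along the lines of the existing configuration-key classes; the class and constant names here are illustrative, only the key string comes from the patch:

```java
// Illustrative sketch: gather the configuration keys into one
// constants class instead of scattering string literals inline.
// The class name AzureFileSystemConfigKeys is hypothetical.
public final class AzureFileSystemConfigKeys {

  // Key string as it appears in the patch.
  public static final String AZURE_BUFFER_DIR_KEY = "fs.azure.buffer.dir";

  private AzureFileSystemConfigKeys() {
    // constants only; no instances
  }
}
```

Call sites then read `conf.get(AzureFileSystemConfigKeys.AZURE_BUFFER_DIR_KEY)` rather than `conf.get("fs.azure.buffer.dir")`, so a typo in a key becomes a compile error instead of a silent misconfiguration.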
- The code depends on microsoft-windowsazure-api 0.2.0, which is in the Maven repository. There's also a 1.2.0 version in there; any particular reason for not using the latest release?
- Testing? How is anyone working with this code going to use the FS? Is there S3-style remote access, or do you have to bring up a VM in the cluster?
- The catch of Exception followed by wrapping in AzureException is best set up so that IOExceptions aren't caught and wrapped, as they already match the method signature. I don't know whether the native API throws them, but adding an extra layer of nesting never helps when troubleshooting live systems.
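A sketch of the suggested catch ordering, assuming AzureException extends IOException; the helper name and shape here are illustrative, not what the patch actually contains:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class AzureExceptionHandling {

  // Hypothetical stand-in for the patch's AzureException.
  static class AzureException extends IOException {
    AzureException(String message, Throwable cause) {
      super(message, cause);
    }
  }

  /**
   * Run a store operation, rethrowing IOExceptions as-is (they already
   * match FileSystem method signatures) and wrapping only genuinely
   * unexpected exceptions once, preserving the cause.
   */
  static void run(Callable<Void> operation, String key) throws IOException {
    try {
      operation.call();
    } catch (IOException e) {
      // Already fits the signature: no extra layer of nesting,
      // so stack traces stay readable on live systems.
      throw e;
    } catch (Exception e) {
      throw new AzureException("Operation failed on " + key, e);
    }
  }
}
```

The point is the ordering: the more specific IOException catch comes first and rethrows untouched, so only non-IO failures pick up the AzureException wrapper.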
It may be cleaner to keep the Azure FS source tree outside the main Hadoop code and host it in a parallel hadoop-azurefs project, with the extra dependency and the extra output artifacts. Anyone who added a mvn or ivy dependency on hadoop-azurefs would get the -api JAR, and testing could be isolated.

This could also be a good opportunity to do the same for KFS, which is under-tested in the current release process, and for any other DFS clients that people want in the codebase. Maybe the policy should be: if it is testable by anyone, put it in the Hadoop source tree; if not, the FS vendor has to do it. (I'm thinking of things like GPFS here and others, not just AzureFS.)