Description
I followed the example from http://hortonworks.com/blog/introduction-apache-falcon-hadoop, where all locations (i.e. staging, working, temp) of the cluster are set to the same directory.
<?xml version="1.0" encoding="UTF-8"?>
<cluster colo="toronto" description="Primary Cluster" (...)
    <locations>
        <location name="staging" path="/tmp/falcon"/>
        <location name="working" path="/tmp/falcon"/>
        <location name="temp" path="/tmp/falcon"/>
    </locations>
</cluster>
When submitting such a cluster entity, I got:
bash-4.1$ ./bin/falcon entity -submit -type cluster -file cluster.xml
Stacktrace:
org.apache.falcon.client.FalconCLIException: Bad Request;Path /tmp/falcon has permissions: rwxr-xr-x, should be rwxrwxrwx
	at org.apache.falcon.client.FalconCLIException.fromReponse(FalconCLIException.java:44)
	at org.apache.falcon.client.FalconClient.checkIfSuccessful(FalconClient.java:1162)
	at org.apache.falcon.client.FalconClient.sendEntityRequestWithObject(FalconClient.java:684)
	at org.apache.falcon.client.FalconClient.submit(FalconClient.java:323)
	at org.apache.falcon.cli.FalconCLI.entityCommand(FalconCLI.java:361)
	at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:182)
	at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:132)
bash-4.1$ ./bin/falcon entity -submit -type cluster -file cluster.xml
Stacktrace:
org.apache.falcon.client.FalconCLIException: Bad Request;Path /tmp/falcon has permissions: rwxrwxrwx, should be rwxr-xr-x
	at org.apache.falcon.client.FalconCLIException.fromReponse(FalconCLIException.java:44)
	at org.apache.falcon.client.FalconClient.checkIfSuccessful(FalconClient.java:1162)
	at org.apache.falcon.client.FalconClient.sendEntityRequestWithObject(FalconClient.java:684)
	at org.apache.falcon.client.FalconClient.submit(FalconClient.java:323)
	at org.apache.falcon.cli.FalconCLI.entityCommand(FalconCLI.java:361)
	at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:182)
	at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:132)
I can keep changing these permissions back and forth forever with the same effect: with rwxr-xr-x the check demands rwxrwxrwx, and with rwxrwxrwx it demands rwxr-xr-x. The validation code explains why:
for (Location location : cluster.getLocations().getLocations()) {
    final String locationName = location.getName();
    if (locationName.equals("temp")) {
        continue;
    }
    try {
        checkPathOwnerAndPermission(cluster.getName(), location.getPath(), fs,
                "staging".equals(locationName)
                        ? HadoopClientFactory.ALL_PERMISSION
                        : HadoopClientFactory.READ_EXECUTE_PERMISSION);
    } catch (IOException e) {
        (...)
    }
}
This basically means:
- the staging directory must have exactly ALL (rwxrwxrwx) permissions
- the working directory must have exactly READ_EXECUTE (rwxr-xr-x) permissions
If the staging and working directories point to the same path, the entity can never pass validation, and this misconfiguration is hard to detect from the current error message.
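As a workaround, giving each location its own path avoids the conflicting requirements. The paths below are illustrative, not prescribed by Falcon:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<cluster colo="toronto" description="Primary Cluster" (...)
    <locations>
        <!-- illustrative paths: staging must be rwxrwxrwx, working rwxr-xr-x -->
        <location name="staging" path="/apps/falcon/staging"/>
        <location name="working" path="/apps/falcon/working"/>
        <location name="temp" path="/tmp"/>
    </locations>
</cluster>
```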
Therefore:
- a better (less confusing) message could be printed when staging and working share the same path
- or the code could be relaxed so that the working directory needs at least (not exactly) READ_EXECUTE permissions.
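The second option could be sketched as follows. This is an illustrative model using plain octal masks, not Falcon's actual FsPermission comparison; the method name is hypothetical:

```java
public class PermissionCheck {
    // Sketch of an "at least" check: treat a permission as a 9-bit mask
    // (rwxrwxrwx) and require every bit of the minimum to be present,
    // instead of demanding an exact match as the current validation does.
    static boolean hasAtLeast(int actual, int required) {
        return (actual & required) == required;
    }

    public static void main(String[] args) {
        int READ_EXECUTE = 0555; // r-xr-xr-x
        System.out.println(hasAtLeast(0777, READ_EXECUTE)); // true: rwxrwxrwx grants at least r-x
        System.out.println(hasAtLeast(0755, READ_EXECUTE)); // true: rwxr-xr-x grants at least r-x
        System.out.println(hasAtLeast(0700, READ_EXECUTE)); // false: group/other lack r-x
    }
}
```

With such a check, a shared staging/working path with rwxrwxrwx would satisfy both locations, since rwxrwxrwx implies rwxr-xr-x.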
Attachments
Issue Links
- is related to FALCON-817 Simplify operability by merging the staging and working dirs in cluster entity into one dir (Resolved)