Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Hadoop Flags: Reviewed
Description
Some Hadoop shell scripts infer their own name using this bit of shell magic:
```bash
MYNAME="${BASH_SOURCE-$0}"
HADOOP_SHELL_EXECNAME="${MYNAME##*/}"
```
e.g. see the hdfs script.
The inferred shell script name is later passed to hadoop-functions.sh which uses it to construct the names of some environment variables. E.g. when invoking hdfs datanode, the options variable name is inferred as follows:
# HDFS + DATANODE + OPTS -> HDFS_DATANODE_OPTS
This works well if the calling script has a standard name like hdfs or yarn. If a distribution renames the script to something like foo.bar, then the variable name will be inferred as FOO.BAR_DATANODE_OPTS. This is not a valid bash variable name. Allowing calling scripts to override the HADOOP_SHELL_EXECNAME detection is just one possible solution. There may be better alternatives.
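To make the failure mode concrete, here is a minimal sketch (illustrative only, not the actual hadoop-functions.sh code) of how the inferred name feeds into the options-variable lookup:

```bash
#!/usr/bin/env bash
# Minimal sketch only; the real logic lives in hadoop-functions.sh.
MYNAME="${BASH_SOURCE-$0}"
HADOOP_SHELL_EXECNAME="${MYNAME##*/}"

subcmd="datanode"
# hdfs + datanode + OPTS -> HDFS_DATANODE_OPTS
varname="$(echo "${HADOOP_SHELL_EXECNAME}_${subcmd}_OPTS" | tr '[:lower:]' '[:upper:]')"

# With a script named "hdfs" this reads HDFS_DATANODE_OPTS as expected.
# With a script renamed to "foo.bar" it becomes FOO.BAR_DATANODE_OPTS, which is
# not a valid bash name, so the expansion below fails with "bad substitution".
eval "echo \"\${${varname}}\""
```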
Attachments
- HADOOP-14976.04.patch (7 kB, Arpit Agarwal)
- HADOOP-14976.03.patch (7 kB, Arpit Agarwal)
- HADOOP-14976.02.patch (9 kB, Arpit Agarwal)
- HADOOP-14976.01.patch (2 kB, Arpit Agarwal)
Activity
> This is not a valid bash variable name.
> Worse, it pretty much breaks the Hadoop user experience.
> There may be better alternatives.
Maybe the bigger question is why a distribution would rename the binaries. With Hadoop 3.x being significantly more flexible as to the shell environment configuration (e.g., function overrides), those distributions might need to reconsider their strategies.
> Worse, it pretty much breaks the Hadoop user experience.
Agreed, not a great user experience to change names of standard environment variables. If the variable names are fixed, is there a benefit to inferring HADOOP_SHELL_EXECNAME instead of passing a fixed string to hadoop_subcommand_opts, since the calling script always knows what is necessary? e.g. do you see any downside to updating the following line
https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs#L269
to
hadoop_java_exec hdfs "${HADOOP_CLASSNAME}" "${HADOOP_SUBCMD_ARGS[@]}"
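For context, the proposed change amounts to something like the following; the "current" form shown here is an assumption inferred from the discussion (the inferred name passed as the first argument), not copied from trunk:

```bash
# Assumed current form: first argument comes from the inferred script name.
hadoop_java_exec "${HADOOP_SHELL_EXECNAME}" "${HADOOP_CLASSNAME}" "${HADOOP_SUBCMD_ARGS[@]}"

# Proposed form: pass the literal name, since the hdfs script already knows it.
hadoop_java_exec hdfs "${HADOOP_CLASSNAME}" "${HADOOP_SUBCMD_ARGS[@]}"
```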
> since the calling script always knows what is necessary?
I'd need to be convinced this is true. A lot of the work done in the shell script rewrite and follow on work was to make the "front end" scripts as dumb as possible in order to centralize the program logic. This gave huge benefits in the form of script consistency, testing, and more.
Besides, EXECNAME is used for very specific things:
e.g.:
https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs#L67
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/shellprofile.d/hadoop-distcp.sh#L20
are great examples where the execname is exactly what needs to be reported.
.. and that's even before 3rd party add-ons that might expect HADOOP_SHELL_EXECNAME to work as expected.
If distributions really are renaming the scripts (which is extremely problematic for lots of reasons), there isn't much of a reason they couldn't just tuck them away in a non-PATH directory and use the same names or even just rewrite the scripts directly. (See above about removing as much logic as possible.)
I've had in my head a "vendor" version of hadoop-user-functions.sh, but I'm not sure even that would help here. It really depends on why the bin scripts are getting renamed, whether the problem being solved is actually more appropriate for hadoop-layout.sh, etc.
I see nothing but pain and misfortune for mucking with HADOOP_SHELL_EXECNAME though.
I see, thank you for the clarification.
In this case, the scripts were historically renamed to allow multi-versioning - during upgrades for example. Perhaps the 3.0 script rewrite makes it easier to do this differently. I can guess why the variable name detection is done this way in 3.0 - it allows separating out script-specific logic into the generic hadoop-functions.sh which is neat.
For now it's just an unexpected side effect that renaming breaks the interpretation of shell variables. My first thought was to let script callers override HADOOP_SHELL_EXECNAME, i.e. the scripts would not initialize it if it's already set. One situation where that might break things is when one script invokes another and we want HADOOP_SHELL_EXECNAME to reflect the callee script. It looks like that doesn't happen today though; scripts like hadoop-distcp.sh and hadoop-streaming.sh don't rewrite HADOOP_SHELL_EXECNAME.
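A minimal sketch of that override idea (hypothetical, not what was ultimately committed): the script header would only fall back to inferring the name when the caller has not already set it:

```bash
# Hypothetical override-friendly script header: keep any HADOOP_SHELL_EXECNAME
# exported by the caller, otherwise infer it from the file name as today.
MYNAME="${BASH_SOURCE-$0}"
HADOOP_SHELL_EXECNAME="${HADOOP_SHELL_EXECNAME:-${MYNAME##*/}}"
```

A renaming distribution could then export HADOOP_SHELL_EXECNAME=hdfs before invoking its renamed copy, keeping the inferred variable names valid.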
> allow multi-versioning
Ignoring the whole Linux alternatives bits, it's probably easier to write a custom hdfs wrapper that does this. e.g.:
/usr/bin/hdfs = wrapper
/usr/hadoop-3.0.0/bin/hdfs = 3.0.0 hdfs
/usr/hadoop-3.1.0/bin/hdfs = 3.1.0 hdfs
/usr/hadoop-default symlink to hadoop-3.0.0
wrapper basically adds --version parameter:
hdfs --version 3.0.0 foo => exec /usr/hadoop-3.0.0/bin/hdfs
hdfs --version 3.1.0 foo => exec /usr/hadoop-3.1.0/bin/hdfs
hdfs foo => exec /usr/hadoop-default/bin/hdfs
Plus this works with any version of Hadoop.... and it's a relatively simple script. (10 lines max?)
You can also cheat and use .hadoop-env / .hadooprc to do some of the same sorts of things for individual users, but that's a bit more complicated.
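A rough sketch of such a wrapper, using the hypothetical paths from the layout above:

```bash
#!/usr/bin/env bash
# Hypothetical /usr/bin/hdfs wrapper: pick a Hadoop install by --version,
# then exec the real hdfs script under its normal name.
version=""
if [[ "$1" == "--version" ]]; then
  version="$2"
  shift 2
fi

if [[ -n "${version}" ]]; then
  target="/usr/hadoop-${version}/bin/hdfs"
else
  target="/usr/hadoop-default/bin/hdfs"
fi

exec "${target}" "$@"
```

Because the real script still runs as bin/hdfs, HADOOP_SHELL_EXECNAME is inferred correctly regardless of which install is selected.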
Actually, there is an easy way out of this: hard set HADOOP_SHELL_EXECNAME rather than calculate it dynamically.
Originally, HADOOP_SHELL_EXECNAME was meant to be 'smart' and was only used for hadoop_usage output. When dynamic subcommands came along, it became obvious that the code would need some way to tell apart commands. HADOOP_SHELL_EXECNAME got re-purposed a bit but kept the dynamic nature to make it easy to copy-n-paste the header between the executables.
So we could just change the top to remove the calculation off of MYNAME, running the risk that someone who copies and pastes the code without understanding it may make some big mistakes. [IMO, this risk is higher than one would think, given how often hadoop's shell code was previously propagated. Those mistakes are still being repeated in new projects.]
> So we could just change the top to remove the calculation off of MYNAME, running the risk that someone who copies and pastes the code without understanding it may make some big mistakes.
Thank you for suggesting that alternative. What do you think is the right thing to do - make that change and formalize the fact that these environment variables are not meant to be renamed; or leave it as it is to avoid copy-paste bugs like you described?
The v01 patch hard-codes HADOOP_SHELL_EXECNAME for hadoop, hdfs, mapred and yarn. I checked via a source scan and against https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/UnixShellGuide.html that there are no other scripts that need to be fixed.
The remaining scripts that reference HADOOP_SHELL_EXECNAME like hadoop-distcp.sh, hadoop-streaming.sh etc. all depend on it being initialized in the calling script.
I ran shelltests locally. I couldn't think of any shell unit tests that need to be added as part of this change.
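For reference, the hard-coding amounts to a one-line change in each script header; a sketch of the approach for the hdfs script (the committed patch may differ in detail), with the other scripts getting the analogous literal ("hadoop", "mapred", "yarn"):

```bash
# Before: both values inferred from the on-disk file name.
MYNAME="${BASH_SOURCE-$0}"
HADOOP_SHELL_EXECNAME="${MYNAME##*/}"

# After: MYNAME can still be inferred for any other uses, but the execname is a
# fixed string, so renaming the file no longer changes the variable names.
MYNAME="${BASH_SOURCE-$0}"
HADOOP_SHELL_EXECNAME="hdfs"
```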
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 15s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
-1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
trunk Compile Tests | |||
0 | mvndep | 0m 27s | Maven dependency ordering for branch |
+1 | mvninstall | 21m 41s | trunk passed |
+1 | mvnsite | 10m 3s | trunk passed |
+1 | shadedclient | 11m 56s | branch has no errors when building and testing our client artifacts. |
Patch Compile Tests | |||
0 | mvndep | 0m 24s | Maven dependency ordering for patch |
+1 | mvnsite | 10m 25s | the patch passed |
+1 | shellcheck | 1m 33s | There were no new shellcheck issues. |
+1 | shelldocs | 0m 13s | There were no new shelldocs issues. |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | shadedclient | 12m 17s | patch has no errors when building and testing our client artifacts. |
Other Tests | |||
+1 | unit | 2m 54s | hadoop-common in the patch passed. |
+1 | unit | 1m 23s | hadoop-hdfs in the patch passed. |
+1 | unit | 9m 20s | hadoop-yarn in the patch passed. |
+1 | unit | 3m 18s | hadoop-mapreduce-project in the patch passed. |
-1 | asflicense | 0m 48s | The patch generated 3 ASF License warnings. |
87m 30s |
Subsystem | Report/Notes |
---|---|
Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12894784/HADOOP-14976.01.patch |
Optional Tests | asflicense mvnsite unit shellcheck shelldocs |
uname | Linux 65e5605ad887 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/patchprocess/precommit/personality/provided.sh |
git revision | trunk / 9a7e810 |
maven | version: Apache Maven 3.3.9 |
shellcheck | v0.4.6 |
Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13597/testReport/ |
asflicense | https://builds.apache.org/job/PreCommit-HADOOP-Build/13597/artifact/out/patch-asflicense-problems.txt |
modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-yarn-project/hadoop-yarn hadoop-mapreduce-project U: . |
Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13597/console |
Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
Thank you.
The v2 patch fixes that and adds bats tests for MYNAME and HADOOP_SHELL_EXECNAME for the hdfs and hadoop scripts. I'm testing it via hadoop_debug statements, so I had to update a couple of existing bats tests that assert on exact output.
+1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 17s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
trunk Compile Tests | |||
0 | mvndep | 0m 20s | Maven dependency ordering for branch |
+1 | mvninstall | 16m 38s | trunk passed |
+1 | mvnsite | 7m 21s | trunk passed |
+1 | shadedclient | 11m 0s | branch has no errors when building and testing our client artifacts. |
Patch Compile Tests | |||
0 | mvndep | 0m 22s | Maven dependency ordering for patch |
+1 | mvnsite | 8m 11s | the patch passed |
+1 | shellcheck | 1m 23s | There were no new shellcheck issues. |
+1 | shelldocs | 0m 9s | There were no new shelldocs issues. |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | shadedclient | 10m 31s | patch has no errors when building and testing our client artifacts. |
Other Tests | |||
+1 | unit | 2m 7s | hadoop-common in the patch passed. |
+1 | unit | 1m 7s | hadoop-hdfs in the patch passed. |
+1 | unit | 7m 54s | hadoop-yarn in the patch passed. |
+1 | unit | 2m 32s | hadoop-mapreduce-project in the patch passed. |
+1 | asflicense | 0m 36s | The patch does not generate ASF License warnings. |
70m 59s |
Subsystem | Report/Notes |
---|---|
Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896019/HADOOP-14976.02.patch |
Optional Tests | asflicense mvnsite unit shellcheck shelldocs |
uname | Linux 6e4f1ab3f48d 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/patchprocess/precommit/personality/provided.sh |
git revision | trunk / 59d78a5 |
maven | version: Apache Maven 3.3.9 |
shellcheck | v0.4.6 |
Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13628/testReport/ |
Max. process+thread count | 329 (vs. ulimit of 5000) |
modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-yarn-project/hadoop-yarn hadoop-mapreduce-project U: . |
Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13628/console |
Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
Rather than use hadoop_debug for this, modify envvars to print them out if QATESTMODE is set.
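A sketch of what that could look like; the exact envvars output format, the QATESTMODE check, and the bats test layout here are assumptions for illustration, not the committed patch:

```bash
# Hypothetical branch inside the envvars handling of bin/hdfs: only expose the
# internal names when Yetus test mode is active.
if [[ -n "${QATESTMODE}" ]]; then
  echo "MYNAME=${MYNAME}"
  echo "HADOOP_SHELL_EXECNAME=${HADOOP_SHELL_EXECNAME}"
fi
```

```bash
# Hypothetical bats test; HDFS_BIN_DIR is an assumed variable pointing at the
# built bin directory.
@test "hdfs hard-codes HADOOP_SHELL_EXECNAME" {
  export QATESTMODE=true
  run "${HDFS_BIN_DIR}/hdfs" envvars
  [[ ${output} =~ "HADOOP_SHELL_EXECNAME=hdfs" ]]
}
```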
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 8s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
trunk Compile Tests | |||
0 | mvndep | 0m 19s | Maven dependency ordering for branch |
+1 | mvninstall | 16m 18s | trunk passed |
+1 | mvnsite | 6m 50s | trunk passed |
+1 | shadedclient | 9m 51s | branch has no errors when building and testing our client artifacts. |
Patch Compile Tests | |||
0 | mvndep | 0m 20s | Maven dependency ordering for patch |
+1 | mvnsite | 6m 22s | the patch passed |
+1 | shellcheck | 1m 19s | There were no new shellcheck issues. |
+1 | shelldocs | 0m 8s | There were no new shelldocs issues. |
-1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply |
+1 | shadedclient | 10m 5s | patch has no errors when building and testing our client artifacts. |
Other Tests | |||
+1 | unit | 2m 9s | hadoop-common in the patch passed. |
+1 | unit | 0m 58s | hadoop-hdfs in the patch passed. |
+1 | unit | 6m 41s | hadoop-yarn in the patch passed. |
+1 | unit | 1m 57s | hadoop-mapreduce-project in the patch passed. |
+1 | asflicense | 0m 34s | The patch does not generate ASF License warnings. |
64m 28s |
Subsystem | Report/Notes |
---|---|
Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896783/HADOOP-14976.03.patch |
Optional Tests | asflicense mvnsite unit shellcheck shelldocs |
uname | Linux 51d74d28516b 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/patchprocess/precommit/personality/provided.sh |
git revision | trunk / 49b4c0b |
maven | version: Apache Maven 3.3.9 |
shellcheck | v0.4.6 |
whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/13652/artifact/out/whitespace-eol.txt |
Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13652/testReport/ |
Max. process+thread count | 336 (vs. ulimit of 5000) |
modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-yarn-project/hadoop-yarn hadoop-mapreduce-project U: . |
Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13652/console |
Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
+1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 15m 12s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
trunk Compile Tests | |||
0 | mvndep | 1m 41s | Maven dependency ordering for branch |
+1 | mvninstall | 17m 2s | trunk passed |
+1 | mvnsite | 7m 37s | trunk passed |
+1 | shadedclient | 9m 32s | branch has no errors when building and testing our client artifacts. |
Patch Compile Tests | |||
0 | mvndep | 0m 20s | Maven dependency ordering for patch |
+1 | mvnsite | 7m 44s | the patch passed |
+1 | shellcheck | 1m 20s | There were no new shellcheck issues. |
+1 | shelldocs | 0m 8s | There were no new shelldocs issues. |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | shadedclient | 9m 47s | patch has no errors when building and testing our client artifacts. |
Other Tests | |||
+1 | unit | 1m 57s | hadoop-common in the patch passed. |
+1 | unit | 1m 9s | hadoop-hdfs in the patch passed. |
+1 | unit | 8m 14s | hadoop-yarn in the patch passed. |
+1 | unit | 2m 34s | hadoop-mapreduce-project in the patch passed. |
+1 | asflicense | 0m 33s | The patch does not generate ASF License warnings. |
85m 16s |
Subsystem | Report/Notes |
---|---|
Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896926/HADOOP-14976.04.patch |
Optional Tests | asflicense mvnsite unit shellcheck shelldocs |
uname | Linux 1936a54d67d6 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/patchprocess/precommit/personality/provided.sh |
git revision | trunk / a2c150a |
maven | version: Apache Maven 3.3.9 |
shellcheck | v0.4.6 |
Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13656/testReport/ |
Max. process+thread count | 297 (vs. ulimit of 5000) |
modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-yarn-project/hadoop-yarn hadoop-mapreduce-project U: . |
Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13656/console |
Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
Allen, do you have any comments on the updated patch or the test case? Thanks.
Thanks arpitagarwal for working on this and aw for the discussion! +1 on v004 patch.
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13322 (See https://builds.apache.org/job/Hadoop-trunk-Commit/13322/)
HADOOP-14976. Set HADOOP_SHELL_EXECNAME explicitly in scripts. (arp: rev e00c7f78c1c00467319ce5b92e4a3ebc691d246e)
- (add) hadoop-hdfs-project/hadoop-hdfs/src/test/scripts/hadoop_shell_execname.bats
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
- (edit) hadoop-mapreduce-project/bin/mapred
- (edit) hadoop-common-project/hadoop-common/src/main/bin/hadoop
- (edit) hadoop-yarn-project/hadoop-yarn/bin/yarn
- (add) hadoop-common-project/hadoop-common/src/test/scripts/hadoop_shell_execname.bats