Sorry, please don't take it personally. You may know that I take a very firm stance on migration to Java 9. Sorry for being harsh, but at some point it gets disappointing when so many third-party libraries use sun.misc.Unsafe everywhere, forget to add AccessController.doPrivileged, spawn processes, or do other things like broken checks in static initializers that affect applications and other libraries depending on them.
"the whole code is outdated and no longer applicable"
It is already outdated in branch 2.7, because according to Akira Ajisaka the 2.7.x Hadoop releases were built with Java 7 as the minimum. So when the code was updated to Java 7 back then, the check should have been replaced right away. I know, sometimes you forget to fix such code when you bump the minimum Java version in your pom.xml! Nevertheless, this type of code is a no-go, and I keep walking around and arguing about it, because it hinders the adoption of Java 9 – and the Apache Lucene people in particular are really pushing all our users to move to Java 9 as soon as it is out (for security reasons in Elasticsearch/Solr, and for performance reasons with MappedByteBuffers). Here is my huge problem with this type of code:
- Never ever write code in static initializers that may break (in this case the hardcoded substring(0,2)).
- Don't expect strings that come from the outside to conform to any particular syntax or length.
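To make the second point concrete, here is a minimal sketch of what a defensive version check could look like instead of a hardcoded substring. The class and method names are hypothetical, not Hadoop's actual code; the point is that it never assumes the version string has any minimum length or format, so it cannot blow up a static initializer:

```java
public final class JavaVersionCheck {

    /**
     * Defensive check for "Java 7 or above". Handles both the old
     * "1.7.0_80"-style strings and the Java 9+ style ("9", "9.0.1"),
     * and never throws on unexpected input.
     */
    static boolean isJava7OrAbove(String version) {
        if (version == null || version.isEmpty()) {
            // Be permissive on unknown input rather than crash class loading.
            return true;
        }
        try {
            // Split on the separators used by both version-string schemes.
            String[] parts = version.split("[._\\-+]");
            int major = Integer.parseInt(parts[0]);
            if (major == 1 && parts.length > 1) {
                // Old scheme: "1.7.0_80" -> effective major version is 7.
                major = Integer.parseInt(parts[1]);
            }
            return major >= 7;
        } catch (NumberFormatException e) {
            // Never let a parse failure break a static initializer.
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(isJava7OrAbove("1.7.0_80")); // true
        System.out.println(isJava7OrAbove("9"));        // true (would crash a substring(0,3) check)
        System.out.println(isJava7OrAbove("1.6.0_45")); // false
    }
}
```

Of course, for a branch whose minimum is already Java 7, even this is unnecessary – a constant `true` is the correct one-liner fix.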
In that regard (I also see the Shell class and its ubiquitous usage as a big security risk), you should really fix the code – even though it is an outdated version. This is a serious issue, especially as hadoop-common-2.7 is still used in many projects. The problem is that any code that touches the Shell class (even indirectly, just by class loading) breaks! The fix is a one-liner (the patch here is much too large): just replace the whole line with a simple "true" and we are done – low-risk and good enough for a bugfix release.
P.S.: Most of the stuff in the Shell class can be done with the java.nio.file APIs (including symlinks, or enumerating all mount points and their free space instead of calling "df -h"); no operating system shell is needed anymore. This may already be partially fixed in later versions of Hadoop – I am just saying that I am not the first one complaining about this, and I know from several other issues that third parties see it very critically. It would also make it easier for people to install Hadoop or run tests on platforms like Windows. I don't know why a third-party library like hadoop-common should ever spawn child processes without asking the security manager or giving the user an option to avoid it. That's what I am arguing about. – And when the same class then also fails in its static initializer, causing a cascade of class-loading related problems because the person who wrote the code never thought about an IndexOutOfBoundsException, I keep getting sarcastic, sorry.
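As a sketch of what I mean: the two shell-based operations mentioned above (listing mount points with their free space, and creating symlinks) have direct java.nio.file equivalents. This is not Hadoop code, just a standalone illustration; note that symlink creation on Windows may still require elevated privileges:

```java
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;

public class NoShellNeeded {
    public static void main(String[] args) throws IOException {
        // Instead of shelling out to "df -h": enumerate mount points
        // and their free space through the FileStore API.
        for (FileStore store : FileSystems.getDefault().getFileStores()) {
            System.out.printf("%s: %d bytes usable of %d total%n",
                    store, store.getUsableSpace(), store.getTotalSpace());
        }

        // Instead of spawning "ln -s": create a symlink directly.
        Path dir = Files.createTempDirectory("nio-demo");
        Path target = Files.createFile(dir.resolve("target.txt"));
        Path link = dir.resolve("link.txt");
        // May throw FileSystemException on Windows without the required privilege.
        Files.createSymbolicLink(link, target);
        System.out.println(Files.isSymbolicLink(link)); // true
    }
}
```

No child process is spawned, so no security manager interaction for process execution is needed, and the code behaves the same across operating systems.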
P.P.S.: What I write on Twitter is unrelated to this issue and is my personal opinion!