The HDFS client requires dangerous permissions, in particular execute on all files, even though it only needs to connect to an HDFS cluster.
A full list (for both Hadoop 1 and 2) is available here, along with the places in the code where they occur.
While some of these permissions are understandable, requiring FilePermission <<ALL FILES>> execute simply to initialize a static field of the Shell class, which in the end is not even used (since this is just a client), compromises the entire security model.
To make matters worse, the code runs in a static initializer, so if the permission is not granted the JVM fails with an ExceptionInInitializerError, which is unrecoverable.
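A minimal, self-contained illustration of why this is unrecoverable (not Hadoop's actual code, just a class whose static initializer throws the way Shell's would when the permission is denied):

```java
public class InitFailureDemo {
    static class Config {
        // Any exception thrown while initializing this field surfaces
        // as ExceptionInInitializerError at first use of the class.
        static final boolean FLAG;
        static {
            if (probe()) {
                FLAG = true;
            } else {
                throw new SecurityException("execute permission denied");
            }
        }
        // Stand-in for a check denied by the SecurityManager.
        static boolean probe() { return false; }
    }

    public static void main(String[] args) {
        try {
            boolean f = Config.FLAG;  // triggers class initialization
        } catch (ExceptionInInitializerError e) {
            System.out.println("first access: " + e.getCause().getMessage());
        }
        try {
            boolean f = Config.FLAG;  // the class is now permanently unusable
        } catch (NoClassDefFoundError e) {
            System.out.println("second access: NoClassDefFoundError");
        }
    }
}
```

Once initialization has failed, the class is marked erroneous and every subsequent attempt to use it throws NoClassDefFoundError; there is no way to retry from application code.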
Ironically enough, on Windows this problem does not appear, since the code bypasses the check entirely and initializes the field with a fallback value (false).
A quick fix would be to account for the fact that a JVM SecurityManager might be active and the permission not granted, or that the external process might fail, and to use a fallback value in those cases.
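A sketch of that quick fix, with hypothetical helper names (not Hadoop's actual code): the platform probe still runs in the static initializer, but a denied permission or a failed external process degrades to the fallback instead of letting the error escape and poison class initialization.

```java
import java.io.IOException;

public class ShellFallbackSketch {
    // Initialized eagerly, as in Shell, but the probe can no longer
    // cause an ExceptionInInitializerError.
    static final boolean SETSID_AVAILABLE = probeSetsid();

    private static boolean probeSetsid() {
        try {
            // Stands in for "run an external command to detect a
            // platform feature", the step that needs
            // FilePermission <<ALL FILES>> execute.
            return runProbeCommand();
        } catch (SecurityException e) {
            // SecurityManager active and permission not granted:
            // use the fallback instead of failing initialization.
            return false;
        } catch (IOException e) {
            // External process could not be started or failed: same fallback.
            return false;
        }
    }

    // Hypothetical stand-in for the external-process check; here it
    // simulates a SecurityManager denying the execute permission.
    private static boolean runProbeCommand() throws IOException {
        throw new SecurityException("FilePermission <<ALL FILES>> execute denied");
    }

    public static void main(String[] args) {
        // Class initialization succeeds even though the probe was denied.
        System.out.println("setsid available: " + SETSID_AVAILABLE);
    }
}
```

This mirrors what the Windows branch already does: when the check cannot run, assume false and carry on.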
A proper, long-term fix would be to minimize the permissions used by the HDFS client, since they are simply not required. A client should be as lightweight as possible and should not have the server's requirements leaked onto it.
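One way to move in that direction (again a sketch with hypothetical names, not a proposed patch) is to defer the privileged check until it is actually needed: a pure client that never touches the server-side code path never triggers the probe, so no execute permission is required just to load the class.

```java
public class LazyProbe {
    // null means "not probed yet"; volatile so the cached result is
    // published safely across threads.
    private static volatile Boolean setsidAvailable;

    static boolean isSetsidAvailable() {
        Boolean cached = setsidAvailable;
        if (cached == null) {
            boolean value;
            try {
                // Only code paths that genuinely need the platform
                // feature (i.e. server-side ones) ever reach this.
                value = probe();
            } catch (SecurityException e) {
                // Degrade gracefully under a restrictive SecurityManager.
                value = false;
            }
            setsidAvailable = value;
            cached = value;
        }
        return cached;
    }

    // Hypothetical stand-in for the external-process check; simulates
    // the permission being denied.
    private static boolean probe() {
        throw new SecurityException("execute permission denied");
    }
}
```

Because the probe no longer runs during class initialization, a denied permission costs only a fallback value at the call site rather than an unrecoverable ExceptionInInitializerError.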