Sure. -1 on allowing unsecured datanodes to join a secure cluster, and at the moment Hadoop doesn't have a non-jsvc way of securing/verifying datanodes' ports.
Currently, we secure the datanodes via jsvc, and the reasons for doing so were discussed extensively on this JIRA. Were we to allow the requested behavior, a misconfigured cluster could end up partially unsecured with no warning that it is in such a state, which is not acceptable.
What you're asking for is essentially to make securing the datanodes' non-RPC ports pluggable, which we fully expect and plan to do. I'll open a JIRA to make datanode-port security pluggable once 1150 has been finished off. jsvc was a reliable solution to a problem discovered very late in security's development; it has worked very well on our production clusters, but it certainly still has the odor of a hack about it. All that's needed is a way of auditing and verifying that the ports we're running on are secure by Ops' estimation; jsvc, SELinux, and AppArmor would all be reasonable ways of fulfilling such a contract.
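The core invariant jsvc buys us is simple: the datanode's data-transfer and HTTP ports are privileged (below 1024), so only a root-started process could have bound them. A minimal sketch of that check, with illustrative names and port values (1004 is a commonly used secure streaming port; 50075 is the old default insecure HTTP port), might look like:

```java
import java.net.InetSocketAddress;

public class PrivilegedPortCheck {
    // Ports at or below this value can only be bound by root (or a
    // root-started launcher such as jsvc) on a stock Unix system.
    static final int MAX_PRIVILEGED_PORT = 1023;

    // Returns true iff the bound address is on a privileged port, which
    // is the property a pluggable verifier would need to establish.
    static boolean isPrivileged(InetSocketAddress addr) {
        return addr.getPort() <= MAX_PRIVILEGED_PORT;
    }

    public static void main(String[] args) {
        InetSocketAddress streaming = new InetSocketAddress(1004);
        InetSocketAddress info = new InetSocketAddress(50075);
        System.out.println(isPrivileged(streaming)); // prints true
        System.out.println(isPrivileged(info));      // prints false
    }
}
```

A pluggable system would replace the port-number test with whatever contract the chosen mechanism (jsvc, SELinux, AppArmor) provides, but the shape of the verification is the same: fail startup unless Ops' chosen guarantee holds.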
But until we actually have a plan to implement this in a reliable, verifiable, and documented way, it's best to err on the side of caution and provide as strong a guarantee as possible that the datanodes in a secure cluster are indeed secure. Until we support non-jsvc methods of doing this, a non-jsvc-verified datanode is not going to work.
As for the config mentioned above, it would essentially be my.cluster.is.secure.except.for.this.one.attack.vector, which is a bad idea for the same reasons as above - it's a huge configuration mistake waiting to happen - and moreover it will be unnecessary once a fully pluggable system is in place. The one place it would be genuinely useful and justifiable is developer testing, since starting up these secure nodes during development is currently a serious pain.
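To make the objection concrete: such an escape hatch would be a single boolean in hdfs-site.xml, and a copied config or a typo would silently disable datanode port verification in production. The property name below is purely hypothetical, not a real Hadoop property:

```xml
<!-- HYPOTHETICAL, dev-only; not a real Hadoop property. One stray
     "true" here would leave a "secure" cluster partially unsecured
     with no warning. -->
<property>
  <name>dfs.datanode.skip.port.verification.UNSAFE</name>
  <value>false</value>
</property>
```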