Looking through the code for Spark on YARN, I don't see that spark.executor.extraLibraryPath is being properly applied when executors are launched. ClientBase is using spark.driver.libraryPath instead.
Note that I haven't actually tested this, so it's possible I missed something.
I also think it would be better to use LD_LIBRARY_PATH rather than -Djava.library.path: once java.library.path is set explicitly, the JVM doesn't search LD_LIBRARY_PATH. In Hadoop we switched from java.library.path to LD_LIBRARY_PATH for this reason; see https://issues.apache.org/jira/browse/MAPREDUCE-4072. I'll split that change into a separate JIRA.
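To make the suggestion concrete, here is a minimal sketch (in Python for illustration; the actual Spark code is Scala in ClientBase, and the function and variable names here are hypothetical) of merging the executor's extraLibraryPath into the container's LD_LIBRARY_PATH environment variable rather than emitting a -Djava.library.path JVM flag:

```python
def build_executor_env(base_env, extra_library_path):
    """Prepend the configured library path to LD_LIBRARY_PATH in the
    executor's launch environment. We avoid -Djava.library.path because
    an explicit java.library.path replaces the LD_LIBRARY_PATH-derived
    search path entirely."""
    env = dict(base_env)
    if extra_library_path:
        existing = env.get("LD_LIBRARY_PATH", "")
        env["LD_LIBRARY_PATH"] = (
            extra_library_path + (":" + existing if existing else "")
        )
    return env

# Hypothetical usage: note the *executor* setting is consulted,
# not spark.driver.libraryPath.
conf = {"spark.executor.extraLibraryPath": "/opt/hadoop/lib/native"}
env = build_executor_env(
    {"LD_LIBRARY_PATH": "/usr/lib"},
    conf.get("spark.executor.extraLibraryPath"),
)
```

The prepend-with-colon behavior mirrors how the dynamic linker merges search paths, so natives shipped with the application win over system copies.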