I suggest introducing shading in Apache Spark to resolve the dependency hell that can occur when building or deploying it. This mainly affects Java projects and Hadoop environments, but shading would also help when using Spark from Scala and even from Python.
Flink has adopted a similar solution by publishing flink-shaded.
The dependencies I think are most relevant for shading are Jackson, Guava, Netty, and, where possible, the Hadoop ecosystem artifacts.
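To illustrate what this could look like, here is a minimal sketch of a Maven `maven-shade-plugin` configuration that relocates Guava and Jackson classes under an internal package prefix, so user applications can bring their own versions without conflicts. The prefix `org.apache.spark.shaded` is only an assumption for illustration, not an existing Spark convention:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.5.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <!-- Relocate Guava; bytecode references are rewritten to the new package -->
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.spark.shaded.com.google.common</shadedPattern>
          </relocation>
          <!-- Same treatment for Jackson -->
          <relocation>
            <pattern>com.fasterxml.jackson</pattern>
            <shadedPattern>org.apache.spark.shaded.com.fasterxml.jackson</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With relocations like these, the shaded classes live in a private namespace inside the jar, and the versions on the user's classpath no longer clash with the ones Spark depends on internally.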
As for releasing sources for the shaded artifacts, I think the issue raised in the Flink project is relevant and remains unanswered here as well, so I don't think that's an option currently (and personally I don't see much value in it either).