Description
Right now, we always share Hadoop classes between the Spark side and the metastore client side (HiveClientImpl). However, the metastore client may use a different version of Hadoop than the one Spark was built against, and in that case we cannot share Hadoop classes. Once we disable sharing of Hadoop classes, we can no longer pass a Hadoop Configuration object to HiveClientImpl, because Configuration would be defined by two different classloaders, and the two resulting classes are distinct types whose instances cannot be cast to one another.
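One way to sidestep this (a sketch of the general technique, not necessarily the exact change tracked by this issue) is to flatten the configuration into a Map<String, String> before handing it across the classloader boundary: java.lang.String and java.util.Map are defined by the JDK's bootstrap/platform classloader and are therefore the same class on both sides, unlike org.apache.hadoop.conf.Configuration. The class and method names below (ConfAsMap, flatten) are hypothetical; a HashMap stands in for a real Configuration, which does implement Iterable<Map.Entry<String, String>>, so the sketch stays self-contained.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfAsMap {
    // Hypothetical helper: flatten Hadoop-style key/value pairs into a plain
    // Map<String, String>. Because String and Map come from the JDK, this map
    // can safely cross an isolated-classloader boundary; a Configuration
    // instance cannot, since each classloader defines its own Configuration class.
    static Map<String, String> flatten(Iterable<Map.Entry<String, String>> conf) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : conf) {
            out.put(e.getKey(), e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        // Hadoop's Configuration implements Iterable<Map.Entry<String, String>>;
        // a HashMap's entry set stands in for it here.
        Map<String, String> fakeConf = new HashMap<>();
        fakeConf.put("hive.metastore.uris", "thrift://localhost:9083");
        Map<String, String> shared = flatten(fakeConf.entrySet());
        System.out.println(shared.get("hive.metastore.uris"));
    }
}
```

On the metastore-client side, the receiving code can then rebuild its own Configuration (loaded by its own classloader) from the string map.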
Attachments
Issue Links
- links to