Details
- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Fixed
- 1.6.0
- None
Description
To reproduce, create any table whose location uses a non-HDFS scheme (s3a://, file:///, etc.) and enable HDFS Sync. This puts the plugin into an invalid state with the exception below:
ERROR org.apache.sentry.hdfs.MetastorePlugin: [main]: #### Could not create Initial AuthzPaths or HMSHandler !!
java.lang.IllegalArgumentException: pathElements cannot be NULL
    at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
    at org.apache.sentry.hdfs.HMSPaths$Entry.findPrefixEntry(HMSPaths.java:250)
    at org.apache.sentry.hdfs.HMSPaths$Entry.createAuthzObjPath(HMSPaths.java:197)
    at org.apache.sentry.hdfs.HMSPaths.addAuthzObject(HMSPaths.java:354)
    at org.apache.sentry.hdfs.HMSPaths.addPathsToAuthzObject(HMSPaths.java:388)
    at org.apache.sentry.hdfs.UpdateableAuthzPaths.applyPartialUpdate(UpdateableAuthzPaths.java:112)
    at org.apache.sentry.hdfs.UpdateableAuthzPaths.updatePartial(UpdateableAuthzPaths.java:74)
    at org.apache.sentry.hdfs.MetastoreCacheInitializer.createInitialUpdate(MetastoreCacheInitializer.java:245)
    at org.apache.sentry.hdfs.MetastorePlugin$1.run(MetastorePlugin.java:160)
    at org.apache.sentry.hdfs.MetastorePlugin.<init>(MetastorePlugin.java:197)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.apache.sentry.binding.metastore.SentryMetastorePostEventListener.<init>(SentryMetastorePostEventListener.java:78)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.getMetaStoreListeners(MetaStoreUtils.java:1439)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:485)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5775)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5770)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6022)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5947)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
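To reproduce this programmatically rather than from the Hive CLI, something like the sketch below creates a table with a non-HDFS location through the Hive JDBC driver; the HiveServer2 URL, credentials, table name, and location are placeholders rather than values from this report. Restarting the metastore with the Sentry HDFS sync plugin enabled then fails as shown above while the initial snapshot is built.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Illustrative reproduction sketch: create a table whose location uses a
// non-HDFS scheme (file:/// here; s3a:// behaves the same). The Hive JDBC
// driver (org.apache.hive.jdbc.HiveDriver) must be on the classpath.
public class NonHdfsLocationRepro {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://localhost:10000/default", "hive", "");
         Statement stmt = conn.createStatement()) {
      stmt.execute("CREATE EXTERNAL TABLE repro_non_hdfs (id INT) "
          + "LOCATION 'file:///tmp/repro_non_hdfs'");
    }
  }
}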
We check the parsed path for null before adding it to the PathsUpdate object in DbTask and PartitionTask, but this check was accidentally omitted from TableTask.
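The corresponding guard, sketched below with illustrative class, method, and variable names (the PathsUpdate calls are inferred from the sentry-hdfs classes in the trace, so treat the exact signatures as assumptions), is what the table handling needs:

import java.util.List;

import org.apache.sentry.hdfs.PathsUpdate;

// Illustrative sketch of the missing guard, not the committed patch: only add a
// table's location to the PathsUpdate when it parses into HDFS path elements.
// PathsUpdate.parsePath(...) is assumed to return null for non-HDFS schemes
// (s3a://, file:///, ...); passing that null through is what trips the
// "pathElements cannot be NULL" precondition in HMSPaths above.
final class TablePathGuard {
  static void addTablePathIfParsable(PathsUpdate update, String authzObjName, String location) {
    List<String> pathElements = PathsUpdate.parsePath(location);
    if (pathElements != null) {
      // Same null check DbTask and PartitionTask already perform.
      update.newPathChange(authzObjName).addToAddPaths(pathElements);
    }
    // A null result (non-HDFS location) is simply skipped.
  }
}

With that check in place, a non-HDFS table location is skipped during the initial snapshot instead of leaving the plugin in an invalid state.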