Description
Our HDFS cluster uses Router-Based Federation (https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html).
After enabling the system cube configuration, the HiveProducer write() function throws an exception:
ERROR [metrics-blocking-reservoir-scheduler-0] hive.HiveReservoirReporter:119 : Wrong FS: hdfs://DClusterNmg4/user/kylin/hive/hive_metrics_job_exception_qa/kday_date=2019-09-04, expected: hdfs://difed
java.lang.IllegalArgumentException: Wrong FS: hdfs://DClusterNmg4/user/kylin/hive/hive_metrics_job_exception_qa/kday_date=2019-09-04, expected: hdfs://difed
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:717)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:197)
    at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:109)
    at org.apache.hadoop.hdfs.DistributedFileSystem$23.doCall(DistributedFileSystem.java:1390)
    at org.apache.hadoop.hdfs.DistributedFileSystem$23.doCall(DistributedFileSystem.java:1386)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1402)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1494)
    at org.apache.kylin.metrics.lib.impl.hive.HiveProducer.write(HiveProducer.java:137)
    at org.apache.kylin.metrics.lib.impl.hive.HiveProducer.send(HiveProducer.java:122)
    at org.apache.kylin.metrics.lib.impl.hive.HiveReservoirReporter$HiveReservoirListener.onRecordUpdate(HiveReservoirReporter.java:117)
    at org.apache.kylin.metrics.lib.impl.BlockingReservoir.notifyListenerOfUpdatedRecord(BlockingReservoir.java:105)
    at org.apache.kylin.metrics.lib.impl.BlockingReservoir.onRecordUpdate(BlockingReservoir.java:93)
    at org.apache.kylin.metrics.lib.impl.BlockingReservoir.access$300(BlockingReservoir.java:33)
    at org.apache.kylin.metrics.lib.impl.BlockingReservoir$ReporterRunnable.run(BlockingReservoir.java:152)
    at java.lang.Thread.run(Thread.java:745)
This is because the default router namespace is hdfs://difed, while the actual federation namespaces are hdfs://DClusterNmg4, hdfs://DClusterNmg1, hdfs://DClusterNmg2, and so on.
So fs.defaultFS in core-site.xml is hdfs://difed, but the Hive table location path is hdfs://DClusterNmg4/user/... . As a result, defaultFs.exists(hiveLocationPath) throws the exception above.
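A minimal repro sketch of the failing call, assuming Hadoop's standard FileSystem API; the partition path is taken from the log above, and the class name WrongFsRepro is only illustrative:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WrongFsRepro {
    public static void main(String[] args) throws IOException {
        // fs.defaultFS from core-site.xml resolves to the router namespace hdfs://difed
        Configuration conf = new Configuration();
        FileSystem defaultFs = FileSystem.get(conf);
        // ...but the Hive table partition lives on a concrete federation namespace
        Path partitionPath = new Path(
                "hdfs://DClusterNmg4/user/kylin/hive/hive_metrics_job_exception_qa/kday_date=2019-09-04");
        // FileSystem.checkPath() rejects the foreign authority and throws
        // java.lang.IllegalArgumentException: Wrong FS: ..., expected: hdfs://difed
        defaultFs.exists(partitionPath);
    }
}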
So we need to check whether the default FS URI is a prefix of the Hive table location path. If it is not, we should obtain a new FileSystem from the location path itself instead of reusing the default one.
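A minimal sketch of that check, again using Hadoop's standard FileSystem API; the helper name resolveFileSystem is hypothetical and not the actual HiveProducer patch:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsResolver {
    // Return a FileSystem that can serve tableLocation. If the location does not
    // live under fs.defaultFS (e.g. hdfs://difed), resolve a FileSystem from the
    // path's own scheme/authority (e.g. hdfs://DClusterNmg4) instead.
    static FileSystem resolveFileSystem(Configuration conf, Path tableLocation) throws IOException {
        FileSystem defaultFs = FileSystem.get(conf);
        String defaultUri = defaultFs.getUri().toString();      // e.g. "hdfs://difed"
        String locationUri = tableLocation.toUri().toString();  // e.g. "hdfs://DClusterNmg4/user/..."
        if (locationUri.startsWith(defaultUri)) {
            return defaultFs;                                   // same namespace, safe to reuse
        }
        return tableLocation.getFileSystem(conf);               // different federation namespace
    }
}

With such a check in place, HiveProducer.write() would call resolveFileSystem(conf, partitionPath).exists(partitionPath) rather than defaultFs.exists(partitionPath), so the Wrong FS check in FileSystem.checkPath() is never triggered.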