Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Fix Version: v2.1.0
- Environment: kylin: 2.1.0, hadoop: 2.7.3, hive: 1.2.1, hbase: 1.2.5
- Labels: Patch
Description
When HBase runs on a separate cluster from Hive and MapReduce, Kylin throws a "wrong FS" exception. See this thread for details: http://apache-kylin.74782.x6.nabble.com/wrong-fs-when-use-two-cluster-td8985.html
Beyond that, I found that kylin-2.0 is not compatible with kylin-2.1, which causes queries to fail: in the function writeLargeCellToHdfs(...), kylin-2.1 writes the content to the MapReduce cluster, whereas in kylin-2.0 the destination was the HBase cluster.
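The incompatibility above comes down to which cluster's filesystem a path is resolved against. A minimal sketch of the mismatch, using hypothetical cluster URIs and a hypothetical metadata path (plain `java.net.URI` stands in for Hadoop's `Path`/`FileSystem` resolution):

```java
import java.net.URI;

public class WrongFsSketch {
    public static void main(String[] args) {
        // Assumed cluster addresses for illustration only
        URI mrClusterFs    = URI.create("hdfs://mr-cluster:8020");    // Hive/MapReduce cluster
        URI hbaseClusterFs = URI.create("hdfs://hbase-cluster:8020"); // separate HBase cluster
        String largeCellPath = "/kylin/resources/large_cell";          // hypothetical metadata path

        // kylin-2.1 behavior (per this report): the large cell lands on the
        // MapReduce cluster's filesystem.
        URI writtenBy21 = mrClusterFs.resolve(largeCellPath);

        // kylin-2.0 behavior: the same relative path was resolved against the
        // HBase cluster, so a reader expecting one location misses the other.
        URI writtenBy20 = hbaseClusterFs.resolve(largeCellPath);

        System.out.println(writtenBy21); // hdfs://mr-cluster:8020/kylin/resources/large_cell
        System.out.println(writtenBy20); // hdfs://hbase-cluster:8020/kylin/resources/large_cell
    }
}
```

Because the two versions resolve the same logical path against different clusters, a query node upgraded to 2.1 cannot find metadata written by 2.0, and vice versa.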
Attachments
Issue Links
- relates to KYLIN-2869: When using non-default FS as the working-dir, Kylin puts large metadata file to default FS (Closed)