Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
Description
Currently, Kylin on Parquet already supports debugging the source code with local CSV files, without depending on a remote HDP sandbox, but the setup is somewhat complex. The steps are as follows:
- Edit $KYLIN_SOURCE_DIR/examples/test_case_data/sandbox/kylin.properties for a local setup:
kylin.metadata.url=$LOCAL_META_DIR
kylin.env.zookeeper-is-local=true
kylin.env.hdfs-working-dir=file:///path/to/local/dir
kylin.engine.spark-conf.spark.master=local
kylin.engine.spark-conf.spark.eventLog.dir=/path/to/local/dir
kylin.env=UT
- Debug org.apache.kylin.rest.DebugTomcat in IDEA, adding the VM option "-Dspark.local=true"
- Load the CSV data source via "Data Source -> Load CSV File as Table" on the "Model" page, set the schema for your table, then press "Submit" to save.
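To illustrate what the last step asks the user to confirm, here is a minimal sketch of turning a CSV header row into column names; the class and method names are hypothetical, not Kylin APIs:

```java
import java.util.Arrays;

// Hypothetical illustration: derive column names from a CSV header line,
// roughly what "Load CSV File as Table" asks the user to review and confirm.
public class CsvSchemaSketch {
    // Split a header line on commas and trim each column name.
    public static String[] inferColumns(String headerLine) {
        return Arrays.stream(headerLine.split(","))
                .map(String::trim)
                .toArray(String[]::new);
    }

    public static void main(String[] args) {
        String[] cols = inferColumns("TRANS_ID, PART_DT, LSTG_FORMAT_NAME");
        System.out.println(Arrays.toString(cols));
        // prints [TRANS_ID, PART_DT, LSTG_FORMAT_NAME]
    }
}
```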
Most of the time, when debugging we just want to build and query a cube quickly and focus on the bug we are trying to resolve. But the current workflow is cumbersome: loading CSV tables and creating a model and cube by hand takes effort, and it is hard to reuse the Kylin sample cube. So I want to add a CSV source that uses the model of the Kylin sample data directly when DebugTomcat starts.
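The proposal above could look roughly like the sketch below. All class and method names here are hypothetical, for illustration only; a real implementation would hook into Kylin's source and metadata layer. The idea: when DebugTomcat starts with -Dspark.local=true, scan a directory of sample CSV files and register each file as a table, so the sample model and cube are usable without any UI steps.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed CSV source: at debug startup,
// register every sample CSV file as a table instead of making the
// user load each file through the web UI.
public class SampleCsvSourceSketch {
    // Collect table names from *.csv files in the sample data directory,
    // e.g. "DEFAULT.KYLIN_SALES.csv" -> table "DEFAULT.KYLIN_SALES".
    public static List<String> discoverTables(File sampleDir) {
        List<String> tables = new ArrayList<>();
        File[] files = sampleDir.listFiles((dir, name) -> name.endsWith(".csv"));
        if (files == null) {
            return tables; // directory missing or unreadable
        }
        for (File f : files) {
            String name = f.getName();
            tables.add(name.substring(0, name.length() - ".csv".length()));
        }
        return tables;
    }

    public static void main(String[] args) {
        // In the real flow, this check would run during DebugTomcat startup.
        boolean sparkLocal = Boolean.getBoolean("spark.local");
        System.out.println("spark.local=" + sparkLocal);
    }
}
```

Each discovered table would then be registered against the sample model's metadata, so building and querying the sample cube works immediately after startup.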