Details
Type: Bug
Status: Resolved
Priority: Minor
Resolution: Fixed
Affects Version/s: 2.0.0
Fix Version/s: None
Environment: 3 node ANT cluster, one track
Description
The Spark shell program given in https://github.com/apache/carbondata/blob/master/docs/hive-guide.md does not work.
A working program is given below:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.CarbonSession._

// Build a SparkSession with Hive support and the CarbonData extensions enabled
val newSpark = SparkSession.builder().config(sc.getConf).enableHiveSupport().config("spark.sql.extensions", "org.apache.spark.sql.CarbonExtensions").getOrCreate()

// Create a CarbonData table, load the sample CSV from HDFS, and query it
newSpark.sql("drop table if exists hive_carbon").show
newSpark.sql("create table hive_carbon(id int, name string, scale decimal, country string, salary double) STORED AS carbondata").show
newSpark.sql("LOAD DATA INPATH 'hdfs://hacluster/user/prasanna/samplehive.csv' INTO TABLE hive_carbon").show
newSpark.sql("SELECT * FROM hive_carbon").show()
Please update https://github.com/apache/carbondata/blob/master/docs/hive-guide.md with the working program above, under the "Start Spark shell by running the following command in the Spark directory" section.
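While verifying the doc change, a few optional follow-up queries (my own additions, not part of the guide) can confirm that the table exists and the load succeeded, reusing the session and table from the program above:

// Hypothetical verification queries; newSpark and hive_carbon come from the working program
newSpark.sql("SHOW TABLES").show()
newSpark.sql("SELECT count(*) FROM hive_carbon").show()
newSpark.sql("DESCRIBE FORMATTED hive_carbon").show(false)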