Description
When trying to create a table like this:
CREATE TABLE IF NOT EXISTS will_not_work (
  timestamp string,
  name string
)
PARTITIONED BY (dt string, hr string)
STORED AS carbondata
LOCATION 's3a://my-bucket/CarbonDataTests/will_not_work'
The folder 's3a://my-bucket/CarbonDataTests/will_not_work' does not exist.
I get the following error:
org.apache.carbondata.common.exceptions.sql.MalformedCarbonCommandException: Partition is not supported for external table
  at org.apache.spark.sql.parser.CarbonSparkSqlParserUtil$.buildTableInfoFromCatalogTable(CarbonSparkSqlParserUtil.scala:219)
  at org.apache.spark.sql.CarbonSource$.createTableInfo(CarbonSource.scala:235)
  at org.apache.spark.sql.CarbonSource$.createTableMeta(CarbonSource.scala:394)
  at org.apache.spark.sql.execution.command.table.CarbonCreateDataSourceTableCommand.processMetadata(CarbonCreateDataSourceTableCommand.scala:69)
  at org.apache.spark.sql.execution.command.MetadataCommand$$anonfun$run$1.apply(package.scala:137)
  at org.apache.spark.sql.execution.command.MetadataCommand$$anonfun$run$1.apply(package.scala:137)
  at org.apache.spark.sql.execution.command.Auditable$class.runWithAudit(package.scala:118)
  at org.apache.spark.sql.execution.command.MetadataCommand.runWithAudit(package.scala:134)
  at org.apache.spark.sql.execution.command.MetadataCommand.run(package.scala:137)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
  at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
  at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
  at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3364)
  at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3363)
  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:194)
  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:643)
  ... 64 elided
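The exception is thrown because the LOCATION clause makes CarbonData treat the table as external, and the parser rejects PARTITIONED BY on external tables. As a point of comparison (a sketch only, based on the error message; the table name is made up and I have not confirmed this is the intended limitation), the same statement without LOCATION, i.e. as a managed table, does not hit this check:

```sql
-- Hypothetical variant: managed (non-external) partitioned table.
-- Dropping the LOCATION clause avoids the
-- "Partition is not supported for external table" path.
CREATE TABLE IF NOT EXISTS managed_variant (
  timestamp string,
  name string
)
PARTITIONED BY (dt string, hr string)
STORED AS carbondata
```

This is only a workaround sketch, not a fix: the report is about creating a partitioned table at an explicit s3a location, which the parser currently rejects outright.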