CarbonData / CARBONDATA-4299

Support alter change datatype/change decimal precision for complex column


Details

    • Type: Bug
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 2.3.0
    • Fix Version/s: None
    • Component/s: data-query
    • Labels: None
    • Environment: Contents verified on Spark 3.1.1

    Description

      Issue 1: ALTER TABLE CHANGE of decimal precision for a complex column fails with Spark 3.1.1.

      drop table if exists alTer_Com;
      CREATE TABLE alter_com (intfield int,EDUCATED string ,rankk string ,map1 map<int,int> , map2 map<string,array<int>> , map3 map<int, map<string,int>> , map4 map<string,struct<b:int>> ,strField struct<a:decimal(5,2)> ,mapField1 map<int,decimal(5,2)> , mapField2 map<int,struct<a:decimal(5,2)>> ,arrField array<decimal(5,2)>) STORED AS carbondata;
      insert into alter_com values (1,'cse','xi',map(1,2), map('a',array(1,2)), map(2,map('hello',1)), map('hi',named_struct('b',3)),named_struct('a', 123.45),map(1, 123.45),map(1, named_struct('a', 123.45)),array(123.45));
      alter table alter_com change map1 map11 map<int,int>;
      alter table alter_com change strField strField1 struct<a1:decimal(6,2)> ;
      alter table alter_com change mapField1 mapField11 map<int,decimal(6,2)> ;

      Error message for all of the above scenarios:

      0: jdbc:hive2://10.21.19.14:23040/default> alter table alter_com change strField strField1 struct<a1:decimal(6,2)> ;
      Error: org.apache.hive.service.cli.HiveSQLException: Error running query: org.apache.carbondata.spark.exception.ProcessMetaDataException: operation failed for ranjan.alter_com: Alter table data type change or column rename operation failed: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter table. The following columns have types incompatible with the existing columns in their respective positions :
      col
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:361)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:263)
      at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
      at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties(SparkOperation.scala:78)
      at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties$(SparkOperation.scala:62)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:43)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:263)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:258)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:422)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2.run(SparkExecuteStatementOperation.scala:272)
      at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      at java.lang.Thread.run(Thread.java:748)
      Caused by: org.apache.carbondata.spark.exception.ProcessMetaDataException: operation failed for ranjan.alter_com: Alter table data type change or column rename operation failed: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter table. The following columns have types incompatible with the existing columns in their respective positions :
      col
      at org.apache.spark.sql.execution.command.MetadataProcessOperation.throwMetadataException(package.scala:69)
      at org.apache.spark.sql.execution.command.MetadataProcessOperation.throwMetadataException$(package.scala:68)
      at org.apache.spark.sql.execution.command.MetadataCommand.throwMetadataException(package.scala:134)
      at org.apache.spark.sql.execution.command.schema.CarbonAlterTableColRenameDataTypeChangeCommand.processMetadata(CarbonAlterTableColRenameDataTypeChangeCommand.scala:360)
      at org.apache.spark.sql.execution.command.MetadataCommand.$anonfun$run$1(package.scala:137)
      at org.apache.spark.sql.execution.command.Auditable.runWithAudit(package.scala:118)
      at org.apache.spark.sql.execution.command.Auditable.runWithAudit$(package.scala:114)
      at org.apache.spark.sql.execution.command.MetadataCommand.runWithAudit(package.scala:134)
      at org.apache.spark.sql.execution.command.MetadataCommand.run(package.scala:137)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
      at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:228)
      at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3687)
      at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
      at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
      at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
      at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
      at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
      at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3685)
      at org.apache.spark.sql.Dataset.<init>(Dataset.scala:228)
      at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
      at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
      at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
      at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:615)
      at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
      at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:610)
      at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:325)
      ... 16 more (state=,code=0)
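
      For reference, the expected outcome of the three ALTER statements above (not the current behavior) is a metadata-only change that shows up in the table schema and keeps existing rows readable. A minimal verification sketch against the repro table, with the expected results shown as SQL comments:

      -- Expected schema after the ALTERs (hypothetical, once this is supported):
      DESCRIBE alter_com;
      -- map11       map<int,int>               (renamed from map1)
      -- strfield1   struct<a1:decimal(6,2)>    (renamed; child widened from decimal(5,2))
      -- mapfield11  map<int,decimal(6,2)>      (renamed; value widened from decimal(5,2))

      -- Existing data should remain readable with the widened precision:
      SELECT strfield1.a1, mapfield11[1] FROM alter_com;
      -- 123.45    123.45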

       

      Issue 2: ALTER TABLE CHANGE of data type for a complex column fails with Spark 3.1.1.

      DROP TABLE IF EXISTS alter_com;
      CREATE TABLE alter_com(intfield int,EDUCATED string ,rankk string,name string ) STORED AS carbondata;
      alter table alter_com add columns(mapField1 MAP<int, int>,strField1 struct<a:int,b:decimal(5,2)>, arrField1 array<int>);
      insert into alter_com values(1,'cse','xi','df',map(5, 6),named_struct('a',1,'b', 123.45),array(1));
      alter table alter_com change mapField1 mapField1 MAP<int, long>;

       

      Error message for the above scenario:

      0: jdbc:hive2://10.21.19.14:23040/default> alter table alter_com change mapField1 mapField1 MAP<int, long>;
      Error: org.apache.hive.service.cli.HiveSQLException: Error running query: org.apache.carbondata.spark.exception.ProcessMetaDataException: operation failed for ranjan.alter_com: Alter table data type change or column rename operation failed: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter table. The following columns have types incompatible with the existing columns in their respective positions :
      col
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:361)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:263)
      at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
      at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties(SparkOperation.scala:78)
      at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties$(SparkOperation.scala:62)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:43)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:263)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:258)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:422)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2.run(SparkExecuteStatementOperation.scala:272)
      at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      at java.lang.Thread.run(Thread.java:748)
      Caused by: org.apache.carbondata.spark.exception.ProcessMetaDataException: operation failed for ranjan.alter_com: Alter table data type change or column rename operation failed: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter table. The following columns have types incompatible with the existing columns in their respective positions :
      col
      at org.apache.spark.sql.execution.command.MetadataProcessOperation.throwMetadataException(package.scala:69)
      at org.apache.spark.sql.execution.command.MetadataProcessOperation.throwMetadataException$(package.scala:68)
      at org.apache.spark.sql.execution.command.MetadataCommand.throwMetadataException(package.scala:134)
      at org.apache.spark.sql.execution.command.schema.CarbonAlterTableColRenameDataTypeChangeCommand.processMetadata(CarbonAlterTableColRenameDataTypeChangeCommand.scala:360)
      at org.apache.spark.sql.execution.command.MetadataCommand.$anonfun$run$1(package.scala:137)
      at org.apache.spark.sql.execution.command.Auditable.runWithAudit(package.scala:118)
      at org.apache.spark.sql.execution.command.Auditable.runWithAudit$(package.scala:114)
      at org.apache.spark.sql.execution.command.MetadataCommand.runWithAudit(package.scala:134)
      at org.apache.spark.sql.execution.command.MetadataCommand.run(package.scala:137)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
      at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:228)
      at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3687)
      at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
      at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
      at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
      at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
      at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
      at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3685)
      at org.apache.spark.sql.Dataset.<init>(Dataset.scala:228)
      at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
      at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
      at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
      at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:615)
      at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
      at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:610)
      at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
      at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:325)
      ... 16 more (state=,code=0)
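
      For comparison, the same kind of widening on a primitive (top-level) column is accepted by ALTER TABLE CHANGE; only the nested/complex cases above fail. A minimal sketch on a hypothetical table (alter_prim is not part of the repro; it assumes CarbonData's documented support for INT -> BIGINT and decimal precision increase on primitive columns):

      -- Hypothetical comparison table, separate from the repro above:
      DROP TABLE IF EXISTS alter_prim;
      CREATE TABLE alter_prim (id int, price decimal(5,2)) STORED AS carbondata;
      insert into alter_prim values (1, 123.45);
      -- These primitive-column changes are expected to succeed:
      alter table alter_prim change id id bigint;
      alter table alter_prim change price price decimal(6,2);
      -- The equivalent changes nested inside MAP/STRUCT/ARRAY are what fail in Issue 1 and Issue 2.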

       

          People

            Assignee: Unassigned
            Reporter: PRIYESH RANJAN (pwx944901)