Details
- Type: Bug
- Status: Resolved
- Priority: Blocker
- Resolution: Fixed
- Fix Version/s: 1.17.0
- Labels: None
Description
When the Metastore is enabled and ANALYZE has been run for a table, the DESCRIBE TABLE statement fails with a ClassCastException:
set `metastore.enabled`=true;
analyze table lineitem refresh metadata;
describe table lineitem;
Error: SYSTEM ERROR: ClassCastException: java.lang.Long cannot be cast to java.lang.Double
Stack trace from the logs:
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: ClassCastException: java.lang.Long cannot be cast to java.lang.Double
Fragment 0:0
Please, refer to logs for more information.
[Error Id: 6b1295ee-7674-4362-a3c4-096e0688ed0b on user515050-pc:31010]
	at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:637)
	at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:363)
	at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:219)
	at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:329)
	at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Double
	at org.apache.drill.exec.store.ischema.Records$Column.<init>(Records.java:652)
	at org.apache.drill.exec.store.ischema.RecordCollector$MetastoreRecordCollector.lambda$columns$4(RecordCollector.java:350)
	at java.util.ArrayList.forEach(ArrayList.java:1257)
	at org.apache.drill.exec.store.ischema.RecordCollector$MetastoreRecordCollector.columns(RecordCollector.java:333)
	at org.apache.drill.exec.store.ischema.RecordCollector$MetastoreRecordCollector.lambda$columns$3(RecordCollector.java:308)
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
	at org.apache.drill.exec.store.ischema.RecordCollector$MetastoreRecordCollector.columns(RecordCollector.java:309)
	at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator$Columns.collect(InfoSchemaRecordGenerator.java:170)
	at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.lambda$visit$0(InfoSchemaRecordGenerator.java:75)
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
	at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747)
	at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721)
	at java.util.stream.AbstractTask.compute(AbstractTask.java:327)
	at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
	at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
	at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
	at java.util.stream.ReduceOps$ReduceOp.evaluateParallel(ReduceOps.java:714)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566)
	at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.visit(InfoSchemaRecordGenerator.java:77)
	at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:69)
	at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:63)
	at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:63)
	at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:51)
	at org.apache.drill.exec.store.ischema.InfoSchemaTableType.getRecordReader(InfoSchemaTableType.java:87)
	at org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:35)
	at org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:30)
	at org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:163)
	at org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:186)
	at org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:141)
	at org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:186)
	at org.apache.drill.exec.physical.impl.ImplCreator.getRootExec(ImplCreator.java:114)
	at org.apache.drill.exec.physical.impl.ImplCreator.getExec(ImplCreator.java:90)
	at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:263)
	... 4 common frames omitted
This issue was introduced by DRILL-7273: statistics and metadata were intermixed there, and as a result the ColumnStatisticsKind.NON_NULL_COUNT column statistic was misused in some places.
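For illustration only, below is a minimal, self-contained sketch of the failure mode; the map and key names are hypothetical stand-ins, not Drill's actual metastore API. A count statistic such as NON_NULL_COUNT is held as a boxed Long, so casting the boxed value directly to Double throws the ClassCastException seen above, while converting through Number would be safe:

import java.util.HashMap;
import java.util.Map;

public class StatisticsCastSketch {
    public static void main(String[] args) {
        // Hypothetical stand-in for column statistics: values are held as Object,
        // and a count statistic such as NON_NULL_COUNT is a boxed Long.
        Map<String, Object> stats = new HashMap<>();
        stats.put("NON_NULL_COUNT", 42L);

        // Failure mode: a boxed Long cannot be cast to Double.
        try {
            Double wrong = (Double) stats.get("NON_NULL_COUNT");
            System.out.println(wrong);
        } catch (ClassCastException e) {
            // Prints: java.lang.Long cannot be cast to java.lang.Double
            System.out.println("ClassCastException: " + e.getMessage());
        }

        // Safe pattern: read the value as a Number and convert explicitly.
        double nonNullCount = ((Number) stats.get("NON_NULL_COUNT")).doubleValue();
        System.out.println(nonNullCount); // 42.0
    }
}

The sketch only demonstrates why the cast in Records$Column (Records.java:652) fails at runtime; the actual fix depends on using the correct statistics kind for each field, per the resolution of this issue.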