Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Fix Version: 1.16.0
- Component: None
Description
Prerequisites:
- Create a simple .csv file with a header, for example:
  col1,col2,col3
  1,2,3
  4,5,6
  7,8,9
- Set exec.storage.enable_v3_text_reader=true
- Set "extractHeader": true for the csv format in the dfs storage plugin.
Query:
select columns[0] from dfs.tmp.`/test.csv`
Expected result: an exception should be thrown; this is the message produced by the V2 reader:
UNSUPPORTED_OPERATION ERROR: Drill Remote Exception
(java.lang.Exception) UNSUPPORTED_OPERATION ERROR: With extractHeader enabled, only header names are supported
column name columns
column index
Fragment 0:0
[Error Id: 5affa696-1dbd-43d7-ac14-72d235c00f43 on userf87d-pc:31010]
  org.apache.drill.common.exceptions.UserException$Builder.build():630
  org.apache.drill.exec.store.easy.text.compliant.FieldVarCharOutput.<init>():106
  org.apache.drill.exec.store.easy.text.compliant.CompliantTextRecordReader.setup():139
  org.apache.drill.exec.physical.impl.ScanBatch.getNextReaderIfHas():321
  org.apache.drill.exec.physical.impl.ScanBatch.internalNext():216
  org.apache.drill.exec.physical.impl.ScanBatch.next():271
  org.apache.drill.exec.record.AbstractRecordBatch.next():126
  org.apache.drill.exec.record.AbstractRecordBatch.next():116
  org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
  org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():101
  org.apache.drill.exec.record.AbstractRecordBatch.next():186
  org.apache.drill.exec.record.AbstractRecordBatch.next():126
  org.apache.drill.exec.record.AbstractRecordBatch.next():116
  org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
  org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():101
  org.apache.drill.exec.record.AbstractRecordBatch.next():186
  org.apache.drill.exec.record.AbstractRecordBatch.next():126
  org.apache.drill.exec.record.AbstractRecordBatch.next():116
  org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
  org.apache.drill.exec.record.AbstractRecordBatch.next():186
  org.apache.drill.exec.record.AbstractRecordBatch.next():126
  org.apache.drill.exec.record.AbstractRecordBatch.next():116
  org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
  org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():141
  org.apache.drill.exec.record.AbstractRecordBatch.next():186
  org.apache.drill.exec.physical.impl.BaseRootExec.next():104
  org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83
  org.apache.drill.exec.physical.impl.BaseRootExec.next():94
  org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():296
  org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():283
  .......():0
  org.apache.hadoop.security.UserGroupInformation.doAs():1746
  org.apache.drill.exec.work.fragment.FragmentExecutor.run():283
  org.apache.drill.common.SelfCleaningRunnable.run():38
  .......():0
Actual result: The exception message is inadequate:
org.apache.drill.common.exceptions.UserRemoteException: EXECUTION_ERROR ERROR: Table schema must have exactly one column.
Exception thrown from org.apache.drill.exec.physical.impl.scan.ScanOperatorExec
Fragment 0:0
[Error Id: a76a1576-419a-413f-840f-088157167a6d on userf87d-pc:31010]
(java.lang.IllegalStateException) Table schema must have exactly one column.
  org.apache.drill.exec.physical.impl.scan.columns.ColumnsArrayManager.resolveColumn():108
  org.apache.drill.exec.physical.impl.scan.project.ReaderLevelProjection.resolveSpecial():91
  org.apache.drill.exec.physical.impl.scan.project.ExplicitSchemaProjection.resolveRootTuple():62
  org.apache.drill.exec.physical.impl.scan.project.ExplicitSchemaProjection.<init>():52
  org.apache.drill.exec.physical.impl.scan.project.ReaderSchemaOrchestrator.doExplicitProjection():223
  org.apache.drill.exec.physical.impl.scan.project.ReaderSchemaOrchestrator.reviseOutputProjection():155
  org.apache.drill.exec.physical.impl.scan.project.ReaderSchemaOrchestrator.endBatch():117
  org.apache.drill.exec.physical.impl.scan.project.ReaderSchemaOrchestrator.defineSchema():94
  org.apache.drill.exec.physical.impl.scan.framework.ShimBatchReader.defineSchema():105
  org.apache.drill.exec.physical.impl.scan.ReaderState.buildSchema():300
  org.apache.drill.exec.physical.impl.scan.ScanOperatorExec.nextAction():182
  org.apache.drill.exec.physical.impl.scan.ScanOperatorExec.buildSchema():122
  org.apache.drill.exec.physical.impl.protocol.OperatorDriver.start():160
  org.apache.drill.exec.physical.impl.protocol.OperatorDriver.next():112
  org.apache.drill.exec.physical.impl.protocol.OperatorRecordBatch.next():147
  org.apache.drill.exec.record.AbstractRecordBatch.next():126
  org.apache.drill.exec.record.AbstractRecordBatch.next():116
  org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
  org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():101
  org.apache.drill.exec.record.AbstractRecordBatch.next():186
  org.apache.drill.exec.record.AbstractRecordBatch.next():126
  org.apache.drill.exec.record.AbstractRecordBatch.next():116
  org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
  org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():101
  org.apache.drill.exec.record.AbstractRecordBatch.next():186
  org.apache.drill.exec.record.AbstractRecordBatch.next():126
  org.apache.drill.exec.record.AbstractRecordBatch.next():116
  org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
  org.apache.drill.exec.record.AbstractRecordBatch.next():186
  org.apache.drill.exec.record.AbstractRecordBatch.next():126
  org.apache.drill.exec.record.AbstractRecordBatch.next():116
  org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
  org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():141
  org.apache.drill.exec.record.AbstractRecordBatch.next():186
  org.apache.drill.exec.physical.impl.BaseRootExec.next():104
  org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83
  org.apache.drill.exec.physical.impl.BaseRootExec.next():94
  org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():296
  org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():283
  java.security.AccessController.doPrivileged():-2
  javax.security.auth.Subject.doAs():422
  org.apache.hadoop.security.UserGroupInformation.doAs():1746
  org.apache.drill.exec.work.fragment.FragmentExecutor.run():283
  org.apache.drill.common.SelfCleaningRunnable.run():38
  java.util.concurrent.ThreadPoolExecutor.runWorker():1149
  java.util.concurrent.ThreadPoolExecutor$Worker.run():624
  java.lang.Thread.run():748