Apache Drill
DRILL-420

float literal is interpreted as BigInt

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.4.0
    • Fix Version/s: 0.4.0
    • Component/s: None
    • Labels:
      None

      Description

      From sqlline, issue the following:

      0: jdbc:drill:> select 1.1+2.6 from `customer.json` limit 1;
      ------------
      EXPR$0
      ------------
      3
      ------------

      Notice the result is 3 instead of 3.7.
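The arithmetic is consistent with each decimal literal being narrowed to a 64-bit integer before the addition. A minimal standalone Java sketch (illustration only, not Drill code) of the observed versus expected behavior:

```java
// Illustration only (not Drill code): why 1.1 + 2.6 yields 3 when the
// decimal literals are narrowed to 64-bit integers (BIGINT) before adding.
public class LiteralTruncationSketch {
    public static void main(String[] args) {
        // Observed (buggy) behavior: fractional parts are dropped.
        long l1 = (long) 1.1; // becomes 1
        long l2 = (long) 2.6; // becomes 2
        System.out.println(l1 + l2); // prints 3

        // Expected behavior: literals kept as a floating-point/decimal type.
        double d1 = 1.1;
        double d2 = 2.6;
        System.out.println(d1 + d2); // approximately 3.7
    }
}
```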

      The sqlline log shows the result column is typed as BIGINT:
      11:42:05.223 [Client-1] DEBUG o.a.d.e.rpc.user.QueryResultHandler - Received QueryId part1: -4656127306686443884
      part2: -8283317532111349525
      successfully. Adding listener org.apache.drill.jdbc.DrillResultSet$Listener@5cd38dd
      11:42:05.915 [Client-1] DEBUG org.apache.drill.jdbc.DrillResultSet - Result arrived QueryResultBatch [header=query_id
      { part1: -4656127306686443884 part2: -8283317532111349525 }
      is_last_chunk: false
      row_count: 1
      def {
        field {
          def {
            name { type: NAME name: "EXPR$0" }
            major_type { minor_type: BIGINT mode: REQUIRED }
          }
          value_count: 1
          buffer_length: 8
        }
        record_count: 1
        is_selection_vector_2: false
      }
      , data=SlicedByteBuf(ridx: 0, widx: 8, cap: 8/8, unwrapped: AccountingByteBuf [Inner buffer=PooledUnsafeDirectByteBufL(ridx: 76, widx: 76, cap: 76), size=76])]
      11:42:05.917 [Client-1] DEBUG org.apache.drill.jdbc.DrillResultSet - Result arrived QueryResultBatch [header=query_id
      { part1: -4656127306686443884 part2: -8283317532111349525 }
      is_last_chunk: true
      row_count: 0
      def {
      }
      , data=null]

      The drillbit log confirms this: in the logical plan below, the projection expression is " (1) + (2) ", meaning the fractional parts of the literals were already dropped when the plan was built:
      11:42:05.226 [WorkManager Event Thread] DEBUG o.apache.drill.exec.work.WorkManager - Starting pending task org.apache.drill.exec.work.foreman.Foreman@4c097753
      11:42:05.238 [WorkManager-3] DEBUG o.a.d.e.planner.logical.DrillOptiq - RexCall +(1.1, 2.6), {}
      11:42:05.238 [WorkManager-3] DEBUG o.a.d.e.planner.logical.DrillOptiq - Binary
      11:42:05.242 [WorkManager-3] DEBUG o.a.drill.exec.work.foreman.Foreman - Converting logical plan {
        "head" : {
          "version" : 1,
          "generator" : { "type" : "org.apache.drill.exec.planner.logical.DrillImplementor", "info" : "" },
          "type" : "APACHE_DRILL_LOGICAL",
          "resultMode" : "EXEC"
        },
        "storage" : {
          "cp" : { "type" : "file", "connection" : "classpath:///", "workspaces" : null, "formats" : null }
        },
        "query" : [ {
          "op" : "scan",
          "@id" : 1,
          "storageengine" : "cp",
          "selection" : {
            "format" : { "type" : "json" },
            "files" : [ "/customer.json" ]
          },
          "ref" : null
        }, {
          "op" : "project",
          "@id" : 2,
          "input" : 1,
          "projections" : [ { "ref" : "output.EXPR$0", "expr" : " (1) + (2) " } ]
        }, {
          "op" : "limit", "@id" : 3, "input" : 2, "first" : 0, "last" : 1
        }, {
          "op" : "store", "@id" : 4, "input" : 3, "target" : null, "storageEngine" : "--SCREEN--"
        } ]
      }.
      11:42:05.244 [WorkManager-3] DEBUG o.a.drill.common.config.DrillConfig - Loading configs at the following URLs [jar:file:/opt/drill/jars/drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar!/drill-module.conf, jar:file:/opt/drill/jars/drill-common-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar!/drill-module.conf]
      11:42:05.250 [WorkManager-3] DEBUG o.a.d.c.l.data.LogicalOperatorBase - Adding Logical Operator sub types: [class org.apache.drill.common.logical.data.Transform, class org.apache.drill.common.logical.data.Limit, class org.apache.drill.common.logical.data.Union, class org.apache.drill.common.logical.data.Sequence, class org.apache.drill.common.logical.data.Scan, class org.apache.drill.common.logical.data.Order, class org.apache.drill.common.logical.data.WindowFrame, class org.apache.drill.common.logical.data.Constant, class org.apache.drill.common.logical.data.Project, class org.apache.drill.common.logical.data.Join, class org.apache.drill.common.logical.data.GroupingAggregate, class org.apache.drill.common.logical.data.Store, class org.apache.drill.common.logical.data.Filter, class org.apache.drill.common.logical.data.RunningAggregate, class org.apache.drill.common.logical.data.Flatten]
      11:42:05.251 [WorkManager-3] DEBUG o.a.d.c.l.StoragePluginConfigBase - Adding Storage Engine Configs including [class org.apache.drill.exec.store.ischema.InfoSchemaConfig, class org.apache.drill.exec.store.mock.MockStorageEngineConfig, class org.apache.drill.exec.store.dfs.FileSystemConfig, class org.apache.drill.exec.store.NamedStoragePluginConfig, class org.apache.drill.exec.store.dfs.FileSystemFormatConfig, class org.apache.drill.exec.store.hive.HiveStoragePluginConfig]
      11:42:05.252 [WorkManager-3] DEBUG o.a.d.c.l.FormatPluginConfigBase - Adding Format Plugin Configs including [class org.apache.drill.exec.store.dfs.NamedFormatPluginConfig, class org.apache.drill.exec.store.parquet.ParquetFormatConfig, class org.apache.drill.exec.store.easy.json.JSONFormatPlugin$JSONFormatConfig]
      11:42:05.269 [WorkManager-3] DEBUG o.a.d.e.s.schedule.BlockMapBuilder - Took 0 ms to build endpoint map
      11:42:05.271 [WorkManager-3] DEBUG o.a.d.e.s.schedule.BlockMapBuilder - Failure finding Drillbit running on host localhost. Skipping affinity to that host.
      11:42:05.271 [WorkManager-3] DEBUG o.a.d.e.s.schedule.BlockMapBuilder - FileWork group (/customer.json,0) max bytes 0
      11:42:05.271 [WorkManager-3] DEBUG o.a.d.e.s.schedule.BlockMapBuilder - Took 0 ms to set endpoint bytes
      11:42:05.272 [WorkManager-3] DEBUG o.a.d.e.s.schedule.AffinityCreator - Took 0 ms to get operator affinity
      11:42:05.273 [WorkManager-3] DEBUG o.a.d.e.s.schedule.AssignmentCreator - Took 0 ms to apply assignments
      11:42:05.276 [WorkManager-3] DEBUG o.a.d.e.p.f.SimpleParallelizer - Root fragment:
      handle {
        query_id { part1: -4656127306686443884 part2: -8283317532111349525 }
        major_fragment_id: 0
        minor_fragment_id: 0
      }
      network_cost: 0.0
      cpu_cost: 0.0
      disk_cost: 0.0
      memory_cost: 0.0
      fragment_json: "{\n \"pop\" : \"screen\",\n \"@id\" : 1,\n \"child\" : {\n \"pop\" : \"selection-vector-remover\",\n \"@id\" : 2,\n \"child\" : {\n \"pop\" : \"limit\",\n \"@id\" : 3,\n \"child\" : {\n \"pop\" : \"project\",\n \"@id\" : 4,\n \"exprs\" : [ {\n \"ref\" : \"output.EXPR$0\",\n \"expr\" : \" (1) + (2) \"\n } ],\n \"child\" : {\n \"pop\" : \"fs-sub-scan\",\n \"@id\" : 5,\n \"files\" : [ {\n \"start\" : 0,\n \"length\" : 1,\n \"path\" : \"/customer.json\"\n } ],\n \"storage\" : {\n \"type\" : \"file\",\n \"connection\" : \"classpath:///\",\n \"workspaces\" : null,\n \"formats\" : null\n },\n \"format\" : {\n \"type\" : \"json\"\n }\n }\n },\n \"first\" : 0,\n \"last\" : 1\n }\n }\n}"
      leaf_fragment: true
      assignment { address: "qa-node118.qa.lab" user_port: 31010 control_port: 31011 data_port: 31012 }
      foreman { address: "qa-node118.qa.lab" user_port: 31010 control_port: 31011 data_port: 31012 }

      11:42:05.277 [WorkManager-3] DEBUG o.a.d.exec.rpc.control.WorkEventBus - Adding fragment status listener for queryId part1: -4656127306686443884
      part2: -8283317532111349525
      .
      11:42:05.277 [WorkManager-3] DEBUG o.a.drill.exec.work.foreman.Foreman - Storing fragments
      11:42:05.277 [WorkManager-3] DEBUG o.a.drill.exec.work.foreman.Foreman - Fragments stored.
      11:42:05.278 [WorkManager-3] DEBUG o.a.drill.exec.work.foreman.Foreman - Submitting fragments to run.
      11:42:05.278 [WorkManager-3] DEBUG o.a.d.exec.work.foreman.QueryManager - Setting up fragment runs.
      11:42:05.278 [WorkManager-3] DEBUG o.a.d.exec.work.foreman.QueryManager - Setting up root context.
      11:42:05.279 [WorkManager-3] DEBUG o.a.drill.exec.ops.FragmentContext - Getting initial memory allocation of 20000000
      11:42:05.279 [WorkManager-3] DEBUG o.a.d.exec.work.foreman.QueryManager - Setting up incoming buffers
      11:42:05.280 [WorkManager-3] DEBUG o.a.d.e.work.batch.IncomingBuffers - Came up with a list of 0 required fragments. Fragments {}
      11:42:05.280 [WorkManager-3] DEBUG o.a.d.exec.work.foreman.QueryManager - Setting buffers on root context.
      11:42:05.281 [WorkManager-3] DEBUG o.a.d.exec.work.foreman.QueryManager - Generating Exec tree
      11:42:05.313 [WorkManager-3] DEBUG o.a.d.e.p.i.s.RemovingRecordBatch - Created.
      11:42:05.314 [WorkManager-3] DEBUG o.a.d.exec.work.foreman.QueryManager - Exec tree generated.
      11:42:05.314 [WorkManager-3] DEBUG o.a.d.exec.work.foreman.QueryManager - Fragment added to local node.
      11:42:05.315 [WorkManager-3] DEBUG o.apache.drill.exec.work.WorkManager - Adding pending task org.apache.drill.exec.work.fragment.FragmentExecutor@4ea74049
      11:42:05.315 [WorkManager-3] DEBUG o.a.d.exec.work.foreman.QueryManager - Fragment runs setup is complete.
      11:42:05.315 [WorkManager Event Thread] DEBUG o.apache.drill.exec.work.WorkManager - Starting pending task org.apache.drill.exec.work.fragment.FragmentExecutor@4ea74049
      11:42:05.316 [WorkManager-3] DEBUG o.a.drill.exec.work.foreman.Foreman - Fragments running.
      11:42:05.316 [WorkManager-4] DEBUG o.a.d.e.w.fragment.FragmentExecutor - Starting fragment runner. 0:0
      11:42:05.316 [WorkManager-4] DEBUG o.a.d.exec.work.foreman.QueryManager - New fragment status was provided to Foreman of memory_use: 0
      batches_completed: 0
      records_completed: 0
      state: RUNNING
      data_processed: 0
      handle {
      query_id { part1: -4656127306686443884 part2: -8283317532111349525 }
      major_fragment_id: 0
      minor_fragment_id: 0
      }
      running_time: 10523888674029567

      11:42:05.889 [WorkManager-4] DEBUG o.a.d.e.p.i.p.ProjectRecordBatch - Added eval.
      11:42:05.891 [WorkManager-4] DEBUG o.a.d.e.compile.JaninoClassCompiler - Compiling:
      1:
      2: package org.apache.drill.exec.test.generated;
      3:
      4: import org.apache.drill.exec.exception.SchemaChangeException;
      5: import org.apache.drill.exec.expr.holders.BigIntHolder;
      6: import org.apache.drill.exec.ops.FragmentContext;
      7: import org.apache.drill.exec.record.RecordBatch;
      8: import org.apache.drill.exec.vector.BigIntVector;
      9:
      10: public class ProjectorGen29 {
      11:
      12: BigIntVector vv3;
      13:
      14: public void doSetup(FragmentContext context, RecordBatch incoming, RecordBatch outgoing)
      15: throws SchemaChangeException
      16: {
      17: {
      18: /** start SETUP for function add **/
      19: {
      20: {}
      21: }
      22: /** end SETUP for function add **/
      23: Object tmp4 = (outgoing).getValueAccessorById(0, BigIntVector.class).getValueVector();
      24: if (tmp4 == null) {
      25: throw new SchemaChangeException("Failure while loading vector vv3 with id: TypedFieldId [type=minor_type: BIGINT\nmode: REQUIRED\n, fieldId=0, isSuperReader=false].");
      26: }
      27: vv3 = ((BigIntVector) tmp4);
      28: }
      29: }
      30:
      31: public void doEval(int inIndex, int outIndex)
      32: throws SchemaChangeException
      33: {
      34: {
      35: BigIntHolder out0 = new BigIntHolder();
      36: out0 .value = 1L;
      37: BigIntHolder out1 = new BigIntHolder();
      38: out1 .value = 2L;
      39: BigIntHolder out2 = new BigIntHolder();
      40: {
      41: final BigIntHolder out = new BigIntHolder();
      42: BigIntHolder in1 = out0;
      43: BigIntHolder in2 = out1;
      44:
      45: out.value = (long) (in1.value + in2.value);
      46:
      47: out2 = out;
      48: }

      49: vv3 .getMutator().set((outIndex), out2 .value);
      50: }
      51: }
      52:
      53: }

      11:42:05.900 [WorkManager-4] DEBUG o.a.drill.exec.compile.MergeAdapter - Skipping copy of 'doSetup()' since it is abstract or listed elsewhere.
      11:42:05.901 [WorkManager-4] DEBUG o.a.drill.exec.compile.MergeAdapter - Skipping copy of 'doEval()' since it is abstract or listed elsewhere.
      11:42:05.903 [WorkManager-4] DEBUG o.a.drill.exec.ops.FragmentContext - Compile time: 13 millis.
      11:42:05.909 [WorkManager-4] DEBUG o.a.d.e.compile.JaninoClassCompiler - Compiling:
      1:
      2: package org.apache.drill.exec.test.generated;
      3:
      4: import org.apache.drill.exec.exception.SchemaChangeException;
      5: import org.apache.drill.exec.ops.FragmentContext;
      6: import org.apache.drill.exec.record.RecordBatch;
      7: import org.apache.drill.exec.vector.BigIntVector;
      8:
      9: public class CopierGen30 {
      10:
      11: BigIntVector vv0;
      12: BigIntVector vv2;
      13:
      14: public void doSetup(FragmentContext context, RecordBatch incoming, RecordBatch outgoing)
      15: throws SchemaChangeException
      16: {
      17: {
      18: Object tmp1 = (incoming).getValueAccessorById(0, BigIntVector.class).getValueVector();
      19: if (tmp1 == null) {
      20: throw new SchemaChangeException("Failure while loading vector vv0 with id: TypedFieldId [type=minor_type: BIGINT\nmode: REQUIRED\n, fieldId=0, isSuperReader=false].");
      21: }
      22: vv0 = ((BigIntVector) tmp1);
      23: Object tmp3 = (outgoing).getValueAccessorById(0, BigIntVector.class).getValueVector();
      24: if (tmp3 == null) {
      25: throw new SchemaChangeException("Failure while loading vector vv2 with id: TypedFieldId [type=minor_type: BIGINT\nmode: REQUIRED\n, fieldId=0, isSuperReader=false].");
      26: }
      27: vv2 = ((BigIntVector) tmp3);
      28: }
      29: }
      30:
      31: public void doEval(int inIndex, int outIndex)
      32: throws SchemaChangeException
      33: {
      34: {
      35: vv2 .copyFrom((inIndex), (outIndex), vv0);
      36: }

      37: }
      38:
      39: }

      11:42:05.914 [WorkManager-4] DEBUG o.a.drill.exec.compile.MergeAdapter - Skipping copy of 'doSetup()' since it is abstract or listed elsewhere.
      11:42:05.914 [WorkManager-4] DEBUG o.a.drill.exec.compile.MergeAdapter - Skipping copy of 'doEval()' since it is abstract or listed elsewhere.
      11:42:05.916 [WorkManager-4] DEBUG o.a.drill.exec.ops.FragmentContext - Compile time: 7 millis.
      11:42:05.918 [WorkManager-4] DEBUG o.a.d.exec.work.foreman.QueryManager - New fragment status was provided to Foreman of memory_use: 0
      batches_completed: 2
      records_completed: 1
      state: FINISHED
      data_processed: 0
      handle {
      query_id { part1: -4656127306686443884 part2: -8283317532111349525 }
      major_fragment_id: 0
      minor_fragment_id: 0
      }
      running_time: 601395390

      11:42:05.921 [WorkManager-4] DEBUG o.a.d.e.w.fragment.FragmentExecutor - Fragment runner complete. 0:0
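The generated ProjectorGen code in the log above makes the truncation explicit: by the time code generation runs, the literals 1.1 and 2.6 have already become BigIntHolder values 1L and 2L, so doEval() performs pure integer addition. A simplified standalone sketch of that evaluation (the holder class is inlined here for illustration; it is not the actual Drill class):

```java
// Simplified sketch of the generated doEval() above. BigIntHolder is
// inlined here for illustration; the real class is
// org.apache.drill.exec.expr.holders.BigIntHolder.
public class ProjectorEvalSketch {
    static class BigIntHolder { long value; }

    public static void main(String[] args) {
        BigIntHolder out0 = new BigIntHolder();
        out0.value = 1L; // was the literal 1.1
        BigIntHolder out1 = new BigIntHolder();
        out1.value = 2L; // was the literal 2.6
        // The add function is plain long addition, so no fraction can survive.
        BigIntHolder out2 = new BigIntHolder();
        out2.value = out0.value + out1.value;
        System.out.println(out2.value); // prints 3, matching the query result
    }
}
```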

        Activity

        Jacques Nadeau added a comment -

        Should be fixed as part of 129cd77

        Chun Chang added a comment -

        verified.

        0: jdbc:drill:schema=dfs> select 1.1+2.6 from customer limit 1;
        ------------
        EXPR$0
        ------------
        3.7
        ------------


          People

          • Assignee:
            Unassigned
          • Reporter:
            Chun Chang