HIVE-3303

Fix error code inconsistency bug in mapreduce_stack_trace.q and mapreduce_stack_trace_turnoff.q when running hive on hadoop23

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.10.0
    • Component/s: None
    • Labels: None

      Description

      When running Hive on hadoop23, mapreduce_stack_trace.q and mapreduce_stack_trace_turnoff.q produce inconsistent error-code diffs:

      [junit] diff -a /home/cloudera/Code/hive/build/ql/test/logs/clientnegative/mapreduce_stack_trace.q.out /home/cloudera/Code/hive/ql/src/test/results/clientnegative/mapreduce_stack_trace.q.out
      [junit] < FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
      [junit] > FAILED: Execution Error, return code 20000 from org.apache.hadoop.hive.ql.exec.MapRedTask. Unable to initialize custom script.

      [junit] diff -a /home/cloudera/Code/hive/build/ql/test/logs/clientnegative/mapreduce_stack_trace_turnoff.q.out /home/cloudera/Code/hive/ql/src/test/results/clientnegative/mapreduce_stack_trace_turnoff.q.out
      [junit] 5c5
      [junit] < FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
      [junit]
      [junit] > FAILED: Execution Error, return code 20000 from org.apache.hadoop.hive.ql.exec.MapRedTask. Unable to initialize custom script

      The error code 20000 (which indicates that the custom script could not be initialized) could not be retrieved.

        Issue Links

          Activity

          Zhenxiao Luo added a comment -

          The problem is that hadoop23 retrieves task diagnostics differently from hadoop20.

          In hadoop20, task diagnostics are retrieved via jobSubmitClient in JobClient.java:

          public String[] getTaskDiagnostics(TaskAttemptID id) throws IOException {
            return jobSubmitClient.getTaskDiagnostics(id);
          }

          And in JobTracker.java, all the related logs are put into diagnostic info:

          public synchronized String[] getTaskDiagnostics(TaskAttemptID taskId)
              throws IOException {
            JobID jobId = taskId.getJobID();
            TaskID tipId = taskId.getTaskID();
            JobInProgress job = jobs.get(jobId);
            if (job == null) {
              throw new IllegalArgumentException("Job " + jobId + " not found.");
            }
            TaskInProgress tip = job.getTaskInProgress(tipId);
            if (tip == null) {
              throw new IllegalArgumentException("TIP " + tipId + " not found.");
            }
            List<String> taskDiagnosticInfo = tip.getDiagnosticInfo(taskId);
            return ((taskDiagnosticInfo == null) ? null
                : taskDiagnosticInfo.toArray(new String[0]));
          }

          Here is the diagnostic info in hadoop20:

          java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"key":"238","value":"val_238"}

          [junit] at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:161)
          [junit] at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
          [junit] at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
          [junit] at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
          [junit] at org.apache.hadoop.mapred.Child.main(Child.java:170)
          [junit] Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"key":"238","value":"val_238"}
          [junit] at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:548)
          [junit] at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
          [junit] ... 4 more
          [junit] Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20000]: Unable to initialize custom script.
          [junit] at org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:346)
          [junit] at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
          [junit] at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
          [junit] at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
          [junit] at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
          [junit] at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
          [junit] at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
          [junit] at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
          [junit] at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
          [junit] at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:529)
          [junit] ... 5 more
          [junit] Caused by: java.io.IOException: Cannot run program "script_does_not_exist": java.io.IOException: error=2, No such file or directory
          [junit] at java.lang.ProcessBuilder.start(ProcessBuilder.java:475)
          [junit] at org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:305)
          [junit] ... 14 more
          [junit] Caused by: java.io.IOException: java.io.IOException: error=2, No such file or directory
          [junit] at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
          [junit] at java.lang.ProcessImpl.start(ProcessImpl.java:81)
          [junit] at java.lang.ProcessBuilder.start(ProcessBuilder.java:468)
          [junit] ... 15 more

          The error code [20000] appears in the diagnostic info and can be retrieved by Hive.

          In hadoop23, however, Job.java takes a different execution path:

          public String[] getTaskDiagnostics(final TaskAttemptID taskid)
              throws IOException, InterruptedException {
            ensureState(JobState.RUNNING);
            return ugi.doAs(new PrivilegedExceptionAction<String[]>() {
              @Override
              public String[] run() throws IOException, InterruptedException {
                return cluster.getClient().getTaskDiagnostics(taskid);
              }
            });
          }

          Here is the diagnostic info in hadoop23:
          java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"key":"238","value":"val_238"}

          [junit] at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:161)
          [junit] at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:161)
          [junit] at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
          [junit] at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:393)
          [junit] at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
          [junit] at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:393)
          [junit] at org.apache.hadoop.mapred.MapTask.run(MapTask.java:327)
          [junit] at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
          [junit] at java.security.AccessController.doPrivileged(Native Method)
          [junit] at javax.security.auth.Subject.doAs(Subject.java:416)
          [junit] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
          [junit] at org.apache.hadoop.mapred.Child.main(Child.java:264)
          [junit] Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"key":"238","value":"val_238"}
          [junit] at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:548)
          [junit] at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
          [junit] ... 8 more
          [junit] C
          [junit] at org.apache.hadoop.mapred.MapTask.run(MapTask.java:327)
          [junit] at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
          [junit] at java.security.AccessController.doPrivileged(Native Method)
          [junit] at javax.security.auth.Subject.doAs(Subject.java:416)
          [junit] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
          [junit] at org.apache.hadoop.mapred.Child.main(Child.java:264)
          [junit] Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"key":"238","value":"val_238"}
          [junit] at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:548)
          [junit] at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
          [junit] ... 8 more

          Except for the HiveException reporting an error while processing the row, no further information appears in the diagnostic info.

          Since Hive extracts the error code from Hadoop's diagnostic info, on hadoop23 the error code is set to just 2 (as no other error code could be extracted).

          In JobDebugger.java, in the getTasksInfo() method:

          if (t.getTaskStatus() != TaskCompletionEvent.Status.SUCCEEDED) {
            if (ti.getErrorCode() == 0) {
              String[] diags = rj.getTaskDiagnostics(t.getTaskAttemptId());
              ti.setErrorCode(extractErrorCode(diags));
              ti.setDiagnosticMesgs(diags);
            }
          }
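          The extractErrorCode step scans the diagnostic strings for a bracketed Hive error code such as "[Error 20000]". A minimal sketch of that pattern match (the class name, regex, and sample messages here are illustrative assumptions, not Hive's actual implementation):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ErrorCodeExtractor {
    // Hive error messages embed the code as "[Error NNNNN]"; scan each
    // diagnostic string and return the first code found, or 0 if none.
    private static final Pattern ERROR_CODE = Pattern.compile("\\[Error (\\d+)\\]");

    public static int extractErrorCode(String[] diagnostics) {
        if (diagnostics == null) {
            return 0;
        }
        for (String diag : diagnostics) {
            Matcher m = ERROR_CODE.matcher(diag);
            if (m.find()) {
                return Integer.parseInt(m.group(1));
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        // hadoop20-style diagnostics carry the full cause chain, including the code
        String[] h20 = {"Caused by: HiveException: [Error 20000]: Unable to initialize custom script."};
        // hadoop23-style diagnostics stop at the generic runtime error
        String[] h23 = {"Hive Runtime Error while processing row"};
        System.out.println(extractErrorCode(h20)); // prints 20000
        System.out.println(extractErrorCode(h23)); // prints 0
    }
}
```

          This illustrates why hadoop23 ends up with the generic code 2: with no "[Error NNNNN]" marker in the diagnostic strings, the scan finds nothing to extract.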

          I think the possible solution is to keep separate expected outputs for hadoop20 and hadoop23. However, since mapreduce_stack_trace.q and mapreduce_stack_trace_turnoff.q are negative MiniMRCluster test cases, no such utility exists for them.

          Any suggestions are appreciated.

          Carl Steinbach added a comment -

          We should use the [INCLUDE|EXCLUDE]_HADOOP_MAJOR_VERSIONS macros to fix this. The 0.23 behavior should be the standard going forward, so please create mapreduce_stack_trace_h20.q and use the INCLUDE macro, and EXCLUDE 0.20 from mapreduce_stack_trace.q.
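          For reference, these macros are comment directives that the Hive test harness reads from a qfile. A sketch of how the two files might be tagged (directive names taken from the comment above; exact placement and file contents are assumptions):

```sql
-- In mapreduce_stack_trace_hadoop20.q: run this test only against Hadoop 0.20
-- INCLUDE_HADOOP_MAJOR_VERSIONS(0.20)

-- In mapreduce_stack_trace.q: skip Hadoop 0.20, keeping the 0.23 expected output
-- EXCLUDE_HADOOP_MAJOR_VERSIONS(0.20)
```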

          Zhenxiao Luo added a comment -

          review request submitted at:
          https://reviews.facebook.net/D4365

          Ashutosh Chauhan added a comment -

          +1

          Ashutosh Chauhan added a comment -

          Committed to trunk. Thanks, Zhenxiao!

          Hudson added a comment -

          Integrated in Hive-trunk-h0.21 #1579 (See https://builds.apache.org/job/Hive-trunk-h0.21/1579/)
          HIVE-3303: Fix error code inconsistency bug in mapreduce_stack_trace.q and mapreduce_stack_trace_turnoff.q when running hive on hadoop23 (Zhenxiao Luo via Ashutosh Chauhan) (Revision 1367413)

          Result = FAILURE
          hashutosh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1367413
          Files :

          • /hive/trunk/build-common.xml
          • /hive/trunk/ql/src/test/queries/clientnegative/mapreduce_stack_trace.q
          • /hive/trunk/ql/src/test/queries/clientnegative/mapreduce_stack_trace_hadoop20.q
          • /hive/trunk/ql/src/test/queries/clientnegative/mapreduce_stack_trace_turnoff.q
          • /hive/trunk/ql/src/test/queries/clientnegative/mapreduce_stack_trace_turnoff_hadoop20.q
          • /hive/trunk/ql/src/test/results/clientnegative/mapreduce_stack_trace.q.out
          • /hive/trunk/ql/src/test/results/clientnegative/mapreduce_stack_trace_hadoop20.q.out
          • /hive/trunk/ql/src/test/results/clientnegative/mapreduce_stack_trace_turnoff.q.out
          • /hive/trunk/ql/src/test/results/clientnegative/mapreduce_stack_trace_turnoff_hadoop20.q.out
          Hudson added a comment -

          Integrated in Hive-trunk-hadoop2 #54 (See https://builds.apache.org/job/Hive-trunk-hadoop2/54/)
          HIVE-3303: Fix error code inconsistency bug in mapreduce_stack_trace.q and mapreduce_stack_trace_turnoff.q when running hive on hadoop23 (Zhenxiao Luo via Ashutosh Chauhan) (Revision 1367413)

          Result = ABORTED
          hashutosh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1367413
          Files :

          • /hive/trunk/build-common.xml
          • /hive/trunk/ql/src/test/queries/clientnegative/mapreduce_stack_trace.q
          • /hive/trunk/ql/src/test/queries/clientnegative/mapreduce_stack_trace_hadoop20.q
          • /hive/trunk/ql/src/test/queries/clientnegative/mapreduce_stack_trace_turnoff.q
          • /hive/trunk/ql/src/test/queries/clientnegative/mapreduce_stack_trace_turnoff_hadoop20.q
          • /hive/trunk/ql/src/test/results/clientnegative/mapreduce_stack_trace.q.out
          • /hive/trunk/ql/src/test/results/clientnegative/mapreduce_stack_trace_hadoop20.q.out
          • /hive/trunk/ql/src/test/results/clientnegative/mapreduce_stack_trace_turnoff.q.out
          • /hive/trunk/ql/src/test/results/clientnegative/mapreduce_stack_trace_turnoff_hadoop20.q.out
          Ashutosh Chauhan added a comment -

          This issue is fixed and released as part of the 0.10.0 release. If you find an issue that seems related to this one, please create a new JIRA and link it to this one.


            People

            • Assignee:
              Zhenxiao Luo
              Reporter:
              Zhenxiao Luo
            • Votes:
              0
              Watchers:
              4

              Dates

              • Created:
                Updated:
                Resolved:

                Development