FLINK-6886

Fix: Timestamp field cannot be selected in the event-time case when using toDataStream[T] and `T` is not a `Row` type.

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 1.4.0
    • Fix Version/s: 1.3.1, 1.4.0
    • Component/s: Table API & SQL
    • Labels:
      None

      Description

Currently, for event-time windows (group or over windows), when the `SELECT` clause contains a `Timestamp`-type field and the result is converted with toDataStream[T] where `T` is not a `Row` type (e.g. a `PojoType`), an exception is thrown. This JIRA fixes that bug. For example:
      Group Window on SQL:

SELECT
  name,
  MAX(num) AS myMax,
  TUMBLE_START(rowtime, INTERVAL '5' SECOND) AS winStart,
  TUMBLE_END(rowtime, INTERVAL '5' SECOND) AS winEnd
FROM T1
GROUP BY name, TUMBLE(rowtime, INTERVAL '5' SECOND)
      

      Throw Exception:

      org.apache.flink.table.api.TableException: The field types of physical and logical row types do not match.This is a bug and should not happen. Please file an issue.
      
      	at org.apache.flink.table.api.TableException$.apply(exceptions.scala:53)
      	at org.apache.flink.table.api.TableEnvironment.generateRowConverterFunction(TableEnvironment.scala:721)
      	at org.apache.flink.table.api.StreamTableEnvironment.getConversionMapper(StreamTableEnvironment.scala:247)
      	at org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:647)
      

In fact, suppressing this exception only surfaces further exceptions. The real cause is a bug in the `TableEnvironment#generateRowConverterFunction` method, which this JIRA fixes.
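To make the failure concrete, here is a minimal sketch of the conversion in question. The POJO `MyResult`, the environment `tEnv`, and the schema of table `T1` (fields `name`, `num`, and an event-time attribute `rowtime`) are assumptions for illustration, not part of the issue:

```scala
import java.sql.Timestamp
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.Table
import org.apache.flink.table.api.scala._

// Hypothetical POJO result type: var fields plus a default constructor.
class MyResult(var name: String, var myMax: Int,
               var winStart: Timestamp, var winEnd: Timestamp) {
  def this() = this(null, 0, null, null)
}

val result: Table = tEnv.sql(
  """SELECT name, MAX(num) AS myMax,
    |  TUMBLE_START(rowtime, INTERVAL '5' SECOND) AS winStart,
    |  TUMBLE_END(rowtime, INTERVAL '5' SECOND) AS winEnd
    |FROM T1
    |GROUP BY name, TUMBLE(rowtime, INTERVAL '5' SECOND)""".stripMargin)

// Converting to Row works; converting to the POJO threw the
// TableException quoted above before the fix.
val stream: DataStream[MyResult] = result.toAppendStream[MyResult]
```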


          Activity

          githubbot ASF GitHub Bot added a comment -

          Github user sunjincheng121 closed the pull request at:

          https://github.com/apache/flink/pull/4101

          githubbot ASF GitHub Bot added a comment -

          GitHub user sunjincheng121 opened a pull request:

          https://github.com/apache/flink/pull/4102

          FLINK-6886[table]Fix Timestamp field can not be selected in event t…

This JIRA fixes the exception thrown when we run the following SQL:
          `SELECT name, max(num) as myMax, TUMBLE_START(rowtime, INTERVAL '5' SECOND) as winStart,TUMBLE_END(rowtime, INTERVAL '5' SECOND) as winEnd FROM T1 GROUP BY name, TUMBLE(rowtime, INTERVAL '5' SECOND)`
          Exception Info:
          ```
          org.apache.flink.table.api.TableException: The field types of physical and logical row types do not match.This is a bug and should not happen. Please file an issue.

          at org.apache.flink.table.api.TableException$.apply(exceptions.scala:53)
          at org.apache.flink.table.api.TableEnvironment.generateRowConverterFunction(TableEnvironment.scala:721)
          at org.apache.flink.table.api.StreamTableEnvironment.getConversionMapper(StreamTableEnvironment.scala:247)
          at org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:647)
          ```

          • [x] General
          • The pull request references the related JIRA issue ("[FLINK-XXX] Jira title text")
          • The pull request addresses only one issue
          • Each commit in the PR has a meaningful commit message (including the JIRA id)
          • [ ] Documentation
          • Documentation has been added for new functionality
          • Old documentation affected by the pull request has been updated
          • JavaDoc for public methods has been added
          • [x] Tests & Build
          • Functionality added by the pull request is covered by tests
          • `mvn clean verify` has been executed successfully locally or a Travis build has passed

          You can merge this pull request into a Git repository by running:

          $ git pull https://github.com/sunjincheng121/flink FLINK-6886

          Alternatively you can review and apply these changes as the patch at:

          https://github.com/apache/flink/pull/4102.patch

          To close this pull request, make a commit to your master/trunk branch
          with (at least) the following in the commit message:

          This closes #4102


          commit 3c89d1c8576ec44245ecb381be2ef35f5479c149
          Author: sunjincheng121 <sunjincheng121@gmail.com>
          Date: 2017-06-11T05:53:15Z

          FLINK-6886[table]Fix Timestamp field can not be selected in event time case when toDataStream[T], `T` not a `Row` Type.


          githubbot ASF GitHub Bot added a comment -

          Github user wuchong commented on a diff in the pull request:

          https://github.com/apache/flink/pull/4102#discussion_r121694007

— Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/TableEnvironment.scala —

```diff
@@ -715,15 +715,8 @@ abstract class TableEnvironment(val config: TableConfig) {
       functionName: String):
     GeneratedFunction[MapFunction[Row, OUT], OUT] = {
 
-    // validate that at least the field types of physical and logical type match
-    // we do that here to make sure that plan translation was correct
-    if (schema.physicalTypeInfo != inputTypeInfo) {
-      throw TableException("The field types of physical and logical row types do not match." +
-        "This is a bug and should not happen. Please file an issue.")
-    }
-
-    val fieldTypes = schema.physicalFieldTypeInfo
-    val fieldNames = schema.physicalFieldNames
+    val fieldTypes = schema.logicalFieldTypeInfo
+    val fieldNames = schema.logicalFieldNames
```

— End diff —

          I'm not sure about this change. The physical and logical type check is to make sure all the time indicators are translated.

          githubbot ASF GitHub Bot added a comment -

          Github user wuchong commented on the issue:

          https://github.com/apache/flink/pull/4102

I think the root cause is that TUMBLE_START inherits the wrong type (`TimeIndicatorRelDataType`) from the rowtime column. So maybe a better solution is to re-create a `RelDataType` from the `optimizedPlan`'s rowType and the `originalPlan`'s rowType and pass it into the `getConversionMapper` method. I created a simple commit for this: https://github.com/wuchong/flink/commit/82c17ab45699f5f9beb925b156e760ebdeff79fb What do you think? @sunjincheng121 @fhueske

BTW, do we really need so many IT cases for this?

          githubbot ASF GitHub Bot added a comment -

          Github user sunjincheng121 commented on the issue:

          https://github.com/apache/flink/pull/4102

Hi @wuchong, thanks very much for your review. Following your suggestions and our discussion, I made the following two changes:

1. Reduced the IT cases: removed all the IT cases that were added in `TableSourceITCase`.
2. Replaced the `schema: RowSchema` parameter with `fieldNames: Seq[String]`.

@twalthr, I'd appreciate it if you could take a look at change #2.
          Best,
          SunJincheng

          fhueske Fabian Hueske added a comment -

Maybe there's another way to fix this problem. I played around a bit and found the following:

          The following Table API query is executed correctly:

```scala
val table = stream.toTable(tEnv, 'l, 'i, 'n, 'proctime.proctime)

val windowedTable = table
  .window(Tumble over 2.seconds on 'proctime as 'w)
  .groupBy('w, 'n)
  .select('n, 'i.count as 'cnt, 'w.start as 's, 'w.end as 'e)

val results = windowedTable.toAppendStream[MP](queryConfig)

// POJO
class MP(var s: Timestamp, var e: Timestamp, var cnt: Long, var n: String) {
  def this() { this(null, null, 0, null) }
  override def toString: String = s"$n,${s.toString},${e.toString},$cnt"
}
```

          whereas the equivalent SQL query fails with the reported exception ("The field types of physical and logical row types do not match")

```scala
val sqlTable = tEnv.sql(
  s"""SELECT TUMBLE_START(proctime, INTERVAL '2' SECOND) AS s,
     |  TUMBLE_END(proctime, INTERVAL '2' SECOND) AS e,
     |  n,
     |  COUNT(i) as cnt
     |FROM $table
     |GROUP BY n, TUMBLE(proctime, INTERVAL '2' SECOND)
   """.stripMargin)

val results = sqlTable.toAppendStream[MP](queryConfig)
```

          The plans of both queries look similar, but the SQL plan seems to lack the correct final projection:

```
// Table API plan
== Abstract Syntax Tree ==
LogicalProject(n=[$0], cnt=[AS($1, 'cnt')], s=[AS($2, 's')], e=[AS($3, 'e')])
  LogicalWindowAggregate(group=[{0}], TMP_0=[COUNT($1)])
    LogicalProject(n=[$2], i=[$1], proctime=[$3])
      LogicalTableScan(table=[[_DataStreamTable_0]])

== Optimized Logical Plan ==
DataStreamCalc(select=[n, TMP_0 AS cnt, TMP_1 AS s, TMP_2 AS e])
  DataStreamGroupWindowAggregate(groupBy=[n], window=[TumblingGroupWindow('w, 'proctime, 2000.millis)], select=[n, COUNT(i) AS TMP_0, start('w) AS TMP_1, end('w) AS TMP_2])
    DataStreamCalc(select=[n, i, proctime])
      DataStreamScan(table=[[_DataStreamTable_0]])

// SQL plans
== Abstract Syntax Tree ==
LogicalProject(s=[TUMBLE_START($1)], e=[TUMBLE_END($1)], n=[$0], cnt=[$2])
  LogicalAggregate(group=[{0, 1}], cnt=[COUNT($2)])
    LogicalProject(n=[$2], $f1=[TUMBLE($3, 2000)], i=[$1])
      LogicalTableScan(table=[[UnnamedTable$3]])

== Optimized Logical Plan ==
DataStreamCalc(select=[w$start, w$end, n, cnt])
  DataStreamGroupWindowAggregate(groupBy=[n], window=[TumblingGroupWindow('w$, 'proctime, 2000.millis)], select=[n, COUNT(i) AS cnt, start('w$) AS w$start, end('w$) AS w$end])
    DataStreamCalc(select=[n, proctime, i])
      DataStreamScan(table=[[_DataStreamTable_0]])
```

So this doesn't seem to be a fundamental issue with the time attributes or window properties, but rather an issue of the SQL optimization.

What do you think, sunjincheng and Jark Wu?

          sunjincheng121 sunjincheng added a comment -

Hi Fabian Hueske, thanks for checking this issue. Calcite cannot recognize `TimeIndicatorRelDataType`, so in the SQL case `FlinkPlannerImpl#rel` keeps the `TimeIndicatorRelDataType`. I think we cannot touch Calcite, so our only chance to handle it is when generating the `LogicalRelNode`.

          sunjincheng121 sunjincheng added a comment -

But I don't think this is an optimization issue: after `FlinkPlannerImpl#rel`, `StreamTableEnvironment#translate` runs the optimizer, and after optimization `TimeIndicatorRelDataType` really is translated to `TIMESTAMP`. That part is correct. The core problem occurs in `translate(dataStreamPlan, relNode.getRowType, queryConfig, withChangeFlag)`: the second parameter, `relNode.getRowType`, comes from the non-optimized node and still contains `TimeIndicatorRelDataType`. All subsequent operations are based on the type of the non-optimized node, which causes the problem.
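For context, the relevant lines in `StreamTableEnvironment.translate` look roughly like this (paraphrased; only the lines that matter for this discussion):

```scala
// Before the fix: the second argument is the row type of the *non-optimized*
// plan, which may still contain TimeIndicatorRelDataType fields.
val relNode = table.getRelNode
val dataStreamPlan = optimize(relNode, updatesAsRetraction)
translate(dataStreamPlan, relNode.getRowType, queryConfig, withChangeFlag)
```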

          sunjincheng121 sunjincheng added a comment -

So we have two ways to deal with the problem: one is what the PR does; the other is to copy the `RelNode` when generating the `LogicalRelNode`.
IMO, I'm not sure copying is the best way. What do you think, Fabian Hueske, Jark Wu?

          fhueske Fabian Hueske added a comment - - edited

You are right. The problem is in `translate(dataStreamPlan, relNode.getRowType, queryConfig, withChangeFlag)`.

We use the `RowType` of the original input plan because the field names might change (Calcite prunes pure renaming projections as no-ops). However, the `RelTimeIndicatorConverter` (correctly!) changes the types of time indicators, so the types of the optimized plan are not identical to those of the original plan. This difference causes the exception.

A simple solution is to merge the field names of the original plan with the field types of the optimized plan and construct a new `RelDataType`. I changed the `StreamTableEnvironment.translate()` method to this:

```scala
protected def translate[A](
    table: Table,
    queryConfig: StreamQueryConfig,
    updatesAsRetraction: Boolean,
    withChangeFlag: Boolean)(implicit tpe: TypeInformation[A]): DataStream[A] = {

  val relNode = table.getRelNode
  val dataStreamPlan = optimize(relNode, updatesAsRetraction)

  // zip original field names with optimized field types
  val x = relNode.getRowType.getFieldList.asScala
    .zip(dataStreamPlan.getRowType.getFieldList.asScala)
    // get name of original plan and type of optimized plan
    .map(x => (x._1.getName, x._2.getType))
    // add index
    .zipWithIndex
    // build new field types
    .map(x => new RelDataTypeFieldImpl(x._1._1, x._2, x._1._2))

  // build a record type from the list of fields
  val rowType = new RelRecordType(
    x.toList.asJava.asInstanceOf[_root_.java.util.List[RelDataTypeField]])

  translate(dataStreamPlan, rowType, queryConfig, withChangeFlag)
}
```

          and got it (and all tests) working.

          The field merging can be done a lot nicer.
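For instance, a tidier formulation of the same merge might look like the sketch below (same semantics as the snippet above; not the committed fix):

```scala
import scala.collection.JavaConverters._
import org.apache.calcite.rel.`type`.{RelDataTypeField, RelDataTypeFieldImpl, RelRecordType}

// Take each field name from the original plan and each field type from the
// optimized plan, and build a fresh record type from the combined fields.
val fields: List[RelDataTypeField] = relNode.getRowType.getFieldList.asScala
  .zip(dataStreamPlan.getRowType.getFieldList.asScala)
  .zipWithIndex
  .map { case ((original, optimized), idx) =>
    new RelDataTypeFieldImpl(original.getName, idx, optimized.getType): RelDataTypeField
  }
  .toList

val rowType = new RelRecordType(fields.asJava)
```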

          sunjincheng121 sunjincheng added a comment -

Thanks Fabian Hueske, your code is nicer. I'll update the PR.

          githubbot ASF GitHub Bot added a comment -

          Github user sunjincheng121 commented on the issue:

          https://github.com/apache/flink/pull/4102

Hi @fhueske, I have updated the PR according to your suggestion. Please have a look at the changes.

          Best,
          SunJincheng

          githubbot ASF GitHub Bot added a comment -

          Github user rmetzger commented on the issue:

          https://github.com/apache/flink/pull/4102

          Please see my message in the 1.3.1 thread on the dev@ list

          githubbot ASF GitHub Bot added a comment -

          Github user asfgit closed the pull request at:

          https://github.com/apache/flink/pull/4102

          fhueske Fabian Hueske added a comment -

          Fixed for 1.3.1 with 8b91df2b3cd0c0ef733902ad742045b318bac0fd
          Fixed for 1.4.0 with d78eeca37554ac75faf1aa451d0b4107ebd96fb9


            People

            • Assignee:
              sunjincheng121 sunjincheng
              Reporter:
              sunjincheng121 sunjincheng
• Votes:
  0
  Watchers:
  5
