Calcite / CALCITE-1798

In JDBC adapter, generate dialect-specific SQL for FLOOR operator

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.13.0
    • Component/s: jdbc-adapter
    • Labels:

      Description

      The FLOOR operator (on dates) is currently broken for all JDBC dialects.
      The syntax allowed by the parser looks like "FLOOR(datetime TO timeUnit)".
      However, no JDBC dialect (as far as I'm aware) actually names the function FLOOR:
      In PostgreSQL: DATE_TRUNC('year', my_datetime)
      In HSQLDB: TRUNC(my_datetime, 'YYYY')
      In Oracle: TRUNC(my_datetime, 'YEAR')
      In MySQL: there is no direct equivalent (though it could be emulated with some nasty timestamp diffing)

      The other issue is that the time units are sometimes also named differently by each dialect (e.g. 'YYYY' in HSQLDB).
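
      For illustration, a minimal standalone sketch of the kind of per-dialect rewrite this calls for; the Dialect enum and the floorSql/hsqldbUnit helpers are hypothetical examples, not part of the Calcite API:

{code:java}
// Illustrative only: shows the per-dialect rendering of FLOOR(datetime TO timeUnit)
// described above.  Real support belongs in the JDBC adapter's SQL generation.
class FloorRewrite {
  enum Dialect { POSTGRESQL, HSQLDB, ORACLE }

  /** Renders FLOOR(datetimeExpr TO timeUnit) for a target dialect. */
  static String floorSql(Dialect dialect, String datetimeExpr, String timeUnit) {
    switch (dialect) {
    case POSTGRESQL:
      // e.g. DATE_TRUNC('year', my_datetime)
      return "DATE_TRUNC('" + timeUnit.toLowerCase() + "', " + datetimeExpr + ")";
    case HSQLDB:
      // HSQLDB uses Oracle-style format codes, e.g. 'YYYY' for YEAR
      return "TRUNC(" + datetimeExpr + ", '" + hsqldbUnit(timeUnit) + "')";
    case ORACLE:
      return "TRUNC(" + datetimeExpr + ", '" + timeUnit.toUpperCase() + "')";
    default:
      throw new UnsupportedOperationException("no FLOOR equivalent for " + dialect);
    }
  }

  /** Partial mapping of standard time units to HSQLDB/Oracle format codes. */
  static String hsqldbUnit(String timeUnit) {
    switch (timeUnit.toUpperCase()) {
    case "YEAR": return "YYYY";
    case "MONTH": return "MM";
    default: throw new UnsupportedOperationException(timeUnit);
    }
  }
}
{code}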


          Activity

          Chris Baynes added a comment -

          I've made a first pass at this: https://github.com/apache/calcite/pull/453
          I have a couple of things I'd like to improve and get feedback on, though:

          1. The convertToHsqlDb method isn't at all specific to flooring; it could also be used in other timestamp functions. Where would be a good place to put HSQLDB dialect specifics?

          2. I'd like to write some dialect-specific tests to check the generated SQL output, perhaps a JdbcDialectTest class. Is this possible without having the actual database available?

          I'll fill out the switch with the other dialects as soon as there's agreement that this looks like it's going in the right direction.

          Julian Hyde added a comment -

          Fixed in http://git-wip-us.apache.org/repos/asf/calcite/commit/f8ab9078. Thanks for the PR, Chris Baynes!

          The convertToHsqlDb method isn't at all specific to flooring; it could also be used in other timestamp functions. Where would be a good place to put HSQLDB dialect specifics?

          I don't know. But I wouldn't worry too much - if those methods are static and private or package-private you can freely refactor them as functionality gets added.

          I'd like to write some dialect-specific tests to check the generated SQL output, perhaps a JdbcDialectTest class. Is this possible without having the actual database available?

          I added a test, RelToSqlConverterTest.testFloor(). You can use that as a model for other cross-dialect SQL generation tests.
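
          For anyone following along, a rough sketch of what such a cross-dialect test might look like; the sql()/withHsqldb()/withPostgresql()/ok() fixture methods and the expected strings are assumptions modeled on RelToSqlConverterTest, not verified output:

{code:java}
// Sketch only: fixture methods and expected SQL are assumptions, not copied
// from the real test class.  Check RelToSqlConverterTest for the actual API.
@Test public void testFloorToYear() {
  final String query = "select floor(\"hire_date\" to year) from \"employee\"";
  // HSQLDB renders FLOOR(... TO YEAR) as TRUNC(..., 'YYYY')
  sql(query).withHsqldb()
      .ok("SELECT TRUNC(\"hire_date\", 'YYYY')\nFROM \"employee\"");
  // PostgreSQL renders it as DATE_TRUNC('year', ...)
  sql(query).withPostgresql()
      .ok("SELECT DATE_TRUNC('year', \"hire_date\")\nFROM \"employee\"");
}
{code}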

          Chris Baynes added a comment -

          I've opened a new pull request for the rest of the changes, with a couple of fixes and tests: https://github.com/apache/calcite/pull/458

          Again, I'm not totally happy with the structure; it would be nice to be able to split the FLOOR operator into one file per dialect.
          There will be more operators that require this kind of switching, and having a common place to override operator unparsing per dialect would keep those switches to a minimum.

          Julian Hyde added a comment -

          Committed the second PR as http://git-wip-us.apache.org/repos/asf/calcite/commit/946b24f4.
          Julian Hyde added a comment -

          We could have a mechanism where there can be multiple handlers to unparse calls. Maybe a handler can identify itself as handling a particular dialect, or a particular set of operators. Or maybe it doesn't, and we use a chain of responsibility, stopping when we find a handler. And I suppose handlers could be loaded from the classpath the same way that JDBC drivers are. Feel free to log a new JIRA case with a proposed mechanism.
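
          For concreteness, a rough sketch of the chain-of-responsibility idea; the UnparseHandler interface and UnparseHandlerChain class are hypothetical, not existing Calcite classes:

{code:java}
import java.util.List;
import org.apache.calcite.sql.SqlCall;
import org.apache.calcite.sql.SqlWriter;

/** Hypothetical handler: returns true if it unparsed the call,
 * false to let the next handler in the chain try. */
interface UnparseHandler {
  boolean unparseCall(SqlWriter writer, SqlCall call, int leftPrec, int rightPrec);
}

/** Walks the handlers in order, stopping at the first one that accepts the
 * call; falls back to the operator's own unparse method otherwise. */
class UnparseHandlerChain {
  private final List<UnparseHandler> handlers;

  UnparseHandlerChain(List<UnparseHandler> handlers) {
    this.handlers = handlers;
  }

  void unparse(SqlWriter writer, SqlCall call, int leftPrec, int rightPrec) {
    for (UnparseHandler handler : handlers) {
      if (handler.unparseCall(writer, call, leftPrec, rightPrec)) {
        return;
      }
    }
    call.getOperator().unparse(writer, call, leftPrec, rightPrec);
  }
}
{code}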

          Chris Baynes added a comment -

          I'll log another JIRA case with the proposed mechanism.
          I just have one other issue with the current implementation: I've noticed that the same node can be referenced from different parts of the query.
          For example, if I add a GROUP BY clause using the same FLOOR operator, the setOperator call I've used will also be applied to the SqlCall of the GROUP BY, and the unparse method will break when it is called on it later.
          To fix that I'm calling clone before setOperator, but I needed to adjust the clone method on SqlBasicCall to make sure it clones the operands too.
          The PR is here: https://github.com/apache/calcite/pull/465

          Julian Hyde added a comment -

          Let's make a new JIRA case for this; this case is fixed already. I think SqlNode.clone should remain a shallow clone. Your proposed fix makes clone sometimes deep, sometimes shallow. In fact I don't think we should alter clone at all. My preferred solution would be to treat all SqlNodes as immutable during this entire process, so if we want to change anything, we have to create a new copy.
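
          A minimal sketch of that copy-instead-of-mutate approach, as a standalone fragment using only public SqlNode/SqlOperator API; the withOperator helper itself is hypothetical:

{code:java}
import org.apache.calcite.sql.SqlCall;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.SqlOperator;

/** Hypothetical helper: rather than call.setOperator(newOperator), which
 * mutates a node that may be shared by SELECT and GROUP BY, build a fresh
 * call around the same operands via SqlOperator.createCall. */
static SqlCall withOperator(SqlCall call, SqlOperator newOperator) {
  final SqlNode[] operands = call.getOperandList().toArray(new SqlNode[0]);
  return newOperator.createCall(call.getParserPosition(), operands);
}
{code}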

          Jesus Camacho Rodriguez added a comment -

          Resolved in release 1.13.0 (2017-06-26).


            People

            • Assignee: Julian Hyde
            • Reporter: Chris Baynes
            • Votes: 0
            • Watchers: 4
