HIVE-1609

Support partition filtering in metastore

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.7.0
    • Component/s: Metastore
    • Hadoop Flags: Reviewed
    • Release Note:
      Added support for a new listPartitionsByFilter API in HiveMetaStoreClient. This returns the list of partitions matching a specified partition filter. The filter supports "=", "!=", ">", "<", ">=", "<=" and "LIKE" operations on partition keys of type string. "AND" and "OR" logical operations are supported in the filter. So for example, for a table having partition keys country and state, the filter can be 'country = "USA" AND (state = "CA" OR state = "AZ")'

      Description

      WARNING: This patch was subsequently disabled in HIVE-1853 due to stability concerns related to the JDO version upgrade.

      The metastore needs to have support for returning a list of partitions based on user-specified filter conditions. This will be useful for tools that need to do partition pruning; Howl is one such use case. The way partition pruning is done during Hive query execution need not change.
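The filter described in the release note is a plain string over partition keys. A minimal sketch of composing such a filter is below; the anyOf helper is hypothetical (for illustration only), and the listPartitionsByFilter call shown in the comment is the API this issue adds:

```java
public class PartitionFilterExample {
    // Builds a disjunction such as: state = "CA" OR state = "AZ"
    // (hypothetical helper for illustration; not part of the Hive API)
    static String anyOf(String key, String... values) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < values.length; i++) {
            if (i > 0) {
                sb.append(" OR ");
            }
            sb.append(key).append(" = \"").append(values[i]).append('"');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Reproduces the example filter from the release note.
        String filter = "country = \"USA\" AND (" + anyOf("state", "CA", "AZ") + ")";
        System.out.println(filter);
        // Against a live metastore the filter would then be passed as, e.g.:
        //   client.listPartitionsByFilter(dbName, tblName, filter, (short) -1);
        // where client is a HiveMetaStoreClient and -1 requests all matches.
    }
}
```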

      1. hive_1609.patch
        187 kB
        Ajay Kidave
      2. hive_1609_3.patch
        132 kB
        Ajay Kidave
      3. hive_1609_2.patch
        192 kB
        Ajay Kidave

        Issue Links

          Activity

          Ajay Kidave added a comment -

          Attached a patch with support for a new metastore API which returns the list of partitions matching a specified string filter. Thrift does not support recursive nested structures, so the filter is specified as a string instead of as an expression object. The DataNucleus jar version is upgraded to get support for the JDOQL operations needed (a clean build is required at the root level to remove the older version of the DataNucleus jars from build/ivy/lib).

          HBase Review Board added a comment -

          Message from: "Carl Steinbach" <carl@cloudera.com>

          -----------------------------------------------------------
          This is an automatically generated e-mail. To reply, visit:
          http://review.cloudera.org/r/763/
          -----------------------------------------------------------

          Review request for Hive Developers.

          Summary
          -------

          HIVE-1609: Support partition filtering in metastore

          This addresses bug HIVE-1609.
          http://issues.apache.org/jira/browse/HIVE-1609

          Diffs


          trunk/metastore/build.xml 991274
          trunk/metastore/if/hive_metastore.thrift 991397
          trunk/metastore/ivy.xml 991274
          trunk/metastore/src/gen-cpp/ThriftHiveMetastore.h 991274
          trunk/metastore/src/gen-cpp/ThriftHiveMetastore.cpp 991274
          trunk/metastore/src/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp 991274
          trunk/metastore/src/gen-java/org/apache/hadoop/hive/metastore/parser/FilterParser.java PRE-CREATION
          trunk/metastore/src/gen-java/org/apache/hadoop/hive/metastore/parser/FilterParserConstants.java PRE-CREATION
          trunk/metastore/src/gen-java/org/apache/hadoop/hive/metastore/parser/FilterParserTokenManager.java PRE-CREATION
          trunk/metastore/src/gen-java/org/apache/hadoop/hive/metastore/parser/ParseException.java PRE-CREATION
          trunk/metastore/src/gen-java/org/apache/hadoop/hive/metastore/parser/SimpleCharStream.java PRE-CREATION
          trunk/metastore/src/gen-java/org/apache/hadoop/hive/metastore/parser/Token.java PRE-CREATION
          trunk/metastore/src/gen-java/org/apache/hadoop/hive/metastore/parser/TokenMgrError.java PRE-CREATION
          trunk/metastore/src/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java 991274
          trunk/metastore/src/gen-php/ThriftHiveMetastore.php 991274
          trunk/metastore/src/gen-py/hive_metastore/ThriftHiveMetastore-remote 991274
          trunk/metastore/src/gen-py/hive_metastore/ThriftHiveMetastore.py 991274
          trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 991274
          trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 991274
          trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 991274
          trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java 991274
          trunk/metastore/src/java/org/apache/hadoop/hive/metastore/parser/ExpressionTree.java PRE-CREATION
          trunk/metastore/src/java/org/apache/hadoop/hive/metastore/parser/filter_parser.jj PRE-CREATION
          trunk/metastore/src/model/package.jdo 991274
          trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java 991274
          trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java 991274

          Diff: http://review.cloudera.org/r/763/diff

          Testing
          -------

          Thanks,

          Carl

          Carl Steinbach added a comment -

          RB's jiraposter seems to be lagging. In the meantime I left some comments here: https://review.cloudera.org//r/763/#review1075

          HBase Review Board added a comment -

          Message from: "Carl Steinbach" <carl@cloudera.com>

          -----------------------------------------------------------
          This is an automatically generated e-mail. To reply, visit:
          http://review.cloudera.org/r/763/#review1075
          -----------------------------------------------------------

          trunk/metastore/build.xml
          <http://review.cloudera.org/r/763/#comment3422>

          Hive already uses ANTLR. Introducing a dependency on a new parser generator (especially one that the Pig devs are already unhappy with) seems unwise from a maintenance and build perspective. Can you please rewrite this to use ANTLR?

          trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
          <http://review.cloudera.org/r/763/#comment3424>

          get_partitions_by_filter() should throw Unknown[DB|Table]Exception if either the database or table do not exist.

          trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
          <http://review.cloudera.org/r/763/#comment3425>

          listPartitionsByFilter() should also throw UnknownDBException and UnknownTableException.

          trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
          <http://review.cloudera.org/r/763/#comment3426>

          Please use StringBuilder instead.
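The suggestion above amounts to accumulating the JDOQL filter in one mutable buffer rather than via repeated String concatenation. A minimal sketch, where the condition strings are hypothetical stand-ins for what ObjectStore would generate from the parsed expression tree:

```java
import java.util.Arrays;
import java.util.List;

public class JdoqlFilterBuilder {
    // Joins per-key conditions into one JDOQL filter string using a single
    // StringBuilder instead of intermediate String objects per iteration.
    static String buildFilter(List<String> conditions) {
        StringBuilder filter = new StringBuilder();
        for (String cond : conditions) {
            if (filter.length() > 0) {
                filter.append(" && ");
            }
            filter.append('(').append(cond).append(')');
        }
        return filter.toString();
    }

    public static void main(String[] args) {
        // Example: two conditions over partition key values.
        String f = buildFilter(Arrays.asList(
            "values.get(0) == \"USA\"", "values.get(1) == \"CA\""));
        System.out.println(f);
    }
}
```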

          trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
          <http://review.cloudera.org/r/763/#comment3427>

          The query should have an ORDER clause on MPartition.partitionName in order to ensure that the results are deterministic.

          trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
          <http://review.cloudera.org/r/763/#comment3428>

          Minor organization issue: why does this method appear here instead of next to listMPartitionsByFilter()?

          • Carl
          Ajay Kidave added a comment -

          Thanks for the review, Carl. JavaCC is already used in the Hive serde code, so it is not a completely new dependency for Hive. JavaCC has issues with generating proper errors for multi-line inputs, but since we are using it only for a small filter string, this issue should not arise. The build approach is the same as that taken in serde, i.e. the code is regenerated only if javacc.home is defined.

          Regarding throwing Unknown[DB|Table]Exception, it would require an extra database call to first check whether the database is valid. So I have changed it to throw a NoSuchObjectException saying db.table does not exist if the getMTable operation fails.

          I have attached a patch which addresses the other issues.

          John Sichi added a comment -

          I agree with Carl regarding the parser: let's move it to ANTLR. We have too much generated code checked into Hive already, and we're trying to move away from that.

          Ajay Kidave added a comment -

          The parser was written in JavaCC since it is derived from similar functionality in Owl. It was decided to reuse the existing parser when the filter representation was discussed. If generated code is the issue, I can change the build to pull JavaCC through Ivy and not have the generated code checked in (it is checked in currently because that is how it is done in serde). Another possibility is to open another JIRA to change the parser implementation to ANTLR. Do let me know what would work.

          Namit Jain added a comment -

          I think we should stick to ANTLR only - let us not check in JavaCC.

          Carl Steinbach added a comment -

          DynamicSerDe is the component that has a JavaCC dependency. I think DynamicSerDe (and TCTLSeparatedProtocol) were deprecated a long time ago. Should we try to remove this code?

          John Sichi added a comment -

          @Carl: looks like Steven Wong and Zheng have been discussing how to get rid of the last uses of DynamicSerDe (over on hive-dev), so yeah, maybe we can do that once Steven completes the work.

          John Sichi added a comment -

          @Ajay:

          • In package.jdo, you added default-fetch-group="false" for the view attributes. Could you explain what that does and why that is needed? I'm guessing it defers fetch of these attributes, which makes sense as long as it's transparent.
          • HIVE-1539 mentions the need to upgrade datanucleus to 2.2.0.m2 in order to fix some classloader threading issues; maybe we should jump straight to that version?
          • Per HIVE-1626, we should avoid using java.util.Stack. Just FYI; we can clean this one up as part of that JIRA.
          • Run checkstyle on new code to bring it into conformance

          Also, I've asked Paul Yang to take a look at the patch to give feedback on any other issues (separate from the usage of JavaCC).

          Paul Yang added a comment -

          A couple of suggestions:

          1. Update get_partitions_ps() to use get_partitions_by_filter(). By doing this, we should see some speedup when using the Hive CLI for tables with a large number of partitions. (May require #2 for full benefits.)

          2. Could we add a listPartitionNamesByFilter() as well to RawStore/ObjectStore? This would allow partition filtering for get_partition_names_ps(). This could be done in a separate JIRA, if necessary.

          3. Add test cases to TestHiveMetaStore for the new method(s).

          I'll take another look once the antlr stuff is done, but looks good.

          Ajay Kidave added a comment -

          Thanks for the review comments. I was on vacation for a few days, sorry for the late response. I have attached a new patch changing the parser to ANTLR, and added a few more tests to check for parser issues.

          @John,

          • With the new DataNucleus version, the default select query for an object fetch is a select distinct on all the columns. This does not work on LONGVARCHAR columns since they are not comparable (in Derby). Setting default-fetch-group="false" means that the distinct is not applied to the viewtext columns; these columns are fetched lazily when required.
          • I would like to let HIVE-1539 make the change as required; this feature requires only DataNucleus 2.1.1. Since 2.2 is not released yet, we can wait for the 2.2 release before moving to it, or HIVE-1539 can do the required testing before using the unreleased version.
          • When HIVE-1626 changes to the custom Stack implementation, it would be a one-line import change for this patch.
          • I have run checkstyle and fixed the issues. The only new issue reported is "Got an exception - java.lang.RuntimeException: Unable to get class information for MetaException." This is because checkstyle does not resolve generated classes; this issue is already present for other files referencing generated classes.

          @Paul

          • get_partitions_ps can be updated to use the new filtering method. This can be done separately since I am not sure of the existing use cases of get_partitions_ps, and the Howl use cases do not require it. The same applies to adding a new listPartitionNamesByFilter function. I can open a JIRA for these if required.
          • Test cases are present for the function added in this patch. If we add the above two, tests would be required for them along with the patch.
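The package.jdo change discussed in the first bullet can be sketched roughly as follows. This is a hypothetical fragment; the exact field and column names should be checked against the actual metastore model:

```xml
<!-- Mark the LONGVARCHAR view-text fields as lazily fetched so DataNucleus
     leaves them out of the "select distinct" issued for the default group. -->
<field name="viewOriginalText" default-fetch-group="false">
  <column name="VIEW_ORIGINAL_TEXT" jdbc-type="LONGVARCHAR"/>
</field>
<field name="viewExpandedText" default-fetch-group="false">
  <column name="VIEW_EXPANDED_TEXT" jdbc-type="LONGVARCHAR"/>
</field>
```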
          John Sichi added a comment -

          @Ajay: thanks for the explanations; I'm fine with those choices.

          Ajay Kidave added a comment -

          Could someone please review/commit this patch. Thanks.

          Paul Yang added a comment -

          I'm taking another look - should be ready later today

          Paul Yang added a comment -

          Looks good to me +1

          @Ajay - can you create that JIRA and assign it to me?

          Namit Jain added a comment -

          Once this is in, it would be useful to add an API like get_partitions_ps in Hive - I mean, get all sub-partitions.

          For example, if the table is partitioned on (ds, hr):
          something like

          show partitions (ds='2010-09-20', hr) should return all sub-partitions.

          Ning Zhang added a comment -

          @namit, the Hive metastore already has the API to get all sub-partitions given a partial specification like you provided – Hive.getPartitions(Table, partialPartSpec).

          Namit Jain added a comment -

          I meant, exposing it via the Hive QL directly.
          I don't think there is a way to do that currently.

          Ajay Kidave added a comment -

          @Paul : I have created HIVE-1660 for the optimizations to get_partitions_ps.

          John Sichi added a comment -

          Running this one through tests now.

          He Yongqiang added a comment -

          Just want to make sure that this will also work with the Python client. I found there are some small problems when calling several partition functions from Python.

          John Sichi added a comment -

          @Yongqiang: ant test just passed for me. Let me know if I should hold off on the commit until the python issues are resolved.

          Paul Yang added a comment -

          @Yongqiang - The new thrift function seems benign - what problems are you running into?

          He Yongqiang added a comment -

          (By "several partition functions" in my previous comment, I mean the existing partition functions.) So I just want to make sure the ones added in this JIRA will work fine for the Python client.

          @John, please go ahead and commit this. It is a really good one to have; we can fix problems later if there are any.

          John Sichi added a comment -

          Committed. Thanks Ajay!


            People

            • Assignee: Ajay Kidave
            • Reporter: Ajay Kidave
            • Votes: 0
            • Watchers: 3
