HIVE-2757: Hive can't find hadoop executor scripts without HADOOP_HOME set

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.8.0
    • Fix Version/s: 0.10.0
    • Component/s: CLI
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      The trouble is that in Hadoop 0.23 HADOOP_HOME has been deprecated. I think it would be really nice if bin/hive could be modified to capture the output of `which hadoop`
      and pass it to the JVM as a property.
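
      A minimal sketch of how such a lookup could work on the Java side. This is not the committed patch; the property name (hadoop.bin.path) and the exact fallback order are assumptions for illustration only:

      import java.io.File;

      public class HadoopBinResolver {
          static String resolveHadoopBin() {
              // 1. A launcher script (e.g. bin/hive) may have passed `which hadoop`
              //    down to the JVM as a system property.
              String bin = System.getProperty("hadoop.bin.path");
              if (bin != null) {
                  return bin;
              }
              // 2. Fall back to HADOOP_HOME (pre-0.23) or HADOOP_PREFIX (0.23+).
              for (String env : new String[] {"HADOOP_HOME", "HADOOP_PREFIX"}) {
                  String home = System.getenv(env);
                  if (home != null) {
                      return home + File.separator + "bin" + File.separator + "hadoop";
                  }
              }
              // 3. Last resort: rely on the shell resolving "hadoop" from PATH at exec time.
              return "hadoop";
          }
      }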

      Attachments

      1. HIVE-2757-2.patch.txt
        5 kB
        Roman Shaposhnik
      2. HIVE-2757.patch.txt
        2 kB
        Roman Shaposhnik
      3. HIVE-2757.patch.txt
        2 kB
        Roman Shaposhnik
      4. hive-2757.diff
        3 kB
        Buddhika Chamith De Alwis
      5. ASF.LICENSE.NOT.GRANTED--HIVE-2757.D3075.1.patch
        2 kB
        Phabricator


          Activity

          Ashutosh Chauhan added a comment -

          This issue was fixed and released as part of the 0.10.0 release. If you find an issue that seems related to this one, please create a new JIRA and link it to this one.

          Hudson added a comment -

          Integrated in Hive-trunk-hadoop2 #54 (See https://builds.apache.org/job/Hive-trunk-hadoop2/54/)
          HIVE-3014 [jira] Fix metastore test failures caused by HIVE-2757
          (Zhenxiao Luo via Carl Steinbach)

          Summary: HIVE-3014: Fix metastore test failures caused by HIVE-2757

          Test Plan: EMPTY

          Reviewers: JIRA, cwsteinbach

          Reviewed By: cwsteinbach

          Differential Revision: https://reviews.facebook.net/D3213 (Revision 1339004)
          HIVE-2757. Hive can't find hadoop executor scripts without HADOOP_HOME set (Roman Shaposhnik via cws) (Revision 1336906)

          Result = ABORTED
          cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1339004
          Files :

          • /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java

          cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1336906
          Files :

          • /hive/trunk/bin/ext/help.sh
          • /hive/trunk/bin/hive
          • /hive/trunk/bin/init-hive-dfs.sh
          • /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
          Hudson added a comment -

          Integrated in Hive-trunk-h0.21 #1434 (See https://builds.apache.org/job/Hive-trunk-h0.21/1434/)
          HIVE-3014 [jira] Fix metastore test failures caused by HIVE-2757
          (Zhenxiao Luo via Carl Steinbach)

          Summary: HIVE-3014: Fix metastore test failures caused by HIVE-2757

          Test Plan: EMPTY

          Reviewers: JIRA, cwsteinbach

          Reviewed By: cwsteinbach

          Differential Revision: https://reviews.facebook.net/D3213 (Revision 1339004)

          Result = SUCCESS
          cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1339004
          Files :

          • /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
          Hudson added a comment -

          Integrated in Hive-trunk-h0.21 #1423 (See https://builds.apache.org/job/Hive-trunk-h0.21/1423/)
          HIVE-2757. Hive can't find hadoop executor scripts without HADOOP_HOME set (Roman Shaposhnik via cws) (Revision 1336906)

          Result = FAILURE
          cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1336906
          Files :

          • /hive/trunk/bin/ext/help.sh
          • /hive/trunk/bin/hive
          • /hive/trunk/bin/init-hive-dfs.sh
          • /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
          Carl Steinbach added a comment -

          Committed to trunk. Thanks Roman!

          Roman Shaposhnik added a comment -

          Attaching a patch that incorporates all the feedback from the reviews and has been tested. Attaching it here since Phabricator is broken.

          Phabricator added a comment -

          ashutoshc has requested changes to the revision "HIVE-2757 [jira] hive can't find hadoop executor scripts without HADOOP_HOME set Submitting this for review on behalf of Roman Shaposhnik based on the patch he uploaded here: https://issues.apache.org/jira/secure/attachment/12525320/HIVE-2757.patch.txt".

          INLINE COMMENTS
          common/src/java/org/apache/hadoop/hive/conf/HiveConf.java:914 Please use File.separator instead of "/" to avoid OS-specific code.

          REVISION DETAIL
          https://reviews.facebook.net/D3075

          BRANCH
          HIVE-2757-executor-hadoop-home
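
          For illustration, the portable construction the inline review comment above asks for looks roughly like this (variable names and the sample path are illustrative, not taken from the patch):

          import java.io.File;

          public class SeparatorExample {
              public static void main(String[] args) {
                  String hadoopHome = "/usr/lib/hadoop";  // illustrative value
                  // Use the platform separator instead of a hard-coded "/"
                  String hadoopBin = hadoopHome + File.separator + "bin" + File.separator + "hadoop";
                  System.out.println(hadoopBin);
              }
          }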

          Phabricator added a comment -

          cwsteinbach requested code review of "HIVE-2757 [jira] hive can't find hadoop executor scripts without HADOOP_HOME set Submitting this for review on behalf of Roman Shaposhnik based on the patch he uploaded here: https://issues.apache.org/jira/secure/attachment/12525320/HIVE-2757.patch.txt".
          Reviewers: JIRA

          HIVE-2757. hive can't find hadoop executor scripts without HADOOP_HOME set

          The trouble is that in Hadoop 0.23 HADOOP_HOME has been deprecated. I think it
          would be really nice if bin/hive could be modified to capture the output of `which hadoop`
          and pass it to the JVM as a property.

          TEST PLAN
          EMPTY

          REVISION DETAIL
          https://reviews.facebook.net/D3075

          AFFECTED FILES
          common/src/java/org/apache/hadoop/hive/conf/HiveConf.java


          Carl Steinbach added a comment -

          @Roman: Can you submit a phabricator review request? Thanks.
          https://cwiki.apache.org/Hive/phabricatorcodereview.html

          Roman Shaposhnik added a comment -

          @Ashutosh,

          I think at this point I'd go for the minimally invasive patch (which is attached). Potential improvements would include merging the stuff that Buddhika posted (and better yet, following up with a patch that migrates us to ProcessBuilder). However, I believe all of those things need to be handled in separate JIRAs.

          @Edward,

          I agree with your comments on standardizing the environment. It is fine for Hive to keep depending on env. variables, but it also needs to have reasonable defaults.

          Roman Shaposhnik added a comment -

          Sorry for the delay – it seems I don't have much luck with running Hive unit tests on my machine.

          On the positive side – I tested this patch against a real cluster and it works as expected.

          Buddhika Chamith De Alwis added a comment -

          Recently I needed to get Hive running in local mode without assuming HADOOP_HOME is set. My use case was running Hive embedded inside an OSGi environment that hosts the Hive Thrift service, so I had to change the local-mode job submission code to invoke Hadoop directly instead of going through the Hadoop scripts. I am attaching the changes as a patch in case you folks find them useful. I took the liberty of defining a couple of new properties so that, if they are not explicitly set, everything continues to work through the Hadoop scripts as before.

          This doesn't remove the HADOOP_HOME requirement for starting Hive via the hive scripts, since those are heavily dependent on HADOOP_HOME being set. In my case that was sufficient because I was starting the Hive server programmatically without going through the scripts. I also used a HADOOPLIB property pointing to a location where all Hadoop-related dependencies are present; in my case this was not bound to HADOOP_HOME, just a specific directory inside the OSGi container where all the Hive and Hadoop jars were present together.

          Ashutosh Chauhan added a comment -

          @Roman,
          Your patch is definitely an improvement over the status quo. HADOOP_HOME is deprecated and we have to accommodate HADOOP_PREFIX for YARN. Instead of making those changes in bin/hive, it definitely makes sense to do it here. Also, the patch doesn't change the existing behavior, so I am +1 on the approach.
          Can you post the full patch so that I can see whether there are other changes you have in mind?

          Edward Capriolo added a comment -

          We are missing the larger issue here. The fact that Bigtop only cares about two entry points does not mean there are only two we should care about. We have the Hive web interface, the Hive Thrift service, and the lineage tool. Oozie should probably interface with the Hive Thrift service; I did that here: https://github.com/edwardcapriolo/m6d_oozie/blob/master/src/main/java/com/m6d/oozie/HiveServiceBAction.java. Thrift is the best programmatic way to interface with Hive.

          We have a long standing ticket open to standardize the environment for all these processes so they all have a common entry point. (I can not find the ticket ATM).

          We are always going to need something like HADOOP_HOME, because Hive assumes the Hadoop jars are "provided" (in Maven terms) and the Hadoop configuration is "provided" – unless we copy all the Hadoop jars into hive/lib and copy all the Hadoop configuration into Hive. With the patch above we still use HADOOP_HOME to build the classpath to start Hive.

          Also, HADOOP_HOME is deprecated, but the setup docs (http://hadoop.apache.org/common/docs/r0.23.1/hadoop-yarn/hadoop-yarn-site/SingleCluster.html) say:

          "Assuming that the environment variables $HADOOP_COMMON_HOME, $HADOOP_HDFS_HOME, $HADOOP_MAPRED_HOME, $YARN_HOME, $JAVA_HOME and $HADOOP_CONF_DIR have been set appropriately. Set $YARN_CONF_DIR the same as $HADOOP_CONF_DIR"

          I still feel like hive is going to end up using environment variables to start up.

          Carl Steinbach added a comment -

          @Ed: o.a.h.Configuration is responsible for locating the Hadoop configuration files on the classpath at time of initialization. Since HiveConf extends Configuration we inherit this functionality for free. I agree that it may eventually become necessary to dynamically modify the classpath at runtime to include the Hadoop conf directory, but right now we don't do that, and in the meantime ConfVars.HADOOPCONF is dead code (which is really confusing, since developers always assume that the code they read is used somewhere).
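
          As a small illustration of that inheritance (assuming the standard Hive/Hadoop APIs of that era; the property key shown is only an example):

          import org.apache.hadoop.hive.conf.HiveConf;

          public class ConfDemo {
              public static void main(String[] args) {
                  // HiveConf extends org.apache.hadoop.conf.Configuration, so Hadoop
                  // *-site.xml files found on the classpath are loaded automatically.
                  HiveConf conf = new HiveConf(ConfDemo.class);
                  System.out.println(conf.get("fs.default.name"));  // example Hadoop property
              }
          }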

          Roman Shaposhnik added a comment -

          @Carl, the bin/hive shell script calling bin/hadoop is a bit of an orthogonal issue here, I think. The basic problem I'm trying to address in this JIRA is this: Hive has at least two entry points that Bigtop cares about for integration purposes:

          1. bin/hive
          2. org.apache.hadoop.hive.cli.CliDriver

          Regardless of which one is used, though, Hive's Java code will end up exec'ing the hadoop launcher script when it comes to job submission. The reason I'm bringing up #2 is simple: given that #2 exists and is used by things like Oozie, we can't rely on shell-level computation of the environment (like finding the hadoop executable and passing it to the Java code via a property, etc.).

          What this JIRA is trying to accomplish is to push the logic of finding the hadoop executor script (if none is given) back into the Java code, since because of #2 that is the only place where it can be done reliably.

          Does that make sense?

          Roman Shaposhnik added a comment -

          @Edward, I'm not sure I understand – at this point it is an unused variable and serves no purpose. If, during the course of developing this patch, we find a use for it, then sure, we'll keep it.

          Does that make sense?

          Edward Capriolo added a comment -

          I am -1 on removing ConfVars.HADOOPCONF, because once we do remove HADOOP_BIN, Hive is going to need some way to locate the Hadoop configuration files, and then this variable becomes useful again.

          Carl Steinbach added a comment -

          "This unique feature of Hive makes discovery of the Hadoop executor script happen at the level of Java code."

          Just to make sure I understand, you're saying that right now the bin/hive shell script calls bin/hadoop, and you want to modify it so that bin/hive bypasses bin/hadoop and instead calls java directly with the appropriate Hive class? That sounds like an improvement to me. Even better would be to eliminate the Hive shell scripts entirely (or at least whittle them down to nothing) and push most of this logic into Java.

          Note that we're actually moving in the direction of removing the dependency on bin/hadoop entirely. HIVE-2646 is part of that effort and just got committed.

          "P.S. I have also taken the liberty of removing ConfVars.HADOOPCONF since I don't think it is used anymore."

          That looks good to me. I wasn't able to find any references to this variable either.

          Roman Shaposhnik added a comment -

          Here's an example patch that is not meant for inclusion, but rather to generate discussion about whether such an approach would be acceptable.

          Basically, the fundamental problem is that Hive's Java code can be used in other projects (like Oozie), and hence it can't rely on launcher shell scripts always passing the correct set of properties along based on querying the environment at the shell level.

          This unique feature of Hive makes discovery of the Hadoop executor script happen at the level of Java code. The patch contains a very naive attempt at doing that while maintaining backward compatibility with Hadoop 0.20.x and older releases. The most notable feature that is still missing is the ability to discover a Hadoop that is on the user's PATH. Before I implement that, however, I'd like to ask whether exec'ing via ProcessBuilder wouldn't be a better option than manually trying to parse PATH (error prone).

          Please let me know what you think.

          P.S. I have also taken the liberty of removing ConfVars.HADOOPCONF since I don't think it is used anymore.
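
          A rough sketch of the PATH-scanning approach under discussion (a hypothetical helper, not the attached patch; as noted above, exec'ing a bare "hadoop" command via ProcessBuilder and letting the shell resolve it may well be the simpler option):

          import java.io.File;

          public class HadoopLocator {
              // Hypothetical helper: walk PATH entries looking for an executable "hadoop".
              static String findHadoopOnPath() {
                  String path = System.getenv("PATH");
                  if (path == null) {
                      return null;
                  }
                  for (String dir : path.split(File.pathSeparator)) {
                      File candidate = new File(dir, "hadoop");
                      if (candidate.isFile() && candidate.canExecute()) {
                          return candidate.getAbsolutePath();
                      }
                  }
                  return null;  // caller falls back to HADOOP_HOME/bin/hadoop or reports an error
              }
          }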


            People

            • Assignee: Roman Shaposhnik
            • Reporter: Roman Shaposhnik
            • Votes: 0
            • Watchers: 6
