Hadoop Common / HADOOP-6105

Provide a way to automatically handle backward compatibility of deprecated keys

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.21.0
    • Component/s: conf
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      There are cases where we have had to deprecate configuration keys. Use cases include changing the names of variables to better match intent, and splitting a single parameter into two (e.g., separate values for maps and reduces).

      In such cases, we typically provide a backwards-compatible option for the old keys. The handling of such cases is common enough that it is worth supporting generically in the Configuration class. Some initial discussion started in HADOOP-5919, but since the project split happened in between, we decided to open this issue to fix it in common.
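
      The mechanism described above can be sketched as a minimal model: a static map from each deprecated key to its replacement key(s), through which set and get are redirected. The class and method names below (DeprecationSketch, addDeprecation) are illustrative stand-ins, not the actual Hadoop Configuration code.

      ```java
      import java.util.HashMap;
      import java.util.Map;

      public class DeprecationSketch {
          // deprecated key -> replacement key(s), registered once per process
          private static final Map<String, String[]> deprecatedKeyMap = new HashMap<>();
          private final Map<String, String> properties = new HashMap<>();

          public static void addDeprecation(String key, String[] newKeys) {
              if (key == null || newKeys == null || newKeys.length == 0) {
                  throw new IllegalArgumentException("key and newKeys must be non-null");
              }
              deprecatedKeyMap.put(key, newKeys);
          }

          public void set(String key, String value) {
              String[] newKeys = deprecatedKeyMap.get(key);
              if (newKeys != null) {
                  System.err.println("WARN: " + key + " is deprecated");
                  for (String nk : newKeys) {
                      properties.put(nk, value);  // writes land on the replacement keys
                  }
              } else {
                  properties.put(key, value);
              }
          }

          public String get(String key) {
              String[] newKeys = deprecatedKeyMap.get(key);
              if (newKeys != null) {
                  return properties.get(newKeys[0]);  // read through the first replacement
              }
              return properties.get(key);
          }

          public static void main(String[] args) {
              addDeprecation("mapred.task.timeout", new String[] {"mapreduce.task.timeout"});
              DeprecationSketch conf = new DeprecationSketch();
              conf.set("mapred.task.timeout", "600000");                // old key: redirected, warns
              System.out.println(conf.get("mapreduce.task.timeout"));   // 600000
              System.out.println(conf.get("mapred.task.timeout"));      // 600000
          }
      }
      ```

      With this shape, old and new keys stay consistent by construction, because the old key never has its own storage slot.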

      1. HADOOP-6105-9.patch
        26 kB
        V.V.Chaitanya Krishna
      2. HADOOP-6105-8.patch
        23 kB
        V.V.Chaitanya Krishna
      3. HADOOP-6105-7.patch
        23 kB
        V.V.Chaitanya Krishna
      4. HADOOP-6105-6.patch
        21 kB
        V.V.Chaitanya Krishna
      5. HADOOP-6105-5.patch
        21 kB
        V.V.Chaitanya Krishna
      6. HADOOP-6105-4.patch
        21 kB
        V.V.Chaitanya Krishna
      7. HADOOP-6105-3.patch
        21 kB
        V.V.Chaitanya Krishna
      8. HADOOP-6105-2.patch
        21 kB
        V.V.Chaitanya Krishna
      9. HADOOP-6105-10.patch
        26 kB
        V.V.Chaitanya Krishna
      10. HADOOP-6105-1.patch
        15 kB
        V.V.Chaitanya Krishna
      11. HADOOP-6105.patch
        17 kB
        V.V.Chaitanya Krishna
      12. HADOOP-6105.patch
        20 kB
        V.V.Chaitanya Krishna

        Issue Links

          Activity

          Todd Lipcon added a comment -

          It seems there is a bit of a flaw in the way this JIRA was done. Could those who are still watching this please take a look at HADOOP-7287? Thanks.

          Hudson added a comment -

          Integrated in Hadoop-Common-trunk #89 (See http://hudson.zones.apache.org/hudson/job/Hadoop-Common-trunk/89/)
          HADOOP-6105. Adds support for automatically handling deprecation of configuration keys. Contributed by V.V.Chaitanya Krishna.

          Hemanth Yamijala added a comment -

          I committed this to trunk. Thanks, Chaitanya!

          Hudson added a comment -

          Integrated in Hadoop-Common-trunk-Commit #15 (See http://hudson.zones.apache.org/hudson/job/Hadoop-Common-trunk-Commit/15/)
          HADOOP-6105. Adds support for automatically handling deprecation of configuration keys. Contributed by V.V.Chaitanya Krishna.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12418793/HADOOP-6105-10.patch
          against trunk revision 812031.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 6 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/20/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/20/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/20/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/20/console

          This message is automatically generated.

          V.V.Chaitanya Krishna added a comment -

          Uploading patch with the above comments implemented.

          The warning message will be printed on every set, and also on the first get of a deprecated key after every reload of the configuration.

          Hemanth Yamijala added a comment -

          The changes look mostly fine. I have just a few minor comments:

          • "adds the deprecated key to the deprecation map." -> Capitalize 'adds'.
          • addDeprecation should provide a way to specify a custom message.
          • Throw an IllegalArgumentException if the key or value parameters are invalid.
          • getWarningMessage should cache the default message after it has built it.
          • [Issue] Should we warn every time in get? Will it prove to be too much information, like we discovered when we did MAPREDUCE-40? One option could be to log every time in set, and log once in processDeprecatedKeys() to cover get. Thoughts?
          • "It returns the value of the new key, if the old key is deprecated." - suggest rewording slightly to "If the key is deprecated, it returns the value of the first key that replaces the deprecated key." Also, use one message consistently for all APIs.
          • I would suggest that set not recursively call set again for the replacement keys. We can avoid some additional function calls and lookups that way. This means replacement keys can't themselves be deprecated, but I think that's a very reasonable assumption to make.
          • It would be good to check the consistency of get for old and new keys for all cases in the test case.
          • We need one test case for mapping one key to more than one new key. This must be tested for both set and get. I would suggest a new test case for this.
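
          The "warn every time in set, warn once to cover get" policy suggested above could be sketched as follows; the class and method names are hypothetical, not taken from the patch.

          ```java
          import java.util.HashSet;
          import java.util.Set;

          public class WarnOncePolicy {
              // deprecated keys already warned about via get() since the last reload
              private final Set<String> warnedOnGet = new HashSet<>();

              // set() of a deprecated key warns unconditionally
              public void onSet(String deprecatedKey) {
                  System.err.println("WARN: " + deprecatedKey + " is deprecated");
              }

              // get() warns only the first time per key; returns whether it warned
              public boolean onGet(String deprecatedKey) {
                  boolean first = warnedOnGet.add(deprecatedKey);
                  if (first) {
                      System.err.println("WARN: " + deprecatedKey + " is deprecated");
                  }
                  return first;
              }

              // after a configuration reload, the next get() warns again
              public void onReload() {
                  warnedOnGet.clear();
              }

              public static void main(String[] args) {
                  WarnOncePolicy p = new WarnOncePolicy();
                  System.out.println(p.onGet("mapred.task.timeout"));  // true  (warned)
                  System.out.println(p.onGet("mapred.task.timeout"));  // false (suppressed)
                  p.onReload();
                  System.out.println(p.onGet("mapred.task.timeout"));  // true  (warned again)
              }
          }
          ```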
          Sreekanth Ramakrishnan added a comment -

          The changes look fine to me.

          V.V.Chaitanya Krishna added a comment -

          Changes introduced in the new patch:

          • Made hadoop.conf.extra.classes a static final string.
          • Introduced a test case which ensures that reloadConfiguration has no side effects on deprecations or on get() of the new and old keys.
          • To ensure that the deprecatedKeyMap data structure is not modified after all resources are loaded for the first time, it is made unmodifiable after being populated.
          • Modified set(String, String) to ensure that even if set() is called before the first get(), the deprecation information is populated and keys are set accordingly.
          • Wrapped the methods populateDeprecationMapping() and processDeprecatedKeys() in a new method, processDeprecation(), to make the code easier to follow.
          V.V.Chaitanya Krishna added a comment -
          • Uploading the patch with the above suggestions implemented. A new test case, TestConfigurationDeprecation, was added to test the deprecation functionality.
          • As mentioned in the above comments, reloadConfiguration() will not flush the deprecated key mappings, and it will not trigger re-initialization of any class that has already been initialized due to the presence of its class name in the value of the property hadoop.conf.extra.classes.
          • The value of hadoop.conf.extra.classes is currently set to the class name of JobConf in mapreduce. The class name(s) in hdfs which extend Configuration and whose static blocks have hdfs-related deprecations can be added to it.
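
          The hadoop.conf.extra.classes mechanism relies on a JVM guarantee: loading a class by name runs its static initializer, which is where a Configuration subclass would register its deprecations. A self-contained illustration of that behavior, with a stand-in nested class instead of a real Hadoop class such as JobConf:

          ```java
          import java.util.ArrayList;
          import java.util.List;

          public class ExtraClassLoading {
              // stands in for the global deprecation map
              static final List<String> registered = new ArrayList<>();

              // stands in for a Configuration subclass listed in hadoop.conf.extra.classes
              public static class ExtraConf {
                  static {
                      // a real subclass would call Configuration.addDeprecation(...) here
                      registered.add("mapred.job.name -> mapreduce.job.name");
                  }
              }

              public static void main(String[] args) throws Exception {
                  // hypothetical value of the config key: comma-separated class names
                  String extraClasses = "ExtraClassLoading$ExtraConf";
                  for (String name : extraClasses.split(",")) {
                      try {
                          // triggers the static block; no newInstance() call is needed
                          Class.forName(name.trim());
                      } catch (ClassNotFoundException e) {
                          // class not on the classpath: suppress, but log at WARN
                          System.err.println("WARN: could not load " + name);
                      }
                  }
                  System.out.println(registered);  // the deprecations registered via loading
              }
          }
          ```

          This is also why a missing class name only costs a warning: Class.forName throws ClassNotFoundException, which can be caught and logged without aborting configuration loading.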
          Hemanth Yamijala added a comment -

          > Finally, we should handle the cases where one or all of the keys which are defined in extra.conf are not present in the classpath. I would suggest suppressing the exception and logging it in debug mode.

          This is serious enough to log in WARN mode, actually. Suppressing the exception is fine, I suppose.

          Sreekanth Ramakrishnan added a comment -

          Took a look at the patch; the following are my comments:

          • org.apache.hadoop.conf.Configuration.addDeprecation(String, String[]) should be made public so classes can access it.
          • We don't need to call Class.newInstance(): we assume that child classes have a static block which adds the deprecated keys, so class loading alone takes care of it.
          • We should make processDeprecation and its data structures thread-safe.
          • Finally, we should handle the cases where one or all of the keys defined in extra.conf are not present in the classpath. I would suggest suppressing the exception and logging it in debug mode.
          • Lastly, move the setting of the deprecation-loaded boolean to the last line of processDeprecation.

          We should also take care that deprecated keys are not reloaded and reprocessed by the Configuration.reload() method.

          • Define the key in core-default.xml and mark it final.
          V.V.Chaitanya Krishna added a comment -

          The previous patch does not handle deprecations for mapreduce and hdfs. To support them, a new key, hadoop.conf.extra.classes, is added in core-site.xml. Its value is a list of class names of classes from mapreduce and hdfs which extend Configuration and whose static blocks can add deprecations using the addDeprecation method. These classes are loaded via reflection, ensuring that the deprecations added in their static blocks are taken into account.

          Uploading a patch with the above changes made.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12416861/HADOOP-6105-5.patch
          against trunk revision 804918.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 6 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/611/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/611/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/611/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/611/console

          This message is automatically generated.

          V.V.Chaitanya Krishna added a comment -

          Uploading the patch with the findbugs warning fixed.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12416391/HADOOP-6105-4.patch
          against trunk revision 804918.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 6 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 1 new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/609/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/609/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/609/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/609/console

          This message is automatically generated.

          Sreekanth Ramakrishnan added a comment -

          The changes look fine to me.
          +1 to the patch.

          V.V.Chaitanya Krishna added a comment -

          Uploading the patch with the above-mentioned correction made.

          Sreekanth Ramakrishnan added a comment -

          Took a look at the latest patch. The last test condition, i.e., the simulation of the user setting the deprecated value and the framework using the new value, is not being asserted; can you please add the assert condition there?

          Apart from that all other changes look fine.

          V.V.Chaitanya Krishna added a comment -

          Uploaded a patch with the following test cases added:

          1. User sets an old key and gets new key.

          2. User sets a new key and gets old key.

          The result should be the most recently set value, i.e., the value corresponding to the old key in the first case and the value corresponding to the new key in the second case.
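
          These two scenarios can be sketched with a toy configuration in which an old key and its replacement share a single underlying slot (keyed by the new name), so whichever set happens last wins regardless of which alias is read. All names here are illustrative, not from the patch.

          ```java
          import java.util.HashMap;
          import java.util.Map;

          public class LastWriteWins {
              static final Map<String, String> alias = new HashMap<>();  // old key -> new key
              static final Map<String, String> props = new HashMap<>();  // storage, new names only

              // resolve any key to the name it is actually stored under
              static String canonical(String key) {
                  return alias.getOrDefault(key, key);
              }

              static void set(String key, String value) { props.put(canonical(key), value); }
              static String get(String key) { return props.get(canonical(key)); }

              public static void main(String[] args) {
                  alias.put("old.key", "new.key");
                  set("old.key", "v1");                  // case 1: set old, get new
                  System.out.println(get("new.key"));    // v1
                  set("new.key", "v2");                  // case 2: set new, get old
                  System.out.println(get("old.key"));    // v2 (most recent set wins)
              }
          }
          ```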

          V.V.Chaitanya Krishna added a comment -

          Uploading the new patch with the above suggestions being implemented.

          Sreekanth Ramakrishnan added a comment -

          Took a look at the patch file:
          The code changes look fine. The following are comments on the test case:

          In the test case, assert that get of the old and new key matches correctly. We just need to cover the following cases:

          • old final, new final -> old
          • old non-final, new final -> new
          • old final, new non-final -> old
          • old non-final, new non-final -> old

          Also add a test case to check the user-code case, i.e., the configuration resource has only new keys. Then add deprecated keys using addDeprecation, and do get, set, and get of the old keys.

          V.V.Chaitanya Krishna added a comment -

          Some changes made in the new patch are:

          • A method to check if a key is deprecated.
          • A method to check if a key was previously loaded with the value of the corresponding deprecated key.
          • A method to set the value of a deprecated key onto the new keys (when a deprecated key occurs in xml files).
          • A method to generate the appropriate warning message to be logged when a deprecated key is being used.
          V.V.Chaitanya Krishna added a comment -

          Uploaded the patch with the above suggested modifications done.

          rahul k singh added a comment -

          Changes look good; some minor comments:

          Configuration.java

          1. Check for null in the addDeprecation method.
          2. There are some extra blank lines in front of some of the methods; remove them.
          3. Try to use the foreach syntax wherever possible. It makes the code simpler.

          TestConfiguration.java

          1. In the tearDown method, you should also delete the test-config.xml file.
          2. Make sure that CONFIG3 is deleted.

          The diff has "Index: junitvmwatcher8758949811953274592.properties"; I don't think it is required.

          The diff also includes test-config3.xml; I think this is because CONFIG3 is not being deleted in the test case.

          V.V.Chaitanya Krishna added a comment -

          Uploading the patch

          V.V.Chaitanya Krishna added a comment -

          Assumptions/Requirements:

          • Deprecated keys never occur in default.xml files.
          • There won't be any storage for deprecated keys in the Configuration object. Instead, the new key mappings will be used.
          • The precedence order is as follows:
            1. Value which comes with the final attribute set to true.
            2. Value occurring with the deprecated key.

          The following table depicts the various cases, according to whether the key in question is final and whether it is deprecated:

          NOTE:

          1. Key - the current key to be loaded in the loadResource method. It can be a deprecated key (old), a key in default.xml (key_default), or a key in site.xml (key_site).
          2. Prev. Occurrence - the attribute which changed the value of the Key most recently.
          3. isFinal - true if Key has the final property set to true.
          4. isPrevFinal - true if Prev. Occurrence has the final property set to true.
          5. Value - the value expected after Key is loaded. It can be any of the values corresponding to old, key_default, or key_site.
          6. warn - logs a warning message indicating that an attempt was made to override a final parameter.
             Also, a warning message indicating the usage of a deprecated key is logged whenever Key is a deprecated one.
          Key       isFinal  Prev. Occurrence  isPrevFinal  Value
          old       true     key_default       true         old
          old       true     key_default       false        old
          old       true     key_site          true         old
          old       true     key_site          false        old
          old       false    key_default       true         key_default (warn)
          old       false    key_default       false        old
          old       false    key_site          true         key_site (warn)
          old       false    key_site          false        old
          key_site  true     old               true         old
          key_site  true     old               false        key_site
          key_site  true     key_default       true         key_default (warn)
          key_site  true     key_default       false        key_site
          key_site  false    old               true         old
          key_site  false    old               false        old
          key_site  false    key_default       true         key_default (warn)
          key_site  false    key_default       false        key_site
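          The sixteen rows above collapse into a small decision rule. The sketch below encodes it in plain Java; `shouldOverride` and all other names are hypothetical illustrations, not the actual loadResource code.

```java
// A sketch of the override decision the table above encodes. "Key" is the
// entry currently being loaded; "prev" is whatever most recently set the
// value. All names here are invented for illustration.
public class DeprecationPrecedence {

    // Returns true if the key being loaded should replace the previous value.
    static boolean shouldOverride(boolean keyIsDeprecated, boolean keyIsFinal,
                                  boolean prevIsDeprecated, boolean prevIsFinal) {
        if (keyIsFinal) {
            // A final deprecated (old) key always wins; a final new key wins
            // unless the previous occurrence was itself final.
            return keyIsDeprecated || !prevIsFinal;
        }
        if (prevIsFinal) {
            return false; // attempted override of a final parameter (warn)
        }
        // Neither side is final: the deprecated key is preferred.
        return keyIsDeprecated || !prevIsDeprecated;
    }

    static void check(boolean condition) {
        if (!condition) throw new AssertionError("table row mismatch");
    }

    public static void main(String[] args) {
        // Spot-check table rows: (Key, isFinal, Prev. Occurrence, isPrevFinal) -> Value
        check(shouldOverride(true, true, false, true));    // old true  key_default true  -> old
        check(!shouldOverride(true, false, false, true));  // old false key_site    true  -> key_site (warn)
        check(shouldOverride(false, true, true, false));   // key_site true  old false    -> key_site
        check(!shouldOverride(false, false, true, false)); // key_site false old false    -> old
    }
}
```

          Running the checks against all sixteen rows confirms the rule reproduces the table exactly.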
          Philip Zeyliger added a comment -

          I think you have to think about how Hadoop's notion of a "final" flag interacts with this, too. If a system administrator has set either A or B to be final, then that value must override any user-submitted value, regardless of which was set first.

          rahul k singh added a comment -

          To make the above proposals more clear:

          for example:
          ==========
          A (a deprecated key),
          B (the new key mapping for A),
          SA (set value for A),
          SB (set value for B),
          GA (get("A")),
          GB (get("B")).

          1. Always maintain new keys only. At the time of loading the configuration XML, if both A and B are present, we simply replace B's value with A's value.

          • Advantage of this approach:
            • The whole system is consistent. We always maintain a single set of values, so the behavior is deterministic with respect to whichever of SA or SB was called last.
          • Disadvantage:
            • Deprecated values would be overwritten: if we call SB, then A's value also changes. // Is this an issue?

          2. Always give preference to deprecated keys. If both A and B are present in the configuration, always prefer A.

          • Advantage:
            • The system is deterministic, as we always get the deprecated value if present.
          • Disadvantage:
            • GB would check whether A is set and, if so, return A's value. So if the user calls SB and then GB, they would expect B's value but would instead get A's value.

          Any comments ?
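          Option 1's single-storage behaviour, including the disadvantage noted above, can be illustrated with a toy map-backed configuration. The `OptionOneConf` class and key names below are invented for this sketch.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of proposal 1 above: the deprecated key A and its
// replacement B share a single storage slot under B. Names are invented.
public class DeprecationOptions {
    static final String A = "A", B = "B";

    static class OptionOneConf {
        private final Map<String, String> store = new HashMap<>();
        // Reads and writes of the deprecated key A are redirected to B.
        void set(String key, String value) { store.put(key.equals(A) ? B : key, value); }
        String get(String key) { return store.get(key.equals(A) ? B : key); }
    }

    public static void main(String[] args) {
        OptionOneConf conf = new OptionOneConf();
        conf.set(A, "oldValue");   // SA
        conf.set(B, "newValue");   // SB overwrites the shared slot
        // Deterministic: whichever set ran last wins for both GA and GB...
        if (!"newValue".equals(conf.get(A))) throw new AssertionError();
        // ...which is exactly the noted disadvantage: SB changed A's value too.
        if (!"newValue".equals(conf.get(B))) throw new AssertionError();
    }
}
```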

          rahul k singh added a comment -

          The first comment defines the initial proposal, which says that when both new and old values are set in the configuration, we give preference to the new values. We felt this is not correct; we should give preference to the old value, since we are trying to be backward compatible here.

          So if the configuration has both a deprecated key and its new key, the new key's value is replaced by the deprecated key's value.

          Taking the above into consideration, we have two ways of solving this:

          1. Try to always maintain a single set of key-value pairs. When we have both a deprecated key and a new key, only maintain the new key. While reading the configuration XMLs, if we encounter a deprecated key, we simply replace the new key's value with the deprecated key's value. When we "set" a deprecated key in the configuration, we set the new key with the value passed. When we "get", we simply return the new key's mapping.

          2. Always give preference to the deprecated key. While setting new values, check whether the deprecated key is present; if yes, set the deprecated key's value instead.

          We would like to go ahead with option 1, where we make sure the old key is preferred at the start, while loading the configuration; after that, whichever key was set most recently wins.

          Philip Zeyliger added a comment -

          You might consider logging a warning every time a deprecated key that has been set is encountered (via either get or set).

          V.V.Chaitanya Krishna added a comment -

          Assumptions:
          1. None of the *-default.xml files would have deprecated keys.

          ========
          Changes to set and get method in configuration.

          get(name): the key (name) is checked against the deprecation map. If present,
          the method returns the value of the first key in its list of new keys.

          set(name, value): the key (name) is checked against the deprecation map. If
          present, the method sets all of the replacing new keys to "value".

          ========
          The following table describes the set and get behaviour:

          Old Key  New Key  get(oldKey)              get(newKey)
          set      set      whichever was set last   whichever was set last
          unSet    unSet    default value of newKey  default value of newKey
          set      unSet    oldValue                 oldValue
          unSet    set      newValue                 newValue

          Note: "set" and "unSet" in the above table refer to the keys being
          set dynamically, i.e. by calling Configuration setter methods.
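          As a minimal sketch of this get/set behaviour (the `DeprecatedConf` class and key names are made up for illustration, not the patch's actual code):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the proposed behaviour: a deprecation map from an old key to a
// list of new keys; set writes all replacements, get reads the first one.
public class DeprecatedConf {
    private final Map<String, String> store = new HashMap<>();
    private final Map<String, List<String>> deprecationMap = new HashMap<>();

    void addDeprecation(String oldKey, String... newKeys) {
        deprecationMap.put(oldKey, Arrays.asList(newKeys));
    }

    void set(String name, String value) {
        List<String> newKeys = deprecationMap.get(name);
        if (newKeys != null) {
            for (String k : newKeys) store.put(k, value); // set all replacements
        } else {
            store.put(name, value);
        }
    }

    String get(String name) {
        List<String> newKeys = deprecationMap.get(name);
        // A deprecated key reads through to its first replacement.
        return store.get(newKeys != null ? newKeys.get(0) : name);
    }

    public static void main(String[] args) {
        DeprecatedConf conf = new DeprecatedConf();
        conf.addDeprecation("mapred.old.key", "mapreduce.new.key");
        conf.set("mapred.old.key", "oldValue");     // old set, new unSet -> oldValue
        if (!"oldValue".equals(conf.get("mapreduce.new.key"))) throw new AssertionError();
        conf.set("mapreduce.new.key", "newValue");  // whichever was set last wins
        if (!"newValue".equals(conf.get("mapred.old.key"))) throw new AssertionError();
    }
}
```

          Because both keys share one storage slot, the "set/set" row of the table falls out naturally: the most recent set wins for both getters.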

          Philip Zeyliger added a comment -

          Owen,

          Apologies for missing this e-mail for so long. I'm behind on the "all-jira" bucket, and I failed to set a watch.

          Hemanth, you should definitely forge ahead with the simple, expedient solution.

          I'd like to convince you and Owen that the more complicated proposal is a net win (and I've used a similar system in the past), but I think the best way to do that is to actually write the code and transform a few usages. I've been busy with some other deadlines, so when I get there, I'll file a JIRA and bother you all again.

          (To answer Owen's questions: the couple of classes for ConfigVariable go into the configuration package; users are welcome to use the same classes to set their variables, or they can set them manually; the documentation for the variables themselves is generated, the documentation for the system lives in JavaDoc on the individual classes and the package.)

          – Philip

          Owen O'Malley added a comment -

          Philip,
          I think this is too much structure for what is gained. In particular, it replaces a relatively simple string-to-string map with a lot of code. Where does all of the code go? How does it interact with the user's job conf? How do we document it?

          I think we should go ahead with the simple approach for now...

          Philip Zeyliger added a comment -

          Hemanth,

          This JIRA is about backwards-compatibility of deprecated keys, which is something my comment addresses, so I thought it fit in well here. Think of it as an alternative solution to the problem you're trying to solve by keeping the map of deprecated keys in Configuration.java. Keeping a deprecation map is expedient and simple, but I think it may hamper a better, longer-term solution.

          The design goals above are "out of thin air" (in the sense that they haven't been discussed on JIRA outside of the JIRAs mentioned above and MAPREDUCE-475), though I hope they're reasonable. They were discussed a bit at http://wiki.apache.org/hadoop/DeveloperOffsite20090612, too. That said, I hope they help to frame the conversation a bit.

          I very very much want there to be a path to be able to rename configuration keys, but I want to make sure that the solution that comes out of this JIRA is compatible with some future work.

          – Philip

          Hemanth Yamijala added a comment -

          Philip, this seems a very ambitious change to the Configuration framework. Before I can comment further, I would like to look at the exact JIRA where this is being discussed. Though I could find a couple of JIRAs that seem related, none had all the points that you've mentioned here. Can you point me to such a JIRA, if it exists?

          If not, I think it is important to open a new JIRA to discuss these points. That way it will find a better audience and we can have a better discussion. Please let me know.

          Philip Zeyliger added a comment -

          I'm not enamored of this approach and would like to propose
          a slightly heavier-weight, but, I think, cleaner approach
          than stuffing more logic into the Configuration class.
          My apologies for coming to this conversation a bit late.

          If you don't want to read a long e-mail, skip down to the code examples
          at the bottom.

          Before I get to the proposal, I wanted to lay out what I think the goals
          are. Note that HADOOP-475 is also related.

          • Standardization of configuration names, documentation, and
            value formats. Today, the names tend to appear in the code, or, at best,
            in constants in the code, and the documentation, when it exists,
            may be in -default.xml. It would be nice if it was very difficult
            to avoid writing documentation for the variable you're introducing.
            Right now there are and have been a handful of bugs where the default
            in the code is different than the default in the XML file, and
            that gets really confusing.
          • Backwards compatibility. We'd love to rename "mapred.foo" and "mr.bar"
            to be consistent, but we want to maintain backwards compatibility.
            This ticket is all about that.
          • Availability to user code. Users should be able to use configuration the same way the core does.
            Users pass information to their jobs via Configuration, and they should
            use the same mechanism. This is true today.
          • Type-safety. Configurations have a handful of recurring types: number of bytes,
            filename, URI, hostport combination, arrays of paths, etc. The parsing
            is done in an ad-hoc fashion, which is a shame, since it doesn't have to be.
            It would be nice to have some generic runtime checking of configuration
            parameters, too, and perhaps even ranges (that number can't be negative!).
          • Upgradeability to a different configuration format. I don't think we'll
            leave a place where configuration has to be a key->value map (especially
            because of "availability to user code"), but it would eventually be nice
            if configuration could be queried from other places, or if the
            values could have a bit more structure. (For example, we could use XML
            to separate out a list of paths, instead of blindly using comma-delimited,
            unescaped text.)
          • Development ease. It ought to be easier to find the places where configuration
            is used. Today the best we can do is a grep, and then follow references
            manually.
          • Autogenerated documentation. No-brainer.
          • Ability to specify visibility, scope, and stability. Along the lines of HADOOP-5073, configuration
            variables should be classified as deprecated, unstable, evolving, and stable. It would be
            nice to introduce variables (say, that were used for tuning), with the expectation that they are
            not part of the public API. Use at your own risk sort of thing.

          My proposal is to represent every configuration variable that's accessed in
          the Hadoop code by a static instance of a ConfigVariable<T> class. The interface
          is something like:

          public interface ConfigValue<T> {
            T get(Configuration conf);
            T getDefault();
            void set(Configuration conf, T value);
            String getHelp();
          }
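One way the interface sketched above could be realized is shown below. To stay self-contained, a plain Map<String,String> stands in for Hadoop's Configuration, and `IntConfigValue` and its fields are invented for this sketch.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative realization of the proposed ConfigValue<T> interface, with a
// Map<String,String> standing in for Configuration. Names are hypothetical.
public class ConfigValueDemo {

    interface ConfigValue<T> {
        T get(Map<String, String> conf);
        T getDefault();
        void set(Map<String, String> conf, T value);
        String getHelp();
    }

    static class IntConfigValue implements ConfigValue<Integer> {
        private final String name;
        private final int defaultValue;
        private final String help;

        IntConfigValue(String name, int defaultValue, String help) {
            this.name = name;
            this.defaultValue = defaultValue;
            this.help = help;
        }

        public Integer get(Map<String, String> conf) {
            String raw = conf.get(name);        // typed parsing lives in one place
            return raw == null ? defaultValue : Integer.parseInt(raw);
        }
        public Integer getDefault() { return defaultValue; }
        public void set(Map<String, String> conf, Integer value) {
            conf.put(name, Integer.toString(value));
        }
        public String getHelp() { return help; }
    }

    public static void main(String[] args) {
        ConfigValue<Integer> sample = new IntConfigValue("common.sample", 15, "Some help text");
        Map<String, String> conf = new HashMap<>();
        if (sample.get(conf) != 15) throw new AssertionError(); // falls back to default
        sample.set(conf, 42);
        if (sample.get(conf) != 42) throw new AssertionError();
    }
}
```

Keeping the default, the help text, and the parsing in the variable object is what makes the "default in code differs from default in XML" class of bugs hard to write.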
          

          There's more than one way to implement this. Here's one proposal that uses
          Java annotations:

            @ConfigDescription(help="Some help text", 
                visibility=Visibility.PUBLIC)
            @ConfigAccessors({
              @ConfigAccessor(name="common.sample"),
              @ConfigAccessor(name="core.sample", deprecated="Use common.sample instead")
            })
            public final static ConfigVariable<Integer> myConfigVariable = 
              ConfigVariables.newIntConfigVariable(15 /* default value */);
          

          This approach would require pre-processing (at build time) the annotations
          into a data file, and then, at runtime, querying this data file.
          (It's not easily possible to get at the annotations on the
          field from within myConfigVariable.)

          I'm half-way to getting this working, and I actually think something
          like the following would be better:

            @ConfigVariableDeclaration
            public final static ConfigVariable<URI> fsDefaultName = 
              ConfigVariableBuilder.newURI()
                .setDefault(null)
                .setHelp("Default filesystem")
                .setVisibility(Visibility.PUBLIC)
                .addAccessor("fs.default.name")
                .addDeprecatedAccessor("core.default.fs", "Use foo instead")
                .addValidator(new ValidateSupportedFilesystem());
          

          This would still require build-time preprocessing (javac supports
          this) to find the variables, instantiate them, and output
          the documentation, but the rest of the processing is easy
          at runtime.

          A drawback of this approach is how to handle the defaults that
          default to other variables. Perhaps the easiest thing to do
          is to support the same syntax we support now, like
          'addIndirectDefault("${default.dir}/mapred")',
          but something that references the other variable directly is more appealing, e.g.: 'addIndirectDefault(OtherClass.class, "fieldname")'.

          I think this can be implemented relatively quickly, with little impact on
          breaking stuff (because the old way of using Configuration continues to work).

          What do you think?

          Arun C Murthy added a comment -

          +1 for this direction.

          Hemanth Yamijala added a comment -

          There are at least two issues that were discussed about this approach in HADOOP-5919:

          • With the project split, how would we add deprecated keys from MapReduce or HDFS into Common?
          • What if there's no one-to-one mapping from old keys to new keys?

          With the project split over now, maybe the first issue requires a solution as part of this JIRA. Owen's suggestion was to define a key like hadoop.conf.extra.classes, which would be a list of class names that will be loaded by Configuration when it is loaded. By default this could be null, but in a cluster installation we could register basic classes like JobConf. This would give the extra classes an opportunity to add more mappings for the new keys.

          The second problem is a bit more involved, though some obvious solutions exist. We could take the approach of not solving it in this patch, restricting the utility of this framework to the more straightforward mapping cases.

          Thoughts ?
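          The hadoop.conf.extra.classes idea can be sketched as follows: Configuration reads a comma-separated list of class names and loads each one, letting its static initializer register deprecated-key mappings. The class and property names below are illustrative only.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of loading "extra" classes so they can register deprecations.
// All names here (loadExtraClasses, MapReduceDeprecations) are invented.
public class ExtraClassLoading {
    static final Set<String> registered = new HashSet<>();

    // Stands in for e.g. JobConf registering its MapReduce deprecations.
    public static class MapReduceDeprecations {
        static {
            registered.add("mapred.old.key");
        }
    }

    static void loadExtraClasses(String extraClasses) throws ClassNotFoundException {
        if (extraClasses == null) return; // default: no extra classes configured
        for (String name : extraClasses.split(",")) {
            // true => run the class's static initializer on load
            Class.forName(name.trim(), true, ExtraClassLoading.class.getClassLoader());
        }
    }

    public static void main(String[] args) throws Exception {
        loadExtraClasses("ExtraClassLoading$MapReduceDeprecations");
        if (!registered.contains("mapred.old.key")) throw new AssertionError();
    }
}
```

          This keeps Common free of any compile-time dependency on MapReduce or HDFS: each project contributes its own mappings at class-load time.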

          Hemanth Yamijala added a comment -

          Initial proposal is to keep it dead simple:

          • Keep a static map of keys in the Configuration class that maps the deprecated key to a set of new keys.
          • get of the deprecated key will return the value of the first new key in the mapping set.
          • set of the deprecated key will set the same value on all new keys in the mapping set.
          • There will be a provision to define a custom message in the Configuration class for whenever a deprecated key is accessed. Otherwise, a standard message such as "This key is deprecated. Use this other key instead." will be printed.
          • When both old and new keys are defined, the new keys will always take precedence.
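          The custom-message provision in this proposal might look like the following sketch; the class, the stand-in logger, and the key names are all invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a static deprecation map with an optional custom warning
// per key, logged whenever a deprecated key is accessed. Not the actual patch.
public class DeprecationWarnings {
    private static final Map<String, String> newKeyFor = new HashMap<>();
    private static final Map<String, String> customMessage = new HashMap<>();
    static final StringBuilder log = new StringBuilder(); // stand-in for a real logger

    static void addDeprecation(String oldKey, String newKey, String message) {
        newKeyFor.put(oldKey, newKey);
        if (message != null) customMessage.put(oldKey, message);
    }

    // Maps a possibly-deprecated key to its new name, warning on deprecated use.
    static String resolve(String key) {
        String newKey = newKeyFor.get(key);
        if (newKey == null) return key;
        String msg = customMessage.getOrDefault(key,
            key + " is deprecated. Use " + newKey + " instead.");
        log.append("WARN: ").append(msg).append('\n');
        return newKey;
    }

    public static void main(String[] args) {
        addDeprecation("mapred.old.key", "mapreduce.new.key", null);
        if (!"mapreduce.new.key".equals(resolve("mapred.old.key"))) throw new AssertionError();
        if (!log.toString().contains("deprecated")) throw new AssertionError();
    }
}
```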

            People

            • Assignee: V.V.Chaitanya Krishna
            • Reporter: Hemanth Yamijala
            • Votes: 0
            • Watchers: 18
