Hadoop HDFS / HDFS-1052

HDFS scalability with multiple namenodes

    Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.22.0
    • Fix Version/s: 0.23.0
    • Component/s: namenode
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      HDFS currently uses a single namenode, which limits the scalability of the cluster. This jira proposes an architecture to scale the nameservice horizontally using multiple namenodes.

      Attachments

      1. Block pool proposal.pdf
        410 kB
        Suresh Srinivas
      2. Mulitple Namespaces5.pdf
        1.85 MB
        Sanjay Radia
      3. high-level-design.pdf
        620 kB
        Suresh Srinivas
      4. HDFS-1052.patch
        968 kB
        Suresh Srinivas
      5. HDFS-1052.3.patch
        1.09 MB
        Suresh Srinivas
      6. HDFS-1052.4.patch
        1.09 MB
        Suresh Srinivas
      7. HDFS-1052.5.patch
        1.09 MB
        Suresh Srinivas
      8. HDFS-1052.6.patch
        1.10 MB
        Suresh Srinivas

        Issue Links

        1.
        HDFS federation: Add BlockPoolID to block Sub-task Resolved Suresh Srinivas
         
        2.
        HDFS federation: propose ClusterID and BlockPoolID format Sub-task Resolved Tanping Wang
         
        3.
        HDFS Federation: modify -format option for namenode to generate new blockpool id and accept newcluster Sub-task Resolved Boris Shkolnik
         
        4.
        HDFS federation: Storage directory of VERSION(/ID) file Sub-task Resolved Unassigned
         
        5.
        HDFS federation: Upgrade and rolling back of Federation Sub-task Resolved Unassigned
         
        6.
        HDFS federation : fix unit test cases Sub-task Resolved Unassigned
         
        7.
        HDFS federation : add cluster ID and block pool ID into Name node web UI Sub-task Resolved Tanping Wang
         
        8.
        HDFS Federation: Convert single threaded DataNode into per BlockPool thread model. Sub-task Resolved Boris Shkolnik
         
        9.
        HDFS federation: Introduce block pool ID into FSDatasetInterface Sub-task Resolved Suresh Srinivas
         
        10.
        HDFS Federation: DataNode.handleDiskError needs to inform ALL namenodes if a disk failed Sub-task Resolved Boris Shkolnik
         
        11.
        HDFS Federation: Add block pool management to FSDataset. Sub-task Resolved Suresh Srinivas
         
        12.
        HDFS Federation: Datanode fields that are no longer used should be removed Sub-task Resolved Boris Shkolnik
         
        13.
        HDFS Federation: add Datanode.getDNRegistration(String bpid) method Sub-task Resolved Boris Shkolnik
         
        14.
        HDFS Federation: remove namenode argument from DataNode constructor Sub-task Resolved Boris Shkolnik
         
        15.
        HDFS Federation: DatanodeCommand Finalize sent from namenode to datanode must include block pool ID Sub-task Resolved Suresh Srinivas
         
        16.
        HDFS Federation: MiniDFSCluster#waitActive() waits for ever in federation Sub-task Resolved Suresh Srinivas
         
        17.
        HDFS-Federation: Add support for multiple namenodes in MiniDFSCluster Sub-task Resolved Suresh Srinivas
         
        18.
        HDFS Federation: Fix TestDFSUpgrade and TestDFSRollback failures. Sub-task Resolved Suresh Srinivas
         
        19.
        HDFS Federation: Tests that corrupt block files fail due to changed block pool file path in federation Sub-task Resolved Suresh Srinivas
         
        20.
        HDFS Federation: Datanode doesn't start with two namenodes Sub-task Resolved Boris Shkolnik
         
        21.
        Federation: Datanode needs to send block pool usage information in registration Sub-task Resolved Suresh Srinivas
         
        22.
        Federation: Datanode servlets need information about the namenode to service the request Sub-task Resolved Suresh Srinivas
         
        23.
        Federation: FSDataset in Datanode should be created after initial handshake with namenode Sub-task Resolved Jitendra Nath Pandey
         
        24.
        Federation: Multiple namenode configuration Sub-task Resolved Jitendra Nath Pandey
         
        25.
        Federation: Datanode command to refresh namenode list at the datanode. Sub-task Resolved Jitendra Nath Pandey
         
        26.
        Hdfs Federation: Remove unnecessary TODO:FEDERATION comments. Sub-task Resolved Jitendra Nath Pandey
         
        27.
        HDFS Federation: remove dnRegistration from Datanode Sub-task Resolved Boris Shkolnik
         
        28.
        HDFS Federation: shutdown in DataNode should be able to shutdown individual BP threads as well as the whole DN Sub-task Resolved Boris Shkolnik
         
        29.
        HDFS Federation: refactor stopDatanode(name) to work with multiple Block Pools Sub-task Resolved Boris Shkolnik
         
        30.
        Federation: Datanode changes to track block token secret per namenode. Sub-task Resolved Suresh Srinivas
         
        31.
        HDFS Federation: BPOfferService exits after the first iteration of the loop Sub-task Resolved Tanping Wang
         
        32.
        HDFS Federation: data node storage structure changes and introduce block pool storage for Federation. Sub-task Resolved Tanping Wang
         
        33.
        Federation: Block received is sent with invalid DatanodeRegistration Sub-task Resolved Tanping Wang
         
        34.
        Federation: Only DataStorage must be locked using in_use.lock and no locks must be associated with BlockPoolStorage Sub-task Resolved Tanping Wang
         
        35.
        HDFS Federation : fix unit test case, TestReplication Sub-task Resolved Tanping Wang
         
        36.
        Federation: Tests fail due to null pointer exception in Datanode#shutdown() Sub-task Resolved Tanping Wang
         
        37.
        HDFS federation: fix unit test case, TestCheckpoint and TestDataNodeMXBean Sub-task Resolved Tanping Wang
         
        38.
        HDFS federation: Rename getPoolId() everywhere to getBlockPoolId() Sub-task Resolved Tanping Wang
         
        39.
        HDFS federation: Add new columns of Block Pool Used and Block Pool Used% into JSP Sub-task Resolved Tanping Wang
         
        40.
        HDFS Federation: Rename BlockPool class to BlockPoolSlice Sub-task Resolved Tanping Wang
         
        41.
        HDFS federation: Fix TestFsck and TestListCorruptFileBlocks failures Sub-task Resolved Tanping Wang
         
        42.
        HDFS federation: Remove getBlockpool() for NameNodeMXBean in FSNameSystem. Sub-task Resolved Tanping Wang
         
        43.
        Federation: fix TestBalancer Sub-task Resolved Tsz Wo Nicholas Sze
         
        44.
        Federation: support per pool and per node policies in Balancer Sub-task Resolved Tsz Wo Nicholas Sze
         
        45.
        Federation: Change Balancer CLI for multiple namenodes and balancing policy Sub-task Resolved Tsz Wo Nicholas Sze
         
        46.
        Federation: Test Balancer With Multiple NameNodes Sub-task Resolved Tsz Wo Nicholas Sze
         
        47.
        Federation: Balancer cannot start with multiple namenodes. Sub-task Resolved Tsz Wo Nicholas Sze
         
        48. Federation: Add more Balancer tests with federation setting Sub-task Open Tsz Wo Nicholas Sze
         
        49.
        Federation: Fix failures in fault injection tests, TestDiskError, TestDatanodeRestart and TestDFSStartupVersions. Sub-task Resolved Suresh Srinivas
         
        50.
        Hdfs Federation: Configuration for namenodes Sub-task Resolved Jitendra Nath Pandey
         
        51.
        Federation: TestDFSStorageStateRecovery fails Sub-task Resolved Suresh Srinivas
         
        52.
        Federation: SimulatedFSDataset changes to work with federation and multiple block pools Sub-task Resolved Suresh Srinivas
         
        53.
        HDFS federation: Fix testOIV and TestDatanodeUtils Sub-task Resolved Tanping Wang
         
        54.
        HDFS Federation: when build version doesn't match - datanode should wait (keep connecting) until NN comes up with the right version Sub-task Resolved Boris Shkolnik
         
        55.
        HDFS federation: fix testBlockRecovery Sub-task Resolved Boris Shkolnik
         
        56.
        Hdfs Federation: TestOverReplicatedBlocks and TestWriteReplica failing Sub-task Resolved Jitendra Nath Pandey
         
        57.
        Federation: Bump up LAYOUT_VERSION in FSConstants.java for Federation before committing to trunk Sub-task Resolved Unassigned
         
        58.
        HDFS federation :TestHeartbeatHandling fails: wrong number of items in cmds array Sub-task Resolved Tanping Wang
         
        59.
        HDFS federation : Fix TestBackupNode and TestRefreshNamenodes test failure. Sub-task Resolved Tanping Wang
         
        60.
        HDFS federation: Improve start/stop scripts and add script to decommission datanodes Sub-task Closed Tanping Wang
         
        61.
        Federation: Add a tool that lists namenodes, secondary and backup nodes from the configuration file Sub-task Resolved Suresh Srinivas
         
        62.
        HDFS federation: Balancer command throws NullPointerException Sub-task Resolved Suresh Srinivas
         
        63.
        Hdfs Federation: TestFileAppend2, TestFileAppend3 and TestBlockTokenWithDFS failing Sub-task Resolved Jitendra Nath Pandey
         
        64.
        Hdfs Federation: Failure in browsing data on new namenodes Sub-task Resolved Jitendra Nath Pandey
         
        65.
        Federation: remove datanode's datanodeId TODOs Sub-task Resolved Boris Shkolnik
         
        66.
        Error "nnaddr url param is null" when clicking on a node from NN Live Node Link Sub-task Resolved Jitendra Nath Pandey
         
        67.
        HDFS Federation: create method for updating machine name in DataNode.java Sub-task Resolved Boris Shkolnik
         
        68.
        HDFS Federation: when looking up datanode we should use machineName (in TestOverReplicatedBlocks) Sub-task Resolved Boris Shkolnik
         
        69.
        Hdfs Federation: Prevent DataBlockScanner from running in tight loop Sub-task Resolved Jitendra Nath Pandey
         
        70.
        HDFS Federation: refactor upgrade object in DataNode Sub-task Resolved Boris Shkolnik
         
        71.
        HDFS Federation: warning/error not generated when datanode sees inconsistent/different Cluster ID between namenodes Sub-task Resolved Boris Shkolnik
         
        72.
        Federation: Add decommission tests for federated namenodes Sub-task Resolved Suresh Srinivas
         
        73.
        Federation: FSDataset volumeMap access is not correctly synchronized Sub-task Resolved Suresh Srinivas
         
        74.
        HDFS Federation: MiniDFSCluster#waitActive() bug causes some tests to fail Sub-task Resolved Suresh Srinivas
         
        75.
        Federation: TestDFSRemove fails intermittently Sub-task Resolved Suresh Srinivas
         
        76.
        Federation: FSVolumeSet volumes is not synchronized correctly Sub-task Resolved Suresh Srinivas
         
        77.
        Hdfs Federation: Configuration for principal names should not be namenode specific. Sub-task Resolved Jitendra Nath Pandey
         
        78.
        HDFS Federation: Add flag to MiniDFSCluster to differentiate between two different modes-Federation and not. Sub-task Resolved Boris Shkolnik
         
        79.
        Federation: merge FSImage changes into FSImage+NNStorage refactoring in trunk Sub-task Resolved Suresh Srinivas
         
        80.
        Federation: Update the layout version for federation changes Sub-task Resolved Suresh Srinivas
         
        81.
        Federation: Add new layout version to offline image/edits viewer Sub-task Resolved Suresh Srinivas
         
        82.
        Federation: fix fault injection test build failure Sub-task Resolved Suresh Srinivas
         
        83.
        Hdfs Federation: TestFileAppend3 fails intermittently. Sub-task Resolved Jitendra Nath Pandey
         
        84.
        HDFS Federation: TestBackupNode Fails on Federation branch Sub-task Resolved Boris Shkolnik
         
        85.
        Hdfs Federation: TestListCorruptFileBlocks failing in federation branch. Sub-task Resolved Jitendra Nath Pandey
         
        86.
        Federation HDFS: testFsck fails Sub-task Resolved Boris Shkolnik
         
        87.
        Hdfs Federation: The BPOfferService must always connect to namenode as the login user. Sub-task Resolved Jitendra Nath Pandey
         
        88.
        Hdfs Federation: Add command to delete block pool directories from a datanode. Sub-task Resolved Jitendra Nath Pandey
         
        89.
        Federation: DatablockScanner should scan blocks for all the block pools. Sub-task Resolved Jitendra Nath Pandey
         
        90. Address all the federation TODOs Sub-task Open Suresh Srinivas
         
        91. Enable a single 2nn to checkpoint multiple nameservices Sub-task Open Unassigned
         

          Activity

          Suresh Srinivas added a comment -

          Will post the proposal document in a couple of days.
          ryan rawson added a comment -

          This sounds great! Also as part of the architecture can you explain how you will improve availability?
          Jeff Hammerbacher added a comment -

          Ryan: see Sanjay's comment at https://issues.apache.org/jira/browse/HDFS-1051?focusedCommentId=12848235&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12848235
          dhruba borthakur added a comment -

          Hi Ryan, I think scalability of the NN is not directly related to the availability of the NN, is it?
          Sanjay Radia added a comment -

          There are no failover notions amongst the multiple NNs in this proposal (i.e. the multiple NNs are not m+k failover); failover is separable from and complementary to this proposal.

          However, multiple namenodes in a cluster will help with availability in the sense that they allow one to isolate, say, production apps from non-production apps by giving them different namespaces and hence different NNs. Further, with this proposal one is likely to run multiple smaller NNs, and hence startup will be faster, which of course helps availability of the system.
          Suresh Srinivas added a comment -

          Attaching the document that covers the proposal. Design and other details will follow soon.
          Doug Cutting added a comment -

          If I follow the document, the initial implementation can confine its changes to the datanode: the namenode need not initially be aware of block pools, only the datanode. Is that correct? If so, I like that simplification.
          Sanjay Radia added a comment -

          Do you mean the initial implementation or the first patch?
          The client protocol to the DNs needs to include the block pool id if the DNs manage multiple block pools.
          We should change all wire protocols to include the block pool id and have the NN simply set the block pool id to zero in the reply to getBlockLocations().
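          To make the wire-protocol change concrete, here is a minimal sketch of a block identity qualified by a block pool id, in the spirit of the ExtendedBlock type this work introduced; the field and method names below are illustrative assumptions, not the committed API:

              // Sketch only: a block identifier qualified by the block pool that owns it.
              // Names are assumptions for illustration, not the committed HDFS API.
              public class BlockWithPool {
                  private final String blockPoolId;   // which namespace's pool owns the block
                  private final long blockId;
                  private final long generationStamp;

                  public BlockWithPool(String blockPoolId, long blockId, long generationStamp) {
                      this.blockPoolId = blockPoolId;
                      this.blockId = blockId;
                      this.generationStamp = generationStamp;
                  }

                  // Datanodes key their replica maps by pool first, then by block id,
                  // so the same numeric block id can exist independently in two pools.
                  public String getBlockPoolId() { return blockPoolId; }
                  public long getBlockId() { return blockId; }
                  public long getGenerationStamp() { return generationStamp; }
              }

          With a single namenode there is only one pool, so the NN can fill in a fixed pool id (or zero, as suggested above) and existing deployments keep working.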
          Sanjay Radia added a comment -

          Minor updates to the doc (plus name change).
          Min Zhou added a comment -

          I don't think multiple namespaces are a good solution for this issue. The datasets stored on our cluster are shared by many departments of our company. If these datasets are separated into a number of namespaces, there is no data sharing; if we put them in one namespace managed by a single NameNode, however, the scalability is limited by the NameNode's memory.
          Why don't we employ some distributed metadata management approach like dynamic subtree partitioning (Ceph) or hash-based partitioning (Lustre)?

          Min
          Guilin Sun added a comment -

          Hi Suresh, I have a small question: how does a client get the correct NameNode for a path such as "/a/b/c" in your proposal?

          Thanks!
          Suresh Srinivas added a comment -

          Min, yes, a distributed namespace could be another proposal to solve this problem. However, it is a much more complicated solution to develop, takes much longer, and involves a lot of changes to the system. This does not fit the timeline in which we need a solution to namenode scalability.
          Suresh Srinivas added a comment -

          Guilin,

          1. An application could choose to use one of the namenodes as the default file system in its configuration. In that case /a/b/c will be resolved relative to that namespace.
          2. There is a proposal in HDFS-1053 for client-side mount tables, where a client can define its namespace and how it maps to the server-side namespace. In that case /a/b/c will be resolved in the context of the client-side mount table.
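          As an illustration of option 2, a client-side mount table could be wired up roughly as below. This follows the viewfs style that eventually emerged (fs.viewfs.mounttable.<table>.link.<path>); the exact key names, the cluster1 table name, and the nn1/nn2 hosts are assumptions for this sketch, not settled API at the time of this proposal:

              import org.apache.hadoop.conf.Configuration;
              import org.apache.hadoop.fs.FileSystem;
              import org.apache.hadoop.fs.Path;

              public class MountTableSketch {
                  public static void main(String[] args) throws Exception {
                      // Sketch: route top-level directories to different namenodes.
                      // Key names follow the viewfs convention; hosts are hypothetical.
                      Configuration conf = new Configuration();
                      conf.set("fs.defaultFS", "viewfs://cluster1/");
                      conf.set("fs.viewfs.mounttable.cluster1.link./a", "hdfs://nn1:8020/a");
                      conf.set("fs.viewfs.mounttable.cluster1.link./logs", "hdfs://nn2:8020/logs");

                      // "/a/b/c" resolves in nn1's namespace; the client never needs
                      // to know which namenode owns the path.
                      FileSystem fs = FileSystem.get(conf);
                      System.out.println(fs.exists(new Path("/a/b/c")));
                  }
              }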
          Konstantin Boudnik added a comment -

          It seems to me that the multiple-namenode approach just begs for namenode autodiscovery, doesn't it? If a DN has to track all NNs in the cluster, it adds complexity to already puzzling configuration management and puts yet another error-prone element into it.
          Guilin Sun added a comment -

          Thanks Suresh, so in this proposal you mean we just make DataNodes shared by several NameNodes, and in the client-side view they are totally dependent & different HDFS clusters?
          Guilin Sun added a comment -

          Sorry, I meant "independent".
          Suresh Srinivas added a comment -

          It is a single cluster made of multiple independent namenodes/namespaces all sharing the same set of datanodes.
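          Operationally, that sharing is driven purely by configuration: the datanodes are given the list of nameservices and register with each one. A hedged sketch of such a configuration (key names as they appeared around the 0.23 federation work, where dfs.federation.nameservices was later renamed dfs.nameservices; hosts and nameservice ids are hypothetical):

              import org.apache.hadoop.conf.Configuration;

              public class FederatedConfSketch {
                  public static Configuration federatedConf() {
                      Configuration conf = new Configuration();
                      // Two independent namenodes sharing the same set of datanodes.
                      conf.set("dfs.federation.nameservices", "ns1,ns2");
                      conf.set("dfs.namenode.rpc-address.ns1", "nn-host1:8020");
                      conf.set("dfs.namenode.http-address.ns1", "nn-host1:50070");
                      conf.set("dfs.namenode.rpc-address.ns2", "nn-host2:8020");
                      conf.set("dfs.namenode.http-address.ns2", "nn-host2:50070");
                      // Every datanode reads this list and registers with all the
                      // namenodes; each namenode sees only its own block pool.
                      return conf;
                  }
              }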
          ryan rawson added a comment -

          Have you considered merely increasing the heap size? Switch to a no-pause GC collector; one of them lies therein: http://www.managedruntime.org/ Right now a machine can have 256 GB of RAM for ~$10,000. That is a 4x increase over what we have now. Added bonus: no additional complexity!
          Suresh Srinivas added a comment -

          Ryan, this is discussed in the proposal already. Let me summarize:

          1. Increasing the namenode heap does not increase the namenode throughput.
          2. Currently the NN takes 30 mins to start up with a 50G heap. The startup time would go to 2.5 hrs with a 250G heap. There are a couple of jiras improving the NN startup time; even with those, startup time would be > 1 hr for such a large heap.
          3. While debugging memory leaks in the NN, I could not get a lot of tools to work with a heap dump of 40G, especially jhat. I am not sure how well the tools can support a 250G heap dump.
          4. This solution does not work for installations where the NN needs to support more than 4x scaling. This is needed in clusters that might want to store smaller files instead of depending on large files to reduce object count.

          The solution proposed here does not preclude one from using a single namenode and vertically scaling it.

          I am also curious about your experience and the challenges of running a namenode with such a large heap. We could have that discussion offline.
          Suresh Srinivas added a comment -

          Here is the high-level design for the feature. The feature will be implemented in multiple jiras, which will have further details of the design and the changes.
          Tsz Wo Nicholas Sze added a comment -

          I have created a "Federation Branch" JIRA version. Please select it for the related JIRAs.
          Suresh Srinivas added a comment -

          Consolidated patch for integrating the federation branch into trunk.
          Suresh Srinivas added a comment -

          Latest patch.
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12476941/HDFS-1052.patch
          against trunk revision 1095461.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 322 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The patch appears to cause tar ant target to fail.

          -1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:

          -1 contrib tests. The patch failed contrib unit tests.

          -1 system test framework. The patch failed system test framework compile.

          Test results: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/393//testReport/
          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/393//console

          This message is automatically generated.

          Suresh Srinivas added a comment -

          New patch with files missed in the previous patch.
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12477028/HDFS-1052.4.patch
          against trunk revision 1095789.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 344 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The patch appears to cause tar ant target to fail.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:

          -1 contrib tests. The patch failed contrib unit tests.

          -1 system test framework. The patch failed system test framework compile.

          Test results: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/400//testReport/
          Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/400//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/400//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12477039/HDFS-1052.5.patch
          against trunk revision 1095830.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 348 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.hdfs.server.namenode.TestBackupNode
          org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
          org.apache.hadoop.hdfs.TestFileConcurrentReader

          +1 contrib tests. The patch passed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/401//testReport/
          Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/401//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/401//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12477039/HDFS-1052.5.patch
          against trunk revision 1095830.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 348 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.hdfs.server.namenode.TestBackupNode
          org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
          org.apache.hadoop.hdfs.TestFileAppend4
          org.apache.hadoop.hdfs.TestLargeBlock
          org.apache.hadoop.hdfs.TestWriteConfigurationToDFS

          +1 contrib tests. The patch passed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/402//testReport/
          Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/402//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/402//console

          This message is automatically generated.

          Suresh Srinivas added a comment -

          Some benchmark results:

          TestDFSIO read tests

          Without federation:

          ----- TestDFSIO ----- : read
                     Date & time: Wed Apr 27 02:04:24 PDT 2011
                 Number of files: 1000
          Total MBytes processed: 30000.0
               Throughput mb/sec: 43.62329251162561
          Average IO rate mb/sec: 44.619869232177734
           IO rate std deviation: 5.060306158158443
              Test exec time sec: 959.943
          

          With federation:

          ----- TestDFSIO ----- : read
                     Date & time: Wed Apr 27 02:43:10 PDT 2011
                 Number of files: 1000
          Total MBytes processed: 30000.0
               Throughput mb/sec: 45.657513857055456
          Average IO rate mb/sec: 46.72107696533203
           IO rate std deviation: 5.455125923399539
              Test exec time sec: 924.922
          

          TestDFSIO write tests

          Without federation:

          ----- TestDFSIO ----- : write
                     Date & time: Wed Apr 27 01:47:50 PDT 2011
                 Number of files: 1000
          Total MBytes processed: 30000.0
               Throughput mb/sec: 35.940755259031015
          Average IO rate mb/sec: 38.236236572265625
           IO rate std deviation: 5.929484960036511
              Test exec time sec: 1266.624
          

          With federation:

          ----- TestDFSIO ----- : write
                     Date & time: Wed Apr 27 02:27:12 PDT 2011
                 Number of files: 1000
          Total MBytes processed: 30000.0
               Throughput mb/sec: 42.17884674597227
          Average IO rate mb/sec: 43.11423873901367
           IO rate std deviation: 5.357057259968647
              Test exec time sec: 1135.298
          
          Suresh Srinivas added a comment -

          BTW, just to clarify: the above benchmarks are based on trunk, with and without the federation patch.
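          For context, the numbers above correspond to runs of the standard TestDFSIO benchmark with 1000 files of 30 MB each (30000 MB total). A sketch of driving one such run programmatically; the org.apache.hadoop.fs.TestDFSIO location and flags are assumptions based on the driver of that era, and it is normally launched via hadoop jar rather than like this:

              // Sketch: one TestDFSIO read run like those above. TestDFSIO's
              // main() may call System.exit, so write and read runs should be
              // separate invocations in practice.
              public class DfsioRunSketch {
                  public static void main(String[] args) throws Exception {
                      // 1000 files x 30 MB = 30000 MB total, matching the runs above.
                      org.apache.hadoop.fs.TestDFSIO.main(
                          new String[] {"-read", "-nrFiles", "1000", "-fileSize", "30"});
                  }
              }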
          Suresh Srinivas added a comment -

          Updated patch for the latest trunk.
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12477624/HDFS-1052.6.patch
          against trunk revision 1097329.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 348 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.hdfs.server.namenode.TestBackupNode
          org.apache.hadoop.hdfs.server.namenode.TestBlocksWithNotEnoughRacks
          org.apache.hadoop.hdfs.TestDatanodeBlockScanner
          org.apache.hadoop.hdfs.TestDecommission
          org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
          org.apache.hadoop.hdfs.TestFileConcurrentReader

          +1 contrib tests. The patch passed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/429//testReport/
          Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/429//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/429//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12477624/HDFS-1052.6.patch
          against trunk revision 1097329.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 348 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these core unit tests:
          org.apache.hadoop.hdfs.server.namenode.TestBackupNode
          org.apache.hadoop.hdfs.TestDatanodeBlockScanner
          org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
          org.apache.hadoop.hdfs.TestFileConcurrentReader

          +1 contrib tests. The patch passed contrib unit tests.

          +1 system test framework. The patch passed system test framework compile.

          Test results: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/427//testReport/
          Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/427//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/427//console

          This message is automatically generated.

          Suresh Srinivas added a comment -

          TestFileConcurrentReader is a known failure. TestBackupNode, TestDatanodeBlockScanner and TestDFSStorageStateRecovery pass on my machine. But if these tests continue to fail, I will create a separate jira to address it.
          Sanjay Radia added a comment -

          +1 for committing to trunk.
          dhruba borthakur added a comment -

          +1. Although the benchmarks are single-node benchmarks, it looks good to go into trunk.
          Suresh Srinivas added a comment -

          I merged the changes from the federation branch HDFS-1052 into trunk. Thanks, everyone, for participating in the discussions and helping move this issue forward.
          Konstantin Boudnik added a comment -

          It'd be nice to have the test plan attached to the JIRA, if there is one. Thanks.
          Todd Lipcon added a comment -

          MR trunk build is failing since this was merged. Is anyone working on this?
          Suresh Srinivas added a comment -

          HDFS-1871 fixed the build failures. Todd, are you still seeing the problem?
          Todd Lipcon added a comment -

          Yes, in the raid contrib - see MAPREDUCE-2465
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk-Commit #663 (See https://builds.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/663/)
          MAPREDUCE-2467. HDFS-1052 changes break the raid contrib module in MapReduce. (suresh srinivas via mahadev)
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #679 (See https://builds.apache.org/hudson/job/Hadoop-Mapreduce-trunk/679/)
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #673 (See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/673/)
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #746 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/746/)
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #699 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/699/)
          vamshi added a comment -

          Hi Suresh, a few days ago I had this same idea of a distributed NameNode in hadoop, and I am impressed with your idea. Can I use a Distributed Hash Table in working with a cluster of NameNodes to serve client requests? Thank you
          Suresh Srinivas added a comment -

          vamshi, I am not sure what you mean by this. Have you looked at the design document? Federation is multiple namenodes, each supporting a namespace. Client-side mount tables provide a unified view of all the namespaces. Maybe you can ping me with the details of what you are trying to do.
          vamshi added a comment -

          Suresh, I went through the design document (NameNode Federation). What I was trying to ask is: after we make the NameNode distributed among multiple nodes, there exists a unique namespace with respect to each NameNode. Now, when a client requests some block location, it should be searched among the group of NameNodes for that block's details (metadata). For this lookup/searching of metadata, can we use distributed hashing? Please let me know something regarding it. Thank you
          Suresh Srinivas added a comment -

          > For this lookup/searching of metadata, can we use distributed hashing?
          The client goes directly to the namespace of interest. To ease this, HADOOP-7257 adds client-side mount tables. This eliminates the need to look up or search across the group of namenodes and lets the client go directly to a namespace.
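          To see why no distributed hashing or lookup is involved: resolution is a local longest-prefix match over the client's own mount table. A toy illustration follows (the table entries and hosts are hypothetical; real clients use the viewfs implementation from HADOOP-7257 rather than code like this):

              import java.util.HashMap;
              import java.util.Map;

              public class MountResolveSketch {
                  // Toy mount table: mount point -> owning namenode (hypothetical hosts).
                  private static final Map<String, String> MOUNTS = new HashMap<>();
                  static {
                      MOUNTS.put("/a", "hdfs://nn1:8020");
                      MOUNTS.put("/logs", "hdfs://nn2:8020");
                  }

                  // Longest-prefix match: strip the last path component until a mount
                  // point is found. Purely local; no namenode is consulted.
                  static String namenodeFor(String path) {
                      for (String p = path; !p.isEmpty();
                           p = p.substring(0, Math.max(0, p.lastIndexOf('/')))) {
                          String nn = MOUNTS.get(p);
                          if (nn != null) {
                              return nn;
                          }
                      }
                      return null; // no mount point matched
                  }

                  public static void main(String[] args) {
                      System.out.println(namenodeFor("/a/b/c"));  // hdfs://nn1:8020
                      System.out.println(namenodeFor("/logs/x")); // hdfs://nn2:8020
                  }
              }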

            People

            • Assignee:
              Suresh Srinivas
              Reporter:
              Suresh Srinivas
            • Votes:
              1
              Watchers:
              77
