Details

    • Type: New Feature
    • Status: Patch Available
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels:
      None
    • Release Note:
      Bumping from 0.92 for above reasons

      Description

      I'm sure this issue exists already, at least as part of the discussion around making online schema edits possible, but no harm in this having its own issue. Ted started a conversation on this topic up on dev, and Todd suggested we look at how Hadoop did it over in HADOOP-7001.

      1. HBASE-3909-backport-from-fb-for-trunk-7.patch
        143 kB
        binlijin
      2. HBASE-3909-backport-from-fb-for-trunk-6.patch
        143 kB
        binlijin
      3. HBASE-3909-backport-from-fb-for-trunk-5.patch
        143 kB
        binlijin
      4. HBASE-3909-backport-from-fb-for-trunk-4.patch
        138 kB
        binlijin
      5. HBASE-3909-backport-from-fb-for-trunk-3.patch
        135 kB
        binlijin
      6. HBASE-3909-backport-from-fb-for-trunk-2.patch
        131 kB
        binlijin
      7. HBASE-3909-backport-from-fb-for-trunk.patch
        128 kB
        binlijin
      8. testMasterNoCluster.stack
        20 kB
        Ted Yu
      9. 3909-102912.patch
        147 kB
        Subbu M Iyer
      10. 3909-102812.patch
        143 kB
        Subbu M Iyer
      11. 3909_090712-2.patch
        143 kB
        Subbu M Iyer
      12. patch-v2.patch
        37 kB
        Subbu M Iyer
      13. HBase Cluster Config Details.xlsx
        22 kB
        Subbu M Iyer
      14. 3909.v1
        52 kB
        Ted Yu
      15. 3909-v1.patch
        53 kB
        Subbu M Iyer

        Issue Links

          Activity

          Ted Yu added a comment -

          I went over HADOOP-7001.5.patch
          We have the following decisions to make:
          1. HADOOP-7001 is in trunk only. Are we going to pull the interface/base class/util class over to hbase ?
          2. ReconfigurationServlet would be convenient for admin to use. Are we going to support reloading conf from hbase shell ?
          3. HADOOP-7001 provides fine-grained property reconfig through reconfigurePropertyImpl() calls. Shall we also provide coarse-grained property reconfig mechanism ? e.g. we can notify AssignmentManager of the properties it uses whose values have just changed. This mechanism is also related to getReconfigurableProperties(). I think HMaster, AssignmentManager, etc would all extend ReconfigurableBase.
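
A minimal, self-contained sketch of the HADOOP-7001 pattern Ted describes — a base class that routes a runtime property change to a subclass hook, but only for properties the subclass declares reconfigurable. The class and method names mirror the Hadoop patch, but this is a standalone simplification, and `BalancerComponent` with its property key is a hypothetical example, not real HBase code.

```java
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified sketch of the HADOOP-7001 pattern: reconfigureProperty() is the
// public entry point; reconfigurePropertyImpl() is the per-component hook.
abstract class ReconfigurableBase {
    protected final Map<String, String> conf = new ConcurrentHashMap<>();

    /** Properties this component is willing to change at runtime. */
    public abstract Collection<String> getReconfigurableProperties();

    /** Subclass hook: apply the new value to running state. */
    protected abstract void reconfigurePropertyImpl(String property, String newValue);

    public final void reconfigureProperty(String property, String newValue) {
        if (!getReconfigurableProperties().contains(property)) {
            throw new IllegalArgumentException(property + " is not reconfigurable");
        }
        conf.put(property, newValue);
        reconfigurePropertyImpl(property, newValue);
    }
}

// Hypothetical component opting in to a single property.
class BalancerComponent extends ReconfigurableBase {
    volatile int maxConcurrentMoves = 10;

    @Override
    public Collection<String> getReconfigurableProperties() {
        return List.of("balancer.max.concurrent.moves");
    }

    @Override
    protected void reconfigurePropertyImpl(String property, String newValue) {
        if (property.equals("balancer.max.concurrent.moves")) {
            maxConcurrentMoves = Integer.parseInt(newValue);
        }
    }
}
```

Under this pattern, HMaster, AssignmentManager, etc. would each extend the base class and enumerate their own reconfigurable keys, which also answers the coarse-grained question: a component is notified only about properties it declared.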

          Andrew Purtell added a comment -

          Given that Hadoop does not require ZooKeeper, but we do anyway, I wonder if it makes more sense to go our own route and host all of configuration in the ZooKeeper namespace. It would therefore be possible to make one edit (committed into ZK) and watches on all processes would automatically pull it.

          The access controller on HBASE-3025 uses this approach for ACLs. Upon cold boot they are loaded from META into znodes. Then all processes open watches on the znode(s). Upon update, the znode is updated, firing the watchers, propagating the change cluster wide.

          For supporting dynamic configuration, the first process up could populate znode(s) from Configuration; otherwise if the znodes exist configuration would be read from there. Whenever the znode(s) are updated, the changes can be applied to running state by the watcher.

          How/if the updated configuration should be written back to the config xml files on local disk may be a subject of debate.

          Todd Lipcon added a comment -

          I'm always skeptical of the suggestion to store configuration in ZooKeeper. Here's my reasoning:

          • we already require at least one piece of configuration in the client itself in order to connect to ZooKeeper (ie the ZK quorum info and session timeouts, etc)
          • operations teams are very good at managing text-based configuration files with tools like puppet, cfengine, etc. It's also easy to version-control these kinds of configs, add <!-- comments -->, etc. Moving to ZK makes these tasks more difficult – we'd need lots of tooling, etc.
          • If we keep both the text-based and ZK-based, it's easy to accidentally change something in ZK but forget to update in text, so it would revert on next restart.
          • we currently have the somewhat nice property that nothing in ZK is "critical" - even if the ZK cluster is completely wiped out, we dont lose any info. This would be a change.
          Ted Yu added a comment -

          +1 on Todd's comment.

          Andrew Purtell added a comment -

          Forget it then

          stack added a comment -

          Some push back:

          we already require at least one piece of configuration in the client itself in order to connect to ZooKeeper (ie the ZK quorum info and session timeouts, etc)

          True, but we do keep config in two places – in hbase-*.xml and in conf/hbase-env.sh. If only an ensemble location (and optional timeout) could be environment variables.

          operations teams are very good at managing text-based configuration files with tools like puppet, cfengine, etc. It's also easy to version-control these kinds of configs, add <!-- comments -->, etc. Moving to ZK makes these tasks more difficult – we'd need lots of tooling, etc.

          If we move our config. totally to zk, true (This is a good point though. We are providing a software package that is going to be deployed in all kinds of environments. We need to do the LCD).

          If we keep both the text-based and ZK-based, it's easy to accidentally change something in ZK but forget to update in text, so it would revert on next restart.

          What I was thinking was that the text-based would seed zk. Changes would be made to the text-based config. and would then have to be hoisted up into zk for the changes to have an effect. Part of master startup would be putting the config. up in zk for clients and regionservers to use (Currently, regionservers get critical config. from the master, not from reading text-based file in their conf dir as part of the checkin).

          we currently have the somewhat nice property that nothing in ZK is "critical" - even if the ZK cluster is completely wiped out, we dont lose any info. This would be a change.

          This needs to become a golden rule going forward. Where do we etch it? (If we do the above where master posts config. to zk, then zk is not the sole repository for config.)

          I suppose I think there is still merit in the original proposal for using the zk watcher 'message bus' for propagating config. changes, but I need to study hadoop-7001 more before I say anything more.

          Ted Yu added a comment -

          Since HADOOP-7001 was for 0.23, we still need to decide how we should pull it into hbase.

          stack added a comment -

          Pulling into 0.92 though I'm sure it'll be punted soon....

          Gary Helmling added a comment -

          It would be really nice to have this capability but seems way out there for 0.92. We can't depend on Hadoop trunk/0.23 classes for 0.92. We could fork the HADOOP-7001 patch or come up with our own approach, but either one is going to be a lot of work. And the server related changes to support this seem fairly tricky for anything beyond trivial configuration options – ie, how to support reconfiguring number of rpc handler threads, say.

          All this adds up to: I'd suggest we punt from 0.92.

          Ted Yu added a comment -

          +1 on moving out of 0.92

          stack added a comment -

          My thought on moving issues in and out of releases is just do it with justification rather than make justification and not move it waiting on others to agree. For example, you make a good case for moving the issue out Gary, so go for it.

          If someone objects, let them counter argue and move it back. If a dispute, we can move it to the dev list to duke it out.

          Good stuff.

          stack added a comment -

          I'm now suggesting we hoist the differences only up into zk. We'd have a configuration directory under /hbase in zk. It would have znodes whose names are the config to change. The content of the znode is the new value (and type I suppose).

          Once a znode is added under configuration dir, watchers are triggered and they update their running Configuration instance.

          We do some refactoring in HRegionServers and HMaster so important configs go back to their Configuration instance at critical junctures such as at split, or when checking if we should compact or flush, rather than reading a data member that was set on construction (We'd be careful to not always do the lookup on Configuration).

          We'd add a configure to the shell that allowed you hoist configs up into zk.

          We'd punt on there being a connection between this mechanism and what's in hbase-*.xml. This facility is for 'ephemeral' configuration: for getting you over a temporary hump, for trying out a setting to see its effect, or to get you out of a fix; e.g. the cluster is up and running but you forgot to set a critical config. All w/o need of a rolling restart/restart.
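
A sketch of the apply step of this proposal: each child znode under a configuration directory (e.g. /hbase/configuration) is named for the config key it overrides, and its byte content is the new value. The actual ZooKeeper watcher wiring is omitted here; `onZnodeChanged()` is the hypothetical callback a watcher would invoke, and the map stands in for the running Configuration instance.

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: znode name == config key, znode data == new value.
class EphemeralConfigOverlay {
    private final Map<String, String> runningConf = new ConcurrentHashMap<>();

    EphemeralConfigOverlay(Map<String, String> seed) {
        runningConf.putAll(seed);
    }

    /** Hypothetical watcher callback, fired when a child znode is added or updated. */
    void onZnodeChanged(String znodeName, byte[] data) {
        runningConf.put(znodeName, new String(data, StandardCharsets.UTF_8));
    }

    /** Read path: components re-read here at critical junctures (split, compaction, flush). */
    String get(String key, String defaultValue) {
        return runningConf.getOrDefault(key, defaultValue);
    }
}
```

A new regionserver joining the cluster would list the children of the configuration dir on startup and replay them through the same callback, which is what the "pick up differences" step amounts to.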

          stack added a comment -

          Reading over hadoop-7001, Phillip says "Not to mention that Configuration objects get copied along, so it's hard to make sure that a configuration change propagates to all possible children." I need to survey to make sure callback context can change a Configuration instance that is used in all the important places we'd want to change.

          Ted Yu added a comment -

          It would have znodes whose names are the config to change

          The assumption is there wouldn't be many such config items to change. We should survey and validate this assumption.

          When would these znodes be deleted ?

          stack added a comment -

          The assumption is there wouldn't be many such config items to change. We should survey and validate this assumption.

          You could do hundreds or even put every config. up there if you wanted. Should be fine.

          When would these znodes be deleted ?

          Not sure. Good question. Their insertion would trigger the callback so they'd be useless after putting. Could let them just expire. Might be good to keep them around though so could get an idea of what was changed via zk. Need to think on it.

          Ted Yu added a comment -

          I think these two questions are related.
          If we keep config znodes around, the amount of data on zookeeper tends to accumulate. We should have control over how long those znodes exist, i.e. enforcing expiration policy.

          Jimmy Xiang added a comment -

          Can we put dynamic configuration somewhere in the HDFS, for example, some file under hbase.rootdir?

          We can put static configuration in hbase-site.xml, and dynamic configuration in a file under hbase.rootdir.

          We can also enhance hbase shell or master UI to view/change those dynamic configurations.

          Jimmy Xiang added a comment -

          For these dynamic configurations, we can cache them in memory. In the meantime, create a separate thread to re-load the cache periodically.
          So it is apparent to the configuration reader.
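
A sketch of the cache-plus-background-reload idea, assuming the backing store (e.g. a file under hbase.rootdir) is abstracted behind a Supplier; the class name and period are illustrative. Swapping the whole map reference in one volatile write means readers always see either the old or the new snapshot, never a mix.

```java
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Sketch: in-memory cache of dynamic config, refreshed periodically on a
// background thread. The Supplier stands in for the actual HDFS read.
class PeriodicallyReloadedConfig {
    private final Supplier<Map<String, String>> loader;
    private volatile Map<String, String> cache;
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

    PeriodicallyReloadedConfig(Supplier<Map<String, String>> loader, long periodSeconds) {
        this.loader = loader;
        this.cache = loader.get();
        scheduler.scheduleAtFixedRate(this::refresh, periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    void refresh() {
        // Single volatile write: no reader-writer lock needed on the read path.
        cache = loader.get();
    }

    String get(String key) {
        return cache.get(key);
    }

    void shutdown() {
        scheduler.shutdownNow();
    }
}
```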

          stack added a comment -

          @Jimmy Nice thing about zk is that when config changes all get notification (Would need to make it so a new regionserver joining cluster would look into the zk /configuration dir to pick up differences). When its in fs, we'd need to poll fs to find changes?

          Ted Yu added a comment -

          @Jimmy:
          In your comment @ 24/Feb/12 23:36, did you mean 'transparent to configuration reader' ?
          It is not totally transparent - we need some reader-writer lock.

          Jimmy Xiang added a comment -

          Yes, I meant transparent to configuration reader. My assumption is that the change doesn't have to take effect right away. Some delay is fine.

          If we really want to use ZK, we can use a central file as persistence.

          Jimmy Xiang added a comment -

          @Stack, we don't have to poll fs to find changes. We can just put the lastmodifieddate of the file in ZK. Once the last modified date is changed, we can load the file again.

          When a new regionserver joins a cluster, it should always try to check if any configuration is changed based on the configuration file last modified
          date, which is kind of the version number of the file.

          Lars Hofhansl added a comment -

          Moving out of 0.94

          stack added a comment -

          @Jimmy

          we don't have to poll fs to find changes. We can just put the lastmodifieddate of the file in ZK. Once the last modified date is changed, we can load the file again.

          Why have the fs involved at all. What would be the advantage? Why not just put the changed configs. up in zk? Because we could lose them (because we do not keep permanent data up in zk)? That'd be ok I think. If you don't move the configs to hbase-site, then its your own fault if they are lost when zk data is cleared.

          Jimmy Xiang added a comment -

          If they lose them, it could be very bad. It may be too late when someone sees something weird and then realizes their configs are gone. I think it is safer to persist them somewhere.

          Uma Maheswara Rao G added a comment -

          I think syncing the configuration across clusters would be mostly OM kind of tools functionality. Bringing that into Hadoop/Hbase may not be correct.
          I feel the current issue scope would be to allow some way to do in-memory config updates without restarting the node.

          And I agree with Todd. OM tools are good in managing configs.

          operations teams are very good at managing text-based configuration files with tools like puppet, cfengine, etc. It's also easy to version-control these kinds of configs, add <!-- comments -->, etc. Moving to ZK makes these tasks more difficult – we'd need lots of tooling, etc.

          The current limitation is that, even though OM tools are capable of updating the configurations in all the places, there is no way to make the nodes pick up those configs without a restart of the node.

          I am thinking to proceed with Hadoop-7001 kind of implementation, if there are no objections.
          Also, as a next step we can provide options like updating configs from the shell, and provide a command to reload the config from disk once more, etc.

          stack added a comment -

          Nothing in hadoop-7001 guarantees that what is in the *.xml files is in agreement w/ what gets POSTed to the daemon, right?

          Uma Maheswara Rao G added a comment -

          Yes, as per my understanding, HADOOP-7001 assumes that OM/other tools will update *.xml and POST the same configs to the daemon for updating in-memory values.

          Todd Lipcon added a comment -

          I think adding a "refreshConfigs" admin command is a good idea. It can re-read the configs off the local disk, and emit warnings for any configs that changed that were not runtime-reconfigurable.
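
A sketch of the diff-and-warn step of such a "refreshConfigs" command, assuming the running and re-read configs are available as maps and the set of runtime-reconfigurable keys is known; the class and method names are placeholders, not an existing HBase API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: compare freshly re-read config against the running config and
// report the changed keys that cannot take effect without a restart.
class ConfigRefreshChecker {
    static List<String> nonReconfigurableChanges(Map<String, String> running,
                                                 Map<String, String> reread,
                                                 Set<String> reconfigurable) {
        List<String> warnings = new ArrayList<>();
        for (Map.Entry<String, String> e : reread.entrySet()) {
            String oldValue = running.get(e.getKey());
            boolean changed = oldValue == null || !oldValue.equals(e.getValue());
            if (changed && !reconfigurable.contains(e.getKey())) {
                // Changed on disk but not runtime-reconfigurable: warn the operator.
                warnings.add(e.getKey());
            }
        }
        return warnings;
    }
}
```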

          stack added a comment -

          But it should go via zk I'd say since we have it rather than have us POST refreshConfigs to a servlet on each server

          stack added a comment -

          An argument for redoing hadoop-7001 in hbase would be that you can reset configs in hbase the way you do it in hadoop. I could go for that.

          Subbu M Iyer added a comment -

          First draft patch

          Subbu M Iyer added a comment -

          Attached a first draft for review. It may not address all the reqs mentioned in this Jira but can be added. (I started with a loose set of requirements and hope to refine it as we go further)

          1. Config will be maintained in ZK.
          2. Any runtime config change survives master/RS crash.
          3. Shell support to update config as well as get config value for a config key.
          4. Behind a flag and is disabled by default. Can be enabled through a config change.
          5. If enabled, any new config changes will be available instantly without a restart.

          Please review and let me know your comments. I wanted to get it out here so can get early comments and work on addressing the same.

          Ted Yu added a comment -

          @Subbu:
          Do you want to put the patch on review board ? It is of decent size.

          Subbu M Iyer added a comment -

          Ted,

          I am trying to, but for some reason RB is not allowing me to attach my svn patch.

          Is there anything special I need to do? I always have issues attaching my patches to RB.

          Will keep trying.

          thanks
          Subbu

          On Wed, Apr 4, 2012 at 4:08 PM, Zhihong Yu (Commented) (JIRA) <

          Ted Yu added a comment -

          I could create a review request based on this patch.

          Ted Yu added a comment -

          For new files, please add the license at the top.

          Please add class javadoc for HBaseInMemoryConfiguration. Nicolas is introducing CompoundConfiguration in HBASE-5335. You may want to take a look.

          +    public long getLong(String name, long defaultValue) {
          +        if (clusterConfigMap.containsKey(name)) {
          +            String longValue = clusterConfigMap.get(name);
          +            return longValue == null ? defaultValue : Long.parseLong(longValue);
          

          What if Long.parseLong() throws NumberFormatException ? Should you catch NFE and delegate to super.getLong() ?
          Similar comment for getInt(), getFloat() and getBoolean().
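          The NumberFormatException concern could be handled as in the sketch below. This is illustrative only: it falls back to the passed-in default for brevity, whereas the suggestion above is to delegate to super.getLong(); the class and map names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a getter that tolerates a malformed dynamic value: if the key is
// absent, or the stored string does not parse, return the default rather than
// letting NumberFormatException escape to the caller.
public class DynamicConfig {
    private final Map<String, String> clusterConfigMap = new HashMap<>();

    public void put(String key, String value) { clusterConfigMap.put(key, value); }

    public long getLong(String name, long defaultValue) {
        String v = clusterConfigMap.get(name);
        if (v == null) return defaultValue;  // key absent or null value
        try {
            return Long.parseLong(v.trim());
        } catch (NumberFormatException nfe) {
            // Malformed dynamic value: fall back instead of propagating.
            return defaultValue;
        }
    }
}
```

          getInt(), getFloat(), and getBoolean() would follow the same pattern.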

          For the new method in HMasterInterface:

          +  public boolean updateConfig(String configKey, String configValue) {
          

          Should List<Pair<String, String>> be the parameter type so that the number of calls to master can be lowered ?
          Similar comment for getConfig().
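          The batched variant suggested above could look roughly like this. Names are illustrative, and Map.Entry stands in for HBase's Pair<String, String>; the point is simply one master call carrying many key/value pairs instead of one RPC per key.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a batched config update: apply every pair in a single call and
// report whether all of them were accepted.
public class BatchedConfigUpdate {
    private final Map<String, String> store = new HashMap<>();

    // Applies all pairs in one call; returns true only if every update took.
    public boolean updateConfigs(List<Map.Entry<String, String>> updates) {
        boolean allOk = true;
        for (Map.Entry<String, String> e : updates) {
            if (e.getKey() == null || e.getKey().isEmpty()) {
                allOk = false;  // reject bad keys but keep applying the rest
                continue;
            }
            store.put(e.getKey(), e.getValue());
        }
        return allOk;
    }

    public String getConfig(String key) { return store.get(key); }
}
```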

          +    this.zooKeeper = new ZooKeeperWatcher(conf, MASTER + ":" + "isa.port", this, true);
          

          What does "isa.port" represent above ?
          getClusterConfiguration() is not a good name: clusterConfigTracker is started in the method.

          If "hbase.dynamic.configuration.enabled" has value of false, do we need to start the tracker ?
          Similar comment for starting the tracker in initializeZKBasedSystemTrackers().

          -    this.zooKeeper.reconnectAfterExpiration();
          -
          +    this.zooKeeper = new ZooKeeperWatcher(conf, MASTER + ":"
          +      + this.serverName.getPort(), this, true);
          +    // Set cluster config tracker to null to trigger a reload.
          +    this.clusterConfigTracker = null;
          

          Is the change above to how zooKeeper is recovered intentional ?
          tryRecoveringExpiredZKSession() is called by abortNow(). Can you explain what the comment above setting this.clusterConfigTracker to null means ?

          Uma Maheswara Rao G added a comment -

          @Subbu,

          I just reviewed this patch. Feedback as follows.

          1) updateClusterConfig and createClusterConfig look like mostly duplicate code to me. Can we reuse the code, or change the API so that it can handle both?
            if (ZKUtil.checkExists(watcher,
          +                    getConfigNodePath(configKey)) > 0) {
          +                LOG.info("Cluster configuration node exist for Config Key = "
          +                        + configKey + " Updating the cluster config node");
          +                ZKUtil.setData(watcher, getConfigNodePath(configKey), Bytes.toBytes(configValue));
          +            } else {
          +                ZKUtil.createSetData(watcher, getConfigNodePath(configKey),
          +                        Bytes.toBytes(configValue));
          +            }
          
          2) Looks like you are using the wrong code formatter. Indentation should be 2 spaces.
           try {
          +            if (ZKUtil.checkExists(watcher,
          
          3) Normally we put the license header at the top of the file.
          +package org.apache.hadoop.hbase.zookeeper;
          +
          +import org.apache.commons.logging.Log;
          +import org.apache.commons.logging.LogFactory;
          +import org.apache.hadoop.conf.Configuration;
          +import org.apache.hadoop.hbase.Abortable;
          +import org.apache.hadoop.hbase.HBaseInMemoryConfiguration;
          +import org.apache.hadoop.hbase.util.Bytes;
          +import org.apache.zookeeper.KeeperException;
          +import org.codehaus.jackson.map.ObjectMapper;
          +
          +import java.io.IOException;
          +import java.io.StringWriter;
          +import java.util.Map;
          +
          +/**
          + * Licensed to the Apache Software Foundation (ASF) under one
          + * or more contributor license agreements.  See the NOTICE file
          + * distributed with this work for additional information
          
          4) configuration.dumpConfiguration(configuration, outWriter): this API is static in Configuration, so you can call it directly on the class.
          5) getMaster in HBaseAdmin seems to be deprecated:
          return getMaster().updateConfig(configKey, configValue);
          
          6) Please add the javadoc cleanly (fill in the parameter and return descriptions):
           /**
          +     *
          +     * @param configKey
          +     * @param configValue
          +     * @return
          +     */
          
          7) a) A callback comes from ZK for a data-change event.
          b) The HMaster/RegionServer processes the event to refresh the configurations.
          c) Now, unfortunately, ZK goes down or there is a network fluctuation between ZK and the server.
          d) It may get the data as null, and refreshClusterConfigMap will just ignore the data update in memory.
          Then this node may continue with old configs, right? When will it get updated again?
          Seems to be a bug here, no?
          You have used ZKUtil#getChildDataAndWatchForNewChildren; it can throw an NPE. I just filed HBASE-5722.
          8) Looks like you are creating znodes for all the configuration items.
          I feel we should not allow the user to change all the configuration items (some configurations may not be changeable at runtime, e.g. the ZK quorum URL).
          So, when creating the znodes, we can create them only for the mutable config items in ZK. If the user tries to update an immutable config item, you can directly emit a warning.
          Am I missing something here?
          9) I think we may need to add validation checks for the mutable configurations from the admin. Since we are encouraging users to update configs dynamically, if a user sets a wrong value it takes effect directly, and the system may immediately go into an unstable state. Maybe dangerous, no?
          10) Looks like we are handling the nodeDeleted event. When exactly can this event come?
          I think we only added the updateConfig API in HBaseAdmin. How can we remove an element with that? Will this event really occur?
          11) Final question: we claim Hadoop can run on commodity hardware, in which case some configuration items can be different on each machine, for example 'hbase.regionserver.handler.count', etc.
          12) createClusterConfig creates a JSonConfiguration from the Configuration and adds it into HBaseInMemoryConfiguration.
          Can't we update the HBaseInMemoryConfiguration directly from the input Configuration?

          Please correct me if I misunderstand your design. Great work, thanks a lot.

          Thanks
          Uma
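          Point 1 in the list above (folding updateClusterConfig and createClusterConfig into one path) could be sketched as below. This is a hypothetical illustration: a map stands in for ZK, and in the real code the exists/set/create steps would be ZKUtil.checkExists, ZKUtil.setData, and ZKUtil.createSetData.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a single "set or create" operation for a config node: create the
// node if it is absent, otherwise update it, with one shared code path.
public class ConfigNodeStore {
    private final Map<String, String> znodes = new HashMap<>();

    /** Creates the node if absent, otherwise updates it; returns true on create. */
    public boolean setOrCreate(String path, String data) {
        boolean created = !znodes.containsKey(path);  // ~ ZKUtil.checkExists
        znodes.put(path, data);                       // same write either way
        return created;
    }

    public String get(String path) { return znodes.get(path); }
}
```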

          Uma Maheswara Rao G added a comment -

          Point 11) is incomplete:

          11) Final question is that, we are claiming hadoop can run in commodity hardwares, in that case some configuration items can be different in each machine. for example: 'hbase.regionserver.handler.count'..etc

          But the ZooKeeper-based design may apply the same configuration updates to all nodes.
          We can't provide dynamic configuration update functionality for that kind of property, right? Look at one property in the DataNode, dataXceiverCount: we can configure this property differently on each machine, and it is also a dynamically changeable item.

          Subbu M Iyer added a comment -

          @Uma/@Ted:

          Thanks a lot for your review and comments. As I mentioned, this is an early draft and I will address your concerns in the next few days.

          Also @Uma, you raised a lot of interesting questions which I had while working on this, and I will post my thoughts on them sometime tomorrow.

          thanks

          Uma Maheswara Rao G added a comment -

          @Subbu,

          Also @Uma you raised lot of interesting questions which I had while working on this and I will post my thoughts on the same some time tomorrow.

          That's great.

          Thinking some more, one more point to check.

          Presently it looks like this design will update the same set of configurations in all nodes (HMasters/RegionServers). But the HMaster may not require, or be interested in, HRegionServer-specific configuration items. When you update configurations related to the HRegionServer, the callback will come to the HMaster as well, and it will process the event. This processing is really unnecessary in the HMaster, because it will never use those RegionServer-specific configuration items.

          @Stack, Ted, what is your opinion on this point?

          Is it OK to update all nodes the same way, or should we separate the znodes clearly for the HMaster and HRegionServer, so that updates take place accordingly?
          I mean, we have to maintain the reconfigurable items in memory anyway, so we can distinguish them right there, and the property updates from the admin can be passed to the corresponding znode based on the item.

          Subbu M Iyer added a comment -

          @Uma:

          5. getMaster().updateConfig is not deprecated. It is used by the Shell.

          7. If ZK is down or connectivity is lost, we have a much bigger problem than losing a config update.

          8. Yes. I thought of creating two buckets of configuration elements: 1) bootstrap config items, which may not be changed after the cluster is up, or have no effect if changed once the cluster is up; 2) non-bootstrap, truly runtime configurations, which take immediate effect. My thought was to add these features if there is consensus from the group.

          9. Yes. But if a user is changing the configuration, whether in XML or dynamically, he should be aware of the implications, no?

          10. NodeDeleted is not required. We are not going to delete a configuration entry at runtime, and even doing so may have no effect now. In the future, if we want to trigger actions based on the presence/absence of some configuration entries, this might be added.

          11. Yes, we can offer Master/RS-specific fine-grained configuration controls if that is a real requirement. My thought is to begin with something
          very simple and add complexity as we see fit. So, one option, as we all know, is to create a separate subset of config nodes per Master/RS in the cluster, with each subscribing to events specific to its own subset. But if you look at our current configuration XML, we make no clear distinction as to which properties belong to which runtime entity. Although we name them like config.key.master or config.key.region, I am not sure whether we can count on that to decide whether a property applies to Master only, RS only, and so on.

          12. My thought is that the in-memory Configuration holds a config item only after a ZK node has been created for it, as otherwise it may not be altered. Basically, we need to make sure that a ZK config node exists for a config key before we keep it in memory.

          13. My thought is that ideally we want the Master to be the single source for all cluster-wide config management/updates. We definitely don't want to directly manipulate an RS-specific property; I mean, we should not expose APIs in the RS to enable config key manipulation. The Master should be the single source for all configuration entries, irrespective of whether the config is cluster-wide, Master-specific, or RS-specific, and ZK should be the medium of communication across the cluster. So, the Master will be involved in all config updates, is my take.

          Others please comment.

          Ted Yu added a comment -

          For #11, we can create another JIRA for annotating the scope of each config parameter: whether it affects the master, the region server, or both.
          This way the master and region server wouldn't need to receive ZK events whose underlying config update is irrelevant to them.

          Uma Maheswara Rao G added a comment -

          5. getMaster().updateConfig is not deprecated. It is used by the Shell.

           @Deprecated
            public HMasterInterface getMaster()
          But I am not sure for what reason this API has been deprecated.
          

          7. If ZK is down or connectivity is lost, we have a much bigger problem than losing a config update

          For example, I am referring to HBASE-5635 kind of issues.

          I think your 11th point mostly answers my 13th question. My 11th point was something different. Since you want to keep the initial patch simple, let's discuss that later or as a separate task.

          13. My thought is ideally we want Master to be the single point of source for all cluster wide config management/updates. We definitely don't want to directly manipulate a RS specific property. I mean we should not expose API's in RS to enable config key manipulations. Master should be the single point of source for all configuration entries irrespective of whether the config is cluster wide, master specific or RS specific and ZK should be the medium of communication across the cluster. So, master will be involved in all config updates is my take.

          I am not talking about configuration updates from different nodes. I am talking about some config item updates which may not really be required for all the nodes (some configuration items will be used only by the master, and some only by the RS). Anyway, as you said, we can take this point as a next task, I feel.

          Subbu M Iyer added a comment -

          I have tried to capture all (or most) of the configuration properties that are currently being used in an Excel sheet.

          The objective here is to figure out which properties may be effectively changed using the dynamic config utility and which may not. While doing this exercise, I found that close to 200 properties are missing from hbase-default.xml but are used in the system.

          So, here are my thoughts on this and would like to hear from you guys your thoughts as well:

          1. My understanding is that all configuration properties must be defined in hbase-default.xml with a default value. Some exceptions are possible in corner cases, but the majority of properties must be defined there.

          2. There is no way for a user to know that a property is configurable/changeable unless 1) the property is defined in hbase-default, or 2) it is communicated through some documentation or API doc.

          Subbu M Iyer added a comment -

          Ted/Uma:

          Addressed all of your comments in this patch.

          Subbu M Iyer added a comment -

          I am seeing that this is a complicated feature with lots of open questions.

          1. There are a lot of properties/configurations that may have no effect (even with the dynamic config tool) once the server is up. How do we communicate such cases? Do we maintain a smaller subset of configurations that can be changed dynamically and only allow that subset for dynamic changes?

          2. There are a lot of properties that might take effect only during server failover or when a backup server takes over. Basically, every dynamic configuration change that we apply to the cluster will take effect during server failover or when a backup server takes over. How do we communicate such cases? Do we even want this?

          3. Do we really think that this feature will be a valuable addition?

          Ted Yu added a comment -

          @Subbu:
          Have you seen HBASE-5335 which adds config options on a per-table & per-cf basis ?
          Just for your reference.

          For #1 above, I think we should maintain a subset of configurations.

          stack added a comment -

          My understanding is that all configuration properties will be/must be defined in hbase-default.xml with a default value. Some exceptions are possible in some corner cases but majority of properties must be defined here.

          IMO, on #1 in the first of your two comments above Subbu, there are configs that do not have to be in hbase-default. Such configs are those we think we'd never change but that we've made configurable just in case. This class of config can only be found by reading the source code. We don't add them to hbase-default.xml because they detract from the important configs in the flat hbase-default.xml file (maybe this is less of an issue now that we 'hide' hbase-default.xml inside the jar and the book includes descriptions). If we do add these to hbase-default.xml, I'd think we'd add an annotation saying something like 'exotic' or 'not-for-the-faint-of-heart'.

          On #2, I think it's fine that they have to read the source for esoterics.

          On your second set of comments: yes, I think this is an important feature to have, and I agree many configs will not be changeable without extensive server-side refactoring (and even then, some may be just plain dangerous to change in mid-flight). Perhaps we do a subset in this case, and if you stray outside the subset, you get an exception.
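          The subset-with-exception idea could be sketched as below. This is hypothetical: the guard class is illustrative, and the whitelist contents are examples, not a vetted list of safely changeable HBase properties.

```java
import java.util.Set;

// Sketch of a guard that only permits an explicit whitelist of keys to be
// changed at runtime; any other key gets an exception back to the caller.
public class MutableConfigGuard {
    private final Set<String> mutableKeys;

    public MutableConfigGuard(Set<String> mutableKeys) {
        this.mutableKeys = mutableKeys;
    }

    // Throws if the key is not in the dynamically-changeable subset.
    public void checkMutable(String key) {
        if (!mutableKeys.contains(key)) {
            throw new IllegalArgumentException(
                "Config key is not dynamically changeable: " + key);
        }
    }
}
```

          The master's updateConfig path would call checkMutable before writing the znode, so a shell user who strays outside the subset gets an immediate error instead of a silent no-op.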

          Subbu M Iyer added a comment -

          Thanks Stack/Ted.

          The latest version of the patch basically enables any and all of the configuration properties (roughly 300, captured in the attached Excel sheet) to be changed through the shell. Some config changes take immediate effect, while others have no effect mid-flight but take effect when a Master/RS newly joins the cluster or fails over.

          So basically what we have now is an 'all you can change' option with no exceptions. The only caveat is that the user may not know which changes are immediate and which are not.

          Ted Yu added a comment -

          For HBaseInMemoryConfiguration.java, the license should be placed at the top of file.
          Have you seen hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompoundConfiguration.java ?
          It holds multiple ImmutableConfigMap's.

          Minor comment on format:

          +               return true;
          +            }
          +            else if ("false".equals(booleanValue)) {
          

          else is not needed. In case you want to keep it, please move it to the line with '}'

          +            LOG.debug("Found Cluster configuration in ZK. Refreshing the in memory configuration map from ZK");
          

          The above line is too long (> 100 chars).

          It looks like ClusterConfigTracker wasn't included in the patch. Therefore it is hard to get a global view of your design.
          Can you generate a new patch for trunk and put it on the review board ?

          Thanks

          Ted Yu added a comment -

          I found a copy of ClusterConfigTracker in patch v1.
          Please add javadoc for this class.

          For hasClusterConfigAvailableInZK(), maybe a better name is isClusterConfigAvailableInZK().

          +            int configNodes = ZKUtil.getNumberOfChildren(watcher, watcher.clusterConfigZNode);
          +            byte[] data = ZKUtil.getData(watcher, watcher.clusterConfigZNode);
          +            if (data != null && data.length > 0 && configNodes > 0) {
          

          The check of configNodes > 0 should be performed before the call to ZKUtil.getData(). If it is 0, return false directly.
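          An illustrative sketch of this reordering (the `ZkReader` interface below is a stand-in for the `ZKUtil` calls in the patch, not actual HBase code): check the cheap child count first, and only fetch the znode data when there is something to read.

          ```java
          // Sketch of the suggested check order. ZkReader is a hypothetical
          // stand-in for the ZKUtil static methods used in the patch.
          public class ConfigZkCheck {
              public interface ZkReader {
                  int getNumberOfChildren(String znode);
                  byte[] getData(String znode);
              }

              public static boolean hasClusterConfig(ZkReader zk, String znode) {
                  // Cheap check first: no config children means no cluster config,
                  // so we can return false without fetching any znode data.
                  if (zk.getNumberOfChildren(znode) <= 0) {
                      return false;
                  }
                  byte[] data = zk.getData(znode);
                  return data != null && data.length > 0;
              }
          }
          ```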

          +    public boolean updateClusterConfig(String configKey, String configValue) {
          

          Consider creating a method similar to the above which accepts collection of key/value pairs so that more than one key/value would be updated in one call.

          Since I haven't seen ClusterConfigTracker in patch v2, some of the above comments may not apply.

          Subbu M Iyer added a comment -

          Added latest patch with protobuf support and addressed the review comments. All 7 tests pass.

          Here is a sample of how things work from shell:

          hbase(main):007:0> get_config 'hbase.balancer.period'
          config_value = : 20000
          0 row(s) in 0.0170 seconds

          hbase(main):008:0> update_config 'hbase.balancer.period', '35000'
          updated config key with new value
          0 row(s) in 0.0200 seconds

          hbase(main):009:0> get_config 'hbase.balancer.period'
          config_value = : 35000
          0 row(s) in 0.0180 seconds

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12544331/3909_090712-2.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 6 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          -1 javac. The patch appears to cause mvn compile goal to fail.

          -1 findbugs. The patch appears to cause Findbugs (version 1.3.9) to fail.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.TestHBaseDynamicConfiguration
          org.apache.hadoop.hbase.replication.TestReplication
          org.apache.hadoop.hbase.client.TestShell

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/2828//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/2828//console

          This message is automatically generated.

          Ted Yu added a comment -

          I think the test failure was due to the following in initializeZKBasedSystemTrackers():

              this.catalogTracker = createCatalogTracker(this.zooKeeper, this.conf,
                  this, conf.getInt("hbase.master.catalog.timeout", 600000));
          

          nit: exchange the order of the parameters so that assertion is easier to read:

              assertEquals(catalogTimeout, Integer.MAX_VALUE);
          

          In this particular case, I think the assertion is not needed - the actual value may change if someone modifies the snippet cited above.

          stack added a comment -

          Subbu M Iyer Mind putting your patch on rb so we can take a look? I like your cmd-line example. That looks sweet.

          Subbu M Iyer added a comment -

          Stack: Which RB version should l target my patch on ?

          Please let me know.

          thanks
          Subbu

          Ted Yu added a comment -

          Subbu has put the review here on Sept 8th: https://reviews.apache.org/r/6964/

          Subbu M Iyer added a comment -

          Attaching latest patch that addresses the review comments.

          All tests pass except the following and I am reviewing the failures.

          Tests in error:
          testGetRowVersions(org.apache.hadoop.hbase.TestMultiVersions): Shutting down
          testScanMultipleVersions(org.apache.hadoop.hbase.TestMultiVersions): org.apache.hadoop.hbase.MasterNotRunningException: Can create a proxy to master, but it is not running

          Ted Yu added a comment -

          The test passed locally with patch on my MacBook:

          Running org.apache.hadoop.hbase.TestMultiVersions
          2012-10-28 20:41:05.379 java[36641:1903] Unable to load realm mapping info from SCDynamicStore
          Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.007 sec
          
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12551148/3909-102812.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 6 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          -1 javadoc. The javadoc tool appears to have generated 89 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.client.TestShell

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3169//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3169//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3169//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3169//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3169//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3169//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3169//console

          This message is automatically generated.

          Ted Yu added a comment -

          Test failure revealed that some script might be missing from the patch:

          Stacktrace
          
          org.jruby.embed.EvalFailedException: (LoadError) no such file to load -- shell/commands/update_config
          	at org.jruby.embed.internal.EmbedEvalUnitImpl.run(EmbedEvalUnitImpl.java:136)
          	at org.jruby.embed.ScriptingContainer.runUnit(ScriptingContainer.java:1263)
          	at org.jruby.embed.ScriptingContainer.runScriptlet(ScriptingContainer.java:1308)
          	at org.apache.hadoop.hbase.client.TestShell.testRunShellTests(TestShell.java:69)
          
          Subbu M Iyer added a comment -

          Added new patch to address the missing scripts issue.

          Thanks Ted for pointing it out.

          Ted Yu added a comment -

          The first few changes to conf/hbase-site.xml seem to be unrelated to this feature.

          +            runtime cluster configuration changes with out the need for restarting the cluster.
          

          'with out' -> 'without'
          For HBaseInMemoryConfiguration:

          +  public String get(String configKey) {
          +    if (clusterConfigMap.containsKey(configKey)) {
          +      return clusterConfigMap.get(configKey);
          

          Since clusterConfigMap is ConcurrentHashMap, I think we should skip the call to containsKey(). If null is returned from get(), we delegate to super.get(). This applies to getLong(), etc.
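          A minimal sketch of the suggested lookup pattern (class and field names here are illustrative, not the patch's actual code): try the dynamic overlay with a single get() and fall back to the base configuration when it returns null.

          ```java
          import java.util.Properties;
          import java.util.concurrent.ConcurrentHashMap;

          // Illustrative overlay lookup: one get() call instead of
          // containsKey() + get(). ConcurrentHashMap never stores null values,
          // so a null result always means "not overridden".
          public class OverlayConfig {
              private final ConcurrentHashMap<String, String> clusterConfigMap =
                  new ConcurrentHashMap<>();
              private final Properties defaults = new Properties();

              public OverlayConfig() {
                  defaults.setProperty("hbase.balancer.period", "20000");
              }

              public void update(String key, String value) {
                  clusterConfigMap.put(key, value);
              }

              public String get(String key) {
                  String v = clusterConfigMap.get(key);
                  return (v != null) ? v : defaults.getProperty(key);
              }

              public static void main(String[] args) {
                  OverlayConfig conf = new OverlayConfig();
                  System.out.println(conf.get("hbase.balancer.period")); // default value
                  conf.update("hbase.balancer.period", "35000");
                  System.out.println(conf.get("hbase.balancer.period")); // overlay value
              }
          }
          ```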
          For refreshClusterConfigMap():

          +      LOG.debug("Refreshed cluster configuration map. Current size = "
          +              + clusterConfigMap.size());
          

          Consider recording the size of clusterConfigMap before refreshing and log both sizes.

          Running org.apache.hadoop.hbase.TestHBaseDynamicConfiguration
          Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 116.502 sec
          

          I think TestHBaseDynamicConfiguration should be marked as LargeTest.

          +    updateClusterConfig(cct);
          +
          +    Thread.sleep(5000);
          

          Worth considering finding the proper way of detecting completion of config update so that we don't need to wait for so long.
          testDumpConfiguration() seems like a utility method instead of a test.
          testConfigDuplicateAddition() can be named testConfigDuplicateAdditionWithSameKey().
          testConfigUpdate() seems to be same as testConfigDuplicateAddition().

          For JsonConfiguration and JsonProperty, they're defined in ClusterConfigTracker.
          You can reuse them in TestHBaseDynamicConfiguration

          Please upload patch v3 onto review board so that further review is easier.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12551209/3909-102912.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 6 new or modified tests.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          -1 javadoc. The javadoc tool appears to have generated 89 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3172//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3172//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3172//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3172//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3172//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3172//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3172//console

          This message is automatically generated.

          Ted Yu added a comment -

          For ClusterConfigTracker:

          +        // Cluster configuration already exists in ZK.
          +        LOG.info("ZK Cluster Config already available. Skipping config ZK node creation");
          

          The above log should be debug. The second sentence actually belongs to initClusterConfig()
          For updateClusterConfig(String configKey, String configValue):

          +    } catch (KeeperException e) {
          +      LOG.error("Failure during update cluster configuration", e);
          

          Please include configKey in the above log.
          The change in ServerManager.java is unrelated, please remove from next patch.

          For update_config.rb and get_config.rb, the following is not needed:

          +# Copyright 2011 The Apache Software Foundation
          

          For HBaseAdmin.java, the following line exceeds 100 characters long:

          +        UpdateConfigRequest request = RequestConverter.buildUpdateConfigRequest(configKey, configValue);
          

          Currently only one pair of key / value is updated per UpdateConfigRequest:

          +message UpdateConfigRequest {
          +  required string configKey = 1;
          +  required string configValue = 2;
          +}
          

          Shall we accommodate more than one pair (through repeated keyword) ?
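          One hypothetical shape for such a batched request, sketched here for discussion only (the message and field names are not from the patch): move the key/value pair into its own message and mark it repeated.

          ```proto
          // Hypothetical batched form, not part of the patch: a repeated
          // key/value entry lets one request carry any number of changes.
          message ConfigEntry {
            required string configKey = 1;
            required string configValue = 2;
          }

          message UpdateConfigRequest {
            repeated ConfigEntry entries = 1;
          }
          ```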

          Ted Yu added a comment -

          In a local run, TestMasterNoCluster hung.
          I wasn't able to find the test in https://builds.apache.org/job/PreCommit-HBASE-Build/3172//testReport/ either.
          Will attach jstack soon.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12551232/testMasterNoCluster.stack
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3173//console

          This message is automatically generated.

          Subbu M Iyer added a comment -

          Ted,

          Addressed all your comments except the support for more than one
          key/value pair per update request. My thought is we should definitely
          add that as an additional feature once we commit this one.

          Also updated the latest patch to review board.

          thanks a lot for your cooperation on this.

          Subbu

          Subbu M Iyer added a comment -

          call to containsKey(). If null is returned from get(), we delegate to
          super.get(). This applies to getLong(), etc.

          This is tricky. Basically my thought is it's quite possible that someone
          has explicitly overridden a configuration key with a null value, and in
          such cases we do not want to override it with the default value from the
          config.

          So, if the key is present in the CHM, then we must use it no matter what
          the value is.

          Thoughts?

          On Mon, Oct 29, 2012 at 10:12 AM, Ted Yu (JIRA) <jira@apache.org> wrote:
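The lookup order being debated here can be sketched as follows. This is a minimal hypothetical sketch (class and method names are invented, not HBase's actual Configuration subclass): dynamic overrides live in a ConcurrentHashMap, and because a ConcurrentHashMap cannot hold null values, checking get() for null is equivalent to calling containsKey(), which is Ted's point below.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the lookup order under discussion: dynamic
// overrides live in a ConcurrentHashMap (CHM); anything not present
// there falls back to the on-disk defaults. Since CHM cannot hold
// null values, a plain get() == null check is equivalent to containsKey().
class OverlayConf {
    private final ConcurrentHashMap<String, String> overrides = new ConcurrentHashMap<>();
    private final Map<String, String> defaults = new HashMap<>();

    public void setDefault(String key, String value) { defaults.put(key, value); }

    // Dynamic update path: CHM rejects null keys/values with an NPE anyway,
    // so an override, once present, is never null.
    public void setOverride(String key, String value) {
        overrides.put(key, value);
    }

    public String get(String key) {
        String v = overrides.get(key);   // null here means "not overridden"
        return (v != null) ? v : defaults.get(key);
    }

    public long getLong(String key, long dflt) {
        String v = get(key);
        return (v != null) ? Long.parseLong(v) : dflt;
    }
}
```

With this shape the "null value explicitly stored in the CHM" case Subbu worries about cannot arise, which is why delegating to the defaults whenever the CHM returns null is safe.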

          Ted Yu added a comment -

          Excerpt from http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/ConcurrentHashMap.html :

          Like Hashtable but unlike HashMap, this class does not allow null to be used as a key or value.

          So the user is not able to specify null value. I think my suggestion is safe.

          Sergey Shelukhin added a comment -

          this appears to have stalled

          stack added a comment -

Moving new feature out of 0.95 (I think this is a critical hbase feature but I can't work on it at the moment.)

          binlijin added a comment -

What about the method which the 0.89 fb branch currently uses?
(1) First, change every node's hbase-site.xml.
(2) Second, ask each node to reload the conf so the change takes effect.
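The second step above can be sketched as an observer registry that pushes a freshly re-read conf to interested components when the admin triggers a reload. This is a hypothetical sketch with invented names; the eventual HBase mechanism may differ.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Step (2) as an observer pattern: components register themselves, and an
// admin-triggered reload pushes the re-read conf to each of them.
interface ConfObserver {
    void onConfigurationChange(Map<String, String> newConf);
}

class ConfManager {
    private final Set<ConfObserver> observers = ConcurrentHashMap.newKeySet();

    void register(ConfObserver o) { observers.add(o); }

    void deregister(ConfObserver o) { observers.remove(o); }

    // Called after hbase-site.xml has been re-read from disk on this node.
    void notifyAllObservers(Map<String, String> newConf) {
        Map<String, String> snapshot = Collections.unmodifiableMap(newConf);
        for (ConfObserver o : observers) {
            o.onConfigurationChange(snapshot);
        }
    }
}
```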

          binlijin added a comment -

And I can backport it from 0.89 fb and make a patch for trunk.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12551209/3909-102912.patch
          against trunk revision .
          ATTACHMENT ID: 12551209

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 6 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8482//console

          This message is automatically generated.

          binlijin added a comment -

Stack Ted Yu how about the patch HBASE-3909-backport-from-fb-for-trunk.patch?
There are many ways to implement dynamic config, so I just added a framework; if we accept this approach, I will add more.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12624264/HBASE-3909-backport-from-fb-for-trunk.patch
          against trunk revision .
          ATTACHMENT ID: 12624264

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 lineLengths. The patch introduces the following lines longer than 100:
          + private UpdateConfigurationRequest(boolean noInit)

          { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }

          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + private UpdateConfigurationRequest(boolean noInit)

          { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }

          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>

          -1 site. The patch appears to cause mvn site goal to fail.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8488//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8488//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8488//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8488//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8488//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8488//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8488//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8488//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8488//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8488//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8488//console

          This message is automatically generated.

          Ted Yu added a comment -

          Skimmed the patch.
          Some high level questions:
          when configuration is updated, how would regions in transition pick up the changes ?
          when configuration is updated and meanwhile master fails over, what would happen ?

          Thanks

          binlijin added a comment -

Q: when configuration is updated, how would regions in transition pick up the changes ?
A: When the region open finishes, it can check the configuration; if the configuration has changed, the region can apply the new configuration.

Q: when configuration is updated and meanwhile master fails over, what would happen ?
A: Making the new configuration take effect must be triggered by the admin; if some node failed, the admin can do it again.

          Ted Yu added a comment -

          when region open finished, it can check the configuration

          Does this imply that this region needs re-opening because many parameters take effect at the time of region open ?

          binlijin added a comment -

          Does this imply that this region needs re-opening because many parameters take effect at the time of region open ?

No, the region should not be re-opened; not all parameters will take effect dynamically, we choose some of them.

          stack added a comment -

Patch is great. Let's get it in. There is a hole in that the admin client does not know how to talk to the backup master, which we should fix in general, but it should not get in the way of this patch. What do you think, binlijin? Apply this as is and then add configs as we go? Needs a bit of doc so devs know this mechanism exists. Would suggest fixing the class comment so it uses javadoc formatting, especially given the documentation on the class is so good.

          Rereading the old comments on this issue talking of hoisting configs up into zk so we can exploit its callback mechanism, this mechanism does none of that and just has us reload the configs from under the running processors.

One concern I'd have is that we are reloading the configs, but the Server constructors do a bunch of checking and setting on configs. Will these settings be lost when we reload from raw configs on disk? For example, the HRS constructor calls FSUtils.setupShortCircuitRead(this.conf). Do these settings persist past the reload call?

          Good stuff.
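The concern above can be illustrated with a toy example (names hypothetical; loadFromDisk() stands in for re-reading hbase-site.xml, and a plain Map stands in for Configuration): values that startup code sets only in memory disappear if a reload simply re-reads the raw files.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the concern: settings applied programmatically at
// startup (e.g. by FSUtils.setupShortCircuitRead) live only in memory,
// so a naive reload that re-reads raw files silently drops them.
class ReloadLoss {
    // Stands in for parsing hbase-site.xml; the key is never "true" on disk.
    static Map<String, String> loadFromDisk() {
        Map<String, String> conf = new HashMap<>();
        conf.put("dfs.client.read.shortcircuit", "false");
        return conf;
    }

    static String valueAfterNaiveReload() {
        Map<String, String> conf = loadFromDisk();
        conf.put("dfs.client.read.shortcircuit", "true"); // startup-time tweak
        conf = loadFromDisk();                            // naive reload
        return conf.get("dfs.client.read.shortcircuit");  // back to "false"
    }
}
```

A reload mechanism that wants to preserve such settings would have to re-run the startup-time fixups (or keep them in an overlay that survives the re-read) rather than replace the conf wholesale.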

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12624975/HBASE-3909-backport-from-fb-for-trunk-2.patch
          against trunk revision .
          ATTACHMENT ID: 12624975

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 lineLengths. The patch introduces the following lines longer than 100:
          + private UpdateConfigurationRequest(boolean noInit)

          { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }

          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + private UpdateConfigurationRequest(boolean noInit)

          { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }

          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>

          -1 site. The patch appears to cause mvn site goal to fail.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8517//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8517//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8517//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8517//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8517//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8517//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8517//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8517//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8517//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8517//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8517//console

          This message is automatically generated.

          binlijin added a comment -

          Add to rb for review, https://reviews.apache.org/r/17369/ .
          stack

          There is a hole in that the admin client does not know how to talk to backup master which we should fix in general but it should not get in the way of this patch.

Yes, this patch misses the backup master; we can fix it with another patch.

          Apply this as is and then add configs as we go? Needs a bit of doc.

More work will be done, for example docs and adding a ruby shell command.

          One concern I'd have is that we are reloading the configs but the Server constructors do a bunch of checking and setting on configs. Will these settings be lost when we reload from raw configs on disk. For example, HRS constructor calls FSUtils.setupShortCircuitRead(this.conf). Do these settings persist pas the reload call?

Yes, at first some settings will be lost; we will not have all settings take effect dynamically.

          stack added a comment -

          Yes, this patch misses backup master, we can fix it with other patch.

          Yes.

Yes, at first some settings will be lost; we will not have all settings take effect dynamically.

          Pardon me. I don't follow. We can't lose the setting of short circuit reads on reload of configs. Do you think this will happen? Can we add unit tests to show server configs ride across the reload of configs? Thanks binlijin

          Andrew Purtell added a comment -

          Yes, this patch misses backup master, we can fix it with other patch.

          Yes.

          That's a pretty big hole. What happens if there is a fail over?

          stack added a comment -

          That's a pretty big hole. What happens if there is a fail over?

The failed-over master will be missing the config. A workaround would be the operator restarting the backup masters when the master config changed. Can add this to the release notes and address it better in a second issue (backup masters are in general on the hbase admin's blindside – need to fix).

          chunhui shen added a comment -

Throwing out some potential problems I see:
1. As implemented in the patch, only some configurations can be reset after reloading. It means the administrator must remember which configurations can be reloaded dynamically.
I think it will puzzle users whether the configuration value they expected was actually reset.

2. Through the ConfigurationObserver, will the reloaded value take effect properly? At least, resetting the compaction thread count may not behave as expected with this patch. For example, decreasing the corePoolSize of a thread pool will not decrease the number of running threads until the thread pool's queue is empty and a thread has been idle for some time. If a user wants to limit the IO caused by compaction by decreasing the compaction thread count dynamically, but compaction requests are constantly being produced on a real cluster, the compaction threads will never be idle, so the number of running compaction threads won't decrease.

3. Is there any concurrency risk when reloading the config? A thread may see the old value and the new value for the same config while running. Of course, this could be guarded against in the ConfigurationObserver, but it seems not easy.
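Point 2 can be demonstrated directly with java.util.concurrent.ThreadPoolExecutor: lowering corePoolSize while all workers are busy does not interrupt them, so the active count stays at the old size until threads go idle past the keep-alive time. (A standalone demo, not code from the patch.)

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Shrinking corePoolSize does not stop busy worker threads; excess
// threads only exit once they become idle past the keep-alive time.
class PoolShrink {
    static int activeCountAfterShrink() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4, 4, 1, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        CountDownLatch allStarted = new CountDownLatch(4);
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 4; i++) {
            pool.execute(() -> {
                allStarted.countDown();
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        }
        try {
            allStarted.await();        // all four workers are now busy
            pool.setCorePoolSize(1);   // the dynamic "config change"
            int active = pool.getActiveCount(); // still 4: nothing interrupted
            release.countDown();
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
            return active;
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```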

          binlijin added a comment -

          Pardon me. I don't follow. We can't lose the setting of short circuit reads on reload of configs. Do you think this will happen?

Some configs are hard or even impossible to apply dynamically right now, for example: zookeeper.session.timeout, hbase.regionserver.metahandler.count and so on.
If we want short circuit reads to take effect dynamically, we must change the HDFS client.

          DFSClient
            /**
             * DFSClient configuration 
             */
            public static class Conf {
              final boolean useLegacyBlockReader;
              final boolean useLegacyBlockReaderLocal;
              final String domainSocketPath;
              final boolean skipShortCircuitChecksums;
              final int shortCircuitBufferSize;
              final boolean shortCircuitLocalReads;
              final boolean domainSocketDataTraffic;
              final int shortCircuitStreamsCacheSize;
              final long shortCircuitStreamsCacheExpiryMs; 
            }
          

          stack
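One conceivable way to make such settings reloadable without mutating the final fields of DFSClient.Conf is to keep the config object immutable and swap it behind a volatile reference on reload. This is only a sketch under that assumption — the real change would have to land inside the HDFS client, and the class names here are illustrative, not DFSClient's actual API:

```java
public class ReloadableClient {
    // Immutable snapshot of the settings, mirroring DFSClient.Conf's style.
    static final class Conf {
        final boolean shortCircuitLocalReads;   // final, as in DFSClient.Conf
        Conf(boolean shortCircuitLocalReads) {
            this.shortCircuitLocalReads = shortCircuitLocalReads;
        }
    }

    // Readers always see a complete, consistent Conf; writers replace it whole.
    private volatile Conf conf;

    public ReloadableClient(boolean shortCircuit) {
        this.conf = new Conf(shortCircuit);
    }

    // Would be invoked by a ConfigurationObserver-style hook on reload.
    public void onConfigurationChange(boolean shortCircuit) {
        conf = new Conf(shortCircuit);          // atomic reference swap
    }

    public boolean useShortCircuitRead() {
        return conf.shortCircuitLocalReads;
    }
}
```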

          binlijin added a comment -

          HBASE-3909-backport-from-fb-for-trunk-4.patch makes short circuit reads dynamic.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12627568/HBASE-3909-backport-from-fb-for-trunk-3.patch
          against trunk revision .
          ATTACHMENT ID: 12627568

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 6 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

          -1 javadoc. The javadoc tool appears to have generated 3 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 lineLengths. The patch introduces the following lines longer than 100:
          + private UpdateConfigurationRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + private UpdateConfigurationRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>

          +1 site. The mvn site goal succeeds with this patch.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8625//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8625//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8625//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8625//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8625//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8625//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8625//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8625//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8625//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8625//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8625//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12627573/HBASE-3909-backport-from-fb-for-trunk-4.patch
          against trunk revision .
          ATTACHMENT ID: 12627573

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 6 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

          -1 javadoc. The javadoc tool appears to have generated 3 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 lineLengths. The patch introduces the following lines longer than 100:
          + private UpdateConfigurationRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + private UpdateConfigurationRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>

          +1 site. The mvn site goal succeeds with this patch.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8626//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8626//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8626//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8626//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8626//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8626//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8626//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8626//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8626//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8626//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8626//console

          This message is automatically generated.

          Ted Yu added a comment -

          The patch is quite large.

          Mind posting it on Review Board?

          Thanks

          Ted Yu added a comment -
          + * Copyright 2013 The Apache Software Foundation
          

          Year is not needed.

          +public interface ConfigurationObserver {
          

          Add annotation for audience.

          +          LOG.error("Encountered a NPE while notifying observers.");
          

          Please add some information about the observer that caused the NPE.
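A minimal sketch of the notification loop with that last review point applied — the observer that threw is named in the log message. Class and method names here are illustrative, not the patch's actual code:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ConfigurationManager {
    interface ConfigurationObserver {
        void onConfigurationChange();
    }

    // Copy-on-write so observers can be registered while a reload is running.
    private final List<ConfigurationObserver> observers = new CopyOnWriteArrayList<>();

    public void register(ConfigurationObserver o) {
        observers.add(o);
    }

    // Notifies every observer; one misbehaving observer must not block the
    // rest. Returns the number of observers that threw, for visibility.
    public int notifyAllObservers() {
        int failures = 0;
        for (ConfigurationObserver o : observers) {
            try {
                o.onConfigurationChange();
            } catch (NullPointerException e) {
                failures++;
                // Name the offending observer, per the review comment.
                System.err.println(
                    "Encountered a NPE while notifying observer " + o + ": " + e);
            }
        }
        return failures;
    }
}
```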

          binlijin added a comment -

          Ted Yu, the patch is uploaded to https://reviews.apache.org/r/17369/
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12627774/HBASE-3909-backport-from-fb-for-trunk-5.patch
          against trunk revision .
          ATTACHMENT ID: 12627774

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 9 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

          -1 javadoc. The javadoc tool appears to have generated 3 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 lineLengths. The patch introduces the following lines longer than 100:
          + private UpdateConfigurationRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + private UpdateConfigurationRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>

          -1 site. The patch appears to cause mvn site goal to fail.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8640//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8640//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8640//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8640//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8640//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8640//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8640//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8640//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8640//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8640//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8640//console

          This message is automatically generated.

          binlijin added a comment -

          Try to fix javadoc and findbugs warnings.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12627787/HBASE-3909-backport-from-fb-for-trunk-6.patch
          against trunk revision .
          ATTACHMENT ID: 12627787

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 9 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

          -1 javadoc. The javadoc tool appears to have generated 3 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 lineLengths. The patch introduces the following lines longer than 100:
          + private UpdateConfigurationRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + private UpdateConfigurationRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>

          -1 site. The patch appears to cause mvn site goal to fail.

          -1 core tests. The patch failed these unit tests:

          -1 core zombie tests. There are 1 zombie test(s): at org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testLogRollOnPipelineRestart(TestLogRolling.java:485)

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8642//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8642//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8642//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12627802/HBASE-3909-backport-from-fb-for-trunk-7.patch
          against trunk revision .
          ATTACHMENT ID: 12627802

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 9 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop1.1. The patch compiles against the hadoop 1.1 profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 lineLengths. The patch introduces the following lines longer than 100:
          + private UpdateConfigurationRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + private UpdateConfigurationRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + private UpdateConfigurationResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
          + * <code>rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);</code>
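          For context, the long generated lines flagged above come from protobuf-generated Java for the new UpdateConfiguration RPC. Based only on the method and message names visible in this report, the underlying service definition is presumably along these lines (a sketch; the service name and the message fields are assumptions, as the report does not show the actual .proto contents):

          ```proto
          // Hypothetical reconstruction of the .proto service implied by the
          // generated code flagged above. Message bodies are placeholders,
          // not taken from the patch.
          message UpdateConfigurationRequest {
          }

          message UpdateConfigurationResponse {
          }

          // "AdminService" is an assumed name for the enclosing service.
          service AdminService {
            // Asks a server to reload its configuration.
            rpc UpdateConfiguration(UpdateConfigurationRequest)
                returns (UpdateConfigurationResponse);
          }
          ```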

          -1 site. The patch appears to cause mvn site goal to fail.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8643//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8643//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8643//console

          This message is automatically generated.


            People

            • Assignee: Subbu M Iyer
            • Reporter: stack
            • Votes: 0
            • Watchers: 21
