Solr
SOLR-4655

The Overseer should assign node names by default.

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 4.4, 6.0
    • Component/s: SolrCloud
    • Labels: None

      Description

      Currently we make a unique node name by using the host address as part of the name. This means that if you want a node with a new address to take over, the node name is misleading. It's best if you set custom names for each node before starting your cluster. This is cumbersome, though, and cannot currently be done with the collections API. Instead, the Overseer could assign a more generic name such as nodeN by default. Then you can easily swap in another node with no pre-planning and no confusion in the name.

      1. SOLR-4655.patch
        73 kB
        Mark Miller
      2. SOLR-4655.patch
        72 kB
        Mark Miller
      3. SOLR-4655.patch
        68 kB
        Mark Miller
      4. SOLR-4655.patch
        68 kB
        Mark Miller
      5. SOLR-4655.patch
        69 kB
        Mark Miller
      6. SOLR-4655.patch
        66 kB
        Mark Miller
      7. SOLR-4655.patch
        66 kB
        Mark Miller
      8. SOLR-4655.patch
        62 kB
        Mark Miller


          Activity

           Mark Miller added a comment - edited

           For back compat, I have added a new config option at the core container level called genericNodeNames - it will default to true in the next release, but its absence will be treated as false.

           Shalin Shekhar Mangar added a comment -

          Mark, I noticed this issue after I committed SOLR-3755. We assign names to sub-shard nodes in OverseerCollectionProcessor. Is the change as simple as using Assign.assignNode() or is there something more I need to take care of?

           Mark Miller added a comment -

          Is the change as simple as using Assign.assignNode()

           I think it's a bit more complicated than that - it's a 62 kB patch. I think I've simplified things overall, though. The way you could override the node name or get the address-based name previously was a bit ugly. This creates one place for the coreNodeName to be set, rather than having various places know about and use the default when the core node name is null.

           Mark Miller added a comment -

          Patch to trunk.

           Mark Miller added a comment -

          Is the change as simple as using Assign.assignNode() or is there something more I need to take care of?

           Oh, do you mean in terms of keeping the sub-shard names consistent with the super-shard names? I didn't catch your meaning on the first read.

           The way the node names are assigned is super simple - just Assign.assignNode to get a new name. I'm not positive that it will just play nice with sub-shards - hope to look at that stuff closer soon.

           How do you currently handle the case where a user specifies a custom arbitrary shard name?

           Shalin Shekhar Mangar added a comment -

           Sub-slice names are created by just appending _N to the parent shard name. For example, shard1 gets split into shard1_0 and shard1_1, etc.

          The node names are created as collection_shard1_0_replica1, collection_shard1_0_replica2 etc.

           Mark Miller added a comment -

           Checkpoint - I'm having trouble with the new ShardSplitTest - not sure what the issue is yet, but it passes about 50% of the time, while a standard checkout currently passes consistently for me.

           Anshum Gupta added a comment -

          Hey Mark, I just looked at the patch and it looks like you've removed the update/createshard APIs. Just wanted to bring it to your notice.

           Mark Miller added a comment -

           Hmm, must have lost them in the merge up somehow. Hopefully that's the reason that test is failing - oddly, I've seen all tests pass a couple of times though.

           Mark Miller added a comment -

           node{n} is not the right name here - going with core_node{n} - this is at the coreNodeName level.

           Unfortunately, the above code I mistakenly took out was not the only problem; that test is still failing.

           Mark Miller added a comment -

          Patch to trunk and with the above change. Tests seem to be passing now.

          I'd like to commit this soon. Shalin Shekhar Mangar or Anshum Gupta, are you able to sync up shard splitting with this change?

           Anshum Gupta added a comment - edited

           [~hakeber] Sure, I'll have a look at it sometime tomorrow (my time, IST).
           Hope that's fine.

           Mark Miller added a comment -

          To trunk.

           Mark Miller added a comment - edited

           Any progress with this, Anshum Gupta?

           Anshum Gupta added a comment - edited

          Mark, sorry I couldn't find time. Was travelling from India to CA for Lucene Revolution.
          Just landed, will try and close it before the conference though.

           Anshum Gupta added a comment - edited

           I just tested the patch and it seems to work just about fine out of the box (other than a few minor hiccups while patching), even for the split shards. We don't seem to be using the hostname in the SplitShard code.
           Here's what the clusterstate looks like after a shard split call using genericNames.

          {"collection1":{
          "shards":{
          "shard1":{
          "range":"80000000-ffffffff",
          "state":"active",
          "replicas":{"core_node1":

          Unknown macro: { "state"}

          }},
          "shard2":{
          "range":"0-7fffffff",
          "state":"active",
          "replicas":{"core_node2":

          Unknown macro: { "state"}

          }},
          "shard1_0":{
          "state":"active",
          "replicas":{"core_node3":

          Unknown macro: { "state"}

          }},
          "shard1_1":{
          "state":"active",
          "replicas":{"core_node4":{
          "state":"active",
          "core":"collection1_shard1_1_replica1",
          "node_name":"192.168.2.2:8983_solr",
          "base_url":"http://192.168.2.2:8983/solr",
          "leader":"true"}}}},
          "router":"compositeId"}}

           Anshum Gupta added a comment -

          Ok, seems like something did break. Don't commit it just yet.
          Need to double check and validate.

           Anshum Gupta added a comment -

          Mark, can you please reupload the latest patch? I'm having issues using the last patch you uploaded.

           Mark Miller added a comment -

          Yup, I'll try and merge up and upload a new patch tonight or tomorrow.

           Mark Miller added a comment -

           Just updated this to trunk. Got a clean test pass, but I'll check it some more in the morning.

           Mark Miller added a comment -

           I'm a little confused how the tests are currently passing - makes me think something is off.

           OverseerCollectionProcessor will try and wait for the coreNodeName of: cmd.setCoreNodeName(nodeName + "_" + subShardName);

           But according to your output above, the core node name will be core_node{n}.

           I'm not sure how that wait ends up working right.

           Anshum Gupta added a comment -

           I'll just patch up and have a look at it later in the day. I guess I know what's going on there: it waits (for the wrong coreNodeName) and times out, but the timeout isn't getting caught.
           Once I confirm that, I'll also create a JIRA to fix that.

           Mark Miller added a comment -

           What's odd is that I have another branch I'm working on where it does get caught up over that...

           Another problem is that in ZkController#waitForCoreNodeName, it must use getSlicesMap rather than getActiveSlicesMap, otherwise these sub-shards don't find their set coreNodeNames.

           Mark Miller added a comment -

           Re: the wrong coreNodeName

           Because the coreNodeName is set in the call to the CoreAdminHandler, there is no real way to find out what it's set to without polling the clusterstate, I think... I'm going to look into that real quick.

           Anshum Gupta added a comment -

           That's right. I'm trying to figure out if it could be pre-specified as a param while making the call.
           Also, it seems like there are some other issues with the patch that crept in while integrating it with the trunk changes over this while.

           Mark Miller added a comment -

           I think I've got this all working relatively okay in my dev branch - I just have to get it ported over to trunk before I put a patch up.

           Anshum Gupta added a comment -

           Ok, because there are a few things - like using shardState and shardRange only once in the preRegister command and then resetting them - that seem to have flipped over in the patch (order reversed).

           I could look at it once you put up the patch and integrate any changes I've made that you may have skipped.

           Mark Miller added a comment -

           Yeah, I'm currently working on another branch that has a fix for SOLR-4745, so that stuff has moved around in what I'm mainly looking at.

           Unfortunately, we don't seem to have a test that will catch that yet (at least on my dev machine) - unless a more rare chaos monkey test can trigger it?

           As a side note, since I'm reminded: it also seemed like the shard splitting tests could occasionally pass even though no split had been ordered - you noted some time ago above that I was missing some critical code in a merge up, but I still got some passing runs at the time. It did fail commonly, but it seems like no split happening should be an easy, all-the-time fail.

           Anshum Gupta added a comment -

           shard splitting tests could occasionally pass even though no split had been ordered

           Do you intend to say complete and not 'ordered'? So the thing is, the shard split code, instead of returning an error and exiting out, continues in case of a timeout while waiting for a core to come up. The shard gets created incorrectly. We'd need to stop that from happening and error out instead.

          P.S: Yes, we need to add tests that check for things other than just the final clusterstate.

           Mark Miller added a comment -

          Do you intend to say complete and not 'ordered'?

           I guess I mean 'started'? I think the missing code prevented a shard split from even being 'started' or requested. I didn't look super closely or anything, but it seemed I could still have successful test runs in that state, so it seemed like, with the right timing, the test did not assert a split had happened.

           Mark Miller added a comment -

          P.S: Yes, we need to add tests that check for things other than just the final clusterstate.

           I'm just surprised that it wouldn't trip the chaosmonkey test if this order was important. I was wondering if I just didn't happen to see a fail that a jenkins chaosmonkey run brought up previously - I seem to remember Shalin adding some of that stuff after the first commit, and I didn't know if that was in response to a jenkins chaos fail that my fast dev machine is just randomly not hitting easily.

           Anshum Gupta added a comment -

           There are asserts for stuff other than the final cluster state; it's just that there are still cases which could be false positives, as we may not have checked for everything there is.

           I don't think the test can ever pass without a split actually happening (shards being created). I'll still have a look at it again though.

           Mark Miller added a comment -

          I don't think the test can ever pass without a split actually happening

           That's what really surprised me - that I could get any pass based on that missing code - but I didn't look closely enough. I can probably remove those lines again by looking at the patch you originally commented on and see if I can reproduce it.

           Mark Miller added a comment -

          Okay, this is fairly out of date again - costing me too much to maintain, so I'm going to focus on getting this committed now. New patch coming soon.

           Anshum Gupta added a comment -

          I'd be more than happy to get the changes related to the shard split bit in place in 2 days after you put the new patch up.

           Mark Miller added a comment -

           This is the patch updated to trunk and with some of my changes from another branch that has other open JIRA fixes. On the branch, tests were passing, but for some reason the shard splitting tests are not passing with my merge up - I have not been able to spot the root cause yet, but it looks like the sub-shards somehow do not end up with ranges in clusterstate.json.

           Anshum Gupta added a comment -

           I'll just start working on this. The email for this completely slipped past me.

           Anshum Gupta added a comment -

           I integrated the above patch and have the following tests failing on trunk. Mark Miller, can you confirm that all of these tests fail for you as well?

          [junit4:junit4] - org.apache.solr.cloud.ShardSplitTest.testDistribSearch
          [junit4:junit4] - org.apache.solr.cloud.ChaosMonkeyShardSplitTest.testDistribSearch
          [junit4:junit4] - org.apache.solr.cloud.ClusterStateUpdateTest.testCoreRegistration
          [junit4:junit4] - org.apache.solr.cloud.BasicDistributedZkTest.testDistribSearch
          [junit4:junit4] - org.apache.solr.cloud.BasicDistributedZkTest (suite)

           Mark Miller added a comment -

           The last patch for me only had the shard split tests failing - I'll try and update to trunk tomorrow.

           Mark Miller added a comment -

          Update to trunk.

           For me the only fails are still in shard splitting - I assume for the same reason: the sub-shards are somehow not getting a range. I have another branch with some other fixes/changes, and tests pass there, but I can't figure out the difference yet.

           Mark Miller added a comment -
          [junit4:junit4] Tests with failures:
          [junit4:junit4]   - org.apache.solr.cloud.ShardSplitTest.testDistribSearch
          [junit4:junit4]   - org.apache.solr.cloud.ChaosMonkeyShardSplitTest.testDistribSearch
          
           Anshum Gupta added a comment -

          I'm looking at it but here's what fails for me:

          [junit4:junit4] Tests with failures:
          [junit4:junit4] - org.apache.solr.cloud.ShardSplitTest.testDistribSearch
          [junit4:junit4] - org.apache.solr.cloud.BasicDistributedZkTest.testDistribSearch
          [junit4:junit4] - org.apache.solr.cloud.BasicDistributedZkTest (suite)

          On similar lines but not exactly the same tests.

           Anshum Gupta added a comment -

           Looks like no slice-related info is getting passed on while creating the sub-slices. Range and state are both missing.
           The range is completely missing, as we don't default it, whereas the state defaults to 'ACTIVE'.

           Anshum Gupta added a comment -

           Strangely, I see a non-generic name for the core come up randomly in the test. Weird, but I've noticed it happen twice, though not too often. Did you also see anything like that?

           Mark Miller added a comment -

           I think all of the tests randomly switch between the two naming modes - to cover both paths - you can use the same seed or temporarily hard-code it to force one or the other. I'm traveling this week, so I have less time than normal, but hopefully I can help look into this stuff as well a bit.

           Mark Miller added a comment -

           Okay, I think something changed from 4.3 to 4.x that affects this - my other branch, which had tests passing against 4.3, also has shard tests failing after updating to trunk. I don't know what it is yet, but that's some new information.

           I'll dig in and try and figure it out - this patch is pretty useful for SOLR-4916.

           Mark Miller added a comment -

           I now have all tests passing for me with this integrated into SOLR-4916.

           Mark Miller added a comment -

          SOLR-4916 is in, and this went in with it.

           ASF subversion and git services added a comment -

          Commit 1498767 from Mark Miller
          [ https://svn.apache.org/r1498767 ]

          SOLR-4655: Add CHANGES entry

           ASF subversion and git services added a comment -

          Commit 1498768 from Mark Miller
          [ https://svn.apache.org/r1498768 ]

          SOLR-4655: Add CHANGES entry

           Steve Rowe added a comment -

          Bulk close resolved 4.4 issues


             People

             • Assignee: Mark Miller
             • Reporter: Mark Miller
             • Votes: 0
             • Watchers: 5