SOLR-5209: last replica removal cascades to remove shard from clusterstate

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 4.4
    • Fix Version/s: 6.0
    • Component/s: SolrCloud
    • Labels: None

      Description

      The problem we saw was that unloading the only replica of a shard deleted that shard's info from the clusterstate. Once it was gone, there was no easy way to re-create the shard (other than dropping and re-creating the whole collection's state).

      This seems like a bug?

      Overseer.java around line 600 has a comment and commented out code:
      // TODO TODO TODO!!! if there are no replicas left for the slice, and the slice has no hash range, remove it
      // if (newReplicas.size() == 0 && slice.getRange() == null) {
      // if there are no replicas left for the slice remove it
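      For illustration only, a minimal sketch (hypothetical class and method names, not the actual Overseer/SliceMutator code) of the behaviour being asked for, assuming the clusterstate classes in org.apache.solr.common.cloud: removing a replica should leave the slice, and hence its hash range, in the clusterstate even when no replicas remain.

      import java.util.LinkedHashMap;
      import java.util.Map;

      import org.apache.solr.common.cloud.DocCollection;
      import org.apache.solr.common.cloud.Replica;
      import org.apache.solr.common.cloud.Slice;

      // Hypothetical helper (not the real Overseer/SliceMutator code): remove one
      // replica from one slice of a collection, but always keep the slice entry,
      // even when its replica map ends up empty.
      class RemoveReplicaSketch {
        static Map<String, Slice> removeReplicaKeepSlice(DocCollection coll,
                                                         String sliceName,
                                                         String replicaName) {
          Map<String, Slice> newSlices = new LinkedHashMap<>();
          for (Slice slice : coll.getSlices()) {
            Map<String, Replica> replicas = new LinkedHashMap<>(slice.getReplicasMap());
            if (slice.getName().equals(sliceName)) {
              replicas.remove(replicaName); // drop only the unloaded replica
            }
            // Deliberately no "skip the slice if replicas is empty" branch here:
            // the slice, with its range kept in the slice properties, survives.
            newSlices.put(slice.getName(), new Slice(slice.getName(), replicas, slice.getProperties()));
          }
          return newSlices;
        }
      }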

    Attachments

    1. SOLR-5209.patch
        17 kB
        Christine Poerschke
      2. SOLR-5209.patch
        1 kB
        Christine Poerschke


          Activity

          Christine Poerschke added a comment:

          original clusterstate (extract)

          shards : {
            "shard1":{ "range":"...", "replicas":{ { "core":"collection1_shard1" } } },
            "shard2":{ "range":"...", "replicas":{ { "core":"collection1_shard2" } } }
          }
          

          actual clusterstate after UNLOAD of collection1_shard1

          shards : {
            "shard2":{ "range":"...", "replicas":{ { "core":"collection1_shard2" } } }
          }
          

          expected clusterstate after UNLOAD of collection1_shard1

          shards : {
            "shard1":{ "range":"...", "replicas":{} },
            "shard2":{ "range":"...", "replicas":{ { "core":"collection1_shard2" } } }
          }
          
          Daniel Collins added a comment (edited):

          When I had a look at that code, I was confused about why removeCore was even attempting to remove "all empty pre allocated slices". It also tries to clean up collections; given we have the collections API for that, why is a simple core unload going any further? I can see that it's "nice" and saves the user having to run the collections API to remove the collection, but isn't it potentially dangerous (like in this case) if we are second-guessing what the user will do next?

          Say they are just removing all the replicas and re-assigning them to different hosts: they may want to leave the collection/shard ranges "empty", move the data to new hosts, and restart the cores to re-register them. Yes, we could/should start the new ones before removing the old, but that's not enforced.

          Shalin Shekhar Mangar added a comment:

          I think this deserves another look. We have the deleteshard API now which can be used to completely remove a slice from the cluster state. We should remove this trappy behaviour.

          Mark Miller added a comment:

          given we have the collections API to do that

          We don't actually have the collections API to do that - it's simply a thin candy wrapper around SolrCore admin calls. Everything is driven by SolrCores being added or removed. There is work being done to migrate towards something where the collections API is actually large and in charge, but currently it's just a sugar wrapper.

          Daniel Collins added a comment (edited):

          Ok, my bad, I wasn't clear enough. At the user level there are the collections API and the core API, and yes, one is just a wrapper around the other. But at the Overseer level we seem to have various sub-commands (not sure what the correct terminology for them is!): create_shard, removeshard, createcollection, removecollection, deletecore, etc. I appreciate this is probably historical code, but since we have these other methods, it felt like deletecore was overstepping its bounds now.

          Could submit an extra patch, but wasn't sure of the historical nature of this code, hence just a comment first to get an opinion/discussion.

          Mark Miller added a comment:

          Right, but the sub-commands are just the wrapper calls - except the shard commands, those are new. The delete core one is mostly about cleanup, if I remember right.

          The problem is, the overseer and zk do not own the state. The individual cores do basically. Mostly that's due to historical stuff. We intend to change that, but it's no small feat. Until that is done, I think this is much trickier to get right than it looks.

          Christine Poerschke added a comment:

          Hello. Just wanted to follow up on whether we could proceed with this patch? The aim is to either reduce or completely remove the cascading behaviour UNLOAD currently has; between them, DELETESHARD and DELETECOLLECTION should allow for shard and collection removal if that is required after UNLOAD-ing.
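          (For reference, a rough sketch of the corresponding Collections API calls, assuming a node at localhost:8983 and the collection/shard names from the example below; DELETESHARD takes collection and shard parameters, and whole-collection removal is action=DELETE:)

          http://localhost:8983/solr/admin/collections?action=DELETESHARD&collection=collection1&shard=shard1
          http://localhost:8983/solr/admin/collections?action=DELETE&name=collection1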

          original clusterstate

          "collection1" : {
            shards : {
              "shard1":{ "range":"...", "state" : "active", "replicas":{ { "core":"collection1_shard1" } } },
              "shard2":{ "range":"...", "state" : "active", "replicas":{ { "core":"collection1_shard2" } } }
            }, ...
          },
          "collection2" : {
            ...
          }
          

          current clusterstate after UNLOAD of collection1 shard1
          (The UNLOAD of the last shard1 replica cascade-triggered the removal of shard1 itself.)

          "collection1" : {
            shards : {
              "shard2":{ "range":"...", "state" : "active", "replicas":{ { "core":"collection1_shard2" } } }
            }, ...
          },
          "collection2" : {
            ...
          }
          

          expected clusterstate after UNLOAD of collection1 shard1
          (There are now no replicas in shard1. If one wanted to get rid of the shard then DELETESHARD could be modified to allow removal of active shards without replicas (currently DELETESHARD only removes shards that are state=inactive or state=recovery or range=null).)

          "collection1" : {
            shards : {
              "shard1":{ "range":"...", "state" : "active", "replicas":{} },
              "shard2":{ "range":"...", "state" : "active", "replicas":{ { "core":"collection1_shard2" } } }
            }, ...
          },
          "collection2" : {
            ...
          }
          

          current clusterstate after UNLOAD of collection1 shard1 and UNLOAD of collection1 shard2
          (The UNLOAD of the last shard2 replica cascade-triggered the removal of shard2 itself, and then the removal of the last shard cascade-triggered the removal of collection1 itself.)

          "collection2" : {
            ...
          }
          

          expected clusterstate after UNLOAD of collection1 shard1 and UNLOAD of collection1 shard2
          (There are now no replicas in shard1 or shard2. If one wanted to get rid of shard1 and shard2 then DELETESHARD could be modified and used as per above. If one wanted to get rid of collection1 itself then DELETECOLLECTION could be used.)

          "collection1" : {
            shards : {
              "shard1":{ "range":"...", "state" : "active", "replicas":{} },
              "shard2":{ "range":"...", "state" : "active", "replicas":{} }
            }, ...
          },
          "collection2" : {
            ...
          }
          
          Noble Paul added a comment (edited):

          What is the resolution on this?

          +1 to not remove the slice when the last replica is gone

          I can take this up if this is the proposed resolution.

          Mark Miller added a comment:

          This would need to be done in the new "zk is truth" mode, not in the legacy cloud mode.

          Noble Paul added a comment:

          I guess this should not matter, because it would not break anything that people are doing. I would consider this a bug, because re-creating a lost shard would be extremely hard: the 'range' info would be missing and you would end up with a 'broken' cluster.

          Mark Miller added a comment:

          It's not a bug; the behavior was added explicitly and has been around for a long time. If you want to remove it, it's got to be in the new mode.

          Mark Miller added a comment:

          Just a note: the new zk=truth mode is the future. The old mode is never going to be great. It's not possible. It was built in a halfway world, and it will always seem halfway funny. We need to embrace the zk=truth mode to make things nice.

          Mark Miller added a comment:

          Of course, if we really need to, we can remove this, add lots of warnings about the break, notes about the new APIs that allow the same thing, etc. Because there is now an alternate way to get this behavior, that is not an out-of-this-world idea.

          Personally though, I'd so much rather see that energy put into the new zk=truth mode, it's required for so many things to work as we want.

          Ramkumar Aiyengar added a comment:

          FWIW, the newer API actually won't allow you to do exactly what this does. The shard delete API (sensibly) explicitly checks whether we have a null range or whether we are not an active shard. This potentially lops out an active shard with an assigned range, leaving the cluster state broken. I actually don't know if there's a way to get the cluster state back to a sensible state without modifying it by hand or recreating it altogether by nuking ZK state – when we tried bringing up the replica after this, it got assigned to a shard with an empty range.

          I guess if we plan to get to the ZK truth mode sometime soon, it's fine to drop this patch. We mainly meant this as an interim step to ensure someone doesn't accidentally burn their fingers.

          Mark Miller added a comment:

          This behavior existed long before hash ranges, so I'd say that issue was not complete in this area.

          The zk=truth mode can be started quite simply - it's just a new config param we add, and we can say some behaviors will change over time as we complete it. I think Shalin has already created an issue for it.

          Noble Paul added a comment:

          SOLR-5610 adds the necessary property. I'm committing it right away.
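          (For reference: the property in question is the legacyCloud cluster property; assuming a node at localhost:8983, the zk-as-truth behaviour is requested by setting it to false via CLUSTERPROP, roughly like this:)

          http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=legacyCloud&val=false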

          Ramkumar Aiyengar added a comment:

          This seems to be still relevant (in SliceMutator though rather than Overseer after the refactoring), 5.0 might be a good time to revisit this behaviour since the concerns above were more of back-compat..

          Mark Miller added a comment:

          Yup, now is a good time to change this.

          ASF GitHub Bot added a comment:

          GitHub user cpoerschke opened a pull request:

          https://github.com/apache/lucene-solr/pull/118

          SOLR-5209: last replica removal no longer cascades to remove shard from clusterstate

          https://issues.apache.org/jira/browse/SOLR-5209

          You can merge this pull request into a Git repository by running:

          $ git pull https://github.com/bloomberg/lucene-solr trunk-solr-5209

          Alternatively you can review and apply these changes as the patch at:

          https://github.com/apache/lucene-solr/pull/118.patch

          To close this pull request, make a commit to your master/trunk branch
          with (at least) the following in the commit message:

          This closes #118


          commit 90b1a394b50d4f456bfabab756cc64f28733e1a5
          Author: Christine Poerschke <cpoerschke@bloomberg.net>
          Date: 2014-12-24T12:07:56Z

          SOLR-5209: last replica removal no longer cascades to remove shard from clusterstate

          old behaviour:

          • last replica removal cascades to remove shard from clusterstate
          • last shard removal cascades to remove collection from clusterstate

          new behaviour:

          • last replica removal preserves shard within clusterstate
          • OverseerTest.testRemovalOfLastReplica added for replica removal logic

          also:

          • strings such as "collection1" and "state" in OverseerTest replaced with variable or enum equivalent

          Christine Poerschke added a comment:

          Replaced attached patch with Solr trunk pull request which also includes OverseerTest test case logic.

          Ramkumar Aiyengar added a comment:

          I think this might fail BasicDistributedZkTest.testFailedCoreCreateCleansUp. If we call core create, it happens to be the first core of the collection, and the core creation fails (due to, say, a config issue) – the test currently verifies that the rollback happens by checking for the absence of the collection. We could obviously change the test, but in such a case, should we be removing the collection as well? (Or disallow creation of a core for a non-existent collection – though that might be a bigger, disruptive end-user change and not necessarily good.)

          Mark Miller added a comment:

          I think as part of making zk the truth we actually do want to prevent creation of cores that are not part of a zk collection.

          Christine Poerschke added a comment:

          https://github.com/apache/lucene-solr/pull/118 now updated to remove the BasicDistributedZkTest.testFailedCoreCreateCleansUp test and adjust UnloadDistributedZkTest.testUnloadShardAndCollection and OverseerTest.testOverseerFailure tests.

          Would agree that longer-term core creation should not auto-create a collection.

          Anshum Gupta added a comment:

          Mark Miller, is this a blocker?

          Mark Miller added a comment:

          Not fully. It would just stink to have to wait until 6 to remove this ugly back compat behavior. I thought I'd have time to get to it last week or someone else might pick it up, but no such luck, so perhaps we just push it out.

          Shawn Heisey added a comment:

          On IRC today:

          09:14 < yriveiro> Hi, I unloaded by accident the las replica of a shard in a
                            collection
          09:14 < yriveiro> How can I recreate the shard?
          
          Christine Poerschke added a comment:

          Am in the process of updating/rebasing the patch for this (SOLR-5209) ticket here. SOLR-8338 is a step towards that, just replacing magic strings so that the actual test changes required for SOLR-5209 will then be simpler and clearer.

          Christine Poerschke added a comment:

          Attaching updated patch against trunk.

          Christine Poerschke added a comment:

          Mark Miller - if you have no objections then I will re-assign this ticket to myself with a view towards committing it in the first half of January or so, to trunk (and not branch_5x) in good time for the 6.0.0 release hopefully, after reviews of course.

          Everyone - the latest SOLR-5209.patch attachment contains the (small) change itself, a new OverseerTest.testRemovalOfLastReplica() test plus adjustments to existing tests. Reviews, comments, suggestions etc. welcome. Thank you.
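          (For reviewers, a minimal sketch of the kind of assertion such a test needs to make, using ZkStateReader and the clusterstate classes; this is a hypothetical helper, not the actual OverseerTest code: after the last replica of a slice is removed, the slice itself must still be present in the collection's state, just with an empty replica set.)

          import static org.junit.Assert.assertNotNull;
          import static org.junit.Assert.assertTrue;

          import org.apache.solr.common.cloud.DocCollection;
          import org.apache.solr.common.cloud.Slice;
          import org.apache.solr.common.cloud.ZkStateReader;

          // Hypothetical assertion helper (not the actual OverseerTest code).
          class LastReplicaAssertions {
            static void assertSliceSurvives(ZkStateReader reader, String collection, String sliceName) {
              DocCollection coll = reader.getClusterState().getCollection(collection);
              Slice slice = coll.getSlice(sliceName);
              assertNotNull("slice should survive removal of its last replica", slice);
              assertTrue("slice should have no replicas left", slice.getReplicas().isEmpty());
            }
          }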

          ASF subversion and git services added a comment:

          Commit 1724098 from Christine Poerschke in branch 'dev/trunk'
          [ https://svn.apache.org/r1724098 ]

          SOLR-5209: Unloading or deleting the last replica of a shard now no longer cascades to remove the shard from the clusterstate.

          Christine Poerschke added a comment:

          Committed to trunk only (for 6.x releases) and deliberately not committed to branch_5x since this changes existing behaviour.

          ASF GitHub Bot added a comment:

          Github user cpoerschke commented on the pull request:

          https://github.com/apache/lucene-solr/pull/118#issuecomment-171007162

          SOLR-5209 committed to trunk yesterday (https://svn.apache.org/r1724098).

          ASF GitHub Bot added a comment:

          Github user cpoerschke closed the pull request at:

          https://github.com/apache/lucene-solr/pull/118

          Noble Paul added a comment:

          I guess we should get this into 5.5. This looks like a serious enough problem that backward incompatibility should not be a concern.

          Mark Miller added a comment:

          I don't agree for the same reasons we have already discussed.

          We reserved some leeway for 5x to fix things in this vein, but we didn't do any of it. To just make this large change in behavior out of the blue in a 5.5 release doesn't make a lot of sense to me. Best done in the 6 release.


            People

            • Assignee: Christine Poerschke
            • Reporter: Christine Poerschke
            • Votes: 2
            • Watchers: 13
