Solr / SOLR-4808

Persist and use router, replicationFactor and maxShardsPerNode at Collection and Shard level

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 4.5, 5.0
    • Component/s: SolrCloud
    • Labels:

      Description

       The replication factor for a collection is currently not persisted, so it is not used when adding replicas.
       We should save the replication factor at the collection level as well as at the shard level.
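
       For context, this is how these values are supplied today through the Collections API when a collection is created. A minimal sketch in Java follows; the host, collection name and config name are assumptions for illustration:

           import java.io.BufferedReader;
           import java.io.InputStreamReader;
           import java.net.HttpURLConnection;
           import java.net.URL;

           // Minimal sketch: create a collection through the Collections API, passing the
           // replicationFactor and maxShardsPerNode values this issue proposes to persist.
           // Host, collection name and config name are illustrative assumptions.
           public class CreateCollectionExample {
               public static void main(String[] args) throws Exception {
                   String url = "http://localhost:8983/solr/admin/collections"
                           + "?action=CREATE"
                           + "&name=mycollection"             // hypothetical collection name
                           + "&numShards=2"
                           + "&replicationFactor=2"           // value this issue wants persisted per shard
                           + "&maxShardsPerNode=2"            // value this issue wants persisted per collection
                           + "&collection.configName=myconf"; // hypothetical config set name

                   HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                   conn.setRequestMethod("GET");
                   try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                       String line;
                       while ((line = in.readLine()) != null) {
                           System.out.println(line); // prints the Collections API response
                       }
                   }
               }
           }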

      1. SOLR-4808.patch
        14 kB
        Noble Paul
      2. SOLR-4808.patch
        24 kB
        Shalin Shekhar Mangar


          Activity

          Mark Miller added a comment -

          Yeah, this is mainly because it seems like a fairly large issue to deal with in order to make good use of it, so it has sort of been a long-term plan.

          Because you can create collections and add/remove from them without the collections api or overseer right now, you could persist a replicationFactor that is simply wrong and not adjusted for.

          My long term plan for this has been:

          As the collections api gets better, stop supporting creation of collections by configuration.

          Have the Overseer optionally enforce the replicationFactor over time - if too few nodes are up, it creates new ones; if for some reason too many are up, it removes some - using some pluggable algorithm.

          As part of this, we can also make ZooKeeper the truth for cluster state - so for example, if you delete a collection and then bring back an old node with a core for that collection, the Solr instance can know that it should simply remove that core and not bring the collection back to life.

          One step at a time, but I think there is a lot to do to take advantage of the persisted replicationFactor in a way that makes sense - it wouldn't be too cool if we persisted a replication factor of 4, the core admin API was then used to make it actually 6, and the spec never matched the model again.
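
          A rough, purely illustrative sketch of the kind of optional enforcement loop described above; ClusterView and ReplicaManager are hypothetical placeholders, not Solr classes:

            import java.util.Map;

            // Purely illustrative sketch of the optional enforcement loop described above.
            // ClusterView and ReplicaManager are hypothetical placeholders, not Solr APIs.
            interface ClusterView {
                Map<String, Integer> desiredReplicaCounts(); // shard -> persisted replicationFactor
                Map<String, Integer> liveReplicaCounts();    // shard -> replicas currently up
            }

            interface ReplicaManager {
                void addReplica(String shard);
                void removeReplica(String shard);
            }

            class ReplicationFactorEnforcer {
                // One pass of the hypothetical "keep the cluster at its replicationFactor" check.
                void enforce(ClusterView view, ReplicaManager manager) {
                    Map<String, Integer> desired = view.desiredReplicaCounts();
                    Map<String, Integer> live = view.liveReplicaCounts();
                    for (Map.Entry<String, Integer> e : desired.entrySet()) {
                        String shard = e.getKey();
                        int want = e.getValue();
                        int have = live.getOrDefault(shard, 0);
                        while (have < want) { manager.addReplica(shard); have++; }    // too few: create
                        while (have > want) { manager.removeReplica(shard); have--; } // too many: remove
                    }
                }
            }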

          Noble Paul added a comment -

          We need to redefine replicationFactor as the minimum number of replicas for a given shard. The cluster will have at least that many replicas for that shard. If the number falls below that, the Overseer should attempt to bring it back up to that value.

          What if a node is asked to join a shard which has enough replicas according to the replicationFactor? It should be allowed to do so, but the replicationFactor should remain the same until it is changed through the cluster admin command.

          The replicationFactor value must be persisted per shard. We should be able to manipulate the value (a sketch of the three forms follows after this list):

          • per shard: relevant for custom sharding
          • entire cluster: for a hash-based setup the user would just update it cluster-wide
          • shard.keys values: for composite-id collections it makes sense to change the value by providing just the keys, and the command can identify the appropriate shards
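
          A sketch of what such a command could look like for the three scopes above; the action name, parameters and collection/shard names are hypothetical, since no such API exists yet:

            // Hypothetical request forms for the three scopes above; the action name and
            // parameters are placeholders, not an existing Solr API.
            public class EditReplicationFactorExamples {
                static String base = "http://localhost:8983/solr/admin/collections?action=EDITCOLLECTION"
                        + "&collection=mycollection";

                public static void main(String[] args) {
                    // 1. Per shard (custom sharding): target one named shard only.
                    String perShard = base + "&shard=shardA&replicationFactor=3";

                    // 2. Entire cluster: no shard given, so the value applies to every shard.
                    String wholeCollection = base + "&replicationFactor=3";

                    // 3. By shard.keys (composite-id collections): let the command resolve the shards.
                    String byShardKeys = base + "&shard.keys=customerA!&replicationFactor=3";

                    System.out.println(perShard);
                    System.out.println(wholeCollection);
                    System.out.println(byShardKeys);
                }
            }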
          Mark Miller added a comment -

          We need to redefine replicationFactor as the minimum number of replicas for a given shard. The cluster will have at least that many replicas for that shard. If the number falls below that, the Overseer should attempt to bring it back up to that value.

          I don't think it needs to necessarily be redefined - you are specifying the replicationFactor you would like, not the minimum replicationFactor you'd like.

          If the number falls below that, the Overseer should attempt to bring it back up to that value.

          I think any auto cluster changes should be optional - by default we should not meddle. And one of those options should allow trying to reduce to the right replicationFactor if you have too many, the same way you might raise to it.

          What if a node is asked to join a shard which has enough replicas according to the replicationFactor?

          For this mode, we should not allow preconfigured cores, so this shouldn't happen. If it does for some weird reason then, depending on your config, the node should be removed shortly after by the Overseer. I think the way you should raise the number of replicas is to use the Collections API to change the replicationFactor.

          Or don't use this mode where the Overseer keeps things in line and it's up to you to manually do what you want.

          Shalin Shekhar Mangar added a comment -

          For this mode, we should not allow preconfigured cores, so this shouldn't happen. If it does for some weird reason then, depending on your config, the node should be removed shortly after by the Overseer. I think the way you should raise the number of replicas is to use the Collections API to change the replicationFactor.

          Noble and I had a discussion about this. What you say is right, but the problem is back-compat, at least as long as the 4.x releases continue. In 5.0 we can stop supporting collection creation via startup configuration. But until then we need to decide whether to throw an exception if a user starts up a pre-configured core, or to treat the replication factor as a minimum value and let the core be added to the shard. After our discussion, Noble and I were leaning towards the latter option.

          I have started to work on this. I'm going to create some issues around APIs to modify replication factor and maxShardsPerNode. I also plan to store replication factor (at a shard level only) and start using these values in Overseer.

          Mark Miller added a comment -

          What you say is right, but the problem is back-compat, at least as long as the 4.x releases continue.

          I think we should have a simple config to deal with back compat - if you turn it on, you are in this smarter mode. By default, predefined cores work and you don't get this feature set; when you turn it on, predefined cores are out the window and you get this feature set.

          Yonik Seeley added a comment -

          It seems like we want a collection level replicationFactor so it's easy to change for the whole collection (if you want to scale up/down to meet increased query traffic for instance), and to remove the burden of having to specify it for each shard created. Consequently, we probably shouldn't store a replicationFactor at the shard level unless it has been explicitly set.

          Shalin Shekhar Mangar added a comment - edited

          It seems like we want a collection level replicationFactor so it's easy to change for the whole collection

          The API to change replication factor can work at both levels. If you specify a shard name, we change the factor for that shard; otherwise we change it for the whole collection. Considering that it is nice to have a replication factor per shard for custom sharding use-cases, I think it is best that we store it only at a per-shard level.

          Shalin Shekhar Mangar added a comment -

          I think we should have a simple config to deal with back compat - if you turn it on, you are in this smarter mode. By default, predefined cores work and you don't get this feature set; when you turn it on, predefined cores are out the window and you get this feature set.

          Okay that sounds good to me. It is easier to transition to 5.x this way – just throw away the non-smart mode.

          Yonik Seeley added a comment -

          We need to redefine replicationFactor

          replicationFactor has always been the "target". The property does not fully define how that target is generally met, and
          it does not mean that the cluster is somehow invalid if the target isn't exactly met.

          What if a node is asked to join a shard which has enough replicas according to the replicationFactor? It should be allowed to do so, but the replicationFactor should remain the same until it is changed through the cluster admin command.

          Agreed - if it's explicit, we should allow it. I'm not sure I see the problem here.

          Shalin Shekhar Mangar added a comment -

          Agreed - if it's explicit, we should allow it. I'm not sure I see the problem here.

          Seems like we have two conflicting opinions here? Mark wants to have two different options to allow/disallow pre-configured cores.

          Which one should we go for?

          Yonik Seeley added a comment -

          After a little chatting on #solr-dev, I think I understand the source of my confusion...
          I took this "What if a node is asked to join a shard which has enough replicas according to the replicationFactor?" to mean there was an explicit operator request for a node to join a shard (which should always succeed if possible IMO).

          The issue seems to be more that when a node is brought up, we don't know if the shards it has locally were explicitly configured by the user, or just the result of previous automatic shard assignments. It seems like we should assume ZK has the truth (i.e. assume the latter). We also don't want shards coming back from the dead (say we deleted a shard, but one of the replicas was down at the time). So I think I'm agreeing with Mark's "we should not allow preconfigured cores".

          It also seems like it would be a very useful feature to be able to dynamically tell the cluster "stop trying to fix things temporarily" via API (i.e. it shouldn't just be a back compat thing).
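
          A sketch of the startup reconciliation implied by "ZK has the truth" above; ClusterStateReader and LocalCores are hypothetical placeholders, not Solr classes:

            import java.util.List;

            // Illustrative sketch of treating ZooKeeper as the truth at node startup.
            // ClusterStateReader and LocalCores are hypothetical placeholders, not Solr APIs.
            interface ClusterStateReader {
                boolean hasReplica(String collection, String shard, String coreNodeName);
            }

            interface LocalCores {
                List<String[]> list();              // each entry: {collection, shard, coreNodeName}
                void unload(String coreNodeName);   // drop a core that ZK no longer knows about
                void register(String coreNodeName); // re-register a core ZK still expects
            }

            class ZkIsTruthStartup {
                void reconcile(ClusterStateReader zk, LocalCores cores) {
                    for (String[] core : cores.list()) {
                        String collection = core[0], shard = core[1], coreNodeName = core[2];
                        if (zk.hasReplica(collection, shard, coreNodeName)) {
                            cores.register(coreNodeName); // ZK expects this replica: bring it up
                        } else {
                            cores.unload(coreNodeName);   // e.g. the collection was deleted while this node was down
                        }
                    }
                }
            }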

          Noble Paul added a comment -

          It also seems like it would be a very useful feature to be able to dynamically tell the cluster "stop trying to fix things temporarily" via API (i.e. it shouldn't just be a back compat thing).

          Yes. Having this property as a feature will help a lot. We should have a set of cluster-wide properties which are editable via an EDITCOLLECTION command.

          Shalin Shekhar Mangar added a comment - edited

          Here's what I have in mind:

          SolrCloud shall store the following properties in addition to the ones it already does:

          1. smartCloud (boolean) - Stored per collection
          2. autoManageCluster - Stored per collection
          3. maxShardsPerNode - Stored per collection
          4. replicationFactor - Stored per slice

          The smartCloud parameter must be specified in the create collection API. The smartCloud mode will default to false for all existing collections. The autoManageCluster, maxShardsPerNode and replicationFactor properties will be used in SolrCloud only if smartCloud=true for that collection.

          If a collection is running in smart mode then it will assume the cluster state in ZK to be the truth i.e. it may re-assign pre-configured cores when they join/re-join the cluster.

          If autoManageCluster=true then the Overseer will attempt to keep the cluster running according to the values of replication factor and maxShardsPerNode, i.e. it will increase/decrease replicas to match replication factor and maxShardsPerNode.

          A new “editcollection” API will be introduced to:

          • Modify the value of smartCloud
          • Modify the value of autoManageCluster
          • Modify the value of maxShardsPerNode for the collection
          • Modify the value of replicationFactor for the entire collection (apply to each and every slice)
          • Modify the value of replicationFactor on a per-slice basis

          The way to add a node to the collection would be to 1) increase the replication factor (of a collection or shard) and then 2) start a node

          The smartCloud property can be turned off via editcollection to allow users to pre-configure nodes if they want to.

          I'm not working on the "autoManageCluster" right now. I welcome suggestions for alternate names of the "smartCloud" property.
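
          To make the proposed placement concrete, a small illustrative model of where each value would live; these are not Solr classes, just a sketch of the per-collection vs. per-slice split:

            import java.util.HashMap;
            import java.util.Map;

            // Illustrative model of the proposed property placement; not Solr classes.
            class ProposedCollectionProps {
                boolean smartCloud;                 // per collection: opt in to "ZK is the truth"
                boolean autoManageCluster;          // per collection: let the Overseer add/remove replicas
                int maxShardsPerNode;               // per collection
                Map<String, Integer> replicationFactorPerSlice = new HashMap<>(); // per slice

                static ProposedCollectionProps example() {
                    ProposedCollectionProps p = new ProposedCollectionProps();
                    p.smartCloud = true;
                    p.autoManageCluster = false;    // not being worked on yet, per the comment above
                    p.maxShardsPerNode = 2;
                    p.replicationFactorPerSlice.put("shard1", 2);
                    p.replicationFactorPerSlice.put("shard2", 3); // custom sharding can differ per slice
                    return p;
                }
            }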

          Noble Paul added a comment -

          smartCloud (boolean) - Stored per collection

          autoManageCluster - Stored per collection

          I don't see a reason to have two parameters. It is quite confusing. We should have only one property.

          Shalin Shekhar Mangar added a comment -

          I don't see a reason to have two parameters. It is quite confusing. We should have only one property

          There are two things that we want:

          1. Use ZK as the truth and assign nodes accordingly as and when they join the cluster
          2. Increase/decrease replicas on-the-fly automatically using the replication factor and maxShardsPerNode properties

          While the former affects only those nodes that are joining the cluster, the latter affects active and available nodes. It makes sense to have a switch to disable #2.

          Shalin Shekhar Mangar added a comment -

          Another idea that I've been thinking about for some time is to use the Solr Admin GUI to give hints to users instead of automatically managing the cluster completely. This can help us get rid of an additional config param (autoManageCluster) and reduce growing pains of such a feature by letting it "bake" for a while in real world installations without causing them damage.

          Noble Paul added a comment -

          Use ZK as the truth and assign nodes accordingly as and when they join the cluster

          Increase/decrease replicas on-the-fly automatically using the replication factor and maxShardsPerNode properties

          Why don't we have explicit names if that is the objective? Terms like 'smart' and 'auto' are very easy to misinterpret. And in the future we may add a few more properties, and we will always be wondering whether each one is part of the 'smart' thing or not.

          Yonik Seeley added a comment -

          autoManageCluster - Stored per collection

          We might want to be more explicit since there will likely be more and more aspects to managing a cluster going forward.
          Perhaps "autoRebalance", or if there is a use-case for turning off one side or the other, "autoCreateReplicas" and "autoDestroyReplicas"?

          Shalin Shekhar Mangar added a comment -

          Why don't we have explicit names if that is the objective? Terms like 'smart' and 'auto' are very easy to misinterpret. And in the future we may add a few more properties, and we will always be wondering whether each one is part of the 'smart' thing or not.

          +1. I used this only as a placeholder. Any suggestions?

          We might want to be more explicit since there will likely be more and more aspects to managing a cluster going forward. Perhaps "autoRebalance", or if there is a use-case for turning off one side or the other, "autoCreateReplicas" and "autoDestroyReplicas"?

          +1 for autoCreateReplicas and autoDestroyReplicas.

          How do people feel about my suggestion on giving hints via GUI for a start (instead of full-blown automatic cluster management)?

          Noble Paul added a comment -

          This patch changes the way collection creation is performed. The OverseerCollectionProcessor sends a message to the Overseer to create a collection with all the parameters it received via the collection 'create' command. After the empty collection is created, it proceeds to create the nodes.

          Core creation no longer needs the numShards parameter, nor does it send it to the Overseer when it has to register itself.

          Shalin Shekhar Mangar added a comment -

          This patch builds on Noble's work.

          Changes:

          1. New mandatory "collectionApiMode" parameter during create collection command (we can think of a better name)
          2. When api mode is enabled, zk is used as truth and shardId is always looked up from cluster state
          3. replicationFactor is persisted at slice level
          4. maxShardsPerNode is persisted at collection level
          5. Basic testing in CollectionAPIDistributedZkTest

          I'm working on more tests and a modifyCollection API which can be used to change these values.
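
          As a rough illustration of the persistence split in this patch, here is how the two values could be read back from a generic map view of the collection state; the nested-map layout is an assumption for illustration, not the exact clusterstate.json structure:

            import java.util.Map;

            // Rough illustration of the persistence split in this patch: maxShardsPerNode at the
            // collection level, replicationFactor at the slice level. The nested-map layout here
            // is an assumption for illustration, not the exact clusterstate.json structure.
            class PersistedValuesReader {
                static int maxShardsPerNode(Map<String, Object> collectionState) {
                    return ((Number) collectionState.get("maxShardsPerNode")).intValue();
                }

                @SuppressWarnings("unchecked")
                static int replicationFactor(Map<String, Object> collectionState, String sliceName) {
                    Map<String, Object> slices = (Map<String, Object>) collectionState.get("shards");
                    Map<String, Object> slice = (Map<String, Object>) slices.get(sliceName);
                    return ((Number) slice.get("replicationFactor")).intValue();
                }
            }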

          Yonik Seeley added a comment -

          New mandatory "collectionApiMode" parameter during create collection command (we can think of a better name)

          Eh, internally mandatory I hope (as in, the user should not have to specify it?)

          replicationFactor is persisted at slice level

          It still feels like this should be a collection level property that we have the ability to store/override on a per-shard level.
          The reasons off the top of my head (a small sketch of the override lookup follows after this list):

          • would be nice to be able to create a new shard w/o having to know/specify what the replication factor currently is
          • possible to completely lose the replication factor if we delete a shard and re-add a new one
          • there may be one shard that has a lot of demand and you set its replication level high... so you override the replicationFactor for that shard only. It would still be nice to be able to adjust the replication factor for everyone else (by adjusting the collection level replicationFactor)
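
          A small sketch of the lookup this suggests: an explicitly set per-shard override wins, otherwise the collection-level value applies. The names here are illustrative, not Solr APIs:

            // Sketch of the "collection-level default, optional per-shard override" lookup
            // described above. Names are illustrative, not Solr APIs.
            class EffectiveReplicationFactor {
                static int resolve(Integer shardOverride, int collectionDefault) {
                    // Only an explicitly set per-shard value overrides the collection-wide setting.
                    return shardOverride != null ? shardOverride : collectionDefault;
                }

                public static void main(String[] args) {
                    System.out.println(resolve(null, 2)); // no override: 2 (collection default)
                    System.out.println(resolve(5, 2));    // hot shard with explicit override: 5
                }
            }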

          "maxShardsPerNode" - should that be maxReplicasPerNode, or are we really talking logical shards?

          ASF subversion and git services added a comment -

          Commit 1508968 from Noble Paul in branch 'dev/trunk'
          [ https://svn.apache.org/r1508968 ]

          SOLR-4221 SOLR-4808 SOLR-5006 SOLR-5017 SOLR-4222
          ASF subversion and git services added a comment -

          Commit 1508981 from Noble Paul in branch 'dev/branches/branch_4x'
          [ https://svn.apache.org/r1508981 ]

          SOLR-4221 SOLR-4808 SOLR-5006 SOLR-5017 SOLR-4222

          Noble Paul added a comment -

          The 'collectionApiMode' feature will be tracked as part of another issue.

          Shalin Shekhar Mangar added a comment -

          I opened SOLR-5096

          Adrien Grand added a comment -

          4.5 release -> bulk close


            People

            • Assignee: Shalin Shekhar Mangar
            • Reporter: Anshum Gupta
            • Votes: 0
            • Watchers: 9
