Details
Type: Sub-task
Status: Closed
Priority: Major
Resolution: Fixed
Labels: None
Description
To reproduce (the cloud example starts a two-node cluster on ports 8983 and 7574):
./bin/solr -e cloud -noprompt
Add policy and preferences:
{
  "set-cluster-policy": [
    {"cores": "<10", "node": "#ANY"},
    {"replica": "<2", "shard": "#EACH", "node": "#ANY"},
    {"nodeRole": "overseer", "replica": 0}
  ],
  "set-cluster-preferences": [
    {"minimize": "cores"}
  ]
}
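A payload like the one above is posted to the autoscaling write API; assuming it is saved as policy.json (file name is illustrative), something like the following should work against this example cluster:
curl -X POST -H 'Content-type: application/json' --data-binary @policy.json 'http://localhost:8983/solr/admin/autoscaling'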
Add a trigger:
{
  "set-trigger": {
    "name": "node_added_trigger",
    "event": "nodeAdded",
    "waitFor": "1s"
  }
}
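The trigger command is posted to the same /solr/admin/autoscaling endpoint as the policy. To verify that both the policy and the trigger were stored, the current autoscaling configuration can be read back with a plain GET:
curl 'http://localhost:8983/solr/admin/autoscaling?wt=json'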
Shut down one of the two nodes (so only one is live):
./bin/solr stop -p 7574
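To confirm that only one node remains live, the standard Collections API cluster status call can be used; the live_nodes list in the response should contain a single entry:
curl 'http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json'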
Create a collection with 2 shards and replicationFactor=1:
http://localhost:8983/solr/admin/collections?action=create&name=test&replicationFactor=1&numShards=2&maxShardsPerNode=2&wt=json
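The diagnostics shown below are presumably fetched from the read-only diagnostics endpoint of the autoscaling API (path as documented in the Solr reference guide):
curl 'http://localhost:8983/solr/admin/autoscaling/diagnostics?wt=json'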
The diagnostic output at this point is:
{ "responseHeader": { "status": 0, "QTime": 23 }, "diagnostics": { "sortedNodes": [ { "node": "127.0.1.1:8983_solr", "cores": 2 } ], "violations": [] }, "WARNING": "This response format is experimental. It is likely to change in the future." }
Start the node that was shut down earlier:
"bin/solr" start -cloud -p 7574 -s "example/cloud/node2/solr" -z localhost:9983
The trigger fires, but both cores are moved to the new node, leaving the original node empty; given the "minimize cores" preference, the expected result is one core on each node. Diagnostics output at steady state:
{ "responseHeader": { "status": 0, "QTime": 23 }, "diagnostics": { "sortedNodes": [ { "node": "127.0.1.1:7574_solr", "cores": 2 }, { "node": "127.0.1.1:8983_solr", "cores": 0 } ], "violations": [] }, "WARNING": "This response format is experimental. It is likely to change in the future." }