Details
- Type: Sub-task
- Status: Open
- Priority: Major
- Resolution: Unresolved
Description
Currently, all nodes are hardcoded to be assigned to the "default" partition. This has two disadvantages:
- we cannot select specific nodes from a cluster to be used exclusively for executing Spark jobs
- multiple partitions do not work, since non-default partitions never receive any nodes
Future work:
- support changing the partition assignment of an existing node (in this PR, such update requests are skipped)
- support removing an existing node that has been reassigned (in this PR, removing such a node causes the error message "Failed to update non existing node ...")
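The change described above amounts to reading a node's partition from its attributes instead of always assigning "default". A minimal sketch of that lookup in Go is below; the function name `partitionOf` and the attribute key `"si/node-partition"` are assumptions for illustration, not the actual YuniKorn API or attribute key:

```go
package main

import "fmt"

// partitionOf is a hypothetical helper: it returns the partition named in the
// node's attributes, falling back to "default" when no partition is set.
// The key "si/node-partition" is an assumed attribute name, not YuniKorn's real one.
func partitionOf(attributes map[string]string) string {
	if p, ok := attributes["si/node-partition"]; ok && p != "" {
		return p
	}
	return "default"
}

func main() {
	// A node tagged for a dedicated Spark partition lands there...
	fmt.Println(partitionOf(map[string]string{"si/node-partition": "spark"}))
	// ...while an untagged node keeps the previous behaviour.
	fmt.Println(partitionOf(nil))
}
```

With such a lookup in place, non-default partitions can receive nodes, which is what makes multi-partition setups (and dedicating nodes to Spark jobs) possible.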
Issue Links
- causes: YUNIKORN-1124 Avoid passing empty nodeAttributes in UpdateNode request (Closed)
- links to