Details
- Type: Bug
- Status: Closed
- Priority: Blocker
- Resolution: Fixed
- Affects Version/s: 0.12.2
- Fix Version/s: None
- Environment: Amazon EKS, K8s 1.20, Cluster Autoscaler
Description
After YUNIKORN-704 was completed, YuniKorn was expected to use the same mechanism as the default scheduler for scheduling DaemonSet pods, and in most of our deployments this holds. Recently, however, DaemonSet scheduling became problematic again: when the K8s Cluster Autoscaler adds new nodes in response to pending pods in the cluster, EKS automatically creates a CNI DaemonSet pod (Amazon's container networking module) on each newly created node, but YuniKorn could not schedule these pods successfully. There are no informative error messages, and the default queue that these pods belong to has available resources. Because the pods cannot be scheduled, EKS never marks the new nodes as ready, so they get stuck in the NotReady state. This issue is not always reproducible, but it has happened a few times; the root cause needs further investigation.
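For reference, a minimal diagnostic sketch of the symptom described above, assuming the kubernetes Python client and kubeconfig access to the cluster; the field selector and owner-reference check are illustrative only and are not part of this issue:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Nodes whose Ready condition is not "True" are NotReady/Unknown.
for node in v1.list_node().items:
    ready = next((c.status for c in (node.status.conditions or []) if c.type == "Ready"), None)
    if ready != "True":
        print(f"NotReady node: {node.metadata.name}")

# Pending pods owned by a DaemonSet (e.g. the CNI pods EKS creates on the new nodes).
pending = v1.list_pod_for_all_namespaces(field_selector="status.phase=Pending")
for pod in pending.items:
    owners = pod.metadata.owner_references or []
    if any(o.kind == "DaemonSet" for o in owners):
        print(f"Pending DaemonSet pod: {pod.metadata.namespace}/{pod.metadata.name} "
              f"(schedulerName={pod.spec.scheduler_name})")
```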
Note that when this bug happened, the mitigation that worked was to disable the YuniKorn admission controller, delete all the pending DaemonSet pods, and wait for the default scheduler to schedule them; the new nodes then become Ready (see the sketch below). So it seems there are edge cases not covered by the previous work, where YuniKorn still handles DaemonSet pods differently from the default scheduler.
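A rough sketch of the "delete all pending DaemonSet pods" step of that workaround, again assuming the kubernetes Python client; disabling the YuniKorn admission controller beforehand is a separate manual step and is not shown here:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Delete every Pending pod owned by a DaemonSet. The DaemonSet controller recreates
# them; with the YuniKorn admission controller disabled, the recreated pods keep the
# default schedulerName, so the default scheduler can place them on the new nodes.
pending = v1.list_pod_for_all_namespaces(field_selector="status.phase=Pending")
for pod in pending.items:
    if any(o.kind == "DaemonSet" for o in (pod.metadata.owner_references or [])):
        print(f"deleting {pod.metadata.namespace}/{pod.metadata.name}")
        v1.delete_namespaced_pod(name=pod.metadata.name, namespace=pod.metadata.namespace)
```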
Attachments
Issue Links
- causes
  - YUNIKORN-1395 Account for preempted placeholder in the placeholder data (Closed)
- relates to
  - YUNIKORN-704 [Umbrella] Use the same mechanism to schedule daemon set pods as the default scheduler (Closed)
  - YUNIKORN-1289 Publish Daemonset scheduling design doc (Closed)