Description
We should interleave the AIO (disk) threads evenly across the NUMA nodes and set their affinity to those nodes. That would give better memory distribution across NUMA nodes. We've noticed fairly uneven allocations on some boxes, which I attribute to this problem (no strong evidence, but it does make sense).
Per-node process memory usage (in MBs) for PID 33471 ([ET_NET 0])

                Node 0        Node 1         Total
---------    ---------     ---------     ---------
Huge              0.00          0.00          0.00
Heap              0.00          0.00          0.00
Stack             1.38          0.64          2.02
Private      188993.75      59142.80     248136.55
---------    ---------     ---------     ---------
Total        188995.13      59143.44     248138.57
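A minimal sketch of the proposed placement policy, assuming two NUMA nodes with a contiguous CPU split; the node count, helper names, and thread count are illustrative assumptions, not from this ticket. Real code would read the topology from /sys/devices/system/node or libnuma rather than assuming a split:

```python
import os

# Assumption (not from the ticket): 2 NUMA nodes, CPUs split
# contiguously between them.
NUM_NODES = 2

def assign_node(thread_idx: int, nodes: int = NUM_NODES) -> int:
    """Round-robin interleave: AIO thread i goes to node i % nodes."""
    return thread_idx % nodes

def node_cpus(node: int, ncpu: int, nodes: int = NUM_NODES) -> set:
    """CPUs assumed to belong to `node` under a contiguous split."""
    per = max(ncpu // nodes, 1)
    return set(range(node * per, min((node + 1) * per, ncpu)))

# Interleave 6 hypothetical AIO threads across the 2 nodes.
placement = {i: assign_node(i) for i in range(6)}
print(placement)  # {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}

# Each AIO thread would then pin itself to its node's CPUs,
# e.g. on Linux:
#   os.sched_setaffinity(0, node_cpus(placement[i], os.cpu_count()))
```

With every node hosting an equal share of the disk threads, their allocations (and the page cache traffic they drive) should spread across nodes instead of piling onto the node where the threads happened to start.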
Issue Links
- is duplicated by TS-1965 Make IO threads NUMA aware (Closed)
- links to