Details
Type: Bug
Status: Accepted
Priority: Major
Resolution: Unresolved
Affects Version/s: 0.22.1
Fix Version/s: None
Component/s: None
Environment:
Test environment runs on Vagrant.
Master: CentOS 7 + Mesos 0.22.1 + Marathon 0.9.0, 1 vCPU + 1 GB RAM
Slave: CentOS 7 + Mesos 0.22.1, 3 vCPUs + 2048 MB RAM (1 master + 1 slave)
Description
We recently began running some tests with Kibana to graph stats from the slaves and the masters, and we found something pretty odd.
Test case:
In my example the slave has 1840 MB of memory free, of which Mesos reserves 920 MB for tasks.
1. Create N (in my case 14) Marathon tasks with the following configuration (see the curl sketch after this list):
   command: while true; do sleep 1; echo "heloo"; done
   mem: 64 MB
   cpu: 0.1
2. Check the Mesos master web UI:
           CPUs   Mem
   Total   3      920 MB
   Used    1.4    896 MB
3. Check <slave host>:5051/metrics/snapshot:
   "slave/mem_total": 920,
   "slave/mem_used": 1344
Is this correct? I discussed this issue on the DCOS community Slack channel with Adam, and he told me that the correct numbers are the ones in #3: for each task there is roughly an extra 32 MB + 0.1 cpu assigned to a default executor.
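If that explanation is right, the memory numbers add up (and the master UI would simply not be counting the executor overhead):

per task: 64 MB (task) + 32 MB (default executor) = 96 MB
slave:    14 tasks x 96 MB = 1344 MB                      -> matches "slave/mem_used":1344
master:   14 tasks x 64 MB = 896 MB, 14 x 0.1 = 1.4 cpus  -> matches the UI's "Used" row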
I also changed the slave to enable cgroups_limit_swap:
/etc/mesos-slave/
├── attributes
├── cgroups_limit_swap
├── containerizers
├── executor_registration_timeout
├── hostname
├── isolation
├── resources
└── work_dir
cat /etc/mesos-slave/cgroups_limit_swap
true
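For anyone reproducing this: with the Mesosphere packages, each file under /etc/mesos-slave/ is turned into a --<filename>=<file contents> flag by the init wrapper, so enabling the flag comes down to (a sketch, assuming the stock CentOS 7 package with systemd):

echo 'true' > /etc/mesos-slave/cgroups_limit_swap
systemctl restart mesos-slave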
ps ax | grep slave
26810 ?  Ssl  0:02  /usr/sbin/mesos-slave --master=zk://172.41.5.11:2181/mesos --ip=172.41.6.11 --cgroups_limit_swap=true --containerizers=docker,mesos --executor_registration_timeout=30mins --hostname=172.41.6.11 --isolation=cgroups/cpu,cgroups/mem --work_dir=/tmp/mesos
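The active flags can also be double-checked from the slave's state endpoint instead of ps (assuming the 0.22-era state.json path, which exposes a "flags" object):

curl -s http://172.41.6.11:5051/state.json | grep cgroups_limit_swap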