From @ineu on May 13, 2017 11:27
I have a pod consuming 115MB of RAM. I tried to set limits of 128MB, 256MB, etc., but the lowest one that worked was 450MB. It looks like the runner itself requires this amount of RAM, so the pod gets killed by the OOM killer before the application starts. I see the following in dmesg:
```
[1655181.702032] [ pid ]   uid  tgid total_vm    rss nr_ptes nr_pmds swapents oom_score_adj name
[1655181.702176] [23241]  2000 23241     4497     69      14       3        0           984 bash
[1655181.702177] [23254]  2000 23254     4497     62      15       3        0           984 bash
[1655181.702179] [23255]  2000 23255   140260 102204     211       5        0           984 objstorage
[1655181.702185] Memory cgroup out of memory: Kill process 23255 (objstorage) score 1980 or sacrifice child
[1655181.702513] Killed process 23255 (objstorage) total-vm:561040kB, anon-rss:408816kB, file-rss:0kB
```
Not sure what objstorage is, but it is pretty greedy.
UPD: Docker-based pods are fine. I just set a limit of 16MB for one of them and it works well.
Copied from original issue: deis/slugrunner#64
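For reference, the limit being discussed is the standard Kubernetes memory limit in the pod spec. A minimal sketch of what was tried (pod name, image, and values are illustrative, taken from the numbers in this report; only `resources.limits.memory` is the relevant field):

```yaml
# Hypothetical pod spec illustrating the memory limit from this report.
# The OOM killer enforces limits.memory on the whole container cgroup,
# so the runner's own overhead counts against it, not just the app's RSS.
apiVersion: v1
kind: Pod
metadata:
  name: objstorage          # hypothetical name, matching the killed process
spec:
  containers:
  - name: objstorage
    image: example/objstorage:latest   # hypothetical image
    resources:
      requests:
        memory: "128Mi"
      limits:
        memory: "450Mi"     # per this report, anything lower got OOM-killed
```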