libbpf-tools: add cgroup_helpers and oomkill support display cgroup #5384
Conversation
Force-pushed from e957108 to 33f29d3.
For this one:
Could you actually show your simple test? This way, people can reproduce the issue easily.
Yes, thanks. The test code:

oom_minimal.c:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        void *mem;

        for (;;) {
                mem = malloc(getpagesize());
                if (!mem) {
                        if (errno == ENOMEM)
                                fprintf(stderr, "OOMing...\n");
                        continue;
                }
                /* Touch the page so it is really allocated, then leak it */
                *(int *)mem = 1;
        }
        return 0;
}

cgroup-oom.sh:

#!/bin/bash
set -e
readonly pid=$$
readonly CGROUP_NAME=oom-test
# Default to the minimal allocator unless the caller overrides it
OOMer=${OOMer:-oom_minimal}

cleanup() {
        printf "\n"
        # Move this shell back to the root cgroup so the rmdir can succeed
        echo ${pid} | sudo tee /sys/fs/cgroup/cgroup.procs > /dev/null
        sudo rmdir /sys/fs/cgroup/${CGROUP_NAME}/
}
trap cleanup EXIT

sudo mkdir -p /sys/fs/cgroup/${CGROUP_NAME}/
echo ${pid} | sudo tee /sys/fs/cgroup/${CGROUP_NAME}/cgroup.procs
echo $((1024*1024*2)) | sudo tee /sys/fs/cgroup/${CGROUP_NAME}/memory.max
echo $((1024*1024*2)) | sudo tee /sys/fs/cgroup/${CGROUP_NAME}/memory.high

./${OOMer} "${@}"
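To reproduce: compile the allocator with something like cc -o oom_minimal oom_minimal.c (the exact compile-and-run commands were collapsed in the original comment), start sudo ./oomkill in one terminal, and run ./cgroup-oom.sh in another. The script puts itself into the oom-test cgroup, caps memory.max at 2 MB, and runs the allocator until the OOM killer fires.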
33f29d3 to
1665706
Compare
|
@yonghong-song Thanks a lot! I just submitted all the changes and added the test code above. Please review. :)
Using a simple test program (not shown) called "oom", we performed the
following two tests:
1. Allocating unlimited memory.
2. Adding the process to a cgroup named oom-memcg and limiting its memory
usage to 200MB.
Without printing cgroup information, we can only see process information
and cannot tell a memcg-constrained OOM kill apart from a global one.
$ sudo ./oomkill
Tracing OOM kills... Ctrl-C to stop.
14:28:23 Triggered by PID 179201 ("oom"), OOM kill of PID 179201 ("oom"), 6114610 pages, loadavg: 0.56 0.51 0.38 2/968 179204
14:28:42 Triggered by PID 179212 ("oom"), OOM kill of PID 179212 ("oom"), 51200 pages, loadavg: 0.40 0.47 0.37 3/968 179212
With this patch, oomkill can clearly display the cgroup information.
$ sudo ./oomkill -c
Tracing OOM kills... Ctrl-C to stop.
14:32:59 Triggered by PID 179879 ("oom"), CGROUP 8309 ("/sys/fs/cgroup/user.slice/user-1000.slice/[email protected]/session.slice/[email protected]"), OOM kill of PID 179879 ("oom"), 6114610 pages, loadavg: 0.50 0.38 0.35 4/970 179879
14:33:14 Triggered by PID 179884 ("oom"), CGROUP 122547 ("/sys/fs/cgroup/oom-memcg"), MEMCG 122547 ("/sys/fs/cgroup/oom-memcg"), OOM kill of PID 179884 ("oom"), 51200 pages, loadavg: 0.47 0.38 0.35 3/971 179884
Link: iovisor#5384
Signed-off-by: Rong Tao <[email protected]>
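For context, a minimal sketch of how the BPF side of such a feature could obtain both IDs. This is an illustration, not the actual patch: it assumes a cgroup-v2 kernel where kernfs_node has a u64 id field, and it uses bpf_printk() only to keep the sketch self-contained. The triggering task's cgroup ID comes from bpf_get_current_cgroup_id(), and the victim memcg's ID is read with CO-RE from the oom_control argument of oom_kill_process().

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>
#include <bpf/bpf_tracing.h>

SEC("kprobe/oom_kill_process")
int BPF_KPROBE(oom_kill_process, struct oom_control *oc, const char *message)
{
        /* cgroup of the task that triggered the OOM */
        __u64 cgroup_id = bpf_get_current_cgroup_id();
        /* cgroup of the memcg being OOMed; stays 0 for a global OOM,
         * where oc->memcg is NULL */
        __u64 memcg_id = 0;
        struct mem_cgroup *memcg = BPF_CORE_READ(oc, memcg);

        if (memcg)
                memcg_id = BPF_CORE_READ(memcg, css.cgroup, kn, id);

        /* A real tool would put these in an event struct sent to user
         * space; print them here just to make the sketch observable. */
        bpf_printk("oom: trigger cgroup=%llu memcg=%llu", cgroup_id, memcg_id);
        return 0;
}

char LICENSE[] SEC("license") = "GPL";

The user-space side would then translate these IDs back to paths, which is what the CGROUP and MEMCG columns in the output above show.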
Force-pushed from 1665706 to 86b0c14.
Some tools need to obtain cgroup information. For example, oomkill currently
only supports tracking process information and cannot obtain cgroup
information. It would be better if it could obtain memcg information.
For ease of maintenance, a separate commit is kept just for adding
cgroup_helpers.
Added interfaces:
cgroup_cgroupid_of_path() - Get the cgroup ID from a cgroup's absolute path (sketched below)
get_cgroupid_path() - Get the cgroup path from a cgroup ID
Signed-off-by: Rong Tao <[email protected]>
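For illustration, here is a minimal sketch of how cgroup_cgroupid_of_path() could work on a cgroup-v2 system, under the common assumption that the cgroup ID equals the 8-byte kernfs handle that name_to_handle_at(2) returns for the cgroupfs directory (the same trick the kernel selftests' cgroup helpers use); the helper in the actual patch may differ:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Sketch: resolve a cgroupfs path to its cgroup ID. Returns 0 on error. */
static uint64_t cgroup_cgroupid_of_path(const char *path)
{
        struct {
                struct file_handle fh;
                uint64_t cgid;          /* storage for the 8-byte handle */
        } h;
        int mount_id;

        memset(&h, 0, sizeof(h));
        h.fh.handle_bytes = sizeof(uint64_t);
        if (name_to_handle_at(AT_FDCWD, path, &h.fh, &mount_id, 0) < 0)
                return 0;
        memcpy(&h.cgid, h.fh.f_handle, sizeof(h.cgid));
        return h.cgid;
}

int main(int argc, char **argv)
{
        const char *path = argc > 1 ? argv[1] : "/sys/fs/cgroup";

        printf("%s -> cgroupid %llu\n", path,
               (unsigned long long)cgroup_cgroupid_of_path(path));
        return 0;
}

The reverse mapping (get_cgroupid_path()) is less direct: it either uses open_by_handle_at(2), which needs CAP_DAC_READ_SEARCH, or walks /sys/fs/cgroup comparing IDs.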
Force-pushed from 86b0c14 to 374fdfb.