Linux command line for you and me Documentation, Release 0.1
124 pages | 510.85 KB | 1 year ago
Excerpt (fragments of `systemctl status` and `mount` output):
    Docs: man:sshd(8) man:sshd_config(5)
    Main PID: 3673 (sshd)
    Tasks: 1 (limit: 4915)
    CGroup: /system.slice/sshd.service
            └─3673 /usr/sbin/sshd -D
    Jun 22 18:19:28 kdas-laptop systemd[1]: …
    … 10:03:25 UTC; 1 day 3h ago
    Main PID: 21019 (myserver)
    Tasks: 2 (limit: 50586)
    Memory: 9.6M
    CGroup: /system.slice/myserver.service
            ├─21019 /usr/bin/sh /usr/sbin/myserver
            └─21020 …
    …nosuid,nodev,seclabel,mode=755)
    tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
    cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,releas…
Cilium v1.11 Documentation
1373 pages | 19.37 MB | 1 year ago
Excerpt: …(Kubernetes Without kube-proxy), cgroup v2 needs to be enabled by setting the kernel systemd.unified_cgroup_hierarchy=1 parameter. Also, cgroup v1 controllers net_cls and net_prio have to be disabled, or cgroup v1 has to be disabled (e.g. by setting the kernel cgroup_no_v1="all" parameter). This ensures that Kind nodes have their own cgroup namespace, and Cilium can attach BPF programs at the right cgroup hierarchy. To verify this:
    $ sudo ls -al /proc/$(docker inspect -f '{{.State.Pid}}' kind-control-plane)/ns/cgroup
    $ sudo ls -al /proc/self/ns/cgroup
See the Pull Request [https://github.com/cilium/cilium/pull/16259] for more details…
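The verification step in the excerpt above compares cgroup namespace links: the kernel reports each one as a string of the form cgroup:[<inode>], and two processes share a cgroup namespace exactly when the inode numbers match. A minimal sketch of that comparison (the helper names `ns_id` and `same_cgroup_namespace` are hypothetical, not part of Cilium or kind):

```python
import re

def ns_id(link_target: str) -> int:
    """Extract the namespace inode from a /proc/<pid>/ns/cgroup link
    target such as 'cgroup:[4026531835]'."""
    m = re.fullmatch(r"cgroup:\[(\d+)\]", link_target)
    if m is None:
        raise ValueError(f"not a cgroup namespace link: {link_target!r}")
    return int(m.group(1))

def same_cgroup_namespace(a: str, b: str) -> bool:
    """Two processes share a cgroup namespace iff the inode ids match."""
    return ns_id(a) == ns_id(b)

# On a live system one would compare, e.g.,
#   os.readlink("/proc/self/ns/cgroup")
# against the same link for the kind node's PID.
```

If the kind node's link resolves to the same inode as the host's, the node did not get its own cgroup namespace and the kernel parameters above still need to be applied.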
Cilium v1.10 Documentation
1307 pages | 19.26 MB | 1 year ago
Excerpt: …replacement (Kubernetes Without kube-proxy), cgroup v1 controllers net_cls and net_prio have to be disabled, or cgroup v1 has to be disabled (e.g. by setting the kernel cgroup_no_v1="all" parameter). … Validate the overlapping BPF cgroup type programs attached to the parent cgroup hierarchy of the kind container nodes. In such cases, either tear down Cilium, or manually detach the overlapping BPF cgroup programs running in the parent cgroup hierarchy by following the bpftool documentation [https://manpages.ubuntu.com/manpages/focal/man8/bpftool-cgroup.8.html]. For more information, see the Pull Request [https://github…
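The manual-detach path this excerpt refers to can be sketched with bpftool (root required; the program id 42 below is a placeholder for whatever id the tree listing actually reports, not a real value):

```shell
# Show all BPF programs attached throughout the cgroup hierarchy.
sudo bpftool cgroup tree /sys/fs/cgroup

# Detach one overlapping program by attach type and program id
# (attach type and id must be read from the tree output above).
sudo bpftool cgroup detach /sys/fs/cgroup ingress id 42
```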
Cilium v1.9 Documentation
1263 pages | 18.62 MB | 1 year ago
Excerpt: …replacement (Kubernetes Without kube-proxy), cgroup v1 controllers net_cls and net_prio have to be disabled, or cgroup v1 has to be disabled (e.g. by setting the kernel cgroup_no_v1="all" parameter). … Validate the overlapping BPF cgroup type programs attached to the parent cgroup hierarchy of the kind container nodes. In such cases, either tear down Cilium, or manually detach the overlapping BPF cgroup programs running in the parent cgroup hierarchy by following the bpftool documentation [https://manpages.ubuntu.com/manpages/focal/man8/bpftool-cgroup.8.html]. For more information, see the Pull Request [https://github…
Cilium v1.6 Documentation
734 pages | 11.45 MB | 1 year ago
Excerpt: …RESTARTS AGE cilium-crf7f 1/1 Running 0 10m … Limitations: The kernel BPF cgroup hooks operate at connect(2), sendmsg(2) and recvmsg(2) system call layers for connecting the application … The socket operations hook is attached to a specific cgroup and runs on TCP events. Cilium attaches a BPF socket operations program to the root cgroup and uses this to monitor for TCP state transitions … do echo "cat $log"; cat $log; done … cat /var/run/cilium/state/bpf_features.log … BPF/probes: CONFIG_CGROUP_BPF=y is not in kernel configuration; BPF/probes: CONFIG_LWTUNNEL_BPF=y is not in kernel configuration…
Cilium v1.5 Documentation
740 pages | 12.52 MB | 1 year ago
Excerpt: …operations: The socket operations hook is attached to a specific cgroup and runs on TCP events. Cilium attaches a BPF socket operations program to the root cgroup and uses this to monitor for TCP state transitions, specifically … do echo "cat $log"; cat $log; done … cat /var/run/cilium/state/bpf_features.log … BPF/probes: CONFIG_CGROUP_BPF=y is not in kernel configuration; BPF/probes: CONFIG_LWTUNNEL_BPF=y is not in kernel configuration … currently in the kernel are BPF_MAP_TYPE_PROG_ARRAY, BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_MAP_TYPE_CGROUP_ARRAY, BPF_MAP_TYPE_STACK_TRACE, BPF_MAP_TYPE_ARRAY_OF_MAPS, BPF_MAP_TYPE_HASH_OF_MAPS. For…
Cilium v1.7 Documentation
885 pages | 12.41 MB | 1 year ago
Excerpt: …RESTARTS AGE cilium-crf7f 1/1 Running 0 10m … Limitations: The kernel BPF cgroup hooks operate at connect(2), sendmsg(2) and recvmsg(2) system call layers for connecting the application … Cilium's BPF kube-proxy replacement relies upon the Host-Reachable Services feature which uses BPF cgroup hooks to implement the service translation. The getpeername(2) hook is currently missing which will … The socket operations hook is attached to a specific cgroup and runs on TCP events. Cilium attaches a BPF socket operations program to the root cgroup and uses this to monitor for TCP state transitions…
Cilium v1.8 Documentation
1124 pages | 21.33 MB | 1 year ago
Excerpt: …RESTARTS AGE cilium-crf7f 1/1 Running 0 10m … Limitations: The kernel BPF cgroup hooks operate at connect(2), sendmsg(2) and recvmsg(2) system call layers for connecting the application … Cilium's eBPF kube-proxy replacement relies upon the Host-Reachable Services feature which uses eBPF cgroup hooks to implement the service translation. Using it with libceph deployments currently requires … The socket operations hook is attached to a specific cgroup and runs on TCP events. Cilium attaches a BPF socket operations program to the root cgroup and uses this to monitor for TCP state transitions…
BAETYL 0.1.6 Documentation
119 pages | 11.46 MB | 1 year ago
Excerpt: …make sure the following lines are commented out or add them if they don't exist: GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1" 1. Save and exit and then run: sudo update-grub and reboot. NOTE:…
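The step this excerpt describes enables the memory cgroup controller and swap accounting via GRUB kernel parameters. A sketch of the sequence, assuming a Debian/Ubuntu-style GRUB layout (adjust paths for other distributions):

```shell
# /etc/default/grub — ensure this line is present (add it if missing):
#   GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
# Then regenerate the GRUB config and reboot:
sudo update-grub
sudo reboot
# After reboot, verify the memory controller is enabled:
grep memory /proc/cgroups
```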
PyArmor Documentation v8.1.9
131 pages | 111.00 KB | 1 year ago
Excerpt: …script .pyarmor/hooks/app.py:
    def _pyarmor_check_docker():
        cid = None
        with open("/proc/self/cgroup") as f:
            for line in f:
                if line.split(':', 2)[1] == 'name=systemd':
                    …
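The truncated hook above reads /proc/self/cgroup, whose records have the form hierarchy-id:controller-list:path; under Docker with cgroup v1, the container id typically appears as the last path component of the name=systemd hierarchy. A self-contained sketch of that parsing (the helper names are hypothetical illustrations, not PyArmor's API):

```python
def parse_proc_cgroup(lines):
    """Parse /proc/self/cgroup records of the form
    'hierarchy-id:controller-list:cgroup-path' into a dict
    mapping controller list -> cgroup path."""
    mapping = {}
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            continue
        _hier, controllers, path = line.split(":", 2)
        mapping[controllers] = path
    return mapping

def container_id_hint(mapping):
    """Heuristic: under cgroup v1 Docker, the container id is often
    the last path component of the 'name=systemd' hierarchy.
    Returns None when no such component exists."""
    path = mapping.get("name=systemd", "")
    tail = path.rsplit("/", 1)[-1]
    return tail or None
```

For example, a record like 1:name=systemd:/docker/abc123 would yield "abc123" as the container-id hint, while processes outside a container (path "/") yield None.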
12 results in total (pages 1-2).