Cilium v1.10 Documentation
…in-kernel verifier ensures that eBPF programs are safe to run and a JIT compiler converts the bytecode to CPU-architecture-specific instructions for native execution efficiency. eBPF programs can be run at various… cnp-node-status-gc and ccnp-node-status-gc are now removed. Please use cnp-node-status-gc-interval=0 instead. The cilium-endpoint-gc option is now removed. Please use cilium-endpoint-gc-interval=0 instead. …ccnp-node-status-gc: This option is being deprecated. Disabling CCNP node status GC can be done with cnp-node-status-gc-interval=0. (Note that this is not a typo; it is meant to be cnp-node-status-gc-interval.)
1307 pages | 19.26 MB | updated 1 year ago
Cilium v1.11 Documentation
…clusters or clustermeshes with more than 65535 nodes. Decryption with Cilium IPsec is limited to a single CPU core per IPsec tunnel. This may affect performance in case of high throughput between two nodes. WireGuard… in-kernel verifier ensures that eBPF programs are safe to run and a JIT compiler converts the bytecode to CPU-architecture-specific instructions for native execution efficiency. eBPF programs can be run at various… cilium_operator_identity_gc_entries_total is removed. Please use cilium_operator_identity_gc_entries instead. cilium_operator_identity_gc_runs_total is removed. Please use cilium_operator_identity_gc_runs instead.
1373 pages | 19.37 MB | updated 1 year ago
Cilium v1.8 Documentation
…in-kernel verifier ensures that BPF programs are safe to run and a JIT compiler converts the bytecode to CPU-architecture-specific instructions for native execution efficiency. BPF programs can be run at various… ccnp-node-status-gc: This option is being deprecated. Disabling CCNP node status GC can be done with cnp-node-status-gc-interval=0. (Note that this is not a typo; it is meant to be cnp-node-status-gc-interval.) cnp-node-status-gc: This option is being deprecated. Disabling CNP node status GC can be done with cnp-node-status-gc-interval=0. This old option will be removed in Cilium 1.9. cilium-endpoint-gc: This option…
1124 pages | 21.33 MB | updated 1 year ago
Cilium v1.9 Documentation
…in-kernel verifier ensures that eBPF programs are safe to run and a JIT compiler converts the bytecode to CPU-architecture-specific instructions for native execution efficiency. eBPF programs can be run at various… cnp-node-status-gc and ccnp-node-status-gc are now removed. Please use cnp-node-status-gc-interval=0 instead. The cilium-endpoint-gc option is now removed. Please use cilium-endpoint-gc-interval=0 instead. …ccnp-node-status-gc: This option is being deprecated. Disabling CCNP node status GC can be done with cnp-node-status-gc-interval=0. (Note that this is not a typo; it is meant to be cnp-node-status-gc-interval.)
1263 pages | 18.62 MB | updated 1 year ago
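The flag migration repeated across these changelog snippets amounts to replacing the removed boolean GC flags with their interval counterparts. A minimal sketch of the relevant cilium-config ConfigMap entries (the key names come from the snippets above; the surrounding ConfigMap structure is assumed):

```yaml
# Interval options replace the removed boolean GC flags;
# "0" disables the respective garbage collector outright.
cnp-node-status-gc-interval: "0"    # replaces cnp-node-status-gc / ccnp-node-status-gc
cilium-endpoint-gc-interval: "0"    # replaces cilium-endpoint-gc
```

Any non-zero duration keeps the garbage collector running at that cadence; "0" is the documented way to disable it.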
Cilium v1.6 Documentation
…\
  --min-cpu-platform "Intel Broadwell" \
  kata-testing
gcloud compute ssh kata-testing
# While ssh'd into the VM:
$ [ -z "$(lscpu | grep GenuineIntel)" ] && { echo "ERROR: Need an Intel CPU"; exit 1; }
…kernel verifier ensures that BPF programs are safe to run and a JIT compiler converts the bytecode to CPU-architecture-specific instructions for native execution efficiency. BPF programs can be run at various… BPF datapath to perform more aggressive aggregation on packet-forwarding-related events to reduce CPU consumption while running cilium monitor. The automatic change only applies to the default ConfigMap…
734 pages | 11.45 MB | updated 1 year ago
Cilium v1.7 Documentation
…kernel verifier ensures that BPF programs are safe to run and a JIT compiler converts the bytecode to CPU-architecture-specific instructions for native execution efficiency. BPF programs can be run at various… BPF datapath to perform more aggressive aggregation on packet-forwarding-related events to reduce CPU consumption while running cilium monitor. The automatic change only applies to the default ConfigMap… optimize CPU consumption as much as possible while keeping the connection-tracking table utilization below 25%. If needed, the interval can be set to a static interval with the option --conntrack-gc-interval…
885 pages | 12.41 MB | updated 1 year ago
Cilium v1.5 Documentation
…in-kernel verifier ensures that BPF programs are safe to run and a JIT compiler converts the bytecode to CPU-architecture-specific instructions for native execution efficiency. BPF programs can be run at various… optimize CPU consumption as much as possible while keeping the connection-tracking table utilization below 25%. If needed, the interval can be set to a static interval with the option --conntrack-gc-interval… filling up and the automatic adjustment of the garbage-collector interval is insufficient. Set --conntrack-gc-interval to an interval lower than the default. Alternatively, the value for bpf-ct-global-any-max…
740 pages | 12.52 MB | updated 1 year ago
Cilium's Network Acceleration Secrets (Cilium的网络加速秘诀)
Main performance gains:
• Reduced packet forwarding latency, to varying degrees depending on the scenario
• Increased packet throughput, to varying degrees depending on the scenario
• Reduced CPU overhead for packet forwarding, to varying degrees depending on the scenario
Introduction to eBPF: eBPF was introduced in Linux kernel 3.19. eBPF programs can be written in user space, compiled, and dynamically loaded at designated hook points in the kernel, where they run safely in a VM-like sandbox. They can modify kernel data, influence the result of a kernel request, or alter the kernel's processing flow for a request, greatly improving the efficiency with which the kernel handles events. As of Linux 5.14 there are 32 eBPF program types, of which Cilium mainly uses the following:
• sched_cls: Cilium implements packet forwarding, load balancing, and filtering at the kernel's TC layer
• xdp: Cilium implements packet forwarding, load balancing, and filtering at the XDP layer
• cgroup_sock_addr: Cilium resolves services inside cgroups
• sock_ops + sk_msg: record the sockets used for communication between local applications, to accelerate local packet forwarding
Accelerating pod-to-pod communication on the same node: Cilium uses eBPF programs with helper functions such as bpf_redirect() or bpf_redirect_peer() to forward traffic quickly between pods on the same host, skipping much of the kernel network-stack processing.
14 pages | 11.97 MB | updated 1 year ago
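The same-node fast path mentioned in that deck can be sketched as a tc-attached eBPF program. This is a minimal, non-runnable illustration, not Cilium's code: the hard-coded ifindex is a hypothetical placeholder (Cilium looks up the real target from its endpoint maps), and building it requires clang's BPF target plus libbpf headers and a kernel that provides bpf_redirect_peer() (5.10+):

```c
/* Illustrative sketch; build (roughly) with:
 *   clang -O2 -g -target bpf -c redirect_peer_sketch.c -o redirect_peer.o */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical ifindex of the destination pod's peer device. */
#define TARGET_IFINDEX 5

SEC("tc")
int redirect_to_peer(struct __sk_buff *skb)
{
    /* Hand the packet directly to the device in the peer network
     * namespace, bypassing much of the host-side stack traversal. */
    return bpf_redirect_peer(TARGET_IFINDEX, 0);
}

char _license[] SEC("license") = "GPL";
```

bpf_redirect_peer() differs from bpf_redirect() in that it crosses into the peer namespace in one step, which is what makes the same-node pod-to-pod path cheap.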
8 results in total













