Cilium v1.9 Documentation
… observe may report one or more nodes being unavailable and hubble-ui may fail to connect to the backends. … Installer Integrations: The following list includes the Kubernetes installer integrations we are aware of. … cluster. We have also established service load-balancing from external workloads to your cluster backends, and configured domain name lookup in the external workload to be served by kube-dns of your cluster. … A Global Service will load-balance across backends in multiple clusters. This implicitly configures io.cilium/shared-service: "true". To prevent service backends from being shared to other clusters, set io.cilium/shared-service: "false". …
1263 pages | 18.62 MB | 1 year ago
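A minimal sketch of the sharing control this snippet describes, assuming a Cluster Mesh setup and a hypothetical Service named rebel-base: export the service globally, but stop exporting its own backends to remote clusters.

    # rebel-base is a placeholder service name.
    kubectl annotate service rebel-base \
        io.cilium/global-service="true" \
        io.cilium/shared-service="false" \
        --overwrite

With this combination the service still load-balances to backends shared by remote clusters while keeping its local backends private.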
Cilium v1.10 Documentation
… as recvmsg (UDP), the destination IP is checked for an existing service IP and one of the service backends is selected as a target, meaning, while the application assumes its connection to the service address, the corresponding kernel socket is actually connected to the backend address. … sendmsg(2) and recvmsg(2) system call layers for connecting the application to one of the service backends. In the v5.8 Linux kernel, a getpeername(2) hook for eBPF has been added in order to also reverse-translate the peer address returned to the application. … with externalTrafficPolicy=Local is possible and can also be reached from nodes which have no local backends, meaning, given SNAT does not need to be performed, all service endpoints are available for load balancing. …
1307 pages | 19.26 MB | 1 year ago
Cilium v1.11 Documentation
… as recvmsg (UDP), the destination IP is checked for an existing service IP and one of the service backends is selected as a target, meaning, while the application assumes its connection to the service address, the corresponding kernel socket is actually connected to the backend address. … sendmsg(2) and recvmsg(2) system call layers for connecting the application to one of the service backends. In the v5.8 Linux kernel, a getpeername(2) hook for eBPF has been added in order to also reverse-translate the peer address returned to the application. … with externalTrafficPolicy=Local is possible and can also be reached from nodes which have no local backends, meaning, given SNAT does not need to be performed, all service endpoints are available for load balancing. …
1373 pages | 19.37 MB | 1 year ago
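A quick way to observe the socket-level translation described in the two snippets above, assuming a hypothetical ClusterIP 10.96.0.10:80 with a backend pod at 10.0.1.23 (all addresses and output are illustrative):

    # Open a TCP connection to the service IP from inside a pod.
    nc 10.96.0.10 80 &
    sleep 1
    # The established socket points at the backend pod, showing that
    # the translation happened once at connect(2) time, not per-packet:
    ss -tn dst :80
    # State  Recv-Q  Send-Q  Local Address:Port   Peer Address:Port
    # ESTAB  0       0       10.0.2.15:41066      10.0.1.23:80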
Cilium v1.8 Documentation
… observe may report one or more nodes being unavailable and hubble-ui may fail to connect to the backends. … Installation on OpenShift OKD: OpenShift Requirements: 1. Choose preferred cloud provider. … Cilium pod and validate that the backend IPs consist of pod IPs from all clusters running relevant backends. You can further validate the correct datapath plumbing by running cilium bpf lb list to inspect … as recvmsg (UDP), the destination IP is checked for an existing service IP and one of the service backends is selected as a target, meaning, while the application assumes its connection to the service address …
1124 pages | 21.33 MB | 1 year ago
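For the datapath check this snippet mentions, a hedged example of inspecting the load-balancing map from inside an agent pod (the pod name, service, and backend addresses are placeholders; with Cluster Mesh, backends from all connected clusters appear under the same frontend):

    kubectl -n kube-system exec -ti cilium-abc12 -- cilium bpf lb list
    # SERVICE ADDRESS    BACKEND ADDRESS
    # 10.96.0.10:80      10.0.1.23:80
    #                    10.1.4.56:80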
Cilium v1.7 Documentation
… Cilium pod and validate that the backend IPs consist of pod IPs from all clusters running relevant backends. You can further validate the correct datapath plumbing by running cilium bpf lb list to inspect … as recvmsg (UDP), the destination IP is checked for an existing service IP and one of the service backends is selected as a target, meaning, while the application assumes its connection to the service address … sendmsg(2) and recvmsg(2) system call layers for connecting the application to one of the service backends. Currently getpeername(2) does not yet have a BPF hook for rewriting sock addresses before copying them back to the application. …
885 pages | 12.41 MB | 1 year ago
Cilium v1.6 Documentation
… Cilium pod and validate that the backend IPs consist of pod IPs from all clusters running relevant backends. You can further validate the correct datapath plumbing by running cilium bpf lb list to inspect … as recvmsg (UDP), the destination IP is checked for an existing service IP and one of the service backends is selected as a target, meaning, while the application assumes its connection to the service address … sendmsg(2) and recvmsg(2) system call layers for connecting the application to one of the service backends. Currently getpeername(2) does not yet have a BPF hook for rewriting sock addresses before copying them back to the application. …
734 pages | 11.45 MB | 1 year ago
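The practical difference between these older snippets and the v1.10/v1.11 ones above can be seen from the application's side, sketched here with strace (addresses are placeholders, output heavily abridged, and whether a given client calls getpeername(2) depends on the client):

    # Trace the client's view of its peer address.
    strace -e trace=connect,getpeername curl -s http://10.96.0.10/ -o /dev/null
    # connect(5, {sin_port=htons(80), sin_addr="10.96.0.10"}, 16) = 0
    # Without the eBPF getpeername() hook (pre-v5.8 kernels), the
    # backend pod address leaks through to the application:
    # getpeername(5, {sin_port=htons(80), sin_addr="10.0.1.23"}, [16]) = 0
    # With the hook (v5.8+), the service IP 10.96.0.10 is restored.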
Cilium v1.5 Documentation
…
IPcache         global     512k   max 512k endpoints (IPv4 or IPv6) across all clusters
Load Balancer   node       64k    max 64k cumulative backends across all services across all clusters [*]
Policy          endpoint   16k    max 16k allowed identity + port + protocol pairs for a specific endpoint
… in the load-balancing table by scaling the number of service backends up and down can reduce the maximum number of supported service backends further. If in doubt, increase the limit to 512k. … Kubernetes …
List the load-balancing configuration:
    cilium bpf lb list
Add a new load balancer:
    cilium service update --frontend 127.0.0.1:80 \
                          --backends 127.0.0.2:90,127.0.0.3:90 \
                          --id 20 \
                          --rev
… BPF: List node tunneling mapping information …
740 pages | 12.52 MB | 1 year ago
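After the cilium service update shown in this snippet, a hedged way to verify the new entry (the output format is illustrative):

    # Confirm that service ID 20 now fronts the two backends.
    cilium service list
    # ID   Frontend       Backend
    # 20   127.0.0.1:80   1 => 127.0.0.2:90
    #                     2 => 127.0.0.3:90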
7 results in total