Cilium v1.5 Documentation
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
rolebinding.rbac.authorization.k8s.io/kube-state-metrics created
role.rbac.authorization.k8s.io/kube-state-metrics-resizer created
serviceaccount/kube-state-metrics created
service/kube-state-metrics created
configmap/prometheus created
service/prometheus created
service/prometheus-open created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus-k8s …
740 pages | 12.52 MB | 1 year ago
Cilium v1.6 Documentation
kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/4bb97ef14a4ba8170b9d501b3e567037eb0f9a41/kata-deploy/kata-rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/4bb97ef14a4ba8…
deployment.extensions/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus-k8s …
…privileges are automatically granted when using the standard Cilium deployment artifacts:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cilium
rules:
- apiGroups:
  - cilium…
734 pages | 11.45 MB | 1 year ago
Cilium v1.10 Documentation
…${CLUSTER_NAME} --query "nodeResourceGroup" --output tsv)
AZURE_SERVICE_PRINCIPAL=$(az ad sp create-for-rbac --scopes /subscriptions/${AZURE_SUBSCRIPTION_ID}/resourceGroups/${AZURE_NODE_RESOURCE_GROUP} --role …
…gated by the Kubernetes Role-based access control (RBAC) framework. See the official RBAC documentation [https://kubernetes.io/docs/reference/access-authn-authz/rbac/]. When policies are applied, matched pod traffic is redirected. If desired, RBAC configurations can be used so that application developers cannot escape the redirection.
Note: This is a beta feature. Please provide feedback and file a GitHub …
1307 pages | 19.26 MB | 1 year ago
Cilium v1.9 Documentation
…${CLUSTER_NAME} --query "nodeResourceGroup" --output tsv)
AZURE_SERVICE_PRINCIPAL=$(az ad sp create-for-rbac --scopes /subscriptions/${AZURE_SUBSCRIPTION_ID}/resourceGroups/${AZURE_NODE_RESOURCE_GROUP} --role …
…gated by the Kubernetes Role-based access control (RBAC) framework. See the official RBAC documentation [https://kubernetes.io/docs/reference/access-authn-authz/rbac/]. When policies are applied, matched pod traffic is redirected. If desired, RBAC configurations can be used so that application developers cannot escape the redirection.
Note: This is a beta feature. Please provide feedback and file a GitHub …
1263 pages | 18.62 MB | 1 year ago
Cilium v1.11 Documentation
…${CLUSTER_NAME} --query "nodeResourceGroup" --output tsv)
AZURE_SERVICE_PRINCIPAL=$(az ad sp create-for-rbac --scopes /subscriptions/${AZURE_SUBSCRIPTION_ID}/resourceGroups/${AZURE_NODE_RESOURCE_GROUP} --role …
…gated by the Kubernetes Role-based access control (RBAC) framework. See the official RBAC documentation [https://kubernetes.io/docs/reference/access-authn-authz/rbac/]. When policies are applied, matched pod traffic is redirected. If desired, RBAC configurations can be used so that application developers cannot escape the redirection.
Note: This is a beta feature. Please provide feedback and file a GitHub …
1373 pages | 19.37 MB | 1 year ago
Cilium v1.7 Documentation
deployment.extensions/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus-k8s …
…privileges are automatically granted when using the standard Cilium deployment artifacts:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cilium
rules:
- apiGroups:
  - cilium…
…the identity and permissions used by cilium-agent to access the Kubernetes API server when Kubernetes RBAC is enabled. A Secret resource: describes the credentials used to access the etcd kvstore, if required.
885 pages | 12.41 MB | 1 year ago
Cilium v1.8 Documentation
…it is recommended to create a dedicated service principal for cilium-operator:
az ad sp create-for-rbac --name cilium-operator > azure-sp.json
The contents of azure-sp.json should look like this: { …
configmap/grafana-hubble-dashboard created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus unchanged
service/grafana created
…privileges are automatically granted when using the standard Cilium deployment artifacts:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cilium
rules:
- apiGroups:
  - cilium…
1124 pages | 21.33 MB | 1 year ago
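The v1.8 entry above pipes `az ad sp create-for-rbac` output into azure-sp.json for cilium-operator to consume. As a minimal illustrative sketch — the field names appId, password, and tenant are assumptions based on the Azure CLI's usual output shape, not taken from the Cilium documentation — extracting those credentials into environment-variable form might look like this:

```python
import json

# Hedged sketch: pull the service-principal fields out of an azure-sp.json
# produced by `az ad sp create-for-rbac`. Field names (appId, password,
# tenant) are assumptions about the Azure CLI output format.
def sp_to_env(sp_json: str) -> dict:
    sp = json.loads(sp_json)
    return {
        "AZURE_CLIENT_ID": sp["appId"],
        "AZURE_CLIENT_SECRET": sp["password"],
        "AZURE_TENANT_ID": sp["tenant"],
    }

# Illustrative sample content; real values come from the az CLI.
sample = '{"appId": "aaaa", "displayName": "cilium-operator", "password": "s3cret", "tenant": "tttt"}'
env = sp_to_env(sample)
```

In practice these values would be injected into the operator's deployment as a Secret rather than printed or stored in plain text.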
1.5 Years of Cilium Usage at DigitalOcean
● Upgrades have been pretty smooth
  ○ moved from Cilium 1.4 initially to 1.8 today
  ○ retain old RBAC rules across certain cluster upgrades to avoid disruptions
● (Health checking) tooling really helpful
7 pages | 234.36 KB | 1 year ago
Cilium的网络加速秘诀 (Cilium's Network Acceleration Secrets)
ingress tc egress redirect_peer redirect_neigh kernel network stack netfilter
Accelerating east-west nodePort access: request to nodePort 32000 of service pod3, worker node1 10.6.0…
Effect:
• When accessing nodePort or LoadBalancer services inside the cluster, packet forwarding hops are reduced, greatly improving network performance.
• Compared with traditional iptables-based approaches, access latency is lowered. For example, in the same environment with 3,000 services, latency under kube-proxy's iptables mode is 0.6 ms, while Cilium's is 0.3 ms.
XDP accelerates north-south nodePort access; with eBPF, NAT, and DSR, Cilium accelerates north-south nodePort access.
Traditional nodePort forwarding involves SNAT. Cilium provides DSR (direct server return) implementations for nodePort in native and IPIP modes, among others, effectively reducing forwarding hops, greatly improving nodePort forwarding performance, and lowering access latency. Related tests show:
• under kube-proxy's iptables mode, request …
14 pages | 11.97 MB | 1 year ago
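The DSR and XDP mechanisms described in the entry above are typically switched on through Cilium's Helm values. The following is a hedged sketch only — the option names are assumptions from memory of the Helm chart and should be verified against the chart for your Cilium version:

```yaml
# Hedged sketch — Helm values enabling kube-proxy replacement with DSR and
# XDP acceleration. Option names are assumptions; check your chart version.
kubeProxyReplacement: "strict"    # run Cilium's eBPF service handling without kube-proxy
loadBalancer:
  mode: dsr                       # direct server return: replies bypass the LB node, no SNAT
  acceleration: native            # XDP-based acceleration for north-south nodePort traffic
```

DSR modes such as IPIP encapsulation, mentioned in the slide, are selected via additional load-balancer options where supported.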
2.2.1通过Golang+eBPF实现无侵入应用可观测 (2.2.1 Non-Intrusive Application Observability with Golang + eBPF)
…the running state of service instances, further improving problem-localization capability; typically used after an anomalous node has already been identified.
Instances
Full-stack data sources, 70+ alert templates out of the box:
• Application level: Pod/Service/Deployment
• K8s control plane: apiserver/etcd/Scheduler
• Infrastructure: nodes, network, storage
• Cloud services: Kafka/MySQL/Redis/…
Alerting → topology-map troubleshooting → root-cause localization → remediation; alert convergence, happiness up. Metrics.
Designed for failure and high availability; optimized alerting; proactive discovery; intelligent noise reduction and deduplication; systematic resolution; closure; intelligent alerting.
eBPF is a sandboxed program that runs in the Linux kernel. Without modifying any application code, it provides non-intrusive observability that is application-, language-, and framework-agnostic, exposing data such as network, virtual-memory, and system-call metrics that OTel cannot obtain.
Upgraded console experience in the new version:
• Non-intrusive application CPU hotspot viewing for multiple languages
• Monitoring of network anomalies such as TCP drops and TCP retransmissions
• Monitoring of application exception events such as OOM
Golden three metrics; trace query and analysis; topology/upstream-downstream; network dashboard.
29 pages | 3.83 MB | 1 year ago
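The "黄金三指标" (golden three metrics — rate, errors, duration) mentioned in the entry above can be illustrated with a minimal, self-contained sketch of how an observability backend might aggregate them from a batch of request records; all names and structures here are illustrative, not from the original slides:

```python
from dataclasses import dataclass

@dataclass
class Request:
    duration_ms: float  # observed request latency
    status: int         # HTTP-style status code

def golden_signals(requests, window_s):
    """Aggregate rate, error ratio, and p95 latency over one time window."""
    n = len(requests)
    errors = sum(1 for r in requests if r.status >= 500)
    latencies = sorted(r.duration_ms for r in requests)
    # nearest-rank style p95; real systems usually use histograms/sketches
    p95 = latencies[int(0.95 * (n - 1))] if n else 0.0
    return {
        "rate_rps": n / window_s,
        "error_ratio": errors / n if n else 0.0,
        "p95_ms": p95,
    }

# 10 requests over a 10 s window, one of them a server error
requests = [Request(duration_ms=float(i), status=500 if i == 3 else 200)
            for i in range(1, 11)]
sig = golden_signals(requests, window_s=10.0)
```

An eBPF-based agent would gather the raw records kernel-side (e.g. from socket events) instead of requiring application instrumentation, which is the point the slides make.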
10 results in total













