2.2.1 Non-Intrusive Application Observability with Golang + eBPF
Applications: microservice architectures, multiple languages, multiple protocols.
Challenge 1: in a microservice, multi-language, multi-protocol environment, end-to-end observability grows more complex and instrumentation costs stay high.
Infrastructure-layer complexity also keeps rising — Kubernetes, containers, networking, the operating system, hardware. How do you correlate across these layers?
Challenge 3: data is scattered across many tools and lacks shared context, so troubleshooting is inefficient.
Observed stack: business application, application framework, container virtualization, system calls, kernel.
Application Performance Monitoring (APM) and Kubernetes monitoring, including Kubernetes component anomalies (e.g. the Scheduler).
New console experience upgrade:
• Non-intrusive, multi-language application CPU hotspot views
• Network anomaly monitoring, such as TCP drops and TCP retransmissions
• Application anomaly events, such as OOM
Three golden metrics | trace query and analysis | topology/upstream-downstream | network dashboard | container monitoring | smart alerting | continuous profiling | endpoint monitoring | data sources
Thank You, Every Gopher
[29 pages | 3.83 MB | 1 year ago]
Cilium v1.6 Documentation
For the TLS certificates between etcd peers to work correctly, a DNS reverse lookup on a pod IP must map back to the pod name. If you are using CoreDNS, check the CoreDNS ConfigMap and validate that in-addr.arpa [...] Validate that the IP cache is synchronized correctly by running cilium bpf ipcache list or cilium map get cilium_ipcache. The output must contain pod IPs from local and remote clusters. If this fails: ({ [...] services: (map[k8s.ServiceID]*k8s.Service) (len=2) { (k8s.ServiceID) default/kubernetes: (*k8s.Service) (0xc000cd11d0)(frontend:172.20.0.1/ports=[https]/selector=map[]), (k8s.ServiceID) [...]
[734 pages | 11.45 MB | 1 year ago]
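The in-addr.arpa check this excerpt describes can be made concrete: the PTR record queried for a pod IP is the octet-reversed address under in-addr.arpa, and its answer must map back to the pod name. A small illustrative Go helper (not part of Cilium) that builds that query name:

```go
package main

import (
	"fmt"
	"strings"
)

// reverseName returns the in-addr.arpa PTR query name for an IPv4
// address, i.e. the name whose answer must resolve to the pod name
// for the etcd-peer TLS verification described above to succeed.
func reverseName(ipv4 string) string {
	oct := strings.Split(ipv4, ".")
	// PTR names reverse the four octets and append in-addr.arpa.
	return fmt.Sprintf("%s.%s.%s.%s.in-addr.arpa.", oct[3], oct[2], oct[1], oct[0])
}

func main() {
	// A pod at 10.0.1.5 is looked up under this name:
	fmt.Println(reverseName("10.0.1.5")) // 5.1.0.10.in-addr.arpa.
}
```

In a live cluster the actual lookup would go through the cluster DNS (e.g. `net.LookupAddr` against CoreDNS); the helper only shows what name is being asked for.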
Cilium v1.5 Documentation
For the TLS certificates between etcd peers to work correctly, a DNS reverse lookup on a pod IP must map back to the pod name. If you are using CoreDNS, check the CoreDNS ConfigMap and validate that in-addr [...] Endpoint Policy: The endpoint policy object implements Cilium endpoint enforcement. Using a map to look up a packet's associated identity and policy, this layer scales well to lots of endpoints. [...]
[740 pages | 12.52 MB | 1 year ago]
Cilium v1.7 Documentation
For the TLS certificates between etcd peers to work correctly, a DNS reverse lookup on a pod IP must map back to the pod name. If you are using CoreDNS, check the CoreDNS ConfigMap and validate that in-addr.arpa [...] Validate that the IP cache is synchronized correctly by running cilium bpf ipcache list or cilium map get cilium_ipcache. The output must contain pod IPs from local and remote clusters. If this fails: ({ [...] services: (map[k8s.ServiceID]*k8s.Service) (len=2) { (k8s.ServiceID) default/kubernetes: (*k8s.Service) (0xc000cd11d0)(frontend:172.20.0.1/ports=[https]/selector=map[]), (k8s.ServiceID) [...]
[885 pages | 12.41 MB | 1 year ago]
Cilium v1.8 Documentation
[...] these new BPF powers. Hubble can answer questions such as: Service dependencies & communication map — What services are communicating with each other? How frequently? What does the service dependency [...] For the TLS certificates between etcd peers to work correctly, a DNS reverse lookup on a pod IP must map back to the pod name. If you are using CoreDNS, check the CoreDNS ConfigMap and validate that in-addr [...] Validate that the IP cache is synchronized correctly by running cilium bpf ipcache list or cilium map get cilium_ipcache. The output must contain pod IPs from local and remote clusters. If this fails: [...]
[1124 pages | 21.33 MB | 1 year ago]
Cilium v1.9 Documentation
[...] these new eBPF powers. Hubble can answer questions such as: Service dependencies & communication map — What services are communicating with each other? How frequently? What does the service dependency [...] For the TLS certificates between etcd peers to work correctly, a DNS reverse lookup on a pod IP must map back to the pod name. If you are using CoreDNS, check the CoreDNS ConfigMap and validate that in-addr.arpa [...] Validate that the IP cache is synchronized correctly by running cilium bpf ipcache list or cilium map get cilium_ipcache. The output must contain pod IPs from local and remote clusters. If this fails: [...]
[1263 pages | 18.62 MB | 1 year ago]
Cilium v1.10 Documentation
[...] these new eBPF powers. Hubble can answer questions such as: Service dependencies & communication map — What services are communicating with each other? How frequently? What does the service dependency [...] Observability: Setting up Hubble Observability; Inspecting Network Flows with the CLI; Service Map & Hubble UI; Network Policy; Security Tutorials: Identity-Aware and HTTP-Aware Policy Enforcement; Locking [...] Cilium. Next Steps: Setting up Hubble Observability; Inspecting Network Flows with the CLI; Service Map & Hubble UI; Identity-Aware and HTTP-Aware Policy Enforcement; Setting up Cluster Mesh; Installation [...]
[1307 pages | 19.26 MB | 1 year ago]
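The "service dependencies & communication map" these Hubble entries describe boils down to counting observed flows per (source, destination) pair. A hedged sketch in plain Go — the `Flow` type is illustrative, not Hubble's API:

```go
package main

import "fmt"

// Flow is one observed service-to-service call, in the spirit of what a
// flow observer such as Hubble reports; the type here is an assumption
// for illustration only.
type Flow struct{ Src, Dst string }

// DependencyMap counts calls per (source, destination) edge — the data
// behind a "who talks to whom, and how often" service map.
func DependencyMap(flows []Flow) map[Flow]int {
	edges := make(map[Flow]int)
	for _, f := range flows {
		edges[f]++
	}
	return edges
}

func main() {
	edges := DependencyMap([]Flow{
		{"frontend", "cart"},
		{"frontend", "cart"},
		{"cart", "db"},
	})
	fmt.Println(edges[Flow{"frontend", "cart"}]) // 2
	fmt.Println(edges[Flow{"cart", "db"}])       // 1
}
```

A UI like the Hubble service map is then a rendering of exactly this edge-weighted graph, with per-edge counts answering "how frequently?".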
Cilium v1.11 Documentation
[...] these new eBPF powers. Hubble can answer questions such as: Service dependencies & communication map — What services are communicating with each other? How frequently? What does the service dependency [...] Observability: Setting up Hubble Observability; Inspecting Network Flows with the CLI; Service Map & Hubble UI; Network Policy; Security Tutorials: Identity-Aware and HTTP-Aware Policy Enforcement; Locking [...] Cilium. Next Steps: Setting up Hubble Observability; Inspecting Network Flows with the CLI; Service Map & Hubble UI; Identity-Aware and HTTP-Aware Policy Enforcement; Setting up Cluster Mesh; Installation [...]
[1373 pages | 19.37 MB | 1 year ago]
Steering connections to sockets with BPF socket lookup hook
[...] __u32 local_port; /* ... */ }; — /usr/include/linux/bpf.h
Diagram: echo service on ports 7, 77, 777; echo_ports (BPF HASH map); Ncat socket held in echo_socket (BPF SOCKMAP); (2) is local port open? (3) pick echo service socket.
bpf.c — BPF sk_lookup program:
/* Declare BPF maps */
struct bpf_map_def SEC("maps") echo_ports = {
    .type        = BPF_MAP_TYPE_HASH,
    .max_entries = 1024,
    .key_size    = sizeof(__u16),
    .value_size  = sizeof(__u8),
};
struct bpf_map_def SEC("maps") echo_socket = {
    .type        = BPF_MAP_TYPE_SOCKMAP,
    .max_entries = 1,
    .key_size    = [...]
};
[23 pages | 441.22 KB | 1 year ago]
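The dispatch the sk_lookup program performs — check the destination port against the echo_ports hash map, then assign the one socket stored in the echo_socket sockmap — can be sketched in plain Go. This is illustrative only; the real decision runs in the BPF program excerpted above:

```go
package main

import "fmt"

// lookupSocket mirrors the sk_lookup program's two-map dispatch:
// openPorts plays the role of the echo_ports hash map, and sockets
// the role of the echo_socket sockmap (a single entry in the talk).
func lookupSocket(localPort uint16, openPorts map[uint16]bool, sockets []string) (string, bool) {
	// (2) is the local port open?
	if !openPorts[localPort] {
		return "", false // no match: fall back to regular socket lookup
	}
	// (3) pick the echo service socket (index 0 in the sockmap)
	return sockets[0], true
}

func main() {
	openPorts := map[uint16]bool{7: true, 77: true, 777: true}
	sockets := []string{"ncat-echo-socket"}

	sock, ok := lookupSocket(77, openPorts, sockets)
	fmt.Println(sock, ok) // ncat-echo-socket true

	_, ok = lookupSocket(80, openPorts, sockets)
	fmt.Println(ok) // false
}
```

The design point this models: one listening socket can serve an arbitrary set of ports, because the port set lives in a map updated from userspace rather than in bind() calls.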
eBPF Summit 2020 Lightning Talk
[...] AMQP port • Extract source IP & port as BPF map key • Extract AMQP methods • Use BPF maps:
• Use the source IP & port as the map key
• The map is a counter for consumers per connection
• Increase it on declare, decrease it on cancel
[22 pages | 1.81 MB | 1 year ago]
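The bookkeeping these bullets describe — a map keyed by the connection's source IP and port, counting consumers, incremented on declare and decremented on cancel — can be sketched in userspace Go. The names are illustrative; in the talk this state lives in a BPF map updated by the probe:

```go
package main

import "fmt"

// connKey stands in for the BPF map key: the connection's source IP and port.
type connKey struct {
	srcIP   string
	srcPort uint16
}

// consumers counts AMQP consumers per connection, mirroring the talk's
// BPF map: increment when a consumer is declared, decrement on cancel.
type consumers map[connKey]int

func (c consumers) declare(k connKey) { c[k]++ }

func (c consumers) cancel(k connKey) {
	if c[k] > 0 { // guard against a cancel with no matching declare
		c[k]--
	}
}

func main() {
	c := consumers{}
	k := connKey{srcIP: "10.0.0.4", srcPort: 49152}
	c.declare(k)
	c.declare(k)
	c.cancel(k)
	fmt.Println(c[k]) // 1
}
```

Keying by source IP and port works because one AMQP connection maps to one TCP 4-tuple, so the counter naturally scopes consumer counts per connection.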
15 results in total
Pages: 1 2













