Can eBPF save us from the Data Deluge?
A case for file filtering in eBPF. Giulia Frascaria, October 28, 2020 … The data deluge on modern storage: compute node (CPU), network, storage node (flash) … Data DoS in reverse! … So similar yet so different ● DoS is malicious ● Data transfer is business-critical ● We can blindly drop DoS … But could we reduce data transfer size? eBPF filter-reduce: Filter, Reduce, input …
18 pages | 266.90 KB | 1 year ago
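The excerpt only names the filter-reduce idea, so here is a purely illustrative C sketch of the pattern the slides gesture at: run a predicate (filter) and an aggregation (reduce) next to the data so only the result, not the raw records, crosses the network. The record layout and function name are invented for illustration and are not from the talk.

```c
/* Illustrative sketch only; struct layout and names are invented, not from the slides.
 * The point: filter + reduce on the storage side so only the aggregate leaves the node. */
#include <stdint.h>
#include <stddef.h>

struct record {
    uint32_t key;
    uint64_t value;
};

/* Keep records matching wanted_key (filter) and sum their values (reduce). */
uint64_t filter_reduce(const struct record *recs, size_t n, uint32_t wanted_key)
{
    uint64_t sum = 0;

    for (size_t i = 0; i < n; i++) {
        if (recs[i].key != wanted_key)  /* filter */
            continue;
        sum += recs[i].value;           /* reduce */
    }
    return sum;
}
```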
Cilium v1.11 Documentation
Cilium · Hubble · Important common packages · Debugging toFQDNs and DNS · Debugging Mutexes / Locks and Data Races · Hubble · Bumping the vendored Cilium dependency · Documentation Style · Header Titles · Body · Code … Cilium's eBPF implementation is optimized for maximum performance, can be attached to XDP (eXpress Data Path), and supports direct server return (DSR) as well as Maglev consistent hashing if the load balancing … run in standalone mode or as a cluster, making it a great choice for local testing with multi-node data paths. Agent nodes are joined to the master node using a node-token which can be found on the master …
1373 pages | 19.37 MB | 1 year ago
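For readers unfamiliar with the XDP attachment point this excerpt mentions, the following is a minimal stand-alone XDP stub in libbpf style. It is not Cilium's datapath code, only a sketch of where a program such as Cilium's XDP load balancer hooks in.

```c
// Minimal XDP stub; not Cilium's datapath, just the attachment point it uses.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_pass_all(struct xdp_md *ctx)
{
    // Cilium's real XDP programs parse and rewrite packets here
    // (load balancing, DSR, Maglev backend selection); this stub
    // simply passes every packet up the stack.
    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```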
Cilium v1.5 Documentation
… for the kubernetes block like this: kubectl -n kube-system edit cm coredns [...] apiVersion: v1  data:  Corefile: |  .:53 {  errors  health  kubernetes cluster.local in-addr … components of an application. A cluster of "Kafka brokers" connect nodes that "produce" data into a data stream, or "consume" data from a data stream. Kafka refers to each data stream as a "topic". Because scalable …
740 pages | 12.52 MB | 1 year ago
Cilium v1.6 Documentation
… wildcards for the kubernetes block like this: kubectl -n kube-system edit cm coredns [...] apiVersion: v1  data:  Corefile: |  .:53 {  errors  health  kubernetes cluster.local in-addr.arpa … configuration: apiVersion: v1  kind: ConfigMap  metadata:  name: cni-configuration  namespace: cilium  data:  cni-config: |-  { "cniVersion": "0.3.0", "name": "azure", "plugins": [ …
734 pages | 11.45 MB | 1 year ago
Cilium v1.8 Documentation
Cilium · Hubble · Important common packages · Debugging toFQDNs and DNS · Debugging Mutexes / Locks and Data Races · Release Management · Organization · Release tracking · Release Cadence · Backporting process · Backport … run in standalone mode or as a cluster, making it a great choice for local testing with multi-node data paths. Agent nodes are joined to the master node using a node-token which can be found on the master … wildcards for the kubernetes block like this: kubectl -n kube-system edit cm coredns [...] apiVersion: v1  data:  Corefile: |  .:53 {  errors  health  kubernetes cluster.local in-addr.arpa …
1124 pages | 21.33 MB | 1 year ago
Cilium v1.9 Documentation
Cilium · Hubble · Important common packages · Debugging toFQDNs and DNS · Debugging Mutexes / Locks and Data Races · Hubble · Bumping the vendored Cilium dependency · Release Management · Organization · Release tracking … Cilium's eBPF implementation is optimized for maximum performance, can be attached to XDP (eXpress Data Path), and supports direct server return (DSR) as well as Maglev consistent hashing if the load balancing … open http://localhost:12000/ to access the UI. Hubble UI is not the only way to get access to Hubble data. A command line tool, the Hubble CLI, is also available. It can be installed by following the instructions …
1263 pages | 18.62 MB | 1 year ago
Cilium v1.10 Documentation
Cilium · Hubble · Important common packages · Debugging toFQDNs and DNS · Debugging Mutexes / Locks and Data Races · Hubble · Bumping the vendored Cilium dependency · Release Management · Organization · Release tracking … Cilium's eBPF implementation is optimized for maximum performance, can be attached to XDP (eXpress Data Path), and supports direct server return (DSR) as well as Maglev consistent hashing if the load balancing … run in standalone mode or as a cluster, making it a great choice for local testing with multi-node data paths. Agent nodes are joined to the master node using a node-token which can be found on the master …
1307 pages | 19.26 MB | 1 year ago
Cilium v1.7 Documentation
… run in standalone mode or as a cluster, making it a great choice for local testing with multi-node data paths. Agent nodes are joined to the master node using a node-token which can be found on the master … wildcards for the kubernetes block like this: kubectl -n kube-system edit cm coredns [...] apiVersion: v1  data:  Corefile: |  .:53 {  errors  health  kubernetes cluster.local in-addr.arpa … configuration: apiVersion: v1  kind: ConfigMap  metadata:  name: cni-configuration  namespace: cilium  data:  cni-config: |-  { "cniVersion": "0.3.0", "name": "azure", "plugins": [ …
885 pages | 12.41 MB | 1 year ago
Buzzing Across Space
… retrieve configuration options, and store state through eBPF maps to save and retrieve data in a wide set of data structures. These maps can be accessed from eBPF programs as well as from applications … their own encoding. When the bees jumped on the case, it marked the beginning / Of a whole new era for data sharing and messaging. / Mail was still slow to go through the ship's processors, / But the electrician … in-kernel aggregation of metrics allows flexible and efficient generation of observability events and data structures from a wide range of possible sources without having to export samples. Attaching eBPF …
32 pages | 32.98 MB | 1 year ago
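Since this excerpt describes eBPF maps as state shared between kernel programs and user space, here is a hedged example (a libbpf BTF-style map definition, not taken from the booklet) that aggregates a per-PID counter in the kernel; user space can read the same map through its file descriptor.

```c
// Hedged example, not from the booklet: a hash map updated in-kernel and
// readable from user space, illustrating maps as shared state / aggregation.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u32);    /* PID */
    __type(value, __u64);  /* number of openat() calls seen */
} openat_counts SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_openat")
int count_openat(void *ctx)
{
    __u32 pid = bpf_get_current_pid_tgid() >> 32;
    __u64 init = 1, *count;

    count = bpf_map_lookup_elem(&openat_counts, &pid);
    if (count)
        __sync_fetch_and_add(count, 1);  /* aggregate in the kernel */
    else
        bpf_map_update_elem(&openat_counts, &pid, &init, BPF_ANY);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```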
How and When You Should Measure CPU Overhead of eBPF Programs
… repeat – Control input data and/or context. Examine output data/context. – Use cases: – Unit testing – Debugging … bpftool prog run: Program Type · Input Data · Input Context · Output Data · Output Context · Repeat …
20 pages | 2.04 MB | 1 year ago
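The excerpt lists the knobs of bpftool prog run (input/output data and context, repeat count). The snippet below is a hedged sketch of the same kernel test-run facility driven programmatically through libbpf's bpf_prog_test_run_opts; the program fd and packet buffer are placeholders, not material from the slides.

```c
// Hedged sketch of the BPF_PROG_TEST_RUN facility behind "bpftool prog run";
// prog_fd and the packet contents are placeholders.
#include <bpf/libbpf.h>
#include <bpf/bpf.h>
#include <stdio.h>

int measure_overhead(int prog_fd)
{
    unsigned char pkt_in[64] = { 0 };    /* controlled input data */
    unsigned char pkt_out[256];          /* buffer for output data */

    LIBBPF_OPTS(bpf_test_run_opts, opts,
        .data_in       = pkt_in,
        .data_size_in  = sizeof(pkt_in),
        .data_out      = pkt_out,
        .data_size_out = sizeof(pkt_out),
        .repeat        = 1000,           /* run the program 1000 times */
    );

    if (bpf_prog_test_run_opts(prog_fd, &opts))
        return -1;

    /* opts.duration is the average run time in nanoseconds over the repeats. */
    printf("retval=%u avg_ns=%u\n", opts.retval, opts.duration);
    return 0;
}
```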
13 results in total (page 1 of 2)