Prometheus Deep Dive - Monitoring. At scale.
Richard Hartmann & Frederic Branczyk (@TwitchiH & @fredbrancz)

Remote read/write API: twelve different systems can ingest Prometheus data through this API, among them object storage backends for Prometheus such as Thanos. The remote API can now send the write-ahead log (WAL) over the wire to fill gaps in data, and work on this is ongoing.

Exemplars: a time series can point to one single event, which is especially useful if you emit one trace ID per histogram bucket. Some integrations already support this concept, e.g. OpenCensus, and ingestors are free to discard exemplars.
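For reference, enabling this path on the Prometheus side is a short stanza in prometheus.yml. The sketch below uses a hypothetical receiver endpoint; the exact options depend on the integration you send data to:

  remote_write:
    - url: "http://remote-receiver.example.com/api/v1/write"
  remote_read:
    - url: "http://remote-receiver.example.com/api/v1/read"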
PromQL 从入门到精通 (PromQL from Beginner to Expert)

This is exactly what the Prometheus Histogram was designed for. The Histogram type splits latency data into multiple buckets. As an example, we can query a bucket metric and look at the result; even though this metric's bucket boundaries are not chosen particularly well, it still illustrates the idea:

  binlog_consumer_latency_seconds_bucket

This metric carries a very important label named le, which denotes the upper bound of a bucket. In the example above, the binlog consume latency data is split into 6 buckets, each counting the observations up to its upper bound. Suppose the metric is http_request_duration_seconds_bucket and its buckets hold the following values:

  http_request_duration_seconds_bucket{job="n9e-proxy", le="0.1"} 500
  http_request_duration_seconds_bucket{job="n9e-proxy", le="1"} 700
  http_request_duration_seconds_bucket{job="n9e-proxy", le="10"} 850
  http_request_duration_seconds_bucket{job="n9e-proxy", le="20"} 1000
  http_request_duration_seconds_bucket{job="n9e-proxy", le="+Inf"} 1000
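Because these buckets are cumulative counters, the usual way to turn them into a latency figure is histogram_quantile over their rate. A minimal sketch, assuming the same job label and a 5-minute range window:

  # Estimated 90th-percentile request latency over the last 5 minutes.
  histogram_quantile(0.9, rate(http_request_duration_seconds_bucket{job="n9e-proxy"}[5m]))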
OpenMetrics - Standing on the shoulders of Titans

Exemplars: a time series can point to one single event, which is especially useful if you emit one trace ID per histogram bucket. Some integrations already support this concept, e.g. OpenCensus, and ingestors are free to discard exemplars. In the exposition format an exemplar is appended to a sample after a # marker:

  …="200"} 1027 1544554800
  histogram_bucket{le="1"} 123 # {foo="bar"} 42 1544554800
  histogram_bucket{le="2"} 234 # {foo="bar"} 23 1544554799.123
  histogram_bucket{le="3"} 345 1544554800 # {foo="…
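Tying this back to the trace-ID use case: with a trace ID as the exemplar label set, a bucket sample could look like the line below. The metric name, trace ID, and timestamps are made up for illustration:

  demo_request_duration_seconds_bucket{le="1"} 123 # {trace_id="abc123"} 0.67 1544554800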













