2. Tencent ClickHouse Practice (腾讯 ClickHouse 实践), 2019 — 丁晓坤 & 熊峰
[Architecture diagram: iData 2 vs. the old profile system. Query path: Data Stats Gather → SQL Parser → Query Optimizer → Execution Plan → Bitcode Emitter → Scheduler; a Storage Scheduler dispatches data blocks (Block 1, Block 2, …) to DataNodes; each executor (Executor-1, Executor-2, …) combines a Bitmap Filter Builder, Dynamic Bitmap Index Cache, Bitmap Index Generator, Execute Engine, Data Mapper, and Aggregate Merger over the column data.]
26 pages | 3.58 MB | 1 year ago

8. Continue to use ClickHouse as TSDB
► Time-Series-Oriented Model — how we do it: the objects test_insert, test_query, test, insert_view, and calc_test_query, with separate Read and Write clients. DDL fragments from the slides: `... Float64 ) ENGINE = Null`; `CREATE MATERIALIZED VIEW demonstration.test_insert ... metric_name, Name, Age`; `CREATE TABLE demonstration.test ( \`time_series_interval\` ...`
42 pages | 911.10 KB | 1 year ago

ClickHouse in Production
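The `ENGINE = Null` and `CREATE MATERIALIZED VIEW` fragments above point at a common ClickHouse time-series ingestion pattern: writes land in a Null-engine table (which stores nothing), and a materialized view fans each insert out into the real storage table. A minimal sketch of that pattern — the column set and table names here are illustrative, not the deck's exact schema:

```sql
-- Write-side entry point: rows inserted here are not stored themselves.
CREATE TABLE demonstration.test_write
(
    metric_name String,
    time        DateTime,
    value       Float64
) ENGINE = Null;

-- The materialized view intercepts every insert into the Null table
-- and writes the (possibly transformed) rows into the real table.
CREATE MATERIALIZED VIEW demonstration.test_insert
TO demonstration.test
AS SELECT metric_name, time, value
FROM demonstration.test_write;
```

This keeps the write protocol stable while the storage table (and any number of additional materialized views, e.g. per-interval rollups) can evolve independently.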
EventLogHDFS; Ok. 0 rows in set. Elapsed: 106.350 sec. Processed 28.75 mln rows. (slide 54/97)

In ClickHouse: Query Local Copy (slide 55/97):
SELECT countIf(CounterType = 'Show') AS SumShows, countIf(CounterType = 'Click') AS SumClicks, BannerID FROM EventLogLocal GROUP BY BannerID ORDER BY SumClicks DESC LIMIT 3;

External dictionaries — update policy: uniformly random time within the range [MIN, MAX]; some sources support an invalidate query (slide 83/97). Layouts: Flat, Hashed, Cache, ComplexKeyCache, ComplexKeyHashed.
100 pages | 6.86 MB | 1 year ago

1. Machine Learning with ClickHouse
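The countIf query in the snippet is a conditional aggregation: one pass over the event log produces per-banner show and click counts. Its logic can be mirrored in plain Python over an invented sample of (BannerID, CounterType) rows — the real table lives in ClickHouse, this only illustrates the semantics:

```python
from collections import defaultdict

# Invented sample of EventLogLocal rows: (BannerID, CounterType).
events = [
    (1, "Show"), (1, "Show"), (1, "Click"),
    (2, "Show"), (2, "Click"), (2, "Click"),
]

# Equivalent of:
#   SELECT countIf(CounterType='Show') AS SumShows,
#          countIf(CounterType='Click') AS SumClicks, BannerID
#   FROM EventLogLocal GROUP BY BannerID ORDER BY SumClicks DESC
stats = defaultdict(lambda: {"SumShows": 0, "SumClicks": 0})
for banner_id, counter_type in events:
    if counter_type == "Show":
        stats[banner_id]["SumShows"] += 1
    elif counter_type == "Click":
        stats[banner_id]["SumClicks"] += 1

top = sorted(stats.items(), key=lambda kv: kv[1]["SumClicks"], reverse=True)
print(top[0])  # banner 2: 1 show, 2 clicks
```

In ClickHouse a single scan computes both conditional counts, which is why countIf is preferred over two separate filtered queries.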
import pandas as pd; url = 'http://127.0.0.1:8123?query='; query = 'select * from trips limit 1000 format TSVWithNames'; resp = requests.get(url, data=query); string_io = io.StringIO(resp.text); table = pd.read_csv(string_io, …

SAMPLE x OFFSET y — must specify an expression for sampling; optimized by PK; fixed dataset for a fixed sample query; only for MergeTree (slide 11/62). CREATE TABLE trips_sample_time … SELECT count() FROM trips_sample_time → 432992321 (1 row in set, elapsed 0.413 sec, processed 432.99 million rows). A query with sampling reads fewer rows: SELECT count() FROM trips_sample_time SAMPLE 1 / 3 OFFSET 1 / 3 → 144330770.
64 pages | 1.38 MB | 1 year ago

0. Machine Learning with ClickHouse
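The snippet's requests-plus-pandas recipe works because the ClickHouse HTTP interface returns plain text, and the TSVWithNames format puts column names on the first line. The parsing step can be shown with the standard library alone — `resp_text` below stands in for `resp.text` from the slide's `requests.get` call, with invented sample data:

```python
import csv
import io

# A TSVWithNames response body: header line, then tab-separated rows
# (sample data invented; normally this is resp.text from the HTTP call).
resp_text = "trip_id\tpassenger_count\n1000001\t2\n1000002\t1\n"

reader = csv.reader(io.StringIO(resp_text), delimiter="\t")
header = next(reader)                        # column names from the first line
rows = [dict(zip(header, r)) for r in reader]
print(rows)  # values arrive as strings; pandas.read_csv would infer types
```

pandas does the same parsing plus type inference, which is why the slide feeds `io.StringIO(resp.text)` straight into `pd.read_csv`.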
import pandas as pd; url = 'http://127.0.0.1:8123?query='; query = 'select * from trips limit 1000 format TSVWithNames'; resp = requests.get(url, data=query); string_io = io.StringIO(resp.text); table = pd.read_csv(string_io, …

SAMPLE x OFFSET y — must specify an expression for sampling; optimized by PK; fixed dataset for a fixed sample query; only for MergeTree (slide 11/62). CREATE TABLE trips_sample_time … SELECT count() FROM trips_sample_time → 432992321 (1 row in set, elapsed 0.413 sec, processed 432.99 million rows). A query with sampling reads fewer rows: SELECT count() FROM trips_sample_time SAMPLE 1 / 3 OFFSET 1 / 3 → 144330770.
64 pages | 1.38 MB | 1 year ago

蔡岳毅 — Building a Highly Available Query Engine for Hundreds of Billions of Rows with ClickHouse + StarRocks (基于ClickHouse+StarRocks构建支撑千亿级数据量的高可用查询引擎)
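Why `SAMPLE 1/3 OFFSET 1/3` reads roughly a third of the rows and why the result is reproducible can be modeled simply: each row's sampling expression is hashed into a fixed range, and the clause keeps rows whose hash lands in a given slice of that range. The sketch below is a simplified model of that behaviour — ClickHouse uses the table's declared sampling expression and its own hash, the md5-based hash here is only an illustration:

```python
import hashlib

def in_sample(key: int, fraction: float = 1 / 3, offset: float = 1 / 3) -> bool:
    """Keep keys whose deterministic hash lands in [offset, offset + fraction)."""
    h = int.from_bytes(hashlib.md5(str(key).encode()).digest()[:8], "big")
    pos = h / 2 ** 64                 # map the hash into [0, 1)
    return offset <= pos < offset + fraction

picked = [k for k in range(10_000) if in_sample(k)]
print(len(picked))  # close to 10_000 / 3
```

Because the hash is deterministic, the same SAMPLE/OFFSET pair always selects the same rows — the "fixed dataset for a fixed sample query" property the slide lists.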
Scale servers up or down at any time, adding or removing machines; distributed: clustered deployment on k8s. (全球敏捷运维峰会 广州站 — Global Agile Operations Summit, Guangzhou.)

Platform query performance after adopting ClickHouse — the system.query_log table records every executed query: query — the full SQL text (filter this column by keyword to find relevant records); query_duration_ms — execution time; memory_usage — memory consumed; read_rows and read_bytes — rows and bytes read.

Practical tips:
• Choose the partition key before importing data.
• Apply ORDER BY along the partition during import.
• Watch how data volumes change when joining left and right tables.
• Decide whether to use a distributed setup.
• Monitor server CPU, memory fluctuations, and `system`.query_log.
• Prefer SSD disks for data storage.
• Reduce redundant storage of text data.
• Especially suited to scenarios with large data volumes and a controllable query rate, such as analytics and event-tracking log systems.

StarRocks application summary.
15 pages | 1.33 MB | 1 year ago

Тестирование ClickHouse, которого мы заслуживаем (Testing ClickHouse the Way We Deserve)
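The monitoring the talk describes boils down to scanning system.query_log rows for queries that exceed time or memory budgets. A minimal sketch of that check, run here against invented rows shaped like the query_log columns the slide names (query, query_duration_ms, memory_usage, read_rows):

```python
# Invented rows mimicking system.query_log; in production these come from
# a SELECT against the system table itself.
log_rows = [
    {"query": "SELECT 1", "query_duration_ms": 12,
     "memory_usage": 10_000, "read_rows": 1},
    {"query": "SELECT * FROM big_table", "query_duration_ms": 8_500,
     "memory_usage": 2_000_000_000, "read_rows": 450_000_000},
]

SLOW_MS = 3_000  # illustrative threshold, not a value from the talk
slow = [r["query"] for r in log_rows if r["query_duration_ms"] > SLOW_MS]
print(slow)  # -> ['SELECT * FROM big_table']
```

The same filter expressed in SQL against system.query_log gives an always-available view of slow queries without any external tooling.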
with PartitionManager() as pm: pm.partition_instances(node1, node2, port=9009)  # drop the connection; node1.query("INSERT INTO tt SELECT * FROM hdfs('hdfs://hdfs1:9000/tt', 'TSV')"); assert_with_retry(node2, "SELECT …

Performance-regression check: SELECT v_slow / v_fast AS ratio FROM ( SELECT PE.Names, query_duration_ms > 3000 AS slow, avg(PE.Values) AS value FROM system.query_thread_log WHERE query = '...' GROUP BY PE.Names, slow ) HAVING ratio > …
84 pages | 9.60 MB | 1 year ago

6. ClickHouse Practice at ZhongAn (ClickHouse在众安的实践)
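The regression query above compares average ProfileEvents values between slow (>3000 ms) and fast runs of the same statement; events whose slow/fast ratio blows up point at the bottleneck. The arithmetic can be shown on invented numbers:

```python
# Invented ProfileEvents samples, grouped by whether the run was slow.
profile = {
    "OSReadBytes": {"slow": [9.0e6, 1.1e7], "fast": [1.0e6]},
    "ContextLock": {"slow": [100.0], "fast": [95.0]},
}

def avg(xs):
    return sum(xs) / len(xs)

# Keep events whose average value grew more than 2x in the slow runs
# (the 2x cutoff is illustrative; the slide's HAVING threshold is cut off).
suspects = {
    name: avg(groups["slow"]) / avg(groups["fast"])
    for name, groups in profile.items()
    if avg(groups["slow"]) / avg(groups["fast"]) > 2
}
print(suspects)  # -> {'OSReadBytes': 10.0}
```

Here OSReadBytes is ten times higher in slow runs while ContextLock barely moves, so disk reads — not lock contention — explain the slowdown.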
…/2013/000000_0' | clickhouse-client --host=127.0.0.1 --port=10000 -u user --password password --query="INSERT INTO Insight_zhongan.baodan_yonghushuju FORMAT CSV"
Effect — single process: ~26 million records per minute, with the client using one core.

iostat -dmx 1: inspect disk I/O usage, refreshed every second.
ClickHouse commands:
• set send_logs_level = 'trace': see the detailed steps of SQL execution.
• Inspect memory usage, I/O, and other details by query_id: system flush logs; select ProfileEvents.Names as name, match(name, 'Bytes|Chars') … Values) : toString(ProfileEvents.Values) as value from system.query_log array join ProfileEvents where event_date = today() and type = 2 and query_id = '05ff4e7d-2b8c-4c41-b03d-094f9d8b02f2';
Thanks!
28 pages | 4.00 MB | 1 year ago

7. UDF in ClickHouse
Pipeline = Directed Acyclic Graph (DAG) of modules. Module = Input + Task + Output. Task = Query or external program. Query = "CREATE TABLE ... AS SELECT ...". A Database System and a ML Pipeline.
29 pages | 1.54 MB | 1 year ago

3. Multidimensional Analysis Practice with ClickHouse in the Data Warehouse — 朱元 (数仓ClickHouse多维分析应用实践)
Data presentation + multidimensional analysis use the open-source reporting system Davinci (https://github.com/edp963/davinci).

Issues encountered:
1. "Memory limit (for query) exceeded" — fixed by configuring max_bytes_before_external_sort and max_bytes_before_external_group_by in users.xml.
2. …
14 pages | 3.03 MB | 1 year ago
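The fix described in item 1 lets ClickHouse spill aggregation and sort state to disk once they grow past a byte threshold, instead of aborting with the memory-limit error. A hedged sketch of what that users.xml profile entry might look like — the 10 GB value is illustrative, not the deck's actual setting:

```xml
<clickhouse>
    <profiles>
        <default>
            <!-- Spill GROUP BY / ORDER BY state to disk past this many bytes
                 (illustrative 10 GB threshold). -->
            <max_bytes_before_external_group_by>10000000000</max_bytes_before_external_group_by>
            <max_bytes_before_external_sort>10000000000</max_bytes_before_external_sort>
        </default>
    </profiles>
</clickhouse>
```

These thresholds are typically set below max_memory_usage so the spill kicks in before the hard limit is hit.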
15 items in total