IT文库

Category: Database (15) · ClickHouse (15)
Language: English (6) · Chinese, Simplified (6) · Russian (3)
Format: PDF (14) · PPT (1)
The search took 0.012 seconds and found about 15 matching results.
  • PDF · 2. 腾讯 clickhouse实践 _2019丁晓坤&熊峰 (Tencent ClickHouse practice, 2019; Ding Xiaokun and Xiong Feng)

    Excerpt (text spilled from an architecture diagram of iData 2 and the old user-profile system): components include Storage Scheduler, Data Stats Gather, SQL Parser, Query Optimizer, Execution Plan and Bitcode Emitter, plus, per executor on the DataNodes, Bitmap Filter Builder, Dynamic Bitmap Index Cache, Bitmap Index Generator, Execute Engine, Data Mapper and Aggregate Merger.
    0 码力 | 26 pages | 3.58 MB | 1 year ago
  • PDF · 8. Continue to use ClickHouse as TSDB

    Excerpt (slide fragments on the time-series-oriented model): a table declared with ENGINE = Null, CREATE MATERIALIZED VIEW demonstration.test_insert, and CREATE TABLE demonstration.test (`time_series_interval` ...); read and write clients drive test_insert, test_query, insert_view and calc_test_query.
    0 码力 | 42 pages | 911.10 KB | 1 year ago
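    A sketch of the general Null-table plus materialized-view ingestion pattern the fragments suggest; demonstration.test and demonstration.test_insert are the names visible in the excerpt, while test_null, the column set and the MergeTree key are assumptions:

        -- Staging table: accepts INSERTs but stores nothing (assumed name).
        CREATE TABLE demonstration.test_null
        (
            metric_name String,
            time        DateTime,
            value       Float64
        ) ENGINE = Null;

        -- Storage table (name from the excerpt; schema assumed).
        CREATE TABLE demonstration.test
        (
            metric_name String,
            time        DateTime,
            value       Float64
        ) ENGINE = MergeTree
        ORDER BY (metric_name, time);

        -- The view fires on every INSERT into the Null table and writes into storage.
        CREATE MATERIALIZED VIEW demonstration.test_insert TO demonstration.test AS
        SELECT metric_name, time, value
        FROM demonstration.test_null;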
  • PDF · ClickHouse in Production

    Excerpt: a load from EventLogHDFS (Elapsed: 106.350 sec, 28.75 mln rows processed); querying the local copy with countIf(CounterType='Show') AS SumShows and countIf(CounterType='Click') AS SumClicks grouped by BannerID; external dictionaries, covering the update policy (uniformly random time within [MIN, MAX], some sources support an invalidate query) and the layouts Flat, Hashed, Cache, ComplexKeyCache and ComplexKeyHashed.
    0 码力 | 100 pages | 6.86 MB | 1 year ago
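    The per-banner aggregate from the excerpt, reassembled into one runnable statement; table and column names are exactly as they appear in the slide fragment:

        SELECT
            countIf(CounterType = 'Show')  AS SumShows,
            countIf(CounterType = 'Click') AS SumClicks,
            BannerID
        FROM EventLogLocal
        GROUP BY BannerID
        ORDER BY SumClicks DESC
        LIMIT 3;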
  • PDF · 1. Machine Learning with ClickHouse

    Excerpt: pulling data into pandas over the HTTP interface (requests.get('http://127.0.0.1:8123?query=', data='select * from trips limit 1000 format TSVWithNames'), read through io.StringIO into pd.read_csv); SAMPLE x OFFSET y requires a sampling expression in the table definition, is optimized by the primary key, yields a fixed dataset for a fixed sample query, and works only for MergeTree; a full count() over trips_sample_time returns 432992321 after processing 432.99 million rows, while the same count with SAMPLE 1/3 OFFSET 1/3 returns 144330770 and reads correspondingly fewer rows.
    0 码力 | 64 pages | 1.38 MB | 1 year ago
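    A sketch of the SAMPLE x OFFSET y usage the excerpt describes; trips_sample_time is the table name from the slides, but its DDL is truncated there, so the columns and the cityHash64 sampling key below are assumptions:

        -- The sampling expression must appear in SAMPLE BY (MergeTree only)
        -- and be part of the primary key.
        CREATE TABLE trips_sample_time
        (
            pickup_datetime DateTime,
            fare_amount     Float32
        ) ENGINE = MergeTree
        ORDER BY (toDate(pickup_datetime), cityHash64(pickup_datetime))
        SAMPLE BY cityHash64(pickup_datetime);

        -- Full scan: processes every row.
        SELECT count() FROM trips_sample_time;

        -- Reads roughly the middle third of the data; the same offset always
        -- selects the same fixed subset, which helps reproducible ML samples.
        SELECT count()
        FROM trips_sample_time
        SAMPLE 1 / 3 OFFSET 1 / 3;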
  • PDF · 0. Machine Learning with ClickHouse

    Excerpt: identical to the entry above (apparently a second upload of the same deck): pandas reads over the HTTP interface and SAMPLE x OFFSET y on MergeTree tables.
    0 码力 | 64 pages | 1.38 MB | 1 year ago
  • PDF · 蔡岳毅-基于ClickHouse+StarRocks构建支撑千亿级数据量的高可用查询引擎 (Cai Yueyi: building a highly available query engine on ClickHouse and StarRocks for hundred-billion-row data volumes)

    Excerpt: servers can be added or removed at any time, with distributed deployment on Kubernetes; query performance after adopting ClickHouse is tracked through system.query_log, which records executed queries: query (the full SQL, filterable by keyword), query_duration_ms (execution time), memory_usage, and read_rows / read_bytes. ClickHouse notes: choose the partitioning field before importing, apply ORDER BY along the partitions during import, watch how data volumes change on both sides of a join, decide whether to go distributed, monitor CPU and memory swings together with system.query_log, prefer SSD storage, avoid redundant text columns, and use it for large data volumes with a controllable query rate (analytics, event-tracking log systems); a StarRocks application summary follows in the deck.
    0 码力 | 15 pages | 1.33 MB | 1 year ago
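    A hedged sketch of the kind of system.query_log check the deck describes; the columns are the ones named in the excerpt, while the keyword, the 1-second threshold and the filter on finished queries are illustrative:

        SELECT
            query_duration_ms,
            formatReadableSize(memory_usage) AS memory,
            read_rows,
            formatReadableSize(read_bytes)   AS read_volume,
            query
        FROM system.query_log
        WHERE event_date = today()
          AND type = 'QueryFinish'            -- only completed queries
          AND query LIKE '%user_profile%'     -- filter by a SQL keyword, as the slides suggest
          AND query_duration_ms > 1000
        ORDER BY query_duration_ms DESC
        LIMIT 20;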
  • PDF · Тестирование ClickHouse которого мы заслуживаем (The ClickHouse testing we deserve)

    Excerpt: an integration test that uses PartitionManager to drop the connection between node1 and node2 on port 9009 while node1 runs INSERT INTO tt SELECT * FROM hdfs('hdfs://hdfs1:9000/tt', 'TSV'), then asserts on node2 with retries; and a performance check that averages ProfileEvents from system.query_thread_log for slow (query_duration_ms > 3000) versus fast runs of the same query and keeps the counters whose slow/fast ratio exceeds a threshold.
    0 码力 | 84 pages | 9.60 MB | 1 year ago
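    A reassembly of the truncated slow/fast ProfileEvents comparison in the excerpt; the ARRAY JOIN, the outer aggregation and the ratio threshold of 2 are filled in by assumption, and the query uses the older Nested ProfileEvents layout of system.query_thread_log that the fragment implies:

        SELECT
            Names,
            anyIf(value, slow)     AS v_slow,
            anyIf(value, NOT slow) AS v_fast,
            v_slow / v_fast        AS ratio
        FROM
        (
            SELECT
                PE.Names                 AS Names,
                query_duration_ms > 3000 AS slow,
                avg(PE.Values)           AS value
            FROM system.query_thread_log
            ARRAY JOIN ProfileEvents AS PE
            WHERE query = '...'          -- the query text is elided in the slide
            GROUP BY PE.Names, slow
        )
        GROUP BY Names
        HAVING ratio > 2;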
  • PDF · 6. ClickHouse在众安的实践 (ClickHouse practice at ZhongAn)

    Excerpt: bulk loading by piping a Hive export ('/2013/000000_0') into clickhouse-client --query="INSERT INTO Insight_zhongan.baodan_yonghushuju FORMAT CSV", at roughly 26 million records per minute for a single client process on one core; iostat -dmx 1 to watch disk I/O every second; set send_logs_level = 'trace' to trace query execution steps; and, after system flush logs, reading the ProfileEvents of a given query_id from system.query_log (splitting byte/char counters via match(name, 'Bytes|Chars')) to inspect memory and I/O usage.
    0 码力 | 28 pages | 4.00 MB | 1 year ago
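    The per-query ProfileEvents lookup from the excerpt, reassembled; the fragment's ternary is written as if() here, the formatReadableSize branch is an assumption for the part the snippet cuts off, the query_id is the example value shown on the slide, and the Nested ProfileEvents layout matches the release the deck targets:

        SYSTEM FLUSH LOGS;

        SELECT
            ProfileEvents.Names AS name,
            if(match(name, 'Bytes|Chars'),
               formatReadableSize(ProfileEvents.Values),
               toString(ProfileEvents.Values)) AS value
        FROM system.query_log
        ARRAY JOIN ProfileEvents
        WHERE event_date = today()
          AND type = 2                          -- QueryFinish
          AND query_id = '05ff4e7d-2b8c-4c41-b03d-094f9d8b02f2';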
  • PDF · 7. UDF in ClickHouse

    Excerpt: a pipeline is a directed acyclic graph (DAG) of modules; a module is Input + Task + Output; a task is a query or an external program; a query takes the form "CREATE TABLE ... AS SELECT ...": a database system doubling as a ML pipeline.
    0 码力 | 29 pages | 1.54 MB | 1 year ago
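    In this model a pipeline task is just a statement of the form "CREATE TABLE ... AS SELECT ..."; a minimal illustrative step, with all table and column names hypothetical:

        -- One module: read the previous module's output table, write this module's output table.
        CREATE TABLE features_v1
        ENGINE = MergeTree
        ORDER BY user_id AS
        SELECT
            user_id,
            count()       AS events,
            uniq(page_id) AS distinct_pages
        FROM raw_events
        GROUP BY user_id;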
  • PDF · 3. 数仓ClickHouse多维分析应用实践-朱元 (Multidimensional analysis with ClickHouse in the data warehouse; Zhu Yuan)

    Excerpt: data presentation and multidimensional analysis use the open-source reporting system davinci (https://github.com/edp963/davinci); issue 1, "Memory limit (for query) exceeded", is resolved by setting max_bytes_before_external_sort and max_bytes_before_external_group_by in users.xml.
    0 码力 | 14 pages | 3.03 MB | 1 year ago
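    The excerpt's fix lets large GROUP BY / ORDER BY states spill to disk; the deck configures this in users.xml, but the same user-level settings can be sketched per session. The thresholds and the accompanying max_memory_usage line are illustrative, not taken from the slides:

        -- Spill aggregation / sort state to disk once it reaches ~10 GB,
        -- instead of failing with "Memory limit (for query) exceeded".
        SET max_bytes_before_external_group_by = 10000000000;
        SET max_bytes_before_external_sort     = 10000000000;
        -- Keep the hard per-query limit comfortably above the spill threshold.
        SET max_memory_usage                   = 20000000000;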
15 results in total; this page shows the first 10 (page 1 of 2).