Apache Cassandra™ 1.0 Documentation (February 16, 2012)
Table of contents (excerpt):
Column Names 55
Managing and Accessing Data in Cassandra 55
About Writes in Cassandra 55
About Compaction 55
About Transactions and Concurrency Control 55
About Inserts and Updates 56
About Deletes 72
compaction_preheat_key_cache 72
compaction_throughput_mb_per_sec 72
concurrent_compactors 72
concurrent_reads 72
concurrent_writes 72
flush_largest_memtables_at 73
in_memory_compaction_limit_in_mb
memtable_flush_queue_size 73
memtable_flush_writers 73
memtable_total_space_in_mb 73
multithreaded_compaction 73
reduce_cache_capacity_to 73
reduce_cache_sizes_at 73
sliced_buffer_size_in_kb 74
141 pages | 2.52 MB | 1 year ago
TiDB v5.3 Documentation
…#10980
– Fix the issue of poor scan performance because MVCC Deletion versions are not dropped by compaction filter GC #11248
• PD
– Fix the issue that PD incorrectly deletes peers that carry data and are in the pending state…
Configuration items (excerpt):
…written into RocksDB per second
gc.enable-compaction-filter – Whether to enable compaction filter
gc.compaction-filter-skip-version-check – …
{db-name}.max-open-files – The total number of files that RocksDB can open
{db-name}.compaction-readahead-size – The size of readahead during compaction
{db-name}…
2996 pages | 49.30 MB | 1 year ago
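The GC and RocksDB items in this excerpt are TiKV settings. A minimal tikv.toml sketch of how they could be set; the values are illustrative, and mapping the {db-name} placeholder to the [rocksdb] section is an assumption ({db-name} stands for one of TiKV's RocksDB instances):

```toml
# Illustrative tikv.toml fragment; values are examples, not recommendations.
[gc]
enable-compaction-filter = true              # drop GC-eligible MVCC versions during compaction
compaction-filter-skip-version-check = false # keep the cluster-version safety check

[rocksdb]                                    # one concrete {db-name} instance
max-open-files = 40960                       # total files RocksDB may keep open
compaction-readahead-size = "2MB"            # readahead used while compacting
```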
TiDB v5.2 Documentation
TiKV configuration file – storage.flow-control.soft-pending-compaction-bytes-limit (newly added): When the pending compaction bytes in KvDB reach this threshold, the flow control mechanism starts to reject some write requests and report errors; the default value is "192GB".
TiKV configuration file – storage.flow-control.hard-pending-compaction-bytes-limit (newly added): When the pending compaction bytes in KvDB reach this threshold, the flow control mechanism rejects all write requests and reports errors; the default value is "1024GB".
…reduces the impact on the stability of foreground writes. Specifically, when RocksDB compaction pressure accumulates, flow control is performed at the TiKV scheduler layer instead of the RocksDB layer…
2848 pages | 47.90 MB | 1 year ago
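Both thresholds sit under storage.flow-control in the TiKV configuration file. A hedged sketch using the defaults quoted here and in the Chinese v5.2 snippet further down:

```toml
# Sketch of the v5.2 flow control thresholds; defaults are as quoted in the snippets.
[storage.flow-control]
soft-pending-compaction-bytes-limit = "192GB"    # start rejecting some writes with errors
hard-pending-compaction-bytes-limit = "1024GB"   # reject all writes with errors
```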
TiDB v5.1 Documentation
TiKV configuration file – soft-pending-compaction-bytes-limit (modified): The soft limit on the pending compaction bytes. The default value is changed from "64GB" to "192GB".
…requests, TiKV Write Rate Limiter smooths the write traffic of TiKV background tasks such as GC and Compaction. The default value of the TiKV background task write rate limiter is "0MB". It is recommended to set…
…results when cloning the shared delta index concurrently
– Fix the potential panic that occurs when the Compaction Filter feature is enabled
– Fix the issue that TiFlash cannot resolve the lock fallen back from…
2745 pages | 47.65 MB | 1 year ago
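The snippet truncates before naming the rate limiter's configuration item. Assuming it refers to the TiKV I/O rate limiter exposed as storage.io-rate-limit (an assumption worth verifying against the v5.1 docs), enabling it might look like:

```toml
# Assumed mapping to [storage.io-rate-limit]; "0MB" (the default) leaves the limiter off.
[storage.io-rate-limit]
max-bytes-per-sec = "300MB"   # cap background I/O such as GC and compaction writes
```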
TiDB v5.4 Documentation
– Fix the issue that GC scan causes memory overflow #11410
– Fix the issue that RocksDB flush or compaction causes panic when the disk capacity is full #11224
• PD
– Fix the issue that Region statistics…
Configuration items (excerpt):
…written into RocksDB per second
gc.enable-compaction-filter – Whether to enable compaction filter
gc.compaction-filter-skip-version-check – …
…the total number of files that RocksDB can open
{db-name}.compaction-readahead-size – The size of readahead during compaction
{db-name}…
3650 pages | 52.72 MB | 1 year ago
TiDB v5.2 Chinese Manual (TiDB v5.2 中文手册)
TiKV configuration file – storage.flow-control.soft-pending-compaction-bytes-limit (newly added): When the pending compaction bytes in KvDB reach this threshold, the flow control mechanism starts rejecting some write requests and reports errors. The default value is "192GB".
TiKV configuration file – storage.flow-control.hard-pending-compaction-bytes-limit (newly added): When the pending compaction bytes in KvDB reach this threshold, the flow control mechanism rejects all write requests and reports errors. The default value is "1024GB".
2.2.1.3 Others
• Before upgrading, check whether the system variable tidb_evolve_plan_baselines is set to ON. If it is ON, change it to OFF; otherwise the upgrade will fail.
• Upgrading a v4.0 cluster to…
A new flow control mechanism is introduced to replace the previous RocksDB write stall mechanism. Compared with write stall, the new mechanism reduces the impact of flow control on the stability of foreground writes through the following improvements:
– When RocksDB compaction pressure accumulates, flow control is applied at the TiKV scheduler layer instead of the RocksDB layer, avoiding the raftstore stalls caused by RocksDB write stall and the resulting Raft…
2259 pages | 48.16 MB | 1 year ago
TiDB v7.6 Documentation
…information, see documentation.
• Support periodic full compaction (experimental) #12729 afeinberg
Starting from v7.6.0, TiDB supports periodic full compaction for TiKV. This feature serves as an enhancement…
…feature to perform data compaction during idle periods to improve the performance during peak periods. You can set the specific times that TiKV initiates periodic full compaction by configuring the TiKV…
…periodic full compaction by configuring periodic-full-compact-start-max-cpu. The default value of periodic-full-compact-start-max-cpu is 10%, which means that periodic full compaction is triggered…
6123 pages | 107.24 MB | 1 year ago
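periodic-full-compact-start-max-cpu is quoted in the snippet; the [raftstore] placement and the companion periodic-full-compact-start-times item are assumptions about the v7.6 configuration, so treat this as a sketch:

```toml
# Sketch: run full compaction only in the listed windows, and only while CPU is idle.
[raftstore]
periodic-full-compact-start-times = ["03:00", "23:00"]   # assumed item name
periodic-full-compact-start-max-cpu = 0.1                # 10%, the quoted default
```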
TiDB v6.1 Documentation
• Support non-transactional DML statements (only DELETE is supported)
• TiFlash supports on-demand data compaction
• MPP introduces the window function framework
• TiCDC supports replicating changelogs to Kafka
…statement, which provides a manual way to compact physical data based on the existing background compaction mechanism. With this statement, you can update data in earlier formats and improve read/write performance…
Configuration items (excerpt):
…written into RocksDB per second
gc.enable-compaction-filter – Whether to enable compaction filter
gc.compaction-filter-skip-version-check…
4487 pages | 84.44 MB | 1 year ago
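The truncated "statement" above is presumably the manual TiFlash compaction statement that accompanies on-demand data compaction in v6.1; a sketch with a hypothetical schema and table name:

```sql
-- Hypothetical table; compacts its TiFlash replica on demand.
ALTER TABLE test.employees COMPACT TIFLASH REPLICA;
```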
HBase Best Practices and Optimization (HBase最佳实践及优化)
Postgres Conference China 2016 (China User Conference)
• Data is untyped
• Poor performance for non-RowKey queries
• Column Family limitations (count, partition alignment)
• Regions consume significant resources, so the number of instances cannot be too large
• Quality of service cannot be guaranteed* – operations such as Split/Compaction have a severe impact on cluster performance
• Weak multi-tenant isolation
• Poor management of large memory (>100GB)
Kudu's design goals: trading off read and write performance*
Performance optimization goals:
• Read vs. write?
– Reads must merge HFiles, so the fewer files the better
– Writes need to reduce Compaction operations, so the more files the better
– Optimize for either reads or writes, not both
• Sequential vs. random?
• Reference values: per-RegionServer throughput >20MB/s
– Read throughput >3,000 ops/s; write throughput >10,000 ops/s
If each Region's Memstore is too small, disk flushes become frequent and HFiles pile up as many small files.
Major Compaction
• HBase schedules Major Compaction by time
– hbase.hregion.majorcompaction = 604800000 (the default, one week)
– hbase.hregion.majorcompaction…
45 pages | 4.33 MB | 1 year ago
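The slide gives the property inline; expressed in hbase-site.xml, the weekly schedule would look like the following sketch (604800000 ms is 7 days):

```xml
<!-- Illustrative hbase-site.xml fragment: time-based major compaction, weekly. -->
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>604800000</value>
</property>
```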
TiDB v5.3 Chinese Manual (TiDB v5.3 中文手册)
– Fix the panic caused by timeout when handling … requests #10852
– Fix the memory leak caused by the statistics thread's monitoring data #11195
– Fix the panic caused by fetching cgroup information on certain platforms #10980
– Fix the scan performance degradation caused by Compaction Filter GC failing to clear MVCC Deletion versions #11248
• PD
– Fix the issue that peers carrying data and in the pending state are incorrectly deleted because the configured replica count is exceeded…
Configuration items (excerpt):
…write-bytes-per-sec – The maximum number of bytes that can be written into RocksDB per second
gc.enable-compaction-filter – Whether to use compaction filter
gc.compaction-filter-skip-version-check – Whether to skip the compaction filter…
…the number of RocksDB background threads
{db-name}.max-open-files – The total number of files that RocksDB can open
{db-name}.compaction-readahead-size – The size of readahead during compaction
{db-name}.bytes-per-sync – The rate limit for asynchronous syncing
2374 pages | 49.52 MB | 1 year ago
355 results in total