TiDB v5.3 Documentation
…#10980 • Fix the issue of poor scan performance because MVCC Deletion versions are not dropped by compaction filter GC #11248 • PD – Fix the issue that PD incorrectly deletes peers that hold data and are in pending state… Configuration items: `…write-bytes-per-sec` (the maximum number of bytes that can be written into RocksDB per second); `gc.enable-compaction-filter` (whether to enable the compaction filter); `gc.compaction-filter-skip-version-check`…; `…open-files` (the total number of files that RocksDB can open); `{db-name}.compaction-readahead-size` (the size of readahead during compaction); `{db-name}…`
2996 pages | 49.30 MB | 1 year ago
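The GC items this snippet lists live in the TiKV configuration file. A minimal, illustrative tikv.toml sketch, assuming only the item names shown above; the values are examples, not documented defaults:

```toml
# Illustrative sketch of the GC-related TiKV configuration items named
# in this search result; values are examples, not documented defaults.
[gc]
# Whether to enable the compaction filter (GC performed during RocksDB
# compaction rather than by a separate scan).
enable-compaction-filter = true
# Whether to skip the compaction filter's version check (assumed meaning;
# the snippet truncates the description).
compaction-filter-skip-version-check = false
```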
TiDB v5.2 Documentation
…TiKV configuration file: `storage.flow-control.soft-pending-compaction-bytes-limit` (newly added): when the pending compaction bytes in KvDB reach this threshold, the flow control mechanism starts to reject some write requests… `storage.flow-control.hard-pending-compaction-bytes-limit` (newly added): when the pending compaction bytes in KvDB reach this threshold, the flow control mechanism rejects all write requests… …reduces the impact on the stability of foreground writes. Specifically, when the pressure of RocksDB compaction accumulates, flow control is performed at the TiKV scheduler layer instead of the RocksDB layer…
2848 pages | 47.90 MB | 1 year ago
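The two flow-control thresholds in this entry are TiKV configuration items. A minimal tikv.toml sketch, assuming the item names from the snippet; the values shown are the defaults ("192GB" and "1024GB") quoted elsewhere in these search results, and the fragment layout is illustrative:

```toml
# Illustrative tikv.toml fragment for the v5.2 flow control mechanism.
# Item names come from the entry above; values are the documented
# defaults as quoted in these search results.
[storage.flow-control]
# Start rejecting some write requests once the pending compaction bytes
# in KvDB reach this soft threshold.
soft-pending-compaction-bytes-limit = "192GB"
# Reject all write requests once the pending compaction bytes reach
# this hard threshold.
hard-pending-compaction-bytes-limit = "1024GB"
```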
TiDB v5.1 Documentation
…TiKV configuration file: `soft-pending-compaction-bytes-limit` (modified): the soft limit on the pending compaction bytes; the default value is changed from "64GB" to "192GB"… …requests, TiKV Write Rate Limiter smoothes the write traffic of TiKV background tasks such as GC and compaction. The default value of the TiKV background task write rate limiter is "0MB". It is recommended to set… …results when cloning a shared delta index concurrently – Fix the potential panic that occurs when the Compaction Filter feature is enabled – Fix the issue that TiFlash cannot resolve locks fallen back from…
2745 pages | 47.65 MB | 1 year ago
TiDB v5.4 Documentation
…Fix the issue that GC scan causes memory overflow #11410 – Fix the issue that RocksDB flush or compaction causes panic when the disk capacity is full #11224 • PD – Fix the issue that Region statistics… Configuration items: `…write-bytes-per-sec` (the maximum number of bytes written into RocksDB per second); `gc.enable-compaction-filter` (whether to enable the compaction filter); `gc.compaction-filter-skip-version-check`…; …the total number of files that RocksDB can open…; `{db-name}.compaction-readahead-size` (the size of readahead during compaction); `{db-name}…`
3650 pages | 52.72 MB | 1 year ago
TiDB v5.2 中文手册 (Chinese manual)
…`storage.flow-control.soft-pending-compaction-bytes-limit` (newly added): when the pending compaction bytes in KvDB reach this threshold, the flow control mechanism starts to reject some write requests and reports errors; the default value is "192GB". TiKV configuration file: `storage.flow-control.hard-pending-compaction-bytes-limit` (newly added): when the pending compaction bytes in KvDB reach this threshold, the flow control mechanism rejects all write requests and reports errors; the default value is "1024GB". 2.2.1.3 Others: before upgrading, check whether the system variable `tidb_evolve_plan_baselines` is set to ON; if it is ON, change it to OFF, otherwise the upgrade will fail. Upgrading a v4.0 cluster to… …introduces a new flow control mechanism to replace the previous RocksDB write stall mechanism. Compared with write stall, the new mechanism reduces the impact of flow control on foreground write stability: when RocksDB compaction pressure accumulates, flow control is performed at the TiKV scheduler layer instead of the RocksDB layer, avoiding the raftstore stalls caused by RocksDB write stall and the resulting Raft…
2259 pages | 48.16 MB | 1 year ago
TiDB v7.6 Documentation
For more information, see documentation. • Support periodic full compaction (experimental) #12729 @afeinberg. Starting from v7.6.0, TiDB supports periodic full compaction for TiKV. This feature is an enhancement that performs data compaction during idle periods to improve performance during peak periods. You can set the specific times at which TiKV initiates periodic full compaction by configuring the TiKV… …periodic full compaction by configuring `periodic-full-compact-start-max-cpu`. The default value of `periodic-full-compact-start-max-cpu` is 10%, which means that periodic full compaction is triggered…
6123 pages | 107.24 MB | 1 year ago
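v7.6 exposes periodic full compaction through the TiKV configuration file. A hedged tikv.toml sketch: the snippet above confirms only `periodic-full-compact-start-max-cpu` and its 10% default; the schedule item's name and the `[raftstore]` section placement are assumptions:

```toml
# Illustrative tikv.toml fragment for periodic full compaction (v7.6,
# experimental). `periodic-full-compact-start-times` is an assumed item
# name, and the [raftstore] section is an assumed placement; the entry
# above confirms only `periodic-full-compact-start-max-cpu`.
[raftstore]
# Wall-clock times at which TiKV may initiate a full compaction
# (example values).
periodic-full-compact-start-times = ["03:00", "23:00"]
# Start periodic full compaction only while CPU utilization is below
# this fraction (documented default: 10%).
periodic-full-compact-start-max-cpu = 0.1
```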
TiDB v6.1 Documentation
• Support non-transactional DML statements (only DELETE is supported) • TiFlash supports on-demand data compaction • MPP introduces the window function framework • TiCDC supports replicating changelogs to Kafka… …statement, which provides a manual way to compact physical data based on the existing background compaction mechanism. With this statement, you can update data in earlier formats and improve read/write performance… Configuration items: `…write-bytes-per-sec` (…written into RocksDB per second); `gc.enable-compaction-filter` (whether to enable the compaction filter); `gc.compaction-filter-skip-version-check`…
4487 pages | 84.44 MB | 1 year ago
TiDB v5.3 中文手册 (Chinese manual)
…Fix the panic caused by request timeouts #10852 – Fix the memory leak caused by the statistics thread's monitoring data #11195 – Fix the panic when getting cgroup information on some platforms #10980 – Fix the issue that Compaction Filter GC cannot clear MVCC Deletion versions, degrading scan performance #11248 • PD – Fix the issue that peers holding data and in pending state are wrongly deleted because the number of replicas exceeds the configured count… Configuration items: `…write-bytes-per-sec` (the maximum number of bytes that can be written into RocksDB per second); `gc.enable-compaction-filter` (whether to use the compaction filter); `gc.compaction-filter-skip-version-check` (whether to skip the compaction filter…); …the number of RocksDB background threads…; `{db-name}.max-open-files` (the total number of files that RocksDB can open); `{db-name}.compaction-readahead-size` (the readahead size during compaction); `{db-name}.bytes-per-sync` (the rate limit for asynchronous syncing)…
2374 pages | 49.52 MB | 1 year ago
TiDB v8.3 Documentation
…accidentally delete valid data #17258 @hbisheng • Fix the issue that Ingestion picked level and Compaction Job Size(files) are displayed incorrectly in the TiKV dashboard in Grafana #15990 @Connor1996… Note that it is recommended to reserve 20% of storage space, because background tasks such as compaction and snapshot replication also consume a portion of the storage space. 6.3.4 Change configuration… …configurable. On key rotation, TiKV does not rewrite all existing files to replace the key; instead, RocksDB compactions are expected to rewrite old data into new data files with the most recent data key, if the cluster…
6606 pages | 109.48 MB | 10 months ago
TiDB v8.5 Documentation
…to spill-dir, ensuring continuous operation of the system #17356 @LykxSassinator • Optimize the compaction trigger mechanism of RocksDB to accelerate disk space reclamation when handling a large number… Note that it is recommended to reserve 20% of storage space, because background tasks such as compaction and snapshot replication also consume a portion of the storage space. 6.3.4 Change configuration… …regularly update the metadata. Because the time interval between etcd's MVCC and PD's default compaction is one hour, the amount of PD storage that TiCDC uses is proportional to the amount of metadata…
6730 pages | 111.36 MB | 10 months ago
32 results in total · Pages: 1 2 3 4













