TiDB v5.2 Documentation
…of read and write for TiFlash
• TiKV introduces a new flow control mechanism to replace the previous RocksDB write stall mechanism, improving the stability of TiKV flow control
• Simplify the operation…
From the configuration-change table (TiKV configuration file; items newly added):
• storage.flow-control.enable — Determines whether to enable the flow control mechanism. The default value is true.
• …threshold — When the number of kvDB memtables reaches this threshold, the flow control mechanism starts to work. The default value is 5.
• storage.flow-control…
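The memtable-threshold behavior described above can be sketched as a simple admission check. This is a minimal illustration only, not TiKV's actual implementation; the class name `FlowController` and the method `should_throttle` are hypothetical, while the defaults (`enable=True`, threshold 5) follow the documented configuration:

```python
class FlowController:
    """Toy model of a memtable-count-based flow control gate.

    Mirrors the documented behavior: flow control only kicks in once
    the number of kvDB memtables reaches the configured threshold.
    """

    def __init__(self, enable=True, memtables_threshold=5):
        # Defaults follow the documented config: enable=true, threshold=5.
        self.enable = enable
        self.memtables_threshold = memtables_threshold

    def should_throttle(self, memtable_count):
        # With flow control disabled, writes are never throttled here.
        if not self.enable:
            return False
        return memtable_count >= self.memtables_threshold


fc = FlowController()
print(fc.should_throttle(4))  # → False (below threshold)
print(fc.should_throttle(5))  # → True (threshold reached, flow control starts)
```

In the real system, throttling writes earlier and more gradually is what avoids the abrupt stalls of the RocksDB write stall mechanism this replaces.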
TiDB v5.3 Documentation
…might panic in case of a disk fully-written error, TiKV introduces a two-level threshold defense mechanism to protect the remaining disk space from being exhausted by excess traffic. Additionally, the mechanism…
…implementation of the Raft consensus algorithm. As a columnar storage extension of TiKV, TiFlash replicates data from TiKV in real time according to the Raft Learner consensus algorithm, which ensures…
…by the system, measured in KiB.
• The setting of vm.min_free_kbytes affects the memory reclaim mechanism. Setting it too large reduces the available memory, while setting it too small might cause memory…
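The idea of a two-level threshold defense can be illustrated with a small decision function. This is a hypothetical sketch of the general technique; the limit values and the action names are illustrative and are not TiKV's real configuration:

```python
def disk_action(available_bytes, soft_limit, hard_limit):
    """Two-level disk-space defense (illustrative):
    below the soft limit, reject ordinary foreground writes;
    below the hard limit, allow only operations that free space."""
    assert hard_limit < soft_limit, "hard limit must be the stricter level"
    if available_bytes <= hard_limit:
        return "reject-all-but-recovery"
    if available_bytes <= soft_limit:
        return "reject-normal-writes"
    return "accept"


GiB = 1 << 30
print(disk_action(50 * GiB, soft_limit=10 * GiB, hard_limit=2 * GiB))  # → accept
```

The point of two levels is graceful degradation: the soft threshold sheds new traffic early, so the hard threshold (and a panic on a full disk) is rarely reached.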
TiDB v5.1 Documentation
…by the system, measured in KiB.
• The setting of vm.min_free_kbytes affects the memory reclaim mechanism. Setting it too large reduces the available memory, while setting it too small might cause memory…
…a concurrency conflict is found. Setting tidb_disable_txn_auto_retry to off turns on the automatic retry mechanism after meeting a transaction conflict, which can prevent Sysbench from quitting because of the transaction…
Note: The PD Client in TiKV caches the list of PD nodes. The current version of TiKV has a mechanism to automatically and regularly update PD nodes, which can help mitigate the issue of an expired…
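The auto-retry behavior that setting `tidb_disable_txn_auto_retry` to `off` enables can be approximated by an optimistic-retry loop: re-run the transaction when commit hits a write conflict instead of surfacing the error to the client. A minimal sketch, not TiDB internals; `WriteConflict`, `run_with_retry`, and the retry budget are made-up names:

```python
class WriteConflict(Exception):
    """Stand-in for a commit-time write conflict in an optimistic txn."""


def run_with_retry(txn, max_retries=3):
    """Re-run an optimistic transaction on write conflict, up to
    max_retries extra attempts, instead of failing immediately."""
    for attempt in range(max_retries + 1):
        try:
            return txn()
        except WriteConflict:
            if attempt == max_retries:
                raise  # budget exhausted: surface the conflict


attempts = {"n": 0}

def txn():
    # Simulated workload: conflicts twice, then commits.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise WriteConflict()
    return "committed"


print(run_with_retry(txn))  # → committed (on the third attempt)
```

Note that blind retry can violate snapshot expectations under some isolation requirements, which is one reason such a mechanism is opt-in.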
TiDB v5.4 Documentation
…range through a session variable. TiDB is a multi-replica distributed database based on the Raft consensus algorithm. In the face of high-concurrency and high-throughput application scenarios, TiDB can scale…
– …operations do not need to wait for secondary locks to be resolved #11402
– Add a disk protection mechanism to avoid panic caused by disk space drainage #10537
– Support archiving and rotating logs #11651
…implementation of the Raft consensus algorithm. As a columnar storage extension of TiKV, TiFlash replicates data from TiKV in real time according to the Raft Learner consensus algorithm, which ensures…
TiDB and TiFlash Extensions: Toward a True HTAP Platform (韦万)
…processing
  ○ Based on ClickHouse with tons of proprietary modifications
● Data sync via extended Raft consensus algorithm
  ○ Strong consistency
  ○ Trivial overhead
● Clear workload isolation for not impacting…
“…who would you save?” “Why not both?”
Columnstore vs Rowstore
● TiDB replicates log via Raft consensus protocol
● TiFlash replicates data in columnstore via Raft Learner
● Learner is a special read-only… the same design
Scalability
● Perfect Resource Isolation
● Data rebalance based on the “label” mechanism
  ○ Dedicated nodes for TiFlash / Columnstore
  ○ Nodes are differentiated by “label”
● Computation…
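The “label” mechanism the slides describe, dedicated columnstore nodes selected by a node label, can be sketched as a simple placement filter. Illustrative only: the node names, the `engine` label key, and the `nodes_for` helper are assumptions for the example, not the actual placement rules:

```python
def nodes_for(replica_kind, nodes):
    """Select placement targets by label: columnstore (TiFlash-style)
    replicas go only to nodes labeled engine=tiflash; rowstore
    replicas go only to engine=tikv nodes."""
    want = "tiflash" if replica_kind == "columnstore" else "tikv"
    return [n["name"] for n in nodes if n["labels"].get("engine") == want]


cluster = [
    {"name": "n1", "labels": {"engine": "tikv"}},
    {"name": "n2", "labels": {"engine": "tikv"}},
    {"name": "n3", "labels": {"engine": "tiflash"}},
]
print(nodes_for("columnstore", cluster))  # → ['n3']
```

Because placement is label-driven, analytical replicas land only on dedicated hardware, which is how the deck's “perfect resource isolation” between OLTP and OLAP workloads is achieved without a separate sync pipeline.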
TiDB v6.1 Documentation
…which provides a manual way to compact physical data based on the existing background compaction mechanism. With this statement, you can update data in earlier formats and improve read/write performance…
…implementation of the Raft consensus algorithm. As a columnar storage extension of TiKV, TiFlash replicates data from TiKV in real time according to the Raft Learner consensus algorithm, which ensures…
…type if you store values larger than 2038. Performance considerations / TiDB GC mechanism: TiDB does not delete the data immediately after you run the DELETE statement. Instead, it marks…
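The delete-then-GC behavior excerpted above (DELETE only marks rows; a later GC pass physically removes versions that are no longer visible) can be modeled with a toy MVCC store. This is a sketch of the general tombstoning technique under assumed names (`MvccStore`, `gc`, `safe_point`), not TiDB's storage format:

```python
class MvccStore:
    """Toy MVCC store: delete() writes a tombstone version; gc()
    drops history older than the GC safe point."""

    def __init__(self):
        self.versions = {}  # key -> list of (ts, value-or-None)

    def put(self, key, ts, value):
        self.versions.setdefault(key, []).append((ts, value))

    def delete(self, key, ts):
        # A delete is just another version: a tombstone, not removal.
        self.put(key, ts, None)

    def gc(self, safe_point):
        for key, vs in list(self.versions.items()):
            # Keep the newest version at or before safe_point (still
            # readable by old snapshots' successors) plus anything newer.
            older = [v for v in vs if v[0] <= safe_point]
            newer = [v for v in vs if v[0] > safe_point]
            keep = ([max(older, key=lambda v: v[0])] if older else []) + newer
            if keep and all(v[1] is None for v in keep):
                del self.versions[key]  # only tombstones survive: drop key
            else:
                self.versions[key] = keep


s = MvccStore()
s.put("k", 1, "v1")
s.delete("k", 2)     # row is marked deleted, data still on disk
s.gc(safe_point=3)   # GC pass finally reclaims the space
print("k" in s.versions)  # → False
```

This separation is why disk space is not reclaimed at DELETE time: reclamation waits until the GC safe point guarantees no reader can still need the old versions.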
TiDB v6.5 Documentation
…failure within one hour after BR exits, the snapshot data to be backed up might be recycled by the GC mechanism, causing the backup to fail. For more information, see documentation.
• PITR performance improved…
Deprecated feature: starting from v6.5.0, the AMEND TRANSACTION mechanism introduced in v4.0.7 is deprecated and replaced by Metadata Lock.
Improvements (TiDB):
• …INFORMATION_SCHEMA.COLUMNS consistent with MySQL #25472 @hawkingrei
• Optimize the TiDB probing mechanism for TiFlash nodes in the TiFlash MPP mode to mitigate the performance impact when nodes are abnormal…
TiDB v7.5 Documentation
…implementation of the Raft consensus algorithm. As a columnar storage extension of TiKV, TiFlash replicates data from TiKV in real time according to the Raft Learner consensus algorithm, which ensures…
…type if you store values larger than 2038. Performance considerations / TiDB GC mechanism: TiDB does not delete the data immediately after you run the DELETE statement. Instead, it marks…
…time point will not be used again, so TiDB can safely clean it up. For more information, see GC mechanism. Update statistical information: TiDB uses statistical information to determine index…
TiDB v7.6 Documentation
…statement in scenarios with large datasets #48301 @Leavrth
• Refactor the BR exception handling mechanism to increase tolerance for unknown errors #47656 @3pointer
• TiCDC: improve the performance…
…implementation of the Raft consensus algorithm. As a columnar storage extension of TiKV, TiFlash replicates data from TiKV in real time according to the Raft Learner consensus algorithm, which ensures…
…type if you store values larger than 2038. Performance considerations / TiDB GC mechanism: TiDB does not delete the data immediately after you run the DELETE statement. Instead, it marks…
TiDB v7.1 Documentation
• Support the checkpoint mechanism for Fast Online DDL to improve fault tolerance and automatic recovery capability #42164 @tangenta. TiDB v7.1.0 introduces a checkpoint mechanism for Fast Online DDL, which…
• …hotspot scheduler when the storage engine is raft-kv2 #6297 @bufferflies
• Add a leader health check mechanism: when the PD server where the etcd leader is located cannot be elected as the leader, PD actively…
• …importing data #42836 @okJiang
• Add a retry mechanism when encountering an unknown RPC error during data import #43291 @D3Hunter
• Enhance the retry mechanism for Region jobs #43682 @lance6716
Bug…
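The checkpoint idea behind Fast Online DDL's fault tolerance, persisting progress periodically so a restarted worker resumes instead of starting over, can be sketched generically. Purely illustrative: `process_with_checkpoint`, the `done` counter, and the in-memory checkpoint dict are assumptions, not TiDB's DDL code (which persists checkpoints durably):

```python
def process_with_checkpoint(items, checkpoint, work):
    """Resume from the last persisted position and advance the
    checkpoint as units complete, so a crash only repeats the
    in-flight unit rather than the whole job."""
    start = checkpoint.get("done", 0)
    for i in range(start, len(items)):
        work(items[i])
        checkpoint["done"] = i + 1  # persist progress after each unit
    return checkpoint["done"]


processed = []
ckpt = {"done": 2}  # simulate a restart: 2 units already finished
process_with_checkpoint(["a", "b", "c", "d"], ckpt, processed.append)
print(processed)  # → ['c', 'd'] (earlier units are not redone)
```

The retry mechanisms listed above for data import are the complementary half of the same design: checkpoints bound how much work a failure loses, and retries absorb transient errors without operator intervention.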
163 results in total.