Curve for CNCF Main: storage in the cloud (AWS EBS, AWS S3, AWS Glacier, Aliyun EBS, Aliyun OSS) or on-prem (bare metal, HDFS, OSS), turned into container-native storage • Database: database services orchestrated in the cloud • Data apps (middleware/big data/AI) • CurveFS can manage different storages (HDFS, OSS, EBS) underneath • Apps access data through a POSIX interface • Infrequently accessed data is moved to OSS … | 21 pages | 4.56 MB | 6 months ago
OID CND Asia Slide: CurveFS: access file data in storage pools through interfaces such as NFS, HDFS, and the POSIX API • manage multiple types of storage (object storage, HDFS storage, elastic block storage) • support both on-premise and public … | 24 pages | 3.47 MB | 6 months ago
Curve Distributed Storage Design: improve the high-performance 3-replica storage engine, with hybrid-disk support; 4. file storage supports persisting data to engines such as HDFS and rados; 2. Performance: 1. finish the RDMA/SPDK solution and release a stable version; 2. select, adapt, and tune higher-performance hardware; 3. optimize large-file read/write performance, optimize Raft, and reduce write amplification; 3. Features: 1. file storage supports recycle bin, lifecycle management, quotas, user permissions, etc.; 2. support protocols such as NFS, CIFS/SMB, and HDFS; 3. block storage supports creating volumes per storage pool; Curve … | 20 pages | 4.13 MB | 6 months ago
PingCAP TiDB & TiKV Introduction (OLTP): internal next-generation distributed processing frameworks, with papers published in 2012/2013, laid the theoretical and engineering foundation for next-generation distributed NewSQL; PingCAP built TiDB & TiKV on this basis. HBase, MapReduce, HDFS; TiDB, TiKV, NewSQL | TiDB: Google Spanner / F1 - The First NewSQL ● globally distributed / cross-datacenter replication ○ Paxos … | 21 pages | 613.54 KB | 6 months ago
Engineering Practice of Raft in Curve Storage: cloud computing platform built on OpenStack • underlying storage uses Ceph block storage • stability challenges • rapid growth of the Kubernetes compute platform • rapid growth of AI/big-data workloads • storage uses Ceph file storage / HDFS • cost and performance challenges; both Curve block storage and Curve file storage adopt the Raft protocol. Overall architecture: • integrates with OpenStack to provide high-performance block storage for cloud hosts • integrates with Kubernetes to provide access modes such as RWO and RWX … | 29 pages | 2.20 MB | 6 months ago
24 - The Way of Cloud-Native Middleware - Gao Lei: Yarn can currently only allocate resources based on the static resource reports from NodeManagers; it cannot schedule on dynamic resources and cannot properly support co-locating online and offline workloads. • OS images and deployment complexity slow down application releases: the images that virtual machines or bare-metal machines depend on bundle many software packages such as HDFS, Spark, Flink, and Hadoop, so system images are far larger than 10 GB, leading to oversized images, tedious image building, and long cross-region distribution cycles. Because of these problems, some big-data teams have to split requirements into image-based and non-image-based … | 22 pages | 4.39 MB | 6 months ago
TiDB v8.5 Documentation: '/path/in/hdfs'. Therefore, if you need to export a table named test, perform the following steps: 1. Run the following SQL statement in Hive: CREATE TABLE temp STORED AS PARQUET LOCATION '/path/in/hdfs' AS … After the data is successfully exported to the HDFS system, 2. export the parquet files to the local file system using the hdfs dfs -get command: hdfs dfs -get /path/in/hdfs /path/in/local. After the export is complete, if you need to delete the exported parquet files in HDFS, you can directly delete the temporary table (temp): DROP TABLE temp; 3. The parquet files exported from Hive might not have the .parquet … | 6730 pages | 111.36 MB | 10 months ago
TiDB v8.2 Documentation: (same snippet as the TiDB v8.5 Documentation entry above) | 6549 pages | 108.77 MB | 10 months ago
TiDB v8.3 Documentation: (same snippet as the TiDB v8.5 Documentation entry above) | 6606 pages | 109.48 MB | 10 months ago
TiDB v8.4 Documentation: (same snippet as the TiDB v8.5 Documentation entry above) | 6705 pages | 110.86 MB | 10 months ago
14 results in total