Apache Kyuubi 1.3.1 Documentation — "…access data and metadata from a storage system, e.g. Apache Hadoop HDFS [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html], with permissions." … "Ease of Use: You only need …" … "By default, Kyuubi is pre-built with an Apache Spark release inside at $KYUUBI_HOME/externals." … Components table (Components | Role | Optional | Version | Remarks) — HDFS: Distributed File System, Optional, version referenced by Spark; the Hadoop Distributed File System is part of the Hadoop framework, used to store and process the datasets, and you can interact with any Spark-compatible version of HDFS. Hive Metastore: Optional, version referenced by Spark. (199 pages, 4.44 MB, 1 year ago)
Apache Kyuubi 1.3.0 Documentation — same excerpts as the 1.3.1 entry above. (199 pages, 4.42 MB, 1 year ago)
Apache Kyuubi 1.4.1 Documentation — "…access data and metadata from a storage system, e.g. Apache Hadoop HDFS [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html], with permissions." … "Ease of Use: You only need …" … Components table (Components | Role | Optional | Version | Remarks) — …: Required, version 3.0.0 and above (by default the Kyuubi binary release is delivered without a Spark tarball). HDFS: Distributed File System, Optional, version referenced by Spark; the Hadoop Distributed File System is part of the Hadoop framework, used to store and process the datasets, and you can interact with any Spark-compatible version of HDFS. Hive Metastore: Optional, version referenced by Spark. (233 pages, 4.62 MB, 1 year ago)
Apache Kyuubi 1.4.0 Documentation — same excerpts as the 1.4.1 entry above. (233 pages, 4.62 MB, 1 year ago)
Apache Kyuubi 1.5.1 Documentation — "…access data and metadata from a storage system, e.g. Apache Hadoop HDFS [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html], with permissions." … "Ease of Use: You only need …" … Components table (Components | Role | Optional | Version | Remarks) — …: Optional, version 1.14.0 and above (by default the Kyuubi binary release is delivered without a Flink tarball). HDFS: Distributed File System, Optional, version referenced by Spark; the Hadoop Distributed File System is part of the Hadoop framework, used to store and process the datasets, and you can interact with any Spark-compatible version of HDFS. Hive Metastore: Optional, version referenced by Spark. (267 pages, 5.80 MB, 1 year ago)
Apache Kyuubi 1.5.2 Documentation — same excerpts as the 1.5.1 entry above. (267 pages, 5.80 MB, 1 year ago)
Apache Kyuubi 1.5.0 Documentation — same excerpts as the 1.5.1 entry above. (267 pages, 5.80 MB, 1 year ago)
Apache Kyuubi 1.7.0-rc0 Documentation — "…data in various formats (Parquet, CSV, JSON, text) in your data lake in cloud storage or an on-prem HDFS cluster." … "Lakehouse formation and analytics: easily build an ACID table storage layer via Hudi, Iceberg…" … "…possible to visit a different Hive metastore server instance. Similarly, this works for other services like HDFS and YARN too. Limitation: as most Hive configurations are final and unmodifiable in Spark at runtime…" … Configuration table — "…all the engine events go for the built-in JSON logger. Local Path: start with 'file://'; HDFS Path: start with 'hdfs://'" (type string, since 1.3.0); kyuubi.engine.event.loggers (default SPARK): "A comma-separated list of engine …". (404 pages, 5.25 MB, 1 year ago)
Apache Kyuubi 1.7.0 Documentation — same excerpts as the 1.7.0-rc0 entry above. (400 pages, 5.25 MB, 1 year ago)
Apache Kyuubi 1.7.0-rc1 Documentation — same excerpts as the 1.7.0-rc0 entry above. (400 pages, 5.25 MB, 1 year ago)
55 results in total
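The engine-event settings excerpted in the 1.7.0 entries above would normally be placed in $KYUUBI_HOME/conf/kyuubi-defaults.conf. A minimal sketch, assuming the built-in JSON logger: the kyuubi.engine.event.json.log.path key name is recalled from Kyuubi's configuration reference and may differ by version, and the HDFS address and path are purely illustrative.

```properties
# Route engine events to the built-in JSON event logger
# (kyuubi.engine.event.loggers is quoted in the search snippets above).
kyuubi.engine.event.loggers=JSON
# Destination for the JSON logger: local paths start with file://,
# HDFS paths with hdfs://. Key name and path below are assumptions.
kyuubi.engine.event.json.log.path=hdfs://namenode:8020/tmp/kyuubi/events
```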
 













