Apache Kyuubi 1.3.0 Documentation (129 pages, 6.15 MB, 1 year ago): … the rest of the prerequisites inside already. Additionally, if you want to work with other Spark-compatible systems or plugins, you only need to take care of them as you would when using them with regular Spark applications. … the PID file of the Kyuubi server instance. • work: the root of the working directories of all the forked sub-processes, a.k.a. SQL engines. … Tips: turning on AQE by default can significantly improve the user experience; the other sub-features are all enabled, advisoryPartitionSizeInBytes targets the HDFS block size, and minPartitionNum …
Apache Kyuubi 1.3.1 Documentation (129 pages, 6.16 MB, 1 year ago): same excerpt as the 1.3.0 entry above.
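The AQE tip quoted in several excerpts here maps onto a handful of standard Spark SQL settings. The sketch below is illustrative only: the app name, the local master, the 128m value, and the choice to set the configs on the SparkSession builder (rather than in kyuubi-defaults.conf or spark-defaults.conf) are assumptions, not taken from these documents.

```scala
import org.apache.spark.sql.SparkSession

// Illustrative sketch only: names and values are assumptions, not Kyuubi defaults.
object AqeDefaultsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("aqe-defaults-sketch")      // hypothetical app name
      .master("local[*]")                  // so the sketch runs without a cluster
      .config("spark.sql.adaptive.enabled", "true") // turn AQE on by default
      .config("spark.sql.adaptive.advisoryPartitionSizeInBytes", "128m") // ~ one HDFS block
      // Spark 3.1-era key name; later Spark versions may rename or drop it.
      .config("spark.sql.adaptive.coalescePartitions.minPartitionNum", "1")
      .getOrCreate()

    // Sanity check that the flag took effect.
    spark.sql("SET spark.sql.adaptive.enabled").show(truncate = false)
    spark.stop()
  }
}
```

Whether such settings belong in kyuubi-defaults.conf, spark-defaults.conf, or per-session overrides depends on the deployment; the excerpts above state the tip, not where to apply it.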
Apache Kyuubi 1.3.0 Documentation (199 pages, 4.42 MB, 1 year ago): … server inside for non-production use. Additionally, if you want to work with other Spark-compatible systems or plugins, you only need to take care of them as you would when using them with regular Spark applications. … the PID file of the Kyuubi server instance. work: the root of the working directories of all the forked sub-processes, a.k.a. SQL engines. … Running Kyuubi: as mentioned above, for a quick start deployment … kyuubi.engine.share.level (string, since 1.2.0): … SERVER: the App will be shared by Kyuubi servers. kyuubi.engine.share.level.sub.domain: allow end-users to create a sub-domain for the share …
Apache Kyuubi 1.3.1 Documentation (199 pages, 4.44 MB, 1 year ago): same excerpt as the 1.3.0 entry above.
Apache Kyuubi 1.4.1 Documentation (148 pages, 6.26 MB, 1 year ago): … the rest of the prerequisites inside already. Additionally, if you want to work with other Spark-compatible systems or plugins, you only need to take care of them as you would when using them with regular Spark applications. … the PID file of the Kyuubi server instance. • work: the root of the working directories of all the forked sub-processes, a.k.a. SQL engines. … Running Kyuubi: as mentioned above, for a quick start deployment … Tips: turning on AQE by default can significantly improve the user experience; the other sub-features are all enabled, advisoryPartitionSizeInBytes targets the HDFS block size, and minPartitionNum …
Apache Kyuubi 1.4.0 Documentation (148 pages, 6.26 MB, 1 year ago): same excerpt as the 1.4.1 entry above.
Apache Kyuubi 1.5.0 Documentation (172 pages, 6.94 MB, 1 year ago): … prerequisites inside already. Additionally, if you want to work with other Spark/Flink-compatible systems or plugins, you only need to take care of them as you would when using them with regular Spark/Flink applications. … the PID file of the Kyuubi server instance. • work: the root of the working directories of all the forked sub-processes, a.k.a. SQL engines. … Running Kyuubi: as mentioned above, for a quick start deployment … The latter has higher priority. • kyuubi.engine.share.level.subdomain (formerly kyuubi.engine.share.level.sub.domain) – Default: – Candidates: a valid ZooKeeper child node – Meaning: add a subdomain under …
Apache Kyuubi 1.5.1 Documentation (172 pages, 6.94 MB, 1 year ago): same excerpt as the 1.5.0 entry above.
Apache Kyuubi 1.5.2 Documentation (172 pages, 6.94 MB, 1 year ago): same excerpt as the 1.5.0 entry above.
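Several excerpts here describe kyuubi.engine.share.level.subdomain (formerly kyuubi.engine.share.level.sub.domain), which is typically supplied per connection. The sketch below is a rough illustration under stated assumptions: a Kyuubi endpoint at localhost:10009, a Hive-compatible JDBC driver on the classpath, and session configs being accepted after the # in the JDBC URL; the user name and the sd_reporting value are made up.

```scala
import java.sql.DriverManager

// Illustrative sketch only: endpoint, user, and subdomain value are assumptions.
object SubdomainConnectSketch {
  def main(args: Array[String]): Unit = {
    // Session configs appended after '#'; older releases spelled the key
    // kyuubi.engine.share.level.sub.domain (now deprecated).
    val url = "jdbc:hive2://localhost:10009/default;" +
      "#kyuubi.engine.share.level=USER;" +
      "kyuubi.engine.share.level.subdomain=sd_reporting"
    val conn = DriverManager.getConnection(url, "reporting_user", "")
    try {
      val rs = conn.createStatement().executeQuery("SELECT 1")
      while (rs.next()) println(rs.getInt(1))
    } finally {
      conn.close()
    }
  }
}
```

With the USER share level, connections that pin the same subdomain are intended to share one engine instance, which is the behaviour the excerpts above describe; the sketch only shows how such a setting might be passed at connection time.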
Apache Kyuubi 1.4.1 Documentation (233 pages, 4.62 MB, 1 year ago): … server inside for non-production use. Additionally, if you want to work with other Spark-compatible systems or plugins, you only need to take care of them as you would when using them with regular Spark applications. … the PID file of the Kyuubi server instance. work: the root of the working directories of all the forked sub-processes, a.k.a. SQL engines. … Running Kyuubi: as mentioned above, for a quick start deployment … fallback to the USER level. SERVER: the App will be shared by Kyuubi servers. kyuubi.engine.share.level.sub.domain (deprecated; use kyuubi.engine.share.level.subdomain instead, string, …) …
44 results in total