Apache Kyuubi 1.3.0 Documentation
… Apache Hadoop HDFS, with permissions. … Ease of use: you only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) … access to shuffle data, even if the executors that generated the data are recycled. Spark provides two implementations for shuffle data tracking; if either is enabled, we can use the DRA feature properly. … a proper shuffle partition number to fit your dataset. To enable this feature, set the two configs below to true: spark.sql.adaptive.enabled=true, spark.sql.adaptive.coalescePartitions.enabled=true …
129 pages | 6.15 MB | 1 year ago
Apache Kyuubi 1.3.1 Documentation
… Apache Hadoop HDFS, with permissions. … Ease of use: you only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) … access to shuffle data, even if the executors that generated the data are recycled. Spark provides two implementations for shuffle data tracking; if either is enabled, we can use the DRA feature properly. … a proper shuffle partition number to fit your dataset. To enable this feature, set the two configs below to true: spark.sql.adaptive.enabled=true, spark.sql.adaptive.coalescePartitions.enabled=true …
129 pages | 6.16 MB | 1 year ago
Apache Kyuubi 1.3.0 Documentation
… access to shuffle data, even if the executors that generated the data are recycled. Spark provides two implementations for shuffle data tracking; if either is enabled, we can use the DRA feature properly. … a proper shuffle partition number to fit your dataset. To enable this feature, set the two configs below to true: spark.sql.adaptive.enabled=true, spark.sql.adaptive.coalescePartitions.enabled=true … the resources and concurrency of the application. But there are always exceptions. Relating these two seemingly unrelated parameters can be somewhat tricky for users. This config is optional by default …
199 pages | 4.42 MB | 1 year ago
Apache Kyuubi 1.3.1 Documentation
… access to shuffle data, even if the executors that generated the data are recycled. Spark provides two implementations for shuffle data tracking; if either is enabled, we can use the DRA feature properly. … a proper shuffle partition number to fit your dataset. To enable this feature, set the two configs below to true: spark.sql.adaptive.enabled=true, spark.sql.adaptive.coalescePartitions.enabled=true … the resources and concurrency of the application. But there are always exceptions. Relating these two seemingly unrelated parameters can be somewhat tricky for users. This config is optional by default …
199 pages | 4.44 MB | 1 year ago
Apache Kyuubi 1.4.1 Documentation
… Apache Hadoop HDFS, with permissions. … Ease of use: you only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) … access to shuffle data, even if the executors that generated the data are recycled. Spark provides two implementations for shuffle data tracking; if either is enabled, we can use the DRA feature properly. … a proper shuffle partition number to fit your dataset. To enable this feature, set the two configs below to true: spark.sql.adaptive.enabled=true, spark.sql.adaptive.coalescePartitions.enabled=true …
148 pages | 6.26 MB | 1 year ago
Apache Kyuubi 1.4.0 Documentation
… Apache Hadoop HDFS, with permissions. … Ease of use: you only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) … access to shuffle data, even if the executors that generated the data are recycled. Spark provides two implementations for shuffle data tracking; if either is enabled, we can use the DRA feature properly. … a proper shuffle partition number to fit your dataset. To enable this feature, set the two configs below to true: spark.sql.adaptive.enabled=true, spark.sql.adaptive.coalescePartitions.enabled=true …
148 pages | 6.26 MB | 1 year ago
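The entries above all excerpt the same guidance: enable Spark dynamic resource allocation (DRA) with one of the two shuffle-data tracking implementations, plus adaptive query execution so Spark coalesces to a proper shuffle partition number. A minimal spark-defaults.conf sketch, assuming the executor-side shuffle-tracking variant rather than an external shuffle service (values are illustrative):

```properties
# Enable dynamic resource allocation (DRA)
spark.dynamicAllocation.enabled=true
# Shuffle-data tracking, implementation 1: executor-side tracking,
# so shuffle data stays reachable even after its executors are recycled.
# (Implementation 2 is the external shuffle service:
#  spark.shuffle.service.enabled=true)
spark.dynamicAllocation.shuffleTracking.enabled=true
# Adaptive query execution, to coalesce to a proper shuffle partition number
spark.sql.adaptive.enabled=true
spark.sql.adaptive.coalescePartitions.enabled=true
```

Either tracking implementation is sufficient for DRA to work properly; the external shuffle service is the usual choice on YARN, while shuffle tracking avoids the extra service on Kubernetes.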
Apache Kyuubi 1.5.0 Documentation
… Apache Hadoop HDFS, with permissions. … Ease of use: you only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) … while a brand-new application will be submitted for user 'kentyao' instead. Then you can see two processes running in your local environment: one KyuubiServer instance and one SparkSubmit … System-side deployment: when applying HA to a Kyuubi deployment, we need to be aware of two things, basically: kyuubi.ha.zookeeper.quorum - the external ZooKeeper cluster address for deploy…
172 pages | 6.94 MB | 1 year ago
Apache Kyuubi 1.5.1 Documentation
… Apache Hadoop HDFS, with permissions. … Ease of use: you only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) … while a brand-new application will be submitted for user 'kentyao' instead. Then you can see two processes running in your local environment: one KyuubiServer instance and one SparkSubmit … System-side deployment: when applying HA to a Kyuubi deployment, we need to be aware of two things, basically: kyuubi.ha.zookeeper.quorum - the external ZooKeeper cluster address for deploy…
172 pages | 6.94 MB | 1 year ago
Apache Kyuubi 1.5.2 Documentation
… Apache Hadoop HDFS, with permissions. … Ease of use: you only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) … while a brand-new application will be submitted for user 'kentyao' instead. Then you can see two processes running in your local environment: one KyuubiServer instance and one SparkSubmit … System-side deployment: when applying HA to a Kyuubi deployment, we need to be aware of two things, basically: kyuubi.ha.zookeeper.quorum - the external ZooKeeper cluster address for deploy…
172 pages | 6.94 MB | 1 year ago
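The 1.5.x entries above mention that an HA Kyuubi deployment hinges on pointing the servers at an external ZooKeeper ensemble via kyuubi.ha.zookeeper.quorum. A hedged kyuubi-defaults.conf sketch; the host names are placeholders, and the namespace setting is shown only as the companion option typically set alongside the quorum:

```properties
# External ZooKeeper ensemble used for Kyuubi service discovery / HA
kyuubi.ha.zookeeper.quorum=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
# Root znode namespace under which Kyuubi instances register
kyuubi.ha.zookeeper.namespace=kyuubi
```

With this in place, multiple KyuubiServer instances register under the same namespace and clients discover a live server through ZooKeeper instead of a fixed host.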
Apache Kyuubi 1.4.1 Documentation
… access to shuffle data, even if the executors that generated the data are recycled. Spark provides two implementations for shuffle data tracking; if either is enabled, we can use the DRA feature properly. … a proper shuffle partition number to fit your dataset. To enable this feature, set the two configs below to true: spark.sql.adaptive.enabled=true, spark.sql.adaptive.coalescePartitions.enabled=true … the resources and concurrency of the application. But there are always exceptions. Relating these two seemingly unrelated parameters can be somewhat tricky for users. This config is optional by default …
233 pages | 4.62 MB | 1 year ago
44 results in total
- 5













