Apache Kyuubi 1.6.1 Documentation
1. Supports multi-client concurrency and authentication. 2. Supports one Spark application per account (SPA). 3. Supports QUEUE/NAMESPACE Access Control Lists (ACL). 4. Supports metadata & data access … SQL clients, etc., to operate with the Kyuubi server concurrently. The SPA policy makes sure that 1) a user account can only get computing resources, with managed ACLs (e.g. Queue Access Control Lists), from cluster managers, e.g. Apache Hadoop YARN or Kubernetes (K8s), to create the Spark application; 2) a user account can only access data and metadata from a storage system, e.g. Apache Hadoop HDFS, with permissions…
199 pages | 3.89 MB | 1 year ago

Apache Kyuubi 1.6.0 Documentation
1. Supports multi-client concurrency and authentication. 2. Supports one Spark application per account (SPA). 3. Supports QUEUE/NAMESPACE Access Control Lists (ACL). 4. Supports metadata & data access … SQL clients, etc., to operate with the Kyuubi server concurrently. The SPA policy makes sure that 1) a user account can only get computing resources, with managed ACLs (e.g. Queue Access Control Lists), from cluster managers, e.g. Apache Hadoop YARN or Kubernetes (K8s), to create the Spark application; 2) a user account can only access data and metadata from a storage system, e.g. Apache Hadoop HDFS, with permissions…
195 pages | 3.88 MB | 1 year ago

Apache Kyuubi 1.9.0-SNAPSHOT Documentation
…supplies default values for the SQL engine application too. These properties will override all settings in $SPARK_HOME/conf/spark-defaults.conf. … Via JDBC Connection URL — setting them in the JDBC connection … SQL engine application too. You can use properties with the additional prefix flink. to override settings in $FLINK_HOME/conf/flink-conf.yaml, for example: flink.parallelism.default 2, flink.taskmanager… … SQL engine application too. You can use properties with the additional prefix trino. to override settings in $TRINO_HOME/etc/config.properties, for example: trino.query_max_stage_count 500, trino.par…
220 pages | 3.93 MB | 1 year ago

Apache Kyuubi 1.8.1 Documentation
…supplies default values for the SQL engine application too. These properties will override all settings in $SPARK_HOME/conf/spark-defaults.conf. … Via JDBC Connection URL — setting them in the JDBC connection … SQL engine application too. You can use properties with the additional prefix flink. to override settings in $FLINK_HOME/conf/flink-conf.yaml, for example: flink.parallelism.default 2, flink.taskmanager… … SQL engine application too. You can use properties with the additional prefix trino. to override settings in $TRINO_HOME/etc/config.properties, for example: trino.query_max_stage_count 500, trino.par…
222 pages | 3.84 MB | 1 year ago

Apache Kyuubi 1.7.0-rc0 Documentation
…Kyuubi server. Add the repository to your maven configuration file, which may reside in $MAVEN_HOME/conf/settings.xml (truncated XML snippet for a "central maven repo" repository) … create/list/delete engine pods in Kubernetes. You should create your serviceAccount (or reuse an account with the appropriate privileges) and set your serviceAccountName for the kyuubi pod, which you can … defaults in $SPARK_HOME/conf/hive-site.xml and $KYUUBI_HOME/conf/kyuubi-defaults.conf for each user account. With this feature, end users can access different Hive metastore server instances. Similarly…
210 pages | 3.79 MB | 1 year ago

Apache Kyuubi 1.7.0-rc1 Documentation
…Kyuubi server. Add the repository to your maven configuration file, which may reside in $MAVEN_HOME/conf/settings.xml (truncated XML snippet for a "central maven repo" repository) … create/list/delete engine pods in Kubernetes. You should create your serviceAccount (or reuse an account with the appropriate privileges) and set your serviceAccountName for the kyuubi pod, which you can … defaults in $SPARK_HOME/conf/hive-site.xml and $KYUUBI_HOME/conf/kyuubi-defaults.conf for each user account. With this feature, end users can access different Hive metastore server instances. Similarly…
206 pages | 3.78 MB | 1 year ago

Apache Kyuubi 1.7.3 Documentation
…create/list/delete engine pods in Kubernetes. You should create your serviceAccount (or reuse an account with the appropriate privileges) and set your serviceAccountName for the kyuubi pod, which you can … defaults in $SPARK_HOME/conf/hive-site.xml and $KYUUBI_HOME/conf/kyuubi-defaults.conf for each user account. With this feature, end users can access different Hive metastore server instances. Similarly… …supplies default values for the SQL engine application too. These properties will override all settings in $SPARK_HOME/conf/spark-defaults.conf. … Via JDBC Connection URL — setting them in the JDBC connection…
211 pages | 3.79 MB | 1 year ago

Apache Kyuubi 1.7.3-rc0 Documentation
…create/list/delete engine pods in Kubernetes. You should create your serviceAccount (or reuse an account with the appropriate privileges) and set your serviceAccountName for the kyuubi pod, which you can … defaults in $SPARK_HOME/conf/hive-site.xml and $KYUUBI_HOME/conf/kyuubi-defaults.conf for each user account. With this feature, end users can access different Hive metastore server instances. Similarly… …supplies default values for the SQL engine application too. These properties will override all settings in $SPARK_HOME/conf/spark-defaults.conf. … Via JDBC Connection URL — setting them in the JDBC connection…
211 pages | 3.79 MB | 1 year ago

Apache Kyuubi 1.7.0 Documentation
…Kyuubi server. Add the repository to your maven configuration file, which may reside in $MAVEN_HOME/conf/settings.xml (truncated XML snippet for a "central maven repo" repository) … create/list/delete engine pods in Kubernetes. You should create your serviceAccount (or reuse an account with the appropriate privileges) and set your serviceAccountName for the kyuubi pod, which you can … defaults in $SPARK_HOME/conf/hive-site.xml and $KYUUBI_HOME/conf/kyuubi-defaults.conf for each user account. With this feature, end users can access different Hive metastore server instances. Similarly…
206 pages | 3.78 MB | 1 year ago

Apache Kyuubi 1.7.2 Documentation
…create/list/delete engine pods in Kubernetes. You should create your serviceAccount (or reuse an account with the appropriate privileges) and set your serviceAccountName for the kyuubi pod, which you can … defaults in $SPARK_HOME/conf/hive-site.xml and $KYUUBI_HOME/conf/kyuubi-defaults.conf for each user account. With this feature, end users can access different Hive metastore server instances. Similarly… …supplies default values for the SQL engine application too. These properties will override all settings in $SPARK_HOME/conf/spark-defaults.conf. … Via JDBC Connection URL — setting them in the JDBC connection…
211 pages | 3.79 MB | 1 year ago
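Several of the excerpts above describe overriding engine settings (Spark keys, or `flink.`- and `trino.`-prefixed properties) by passing them through the JDBC connection URL. As a minimal sketch of how such a URL could be composed — the host, port, and the exact `?`-separated conf grammar used here are illustrative assumptions, not taken from these excerpts:

```python
def kyuubi_jdbc_url(host, port, database="default", confs=None):
    """Compose a hive2-style JDBC URL for a Kyuubi server, appending
    engine properties (e.g. 'flink.'- or 'trino.'-prefixed keys) as a
    '?'-separated conf section. The URL grammar here is an assumption
    for illustration; consult the Kyuubi JDBC docs for the real one."""
    url = f"jdbc:hive2://{host}:{port}/{database}"
    if confs:
        # join key=value pairs with ';' inside the conf section
        url += "?" + ";".join(f"{k}={v}" for k, v in confs.items())
    return url

# Overriding Flink engine parallelism, as the excerpt above describes:
url = kyuubi_jdbc_url(
    "kyuubi.example.com", 10009,  # hypothetical host; 10009 is a common Kyuubi port
    confs={"flink.parallelism.default": "2"},
)
print(url)
```

Properties passed this way apply per connection, whereas the `*-defaults.conf` files mentioned in the excerpts apply to every engine the server launches.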
44 results in total (page 1 of 5)
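The Kubernetes excerpts above say the engine pods need a service account permitted to create/list/delete pods, referenced via `serviceAccountName` on the kyuubi pod. A hedged sketch of what that RBAC setup could look like — the names, namespace, and rule list here are illustrative assumptions, not taken from the Kyuubi docs:

```yaml
# Hypothetical service account for the kyuubi pod.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kyuubi
  namespace: kyuubi
---
# Role granting the pod verbs the excerpts mention.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kyuubi-engine-pods
  namespace: kyuubi
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kyuubi-engine-pods
  namespace: kyuubi
subjects:
  - kind: ServiceAccount
    name: kyuubi
    namespace: kyuubi
roleRef:
  kind: Role
  name: kyuubi-engine-pods
  apiGroup: rbac.authorization.k8s.io
```

The kyuubi pod spec would then set `serviceAccountName: kyuubi`, matching the `serviceAccountName` setting the excerpts refer to.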