Apache Kyuubi 1.3.0 Documentation — … data and metadata from a storage system, e.g. Apache Hadoop HDFS [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html], with permissions. Ease of Use: you only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) … The release package you downloaded or built already contains the rest of the prerequisites. Components (Components | Role | Optional | Version | Remarks): Java | Java Runtime Environment | Required | 1.8 | Kyuubi is pre-built with Java … You can interact with any Spark-compatible version of HDFS … Hive | Metastore | Optional | referenced by Spark | Hive Metastore for Spark SQL. (A minimal Beeline connection sketch appears after this listing.) 0 credits | 199 pages | 4.42 MB | 1 year ago
Apache Kyuubi 1.3.1 Documentation — … data and metadata from a storage system, e.g. Apache Hadoop HDFS [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html], with permissions. Ease of Use: you only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) … The release package you downloaded or built already contains the rest of the prerequisites. Components (Components | Role | Optional | Version | Remarks): Java | Java Runtime Environment | Required | 1.8 | Kyuubi is pre-built with Java … You can interact with any Spark-compatible version of HDFS … Hive | Metastore | Optional | referenced by Spark | Hive Metastore for Spark SQL. 0 credits | 199 pages | 4.44 MB | 1 year ago
Apache Kyuubi 1.3.0 Documentation — … account can only access data and metadata from a storage system, e.g. Apache Hadoop HDFS, with permissions. … Ease of Use: you only need … a configuration excerpt: … 1.1.x supports thrift versions from 1 to 10 (thrift_version=7); change auth_username to your own user name to avoid a permissions issue in local tests (auth_username=chengpan); [notebook] [[interpreters]] [[[sql]]] name=SparkSQL … to create and watch executor pods. In both cases, you need to figure out whether you have the permissions under the corresponding namespace. You can use the following command to create a serviceAccount (you need … (see the kubectl sketch after this listing). 0 credits | 129 pages | 6.15 MB | 1 year ago
Apache Kyuubi 1.3.1 Documentation — … account can only access data and metadata from a storage system, e.g. Apache Hadoop HDFS, with permissions. … Ease of Use: you only need … a configuration excerpt: … 1.1.x supports thrift versions from 1 to 10 (thrift_version=7); change auth_username to your own user name to avoid a permissions issue in local tests (auth_username=chengpan); [notebook] [[interpreters]] [[[sql]]] name=SparkSQL … to create and watch executor pods. In both cases, you need to figure out whether you have the permissions under the corresponding namespace. You can use the following command to create a serviceAccount (you need … 0 credits | 129 pages | 6.16 MB | 1 year ago
Apache Kyuubi 1.4.1 Documentation — … data and metadata from a storage system, e.g. Apache Hadoop HDFS [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html], with permissions. Ease of Use: you only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) … The release package you downloaded or built already contains the rest of the prerequisites. Components (Components | Role | Optional | Version | Remarks): Java | Java Runtime Environment | Required | Java 8/11 | Kyuubi is pre-built with … You can interact with any Spark-compatible version of HDFS … Hive | Metastore | Optional | referenced by Spark | Hive Metastore for Spark SQL. 0 credits | 233 pages | 4.62 MB | 1 year ago
Apache Kyuubi 1.4.0 Documentation — … data and metadata from a storage system, e.g. Apache Hadoop HDFS [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html], with permissions. Ease of Use: you only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) … The release package you downloaded or built already contains the rest of the prerequisites. Components (Components | Role | Optional | Version | Remarks): Java | Java Runtime Environment | Required | Java 8/11 | Kyuubi is pre-built with … You can interact with any Spark-compatible version of HDFS … Hive | Metastore | Optional | referenced by Spark | Hive Metastore for Spark SQL. 0 credits | 233 pages | 4.62 MB | 1 year ago
Apache Kyuubi 1.7.0-rc0 Documentation — Admin Guide, A Unified Gateway: the Server module plays the role of a unified gateway. The server enables simplified, secure access to any cluster resource through … pre-installed and JAVA_HOME is correctly set for each component. Components (Component | Role | Version | Remarks): Java | JRE | 8/11 | Officially released against JDK8; Kyuubi | Gateway, Engine lib, Beeline | … … command: kubectl create serviceAccount kyuubi … kubectl create rolebinding kyuubi-role --role=edit … (the namespace placeholders are missing from this excerpt; a reconstructed sketch follows the listing) … See more related … 0 credits | 210 pages | 3.79 MB | 1 year ago
Apache Kyuubi 1.4.1 Documentation — … account can only access data and metadata from a storage system, e.g. Apache Hadoop HDFS, with permissions. … Ease of Use: you only need … a configuration excerpt: … 1.1.x supports thrift versions from 1 to 10 (thrift_version=7); change auth_username to your own user name to avoid a permissions issue in local tests (auth_username=chengpan); [notebook] [[interpreters]] [[[sql]]] name=SparkSQL … to create and watch executor pods. In both cases, you need to figure out whether you have the permissions under the corresponding namespace. You can use the following command to create a serviceAccount (you need … 0 credits | 148 pages | 6.26 MB | 1 year ago
Apache Kyuubi 1.4.0 Documentation — … account can only access data and metadata from a storage system, e.g. Apache Hadoop HDFS, with permissions. … Ease of Use: you only need … a configuration excerpt: … 1.1.x supports thrift versions from 1 to 10 (thrift_version=7); change auth_username to your own user name to avoid a permissions issue in local tests (auth_username=chengpan); [notebook] [[interpreters]] [[[sql]]] name=SparkSQL … to create and watch executor pods. In both cases, you need to figure out whether you have the permissions under the corresponding namespace. You can use the following command to create a serviceAccount (you need … 0 credits | 148 pages | 6.26 MB | 1 year ago
Apache Kyuubi 1.7.0-rc0 Documentation — … platform, using one copy of data, with one SQL interface. A Unified Gateway: the Server module plays the role of a unified gateway. The server enables simplified, secure access to any cluster resource through … The JRE needs to be pre-installed and JAVA_HOME is correctly set for each component. Components (Component | Role | Version | Remarks): Java | JRE | 8/11 | Officially released against JDK8; Kyuubi | Gateway, Engine lib, Beeline | … … distribution; Flink | Engine | >=1.14.0 | A Flink distribution; Trino | Engine | >=363 | A Trino cluster; Doris | Engine | N/A | A Doris cluster; Hive | Engine, Metastore | 3.1.x, N/A | A Hive distribution. 0 credits | 404 pages | 5.25 MB | 1 year ago
44 items in total
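The "Ease of Use" excerpts in the 1.3.x and 1.4.x entries above require only SQL and JDBC knowledge. As a minimal illustration (not taken from the listed documents), the sketch below connects the Beeline client shipped with Kyuubi to a server on its default frontend port; the host, port, and user name here are assumptions for a local test, not values from the listing:

    # Connect to a Kyuubi server via its HiveServer2-compatible JDBC endpoint.
    # localhost:10009 is the default frontend port; replace the user name with your own.
    $KYUUBI_HOME/bin/beeline -u 'jdbc:hive2://localhost:10009/' -n your_user
    # Once connected, plain SQL works as usual, e.g.:
    #   SELECT 1;

Any HiveServer2-compatible JDBC client can be pointed at the same URL; Beeline is only used here because it ships with the release package mentioned in the excerpts.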
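The Apache Kyuubi 1.7.0-rc0 and 1.3.x/1.4.x excerpts above quote kubectl commands for creating a serviceAccount and rolebinding so engines can create and watch executor pods, but the angle-bracket namespace placeholders were stripped when the page was rendered. A hedged reconstruction follows, with <your-namespace> as an assumed placeholder; consult the linked documentation for the authoritative form:

    # Create a service account in the namespace where the engine (executor) pods will run.
    kubectl create serviceaccount kyuubi -n <your-namespace>
    # Bind the edit role so the account can create and watch executor pods in that namespace.
    kubectl create rolebinding kyuubi-role --role=edit \
      --serviceaccount=<your-namespace>:kyuubi --namespace=<your-namespace>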













