Apache Kyuubi 1.5.0 Documentation
… kyuubi.SQLOperationListener: Query [a46ca504-fe3a-4dfb-be1e-19770af8ac4c]: Stage 3 started with 1 tasks, 1 active stages running 2021-10-28 13:56:27.651 INFO scheduler.DAGScheduler: Job 3 finished: collect … script can also help build external Spark into a Kyuubi image that acts as a client for submitting tasks by -s ${SPARK_HOME}. Of course, if you have an image that contains the Spark binary package, you … kyuubi-example -- /bin/bash ${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://localhost:10009' Or you can submit tasks directly through local beeline: ${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://${hostname}:${port}'
172 pages | 6.94 MB | 1 year ago
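
The two access paths these 1.5.x snippets quote can be sketched end to end. A minimal sketch, assuming the pod is reached with kubectl exec (the snippets only preserve the trailing "kyuubi-example -- /bin/bash" fragment), that the pod name kyuubi-example and port 10009 are as quoted, and that kyuubi.example.com is a placeholder for wherever the Kyuubi frontend is exposed:

# run the bundled beeline inside the pod; single quotes defer
# ${SPARK_HOME} expansion until the command executes in the pod
kubectl exec -it kyuubi-example -- /bin/bash -c \
  '${SPARK_HOME}/bin/beeline -u jdbc:hive2://localhost:10009'

# or connect from a local beeline against the exposed host and port
${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://kyuubi.example.com:10009' -n "$USER"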

Apache Kyuubi 1.5.1 Documentation
… kyuubi.SQLOperationListener: Query [a46ca504-fe3a-4dfb-be1e-19770af8ac4c]: Stage 3 started with 1 tasks, 1 active stages running 2021-10-28 13:56:27.651 INFO scheduler.DAGScheduler: Job 3 finished: collect … script can also help build external Spark into a Kyuubi image that acts as a client for submitting tasks by -s ${SPARK_HOME}. Of course, if you have an image that contains the Spark binary package, you … kyuubi-example -- /bin/bash ${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://localhost:10009' Or you can submit tasks directly through local beeline: ${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://${hostname}:${port}'
172 pages | 6.94 MB | 1 year ago

Apache Kyuubi 1.5.2 Documentation
… kyuubi.SQLOperationListener: Query [a46ca504-fe3a-4dfb-be1e-19770af8ac4c]: Stage 3 started with 1 tasks, 1 active stages running 2021-10-28 13:56:27.651 INFO scheduler.DAGScheduler: Job 3 finished: collect … script can also help build external Spark into a Kyuubi image that acts as a client for submitting tasks by -s ${SPARK_HOME}. Of course, if you have an image that contains the Spark binary package, you … kyuubi-example -- /bin/bash ${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://localhost:10009' Or you can submit tasks directly through local beeline: ${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://${hostname}:${port}'
172 pages | 6.94 MB | 1 year ago

Apache Kyuubi 1.5.1 Documentation
… kyuubi.SQLOperationListener: Query [a46ca504-fe3a-4dfb-be1e-19770af8ac4c]: Stage 3 started with 1 tasks, 1 active stages running 2021-10-28 13:56:27.651 INFO scheduler.DAGScheduler: Job 3 finished: collect … script can also help build external Spark into a Kyuubi image that acts as a client for submitting tasks by -s ${SPARK_HOME}. Of course, if you have an image that contains the Spark binary package, you … kyuubi-example -- /bin/bash ${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://localhost:10009' Or you can submit tasks directly through local beeline: ${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://${hostname}:${port}'
267 pages | 5.80 MB | 1 year ago

Apache Kyuubi 1.5.2 Documentation
… kyuubi.SQLOperationListener: Query [a46ca504-fe3a-4dfb-be1e-19770af8ac4c]: Stage 3 started with 1 tasks, 1 active stages running 2021-10-28 13:56:27.651 INFO scheduler.DAGScheduler: Job 3 finished: collect … script can also help build external Spark into a Kyuubi image that acts as a client for submitting tasks by -s ${SPARK_HOME}. Of course, if you have an image that contains the Spark binary package, you … kyuubi-example -- /bin/bash ${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://localhost:10009' Or you can submit tasks directly through local beeline: ${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://${hostname}:${port}'
267 pages | 5.80 MB | 1 year ago

Apache Kyuubi 1.5.0 Documentation
… kyuubi.SQLOperationListener: Query [a46ca504-fe3a-4dfb-be1e-19770af8ac4c]: Stage 3 started with 1 tasks, 1 active stages running 2021-10-28 13:56:27.651 INFO scheduler.DAGScheduler: Job 3 finished: collect … script can also help build external Spark into a Kyuubi image that acts as a client for submitting tasks by -s ${SPARK_HOME}. Of course, if you have an image that contains the Spark binary package, you … kyuubi-example -- /bin/bash ${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://localhost:10009' Or you can submit tasks directly through local beeline: ${SPARK_HOME}/bin/beeline -u 'jdbc:hive2://${hostname}:${port}'
267 pages | 5.80 MB | 1 year ago

Apache Kyuubi 1.9.0-SNAPSHOT Documentation
Any Scale: Most of the Kyuubi engine types have a distributed backend or can schedule distributed tasks at runtime. They can process data on single-node machines or clusters, such as YARN and Kubernetes. … script can also help build external Spark into a Kyuubi image that acts as a client for submitting tasks by -s ${SPARK_HOME}. Of course, if you have an image that contains the Spark binary package, you … kyuubi-example -- /bin/bash ${KYUUBI_HOME}/bin/beeline -u 'jdbc:kyuubi://localhost:10009' Or you can submit tasks directly through local beeline: ${KYUUBI_HOME}/bin/beeline -u 'jdbc:kyuubi://${hostname}:${port}'
220 pages | 3.93 MB | 1 year ago
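
The -s ${SPARK_HOME} flag quoted in these snippets bakes an external Spark distribution into the Kyuubi image so the image can act as a submitting client. A hedged sketch, assuming the script the snippets refer to is Kyuubi's bin/docker-image-tool.sh and that the repository and tag below are placeholders:

# build a Kyuubi image that bundles the local Spark distribution as a client
# (registry/repository and tag are hypothetical; -s must point at a Spark home)
./bin/docker-image-tool.sh -r my-registry/kyuubi -t 1.9.0-snapshot -s ${SPARK_HOME} build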

Apache Kyuubi 1.3.0 Documentation
… kyuubi.operation.interrupt.on.cancel (default: true, type: boolean, since 1.2.0): When true, all running tasks will be interrupted if one cancels a query. When false, all running tasks will remain until finished. … should release it back to the resource pool promptly, and conversely, when the engine is doing chubby tasks, we should be able to get and use more resources more efficiently. On the one hand, we need to rely … based on the workloads. When dynamic allocation is enabled, and an engine has a backlog of pending tasks, it can request executors via ExecutorAllocationManager. When the engine has executors that become …
199 pages | 4.42 MB | 1 year ago

Apache Kyuubi 1.3.1 Documentation
… kyuubi.operation.interrupt.on.cancel (default: true, type: boolean, since 1.2.0): When true, all running tasks will be interrupted if one cancels a query. When false, all running tasks will remain until finished. … should release it back to the resource pool promptly, and conversely, when the engine is doing chubby tasks, we should be able to get and use more resources more efficiently. On the one hand, we need to rely … based on the workloads. When dynamic allocation is enabled, and an engine has a backlog of pending tasks, it can request executors via ExecutorAllocationManager. When the engine has executors that become …
199 pages | 4.44 MB | 1 year ago
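
The dynamic-allocation passage excerpted by both 1.3.x entries describes Spark's standard executor scaling: request executors when pending tasks back up, release them when they fall idle. A minimal spark-defaults sketch with illustrative values (not tuning advice):

# scale executors up on pending-task backlog and down on idleness;
# shuffle tracking avoids needing an external shuffle service on Spark 3+
cat >> "$SPARK_HOME/conf/spark-defaults.conf" <<'EOF'
spark.dynamicAllocation.enabled=true
spark.dynamicAllocation.shuffleTracking.enabled=true
spark.dynamicAllocation.minExecutors=0
spark.dynamicAllocation.maxExecutors=20
spark.dynamicAllocation.executorIdleTimeout=60s
EOF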

Apache Kyuubi 1.8.0-rc0 Documentation
Any Scale: Most of the Kyuubi engine types have a distributed backend or can schedule distributed tasks at runtime. They can process data on single-node machines or clusters, such as YARN and Kubernetes. … kyuubi.metadata.request.retry.interval (default: PT5S, type: duration, since 1.6.0): The interval to check and trigger the metadata request retry tasks. kyuubi.metadata.store.class (default: org.apache.kyuubi.server.metadata.jdbc.JDBCMetadataSt…) … kyuubi.operation.interrupt.on.cancel (default: true, type: boolean, since 1.2.0): When true, all running tasks will be interrupted if one cancels a query. When false, all running tasks will remain until finished. …
共 44 条
- 1
- 2
- 3
- 4
- 5