OpenShift Container Platform 4.7 Logging
… Elasticsearch instance. Optional: labels to add to the logs. Optional: forward structured JSON log entries as JSON objects in the structured field; the log entry must contain valid structured JSON, otherwise OpenShift Logging removes the structured field and sends the log entry to the default index, app-00000x. Optional: a string; one or more labels to add to the logs. Quote values (such as "true") so they are recognized as strings. Optional: specify the default output, which sends logs to the internal Elasticsearch instance. Optional: configure multiple outputs.
183 pages | 1.98 MB | 1 year ago
OpenShift Container Platform 4.8 Logging
… forwarded to the internal Elasticsearch instance. Optional: labels to add to the logs. Optional: specify whether to forward structured JSON log entries as JSON objects in the structured field; the log entry must contain valid structured JSON, otherwise OpenShift Logging removes the structured field and sends the log entry to the default index, app-00000x. Optional: a string; one or more labels to add to the logs. Quote values (such as "true") so they are recognized as strings. Optional: specify the default output, which forwards logs to the internal Elasticsearch instance. Optional: configure multiple outputs.
223 pages | 2.28 MB | 1 year ago
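Both OpenShift entries above describe fields of the same ClusterLogForwarder resource. A minimal sketch of how those options fit together, assuming OpenShift Logging is installed in the openshift-logging namespace; the pipeline name and the label key/value are hypothetical:

    # Parse application logs as JSON, attach a label, and send them
    # to the internal Elasticsearch instance via the default output.
    oc apply -f - <<EOF
    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      pipelines:
      - name: structured-app-logs      # hypothetical pipeline name
        inputRefs:
        - application
        outputRefs:
        - default                      # the internal Elasticsearch instance
        parse: json                    # forward entries as JSON objects in the structured field
        labels:
          secure: "true"               # quoted so the value is recognized as a string
    EOF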
Apache Karaf 3.0.5 Guides
karaf@root()> … Stop: when you start Apache Karaf in regular mode, the logout command or the CTRL-D key binding logs out from the console and shuts down Apache Karaf. When you start Apache Karaf in background mode … to log:get ALL command). You can create your own aliases in the etc/shell.init.script file. Key binding: like on most Unix environments, the Karaf console supports some key bindings: the arrow keys to navigate … When you are connected to a remote Apache Karaf console, you can log out using the CTRL-D key binding. Note that CTRL-D just logs out from the remote console in this case; it doesn't shut down Apache Karaf.
203 pages | 534.36 KB | 1 year ago
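Since this entry points at etc/shell.init.script for aliases, a short sketch of the file's format; the stock aliases vary by Karaf version, and the last line is a hypothetical custom alias:

    # etc/shell.init.script: each line is  alias = { closure } ;
    ld = { log:display $args } ;
    la = { bundle:list -t 0 $args } ;
    # hypothetical custom alias wrapping shell:info
    env = { shell:info $args } ;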
Apache Kyuubi 1.3.0 Documentation
… Chapter 1: Multi-tenancy. Chapter 2: Ease of Use. You only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) to handle massive data. It helps you … (serviceAccount permission) # create serviceAccount: kubectl create serviceaccount spark -n <namespace> # binding role: kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=<namespace>:spark …
129 pages | 6.15 MB | 1 year ago
Apache Kyuubi 1.3.1 Documentation
… Chapter 1: Multi-tenancy. Chapter 2: Ease of Use. You only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) to handle massive data. It helps you … (serviceAccount permission) # create serviceAccount: kubectl create serviceaccount spark -n <namespace> # binding role: kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=<namespace>:spark …
129 pages | 6.16 MB | 1 year ago
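The kubectl fragment garbled in both Kyuubi snippets above is the service-account setup for Spark on Kubernetes. Reconstructed as a sketch, with <namespace> standing in for the placeholder lost in extraction:

    # create the serviceAccount Spark will run under
    kubectl create serviceaccount spark -n <namespace>
    # bind the edit cluster role so Spark can manage executor pods
    kubectl create clusterrolebinding spark-role \
      --clusterrole=edit \
      --serviceaccount=<namespace>:spark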
Apache Karaf Container 4.x - Documentation
… mode 4.3.2. Stop: when you start Apache Karaf in regular mode, the logout command or CTRL-D key binding logs out from the console and shuts Apache Karaf down. When you start Apache Karaf in background … You can create your own aliases in the etc/shell.init.script file. Key binding: like on most Unix environments, the Karaf console supports some key bindings: the arrow keys … When you are connected to a remote Apache Karaf console, you can log out using the CTRL-D key binding. Note that CTRL-D just logs out from the remote console in this case; it doesn't shut down Apache Karaf.
370 pages | 1.03 MB | 1 year ago
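For the remote-console logout both Karaf entries describe, a typical session, assuming the default SSH port 8101 and stock karaf credentials:

    # connect to a running instance's remote console
    ssh -p 8101 karaf@localhost
    # ...work in the shell; CTRL-D then logs out of the remote
    # console without shutting the Karaf instance itself down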
Apache Kyuubi 1.3.0 Documentation
… […hadoop-hdfs/HdfsDesign.html], with permissions. Ease of Use: you only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) to handle massive data. It helps you … (serviceAccount permission) # create serviceAccount: kubectl create serviceaccount spark -n <namespace> # binding role: kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=<namespace>:spark … applications, or even ETL jobs, only via the Hive JDBC [https://mvnrepository.com/artifact/org.apache.hive/hive-jdbc/2.3.7] module …
199 pages | 4.42 MB | 1 year ago
Apache Kyuubi 1.3.1 Documentation
… […hadoop-hdfs/HdfsDesign.html], with permissions. Ease of Use: you only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) to handle massive data. It helps you … (serviceAccount permission) # create serviceAccount: kubectl create serviceaccount spark -n <namespace> # binding role: kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=<namespace>:spark … applications, or even ETL jobs, only via the Hive JDBC [https://mvnrepository.com/artifact/org.apache.hive/hive-jdbc/2.3.7] module …
199 pages | 4.44 MB | 1 year ago
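Both Kyuubi entries stress that plain SQL over JDBC is enough, so a minimal connection sketch with the Hive JDBC driver via beeline, assuming a Kyuubi server on its default port 10009 and a hypothetical user name:

    # open a JDBC session against Kyuubi, then issue ordinary SQL
    bin/beeline -u 'jdbc:hive2://localhost:10009/' -n <username>
    # e.g. inside the session:
    #   SELECT count(*) FROM my_table;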
Apache Kyuubi 1.4.1 Documentation
… Chapter 1: Multi-tenancy. Chapter 2: Ease of Use. You only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) to handle massive data. It helps you … (serviceAccount permission) # create serviceAccount: kubectl create serviceaccount spark -n <namespace> # binding role: kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=<namespace>:spark …
148 pages | 6.26 MB | 1 year ago
Apache Kyuubi 1.4.0 Documentation
… Chapter 1: Multi-tenancy. Chapter 2: Ease of Use. You only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) to handle massive data. It helps you … (serviceAccount permission) # create serviceAccount: kubectl create serviceaccount spark -n <namespace> # binding role: kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=<namespace>:spark …
148 pages | 6.26 MB | 1 year ago
246 results in total.
- 1
- 2
- 3
- 4
- 5
- 6
- 25













