Remote Execution Caching Compiler (RECC) CppCon 2024, September 19, 2024. Shivam Bairoliya, Software Engineer. © 2024 Bloomberg Finance L.P. All rights reserved. What is RECC? ● A Remote Execution Caching source build tool that wraps compiler commands and optionally forwards them to a remote build execution service ○ Encompasses the capabilities of both ccache and distcc ○ Supports remote linking ○ Supports multiple operating systems (Linux, macOS, Solaris) ● Compatible with any Remote Execution API server supported by Bazel ○ Single Host Server/Proxy: BuildBox-CASD ○ Distributed Server:
Apache Kyuubi 1.3.0 Documentations /HdfsDesign.html], with permissions. Ease of Use You only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) to handle massive data. It helps you focus on analytics engine. Concurrent execution: multiple Spark applications work together Quick response: long-running Spark applications without startup cost Optimal execution plan: fully supports Spark SQL 1.5.2. Execute Statements If the beeline session is successfully connected, then you can run any query supported by Spark SQL now. For example, 0: jdbc:hive2://localhost:10009/> select timestamp '2018-11-17'; 0 credits | 199 pages | 4.42 MB | 1 year ago
Apache Kyuubi 1.3.1 Documentations /HdfsDesign.html], with permissions. Ease of Use You only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) to handle massive data. It helps you focus on analytics engine. Concurrent execution: multiple Spark applications work together Quick response: long-running Spark applications without startup cost Optimal execution plan: fully supports Spark SQL 1.5.2. Execute Statements If the beeline session is successfully connected, then you can run any query supported by Spark SQL now. For example, 0: jdbc:hive2://localhost:10009/> select timestamp '2018-11-17'; 0 credits | 199 pages | 4.44 MB | 1 year ago
Apache Kyuubi 1.4.1 Documentations /HdfsDesign.html], with permissions. Ease of Use You only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) to handle massive data. It helps you focus on analytics engine. Concurrent execution: multiple Spark applications work together Quick response: long-running Spark applications without startup cost Optimal execution plan: fully supports Spark SQL 1.5.2. Execute Statements If the beeline session is successfully connected, then you can run any query supported by Spark SQL now. For example, 0: jdbc:hive2://10.242.189.214:2181/> select timestamp '2018-11-17'; 0 credits | 233 pages | 4.62 MB | 1 year ago
Apache Kyuubi 1.4.0 Documentations /HdfsDesign.html], with permissions. Ease of Use You only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) to handle massive data. It helps you focus on analytics engine. Concurrent execution: multiple Spark applications work together Quick response: long-running Spark applications without startup cost Optimal execution plan: fully supports Spark SQL 1.5.2. Execute Statements If the beeline session is successfully connected, then you can run any query supported by Spark SQL now. For example, 0: jdbc:hive2://10.242.189.214:2181/> select timestamp '2018-11-17'; 0 credits | 233 pages | 4.62 MB | 1 year ago
Apache Kyuubi 1.3.0 Documentation Chapter 1. Multi-tenancy CHAPTER TWO EASE OF USE You only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) to handle massive data. It helps you focus on analytics engine. • Concurrent execution: multiple Spark applications work together • Quick response: long-running Spark applications without startup cost • Optimal execution plan: fully supports Spark SQL SparkSubmit Execute Statements If the beeline session is successfully connected, then you can run any query supported by Spark SQL now. For example, 0: jdbc:hive2://localhost:10009/> select timestamp '2018-11-17'; 0 credits | 129 pages | 6.15 MB | 1 year ago
Apache Kyuubi 1.3.1 Documentation Chapter 1. Multi-tenancy CHAPTER TWO EASE OF USE You only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) to handle massive data. It helps you focus on analytics engine. • Concurrent execution: multiple Spark applications work together • Quick response: long-running Spark applications without startup cost • Optimal execution plan: fully supports Spark SQL SparkSubmit Execute Statements If the beeline session is successfully connected, then you can run any query supported by Spark SQL now. For example, 0: jdbc:hive2://localhost:10009/> select timestamp '2018-11-17'; 0 credits | 129 pages | 6.16 MB | 1 year ago
Apache Kyuubi 1.5.1 Documentation hdfs/HdfsDesign.html], with permissions. Ease of Use You only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) to handle massive data. It helps you focus on analytics engine. Concurrent execution: multiple Spark applications work together Quick response: long-running Spark applications without startup cost Optimal execution plan: fully supports Spark SQL Execute Spark SQL Statements If the beeline session is successfully connected, then you can run any query supported by Spark SQL now. For example, 0: jdbc:hive2://10.242.189.214:2181/> select timestamp '2018-11-17'; 0 credits | 267 pages | 5.80 MB | 1 year ago
Apache Kyuubi 1.5.2 Documentation hdfs/HdfsDesign.html], with permissions. Ease of Use You only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) to handle massive data. It helps you focus on analytics engine. Concurrent execution: multiple Spark applications work together Quick response: long-running Spark applications without startup cost Optimal execution plan: fully supports Spark SQL Execute Spark SQL Statements If the beeline session is successfully connected, then you can run any query supported by Spark SQL now. For example, 0: jdbc:hive2://10.242.189.214:2181/> select timestamp '2018-11-17'; 0 credits | 267 pages | 5.80 MB | 1 year ago
Apache Kyuubi 1.5.0 Documentation hdfs/HdfsDesign.html], with permissions. Ease of Use You only need to be familiar with Structured Query Language (SQL) and Java Database Connectivity (JDBC) to handle massive data. It helps you focus on analytics engine. Concurrent execution: multiple Spark applications work together Quick response: long-running Spark applications without startup cost Optimal execution plan: fully supports Spark SQL Execute Spark SQL Statements If the beeline session is successfully connected, then you can run any query supported by Spark SQL now. For example, 0: jdbc:hive2://10.242.189.214:2181/> select timestamp '2018-11-17'; 0 credits | 267 pages | 5.80 MB | 1 year ago
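The snippet above connects beeline to Kyuubi through a HiveServer2-compatible JDBC endpoint. As a minimal sketch of that URL pattern only, the helper below assembles the same `jdbc:hive2://host:port/` string shown in the beeline prompt; the `kyuubi_jdbc_url` function is a hypothetical illustration, not part of Kyuubi itself:

```python
def kyuubi_jdbc_url(host: str, port: int = 10009, namespace: str = "") -> str:
    """Build a HiveServer2-compatible JDBC URL like the one in the docs' beeline prompt.

    10009 is the default Kyuubi frontend port used in the documentation examples.
    """
    suffix = f"/{namespace}" if namespace else "/"
    return f"jdbc:hive2://{host}:{port}{suffix}"

# Matches the endpoint in the example above.
print(kyuubi_jdbc_url("localhost"))  # jdbc:hive2://localhost:10009/
```

Any JDBC client (beeline, DBeaver, a Java program with the Hive JDBC driver) can then use this URL to open a session and run Spark SQL statements such as `select timestamp '2018-11-17';`.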
1,000 results in total