Technical Solution for Migrating Hadoop to Alibaba Cloud MaxCompute
… 6.2.2 Generating MaxCompute DDL and Hive UDTFs … 6.2 Output of meta-carrier: adjusting the Hive-to-ODPS mapping … 7.1.5 Generating ODPS DDL, Hive SQL, and the compatibility report … 7.1.6 Reviewing the compatibility report and iterating until it meets expectations … 7.1.7 Running odps_ddl_runner.py to create ODPS tables and partitions … 7.1.8 Running …
59 pages | 4.33 MB | 1 year ago
Apache Kyuubi 1.6.1 Documentation
…apache.org/docs/latest/sql-ref-syntax-ddl-create-table-datasource.html] is supported to create a JDBC source with SQL. # create JDBC Datasource table with DDL spark.sql("""CREATE TABLE kyuubi_table USING … [https://github.com/apache/incubator-kyuubi/commit/e41a90628] [KYUUBI #3560] Flink SQL engine supports run DDL across versions [https://github.com/apache/incubator-kyuubi/commit/34ef8805d] [KYUUBI #3521] [TEST] …com/apache/incubator-kyuubi/commit/2a9761694] [KYUUBI #3406] [FOLLOWUP] Add create datasource table DDL usage to Pyspark docs [https://github.com/apache/incubator-kyuubi/commit/9ddcf61f4] [KYUUBI #3547] …
401 pages | 5.42 MB | 1 year ago
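The snippet above truncates Spark's `CREATE TABLE ... USING` statement mid-DDL. As a hedged sketch (only the table name `kyuubi_table` comes from the snippet; the JDBC connection options below are placeholders, not taken from the Kyuubi docs), a JDBC datasource table created via DDL might look like:

```sql
-- Placeholder connection options; substitute your own url/driver/dbtable.
CREATE TABLE kyuubi_table
USING JDBC
OPTIONS (
  url 'jdbc:mysql://localhost:3306/demo',
  dbtable 'demo.kyuubi_table',
  user 'demo_user',
  password 'demo_pass'
);
```

In a Spark session this DDL would typically be submitted as `spark.sql("""CREATE TABLE ...""")`, matching the truncated call in the snippet.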
Apache Flink: Past, Present, and Future
Operator abstraction: pull-based operators, push-based operators, operators with customizable read order. New in Table API & SQL 1.9: a brand-new SQL type system, initial DDL support, Table API enhancements, a unified Catalog API, and the Blink Planner. What's new in Blink Planner: binary data structures, richer …
33 pages | 3.36 MB | 1 year ago
PyFlink 1.15 Documentation
…RecordBatch.from_arrays(arrays, schema) [5]: root |-- id: BIGINT |-- data: STRING — Create a Table from DDL statements [6]: table_env.execute_sql(""" CREATE TABLE random_source ( id TINYINT, data STRING …
36 pages | 266.77 KB | 1 year ago
PyFlink 1.16 Documentation
…RecordBatch.from_arrays(arrays, schema) [5]: root |-- id: BIGINT |-- data: STRING — Create a Table from DDL statements [6]: table_env.execute_sql(""" CREATE TABLE random_source ( id TINYINT, data STRING …
36 pages | 266.80 KB | 1 year ago
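The PyFlink snippets above cut off mid-DDL. A sketch of a complete `CREATE TABLE` statement suitable for `table_env.execute_sql`, assuming Flink's built-in `datagen` connector (the connector options are illustrative, not quoted from the documents above):

```sql
-- Illustrative datagen-backed source table; option values are assumptions.
CREATE TABLE random_source (
  id TINYINT,
  data STRING
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '5',
  'fields.data.length' = '10'
);
```

Passing this string to `table_env.execute_sql(...)` registers the table so it can be queried with the Table API or SQL.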
Apache Kyuubi 1.7.1-rc0 Documentation
…apache.org/docs/latest/sql-ref-syntax-ddl-create-table-datasource.html] is supported to create a JDBC source with SQL. # create JDBC Datasource table with DDL spark.sql("""CREATE TABLE kyuubi_table USING …
401 pages | 5.25 MB | 1 year ago

Apache Kyuubi 1.7.0-rc0 Documentation
…apache.org/docs/latest/sql-ref-syntax-ddl-create-table-datasource.html] is supported to create a JDBC source with SQL. # create JDBC Datasource table with DDL spark.sql("""CREATE TABLE kyuubi_table USING …
404 pages | 5.25 MB | 1 year ago

Apache Kyuubi 1.7.0 Documentation
…apache.org/docs/latest/sql-ref-syntax-ddl-create-table-datasource.html] is supported to create a JDBC source with SQL. # create JDBC Datasource table with DDL spark.sql("""CREATE TABLE kyuubi_table USING …
400 pages | 5.25 MB | 1 year ago

Apache Kyuubi 1.7.0-rc1 Documentation
…apache.org/docs/latest/sql-ref-syntax-ddl-create-table-datasource.html] is supported to create a JDBC source with SQL. # create JDBC Datasource table with DDL spark.sql("""CREATE TABLE kyuubi_table USING …
400 pages | 5.25 MB | 1 year ago

Apache Kyuubi 1.7.2-rc0 Documentation
…apache.org/docs/latest/sql-ref-syntax-ddl-create-table-datasource.html] is supported to create a JDBC source with SQL. # create JDBC Datasource table with DDL spark.sql("""CREATE TABLE kyuubi_table USING …
405 pages | 5.26 MB | 1 year ago
32 results in total