Apache Kyuubi 1.7.3 Documentation
…0000/default> CREATE TABLE pokes (foo INT, bar STRING); 0: jdbc:hive2://localhost:10000/default> INSERT INTO TABLE pokes VALUES (1, 'hello'); If the Hive SQL passes and there is a job in Yarn Web UI, … NAMESPACE DEFAULT; CREATE TABLE spark_catalog.`default`.SRC(KEY INT, VALUE STRING) USING PARQUET; INSERT INTO TABLE spark_catalog.`default`.SRC VALUES (11215016, 'Kent Yao'); … dataframe in overwrite mode df.writeTo("kyuubi_table").overwrite # write data from query spark.sql("INSERT INTO kyuubi_table SELECT * FROM some_table") Use PySpark with Pandas: From PySpark 3.2.0, PySpark …
211 pages | 3.79 MB | 1 year ago

Apache Kyuubi 1.7.1-rc0 Documentation
…0000/default> CREATE TABLE pokes (foo INT, bar STRING); 0: jdbc:hive2://localhost:10000/default> INSERT INTO TABLE pokes VALUES (1, 'hello'); If the Hive SQL passes and there is a job in Yarn Web UI, … NAMESPACE DEFAULT; CREATE TABLE spark_catalog.`default`.SRC(KEY INT, VALUE STRING) USING PARQUET; INSERT INTO TABLE spark_catalog.`default`.SRC VALUES (11215016, 'Kent Yao'); … dataframe in overwrite mode df.writeTo("kyuubi_table").overwrite # write data from query spark.sql("INSERT INTO kyuubi_table SELECT * FROM some_table") Use PySpark with Pandas: From PySpark 3.2.0, PySpark …
208 pages | 3.78 MB | 1 year ago

Apache Kyuubi 1.7.3-rc0 Documentation
…0000/default> CREATE TABLE pokes (foo INT, bar STRING); 0: jdbc:hive2://localhost:10000/default> INSERT INTO TABLE pokes VALUES (1, 'hello'); If the Hive SQL passes and there is a job in Yarn Web UI, … NAMESPACE DEFAULT; CREATE TABLE spark_catalog.`default`.SRC(KEY INT, VALUE STRING) USING PARQUET; INSERT INTO TABLE spark_catalog.`default`.SRC VALUES (11215016, 'Kent Yao'); … dataframe in overwrite mode df.writeTo("kyuubi_table").overwrite # write data from query spark.sql("INSERT INTO kyuubi_table SELECT * FROM some_table") Use PySpark with Pandas: From PySpark 3.2.0, PySpark …
211 pages | 3.79 MB | 1 year ago

Apache Kyuubi 1.7.2 Documentation
…0000/default> CREATE TABLE pokes (foo INT, bar STRING); 0: jdbc:hive2://localhost:10000/default> INSERT INTO TABLE pokes VALUES (1, 'hello'); If the Hive SQL passes and there is a job in Yarn Web UI, … NAMESPACE DEFAULT; CREATE TABLE spark_catalog.`default`.SRC(KEY INT, VALUE STRING) USING PARQUET; INSERT INTO TABLE spark_catalog.`default`.SRC VALUES (11215016, 'Kent Yao'); … dataframe in overwrite mode df.writeTo("kyuubi_table").overwrite # write data from query spark.sql("INSERT INTO kyuubi_table SELECT * FROM some_table") Use PySpark with Pandas: From PySpark 3.2.0, PySpark …
211 pages | 3.79 MB | 1 year ago

Apache Kyuubi 1.7.2-rc0 Documentation
…0000/default> CREATE TABLE pokes (foo INT, bar STRING); 0: jdbc:hive2://localhost:10000/default> INSERT INTO TABLE pokes VALUES (1, 'hello'); If the Hive SQL passes and there is a job in Yarn Web UI, … NAMESPACE DEFAULT; CREATE TABLE spark_catalog.`default`.SRC(KEY INT, VALUE STRING) USING PARQUET; INSERT INTO TABLE spark_catalog.`default`.SRC VALUES (11215016, 'Kent Yao'); … dataframe in overwrite mode df.writeTo("kyuubi_table").overwrite # write data from query spark.sql("INSERT INTO kyuubi_table SELECT * FROM some_table") Use PySpark with Pandas: From PySpark 3.2.0, PySpark …
211 pages | 3.79 MB | 1 year ago

Apache Kyuubi 1.7.0-rc1 Documentation
…0000/default> CREATE TABLE pokes (foo INT, bar STRING); 0: jdbc:hive2://localhost:10000/default> INSERT INTO TABLE pokes VALUES (1, 'hello'); If the Hive SQL passes and there is a job in Yarn Web UI, … NAMESPACE DEFAULT; CREATE TABLE spark_catalog.`default`.SRC(KEY INT, VALUE STRING) USING PARQUET; INSERT INTO TABLE spark_catalog.`default`.SRC VALUES (11215016, 'Kent Yao'); … dataframe in overwrite mode df.writeTo("kyuubi_table").overwrite # write data from query spark.sql("INSERT INTO kyuubi_table SELECT * FROM some_table") Use PySpark with Pandas: From PySpark 3.2.0, PySpark …
206 pages | 3.78 MB | 1 year ago

Apache Kyuubi 1.7.0-rc0 Documentation
…0000/default> CREATE TABLE pokes (foo INT, bar STRING); 0: jdbc:hive2://localhost:10000/default> INSERT INTO TABLE pokes VALUES (1, 'hello'); … If the Hive … NAMESPACE DEFAULT; CREATE TABLE spark_catalog.`default`.SRC(KEY INT, VALUE STRING) USING PARQUET; INSERT INTO TABLE spark_catalog.`default`.SRC VALUES (11215016, 'Kent Yao'); … dataframe in overwrite mode df.writeTo("kyuubi_table").overwrite # write data from query spark.sql("INSERT INTO kyuubi_table SELECT * FROM some_table") Use PySpark with Pandas: From PySpark 3.2.0, PySpark …
210 pages | 3.79 MB | 1 year ago

Apache Kyuubi 1.7.0 Documentation
…0000/default> CREATE TABLE pokes (foo INT, bar STRING); 0: jdbc:hive2://localhost:10000/default> INSERT INTO TABLE pokes VALUES (1, 'hello'); If the Hive SQL passes and there is a job in Yarn Web UI, … NAMESPACE DEFAULT; CREATE TABLE spark_catalog.`default`.SRC(KEY INT, VALUE STRING) USING PARQUET; INSERT INTO TABLE spark_catalog.`default`.SRC VALUES (11215016, 'Kent Yao'); … dataframe in overwrite mode df.writeTo("kyuubi_table").overwrite # write data from query spark.sql("INSERT INTO kyuubi_table SELECT * FROM some_table") Use PySpark with Pandas: From PySpark 3.2.0, PySpark …
206 pages | 3.78 MB | 1 year ago

Apache Kyuubi 1.9.0-SNAPSHOT Documentation
…0000/default> CREATE TABLE pokes (foo INT, bar STRING); 0: jdbc:hive2://localhost:10000/default> INSERT INTO TABLE pokes VALUES (1, 'hello'); If the Hive SQL passes and there is a job in YARN Web UI, … NAMESPACE DEFAULT; CREATE TABLE spark_catalog.`default`.SRC(KEY INT, VALUE STRING) USING PARQUET; INSERT INTO TABLE spark_catalog.`default`.SRC VALUES (11215016, 'Kent Yao'); … dataframe in overwrite mode df.writeTo("kyuubi_table").overwrite # write data from query spark.sql("INSERT INTO kyuubi_table SELECT * FROM some_table") … Use …
220 pages | 3.93 MB | 1 year ago

Apache Kyuubi 1.8.0-rc1 Documentation
…0000/default> CREATE TABLE pokes (foo INT, bar STRING); 0: jdbc:hive2://localhost:10000/default> INSERT INTO TABLE pokes VALUES (1, 'hello'); If the Hive SQL passes and there is a job in YARN Web UI, … NAMESPACE DEFAULT; CREATE TABLE spark_catalog.`default`.SRC(KEY INT, VALUE STRING) USING PARQUET; INSERT INTO TABLE spark_catalog.`default`.SRC VALUES (11215016, 'Kent Yao'); … dataframe in overwrite mode df.writeTo("kyuubi_table").overwrite # write data from query spark.sql("INSERT INTO kyuubi_table SELECT * FROM some_table") Use PySpark with Pandas: From PySpark 3.2.0, PySpark …
220 pages | 3.82 MB | 1 year ago

44 results in total
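
Every excerpt above quotes the same quick-start passage of the Kyuubi docs: creating and populating a table over the beeline/JDBC endpoint, creating a Parquet table in spark_catalog, and writing a DataFrame in overwrite mode. The sketch below consolidates that flow in plain PySpark so it can be run locally. It is only an illustration, assuming a local SparkSession with a writable default warehouse; the table names (some_table, kyuubi_table) are taken from the excerpts rather than from any particular deployment.

    from pyspark.sql import SparkSession

    # A minimal local sketch: a plain SparkSession stands in for a
    # Kyuubi-managed Spark engine.
    spark = SparkSession.builder.appName("kyuubi-doc-excerpt-demo").getOrCreate()

    # Create and seed a source table, mirroring the CREATE TABLE / INSERT
    # statements that the excerpts run through the beeline JDBC endpoint.
    spark.sql("CREATE TABLE IF NOT EXISTS some_table (foo INT, bar STRING) USING PARQUET")
    spark.sql("INSERT INTO some_table VALUES (1, 'hello')")

    # Write a DataFrame out as a table in overwrite mode. The excerpts show the
    # DataFrameWriterV2 form, df.writeTo("kyuubi_table").overwrite(...); the
    # classic v1 writer is used here so the sketch runs against a plain local
    # session catalog.
    df = spark.table("some_table")
    df.write.mode("overwrite").saveAsTable("kyuubi_table")

    # SQL-only path from the excerpts: populate the target table from a query.
    spark.sql("INSERT INTO kyuubi_table SELECT * FROM some_table")

    print(spark.table("kyuubi_table").count())  # expect 2 rows after the append
    spark.stop()

Against a real Kyuubi endpoint, the same SQL statements would be submitted over JDBC (for example with beeline, as shown in the excerpts) rather than through a local SparkSession.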