Apache Kyuubi 1.6.1 Documentation
[https://github.com/apache/incubator-kyuubi/commit/cb483385] [KYUUBI #2872] Catch the exception for the iterator job when incremental collect is enabled
[https://github.com/apache/incubator-kyuubi/commit/383a7a84] kyuubi/commit/7a789a25] [KYUUBI #2285] Trino's result fetching method is changed to a streaming iterator mode to avoid holding data at the server side [https://github.com/apache/incubator-kyuubi/commit/3114b393]
mapPartitions { it =>
  val partitionID = it.toStream.head
  val r = new Random(seed = partitionID)
  Iterator.fill((numRecords / numFiles).toInt)(randomConnRecord(r))
}
df.write.mode("overwrite").format("parquet")
401 pages | 5.42 MB | 1 year ago
Apache Kyuubi 1.6.0 Documentation
[https://github.com/apache/incubator-kyuubi/commit/cb483385] [KYUUBI #2872] Catch the exception for the iterator job when incremental collect is enabled
[https://github.com/apache/incubator-kyuubi/commit/383a7a84] kyuubi/commit/7a789a25] [KYUUBI #2285] Trino's result fetching method is changed to a streaming iterator mode to avoid holding data at the server side [https://github.com/apache/incubator-kyuubi/commit/3114b393]
mapPartitions { it =>
  val partitionID = it.toStream.head
  val r = new Random(seed = partitionID)
  Iterator.fill((numRecords / numFiles).toInt)(randomConnRecord(r))
}
df.write.mode("overwrite").format("parquet")
391 pages | 5.41 MB | 1 year ago
Apache Kyuubi 1.4.1 Documentation
mapPartitions { it =>
  val partitionID = it.toStream.head
  val r = new Random(seed = partitionID)
  Iterator.fill((numRecords / numFiles).toInt)(randomConnRecord(r))
}
df.write.mode("overwrite").format("parquet")
148 pages | 6.26 MB | 1 year ago

Apache Kyuubi 1.4.0 Documentation
mapPartitions { it =>
  val partitionID = it.toStream.head
  val r = new Random(seed = partitionID)
  Iterator.fill((numRecords / numFiles).toInt)(randomConnRecord(r))
}
df.write.mode("overwrite").format("parquet")
148 pages | 6.26 MB | 1 year ago

Apache Kyuubi 1.5.0 Documentation
mapPartitions { it =>
  val partitionID = it.toStream.head
  val r = new Random(seed = partitionID)
  Iterator.fill((numRecords / numFiles).toInt)(randomConnRecord(r))
}
df.write.mode("overwrite").format("parquet")
172 pages | 6.94 MB | 1 year ago

Apache Kyuubi 1.4.1 Documentation
mapPartitions { it =>
  val partitionID = it.toStream.head
  val r = new Random(seed = partitionID)
  Iterator.fill((numRecords / numFiles).toInt)(randomConnRecord(r))
}
df.write.mode("overwrite").format("parquet")
233 pages | 4.62 MB | 1 year ago

Apache Kyuubi 1.5.1 Documentation
mapPartitions { it =>
  val partitionID = it.toStream.head
  val r = new Random(seed = partitionID)
  Iterator.fill((numRecords / numFiles).toInt)(randomConnRecord(r))
}
df.write.mode("overwrite").format("parquet")
172 pages | 6.94 MB | 1 year ago

Apache Kyuubi 1.4.0 Documentation
mapPartitions { it =>
  val partitionID = it.toStream.head
  val r = new Random(seed = partitionID)
  Iterator.fill((numRecords / numFiles).toInt)(randomConnRecord(r))
}
df.write.mode("overwrite").format("parquet")
233 pages | 4.62 MB | 1 year ago

Apache Kyuubi 1.5.2 Documentation
mapPartitions { it =>
  val partitionID = it.toStream.head
  val r = new Random(seed = partitionID)
  Iterator.fill((numRecords / numFiles).toInt)(randomConnRecord(r))
}
df.write.mode("overwrite").format("parquet")
172 pages | 6.94 MB | 1 year ago

Apache Kyuubi 1.5.1 Documentation
mapPartitions { it =>
  val partitionID = it.toStream.head
  val r = new Random(seed = partitionID)
  Iterator.fill((numRecords / numFiles).toInt)(randomConnRecord(r))
}
df.write.mode("overwrite").format("parquet")
267 pages | 5.80 MB | 1 year ago
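The flattened Scala fragment that recurs in the excerpts above can be fleshed out into a runnable sketch. The values of numRecords and numFiles and the body of randomConnRecord are assumptions (the originals are not shown), and a Spark partition is modelled here as a plain iterator whose first element is the partition ID, so no Spark dependency is needed; the point is the pattern of seeding a Random with the partition ID so each partition generates reproducible data.

```scala
// Hypothetical standalone sketch of the per-partition data generation
// from the excerpt. numRecords, numFiles and randomConnRecord are
// stand-ins, not taken from the Kyuubi sources.
import scala.util.Random

object PartitionedRandomData {
  val numRecords = 100L
  val numFiles = 4

  // Stand-in for the excerpt's randomConnRecord(r): one random record.
  def randomConnRecord(r: Random): Long = r.nextLong()

  // Mirrors the mapPartitions body: the partition ID seeds the Random,
  // so regenerating the same partition yields the same records.
  def generatePartition(it: Iterator[Int]): Iterator[Long] = {
    val partitionID = it.toStream.head
    val r = new Random(seed = partitionID)
    Iterator.fill((numRecords / numFiles).toInt)(randomConnRecord(r))
  }

  def main(args: Array[String]): Unit = {
    val first = generatePartition(Iterator(0)).toList
    val again = generatePartition(Iterator(0)).toList
    // Same partition ID, same seed, same data: generation is reproducible.
    assert(first == again)
    assert(first.size == (numRecords / numFiles).toInt)
  }
}
```

In the original, this iterator feeds a DataFrame that is then written out with df.write.mode("overwrite").format("parquet"); the seeding trick keeps each of the numFiles output files deterministic across reruns.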
25 results in total (pages 1, 2, 3)