IT文库

Search results for Apache Flink: about 17 results found in 0.015 seconds.

Category: 云计算&大数据 (17) › Apache Flink (17)
Language: English (15), Chinese Simplified (2)
Format: PDF (17)
  • PDF: Introduction to Apache Flink and Apache Kafka - CS 591 K1: Data Stream Processing and Analytics Spring 2020

    ...to Apache Flink and Apache Kafka. Vasiliki Kalavri | Boston University 2020. Apache Flink: an open-source, distributed data analysis framework with true streaming at its core; streaming & batch APIs, aggregations, ... Basic API concept: Source -> Data Stream -> Operator -> Data Stream -> Sink (and Source -> Data Set -> Operator -> Data Set -> Sink for batch). Writing a Flink program: 1. Bootstrap ... Resources: documentation at https://flink.apache.org/, community mailing lists at https://flink.apache.org/community.html#mailing-lists, conference at http://flink-forward.org/
    26 pages | 3.33 MB | 1 year ago
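The Source -> Operator -> Sink structure described in the excerpt above can be illustrated, outside of Flink, as a plain-Python analogy using generator functions. The names `source`, `operator`, and `sink` here are illustrative only, not the Flink API:

```python
# A plain-Python sketch of the Source -> Operator -> Sink pipeline shape.
# This is an analogy, not Flink: real Flink programs build a dataflow
# graph and submit it to a cluster for distributed execution.

def source(events):
    # Source: emits a (possibly unbounded) stream of records.
    for event in events:
        yield event

def operator(stream):
    # Operator: transforms one stream into another
    # (here: keep even numbers and double them).
    for record in stream:
        if record % 2 == 0:
            yield record * 2

def sink(stream):
    # Sink: consumes the stream (here: collect into a list).
    return list(stream)

result = sink(operator(source([1, 2, 3, 4, 5, 6])))
print(result)  # [4, 8, 12]
```

Because each stage is a generator, records flow through one at a time rather than being materialized between stages, which loosely mirrors the "true streaming" execution model the slides describe.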
  • PDF: PyFlink 1.15 Documentation

    ...Conda: install pandoc (conda install pandoc). Build the docs: python3 setup.py build_sphinx, then open pyflink-docs/build/sphinx/html/index.html in the browser. Getting Started: this page summarizes ... Preparation: this page shows how to install PyFlink using pip, conda, or from source, and which Python versions each PyFlink version supports ... The virtual environment needs to be activated before use; run: source venv/bin/activate, i.e. execute the activate script under the bin directory of your virtual environment.
    36 pages | 266.77 KB | 1 year ago
  • PDF: PyFlink 1.16 Documentation

    ...Conda: install pandoc (conda install pandoc). Build the docs: python3 setup.py build_sphinx, then open pyflink-docs/build/sphinx/html/index.html in the browser. Getting Started: this page summarizes ... Preparation: this page shows how to install PyFlink using pip, conda, or from source, and which Python versions each PyFlink version supports ... The virtual environment needs to be activated before use; run: source venv/bin/activate, i.e. execute the activate script under the bin directory of your virtual environment.
    36 pages | 266.80 KB | 1 year ago
  • PDF: State management - CS 591 K1: Data Stream Processing and Analytics Spring 2020

    ...Scala: the state handle object, private var lastTempState: ValueState[Double] = _; override def open(parameters: Configuration): Unit = { // create state descriptor; val lastTempDescriptor = new Valu... 1. declare the state handle; 2. assign a name and get the state handle, in the operator (FlatMap) class, in the open() method. class TemperatureAlertFunction(val threshold: ... Java: private ValueState rideState; private ValueState fareState; @Override public void open(Configuration config) { // initialize the state descriptors here; rideState = getRuntimeContext()...
    24 pages | 914.13 KB | 1 year ago
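The TemperatureAlertFunction pattern in the excerpt above (per-key ValueState holding the last reading, initialized in open()) can be sketched in plain Python, with a dict standing in for Flink's keyed state backend. The threshold, sensor ids, and readings below are invented for illustration:

```python
# Plain-Python sketch of Flink's keyed ValueState pattern: for each key
# (sensor id) keep the last temperature, and emit an alert when the jump
# since the last reading exceeds a threshold. The dict plays the role of
# the state backend; in Flink the state handle is created in open().

def temperature_alerts(readings, threshold):
    last_temp = {}  # keyed "ValueState": sensor id -> last temperature
    alerts = []
    for sensor, temp in readings:
        if sensor in last_temp and abs(temp - last_temp[sensor]) > threshold:
            alerts.append((sensor, last_temp[sensor], temp))
        last_temp[sensor] = temp  # state update, as in state.update(temp)
    return alerts

readings = [("s1", 20.0), ("s2", 50.0), ("s1", 21.0), ("s1", 35.0), ("s2", 51.0)]
print(temperature_alerts(readings, threshold=10.0))
# [('s1', 21.0, 35.0)]
```

The key point the slides make is that the state is scoped per key: each sensor's last temperature is tracked independently, which is what Flink's keyed state gives you automatically.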
  • PDF: Scalable Stream Processing - Spark Streaming and Flink

    ...Input operations: every input DStream is associated with a Receiver object, which receives the data from a source and stores it in Spark's memory for processing. Three categories of streaming sources: 1. Basic ... [number of partitions] ... Custom sources: to create a custom source, extend the Receiver class, implement onStart() and onStop(), and call store(data) to store received...
    113 pages | 1.22 MB | 1 year ago
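The custom-source recipe in the excerpt above (extend Receiver, implement onStart()/onStop(), call store(data)) can be mimicked with a minimal Python class. The method names mirror the Spark Streaming API, but the classes themselves are a toy, not Spark code:

```python
# Toy analogue of Spark Streaming's custom Receiver: subclass a
# Receiver-like base, implement onStart()/onStop(), and call store(data)
# for each received record. No Spark involved; the buffer stands in
# for "Spark's memory".

class Receiver:
    def __init__(self):
        self.buffer = []       # stands in for Spark's memory
        self.running = False

    def store(self, data):
        self.buffer.append(data)

    def onStart(self):
        raise NotImplementedError

    def onStop(self):
        self.running = False

class ListSource(Receiver):
    """A hypothetical custom source that replays a fixed list of records."""
    def __init__(self, records):
        super().__init__()
        self.records = records

    def onStart(self):
        # In real Spark, onStart() launches a thread that receives data;
        # here we just replay the list synchronously.
        self.running = True
        for r in self.records:
            if not self.running:
                break
            self.store(r)

src = ListSource(["a", "b", "c"])
src.onStart()
src.onStop()
print(src.buffer)  # ['a', 'b', 'c']
```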
  • PDF: Streaming in Apache Flink

    ...state is redistributed as the cluster grows and shrinks; queryable: Flink state can be queried via a REST API. Rich functions: open(Configuration c), close(), getRuntimeContext(). DataStream<...> input = ... private ValueState averageState; @Override public void open(Configuration conf) { ValueStateDescriptor descriptor = new ValueStat... RichCoFlatMapFunction { private ValueState blocked; @Override public void open(Configuration config) { blocked = getRuntimeContext().getState(new ValueStateDescriptor<>("blocked"...
    45 pages | 3.00 MB | 1 year ago
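The `blocked` ValueState inside a RichCoFlatMapFunction in the excerpt above suggests a common two-input pattern: a control stream marks keys as blocked, and the data stream is filtered against that keyed state. A plain-Python sketch, with a dict as the keyed state and invented record shapes (in a real CoFlatMap the two streams interleave; here the control stream is processed first for simplicity):

```python
# Sketch of the two-input "blocked" pattern: one stream sets a per-key
# boolean ValueState, the other drops records whose key is blocked.
# A dict stands in for Flink's keyed state; record shapes are invented.

def filter_blocked(control, data):
    blocked = {}  # keyed ValueState: key -> True once blocked
    for key in control:          # analogue of flatMap1: control stream
        blocked[key] = True
    out = []
    for key, value in data:      # analogue of flatMap2: data stream
        if not blocked.get(key):
            out.append((key, value))
    return out

control = ["user2"]
data = [("user1", "click"), ("user2", "click"), ("user1", "buy")]
print(filter_blocked(control, data))
# [('user1', 'click'), ('user1', 'buy')]
```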
  • PDF: Stream processing fundamentals - CS 591 K1: Data Stream Processing and Analytics Spring 2020

    ...the attribute domain size(s); note that N might be unknown. Up-to-date frequencies for specific (source, destination) pairs observed in IP connections that are currently active... The vector is updated... Types of streams: a base stream is produced by an external source, e.g. a TCP packet stream; a derived stream is produced by a continuous query and its operators, e.g. total traffic from a source every minute (packet generation time, bytes in packet, total bytes this minute).
    45 pages | 1.22 MB | 1 year ago
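The derived-stream example in the excerpt above (total traffic per minute, computed from packet generation time and bytes in packet) amounts to a tumbling one-minute window sum. A plain-Python sketch with invented timestamps and byte counts:

```python
# Sketch of the excerpt's derived stream: from a base stream of
# (generation_time_seconds, bytes_in_packet) records, derive total
# bytes per minute (a tumbling one-minute window, bucketed by minute).

from collections import defaultdict

def bytes_per_minute(packets):
    totals = defaultdict(int)
    for ts, nbytes in packets:
        totals[ts // 60] += nbytes   # bucket by minute index
    return dict(totals)

packets = [(5, 100), (30, 200), (65, 50), (119, 25), (125, 10)]
print(bytes_per_minute(packets))
# {0: 300, 1: 75, 2: 10}
```

This batch-style sketch sees all packets at once; a real continuous query would emit each minute's total as that window closes.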
  • PDF: 监控Apache Flink应用程序(入门) (Monitoring Apache Flink Applications: Getting Started)

    ...Possible alert condition: recordsOutPerSecond = 0 (for a non-Sink operator). Note: since the metrics system currently only covers Flink-internal communication, the input record count of source operators is 0, and the output record count of sink operators is also 0... Progress and throughput monitoring, "Keeping Up": when consuming from a message queue, there is usually a direct way to monitor whether the application is keeping up; with connector-specific metrics you can monitor how far the current consumer group lags behind the head of the queue. Flink exposes basic metrics for most sources. Key metric: records-lag-max (scope: user; applies to FlinkKafkaConsumer). Flink emits latency markers periodically at all sources; for each sub-task, a latency distribution from each source to this operator is reported; the granularity of these histograms can be further controlled...
    23 pages | 148.62 KB | 1 year ago
  • PDF: Flow control and load shedding - CS 591 K1: Data Stream Processing and Analytics Spring 2020

    ...Buffer excess data for later processing, once input rates stabilize; requires a persistent input source; suitable for transient load increase. Scale resource allocation: addresses the case of increased... ...at any location in the query plan; dropping near the source avoids wasting work, but it might affect the results of multiple queries if the source is connected to multiple queries... Durably buffer events in a channel or source; adjust the processing rate of all operators to that of the slowest part of the pipeline.
    43 pages | 2.42 MB | 1 year ago
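The trade-off sketched in the excerpt above (buffer excess data vs. drop near the source) can be illustrated with a bounded buffer that sheds new arrivals once full. The capacity is arbitrary, chosen for illustration:

```python
# Sketch of load shedding near the source: a bounded buffer accepts
# records until full, then drops (sheds) further arrivals. Dropping
# this early means no downstream work is wasted on shed records,
# which is the advantage the slides point out.

def ingest(records, capacity):
    buffer, shed = [], 0
    for r in records:
        if len(buffer) < capacity:
            buffer.append(r)
        else:
            shed += 1  # record is shed at the source
    return buffer, shed

buffer, shed = ingest(range(5), capacity=3)
print(buffer, shed)  # [0, 1, 2] 2
```

A durable-buffering strategy would instead persist the overflow and replay it once input rates stabilize, at the cost of requiring a persistent input source.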
  • PDF: Elasticity and state migration: Part I - CS 591 K1: Data Stream Processing and Analytics Spring 2020

    ...identify the minimum parallelism πi per operator i, such that the physical dataflow can sustain all source rates. Sources S1, S2 with rates λ1, λ2; logical dataflow vs. physical dataflow with π=2, π=3... Recursively computed for all operators by traversing the dataflow from left to right once, starting from the true output rate of source j... Example: operators i=1..4, λ1o = 2000 r/s...
    93 pages | 2.42 MB | 1 year ago
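The minimum-parallelism computation in the excerpt above can be sketched for a linear pipeline: traverse operators left to right, derive each operator's input rate from the upstream true output rate, and take πi as the input rate divided by per-instance capacity, rounded up. The rates, capacities, and selectivities below are invented, and the multi-source dataflow of the slides is simplified to a single chain:

```python
import math

# Sketch of a minimum-parallelism calculation for a linear pipeline:
# each operator i has a per-instance processing capacity c_i (records/s)
# and a selectivity s_i (output records per input record). Traversing
# left to right once: pi_i = ceil(input_rate / c_i), and the operator's
# true output rate (input_rate * s_i) feeds the next operator.

def min_parallelism(source_rate, operators):
    rate = source_rate
    pis = []
    for capacity, selectivity in operators:
        pis.append(math.ceil(rate / capacity))
        rate = rate * selectivity  # true output rate for downstream
    return pis

# Source emits 2000 r/s; (capacity, selectivity) per operator, invented:
ops = [(1000, 0.5), (400, 1.0), (600, 2.0)]
print(min_parallelism(2000, ops))  # [2, 3, 2]
```

The single left-to-right pass is the point: because each true output rate depends only on upstream rates, one traversal suffices, as the slides note.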