IT文库

Categories

All · Cloud Computing & Big Data (198) · VirtualBox (85) · Apache Kyuubi (44) · OpenShift (11) · Machine Learning (9) · Apache Flink (9) · Istio (8) · Edge Computing (8) · Kubernetes (6) · RocketMQ (4)

Languages

All · English (164) · Chinese, Simplified (29) · English (3) · Chinese, Simplified (2)

Formats

All · PDF (173) · Other (24) · PPT (1)
Search completed in 0.024 seconds; about 198 matching results found.
  • PDF: KubeCon2020 / Tencent Meeting's Technical Practice of Using Kubernetes at Scale

    … StatefulSetPlus Operator: keep shared memory during Pod upgrades; upgrade jitter (a few ms) for keep-alive services. Flexible and dynamic resource management; the Dynamic Scheduler is …
    0 码力 | 19 pages | 10.94 MB | 1 year ago
  • PDF: OpenShift Container Platform 4.10 Scalability and Performance

    Environment variable excerpt:
    • LATENCY_TEST_DELAY: when the test starts running, in seconds. You can use the variable to allow the CPU Manager reconcile loop to update the default CPU pool. Default: 0.
    • LATENCY_TEST_CPUS: the number of CPUs used by the pod running the latency tests. If unset, the default configuration includes all isolated CPUs.
    • LATENCY_TEST_RUNTIME: …
    • HWLATDETECT_MAXIMUM_LATENCY: the maximum acceptable hardware latency, in microseconds, for the workload and operating system. If you do not set HWLATDETECT_MAXIMUM_LATENCY or MAXIMUM_LATENCY, the tool compares the default expected threshold (20 μs) with the tool's actual maximum latency, and the test fails or succeeds accordingly.
    • CYCLICTEST_MAXIMUM_LATENCY: the maximum acceptable latency for cyclictest results. If you do not set CYCLICTEST_MAXIMUM_LATENCY or MAXIMUM_LATENCY, the tool skips the comparison of expected and actual maximum latency.
    • OSLAT_MAXIMUM_LATENCY: the maximum acceptable latency, in microseconds, for oslat test results. If you do not set OSLAT_MAXIMUM_LATENCY or MAXIMUM_LATENCY, the tool skips the comparison of expected and actual maximum latency.
    • MAXIMUM_LATENC…
    0 码力 | 315 pages | 3.19 MB | 1 year ago
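The threshold-fallback rule described for these variables (the tool-specific value first, then the shared MAXIMUM_LATENCY, otherwise a built-in default or no comparison at all) can be sketched in Python. The helper below is purely illustrative and is not part of the OpenShift tooling:

```python
import os

def effective_threshold(tool_var, default=None):
    """Resolve the maximum-latency threshold for one latency tool.

    Illustrative fallback order: the tool-specific variable
    (e.g. CYCLICTEST_MAXIMUM_LATENCY) wins, then the shared
    MAXIMUM_LATENCY, then an optional per-tool default (the excerpt
    mentions 20 us for hwlatdetect); None means the expected-vs-actual
    comparison is skipped.
    """
    for name in (tool_var, "MAXIMUM_LATENCY"):
        value = os.environ.get(name)
        if value is not None:
            return int(value)
    return default

# With only the shared variable set, every tool falls back to it.
os.environ["MAXIMUM_LATENCY"] = "20"
print(effective_threshold("CYCLICTEST_MAXIMUM_LATENCY"))  # -> 20
print(effective_threshold("OSLAT_MAXIMUM_LATENCY"))       # -> 20
```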
  • PDF: Monitoring Apache Flink Applications (Getting Started)

    … 4.12 Monitoring Latency … records-lag-max > threshold • millisBehindLatest > threshold … 4.12 Monitoring Latency: Generally speaking, latency is the delay between the creation of an event and the time at which results based … which then writes the results to a database or calls a downstream system. In such a pipeline, latency can be introduced at each stage and for various reasons, including the following: 1. It might take …
    0 码力 | 23 pages | 148.62 KB | 1 year ago
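The threshold checks this excerpt mentions (records-lag-max for the Kafka consumer, millisBehindLatest for the Kinesis consumer) amount to simple comparisons against alert thresholds. A hedged Python sketch; the metric names come from the excerpt, but the threshold values and helper are made up:

```python
def lag_alerts(metrics, records_lag_threshold=1000, millis_behind_threshold=60_000):
    """Return the names of consumer-lag metrics that exceed their thresholds.

    Illustrative only: thresholds are arbitrary example values, and the
    metrics dict stands in for values scraped from a metrics backend.
    """
    alerts = []
    if metrics.get("records-lag-max", 0) > records_lag_threshold:
        alerts.append("records-lag-max")
    if metrics.get("millisBehindLatest", 0) > millis_behind_threshold:
        alerts.append("millisBehindLatest")
    return alerts

print(lag_alerts({"records-lag-max": 5000, "millisBehindLatest": 1200}))
# -> ['records-lag-max']
```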
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 1 - Introduction

    … number of parameters, the amount of training resources required to train the network, prediction latency, etc. Natural language models such as GPT-3 now cost millions of dollars to train just one iteration … training process in terms of computation cost, memory cost, amount of training data, and the training latency. It addresses questions like: How long does the model take to train? How many devices are needed … parameters does the model have, what is the disk size, RAM consumption during inference, inference latency, etc. Using the sensitive tweet classifier example, during the deployment phase the user will be …
    0 码力 | 21 pages | 3.17 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques

    … ideas, the compression techniques. Compression techniques aim to reduce the model footprint (size, latency, memory, etc.). We can reduce the model footprint by reducing the number of trainable parameters. … prediction latency, RAM consumption, and the quality metrics, such as accuracy, F1, precision, and recall, as shown in Table 2-1 (footprint metrics: model size, inference latency on target …; quality metrics: …) … the model with respect to one or more of the footprint metrics, such as the model size, inference latency, or training time required for convergence, with a little quality compromise. Hence, it is important …
    0 码力 | 33 pages | 1.96 MB | 1 year ago
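Inference latency, one of the footprint metrics this excerpt lists, can be estimated with plain wall-clock timing. The sketch below is illustrative, not from the book: `toy_model` is a stand-in for a real model's forward pass, and the warmup/median approach is just one common way to get a stable number:

```python
import time

def median_latency_ms(fn, *args, warmup=3, runs=11):
    """Median wall-clock latency of fn(*args), in milliseconds."""
    for _ in range(warmup):          # warm caches before timing
        fn(*args)
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return samples[len(samples) // 2]  # median is robust to outliers

def toy_model(xs):                   # hypothetical stand-in for a forward pass
    return sum(v * v for v in xs)

print(f"inference latency: {median_latency_ms(toy_model, list(range(1000))):.3f} ms")
```

A median over several runs, after a few warmup calls, avoids the one-off cost of cold caches skewing a single measurement.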
  • PDF: Apache RocketMQ: a Message Engine for Trillion-Scale Data Floods

    Memory access latency issues: direct reclaim (background reclaim via kswapd vs. foreground direct reclaim) and page faults (a major page fault will produce latency). Memory access latency (page fault): on allocation, if enough memory is free there is no latency; background page reclaiming also adds no latency; when not enough is free, direct reclaim runs inline and adds latency. Direct reclaim tuning: vm.min_free_kbytes: 3g; vm.extra_free_kbytes: 8g … add_to_page_cache_locked spinlock (tree_lock) … 1.4 trillion … Low-latency distributed storage: kernel-source analysis of PageCache latency spikes. Memory access latency issues: memory lock, wake_up_page, wait_on_page_locked(), wait_on_page_writeback() …
    0 码力 | 35 pages | 993.29 KB | 1 year ago
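The reclaim decision flow in this excerpt (enough free memory: no latency; background reclaim by kswapd: no latency; direct reclaim: latency) can be modeled as a small Python function. The watermark values mirror the excerpt's vm.min_free_kbytes = 3g and vm.extra_free_kbytes = 8g tuning, but the helper itself is an illustrative model, not kernel logic:

```python
def reclaim_path(free_kb, min_free_kb=3_000_000, extra_free_kb=8_000_000):
    """Which reclaim path an allocation hits, per the excerpt's tuning.

    Illustrative model: raising vm.min_free_kbytes (plus the non-mainline
    vm.extra_free_kbytes patch) makes kswapd start reclaiming in the
    background earlier, so allocations rarely fall into direct reclaim,
    which reclaims inline and adds latency to the allocating thread.
    """
    if free_kb >= min_free_kb + extra_free_kb:
        return "no reclaim"            # plenty of free pages, no latency
    if free_kb >= min_free_kb:
        return "background reclaim"    # kswapd reclaims asynchronously
    return "direct reclaim"            # allocator reclaims inline -> latency

print(reclaim_path(12_000_000))  # -> no reclaim
print(reclaim_path(1_000_000))   # -> direct reclaim
```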
  • PDF: Apache RocketMQ: a Message Engine for Trillion-Scale Data Floods

    0 码力 | 35 pages | 5.82 MB | 1 year ago
  • PDF: Is Your Virtual Machine Really Ready-to-go with Istio?

    … middle boxes) … High-performance networking: much higher multi-Gbps peak data speeds; ultra-low latency; and, of course, reduced introduced overheads. High availability; CapEx, OpEx … Security … Overheads introduced; no high-performance data-path support (multi-Gbps bandwidth, ultra-low latency). Performance limitations, solutions: software techniques, e.g. (eBPF-based) TCP/IP stack co-designs … Latency analysis: ~3 ms added at P90 (Istio v1.6; more for VM usage). Hotspots: 1 to 2; 3 to 4: 30%-50%. Others: latency between Pods; latency introduced by C/S …
    0 码力 | 50 pages | 2.19 MB | 1 year ago
  • PDF: Performance of Apache Ozone on NVMe

    … Long-tail latency is a small percentage of the overall latency. Vendors are increasingly shipping configurations with NVMe: a bet in the right direction of hardware trends, and low-latency metadata can … Throughput: ~500 MB/s | typically 400-550 MB/s, up to 600 MB/s | typically 3,000-5,000 MB/s, up to 7,000 MB/s. Latency (4 KB): ~10 ms | ~200 µs | ~60 µs. Size: 1-16 TB each, up to 20 TB | 500 GB-4 TB each, up to 15 TB | 500 GB … 3. DN saturation of the network. 4. Better leveraging the benefits of NVMe: (a) squeezing every bit of latency out of each request's processing; (b) better caching architectures, from computation down to disk, to leverage …
    0 码力 | 34 pages | 2.21 MB | 1 year ago
  • PDF: Flow control and load shedding - CS 591 K1: Data Stream Processing and Analytics Spring 2020

    … trades off result accuracy for sustainable performance. Suitable for applications with strict latency constraints that can tolerate approximate results. Slow down the flow of data: the system buffers … plan … It detects overload and decides what actions to take in order to maintain acceptable latency and minimize result-quality degradation. (Vasiliki Kalavri, Boston University, 2020.) Load-shedding decisions: When to shed load? Detect overload quickly to avoid latency increase; monitor input rates. Where in the query plan? Dropping at the sources vs. dropping …
    0 码力 | 43 pages | 2.42 MB | 1 year ago
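Random load shedding, the drop-based strategy this excerpt describes, can be sketched in a few lines of Python. A minimal sketch under made-up rate and capacity figures: when the input rate exceeds the system's sustainable capacity, each tuple is kept with probability capacity / input_rate, trading result accuracy for acceptable latency:

```python
import random

def shed(stream, input_rate, capacity, rng=random.random):
    """Randomly drop tuples so the retained rate roughly matches capacity.

    Illustrative sketch: rates are in tuples/second; if input_rate is
    already within capacity, nothing is dropped (keep probability 1.0).
    """
    keep_prob = min(1.0, capacity / input_rate)
    return [t for t in stream if rng() < keep_prob]

random.seed(0)
kept = shed(range(10_000), input_rate=2_000, capacity=1_000)
print(len(kept))   # roughly half of the tuples survive
```

A real shedder would recompute the keep probability continuously from monitored input rates, rather than from fixed arguments as here.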