IT文库

Category

All | Cloud Computing & Big Data (12) | Machine Learning (12)

Language

All | English (7) | Chinese (Simplified) (5)

Format

All | PDF (12)
 
This search took 0.025 seconds and found about 12 results.
  • PDF: PyTorch Release Notes

    … (resulting in a 2X speedup for bandwidth-bound operations like most pointwise ops) and 2X reduced memory storage for intermediates (reducing the overall memory consumption of your model). Additionally, GEMMs and …
    0 码力 | 365 pages | 2.94 MB | 1 year ago
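    The half-precision behaviour described in this excerpt is what PyTorch exposes through automatic mixed precision. A minimal, purely illustrative sketch (not taken from the release notes; assumes a CUDA device and an arbitrary toy model):

    import torch

    model = torch.nn.Linear(1024, 1024).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    scaler = torch.cuda.amp.GradScaler()      # scales the loss so FP16 gradients don't underflow

    x = torch.randn(64, 1024, device="cuda")
    target = torch.randn(64, 1024, device="cuda")

    optimizer.zero_grad()
    with torch.cuda.amp.autocast():           # eligible ops (GEMMs, most pointwise ops) run in FP16
        loss = torch.nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()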
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques

    … exercises, we worked out the logic to quantize a high precision vector to low precision to save storage space and the transmission bandwidth. Let's say a receiver received this data. How would it decode … in the number of quantization bits. Quantization is a useful technique in the situation where the storage space or the transmission bandwidth is expensive, like deep learning models on mobile devices. Mobile … stored in an N-dimensional matrix (tensor), and the weight matrix W is most expensive in terms of storage. Can we efficiently represent this weight matrix W to reduce the model size? We already have worked …
    0 码力 | 33 pages | 1.96 MB | 1 year ago
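    To make the quantize/decode idea in this excerpt concrete, here is a minimal illustrative sketch of uniform quantization (my own code, not the book's; the min/max scaling scheme is just one common choice):

    import numpy as np

    def quantize(x, bits=8):
        """Map float values to integer levels in [0, 2**bits - 1]."""
        lo, hi = float(x.min()), float(x.max())
        scale = (hi - lo) / (2**bits - 1) or 1.0   # guard against a constant vector
        q = np.round((x - lo) / scale).astype(np.uint8 if bits <= 8 else np.uint16)
        return q, lo, scale

    def dequantize(q, lo, scale):
        """Decode the integer levels back to approximate float values."""
        return q.astype(np.float32) * scale + lo

    x = np.random.randn(1000).astype(np.float32)
    q, lo, scale = quantize(x, bits=8)
    x_hat = dequantize(q, lo, scale)
    print("max reconstruction error:", np.abs(x - x_hat).max())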
  • PDF: 从推荐模型的基础特点看大规模推荐类深度学习系统的设计 袁镱 (Designing large-scale recommendation deep learning systems around the basic characteristics of recommendation models)

    Stores/updates hundreds of TB of data with sharded training. Feature 1: dynamic parameter space. Feature 2.1: within a short time window only a subset of items and users are hit, so only part of the parameters are used; parameters are fetched/updated on demand from Storage. Asynchronous training pipeline plus multi-level storage: improves performance and lowers memory cost. Problems: parameter pulls and parameter updates in the Learner thread have a large performance impact; memory becomes the main resource bottleneck because training must wait for all parameters to be ready. Results: without hurting training quality, the time spent preparing and updating parameters drops and training speeds up, with training time reduced by more than 50%; an asynchronous storage thread supports multi-level storage driven by hot/cold data, cutting memory consumption by 30%-70%. Disk / training / lookup + pooling operator fusion / unique keys / Storage; recently trained parameter management must preserve ordering to guarantee training quality; sample reading, sample parsing. GPU-based multi-level storage training: better cost-performance.
    0 码力 | 22 pages | 6.76 MB | 1 year ago
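    A toy sketch of the on-demand, hot/cold tiered parameter lookup that the slide excerpt describes (purely illustrative; the class name, eviction policy, and file layout are my own, not the system's actual design):

    import os
    import pickle
    import numpy as np

    class TieredParamStore:
        """Keep recently used embeddings in memory, spill the rest to disk."""
        def __init__(self, dim, hot_capacity, cold_dir="cold_params"):
            self.dim, self.hot_capacity, self.cold_dir = dim, hot_capacity, cold_dir
            self.hot = {}                                    # feature key -> in-memory embedding
            os.makedirs(cold_dir, exist_ok=True)

        def _cold_path(self, key):
            return os.path.join(self.cold_dir, f"{key}.pkl")

        def lookup(self, key):
            """Fetch a parameter on demand: memory first, then disk, else create it."""
            if key in self.hot:
                return self.hot[key]
            path = self._cold_path(key)
            if os.path.exists(path):
                with open(path, "rb") as f:
                    vec = pickle.load(f)
            else:
                vec = np.zeros(self.dim, dtype=np.float32)   # dynamically grown parameter
            if len(self.hot) >= self.hot_capacity:           # spill one entry to the cold tier
                old_key, old_vec = self.hot.popitem()
                with open(self._cold_path(old_key), "wb") as f:
                    pickle.dump(old_vec, f)
            self.hot[key] = vec
            return vec

    store = TieredParamStore(dim=8, hot_capacity=2)
    store.lookup("user_42")[:] = 1.0                  # embeddings are updated in place after a step
    store.lookup("item_7"); store.lookup("item_9")    # the third lookup spills one entry to disk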
  • PDF: AI大模型千问 qwen 中文文档 (Qwen large language model, Chinese documentation)

    … StorageContext, load_index_from_storage … # save index: storage_context = StorageContext.from_defaults(persist_dir="save") … # load index: index = load_index_from_storage(storage_context) … 1.15.4 Retrieval augmentation (RAG): now you can enter a query, and Qwen1 …
    0 码力 | 56 pages | 835.78 KB | 1 year ago
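    The snippet above is the persist/reload half of a LlamaIndex-based RAG setup. A fuller illustrative sketch, assuming the llama_index.core package layout and a placeholder ./docs folder (the query engine uses whatever LLM/embedding backends you have configured):

    from llama_index.core import (
        SimpleDirectoryReader, StorageContext, VectorStoreIndex, load_index_from_storage,
    )

    # build an index from local documents ("./docs" is a placeholder path)
    documents = SimpleDirectoryReader("./docs").load_data()
    index = VectorStoreIndex.from_documents(documents)

    # save index
    index.storage_context.persist(persist_dir="save")

    # load index
    storage_context = StorageContext.from_defaults(persist_dir="save")
    index = load_index_from_storage(storage_context)

    # retrieval-augmented query
    print(index.as_query_engine().query("What is Qwen?"))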
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques

    … your deep learning models. We start with sparsity. If your goal was to optimize your brain for storage, you can often trim a lot of useless trivia without it impacting your life materially. This is also … picking the connections and nodes to prune, and how to prune a given deep learning model to achieve storage and latency gains with a minimal performance tradeoff. Next, the chapter goes over weight sharing … Sparse compressed models achieve higher compression ratio which results in lower transmission and storage costs. Figure 5-1 visually depicts two networks. The one on the left is the original network and …
    0 码力 | 34 pages | 3.18 MB | 1 year ago
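    As an illustration of the pruning idea summarized in this excerpt, magnitude pruning simply zeroes the smallest-magnitude weights; a minimal sketch (my own code, not the chapter's):

    import numpy as np

    def prune_by_magnitude(weights, sparsity=0.75):
        """Zero out the fraction `sparsity` of weights with the smallest absolute value."""
        flat = np.abs(weights).ravel()
        k = int(sparsity * flat.size)
        if k == 0:
            return weights.copy()
        threshold = np.partition(flat, k - 1)[k - 1]
        return np.where(np.abs(weights) <= threshold, 0.0, weights).astype(weights.dtype)

    W = np.random.randn(4, 4).astype(np.float32)
    W_sparse = prune_by_magnitude(W, sparsity=0.75)
    print("non-zero fraction:", np.count_nonzero(W_sparse) / W_sparse.size)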
  • PDF: 构建基于富媒体大数据的弹性深度学习计算平台 (Building an elastic deep learning computing platform on rich-media big data)

    Inference serving, data sampling and curation, samples, training, models, model evaluation: the AVA deep learning platform. Caching, IO, Distributed System, Docker Orchestration, Storage, HDFS, SQL, NoSQL, Caffe, MXNet, Tensorflow, Data Clean, Iterative training, Semi-supervised Labeling
    0 码力 | 21 pages | 1.71 MB | 1 year ago
  • PDF: 人工智能发展史 (A history of artificial intelligence)

    ence/ http://www.iro.umontreal.ca/~vincentp/ift3395/lectures/backprop_old.pdf ▪ 2015 https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf AlphaZero http://www.iro.umontreal.ca/~vi
    0 码力 | 54 pages | 3.87 MB | 1 year ago
  • PDF: 全连接神经网络实战. pytorch 版 (Fully connected neural networks in practice, PyTorch edition)

    any means, electronic or mechanical, including photocopying and recording, or by any information storage or retrieval system, without the prior written permission of the publisher. Art. No 0 ISBN 000–00–0000–00–0
    0 码力 | 29 pages | 1.40 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review

    … again, so that the TPU doesn't complain about the weights of the TF Hub models being on local storage: os.environ['TFHUB_MODEL_LOAD_FORMAT'] = 'UNCOMPRESSED'. We first start by importing the BERT pre-processing …
    0 码力 | 31 pages | 4.03 MB | 1 year ago
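    For context, TFHUB_MODEL_LOAD_FORMAT has to be set before any TF Hub layer is created; a minimal illustrative sketch (the BERT preprocessing handle is just one example model, not necessarily the one the chapter uses):

    import os
    os.environ['TFHUB_MODEL_LOAD_FORMAT'] = 'UNCOMPRESSED'   # set before loading hub models

    import tensorflow_hub as hub

    # load a BERT preprocessing model from TF Hub after the flag is in place
    preprocessor = hub.KerasLayer(
        "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")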
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques

    … metrics=['accuracy']) return model … model = create_model() … model.summary() … Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels_notop …
    0 码力 | 56 pages | 18.93 MB | 1 year ago
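    The truncated snippet appears to come from a Keras transfer-learning setup (the download URL is the headless ResNet50 ImageNet checkpoint); a hedged reconstruction of a comparable create_model, with the class count and input size as placeholders rather than the book's values:

    import tensorflow as tf

    def create_model(num_classes=10):
        # ResNet50 without its classification head; the ImageNet weights are downloaded
        # from storage.googleapis.com on first use
        base = tf.keras.applications.ResNet50(
            include_top=False, weights='imagenet',
            input_shape=(224, 224, 3), pooling='avg')
        model = tf.keras.Sequential([
            base,
            tf.keras.layers.Dense(num_classes, activation='softmax'),
        ])
        model.compile(optimizer='adam',
                      loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])
        return model

    model = create_model()
    model.summary()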