IT文库
This search took 0.022 seconds and found about 22 matching results.
  • PDF 《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques

    … 73728 Block: conv_transpose_block_17 Sparsity: 0.0% Total Weights: 1728 … Figure 5-5 shows the comparison of compressed sizes of our regular model and its 50% sparse version. We used Tensorflow's save_model() … even higher sparsity values and observe the resulting accuracies. Figure 5-5: Compressed size comparison of a regular model with its 50% sparse version. In this project, we used a consistent sparsity … codebook will cost us … bytes to store. For each tensor element we will now store only the index of its centroid in the codebook, which will only take up … bits. For a tensor with … elements, the cost would be …
    0 码力 | 34 pages | 3.18 MB | 1 year ago
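The storage arithmetic described in the excerpt above (a small codebook of centroids plus a per-element centroid index) can be sketched as follows. The 1728-weight block size comes from the excerpt; the 16-entry codebook and float32 weight size are illustrative assumptions of mine, not values from the book.

```python
import math

def codebook_storage_bits(n_elements: int, k_centroids: int, bits_per_float: int = 32) -> int:
    """Bits needed to store a tensor compressed with a k-entry codebook:
    the codebook itself (k floats) plus one centroid index per element."""
    index_bits = math.ceil(math.log2(k_centroids))  # bits per stored index
    return k_centroids * bits_per_float + n_elements * index_bits

# A 1728-weight block (the size mentioned in the excerpt) with a 16-entry codebook:
raw_bits = 1728 * 32                               # uncompressed float32 weights
compressed_bits = codebook_storage_bits(1728, 16)  # codebook + 4-bit indices
```

With 16 centroids each index needs only 4 bits, so the compressed block is roughly an eighth of the raw float32 size.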
  • PDF 《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

    … by the phases of architectural breakthroughs to improve on previous results and to drive down the cost of achieving those results. The evolution of multilayer perceptrons was one of the biggest architectural … of the words would get mapped to the OOV token. However, if … is too large, we would have to pay the cost of a very large embedding table. Step 2: Dataset Preparation & Vectorization. Once the window size … occupy a significant portion of the model size on disk and in memory. Although this comes with the cost of the table taking up significant disk space and memory, this issue can be a bottleneck if the model …
    0 码力 | 53 pages | 3.92 MB | 1 year ago
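The vocabulary-size trade-off in the excerpt above comes down to simple arithmetic: a dense embedding table grows linearly with the vocabulary. A minimal sketch; the vocabulary sizes, embedding dimension, and float32 assumption are illustrative, not from the book.

```python
def embedding_table_bytes(vocab_size: int, embedding_dim: int, bytes_per_weight: int = 4) -> int:
    """Memory footprint of a dense embedding table (float32 weights by default)."""
    return vocab_size * embedding_dim * bytes_per_weight

# Growing the vocabulary 10x grows the table 10x:
small = embedding_table_bytes(10_000, 128)   # 5.12 MB
large = embedding_table_bytes(100_000, 128)  # 51.2 MB
```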
  • PDF 《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques

    … Similarly, if you could make the model training process label efficient, you would incur a lower cost to meet a performance benchmark. Refer to Figure 3-3 for an example of a label efficient model's training … illustrates how learning techniques are leveraged to reduce the model footprint. Table 3-2 shows a comparison of vanilla models (without the learning techniques) with the models that employ learning techniques … students. The creators went through a tedious sample collection and digitization process. It would cost substantial labor, time and money to collect more samples. In 2019, Kaggle opened a competition to …
    0 码力 | 56 pages | 18.93 MB | 1 year ago
  • PDF 《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation

    … (important parameters) than those in the rest of the states (unimportant parameters). Figure 7-2: A comparison of hyperparameter search algorithms for two hyperparameters. The blue contours show the regions … hyperparameter gets a new value per trial, the unimportant parameters do not increase the evaluation cost. Figure 7-2 (b) shows an example of random search. The trials are randomly spread across the search … represented as a conditional probability distribution … The surrogates are cheaper to compute in comparison to the blackbox model (the deep learning model). Hence, they can be executed much more frequently …
    0 码力 | 33 pages | 2.48 MB | 1 year ago
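The random-search behavior the excerpt above describes (every hyperparameter is resampled on every trial, so the important ones get many distinct values) can be sketched in a few lines. The toy objective, search space, and trial budget below are my own illustrative choices, not the chapter's.

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Sample every hyperparameter afresh on each trial; keep the best score."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective: only "lr" matters (the "important" hyperparameter in the excerpt);
# "momentum" is sampled too but never affects the score.
space = {"lr": (0.0, 1.0), "momentum": (0.0, 1.0)}
best, score = random_search(lambda p: (p["lr"] - 0.3) ** 2, space)
```

Because the unimportant parameter is just resampled for free, the 50 trials effectively place 50 distinct values on the important axis — the advantage over grid search that Figure 7-2 illustrates.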
  • PDF 《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review

    … same target dataset, the authors report needing fewer labeled examples. Refer to Figure 6-6 for a comparison between the error obtained by training from scratch vs. using pre-training strategies. WikiText-103 … However, since the pre-trained model is intended to be generalizable across many downstream tasks, the cost of pre-training can be amortized amongst these tasks. BERT has been used across a large number of … achieves an accuracy of 91.59% while the latter achieves 90.33%. Refer to Figure 6-8. Figure 6-8: Comparison between the accuracies achieved by BERT-Small models when using and not using the pre-trained …
    0 码力 | 31 pages | 4.03 MB | 1 year ago
  • PDF Lecture 4: Regularization and Bayesian Statistics

    … θ₂x₂ + θ₃x₃ + θ₄x₄. To eliminate the influence of θ₃x₃ and θ₄x₄ and smoothen the hypothesis function, the cost function can be modified as follows: min_θ (1/2m) [ Σ_{i=1}^{m} (h_θ(x⁽ⁱ⁾) − y⁽ⁱ⁾)² + 1000·θ₃² + 1000·θ₄² ] … As the magnitudes of the fitting parameters increase, there will be an increasing penalty on the cost function. This penalty depends on the squares of the parameters as well as the magnitude of λ. … (Regularization and Bayesian Statistics, September 20, 2023) … Regularized Logistic Regression: recall the cost function for logistic regression, J(θ) = −(1/m) Σ_{i=1}^{m} [ y⁽ⁱ⁾ log(h_θ(x⁽ⁱ⁾)) + (1 − y⁽ⁱ⁾) log(1 − h_θ(x⁽ⁱ⁾)) ] …
    0 码力 | 25 pages | 185.30 KB | 1 year ago
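The penalized least-squares cost in the lecture excerpt above can be checked numerically. This sketch uses a generic L2 penalty of the form the lecture builds toward (λ times the squared parameters); the tiny dataset, the λ value, and the convention of leaving the intercept θ₀ unpenalized are illustrative assumptions of mine.

```python
import numpy as np

def ridge_cost(theta, X, y, lam):
    """Least-squares cost (1/2m)·[Σ residuals² + λ·Σ θ_j²],
    with the intercept theta[0] left unpenalized."""
    m = len(y)
    residuals = X @ theta - y
    return (residuals @ residuals + lam * np.sum(theta[1:] ** 2)) / (2 * m)

# Three points on the line y = x, fit exactly by theta = [0, 1]:
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # first column is the intercept
y = np.array([0.0, 1.0, 2.0])
theta = np.array([0.0, 1.0])
```

With λ = 0 the cost of the exact fit is zero; a nonzero λ charges the same θ a penalty proportional to its squared slope, which is exactly the pressure toward smaller parameters that the lecture describes.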
  • PDF 深度学习下的图像视频处理技术-沈小勇 (Deep-learning-based image and video processing, Shen Xiaoyong)

    … number of underexposed images that cover limited lighting conditions. Our Dataset. Quantitative comparison on our dataset (Method / PSNR / SSIM): HDRNet 26.33 / 0.743; DPE 23.58 / 0.737; White-Box 21.69 / 0.718; Distort-and-Recover … / 0.893. Quantitative comparison: MIT-Adobe FiveK. Visual comparison on our dataset: Input, JieP, HDRNet, DPE, White-Box, Distort-and-Recover, our result, expert-retouched. Visual comparison on MIT-Adobe FiveK: Input … JieP, HDRNet, DPE, White-Box, Distort-and-Recover, our result, expert-retouched. More comparison results; user study: Input, WVM, JieP, HDRNet, DPE, White-Box, Distort-and-Recover, our result. Limitation: Input …
    0 码力 | 121 pages | 37.75 MB | 1 year ago
  • PDF 《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques

    … print(x_q) returns the following result: [0 1 2 3 4 5 6 7 7]. Table 2-2 shows the element-wise comparison of x and x_q:
    x:   -10.0  -7.5  -5.0  -2.5  0.0  2.5  5.0  7.5  10.0
    x_q:     0     1     2     3    4    5    6    7     7
    … weights = get_random_matrix([3,5]); bias = get_random_matrix([5]). Print the weights for comparison later on: print(weights) → [[-0.08321415 -0.66657766 0.71264132 -0.39179407 0.05601718] [-0.85867389 … ]] … function, which can be implemented by invoking np.maximum on y, so that it does an element-wise comparison between each element and 0 and returns the larger value. Recall that ReLU(x) = x if x > 0 …
    0 码力 | 33 pages | 1.96 MB | 1 年前
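The printed values in the excerpt above map x ∈ [-10, 10] onto the 3-bit integers 0–7. One scheme that reproduces that exact mapping is a floor-and-clip linear quantizer; this form is my inference from the printed table, not necessarily the book's own implementation.

```python
import numpy as np

def quantize(x, x_min, x_max, bits):
    """Map floats in [x_min, x_max] onto integers in [0, 2**bits - 1]
    using equal-width buckets of size (x_max - x_min) / 2**bits."""
    scale = (x_max - x_min) / 2**bits
    return np.clip(np.floor((x - x_min) / scale), 0, 2**bits - 1).astype(int)

x = np.array([-10.0, -7.5, -5.0, -2.5, 0.0, 2.5, 5.0, 7.5, 10.0])
x_q = quantize(x, -10.0, 10.0, 3)
print(x_q)  # [0 1 2 3 4 5 6 7 7]
```

Note that the top of the range folds into the last bucket (both 7.5 and 10.0 map to 7), matching the duplicated 7 in the excerpt's output.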
  • PDF 构建基于富媒体大数据的弹性深度学习计算平台 (Building an elastic deep learning platform on rich-media big data)

    … Iterative training · Semi-supervised labeling · Incremental training · Data augmentation · Model comparison · Model fusion · Gray update · Auto evaluation · Log server · Graph abstraction · Data flow API …
    0 码力 | 21 pages | 1.71 MB | 1 year ago
  • PDF PyTorch Tutorial

    … easy-to-use API. Code execution in this framework is quite easy, and it needs fewer lines of code in comparison. • It is easy to debug and understand the code. • Python usage: this library is considered to …
    0 码力 | 38 pages | 4.09 MB | 1 year ago