IT文库

Category

All · Cloud Computing & Big Data (21) · Machine Learning (21)

Language

All · English (15) · Chinese (Simplified) (6)

Format

All · PDF (21)
 
This search took 0.028 seconds and found about 21 results.
  • PDF: Keras Tutorial

    … extensible API. Minimal structure, making it easy to achieve results without any frills. It supports multiple platforms and backends, and it is a user-friendly framework that runs on both CPU and GPU. … The artificial neural network (ANN) was invented by psychologist Frank Rosenblatt in 1958. ANNs are made up of multiple nodes, which are similar to neurons; the nodes are tightly interconnected and organized into layers, represented as below. … Multiple inputs along with their weights represent the dendrites; the sum of the inputs passed through an activation function represents …
    98 pages | 1.57 MB | 1 year ago
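The excerpt above describes the perceptron: multiple weighted inputs (the "dendrites"), a summation, and an activation function. A minimal sketch in plain Python; the weights, bias, and the AND-gate example are illustrative and not taken from the tutorial.

```python
# Perceptron sketch: weighted sum of inputs plus bias, passed through
# a step activation. Pure Python, no framework required.

def perceptron(inputs, weights, bias):
    # The weighted sum plays the role of the dendrites feeding the neuron.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Step activation: fire (1) only if the summed signal crosses zero.
    return 1 if total > 0 else 0

# An AND gate realized with hand-picked weights (illustrative values).
weights, bias = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron([a, b], weights, bias))
```

Only the (1, 1) input clears the threshold, so the unit behaves as an AND gate.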
  • PDF: Lecture 1: Overview (Feng Li, SDU, September 6, 2023)

    … Unsupervised learning: discovering latent factors. Dimensionality reduction: when dealing with high-dimensional data, it is often useful to reduce it; although the data may appear high dimensional, there may only be a small number of degrees of variability, corresponding to latent factors. … Unsupervised learning: discovering graph structures. … Probabilities can be used to infer uncertainty. A one-vs-one SVM approach can be used to tackle multiple classes. … Parametric vs non-parametric models.
    57 pages | 2.41 MB | 1 year ago
  • PDF: Deep-Learning-Based Image and Video Processing (深度学习下的图像视频处理技术), Shen Xiaoyong (沈小勇)

    … Remaining challenges: how to make good use of multiple frames? Data from Vid4 [Ce Liu et al.], bicubic ×4: misalignment, occlusion, large motion. … Are the generated details real? … ESPCN [Shi et al., 2016], VSRNet [Kappeler et al., 2016] … Model issues: one model for one setting; intensive parameter …
    121 pages | 37.75 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 1 - Introduction

    … there might be multiple ML models being served concurrently on the same device, which further reduces the resources available to a single model. This could happen on the server side, where multiple models are … and a smaller footprint at the same quality. NAS has also been used to explicitly optimize these multiple objectives directly, e.g. finding networks that achieve the best quality while incurring the least … an embedding table on the left with an embedding for each token; the hashing trick on the right, where multiple tokens map to the same slot and share embeddings, which helps save space. To remedy …
    21 pages | 3.17 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques

    … costs. In many cases, to reduce the chance of mislabeling due to human error, data is labeled by multiple human labelers, and the label that wins the consensus is assigned to the example. … Can we apply N transformations to create a dataset N× the size? What are the constraining factors? An image transformation recomputes the pixel values; the rotation of an RGB image of 100×100 requires … Length constraints, and the initial handicap of having to enter each individual letter using multiple keypresses on a numeric pad, drove re-adoption of telegraphic style, and continued space limits …
    56 pages | 18.93 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques

    … lossy compression, because we lost the odd parts. The choice of technique depends on several factors, such as customer preference, consumption delay, or resource availability (extra hands needed for chopping) … space wastage. If that is indeed the case, you might have to design your own mechanism to pack multiple quantized values into one of the supported data types (using bit-shifting); for example, if you pick … We will start with a random number generator with a fixed seed to get consistent results across multiple runs. Next, we will create an input tensor of shape [10, 3], where 10 is the batch size and 3 …
    33 pages | 1.96 MB | 1 year ago
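The compression excerpt above mentions packing multiple quantized values into one supported data type via bit-shifting. A minimal sketch, assuming 4-bit quantized values packed two per byte; the function names are illustrative and not from the book.

```python
# Bit-packing sketch: two 4-bit quantized values share one uint8-sized
# integer, halving storage when the hardware only supports 8-bit types.

def pack_nibbles(hi, lo):
    # Each value must fit in 4 bits (range 0..15).
    assert 0 <= hi < 16 and 0 <= lo < 16
    # Shift the first value into the high nibble, OR in the second.
    return (hi << 4) | lo

def unpack_nibbles(byte):
    # Recover both 4-bit values from the high and low nibbles.
    return (byte >> 4) & 0xF, byte & 0xF

packed = pack_nibbles(9, 3)
print(packed, unpack_nibbles(packed))  # one byte now holds both values
```

The same shifting idea generalizes to, say, four 2-bit values per byte, at the cost of unpacking work at inference time.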
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

    … the two classes (Suitable / Not Suitable), since there were very few examples. What if you have multiple classes, a large number of examples, or more than two features? In those cases, we could use classical … training data size and time required. The quality of the embeddings primarily depends on the following two factors: the number of dimensions in each embedding (d), which is analogous to the features we manually computed … the tokens mapping to each slot might be slightly more or fewer than this number. Even though the multiple tokens mapping to each slot might be very different from each other, the model will learn one embedding …
    53 pages | 3.92 MB | 1 year ago
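The excerpt above refers to the hashing trick: tokens are hashed into a fixed number of embedding slots, so several tokens share one embedding row and the table stays small. A minimal sketch of the slot assignment, with an illustrative slot count and token list; `slot_for` is a hypothetical helper, not an API from the book.

```python
# Hashing-trick sketch: map an unbounded vocabulary onto a fixed number
# of embedding slots with a stable hash function.
import zlib

NUM_SLOTS = 8  # far smaller than the real vocabulary size

def slot_for(token):
    # zlib.crc32 is deterministic across runs and platforms, so a token
    # always lands in the same slot; distinct tokens may collide.
    return zlib.crc32(token.encode("utf-8")) % NUM_SLOTS

tokens = ["keras", "pytorch", "embedding", "quantization", "hashing"]
for t in tokens:
    print(t, "->", slot_for(t))
```

An embedding table then needs only `NUM_SLOTS` rows; colliding tokens share a row, trading a little quality for a much smaller footprint.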
  • PDF: Huawei Cloud Deep Learning in Practice for Text Classification (华为云深度学习在文本分类中的实践), Li Minglei (李明磊)

    … financial and operating results, future product portfolio, new technology, etc. There are a number of factors that could cause actual results and developments to differ materially from those expressed or implied …
    23 pages | 1.80 MB | 1 year ago
  • PDF: PyTorch Release Notes

    … introduced in Transformer-XL help capture better long-term dependencies by attending to tokens from multiple previous segments. Our implementation is based on the codebase that was published by the authors …
    365 pages | 2.94 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation

    … some choices even have multiple parameters: for example, horizontal flip is a boolean choice, rotation requires a fixed angle or a range of rotation, and random augment requires multiple parameters. … Figure 7-3 (b) shows an alternative search approach which evaluates multiple configurations and adaptively allocates more resources to the promising ones. This is called Configuration … resources to a set of hyperparameter configurations. The trials for each configuration are run for multiple iterations; all the trials in an iteration are allocated an identical budget. The configurations …
    33 pages | 2.48 MB | 1 year ago