IT文库
Category
All | Cloud Computing & Big Data (33) | Machine Learning (33)

Language
All | English (22) | Chinese (Simplified) (11)

Format
All | PDF (33)
Search completed in 0.055 seconds; about 33 matching results were found.
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation

    …benchmarks higher. Figure 7-1 shows some of the choices we face when working on a deep learning problem in the vision domain, for instance. Some of these choices are boolean, others have discrete parameters, and still others have continuous parameters. Some choices even have multiple parameters. For example, horizontal flip is a boolean choice, rotation requires a fixed angle or a range of rotation, and random augment requires multiple parameters. Figure 7-1: The plethora of choices that we face when training a deep learning model in the computer vision domain. A Search Space for n parameters…
    0 credits | 33 pages | 2.48 MB | 1 year ago
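The excerpt above describes a search space that mixes boolean, discrete, and continuous hyperparameters. A minimal sketch of randomly sampling such a mixed space; the parameter names and ranges below are illustrative assumptions, not taken from the book:

    import random

    # Hypothetical mixed search space: boolean, discrete, and continuous choices.
    search_space = {
        "horizontal_flip": [True, False],        # boolean choice
        "rotation_degrees": [0, 15, 30, 45],     # discrete choice
        "randaugment_magnitude": (0.0, 1.0),     # continuous range (low, high)
        "randaugment_num_ops": [1, 2, 3],        # discrete choice
    }

    def sample_configuration(space):
        """Draw one candidate configuration from the mixed search space."""
        config = {}
        for name, choices in space.items():
            if isinstance(choices, tuple):       # continuous range
                config[name] = random.uniform(*choices)
            else:                                # boolean or discrete list
                config[name] = random.choice(choices)
        return config

    for _ in range(3):
        print(sample_configuration(search_space))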
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

    …and cat, but we know that they are both cute, have been domesticated for a while and are safe. These two animals are more similar to each other than to a random animal like a chimp. Similarly, we know that … reduction1. b) The low-dimensional representation should allow us to compute the distance between any two inputs, which is a measure of their similarity. c) Similar inputs should have a small distance, and … and dangerous animals, and represent each animal using two features, say cute and dangerous. We can assign values between 0.0 and 1.0 to these two features for different animals. The higher the value, the…
    0 credits | 53 pages | 3.92 MB | 1 year ago
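The excerpt above treats the distance between two low-dimensional representations as a measure of similarity, using "cute" and "dangerous" as example features. A minimal sketch of that idea; the animals and feature values below are made up for illustration:

    import numpy as np

    # Hypothetical 2-D embeddings: (cute, dangerous), each in [0.0, 1.0].
    animals = {
        "dog":   np.array([0.9, 0.1]),
        "cat":   np.array([0.9, 0.2]),
        "chimp": np.array([0.4, 0.6]),
    }

    def distance(a, b):
        """Euclidean distance between two embeddings; smaller means more similar."""
        return float(np.linalg.norm(a - b))

    print(distance(animals["dog"], animals["cat"]))    # small: dog and cat are similar
    print(distance(animals["dog"], animals["chimp"]))  # larger: chimp is less similar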
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques

    …like accuracy, precision, recall, etc., which often are our primary quality concerns. We have chosen two of them, namely data augmentation and distillation, to discuss in this chapter. This is because, firstly … chapter. We start this chapter with an introduction to sample efficiency and label efficiency, the two criteria that we have picked to benchmark learning techniques. It is followed by a short discussion … talking about them in the same breadth as efficiency? To answer this question, let's break down the two prominent ways to benchmark the model in the training phase, namely sample efficiency and label efficiency…
    0 credits | 56 pages | 18.93 MB | 1 year ago
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 1 - Introduction

    …multiplying two matrices together much faster than traditional CPUs. Advances in the training algorithms: there has been substantial progress in machine learning algorithms over the past two decades. Stochastic … rate-limited by their efficiency. While efficiency can be an overloaded term, let us investigate two primary aspects: Training Efficiency involves benchmarking the model training for … examples. Figure 1-3: Some examples of training and inference efficiency metrics. If we have two models performing equally well on a given task, we would choose the one which does better on training…
    0 credits | 21 pages | 3.17 MB | 1 year ago
  • PDF document: Machine Learning

    …• It can be extended to recurrent neural networks (RNN) by involving feedback connections, which power many natural language applications … Neuron (Contd.) • A neuron is activated when … • An example: the logistic regression function g(x) = 1 / (1 + exp(−wᵀx − b)) • Break it into two computations: • z = wᵀx + b • a = σ(z), where σ(z) = 1/(1 + e^(−z)) … Neural Feedforward Networks … written as a function of the output from the neural network • Hadamard product: elementwise product of two vectors s ⊙ t such that (s ⊙ t)_j = s_j·t_j, e.g. (1, 2) ⊙ (3, 4) = (1·3, 2·4) = (3, 8) …
    0 credits | 19 pages | 944.40 KB | 1 year ago
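The slide excerpt above splits logistic regression into z = wᵀx + b followed by a = σ(z), and defines the Hadamard (elementwise) product. A short NumPy sketch of both; the weight and input values are arbitrary:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Logistic regression split into two computations: z = w^T x + b, then a = sigma(z).
    w = np.array([0.5, -0.3])   # arbitrary weights
    x = np.array([1.0, 2.0])    # arbitrary input
    b = 0.1
    z = w @ x + b
    a = sigmoid(z)
    print(z, a)

    # Hadamard product: elementwise product, (s * t)_j = s_j * t_j.
    s = np.array([1, 2])
    t = np.array([3, 4])
    print(s * t)                # [3 8]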
  • PDF document: Machine Learning Pytorch Tutorial

    …knowledge of NumPy will also be useful! What is PyTorch? ● A machine learning framework in Python. ● Two main features: ○ N-dimensional Tensor computation (like NumPy) on GPUs ○ Automatic differentiation … Subtraction: z = x - y ● Power: y = x.pow(2). Common arithmetic functions are supported, such as … Tensors – Common Operations ● Transpose: transpose two specified dimensions…
    0 credits | 48 pages | 584.86 KB | 1 year ago
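The tutorial excerpt above lists common tensor operations (subtraction, pow, transpose). A brief PyTorch sketch of those calls; the shapes and values are arbitrary and PyTorch is assumed to be installed:

    import torch

    x = torch.arange(6, dtype=torch.float32).reshape(2, 3)
    y = torch.ones(2, 3)

    z = x - y              # elementwise subtraction
    p = x.pow(2)           # elementwise power
    t = x.transpose(0, 1)  # swap the two specified dimensions -> shape (3, 2)

    print(z.shape, p.shape, t.shape)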
  • PDF document: AI大模型千问 qwen 中文文档 (Qwen large-model documentation, Chinese)

    …"model": "Qwen/Qwen1.5-72B-Chat", "prompt": "My favorite food is", "max_tokens": 512 }' | jq -r '.choices[0].text' 3. Send a chat request to the endpoint: curl -L http://$IP:8000/v1/chat/completions \ -H "Content-Type: … }, { "role": "user", "content": "What is the best food?" } ], "max_tokens": 512 }' | jq -r '.choices[0].message.content' 1.11.4 Scaling the service with SkyPilot Serve: 1. Scaling Qwen with SkyPilot Serve is easy; just run: … me the python code for quick sorting a list of integers." } ], "max_tokens": 512 }' | jq -r '.choices[0].message.content' 1.11.5 Calling Qwen1.5 from a chat GUI: the Qwen1.5 service can be called through a GUI via FastChat: 1. Start a…
    0 credits | 56 pages | 835.78 KB | 1 year ago
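The excerpt above posts chat requests to an OpenAI-compatible endpoint with curl and extracts .choices[0].message.content with jq. A rough Python equivalent using the requests library; the host and port are an assumption (the excerpt uses http://$IP:8000) and should be adjusted for your deployment:

    import requests

    # Assumed address of an OpenAI-compatible server, as in the excerpt.
    url = "http://localhost:8000/v1/chat/completions"

    payload = {
        "model": "Qwen/Qwen1.5-72B-Chat",
        "messages": [{"role": "user", "content": "What is the best food?"}],
        "max_tokens": 512,
    }

    resp = requests.post(url, json=payload, timeout=60)
    resp.raise_for_status()
    # The same field the excerpt selects with jq: .choices[0].message.content
    print(resp.json()["choices"][0]["message"]["content"])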
  • PDF document: PyTorch Release Notes

    …(NGC). ‣ Tacotron 2 and WaveGlow v1.1 model. This text-to-speech (TTS) system is a combination of two neural network models: a modified Tacotron 2 model from the Natural TTS Synthesis by Conditioning WaveNet…
    0 credits | 365 pages | 2.94 MB | 1 year ago
  • PDF document: 动手学深度学习 v2.0 (Dive into Deep Learning v2.0)

    …Dₓf(x), (2.4.2) where the symbols d/dx and D are differentiation operators. We can use the following rules to differentiate common functions: • DC = 0 (C is a constant) • Dxⁿ = nxⁿ⁻¹ (power rule, n is any real number) • Deˣ = eˣ • D ln(x) = 1/x. To differentiate a function built from such common functions, the following rules are convenient. Suppose the functions f and g are both differentiable and C is a constant; then: … features = np.random.normal(size=(n_train + n_test, 1)); np.random.shuffle(features); poly_features = np.power(features, np.arange(max_degree).reshape(1, -1)); for i in range(max_degree): poly_features[:, i] … ability, which lets the human brain allocate resources more wisely in order to survive, grow, and socialize, for example by spotting predators and finding food and mates. 10.1.1 Attention cues in biology: how is attention applied to the visual world? This starts from the now widely used two-component framework, which dates back to William James in the 1890s, regarded as the "father of American psychology" (James, 2007). In this framework, subjects selectively direct the focus of attention based on nonvolitional and volitional cues.
    0 credits | 797 pages | 29.45 MB | 1 year ago
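The excerpt above quotes the power rule Dxⁿ = nxⁿ⁻¹ and builds polynomial features with np.power. A small sketch that checks the power rule with a finite difference and reproduces the feature construction; n, x, and the array sizes are arbitrary choices:

    import numpy as np

    def numeric_derivative(f, x, h=1e-6):
        """Central finite-difference approximation of f'(x)."""
        return (f(x + h) - f(x - h)) / (2 * h)

    n, x = 3, 2.0
    f = lambda t: t ** n
    print(numeric_derivative(f, x))   # approximately 12.0
    print(n * x ** (n - 1))           # power rule: n * x^(n-1) = 12.0

    # Polynomial features as in the excerpt: column i holds x^i.
    max_degree = 4
    features = np.random.normal(size=(5, 1))
    poly_features = np.power(features, np.arange(max_degree).reshape(1, -1))
    print(poly_features.shape)        # (5, 4)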
  • PDF document: 深度学习与PyTorch入门实战 - 12. 数学运算 (Hands-on Deep Learning with PyTorch - 12. Mathematical Operations)

    …Round … basic matmul ▪ torch.mm ▪ only for 2d ▪ torch.matmul ▪ @ … An example: >2d tensor matmul? … Power, Exp, log … Approximation ▪ .floor(), .ceil() ▪ .round() ▪ .trunc(), .frac() … clamp ▪ gradient clipping…
    0 credits | 11 pages | 1015.16 KB | 1 year ago
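The slide excerpt above contrasts torch.mm (2-D matrices only) with torch.matmul / the @ operator for higher-dimensional inputs, and mentions clamp in the context of gradient clipping. A brief PyTorch sketch of both points; the shapes and clipping bounds are arbitrary:

    import torch

    a = torch.randn(2, 3)
    b = torch.randn(3, 4)
    print(torch.mm(a, b).shape)      # torch.mm: 2-D only -> (2, 4)

    c = torch.randn(5, 2, 3)
    d = torch.randn(5, 3, 4)
    print(torch.matmul(c, d).shape)  # torch.matmul / @: batched (>2-D) inputs -> (5, 2, 4)

    # clamp as a crude form of gradient clipping: limit each entry to [-1, 1].
    w = torch.randn(3, requires_grad=True)
    loss = (w * 100).sum()
    loss.backward()
    w.grad = w.grad.clamp(-1.0, 1.0)
    print(w.grad)                    # every entry now lies in [-1, 1]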
33 results in total, across 4 pages.