IT文库

Category: Cloud Computing & Big Data (27) · Machine Learning (27)
Language: English (24) · Chinese, Simplified (3)
Format: PDF (27)
Search completed in 0.022 seconds; about 27 matching results found.
  • PDF 《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques

    …especially in signal processing. It is a process of converting high-precision continuous values to low-precision discrete values. Take a look at figure 2-3. It shows a sine wave and an overlaid quantized representation. The quantized sine wave is a low-precision representation that takes integer values in the range [0, 5]. As a result, the quantized wave requires less transmission bandwidth. … a scheme for going from this higher-precision domain (32 bits) to a quantized domain (b-bit values). This process is nothing but (cue drum roll!) ...Quantization! Before we get our hands dirty…
    0 credits | 33 pages | 1.96 MB | 1 year ago
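The uniform quantization the snippet describes can be sketched in a few lines. This is an illustrative stand-in, not the book's code: it maps high-precision float values onto a small set of integer levels, here the range [0, 5].

```python
import numpy as np

def quantize(x, num_levels=6):
    """Map values in [x.min(), x.max()] onto integers 0..num_levels-1."""
    lo, hi = x.min(), x.max()
    q = np.round((x - lo) / (hi - lo) * (num_levels - 1))
    return q.astype(np.int64)

t = np.linspace(0.0, 2.0 * np.pi, 100)
wave = np.sin(t)        # high-precision continuous values
q = quantize(wave)      # low-precision integers in [0, 5]
```

With 6 levels the quantized wave needs only 3 bits per sample instead of 32, which is where the bandwidth saving comes from.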
  • PDF 《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation

    …that a point in such a region is a set of well-defined values for each of those parameters. The parameters can take discrete or continuous values. It is called a "search" space because we are searching… a $$True$$ value means the technique is turned on and a $$False$$ value means it is turned off. This search space has four possible points… Let's take another example of a search space with two parameters. However, … this search space has infinitely many points because the second parameter can take infinitely many values. In the context of deep learning, the parameters that influence the process of learning are called…
    0 credits | 33 pages | 2.48 MB | 1 year ago
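A discrete search space like the one in the snippet is easy to enumerate. A hedged sketch (the parameter names below are made up, not from the book): two boolean hyperparameters, each True = technique on, False = off, giving 2 × 2 = 4 points.

```python
from itertools import product

# Two boolean switches define the search space; every combination is a point.
space = {"use_quantization": [False, True], "use_pruning": [False, True]}
points = [dict(zip(space, combo)) for combo in product(*space.values())]
```

Replacing either list with a continuous range (e.g. a learning rate in (0, 1)) is what makes a search space infinite, as the second example in the chapter notes.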
  • PDF 《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques

    …the data that we are quantizing is not uniformly distributed, i.e. the data is more likely to take values in a certain range than in another equally sized range. It creates equal-sized quantization ranges (bins)… zero-valued weights in each training epoch. The result of such a training process is p% weights with zero values. Sparse compressed models achieve a higher compression ratio, which results in lower transmission… retained nodes have fewer connections. Let's do an exercise to convince ourselves that setting parameter values to zero indeed results in a higher compression ratio. Figure 5-1: An illustration of pruning weights…
    0 credits | 34 pages | 3.18 MB | 1 year ago
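The "p% weights with zero values" idea is magnitude pruning, which can be sketched as follows (assumed details, not the book's implementation): zero out the p% smallest-magnitude weights, leaving a sparse tensor that compresses well.

```python
import numpy as np

def prune_smallest(weights, p=0.5):
    """Set the p fraction of smallest-magnitude weights to zero."""
    k = int(p * weights.size)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    return np.where(np.abs(weights) < threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
pruned = prune_smallest(w, p=0.5)
sparsity = float((pruned == 0.0).mean())
```

Storing only the non-zero entries (plus their indices) is what yields the higher compression ratio the chapter demonstrates.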
  • PDF 《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques

    …dataset N× the size? What are the constraining factors? An image transformation recomputes the pixel values. The rotation of an RGB image of 100x100 requires at least 100x100x3 (3 channels) computations. … is resized to 224x224px prior to the transformations. A value transformation operates on the pixel values. Let's take the brightness transformation as an example. Figure 3-6 shows an image 2x brighter (bottom right)… Any channel values that exceed 255 after the 2x brightness transformation are clipped to 255. … These values are clipped to 255. We…
    0 credits | 56 pages | 18.93 MB | 1 year ago
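The 2x brightness value transformation with clipping can be sketched in two lines (an illustrative version, not the book's code): scale every channel value, then clip anything above 255 back to 255.

```python
import numpy as np

def brighten(img, factor=2.0):
    """Scale uint8 channel values by `factor`, clipping the result to [0, 255]."""
    out = img.astype(np.float64) * factor
    return np.clip(out, 0, 255).astype(np.uint8)

pixels = np.array([[10, 100, 200]], dtype=np.uint8)
bright = brighten(pixels)   # 200 * 2 = 400 exceeds 255, so it is clipped
```

Note the cast to float before scaling: multiplying uint8 values directly would wrap around instead of saturating.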
  • PDF Experiment 1: Linear Regression

    …of heights for various boys between the ages of two and eight. The y-values are the heights measured in meters, and the x-values are the ages of the boys corresponding to the heights. Each height and… x]; % Add a column of ones to x … From this point on, you will need to remember that the age values from your training data are actually in the second column of x. This will be important when plotting… converges (this will take a total of about 1500 iterations). After convergence, record the final values of θ0 and θ1 that you get, and plot the straight-line fit from your algorithm on the same graph as…
    0 credits | 7 pages | 428.11 KB | 1 year ago
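The experiment's setup translates directly to code. A sketch under stated assumptions (the data and learning rate below are synthetic, not the experiment's): prepend a column of ones to x for the intercept term, then run batch gradient descent for the stated ~1500 iterations.

```python
import numpy as np

def fit(x, y, alpha=0.05, iters=1500):
    """Fit theta0 (intercept) and theta1 (slope) by batch gradient descent."""
    m = len(y)
    X = np.column_stack([np.ones(m), x])   # ages end up in the second column
    theta = np.zeros(2)
    for _ in range(iters):
        theta -= (alpha / m) * X.T @ (X @ theta - y)
    return theta

ages = np.array([2.0, 4.0, 6.0, 8.0])
heights = 0.75 + 0.06 * ages               # illustrative linear relationship
theta = fit(ages, heights)                 # recovers [theta0, theta1]
```

Because the ones column sits first, `theta[0]` is the intercept θ0 and `theta[1]` is the slope θ1, matching the quantities the experiment asks you to record.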
  • PDF keras tutorial

    …sparse=True) >>> print(a) SparseTensor(indices=Tensor("Placeholder_8:0", shape=(?, 2), dtype=int64), values=Tensor("Placeholder_7:0", shape=(?,), dtype=float32), dense_shape=Tensor("Const:0", shape=(2,)…)… mean represents the mean of the random values to generate; stddev represents the standard deviation of the random values to generate; seed represents the seed used to generate random numbers. RandomUniform…, where minval represents the lower bound of the random values to generate and maxval represents the upper bound of the random values to generate. TruncatedNormal generates values using a truncated…
    0 credits | 98 pages | 1.57 MB | 1 year ago
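The initializer parameters the tutorial lists can be illustrated with a NumPy stand-in (this is not Keras itself, just a sketch of the same semantics): mean/stddev parameterize RandomNormal, minval/maxval bound RandomUniform, and seed makes either reproducible.

```python
import numpy as np

def random_normal(shape, mean=0.0, stddev=0.05, seed=None):
    """NumPy analogue of Keras RandomNormal: draws N(mean, stddev) values."""
    return np.random.default_rng(seed).normal(mean, stddev, size=shape)

def random_uniform(shape, minval=-0.05, maxval=0.05, seed=None):
    """NumPy analogue of Keras RandomUniform: draws from [minval, maxval)."""
    return np.random.default_rng(seed).uniform(minval, maxval, size=shape)

w = random_uniform((3, 4), seed=42)   # same seed -> same initial weights
```

In Keras proper these would be passed as `kernel_initializer` to a layer; the point here is only what the parameters control.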
  • PDF 《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

    …dangerous animals, and represent each animal using two features, say cute and dangerous. We can assign values between 0.0 and 1.0 to these two features for different animals. The higher the value, the more that particular feature represents the given animal. In Table 4-1 we manually assigned values for the cute and dangerous features for six animals, and we are calling the tuple of these two features an… between 0.0 and 1.0. We manually picked these values for illustration. Going through Table 4-1, cat and dog have high values for the 'cute' feature and low values for the 'dangerous' feature. On the other…
    0 credits | 53 pages | 3.92 MB | 1 year ago
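The two-feature representation can be sketched as tiny embedding vectors. The numbers below are illustrative guesses, not the book's Table 4-1; the point is that animals with similar features end up with similar vectors.

```python
import numpy as np

# Each animal is a (cute, dangerous) tuple with values in [0, 1].
animals = {
    "cat":   np.array([0.9, 0.1]),
    "dog":   np.array([0.8, 0.2]),
    "tiger": np.array([0.3, 0.9]),
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, lower when they diverge."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

cat_dog = cosine(animals["cat"], animals["dog"])       # both cute, not dangerous
cat_tiger = cosine(animals["cat"], animals["tiger"])   # opposite feature profiles
```

Learned embeddings generalize exactly this idea: many more features, with the values learned from data rather than assigned by hand.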
  • PDF Lecture 1: Overview

    …we are primarily interested in prediction. We are interested in predicting only one thing. The possible values of what we want to predict are specified, and we have some training cases for which its value is… that is, variables whose values are unknown, such that the corresponding design matrix will then have "holes" in it. The goal of matrix completion is to infer plausible values for the missing entries. … Optimization and integration usually involve finding the best values for some parameters (an optimization problem), or averaging over many plausible values (an integration problem). How can we do this efficiently…
    0 credits | 57 pages | 2.41 MB | 1 year ago
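The matrix-completion idea in the snippet can be made concrete with a minimal sketch (the simplest possible scheme, per-column means; real methods such as low-rank factorization are far more sophisticated): the design matrix has "holes" (NaN), and we infer plausible values for the missing entries.

```python
import numpy as np

# A design matrix with two "holes" marked as NaN.
X = np.array([[1.0,    2.0],
              [np.nan, 4.0],
              [3.0,    np.nan]])

col_means = np.nanmean(X, axis=0)              # means ignoring the NaNs
filled = np.where(np.isnan(X), col_means, X)   # plug each hole with its column mean
```

Column-mean imputation is only a baseline, but it shows the shape of the problem: every hole gets a plausible value consistent with the observed entries.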
  • PDF Lecture Notes on Gaussian Discriminant Analysis, Naive

    …ψ^{y^(i)} (1 − ψ)^{1 − y^(i)}. We then maximize the log-likelihood function ℓ(ψ, µ0, µ1, Σ) so as to get the optimal values for ψ, µ0, µ1, and Σ, such that the resulting GDA model can best fit the given training data. In particular… the finite training data. Apparently, this is quite unreasonable! Similarly, when some of the label values (e.g., ȳ) do not appear in the given training data, we have p(ȳ) = (1/m) Σ_{i=1}^m 1(y^(i) = ȳ) = 0. … (Σ_{i=1}^m 1(y^(i) = y ∧ x_j^(i) = x) + 1) / (Σ_{i=1}^m 1(y^(i) = y) + v_j), where v_j is the number of possible values of the j-th feature. In our case where x_j ∈ {0, 1} for ∀j ∈ [n], we have v_j = 2 for ∀j. Note that…
    0 credits | 19 pages | 238.80 KB | 1 year ago
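The Laplace-smoothed estimate at the end of the snippet is a one-liner (a sketch of the formula, with hypothetical counts): add 1 to the joint count and v_j (the number of values the j-th feature can take, 2 for a binary feature) to the denominator, so combinations unseen in the training data never get probability exactly zero.

```python
def smoothed_conditional(count_joint, count_label, v_j=2):
    """Laplace-smoothed estimate of p(x_j = x | y):
    (sum 1(y=y and x_j=x) + 1) / (sum 1(y=y) + v_j)."""
    return (count_joint + 1) / (count_label + v_j)

# A feature value never observed with this label still gets mass 1/12,
# instead of the unsmoothed (and unreasonable) estimate of 0.
p_unseen = smoothed_conditional(0, 10)
```

This is exactly the fix the notes motivate: without the +1 and +v_j, a single unseen feature/label combination would zero out the whole Naive Bayes posterior.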
  • PDF Lecture 5: Gaussian Discriminant Analysis, Naive Bayes

    …+ 1) / (Σ_{i=1}^m 1(y^(i) = y) + v_j), where k is the number of possible values of y (k = 2 in our case), and v_j is the number of possible values of the j-th feature (v_j = 2 for ∀j = 1, …, n in our case)… x^(i) = [x^(i)_1, x^(i)_2, …, x^(i)_{n_i}]^T. The j-th feature of x^(i) takes a finite set of values, x^(i)_j ∈ {1, 2, …, v}, for ∀j = 1, …, n_i. For example, x^(i)_j indicates the j-th word… For each training example, the features are i.i.d.…
    0 credits | 122 pages | 1.35 MB | 1 year ago
27 results in total · page 1 of 3