IT文库

Categories

All · Backend Development (812) · Java (243) · Cloud Computing & Big Data (205) · Spring (184) · Miscellaneous (158) · Python (149) · Databases (147) · VirtualBox (113) · Weblate (105) · C++ (96)

Languages

All · English (1096) · Chinese, Simplified (211) · Chinese, Traditional (34) · Korean (8) · German (7) · Spanish (7) · Japanese (7) · Russian (7) · English (7) · French (6)

Formats

All · PDF (1395)
 
This search took 0.027 seconds and found about 1,000 results.
  • PDF: "Efficient Deep Learning Book" [EDL] Chapter 6 - Advanced Learning Techniques - Technical Review

    … than vanilla distillation. We will now go over stochastic depth, a technique which can be useful if you are training very deep networks. Stochastic Depth: in deep networks with hundreds of layers … in a residual block, the output of the previous layer skips the layers represented by the block's function. The stochastic depth idea takes this one step further by probabilistically dropping a residual block … Under these conditions, the expected network depth during training is reduced; by expected network depth we informally mean the number of blocks that are enabled in expectation …
    0 credits | 31 pages | 4.03 MB | 1 year ago
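The snippet above describes stochastic depth: during training each residual block is skipped with some probability, shrinking the expected network depth. A minimal sketch of the idea (the helper name, scalar "blocks", and the linear-decay survival schedule are our assumptions, not taken from the book):

```python
import random

def stochastic_depth_forward(x, blocks, survival_probs, training=True):
    """Forward pass through residual blocks with stochastic depth.

    Each block f is applied as x + f(x); during training, a block with
    survival probability p is skipped entirely with probability 1 - p.
    """
    for f, p in zip(blocks, survival_probs):
        if training:
            if random.random() < p:  # keep the block
                x = x + f(x)
            # else: identity shortcut only, the block is dropped
        else:
            # at test time, scale the residual by its survival probability
            x = x + p * f(x)
    return x

# Toy example: constant scalar "blocks" so the expected depth is visible.
blocks = [lambda v: 1.0 for _ in range(4)]
# Linear decay of survival probability from 1.0 down to 0.5, a common
# schedule; the expected number of active blocks is sum(p_l).
probs = [1.0 - 0.5 * l / 3 for l in range(4)]
expected_depth = sum(probs)
```

With four blocks and this schedule the expected active depth is 3, matching the snippet's informal definition of expected network depth.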
  • PDF: 0. Machine Learning with ClickHouse

    https://github.com/clickhouse/ClickHouse … There are stochastic regression methods in ClickHouse: stochasticLinearRegression and stochasticLogisticRegression. Stochastic methods do support multiple factors; that's not the most important difference. … Stochastic linear regression in ClickHouse: stochasticLinearRegression(parameters)(target, x1, ..., xN). Available parameters: learning_rate, l2_regularization, … Nesterov. All parameters are specified for stochastic gradient descent. Related wiki page: https://en.wikipedia.org/wiki/Stochastic_gradient_descent … Stochastic model with default parameters: SELECT …
    0 credits | 64 pages | 1.38 MB | 1 year ago
  • PDF: 1. Machine Learning with ClickHouse

    https://github.com/clickhouse/ClickHouse … There are stochastic regression methods in ClickHouse: stochasticLinearRegression and stochasticLogisticRegression. Stochastic methods do support multiple factors; that's not the most important difference. … Stochastic linear regression in ClickHouse: stochasticLinearRegression(parameters)(target, x1, ..., xN). Available parameters: learning_rate, l2_regularization, … Momentum, Nesterov. All parameters are specified for stochastic gradient descent. Related page: https://www.jianshu.com/p/9329294d56d2 … Stochastic model with default parameters: SELECT stochasticLinearRegression( …
    0 credits | 64 pages | 1.38 MB | 1 year ago
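The ClickHouse snippets list stochasticLinearRegression's tuning knobs (learning rate, L2 regularization, batch size, momentum/Nesterov method). To illustrate the kind of update those parameters control, here is a plain-Python sketch of mini-batch SGD with L2 regularization and classical momentum; the parameter names mirror the docs, but the function name and implementation details are our assumptions, not ClickHouse's actual code:

```python
import random

def stochastic_linear_regression(data, learning_rate=0.01, l2=0.0,
                                 batch_size=1, momentum=0.9, epochs=200):
    """Mini-batch SGD for linear regression with L2 penalty and momentum.

    data: list of (target, features) pairs. Returns (weights, bias).
    """
    n_features = len(data[0][1])
    w = [0.0] * n_features
    b = 0.0
    vw = [0.0] * n_features  # momentum buffers
    vb = 0.0
    for _ in range(epochs):
        random.shuffle(data)
        for start in range(0, len(data), batch_size):
            batch = data[start:start + batch_size]
            gw = [0.0] * n_features
            gb = 0.0
            for y, x in batch:
                err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
                for j in range(n_features):
                    gw[j] += err * x[j]
                gb += err
            m = len(batch)
            for j in range(n_features):
                g = gw[j] / m + l2 * w[j]  # L2 penalty on weights only
                vw[j] = momentum * vw[j] - learning_rate * g
                w[j] += vw[j]
            vb = momentum * vb - learning_rate * (gb / m)
            b += vb
    return w, b
```

On data generated from y = 2x + 1 this recovers weights near 2 and a bias near 1.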
  • PDF: Solving Nim by the Use of Machine Learning

    (table of contents) 6.3.3 RunMlp.py … 44 · 6.3.4 Program With Stochastic Termination … 46 · 7 Comparing the Algorithms with Time Complexity … 49 · 7.1 The … 63 · 8.1.12 Difference in Time-use, Comparing the Algorithms … 64 · 8.1.13 Stochastic Termination … 66 · 8.1.14 Playing the Game … | … being an example of this. Machine learning is a type of stochastic algorithm that tries to find a solution based upon statistical data. A stochastic algorithm is an algorithm with some random elements …
    0 credits | 109 pages | 6.58 MB | 1 year ago
  • PDF: Keras: A Python-Based Deep Learning Library (Keras: 基于 Python 的深度学习库)

    layers.SeparableConv2D(filters, kernel_size, strides=(1, 1), padding='valid', data_format=None, depth_multiplier=1, activation=None, use_bias=True, depthwise_initializer='glorot_uniform', pointwise… … bias_constraint=None). Depthwise separable 2D convolution. A separable convolution first performs a depthwise spatial convolution (acting on each input channel separately), followed by a pointwise convolution that mixes the resulting output channels together. The depth_multiplier argument controls how many output channels are generated per input channel in the depthwise step. Intuitively, a separable convolution can be understood as a way of factoring a convolution kernel into two smaller kernels, or as an extreme version of an Inception block. … the image_data_format value found in json; if you have never set it, "channels_last" is used. • depth_multiplier: the number of depthwise convolution output channels for each input channel; the total number of depthwise convolution output channels will equal filters_in * depth_multiplier. • activation: the activation function to use (see activations); if you don't specify it, no activation is applied …
    0 credits | 257 pages | 1.19 MB | 1 year ago
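The Keras snippet describes the two-step factorization behind SeparableConv2D: a per-channel spatial convolution, then a 1x1 pointwise mix. A pure-Python sketch of that operation (loops over nested lists for clarity; the function name and data layout are our assumptions, not Keras's implementation):

```python
def depthwise_separable_conv2d(x, depthwise_kernels, pointwise_weights):
    """Toy depthwise-separable 2D convolution (valid padding, stride 1).

    x: [C_in][H][W] input.
    depthwise_kernels: [C_in * depth_multiplier][kH][kW]; kernel d is
        applied to input channel d // depth_multiplier (depthwise step).
    pointwise_weights: [C_out][C_in * depth_multiplier]; the 1x1 step
        mixing the depthwise outputs into C_out channels.
    """
    c_in = len(x)
    dm = len(depthwise_kernels) // c_in  # depth_multiplier
    kh, kw = len(depthwise_kernels[0]), len(depthwise_kernels[0][0])
    h_out = len(x[0]) - kh + 1
    w_out = len(x[0][0]) - kw + 1

    # Depthwise step: one spatial convolution per (channel, multiplier).
    mid = []
    for d, k in enumerate(depthwise_kernels):
        ch = x[d // dm]
        out = [[sum(k[i][j] * ch[r + i][c + j]
                    for i in range(kh) for j in range(kw))
                for c in range(w_out)] for r in range(h_out)]
        mid.append(out)

    # Pointwise step: 1x1 convolution across the depthwise outputs.
    return [[[sum(wts[d] * mid[d][r][c] for d in range(len(mid)))
              for c in range(w_out)] for r in range(h_out)]
            for wts in pointwise_weights]
```

With one input channel, one 2x2 all-ones depthwise kernel, and an identity pointwise step, each output pixel is just the sum of a 2x2 window.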
  • PDF: Cardinality and frequency estimation - CS 591 K1: Data Stream Processing and Analytics, Spring 2020

    difficult? … Stochastic averaging (Vasiliki Kalavri, Boston University, 2020): use one hash function to simulate many by splitting the hash value into two parts, using the first p bits of the M-bit hash value to select a sub-stream and the next M−p bits to compute the rank(.).
    0 credits | 69 pages | 630.01 KB | 1 year ago
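The stochastic-averaging idea in this snippet can be sketched as a Flajolet-Martin (PCSA-style) cardinality estimator: the low p bits of one hash pick a sub-stream, the remaining bits feed rank(.). The function names, the choice of SHA-1 as the hash, and the parameter defaults are our assumptions for illustration; real systems use refinements such as HyperLogLog:

```python
import hashlib

def rank(value, bits):
    """1-based position of the lowest set bit, capped at `bits`."""
    for i in range(bits):
        if (value >> i) & 1:
            return i + 1
    return bits

def fm_stochastic_averaging(items, p=4, m_bits=32):
    """FM cardinality estimate with stochastic averaging over 2**p buckets.

    The low p bits of a single hash select a sub-stream; the remaining
    m_bits - p bits are used for rank(.). 0.77351 is the classical FM
    correction factor.
    """
    m = 1 << p
    bitmaps = [0] * m
    for item in items:
        h = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:4], "big")
        bucket = h & (m - 1)  # first p bits -> sub-stream
        rest = h >> p         # next m_bits - p bits -> rank
        bitmaps[bucket] |= 1 << (rank(rest, m_bits - p) - 1)

    def lowest_zero(bm):
        # R_j = index of the lowest unset bit in bucket j's bitmap
        i = 0
        while (bm >> i) & 1:
            i += 1
        return i

    mean_r = sum(lowest_zero(bm) for bm in bitmaps) / m
    return m * (2 ** mean_r) / 0.77351
```

With 2**4 = 16 buckets the standard error is roughly 30%, so the estimate for 1000 distinct items lands near 1000, and duplicates do not change the sketch.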
  • PDF: 深度学习与PyTorch入门实战 (Hands-On Deep Learning with PyTorch) - 35. Early Stopping & Dropout

    Early Stopping, Dropout. Presenter: Long Liangqu (龙良曲). Tricks: ▪ Early Stopping ▪ Dropout ▪ Stochastic Gradient Descent. Early Stopping ▪ Regularization How-To ▪ Validation set to select parameters ▪ Monitor validation performance … Batch-Norm. Stochastic Gradient Descent ▪ stochastic ▪ not random! ▪ vs. deterministic gradient descent: https://towardsdatascience.com/difference-between-batch-gradient-descent-and-stochastic-gradient-descent-1187f1291aa1 … ▪ Not a single sample usually ▪ batch = 16, 32, 64, 128 … Why?
    0 credits | 16 pages | 1.15 MB | 1 year ago
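The slide bullets above outline early stopping: hold out a validation set, monitor validation performance, and stop when it stops improving. A generic sketch of that loop (the `step`/`validate` callback names and the patience mechanism are our assumptions, not from the slides):

```python
def train_with_early_stopping(step, validate, patience=5, max_epochs=100):
    """Generic early-stopping loop.

    step(epoch) runs one epoch of training; validate() returns the current
    validation loss. Training stops once the loss has not improved for
    `patience` consecutive epochs; returns (best_epoch, best_loss).
    """
    best_loss = float("inf")
    best_epoch = -1
    bad_epochs = 0
    for epoch in range(max_epochs):
        step(epoch)
        loss = validate()
        if loss < best_loss:
            best_loss, best_epoch, bad_epochs = loss, epoch, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # validation loss plateaued: stop early
    return best_epoch, best_loss

# Toy run: a validation-loss curve that bottoms out at epoch 10,
# then rises again (simulating overfitting).
losses = [abs(e - 10) + 1.0 for e in range(100)]
best_epoch, best_loss = train_with_early_stopping(
    step=lambda e: None, validate=iter(losses).__next__, patience=5)
```

In practice one also checkpoints the model at each improvement and restores the checkpoint from `best_epoch` after stopping.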
  • PDF: Lecture 2: Linear Regression

    (outline) Supervised Learning: Regression and Classification · Linear Regression · Gradient Descent Algorithm · Stochastic Gradient Descent · Revisiting Least Square · A Probabilistic Interpretation to Linear Regression … (Feng Li (SDU), Linear Regression, September 13, 2023) Stochastic Gradient Descent (SGD): What if the training set is huge? In the above batch gradient descent iteration, a considerable computation cost is induced! Stochastic gradient descent (SGD), also known as incremental gradient descent, is a stochastic approximation of the gradient descent optimization …
    0 credits | 31 pages | 608.38 KB | 1 year ago
  • PDF: Lecture Notes on Linear Regression

    the GD algorithm. We illustrate the convergence processes under different step sizes in Fig. 3. 3 Stochastic Gradient Descent. According to Eq. 5, it is observed that we have to visit all training data in … convergence of the GD algorithm under different step sizes. Stochastic Gradient Descent (SGD), also known as incremental gradient descent, is a stochastic approximation of the gradient descent optimization method … (θ^T x^(i) − y^(i)) x^(i) (6), and the update rule is θ_j ← θ_j − α (θ^T x^(i) − y^(i)) x_j^(i) (7). Algorithm 2: Stochastic Gradient Descent for Linear Regression. 1: Given a starting point θ ∈ dom J; 2: repeat; 3: Randomly …
    0 credits | 6 pages | 455.98 KB | 1 year ago
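The update rule quoted in this snippet, θ_j ← θ_j − α (θ^T x^(i) − y^(i)) x_j^(i) applied to one randomly chosen example per step, translates directly into code. A minimal sketch (function name, fixed seed, and the epochs-times-n sampling loop are our choices, not from the notes):

```python
import random

def sgd_linear_regression(X, y, alpha=0.01, epochs=100, seed=0):
    """SGD for linear regression, following the notes' update rule:
        theta_j <- theta_j - alpha * (theta^T x_i - y_i) * x_{i,j}
    One randomly chosen training example per update, as in Algorithm 2.
    X rows are feature vectors; include a 1 entry for the intercept.
    """
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    theta = [0.0] * d
    for _ in range(epochs * n):
        i = rng.randrange(n)  # pick a random training example
        err = sum(t * xj for t, xj in zip(theta, X[i])) - y[i]
        theta = [t - alpha * err * xj for t, xj in zip(theta, X[i])]
    return theta
```

Because each update touches a single example, one step costs O(d) instead of the O(nd) of a full batch-gradient step, which is exactly the motivation the lecture gives for SGD on huge training sets.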
  • PDF: Programming in Lean, Release 3.4.2

    about it against a background mathematical theory of arithmetic, analysis, dynamical systems, or stochastic processes. Lean employs a number of carefully chosen devices to support a clean and principled … implemented as follows: meta def prop_prover_aux : ℕ → tactic unit | 0 := fail "prop prover max depth reached" | (nat.succ n) := do split_conjs, contradiction <|> do (option.some h) ← find_disj | … reduces the hypotheses to negation-normal form, and calls prop_prover_aux with a maximum splitting depth of 30. The tactic prop_prover_aux executes the following simple loop. First, it splits any conjunctions …
    0 credits | 51 pages | 220.07 KB | 1 year ago