IT文库

Category

All · Cloud Computing & Big Data (29) · Machine Learning (29)

Language

All · Chinese (Simplified) (17) · English (12)

Format

All · PDF (29)

This search took 0.049 seconds and found about 29 matching results.
  • PDF: 深度学习下的图像视频处理技术 (Image and Video Processing Techniques with Deep Learning) - 沈小勇

    Promising results both visually and quantitatively. Fully scalable: arbitrary input size, arbitrary scale factor, arbitrary number of temporal frames (data from Vid4 [Ce Liu et al.]). Motion estimation feeds a fully convolutional encoder-decoder with skip connections, and the parameter-free design supports arbitrary scale factors (2×, 3×, 4×). Running-time comparison at scale factor 4× on 31 frames: MFSR [Ma et al., 2015] takes 10 min/frame, with DESR [Liao et al., 2015] measured under the same setting.
    0 credits | 121 pages | 37.75 MB | 1 year ago
  • PDF: keras tutorial

    random_uniform_variable(shape, mean, scale) — here, shape denotes the rows and columns as a tuple, mean is the mean of the uniform distribution, and scale is its standard deviation. Initializing with a specified scale: from keras.models import Sequential; from keras.layers import Activation, Dense; from keras import initializers; my_init = initializers.VarianceScaling(scale=1.0, mode='fan_in'); model.add(Dense(512, activation='relu', input_shape=(784,), kernel_initializer=my_init)) — where scale represents the scaling factor and mode is any one of fan_in, fan_out, and fan_avg.
    0 credits | 98 pages | 1.57 MB | 1 year ago
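The excerpt above configures VarianceScaling but does not show its effect. Below is a minimal NumPy sketch of the fan_in rule it describes — `variance_scaling` is a hypothetical helper, and real Keras draws from a truncated normal rather than the plain normal used here:

```python
import numpy as np

def variance_scaling(shape, scale=1.0, mode="fan_in", seed=0):
    # Draw weights whose variance is scale / n, where n depends on the
    # chosen mode (plain normal here; Keras uses a truncated normal).
    fan_in, fan_out = shape
    n = {"fan_in": fan_in, "fan_out": fan_out,
         "fan_avg": (fan_in + fan_out) / 2.0}[mode]
    stddev = np.sqrt(scale / n)
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.0, scale=stddev, size=shape)

# A (784, 512) dense-layer kernel, matching the tutorial's
# Dense(512, input_shape=(784,)) example.
w = variance_scaling((784, 512), scale=1.0, mode="fan_in")
```

With mode='fan_in' the sample standard deviation lands near sqrt(1/784) ≈ 0.036, which is what keeps activations from growing with layer width.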
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques

    The range [xmin, xmax] is divided equally, so each bin covers a range of s (where s is referred to as the scale). Hence, [xmin, xmin + s) maps to bin 0, [xmin + s, xmin + 2s) maps to bin 1, and so on. In code: import numpy as np; def get_scale(x_min, x_max, b): return (x_max - x_min) * 1.0 / (2**b). Quantizing a given vector: x = np.minimum(x, x_max); x = np.maximum(x, x_min); scale = get_scale(x_min, x_max, b); x_q = np.floor((x - x_min) / scale), clamping the quantized value to be less than 2^b - 1.
    0 credits | 33 pages | 1.96 MB | 1 year ago
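The chapter's get_scale and clamping steps can be assembled into a runnable sketch. The `quantize`/`dequantize` names are illustrative, and dequantizing to each bin's midpoint is an assumption not shown in the excerpt:

```python
import numpy as np

def get_scale(x_min, x_max, b):
    # Width of each of the 2^b bins covering [x_min, x_max].
    return (x_max - x_min) * 1.0 / (2 ** b)

def quantize(x, x_min, x_max, b):
    # Clamp into range, then map every value to its bin index.
    x = np.clip(x, x_min, x_max)
    scale = get_scale(x_min, x_max, b)
    x_q = np.floor((x - x_min) / scale)
    # Clamp so x_max itself falls in the top bin, 2^b - 1.
    return np.minimum(x_q, 2 ** b - 1).astype(np.int64)

def dequantize(x_q, x_min, x_max, b):
    # Reconstruct each value at the midpoint of its bin.
    scale = get_scale(x_min, x_max, b)
    return x_min + (x_q + 0.5) * scale

x = np.array([-1.0, -0.4, 0.0, 0.3, 1.0])
x_q = quantize(x, -1.0, 1.0, b=3)       # bins 0..7, scale = 0.25
x_hat = dequantize(x_q, -1.0, 1.0, b=3)
print(x_q)    # → [0 2 4 5 7]
```

With b=3 the scale is (1 - (-1)) / 8 = 0.25, and -1.0 reconstructs to the bin-0 midpoint -0.875.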
  • PDF: 《TensorFlow 快速入门与实战》1 - TensorFlow 初印象 (First Impressions of TensorFlow)

    Jeff Dean, Google Brain Team, "Building Intelligent Systems with Large Scale Deep Learning". Adopters include Intel, Uber, and others. DistBelief was Google's first-generation distributed deep learning system (Jeff Dean et al., "Large Scale Distributed Deep Networks", NIPS 2012); TensorFlow is Google's second-generation system.
    0 credits | 34 pages | 35.16 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques

    Create a tensor with a normal (gaussian) distribution: x = np.random.normal(size=50000, loc=0.0, scale=1.0); num_clusters = 8; x_decoded, centroids, reconstruction_error = simulate_clustering(x, num_clusters). Figure 5-7 (b) plots the reconstruction error against the number of clusters, with both the x and y axes in log scale. Figure 5-7 (c) compares the reconstruction errors of clustering and quantization on the same x, again with both axes in log scale.
    0 credits | 34 pages | 3.18 MB | 1 year ago
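simulate_clustering is the book's helper; here is a hypothetical 1-D k-means stand-in with the call signature shown in the excerpt, assuming mean-squared error as the reconstruction metric:

```python
import numpy as np

def simulate_clustering(x, num_clusters, num_iters=20, seed=0):
    # 1-D k-means over the values, then reconstruct each value
    # from its assigned centroid (illustrative stand-in only).
    rng = np.random.default_rng(seed)
    centroids = rng.choice(x, size=num_clusters, replace=False)
    for _ in range(num_iters):
        # Assign each value to its nearest centroid.
        assign = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its assigned values.
        for k in range(num_clusters):
            if np.any(assign == k):
                centroids[k] = x[assign == k].mean()
    x_decoded = centroids[assign]
    reconstruction_error = float(np.mean((x - x_decoded) ** 2))
    return x_decoded, centroids, reconstruction_error

x = np.random.default_rng(1).normal(size=5000, loc=0.0, scale=1.0)
x_decoded, centroids, err = simulate_clustering(x, num_clusters=8)
```

Eight clusters correspond to storing a 3-bit index per weight; for standard-normal data the resulting error sits well below 0.1, which is the trade-off the chapter's figure 5-7 explores.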
  • PDF: Keras: 基于 Python 的深度学习库 (Keras: Deep Learning Library Based on Python)

    BatchNormalization [source]: keras.layers.BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', ...). momentum: momentum for the moving mean and moving variance. epsilon: small float added to the variance to avoid division by zero. center: if True, add the offset beta to the normalized tensor; if False, beta is ignored. scale: if True, multiply by gamma; if False, gamma is not used — this can be disabled when the next layer is linear (or, e.g., nn.relu), since the scaling will be done by the next layer. From the pretrained-models (Applications) section: returns a Keras Model object; reference: "Very Deep Convolutional Networks for Large-Scale Image Recognition" — cite this paper if you use VGG in your research. License: the pretrained weights are ported from those released by VGG at Oxford under a Creative Commons license.
    0 credits | 257 pages | 1.19 MB | 1 year ago
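A minimal NumPy sketch of what the center and scale flags control at training time — `batch_norm` is an illustrative function, and the moving statistics governed by momentum are omitted:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-3, center=True, scale=True):
    # Normalize a (batch, features) input using the batch statistics.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance
    if scale:                  # multiply by gamma
        x_hat = x_hat * gamma
    if center:                 # add the offset beta
        x_hat = x_hat + beta
    return x_hat

x = np.random.default_rng(0).normal(loc=5.0, scale=2.0, size=(256, 4))
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
```

With the default gamma=1 and beta=0 each feature column comes out with mean ≈ 0 and standard deviation ≈ 1, which is why the docs say gamma can be dropped when the next layer rescales anyway.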
  • PDF: 动手学深度学习 v2.0 (Dive into Deep Learning v2.0)

    We can now create a function to visualize these samples: def show_images(imgs, num_rows, num_cols, titles=None, scale=1.5): #@save — it sets figsize = (num_cols * scale, num_rows * scale) and calls _, axes = d2l.plt.subplots(num_rows, num_cols, figsize=figsize). Labels (of shape (n_train + n_test,)) are generated with labels = np.dot(poly_features, true_w); labels += np.random.normal(scale=0.1, size=labels.shape), and the monomials stored in poly_features are rescaled by the gamma function, where Γ(n) = (n - 1)!. In batch normalization, µ̂B is the sample mean of minibatch B and σ̂B its sample standard deviation; after standardization the minibatch has zero mean and unit variance. Since unit variance (like other magic numbers) is a subjective choice, we usually include a scale parameter γ and a shift parameter β, with the same shape as x, learned jointly with the other model parameters.
    0 credits | 797 pages | 29.45 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

    The vocabulary is built using heuristics like the top N most frequent words in the corpus, and it is often straightforward to scale model quality up or down by increasing or decreasing these two parameters. "My embedding table is huge! Help me!" — while embedding tables can dominate the footprint, the embedding-table approach removes the architectural bottleneck, at which point we can scale the embedding model's quality and footprint metrics as discussed, and combine other ideas from the chapter.
    0 credits | 53 pages | 3.92 MB | 1 year ago
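A toy sketch of the embedding-table idea described above — a vocabulary capped at the top-N most frequent words, one row per word, plus an out-of-vocabulary bucket. `EmbeddingTable` and its details are hypothetical:

```python
import numpy as np

class EmbeddingTable:
    # vocab_size x embedding_dim lookup matrix; row 0 is reserved
    # for out-of-vocabulary words.
    def __init__(self, vocab, embedding_dim, seed=0):
        self.index = {w: i + 1 for i, w in enumerate(vocab)}
        rng = np.random.default_rng(seed)
        self.table = rng.normal(size=(len(vocab) + 1, embedding_dim))

    def lookup(self, words):
        # Map each word to its row; unknown words hit the OOV row.
        ids = [self.index.get(w, 0) for w in words]
        return self.table[ids]

# Vocabulary and dimension are the two parameters the chapter says
# you can grow or shrink to trade quality against footprint.
emb = EmbeddingTable(vocab=["the", "cat", "sat"], embedding_dim=8)
vecs = emb.lookup(["cat", "unicorn"])   # "unicorn" falls in the OOV row
```

Footprint here is (vocab_size + 1) × embedding_dim floats, which is exactly why the chapter treats vocabulary size and embedding width as the scaling knobs.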
  • PDF: 《TensorFlow 快速入门与实战》8 - TensorFlow 社区参与指南 (Guide to Participating in the TensorFlow Community)

    TFX — a production machine learning platform based on TensorFlow. Baylor, Denis, et al. "TFX: A TensorFlow-based production-scale machine learning platform." Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017.
    0 credits | 46 pages | 38.88 MB | 1 year ago
  • PDF: 《TensorFlow 2项目进阶实战》4 - 商品检测篇:使用 RetinaNet 瞄准你的货架商品 (Product Detection: Targeting Shelf Products with RetinaNet)

    Application: detecting shelf products with RetinaNet — "Hello TensorFlow", try it! Extension: a survey of common object detection datasets. General-purpose datasets include the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and the PASCAL Visual Object Classes (VOC) Challenge, with object categories such as 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', ... ILSVRC details: 21,841 fine-grained categories; 14 million+ images in total; the total number of images with bounding boxes is listed as well.
    0 credits | 67 pages | 21.59 MB | 1 year ago
A total of 29 results across 3 pages.