IT文库

Category: Cloud Computing & Big Data (21) · Machine Learning (21)
Language: English (14) · Chinese, Simplified (7)
Format: PDF (21)

This search took 0.077 seconds and found about 21 results.
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques

    … Data augmentation and distillation are widely different learning techniques. While data augmentation is concerned with samples and labels, distillation transfers knowledge from a large model or an ensemble of models to smaller models. … [Labelers work by] looking at each example and assigning it the label that they believe describes it best. The assigned labels are subject to the perception of their labelers. For example, a human labeler might perceive … [an ambiguous sample] that can potentially confuse the human labelers to choose a 1 or a 7 as the target label. Obtaining labels in many cases requires significant human involvement, and for that reason can be expensive and slow …

    0 码力 | 56 pages | 18.93 MB | 1 year ago
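    A minimal sketch of the soft-label distillation objective this excerpt alludes to (my own illustration rather than the book's code; teacher_logits and student_logits are assumed model outputs):

        import tensorflow as tf

        def distillation_loss(y_true, student_logits, teacher_logits,
                              temperature=4.0, alpha=0.5):
            # Hard-label term: student vs. the ground-truth class indices.
            hard = tf.keras.losses.sparse_categorical_crossentropy(
                y_true, student_logits, from_logits=True)
            # Soft-label term: student matches the teacher's softened probabilities.
            soft_targets = tf.nn.softmax(teacher_logits / temperature)
            soft = tf.keras.losses.categorical_crossentropy(
                soft_targets, student_logits / temperature, from_logits=True)
            # Blend the two objectives; alpha and temperature are tunable.
            return alpha * hard + (1.0 - alpha) * soft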
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review

    Self-supervised learning helps models to quickly achieve impressive quality with a small number of labels. As we described in chapter 3's 'Learning Techniques and Efficiency' section, labeling of training … [data is costly]. Factoring in the costs of training human labelers on a given task, and then making sure that the labels are reliable, human labeling gets very expensive very quickly. Even after that it is likely that … [models are trained] on labeled data, and hence achieving a high performance on a new task requires a large number of labels. 2. Compute Efficiency: Training for new tasks requires new models to be trained from scratch. …

    0 码力 | 31 pages | 4.03 MB | 1 year ago
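    A hedged sketch of the label-efficiency idea above: reuse a pre-trained backbone and train only a small head on the few labels available, instead of training from scratch (illustrative, not the book's code; few_images and few_labels are assumed arrays):

        import tensorflow as tf

        # Pre-trained backbone reused across tasks; freezing it means a new task
        # needs neither training from scratch nor a large labeled set.
        backbone = tf.keras.applications.MobileNetV2(
            input_shape=(224, 224, 3), include_top=False, pooling="avg",
            weights="imagenet")
        backbone.trainable = False

        # Small task-specific head trained on the few labels we have.
        model = tf.keras.Sequential([
            backbone,
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # model.fit(few_images, few_labels, epochs=5)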
  • PDF document: 动手学深度学习 v2.0 (Dive into Deep Learning)

    … -3.4]); true_b = 4.2; features, labels = synthetic_data(true_w, true_b, 1000) … (§3.2, implementing linear regression from scratch) Note that each row of features contains a two-dimensional data sample, and each row of labels contains a one-dimensional label value (a scalar): print('features:', features[0], '\nlabel:', labels[0]) prints features: tensor([1.4632, 0.5511]) and label: tensor([5.2498]). By plotting the second feature features[:, 1] against labels, the linear relationship between the two can be observed directly: d2l.set_figsize(); d2l.plt.scatter(features[:, 1].detach().numpy(), labels.detach().numpy(), 1). §3.2.2 Reading the dataset: recall that training a model means iterating over the dataset, drawing a small minibatch of samples each time and using them to update the model. Since this process is the basis of training machine-learning algorithms, it is worth defining a function that shuffles the samples in the dataset and fetches them in minibatches. In the code below, we define a function data_i…

    0 码力 | 797 pages | 29.45 MB | 1 year ago
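    A self-contained sketch of the synthetic_data generator and minibatch reader the excerpt refers to (reconstructed in the spirit of the book, not copied from it):

        import random
        import torch

        def synthetic_data(w, b, num_examples):
            """Generate y = Xw + b + Gaussian noise."""
            X = torch.normal(0, 1, (num_examples, len(w)))
            y = torch.matmul(X, w) + b
            y += torch.normal(0, 0.01, y.shape)
            return X, y.reshape((-1, 1))

        def data_iter(batch_size, features, labels):
            """Shuffle the samples and yield them one minibatch at a time."""
            num_examples = len(features)
            indices = list(range(num_examples))
            random.shuffle(indices)
            for i in range(0, num_examples, batch_size):
                batch = torch.tensor(indices[i:min(i + batch_size, num_examples)])
                yield features[batch], labels[batch]

        true_w = torch.tensor([2.0, -3.4])
        true_b = 4.2
        features, labels = synthetic_data(true_w, true_b, 1000)
        X, y = next(iter(data_iter(10, features, labels)))
        print(X.shape, y.shape)  # torch.Size([10, 2]) torch.Size([10, 1])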
  • PDF document: 【PyTorch深度学习-龙龙老师】-测试版202112

    … drop(train_dataset.index) … Move the MPG field out as the label data: # move the MPG (fuel-efficiency) column out as the ground-truth label Y: train_labels = train_dataset.pop('MPG'); test_labels = test_dataset.pop('MPG'). Compute the mean and standard deviation of every field in the training set and standardize the data through a norm() function: … norm(test_dataset) # standardize the test set. Print the sizes of the training and test sets: print(normed_train_data.shape, train_labels.shape); print(normed_test_data.shape, test_labels.shape) → (314, 9) (314,) # the training set has 314 rows, 9 input features, and a scalar label. … normed_train_data); np.save('train_labels.npy', train_labels) # save the test-set features and labels: np.save('normed_test_data.npy', normed_test_data); np.save('test_labels.npy', test_labels). We can observe how each field … by simply computing the pairwise distributions between the fields of the dataset …

    0 码力 | 439 pages | 29.91 MB | 1 year ago
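    A hedged, self-contained sketch of the standardize-then-save flow the excerpt walks through (a tiny synthetic DataFrame stands in for the book's Auto-MPG data; variable names follow the excerpt):

        import numpy as np
        import pandas as pd

        # Stand-in data: in the book this is the Auto-MPG dataset.
        rng = np.random.default_rng(0)
        df = pd.DataFrame(rng.normal(size=(10, 3)),
                          columns=['Cylinders', 'Weight', 'MPG'])
        train_dataset, test_dataset = df.iloc[:8].copy(), df.iloc[8:].copy()

        # Move the MPG column out as the ground-truth label y.
        train_labels = train_dataset.pop('MPG')
        test_labels = test_dataset.pop('MPG')

        # Standardize with training-set statistics only, to avoid leakage.
        train_stats = train_dataset.describe().transpose()

        def norm(x):
            return (x - train_stats['mean']) / train_stats['std']

        normed_train_data = norm(train_dataset)
        normed_test_data = norm(test_dataset)
        print(normed_train_data.shape, train_labels.shape)  # (8, 2) (8,)

        # Persist features and labels for later reuse.
        np.save('train_labels.npy', train_labels)
        np.save('test_labels.npy', test_labels)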
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 1 - Introduction

    … [The teacher model is first used to] generate soft labels on the training data, and the student model learns to copy both the ground-truth labels as well as the soft labels generated by the teacher model. While the ground-truth labels simply assign a '1' to the correct class and a '0' to each incorrect class, the soft labels consist of probabilities for each of the possible classes according to the teacher model. (Figure 1-10: Distillation) … [The student is trained against both the] ground-truth and the teacher's outputs on the given training data. The intuition is that the soft labels from the teacher can help the student capture how wrong/right the prediction is. For example, given …

    0 码力 | 21 pages | 3.17 MB | 1 year ago
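    A small illustration of hard vs. soft labels as described above (my own sketch; the three-class logits and the temperature are made-up values, not from the book):

        import numpy as np

        def softmax(z):
            e = np.exp(z - z.max())
            return e / e.sum()

        teacher_logits = np.array([4.0, 1.5, 0.5])  # hypothetical 3-class output

        hard_label = np.array([1.0, 0.0, 0.0])      # '1' for the correct class only
        soft_label = softmax(teacher_logits / 2.0)  # temperature T=2 softens the
                                                    # distribution over all classes
        print(hard_label)                           # [1. 0. 0.]
        print(soft_label.round(3))                  # [0.685 0.196 0.119]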
  • PDF document: 全连接神经网络实战. pytorch 版 (Fully Connected Neural Networks in Practice, PyTorch edition)

    … download=True,  # download if it is not in the root directory
    transform=ToTensor())  # the data used for testing
    # display the data
    labels_map = {0: "T-Shirt", 1: "Trouser", 2: "Pullover", 3: "Dress", 4: "Coat", 5: "Sandal", …
    # fetch the sample at index 100 for display
    img, label = training_data[100]
    plt.title(labels_map[label])
    plt.imshow(img.squeeze(), cmap="gray")  # squeeze() drops the size-1 dimensions
    train_features, train_labels = next(iter(train_dataloader))
    print(f"Feature batch shape: {train_features.size()}")
    print(f"Labels batch shape: {train_labels.size()}") …

    0 码力 | 29 pages | 1.40 MB | 1 year ago
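    A runnable sketch of the FashionMNIST loading-and-batching flow this excerpt is drawn from (reconstructed; the "data" root path is an assumption):

        import torch
        from torch.utils.data import DataLoader
        from torchvision import datasets
        from torchvision.transforms import ToTensor

        # Download FashionMNIST into ./data if it is not already there.
        training_data = datasets.FashionMNIST(
            root="data", train=True, download=True, transform=ToTensor())
        test_data = datasets.FashionMNIST(
            root="data", train=False, download=True, transform=ToTensor())

        train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)

        train_features, train_labels = next(iter(train_dataloader))
        print(f"Feature batch shape: {train_features.size()}")  # [64, 1, 28, 28]
        print(f"Labels batch shape: {train_labels.size()}")     # [64]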
  • PDF document: 《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques

    … def load_data(ds=keras.datasets.mnist):
        """Returns the processed dataset."""
        (train_images, train_labels), (test_images, test_labels) = ds.load_data()
        # Process the images for use.
        train_images = process_x(train_images)
        test_images = process_x(test_images)
        return (train_images, train_labels), (test_images, test_labels)

    (train_x, train_y), (test_x, test_y) = load_data()  # You can train on the Fashion MNIST dataset …
    … we can use the index of the correct class for each example. The regular function expects one-hot labels, which would require us to transform the class label 2, for example, to its one-hot representation …

    0 码力 | 33 pages | 1.96 MB | 1 year ago
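    A short demonstration of the label-format point above: sparse_categorical_crossentropy takes integer class indices directly, while the regular categorical_crossentropy expects one-hot vectors (my own example, not the book's):

        import numpy as np
        import tensorflow as tf

        y_index = np.array([2])                  # class label as a plain index
        y_onehot = tf.one_hot(y_index, depth=3)  # [[0., 0., 1.]]
        y_pred = np.array([[0.1, 0.2, 0.7]], dtype=np.float32)

        sparse = tf.keras.losses.sparse_categorical_crossentropy(y_index, y_pred)
        dense = tf.keras.losses.categorical_crossentropy(y_onehot, y_pred)
        print(float(sparse[0]), float(dense[0]))  # same value, ~0.357 each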
  • PDF document: Keras: 基于 Python 的深度学习库 (Keras: the Python deep learning library)

    … import numpy as np
    data = np.random.random((1000, 100))
    labels = np.random.randint(2, size=(1000, 1))
    # Train the model, iterating over the data in batches of 32 samples
    model.fit(data, labels, epochs=10, batch_size=32)
    # For a single-input model with 10 classes (multi-class classification):
    data = np.random.random((1000, 100))
    labels = np.random.randint(10, size=(1000, 1))
    # Convert the labels to categorical one-hot encoding
    one_hot_labels = keras.utils.to_categorical(labels, num_classes=10)
    # Train the model, iterating over the data in batches of 32 samples
    model.fit(data, one_hot_labels, epochs=10, batch_size=32)
    3.1.5 Examples — here are a few examples to get you started! In the examples directory you can find example models for real datasets: • CIFAR10 small-image classification: a convolutional neural network (CNN) with real-time data augmentation • IMDB movie-review sentiment classification: based on sequences of words …

    0 码力 | 257 pages | 1.19 MB | 1 year ago
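    The excerpt's model.fit calls assume an already-compiled model; a minimal sketch of one that fits the binary case (consistent with the Keras Sequential API, but my own reconstruction of the guide's surrounding code):

        import numpy as np
        from tensorflow import keras

        # Binary classifier over 100-dimensional inputs.
        model = keras.Sequential([
            keras.layers.Dense(32, activation='relu', input_shape=(100,)),
            keras.layers.Dense(1, activation='sigmoid'),
        ])
        model.compile(optimizer='rmsprop', loss='binary_crossentropy',
                      metrics=['accuracy'])

        # Train the model, iterating over the data in batches of 32 samples.
        data = np.random.random((1000, 100))
        labels = np.random.randint(2, size=(1000, 1))
        model.fit(data, labels, epochs=10, batch_size=32)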
  • PDF document: Lecture 5: Gaussian Discriminant Analysis, Naive Bayes

    … [The features are random, since the] images are randomly given: random variable X represents the feature vector (and thus the image). The labels are random, since the images are randomly given: random variable Y represents the label. … [Each x^{(i)} is an] n-dimensional vector; each feature x_j^{(i)} ∈ {0, 1} (j = 1, …, n) and y^{(i)} ∈ {0, 1}. The features and labels can be represented by random variables {X_j}_{j=1,…,n} and Y, respectively. … The Expectation-Maximization (EM) Algorithm: given a training set {x^{(1)}, x^{(2)}, …, x^{(m)}} (without labels), the log-likelihood function is ℓ(θ) = log ∏_{i=1}^{m} p(x^{(i)}; θ) = ∑_{i=1}^{m} log ∑_{z^{(i)} ∈ Ω} p(x^{(i)}, …

    0 码力 | 122 pages | 1.35 MB | 1 year ago
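    A hedged completion of the EM objective the excerpt breaks off at, written as the standard Jensen's-inequality lower bound (Q_i is any distribution over the latent variable z^{(i)}; this is the textbook derivation, not necessarily the slide's exact notation):

        \ell(\theta)
          = \sum_{i=1}^{m} \log \sum_{z^{(i)} \in \Omega} p(x^{(i)}, z^{(i)}; \theta)
          = \sum_{i=1}^{m} \log \sum_{z^{(i)} \in \Omega} Q_i(z^{(i)})
              \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}
          \ge \sum_{i=1}^{m} \sum_{z^{(i)} \in \Omega} Q_i(z^{(i)})
              \log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}

    The E-step makes the bound tight by choosing Q_i(z^{(i)}) = p(z^{(i)} | x^{(i)}; \theta); the M-step then maximizes the bound over \theta.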
  • PDF document: 《TensorFlow 2项目进阶实战》2-快速上手篇:动手训练模型和部署服务 (TensorFlow 2 Projects in Action, Part 2 Quick Start: hands-on model training and service deployment)

    … import tensorflow as tf; fashion_mnist = keras.datasets.fashion_mnist; (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data(). Preprocess data: import matplotlib.pyplot as plt … plt.show(). Build the model. Train and evaluate. Make prediction. Visualize prediction: i = 6; plt.figure(figsize=(6,3)); plt.subplot(1,2,1); plot_image(i, predictions, test_labels, test_images); plt.subplot(1,2,2); plot_value_array(i, predictions, test_labels); plt.show(). … "Hello TensorFlow" — try it! Scan the QR code to preview or subscribe to the 《TensorFlow 2 项目进阶实战》 video course.

    0 码力 | 52 pages | 7.99 MB | 1 year ago
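    A compact, runnable version of the slide flow above (load, preprocess, build, train, evaluate, predict on Fashion-MNIST); plot_image and plot_value_array are the course's plotting helpers and are omitted here:

        import tensorflow as tf
        from tensorflow import keras

        # Load and preprocess the data.
        fashion_mnist = keras.datasets.fashion_mnist
        (train_images, train_labels), (test_images, test_labels) = \
            fashion_mnist.load_data()
        train_images, test_images = train_images / 255.0, test_images / 255.0

        # Build the model.
        model = keras.Sequential([
            keras.layers.Flatten(input_shape=(28, 28)),
            keras.layers.Dense(128, activation='relu'),
            keras.layers.Dense(10, activation='softmax'),
        ])
        model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])

        # Train, evaluate, and make predictions.
        model.fit(train_images, train_labels, epochs=5)
        model.evaluate(test_images, test_labels)
        predictions = model.predict(test_images)  # per-class probabilities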
Page 1 of 3 · 21 results in total.