Keras: The Python Deep Learning Library
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

layers.MaxPooling1D(pool_size=2, strides=None, padding='valid')

Max pooling for temporal (1D) data.

Arguments
• pool_size: Integer, size of the max pooling window.
• strides: Integer, or None. Factor by which to downscale; e.g. 2 will halve the input. If None, it defaults to pool_size.
• padding: "valid"

4.2 MaxPooling2D [source]

keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format=None)

Max pooling for spatial (2D) data.

Arguments
• pool_size: Integer, or tuple of 2 integers, (vertical, horizontal) factors by which to downscale. (2, 2) will halve the input in both spatial dimensions.
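For quick verification, a minimal sketch (my addition, assuming TensorFlow 2's bundled Keras) of the downscaling behavior these docstrings describe:

import numpy as np
from tensorflow.keras import layers

# (2, 2) max pooling with 'valid' padding halves each spatial dimension: 28x28 -> 14x14.
x = np.random.rand(1, 28, 28, 3).astype('float32')
y = layers.MaxPooling2D(pool_size=(2, 2))(x)
print(y.shape)  # (1, 14, 14, 3)

# pool_size=2 with strides=None (defaults to pool_size) halves the time axis: 100 -> 50.
t = np.random.rand(1, 100, 8).astype('float32')
u = layers.MaxPooling1D(pool_size=2)(t)
print(u.shape)  # (1, 50, 8)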
Machine Learning Course - Wenzhou University - 08 Deep Learning - Deep Convolutional Neural Networks

Fully connected block: composed of three fully connected layers.

[Slide figure: a LeNet-style pipeline. CONV1 (f=5, s=1, 6 filters) -> 28x28x6 -> MAXPOOL (f=2, s=2) -> 14x14x6 -> CONV2 (f=5, s=1, 16 filters) -> 10x10x16 -> MAXPOOL (f=2, s=2) -> 5x5x16 -> FC -> FC2 -> FC3 (120 units) -> SOFTMAX.]

The vanishing and exploding gradient problems.

2. Deep residual networks

[Slide figure: a VGG-style ConvNet configuration (Input, stacked Conv3-32 / Conv3-64 / Conv3-128 blocks separated by Max-Pool layers, FC-512, Output) alongside a residual block in which the stacked layers' output is added to the previous input.]
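The residual idea sketched in the slide, stacked layers whose output is added back to the previous input, can be written in a few lines. A generic Keras illustration (not code from the course), assuming the input already has a matching channel count:

from tensorflow.keras import layers

def residual_block(x, filters=32):
    # Stacked layers: two 3x3 convolutions.
    y = layers.Conv2D(filters, (3, 3), padding='same', activation='relu')(x)
    y = layers.Conv2D(filters, (3, 3), padding='same')(y)
    # Skip connection: add the previous input back, then apply the nonlinearity.
    # Assumes x already has `filters` channels so the shapes match.
    out = layers.Add()([x, y])
    return layers.Activation('relu')(out)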
Machine Learning Course - Wenzhou University - 07 Deep Learning - Convolutional Neural Networks

Convolutional neural network example 2

[Slide figure: the same LeNet-style pipeline on a 32x32x3 input: CONV1 (f=5, s=1, 6 filters) -> 28x28x6 -> MAXPOOL (f=2, s=2) -> 14x14x6 -> CONV2 (f=5, s=1, 16 filters) -> 10x10x16 -> MAXPOOL (f=2, s=2) -> 5x5x16 -> FC3 -> FC4 -> SOFTMAX.]

Layer (configuration)        | Output shape | Activation size | # Parameters
CONV1 (f=5, s=1, 6 filters)  | (28, 28, 6)  | 4,704           | 456 = (3 x 5x5 + 1) x 6
POOL1 (f=2, s=2)             | (14, 14, 6)  | 1,176           | 0
CONV2 (f=5, s=1, 16 filters) | (10, 10, 16) | 1,600           | 2,416 = (6 x 5x5 + 1) x 16
POOL2 (f=2, s=2)             | (5, 5, 16)   | 400             | 0
FC3 (120 units)              | (120, 1)     | 120             | 48,120 = 400 x 120 + 120

(Activation sizes are the products of the output-shape dimensions; the flattened POOL2 output of 400 values feeds FC3.)
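The parameter counts follow the rule (input channels x kernel height x kernel width + 1) x filters. A short sketch (my own) that reproduces the table with Keras:

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(6, (5, 5), activation='relu', input_shape=(32, 32, 3)),  # (3*5*5+1)*6 = 456
    layers.MaxPooling2D((2, 2)),                                           # (14, 14, 6), no parameters
    layers.Conv2D(16, (5, 5), activation='relu'),                          # (6*5*5+1)*16 = 2416
    layers.MaxPooling2D((2, 2)),                                           # (5, 5, 16)
    layers.Flatten(),                                                      # 5*5*16 = 400 values
    layers.Dense(120, activation='relu'),                                  # 400*120+120 = 48120
])
model.summary()  # per-layer output shapes and parameter counts match the table above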
keras tutorial

… value is as follows: keras.layers.MaxPooling1D(pool_size=2, strides=None, padding='valid', data_format='channels_last'). Here, pool_size refers to the size of the max pooling window; strides …

… filters and a 'relu' activation function with kernel size (3, 3). The third layer, MaxPooling, has a pool size of (2, 2). The fourth layer, Dropout, has 0.25 as its value. The fifth layer, Flatten, is used …

model.add(Conv2D(…, input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
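A runnable completion of the truncated model above, assuming MNIST-style 28x28x1 inputs, a 32-filter first convolution, and a 10-class softmax head (all three are assumptions, not values taken from the tutorial):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

input_shape = (28, 28, 1)  # assumed MNIST-style input

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=input_shape))  # assumed first layer
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))  # assumed 10-class output head
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])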
Dive into Deep Learning v2.0

… elements, the convolutional layer can still recognize the pattern. In the pool2d function below, we implement the forward propagation of the pooling layer. This is similar to the corr2d function from Section 6.2; however, here there is no kernel: the output is the maximum or the average of each region of the input.

import torch
from torch import nn
from d2l import torch as d2l

def pool2d(X, pool_size, mode='max'):
    p_h, p_w = pool_size
    Y = torch.zeros((X.shape[0] - p_h + 1, X.shape[1] - p_w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            if mode == 'max':
                Y[i, j] = X[i: i + p_h, j: j + p_w].max()
            elif mode == 'avg':
                Y[i, j] = X[i: i + p_h, j: j + p_w].mean()
    return Y

X = torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])

pool2d(X, (2, 2))
# tensor([[4., 5.],
#         [7., 8.]])

In addition, we can verify the average pooling layer.

pool2d(X, (2, 2), 'avg')
# tensor([[2., 3.],
#         [5., 6.]])
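As a sanity check (an addition of mine, not from the book), the hand-written pool2d matches PyTorch's built-in pooling layers when the stride is set to 1 to mimic the sliding window above:

import torch
from torch import nn

X = torch.arange(9, dtype=torch.float32).reshape(3, 3)
X4d = X.reshape(1, 1, 3, 3)  # built-in layers expect (N, C, H, W)

max_pool = nn.MaxPool2d(kernel_size=2, stride=1)  # default stride is kernel_size, so set it to 1
avg_pool = nn.AvgPool2d(kernel_size=2, stride=1)

print(max_pool(X4d).squeeze())  # tensor([[4., 5.], [7., 8.]])
print(avg_pool(X4d).squeeze())  # tensor([[2., 3.], [5., 6.]])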
《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques

… activation='relu', kernel_regularizer=reg)(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling1D(pool_size=(4))(x)
x = layers.Dropout(dropout_rate)(x)

# A shorthand for the round method.
r = lambda v: round(v)

… activation='relu', kernel_regularizer=reg)(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling1D(pool_size=(4))(x)
x = layers.Dropout(dropout_rate)(x)

x = layers.Conv1D(r(128 * w), (9), padding='same',
                  activation='relu', kernel_regularizer=reg)(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling1D(pool_size=(4))(x)
x = layers.Dropout(dropout_rate)(x)

x = layers.Flatten()(x)
x = layers.Dropout(dropout_rate)(x)
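The r(128 * w) call scales a nominal filter count by a width multiplier w. A hedged sketch of the repeating block as a reusable helper (my own illustration; the conv_block name and defaults are not from the book):

from tensorflow.keras import layers

def conv_block(x, filters, w, reg=None, dropout_rate=0.2):
    # Scale the nominal filter count by the width multiplier w.
    x = layers.Conv1D(round(filters * w), 9, padding='same',
                      activation='relu', kernel_regularizer=reg)(x)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling1D(pool_size=4)(x)
    return layers.Dropout(dropout_rate)(x)

# w < 1 shrinks every layer proportionally, e.g. w=0.5 turns 128 filters into 64.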
TensorFlow on Yarn: Deep Learning Meets Big Data

Summary of the current state and pain points of using TensorFlow. What problems can Yarn solve:
• Cluster resource management (CPU and memory are supported today; GPU resource management still needs to be added)
• Unified job management and status tracking
• Partitioning of resource groups (Schedule Pools)
• Resource isolation between job processes

TensorFlow on Yarn design:
• Supports both single-machine and distributed TensorFlow programs
• Supports GPU resource management and scheduling

void setGpuCores(int gCores);

On the ResourceManager side, the following must ultimately be implemented:
1. Tracking and managing the number of GPU cards on each NodeManager
2. Having the scheduler track how many GPU devices are allocated to each pool

The implementation approach of the patch below can serve as a reference:
https://issues.apache.org/jira/browse/YARN-5517

TensorFlow on …
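To make the "single-machine and distributed TensorFlow programs" point concrete, a minimal TensorFlow 1.x cluster definition; the host names are made up and this is a generic sketch, not code from the talk (on Yarn, these addresses would be assigned by the scheduler):

import tensorflow as tf

# Hypothetical hosts; TensorFlow-on-Yarn would fill these in from container allocations.
cluster = tf.train.ClusterSpec({
    'ps': ['ps0.example.com:2222'],
    'worker': ['worker0.example.com:2222', 'worker1.example.com:2222'],
})

# Each task starts a server for its role and index.
server = tf.train.Server(cluster, job_name='worker', task_index=0)

# Pin variables to the parameter server and computation to this worker.
with tf.device(tf.train.replica_device_setter(cluster=cluster)):
    w = tf.Variable(tf.zeros([10]))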
PyTorch Introductory Notes - 03 - Neural Networks

def forward(self, x):
    # Max pooling layer with a (2, 2) pooling window
    x = F.max_pool2d(F.relu(self.conv1(x)), 2)
    x = F.max_pool2d(F.relu(self.conv2(x)), 2)
    # Reshape the data (flatten all dimensions except the batch dimension)
    x = x.view(-1, self…
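A self-contained completion of this forward method, modeled on the classic PyTorch beginner tutorial these notes appear to follow; the layer sizes and the flatten width are my assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)    # assumed: 1 input channel, 6 filters, 5x5 kernel
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, 16 * 5 * 5)  # flatten; stands in for the truncated expression
        x = F.relu(self.fc1(x))
        return self.fc2(x)

net = Net()
out = net(torch.randn(1, 1, 32, 32))  # 32x32 -> conv/pool twice -> 16x5x5 -> 10 logits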
《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation

● 5x5 Depthwise Separable Convolution
● 7x7 Depthwise Separable Convolution
● 3x3 Average Pool
● 3x3 Max Pool
● Identity Operation

The combination operations are identical to NASNet's, namely add …
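These candidate operations define a NAS search space. A hedged sketch of how such a space could be written down in Keras; the op-name keys and the make_op/combine helpers are my own illustration, not the book's API:

from tensorflow.keras import layers

# Candidate operations for a NAS cell, keyed by name.
def make_op(name, filters):
    ops = {
        'sep_conv_5x5': lambda: layers.SeparableConv2D(filters, (5, 5), padding='same'),
        'sep_conv_7x7': lambda: layers.SeparableConv2D(filters, (7, 7), padding='same'),
        'avg_pool_3x3': lambda: layers.AveragePooling2D((3, 3), strides=1, padding='same'),
        'max_pool_3x3': lambda: layers.MaxPooling2D((3, 3), strides=1, padding='same'),
        'identity': lambda: layers.Lambda(lambda t: t),
    }
    return ops[name]()

# Combination operation: 'add' merges two candidate branches, as in NASNet.
def combine(a, b):
    return layers.Add()([a, b])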
《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques

x = layers.MaxPooling2D(pool_size=(2, 2))(x)
x = layers.Conv2D(64, (3, 3))(x)
x = layers.BatchNormalization()(x)
x = layers.Activation('relu')(x)
x = layers.MaxPooling2D(pool_size=(2, 2))(x)
x = layers…