《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
…purpose equally well, but the compressed image is an order of magnitude smaller. The Discrete Cosine Transform (DCT) is a popular algorithm used in the JPEG format for image compression, and the MP3…
…Learning models: (a) lower model size, and (b) lower inference latency. We already have the necessary tools for achieving (a), the lower model size. Let us see how we can apply what we learnt for quantizing…
…correct class for each example. The regular function expects one-hot labels, which would require us to transform the class label 2, for example, to its one-hot representation [0 0 1 0 0 0 0 0 0 0]. The optimizer…
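The one-hot conversion this excerpt mentions is easy to reproduce; a minimal sketch (my own NumPy version, not the book's code):

    import numpy as np

    def to_one_hot(label, num_classes=10):
        """Convert an integer class label to its one-hot vector."""
        vec = np.zeros(num_classes, dtype=np.float32)
        vec[label] = 1.0
        return vec

    print(to_one_hot(2))  # [0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]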
动手学深度学习 v2.0
…'SoftplusTransform', 'StackTransform', 'StickBreakingTransform', 'StudentT', 'TanhTransform', 'Transform', 'TransformedDistribution', 'Uniform', 'VonMises', 'Weibull', 'Wishart', '__all__', '__builtins__'…
…'exponential', 'fishersnedecor', 'gamma', 'geometric', 'gumbel', 'half_cauchy', 'half_normal', 'identity_transform', 'independent', 'kl', 'kl_divergence', 'kumaraswamy', 'laplace', 'lkj_cholesky', 'log_normal'…
…'pareto', 'poisson', 'register_kl', 'relaxed_bernoulli', 'relaxed_categorical', 'studentT', 'transform_to', 'transformed_distribution', 'transforms', 'uniform', 'utils', 'von_mises', 'weibull', 'wishart']
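This excerpt is the printed output of enumerating the torch.distributions namespace; a minimal sketch (my reconstruction, not necessarily the book's exact line) that produces such a listing:

    import torch.distributions as dist

    # List every name exported by the module, including dunder attributes.
    print(dir(dist))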
《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
…and the hashing trick. In this chapter, we will dive deep into their architectures and use them to transform large and complex models into smaller, efficient models capable of running on mobile and edge…
…era). Techniques like Principal Component Analysis, Low-Rank Matrix Factorization, etc. are popular tools for dimensionality reduction. We will explain these techniques in further detail in chapter 6. A…
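As a quick illustration of the dimensionality reduction this excerpt refers to, a minimal sketch (my own, using scikit-learn, not taken from the book) that projects 64-dimensional digit features down to 8 components:

    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA

    X = load_digits().data            # shape (1797, 64)
    pca = PCA(n_components=8)         # keep the 8 strongest directions of variance
    X_reduced = pca.fit_transform(X)  # shape (1797, 8)
    print(X_reduced.shape, pca.explained_variance_ratio_.sum())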
《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
…transformations, label mixing transformations, and synthetic sample generation. The label invariant techniques transform the input samples. The transformed samples are labeled identically to the original sample. A slightly…
…using examples and code samples. Label Invariant Transformations: label invariant transformations transform samples such that the resulting samples preserve the original sample labels. They operate on a per…
…transformations during the training process. TensorFlow comes bundled with the ImageDataGenerator, which can transform images during the training process, thus eliminating the need to generate samples beforehand. Next…
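A minimal sketch of the on-the-fly augmentation described here, using the Keras ImageDataGenerator API (the particular flags and dummy data are my choices, not the book's):

    import numpy as np
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Label-invariant transforms: flips and small shifts keep the class unchanged.
    datagen = ImageDataGenerator(
        horizontal_flip=True,
        width_shift_range=0.1,
        height_shift_range=0.1,
    )

    x = np.random.rand(32, 28, 28, 1)   # dummy image batch
    y = np.random.randint(0, 10, 32)    # labels are reused as-is

    # flow() yields freshly augmented batches indefinitely during training.
    batch_x, batch_y = next(datagen.flow(x, y, batch_size=32))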
全连接神经网络实战. pytorch 版
…
        download=True,        # download if it is not in the root directory
        transform=ToTensor())

    test_data = datasets.FashionMNIST(
        root="data",
        train=False,          # data used for testing
        download=True,        # download if it is not in the root directory
        transform=ToTensor())
    # …
…
        root="data",
        train=True,           # data used for training
        download=True,        # download if it is not in the root directory
        transform=ToTensor(),
        target_transform=Lambda(lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, …
…(from 4.4, custom Dataset):
    def __init__(self, data, label, transform=None, target_transform=None):
        self.transform = transform
        self.target_transform = target_transform
        self.datas…
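The custom Dataset excerpt is cut off by the snippet; a minimal self-contained sketch of the same pattern (my completion under the obvious assumptions, not the book's exact code):

    import torch
    from torch.utils.data import Dataset

    class MyDataset(Dataset):
        """Wraps in-memory data/labels and applies optional transforms."""
        def __init__(self, data, label, transform=None, target_transform=None):
            self.data = data
            self.label = label
            self.transform = transform
            self.target_transform = target_transform

        def __len__(self):
            return len(self.data)

        def __getitem__(self, idx):
            x, y = self.data[idx], self.label[idx]
            if self.transform:
                x = self.transform(x)
            if self.target_transform:
                y = self.target_transform(y)
            return x, y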
机器学习课程-温州大学-05深度学习-深度学习实践
Data augmentation: randomly flip and crop, warp and distort images. A PyTorch implementation of data augmentation:

    import torch
    from torchvision import transforms

    # Define the augmentation pipeline
    transform = transforms.Compose([
        transforms.RandomResizedCrop(224),   # random crop and resize
        transforms.RandomHorizontalFlip()    # random horizontal flip
…
    # Load an image and apply the augmentation to it
    img = Image.open('image.jpg').convert('RGB')
    img_aug = transform(img)

    # The augmentation can also be attached to the dataset loader
    dataset = datasets.ImageFolder('data', transform=transform)
    dataloader = torch.utils.data.DataLoader(dataset, …
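The DataLoader call is truncated in the snippet; a typical completion (the batch size and shuffling are my assumptions, not the slide's actual values):

    # Batched, shuffled loading; the transform is applied per sample on each pass,
    # so every epoch sees freshly augmented images.
    dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
    for images, labels in dataloader:
        pass  # training step goes here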
Keras: 基于 Python 的深度学习库
…6.3.2 ImageDataGenerator class methods
    6.3.2.1 apply_transform
    6.3.2.2 fit
    6.3.2.5 get_random_transform
    6.3.2.6 random_transform
…ImageDataGenerator class methods, 6.3.2.1 apply_transform:

    keras.preprocessing.image.apply_transform(x, transform_parameters)

Applies a transformation to an image according to the given parameters.
Arguments:
    • x: 3D tensor, a single image.
    • transform_parameters: a dictionary of string-to-parameter pairs describing the transformation. Currently, it uses the dictionary's…
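A small usage sketch for apply_transform as an ImageDataGenerator method (the parameter keys 'theta' and 'flip_horizontal' follow the keras-preprocessing API as I recall it; treat them as assumptions):

    import numpy as np
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    datagen = ImageDataGenerator()
    x = np.random.rand(64, 64, 3)  # a single image as a 3D tensor

    # Rotate by 30 degrees and flip horizontally in one call.
    x_t = datagen.apply_transform(x, {'theta': 30, 'flip_horizontal': True})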
keras tutorial
…structure of the input data, an initializer to set the weight for each input, and finally activations to transform the output to make it non-linear. In between, constraints restrict and specify the range in which…
…(None, 16, 16)
    >>>
where 16 is set as the number of repetitions. Lambda layers: Lambda is used to transform the input data using an expression or function. For example, Lambda with the expression lambda x:…
…preprocessing.scale(x_train)
    scaler = preprocessing.StandardScaler().fit(x_train)
    x_test_scaled = scaler.transform(x_test)
Here, we have normalized the training data using the sklearn.preprocessing.scale function…
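The Lambda example is cut off mid-expression; a minimal sketch of a Keras Lambda layer (the squaring expression is my placeholder, not necessarily the tutorial's):

    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(4,)),
        layers.Lambda(lambda x: x ** 2),  # apply an arbitrary expression element-wise
    ])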
机器学习课程-温州大学-特征工程
…Binarization, with the threshold set to 3; returns the binarized data:

    from sklearn.preprocessing import Binarizer
    Binarizer(threshold=3).fit_transform(iris.data)

Dummy encoding of categorical features. The code for dummy-encoding data with the preprocessing library's OneHotEncoder class is as follows:

    from sklearn.preprocessing import OneHotEncoder
    # Dummy-encode the target values of the IRIS dataset; returns the encoded data
    OneHotEncoder().fit_transform(iris.target.reshape((-1, 1)))

Binning. When building a classification model, continuous variables usually need to be discretized; discretized features make the model more stable and reduce the risk of overfitting. Suppose the scores are: [63…
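The binning example is truncated; a minimal sketch of score discretization (the bins, labels, and data are my own, using pandas.cut rather than whatever the slides used):

    import pandas as pd

    scores = [63, 91, 78, 55, 85]        # hypothetical score list
    bins = [0, 60, 70, 80, 90, 100]      # grade boundaries
    labels = ['F', 'D', 'C', 'B', 'A']
    grades = pd.cut(scores, bins=bins, labels=labels)
    print(list(grades))                  # ['D', 'A', 'C', 'F', 'B']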
【PyTorch深度学习-龙龙老师】-测试版202112
…
        train=True, download=True,
        # preprocessing steps for the images
        transform=torchvision.transforms.Compose([
            # convert to a tensor
…
…has size (10000). MNIST images loaded from PyTorch have pixel values in the range [0, 255]. In machine learning, we generally want the data to lie in a small range around 0. By setting the preprocessing transform parameter, the input image is first converted to a tensor object, and the pixel values in [0, 255] are normalized to the interval [−1, 1], which is more favorable for training the model. The computation flow is the same for every image in the network, so multiple images can be processed at once during computation…
…
    torchvision.datasets.MNIST('mnist_data', train=True, download=True,
        transform=torchvision.transforms.Compose([
            torchvision.transforms.ToTensor()…
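Both code fragments above are truncated by the snippet; a minimal complete sketch of the loading pipeline they describe, with the [−1, 1] normalization spelled out (the (0.5,), (0.5,) constants are my assumption for mapping [0, 1] tensors to [−1, 1]):

    import torchvision

    # ToTensor maps [0, 255] pixels to [0, 1]; Normalize((0.5,), (0.5,))
    # then maps [0, 1] to [-1, 1] via (x - 0.5) / 0.5.
    transform = torchvision.transforms.Compose([
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize((0.5,), (0.5,)),
    ])

    train_set = torchvision.datasets.MNIST(
        'mnist_data', train=True, download=True, transform=transform)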