机器学习课程-温州大学-numpy使用总结 (Machine Learning Course, Wenzhou University: a summary of NumPy usage)
Snippet: reshaping arrays:
    > c.shape
    (3, 4)
    > a = np.array([1, 2, 3, 4])
    > d = a.reshape((2, 2))
    array([[1, 2],
           [3, 4]])
Slicing an ndarray works the same way as slicing a Python list:
    > a = np.arange(10)
    > a
    array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    > a[5]
A further slide contrasts np.sum(a, axis=0) (summing down each column) with np.sum(a, axis=1) (summing across each row) on a 2-D array; the example matrix is too garbled in the extract to reconstruct. Under "Size and sorting", commonly used NumPy functions are introduced:
    > a = np.array([1, 3, 5, 7])
    > b = np.array([2, 4, 6])
    > np.maximum(a[None…  (snippet cut off)
0 码力 | 49 pages | 1.52 MB | 1 year ago
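A minimal runnable sketch (editor-added, not taken from the course slides) of the operations the snippet names — reshape, list-style slicing, axis-wise sums, and a broadcast np.maximum; the array values are hypothetical:

    import numpy as np

    a = np.array([1, 2, 3, 4])
    d = a.reshape((2, 2))             # array([[1, 2], [3, 4]])

    b = np.arange(10)
    print(b[5], b[2:7], b[::-1])      # indexing and slicing behave like Python lists

    m = np.array([[22, 13, 27],       # hypothetical 2-D array
                  [18, 16, 28]])
    print(np.sum(m, axis=0))          # sum down each column -> shape (3,)
    print(np.sum(m, axis=1))          # sum across each row  -> shape (2,)

    x = np.array([1, 3, 5, 7])
    y = np.array([2, 4, 6])
    print(np.maximum(x[:, None], y))  # (4, 3) element-wise maximum via broadcasting

Adding a length-1 axis (x[:, None], or x[None] for a leading axis) is what lets the two 1-D arrays broadcast, which is presumably what the truncated np.maximum(a[None… example demonstrates.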
【PyTorch深度学习-龙龙老师】-测试版202112 (PyTorch Deep Learning, teacher Long Long; preview edition 2021-12)
Snippet: a table-of-contents fragment — … classic convolutional networks; 10.10 CIFAR10 and VGG13 in practice; 10.11 convolution-layer variants; 10.12 deep residual networks; 10.13 DenseNet; 10.14 CIFAR10 and ResNet18 in practice; 10.15 references; Chapter 11 Recurrent Neural Networks: 11.1 sequence representation; 11.2 recurrent neural networks; 11.3 gradient propagation; 11.4 using the RNN layer; 11.5 …
From the installation chapter: as Figure 1.23 shows, the Anaconda installer asks whether to also install VS Code; choosing Skip is enough. The whole installation takes about 5 minutes, depending on the machine (Figures 1.22 and 1.23 show the installer screens). After installation, how do you verify that Anaconda was installed successfully? Using the keyboard …
From the tensor chapter: an example prints tensor(-13035, dtype=torch.int16). Converting between boolean and integer types is also legal and fairly common:
    In [18]: a = torch.tensor([True, False])
             a.int()   # boolean to integer
    Out[18]: tensor([1, 0], dtype=torch.int32)
By default 0 stands for False and 1 stands for True; … (snippet cut off)
0 码力 | 439 pages | 29.91 MB | 1 year ago
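A short editor-added sketch (not the book's code) of the dtype conversions the excerpt describes; the overflow line is an assumption about how the book's int16 value arises:

    import torch

    a = torch.tensor([True, False])
    print(a.int())                    # tensor([1, 0], dtype=torch.int32)

    b = torch.tensor([0, 1, -2])
    print(b.bool())                   # tensor([False,  True,  True]); 0 is False, nonzero is True

    # Down-casting a large integer to int16 wraps around on typical builds,
    # which is presumably how a value like tensor(-13035, dtype=torch.int16) arises.
    print(torch.tensor(123456789).to(torch.int16))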
深度学习下的图像视频处理技术-沈小勇 (Image and Video Processing with Deep Learning, Xiaoyong Shen)
Snippet: labels from a results-comparison figure — an input image is compared against previous work WVM [CVPR'16], JieP [ICCV'17], HDRNet [SIGGRAPH'17], DPE [CVPR'18], White-Box [TOG'18] and Distort-and-Recover [CVPR'18], alongside "Ours".
0 码力 | 121 pages | 37.75 MB | 1 year ago
动手学深度学习 v2.0 (Dive into Deep Learning, v2.0)
Snippet: Release 2.0.0, Aston Zhang, Zachary C. Lipton, Mu Li, and Alexander J. Smola, Aug 18, 2023. Contents fragment: Preface; Installation; Notation; 1 Introduction; 2 Preliminaries; 2.1 Data manipulation …
From the introduction: … so we may need a sufficiently rich family of models to cover the variety. For example, another model in the family would say "yes" only when it hears "Hey Siri". Ideally, the same model family should suit both "Alexa" and "Hey Siri" recognition, since intuitively they appear to be similar tasks. However, if we want to handle completely different inputs or outputs — say, mapping images to captions, or English to Chinese — we will likely need an entirely different model family. But if all of a model's knobs (its parameters) … a kind of experimental method — for example, Ohm's law relating current and voltage in a resistor is described perfectly by a linear model. Even in the Middle Ages, mathematicians had a keen intuition for estimation: Jacob Köbel's (1460–1533) geometry book illustrates that the length of one foot can be obtained by averaging the foot lengths of 16 adult men. Figure 1.4.1 ("Estimating the length of a foot") shows how this estimator works: the 16 men were asked to line up their feet one behind another, and the total length was then divided by 16 … (snippet cut off)
0 码力 | 797 pages | 29.45 MB | 1 year ago
《TensorFlow 快速入门与实战》3-TensorFlow基础概念解析 (TensorFlow Quick Start and Practice, part 3: basic TensorFlow concepts)
Snippet: much of the excerpt is mis-encoded (rendered as "�" characters); it appears to introduce the tensor as TensorFlow's basic unit of data and the notion of rank (a scalar is rank 0, followed by rank-1, rank-2, … tensors), and a figure illustrating a rank-3 tensor as stacked 3×3 matrices survives only as a grid of numbers. The readable part lists how tensors are created:
    tf.constant     // constant
    tf.placeholder  // placeholder
    tf.Variable     // variable
0 码力 | 50 pages | 25.17 MB | 1 year ago
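A small editor-added sketch (TF 2.x style, not from the course deck) of the creation APIs the snippet lists:

    import tensorflow as tf

    c = tf.constant([[1, 2, 3], [4, 5, 6]])      # an immutable constant tensor
    v = tf.Variable(tf.zeros((2, 3)))            # a mutable variable
    print(tf.rank(c).numpy(), c.shape, c.dtype)  # rank 2, shape (2, 3), int32

    # tf.placeholder exists only in the TF 1.x graph API; under TF 2.x you feed data
    # directly (or use tf.compat.v1.placeholder inside a compat.v1 graph).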
Lecture Notes on Support Vector Machine
Snippet: … respectively, we have the primal feasibility conditions (18)–(19) and the dual feasibility condition (20):
    g_i(ω*) ≤ 0,   ∀ i = 1, …, k    (18)
    h_j(ω*) = 0,   ∀ j = 1, …, l    (19)
    α*_i ≥ 0,      ∀ i = 1, …, k    (20)
… all values between b^+_1 and b^+_2 satisfy the KKT conditions, and we can let b^+ = (b^+_1 + b^+_2)/2.
0 码力 | 18 pages | 509.37 KB | 1 year ago
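For context, a standard statement of the full KKT system that these feasibility conditions belong to (an editor-added LaTeX sketch; the Lagrangian notation below is assumed, not quoted from the notes):

    % KKT conditions for  min_w f(w)  s.t.  g_i(w) <= 0 (i = 1..k),  h_j(w) = 0 (j = 1..l),
    % with Lagrangian  L(w, a, b) = f(w) + sum_i a_i g_i(w) + sum_j b_j h_j(w).  (needs amsmath)
    \begin{align*}
      \nabla_{\omega} L(\omega^*, \alpha^*, \beta^*) &= 0 && \text{stationarity}\\
      g_i(\omega^*) &\le 0, \quad i = 1, \dots, k && \text{primal feasibility}\\
      h_j(\omega^*) &= 0, \quad j = 1, \dots, l && \text{primal feasibility}\\
      \alpha_i^* &\ge 0, \quad i = 1, \dots, k && \text{dual feasibility}\\
      \alpha_i^*\, g_i(\omega^*) &= 0, \quad i = 1, \dots, k && \text{complementary slackness}
    \end{align*}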
《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
Snippet: … embeddings of dissimilar inputs. This method is known as contrastive learning. In the SimCLR papers [18], the authors use this approach, relying on data augmentation to create similar inputs without needing labels … the words "scored", "winning", "goal" and "game" occur together in the sports category with high probability. [18] Chen, T., Kornblith, S., Swersky, K., Norouzi, M., & Hinton, G. E. (2020). Big self-supervised models … A tokenization example prints:
    Tokens: tf.Tensor(
    [[3626 3688 11 129 1404 80 309 2393 4 303 106 3 1069 694 18 409 1 218 1 12 3 1 1 83 248 4 1 2 1 5 1 1 0 0 0 …
(token IDs, zero-padded; snippet cut off)
0 码力 | 53 pages | 3.92 MB | 1 year ago
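A simplified, editor-added sketch of a contrastive objective in the SimCLR spirit — pull the two augmented views of each example together and push other examples away. It is not the book's code, and the real NT-Xent loss also uses same-branch views as negatives:

    import numpy as np

    def cosine_sim(a, b):
        a = a / np.linalg.norm(a, axis=-1, keepdims=True)
        b = b / np.linalg.norm(b, axis=-1, keepdims=True)
        return a @ b.T

    rng = np.random.default_rng(0)
    z1 = rng.normal(size=(4, 8))     # embeddings of view 1 of 4 examples (hypothetical)
    z2 = rng.normal(size=(4, 8))     # embeddings of view 2 of the same 4 examples

    temperature = 0.1
    logits = cosine_sim(z1, z2) / temperature              # (4, 4) similarity matrix
    labels = np.arange(len(z1))                            # the positive for anchor i is z2[i]
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss = -log_prob[labels, labels].mean()                # cross-entropy on the diagonal
    print(loss)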
《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review
Snippet: … Jeremy and Sebastian Ruder. "Universal Language Model Fine-tuning for Text Classification." arXiv, 18 Jan. 2018, doi:10.48550/arXiv.1801.06146. Figure 6-6: validation error w.r.t. the number of training examples … lessons that build upon previous lessons. The intuition behind this is the theory of Continuation Methods (CM) [18], a known approach for optimizing non-convex functions where multiple local minima might exist … learning. Association for Computing Machinery, 14 June 2009, pp. 41-48, doi:10.1145/1553374.1553380. [18] Allgower, Eugene L. and Kurt Georg. Numerical Continuation Methods. Springer, link.springer.com/book/10… (snippet cut off)
0 码力 | 31 pages | 4.03 MB | 1 year ago
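A toy, editor-added curriculum-learning sketch illustrating "lessons that build upon previous lessons": examples are ordered by a crude difficulty proxy and the training pool grows over stages (both the proxy and the schedule are assumptions, not the book's):

    import numpy as np

    data = [("good", 1), ("terrible", 0),
            ("not bad at all", 1),
            ("the plot was slow but the acting saved it", 1)]
    difficulty = np.array([len(text.split()) for text, _ in data])  # proxy: longer = harder
    order = np.argsort(difficulty)                                  # easiest examples first

    for stage, frac in enumerate((0.5, 0.75, 1.0), start=1):
        pool = [data[i] for i in order[: max(1, int(frac * len(data)))]]
        # a real train_step(model, pool) would run here; printing stands in for it
        print(f"stage {stage}: training on {len(pool)} example(s)")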
《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
Snippet: … a single model, but now we have multiple models, which also multiplies our deployment costs. Hinton et al. [18], in their seminal work, explored how smaller student networks can be taught to extract "dark knowledge" … the training set. The label values correspond to the cat, dog, pigeon and parrot classes, in that order. [18] Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. "Distilling the knowledge in a neural network." arXiv … array([0.202 , 0.3013, 0.4967]). The class probabilities at various temperatures are shown in Figure 3-18. As you can observe, when the temperature is 1.0, Class 2 has been assigned a probability of 0.993 … (snippet cut off)
0 码力 | 56 pages | 18.93 MB | 1 year ago
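An editor-added sketch of temperature-scaled softmax, the mechanism behind the probabilities quoted above (the logits are hypothetical, so the numbers will not match the book's figure):

    import numpy as np

    def softmax(logits, temperature=1.0):
        z = np.asarray(logits, dtype=float) / temperature
        e = np.exp(z - z.max())               # subtract the max for numerical stability
        return e / e.sum()

    logits = np.array([1.0, 7.0, 2.0])        # hypothetical teacher logits
    print(softmax(logits, temperature=1.0))   # sharply peaked on the largest logit
    print(softmax(logits, temperature=5.0))   # softer distribution exposes "dark knowledge"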
keras tutorial
Snippet: a table-of-contents fragment (… Core Modules …) and the Python installation check — you should see a response similar to:
    Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 17:00:18) [MSC v.1900 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
From Chapter 5, "Keras — Deep Learning with Keras": … are of two types, as mentioned below. Sequential Model — a Sequential model is basically a linear composition of Keras layers. Sequential … (snippet cut off)
0 码力 | 98 pages | 1.57 MB | 1 year ago
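A minimal, editor-added example of the Sequential API the snippet describes — a linear stack of layers applied in the order they are listed (written against tf.keras; the standalone keras package the tutorial uses exposes the same classes):

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    x = tf.random.normal((2, 8))      # two hypothetical 8-feature inputs
    print(model(x).shape)             # (2, 3): data flows through the layers in order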
68 results in total.













