Machine Learning PyTorch Tutorial
Prerequisites & What is PyTorch? ● Training & testing neural networks in PyTorch ● Dataset & DataLoader ● Tensors ● torch.nn: models, loss functions ● torch.optim: optimization ● Save/load models. A Dataset returns one sample per index via __getitem__, e.g. __getitem__(4) returns the sample at index 4. Tensors are high-dimensional matrices (arrays): a 1-D tensor, e.g. audio; a 2-D tensor, e.g. black & white images; a 3-D tensor, e.g. RGB images. Check the shape of a tensor with the .shape attribute (it is an attribute, not a method); dim in PyTorch == axis in NumPy. Create tensors directly from data (a list or numpy.ndarray): x = torch.tensor([[1, -1], [-1, 1]]).
48 pages | 584.86 KB | 1 year ago
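A minimal sketch of the two pieces this snippet names — creating a tensor from Python data and a Dataset whose __getitem__ returns the sample at a given index. The ToyDataset class and its data are illustrative, not from the slides:

    import torch
    from torch.utils.data import Dataset

    # Create a tensor directly from nested Python lists.
    x = torch.tensor([[1, -1], [-1, 1]])
    print(x.shape)     # torch.Size([2, 2]); .shape is an attribute, not a method
    print(x.dim())     # 2 dimensions (dim in PyTorch == axis in NumPy)

    # A toy Dataset: __getitem__(i) returns the i-th sample.
    class ToyDataset(Dataset):
        def __init__(self, data):
            self.data = data
        def __len__(self):
            return len(self.data)
        def __getitem__(self, index):
            return self.data[index]

    ds = ToyDataset(torch.arange(5))
    print(ds[4])       # tensor(4)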
机器学习课程-温州大学-03深度学习-PyTorch入门 (Machine Learning Course, Wenzhou University — 03 Deep Learning: Introduction to PyTorch)
Huang Haiguang, Associate Professor. Chapter contents: 01 Tensors ● 02 Autograd (automatic differentiation) ● 03 Neural networks ● 04 Training a classifier. A Tensor is essentially a multidimensional array. To create a tensor with the same size as another tensor, use torch.*_like, e.g. torch.rand_like(); to create a tensor of a similar type but a different size, use the tensor.new_* creation ops. Inspecting tensor attributes: tensor1 = torch.randn(2, 3) draws a (2, 3) tensor from the standard normal distribution; tensor1.ndim gives the number of dimensions, tensor1.is_cuda reports whether the tensor is stored on the GPU, and tensor1.grad holds its gradient. Tensors convert between CPU and GPU, and to/from NumPy: a CPU tensor moves to the GPU with cpu_tensor.cuda().
40 pages | 1.64 MB | 1 year ago
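A short sketch of the attribute checks and conversions listed above; the .cpu()/.from_numpy() lines complete the round trip the snippet truncates, and the CUDA move is guarded since a GPU may be absent:

    import torch

    tensor1 = torch.randn(2, 3)       # shape (2, 3), standard normal samples
    print(tensor1.ndim)               # 2     -- number of dimensions
    print(tensor1.is_cuda)            # False -- not on the GPU yet
    print(tensor1.grad)               # None  -- no backward pass has run

    like = torch.rand_like(tensor1)   # same size/dtype as tensor1, uniform values

    if torch.cuda.is_available():
        gpu_tensor = tensor1.cuda()   # CPU tensor -> GPU tensor
        back = gpu_tensor.cpu()       # GPU tensor -> CPU tensor
    arr = tensor1.numpy()             # CPU tensor -> numpy.ndarray (shared memory)
    t2 = torch.from_numpy(arr)        # numpy.ndarray -> tensor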
《Efficient Deep Learning Book》[EDL] Chapter 2 — Compression Techniques
The basic neural network operation is as follows: f(X; W, b) = σ(XW + b). Here, X, W and b are tensors (the mathematical term for an n-dimensional matrix) denoting the inputs, weights and bias respectively. A neural network model learns the W and b tensors, which are stored with the model; hence, they contribute significantly to its size. Of the two tensors, W naturally dominates the size owing to its higher … weights to reduce a model's size: … Footnote: the terms tensor and matrix are used interchangeably in the text. Tensors are N-dimensional matrices. The shape of a tensor denotes the size of each of its dimensions, and …
33 pages | 1.96 MB | 1 year ago
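A small numerical sketch of f(X; W, b) = σ(XW + b) with made-up shapes, which also shows why W dominates the stored size:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Illustrative shapes: a batch of 4 inputs with 256 features, 128 output units.
    X = np.random.randn(4, 256)     # inputs (not stored with the model)
    W = np.random.randn(256, 128)   # learned weights: 256*128 = 32768 values
    b = np.random.randn(128)        # learned bias: only 128 values

    out = sigmoid(X @ W + b)        # f(X; W, b) = sigma(XW + b)
    print(out.shape)                # (4, 128)
    print(W.size, b.size)           # W dominates the model's stored size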
Keras Tutorial
int_shape returns the shape of a tensor:
>>> k.int_shape(data)
(1, 3, 3)
dot is used to multiply two tensors: if a and b are two tensors, c is the outcome of the product ab; assume a's shape is (4, 2) and b's shape … model.inputs returns all the input tensors of the model as a list:
>>> inputs = model.inputs
>>> inputs
[<tf.Tensor 'input_1:0' shape=(?, 3) dtype=float32>]
model.outputs returns all the output tensors of the model. … is_categorical_crossentropy. All of the above loss functions accept two arguments: y_true — the true labels as tensors; y_pred — the prediction with the same shape as y_true. Import the losses module before using a loss function.
98 pages | 1.57 MB | 1 year ago
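A runnable sketch using the TensorFlow-bundled Keras backend; the (4, 2) × (2, 3) shapes are an assumption to complete the truncated dot example:

    import numpy as np
    from tensorflow.keras import backend as K
    from tensorflow.keras import losses

    # K.dot multiplies two tensors: (4, 2) . (2, 3) -> (4, 3).
    a = K.constant(np.random.rand(4, 2))
    b = K.constant(np.random.rand(2, 3))
    c = K.dot(a, b)
    print(K.int_shape(c))   # (4, 3)

    # Every built-in loss takes y_true and y_pred with matching shapes.
    y_true = K.constant([[0.0, 1.0], [1.0, 0.0]])
    y_pred = K.constant([[0.1, 0.9], [0.8, 0.2]])
    loss = losses.categorical_crossentropy(y_true, y_pred)
    print(K.eval(loss))     # one loss value per sample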
AI大模型千问 Qwen 中文文档 (Qwen Large Model — Chinese Documentation)
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
# Directly use generate() and tokenizer.decode() to get the output.
# Use `max_new_tokens` to control the maximum output length.
generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512)
56 pages | 835.78 KB | 1 year ago
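A self-contained version of the fragment above — a sketch assuming the Hugging Face transformers API; the checkpoint name is a placeholder to substitute with the model you actually use:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model_name = "Qwen/Qwen1.5-7B-Chat"  # assumed checkpoint name

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto").to(device)

    messages = [{"role": "user", "content": "Give me a short introduction to LLMs."}]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    model_inputs = tokenizer([text], return_tensors="pt").to(device)

    # Use `max_new_tokens` to control the maximum output length.
    generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512)
    # Strip the prompt tokens, then decode only the newly generated part.
    generated_ids = [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    print(response)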
PyTorch Tutorialnumber of LSTM cells based on the sentence’s length. PyTorch • Fundamental Concepts of PyTorch • Tensors • Autograd • Modular structure • Models / Layers • Datasets • Dataloader • Visualization Tools like examples Train Model •Train weights Evaluate Model •Visualise Tensor • Tensor? • PyTorch Tensors are just like numpy arrays, but they can run on GPU. • Examples: And more operations like: Indexing of operations for autograd • t.grad_fn Loading Data, Devices and CUDA • Numpy arrays to PyTorch tensors • torch.from_numpy(x_train) • Returns a cpu tensor! • PyTorch tensor to numpy • t.numpy() • Using0 码力 | 38 页 | 4.09 MB | 1 年前3
Keras: 基于 Python 的深度学习库 (Keras: Deep Learning Library Based on Python)
compile(self, optimizer, loss, metrics=None, loss_weights=None, sample_weight_mode=None, weighted_metrics=None, target_tensors=None) — configures the model for training. Arguments: ● optimizer: a string (name of an optimizer) or an optimizer object; see optimizers. ● loss: a string (name of an objective function) or an objective function; see losses. ● weighted_metrics: a list of metrics to be evaluated and weighted by sample_weight or class_weight. ● target_tensors: by default, Keras creates a placeholder for the model's target, which will be fed with the target data during training. If instead you would like to use your own target tensors (in turn, Keras will not load external NumPy data for these targets at training time), you can specify them via the target_tensors argument. It should be a single tensor (for a single-output Sequential model).
257 pages | 1.19 MB | 1 year ago
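A minimal sketch of compile() on a single-output Sequential model, showing the string-or-object convention for optimizer and loss; note that target_tensors belongs to the legacy Keras API documented above and is absent from recent tf.keras releases:

    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        layers.Dense(32, activation="relu", input_shape=(16,)),
        layers.Dense(10, activation="softmax"),
    ])

    # optimizer and loss may be strings (names) or objects.
    model.compile(
        optimizer="rmsprop",              # or keras.optimizers.RMSprop(1e-3)
        loss="categorical_crossentropy",  # or keras.losses.CategoricalCrossentropy()
        metrics=["accuracy"],
        weighted_metrics=["accuracy"],    # weighted by sample_weight / class_weight
    )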
【PyTorch深度学习-龙龙老师】-测试版202112 (Deep Learning with PyTorch — 龙龙老师, beta edition 2021-12)
… the partial derivatives with respect to w and b, so mark w and b as tensors to be optimized when creating them. If the requires_grad parameter is not set, an error like "RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn" is raised. … (4.4 Creating Tensors) … Whether to merge tensors by concatenation or stacking depends on whether the scenario needs a new dimension; the typical use cases and usage of both operations are introduced below. Concatenation: in PyTorch, tensors are concatenated with torch.cat(tensors, dim), where the tensors parameter holds the list of all tensors to merge and dim specifies the index of the dimension to merge along. Returning to the earlier example, the grade books are merged along the class dimension, whose index is 0, i.e. dim=0. … torch.randn([6, 35, 8]); torch.cat([a, b], dim=0) # illegal concatenation: the other dimensions' lengths differ — Out[3]: RuntimeError: Sizes of tensors must match except in dimension 0. Got 32 and 35 in dimension 1. Stacking: the concatenation operation merges data along existing dimensions and does not create a new dimension. If, when merging data, …
439 pages | 29.91 MB | 1 year ago
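A compact sketch covering both excerpts — the requires_grad error and its fix, then cat versus stack. The [classes, students, subjects] shapes follow the book's grade-book example, but the exact shape of a is assumed:

    import torch

    # requires_grad: mark w and b as tensors to be optimized, otherwise backward()
    # raises "element 0 of tensors does not require grad and does not have a grad_fn".
    w = torch.tensor(2.0, requires_grad=True)
    b = torch.tensor(1.0, requires_grad=True)
    loss = (3.0 * w + b) ** 2
    loss.backward()
    print(w.grad, b.grad)                    # partial derivatives w.r.t. w and b

    # cat merges along an EXISTING dimension: [classes, students, subjects].
    a  = torch.randn(4, 35, 8)               # assumed shape for a
    b2 = torch.randn(6, 35, 8)
    print(torch.cat([a, b2], dim=0).shape)   # torch.Size([10, 35, 8])

    # Illegal concatenation: all non-cat dimensions must match (32 vs 35 in dim 1).
    try:
        torch.cat([torch.randn(4, 32, 8), b2], dim=0)
    except RuntimeError as e:
        print(e)

    # stack creates a NEW dimension, so inputs must have identical shapes.
    c = torch.randn(6, 35, 8)
    print(torch.stack([b2, c], dim=0).shape)  # torch.Size([2, 6, 35, 8])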
8 results in total













