Keras: 基于 Python 的深度学习库 (Keras: the Python-based deep learning library)
… to create your own fully customized models (the Model subclassing API was introduced in Keras 2.2.0). Here is a simple multilayer perceptron written with Model subclassing:
    import keras
    class SimpleMLP(keras.Model):
        def __init__(self, use_bn=False, use_dp=False, num_classes=10):
            super(SimpleMLP…
… pass a dictionary or list of modes to use a different sample_weight_mode on each output.
• weighted_metrics: a list of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.
• target_tensors: by default, Keras creates a placeholder for the model's targets, which will be fed with the target data during training. If instead you would like to use your own target tensors (conversely, Keras …
… epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None)
257 pages | 1.19 MB | 1 year ago
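The SimpleMLP definition in this excerpt is cut off mid-line. A minimal sketch of what a Model-subclass MLP of this shape typically looks like follows; the layer widths, dropout rate, and the call() body are assumptions added for illustration, not this document's own code.

    import keras

    class SimpleMLP(keras.Model):
        """A small multilayer perceptron built with the Model subclassing API."""
        def __init__(self, use_bn=False, use_dp=False, num_classes=10):
            super(SimpleMLP, self).__init__(name='mlp')
            self.use_bn = use_bn
            self.use_dp = use_dp
            self.dense1 = keras.layers.Dense(32, activation='relu')
            self.dense2 = keras.layers.Dense(num_classes, activation='softmax')
            if use_dp:
                self.dp = keras.layers.Dropout(0.5)            # assumed dropout rate
            if use_bn:
                self.bn = keras.layers.BatchNormalization(axis=-1)

        def call(self, inputs):
            x = self.dense1(inputs)
            if self.use_dp:
                x = self.dp(x)
            if self.use_bn:
                x = self.bn(x)
            return self.dense2(x)

    model = SimpleMLP()
    model.compile(optimizer='rmsprop', loss='categorical_crossentropy')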
《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
… scenario we have only a few examples, it is easy to manually assign them a label identifying which class a given animal belongs to. Puppies and cats would make instant favorites in the petting zoo …
… because we use the Negative Sampling technique so that we only look at the output probability of the label class (which should be closer to 1.0), and the output probabilities of a few other random classes (which …
    import os
    import pprint
    class_names = open(os.path.join('dbpedia_csv', 'classes.txt')).read().splitlines()
    num_classes = len(class_names)
    # The classes are as follows.
    pprint.pprint(class_names)
There are fourteen …
53 pages | 3.92 MB | 1 year ago
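To make the Negative Sampling idea in this excerpt concrete, here is a toy NumPy sketch (an illustration only, not the book's code): the loss looks at the true class plus a handful of randomly drawn negative classes instead of normalizing over all fourteen classes.

    import numpy as np

    def negative_sampling_loss(scores, label, num_negatives=5, seed=0):
        """Binary cross-entropy on the true class plus a few sampled negatives."""
        rng = np.random.default_rng(seed)
        num_classes = scores.shape[0]
        candidates = [c for c in range(num_classes) if c != label]
        negatives = rng.choice(candidates, size=num_negatives, replace=False)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        loss = -np.log(sigmoid(scores[label]))                   # push true class toward 1.0
        loss += -np.log(1.0 - sigmoid(scores[negatives])).sum()  # push sampled negatives toward 0.0
        return loss

    # Example: 14 classes (as in the DBpedia excerpt), random logits, true class 3.
    print(negative_sampling_loss(np.random.randn(14), label=3))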
《TensorFlow 2项目进阶实战》2-快速上手篇:动手训练模型和部署服务 (TensorFlow 2 projects in practice, part 2, quick start: hands-on model training and serving)
Historical aliases of tf.keras.Model:
• Class tf.compat.v1.keras.Model
• Class tf.compat.v1.keras.models.Model
• Class tf.compat.v2.keras.Model
• Class tf.compat.v2.keras.models.Model
• Class tf.keras.models.Model
    plt.figure()
    plt.imshow(train_images[1])
    plt.colorbar()
    plt.grid(False)
    plt.show()
Preprocess data
    class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
                   'Sandal', 'Shirt', 'Sneaker', 'Bag', …
…
        plt.yticks([])
        plt.grid(False)
        plt.imshow(train_images[i], cmap=plt.cm.binary)
        plt.xlabel(class_names[train_labels[i]])
    plt.show()
Build the model · Train and evaluate · Make prediction · Visualize (a sketch of these steps follows)
52 pages | 7.99 MB | 1 year ago
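The slide headings above (build, train, evaluate, predict) are not accompanied by code in this excerpt; below is a minimal sketch of those steps on the 10-class clothing dataset (Fashion-MNIST) that the class_names list suggests. The layer sizes and the epoch count are assumptions, not the course's own values.

    import tensorflow as tf

    (train_images, train_labels), (test_images, test_labels) = \
        tf.keras.datasets.fashion_mnist.load_data()
    train_images, test_images = train_images / 255.0, test_images / 255.0

    # Build the model
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10)
    ])
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])

    # Train and evaluate
    model.fit(train_images, train_labels, epochs=5)
    test_loss, test_acc = model.evaluate(test_images, test_labels)

    # Make a prediction for one test image
    probs = tf.nn.softmax(model(test_images[:1])).numpy()
    print(probs.argmax())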
AI大模型千问 qwen 中文文档 (Qwen large language model, Chinese documentation)
… deepspeed to speed up the training process. The script is quite concise and easy to understand.
    @dataclass
    class ModelArguments:
        model_name_or_path: Optional[str] = field(default="Qwen/Qwen-7B")

    @dataclass
    class DataArguments:
        data_path: str = field(default=None, …
        … default=None, metadata={"help": "Path to the evaluation data."})
        lazy_preprocess: bool = False

    @dataclass
    class TrainingArguments(transformers.TrainingArguments):
        cache_dir: Optional[str] = field(default=None)
        … "Sequences will be right padded (and possibly truncated)."},
        )
        use_lora: bool = False

    @dataclass
    class LoraArguments:
        lora_r: int = 64
        lora_alpha: int = 16
        lora_dropout: float = 0.05
        lora_target_modules: …
56 pages | 835.78 KB | 1 year ago
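The LoraArguments dataclass above is cut off, but hyperparameters like these are commonly turned into a trainable adapter with the peft library. The sketch below is an assumption about how that wiring usually looks; the target_modules names are guesses for a Qwen-style architecture, not values taken from this document.

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)
    lora_config = LoraConfig(
        r=64,                       # lora_r from the dataclass above
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["c_attn", "c_proj", "w1", "w2"],  # assumed projection names
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # only the LoRA adapter weights are trainable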
《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques
… untouched. We define a create_model_for_pruning() function which takes a pre-trained model and the names of the prunable blocks as inputs. It returns a model that is capable of sparse training. It clones …
… prepares the input arguments to create a model for pruning. The prunable_blocks variable is the list of names of prunable convolution blocks. We prune all convolution blocks from the second (zero indexed) onwards …
… it can be classified into one of the 12 classes (each representing either a target word, … with one class for 'unknown'). The code for this project is available here as a Jupyter notebook. Train the baseline …
34 pages | 3.18 MB | 1 year ago
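The create_model_for_pruning() helper itself is not shown in this excerpt. Below is a sketch of how such a helper is commonly written with the TensorFlow Model Optimization toolkit; the sparsity schedule values are assumptions, and this illustrates the pattern rather than reproducing the book's exact code.

    import tensorflow as tf
    import tensorflow_model_optimization as tfmot

    def create_model_for_pruning(model, prunable_blocks):
        """Clone `model`, wrapping only the layers named in `prunable_blocks`
        with magnitude pruning so they can be trained sparsely."""
        schedule = tfmot.sparsity.keras.PolynomialDecay(
            initial_sparsity=0.0, final_sparsity=0.75,
            begin_step=0, end_step=1000)

        def clone_fn(layer):
            if layer.name in prunable_blocks:
                return tfmot.sparsity.keras.prune_low_magnitude(
                    layer, pruning_schedule=schedule)
            return layer

        return tf.keras.models.clone_model(model, clone_function=clone_fn)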
Machine Learning Pytorch Tutorial
Prerequisites
● We assume you are already familiar with…
  1. Python3
    ■ if-else, loop, function, file IO, class, ...
    ■ refs: link1, link2, link3
  2. Deep Learning Basics
    ■ Prof. Lee's 1st & 2nd lecture videos
… batches and shuffling here.
Dataset & Dataloader
    from torch.utils.data import Dataset, DataLoader

    class MyDataset(Dataset):
        def __init__(self, file):
            self.data = ...
        def __getitem__(self, …
… for more information on data types.
Tensors – PyTorch v.s. NumPy
● Many functions have the same names as well
    PyTorch              NumPy
    x.reshape / x.view   x.reshape
    x.squeeze()          x.squeeze()
    x.unsqueeze(1)       np.expand_dims(x…
48 pages | 584.86 KB | 1 year ago
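The MyDataset skeleton above elides the file-reading details and is cut off at __getitem__. A filled-in sketch follows, assuming the data is a simple in-memory tensor rather than a file, just to show the three methods a map-style Dataset needs and how DataLoader batches it.

    import torch
    from torch.utils.data import Dataset, DataLoader

    class MyDataset(Dataset):
        def __init__(self, data):
            self.data = data              # store samples (here: a tensor)

        def __getitem__(self, index):
            return self.data[index]       # return one sample

        def __len__(self):
            return len(self.data)         # number of samples

    dataset = MyDataset(torch.arange(10, dtype=torch.float32))
    loader = DataLoader(dataset, batch_size=4, shuffle=True)
    for batch in loader:
        print(batch)                      # batches of shape (4,) and a final (2,)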
动手学深度学习 v2.0 (Dive into Deep Learning v2.0)
… text, and map each handwritten character onto the corresponding known character. This kind of "which one?" question is called a classification problem. Classification asks the model to predict which category (formally, which class) a sample belongs to. For example, handwritten digits might have 10 classes, with the labels set to the digits 0 to 9. The simplest classification problem has only two classes, which is called binomial classification. For example, the dataset might consist of animal images, and the labels might be {…
    n = 10000
    a = torch.ones([n])
    b = torch.ones([n])
Since we will frequently benchmark running times in this book, we define a timer:
    class Timer:  #@save
        """Record multiple running times."""
        def __init__(self):
            self.times = []
            self.start()
        def start(self): …
… In the evaluate_accuracy function, we create two variables in an Accumulator instance, used to store the number of correct predictions and the total number of predictions respectively. As we iterate over the dataset, both accumulate over time.
    class Accumulator:  #@save
        """Accumulate sums over n variables."""
        def __init__(self, n):
            self.data = [0.0] * n
        def add(self, …
797 pages | 29.45 MB | 1 year ago
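The Accumulator class above is cut off at add(). A sketch of how such an accumulator is typically completed and used follows; the method bodies are my reconstruction from the docstring and the surrounding prose, not necessarily the book's exact code.

    class Accumulator:
        """Accumulate sums over n variables."""
        def __init__(self, n):
            self.data = [0.0] * n

        def add(self, *args):
            # Add each argument to the corresponding running total.
            self.data = [a + float(b) for a, b in zip(self.data, args)]

        def reset(self):
            self.data = [0.0] * len(self.data)

        def __getitem__(self, idx):
            return self.data[idx]

    metric = Accumulator(2)       # (number of correct predictions, number of examples)
    metric.add(8, 10)
    metric.add(9, 10)
    print(metric[0] / metric[1])  # running accuracy: 0.85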
机器学习课程-温州大学-numpy使用总结 (Machine learning course, Wenzhou University: a summary of NumPy usage)
… slicing with integer arrays and boolean arrays.
Structured arrays: in C you can define a structure type with the struct keyword; NumPy has a similar notion of structured arrays.
    > persontype = np.dtype({
          'names':   ['name', 'age', 'weight'],
          'formats': ['S30', 'i', 'f']})
    > a = np.array([("Zhang", 32, 75.5), ("Wang", …
In NumPy, a polynomial function can be represented by a one-dimensional array: a[0] is the highest-order coefficient and a[-1] is the constant term.
    > a = np.array([1.0, 0, -2, 1])
    > p = np.poly1d(a)
    > print(type(p))
    <class 'numpy.lib.polynomial.poly1d'>
    > p(np.linspace(0, 1, 5))
    array([ 1.      ,  0.515625,  0.125   , …
49 pages | 1.52 MB | 1 year ago
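Both examples above are truncated, so here is a short continuation sketch: accessing the fields of the structured array, and a few common poly1d operations. The second person record's age and weight are made-up values used only to complete the array.

    import numpy as np

    persontype = np.dtype({'names':   ['name', 'age', 'weight'],
                           'formats': ['S30', 'i', 'f']})
    a = np.array([("Zhang", 32, 75.5), ("Wang", 24, 65.2)],  # 24 and 65.2 are assumed
                 dtype=persontype)
    print(a['name'])      # field access returns a whole column: [b'Zhang' b'Wang']
    print(a[0]['age'])    # a single record behaves like a struct: 32

    p = np.poly1d([1.0, 0, -2, 1])   # represents x**3 - 2*x + 1
    print(p.roots)                   # all roots of the polynomial
    print(p.deriv())                 # derivative: 3*x**2 - 2
    print((p * p).order)             # polynomials can be multiplied; the degree here is 6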
【PyTorch深度学习-龙龙老师】-测试版202112 (PyTorch Deep Learning, by 龙龙老师, beta edition 2021-12)
    column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
                    'Acceleration', 'Model Year', 'Origin']
    raw_dataset = pd.read_csv(dataset_path, names=column_names, …
…
    from torch import nn
    from torch.nn import functional as F
    from torch import optim

    class MyNetwork(nn.Module):
        def __init__(self):
            super(MyNetwork, self).__init__() …
… The Layer class implements a single network layer. It takes parameters such as the layer's number of input nodes, number of output nodes, and the activation function type; the weight tensor weights and the bias tensor bias are generated and initialized automatically from the input and output node counts at construction time. The code is as follows:
    class Layer:  # fully connected layer
        def __init__(self, n_input, n_neurons, activation=None, weights=None, …
439 pages | 29.91 MB | 1 year ago
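The Layer constructor above is cut off after its parameter list. The sketch below shows one common way a from-scratch fully connected layer like the one described is written; the initialization scale and the forward/activation handling are assumptions, not the book's exact implementation.

    import numpy as np

    class Layer:
        """Fully connected layer: weights and bias are created from the sizes."""
        def __init__(self, n_input, n_neurons, activation=None, weights=None, bias=None):
            # Randomly initialize weights and zero-initialize bias unless provided.
            self.weights = weights if weights is not None else \
                np.random.randn(n_input, n_neurons) * np.sqrt(1.0 / n_input)
            self.bias = bias if bias is not None else np.zeros(n_neurons)
            self.activation = activation

        def forward(self, x):
            z = x @ self.weights + self.bias
            if self.activation == 'relu':
                return np.maximum(z, 0)
            if self.activation == 'sigmoid':
                return 1.0 / (1.0 + np.exp(-z))
            return z  # linear output when no activation is given

    layer = Layer(4, 3, activation='relu')
    print(layer.forward(np.random.randn(2, 4)).shape)  # (2, 3)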
keras tutorial
… API) with relu activation (using the Activation module). The Sequential model exposes the Model class for creating customized models as well. We can use the sub-classing concept to create our own complex model …
… create our own customized layers. A customized layer can be created by sub-classing the keras.layers.Layer class, and it is similar to sub-classing Keras models.
Core Modules: Keras also provides a lot of built-in …
HDF5Matrix
    >>> from keras.utils import HDF5Matrix
    >>> data = HDF5Matrix('data.hdf5', 'data')
to_categorical: it is used to convert a class vector into a binary class matrix.
    >>> from keras.utils import to_categorical
    >>> labels = [0, 1, 2, 3, 4, …
98 pages | 1.57 MB | 1 year ago
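The to_categorical example above is truncated after the label list. Completing it as a sketch, with assumed label values and num_classes, shows the one-hot (binary class) matrix it produces:

    >>> from keras.utils import to_categorical
    >>> labels = [0, 1, 2, 3, 4]          # assumed completion of the truncated list
    >>> to_categorical(labels, num_classes=5)
    array([[1., 0., 0., 0., 0.],
           [0., 1., 0., 0., 0.],
           [0., 0., 1., 0., 0.],
           [0., 0., 0., 1., 0.],
           [0., 0., 0., 0., 1.]], dtype=float32)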
共 38 条
- 1
- 2
- 3
- 4













