Keras: 基于 Python 的深度学习库 (Keras: Deep Learning Library for Python, Chinese documentation)
Contents (excerpt): 4.3.1 The Model class API; 4.3.2 Useful attributes of Model; 4.3.3 Model class methods; 20.8 plot_model; 20.9 multi_gpu_model

The core data structure of Keras is the model, a way to organize network layers. The simplest model is the Sequential model, a linear stack of layers. For more complex architectures, you should use the Keras functional API, which allows you to build arbitrary graphs of layers. A Sequential model is created as follows:

    from keras.models import Sequential

    model = Sequential()
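From here, layers are stacked one at a time with model.add(); a minimal sketch of what that typically looks like (the layer sizes and the input_dim of 100 are illustrative assumptions, not values from the excerpt):

    from keras.layers import Dense

    # Stack fully connected layers on the Sequential model above;
    # only the first layer needs to declare the input feature size.
    model.add(Dense(units=64, activation='relu', input_dim=100))
    model.add(Dense(units=10, activation='softmax'))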
《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques

Can we optimally prune the network connections, remove extraneous nodes, and so on, while retaining the model's performance? In this chapter we introduce the intuition behind sparsity, different possible methods of picking the connections and nodes to prune, and how to prune a given deep learning model to achieve storage and latency gains with a minimal performance tradeoff. The chapter then goes over further weight-compression techniques; let's learn about these techniques together!

Model Compression Using Sparsity: sparsity, or pruning, refers to the technique of removing (pruning) weights during model training to achieve smaller models.
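As a minimal illustration of one common way of picking connections to prune, here is a magnitude-based pruning sketch (the NumPy formulation and the 50% sparsity target are assumptions made for illustration, not the book's implementation):

    import numpy as np

    def magnitude_prune(weights, sparsity=0.5):
        """Zero out the smallest-magnitude fraction of the weights."""
        flat = np.sort(np.abs(weights).ravel())
        # Values below this threshold are pruned (set to zero).
        threshold = flat[min(int(sparsity * flat.size), flat.size - 1)]
        mask = np.abs(weights) >= threshold
        return weights * mask, mask

    w = np.random.randn(4, 4).astype(np.float32)
    pruned, mask = magnitude_prune(w, sparsity=0.5)
    print('nonzero fraction:', np.count_nonzero(pruned) / pruned.size)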
《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures

[Chapter epigraph from ANALOG magazine (1991).]

So far, we have discussed generic techniques that are agnostic to the model architecture; they can be applied in NLP, vision, speech, or other domains. However, each domain brings its own deployment challenges, and what good is a model that cannot be deployed in practical applications! Efficient architectures aim to improve model deployability by proposing novel ways to reduce model footprint and improve efficiency, including for models running on mobile and edge devices. We have also set up a couple of programming projects for a hands-on model-optimization experience using these efficient layers and architectures. Let's start our journey.
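One canonical example of an efficient layer (assumed here for illustration; the excerpt itself does not name it) is the depthwise separable convolution, which replaces a standard convolution with a much cheaper depthwise-plus-pointwise pair:

    import tensorflow as tf

    inputs = tf.keras.Input(shape=(32, 32, 64))
    standard = tf.keras.layers.Conv2D(128, 3, padding='same')(inputs)
    separable = tf.keras.layers.SeparableConv2D(128, 3, padding='same')(inputs)

    # Compare trainable parameter counts of the two layers.
    print(tf.keras.Model(inputs, standard).count_params())   # 73,856
    print(tf.keras.Model(inputs, separable).count_params())  # 8,896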
keras tutorial

Table of contents (excerpt): Model … 58; 10. Keras ― Model Compilation … 61; Compile the model … 62; Model Training …
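The "Compile the model" and "Model Training" steps listed in this contents excerpt typically look like the following in Keras (a minimal sketch; the architecture, data shapes, and hyperparameters are assumptions, not taken from the tutorial):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential([Dense(32, activation='relu', input_shape=(8,)),
                        Dense(1, activation='sigmoid')])

    # Compile: choose optimizer, loss, and metrics.
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])

    # Train on (assumed) random data.
    x = np.random.random((100, 8))
    y = np.random.randint(2, size=(100, 1))
    model.fit(x, y, epochs=5, batch_size=16, verbose=0)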
《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques

Compression techniques aim to reduce the model footprint (size, latency, memory, etc.). We can reduce the model footprint by reducing the number of trainable parameters, but doing so by hand has two drawbacks. First, it requires many trials and evaluations to reach a smaller model, if that is possible at all. Second, such an approach doesn't generalize well, because the model designs are specific to the particular problem. In this chapter, we introduce quantization, a model compression technique that addresses both these issues. We'll start with a gentle introduction to the idea of compression, followed by the details of quantization.
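The core idea of quantization can be sketched in a few lines: map floating-point values onto a small integer grid and back (the uint8 range and the round-to-nearest affine scheme below are standard choices assumed for illustration, not quoted from the chapter):

    import numpy as np

    def quantize_uint8(x):
        """Affine-quantize a float array into 8-bit integers."""
        x_min, x_max = float(x.min()), float(x.max())
        scale = (x_max - x_min) / 255.0  # assumes x_max > x_min
        q = np.round((x - x_min) / scale).astype(np.uint8)
        return q, scale, x_min

    def dequantize(q, scale, x_min):
        return q.astype(np.float32) * scale + x_min

    w = np.random.randn(3, 3).astype(np.float32)
    q, scale, x_min = quantize_uint8(w)
    # Round-trip error is bounded by half a quantization step (scale / 2).
    print('max abs error:', np.abs(w - dequantize(q, scale, x_min)).max())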
《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques

"… you'll go." ― Dr. Seuss

Model quality is an important benchmark to evaluate the performance of a deep learning model. A language translation application that uses a low-quality model would struggle to help its consumers communicate effectively with others who speak different languages. An application that employs a high-quality model with a reasonable translation accuracy would garner better consumer support. This chapter starts with the setup picked to benchmark learning techniques, followed by a short discussion on trading off model quality and model footprint. An in-depth discussion of data augmentation and distillation follows right after (a sketch of the distillation loss appears below).
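Distillation trains a small student model to match a larger teacher's softened predictions in addition to the ground-truth labels. A minimal sketch of such a loss (the temperature of 4.0 and the 50/50 weighting are illustrative assumptions):

    import tensorflow as tf

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=4.0, alpha=0.5):
        # Soft targets: match the teacher's temperature-smoothed distribution.
        soft = tf.keras.losses.kl_divergence(
            tf.nn.softmax(teacher_logits / temperature),
            tf.nn.softmax(student_logits / temperature)) * temperature ** 2
        # Hard targets: the usual cross-entropy on ground-truth labels.
        hard = tf.keras.losses.sparse_categorical_crossentropy(
            labels, student_logits, from_logits=True)
        return alpha * soft + (1.0 - alpha) * hard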
《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation

Should we apply quantization and clustering together? We have four options: none, quantization, clustering, and both, and we would need to train a model with each of these four options to make an informed decision. Blessed with a large research community, we face ever more such choices across multiple parameters.

[Figure 7-1: The plethora of choices that we face when training a deep learning model in the computer vision domain.]

A search space for n parameters is an n-dimensional region in which each point assigns a value to every parameter. Let's understand this using the earlier example of choosing quantization and/or clustering techniques for model optimization: we have a search space with two boolean-valued parameters, quantization and clustering.
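Exhaustively evaluating this two-parameter boolean search space amounts to a four-point grid search; a sketch is below (train_and_evaluate is a hypothetical placeholder for an actual training-and-scoring run, and the dummy scores are fabricated for illustration only):

    import itertools

    def train_and_evaluate(quantization, clustering):
        # Hypothetical placeholder: a real implementation would train a
        # model with these options and return its validation score.
        return 0.90 + 0.02 * quantization + 0.01 * clustering

    search_space = {'quantization': [False, True],
                    'clustering': [False, True]}
    best = None
    for values in itertools.product(*search_space.values()):
        config = dict(zip(search_space, values))
        score = train_and_evaluate(**config)
        if best is None or score > best[1]:
            best = (config, score)
    print('best configuration:', best)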
AI大模型千问 qwen 中文文档 (Qwen Chinese Documentation)
Qwen Team, May 11, 2024

Qwen is the large language model and large multimodal model series of the Qwen Team, Alibaba Group. Now the large language models have been upgraded to Qwen1.5. Loading the chat model with Hugging Face Transformers looks as follows:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    device = "cuda"  # the device to load the model onto

    # Now you do not need to add "trust_remote_code=True".
    model = AutoModelForCausalLM.from_pretrained(
        "Qwen/Qwen1.5-7B-Chat",
        torch_dtype="auto",
        device_map="auto",
    )
    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B-Chat")

    # Instead of using model.chat(), we directly use model.generate(),
    # but you need tokenizer.apply_chat_template() to format your inputs.
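The generation step referenced in the comments above typically proceeds as follows, continuing from the snippet (a sketch using the standard Transformers chat-template API; the prompt text is an illustrative assumption):

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Give me a short introduction to large language models."},
    ]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True)
    model_inputs = tokenizer([text], return_tensors="pt").to(device)

    generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512)
    # Keep only the newly generated tokens, dropping the prompt.
    generated_ids = [out[len(inp):] for inp, out in
                     zip(model_inputs.input_ids, generated_ids)]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    print(response)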
【PyTorch深度学习-龙龙老师】 (PyTorch Deep Learning by Teacher Long Long, preview edition 2021-12)

Contents (excerpt): Chapter 15, Custom Datasets — 15.1 The Pokémon dataset; 15.2 The custom dataset loading pipeline; 15.3 The Pokémon dataset in practice; 15.4 Transfer learning; 15.5 Saved_model; 15.6 Model deployment; 15.7 References

Introduction to Artificial Intelligence
"What we need is a machine that can learn from experience." ― Alan Turing

Machine learning has to learn from data, so the first step is to collect a large amount of real sample data. Take handwritten digit recognition as an example (see Figure 3.1): we need to collect many images of the digits 0–9 written by real people. For ease of storage and computation, the collected raw images are usually rescaled to a fixed size (shape), such as 224 rows by 224 columns of pixels (224 × 224) or 96 by 96 (96 × 96); these image samples serve as the input data x. At the same time, we also need a label for each image. A later snippet loads the MNIST training data:

    import torch
    import torchvision
    from matplotlib import pyplot as plt  # plotting utility
    from utils import plot_image, plot_curve, one_hot  # book-provided plotting helpers

    batch_size = 512  # batch size

    # Training set: automatically downloads MNIST from the network and
    # saves it to the mnist_data folder.
    train_db = torchvision.datasets.MNIST(
        'mnist_data', train=True, download=True,
        transform=torchvision.transforms.ToTensor())
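A DataLoader then wraps the dataset so it can be iterated in batches; a minimal continuation of the snippet above (the shuffle choice is an assumption):

    train_loader = torch.utils.data.DataLoader(
        train_db, batch_size=batch_size, shuffle=True)

    # Fetch one batch: images are [512, 1, 28, 28], labels are [512].
    x, y = next(iter(train_loader))
    print(x.shape, y.shape)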
《Efficient Deep Learning Book》[EDL] Chapter 1 - Introduction

Learning algorithms help build models, which, as the name suggests, are approximate mathematical models of which outputs correspond to a given input. To illustrate, when you visit Netflix's homepage, the service must guess which shows to surface; a show you enjoyed, say Seinfeld, might be popular with other users too. If we train a model to predict that probability based on your behavior and currently trending content, the model will assign a high probability to Seinfeld. Meanwhile, the performance of the model scaled well with the number of labeled examples, since the network had a large number of parameters; thus, to extract the most out of the setup, the model needed a large number of labeled examples.