BRAND BOOK VERSION 1.0
Go is an open source programming language that enables the production of simple, efficient and reliable software at scale. ... Contents: 1.0 mission & vision; 1.1 values; 1.2 tone of voice; 1.3 audience & key messages; 2.0 logo overview; 2.0 the gopher; 2.0.1 model sheet ... Bring order to the complexity of creating and running software.
23 pages | 651.68 KB | 1 year ago

BRAND BOOK VERSION 1.0
Go is an open source programming language that enables the production of simple, efficient and reliable software at scale. ... Contents: 1.0 mission & vision; 1.1 values; 1.2 tone of voice; 1.3 audience & key messages; 2.0 logo; 2.0 the gopher; 2.0.1 model sheet ... Bring order to the complexity of creating and running software.
23 pages | 1.16 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation
33 pages | 2.48 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 1 - Introduction
Chapter 1 - Introduction to Efficient Deep Learning. Welcome to the book! This chapter is a preview of what to expect in the book. We start off by providing an overview of the state of deep learning, its ... and IoT devices over time. The lighter blue bars represent forecasts. (Data Source: 1, 2) In this book, we will primarily focus on efficiency for both training and deploying efficient deep learning models ... one without hurting the other? This is illustrated in Figure 1-6. As mentioned earlier, with this book we'll strive to build a set of tools and techniques that can help us make models pareto-optimal and ...
21 pages | 3.17 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 4 - Efficient Architectures
... to compute the score matrix, such as the Luong23 style and the Bahdanau24 style attention. In this book, we have chosen to discuss the Luong algorithm because it is used in Tensorflow's attention layers ...
53 pages | 3.92 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
56 pages | 18.93 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
... quantized values for a given x. Logistics: We just wanted to take a moment to state that in this book, we have chosen to work with Tensorflow 2.0 (TF) because it has exhaustive support for building and ... are not familiar with the tensorflow framework, we refer you to the book Deep Learning with Python1. All the code examples in this book are available at the EDL GitHub repository. The code examples for all ... frequently operate on batches of data. Using vectorized operations also speeds up the execution (and this book is about efficiency, after all!). We highly recommend learning and becoming familiar with numpy.
33 pages | 1.96 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques
... machine learning frameworks like Tensorflow and PyTorch is pending as of the time of writing this book. Mainly what is lacking is kernels that can efficiently leverage the compressed weight matrices on ...
34 pages | 3.18 MB | 1 year ago

nim book v2, Chapter 3. Rendering Text
6 pages | 74.05 KB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review
... 18 Allgower, Eugene L. and Kurt Georg. Numerical Continuation Methods. Springer, link.springer.com/book/10.1007/978-3-642-61257-2. 1. The epoch at which the model first learns to correctly predict the ...
31 pages | 4.03 MB | 1 year ago
1,000 results total