《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques (33 pages, 1.96 MB): …a rover is transmitting images back to Earth. However, transmission costs make it infeasible to send the original image. Can we compress the transmission and decompress it on arrival? If so, what would be… …a representation that takes integer values in the range [0, 5]. As a result, the quantized wave requires low transmission bandwidth (Figure 2-3: Quantization of sine waves). Let's dig deeper into its mechanics… …of these variables over an expensive communication channel. Can we use quantization to reduce transmission size and thus save some costs? What if it did not matter to us whether x was stored/transmitted with…
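The excerpt's idea, mapping a continuous sine wave onto a few integer levels in [0, 5] so it is cheap to transmit, can be sketched as follows. This is a minimal illustration of uniform quantization, not the book's actual code; the function names are my own.

```python
import numpy as np

def quantize(x, num_levels=6):
    """Map values in [-1, 1] to integer levels 0..num_levels-1 (here [0, 5])."""
    # Scale [-1, 1] -> [0, num_levels - 1], then round to the nearest integer.
    scaled = (x + 1.0) / 2.0 * (num_levels - 1)
    return np.rint(scaled).astype(int)

def dequantize(q, num_levels=6):
    """Approximate inverse: integer levels back to [-1, 1]."""
    return q / (num_levels - 1) * 2.0 - 1.0

t = np.linspace(0, 2 * np.pi, 100)
wave = np.sin(t)
q = quantize(wave)            # small integers in [0, 5]: low bandwidth to send
recon = dequantize(q)         # reconstructed on arrival
max_err = np.max(np.abs(wave - recon))  # bounded by half a quantization step
```

With 6 levels the step size in the original domain is 0.4, so the reconstruction error is at most 0.2 per sample: the price paid for the bandwidth savings.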
《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques (34 pages, 3.18 MB): …zero values. Sparse compressed models achieve a higher compression ratio, which results in lower transmission and storage costs. Figure 5-1 visually depicts two networks; the one on the left is the original… …examples. Mars Rover beckons again! Can we do better with clustering? Remember the Mars Rover transmission project from Chapter 2? It seems like the scientists back at the Jet Propulsion Laboratory… …solution. Let us implement a method that simulates the transmission using clustering and measures the reconstruction error: def simulate_transmission_clustering(img, num_clusters): decoded_img, _, reconstruction_error = …
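The simulate_transmission_clustering snippet in the excerpt is truncated. A self-contained sketch of the same idea follows, assuming the standard approach of clustering pixel values with k-means and transmitting only the small codebook of centroids plus one index per pixel; the helper names are my own, not the book's.

```python
import numpy as np

def cluster_compress(img, num_clusters, iters=10):
    """Toy 1-D k-means over pixel values: 'transmit' the centroid codebook
    plus one cluster index per pixel, then decode and measure the error."""
    pixels = img.reshape(-1, 1).astype(float)
    # Initialize centroids evenly across the pixel value range.
    centroids = np.linspace(pixels.min(), pixels.max(), num_clusters).reshape(-1, 1)
    for _ in range(iters):
        # Assign each pixel to its nearest centroid (broadcasted distances).
        labels = np.argmin(np.abs(pixels - centroids.T), axis=1)
        # Move each centroid to the mean of its assigned pixels.
        for k in range(num_clusters):
            if np.any(labels == k):
                centroids[k] = pixels[labels == k].mean()
    decoded = centroids[labels].reshape(img.shape)
    reconstruction_error = float(np.mean((img - decoded) ** 2))
    return decoded, reconstruction_error

img = np.arange(64, dtype=float).reshape(8, 8)   # stand-in "image"
decoded, err = cluster_compress(img, num_clusters=4)
```

More clusters shrink the reconstruction error but enlarge the codebook and index width, which is exactly the rate-versus-distortion trade-off the chapter is exploring.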
Keras Tutorial (98 pages, 1.57 MB): …where Line 1 imports the Sequential model from keras.models; Line 2 imports the Dense layer, the Dropout layer, and the Activation module; Line 4 creates a new sequential model using the Sequential API; Line 5 adds a dense layer (Dense API) with relu activation (using the Activation module); Line 6 adds a dropout layer (Dropout API) to handle over-fitting; Line 7 adds another dense layer (Dense API) with relu activation; Line 8 adds another…
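The walkthrough above describes a Dense → ReLU → Dropout stack. To make the computation concrete without requiring Keras to be installed, here is a framework-free NumPy sketch of what such a stack computes at inference time; the layer sizes are illustrative, not taken from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """Fully connected layer: x @ w + b (what keras.layers.Dense computes)."""
    return x @ w + b

def relu(x):
    """ReLU activation, as added via Activation('relu') in the tutorial."""
    return np.maximum(x, 0.0)

# A tiny stack mirroring Dense -> relu -> Dense with illustrative shapes.
w1, b1 = rng.standard_normal((784, 512)) * 0.01, np.zeros(512)
w2, b2 = rng.standard_normal((512, 10)) * 0.01, np.zeros(10)

x = rng.standard_normal((4, 784))        # a batch of 4 flattened inputs
h = relu(dense(x, w1, b1))               # hidden activations, all >= 0
logits = dense(h, w2, b2)                # one score per class
```

Dropout is omitted here because it only acts during training (randomly zeroing activations); at inference the stack reduces to the two matrix products shown.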
PyTorch Release Notes (365 pages, 2.94 MB): …one of the following commands: --ipc=host, or --shm-size=… on the docker run --gpus all command line. … To pull data and model descriptions from locations outside the container… …the TensorRT integration for PyTorch brings the capabilities of TensorRT directly to Torch in one line via the Python and C++ APIs. Starting with the 22.05 release, the PyTorch container is available for…
动手学深度学习 v2.0 (Dive into Deep Learning v2.0; 797 pages, 29.45 MB): …−3, where the coefficient 2 is the slope of the tangent line: x = np.arange(0, 3, 0.1); plot(x, [f(x), 2 * x - 3], 'x', 'f(x)', legend=['f(x)', 'Tangent line (x=1)']). Section 2.4.2, Partial Derivatives: so far we have only discussed the differentiation of functions of a single variable. In deep learning, functions usually depend on many variables, so we need to extend the idea of differentiation to multivariate functions. … def read_time_machine(): #@save — with open(download('time_machine'), 'r') as f: lines = f.readlines(); return [re.sub('[^A-Za-z]+', ' ', line).strip().lower() for line in lines]; lines = read_time_machine(); print(f'# 文本总行数: {len(lines)}'); print(lines[0]) … def tokenize(lines, token='word'): #@save — """将文本行拆分为单词或字符词元""" (split text lines into word or character tokens) — if token == 'word': return [line.split() for line in lines] elif token == 'char': return [list(line) for line in lines] else: print('错误:未知词元类型:' + token)
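The preprocessing and tokenization helpers in the excerpt depend on a downloaded file. A self-contained version of the same two steps, using an inline sample string instead of download('time_machine'), looks like this:

```python
import re

def clean_lines(raw_lines):
    """Keep only letters, collapse everything else to spaces, lowercase
    (mirrors the excerpt's read_time_machine preprocessing)."""
    return [re.sub('[^A-Za-z]+', ' ', line).strip().lower() for line in raw_lines]

def tokenize(lines, token='word'):
    """Split each cleaned line into word or character tokens."""
    if token == 'word':
        return [line.split() for line in lines]
    elif token == 'char':
        return [list(line) for line in lines]
    raise ValueError('unknown token type: ' + token)

sample = ["The Time Machine, by H. G. Wells [1898]"]
lines = clean_lines(sample)          # punctuation and digits become spaces
words = tokenize(lines, 'word')      # -> list of word tokens per line
```

Raising ValueError on an unknown token type is slightly stricter than the excerpt's print call, but it surfaces the bug at the call site instead of silently returning None.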
Lecture Notes on Gaussian Discriminant Analysis, Naive… (19 pages, 238.80 KB): …the Expectation-Maximization (EM) algorithm. 6.1 Convex Sets and Convex Functions: a set C is convex if the line segment between any two points in C lies in C, i.e., for ∀x1, x2 ∈ C and ∀θ with 0 ≤ θ ≤ 1, we have θx1 + (1 − θ)x2 ∈ C. … we have the inequality in the fifth line; the sixth equality also comes from Eq. (31). To tighten the lower bound, we should let the equality (in the fourth line) hold; according to Jensen's inequality… …in particular holds for Qi = Q[t]_i, according to Eq. (32). We have the inequality in the second line because θ[t+1] is calculated by θ[t+1] = arg max_θ Σ_i Σ_{z(i)∈Ω} Q[t]_i(z(i)) log p(x(i), z(i); θ)…
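The lower-bound argument the excerpt refers to rests on Jensen's inequality applied to the concave log. Written out in the excerpt's notation (a standard derivation, reconstructed here for context):

```latex
% Jensen's inequality for the concave log gives the EM lower bound:
\log p(x^{(i)}; \theta)
  = \log \sum_{z^{(i)} \in \Omega} Q_i(z^{(i)})\,
         \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}
  \;\ge\; \sum_{z^{(i)} \in \Omega} Q_i(z^{(i)})\,
          \log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}.
% Equality holds when the ratio is constant in z^{(i)}, i.e. when
%   Q_i(z^{(i)}) = p(z^{(i)} \mid x^{(i)}; \theta)   (the E-step),
% and the M-step then maximizes the tight bound:
\theta^{[t+1]} = \arg\max_{\theta} \sum_i \sum_{z^{(i)} \in \Omega}
                 Q^{[t]}_i(z^{(i)}) \log p(x^{(i)}, z^{(i)}; \theta).
```

Requiring equality in the Jensen step is exactly what the excerpt means by "letting the equality hold to tighten the lower bound": it forces Q_i to be the posterior over the latent variable.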
如何利用深度学习提高高精地图生产的自动化率 - 邹亮 (How to Use Deep Learning to Raise the Automation Rate of HD Map Production, by Zou Liang; 34 pages, 56.04 MB): …lane line detection (Lane Line Detection) … sign detection and traffic light detection (Sign Detection, Traffic Light Detection)…
Lecture Notes on Support Vector Machine (18 pages, 509.37 KB): …gi(ω*) = 0 for ∀i = 1, 2, …, k. Another observation is that, since the inequality in the third line holds with equality, ω* actually minimizes L(ω, α*, β*) over ω. 2.2.3 Karush-Kuhn-Tucker (KKT) Conditions… In the following, we take α1 and α2 as an example to explain the optimization process of the SMO algorithm (i.e., Line 4 in Algorithm 1). By treating α1 and α2 as variables and the others as known quantities, the objective… (Figure 7: α+_1 and α+_2; panel (b): y(1)y(2) = 1) …which confines the optimization to be on a line. Since 0 ≤ α1, α2 ≤ C, we can derive a lower bound L and an upper bound H for them, as shown in Fig…
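The bounds L and H mentioned at the end of the excerpt come from intersecting the constraint line y(1)α1 + y(2)α2 = const with the box [0, C]². A minimal sketch of the standard SMO clipping step, using names matching the excerpt's notation (this is the textbook rule, not code from the notes themselves):

```python
def smo_bounds(alpha1, alpha2, y1, y2, C):
    """Feasible interval [L, H] for the updated alpha2 in SMO.
    The pair moves along the line y1*alpha1 + y2*alpha2 = const,
    intersected with the box 0 <= alpha1, alpha2 <= C."""
    if y1 == y2:
        # alpha1 + alpha2 is held fixed.
        L = max(0.0, alpha1 + alpha2 - C)
        H = min(C, alpha1 + alpha2)
    else:
        # alpha2 - alpha1 is held fixed.
        L = max(0.0, alpha2 - alpha1)
        H = min(C, C + alpha2 - alpha1)
    return L, H

def clip(a, L, H):
    """Project an unconstrained alpha2 update back into [L, H]."""
    return max(L, min(H, a))
```

After the one-variable quadratic is solved analytically for α2, the result is clipped into [L, H] and α1 is recovered from the linear constraint: the two cases above correspond to the two panels of the excerpt's Figure 7.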
AI 大模型千问 Qwen 中文文档 (Qwen Large Language Model Chinese Documentation; 56 pages, 835.78 KB): …rank0_print("Loading data...") — train_data = []; with open(data_args.data_path, "r") as f: for line in f: train_data.append(json.loads(line)); train_dataset = dataset_cls(train_data, tokenizer=tokenizer, max_len=max_len); if data_args.eval_data_path: eval_data = []; with open(data_args.eval_data_path, "r") as f: for line in f: eval_data.append(json.loads(line)); eval_dataset = dataset_cls(eval_data, tokenizer=tokenizer, max_len=max_len)…
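The snippet above reads fine-tuning data in JSONL format (one JSON object per line). A self-contained sketch of that loading step, using an in-memory StringIO in place of data_args.data_path so it runs without a file on disk:

```python
import json
from io import StringIO

def load_jsonl(fp):
    """Parse one JSON object per line, skipping blank lines
    (the JSONL pattern used by the excerpt's data loader)."""
    return [json.loads(line) for line in fp if line.strip()]

# Simulate a training file; the chat-message schema here is illustrative.
raw = (
    '{"messages": [{"role": "user", "content": "hi"}]}\n'
    '{"messages": [{"role": "user", "content": "bye"}]}\n'
)
train_data = load_jsonl(StringIO(raw))
```

The same helper would serve both the train and eval paths in the excerpt, since each is just a JSONL file parsed line by line before being wrapped in a dataset class.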
Keras: 基于 Python 的深度学习库 (Keras: the Python Deep Learning Library; 257 pages, 1.19 MB): …def generate_arrays_from_file(path): while 1: f = open(path); for line in f: # create Numpy arrays of input data and labels from each line in the file — x, y = process_line(line); yield (x, y); f.close(); model.fi… …a multi-input variant: for line in f: # generate numpy arrays of inputs and labels from each line in the file — x1, x2, y = process_line(line); yield ({'input_1': x1, 'input_2': x2}, {'output': y}); f.close() … 20.7 print_summary — keras.utils.print_summary(model, line_length=None, positions=None, print_fn=None) prints a summary of a model. Parameters: model — a Keras model instance; line_length — total length of each printed line (set this to adapt the display to different terminal window sizes); positions —…
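The generator pattern the excerpt shows (yield one training pair per file line, looping forever) can be sketched without the filesystem or Keras. The parse format "x1,x2,y" and the batching are my own illustrative choices; process_line in the excerpt is left unspecified.

```python
def line_batches(lines, batch_size):
    """Yield (inputs, labels) batches from an iterable of text lines,
    mirroring the excerpt's generate_arrays_from_file pattern."""
    batch = []
    for line in lines:
        # Hypothetical parse step standing in for process_line(line):
        # each line holds "x1,x2,y" as comma-separated numbers.
        x1, x2, y = (float(v) for v in line.split(','))
        batch.append(((x1, x2), y))
        if len(batch) == batch_size:
            xs = [b[0] for b in batch]
            ys = [b[1] for b in batch]
            yield xs, ys
            batch = []

data = ["1,2,0", "3,4,1", "5,6,0", "7,8,1"]
batches = list(line_batches(data, batch_size=2))
```

Note that the excerpt's while 1 loop exists so the generator can be consumed for many epochs; here the loop is dropped so the example terminates, and a with-statement around open(path) would be the more robust way to guarantee the file is closed.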
共 19 条
- 1
- 2













