Image and Video Processing with Deep Learning - 沈小勇 (Xiaoyong Shen) (121 pages, 37.75 MB)

Remaining challenges (data from Vid4 [Ce Liu et al.], bicubic x4): misalignment, occlusion, and large motion. Effectiveness: how do we make good use of multiple frames, and are the generated details real? Practical issues also remain: intensive parameter tuning and slow runtime.

Advantages of our method: better use of sub-pixel motion; promising results both visually and quantitatively; fully scalable (arbitrary input size, arbitrary …). Data from Vid4 [Ce Liu et al.].

Motion estimation in our method: a neighboring frame and the reference frame pass through the motion-estimation (ME) module, producing a flow field F_{i→0} that drives the Sub-pixel Motion Compensation (SPMC) layer.
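The motion-compensation step warps a neighboring frame toward the reference frame using the estimated flow. A minimal sketch of such sub-pixel (bilinear) backward warping is below; `warp_bilinear` and its (dy, dx) flow layout are illustrative assumptions, not the slide's actual SPMC layer.

```python
import numpy as np

def warp_bilinear(frame, flow):
    """Warp `frame` toward the reference view using a dense flow field.

    frame: (H, W) array; flow: (H, W, 2) array of (dy, dx) sub-pixel offsets.
    """
    H, W = frame.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Sub-pixel source positions, clamped to the image border.
    sy = np.clip(ys + flow[..., 0], 0, H - 1)
    sx = np.clip(xs + flow[..., 1], 0, W - 1)
    # Integer corners and fractional weights for bilinear sampling.
    y0 = np.floor(sy).astype(int); x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, H - 1); x1 = np.minimum(x0 + 1, W - 1)
    wy = sy - y0; wx = sx - x0
    top = frame[y0, x0] * (1 - wx) + frame[y0, x1] * wx
    bot = frame[y1, x0] * (1 - wx) + frame[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

Because the sampling positions are continuous, the warp can exploit sub-pixel motion, which is exactly what the slides credit for the quality gains.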
Visual Simultaneous Localization and Mapping in Complex Environments (复杂环境下的视觉同时定位与地图构建) (60 pages, 4.61 MB)

… simple translation; Group B: there are loops; Group C: slow and nearly pure rotation; Group D: fast motion with strong rotation. Timing statistics: computation time on a desktop machine, and on mobile devices 20-50 fps on an iPhone 6. Spatio-temporally consistent depth recovery: Guofeng …
Efficient Deep Learning Book [EDL], Chapter 5 - Advanced Compression Techniques (34 pages, 3.18 MB)

```python
def prune(w, sparsity_rate):  # enclosing signature inferred from the snippet body
    w = w.copy()
    w_1d = np.reshape(w, (-1))
    # Create a list of indices sorted by the absolute magnitude of the weights.
    w_1d_sorted_indices = np.argsort(np.abs(w_1d))
    # Compute the number of elements to zero out.
    num_elements_to_zero = int(w_1d.shape[0] * sparsity_rate)
    # Set the respective indices to zero.
    w_1d[w_1d_sorted_indices[:num_elements_to_zero]] = 0.0
    w = np.reshape(w_1d, w.shape)
    return w

def compress(w):
    ...
```

A later fragment initializes centroids for weight clustering:

```python
# Pick initial centroids that are evenly spaced.
x_sorted = np.sort(x.flatten())
centroids_init = np.linspace(x_sorted[0], x_sorted[-1], num_clusters)
# Construct the variables in this optimization ...
```
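The centroid initialization in the snippet feeds a clustering step that maps every weight to a shared centroid value. A minimal Lloyd-style sketch of that weight-sharing quantization follows; the `cluster_weights` name and the fixed iteration count are assumptions, not the chapter's exact code.

```python
import numpy as np

def cluster_weights(w, num_clusters):
    """Quantize weights by assigning each one to its nearest centroid."""
    x = w.flatten()
    # Evenly spaced initial centroids over the weight range.
    centroids = np.linspace(x.min(), x.max(), num_clusters)
    for _ in range(10):  # a few Lloyd iterations
        # Assign every weight to the closest centroid.
        assign = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its assigned weights.
        for k in range(num_clusters):
            if np.any(assign == k):
                centroids[k] = x[assign == k].mean()
    # Replace each weight with its centroid (weight sharing).
    return centroids[assign].reshape(w.shape), centroids
```

After clustering, the tensor contains at most `num_clusters` distinct values, so it can be stored as small integer codes plus a tiny codebook.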
Efficient Deep Learning Book [EDL], Chapter 6 - Advanced Learning Techniques (Technical Review) (31 pages, 4.03 MB)

… where the scoring function ranks training examples by difficulty. Until the t-th epoch, we use the first fraction of examples from the sorted training set, as given by the pacing function. If we train for a total of T epochs, the pacing function should reach 1.0 by then to ensure that the entire dataset … the fraction of data that is enabled from the sorted training set. The dotted pacing line shows a pacing function that starts with a fixed fraction of the data sorted by the scores and, at some iteration, starts … curriculum learning. Each pacing function describes the schedule for enabling the training dataset, sorted by increasing hardness. Label smoothing and curriculum learning both help with better generalization.
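The pacing idea above can be sketched directly: sort examples by a difficulty score and enable a growing prefix of the sorted set. The linear schedule, the `b0` starting fraction, and the "lower score = easier" convention are assumptions of this sketch, not the chapter's exact functions.

```python
import numpy as np

def linear_pacing(t, T, b0=0.2):
    """Fraction of the sorted dataset enabled at epoch t.

    Starts at b0 and grows linearly to 1.0 at the final epoch T.
    """
    return min(1.0, b0 + (1.0 - b0) * t / T)

def curriculum_indices(scores, t, T, b0=0.2):
    """Indices of the easiest examples enabled at epoch t (lower score = easier)."""
    order = np.argsort(scores)                       # easy-to-hard ordering
    n = max(1, int(len(scores) * linear_pacing(t, T, b0)))
    return order[:n]                                 # enabled prefix
```

Early epochs then train only on the easiest fifth of the data, and the full dataset is in play by the final epoch.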
Dive into Deep Learning (动手学深度学习) v2.0 (797 pages, 29.45 MB)

From the `Vocab` class:

```python
if reserved_tokens is None:
    reserved_tokens = []
# Sort tokens by frequency of occurrence.
counter = count_corpus(tokens)
self._token_freqs = sorted(counter.items(), key=lambda x: x[1], reverse=True)
# The unknown token has index 0.
self.idx_to_token = ['<unk>'] + ...
```

… the higher the weight.

```python
d2l.show_heatmaps(attention_weights.unsqueeze(0).unsqueeze(0),
                  xlabel='Sorted training inputs',
                  ylabel='Sorted testing inputs')
```

10.2.4 Parametric Attention Pooling

Nonparametric Nadaraya-Watson kernel regression has the benefit of consistency: given enough data, this model converges to … smooth.

```python
d2l.show_heatmaps(net.attention_weights.unsqueeze(0).unsqueeze(0),
                  xlabel='Sorted training inputs',
                  ylabel='Sorted testing inputs')
```

Chapter 10: Attention Mechanisms. Summary:
• Nadaraya-Watson kernel regression is an example of machine learning with attention mechanisms.
• Na…
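The attention weights plotted by `show_heatmaps` come from Nadaraya-Watson kernel regression with a Gaussian kernel, where each query attends to every training input via a softmax over negative squared distances. A NumPy-only sketch follows; the function name and the `sigma` bandwidth parameter are assumptions of this sketch.

```python
import numpy as np

def nw_attention(x_query, x_key, y_key, sigma=1.0):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.

    Returns predictions and the (num_queries, num_keys) attention weights.
    """
    # Attention logits: -(x_q - x_k)^2 / (2 sigma^2).
    logits = -((x_query[:, None] - x_key[None, :]) ** 2) / (2 * sigma ** 2)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)             # softmax: each row sums to 1
    return w @ y_key, w
```

Plotting `w` with the queries and keys sorted reproduces the diagonal-band heatmaps referenced in the text: inputs closest to a query receive the highest weight.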
Qwen LLM Chinese Documentation (AI大模型千问 qwen 中文文档) (56 pages, 835.78 KB)

… `chunk_conent:` `return docs`

```python
if len(id_set) == 0 and self.score_threshold > 0:
    return []
id_list = sorted(list(id_set))
id_lists = separate_list(id_list)
for id_seq in id_lists:
    ...
```

1.16 Langchain
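The snippet calls a `separate_list` helper on the sorted chunk ids. A plausible implementation, assumed here rather than taken from the documentation, splits the sorted ids into runs of consecutive values so that adjacent chunks can be merged back into contiguous passages:

```python
def separate_list(ids):
    """Split a sorted list of ints into runs of consecutive values.

    separate_list([1, 2, 3, 7, 8, 10]) -> [[1, 2, 3], [7, 8], [10]]
    """
    runs = []
    for i in ids:
        if runs and i == runs[-1][-1] + 1:
            runs[-1].append(i)   # extends the current consecutive run
        else:
            runs.append([i])     # starts a new run
    return runs
```

Each returned run (`id_seq` in the snippet's loop) then corresponds to one contiguous block of document chunks.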
PyTorch Deep Learning - 龙龙老师 (test edition, 2021-12) (439 pages, 29.91 MB)

```python
# Create the numeric label-encoding table.
name2label = {}  # encoding dict, e.g. "sq...": 0
# Iterate over the subfolders under the root directory in sorted order,
# so the name-to-label mapping stays fixed across runs.
for name in sorted(os.listdir(os.path.join(root))):
    # Skip entries that are not folders.
    if not os.path.isdir(os.path.join(root, name)):
        ...
```
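The loop above can be wrapped into a reusable function; `build_name2label` is a hypothetical name, and skipping non-directories with `continue` is an assumption about the truncated branch:

```python
import os
import tempfile

def build_name2label(root):
    """Map each subdirectory name under `root` to a stable integer label.

    Sorting the directory listing keeps the mapping deterministic across runs.
    """
    name2label = {}
    for name in sorted(os.listdir(root)):
        if not os.path.isdir(os.path.join(root, name)):
            continue  # skip plain files, keep only class folders
        name2label[name] = len(name2label)
    return name2label
```

With class folders `daisy/`, `rose/`, `sunflower/` this yields `{"daisy": 0, "rose": 1, "sunflower": 2}`, regardless of filesystem listing order.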
7 results in total.