《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
… to benchmark learning techniques. It is followed by a short discussion on trading off model quality against model footprint. An in-depth discussion of data augmentation and distillation follows right after. … discuss a few label-invariant image and text transformation techniques. … Image Transformations: This discussion is organized into the following two categories: spatial transformation and value transformation. … cheaper than human labor costs to produce training samples. We have chosen four techniques for deeper discussion, covering both statistical and deep-learning-based synthetic data generation models. … Back Translation …
56 pages | 18.93 MB | 1 year ago
Machine Learning Course - Wenzhou University - 08 Machine Learning - Ensemble Learning (机器学习课程-温州大学-08机器学习-集成学习)
… Ridgeway G. Special Invited Paper. Additive Logistic Regression: A Statistical View of Boosting: Discussion[J]. Annals of Statistics, 2000, 28(2): 393-400. [6] Friedman J H. Stochastic gradient boosting[J] …
50 pages | 2.03 MB | 1 year ago
Lecture Notes on Gaussian Discriminant Analysis, Naive …
… ∑_{i=1} 1(y^(i) = y)  (23). Remark: We assume binary features (X_j ∈ {0, 1} for all j ∈ [n]) in the above discussion. What if X_j ∈ {1, 2, …, v}? Can we get similar results? Check it by yourselves! 4.4 Laplace …
19 pages | 238.80 KB | 1 year ago
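The question raised in the excerpt above, extending a count-based estimate from binary features to X_j ∈ {1, …, v}, is typically answered with Laplace (add-one) smoothing. The sketch below is illustrative only and not taken from the lecture notes; the function name and sample data are assumptions:

```python
# Hedged sketch (not from the lecture notes): Laplace (add-one) smoothing
# for a categorical feature X_j taking values in {1, ..., v}.
from collections import Counter

def laplace_estimate(samples, v):
    """Estimate P(X_j = k) as (count(k) + 1) / (m + v) for k = 1..v."""
    m = len(samples)
    counts = Counter(samples)
    return {k: (counts[k] + 1) / (m + v) for k in range(1, v + 1)}

# With m = 3 samples over v = 3 values, the unseen value 3 still gets
# nonzero probability 1/(3 + 3), and the estimates sum to 1.
probs = laplace_estimate([1, 1, 2], v=3)
```

The key property is that no value is assigned zero probability merely because it was absent from the training sample, which keeps downstream Naive Bayes products from collapsing to zero.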
《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation
… readers to play with other choices and see how they affect the results. It's now time to conclude our discussion on automation with a short introduction to Automated ML, or AutoML, in the final section. … Summary …
33 pages | 2.48 MB | 1 year ago
《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
… an n-dimensional matrix) to denote inputs, weights, and the bias respectively. To simplify this discussion, let's assume the shape (an array describing the size of each dimension) of X is [batch size, D1] …
33 pages | 1.96 MB | 1 year ago
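The shape convention in the excerpt above can be made concrete with a minimal dense-layer sketch. This is an assumed illustration, not the book's code; the dimension names D1 and D2 follow the excerpt, and all numeric values are arbitrary:

```python
# Minimal sketch, assuming conventional shapes (not the book's exact code):
# a dense layer computes Y = X @ W + b, with X of shape [batch_size, D1],
# W of shape [D1, D2], and b of shape [D2].
batch_size, D1, D2 = 4, 3, 2

X = [[1.0] * D1 for _ in range(batch_size)]  # inputs:  [batch_size, D1]
W = [[0.5] * D2 for _ in range(D1)]          # weights: [D1, D2]
b = [0.1] * D2                               # bias:    [D2]

def dense(X, W, b):
    # Y[i][j] = sum_k X[i][k] * W[k][j] + b[j]  ->  shape [batch_size, D2]
    return [[sum(x_k * W[k][j] for k, x_k in enumerate(row)) + b[j]
             for j in range(len(b))] for row in X]

Y = dense(X, W, b)
print(len(Y), len(Y[0]))  # prints "4 2"
```

Tracking these shapes matters for compression: the weight matrix W (D1 × D2 entries) usually dominates the layer's footprint, so it is the natural target for quantization or factorization.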
5 results in total













