谭国富: Applications of Deep Learning in Image Content Moderation
Large volumes of pornographic, violent, and other harmful images are mixed into online content, seriously hindering the healthy development of the internet. The rapid rise of live streaming has made harmful content in video grow even faster; obscene and violent clips leak out frequently, putting the streaming platforms in crisis. Content regulation keeps tightening: in the first half of 2017 the major live-streaming industry associations were founded, and while platforms began regulating themselves, oversight by state bodies such as the Cyberspace Administration of China (网信办) and the Ministry of Culture also grew stricter; nearly every well-known streaming platform has been named and investigated. Notably, in 2017 the "黄鳝事件" (eel incident) went viral, pushing pornographic live streaming back to the center of public debate.
Scale of the problem: WeChat Moments sees 1 billion image uploads and 2 billion video plays per day; Qzone holds 400 billion stored images and adds 600 million new album uploads daily.
SACC2017 — Content moderation: pain points and demands. Platforms face three options, each with a cost: silently endure (a single serious violation puts the platform at risk of suspension for rectification); build an in-house recognition model (expensive specialized hardware and AI experts; insufficient samples make the model miss violations, and tuning it is hard); or add moderation headcount (manual review is fatiguing and error-prone).
Face recognition: identifying politically sensitive figures in live-streaming and video scenarios. Face retrieval over databases of hundreds of millions of faces, returning target face information from blacklist and whitelist databases within seconds. Technical metrics: YouTu (优图) face recognition combines traditional methods with deep learning, building on the Qzone face wall and WeBank remote identity verification, and reaches 99.80% on LFW. QQ, Weiyun, etc.: illegal use of portraits of leaders, public figures, and celebrities as avatars. Live streams, game videos, etc.: illegal insertion of leaders and government/state […]
32 pages | 5.17 MB | 1 year ago
Building an Elastic Deep Learning Compute Platform on Rich-Media Big Data
Speaker: 土土 @ Qiniu (七牛) AtLab
Mobile has ushered in the rich-media era: data storage, data acceleration, and data processing for live streaming, video-on-demand, and Connect. Over 1 billion images are uploaded every day, and more than a trillion hours of audio and video are stored. Who works with this data? Content moderation teams and operations-analytics teams. Where does AI come in? Content understanding: classification, detection, segmentation, tracking, description, search, analysis…
21 pages | 1.71 MB | 1 year ago
《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
…"hello", "weather", and "time". The output is "none" when none of the three acceptable words is detected. Now, let's say that the performance threshold for a given model to be considered feasible for deployment… With better sample efficiency, the model now needs only 5,000 labels and 40,000 steps to reach that accuracy (5,000 fewer labels and 10,000 fewer steps than the baseline). Now, there can be a few different… footprint metrics. This was necessary to build an intuition for the real-world problems they aim to tackle. Now, let's dive into these learning techniques to understand what they are and how to employ them in deep…
56 pages | 18.93 MB | 1 year ago
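To make the excerpt's setting concrete, here is a minimal Keras sketch of a four-class keyword-spotting classifier over "hello", "weather", "time", and "none"; the spectrogram input shape and the architecture are illustrative assumptions, not taken from the book.

```python
import tensorflow as tf

NUM_CLASSES = 4  # "hello", "weather", "time", plus "none"

# Hypothetical input: 49 frames x 40 mel bins from a one-second audio clip.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 40, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```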
《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
…Blaise Pascal. In the last chapter, we discussed a few ideas to improve deep-learning efficiency. Now, we will elaborate on one of those ideas: the compression techniques. Compression techniques aim to… can lie in any part of the range [xmin, xmax], and there are no clusters of values in any part. Now that we have the assumptions out of the way, instead of working with 32 bits for storing x, let us… We need the floor function (⌊·⌋) so that the floating-point value is converted to an integer value. Now, if we plug in x = xmin, xq would be 0. However, when we plug in x = xmax, xq would be 2^b, which…
33 pages | 1.96 MB | 1 year ago
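A minimal sketch of the b-bit linear quantization the excerpt walks through, assuming an asymmetric scale s = (xmax − xmin)/(2^b − 1); dividing by 2^b − 1 keeps x = xmax at the top level 2^b − 1 instead of overflowing to 2^b, which is exactly the edge case the excerpt flags. Variable names are illustrative.

```python
import numpy as np

def quantize(x, b=8):
    """Linearly quantize floats in [x.min(), x.max()] to b-bit integer levels."""
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / (2 ** b - 1)      # assumes x_max > x_min
    x_q = np.floor((x - x_min) / scale + 0.5)   # floor(. + 0.5) = round to nearest
    return x_q.astype(np.uint32), scale, x_min

def dequantize(x_q, scale, x_min):
    """Recover an approximation of the original floats."""
    return x_q * scale + x_min

x = np.random.uniform(-3.0, 5.0, size=10).astype(np.float32)
x_q, scale, x_min = quantize(x, b=8)
print(np.abs(x - dequantize(x_q, scale, x_min)).max())  # error <= scale / 2
```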
《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques
…and fine-tuning the network in each round. At the end of the final round, the target fraction of weights will have been removed. Now that we have presented a general algorithm for pruning, we should go over some examples of different… Since the weight matrix was of shape [6, 6], we can now treat this problem as having input [n, 5] and a weight matrix of size [5, 6], because we have simply removed the first neuron. Now, consider a convolution layer… floating-point values), the codebook will cost us 4·2^b bytes to store. For each tensor element we will now store only the index of its centroid in the codebook, which takes up only b bits. For a tensor with…
34 pages | 3.18 MB | 1 year ago
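A minimal sketch of the codebook (clustering-based) quantization the excerpt describes, using scikit-learn's KMeans as an assumed clustering choice; the book's own implementation may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def codebook_quantize(w, bits=4):
    """Cluster tensor values into 2**bits centroids and keep only the indices."""
    k = 2 ** bits
    km = KMeans(n_clusters=k, n_init=10).fit(w.reshape(-1, 1))
    indices = km.labels_.astype(np.uint8)       # bits-wide index per element
    codebook = km.cluster_centers_.ravel()      # k float32 centroids
    return indices, codebook

w = np.random.randn(6, 6).astype(np.float32)
indices, codebook = codebook_quantize(w, bits=4)
w_hat = codebook[indices].reshape(w.shape)      # dequantized approximation

# Storage: 36 indices x 4 bits + 16 centroids x 4 bytes, versus 36 x 4 bytes raw.
print(np.abs(w - w_hat).max())
```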
keras tutorial
…(AMD64)] on win32. Type "help", "copyright", "credits" or "license" for more information. >>> As of now, the latest version is '3.7.2'. If Python is not installed, then visit the official Python link: https://www… environment. This step will configure the python and pip executables in your shell path. Linux/macOS: we have now created a virtual environment named "kerasvenv". Move to the folder and type the command below… or higher; SciPy version 0.17.0 or higher; joblib 0.11 or higher. Now, we install scikit-learn using the command below: pip install -U scikit-learn. Seaborn: Seaborn…
98 pages | 1.57 MB | 1 year ago
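A quick post-installation sanity check, assuming the packages listed above were installed into the active virtual environment:

```python
# Verify that the environment described in the tutorial is usable.
import sys
import keras, scipy, sklearn, seaborn

print("Python :", sys.version.split()[0])
print("Keras  :", keras.__version__)
print("SciPy  :", scipy.__version__)
print("sklearn:", sklearn.__version__)
print("seaborn:", seaborn.__version__)
```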
《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review
…training epochs to do well on the given task. We will go into the details of how this works shortly. For now, let's assume that we have such a general model that works for natural-language inputs. Then, by definition… "too sweet" should have a very different representation, far from both of the former sentences. Now notice how such a model would be useful across many different tasks. If the representations generated… (apart from the paid service on Google Cloud). We will talk more about TPUs in Chapter 10. For now, you can follow our lead. You can also adapt the code to run on GPUs if you desire. We start by loading…
31 pages | 4.03 MB | 1 year ago
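As an illustration of sentence representations that place similar sentences close together, here is a hedged sketch using the Universal Sentence Encoder from TF Hub; the module URL, the example sentences, and the choice of encoder are our assumptions, not necessarily what the chapter uses.

```python
import numpy as np
import tensorflow_hub as hub  # assumes tensorflow and tensorflow_hub are installed

# Downloads the (assumed) pretrained encoder on first use.
encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

sentences = [
    "The cake was delicious.",      # similar meaning to the next sentence
    "The dessert tasted great.",
    "The cake was far too sweet.",  # should land farther from the first two
]
emb = np.asarray(encoder(sentences))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # unit vectors -> dot = cosine
print(np.round(emb @ emb.T, 2))  # pairwise cosine similarities
```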
Lecture 6: Support Vector Machine
…Given: training data {(x⁽ⁱ⁾, y⁽ⁱ⁾)}, i = 1, …, m. Goal: learn ω and b that achieve the maximum margin. For now, assume that the entire training set is correctly classified by (ω, b), i.e., zero loss on the training examples… ωᵀx + b = 0, where ω is the normal vector. The margin γ⁽ⁱ⁾ is the distance between x⁽ⁱ⁾ and the hyperplane. Now, the margin is signed: if y⁽ⁱ⁾ = 1 then γ⁽ⁱ⁾ ≥ 0; otherwise γ⁽ⁱ⁾ < 0. [Figure: a point x⁽ⁱ⁾ and the hyperplane ωᵀx + b = 0.] … Feature Mapping (Contd.): now map each example as x → {x, x²}. Each example now has two features ("derived" from the old representation). The data now become linearly separable in the new representation…
82 pages | 773.97 KB | 1 year ago
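A small sketch of the slide's feature mapping x → {x, x²}: 1-D data that is not linearly separable becomes separable after the mapping, which a linear SVM can confirm. The dataset here is an illustrative assumption, not from the lecture.

```python
import numpy as np
from sklearn.svm import LinearSVC

# 1-D data: class 1 inside (-1, 1), class 0 outside -- not linearly separable in x.
x = np.linspace(-3, 3, 200)
y = (np.abs(x) < 1).astype(int)

phi = np.column_stack([x, x ** 2])           # the map x -> {x, x^2}
clf = LinearSVC(C=10.0, max_iter=10000).fit(phi, y)
print(clf.score(phi, y))                     # ~1.0: separable in the mapped space
```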
PyTorch Release Notes
…that can be used seamlessly with your PyTorch code. A preview of Torch-TensorRT (1.4.0dev0) is now included; Torch-TRT is the TensorRT integration for PyTorch and brings the capabilities of TensorRT to PyTorch… installed by using a pip wheel on the nvidia-pyindex… (The same Torch-TensorRT preview note appears in the 23.05, 23.06, and 23.07 release entries.)
365 pages | 2.94 MB | 1 year ago
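A hedged sketch of compiling a module with the Torch-TensorRT preview mentioned above; the model, input shape, and precision are illustrative assumptions, and running it requires a CUDA GPU inside one of these containers.

```python
import torch
import torch_tensorrt  # shipped as a preview in these NGC containers

# Illustrative model; any traceable nn.Module works in principle.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
).eval().cuda()

# Compile to a TensorRT-backed module; shapes and precision are assumptions.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.half},
)
print(trt_model(torch.randn(1, 3, 224, 224, device="cuda")).shape)
```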
Lecture Notes on Support Vector Machine
…γ₀ = (ωᵀx₀ + b) / ∥ω∥  (4). γ₀ is the so-called margin of x₀ (with respect to the hyperplane ωᵀx + b = 0). Now, given a set of m training data {(x⁽ⁱ⁾, y⁽ⁱ⁾)}, i = 1, …, m, we first assume that they are linearly separable… 2.2 Preliminary Knowledge of Convex Optimization. 2.2.1 Optimization Problems and Lagrangian Duality. We now consider the following optimization problem:
  min over ω of f(ω)  (9)
  s.t. gᵢ(ω) ≤ 0, i = 1, …, k  (10)
…L(ω, α, β). Since f(ω̃) ≥ L(ω̃, α, β) ≥ inf over ω ∈ D of L(ω, α, β) = G(α, β) holds for all feasible ω̃, we now choose the minimizer of f(ω̃) over all feasible ω̃ to get p* ≥ G(α, β). It is shown by Theorem…
18 pages | 509.37 KB | 1 year ago
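A tiny numeric illustration of the signed margin in Eq. (4); the hyperplane and points below are made up for illustration and are not from the notes.

```python
import numpy as np

# Signed margin of each point with respect to the hyperplane w^T x + b = 0, Eq. (4).
def signed_margin(w, b, X):
    return (X @ w + b) / np.linalg.norm(w)

w, b = np.array([1.0, -2.0]), 0.5
X = np.array([[1.0, 1.0], [2.0, 0.0], [-1.0, 3.0]])
print(signed_margin(w, b, X))  # positive on one side of the hyperplane, negative on the other
```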
24 documents in total