TVM: Where Are We Going (31 pages | 22.64 MB | 6 months ago)
High-level data flow graph and optimizations; directly generate optimized programs for new operator workloads and hardware (diagram labels: frameworks, hardware). Why automation is the future: a clear winner on emerging models. For BatchMatMul in Transformer-related workloads, TVM with TensorCores is about 1.4x better than cuDNN with TensorCores (credit: Siyuan Feng). Where we are going: a unified runtime for heterogeneous devices (e.g. remote_mod["npufunction0"]; func(remote_a, remote_b)), and a virtual machine supporting dynamic workloads: dynamic shape workloads, more runtime objects (arrays, tuples, trees, ADTs), and a minimal runtime for dynamic models.

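To make the "unified runtime for heterogeneous devices" item above concrete, here is a minimal sketch of running a compiled kernel on a remote device through TVM's RPC runtime, which is the style of usage the remote_mod["npufunction0"] snippet hints at. The host address, port, library file name, and kernel name are placeholders for illustration, not values from the deck, and the example assumes an RPC server is already running on the device.

```python
# A minimal sketch of remote execution via TVM's RPC runtime (assumed setup:
# an RPC server running on the target device and a cross-compiled "mylib.so").
import numpy as np
import tvm
from tvm import rpc

# Connect to a device (e.g. a phone or NPU board) running an RPC server.
remote = rpc.connect("192.168.1.42", 9090)      # assumed host/port

# Upload a library cross-compiled for that device and load it remotely.
remote.upload("mylib.so")                        # assumed artifact name
remote_mod = remote.load_module("mylib.so")

# Look up a compiled kernel by name and run it on remote memory.
func = remote_mod["npufunction0"]                # assumed kernel name
dev = remote.cpu(0)                              # device handle on the remote side
remote_a = tvm.nd.array(np.random.rand(1024).astype("float32"), dev)
remote_b = tvm.nd.array(np.zeros(1024, dtype="float32"), dev)
func(remote_a, remote_b)                         # executes on the remote device
```
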
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model (52 pages | 1.23 MB | 1 year ago)
A Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference through an innovative Transformer architecture. It is equipped with a total of 236B parameters, of which 21B are activated for each token. [Architecture figure: stacked Transformer blocks with RMS Norm, Multi-Head Latent Attention, and a DeepSeekMoE feed-forward layer that routes each token to top-K experts.] 2. Architecture: By and large, DeepSeek-V2 is still in the Transformer architecture (Vaswani et al., 2017), where each Transformer block consists of an attention module and a feed-forward network.

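To make the "top-K routed experts" idea in the architecture excerpt concrete, below is a small, generic PyTorch sketch of MoE routing. It is not DeepSeek's implementation (which adds shared experts, latent attention, and load-balancing terms); the dimensions, number of experts, and K are toy values chosen for illustration.

```python
# Generic top-K expert routing: each token is sent to its K highest-scoring
# experts and the expert outputs are mixed with softmax-normalized weights.
import torch
import torch.nn.functional as F

def moe_forward(x, gate_w, experts, k=2):
    """x: [tokens, d_model]; gate_w: [d_model, n_experts]; experts: list of nn.Linear."""
    scores = x @ gate_w                       # router logits, shape [tokens, n_experts]
    topk_scores, topk_idx = scores.topk(k, dim=-1)
    weights = F.softmax(topk_scores, dim=-1)  # normalize over the K selected experts
    out = torch.zeros_like(x)
    for slot in range(k):                     # for each of the K routing slots
        for e, expert in enumerate(experts):
            mask = topk_idx[:, slot] == e     # tokens whose slot-th choice is expert e
            if mask.any():
                out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
    return out

d_model, n_experts = 16, 4
experts = [torch.nn.Linear(d_model, d_model) for _ in range(n_experts)]
gate_w = torch.randn(d_model, n_experts)
tokens = torch.randn(8, d_model)
print(moe_forward(tokens, gate_w, experts, k=2).shape)  # torch.Size([8, 16])
```

Only the selected experts' parameters are exercised per token, which is how a model with 236B total parameters can activate only 21B of them for each token.
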
DeepSeek图解10页PDF (11 pages | 2.64 MB | 8 months ago)
[Table-of-contents fragment: 2.1 LLM basic concepts, p. 5; 2.2 The Transformer architecture, p. 6; 2.3 Basic LLM training methods ...] The "b" stands for billion: 7b means 7 billion and 8b means 8 billion, referring to the total count of the model's neural-network parameters (weights plus biases). Today's large models are all based on the Transformer architecture, stacking many Transformer layers followed by fully connected and other layers; the parameters add up to 7 billion, 8 billion, or for some models over a hundred billion. ... The more diverse the data, the more general the resulting model becomes; even with noisy data, the model can still extract general knowledge through scaling laws, and the Transformer architecture realizes these scaling laws exceptionally well, making it the best network structure for scaling in natural language processing. 2.2 The Transformer architecture: LLMs rely on the Transformer model proposed by Google in 2017, which offers higher training efficiency than traditional RNNs (recurrent neural networks) and LSTMs (long short-term memory networks).

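A quick back-of-the-envelope illustration of what those parameter counts mean in practice. The bytes-per-parameter figures are common precision choices, not numbers taken from the PDF.

```python
# Rough memory needed just to store the weights of a "7b" or "8b" model
# at common numeric precisions (illustrative; not from the PDF).
params = {"7b": 7e9, "8b": 8e9}
bytes_per_param = {"fp32": 4.0, "fp16/bf16": 2.0, "int8": 1.0, "int4": 0.5}

for name, n in params.items():
    sizes = ", ".join(f"{prec}: {n * b / 1e9:.1f} GB" for prec, b in bytes_per_param.items())
    print(f"{name}: {sizes}")
# A 7b model at fp16 is roughly 14 GB of weights alone, which is why
# quantization (int8/int4) matters for running such models locally.
```
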
Trends Artificial Intelligence (340 pages | 12.14 MB | 5 months ago)
Jay Simons / Daegwon Chae / Alexander Krey. Context: We set out to compile foundational trends related to AI. A starting collection of several disparate datapoints turned into this beast. ... As soon as ... trending is ramping materially faster ... and the machines can outpace us. The pace and scope of change related to the artificial intelligence technology evolution is indeed unprecedented, as supported by the ... toward AI in efforts to drive growth and fend off attackers. And global competition – especially related to China and USA tech developments – is acute. The outline for our document is on the next page.

OctoML OSS 2019 11 8 (16 pages | 1.77 MB | 6 months ago)
[Slide diagram residue: a high-level graph with Conv2D being optimized.] Transformer improvements: Transformer-based models such as BERT have recently become very popular and require first-class support. BERT has many reshape operations, which are currently implemented ...; instead, we want to add this form of view as a Relay intrinsic to enable highly fused and optimized transformer models.

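For context on the "many reshape operations" point, here is a small, hypothetical Relay snippet with the kind of BERT-style head-splitting reshapes the deck refers to. The shapes are toy values, and whether each reshape becomes a data copy or a zero-cost view is up to the compiler, which is exactly what the proposed view intrinsic is meant to address.

```python
# An illustrative Relay graph with the reshape-heavy pattern typical of
# multi-head attention in BERT-like models (toy shapes, not from the deck).
import tvm
from tvm import relay

batch, seq, heads, head_dim = 2, 128, 4, 16
x = relay.var("x", shape=(batch, seq, heads * head_dim), dtype="float32")
w = relay.var("w", shape=(heads * head_dim, heads * head_dim), dtype="float32")

# Flatten to 2-D for the projection, then split back into heads:
flat = relay.reshape(x, (batch * seq, heads * head_dim))
proj = relay.nn.dense(flat, w)
split_heads = relay.reshape(proj, (batch, seq, heads, head_dim))

mod = tvm.IRModule.from_expr(relay.Function([x, w], split_heads))
print(mod)  # the printed IR shows the reshape ops the runtime must handle
```
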
开源中国 2023 大模型(LLM)技术报告 (32 pages | 13.09 MB | 1 year ago)
Topics include code generation tools and programming languages. LLM technical background: the Transformer architecture and the pretrain-then-finetune strategy are the core of LLM technology. As large-scale language datasets became available and computing power grew, researchers began designing ever-larger neural networks to better capture the complexity of language. The introduction of GPT (Generative Pre-trained Transformer) marked the rapid advance of LLM technology; its pretraining and finetuning approach ... for language ...

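As a concrete illustration of the "pretraining" half of that pretrain-then-finetune recipe, below is a compact, generic PyTorch sketch of the next-token-prediction objective GPT-style models are trained on. The tiny model and random batch are stand-ins for illustration, not anything described in the report.

```python
# Next-token prediction: shift the sequence by one and minimize cross-entropy
# under a causal mask. Pretraining is this loop run at enormous scale.
import torch
import torch.nn as nn

vocab, d_model, seq = 1000, 64, 32
emb = nn.Embedding(vocab, d_model)
block = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
head = nn.Linear(d_model, vocab)

tokens = torch.randint(0, vocab, (8, seq))        # a toy batch of token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # predict position t+1 from <= t
causal = nn.Transformer.generate_square_subsequent_mask(seq - 1)

hidden = block(emb(inputs), src_mask=causal)      # causal self-attention
logits = head(hidden)                             # [batch, seq-1, vocab]
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
loss.backward()                                   # gradients for one pretraining step
print(float(loss))
```

Finetuning reuses the same objective (or a supervised/preference variant) on a much smaller, task-specific dataset starting from the pretrained weights.
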
Facebook -- TVM AWS Meetup Talk (11 pages | 3.08 MB | 6 months ago)
... Performance matters a lot; heterogeneous computing environment; high variety of workloads; ever-increasing set of primitives (over 500 ATen kernels); interpreter methods not delivering ... Transcendentals (exp, tanh, erf, etc.): a very general technique that allows clean vectorization; related work in Gibiansky (2017), Gray (2019), et al. (image from OpenAI). Add relay.nn.sparse_dense for block-sparse ...

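The "block-sparse" bullet refers to weight matrices stored block-wise so kernels only touch the non-zero blocks. Below is a small NumPy/SciPy sketch of that layout (block compressed sparse row, BSR) and a check that it reproduces the dense matmul; the shapes and sparsity level are toy values, and this only illustrates the data format behind relay.nn.sparse_dense rather than TVM code.

```python
# Block-sparse weights in BSR form: only non-zero 16x16 blocks are stored,
# so a matmul over them skips the zeroed blocks entirely.
import numpy as np
from scipy.sparse import bsr_matrix

out_dim, in_dim, block = 64, 64, 16
dense_w = np.random.randn(out_dim, in_dim).astype("float32")

# Zero out roughly three quarters of the 16x16 blocks to make it block-sparse.
mask = np.random.rand(out_dim // block, in_dim // block) < 0.25
dense_w *= np.kron(mask, np.ones((block, block), dtype="float32"))

w_bsr = bsr_matrix(dense_w, blocksize=(block, block))
x = np.random.randn(8, in_dim).astype("float32")

y_dense = x @ dense_w.T            # standard dense matmul
y_sparse = (w_bsr @ x.T).T         # same result using only the stored blocks
print(np.allclose(y_dense, y_sparse, atol=1e-4))  # should print True
```
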
普通人学AI指南 (42 pages | 8.39 MB | 8 months ago)
... tools, many of which are open source! 2.1 Q&A. 2.1.1 ChatGPT: ChatGPT is a large language model developed by OpenAI, built on the GPT (Generative Pre-trained Transformer) architecture. By analyzing large amounts of text data, the model learns language structure and information, enabling it to generate coherent text, answer questions, write articles, hold conversations, and more. [Figure 3: the ChatGPT Q&A tool] ChatGPT has been specially ...

【周鸿祎清华演讲】DeepSeek给我们带来的创业机会-360周鸿祎-202502 (76 pages | 5.02 MB | 6 months ago)
The 2024 Nobel Prize in Chemistry went to two AI experts who developed AlphaFold; in the future, all scientific research will revolve around AI. How protein research used to be done: X-ray crystallography, nuclear magnetic resonance, and cryo-electron microscopy; relying on visual inspection, it took years to resolve a single complex protein structure, and half a century of work yielded just over 200,000 structures. AlphaFold: using the Transformer's predictive power to predict a protein's 3D structure directly from its amino-acid sequence, cutting the time from years to minutes, cracking a biological code, and successfully predicting the structures of the roughly 200 million proteins found on Earth.

TVM@AliOS (27 pages | 4.86 MB | 6 months ago)
... intelligence. AliOS runs in vehicles, phones, pads, and IoT terminals, providing coordinated cloud-and-device solutions for different terminals, and helps traditional car makers ... (AliOS internet cars: jointly building intelligent connected vehicles and a future mobility ecosystem.)

12 documents in total.













