TVM: Where Are We Going
Slide deck (credit: Siyuan Feng) on TVM's roadmap: a high-level data-flow graph and optimizations that directly generate optimized programs for new operator workloads and hardware, sitting between frameworks and hardware. Why automation is the future: a clear winner on emerging models, with TVM using TensorCores about 1.4x better than cuDNN with TensorCores on BatchMatMul from Transformer-related workloads. Where we are going: a unified runtime for heterogeneous devices (e.g. remote_mod["npufunction0"]; func(remote_a, remote_b)), and a virtual machine supporting dynamic workloads: dynamic-shape models, more runtime objects (arrays, tuples, trees, ADTs), and a minimum runtime for dynamic models.
31 pages | 22.64 MB | 6 months ago
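The "unified runtime for heterogeneous devices" bullet refers to looking up a compiled function on a remote device by name and invoking it on remote arrays. Below is a minimal sketch of that flow using TVM's RPC runtime; the server address, module file name, and array shapes are illustrative assumptions, and only the remote_mod["npufunction0"] lookup and func(remote_a, remote_b) call mirror the slide.

```python
# Minimal sketch (assumed setup, not from the slides): run a compiled kernel on a
# remote device through TVM's RPC runtime.
import numpy as np
import tvm
from tvm import rpc

remote = rpc.connect("192.168.0.42", 9090)          # RPC server on the target board (address is illustrative)
remote.upload("compiled_lib.tar")                    # ship the cross-compiled module to the device
remote_mod = remote.load_module("compiled_lib.tar")  # load it in the remote runtime

dev = remote.cpu(0)                                  # device handle that lives on the remote side
remote_a = tvm.nd.array(np.random.rand(1024).astype("float32"), dev)
remote_b = tvm.nd.array(np.zeros(1024, dtype="float32"), dev)

func = remote_mod["npufunction0"]                    # look up the exported function by name
func(remote_a, remote_b)                             # executes on the remote device
```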
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
A Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference through an innovative Transformer architecture. It is equipped with a total of 236B parameters, of which 21B are activated for each token. [Architecture figure: a stack of Transformer blocks, each combining RMS Norm, Multi-Head Latent Attention, and a DeepSeekMoE feed-forward network with shared and top-K routed experts.] 2. Architecture: By and large, DeepSeek-V2 is still in the Transformer architecture (Vaswani et al., 2017), where each Transformer block consists of an attention module and a Feed-Forward Network.
52 pages | 1.23 MB | 1 year ago
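The "236B total parameters, 21B activated per token" split is the hallmark of mixture-of-experts routing: each token is sent to only the few experts its router selects, so only a fraction of the weights participate in any forward pass. Below is a schematic NumPy sketch of generic top-k routing; it illustrates the mechanism only, not DeepSeek-V2's DeepSeekMoE or Multi-Head Latent Attention code, and all shapes, the softmax gating, and the ReLU experts are assumptions.

```python
# Schematic top-k expert routing (illustration only, not DeepSeek-V2's implementation).
import numpy as np

def moe_layer(x, experts, router_w, k=2):
    """x: (d,) token state; experts: list of (W_in, W_out); router_w: (d, n_experts)."""
    scores = x @ router_w                        # token-to-expert affinities
    topk = np.argsort(scores)[-k:]               # indices of the k best experts for this token
    gates = np.exp(scores[topk] - scores[topk].max())
    gates /= gates.sum()                         # normalize gates over the selected experts only
    out = np.zeros_like(x)
    for g, i in zip(gates, topk):
        w_in, w_out = experts[i]
        out += g * (np.maximum(x @ w_in, 0.0) @ w_out)   # only k experts' weights are touched
    return out

d, n_experts, hidden = 16, 8, 64
rng = np.random.default_rng(0)
experts = [(0.1 * rng.standard_normal((d, hidden)),
            0.1 * rng.standard_normal((hidden, d))) for _ in range(n_experts)]
router_w = 0.1 * rng.standard_normal((d, n_experts))
print(moe_layer(rng.standard_normal(d), experts, router_w).shape)   # (16,)
```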
Moonshot AI Introduction
(…cn/), released November 2, 2023; follow the official account for more updates. Company highlights: 1. The team has world-class talent density: a. Founder Yang Zhilin is the most-cited NLP researcher in China under 35 and first author of the important Transformer-XL and XLNet papers; co-founders Zhou Xinyu and Wu Yuxin each have 10,000+ Google Scholar citations. b. Team members span NLP, CV, and RL (reinforcement learning) … [their prior work] is an important component of most mainstream models such as LLaMA and Google PaLM; they invented group normalization, a key component behind the success of AI models such as Stable Diffusion; they invented Transformer-XL, the first attention-based language model in history to comprehensively surpass RNNs at both the word and character level, solving the key context-length problem in language modeling and setting a new standard for it; and in joint research with DeepMind and CMU they were the first to achieve efficient alignment methods whose few-shot performance approaches fully supervised learning. … can make the call and execute. A concrete example: Moonshot AI hopes to care about users more than OpenAI does, because Yang Zhilin judges that scaling up user data will ultimately outweigh the base model itself. Yang is also confident that the Transformer, as a probabilistic model, is a sound conceptual foundation on the way to AGI; in his words, "if you have a context length of 1 billion, none of the problems you see today are problems." AGI: AI is essentially a pile of scaling laws.
74 pages | 1.64 MB | 1 year ago
DeepSeek Illustrated (10-page PDF)
Table of contents excerpt: 2.1 LLM basic concepts (p. 5); 2.2 Transformer basic architecture (p. 6); 2.3 LLM basic training methods … Snippet: "b" stands for billion, so 7b means 7 billion and 8b means 8 billion; these figures are the total number of neural-network parameters (weights + biases) in the model. Today's large models are all based on the Transformer architecture, stacked as many Transformer layers followed by fully connected layers and so on, with all parameters adding up to 7 billion, 8 billion, or in some cases over a hundred billion. (Tutorial author: Guo Zhen, currently an AI PhD student in the US after eight years in industry.) … the more diverse [the data], the more general the model ends up being; even when noisy data is included, the model can still extract general knowledge through scaling laws. The Transformer architecture realizes scaling laws remarkably well and is the best network structure in natural language processing for doing so. 2.2 Transformer basic architecture: LLMs rely on the Transformer model proposed by Google in 2017, which offers higher training efficiency than traditional RNNs (recurrent neural networks) and LSTMs (long short-term memory networks) and …
11 pages | 2.64 MB | 8 months ago
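As a quick back-of-the-envelope check of what "7b" implies in practice (my own arithmetic, not a figure from the booklet): storing 7 billion weights at 2 bytes each (fp16) takes roughly 14 GB before counting activations or the KV cache.

```python
# Rough weight-memory estimate for "7b"/"8b" models (illustrative arithmetic only).
def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1e9

for name, n in (("7b", 7e9), ("8b", 8e9)):
    print(f"{name}: ~{weight_memory_gb(n, 2):.0f} GB in fp16, "
          f"~{weight_memory_gb(n, 1):.0f} GB in int8")
```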
2024 China Open Source Developers Report
… platforms such as MaaS (Model as a Service) and AaaS (Agent as a Service), letting developers assemble their own AI cloud-native applications like Lego bricks. 2. The compute layer should dig into customization and low power consumption, but hard-wiring the Transformer may not be the optimal answer. Although agents do not need very large models, their operating cost (model-inference compute) is still high; in the short term, compute and energy will remain the walls the large-model field keeps running into. According to report [1], energy consumption will be … by 2030 … produce targeted chips based on the characteristics of the models' underlying technology, above all to accelerate computation and reduce power consumption; that is the strongest competitive edge for future AI chips. So is "soldering" the Transformer onto the board the best solution? Not so fast: there is still a dispute over the underlying architecture of large models. We know that the Transformer architecture has O(n²) theoretical computational complexity, where n is the number of tokens in the model's input sequence, whereas its predecessor in the language-model role … Recently, RNN-like structures represented by Mamba and RWKV have come back to life and openly challenge the Transformer's position. The latest research [13] even shows theoretically that the expressive power of RNNs differs from that of Transformers only by in-context retrieval. With continued investment in this direction, we may well welcome a "new king" somewhere between RNN and Transformer. Hence the short-term themes of the compute layer remain "semi-generalization", "high compute", and "low power consumption".
111 pages | 11.44 MB | 8 months ago
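To make the O(n²)-versus-O(n) contrast concrete, the sketch below counts per-layer token interactions as the input grows; the numbers are an illustration of asymptotic growth only, not figures from the report, and constant factors are ignored.

```python
# Growth of self-attention's pairwise interactions vs. an RNN's sequential steps
# (illustrative counts only; constant factors and real kernels are ignored).
for n in (1_000, 10_000, 100_000):
    attention_pairs = n * n        # every token attends to every token: O(n^2)
    rnn_steps = n                  # one recurrence step per token: O(n)
    print(f"n={n:>7,}: attention ~{attention_pairs:.1e} pairs, RNN ~{rnn_steps:.1e} steps")
```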
Trends – Artificial Intelligence
By Jay Simons / Daegwon Chae / Alexander Krey. Context: We set out to compile foundational trends related to AI. A starting collection of several disparate datapoints turned into this beast. … trending is ramping materially faster … and the machines can outpace us. The pace and scope of change related to the artificial intelligence technology evolution is indeed unprecedented, as supported by … toward AI in efforts to drive growth and fend off attackers. And global competition – especially related to China and USA tech developments – is acute. The outline for our document is on the next page.
340 pages | 12.14 MB | 5 months ago
OctoML OSS, 2019-11-08
[Slide-diagram residue omitted: a high-level graph featuring Conv2D and optimized operators.] Transformer improvements: Transformer-based models such as BERT have recently become very popular and require first-class … BERT has many reshape operations, which are currently implemented … We want to add this form of view as a Relay intrinsic to enable highly fused and optimized Transformer models.
16 pages | 1.77 MB | 6 months ago
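For context on the reshape-heavy pattern the slide mentions, the sketch below builds the head-splitting reshape/transpose that BERT-style attention performs, using TVM's Relay Python API; the shapes are assumptions, and the proposed "view" intrinsic itself is not shown since the slide only describes it as planned work.

```python
# Minimal Relay sketch (assumed shapes) of the reshape/transpose pattern that
# BERT-style attention uses to split hidden states into heads.
import tvm
from tvm import relay

x = relay.var("x", shape=(8, 128, 768), dtype="float32")   # (batch, seq_len, hidden)
y = relay.reshape(x, newshape=(8, 128, 12, 64))            # split hidden into 12 heads of 64
z = relay.transpose(y, axes=(0, 2, 1, 3))                  # (batch, heads, seq_len, head_dim)
mod = tvm.IRModule.from_expr(relay.Function([x], z))
print(mod)   # without a zero-copy "view", each reshape may materialize a data copy
```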
01 Structure of Scientific Papers – Introduction to Scientific Writing WS2021/22
University of Technology, WS 2020/21. Related work – purpose of a "Related Work" section: not a mandatory task, nor a place to show you know the field; put your work in the context of related areas (~1 paragraph each); discuss closely related work; draw a crisp separation from existing work (what are the differences). Placement: Section 2, Section n-1, or throughout the paper. Give credit: cite broadly, give credit. Example references: Alireza Rezaei Mahdiraji, Ziawasch Abedjan, Tilmann Rabl, Volker Markl: Optimizing Machine Learning Workloads in Collaborative Environments. SIGMOD 2020. #7.2 Doris Xin, Litian Ma, Jialin Liu, Stephen Macke …
36 pages | 1.12 MB | 1 year ago
03 Experiments, Reproducibility, and Projects – Introduction to Scientific Writing WS2021/22
… JOB, MLPerf. #4 End-to-end applications: evaluate in the larger scope of real datasets and query workloads; examples: customer workloads, ML pipelines (data prep, training, eval). Experiments and result presentation: is the data representative of real data distributions? Representative of a variety of workloads / the common case? … other software. Baselines and configuration: use recent versions of baseline systems; describe data and workloads with data sizes, parameters, and configurations. …
31 pages | 1.38 MB | 1 year ago
OSChina (开源中国) 2023 Large Language Model (LLM) Technology Report
… code-generation tools, programming languages … LLM technical background: the Transformer architecture and the pretraining-plus-fine-tuning strategy are the core of LLM technology. With the growing availability of large-scale language datasets and computing power, researchers began designing ever larger neural networks to better capture the complexity of language. The introduction of GPT (Generative Pre-trained Transformer) marked the rapid take-off of LLM technology; its pretraining and fine-tuning approach … for language …
32 pages | 13.09 MB | 1 year ago