Dynamic Model in TVM
Presenter: Haichen Shen, Yao Wang (Amazon SageMaker Neo, Deep Engine Science, AWS AI). Support dynamic model in TVM: ● Support Any-dim in typing ● Use shape function to compute the type at runtime ● Virtual …
input_name = "data"
input_shape = [tvm.relay.Any(), 3, 224, 224]
dtype = "float32"
block = get_model('resnet50_v1', pretrained=True)
mod, params = relay.frontend.from_mxnet(block, shape={input_name: …
0 码力 | 24 pages | 417.46 KB | 5 months ago
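The last line of the snippet is cut off by the extraction. A minimal runnable sketch of how this relay.Any() example typically continues, assuming GluonCV's get_model and a recent TVM build (the Relay VM is the usual executor for dynamic shapes; the exact continuation in the deck is not shown here):

```python
import tvm
from tvm import relay
from gluoncv.model_zoo import get_model  # assumed source of get_model()

input_name = "data"
# relay.Any() keeps the batch dimension symbolic, so one compiled module
# can serve inputs with different batch sizes at runtime.
input_shape = [relay.Any(), 3, 224, 224]
dtype = "float32"

block = get_model("resnet50_v1", pretrained=True)
mod, params = relay.frontend.from_mxnet(
    block, shape={input_name: input_shape}, dtype=dtype
)

# Dynamic-shape graphs go through the Relay VM rather than the static
# graph executor.
with tvm.transform.PassContext(opt_level=3):
    vm_exec = relay.vm.compile(mod, target="llvm", params=params)
```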

DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
DeepSeek-AI (research@deepseek.com). Abstract: We present DeepSeek-V2, a strong Mixture-of-Experts (MoE) language model characterized by economical training … DeepSeek-V2 and its chat versions still achieve top-tier performance among open-source models. The model checkpoints are available at https://github.com/deepseek-ai/Deep… From the contents: … Work 21; A Contributions and Acknowledgments 27; B DeepSeek-V2-Lite: A 16B Model Equipped with MLA and DeepSeekMoE 29; B.1 Model Description …
0 码力 | 52 pages | 1.23 MB | 1 year ago

Google 《Prompt Engineering v7》
From the contents: … writing styles 59; For few-shot prompting with classification tasks, mix up the classes 59; Adapt to model updates 60; Experiment with output formats 60; JSON Repair 61; Working with Schemas 62; Experiment … When thinking about a large language model input and output, a text prompt (sometimes accompanied by other modalities such as image prompts) is the input the model uses to predict a specific output. … can be complicated. Many aspects of your prompt affect its efficacy: the model you use, the model’s training data, the model configurations, your word-choice, style and tone, structure, and context …
0 码力 | 68 pages | 6.50 MB | 6 months ago
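To make the listed tips concrete, here is an illustrative few-shot classification prompt assembled in Python, mixing up the class order across examples and asking for JSON output; the prompt text, labels, and schema are invented for this sketch and do not come from the guide:

```python
import json

# Few-shot examples with the classes deliberately shuffled (POSITIVE /
# NEGATIVE / NEUTRAL do not appear in a fixed order), following the
# guide's tip for classification tasks.
FEW_SHOT_EXAMPLES = [
    ("The battery died after two days.", "NEGATIVE"),
    ("Does exactly what it says, no complaints.", "POSITIVE"),
    ("It arrived on time. Haven't tried it yet.", "NEUTRAL"),
    ("Best purchase I've made this year!", "POSITIVE"),
]

def build_prompt(review: str) -> str:
    """Assemble a few-shot prompt that asks for a strict JSON answer."""
    lines = [
        "Classify the sentiment of a product review.",
        'Answer with JSON: {"sentiment": "POSITIVE" | "NEGATIVE" | "NEUTRAL"}',
        "",
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {text}")
        lines.append(json.dumps({"sentiment": label}))
        lines.append("")
    lines.append(f"Review: {review}")
    return "\n".join(lines)

print(build_prompt("The strap broke but support sent a free replacement."))
```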

Trends Artificial Intelligence
Change Happening Faster Than Ever? Yes, It Is • AI User + Usage + CapEx Growth = Unprecedented • AI Model Compute Costs High / Rising + Inference Costs Per Token Falling = Performance Converging + Developer … [chart residue: USA LLM #1, USA LLM #2, and China, 2/24–4/25; details on page 293]
0 码力 | 340 pages | 12.14 MB | 4 months ago

OpenAI 《A practical guide to building agents》
… design foundations: in its most fundamental form, an agent consists of three core components: 01 Model, the LLM powering the agent’s reasoning and decision-making; 02 Tools, external functions or APIs the … the workflow. Not every task requires the smartest model: a simple retrieval or intent classification task may be handled by a smaller, faster model, while harder tasks like deciding whether to approve a refund may benefit from a more capable model. An approach that works well is to build your agent prototype with the most capable model for every task to establish a performance baseline. From there …
0 码力 | 34 pages | 7.00 MB | 5 months ago
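As a rough illustration of the model / tools split described in the snippet (not the guide's own SDK; every name below is invented for the sketch), a minimal agent skeleton could look like this:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

# Hypothetical structure mirroring the components named in the snippet:
# a model choice, instructions, and a set of callable tools.
@dataclass
class Agent:
    model: str                                  # e.g. a small model for routing, a larger one for approvals
    instructions: str                           # system prompt / guardrails
    tools: Dict[str, Callable[..., str]] = field(default_factory=dict)

    def call_tool(self, name: str, **kwargs) -> str:
        """Dispatch to a registered tool by name."""
        return self.tools[name](**kwargs)

def lookup_order(order_id: str) -> str:
    # Placeholder tool: a real agent would call an order-management API here.
    return f"Order {order_id}: delivered"

triage_agent = Agent(
    model="small-fast-model",   # cheaper model for retrieval / intent classification
    instructions="Classify the request and fetch order status when asked.",
    tools={"lookup_order": lookup_order},
)

print(triage_agent.call_tool("lookup_order", order_id="A-1001"))
```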

OpenAI - AI in the Enterprise
They started with three model evals: 01 Language translation, measuring the accuracy and quality of translations produced by a model; 02 Summarization, evaluating how a model condenses information, using … resilient to change. Evals are built around tasks that measure the quality of the output of a model against a benchmark: is it more accurate? More compliant? Safer? Your key metrics will depend on … more tokens. To increase efficiency, OpenAI and Indeed worked together to fine-tune a smaller GPT model that was able to deliver similar results with 60% fewer tokens. Helping job seekers find the …
0 码力 | 25 pages | 9.48 MB | 5 months ago
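The snippet describes evals as tasks that score model output against a benchmark. A toy illustration of that idea (my own sketch, not code from the report) is an exact-match grader over labeled cases:

```python
from typing import Callable, List, Tuple

def run_eval(model_fn: Callable[[str], str],
             cases: List[Tuple[str, str]]) -> float:
    """Score a model against a benchmark of (input, expected) pairs.

    Real eval suites use richer graders (rubrics, LLM judges, compliance
    checks); exact match keeps this example self-contained.
    """
    passed = sum(1 for prompt, expected in cases
                 if model_fn(prompt).strip().lower() == expected.lower())
    return passed / len(cases)

# Stand-in "model": returns canned answers so the example runs offline.
def dummy_model(prompt: str) -> str:
    return "bonjour" if "hello" in prompt.lower() else "unknown"

benchmark = [("Translate to French: hello", "bonjour"),
             ("Translate to French: goodbye", "au revoir")]

print(f"accuracy = {run_eval(dummy_model, benchmark):.2f}")  # 0.50
```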

XDNN TVM - Nov 2019
Inference Flow: MxNet, CPU Layers, FPGA Layers, Runtime, Image, Model Weights, Calibration Set, Quantizer, Compiler, Tensor Graph Optimization Framework, Tensor Graph to … ins, outs: tvm.call_packed('tvm.accel.accel_fused', attrs['path'], attrs['output_layout'], attrs['model_name'], outs[0], *ins), name=name) return out … Example of FPGA node. Performance Pipelines, references to our latest results: https://github.com/Xilinx/AI-Model-Zoo (embedded, i.e. ZC104/Ultra96), https://github.com/Xilinx/ml-suite/blob/master/examples/caffe/Benchmark_README
0 码力 | 16 pages | 3.35 MB | 5 months ago
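The code fragment above is the tail of an extern tensor definition that hands a fused subgraph to the FPGA runtime through a packed function. A hedged reconstruction of the surrounding boilerplate, written against the 2019-era tvm.extern / tvm.call_packed API that the fragment uses (in current TVM these live under tvm.te.extern and tvm.tir.call_packed), with the attrs dict and shapes invented for the sketch:

```python
import tvm  # TVM 0.6-era namespaces, matching the fragment in the deck

def fpga_fused_node(attrs, in_shapes, out_shape, name="accel_fused"):
    """Wrap an accelerator-executed subgraph as an external TVM op."""
    ins = [tvm.placeholder(s, name=f"in{i}") for i, s in enumerate(in_shapes)]
    out = tvm.extern(
        out_shape, ins,
        # 'tvm.accel.accel_fused' must be registered as a global packed
        # function by the accelerator runtime; it receives the compiled
        # model path, output layout, model name, the output buffer, and
        # every input buffer.
        lambda ins, outs: tvm.call_packed(
            "tvm.accel.accel_fused",
            attrs["path"], attrs["output_layout"], attrs["model_name"],
            outs[0], *ins),
        name=name)
    return out

# Hypothetical usage: one 224x224 image in, a 1000-class score vector out.
attrs = {"path": "./compiler.json", "output_layout": "NCHW",
         "model_name": "resnet50"}
out = fpga_fused_node(attrs, [(1, 3, 224, 224)], (1, 1000))
```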

Facebook -- TVM AWS Meetup Talk
… methods not delivering generalized performance. Why TVM? TVM for Speech Synthesis: WaveRNN-style model architecture; autoregressive sampling net running at faster than real-time; compute split between … First PyTorch model used a 3,400us sampling net runtime (image from LPCNet). "Exit, Pursued By A Bear": 3,400us (baseline), 40us (target), an 85x speedup needed ("uh oh"). "Enter, TVM and model co-design": PyTorch WaveRNN, Sparse Transformers, etc.; reduce precision with int8/float16, very helpful to maintain the model in core-private L1 dcaches; use rational approximations for transcendentals (exp, tanh, erf, etc.).
0 码力 | 11 pages | 3.08 MB | 5 months ago
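As a concrete instance of the last bullet, a rational approximation can replace a libm tanh call inside a hot sampling loop. The polynomial below is a textbook Padé-style approximant chosen for this sketch, not necessarily the one used in the talk:

```python
import numpy as np

def tanh_pade(x: np.ndarray) -> np.ndarray:
    """Rational approximation of tanh: x*(15 + x^2) / (15 + 6*x^2).

    Matches the Taylor series of tanh through the x^5 term and uses only
    multiplies and one divide, avoiding a transcendental call in the
    inner loop. Accuracy degrades for |x| >~ 2, so clamp inputs if needed.
    """
    x2 = x * x
    return x * (15.0 + x2) / (15.0 + 6.0 * x2)

x = np.linspace(-1.5, 1.5, 7)
print(np.max(np.abs(tanh_pade(x) - np.tanh(x))))  # worst-case error on this range
```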

TVM@Alibaba AI Labs
Alibaba AI Labs (阿里巴巴人工智能实验室). int8 compute: int32 = int16 + int16 x int8 … [benchmark chart: CPU MTK8167S (ARM32 A35, 1.5 GHz), model MobileNetV2_1.0_224, vs. PowerVR GPU] PowerVR support by TVM: NNVM Compiler (execution graph, model layers functions), Computation Graph Optimizations (param …), TVM Tensor Operators (algorithm & schedule), backends: CUDA TOPI, Mali TOPI, ROCM TOPI, PVR TOPI; Machine Learning Automated Optimizer (schedule explorer, cost model). PVR TOPI: TOPI for PVR, including what …
0 码力 | 12 pages | 1.94 MB | 5 months ago
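The "Machine Learning Automated Optimizer / schedule explorer / cost model" boxes correspond to TVM's AutoTVM flow. A generic sketch of that flow, using an llvm CPU target so it runs on a stock build (the deck targets a PowerVR GPU backend) and a stand-in workload from relay.testing rather than the deck's model:

```python
import tvm
from tvm import autotvm, relay
from tvm.relay import testing

# Stand-in workload; the deck benchmarks MobileNetV2_1.0_224.
mod, params = testing.mobilenet.get_workload(batch_size=1)
target = "llvm"

# Extract tunable tasks (conv2d, dense, ...) from the Relay program.
tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params)

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=10, timeout=5),
)

for task in tasks:
    # XGBTuner is the cost-model-driven schedule explorer: it fits a
    # gradient-boosted model to measured runtimes and proposes configs.
    tuner = autotvm.tuner.XGBTuner(task)
    tuner.tune(n_trial=200, measure_option=measure_option,
               callbacks=[autotvm.callback.log_to_file("tuning.log")])

# Build with the best schedules found during tuning.
with autotvm.apply_history_best("tuning.log"):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)
```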

TVM@AliOS
… accelerated NLU model @ 2018.10 … 2019.4 … 2019.8. AliOS (驱动万物智能, "drive intelligence into everything") @ Yunqi Conf: AR-Nav product show; Lanenet model 1.6X (Intel); AliOS TVM arch; Facelandmark model. AliOS TVM @ Hexagon DSP: libtvm_hexagon_runtime.so to support parallel execution; could run an end-to-end TFLite MobileNet V2 quantized model on simulator / device; performance is our focus next …
0 码力 | 27 pages | 4.86 MB | 5 months ago
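The last point, running a quantized TFLite MobileNet V2 end to end, maps to TVM's standard TFLite front end. A generic sketch of that import path, built for CPU for simplicity rather than the Hexagon runtime in the deck; the model file name and input name are assumptions based on the commonly hosted pre-quantized model:

```python
import numpy as np
import tflite                    # flatbuffer bindings: pip install tflite
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Load the pre-quantized TFLite flatbuffer (path is an assumption).
with open("mobilenet_v2_1.0_224_quant.tflite", "rb") as f:
    tflite_model = tflite.Model.GetRootAsModel(f.read(), 0)

input_name = "input"
shape_dict = {input_name: (1, 224, 224, 3)}
dtype_dict = {input_name: "uint8"}   # the quantized model expects uint8 input

# The TFLite front end lowers quantized ops to Relay QNN operators.
mod, params = relay.frontend.from_tflite(
    tflite_model, shape_dict=shape_dict, dtype_dict=dtype_dict
)

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

module = graph_executor.GraphModule(lib["default"](tvm.cpu()))
module.set_input(input_name, np.zeros((1, 224, 224, 3), dtype="uint8"))
module.run()
print(module.get_output(0).numpy().shape)
```

On a Hexagon build, the same Relay module would instead be compiled for a Hexagon target and executed through the Hexagon runtime rather than the CPU executor.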
22 results in total