Go on GPU
Changkun Ou. 2023. Go on GPU. GopherChina 2023, session "Foundational Toolchains". changkun.de/s/gogpu. June 10, 2023.
Agenda:
● Basic knowledge for interacting with GPUs
  ○ Motivation
  ○ GPU Driver and Standards
  ○ Render and …
● Accelerate Go programs using GPUs
● Challenges in Go when using GPUs, and outlooks
57 pages | 4.62 MB | 1 year ago
Bridging the Gap: Writing Portable Programs for CPU and GPU using CUDA
Thomas Mejstrik, Sebastian Woblistin
Contents: 1. Motivation (audience etc., CUDA crash course, quiz time); 2. Patterns (old-school …); recurring outline: Motivation, Patterns, The dark path, CUDA proposal, Thank you.
Why write programs for CPU and GPU? CPUs and GPUs differ, so algorithms are designed differently: latency vs. throughput, memory bandwidth, number of cores. Why it makes sense: library/framework developers, embarrassingly parallel algorithms, user …
124 pages | 4.10 MB | 6 months ago
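A minimal sketch of the single-source CPU/GPU pattern the talk's title points at, assuming CUDA's `__host__ __device__` qualifiers; the saxpy example and all names here are illustrative, not taken from the slides:

```cpp
#include <cuda_runtime.h>
#include <vector>
#include <cstdio>

// One element-wise function compiled for both targets: callable from
// host (CPU) code and from device (GPU) kernels.
__host__ __device__ float saxpy_element(float a, float x, float y) {
    return a * x + y;
}

// GPU path: one thread per element (throughput-oriented).
__global__ void saxpy_kernel(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = saxpy_element(a, x[i], y[i]);
}

// CPU path: a plain loop reusing the same element function (latency-oriented).
void saxpy_cpu(int n, float a, const float* x, float* y) {
    for (int i = 0; i < n; ++i) y[i] = saxpy_element(a, x[i], y[i]);
}

int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);

    saxpy_cpu(n, 2.0f, x.data(), y.data());             // CPU version

    float *dx, *dy;                                     // GPU version of the same math
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    saxpy_kernel<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaDeviceSynchronize();
    cudaFree(dx);
    cudaFree(dy);
    std::printf("done\n");
}
```

Keeping the shared logic in `__host__ __device__` helpers is the simplest portable pattern; launch configuration and memory management still differ per target.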
FFmpeg hardware acceleration and optimization on Intel GPUs (FFmpeg在Intel GPU上的硬件加速与优化)
Zhao Jun (赵军), DCG/NPG @ Intel
Introducing FFmpeg VAAPI:
• Media pipeline review
• What FFmpeg VAAPI is
• Why we need FFmpeg VAAPI
• Current status
• Further plans
• Appendix
A typical media pipeline: file / device / network / stream …
Drivers: radeon, nouveau (?), freedreno, …; deprecated API bridges: vdpau–va bridge, powervr–va bridge, …
Intel GPU overview — Gfx labels: Gen3: Pinetrail (Pineview); Gen4: G965; Gen5: G4X, Ironlake (Piketon …); … Kabylake …
Intel® Processor Graphics: 3D rendering (OpenGL & Vulkan), media, display and compute (CUDA & OpenCL)
Intel GPU media hardware programming model: slices, ring buffer, FFmpeg, MSDK, i965/iHD drivers, OS scheduler, KMD, batch …
26 pages | 964.83 KB | 1 year ago
Activation Functions and GPU Acceleration (激活函数与GPU加速)
Lecturer: 龙良曲 (Long Liangqu)
Covers Leaky ReLU, SELU, and softplus, and GPU acceleration; next lesson: testing. Thank you.
11 pages | 452.22 KB | 1 year ago
Heterogeneous Modern C++ with SYCL 2020
Speaker links: http://wongmichael.com/about; C++11 book in Chinese: https://www.amazon.cn/dp/B00ETOV2OQ. "We build GPU compilers for some of the most powerful supercomputers in the world." Nevin ":-)" Liber, nliber@anl… Licensed under the Creative Commons Attribution 4.0 International License.
SYCL: single-source C++ parallel programming. [Diagram: standard C++ application code, C++ libraries, and ML frameworks run through SYCL onto CPUs, GPUs, FPGAs, DSPs, AI/tensor hardware, custom hardware, and other backends, claimed to give better performance on complex apps and libs than hand-coding.] SYCL 2020 is here! Open standard for …
114 pages | 7.94 MB | 6 months ago
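What "single-source" means in practice: the host code and the device kernel live in the same C++ translation unit. A minimal sketch assuming a SYCL 2020 compiler (e.g. DPC++ or AdaptiveCpp) and unified shared memory support on the selected device; this is not code from the slides:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    // One queue targets whatever device the runtime selects (CPU, GPU, ...).
    sycl::queue q{sycl::default_selector_v};

    constexpr size_t n = 1024;
    // Unified shared memory: the same pointer is usable on host and device.
    int* data = sycl::malloc_shared<int>(n, q);

    // The kernel is an ordinary C++ lambda in the same source file.
    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        data[i] = static_cast<int>(i[0]) * 2;
    }).wait();

    std::cout << "data[42] = " << data[42] << '\n';  // read back on the host
    sycl::free(data, q);
}
```

Because the kernel is plain C++, the same source can be retargeted to the different backends listed on the slide without hand-written per-device kernels.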
TVM@Alibaba AI Labs
Alibaba AI Labs (阿里巴巴人工智能实验室) & TVM
Contents — Part 1: ARM32 CPU; Part 2: HIFI4 DSP; Part 3: PowerVR GPU
ARM32 CPU: resolution, quantization, kernel optimization, AliOS, TVM
HIFI4 DSP
PowerVR GPU: PowerVR support by TVM — NNVM compiler: execution graph, model layer functions …
12 pages | 1.94 MB | 5 months ago
PyTorch Release Notes
The Deep Learning SDK accelerates widely used deep learning frameworks such as PyTorch. PyTorch is a GPU-accelerated tensor computational framework with a Python front end. Functionality can be easily extended …; standard defined neural network layers, deep learning optimizers, data loading utilities, and multi-GPU and multi-node support; functions are executed immediately instead of being enqueued in a static graph. …
‣ See the Preparing to use NVIDIA Containers Getting Started Guide.
‣ For non-DGX users, see the NVIDIA® GPU Cloud™ (NGC) container registry installation documentation for your platform.
‣ Ensure that …
365 pages | 2.94 MB | 1 year ago
POCOAS in C++: A Portable Abstraction for Distributed Data Structures
[Diagram: CPU and GPU connected over a PCI bus (or other fabric) and a NIC; the GPU is the faster compute engine.]
GPUs as a first-class computing resource: historically, network communication was CPU-centric.
1) Direct GPU access to InfiniBand allows GPU-to-GPU network transfers.
2) Fast in-node fabrics like NVLink and Infinity Fabric allow very fast intra-node transfers.
128 pages | 2.03 MB | 6 months ago
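For context on point (1), a CUDA-aware MPI build lets a device pointer be handed straight to the communication layer, so the transfer can go NIC-to-GPU via GPUDirect RDMA without staging through host memory. A rough sketch under that assumption — illustrative only, not the PGAS abstraction the talk itself proposes:

```cpp
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;
    float* d_buf = nullptr;
    cudaMalloc(&d_buf, n * sizeof(float));   // buffer lives in GPU memory
    cudaMemset(d_buf, 0, n * sizeof(float));

    // With CUDA-aware MPI the device pointer is passed directly; the MPI
    // library (and GPUDirect RDMA, when available) handles the GPU-to-GPU path.
    if (rank == 0) {
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
}
```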
动手学深度学习 v2.0 (Dive into Deep Learning v2.0)
Table of contents (excerpt):
5.5.2 加载和保存模型参数 (Loading and saving model parameters) — p. 209
5.6 GPU — p. 211
5.6.2 张量与GPU (Tensors and GPUs) — p. 213
5.6.3 神经网络与GPU (Neural networks and GPUs)
12.3.1 基于GPU的并行计算 (GPU-based parallel computing) — p. 512
12.3.2 并行计算与通信 (Parallel computing and communication) — p. …
797 pages | 29.45 MB | 1 year ago
Taro: Task Graph-based Asynchronous Programming Using C++ Coroutine
Existing TGPSs on heterogeneous computing — the challenge. [Task graph with nodes A, B1, B2, C, D, where B1 is a CPU operation and B2 is a GPU operation; assume one CPU and one GPU.]

task_b = sched.emplace([&]() {
  // CPU code; // GPU code
}); // CPU thread blocks until the GPU finishes

Atomic execution per task: [timeline shows the CPU running A, B1, C and then sitting idle while the GPU operation B2 runs.]
84 pages | 8.82 MB | 6 months ago
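The blocking the slide points out can be made concrete with a conventional task body that synchronizes on the GPU stream before returning; a minimal CUDA sketch with invented names (gpu_part, task_b) — this is the point at which a coroutine-based system like Taro would suspend the task rather than block the worker thread:

```cpp
#include <cuda_runtime.h>

__global__ void gpu_part(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

// Conventional (blocking) task body: the worker thread that executes it
// launches the GPU portion and then waits in cudaStreamSynchronize, so the
// CPU core idles instead of running other ready tasks in the graph.
void task_b(cudaStream_t stream, float* d_data, int n) {
    // ... CPU portion of the task (B1) would run here ...
    gpu_part<<<(n + 255) / 256, 256, 0, stream>>>(d_data, n);  // GPU portion (B2)
    cudaStreamSynchronize(stream);  // blocking wait: the challenge described above
}
```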
共 508 条
- 1
- 2
- 3
- 4
- 5
- 6
- 51













