TVM: Where Are We Going
Specification by Tensor Expression • Tensorization • VTA: Open & Flexible Deep Learning Accelerator • Runtime JIT compiles accelerator microcode • Supports heterogeneous devices, 10x better than CPU on the same … • Simulator, compiler, driver, hardware design: full-stack open source • Current TVM Stack • VTA Runtime & JIT Compiler • TSIM: Support for Future Hardware • New NPU Runtime • TSIM Driver • TSIM Binary (tensor-expression specification is sketched after this listing)
31 pages | 22.64 MB | 6 months ago
Facebook -- TVM AWS Meetup Talk
… injective-only) • Kernel synthesis • Dynamic shapes, stride specialization • Impedance mismatch between the PyTorch JIT IR and the Relay IR • Watch this space :) • Big thanks to the community
11 pages | 3.08 MB | 6 months ago
2 results in total
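Both results center on TVM's tensor-expression programming model, which the first deck refers to as "specification by tensor expression." As a point of reference only, the minimal sketch below declares a vector add with tvm.te, schedules it, and JIT-compiles it for a CPU target. It assumes a TVM release that still ships the classic te.create_schedule API; tensor and function names are illustrative and not taken from either talk.

```python
import numpy as np
import tvm
from tvm import te

# Specify the computation declaratively as a tensor expression:
# C[i] = A[i] + B[i] over a symbolic length n.
n = te.var("n")
A = te.placeholder((n,), name="A", dtype="float32")
B = te.placeholder((n,), name="B", dtype="float32")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# A schedule decides how the expression is lowered (loop order,
# vectorization, tensorization, ...); here we keep the default.
s = te.create_schedule(C.op)

# JIT-compile the scheduled expression for the LLVM CPU target.
fadd = tvm.build(s, [A, B, C], target="llvm", name="vector_add")

# Run the compiled kernel and check the result on the host.
dev = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(1024).astype("float32"), dev)
b = tvm.nd.array(np.random.rand(1024).astype("float32"), dev)
c = tvm.nd.array(np.zeros(1024, dtype="float32"), dev)
fadd(a, b, c)
np.testing.assert_allclose(c.numpy(), a.numpy() + b.numpy(), rtol=1e-5)
```

The same declarative specification is what the VTA flow builds on: the schedule can be retargeted (e.g., tensorized to accelerator intrinsics) without changing the expression itself.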