Back to Basics: Debugging Techniques
Bob Steagall, CppCon 2021. The Cost of Software Failures • January 2018, Tricentis study: $1.2T enterprise value lost for shareholders • Radiation overdoses from Therac-25 • 737 MAX MCAS system • System component design flaws. Agenda • What are bugs? • What is debugging? • Challenges when …
44 pages | 470.68 KB | 5 months ago

Get off my thread: Techniques for moving work to background threads
Anthony Williams, Just Software Solutions Ltd, https://www.justsoftwaresolutions.co.uk, September 2020.
90 pages | 6.97 MB | 5 months ago

《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
“The more that you read, the more things you will know. The more that you learn, the more places you'll go.” ― Dr. Seuss. Model quality is an important benchmark to evaluate … translation accuracy would garner better consumer support. In this chapter, our focus will be on the techniques that enable us to achieve our quality goals. High quality models have an additional benefit … In the first chapter, we briefly introduced learning techniques such as regularization, dropout, data augmentation, and distillation to improve quality. These techniques can boost metrics like accuracy, precision …
56 pages | 18.93 MB | 1 year ago
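
The excerpt above names regularization, dropout, data augmentation, and distillation as quality-boosting learning techniques. As a minimal sketch of how two of them (augmentation and dropout) might look in Keras; this is illustrative only, not code from the book, and the layer choices, input shape, and rates are assumptions:

```python
import tensorflow as tf

# Illustrative tiny classifier combining two techniques the excerpt names:
# data augmentation (random flips/rotations) and dropout regularization.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.RandomFlip("horizontal"),  # augmentation, training mode only
    tf.keras.layers.RandomRotation(0.1),       # rotate up to 10% of a full circle
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),              # randomly zero half the units
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Keras preprocessing layers such as RandomFlip are no-ops at inference time, so the augmentation adds no serving cost.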

《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
“I have made this longer than usual because I have not had time to make it shorter.” Blaise Pascal. In the last chapter, we discussed a few ideas to improve deep learning efficiency. Now, we will elaborate on one of those ideas, the compression techniques. Compression techniques aim to reduce the model footprint (size, latency, memory etc.). We can reduce the model … reduce the number of layers and number of parameters, but this could hurt the quality. Compression techniques are used to achieve an efficient representation of one or more layers in a neural network with …
33 pages | 1.96 MB | 1 year ago
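
The excerpt describes compression as finding an efficient representation of a layer while reducing the model footprint. One minimal illustration of that idea is 8-bit affine quantization of a weight tensor; the sketch below assumes a generic uint8 scheme and is not taken from the book:

```python
import numpy as np

def quantize_uint8(w):
    """Affine-quantize a float tensor to uint8: w ~= scale * (q - zero_point)."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against constant tensors
    zero_point = int(round(-w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original float tensor."""
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.randn(4, 4).astype(np.float32)  # stand-in for a layer's weights
q, s, z = quantize_uint8(w)
print("max abs error:", np.abs(w - dequantize(q, s, z)).max())  # about scale/2
```

Storing q at one byte per weight plus two scalars is roughly a 4x size reduction over float32, at the cost of the small rounding error printed above.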

《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques
“The problem is that we attempt to solve the simplest questions cleverly, thereby rendering them unusually complex. One should seek the simple solution.” — Anton Pavlovich Chekhov. In this chapter, we will discuss two advanced compression techniques. By ‘advanced’ we mean that these techniques are slightly more involved than quantization (as discussed in the second … can help improve the quality of our models. Did we get you excited yet? Let’s learn about these techniques together! Model Compression Using Sparsity: Sparsity or Pruning refers to the technique of removing …
34 pages | 3.18 MB | 1 year ago
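
The excerpt is cut off just as it defines sparsity/pruning as removing parts of the network. A common baseline matching that definition is unstructured magnitude pruning, sketched below; the function name and the 75% sparsity target are illustrative assumptions, not the book's implementation:

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    # Value of the k-th smallest magnitude: everything at or below it is cut.
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    mask = np.abs(w) > threshold
    return w * mask

w = np.random.randn(8, 8).astype(np.float32)
w_sparse = magnitude_prune(w, sparsity=0.75)
print("fraction zeroed:", (w_sparse == 0).mean())  # approximately 0.75
```

Framework tooling (for example, TensorFlow's model-optimization toolkit) applies the same idea gradually during training, which usually preserves quality better than one-shot pruning.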

Techniques to Optimise Multi-threaded Data Building During Game Development
Dominik Grabiec, CppCon 2024. This talk focuses on optimising the process around … 1. Background • What is data building? • Differences from Game Code • Assumptions and Concepts 2. Techniques • Keep Threads Busy • 3D Caching • Optimise Sorting • Avoid Blocking Threads 3. Questions … Background: what data building is, differences from normal game code, concepts used in the presentation; techniques I've used to optimise the data building system; time for questions at end; numbers at bottom of …
99 pages | 2.40 MB | 5 months ago

《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review
“Tell me and I forget, teach me and I may remember, involve me and I learn.” – Benjamin Franklin. This chapter is a continuation of Chapter 3, where we introduced learning techniques. To recap, learning techniques can help us meet our model quality goals. Techniques like distillation and data augmentation improve the model quality, without increasing the … achieve impressive quality with a small number of labels. As we described in chapter 3’s ‘Learning Techniques and Efficiency’ section, labeling of training data is an expensive undertaking. Factoring in the …
31 pages | 4.03 MB | 1 year ago
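
The excerpt recaps distillation as a technique that improves student quality without growing the deployed model. Below is a minimal sketch of the classic distillation loss (the temperature/alpha formulation from Hinton et al., 2015); this is an assumption on my part, since the excerpt itself shows no code:

```python
import tensorflow as tf

def distillation_loss(teacher_logits, student_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend the usual hard-label loss with a soft-label loss from the teacher."""
    # Soft targets: teacher probabilities softened by the temperature.
    soft_teacher = tf.nn.softmax(teacher_logits / temperature)
    soft_loss = tf.keras.losses.categorical_crossentropy(
        soft_teacher, student_logits / temperature, from_logits=True)
    # Hard targets: standard cross-entropy against the ground-truth labels.
    hard_loss = tf.keras.losses.sparse_categorical_crossentropy(
        labels, student_logits, from_logits=True)
    # temperature**2 keeps soft-gradient magnitudes comparable (Hinton et al.).
    return tf.reduce_mean(alpha * hard_loss
                          + (1.0 - alpha) * temperature ** 2 * soft_loss)

# Toy usage: random logits for a batch of 8 examples over 10 classes.
t = tf.random.normal([8, 10])
s = tf.random.normal([8, 10])
y = tf.random.uniform([8], maxval=10, dtype=tf.int32)
print(float(distillation_loss(t, s, y)))
```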

When Lock-Free Still Isn't Enough: An Introduction to Wait-Free Programming and Concurrency Techniques
33 pages | 817.96 KB | 5 months ago

Template Metaprogramming: Type Traits
Intended Audience • Beginner/Intermediate • Gentle entry: swimming pool to river • Part 1 current • Not necessarily beginner to C++, but beginner to traditional template metaprogramming techniques • Type traits part of standard library for ~10 years • Fundamentals have been in use for ~20 …
403 pages | 5.30 MB | 5 months ago

《Efficient Deep Learning Book》[EDL] Chapter 1 - Introduction
… in deep learning models. We will also introduce core areas of efficiency techniques (compression techniques, learning techniques, automation, efficient models & layers, infrastructure). Our hope is that … tools at your disposal to achieve what you want. The subsequent chapters will delve deeper into techniques, infrastructure, and other helpful topics where you can get your hands dirty with practical projects … possible classes. This helped with creating a testbed for researchers to experiment with. Along with techniques like Transfer Learning to adapt such models for the real world, and a rapid growth in data collected …
21 pages | 3.17 MB | 1 year ago

1,000 results in total