《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
"I have made this longer than usual because I have not had time to make it shorter." (Blaise Pascal) In the last chapter, we discussed a few ideas to improve deep learning efficiency. Now, we will elaborate on one of those ideas: compression techniques, which aim to reduce the model footprint (size, latency, memory, etc.). … In this chapter, we introduce quantization, a model compression technique that addresses both these issues. We'll start with a gentle introduction to the idea of compression, then the details of quantization and its applications …
0 码力 | 33 pages | 1.96 MB | 1 year ago
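The excerpt above introduces quantization as a way to shrink the model footprint. As a minimal illustration of the core idea (not the book's own code; all names here are invented), symmetric 8-bit linear quantization of a list of weights could be sketched as:

```python
def quantize_int8(values):
    """Map floats onto integers in [-127, 127] (symmetric linear quantization).

    Every value is divided by one shared scale factor and rounded, so 32-bit
    floats can be stored as 8-bit integers plus a single float (the scale).
    """
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize_int8(quantized, scale):
    """Recover approximate floats; per-entry error is at most scale / 2."""
    return [q * scale for q in quantized]

weights = [0.5, -1.0, 0.25, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
```

Storing `q` as int8 cuts weight storage roughly 4x relative to float32, at the cost of a bounded rounding error.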
《Efficient Deep Learning Book》[EDL] Chapter 5 - Advanced Compression Techniques
"The problem is that we attempt to solve the simplest questions cleverly, thereby rendering them unusually complex. One should seek the simple solution." (Anton Pavlovich Chekhov) In this chapter, we will discuss two advanced compression techniques. By 'advanced' we mean that these techniques are slightly more involved than quantization (as discussed in the second chapter) … of our models. Did we get you excited yet? Let's learn about these techniques together! Model Compression Using Sparsity: Sparsity or Pruning refers to the technique of removing (pruning) weights during …
0 码力 | 34 pages | 3.18 MB | 1 year ago
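The Chapter 5 excerpt mentions sparsity, i.e. pruning weights away. A minimal sketch of unstructured magnitude pruning (illustrative names of my own, not the book's implementation):

```python
def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(len(weights) * sparsity)  # how many weights to remove
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    # Values tied at the threshold are pruned too, so the achieved
    # sparsity can slightly exceed the requested one.
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.1, -0.8, 0.05, 0.9, -0.02, 0.7]
pruned = prune_by_magnitude(w, sparsity=0.5)
# the three smallest magnitudes (0.02, 0.05, 0.1) are zeroed
```

The zeroed weights can then be stored in a sparse format or skipped at inference time, which is where the footprint savings come from.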
pandas: powerful Python data analysis toolkit - 0.25
Optional dependencies (excerpt): … sqlite; SciPy 0.19.0 (miscellaneous statistical functions); XlsxWriter 0.9.8 (Excel writing); blosc (compression for msgpack); fastparquet 0.2.1 (Parquet reading / writing); gcsfs 0.2.2 (Google Cloud Storage access); xlrd 1.1.0 (Excel reading); xlwt 1.2.0 (Excel writing); xsel (clipboard I/O on Linux); zlib (compression for msgpack). Optional dependencies for parsing HTML: one of the following combinations of libraries … a sample (using 100 column x 100,000 row DataFrames), 0.11.0 vs prior version in ms: df1 > df2: 13.32 vs 125.35 (ratio 0.1063); df1 * df2: 21.71 vs 36.63 (0.5928); df1 + df2: 22.04 vs 36.50 (0.6039).
0 码力 | 698 pages | 4.91 MB | 1 year ago
Apache Kyuubi 1.7.1-rc0 Documentation
… kyuubi.frontend.thrift.http.compression.enabled (boolean, default true, since 1.6.0): enable thrift http compression via Jetty compression support. kyuubi.frontend.thrift.http.cookie.auth.enabled (default true): when … Internally, Spark has an optimization rule (DemoteBroadcastHashJoin) that detects a join child with a high ratio of empty partitions and adds a no-broadcast-hash-join hint to avoid broadcasting it. … spark.sql.adaptive … the job will run a little slower than before. Table: Z-order gives good data clustering, so the compression ratio can be improved. Downstream: downstream read performance benefits from data skipping.
0 码力 | 401 pages | 5.25 MB | 1 year ago

Apache Kyuubi 1.7.0-rc0 Documentation
0 码力 | 404 pages | 5.25 MB | 1 year ago

Apache Kyuubi 1.7.0 Documentation
0 码力 | 400 pages | 5.25 MB | 1 year ago

Apache Kyuubi 1.7.0-rc1 Documentation
0 码力 | 400 pages | 5.25 MB | 1 year ago
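The Kyuubi excerpts name two frontend options. Assembled into a `kyuubi-defaults.conf` fragment (both default to true per the excerpt; the second option's description is truncated in the snippet, so its comment is only a placeholder):

```properties
# Enable thrift http compression via Jetty compression support (since 1.6.0)
kyuubi.frontend.thrift.http.compression.enabled=true
# Thrift http cookie auth option; full description truncated in the excerpt
kyuubi.frontend.thrift.http.cookie.auth.enabled=true
```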
pandas: powerful Python data analysis toolkit - 1.1.1
Optional dependencies (excerpt): … sqlite; SciPy 0.19.0 (miscellaneous statistical functions); XlsxWriter 0.9.8 (Excel writing); blosc (compression for HDF5); fsspec 0.7.4 (handling files aside from local and HTTP); fastparquet 0.3.2 (Parquet reading …); xlrd 1.1.0 (Excel reading); xlwt 1.2.0 (Excel writing); xsel (clipboard I/O on Linux); zlib (compression for HDF5). … iterate each of the rows! I want to check the ratio of the values in Paris versus Antwerp and save the result in a new column: In [6]: air_quality["ratio_paris_antwerp"] = air_quality["station_paris"] …
0 码力 | 3231 pages | 10.87 MB | 1 year ago

pandas: powerful Python data analysis toolkit - 1.1.0
0 码力 | 3229 pages | 10.87 MB | 1 year ago

pandas: powerful Python data analysis toolkit - 1.0
Optional dependencies (excerpt): … sqlite; SciPy 0.19.0 (miscellaneous statistical functions); XlsxWriter 0.9.8 (Excel writing); blosc (compression for HDF5); fastparquet 0.3.2 (Parquet reading / writing); xlrd 1.1.0 (Excel reading); xlwt 1.2.0 (Excel writing); xsel (clipboard I/O on Linux); zlib (compression for HDF5). Optional dependencies for parsing HTML: one of the following combinations of libraries … iterate each of the rows! I want to check the ratio of the values in Paris versus Antwerp and save the result in a new column: In [6]: air_quality["ratio_paris_antwerp"] = air_quality["station_paris"] …
0 码力 | 3091 pages | 10.16 MB | 1 year ago
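The pandas excerpts end mid-expression while creating a ratio column. A self-contained sketch of the same vectorized pattern on toy data (the column names mirror the excerpt; the numbers are invented, and the full right-hand side is my own completion, not quoted from the tutorial):

```python
import pandas as pd

# Toy stand-in for the tutorial's air_quality DataFrame.
air_quality = pd.DataFrame({
    "station_paris": [24.4, 27.4, 28.5],
    "station_antwerp": [22.0, 24.9, 26.5],
})

# Element-wise division builds the new column; no explicit row loop needed.
air_quality["ratio_paris_antwerp"] = (
    air_quality["station_paris"] / air_quality["station_antwerp"]
)
```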
211 results in total