The Rust Programming Language, Simplified Chinese edition
…try implementing these changes yourself before looking at the code. Ready? Listing 20-15 is an example that makes these modifications:

Filename: src/lib.rs

    use std::thread;

    pub struct ThreadPool {
        workers: Vec<Worker>,
    }

    impl ThreadPool {
        // --snip--
        /// Create a new ThreadPool.
        /// --snip--
        pub fn new(size: usize) -> ThreadPool {
            // --snip--
            let mut workers = Vec::with_capacity(size);

            for id in 0..size {
                workers.push(Worker::new(id));
            }

            ThreadPool { workers }
        }
    }

Listing 20-15: Modifying ThreadPool to hold Worker instances instead of holding threads directly

Here the field of ThreadPool is renamed from threads to workers, because it now stores Worker values rather than JoinHandle<()> values. The counter of the for loop is used as the argument to Worker::new, and each newly created Worker is stored in the vector named workers. The Worker struct and its new function are private, because external code (such as…
0 码力 | 600 pages | 12.99 MB | 1 year ago
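The excerpt refers to a Worker type that it does not show. The following is a minimal, hedged sketch of what such a struct could look like at this point in the chapter; the field names follow the surrounding prose, and the empty closure passed to thread::spawn is only a placeholder, since the excerpt ends before any job-handling logic is introduced:

```rust
use std::thread;

// Each Worker owns an id and the OS thread it manages.
struct Worker {
    id: usize,
    thread: thread::JoinHandle<()>,
}

impl Worker {
    fn new(id: usize) -> Worker {
        // Placeholder body: later steps would have the thread wait for jobs
        // (for example, over a channel) instead of finishing immediately.
        let thread = thread::spawn(|| {});

        Worker { id, thread }
    }
}
```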
httpd 2.4.28 Chinese documentation
(subject to some flow-control logic in the worker MPM that throttles the listener if all the available workers are busy). Though it isn't apparent from this trace, the next accept(2) can (and usually does, under … modules may do the same. Such modules rely on collecting detailed information about the state of all workers. The default is changed by mod_status beginning with version 2.3.6. The previous default was always … connections with one request worker thread reserved per connection. This can lead to situations where all workers are tied up and no worker thread is available to handle new work on established async connections …
0 码力 | 2659 pages | 3.10 MB | 1 year ago
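The situation described here, where every request worker is tied up and established async connections cannot be serviced, is what the event MPM's AsyncRequestWorkerFactor directive is meant to mitigate: each child process stops accepting new connections once it runs low on idle workers. As a rough sketch of the acceptance rule given in the 2.4 event MPM documentation (the exact accounting, for example how connections in the closing state are counted, changed during the 2.4.x series, so verify against the page for the specific release):

```latex
\text{accept another connection while:}\quad
\text{current connections} < \text{ThreadsPerChild} + \text{AsyncRequestWorkerFactor} \times (\text{idle workers})
```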
httpd 2.4.33 Chinese documentation
(subject to some flow-control logic in the worker MPM that throttles the listener if all the available workers are busy). Though it isn't apparent from this trace, the next accept(2) can (and usually does, under … modules may do the same. Such modules rely on collecting detailed information about the state of all workers. The default is changed by mod_status beginning with version 2.3.6. The previous default was always … connections with one request worker thread reserved per connection. This can lead to situations where all workers are tied up and no worker thread is available to handle new work on established async connections …
0 码力 | 2692 pages | 3.12 MB | 1 year ago
httpd 2.4.23 Chinese documentation
(subject to some flow-control logic in the worker MPM that throttles the listener if all the available workers are busy). Though it isn't apparent from this trace, the next accept(2) can (and usually does, under … modules may do the same. Such modules rely on collecting detailed information about the state of all workers. The default is changed by mod_status beginning with version 2.3.6. The previous default was always … connections with one request worker thread reserved per connection. This can lead to situations where all workers are tied up and no worker thread is available to handle new work on established async connections …
0 码力 | 2559 pages | 2.11 MB | 1 year ago
httpd 2.4.25 Chinese documentation
(subject to some flow-control logic in the worker MPM that throttles the listener if all the available workers are busy). Though it isn't apparent from this trace, the next accept(2) can (and usually does, under … modules may do the same. Such modules rely on collecting detailed information about the state of all workers. The default is changed by mod_status beginning with version 2.3.6. The previous default was always … connections with one request worker thread reserved per connection. This can lead to situations where all workers are tied up and no worker thread is available to handle new work on established async connections …
0 码力 | 2573 pages | 2.12 MB | 1 year ago
2.7 Implementation of remote container image replication in the Harbor open-source project
Consumer Pattern
• Front end (UI) or registry generates replication jobs (producer)
• Backend workers handle replication (consumer)
• Potential issues
• Producers need to sleep or wait when buffer is … blocking for producers
• Dispatcher queues jobs
• Dispatcher distributes jobs to available workers
• Workers added back to available worker queue after jobs are completed
(diagram: Front End, Registry, Worker; … for stopping a job)
Worker Pool
• Predefine a pool of available workers (default: 3, not to overwhelm frontend tasks)
• A list of workers and a channel for dispatching jobs
harbor/src/jobservice/job/workerpool
0 码力 | 37 pages | 3.47 MB | 1 year ago
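Harbor's jobservice implements the dispatcher and worker pool described above in Go (see harbor/src/jobservice/job/workerpool). Purely as an illustration of the pattern in these slides (a job queue, a fixed pool of workers, and workers that pull the next job as they become free), here is a hedged sketch in Rust; the names Job and WORKER_COUNT are invented for the example and do not correspond to Harbor's actual types or API:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Hypothetical stand-in for a replication job.
struct Job(u32);

// Mirrors the "default: 3" worker count mentioned in the slides.
const WORKER_COUNT: usize = 3;

fn main() {
    // The dispatcher side owns the sender; workers share the receiver.
    let (job_tx, job_rx) = mpsc::channel::<Job>();
    let job_rx = Arc::new(Mutex::new(job_rx));

    // Spawn a fixed pool of workers; each loops, pulling jobs until the queue closes.
    let handles: Vec<_> = (0..WORKER_COUNT)
        .map(|id| {
            let rx = Arc::clone(&job_rx);
            thread::spawn(move || loop {
                let msg = rx.lock().unwrap().recv();
                match msg {
                    Ok(job) => println!("worker {id} handling job {}", job.0),
                    Err(_) => break, // sender dropped: no more jobs, shut down
                }
            })
        })
        .collect();

    // The producer side (front end or registry, in Harbor's terms) just enqueues jobs.
    for n in 0..10 {
        job_tx.send(Job(n)).unwrap();
    }
    drop(job_tx); // close the queue so idle workers exit

    for handle in handles {
        handle.join().unwrap();
    }
}
```

In this sketch, closing the channel doubles as the shutdown signal; the "for stopping a job" fragment in the slides appears to refer to a dedicated control channel in the real implementation, which is not modelled here.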
Julia Chinese documentation
… enumerate(workers())
        @async responses[idx] = remotecall_fetch(foo, pid, args...)
    end
end

will be faster than:

using Distributed
refs = Vector{Any}(undef, nworkers())
for (idx, pid) in enumerate(workers())
    refs[idx] …

… information about the series of exceptions. For example, if a group of workers are executing several tasks, and multiple workers fail, the resulting CompositeException will contain a "bundle" of information …

Relevant only when using TCP/IP as transport. To launch workers without blocking the REPL, or the containing function if launching workers programmatically, execute addprocs in its own task. Examples …
0 码力 | 1238 pages | 4.59 MB | 1 year ago
httpd 2.4.20 Chinese documentation
(subject to some flow-control logic in the worker MPM that throttles the listener if all the available workers are busy). Though it isn't apparent from this trace, the next accept(2) can (and usually does, under … modules may do the same. Such modules rely on collecting detailed information about the state of all workers. The default is changed by mod_status beginning with version 2.3.6. The previous default was always … connections with one request worker thread reserved per connection. This can lead to situations where all workers are tied up and no worker thread is available to handle new work on established async connections …
0 码力 | 2533 pages | 2.09 MB | 1 year ago
Weblate 4.6.2 user documentation
    # Reload when consuming too much of memory
    reload-on-rss = 250
    # Increase number of workers for heavily loaded sites
    workers = 8
    # Enable threads for Sentry error submission
    enable-threads = true
    # …
… [https://docs.celeryproject.org/en/latest/userguide/configuration.html], Workers Guide [https://docs.celeryproject.org/en/latest/userguide/workers.html], Daemonization [https://docs.celeryproject.org/en/lates…
… cores and you are running into out-of-memory problems, try decreasing the number of workers:
    environment:
      WEBLATE_WORKERS: 2
You can also fine-tune individual worker categories:
    environment:
      UWSGI_WORKERS: 4
      CELERY_MAIN_OPTIONS: --concurrency 2
      CELERY_NOTIFY_OPTIONS: …
0 码力 | 762 pages | 9.22 MB | 1 year ago
Weblate 4.6.1 user documentation
    # Reload when consuming too much of memory
    reload-on-rss = 250
    # Increase number of workers for heavily loaded sites
    workers = 8
    # Enable threads for Sentry error submission
    enable-threads = true
    # …
… [https://docs.celeryproject.org/en/latest/userguide/configuration.html], Workers Guide [https://docs.celeryproject.org/en/latest/userguide/workers.html], Daemonization [https://docs.celeryproject.org/en/lates…
… cores and you are running into out-of-memory problems, try decreasing the number of workers:
    environment:
      WEBLATE_WORKERS: 2
You can also fine-tune individual worker categories:
    environment:
      UWSGI_WORKERS: 4
      CELERY_MAIN_OPTIONS: --concurrency 2
      CELERY_NOTIFY_OPTIONS: …
0 码力 | 761 pages | 9.22 MB | 1 year ago
160 results in total