Celery 1.0 Documentation
… your dad's laptop while the queue is temporarily overloaded). Concurrency: Tasks are executed in parallel using the multiprocessing module. Scheduling: Supports recurring tasks like cron, or specifying … result store backend. You can wait for the result, retrieve it later, or ignore it. Result Stores: Database, MongoDB, Redis, Tokyo Tyrant, AMQP (high performance). Webhooks: Your tasks can also be HTTP … That's it. There are more options available, like how many processes you want to process work in parallel (the CELERY_CONCURRENCY setting), and we could use a persistent result store backend, but for now …
123 pages | 400.69 KB | 1 year ago
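The excerpt above names the CELERY_CONCURRENCY setting without showing it in context. A hedged sketch of a 1.0-era celeryconfig.py, assuming a local RabbitMQ broker; only the concurrency setting comes from the excerpt itself, the broker values are placeholders:

    # celeryconfig.py -- illustrative only; not taken from the listed docs
    BROKER_HOST = "localhost"      # assumed RabbitMQ broker location
    BROKER_USER = "guest"          # placeholder credentials
    BROKER_PASSWORD = "guest"
    BROKER_VHOST = "/"
    CELERY_CONCURRENCY = 8         # number of worker processes running tasks in parallel

If this module sits in the working directory, the celeryd worker mentioned elsewhere in these docs would pick it up as its configuration module.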
Celery 1.0 Documentation
Configuration and defaults, Example configuration file, Concurrency settings, Task result backend settings, Database backend settings, AMQP backend settings, Cache backend settings, Tokyo Tyrant backend settings, Redis backend settings, … celeryd as a daemon, Unit Testing, Tutorials, External tutorials and resources, Using Celery with Redis/Database as the messaging queue, Tutorial: Creating a click counter using carrot and celery, Frequently Asked Questions … your dad's laptop while the queue is temporarily overloaded). Concurrency: Tasks are executed in parallel using the multiprocessing module. Scheduling: Supports recurring tasks like cron, or specifying …
221 pages | 283.64 KB | 1 year ago
Celery 2.3 Documentation
… result store backend. You can wait for the result, retrieve it later, or ignore it. Result Stores: Database, MongoDB, Redis, Tokyo Tyrant, Cassandra, or AMQP (message notification). Webhooks: Your tasks can also be HTTP … That's it. There are more options available, like how many processes you want to use to process work in parallel (the CELERY_CONCURRENCY setting), and we could use a persistent result store backend, but for now … what you can do when you have results:

>>> result.ready()  # returns True if the task has finished processing.
False
>>> result.result   # task is not ready, so no return value yet.
None
>>> result.get()    # …

334 pages | 1.25 MB | 1 year ago
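A hedged, self-contained sketch of the workflow the console excerpt above is taken from, assuming the Celery 2.x task decorator and an already-configured broker and result backend; the add task and its arguments are illustrative placeholders:

    from celery.task import task

    @task
    def add(x, y):
        return x + y

    # Elsewhere in the application, after a worker has been started:
    result = add.delay(4, 4)        # send the task message to the broker
    result.ready()                  # False until a worker has finished it
    value = result.get(timeout=10)  # block until the result arrives -> 8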
Tornado 5.1 Documentation
… Futures in parallel:

from tornado.gen import multi

async def parallel_fetch(url1, url2):
    resp1, resp2 = await multi([http_client.fetch(url1),
                                http_client.fetch(url2)])

async def parallel_fetch_many(urls):
    responses = await multi([http_client.fetch(url) for url in urls])
    # responses is a list of HTTPResponses in the same order

async def parallel_fetch_dict(urls):
    responses = await multi({url: http_client.fetch(url) for url in urls})
    # responses …

In decorated coroutines, it is possible to yield the list or dict directly:

@gen.coroutine
def parallel_fetch_decorated(url1, url2):
    resp1, resp2 = yield [http_client.fetch(url1),
                          http_client.fetch(url2)]

243 pages | 895.80 KB | 1 year ago
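The snippets in the excerpt rely on an http_client created elsewhere in the guide. A minimal runnable sketch of the same parallel-fetch pattern, assuming Tornado 5.1 or later; the two URLs are placeholders only:

    from tornado import gen, ioloop
    from tornado.httpclient import AsyncHTTPClient

    async def fetch_all(urls):
        client = AsyncHTTPClient()
        # multi() waits for every fetch Future in parallel and preserves order
        responses = await gen.multi([client.fetch(url) for url in urls])
        return {url: resp.code for url, resp in zip(urls, responses)}

    if __name__ == "__main__":
        codes = ioloop.IOLoop.current().run_sync(
            lambda: fetch_all(["https://example.com/", "https://example.org/"]))
        print(codes)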
Celery 2.0 Documentation
… your dad's laptop while the queue is temporarily overloaded). Concurrency: Tasks are executed in parallel using the multiprocessing module. Scheduling: Supports recurring tasks like cron, or specifying … result store backend. You can wait for the result, retrieve it later, or ignore it. Result Stores: Database, MongoDB, Redis, Tokyo Tyrant, AMQP (high performance). Webhooks: Your tasks can also be HTTP … That's it. There are more options available, like how many processes you want to process work in parallel (the CELERY_CONCURRENCY setting), and we could use a persistent result store backend, but for now …
165 pages | 492.43 KB | 1 year ago
Tornado 6.1 Documentation
… Futures in parallel:

from tornado.gen import multi

async def parallel_fetch(url1, url2):
    resp1, resp2 = await multi([http_client.fetch(url1),
                                http_client.fetch(url2)])

async def parallel_fetch_many(urls):
    responses = await multi([http_client.fetch(url) for url in urls])
    # responses is a list of HTTPResponses in the same order

async def parallel_fetch_dict(urls):
    responses = await multi({url: http_client.fetch(url) for url in urls})
    # responses …

In decorated coroutines, it is possible to yield the list or dict directly:

@gen.coroutine
def parallel_fetch_decorated(url1, url2):
    resp1, resp2 = yield [http_client.fetch(url1),
                          http_client.fetch(url2)]

245 pages | 904.24 KB | 1 year ago
Tornado 6.0 Documentation
… Futures in parallel:

from tornado.gen import multi

async def parallel_fetch(url1, url2):
    resp1, resp2 = await multi([http_client.fetch(url1),
                                http_client.fetch(url2)])

async def parallel_fetch_many(urls):
    responses = await multi([http_client.fetch(url) for url in urls])
    # responses is a list of HTTPResponses in the same order

async def parallel_fetch_dict(urls):
    responses = await multi({url: http_client.fetch(url) for url in urls})
    # responses …

In decorated coroutines, it is possible to yield the list or dict directly:

@gen.coroutine
def parallel_fetch_decorated(url1, url2):
    resp1, resp2 = yield [http_client.fetch(url1),
                          http_client.fetch(url2)]

245 pages | 885.76 KB | 1 year ago
Tornado 4.5 Documentation
… Futures in parallel:

@gen.coroutine
def parallel_fetch(url1, url2):
    resp1, resp2 = yield [http_client.fetch(url1),
                          http_client.fetch(url2)]

@gen.coroutine
def parallel_fetch_many(urls):
    responses = yield [http_client.fetch(url) for url in urls]
    # responses is a list of HTTPResponses in the same order

@gen.coroutine
def parallel_fetch_dict(urls):
    responses = yield {url: http_client.fetch(url) for url in urls}
    # responses …

… as fetch_future = tornado.gen.convert_yielded(self.fetch_next_chunk()) to start the background processing. Looping: Looping is tricky with coroutines since there is no way in Python to yield on every …

333 pages | 322.34 KB | 1 year ago
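The excerpt ends on the guide's point that looping with yield is tricky. A hedged sketch of one workable pattern, assuming Tornado 4.5-style decorated coroutines; the status URL and the "done" marker are hypothetical:

    from tornado import gen
    from tornado.httpclient import AsyncHTTPClient

    @gen.coroutine
    def poll_until_done(status_url):
        client = AsyncHTTPClient()
        while True:
            response = yield client.fetch(status_url)  # capture each yielded result explicitly
            if b"done" in response.body:               # hypothetical completion marker
                raise gen.Return(response)
            yield gen.sleep(1)                         # back off between polls

The while loop works because every yield assigns its result to a name before the next iteration, which is the restructuring the guide is hinting at.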
Tornado 4.5 Documentation
… all of those Futures in parallel:

@gen.coroutine
def parallel_fetch(url1, url2):
    resp1, resp2 = yield [http_client.fetch(url1),
                          http_client.fetch(url2)]

@gen.coroutine
def parallel_fetch_many(urls):
    responses = yield [http_client.fetch(url) for url in urls]
    # responses is a list of HTTPResponses in the same order

@gen.coroutine
def parallel_fetch_dict(urls):
    responses = yield {url: http_client.fetch(url) for url in urls}
    # responses …

… as fetch_future = tornado.gen.convert_yielded(self.fetch_next_chunk()) to start the background processing. Looping: Looping is tricky with coroutines since there is no way in Python to yield on every …

222 pages | 833.04 KB | 1 year ago
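The tail of the excerpt refers to starting background work with tornado.gen.convert_yielded before entering a loop. A hedged sketch of that interleaving pattern, assuming decorated coroutines; fetch_next_chunk and handle_chunk are hypothetical coroutines, and None stands in for an end-of-stream marker:

    from tornado import gen

    @gen.coroutine
    def process_stream(fetch_next_chunk, handle_chunk):
        # Start the first fetch in the background before entering the loop.
        fetch_future = gen.convert_yielded(fetch_next_chunk())
        while True:
            chunk = yield fetch_future          # wait for the chunk already in flight
            if chunk is None:                   # hypothetical end-of-stream marker
                return
            # Kick off the next fetch so it overlaps with processing this chunk.
            fetch_future = gen.convert_yielded(fetch_next_chunk())
            yield handle_chunk(chunk)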
Tornado 5.1 Documentation
… dicts whose values are Futures, and waits for all of those Futures in parallel:

from tornado.gen import multi

async def parallel_fetch(url1, url2):
    resp1, resp2 = await multi([http_client.fetch(url1),
                                http_client.fetch(url2)])

async def parallel_fetch_many(urls):
    responses = await multi([http_client.fetch(url) for url in urls])
    # responses is a list of HTTPResponses in the same order

async def parallel_fetch_dict(urls):
    responses = await multi({url: http_client.fetch(url) for url in urls})

In decorated coroutines, it is possible to yield the list or dict directly:

@gen.coroutine
def parallel_fetch_decorated(url1, url2):
    resp1, resp2 = yield [http_client.fetch(url1),
                          http_client.fetch(url2)]

359 pages | 347.32 KB | 1 year ago
427 results in total.