Tornado 4.5 Documentation
…server · tornado.httpclient — Asynchronous HTTP client · tornado.httputil — Manipulate HTTP headers and URLs · tornado.http1connection – HTTP/1.x client/server implementation · Asynchronous networking · tornado… parallel_fetch_many(urls): responses = yield [http_client.fetch(url) for url in urls] # responses is a list of HTTPResponses in the same order @gen.coroutine def parallel_fetch_dict(urls): responses = yield {url: http_client.fetch(url) for url in urls} # responses is a dict {url: HTTPResponse} Interleaving: Sometimes it is useful to save a Future instead of yielding …
0 points | 333 pages | 322.34 KB | 1 year ago
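The decorated-coroutine excerpt above comes from the parallel-fetch section of the Tornado 4.5 coroutine guide. Below is a minimal, self-contained sketch of that pattern; the run_sync driver, the client creation inside the coroutines, and the placeholder URLs are illustrative assumptions, not part of the quoted documentation.

```python
# Minimal sketch of the decorated-coroutine parallel-fetch pattern quoted above.
# The driver code and example URLs are illustrative, not from the Tornado docs.
from tornado import gen
from tornado.httpclient import AsyncHTTPClient
from tornado.ioloop import IOLoop


@gen.coroutine
def parallel_fetch_many(urls):
    # AsyncHTTPClient instances are shared per IOLoop, so creating one here is cheap.
    http_client = AsyncHTTPClient()
    # Yielding a list of Futures waits for all of them in parallel;
    # the results come back in the same order as the input list.
    responses = yield [http_client.fetch(url) for url in urls]
    raise gen.Return(responses)


@gen.coroutine
def parallel_fetch_dict(urls):
    http_client = AsyncHTTPClient()
    # Yielding a dict of Futures resolves to a dict {url: HTTPResponse}.
    responses = yield {url: http_client.fetch(url) for url in urls}
    raise gen.Return(responses)


if __name__ == "__main__":
    urls = ["http://example.com/", "http://example.org/"]  # placeholder URLs
    responses = IOLoop.current().run_sync(lambda: parallel_fetch_many(urls))
    print([r.code for r in responses])
```

The "Interleaving" sentence that the excerpt cuts off refers to saving the Future returned by fetch() instead of yielding it right away, so other work can run before the result is awaited.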

Tornado 6.1 Documentation
… parallel_fetch_many(urls): responses = await multi([http_client.fetch(url) for url in urls]) # responses is a list of HTTPResponses in the same order async def parallel_fetch_dict(urls): responses = await multi({url: http_client.fetch(url) for url in urls}) # responses is a dict {url: HTTPResponse} In decorated coroutines, it is possible to yield the list or dict directly: @gen.coroutine def pa… … queue, then calls task_done to decrement the counter once. Eventually, a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue. Thus that worker’s call to …
0 points | 245 pages | 904.24 KB | 1 year ago
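The 6.x excerpts switch to native coroutines and tornado.gen.multi. The sketch below mirrors those two helpers and fills in the surrounding boilerplate (imports, an asyncio entry point, placeholder URLs); passing the client as a parameter is my addition so that it is created only once the event loop is running.

```python
# Sketch of the native-coroutine form quoted above (Tornado 5+ / 6.x).
# The entry point and placeholder URLs are illustrative additions.
import asyncio

from tornado.gen import multi
from tornado.httpclient import AsyncHTTPClient


async def parallel_fetch_many(http_client, urls):
    # multi() waits on a list of Futures concurrently and preserves order.
    responses = await multi([http_client.fetch(url) for url in urls])
    return responses  # list of HTTPResponses, same order as urls


async def parallel_fetch_dict(http_client, urls):
    # A dict of Futures resolves to a dict {url: HTTPResponse}.
    responses = await multi({url: http_client.fetch(url) for url in urls})
    return responses


async def main():
    # Create the client once the event loop is running.
    http_client = AsyncHTTPClient()
    urls = ["http://example.com/", "http://example.org/"]  # placeholder URLs
    by_url = await parallel_fetch_dict(http_client, urls)
    for url, response in by_url.items():
        print(url, response.code)


if __name__ == "__main__":
    asyncio.run(main())
```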

Tornado 5.1 Documentation
… parallel_fetch_many(urls): responses = await multi([http_client.fetch(url) for url in urls]) # responses is a list of HTTPResponses in the same order async def parallel_fetch_dict(urls): responses = await multi({url: http_client.fetch(url) for url in urls}) # responses is a dict {url: HTTPResponse} In decorated coroutines, it is possible to yield the list or dict directly: @gen.coroutine def pa… … queue, then calls task_done to decrement the counter once. Eventually, a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue. Thus that worker’s call to …
0 points | 243 pages | 895.80 KB | 1 year ago

Tornado 6.0 Documentation
… parallel_fetch_many(urls): responses = await multi([http_client.fetch(url) for url in urls]) # responses is a list of HTTPResponses in the same order async def parallel_fetch_dict(urls): responses = await multi({url: http_client.fetch(url) for url in urls}) # responses is a dict {url: HTTPResponse} In decorated coroutines, it is possible to yield the list or dict directly: @gen.coroutine def pa… … queue, then calls task_done to decrement the counter once. Eventually, a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue. Thus that worker’s call to …
0 points | 245 pages | 885.76 KB | 1 year ago

Tornado 4.5 Documentation
… parallel_fetch_many(urls): responses = yield [http_client.fetch(url) for url in urls] # responses is a list of HTTPResponses in the same order @gen.coroutine def parallel_fetch_dict(urls): responses = yield {url: http_client.fetch(url) for url in urls} # responses is a dict {url: HTTPResponse} Interleaving: Sometimes it is useful to save a Future instead of yielding it immediately, so you can start … queue, then calls task_done to decrement the counter once. Eventually, a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue. Thus that worker’s call to …
0 points | 222 pages | 833.04 KB | 1 year ago

Tornado 5.1 Documentation
…server · tornado.httpclient — Asynchronous HTTP client · tornado.httputil — Manipulate HTTP headers and URLs · tornado.http1connection – HTTP/1.x client/server implementation · Asynchronous networking · tornado… parallel_fetch_many(urls): responses = await multi([http_client.fetch(url) for url in urls]) # responses is a list of HTTPResponses in the same order async def parallel_fetch_dict(urls): responses = await multi({url: http_client.fetch(url) for url in urls}) # responses is a dict {url: HTTPResponse} In decorated coroutines, it is possible to yield the list or dict …
0 points | 359 pages | 347.32 KB | 1 year ago

Tornado 6.4 Documentation
… parallel_fetch_many(urls): responses = await multi([http_client.fetch(url) for url in urls]) # responses is a list of HTTPResponses in the same order async def parallel_fetch_dict(urls): responses = await multi({url: http_client.fetch(url) for url in urls}) # responses is a dict {url: HTTPResponse} In decorated coroutines, it is possible to yield the list or dict directly: @gen.coroutine def pa… … queue, then calls task_done to decrement the counter once. Eventually, a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue. Thus that worker’s call to …
0 points | 268 pages | 1.09 MB | 1 year ago
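Several excerpts break off mid-sentence in the queue discussion ("calls task_done to decrement the counter once…"), which comes from the web-spider example in the tornado.queues guide. The sketch below is not that spider; it is a reduced illustration of the same join/task_done bookkeeping, with gen.sleep standing in for fetching and parsing a page and an arbitrary integer workload in place of URLs.

```python
# Reduced sketch of the Queue / task_done / join bookkeeping referenced above.
# This is not the docs' web spider: the "work" here is a placeholder sleep.
import asyncio

from tornado import gen
from tornado.queues import Queue


async def main():
    q = Queue()
    seen = set()

    async def worker():
        async for item in q:  # waits until an item is available
            try:
                if item in seen:
                    continue  # already processed; still counts as one get
                seen.add(item)
                await gen.sleep(0.01)  # stand-in for fetching/parsing a page
                # A real spider would q.put() newly discovered URLs here.
            finally:
                q.task_done()  # exactly one task_done per item taken from q

    for item in range(20):  # arbitrary initial workload
        await q.put(item)

    workers = [asyncio.create_task(worker()) for _ in range(3)]
    await q.join()  # resumes once every put() has been matched by task_done()
    for w in workers:
        w.cancel()  # workers are now idle waiting on the queue; shut them down
    await asyncio.gather(*workers, return_exceptions=True)
    print("processed", len(seen), "items")


if __name__ == "__main__":
    asyncio.run(main())
```

The key invariant is that every item taken from the queue is matched by exactly one task_done call, so q.join() can tell when all discovered work has been processed.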

Tornado 6.2 Documentation
… parallel_fetch_many(urls): responses = await multi([http_client.fetch(url) for url in urls]) # responses is a list of HTTPResponses in the same order async def parallel_fetch_dict(urls): responses = await multi({url: http_client.fetch(url) for url in urls}) # responses is a dict {url: HTTPResponse} In decorated coroutines, it is possible to yield the list or dict directly: @gen.coroutine def pa… … queue, then calls task_done to decrement the counter once. Eventually, a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue. Thus that worker’s call to …
0 points | 260 pages | 1.06 MB | 1 year ago

Tornado 6.4 Documentation
… parallel_fetch_many(urls): responses = await multi([http_client.fetch(url) for url in urls]) # responses is a list of HTTPResponses in the same order async def parallel_fetch_dict(urls): responses = await multi({url: http_client.fetch(url) for url in urls}) # responses is a dict {url: HTTPResponse} In decorated coroutines, it is possible to yield the list or dict directly: @gen.coroutine def pa… … queue, then calls task_done to decrement the counter once. Eventually, a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue. Thus that worker’s call to …
0 points | 268 pages | 1.09 MB | 1 year ago

Tornado 6.4 Documentation
… parallel_fetch_many(urls): responses = await multi([http_client.fetch(url) for url in urls]) # responses is a list of HTTPResponses in the same order async def parallel_fetch_dict(urls): responses = await multi({url: http_client.fetch(url) for url in urls}) # responses is a dict {url: HTTPResponse} In decorated coroutines, it is possible to yield the list or dict directly: @gen.coroutine def pa… … queue, then calls task_done to decrement the counter once. Eventually, a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue. Thus that worker’s call to …
0 points | 268 pages | 1.09 MB | 1 year ago

20 results in total
Pages: 1, 2