Flask Documentation (1.1.x)
Contents: … 1.11 Configuration Handling … 1.12 Signals … Flask supports extensions to add such functionality as if it was implemented in Flask itself. Numerous extensions provide database integration, form validation, upload handling, various open authentication technologies, and more. Flask may be “micro”, but it’s ready for production use. Here we use the test_request_context() method to try out url_for(). test_request_context() tells Flask to behave as though it’s handling a request even while we use a Python shell. See Context Locals.

    from flask import Flask, url_for
    …

0 credits | 291 pages | 1.25 MB | 1 year ago
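
This entry's snippet breaks off at the import. A minimal sketch of the shell experiment the quickstart describes, assuming a few illustrative routes (index, login, profile) that are not part of the snippet itself:

    from flask import Flask, url_for

    app = Flask(__name__)

    @app.route('/')
    def index():
        return 'index'

    @app.route('/login')
    def login():
        return 'login'

    @app.route('/user/<username>')
    def profile(username):
        return "{}'s profile".format(username)

    # test_request_context() pushes a request context so url_for() can build
    # URLs outside of an actual request, e.g. in a Python shell.
    with app.test_request_context():
        print(url_for('index'))                         # /
        print(url_for('login'))                         # /login
        print(url_for('login', next='/'))               # /login?next=/
        print(url_for('profile', username='John Doe'))  # /user/John%20Doe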

Flask Documentation (1.1.x)
Contents: Configuration · Email Errors to Admins · Injecting Request Information · Other Libraries · Configuration Handling · Configuration Basics · Environment and Debug Features · Builtin Configuration Values · Configuring from … the documentation is for you. API: Application Object · Blueprint Objects · Incoming Request Data · Response Objects · Sessions · Session Interface · Test Client · Test CLI Runner · Application Globals · Useful Functions … Flask supports extensions to add such functionality as if it was implemented in Flask itself. Numerous extensions provide database integration, form validation, upload handling, various open authentication technologies, and more. Flask may be “micro”, but it’s ready for production use.

0 credits | 428 pages | 895.98 KB | 1 year ago
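
The matched headings cover Flask's configuration machinery. A minimal sketch of the "Configuration Basics" idea, assuming nothing beyond a bare application (the key values and the commented-out module and environment-variable names are placeholders):

    from flask import Flask

    app = Flask(__name__)

    # Builtin configuration values are plain keys on the config mapping,
    # which is a dict subclass.
    app.config['TESTING'] = True
    app.config['SECRET_KEY'] = 'change-me'

    # Configuration can also be loaded in bulk, for example:
    #   app.config.from_object('yourapplication.default_settings')
    #   app.config.from_envvar('YOURAPPLICATION_SETTINGS')

    print(app.config['TESTING'])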

Tornado 4.5 Documentation
Contents: … tornado.log — Logging support · tornado.options — Command-line parsing · tornado.stack_context — Exception handling across asynchronous callbacks · tornado.testing — Unit testing support for asynchronous code … Structure of a Tornado web application: The Application object · Subclassing RequestHandler · Handling request input · Overriding RequestHandler methods · Error Handling · Redirection · Asynchronous handlers · Templates and UI · Configuring …

    from tornado.httpclient import HTTPClient

    def synchronous_fetch(url):
        http_client = HTTPClient()
        response = http_client.fetch(url)
        return response.body

And here is the same function rewritten to be asynchronous with a callback …

0 credits | 333 pages | 322.34 KB | 1 year ago
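
The snippet stops right before the callback version it announces. A sketch of how that rewrite typically looks in the Tornado 4.x/5.x style; the function and callback names follow the fragments visible in the Tornado 4.5 entry further down, so treat this as a reconstruction rather than the exact upstream text (the callback argument to fetch() only exists before Tornado 6):

    from tornado.httpclient import AsyncHTTPClient

    def asynchronous_fetch(url, callback):
        http_client = AsyncHTTPClient()

        def handle_response(response):
            # Hand the body back through the caller-supplied callback
            callback(response.body)

        http_client.fetch(url, callback=handle_response)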

Tornado 5.1 Documentation
Contents: … tornado.log — Logging support · tornado.options — Command-line parsing · tornado.stack_context — Exception handling across asynchronous callbacks · tornado.testing — Unit testing support for asynchronous code … Structure of a Tornado web application: The Application object · Subclassing RequestHandler · Handling request input · Overriding RequestHandler methods · Error Handling · Redirection · Asynchronous handlers · Templates and UI · Configuring …

    from tornado.httpclient import HTTPClient

    def synchronous_fetch(url):
        http_client = HTTPClient()
        response = http_client.fetch(url)
        return response.body

And here is the same function rewritten asynchronously as a native coroutine: …

0 credits | 359 pages | 347.32 KB | 1 year ago

Tornado 5.1 Documentation

    from tornado.httpclient import HTTPClient

    def synchronous_fetch(url):
        http_client = HTTPClient()
        response = http_client.fetch(url)
        return response.body

And here is the same function rewritten asynchronously as a native coroutine:

    from tornado.httpclient import AsyncHTTPClient

    async def asynchronous_fetch(url):
        http_client = AsyncHTTPClient()
        response = await http_client.fetch(url)
        return response.body

Or for compatibility with older versions of Python, using the tornado.gen module:

    from tornado import gen

    @gen.coroutine
    def async_fetch_gen(url):
        http_client = AsyncHTTPClient()
        response = yield http_client.fetch(url)
        raise gen.Return(response.body)

Coroutines are a little magical, but what they do internally …

0 credits | 243 pages | 895.80 KB | 1 year ago
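
The snippet defines coroutines but is cut off before explaining how one is driven. A minimal sketch of running such a coroutine from synchronous code, assuming IOLoop.run_sync as the entry point (the URL is a placeholder):

    from tornado.httpclient import AsyncHTTPClient
    from tornado.ioloop import IOLoop

    async def asynchronous_fetch(url):
        http_client = AsyncHTTPClient()
        response = await http_client.fetch(url)
        return response.body

    if __name__ == "__main__":
        # run_sync starts the IOLoop, runs the coroutine to completion, then stops.
        body = IOLoop.current().run_sync(
            lambda: asynchronous_fetch("http://www.tornadoweb.org/"))
        print(len(body))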

Tornado 4.5 Documentation

    from tornado.httpclient import HTTPClient

    def synchronous_fetch(url):
        http_client = HTTPClient()
        response = http_client.fetch(url)
        return response.body

And here is the same function rewritten to be asynchronous with a callback (the snippet picks up inside the new function):

    def handle_response(response):
        callback(response.body)
    http_client.fetch(url, callback=handle_response)

And again with a Future instead of a callback: from … they have two major advantages. Error handling is more consistent, since the Future.result method can simply raise an exception (as opposed to the ad-hoc error handling common in callback-oriented interfaces) …

0 credits | 222 pages | 833.04 KB | 1 year ago
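
The "Future instead of a callback" variant is cut off at "from". A sketch of how it reads in the Tornado 4.x guide style (async_fetch_future is an illustrative name):

    from tornado.concurrent import Future
    from tornado.httpclient import AsyncHTTPClient

    def async_fetch_future(url):
        http_client = AsyncHTTPClient()
        my_future = Future()
        fetch_future = http_client.fetch(url)
        # Copy the fetch result onto our Future; f.result() re-raises any
        # exception here, which is why error handling is more consistent than
        # with ad-hoc callbacks.
        fetch_future.add_done_callback(
            lambda f: my_future.set_result(f.result()))
        return my_future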

Scrapy 2.10 Documentation

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                "author": quote.xpath("span/small/text()").get(),
                "text": quote.css("span.text::text").get(),
            }
        next_page = response.css('li.next a::attr("href")').get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)

Put this in a text file, name it something like quotes_spider.py and run the spider using the runspider command … (… the URL for quotes in humor category) and called the default callback method parse, passing the response object as an argument. In the parse callback, we loop through the quote elements using a CSS Selector …

0 credits | 419 pages | 1.73 MB | 1 year ago
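
Putting the fragment back together, a sketch of the complete spider file the snippet describes; the class name and start URL follow the upstream "quotes in humor category" example and may differ between Scrapy versions:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = ["https://quotes.toscrape.com/tag/humor/"]  # humor-category quotes

        def parse(self, response):
            for quote in response.css("div.quote"):
                yield {
                    "author": quote.xpath("span/small/text()").get(),
                    "text": quote.css("span.text::text").get(),
                }
            next_page = response.css('li.next a::attr("href")').get()
            if next_page is not None:
                yield response.follow(next_page, self.parse)

Saved as quotes_spider.py, it can be run with something like: scrapy runspider quotes_spider.py -o quotes.jsonl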

Scrapy 2.9 Documentation

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                "author": quote.xpath("span/small/text()").get(),
                "text": quote.css("span.text::text").get(),
            }
        next_page = response.css('li.next a::attr("href")').get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)

Put this in a text file, name it something like quotes_spider.py and run the spider using the runspider command … (… the URL for quotes in humor category) and called the default callback method parse, passing the response object as an argument. In the parse callback, we loop through the quote elements using a CSS Selector …

0 credits | 409 pages | 1.70 MB | 1 year ago

Scrapy 2.11.1 Documentation

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                "author": quote.xpath("span/small/text()").get(),
                "text": quote.css("span.text::text").get(),
            }
        next_page = response.css('li.next a::attr("href")').get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)

Put this in a text file, name it something like quotes_spider.py and run the spider using the runspider command … (… the URL for quotes in humor category) and called the default callback method parse, passing the response object as an argument. In the parse callback, we loop through the quote elements using a CSS Selector …

0 credits | 425 pages | 1.79 MB | 1 year ago

Scrapy 2.11.1 Documentation

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                "author": quote.xpath("span/small/text()").get(),
                "text": quote.css("span.text::text").get(),
            }
        next_page = response.css('li.next a::attr("href")').get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)

Put this in a text file, name it something like quotes_spider.py and run the spider using the runspider command … (… the URL for quotes in humor category) and called the default callback method parse, passing the response object as an argument. In the parse callback, we loop through the quote elements using a CSS Selector …

0 credits | 425 pages | 1.76 MB | 1 year ago

488 results in total