Guzzle PHP 5.3 Documentation
Guzzle is a PHP HTTP client that makes it easy to send HTTP requests and trivial to integrate with web services. Manages things like persistent connections, represents query strings as collections, and abstracts away the underlying HTTP transport layer. Can send both synchronous and asynchronous requests using the same interface without requiring a dependency on a specific event loop. Pluggable HTTP handlers allow Guzzle to integrate with any method you choose for sending HTTP requests over the wire (e.g., cURL, sockets, PHP's stream wrapper, non-blocking event loops like React [http://reactphp.org/], etc.). Guzzle makes it so...
0 points | 72 pages | 312.62 KB | 10 months ago

Guzzle PHP 7.0 Documentation
Guzzle is a PHP HTTP client that makes it easy to send HTTP requests and trivial to integrate with web services. Simple interface for building query strings, POST requests, streaming large uploads and downloads, using HTTP cookies, uploading JSON data, etc... Can send both synchronous and asynchronous requests using the same interface. Uses PSR-7 interfaces for requests, responses, and streams. This allows you to utilize other PSR-7 compatible libraries with Guzzle. Abstracts away the underlying HTTP transport, allowing you to write environment and transport agnostic code; i.e., no hard dependency...
0 points | 64 pages | 310.93 KB | 10 months ago

Scrapy 2.0 Documentation
...scraped data using different formats and storages. Requests and Responses: understand the classes used to represent HTTP requests and responses. Link Extractors: convenient classes to extract links to follow. ... famous quotes from the website http://quotes.toscrape.com, following the pagination:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = 'quotes'
        start_urls = ['http://quotes.toscrape.com/tag/humor/']
        ...

...the meantime. This also means that other requests can keep going even if some request fails or an error happens while handling it. While this enables you to do very fast crawls (sending multiple concurrent...
0 points | 419 pages | 637.45 KB | 1 year ago
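The spider in this and the following Scrapy snippets is cut off by the search excerpt. A minimal sketch of how the complete pagination-following spider typically looks, assuming the standard quotes-tutorial pattern from the Scrapy docs (the parse body and the CSS/XPath selectors are reconstructed for illustration, not taken from the truncated snippet; the .get() selector shorthand assumes Scrapy 1.8 or newer):

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = 'quotes'
        start_urls = ['http://quotes.toscrape.com/tag/humor/']

        def parse(self, response):
            # Emit one item per quote block on the page.
            for quote in response.css('div.quote'):
                yield {
                    'author': quote.xpath('span/small/text()').get(),
                    'text': quote.css('span.text::text').get(),
                }
            # Queue the next page, if any; Scrapy schedules it asynchronously.
            next_page = response.css('li.next a::attr("href")').get()
            if next_page is not None:
                yield response.follow(next_page, self.parse)

Saved as, say, quotes_spider.py (a hypothetical file name), this runs with: scrapy runspider quotes_spider.py -o quotes.json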
Guzzle PHP 6.5 Documentation
Guzzle is a PHP HTTP client that makes it easy to send HTTP requests and trivial to integrate with web services. Simple interface for building query strings, POST requests, streaming large uploads and downloads, using HTTP cookies, uploading JSON data, etc... Can send both synchronous and asynchronous requests using the same interface. Uses PSR-7 interfaces for requests, responses, and streams. This allows you to utilize other PSR-7 compatible libraries with Guzzle. Abstracts away the underlying HTTP transport, allowing you to write environment and transport agnostic code; i.e., no hard dependency...
0 points | 65 pages | 311.42 KB | 10 months ago

Scrapy 1.7 Documentation
...scraped data using different formats and storages. Requests and Responses: understand the classes used to represent HTTP requests and responses. Link Extractors: convenient classes to extract links to follow. ... famous quotes from the website http://quotes.toscrape.com, following the pagination:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = 'quotes'
        start_urls = ['http://quotes.toscrape.com/tag/humor/']
        ...

...the meantime. This also means that other requests can keep going even if some request fails or an error happens while handling it. While this enables you to do very fast crawls (sending multiple concurrent...
0 points | 391 pages | 598.79 KB | 1 year ago

Scrapy 1.5 Documentation
...scraped data using different formats and storages. Requests and Responses: understand the classes used to represent HTTP requests and responses. Link Extractors: convenient classes to extract links to follow. ... famous quotes from the website http://quotes.toscrape.com, following the pagination:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = ['http://quotes.toscrape.com/tag/humor/']
        ...

...the meantime. This also means that other requests can keep going even if some request fails or an error happens while handling it. While this enables you to do very fast crawls (sending multiple concurrent...
0 points | 361 pages | 573.24 KB | 1 year ago

Scrapy 1.4 Documentation
...scraped data using different formats and storages. Requests and Responses: understand the classes used to represent HTTP requests and responses. Link Extractors: convenient classes to extract links to follow. ... famous quotes from the website http://quotes.toscrape.com, following the pagination:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = ['http://quotes.toscrape.com/tag/humor/']
        ...

...the meantime. This also means that other requests can keep going even if some request fails or an error happens while handling it. While this enables you to do very fast crawls (sending multiple concurrent...
0 points | 394 pages | 589.10 KB | 1 year ago

Scrapy 2.4 Documentation
...scraped data using different formats and storages. Requests and Responses: understand the classes used to represent HTTP requests and responses. Link Extractors: convenient classes to extract links to follow. ... famous quotes from the website http://quotes.toscrape.com, following the pagination:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = 'quotes'
        start_urls = ['http://quotes.toscrape.com/tag/humor/']
        ...

...the meantime. This also means that other requests can keep going even if some request fails or an error happens while handling it. While this enables you to do very fast crawls (sending multiple concurrent...
0 points | 445 pages | 668.06 KB | 1 year ago

Scrapy 1.6 Documentation
...scraped data using different formats and storages. Requests and Responses: understand the classes used to represent HTTP requests and responses. Link Extractors: convenient classes to extract links to follow. ... famous quotes from the website http://quotes.toscrape.com, following the pagination:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = 'quotes'
        start_urls = ['http://quotes.toscrape.com/tag/humor/']
        ...

...the meantime. This also means that other requests can keep going even if some request fails or an error happens while handling it. While this enables you to do very fast crawls (sending multiple concurrent...
0 points | 374 pages | 581.88 KB | 1 year ago

Scrapy 1.4 Documentation
...scraped data using different formats and storages. Requests and Responses: understand the classes used to represent HTTP requests and responses. Link Extractors: convenient classes to extract links to follow. ... famous quotes from the website http://quotes.toscrape.com, following the pagination:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = ['http://quotes.toscrape.com/tag/humor/']
        ...

...the meantime. This also means that other requests can keep going even if some request fails or an error happens while handling it. While this enables you to do very fast crawls (sending multiple concurrent...
0 points | 353 pages | 566.69 KB | 1 year ago
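The recurring note that other requests "can keep going even if some request fails" refers to Scrapy's asynchronous scheduling. A small sketch of opting into per-request error handling with an errback; the spider name and the failing URL are invented for illustration:

    import scrapy

    class RobustSpider(scrapy.Spider):
        name = 'robust'  # hypothetical name

        def start_requests(self):
            # Each request carries its own errback: a failure is reported and
            # dropped, while the other concurrent requests continue unaffected.
            urls = [
                'http://quotes.toscrape.com/',
                'http://quotes.toscrape.com/no-such-page/',  # invented failing URL
            ]
            for url in urls:
                yield scrapy.Request(url, callback=self.parse, errback=self.on_error)

        def parse(self, response):
            yield {'url': response.url, 'status': response.status}

        def on_error(self, failure):
            # Log the Failure and move on; the crawl itself keeps running.
            self.logger.error('Request failed: %r', failure)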
666 results in total