Scrapy 2.6 Documentation
…started by making requests to the URLs defined in the start_urls attribute (in this case, only the URL for quotes in the humor category) and called the default callback method parse, passing the response object…

            'https://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = f'quotes-{page}.html'

        …com/page/1/',
        'https://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = f'quotes-{page}.html'
        with open(filename, 'wb') as f:
            f.write(response.body)

0 credits | 384 pages | 1.63 MB | 1 year ago

Scrapy 2.9 Documentation
…started by making requests to the URLs defined in the start_urls attribute (in this case, only the URL for quotes in the humor category) and called the default callback method parse, passing the response object…

            "https://quotes.toscrape.com/page/2/",
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = f"quotes-{page}.html"

        …com/page/1/",
        "https://quotes.toscrape.com/page/2/",
    ]

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = f"quotes-{page}.html"
        Path(filename).write_bytes(response.body)

The parse()…
0 credits | 409 pages | 1.70 MB | 1 year ago

Scrapy 2.8 Documentation
…started by making requests to the URLs defined in the start_urls attribute (in this case, only the URL for quotes in the humor category) and called the default callback method parse, passing the response object…

            'https://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = f'quotes-{page}.html'

        …com/page/1/',
        'https://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = f'quotes-{page}.html'
        Path(filename).write_bytes(response.body)

The parse()…
0 credits | 405 pages | 1.69 MB | 1 year ago

Scrapy 2.10 Documentation
…started by making requests to the URLs defined in the start_urls attribute (in this case, only the URL for quotes in the humor category) and called the default callback method parse, passing the response object…

            "https://quotes.toscrape.com/page/2/",
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = f"quotes-{page}.html"

        …com/page/1/",
        "https://quotes.toscrape.com/page/2/",
    ]

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = f"quotes-{page}.html"
        Path(filename).write_bytes(response.body)

The parse()…
0 credits | 419 pages | 1.73 MB | 1 year ago

Scrapy 2.7 Documentation
…started by making requests to the URLs defined in the start_urls attribute (in this case, only the URL for quotes in the humor category) and called the default callback method parse, passing the response object…

            'https://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = f'quotes-{page}.html'

    …
    ]

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = f'quotes-{page}.html'
        with open(filename, 'wb') as f:
            f.write(response.body)

0 credits | 401 pages | 1.67 MB | 1 year ago

Scrapy 2.4 Documentation
…started by making requests to the URLs defined in the start_urls attribute (in this case, only the URL for quotes in the humor category) and called the default callback method parse, passing the response object…

            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = f'quotes-{page}.html'

        …com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = f'quotes-{page}.html'
        with open(filename, 'wb') as f:
            f.write(response.body)

0 credits | 354 pages | 1.39 MB | 1 year ago

Scrapy 1.8 Documentation
…started by making requests to the URLs defined in the start_urls attribute (in this case, only the URL for quotes in the humor category) and called the default callback method parse, passing the response object…

            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response…

0 credits | 335 pages | 1.44 MB | 1 year ago

Scrapy 1.6 Documentation
…started by making requests to the URLs defined in the start_urls attribute (in this case, only the URL for quotes in the humor category) and called the default callback method parse, passing the response object…

            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' %…

        …com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response…

0 credits | 295 pages | 1.18 MB | 1 year ago

Scrapy 2.3 Documentation
…started by making requests to the URLs defined in the start_urls attribute (in this case, only the URL for quotes in the humor category) and called the default callback method parse, passing the response object…

            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' %…

        …com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response…

0 credits | 352 pages | 1.36 MB | 1 year ago

Scrapy 2.2 Documentation
…started by making requests to the URLs defined in the start_urls attribute (in this case, only the URL for quotes in the humor category) and called the default callback method parse, passing the response object…

            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' %…

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response…

0 credits | 348 pages | 1.35 MB | 1 year ago
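
All ten results excerpt the same tutorial spider, which appears above only in fragments. For reference, here is a sketch of the complete spider those fragments come from, assuming the standard QuotesSpider example from the recent Scrapy tutorials (the class name, the name attribute, the pathlib import, and the self.log() call are taken from that tutorial rather than from the snippets):

    from pathlib import Path

    import scrapy


    class QuotesSpider(scrapy.Spider):
        name = "quotes"

        def start_requests(self):
            # Initial requests; each response is passed to the parse() callback.
            urls = [
                "https://quotes.toscrape.com/page/1/",
                "https://quotes.toscrape.com/page/2/",
            ]
            for url in urls:
                yield scrapy.Request(url=url, callback=self.parse)

        def parse(self, response):
            # Name the output file after the page number in the URL and
            # save the raw HTML body to disk.
            page = response.url.split("/")[-2]
            filename = f"quotes-{page}.html"
            Path(filename).write_bytes(response.body)
            self.log(f"Saved file {filename}")

Saved to a file such as quotes_spider.py, it can be run with scrapy runspider quotes_spider.py, or with scrapy crawl quotes from inside a Scrapy project.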
62 results in total
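
The second fragment in most of the snippets comes from the shorter variant of the same spider, in which the start pages are listed in the start_urls class attribute and Scrapy generates the initial requests itself, calling parse() by default. A sketch of that variant in the older style shown by the 1.6, 1.8, 2.2, and 2.3 results (percent formatting and open() instead of pathlib); the class name and name attribute are again assumed from the tutorial:

    import scrapy


    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        # With start_urls defined, Scrapy creates the first requests itself
        # and sends each response to the default parse() callback.
        start_urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]

        def parse(self, response):
            # Save each downloaded page as quotes-1.html, quotes-2.html, ...
            page = response.url.split("/")[-2]
            filename = 'quotes-%s.html' % page
            with open(filename, 'wb') as f:
                f.write(response.body)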