Scrapy 1.6 Documentation
… ....: <li>6</li> ....: </ul>""") >>> xp = lambda x: sel.xpath(x).getall() This gets all first <li> elements under whatever it is its parent: >>> … stop_on_none=False. Example: >>> from scrapy.loader.processors import Compose >>> proc = Compose(lambda v: v[0], str.upper) >>> proc(['hello', 'world']) 'HELLO' Each function can optionally receive a … s3://mybucket/path/to/export.csv – s3://aws_key:aws_secret@mybucket/path/to/export.csv • Required external libraries: botocore (Python 2 and Python 3) or boto (Python 2 only) The AWS credentials can be passed as …
0 points | 295 pages | 1.18 MB | 1 year ago
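The code quoted in this snippet is cut off mid-example; for context, here is a minimal sketch of the selector comparison it refers to (//li[1] versus (//li)[1]), written against the public Selector API rather than copied verbatim from the 1.6 manual:

from scrapy import Selector

sel = Selector(text="""
    <ul class="list">
        <li>1</li>
        <li>2</li>
        <li>3</li>
    </ul>
    <ul class="list">
        <li>4</li>
        <li>5</li>
        <li>6</li>
    </ul>""")

xp = lambda x: sel.xpath(x).getall()

# //li[1] picks the first <li> under each parent <ul>:
print(xp("//li[1]"))    # ['<li>1</li>', '<li>4</li>']
# (//li)[1] picks the first <li> in the whole document:
print(xp("(//li)[1]"))  # ['<li>1</li>']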
Scrapy 1.7 Documentation
… ....: <li>4</li> ....: <li>5</li> ....: <li>6</li> ....: … <li> elements under whatever it is its parent: >>> … stop_on_none=False. Example: >>> from scrapy.loader.processors import Compose >>> proc = Compose(lambda v: v[0], str.upper) >>> proc(['hello', 'world']) 'HELLO' Each function can optionally receive a … stored on Amazon S3. • URI scheme: s3 • Example URIs: – s3://mybucket/path/to/export.csv – s3://aws_key:aws_secret@mybucket/path/to/export.csv
0 points | 306 pages | 1.23 MB | 1 year ago
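The Compose fragment above chains functions left to right over a whole input value; a runnable sketch, assuming the Scrapy 1.x import path scrapy.loader.processors shown in the excerpt (the lenient example and its 'fallback' value are illustrative):

from scrapy.loader.processors import Compose

# Functions are applied in order; the output of one is the input of the next.
proc = Compose(lambda v: v[0], str.upper)
print(proc(['hello', 'world']))   # HELLO

# By default Compose stops as soon as a function returns None.
# Passing stop_on_none=False keeps calling the remaining functions with None.
lenient = Compose(lambda v: None,
                  lambda v: 'fallback' if v is None else v,
                  stop_on_none=False)
print(lenient(['anything']))      # fallback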
Scrapy 1.8 Documentation
… ....: <li>4</li> ....: <li>5</li> ....: <li>6</li> ....: … <li> elements under whatever it is its parent: >>> … stop_on_none=False. Example: >>> from scrapy.loader.processors import Compose >>> proc = Compose(lambda v: v[0], str.upper) >>> proc(['hello', 'world']) 'HELLO' Each function can optionally receive a … stored on Amazon S3. • URI scheme: s3 • Example URIs: – s3://mybucket/path/to/export.csv – s3://aws_key:aws_secret@mybucket/path/to/export.csv • Required external libraries: botocore (Python 2 and Python …
0 points | 335 pages | 1.44 MB | 1 year ago
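The s3:// URIs above belong to the feed-export section; a hedged settings.py sketch of the two usual ways of supplying the credentials (bucket name, key and secret are placeholders; FEED_URI and FEED_FORMAT are the 1.x-era setting names):

# settings.py (sketch): export scraped items to S3 as CSV.
FEED_FORMAT = "csv"

# Option 1: embed the credentials in the URI, as in the excerpt:
# FEED_URI = "s3://aws_key:aws_secret@mybucket/path/to/export.csv"

# Option 2: keep the URI clean and pass the credentials through settings
# (picked up by the S3 feed storage backend; requires botocore or boto):
FEED_URI = "s3://mybucket/path/to/export.csv"
AWS_ACCESS_KEY_ID = "aws_key"           # placeholder
AWS_SECRET_ACCESS_KEY = "aws_secret"    # placeholder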
Scrapy 1.0 Documentation
… it can also be used to extract data using APIs (such as Amazon Associates Web Services [http://aws.amazon.com/associates/]) or as a general purpose web crawler. … Walk-through of an example spider … change the export format (XML or CSV, for example) or the storage backend (FTP or Amazon S3 [http://aws.amazon.com/s3/], for example). You can also write an item pipeline to store the items in a database … ....: <li>4</li> ....: <li>5</li> ....: <li>6</li> ....: </ul>""") >>> xp = lambda x: sel.xpath(x).extract() This gets all first <li> elements under whatever it is its parent: >>>
0 points | 303 pages | 533.88 KB | 1 year ago
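The excerpt mentions the walk-through of an example spider; a minimal, self-contained sketch of such a spider (the quotes site, CSS selectors and field names are illustrative, not taken from the 1.0 manual):

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"                                # unique spider name
    start_urls = ["http://quotes.toscrape.com"]    # illustrative start page

    def parse(self, response):
        # Yield one plain dict per quote block found on the page.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").extract_first(),
                "author": quote.css("small.author::text").extract_first(),
            }
        # Follow the "next page" link, if any, and parse it the same way.
        next_page = response.css("li.next a::attr(href)").extract_first()
        if next_page is not None:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)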
Scrapy 1.0 Documentation
… ....: <li>4</li> ....: <li>5</li> ....: <li>6</li> ....: … <li> elements under whatever it is its parent: >>> … stop_on_none=False. Example: >>> from scrapy.loader.processors import Compose >>> proc = Compose(lambda v: v[0], str.upper) >>> proc(['hello', 'world']) 'HELLO' Each function can optionally receive a … – s3://aws_key:aws_secret@mybucket/path/to/export.csv • Required external libraries: boto The AWS credentials can be passed as user/password in the URI …
0 points | 244 pages | 1.05 MB | 1 year ago
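The sentence above trails off at "Each function can optionally receive a …"; in the Compose documentation this refers to an optional loader_context parameter. A small sketch of what that looks like in practice (the to_unit function and the 'unit' context key are made up for illustration):

from scrapy.loader.processors import Compose


def to_unit(value, loader_context):
    # A processor that declares a loader_context argument receives the active
    # loader context; here it reads an illustrative 'unit' key from that context.
    unit = loader_context.get("unit", "kg")
    return "%s %s" % (value[0], unit)


# Keyword arguments passed to Compose become default loader-context values.
proc = Compose(to_unit, unit="lb")
print(proc(["10"]))   # 10 lb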
Scrapy 1.7 Documentation
… change the export format (XML or CSV, for example) or the storage backend (FTP or Amazon S3 [https://aws.amazon.com/s3/], for example). You can also write an item pipeline to store the items in a database … ....: <li>4</li> ....: <li>5</li> ....: <li>6</li> ....: </ul>""") >>> xp = lambda x: sel.xpath(x).getall() This gets all first <li> elements under whatever it is its parent: >>> … stop_on_none=False. Example: >>> from scrapy.loader.processors import Compose >>> proc = Compose(lambda v: v[0], str.upper) >>> proc(['hello', 'world']) 'HELLO' Each function can optionally receive a …
0 points | 391 pages | 598.79 KB | 1 year ago
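The excerpt notes that an item pipeline can store items in a database; a hedged sketch of such a pipeline using the standard-library sqlite3 module (table layout, field names and the items.db path are invented for the example):

import sqlite3


class SQLitePipeline(object):
    """Hypothetical item pipeline that stores scraped items in a local SQLite file."""

    def open_spider(self, spider):
        self.conn = sqlite3.connect("items.db")   # path invented for the example
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS items (text TEXT, author TEXT)")

    def close_spider(self, spider):
        self.conn.commit()
        self.conn.close()

    def process_item(self, item, spider):
        # process_item() must return the item so later pipelines still receive it.
        self.conn.execute(
            "INSERT INTO items (text, author) VALUES (?, ?)",
            (item.get("text"), item.get("author")))
        return item

It would be enabled through the ITEM_PIPELINES setting, for example ITEM_PIPELINES = {'myproject.pipelines.SQLitePipeline': 300}, where the dotted path is hypothetical.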
Scrapy 0.12 Documentation
… it can also be used to extract data using APIs (such as Amazon Associates Web Services [http://aws.amazon.com/associates/]) or as a general purpose web crawler. The purpose of this document is to introduce … change the export format (XML or CSV, for example) or the storage backend (FTP or Amazon S3 [http://aws.amazon.com/s3/], for example). You can also write an item pipeline to store the items in a database … a new one, or return None to ignore the link altogether. If not given, process_value defaults to lambda x: x. For example, to extract links from this code: <a href="http://www.example.com/product.php?id …
0 points | 228 pages | 462.54 KB | 1 year ago
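The snippet quotes the link-extractor documentation on process_value, whose default is lambda x: x; a sketch of a custom process_value callable (the javascript:goToPage href pattern is an illustrative case, and the modern scrapy.linkextractors import path is used rather than the 0.12-era one):

import re

from scrapy.linkextractors import LinkExtractor


def process_value(value):
    # Illustrative case: pull the real URL out of an href such as
    # <a href="javascript:goToPage('../other/page.html')">Link</a>.
    # Returning None instead would tell the extractor to ignore the link.
    m = re.search(r"javascript:goToPage\('(.*?)'", value)
    if m:
        return m.group(1)   # use the captured URL instead of the raw href
    return value            # leave ordinary hrefs untouched (default is lambda x: x)


link_extractor = LinkExtractor(process_value=process_value)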
Scrapy 1.2 Documentation
… ....: <li>4</li> ....: <li>5</li> ....: <li>6</li> ....: … <li> elements under whatever it is its parent: >>> … stop_on_none=False. Example: >>> from scrapy.loader.processors import Compose >>> proc = Compose(lambda v: v[0], str.upper) >>> proc(['hello', 'world']) 'HELLO' Each function can optionally receive a … URIs: – s3://mybucket/path/to/export.csv – s3://aws_key:aws_secret@mybucket/path/to/export.csv • Required external libraries: botocore or boto The AWS credentials can be passed as user/password in the …
0 points | 266 pages | 1.10 MB | 1 year ago
Scrapy 1.1 Documentation
… ....: <li>4</li> ....: <li>5</li> ....: <li>6</li> ....: … <li> elements under whatever it is its parent: >>> … stop_on_none=False. Example: >>> from scrapy.loader.processors import Compose >>> proc = Compose(lambda v: v[0], str.upper) >>> proc(['hello', 'world']) 'HELLO' Each function can optionally receive a … URIs: – s3://mybucket/path/to/export.csv – s3://aws_key:aws_secret@mybucket/path/to/export.csv • Required external libraries: botocore or boto The AWS credentials can be passed as user/password in the …
0 points | 260 pages | 1.12 MB | 1 year ago
Scrapy 1.3 Documentation
… ....: <li>4</li> ....: <li>5</li> ....: <li>6</li> ....: … <li> elements under whatever it is its parent: >>> … stop_on_none=False. Example: >>> from scrapy.loader.processors import Compose >>> proc = Compose(lambda v: v[0], str.upper) >>> proc(['hello', 'world']) 'HELLO' Each function can optionally receive a … URIs: – s3://mybucket/path/to/export.csv – s3://aws_key:aws_secret@mybucket/path/to/export.csv • Required external libraries: botocore or boto The AWS credentials can be passed as user/password in the …
0 points | 272 pages | 1.11 MB | 1 year ago
62 results in total