Scrapy 1.0 Documentation
Chapter 5. Solving specific problems (Scrapy Documentation, Release 1.0.7). Job directory: To enable persistence support, you just need to define a job directory through the JOBDIR setting. This directory will be used for storing the state of a single job. How to use it: To start a spider with persistence support enabled, run it like this: scrapy crawl somespider -s JOBDIR=crawls/somespider-1 … then self.state['items_count'] = self.state.get('items_count', 0) + 1 … Persistence gotchas: There are a few things to keep in mind if you want to be able to use the Scrapy persistence support. Cookies expiration: Cookies may expire …
0 credits | 244 pages | 1.05 MB | 1 year ago
Scrapy 0.14 Documentation
… keeps some spider state (key/value pairs) persistent between batches. Job directory: To enable persistence support, you just need to define a job directory through the JOBDIR setting. This directory will be used for storing the state of a single job. How to use it: To start a spider with persistence support enabled, run it like this: scrapy crawl somespider -s JOBDIR=crawls/somespider-1 … then self.state['items_count'] = self.state.get('items_count', 0) + 1 … Persistence gotchas: There are a few things to keep in mind if you want to be able to use the Scrapy persistence support. Cookies expiration: Cookies may expire …
0 credits | 235 pages | 490.23 KB | 1 year ago
Scrapy 0.14 Documentation
… some spider state (key/value pairs) persistent between batches. 5.8.1 Job directory: To enable persistence support, you just need to define a job directory through the JOBDIR setting. This directory will be used for storing the state of a single job. 5.8.2 How to use it: To start a spider with persistence support enabled, run it like this: scrapy crawl somespider -s JOBDIR=crawls/somespider-1 … then self.state['items_count'] = self.state.get('items_count', 0) + 1 … 5.8.4 Persistence gotchas: There are a few things to keep in mind if you want to be able to use the Scrapy persistence support. Cookies expiration: Cookies may expire …
0 credits | 179 pages | 861.70 KB | 1 year ago
Scrapy 1.2 Documentation
Chapter 5. Solving specific problems (Scrapy Documentation, Release 1.2.3). Job directory: To enable persistence support, you just need to define a job directory through the JOBDIR setting. This directory will be used for storing the state of a single job. How to use it: To start a spider with persistence support enabled, run it like this: scrapy crawl somespider -s JOBDIR=crawls/somespider-1 … then self.state['items_count'] = self.state.get('items_count', 0) + 1 … Persistence gotchas: There are a few things to keep in mind if you want to be able to use the Scrapy persistence support. Cookies expiration: Cookies may expire …
0 credits | 266 pages | 1.10 MB | 1 year ago
Scrapy 1.1 Documentation
… keeps some spider state (key/value pairs) persistent between batches. Job directory: To enable persistence support, you just need to define a job directory through the JOBDIR setting. This directory will be used for storing the state of a single job. How to use it: To start a spider with persistence support enabled, run it like this: scrapy crawl somespider -s JOBDIR=crawls/somespider-1 … then self.state['items_count'] = self.state.get('items_count', 0) + 1 … Persistence gotchas: There are a few things to keep in mind if you want to be able to use the Scrapy persistence support. Cookies expiration: Cookies may expire …
0 credits | 260 pages | 1.12 MB | 1 year ago
Scrapy 1.0 Documentation
… keeps some spider state (key/value pairs) persistent between batches. Job directory: To enable persistence support, you just need to define a job directory through the JOBDIR setting. This directory will be used for storing the state of a single job. How to use it: To start a spider with persistence support enabled, run it like this: scrapy crawl somespider -s JOBDIR=crawls/somespider-1 … then self.state['items_count'] = self.state.get('items_count', 0) + 1 … Persistence gotchas: There are a few things to keep in mind if you want to be able to use the Scrapy persistence support. Cookies expiration: Cookies may expire …
0 credits | 303 pages | 533.88 KB | 1 year ago
Scrapy 1.3 Documentation
… keeps some spider state (key/value pairs) persistent between batches. Job directory: To enable persistence support, you just need to define a job directory through the JOBDIR setting. This directory will be used for storing the state of a single job. How to use it: To start a spider with persistence support enabled, run it like this: scrapy crawl somespider -s JOBDIR=crawls/somespider-1 … then self.state['items_count'] = self.state.get('items_count', 0) + 1 … Persistence gotchas: There are a few things to keep in mind if you want to be able to use the Scrapy persistence support. Cookies expiration: Cookies may expire …
0 credits | 272 pages | 1.11 MB | 1 year ago
Scrapy 1.1 Documentation
… keeps some spider state (key/value pairs) persistent between batches. Job directory: To enable persistence support, you just need to define a job directory through the JOBDIR setting. This directory will be used for storing the state of a single job. How to use it: To start a spider with persistence support enabled, run it like this: scrapy crawl somespider -s JOBDIR=crawls/somespider-1 … then self.state['items_count'] = self.state.get('items_count', 0) + 1 … Persistence gotchas: There are a few things to keep in mind if you want to be able to use the Scrapy persistence support. Cookies expiration: Cookies may expire …
0 credits | 322 pages | 582.29 KB | 1 year ago
Scrapy 1.5 Documentation
… some spider state (key/value pairs) persistent between batches. 5.13.1 Job directory: To enable persistence support, you just need to define a job directory through the JOBDIR setting. This directory will be used for storing the state of a single job. 5.13.2 How to use it: To start a spider with persistence support enabled, run it like this: scrapy crawl somespider -s JOBDIR=crawls/somespider-1 … then self.state['items_count'] = self.state.get('items_count', 0) + 1 … 5.13.4 Persistence gotchas: There are a few things to keep in mind if you want to be able to use the Scrapy persistence support. Cookies expiration: Cookies may expire …
0 credits | 285 pages | 1.17 MB | 1 year ago
Scrapy 1.6 Documentation
… some spider state (key/value pairs) persistent between batches. 5.12.1 Job directory: To enable persistence support, you just need to define a job directory through the JOBDIR setting. This directory will … 5.12.2 How to use it: To start a spider with persistence support enabled, run it like this: scrapy crawl somespider -s JOBDIR=crawls/somespider-1 … then self.state['items_count'] = self.state.get('items_count', 0) + 1 … 5.12.4 Persistence gotchas: There are a few things to keep in mind if you want to be able to use the Scrapy persistence support. Cookies expiration: Cookies may expire …
0 credits | 295 pages | 1.18 MB | 1 year ago
共 194 条
- 1
- 2
- 3
- 4
- 5
- 6
- 20