django cms 3.7.x Documentation
urlpatterns in the project’s urls.py:

    urlpatterns += i18n_patterns(
        url(r'^admin/', include(admin.site.urls)),
        url(r'^polls/', include('polls.urls')),
        url(r'^', include('cms.urls')),
    )

Note that it must be included before the line for the django CMS URLs. django CMS’s URL pattern needs to be last, because it “swallows up” anything that hasn’t already been matched by a previous pattern. Now run the project. … 5. Apphooks: Right now, our Django Polls application is statically hooked into the project’s urls.py. This is all right, but we can do more by attaching applications to django CMS pages. 5.1. Create…
409 pages | 1.67 MB | 1 year ago
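This excerpt breaks off at “5.1. Create” (creating the apphook). A minimal sketch of such an apphook, assuming the polls app from the tutorial and the CMSApp API of django CMS 3.x; the class name PollsApphook and the display name are illustrative. It would live in a cms_apps.py module inside the polls app:

    from cms.app_base import CMSApp
    from cms.apphook_pool import apphook_pool

    @apphook_pool.register  # makes the apphook selectable in the page admin
    class PollsApphook(CMSApp):
        app_name = "polls"           # application namespace for URL reversing
        name = "Polls Application"   # label shown in the page admin

        def get_urls(self, page=None, language=None, **kwargs):
            # URLconf module(s) this apphook serves when attached to a page
            return ["polls.urls"]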
Django CMS 3.9.x Documentation
the project’s urls.py:

    urlpatterns += i18n_patterns(
        re_path(r'^admin/', include(admin.site.urls)),
        re_path(r'^polls/', include('polls.urls')),
        re_path(r'^', include('cms.urls')),
    )

Note that it must be included before the line for the django CMS URLs. django CMS’s URL pattern needs to be last, because it “swallows up” anything that hasn’t already been matched by a previous pattern. Now run the project. … 5. Apphooks: Right now, our Django Polls application is statically hooked into the project’s urls.py. This is all right, but we can do more by attaching applications to django CMS pages. 5.1. Create…
417 pages | 1.68 MB | 5 months ago
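For context, a complete minimal urls.py implementing the ordering rule above might look as follows; a sketch assuming Django 2.0+, where the admin URLs are passed directly as admin.site.urls rather than wrapped in include():

    from django.conf.urls.i18n import i18n_patterns
    from django.contrib import admin
    from django.urls import include, re_path

    urlpatterns = i18n_patterns(
        re_path(r'^admin/', admin.site.urls),
        re_path(r'^polls/', include('polls.urls')),
        # catch-all CMS pattern: must stay last, since it matches
        # anything the earlier patterns did not
        re_path(r'^', include('cms.urls')),
    )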
Django CMS 3.8.x Documentation
the project’s urls.py:

    urlpatterns += i18n_patterns(
        re_path(r'^admin/', include(admin.site.urls)),
        re_path(r'^polls/', include('polls.urls')),
        re_path(r'^', include('cms.urls')),
    )

Note that it must be included before the line for the django CMS URLs. django CMS’s URL pattern needs to be last, because it “swallows up” anything that hasn’t already been matched by a previous pattern. Now run the project. … 5. Apphooks: Right now, our Django Polls application is statically hooked into the project’s urls.py. This is all right, but we can do more by attaching applications to django CMS pages. 5.1. Create…
413 pages | 1.67 MB | 5 months ago
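The include('polls.urls') above presumes the polls app ships its own URLconf. A minimal sketch of that module, assuming the class-based views of the Django polls tutorial (IndexView and DetailView are that tutorial’s names, not part of django CMS):

    # polls/urls.py
    from django.urls import path
    from . import views

    app_name = 'polls'  # application namespace used for URL reversing
    urlpatterns = [
        path('', views.IndexView.as_view(), name='index'),
        path('<int:pk>/', views.DetailView.as_view(), name='detail'),
    ]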
Scrapy 1.3 Documentation
following the pagination:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = [
            'http://quotes.toscrape.com/tag/humor/',
        ]

        def parse(self, response):
            for quote in response…

…it and ran it through its crawler engine. The crawl started by making requests to the URLs defined in the start_urls attribute (in this case, only the URL for quotes in the humor category) and called the default… …data extraction library written on top of lxml
• w3lib, a multi-purpose helper for dealing with URLs and web page encodings
• twisted, an asynchronous networking framework
• cryptography and pyOpenSSL…
272 pages | 1.11 MB | 1 year ago
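The excerpt cuts the spider off mid-method. A completed version, assuming the Scrapy 1.x API (extract_first() rather than the later get()) and the CSS classes used on quotes.toscrape.com:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = ['http://quotes.toscrape.com/tag/humor/']

        def parse(self, response):
            # one item per quote block on the page
            for quote in response.css('div.quote'):
                yield {
                    'text': quote.css('span.text::text').extract_first(),
                    'author': quote.css('small.author::text').extract_first(),
                }
            # follow the "Next" pagination link, if present
            next_page = response.css('li.next a::attr(href)').extract_first()
            if next_page is not None:
                yield scrapy.Request(response.urljoin(next_page), callback=self.parse)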
django cms 3.5.x Documentation
the project’s urls.py:

    url(r'^polls/', include('polls.urls', namespace='polls')),

Make sure this line is included before the line for the django-cms urls:

    url(r'^', include('cms.urls')),

django CMS’s… …admin.py, models.py, templates/, tests.py, urls.py, views.py… Let’s add this application to our project. Add 'polls' to the end of the project’s…
403 pages | 1.69 MB | 1 year ago
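Putting the two excerpted lines together, the relevant part of the project’s urls.py would read as follows; a sketch assuming the Django 1.11-era django.conf.urls API that django CMS 3.5 targets (passing namespace= to include() this way was removed in Django 2.0):

    from django.conf.urls import include, url

    urlpatterns = [
        url(r'^polls/', include('polls.urls', namespace='polls')),
        # django CMS catch-all: must remain last
        url(r'^', include('cms.urls')),
    ]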
Scrapy 2.4 Documentation
following the pagination:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = 'quotes'
        start_urls = [
            'http://quotes.toscrape.com/tag/humor/',
        ]

        def parse(self, response):
            for quote in response…

…it and ran it through its crawler engine. The crawl started by making requests to the URLs defined in the start_urls attribute (in this case, only the URL for quotes in the humor category) and called the default… …data extraction library written on top of lxml
• w3lib, a multi-purpose helper for dealing with URLs and web page encodings
• twisted, an asynchronous networking framework
• cryptography and pyOpenSSL…
354 pages | 1.39 MB | 1 year ago
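A completed version of the same spider in the Scrapy 2.x idiom (a sketch: get() supersedes extract_first(), and response.follow() resolves relative pagination links):

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = 'quotes'
        start_urls = ['http://quotes.toscrape.com/tag/humor/']

        def parse(self, response):
            for quote in response.css('div.quote'):
                yield {
                    'text': quote.css('span.text::text').get(),
                    'author': quote.css('small.author::text').get(),
                }
            # response.follow accepts a relative URL and yields a new Request
            next_page = response.css('li.next a::attr(href)').get()
            if next_page is not None:
                yield response.follow(next_page, callback=self.parse)

Saved as quotes_spider.py, it can be run without a full project via scrapy runspider quotes_spider.py -o quotes.json.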
Scrapy 2.3 Documentation
following the pagination:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = 'quotes'
        start_urls = [
            'http://quotes.toscrape.com/tag/humor/',
        ]

        def parse(self, response):
            for quote in response…

…it and ran it through its crawler engine. The crawl started by making requests to the URLs defined in the start_urls attribute (in this case, only the URL for quotes in the humor category) and called the default… …data extraction library written on top of lxml
• w3lib, a multi-purpose helper for dealing with URLs and web page encodings
• twisted, an asynchronous networking framework
• cryptography and pyOpenSSL…
352 pages | 1.36 MB | 1 year ago
Scrapy 2.2 Documentation
following the pagination:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = 'quotes'
        start_urls = [
            'http://quotes.toscrape.com/tag/humor/',
        ]

        def parse(self, response):
            for quote in response…

…it and ran it through its crawler engine. The crawl started by making requests to the URLs defined in the start_urls attribute (in this case, only the URL for quotes in the humor category) and called the default… …data extraction library written on top of lxml
• w3lib, a multi-purpose helper for dealing with URLs and web page encodings
• twisted, an asynchronous networking framework
• cryptography and pyOpenSSL…
348 pages | 1.35 MB | 1 year ago
Scrapy 1.2 Documentation
following the pagination:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = [
            'http://quotes.toscrape.com/tag/humor/',
        ]

        def parse(self, response):
            for quote in response…

…it and ran it through its crawler engine. The crawl started by making requests to the URLs defined in the start_urls attribute (in this case, only the URL for quotes in the humor category) and called the default… …data extraction library written on top of lxml
• w3lib, a multi-purpose helper for dealing with URLs and web page encodings
• twisted, an asynchronous networking framework
• cryptography and pyOpenSSL…
266 pages | 1.10 MB | 1 year ago
Scrapy 1.1 Documentation
following the pagination:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = [
            'http://quotes.toscrape.com/tag/humor/',
        ]

        def parse(self, response):
            for quote in response…

…it and ran it through its crawler engine. The crawl started by making requests to the URLs defined in the start_urls attribute (in this case, only the URL for quotes in the humor category) and called the default… …Spider):

        name = "quotes"

        def start_requests(self):
            urls = [
                'http://quotes.toscrape.com/page/1/',
                'http://quotes.toscrape.com/page/2/',
            ]
            for url in urls:
                yield scrapy.Request(url=url, callback=self.parse)

260 pages | 1.12 MB | 1 year ago
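The last excerpt shows start_requests() but not the parse() callback it references. Completed along the lines of the Scrapy 1.1 tutorial (the file-saving body is the tutorial’s choice, not the only option):

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"

        def start_requests(self):
            # explicit request generation instead of the start_urls shortcut
            urls = [
                'http://quotes.toscrape.com/page/1/',
                'http://quotes.toscrape.com/page/2/',
            ]
            for url in urls:
                yield scrapy.Request(url=url, callback=self.parse)

        def parse(self, response):
            # save each page's raw HTML to a local file named after the page number
            page = response.url.split("/")[-2]
            filename = 'quotes-%s.html' % page
            with open(filename, 'wb') as f:
                f.write(response.body)
            self.log('Saved file %s' % filename)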
1,000 results in total