The CSS Handbook
  Preface: The CSS Handbook follows the 80/20 rule: learn, in 20% of the time, the 80% of a topic. In particular, the goal is to get you up to speed quickly with CSS. … Enjoy!
  Contents (excerpt): 1. Preface · 2. Introduction to CSS · 2.1 What does CSS look like · 2.2 Semicolons · 2.3 Formatting and indentation · 3. A brief history of CSS · 4. Adding CSS to an HTML page · … · Setting a CSS Variable value using JavaScript · 19.5 Handling invalid values · 19.6 Browser support · 19.7 CSS Variables are case sensitive · 19.8 Math in CSS Variables · 19.9 Media queries with CSS Variables
  184 pages | 1.96 MB

CSS — 杨亮,《PHP语言程序设计》
  Slides introducing CSS for Web development (PC and mobile). Context: web servers (Apache, IIS), server-side languages (PHP, JSP, ASP), databases (MySQL, Oracle, Access), and the HTTP request/response cycle delivering html, css, and javascript. The three front-end layers: HTML defines the page's structure and content, CSS styles the HTML, and JavaScript adds interaction. CSS stands for Cascading Style Sheets; a rule takes the form selector { property1: value1; property2: ... }.
  25 pages | 2.68 MB

前端开发者指南(2017)
  Learning topics: the Domain Name System (DNS); HTTP/networks (including CORS and WebSockets); web hosting; front-end development; UI/interaction design; HTML and CSS; search engine optimization; JavaScript; web animation; the DOM, BOM, and jQuery; web fonts & icons. Tool categories: HTTP/network tools, code editors, browser power tools, HTML tools, CSS tools, DOM tools, JavaScript tools, static site builders, accessibility tools, application framework tools (desktop, mobile, tablet, etc.), progressive web app tools, scaffolding tools, and general front-end development tools.
  The author intends this book as a professional resource offering learning materials and development tools to aspiring and practicing front-end developers; it is equally useful to managers, CTOs, instructors, and recruiters exploring front-end practice. The content leans toward Web technologies (HTML, CSS, DOM, JavaScript) and the open-source technologies built directly on them; the materials cited and discussed are either best-in-class or popular solutions to a problem. The book is not a comprehensive catalog of every available front-end resource; its value lies in curating just enough categorized information…
  164 pages | 6.43 MB

Learning Gulp
  Contents (excerpt): Loading all the plugins from package.json; installing plugins for responsive images, CSS minification, and JS minification; image processing plugins; asset optimizer plugins; anatomy of tasks; Chapter 4: Concatenating files (concat all CSS files into one using gulp-concat; concat and uglify JS and CSS files); Chapter 5: Create a watcher (watcher task); installation and usage; Chapter 10: Minifying CSS (using gulp-clean-css and gulp-rename; Sass and CSS preprocessing with Gulp); Chapter 11: Minifying HTML.
  45 pages | 977.19 KB

Gulp 入门指南
  (…com/nimojs/gulp-book) gulp is a Node-based tool for automating front-end Web development that can greatly improve productivity. Front-end work involves much "repetitive work", such as minifying CSS/JS files, and this work follows patterns: find the patterns, write the gulp configuration, and gulp performs the repetitive work for you. Given an existing directory structure (└── js/), you can also watch the .js files under js/ and automatically re-minify a file whenever it changes; once gulp is running it keeps building the project for you. gulp can do much more, for example: 1. minify CSS; 2. compress images; 3. compile Sass/LESS; 4. compile CoffeeScript; 5. convert Markdown to HTML. Minifying CSS reduces file size and speeds up page loads. The pattern: find all .css files under css/, minify them, and write the minified files to dist/css/. Once you know how to minify JS with gulp, configuring CSS minification is easy.
  36 pages | 275.87 KB

Scrapy 1.6 Documentation
  From the tutorial spider (start_urls truncated):

      …toscrape.com/tag/humor/', ]

      def parse(self, response):
          for quote in response.css('div.quote'):
              yield {
                  'text': quote.css('span.text::text').get(),
                  'author': quote.xpath('span/small/text()').get(),
              }
          next_page = response.css('li.next a::attr("href")').get()
          if next_page is not None:
              yield response.follow(next_page, self.parse)

  …the response object as an argument. In the parse callback, we loop through the quote elements using a CSS Selector, yield a Python dict with the extracted quote text and author, look for a link to the next…
  295 pages | 1.18 MB

Scrapy 2.2 Documentation
  Same tutorial spider excerpt as above (with 'author' and 'text' in the opposite order). Also, from the feature overview: …sources using extended CSS selectors and XPath expressions, with helper methods to extract using regular expressions; an interactive shell console (IPython aware) for trying out the CSS and XPath expressions.
  348 pages | 1.35 MB

Scrapy 2.4 Documentation
  Same tutorial spider excerpt and feature overview as the Scrapy 2.2 entry above.
  354 pages | 1.39 MB

Scrapy 2.3 Documentation
  Same tutorial spider excerpt and feature overview as the Scrapy 2.2 entry above.
  352 pages | 1.36 MB

Scrapy 1.7 Documentation
  Same tutorial spider excerpt as the Scrapy 1.6 entry above.
  306 pages | 1.23 MB
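The Scrapy entries above all excerpt the same tutorial pattern: select each quote element with a CSS selector, yield a dict of its text and author, then follow the next-page link. As a rough illustration of the extraction logic using only Python's standard library (this is not Scrapy's API — Scrapy's selectors come from the parsel library — and the markup below is a simplified stand-in for quotes.toscrape.com), one could write:

```python
from html.parser import HTMLParser

# Simplified stand-in for the quotes.toscrape.com markup the spiders target.
HTML = """
<div class="quote">
  <span class="text">Some quote</span>
  <span><small class="author">Some Author</small></span>
</div>
<div class="quote">
  <span class="text">Another quote</span>
  <span><small class="author">Another Author</small></span>
</div>
<li class="next"><a href="/tag/humor/page/2/">Next</a></li>
"""

class QuoteExtractor(HTMLParser):
    """Collects {'text': ..., 'author': ...} dicts plus the next-page href."""

    def __init__(self):
        super().__init__()
        self.quotes = []          # one dict per div.quote
        self.next_page = None     # href of the li.next link, if any
        self._field = None        # field the current text node belongs to
        self._in_next = False     # inside an li.next element

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        cls = attrs.get("class", "")
        if tag == "div" and cls == "quote":
            self.quotes.append({})          # like matching div.quote
        elif tag == "span" and cls == "text":
            self._field = "text"            # like span.text::text
        elif tag == "small" and cls == "author":
            self._field = "author"          # like span/small/text()
        elif tag == "li" and cls == "next":
            self._in_next = True
        elif tag == "a" and self._in_next:
            self.next_page = attrs.get("href")   # like a::attr("href")

    def handle_endtag(self, tag):
        if tag in ("span", "small"):
            self._field = None
        if tag == "li":
            self._in_next = False

    def handle_data(self, data):
        if self._field and self.quotes:
            self.quotes[-1][self._field] = data

parser = QuoteExtractor()
parser.feed(HTML)
# parser.quotes is now a list of {'text': ..., 'author': ...} dicts,
# and parser.next_page holds '/tag/humor/page/2/'.
```

Scrapy's response.css('span.text::text').get() performs the equivalent class-and-text matching declaratively; the hand-rolled parser only makes explicit what those selectors match.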
1,000 results in total.