Keras Tutorial
About the Tutorial: Keras is an open source deep learning framework for Python. It was developed by an artificial intelligence researcher and is used, together with backends such as TensorFlow and Theano, for creating deep learning models.
Overview of Keras: Keras runs on top of open source machine learning libraries like TensorFlow, Theano or the Cognitive Toolkit (CNTK). Keras is a highly powerful and dynamic framework and comes with the following advantages: larger community support, ease of testing, and the fact that Keras neural networks are written in Python, which makes things simpler.
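To make the overview concrete, here is a minimal sketch of a Keras model running on the TensorFlow backend. It is not code from the tutorial; the layer sizes and the synthetic data are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

# A tiny fully connected classifier, showing how compactly a network
# is expressed in Keras (here on the TensorFlow backend).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic data stands in for a real dataset.
x = np.random.rand(128, 16).astype("float32")
y = np.random.randint(0, 3, size=(128,))
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
model.summary()
```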
《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
The Huffman code mapping is transmitted along with the encoded data (Figure 2-1: Huffman Encoding & Huffman Tree). When decoding the encoded data, we look up the code in the lookup table to retrieve the symbols. In Figure 2-2, the cat on the left is a high quality image; the cat on the right is a lower quality compressed image. Both images might serve their purpose equally well, but the compressed image requires far less storage and bandwidth.

    import matplotlib.pyplot as plt
    img = plt.imread('pia23378-16.jpg') / 255.0
    plt.imshow(img)

Figure 2-6: Mars Curiosity Rover. Dimensions: 586x1041. Now, let's simulate the process of transmission by quantization, followed by dequantization.
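The chapter then simulates lossy transmission of this image. The snippet below is a minimal sketch of that idea rather than the book's exact code: it linearly quantizes floating point pixel values to 8-bit integers and dequantizes them back. The random stand-in image is an illustrative assumption so the example runs without the JPEG file.

```python
import numpy as np

def quantize(x, bits=8):
    """Map floats in [0, 1] to integers in [0, 2**bits - 1] (bits <= 8 here)."""
    levels = 2 ** bits - 1
    return np.round(x * levels).astype(np.uint8)

def dequantize(q, bits=8):
    """Map the integers back to floats in [0, 1]."""
    levels = 2 ** bits - 1
    return q.astype(np.float32) / levels

# Simulate transmission: quantize, "send" the compact integer array, dequantize.
img = np.random.rand(586, 1041, 3).astype(np.float32)  # stand-in for the Mars image
recovered = dequantize(quantize(img))
print("max reconstruction error:", np.abs(img - recovered).max())
```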
《Efficient Deep Learning Book》[EDL] Chapter 7 - Automation
...model with each of these four options to make an informed decision. Blessed with a large research community, the deep learning field is growing at a rapid pace, and over the past few years we have seen newer techniques emerge. One figure shows performance as a function of the resources allocated to each configuration; promising configurations get more resources (Source: Hyperband; Li, Lisha, et al., "Hyperband: A novel bandit-based approach to hyperparameter optimization"). Another figure demonstrates how configurations and resource allocations change across the multiple brackets of Hyperband (Source: Hyperband). In chapter 3, we trained a model to classify flowers in the oxford_flowers102 dataset.
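To illustrate the resource-allocation idea in the excerpt, here is a small, self-contained sketch of successive halving, the routine that Hyperband runs inside each bracket. It is not the book's code; the toy objective, the number of candidate configurations and the budget schedule are illustrative assumptions.

```python
import random

def train_briefly(config, budget):
    """Toy stand-in for partially training a model: returns a fake validation
    score that favors learning rates near 0.1 and larger budgets."""
    return -abs(config["lr"] - 0.1) + 0.01 * budget + random.uniform(0, 0.01)

def successive_halving(configs, min_budget=1, eta=3):
    """Give promising configurations more resources, as in one Hyperband bracket."""
    budget = min_budget
    while len(configs) > 1:
        scored = sorted(configs, key=lambda c: train_briefly(c, budget), reverse=True)
        configs = scored[: max(1, len(configs) // eta)]  # keep the top 1/eta
        budget *= eta                                    # survivors get eta x more budget
    return configs[0]

random.seed(0)
candidates = [{"lr": 10 ** random.uniform(-4, 0)} for _ in range(27)]
print("best configuration:", successive_halving(candidates))
```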
AI大模型千问 qwen 中文文档 (Qwen large language model documentation, Chinese)

    { ...,
      "messages": [ ..., { ..., "content": "...generating human-like text and are used in a variety of natural language processing tasks..." } ],
      "source": "unknown" }

    { "type": "chatml",
      "messages": [ { "role": "system", "content": "You are a helpful assistant." },
                    ...,
                    { "role": "assistant", "content": "My name is Qwen." } ],
      "source": "self-made" }

The above shows two example samples from this dataset. Each sample is a JSON object with the following fields: type, messages and source. messages is required, while the other fields are optional and let you annotate the data format and data source. Each message carries a role (system, user or assistant, indicating who is speaking) and a content field holding the text of the message. The source field records where the data came from and can be self-made, alpaca, open-hermes or any other string. Use json to write the list of dictionaries to a jsonl file:

    import json

    with open('data.jsonl', 'w') as f:
        for sample in samples:
            f.write(json.dumps(sample, ensure_ascii=False) + '\n')  # one JSON object per line
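As a complement to the writer above, the sketch below reads such a data.jsonl back and checks the fields described in the excerpt (required messages with role/content pairs, plus the optional source). The helper name and the validation rules are illustrative assumptions, not part of the Qwen documentation.

```python
import json
from collections import Counter

def load_and_check(path="data.jsonl"):
    """Load a jsonl fine-tuning dataset and validate the expected schema."""
    samples = []
    with open(path, encoding="utf-8") as f:
        for line_no, line in enumerate(f, start=1):
            sample = json.loads(line)
            assert "messages" in sample, f"line {line_no}: 'messages' is required"
            for msg in sample["messages"]:
                assert msg["role"] in ("system", "user", "assistant")
                assert isinstance(msg["content"], str)
            samples.append(sample)
    return samples

if __name__ == "__main__":
    data = load_and_check()
    # Count how many samples came from each source (defaulting to "unknown").
    print(Counter(s.get("source", "unknown") for s in data))
```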
《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
...interpolation algorithms to address the holes. Figure 3-6: Image Transformations. The source image (center) is taken from the Google Open Images Dataset V6; it is authored by Mike Baird and is licensed under CC BY 2.0. The top-center is a turtle image (resized) from Open Images Dataset V6, authored by Joy Holland. The bottom-center is a tortoise (resized) from Open Images Dataset V6, authored by J. P. ... The technique replaces a section of pixels in an image sample with a section of pixels from another image sample; the source and the destination sections have the same positional offsets. The left image in figure 3-11 shows...
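The last fragment describes an augmentation that pastes a patch from one image into another at the same positional offset, which reads like a CutMix-style operation. Below is a minimal NumPy sketch written under that assumption; the patch size, image shapes and random stand-in data are illustrative, and the corresponding label mixing is omitted.

```python
import numpy as np

def paste_patch(dst_img, src_img, patch_h, patch_w, rng=None):
    """Replace a rectangular region of dst_img with the region at the same
    positional offset in src_img (CutMix-style; labels not handled here)."""
    rng = rng or np.random.default_rng()
    h, w = dst_img.shape[:2]
    top = int(rng.integers(0, h - patch_h + 1))
    left = int(rng.integers(0, w - patch_w + 1))
    out = dst_img.copy()
    out[top:top + patch_h, left:left + patch_w] = \
        src_img[top:top + patch_h, left:left + patch_w]
    return out

rng = np.random.default_rng(0)
img_a = rng.random((64, 64, 3))  # stand-ins for two training images
img_b = rng.random((64, 64, 3))
mixed = paste_patch(img_a, img_b, patch_h=24, patch_w=24, rng=rng)
print(mixed.shape)
```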
Keras: 基于 Python 的深度学习库 (Keras: a deep learning library based on Python)
Table of contents excerpt listing the core layers, each entry linking to its [source]:
5.2.1 Dense [source]
5.2.2 Activation [source]
5.2.3 Dropout [source]
5.2.4 Flatten [source]
5.2.5 Input [source]
5.2.6 Reshape [source]
5.2.7 Permute [source]
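The layers in this excerpt are Keras core layers. The sketch below simply chains several of them to show what each one does; the shapes are arbitrary illustrations, not an example taken from the documentation.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Input -> Permute -> Flatten -> Dense -> Activation -> Dropout -> Reshape
inputs = tf.keras.Input(shape=(8, 4))          # Input: declares the tensor shape
x = layers.Permute((2, 1))(inputs)             # Permute: swap the two feature axes
x = layers.Flatten()(x)                        # Flatten: (4, 8) -> 32 values
x = layers.Dense(16)(x)                        # Dense: fully connected projection
x = layers.Activation("relu")(x)               # Activation: apply a nonlinearity
x = layers.Dropout(0.5)(x)                     # Dropout: regularization at train time
outputs = layers.Reshape((4, 4))(x)            # Reshape: back to a 2D feature map
model = tf.keras.Model(inputs, outputs)

print(model(np.zeros((2, 8, 4), dtype="float32")).shape)  # (2, 4, 4)
```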
动手学深度学习 v2.0 (Dive into Deep Learning, v2.0)

    os.makedirs(os.path.join('..', 'data'), exist_ok=True)
    data_file = os.path.join('..', 'data', 'house_tiny.csv')
    with open(data_file, 'w') as f:
        f.write('NumRooms,Alley,Price\n')   # column names
        f.write('NA,Pave,127500\n')         # each row represents a data sample

Excerpt from the download utility (the local file is cached and verified by its SHA-1 hash):

    fname = os.path.join(cache_dir, url.split('/')[-1])
    if os.path.exists(fname):
        sha1 = hashlib.sha1()
        with open(fname, 'rb') as f:
            while True:
                data = f.read(1048576)
                if not data:
                    break
                sha1.update(data)
        if sha1.hexdigest() == sha1_hash:
            return fname  # cache hit
    print(f'Downloading {fname} from {url}...')
    r = requests.get(url, stream=True, verify=True)
    with open(fname, 'wb') as f:
        f.write(r.content)
    return fname

We also need to implement two utility functions: one to download and extract a zip or tar file, and another to download all of the datasets used in this book...
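The excerpt ends by mentioning two further helpers: one that downloads and extracts a zip or tar archive, and one that fetches every dataset used in the book. Below is a minimal sketch of the extraction helper under the assumption that the archive has already been downloaded; the function name and error handling are illustrative, not the book's verbatim code.

```python
import os
import tarfile
import zipfile

def extract_archive(fname, folder=None):
    """Extract a downloaded .zip or .tar/.tar.gz archive next to itself and
    return the directory containing the extracted data."""
    base_dir = os.path.dirname(fname)
    data_dir, ext = os.path.splitext(fname)
    if ext == '.zip':
        with zipfile.ZipFile(fname, 'r') as fp:
            fp.extractall(base_dir)
    elif ext in ('.tar', '.gz'):
        with tarfile.open(fname, 'r') as fp:
            fp.extractall(base_dir)
    else:
        raise ValueError('Only zip and tar archives are handled here')
    return os.path.join(base_dir, folder) if folder else data_dir

# The second helper would simply loop over the registry of dataset names and
# call the download routine from the excerpt above for each entry.
```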
《Efficient Deep Learning Book》[EDL] Chapter 6 - Advanced Learning Techniques - Technical Review
...the missing bit. It is similar to solving an almost finished jigsaw puzzle which has just a couple of open spots: looking at it, we could tell what the final few pieces would look like. One pretext task takes two sentences A and B and predicts whether B follows A. Figure 6-3: Pre-training and Fine-tuning steps for BERT (Source: Devlin et al.). For BERT, the pre-training loss is the mean of the losses for the above two tasks. Another pretext task is predicting the degree of rotation of a given image. Figure 6-4: Pretext tasks for vision problems (Source: Doersch et al., Gidaris et al.). Once the general model is ready, we can fine-tune it for a specific task.
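The rotation pretext task mentioned above needs very little setup: rotate each unlabeled image by one of four fixed angles and train the model to predict which angle was applied. The sketch below generates such (image, label) pairs with NumPy; the random stand-in images and batch size are illustrative assumptions.

```python
import numpy as np

def make_rotation_pretext_batch(images, rng=None):
    """Given a batch of square images (N, H, W, C), rotate each by 0/90/180/270
    degrees and return the rotated images plus the rotation index as the label."""
    rng = rng or np.random.default_rng()
    labels = rng.integers(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, k=int(k), axes=(0, 1))
                        for img, k in zip(images, labels)])
    return rotated, labels

rng = np.random.default_rng(0)
unlabeled = rng.random((8, 32, 32, 3))            # stand-in for unlabeled images
x_pretext, y_pretext = make_rotation_pretext_batch(unlabeled, rng)
print(x_pretext.shape, y_pretext[:5])             # a 4-way classification problem
```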
深度学习与PyTorch入门实战 - 02. 开发环境安装 (Hands-on Deep Learning with PyTorch - 02. Development Environment Setup)
Preparing the development environment. Presenter: 龙良曲.
Development environment: ▪ Python 3.7 + Anaconda 5.3.1 ▪ CUDA 10.0 ▪ PyCharm Community. Slide topics: Anaconda; CUDA 10.0 (▪ an NVIDIA graphics card is required); confirming the CUDA installation; adding the path to PATH; testing CUDA; installing PyTorch (run cmd as administrator); PyCharm (▪ configure the interpreter).
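After following those installation steps, a short script is the usual way to confirm that PyTorch can see CUDA and the GPU. The check below uses standard PyTorch calls and is an illustrative addition, not part of the course slides.

```python
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA version used by PyTorch:", torch.version.cuda)
    print("GPU:", torch.cuda.get_device_name(0))
    # A tiny computation on the GPU to confirm everything works end to end.
    x = torch.rand(3, 3, device="cuda")
    print((x @ x).sum().item())
```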
《TensorFlow 快速入门与实战》2 - TensorFlow初接触 (TensorFlow Quick Start and Practice, Part 2: First Contact with TensorFlow)
Container vs. virtual machine; Docker containers; running TensorFlow with Docker (https://hub.docker.com/editions/community/docker-ce-desktop-mac): 1. Install Docker for Mac. 2. Run Docker for Mac. 3. Pull a TensorFlow image.
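Once a TensorFlow image has been pulled and a container is running, a few lines of Python are enough to confirm the installation. This check is an illustrative addition, not part of the course material.

```python
import tensorflow as tf

# Print the version and run a trivial op to confirm TensorFlow works.
print("TensorFlow version:", tf.__version__)
print(tf.constant("Hello, TensorFlow!"))
print(tf.reduce_sum(tf.random.normal((3, 3))).numpy())
```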













