Qwen (千问) AI Large Model: Chinese Documentation
…models and multimodal models are pretrained on large-scale multilingual and multimodal data and post-trained on quality data for aligning to human preferences. Qwen is capable of natural language understanding… transformers>=4.37.0 is required. Below is a very simple code snippet example showing how to run the Qwen1.5-Chat model, here with a Qwen1.5-7B-Chat instance:

from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# Now you …
… from modelscope import AutoModelForCausalLM, AutoTokenizer …

With TextStreamer, streaming mode for chat becomes very simple. Below we show an example of how to use it:

...  # Reuse the code before `model.generate()` in the last code snippet
from transformers import TextStreamer

56 pages | 835.78 KB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 2 - Compression Techniques
Overview of Compression

One of the simplest approaches towards efficiency is compression to reduce data size. For the longest time in the history of computing, scientists have worked tirelessly towards… A popular example of a lossless data compression algorithm is Huffman Coding, where we assign unique strings of bits (codes) to the symbols based on their frequency in the data. More frequent symbols are assigned… and the path to that symbol is the bit-string assigned to it. This allows us to encode the given data in as few bits as possible, since the most frequent symbols will take the least number of bits to…

33 pages | 1.96 MB | 1 year ago

rwcpu8 Instruction Install miniconda pytorch
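The Huffman coding procedure described in the [EDL] Chapter 2 excerpt above can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the book's implementation; the function name `huffman_codes` and the heap-based bottom-up tree construction are assumptions made for the example:

```python
import heapq
from collections import Counter

def huffman_codes(data: str) -> dict:
    """Build a Huffman code table: more frequent symbols get shorter bit-strings."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: only one distinct symbol
        return {sym: "0" for sym in freq}
    # Heap entries: (frequency, tiebreaker, {symbol: partial code}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        # Prefix codes: prepend 0 for the left subtree, 1 for the right.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# The most frequent symbol ('a') receives the shortest code, so the
# concatenated encoding of the whole string uses as few bits as possible.
encoded = "".join(codes[s] for s in "aaaabbc")
```

Because every code is a path from the root to a leaf, no code is a prefix of another, which is what makes the bit-stream decodable without separators.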
…install Miniconda and PyTorch yourself, you can use the global Miniconda and PyTorch installed at /export/data/miniconda3.

1. Initialize Miniconda:
2. If you want to use PyTorch, activate the pytorch conda…

…is ~/.cshrc_user, so you should write the content in ~/.tcshrc to ~/.cshrc_user:

source "/export/data/miniconda3/etc/profile.d/conda.csh"
conda activate pytorch

conda activate tf2
python python_script…

conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
python -c 'import torch; print(torch.__version__)'
python -c 'import torch; print(torch.cuda.is_available())'

Useful Links: Miniconda Documentation

3 pages | 75.54 KB | 1 year ago

《TensorFlow 快速入门与实战》6-实战TensorFlow验证码识别
…com/lepture/captcha

from captcha.image import ImageCaptcha
from captcha.audio import AudioCaptcha

image = ImageCaptcha(fonts=['/path/A.ttf', '/path/B.ttf'])
data = image.generate('1234')
image.write('1234', 'out.png')

audio = AudioCaptcha(voicedir='/path/to/voices')
data = audio.generate('1234')
audio.write('1234', 'out.wav')

pydot: pydot is a pure-Python interface to GraphViz that supports parsing and storing the DOT language with GraphViz…

Data-model-service pipeline: dataset generation, data processing, model training, parameter tuning, model deployment, recognition service.

Quickly build a CAPTCHA recognition service with Flask. Start the service:

$ export FLASK_ENV=development && flask run --host=0.0.0.0

Then open a browser and visit the test URL (http://localhost:5000/ping)…

51 pages | 2.73 MB | 1 year ago

动手学深度学习 v2.0
#@save
import collections
import hashlib
import math
import os
import random
import re
import shutil
import sys
import tarfile
import time
import zipfile
from collections import defaultdict

import pandas as pd
import requests
from IPython import display
from matplotlib import pyplot as plt
from matplotlib_inline import backend_inline

d2l = sys.modules[__name__]

Most of the code in this book is based on PyT…

#@save
import numpy as np
import torch
import torchvision
from PIL import Image
from torch import nn
from torch.nn import functional…

797 pages | 29.45 MB | 1 year ago

PyTorch Release Notes
…functionality. PyTorch also includes standard defined neural network layers, deep learning optimizers, data loading utilities, and multi-GPU and multi-node support. Functions are executed immediately instead…

nvcr.io/nvidia/pytorch:-py3

Note: If you use multiprocessing for multi-threaded data loaders, the default shared memory segment size with which the container runs might not be enough…

To pull data and model descriptions from locations outside the container for use by PyTorch, or to save results to locations outside the container, mount one or more host directories as Docker® data volumes…

365 pages | 2.94 MB | 1 year ago

keras tutorial
…of algorithms, inspired by the model of the human brain. Deep learning is becoming more popular in data science fields like robotics, artificial intelligence (AI), audio & video recognition, and image recognition…

…-U scikit-learn

Seaborn: Seaborn is an amazing library that allows you to easily visualize your data. Use the below command to install it:

pip install seaborn

You could see a message similar to the one specified… …conda terminal using the below command:

spyder

To ensure everything was installed correctly, import all the modules; it will add everything, and if anything went wrong, you will get a "module not found" error…

98 pages | 1.57 MB | 1 year ago

Keras: 基于 Python 的深度学习库
It allows building arbitrary graphs of neural networks. The Sequential model is shown below:

from keras.models import Sequential
model = Sequential()

You can simply use .add() to stack layers:

from keras.layers import Dense
model.add(Dense(units=64, activation='relu'…

The Sequential model is a linear stack of layers. You can create a Sequential model by passing a list of layers to the Sequential constructor:

from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential([
    Dense(32, input_shape=(784,)),
    Activation('relu'),
…

…metrics=['accuracy'])

# Mean squared error for a regression problem
model.compile(optimizer='rmsprop', loss='mse')

# Custom evaluation metric function
import keras.backend as K

def mean_pred(y_true, y_pred):
    return K.mean(y_pred)

model.compile(optimizer='rmsprop'…

257 pages | 1.19 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 3 - Learning Techniques
In the first chapter, we briefly introduced learning techniques such as regularization, dropout, data augmentation, and distillation to improve quality. These techniques can boost metrics like accuracy, precision, recall, etc., which often are our primary quality concerns. We have chosen two of them, namely data augmentation and distillation, to discuss in this chapter. This is because, firstly, regularization and dropout are fairly straightforward to enable in any modern deep learning framework. Secondly, data augmentation and distillation can bring significant efficiency gains during the training phase, which…

56 pages | 18.93 MB | 1 year ago

《Efficient Deep Learning Book》[EDL] Chapter 1 - Introduction
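The data augmentation idea discussed in the [EDL] Chapter 3 excerpt above can be illustrated with a tiny, framework-free Python sketch: a label-preserving transform (here a horizontal flip) applied to every training example doubles the dataset with no extra labelling work. The 2-D-list "images" and helper names below are assumptions made purely for illustration; the book itself works with real image tensors:

```python
def horizontal_flip(image):
    """Flip a 2D image (a list of rows) left-to-right; the label is unchanged."""
    return [list(reversed(row)) for row in image]

def augment(dataset):
    """Return the dataset plus one flipped copy of every (image, label) pair."""
    out = list(dataset)
    for image, label in dataset:
        # The flip is label-preserving, so the original label is reused.
        out.append((horizontal_flip(image), label))
    return out

tiny_train = [([[1, 0], [0, 1]], "diagonal")]
augmented = augment(tiny_train)  # twice as many training examples
```

The same structure carries over to real pipelines: each augmentation is a cheap, label-preserving function, and the training loop simply sees a larger, more varied dataset.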
…there might not be a single algorithm that works perfectly, and there is a large amount of unseen data that the algorithm needs to process. Unlike traditional algorithm problems, where we expect exact, optimal… certainty the exact content that you would end up clicking on at that particular moment; with more data and sophisticated algorithms, these models can be trained to be fairly accurate over a longer term…

Availability of labelled data: Even if one has enough compute and sophisticated algorithms, solving classical machine learning problems relies on the presence of sufficient labeled data. With deep learning…

21 pages | 3.17 MB | 1 year ago
77 results in total