IT文库

Category

All · Backend Development (825) · General & Other (317) · Python (269) · Weblate (269) · Java (217) · Spring (213) · Database (158) · Cloud Computing & Big Data (88) · Julia (87) · C++ (85)

Language

All · English (1224) · Chinese (Simplified) (186) · Chinese (Traditional) (18) · Japanese (4) · English (4) · Korean (3) · Spanish (2) · French (2) · Portuguese (2)

Format

All · PDF (1013) · Other (423) · TXT (13) · PPT (2) · DOC (1)

Search completed in 0.074 seconds; about 1,000 relevant results found.
  • PDF: Back to Basics: Concurrency

    std::vector<Token> tokens_; Token getToken() { mtx_.lock(); if (tokens_.empty()) tokens_.push_back(Token::create()); Token t = std::move(tokens_.back()); tokens_.pop_back(); mtx_.unlock(); … } … facilities of the bathroom itself. TokenPool's mtx_ protects its vector tokens_. Every access (read or write) to tokens_ must be done under a lock on mtx_. This is an invariant that must be preserved.
    0 credits | 58 pages | 333.56 KB | 6 months ago
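    The invariant the excerpt describes (every read or write of tokens_ happens under a lock on mtx_) is language-agnostic. Below is a minimal Python analog using threading.Lock, offered only as an illustration of the pattern; the class and method names mirror the excerpt, but this is not the talk's C++ code, and tokens are modeled as plain integers.

        import threading
        import itertools

        class TokenPool:
            """Minimal Python analog of the excerpt's TokenPool (tokens are plain ints here)."""
            _ids = itertools.count(1)

            def __init__(self):
                self._lock = threading.Lock()   # plays the role of mtx_
                self._tokens = []               # plays the role of tokens_

            def get_token(self):
                # Invariant from the excerpt: every read or write of the token list
                # happens under the lock. The `with` block releases the lock even if
                # an exception is raised, much like an RAII lock guard in C++.
                with self._lock:
                    if not self._tokens:
                        self._tokens.append(next(self._ids))
                    return self._tokens.pop()

            def return_token(self, token):
                with self._lock:
                    self._tokens.append(token)
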
  • PDF: Apache Cassandra™ 1.0 Documentation, February 16, 2012

    [contents] Multi-Data Center Cluster 38 · Calculating Tokens 39 · Calculating Tokens for Multiple Racks 40 · Calculating Tokens for a Single Data Center 40 · Calculating Tokens for a Multi-Data Center Cluster 41 · Starting Cluster 95 · Running Routine Node Repair 95 · Adding Capacity to an Existing Cluster 95 · Calculating Tokens For the New Nodes 96 · Adding Nodes to a Cluster 96 · Changing the Replication Factor 97 · Replacing … and those of the other nodes in the cluster. When initializing a new cluster, you should generate tokens for the entire cluster and assign an initial token to each node before starting up. Each node will …
    0 credits | 141 pages | 2.52 MB | 1 year ago
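    The excerpt's advice is to compute an initial token for every node before the cluster first starts. For the RandomPartitioner of that era, whose token range runs from 0 to 2**127, evenly spaced tokens reduce to simple arithmetic. A sketch of that calculation, assuming the single-data-center, evenly spaced case; treat it as illustrative rather than the document's own script.

        def calculate_tokens(node_count: int) -> list[int]:
            """Evenly spaced initial_token values over the 0..2**127 RandomPartitioner range."""
            return [i * (2 ** 127 // node_count) for i in range(node_count)]

        # Example: a 4-node, single-data-center cluster.
        for node, token in enumerate(calculate_tokens(4)):
            print(f"node {node}: initial_token = {token}")
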
  • PDF: DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model

    … total parameters, of which 21B are activated for each token, and supports a context length of 128K tokens. DeepSeek-V2 adopts innovative architectures including Multi-head Latent Attention (MLA) and DeepSeekMoE … 5.76 times. We pretrain DeepSeek-V2 on a high-quality and multi-source corpus consisting of 8.1T tokens, and further perform Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) to fully unlock … [chart residue: training costs (K GPU hours/T tokens), DeepSeek-V2 vs. DeepSeek 67B, saving 42.5% of training costs; KV cache for generation reduced by 93.3%]
    0 credits | 52 pages | 1.23 MB | 1 year ago
  • PDF: 动手学深度学习 v2.0 (Dive into Deep Learning)

    … gutenberg.org/ebooks/35 … tokens = tokenize(lines) for i in range(11): print(tokens[i]) → ['the', 'time', 'machine', 'by', 'h', 'g', 'wells'] [] [] [] [] … __init__(self, tokens=None, min_freq=0, reserved_tokens=None): if tokens is None: tokens = [] if reserved_tokens is None: reserved_tokens = [] # sort by frequency of occurrence counter = count_corpus(tokens) self._token_freqs = sorted(counter.items(), key=lambda x: x[1], reverse=True) # the unknown token gets index 0 self.idx_to_token = ['<unk>'] + reserved_tokens … self.token_to_idx = {token: …
    0 credits | 797 pages | 29.45 MB | 1 year ago
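    The excerpt's Vocab fragment builds a token-to-index mapping sorted by frequency, with index 0 reserved for the unknown token. Here is a condensed, runnable sketch of that logic; it is not the book's exact code (count_corpus is folded into the constructor via collections.Counter).

        import collections

        class Vocab:
            """Map tokens to integer indices, most frequent first; index 0 is '<unk>'."""
            def __init__(self, tokens=None, min_freq=0, reserved_tokens=None):
                tokens = tokens or []
                reserved_tokens = reserved_tokens or []
                # Flatten a list of token lists (one per line) before counting.
                if tokens and isinstance(tokens[0], list):
                    tokens = [tok for line in tokens for tok in line]
                counter = collections.Counter(tokens)
                self._token_freqs = sorted(counter.items(), key=lambda x: x[1], reverse=True)
                self.idx_to_token = ['<unk>'] + reserved_tokens
                self.token_to_idx = {tok: idx for idx, tok in enumerate(self.idx_to_token)}
                for tok, freq in self._token_freqs:
                    if freq < min_freq:
                        break
                    if tok not in self.token_to_idx:
                        self.idx_to_token.append(tok)
                        self.token_to_idx[tok] = len(self.idx_to_token) - 1

            def __len__(self):
                return len(self.idx_to_token)

            def __getitem__(self, tokens):
                if isinstance(tokens, (list, tuple)):
                    return [self[tok] for tok in tokens]
                return self.token_to_idx.get(tokens, 0)  # unknown tokens map to 0

        # Usage with tokenized lines like those in the excerpt:
        vocab = Vocab([['the', 'time', 'machine'], ['the', 'time']])
        print(vocab[['the', 'time', 'traveller']])  # -> [1, 2, 0]
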
  • PDF: Typescript SDK Version 1.x.x

    … TokenType.REFRESH / TokenType.GRANT, "redirectURL"); … 5. Create an instance of TokenStore to persist tokens used for authenticating all the requests. import {DBStore} from "@zohocrm/typescript-sdk… tokenstore: FileStore = new FileStore("/Users/userName/Documents/tssdk-tokens.txt") 6. Create an instance of SDKConfig containing the SDK configuration. import {SDKConfig} … Token Persistence: Token persistence refers to storing and utilizing the authentication tokens that are provided by Zoho. There are three ways provided by the SDK in which persistence can …
    0 credits | 56 pages | 1.29 MB | 1 year ago
  • PDF: 《Efficient Deep Learning Book》 [EDL] Chapter 4 - Efficient Architectures

    … word2vec family of algorithms [6] (apart from others like GloVe [7]) which can learn embeddings for word tokens for NLP tasks. The embedding table generation process is done without having any ground-truth labels … We would learn embeddings of … dimensions each (where we can also view … [footnote 10: We are dealing with word tokens as an example here, hence you would see the mention of words and their embeddings. In practice, we …] … pairs of input context (neighboring words), and the label (masked word to be predicted). The word tokens are vectorized by replacing the actual words by their indices in our vocabulary. If a word doesn't …
    0 credits | 53 pages | 3.92 MB | 1 year ago
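    The vectorization step in the last sentence (replace each word by its vocabulary index, with a fallback for words that are missing) is worth spelling out. A minimal sketch, with a toy vocabulary and the out-of-vocabulary index chosen as 0 purely for illustration.

        # Toy vocabulary: index 0 is reserved for out-of-vocabulary words.
        vocab = {'<oov>': 0, 'the': 1, 'quick': 2, 'brown': 3, 'fox': 4}

        def vectorize(tokens, vocab, oov_index=0):
            """Replace word tokens by their vocabulary indices; unknown words get oov_index."""
            return [vocab.get(token, oov_index) for token in tokens]

        print(vectorize(['the', 'quick', 'red', 'fox'], vocab))  # -> [1, 2, 0, 4]
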
  • PDF: Google 《Prompt Engineering v7》

    … what's in the previous tokens and what the LLM has seen during its training. When you write a prompt, you are attempting to set up the LLM to predict the right sequence of tokens. Prompt engineering is … task. Output length: An important configuration setting is the number of tokens to generate in a response. Generating more tokens requires more computation from the LLM, leading to higher energy consumption … or textually succinct in the output it creates, it just causes the LLM to stop predicting more tokens once the limit is reached. If your needs require a short output length, you'll also possibly need …
    0 credits | 68 pages | 6.50 MB | 6 months ago
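    The excerpt's point is that an output-length limit does not make a model more succinct; it only stops decoding once the budget is spent. A toy decoding loop makes this concrete; sample_next_token is a hypothetical stand-in for the model and is not anything defined in the guide.

        def generate(prompt_tokens, sample_next_token, max_output_tokens=16, eos_token="<eos>"):
            """Toy decoding loop: the token budget truncates output, it does not shorten it."""
            output, context = [], list(prompt_tokens)
            while len(output) < max_output_tokens:     # hard cap on generated tokens
                token = sample_next_token(context)     # hypothetical model call
                if token == eos_token:                 # the model chose to stop on its own
                    break
                output.append(token)
                context.append(token)
            return output                              # cut off mid-thought if the cap was hit

        # A dummy sampler that never emits <eos>: only the cap ends generation.
        print(generate(["Tell", "me", "everything"], lambda ctx: "word", max_output_tokens=5))
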
  • PDF: AI大模型千问 Qwen 中文文档 (Qwen large-model Chinese documentation)

    … decode() to get the output. # Use `max_new_tokens` to control the maximum output length. generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] … Previously we used model.chat() (for more details, see modeling_qwen.py in earlier Qwen models); now we follow the transformers … streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512, streamer=streamer, ) 1.2.2 Deploying with vLLM: to deploy Qwen1.5, we recommend you use …
    0 credits | 56 pages | 835.78 KB | 1 year ago
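    The excerpt's code is cut mid-line by the snippet view. A complete, runnable version of the same transformers-based flow is sketched below; the checkpoint name is an assumption for illustration, while the generate/decode calls mirror the excerpt.

        from transformers import AutoModelForCausalLM, AutoTokenizer

        # Checkpoint name is an assumption; any Qwen1.5 chat checkpoint follows the same flow.
        model_name = "Qwen/Qwen1.5-7B-Chat"
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

        messages = [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Give me a short introduction to large language models."},
        ]
        text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
        model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

        # `max_new_tokens` caps the generated length, as in the excerpt.
        generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512)
        # Strip the prompt tokens so only the newly generated part is decoded.
        generated_ids = [
            output_ids[len(input_ids):]
            for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
        ]
        response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
        print(response)
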
  • PDF: Trends Artificial Intelligence

    Note: In AI language models, tokens represent basic units of text (e.g., words or sub-words) used during training. Training dataset sizes are often measured in total tokens processed. A larger token count … [chart: AI Model Training Dataset Size (Tokens) by Model Release Year, 6/10-5/25, source: Epoch AI (5/25)] … [chart residue: CapEx Spend, Big Technology Companies = Inflected; Number of GPUs 46K 43K 28K 16K 11K, +225x; Factory AI FLOPS 1EF 5EF 17EF 63EF 220EF; Annual Inference Tokens 50B 1T 5T 58T 1,375T, +30,000x; Annual Token Revenue $240K $3M $24M $300M $7B; DC Power 37MW 34MW …]
    0 credits | 340 pages | 12.14 MB | 5 months ago
  • PDF: Navicat Collaboration Version 1 User Guide

    … whole Navicat On-Prem Server, such as changing the organization profile, adding users, licensing tokens, editing server settings. Note: You must be a superuser or an admin in order to perform these configurations … On-Prem Server requires tokens for users to continue synchronizing Navicat objects or files. Tokens can be bought as a perpetual license or on a subscription basis. To manage your tokens and license the users, click Tokens & Licensed Users in Advanced Configurations. Note: Perpetual License and Subscription Plan cannot be used at the same Navicat On-Prem Server. Before changing the activation method …
    0 credits | 56 pages | 1.08 MB | 1 year ago