Back to Basics: Concurrency

    std::vector<Token> tokens_;

    Token getToken() {
        mtx_.lock();
        if (tokens_.empty())
            tokens_.push_back(Token::create());
        Token t = std::move(tokens_.back());
        tokens_.pop_back();
        mtx_.unlock();
        return t;
    }

… facilities of the bathroom itself. TokenPool's mtx_ protects its vector tokens_. Every access (read or write) to tokens_ must be done under a lock on mtx_. This is an invariant that must be preserved.
0 码力 | 58 pages | 333.56 KB | 6 months ago
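The excerpt above unlocks mtx_ manually, so an exception thrown between lock() and unlock() (e.g. from Token::create() or push_back) would leave the mutex locked. A minimal sketch of the same pool using std::lock_guard — my restatement, not necessarily the talk's exact code; Token and its create() are stand-ins:

    #include <mutex>
    #include <utility>
    #include <vector>

    struct Token {
        static Token create() { return Token{}; }
    };

    class TokenPool {
        std::mutex mtx_;
        std::vector<Token> tokens_;  // invariant: touched only while mtx_ is held

    public:
        Token getToken() {
            std::lock_guard<std::mutex> lock(mtx_);  // released on every exit path
            if (tokens_.empty())
                tokens_.push_back(Token::create());
            Token t = std::move(tokens_.back());
            tokens_.pop_back();
            return t;
        }
    };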
MoonBit (月兔) Programming Language — Modern Programming Concepts, Lesson 11, Case Study: Syntax Parsers and Tagless Final

There may be whitespace between words:

    let tokens: Lexer[List[Token]] =
      number.or(symbol).and(whitespace.many())
        .map(fn { (symbols, _) => symbols }) // ignore the whitespace
        .many()

    fn init {
      debug(tokens.parse("-10123-+-523 …

    … -> Option[(V, List[Token])]

    fn parse[V](self : Parser[V], tokens : List[Token]) -> Option[(V, List[Token])] {
      (self.0)(tokens)
    }

Most of the combinators are similar to those of Lexer[V]. Recursive combination: atomic = Value / "(" expression ")":

    … -> Parser[Expression] {
      // define mutually recursive functions
      // atomic = Value / "(" expression ")"
      fn atomic(tokens: List[Token]) -> Option[(Expression, List[Token])] {
        lparen.and(
          Parser(expression) …
0 码力 | 25 pages | 400.29 KB | 1 year ago
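A sketch of the same combinator idea in C++ (my own illustration, not code from the lesson): Parser<V> maps an input to an optional (value, remaining input) pair, mirroring the lesson's (List[Token]) -> Option[(V, List[Token])], with or/many built on top:

    #include <functional>
    #include <iostream>
    #include <optional>
    #include <string_view>
    #include <utility>
    #include <vector>

    template <class V>
    using Parser =
        std::function<std::optional<std::pair<V, std::string_view>>(std::string_view)>;

    // Match one expected character (stands in for the lesson's token lexers).
    Parser<char> ch(char expected) {
        return [expected](std::string_view in)
                   -> std::optional<std::pair<char, std::string_view>> {
            if (!in.empty() && in.front() == expected)
                return std::pair{expected, in.substr(1)};
            return std::nullopt;
        };
    }

    // The lesson's `or`: try p, fall back to q on failure.
    template <class V>
    Parser<V> orElse(Parser<V> p, Parser<V> q) {
        return [p, q](std::string_view in) {
            if (auto r = p(in)) return r;
            return q(in);
        };
    }

    // The lesson's `many`: apply p zero or more times, collecting the results.
    template <class V>
    Parser<std::vector<V>> many(Parser<V> p) {
        return [p](std::string_view in)
                   -> std::optional<std::pair<std::vector<V>, std::string_view>> {
            std::vector<V> out;
            while (auto r = p(in)) {
                out.push_back(r->first);
                in = r->second;
            }
            return std::pair{std::move(out), in};
        };
    }

    int main() {
        Parser<char> sign = orElse(ch('+'), ch('-'));
        auto signs = many(sign);
        if (auto r = signs("+-+00"))
            std::cout << r->first.size() << " signs, rest: " << r->second << '\n';
        // prints: 3 signs, rest: 00
    }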
The Lean Reference Manual, Release 3.3.0

Readers will want to skip this section on a first reading. Lean input is processed into a stream of tokens by its scanner, using the UTF-8 encoding. The next token is the longest matching prefix of the remaining string:

    … | char | numeral | decimal | quoted_symbol | doc_comment | mod_doc_comment | field_notation

Tokens can be separated by the whitespace characters space, tab, line feed, and carriage return, as well as comments. Symbols are static tokens that are used in term notations and commands. They can be both keyword-like (e.g. the have keyword) or use arbitrary Unicode characters. Command tokens are static tokens that prefix …
0 码力 | 67 pages | 266.23 KB | 1 year ago
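The "longest matching prefix" rule quoted above is ordinary maximal munch. A tiny self-contained sketch of the idea (illustrative only; the token set is invented, not Lean's):

    #include <iostream>
    #include <string_view>
    #include <vector>

    int main() {
        // Candidate static tokens, deliberately overlapping.
        const std::vector<std::string_view> candidates = {"-", "->", "--"};
        const std::string_view input = "->x";

        // Keep the longest candidate that is a prefix of the remaining input.
        std::string_view best;
        for (std::string_view t : candidates)
            if (input.substr(0, t.size()) == t && t.size() > best.size())
                best = t;

        std::cout << "next token: " << best << '\n';  // prints "->" rather than "-"
    }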
Comprehensive Rust (Ukrainian), 2024-12

    fn parse(input: &str) -> Expression {
        let mut tokens = tokenize(input);

        fn parse_expr<'a>(tokens: &mut Tokenizer<'a>) -> Expression {
            let Some(tok) = tokens.next() else {
                panic!("Unexpected end of input");
            };
            …
            // … operation, if present.
            match tokens.next() {
                None => expr,
                Some(Token::Operator(op)) => Expression::Operation(
                    Box::new(expr),
                    op,
                    Box::new(parse_expr(tokens)),
                ),
                Some(tok) => panic!("Unexpected token {tok:?}"),
            }
        }

        parse_expr(&mut tokens)
    }

    fn main() {
        let expr = parse("10+foo+20-30");
        println!("{expr:?}");
    }

29.8.1 Solution

    use thiserror::Error;
    use std::iter::Peekable;
    use std::str::Chars;
0 码力 | 396 pages | 1.08 MB | 11 months ago
Reference guide for Free Pascal, version 3.2.2

Document version 3.2.2, May 2021 — Michaël Van Canneyt

Contents: 1 Pascal Tokens; 1.1 Symbols; …

Chapter 1 — Pascal Tokens. Tokens are the basic lexical building blocks of source code: they are the "words" of the language: characters are combined into tokens according to the rules of the programming language. There are five classes of tokens: reserved words — these are words which have a fixed meaning in the language; they cannot be changed or redefined. identifiers — these are names of …
0 码力 | 268 pages | 700.37 KB | 1 year ago
Haskell 2010 Language Report

… shown below.

    This    Lexes as this
    f.g     f . g   (three tokens)
    F.g     F.g     (qualified 'g')
    f..     f ..    (two tokens)
    F..     F..     (qualified '.')
    F.      F .     (two tokens)

The qualifier does not change the syntactic treatment … lexemes as specified by the lexical syntax in the Haskell report, with the following additional tokens:

  – If a let, where, do, or of keyword is not followed by the lexeme {, the token {n} is inserted … (i.e. the programmer supplied the opening brace). If the innermost context is 0, then no layout tokens will be inserted until either the enclosing context ends or a new context is pushed.
  – A positive …
0 码力 | 329 pages | 1.43 MB | 1 year ago
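The first layout rule quoted above is mechanical enough to demonstrate. A toy sketch in C++ (my own illustration, not the report's full algorithm, which also tracks enclosing contexts): after let/where/do/of, if the next lexeme is not '{', insert {n}, where n is that lexeme's column:

    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        // One (lexeme, column) pair per token of the line:  do putStr s
        const std::vector<std::pair<std::string, int>> lexemes =
            {{"do", 1}, {"putStr", 4}, {"s", 11}};

        std::vector<std::string> out;
        for (std::size_t i = 0; i < lexemes.size(); ++i) {
            const std::string& t = lexemes[i].first;
            out.push_back(t);
            bool layoutKeyword = t == "let" || t == "where" || t == "do" || t == "of";
            if (layoutKeyword && i + 1 < lexemes.size() && lexemes[i + 1].first != "{")
                out.push_back("{" + std::to_string(lexemes[i + 1].second) + "}");
        }
        for (const auto& tok : out) std::cout << tok << ' ';  // do {4} putStr s
        std::cout << '\n';
    }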
Agda User Manual v2.5.3

Contents: let-expressions; where-blocks; Proving properties; More Examples (for Beginners); Lexical Structure; Tokens; Layout; Literate Agda; Literal Overloading; Natural numbers; Negative numbers; Strings; Other types

… unicode characters can be used in identifiers and whitespace is important, see Names and Layout below.

Tokens — Keywords and special symbols. Most non-whitespace unicode can be used as part of an Agda name, but … lies outside the block.

    data Nat : Set where  -- starts a layout block
      -- comments are not tokens
      zero : Nat          -- statement 1
      suc  : Nat →        -- statement 2
        Nat               -- also statement 2
0 码力 | 185 pages | 185.00 KB | 1 year ago
Comprehensive Rust (Persian), 2024-12

    fn parse(input: &str) -> Expression {
        let mut tokens = tokenize(input);

        fn parse_expr<'a>(tokens: &mut Tokenizer<'a>) -> Expression {
            let Some(tok) = tokens.next() else {
                panic!("Unexpected end of input");
            };
            …
            // … operation if present.
            match tokens.next() {
                None => expr,
                Some(Token::Operator(op)) => Expression::Operation(
                    Box::new(expr),
                    op,
                    Box::new(parse_expr(tokens)),
                ),
                Some(tok) => panic!("Unexpected token {tok:?}"),
            }
        }

        parse_expr(&mut tokens)
    }

    fn main() {
        let expr = parse("10+foo+20-30");
        println!("{expr:?}");
    }

29.7.1 Solution

    use thiserror::Error;
0 码力 | 393 pages | 987.97 KB | 11 months ago
The Swift Programming Language

The lexical structure of Swift describes what sequence of characters form valid tokens of the language. These valid tokens form the lowest-level building blocks of the language and are used to describe the rest of the language in subsequent chapters. In most cases, tokens are generated from the characters of a Swift source file by considering the longest possible substring from the input text, within the constraints of the grammar. This behavior is referred to as longest match or maximal munch.

Whitespace and Comments

Whitespace has two uses: to separate tokens in the source file and to help determine whether an operator is a prefix or postfix (see Operators) …
0 码力 | 525 pages | 4.68 MB | 1 year ago
Reflection Is Not Contemplation

Token Sequences — Status: prototype implementation

    constexpr auto t1 = ^^{ a + /* hi! */ b };  // three tokens
    static_assert(std::is_same_v<…>);
    constexpr auto t2 = ^^{ a += ( };
    constexpr auto …

  • \id( … ) — given strings and integrals, creates an identifier
  • \tokens( expr ) — expands another token sequence
  • Inside any consteval function: queue_injection( tokens_expr ) injects a token sequence into the current …

    …                           // Same as: int x = 42;
    typename[:^char:] c = '*';  // Same as: char c = '*';

Angle of Attack — Tokens, tokens everywhere…
  • P3294 (Injection with Token Sequences) a game changer for generation
  • Lennon/McCartney …
0 码力 | 45 pages | 2.45 MB | 6 months ago
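Purely illustrative — my guess at how the primitives quoted above compose (P3294 is a proposal with only a prototype implementation; this compiles on no shipping compiler, and the exact interpolation rules may differ):

    // Build a declaration as a token sequence, interpolating an identifier
    // with \id, then inject it into the enclosing scope with queue_injection.
    consteval void make_counter() {
        queue_injection(^^{ int \id("counter_", 0) = 0; });  // as if: int counter_0 = 0;
    }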
825 results in total













