| Crates.io | rlex |
| lib.rs | rlex |
| version | 0.1.15 |
| created_at | 2025-06-16 23:05:19.537942+00 |
| updated_at | 2025-07-03 10:53:59.776522+00 |
| description | A cursor-based, utf-8 Vec<char> lexer |
| homepage | |
| repository | https://github.com/phillip-england/rlex |
| max_upload_size | |
| id | 1714906 |
| size | 29,097 |
Rlex is a lightweight lexer utility for traversing, peeking, and extracting parts of a UTF-8 string. It operates on a Vec<char> and retains the original string to allow for accurate byte-range slicing. It is ideal for building scanners, parsers, or any tool that needs detailed and controlled inspection of characters in a string.
Install via cargo:
cargo add rlex
First, you need an enum to represent the state of your lexer and another to represent your tokens:
#[derive(Debug, PartialEq, Eq)]
enum MyState {
    Init,
    Open,
    Closed,
}

#[derive(Debug, PartialEq, Eq)]
enum MyToken {
    Tok1,
    Tok2,
    Tok3,
}
Then use the enums to create a new lexer:
let r: Rlex<MyState, MyToken> = Rlex::new("hello", MyState::Init);
If you don't care to collect tokens or track state, use DefaultState and DefaultToken upon initialization.
let r: Rlex<DefaultState, DefaultToken> = Rlex::new("hello", DefaultState::Default);
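Putting it together, here's a minimal sketch. It assumes the types are exported from the crate root, that the cursor methods take &mut self, and that at_end() returns a bool:

use rlex::{DefaultState, DefaultToken, Rlex};

fn main() {
    // Walk the whole input one char at a time.
    let mut r: Rlex<DefaultState, DefaultToken> = Rlex::new("hello", DefaultState::Default);
    while !r.at_end() {
        r.next();
    }
}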
r.state(); // Get a reference to the current state
r.state_set(MyState::Open); // Set a new state
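State is handy for branching while you scan. A small sketch, using the enums from the quick start (state() is listed as returning a reference, so it is compared here by dereferencing):

let mut r: Rlex<MyState, MyToken> = Rlex::new("<a>", MyState::Init);
if *r.state() == MyState::Init {
    // Flip to Open once we start consuming input.
    r.state_set(MyState::Open);
}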
r.pos(); // Current position
r.mark(); // Mark current position
r.goto_start(); // Go to start of input
r.goto_end(); // Go to end of input
r.goto_pos(2); // Go to a specific position
r.goto_mark(); // Go back to marked position
r.next(); // Move forward one
r.next_by(3); // Move forward by n
r.prev(); // Move backward one
r.prev_by(2); // Move backward by n
r.next_until('x'); // Advance until char
r.prev_until('x'); // Rewind until char
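A navigation sketch (positions are assumed to be 0-based, with the cursor starting on the first char; exact return values are not shown here):

let mut r: Rlex<DefaultState, DefaultToken> = Rlex::new("hello world", DefaultState::Default);
r.next_by(6);      // move the cursor forward to 'w'
r.mark();          // remember this spot
r.goto_end();      // jump to the last char
r.goto_mark();     // ...and come back to the mark
r.next_until('d'); // advance until the next 'd'
r.goto_start();    // rewind to the beginning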
r.peek(); // Look at next char
r.peek_by(2); // Look ahead by n
r.peek_back(); // Look behind one
r.peek_back_by(3); // Look back by n
r.char(); // Get current char
r.next_is('x'); // Check if next char is x
r.next_by_is('x', 2); // Check if x is n chars ahead
r.prev_is('x'); // Check if previous char is x
r.prev_by_is('x', 3); // Check if x is n chars behind
r.at_start(); // At beginning?
r.at_end(); // At end?
r.at_mark(); // At previously marked spot?
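Peeks and the *_is / at_* checks make lookahead decisions easy. A sketch (assuming the check methods return bool):

let mut r: Rlex<DefaultState, DefaultToken> = Rlex::new("abc", DefaultState::Default);
r.next();                // move onto 'b'
let _current = r.char(); // the char under the cursor
let _ahead = r.peek();   // the next char, without moving
if r.next_is('c') && r.prev_is('a') {
    // 'b' is surrounded by the chars we expect
}
if !r.at_end() {
    r.next(); // safe to keep moving
}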
r.src(); // Read the lexer source
r.toks(); // Get a reference to the collected tokens
r.str_from_mark(); // Slice from mark to current
r.str_from_start(); // Slice from start to current
r.str_from_end(); // Slice from current to end
r.str_from_collection(); // Convert the collection into a slice
r.str_from_rng(0, 2); // Index-based slice from source
r.is_in_quote(); // Returns true if current position is inside a quote block
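Because the original string is retained, slices come straight from the source. A sketch (assuming the str_from_* methods return string slices and positions are 0-based):

let mut r: Rlex<DefaultState, DefaultToken> = Rlex::new("key=value", DefaultState::Default);
r.next_until('=');              // walk up to the separator
let _key = r.str_from_start();  // everything from the start to the cursor
r.mark();
r.goto_end();
let _value = r.str_from_mark(); // everything from the mark to the cursor
let _prefix = r.str_from_rng(0, 2); // an index-based slice of the source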
r.collect(); // Collect the character at the current position
r.collect_pop(); // Get the newest character added to the collection
r.collect_push('a'); // Push a character of your choice into the collection
r.collect_clear(); // Clears the current collection
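The character collection works as a scratch buffer while scanning. A sketch (collect_pop is assumed to return the most recently collected char):

let mut r: Rlex<DefaultState, DefaultToken> = Rlex::new("abc", DefaultState::Default);
r.collect();            // stash the current char ('a')
r.next();
r.collect();            // stash 'b'
r.collect_push('!');    // append an arbitrary char of your own
let _s = r.str_from_collection(); // everything gathered so far, as a slice
r.collect_clear();      // start over with an empty collection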
r.token_push(MyToken::Tok1); // Push a token into the collection
r.token_pop(); // Remove and obtain the last token in the collection
r.token_prev(); // Peek at the last token in the collection
r.token_consume(); // Consumes the lexer and outputs the collected tokens
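Tokens accumulate the same way, using the enums from the quick start. A sketch (token_consume is listed as consuming the lexer and returning the collected tokens):

let mut r: Rlex<MyState, MyToken> = Rlex::new("abc", MyState::Init);
r.token_push(MyToken::Tok1);
r.token_push(MyToken::Tok2);
let _last = r.token_prev();   // peek at the most recent token
let _popped = r.token_pop();  // remove and take it
let _toks = r.token_consume(); // consume the lexer, keeping the tokens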
r.trace_on(); // Turn on the trace system
r.trace_off(); // Turn off the trace system
r.trace_emit(); // Get the trace as a String
r.trace_clear(); // Clear the trace
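The trace system records lexer activity for debugging. A sketch (trace_emit is listed as returning the trace as a String):

let mut r: Rlex<DefaultState, DefaultToken> = Rlex::new("hello", DefaultState::Default);
r.trace_on();              // start recording
r.next();
r.next_until('o');
let _log = r.trace_emit(); // the recorded activity, as a String
r.trace_clear();           // wipe the trace
r.trace_off();             // stop recording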
This project is licensed under the MIT License.