| Crates.io | rsedn |
| lib.rs | rsedn |
| version | 0.2.0 |
| created_at | 2024-08-10 15:52:21.449901+00 |
| updated_at | 2024-08-12 19:15:26.861623+00 |
| description | A Rust library for reading and writing EDN (Extensible Data Notation) data. |
| homepage | https://github.com/OJarrisonn/rsedn |
| repository | https://github.com/OJarrisonn/rsedn |
| max_upload_size | |
| id | 1332494 |
| size | 48,666 |
rsedn is a crate that implements a subset (at the moment) of Extensible Data Notation:
- `( )` lists
- `[ ]` vectors
- `{ }` maps
- `#{ }` sets
- symbols (the full edn symbol specification)
- `:keywords`
- `#user/tags`
- `#_discard` (not being discarded, just parsed)
- `nil`
- strings (with `\uNNNN` unicode sequences)
- `#inst` and `#uuid`

rsedn usage is split into 4 steps (a sketch of the full pipeline follows the Source section below):
1. Produce a `Source` from a `&str` (use `rsedn::source_from_str`)
2. Lex the `Source` to produce a `Vec<Lexeme>` (use `rsedn::lex_source`)
3. Parse each `Lexeme` to produce a `Token` (use `rsedn::parse_lexeme`)
4. Build a `TokenStream` (use `LinkedList::iter`) and consume it to produce a `Form` (use `rsedn::consume_token_stream`)

## Source

A wrapper around the source code; we always refer to source code as `&'source str`. It can later be used to get the span (the actual text) of some `Lexeme`.
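Putting the four steps together, a minimal sketch of the pipeline could look like the following. The function and type names come from the step list above, but the exact signatures, lifetimes, and error handling are assumptions, not the crate's verbatim API; consult the crate documentation for the real thing.

```rust
use std::collections::LinkedList;

// A minimal sketch of the 4-step pipeline described above.
// ASSUMPTION: the signatures below (arguments, return types, use of Result)
// are guesses based on the step list, not the crate's verbatim API.
fn read(input: &str) {
    // Step 1: wrap the &str in a Source.
    let source = rsedn::source_from_str(input);

    // Step 2: lex the Source into a Vec<Lexeme> (coordinates only, no text).
    let lexemes = rsedn::lex_source(&source);

    // Step 3: classify each Lexeme into a Token, which stores the span.
    // Parsing a lexeme may fail with a TokenizationError.
    let tokens: LinkedList<_> = lexemes
        .into_iter()
        .map(|lexeme| rsedn::parse_lexeme(&source, lexeme).expect("bad lexeme"))
        .collect();

    // Step 4: iterate the tokens as a stream and consume it into a Form.
    // Consuming the stream may fail with a ParsingError.
    let mut stream = tokens.iter();
    let form = rsedn::consume_token_stream(&mut stream).expect("bad token stream");

    println!("{form:?}"); // ASSUMPTION: Form implements Debug.
}
```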
## Lexeme

Stores the coordinates of a piece of meaningful source code (just coordinates, no text). It doesn't classify it; it just knows that the given piece of text may have a meaning.
For instance, `(println)` has 3 lexemes: `(`, `println`, and `)`; `(def var 5)` has 5 lexemes: `(`, `def`, `var`, `5`, and `)`.
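To make "just coordinates, no text" concrete, a lexeme can be pictured as a pair of byte offsets into the source. This is an illustrative shape only, not the crate's actual `Lexeme` definition:

```rust
// Illustrative only: the crate's real Lexeme may store different coordinates
// (line/column, lengths, etc.); the point is that no text is copied.
struct LexemeSketch {
    start: usize, // byte offset of the first character of the span
    end: usize,   // byte offset one past the last character
}

// Recovering the span borrows from the original source instead of cloning it.
fn span<'source>(source: &'source str, lexeme: &LexemeSketch) -> &'source str {
    &source[lexeme.start..lexeme.end]
}
```

Under this model, the `println` lexeme of `(println)` would cover offsets 1..8.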
## Token

A wrapper around a lexeme that stores the span and what kind of token it is. It classifies the lexeme by reading the span and checking the syntax of the corresponding piece of source code.
Producing a `Token` may yield a `TokenizationError` when the lexeme isn't syntactically valid.
## Form

The final step: it's built from one or more tokens and represents an edn form: a list, a vector, a symbol, etc.
Almost no manipulation is done to the source text, except for parsing text into values such as `i64`, `f64`, `bool`, and `String` for the corresponding edn forms.
Forms are what you consume from this library.
Producing forms may yield a `ParsingError` when the tokens in the token stream aren't the expected ones.
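As an illustration of what consuming a `Form` might look like, here is a hypothetical match. The enum and its variant names are invented for this sketch and will not line up with the crate's actual `Form` type; they only mirror the kinds of forms mentioned above:

```rust
// Hypothetical Form shape, invented for illustration; the crate's real enum
// differs. It only mirrors the kinds of forms this README mentions.
enum FormSketch {
    List(Vec<FormSketch>),
    Vector(Vec<FormSketch>),
    Symbol(String),
    Int(i64),
    Float(f64),
    Bool(bool),
    Str(String),
}

fn describe(form: &FormSketch) -> String {
    match form {
        FormSketch::List(items) => format!("list of {} items", items.len()),
        FormSketch::Vector(items) => format!("vector of {} items", items.len()),
        FormSketch::Symbol(name) => format!("symbol {name}"),
        FormSketch::Int(n) => format!("integer {n}"),
        FormSketch::Float(x) => format!("float {x}"),
        FormSketch::Bool(b) => format!("bool {b}"),
        FormSketch::Str(s) => format!("string {s:?}"),
    }
}
```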