R3BL TUI library & suite of apps focused on developer productivity
We are working on building command line apps in Rust which have rich text user interfaces (TUI). We want to lean into the terminal as a place of productivity, and build all kinds of awesome apps for it.
🔮 Instead of just building one app, we are building a library to enable any kind of rich TUI development w/ a twist: taking concepts that work really well for the frontend mobile and web development world and re-imagining them for TUI & Rust.

🌎 We are building apps to enhance developer productivity & workflows.
The idea here is not to rebuild tmux in Rust (separate processes mux'd onto a single terminal window). Rather it is to build a set of integrated "apps" (or "tasks") that run in the same process that renders to one terminal window.

All the crates in the r3bl-open-core repo provide lots of useful functionality to help you build TUI (text user interface) apps, along w/ general niceties & ergonomics that all Rustaceans 🦀 can enjoy 🎉.
You can build fully async TUI (text user interface) apps with a modern API that brings the best of the web frontend development ideas to TUI apps written in Rust:
And since this is using Rust and Tokio you get the advantages of concurrency and parallelism built-in. No more blocking the main thread for user input, for async middleware, or even rendering 🎉.
This framework is loosely coupled and strongly coherent, meaning that you can pick and choose whatever pieces you would like to use w/out having the cognitive load of having to grok all the things in the codebase. It's more like a collection of mostly independent modules that work well w/ each other, but know very little about each other.
Please check out the changelog to see how the library has evolved over time.
To learn how we built this crate, please take a look at the following resources.
Once you've cloned the repo to a folder on your computer, you can run the examples you see in the video with the following commands:
cd tui/examples
cargo run --release --example demo
These examples cover the entire surface area of the TUI API. You can also take a look at the tests in the source (`tui/src/`) as well. A single nu shell script `run` in the `tui` sub folder of the repo allows you to easily build, run, test, and do so much more with the repo.

The `run` script works on Linux, macOS, and Windows. On Linux and macOS, you can simply run `./run` instead of `nu run`.
| Command | Description |
|---|---|
| `nu run help` | See all the commands you can pass to the `run` script |
| `nu run examples` | Run all the examples |
| `nu run release-examples` | Run all the examples with the release binary |
| `nu run examples-with-flamegraph-profiling` | Run the examples and generate a flamegraph at the end so you can profile the performance of the app. This video has a walkthrough of how to use this |
| `nu run log` | View the log output. This video has a walkthrough of how to use this |
| `nu run build` | Build |
| `nu run clean` | Clean |
| `nu run test` | Run tests |
| `nu run clippy` | Run clippy |
| `nu run docs` | Build docs |
| `nu run serve-docs` | Serve docs over VSCode Remote SSH session |
| `nu run rustfmt` | Run rustfmt |
The following commands will watch for changes in the source folder and re-run:
| Command | Description |
|---|---|
| `nu run watch-all-tests` | Watch all tests |
| `nu run watch-one-test <test_name>` | Watch one test |
| `nu run watch-clippy` | Watch clippy |
| `nu run watch-macro-expansion-one-test <test_name>` | Watch macro expansion for one test |
There's also a `run` script at the top level folder of the repo. It is intended to be used in a CI/CD environment w/ all the required arguments supplied, or in interactive mode, where the user will be prompted for input.
| Command | Description |
|---|---|
| `nu run all` | Run all the tests, linting, formatting, etc. in one go. Used in CI/CD |
| `nu run build-full` | Build all the crates in the Rust workspace. This also installs all the required prerequisite tools needed to work with this crate (what `install-cargo-tools` does), clears the cargo cache, cleans, and then does a really clean build |
| `nu run install-cargo-tools` | Install all the required prerequisite tools needed to work with this crate (things like `cargo-deny` and `flamegraph` will all be installed in one go) |
| `nu run check-licenses` | Use `cargo-deny` to audit all licenses used in the Rust workspace |
Here are some framework highlights:

- Communication between background tasks and the main thread happens over a `tokio::mpsc` channel and signals.

Here's a video of a prototype of the R3BL CMDR app built using this TUI engine.
main.rs
┌────────────────────────────────────────────────────┐
│                          ┌──────────────────┐      │
│  GlobalData ────────────►│ window size      │      │
│  HasFocus                │ offscreen buffer │      │
│  ComponentRegistryMap    │ state            │      │
│  App & Component(s)      │ channel sender   │      │
│                          └──────────────────┘      │
└────────────────────────────────────────────────────┘
Versions of this crate <= `0.3.10` used shared memory to communicate between the background threads and the main thread. This was done using the async `Arc<RwLock<T>>` from tokio. The state storage, mutation, and subscription (on change handlers) were all managed by the `r3bl_redux` crate. The use of the Redux pattern, inspired by React, brought with it a lot of overhead, both mentally and in terms of performance (since state changes needed to be cloned every time a change was made, and `memcpy` or `clone` is expensive).
Versions > `0.3.10` use message passing to communicate between the background threads and the main thread, using the `tokio::mpsc` channel (also async). This is a much easier and more performant model given the nature of the engine and the use cases it has to handle. It also has the benefit of providing an easy way to attach protocol servers in the future over various transport layers (eg: TCP, IPC, etc.); these protocol servers can be used to manage a connection between a process running the engine, and other processes running on the same host or on other hosts, in order to handle use cases like synchronizing rendered output, or state.
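To make this concrete, here's a minimal sketch of the message-passing model using a plain `tokio::sync::mpsc` channel. The `AppSignal` enum and its variants here are made up purely for illustration; the real engine defines its own channel payload types.

```rust
use tokio::sync::mpsc;

// Hypothetical signal type; the real engine defines its own channel payloads.
#[derive(Debug)]
enum AppSignal {
    DataFetched(String),
    Tick,
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<AppSignal>(100);

    // A background task does some work and sends the result to the main loop.
    let tx_bg = tx.clone();
    tokio::spawn(async move {
        // ... eg: make an HTTP request here ...
        let _ = tx_bg.send(AppSignal::DataFetched("response body".into())).await;
    });

    // The main loop owns the state; no locks are needed because only this
    // task ever mutates it.
    let mut state: Vec<String> = vec![];
    while let Some(signal) = rx.recv().await {
        match signal {
            AppSignal::DataFetched(body) => {
                state.push(body);
                // ... trigger a re-render here ...
                break; // exit so this example terminates
            }
            AppSignal::Tick => {}
        }
    }
}
```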
Here are some papers outlining the differences between message passing and shared memory for communication between threads.
Dependency injection is used to inject the required resources into the `main_event_loop` function. This allows for easy testing, and for modularity and extensibility in the codebase. The `r3bl_terminal_async` crate shares the same infrastructure for input and output devices. In fact, the [r3bl_core::InputDevice] and [r3bl_core::OutputDevice] structs are in the `r3bl_core` crate. These devices make it possible to swap out the underlying `stdin` and `stdout` while preserving all the existing code and functionality. This can produce some interesting headless apps in the future, where the UI might be delegated to a window using eGUI or iced-rs or wgpu.

There is a clear separation of concerns in this module. To illustrate what goes where, and how things work, let's look at an example that puts the main event loop front and center & deals w/ how the system handles an input event (key press or mouse).
The app is started by running the binary (eg: via `cargo run`).

In band input event:

  Input ──► [TerminalWindow]
  Event        ▲       │
               │       ▼            [ComponentRegistryMap] stores
               │     [App] ───────► [Component]s at 1st render
               │       │
               │       │
               │       │        ┌─────► id=1 has focus
               │       │        │
               │       ├──────► [Component] id=1 ──────┐
               │       │                               │
               │       └──────► [Component] id=2       │
               │                                       │
        default handler                                │
               ▲                                       │
               └───────────────────────────────────────┘
Out of band app signal:

  App
  Signal ──► [App]
               │
               │
               └───► Update state
                     main thread rerender
                        │
                        │
                        ├────► [App]
                        │
                        └────► [Component]s
Let's trace the journey through the diagram when an input event is generated by the user (eg: a key press, or mouse event). When the app is started via `cargo run`, it sets up a main loop, lays out all the 3 components (sizes and positions), and then paints them. Then it asynchronously listens for input events (no threads are blocked). When the user types something, this input is processed by the main loop of [TerminalWindow].
Let's say that the [Component] with `id=1` currently has focus. An input event is processed by the main thread in the main event loop. This is a synchronous operation, and thus it is safe to mutate state directly in this code path; this is why there is no sophisticated locking in place.
This is great for input events which are generated by the user using their keyboard or mouse. These are all considered "in-band" events or signals, which have no delay or asynchronous behavior. But what about "out of band" signals or events, which do have unknown delays and asynchronous behaviors? These are important to handle as well. For example, if you want to make an HTTP request, you don't want to block the main thread. In these cases you can use a `tokio::mpsc` channel to send a signal from a background thread to the main thread. This is how you can handle "out of band" events or signals.
To provide support for these "out of band" events or signals, the [App] trait has a method called [App::app_handle_signal]. This is where you can handle signals that are sent from background threads. One of the arguments to this associated function is a `signal`. This signal needs to contain all the data that is needed for a state mutation to occur on the main thread. So the background thread has the responsibility of doing some work (eg: making an HTTP request), getting some information as a result, and then packaging that information into a `signal` and sending it to the main thread. The main thread then handles this signal by calling the [App::app_handle_signal] method. This method can then mutate the state of the [App] and return an [EventPropagation] enum indicating whether the main thread should repaint the UI or not.
So far we have covered what happens when the [App] receives a signal. Who sends this signal? Who actually creates the `tokio::spawn` task that sends this signal? This can happen anywhere in the [App] and [Component]. Any code that has access to [GlobalData] can use the [r3bl_core::send_signal!] macro to send a signal in a background task. However, only the [App] can receive the signal and do something with it, which is usually to apply the signal to update the state and then tell the main thread to repaint the UI.
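Here's a tiny self-contained sketch of that flow, using simplified stand-ins for the real types (the `State`, `AppSignal`, and `EventPropagation` below are illustrative only, not the crate's actual definitions): a signal arrives on the main thread, is applied to the state, and the handler reports whether a repaint is needed.

```rust
// Simplified stand-ins just to show the shape of signal handling.
struct State { count: usize }

enum AppSignal { Add(usize) }

enum EventPropagation { ConsumedRerender, Propagate }

// Roughly what an `app_handle_signal` impl does: apply the signal to the
// state and tell the main loop whether to repaint.
fn app_handle_signal(state: &mut State, signal: AppSignal) -> EventPropagation {
    match signal {
        AppSignal::Add(n) => {
            state.count += n;
            EventPropagation::ConsumedRerender
        }
    }
}

fn main() {
    let mut state = State { count: 0 };
    let prop = app_handle_signal(&mut state, AppSignal::Add(2));
    assert!(matches!(prop, EventPropagation::ConsumedRerender));
    assert_eq!(state.count, 2);
}
```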
Now that we have seen this whirlwind overview of the life of an input event, let's look at the details in each of the sections below.
The main building blocks of a TUI app are the [TerminalWindow], the [App], and the [Component]s that the app manages.

Inside of your [App], you can use flexbox-like layout and CSS-like styling to compose your UI. Typically your [App] will look like this:
#[derive(Default)]
pub struct AppMain {
// Might have some app data here as well.
// Or `_phantom: std::marker::PhantomData<(State, AppSignal)>,`
}
As we look at [Component] & [App] more closely, we will find a curious thing: [ComponentRegistry] (which is managed by the [App]). The reason this exists is for input event routing. The input events are routed to the [Component] that currently has focus.
The [HasFocus] struct takes care of this. It provides 2 things:

1. It holds the `id` of a [FlexBox] / [Component] that has focus.
2. It also holds a map of `id` to a position. This is used to represent a cursor (whatever that means to your app & component). This cursor is maintained for each `id`. This allows a separate cursor for each [Component] that has focus, which is needed to build apps like editors and viewers that maintain a cursor position between focus switches.

Another thing to keep in mind is that the [App] and [TerminalWindow] are persistent between re-renders.
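As an illustration, here is a simplified model of the two things [HasFocus] provides (this is not the crate's actual API, just a sketch of the idea): one focused `id`, plus a per-`id` cursor position that survives focus switches.

```rust
use std::collections::HashMap;

type FlexBoxId = u8;
type Position = (u16, u16); // (col, row)

// Simplified stand-in: tracks which component id has focus, and remembers a
// cursor position per id across focus switches.
#[derive(Default)]
struct HasFocusModel {
    focused_id: Option<FlexBoxId>,
    cursor_positions: HashMap<FlexBoxId, Position>,
}

impl HasFocusModel {
    fn set_focus(&mut self, id: FlexBoxId) { self.focused_id = Some(id); }

    fn save_cursor(&mut self, id: FlexBoxId, pos: Position) {
        self.cursor_positions.insert(id, pos);
    }

    fn cursor_for(&self, id: FlexBoxId) -> Position {
        *self.cursor_positions.get(&id).unwrap_or(&(0, 0))
    }
}

fn main() {
    let mut focus = HasFocusModel::default();
    focus.set_focus(1);
    focus.save_cursor(1, (10, 2));
    focus.set_focus(2); // switching focus does not lose id=1's cursor
    assert_eq!(focus.cursor_for(1), (10, 2));
}
```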
[TerminalWindow] gives [App] first dibs when it comes to handling input events. [ComponentRegistry::route_event_to_focused_component] can be used to route events directly to components that have focus. If it punts handling this event, it will be handled by the default input event handler. And if nothing there matches this event, then it is simply dropped.
The R3BL TUI engine uses a high performance compositor to render the UI to the terminal. This ensures that only "pixels" that have changed are painted to the terminal. This is done by creating a concept of `PixelChar`, which represents a single "pixel" in the terminal screen at a given col and row index position. There are only as many `PixelChar`s as there are rows and cols in a terminal screen. And the index maps directly to the position of the pixel in the terminal screen.
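Conceptually, the offscreen buffer is just a 2D grid of pixel characters, one per terminal cell. The following sketch uses simplified stand-in types (not the crate's actual `PixelChar`/`OffscreenBuffer` definitions) to illustrate the idea, including the clipping behavior described further below.

```rust
// Simplified stand-ins for the compositor's data model.
#[derive(Clone, PartialEq, Debug)]
enum Pixel {
    Spacer,
    PlainText { ch: char, fg: u8, bg: u8 }, // the real crate stores full styles
}

// One Vec<Pixel> per row; indices map directly to terminal col/row positions.
struct Offscreen {
    rows: Vec<Vec<Pixel>>,
}

impl Offscreen {
    fn new(width: usize, height: usize) -> Self {
        Self { rows: vec![vec![Pixel::Spacer; width]; height] }
    }

    fn set(&mut self, col: usize, row: usize, p: Pixel) {
        // Out-of-bounds writes are simply clipped, mirroring the engine.
        if let Some(cell) = self.rows.get_mut(row).and_then(|r| r.get_mut(col)) {
            *cell = p;
        }
    }
}

fn main() {
    let mut buf = Offscreen::new(80, 25);
    buf.set(0, 1, Pixel::PlainText { ch: 'j', fg: 7, bg: 0 });
    buf.set(999, 1, Pixel::Spacer); // silently clipped
    assert_eq!(buf.rows[1][0], Pixel::PlainText { ch: 'j', fg: 7, bg: 0 });
}
```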
Here is an example of what a single row of rendered output might look like in the `OffscreenBuffer`. This diagram shows each `PixelChar` in `row_index: 1` of the `OffscreenBuffer`. In this example, there are 80 columns in the terminal screen. This is actual log output generated by the TUI engine when logging is enabled (abbreviated here; columns 18 through 80 are all spacers).
row_index: 1
000 S ░░░░░░░╳░░░░░░░░ 001 P 'j'│fg│bg 002 P 'a'│fg│bg 003 P 'l'│fg│bg 004 P 'd'│fg│bg 005 P 'k'│fg│bg
006 P 'f'│fg│bg 007 P 'j'│fg│bg 008 P 'a'│fg│bg 009 P 'l'│fg│bg 010 P 'd'│fg│bg 011 P 'k'│fg│bg
012 P 'f'│fg│bg 013 P 'j'│fg│bg 014 P 'a'│fg│bg 015 P '▒'│rev 016 S ░░░░░░░╳░░░░░░░░ 017 S ░░░░░░░╳░░░░░░░░
018 S ░░░░░░░╳░░░░░░░░ ... 080 S ░░░░░░░╳░░░░░░░░ spacer [ 0, 16-80 ]
When `RenderOps` are executed and used to create an `OffscreenBuffer` that maps to the size of the terminal window, clipping is performed automatically. This means that it isn't possible to move the caret outside of the bounds of the viewport (terminal window size). And it isn't possible to paint text that is larger than the size of the offscreen buffer. The buffer really represents the current state of the viewport. Scrolling has to be handled by the component itself (an example of this is the editor component).
Each `PixelChar` can be one of 4 things. One of these is plain text: it is represented as `PixelChar::PlainText` and is used to paint the screen via the diffing algorithm, which is smart enough to "stack" styles that appear beside each other for quicker rendering in terminals.

The following diagram provides a high level overview of how apps (that contain components, which may contain components, and so on) are rendered to the terminal screen.
┌────────────────────────────────────┐
│ Container                          │
│                                    │
│  ┌──────────────┐ ┌──────────────┐ │
│  │ Col 1        │ │ Col 2        │ │
│  │              │ │              │ │
│  │              │ │     ─────────┼─┼────► RenderPipeline ──┐
│  │              │ │              │ │                       │
│  │     ─────────┼─┼──────────────┼─┼────► RenderPipeline ─┐│
│  │              │ │              │ │                      ▼▼
│  └──────────────┘ └──────────────┘ │            ┌──────────────────┐
│                                    │            │ OffscreenBuffer  │
└────────────────────────────────────┘            └──────────────────┘
Each component produces a `RenderPipeline`, which is a map of `ZOrder` and `Vec<RenderOps>`. `RenderOps` are the instructions that are grouped together, such as move the caret to a position, set a color, and paint some text.
Inside of each `RenderOps` the caret is stateful, meaning that the caret position is remembered after each `RenderOp` is executed. However, once a new `RenderOps` is executed, the caret position is reset just for that `RenderOps`. Caret position is not stored globally. You should read more about "atomic paint operations" in the `RenderOp` documentation.
Once a set of these `RenderPipeline`s have been generated, typically after the user enters some input event, and that produces a new state which then has to be rendered, they are combined and painted into an `OffscreenBuffer`.
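A rough sketch of that structure, with simplified stand-ins for `ZOrder`, `RenderOp`, and the pipeline map (the real types carry much more information):

```rust
use std::collections::BTreeMap;

// Simplified stand-ins; the real RenderOp set is much richer.
#[derive(Debug)]
enum RenderOp {
    MoveCaret { col: u16, row: u16 },
    SetFgColor(u8),
    PaintText(String),
}

#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
enum ZOrder {
    Normal,
    Glass, // eg: modal dialogs paint here, on top of Normal
}

// A pipeline maps each z-order to the groups of ops painted at that layer.
type Pipeline = BTreeMap<ZOrder, Vec<Vec<RenderOp>>>;

fn main() {
    let mut pipeline: Pipeline = BTreeMap::new();
    pipeline.entry(ZOrder::Normal).or_default().push(vec![
        // Caret position is only meaningful inside this one group of ops.
        RenderOp::MoveCaret { col: 0, row: 1 },
        RenderOp::SetFgColor(2),
        RenderOp::PaintText("hello".into()),
    ]);
    println!("{pipeline:#?}");
}
```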
The `paint.rs` file contains the `paint` function, which is the entry point for all rendering. Once the first render occurs, the `OffscreenBuffer` that is generated is saved to `GlobalSharedState`. The following table shows the various tasks that have to be performed in order to render to an `OffscreenBuffer`. There is a different code path that is taken for ANSI text and plain text (which includes `StyledText`, which is just plain text with a color). Syntax highlighted text is also just `StyledText`.
| UTF-8 | Task |
|---|---|
| Y | Convert `RenderPipeline` to `List<List<PixelChar>>` (`OffscreenBuffer`) |
| Y | Paint each `PixelChar` in `List<List<PixelChar>>` to stdout using `OffscreenBufferPainterImplCrossterm` |
| Y | Save the `List<List<PixelChar>>` to `GlobalSharedState` |
Currently only `crossterm` is supported for actually painting to the terminal. But this process is really simple, making it very easy to swap out other terminal libraries such as `termion`, or even a GUI backend, or some other custom output driver.
Since the `OffscreenBuffer` is cached in `GlobalSharedState`, a diff can be performed for subsequent renders. And only those diff chunks are painted to the screen. This ensures that there is no flicker when the content of the screen changes. It also minimizes the amount of work that the terminal or terminal emulator has to do to put the `PixelChar`s on the screen.
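The diffing idea itself is straightforward; here's a self-contained sketch where cells are plain `char`s instead of styled `PixelChar`s: compare the cached buffer against the new one and emit only the cells that changed.

```rust
// Self-contained sketch of diffing two offscreen buffers.
type Buffer = Vec<Vec<char>>;

/// Returns (col, row, new_char) for every cell that differs.
fn diff(old: &Buffer, new: &Buffer) -> Vec<(usize, usize, char)> {
    let mut changes = vec![];
    for (row, (old_row, new_row)) in old.iter().zip(new).enumerate() {
        for (col, (o, n)) in old_row.iter().zip(new_row).enumerate() {
            if o != n {
                changes.push((col, row, *n));
            }
        }
    }
    changes
}

fn main() {
    let old = vec![vec![' '; 4]; 2];
    let mut new = old.clone();
    new[1][2] = 'x';
    // Only the single changed cell is painted; everything else is skipped.
    assert_eq!(diff(&old, &new), vec![(2, 1, 'x')]);
}
```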
The `EditorComponent` struct can hold data in its own memory, in addition to relying on the state.

- It holds an `EditorEngine` which holds syntax highlighting information, and configuration options for the editor (such as multiline mode enabled or not, syntax highlighting enabled or not, etc.). Note that this information lives outside of the state.
- It also implements the `Component<S, AS>` trait.
- However, the document being edited is stored in the state (in an `EditorBuffer`) and not inside of the `EditorComponent` itself.
  - The state implements the trait bound `HasEditorBuffers`, which is where the document data is stored (the key is the id of the flex box in which the editor component is placed).
  - The `EditorBuffer` contains the text content in a `Vec` of `UnicodeString`, where each line is represented by a `UnicodeString`. It also contains the scroll offset, caret position, and file extension for syntax highlighting.

In other words:

- `EditorEngine` -> This goes in `EditorComponent`
- `EditorBuffer` -> This goes in the `State`
Here are the connection points w/ the impl of `Component<S, AS>` in `EditorComponent`:

- `handle_event(global_data: &mut GlobalData<S, AS>, input_event: InputEvent, has_focus: &mut HasFocus)`
  - Can simply relay the arguments to `EditorEngine::apply(state.editor_buffer, input_event)`, which will return another `EditorBuffer`.
  - The updated buffer can then be applied to the state (eg: `UpdateEditorBuffer(EditorBuffer)`).
- `render(global_data: &mut GlobalData<S, AS>, current_box: FlexBox, surface_bounds: SurfaceBounds, has_focus: &mut HasFocus)`
  - Can simply relay the arguments to `EditorEngine::render(state.editor_buffer)`, which will return a `RenderPipeline`.
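Here's a small sketch of that separation using simplified stand-in types (the trait and structs below are illustrative, not the crate's actual `HasEditorBuffers`/`EditorBuffer` definitions): the document lives in the state, keyed by component id, while the component only owns engine-style configuration.

```rust
use std::collections::HashMap;

// Simplified stand-ins: the document lives in the state, the engine-like
// configuration lives in the component.
#[derive(Default, Clone)]
struct EditorBufferModel {
    lines: Vec<String>, // real crate: Vec<UnicodeString>, plus scroll/caret
}

trait HasEditorBuffersModel {
    fn buffer_mut(&mut self, id: u8) -> &mut EditorBufferModel;
}

#[derive(Default)]
struct State {
    editor_buffers: HashMap<u8, EditorBufferModel>,
}

impl HasEditorBuffersModel for State {
    fn buffer_mut(&mut self, id: u8) -> &mut EditorBufferModel {
        self.editor_buffers.entry(id).or_default()
    }
}

// The component only owns engine-like config, never the document itself.
struct EditorComponentModel {
    syntax_highlighting: bool,
}

fn main() {
    let mut state = State::default();
    let component = EditorComponentModel { syntax_highlighting: true };
    state.buffer_mut(1).lines.push("hello".into());
    assert!(component.syntax_highlighting);
    assert_eq!(state.editor_buffers[&1].lines.len(), 1);
}
```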
Definitions:

- `Caret` - the block that is visually displayed in a terminal which represents the insertion point for whatever is in focus. While only one insertion point is editable for the local user, there may be multiple of them, in which case there has to be a way to distinguish a local caret from a remote one (this can be done w/ bg color).
- `Cursor` - the global "thing" provided in terminals that shows, usually by blinking, where the cursor is. This cursor is moved around, and then paint operations are performed on various different areas in a terminal window to paint the output of render operations.
There are two ways of showing cursors, which are quite different (each w/ very different constraints):

1. Using a global terminal cursor (we don't use this). The terminal cursor is moved around the screen (eg: via a `MoveTo(col, row), SetColor, PaintText(...)` sequence) in order to paint content.
2. Paint the character at the cursor w/ the colors inverted (or some other bg color), giving the visual effect of a cursor.
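As an illustration of the second approach, here's a minimal crossterm sketch that paints a caret by printing the cell in reverse video instead of relying on the global terminal cursor (the coordinates and text are arbitrary):

```rust
use std::io::{stdout, Write};

use crossterm::{
    cursor::MoveTo,
    queue,
    style::{Attribute, Print, SetAttribute},
};

fn main() -> std::io::Result<()> {
    let mut out = stdout();
    // Paint "hi" at (col=0, row=0), then draw a caret at col=2 by printing
    // the cell in reverse video; no global terminal cursor is involved.
    queue!(
        out,
        MoveTo(0, 0),
        Print("hi"),
        MoveTo(2, 0),
        SetAttribute(Attribute::Reverse),
        Print(' '),
        SetAttribute(Attribute::Reset)
    )?;
    out.flush()
}
```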
A modal dialog box is different than a normal reusable component, since it isn't activated or positioned by the layout engine ([FlexBox]es); it has to be triggered explicitly and shown on top of everything else. So this activation trigger must be done at the [App] trait impl level (in the `app_handle_event()` method). Also, when this trigger is detected, the handler has to set up the dialog (its title and text) and return an `EventPropagation::ConsumedRerender`, which will re-render the UI w/ the dialog box on top.

There is a question about where the response from the user (once a dialog is shown) goes. This seems as though it would be different in nature from an `EditorComponent`, but it is the same. Here's why:
- The `EditorComponent` is always updating its buffer based on user input, and there's no "handler" for when the user performs some action on the editor. The editor needs to save all the changes to the buffer to the state. This requires the trait bound `HasEditorBuffers` to be implemented by the state.
- The dialog box component works the same way: it saves its data to the state via the `HasDialogBuffers` trait bound. This will hold stale data once the dialog is dismissed or accepted, but that's ok since the title and text should always be set before it is shown.
- An alternative is to save this data in `ComponentRegistry::user_data`. And it is possible for `handle_event()` to return an `EventPropagation::ConsumedRerender` to make sure that changes are re-rendered. This approach may have other issues related to having both immutable and mutable borrows at the same time to some portion of the component registry if one is not careful.

When creating a new dialog box component, two callback functions are passed in:
- `on_dialog_press_handler()` - this will be called if the user chooses no, or yes (w/ their typed text).
- `on_dialog_editors_changed_handler()` - this will be called if the user types something into the editor.

So far we have covered the use case for a simple modal dialog box. In order to provide auto-completion capabilities, via some kind of web service, there needs to be a slightly more complex version of this. This is where the `DialogEngineConfigOptions` struct comes in. It allows us to create a dialog component and engine to be configured w/ the appropriate mode - simple or autocomplete.
In autocomplete mode, an extra "results panel" is displayed, and the layout of the dialog is different on the screen. Instead of being in the middle of the screen, it starts at the top of the screen. The callbacks are the same.
Crates like `reqwest` and `hyper` (which is part of the Tokio ecosystem) will work. Here's a link that shows the pros and cons of using each.
The code for parsing and syntax highlighting is in [try_parse_and_highlight].
A custom Markdown parser is provided to support some extensions over the standard Markdown syntax. The parser code is in the [parse_markdown()] function. Here are some of the extensions:

- Metadata title (eg: `@title: <title_text>`). Similar to front matter.
- Metadata tags (eg: `@tags: <tag1>, <tag2>`).
- Metadata authors (eg: `@authors: <author1>, <author2>`).
- Metadata date (eg: `@date: <date>`).

Another change is support for smart lists. These are lists that span multiple lines of text, and indentation levels are tracked. This information is used to render the list items in a way that is visually appealing.
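Returning to the metadata extensions above, here's a minimal nom sketch (not the crate's actual parser) for one of those lines, assuming the `@title: <title_text>` form shown earlier:

```rust
use nom::{
    bytes::complete::tag,
    character::complete::{not_line_ending, space0},
    IResult,
};

/// Parses a line like `@title: Hello world` and returns the title text.
fn parse_title(input: &str) -> IResult<&str, &str> {
    let (input, _) = tag("@title:")(input)?; // the metadata marker
    let (input, _) = space0(input)?;         // optional whitespace
    not_line_ending(input)                   // the rest of the line is the title
}

fn main() {
    let (_rest, title) = parse_title("@title: Hello world").unwrap();
    assert_eq!(title, "Hello world");
}
```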
Also, the `syntect` crate is still used by the editor component [EditorEngineApi::render_engine] to syntax highlight the text inside code blocks of Markdown documents.
An alternative approach using the `markdown-rs` crate was considered, but we decided to implement our own parser using `nom`, since it is streaming and uses less CPU and memory.
Unicode is supported (to an extent); there are some caveats. The [r3bl_core::UnicodeString] struct has lots of great information on graphemes, and on what is supported and what is not.
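For example, what a user perceives as one character can span multiple `char`s; grapheme-cluster segmentation (shown here with the unicode-segmentation crate purely for illustration) is what makes caret movement and display width behave correctly:

```rust
use unicode_segmentation::UnicodeSegmentation;

fn main() {
    // "é" here is 'e' plus a combining accent: two chars, one grapheme cluster.
    let s = "e\u{0301}xample";
    assert_eq!(s.chars().count(), 8);
    assert_eq!(s.graphemes(true).count(), 7);
    // Caret movement and display-width logic need to step by graphemes,
    // not by chars or bytes, which is the kind of detail UnicodeString handles.
}
```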
An implementation of the lolcat color wheel is provided. Here's an example.

use r3bl_core::*;
use r3bl_tui::*;

// Build a color wheel with a given speed, seed, and seed delta.
let mut lolcat = LolcatBuilder::new()
    .set_color_change_speed(ColorChangeSpeed::Rapid)
    .set_seed(1.0)
    .set_seed_delta(1.0)
    .build();

// Colorize some text into styled texts.
let content = "Hello, world!";
let unicode_string = UnicodeString::from(content);
let lolcat_mut = &mut lolcat;
let st = lolcat_mut.colorize_to_styled_texts(&unicode_string);

// Advance the color wheel.
lolcat.next_color();
This [r3bl_core::Lolcat] that is returned by `build()` is safe to re-use; until the color wheel is advanced (via `next_color()`), the `render()` function of a component that uses it will produce the same generated colors over and over again.
again.Please report any issues to the issue tracker. And if you have any feature requests, feel free to add them there too ๐.
License: Apache-2.0