Crates.io | leo-test-framework |
lib.rs | leo-test-framework |
version | 2.4.0 |
source | src |
created_at | 2021-06-01 03:14:06.180912 |
updated_at | 2024-12-03 18:20:40.421976 |
description | The testing framework for the Leo programming language |
homepage | https://leo-lang.org |
repository | https://github.com/ProvableHQ/leo |
max_upload_size | |
id | 404562 |
size | 164,426 |
This directory includes the code for the Leo Test Framework.
You would first create a Rust test file inside the folder of the part of the compiler you want to test, as the test framework tests are run by the Rust test harness.
Then you would create a struct that represents a `Namespace`, for which you have to implement the following:

Each namespace must have a `parse_type` function that returns a `ParseType`. There are several kinds of `ParseType`s, and the one you return determines how a test file's content is split into individual test cases.

Each namespace must also have a function that runs a test and dictates how you want the tests for that namespace to work. To make running a test possible, you are given information about the test file, such as its name, content, and path. This function may return any type of output to be written to an expectation file, as long as it is serializable.
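For orientation, here is a minimal sketch of what a namespace implementation might look like. The import paths, the fields of the `Test` struct, the `ParseType` variant, and the exact `run_test` signature are assumptions made for illustration and should be checked against the crate's source; a real namespace would parse `test.content` with the component under test rather than just summarizing it.

```rust
use leo_test_framework::{
    runner::{Namespace, ParseType}, // NOTE: illustrative import paths
    Test,
};
use serde_json::Value;

struct LineCountNamespace; // hypothetical namespace, for illustration only

impl Namespace for LineCountNamespace {
    // Tell the framework how to split a test file into test cases.
    // `Whole` is an assumed variant meaning "treat the file as one case".
    fn parse_type(&self) -> ParseType {
        ParseType::Whole
    }

    // Run a single test case. `test.content` holds the test source; whatever
    // serializable value is returned gets written to the expectation file.
    fn run_test(&self, test: Test) -> Result<Value, String> {
        // Placeholder work for illustration: a real namespace would parse the
        // content and serialize the resulting AST or errors instead.
        Ok(serde_json::json!({
            "name": test.name,
            "lines": test.content.lines().count(),
        }))
    }
}
```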
Then you would create a struct that represents a `Runner`, for which you have to implement the following:

Each test file only needs one runner; it resolves a namespace name (a string) to the `Namespace` whose tests should be run. For example:
```rust
struct ParseTestRunner;

impl Runner for ParseTestRunner {
    fn resolve_namespace(&self, name: &str) -> Option<Box<dyn Namespace>> {
        Some(match name {
            "Parse" => Box::new(ParseNamespace),
            "ParseExpression" => Box::new(ParseExpressionNamespace),
            "ParseStatement" => Box::new(ParseStatementNamespace),
            "Serialize" => Box::new(SerializeNamespace),
            "Input" => Box::new(InputNamespace),
            "Token" => Box::new(TokenNamespace),
            _ => return None,
        })
    }
}
```
A Rust test function that calls the framework with the runner, as well as the name of the test directory, is the last thing necessary. For example:
```rust
#[test]
pub fn parser_tests() {
    // The second argument indicates the directory where the tests (.leo files)
    // are found (tests/parser).
    leo_test_framework::run_tests(&ParseTestRunner, "parser");
}
```
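With this in place, running the package's tests (e.g. `cargo test -p leo-parser`, assuming the test file lives in the parser crate) executes every `.leo` test under `tests/parser` through the namespaces registered in `ParseTestRunner`.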
To regenerate the expectation for a single test, you can simply remove the corresponding `.out` file in the `tests/expectations` directory; doing so will cause the output to be regenerated the next time the test runs. There is an easier way to mass-update expectations as well, discussed in the next section.
To make several aspects of the test framework easier to work with, there are several environment variables:

- `TEST_FILTER` - runs only the tests in the given directory, or the exact given test. `TEST_FILTER="address" cargo test -p leo-compiler` will run all tests located in `tests/compiler/address`. `TEST_FILTER="address/branch.leo" cargo test -p leo-compiler` will run the test located in `tests/compiler/address/branch.leo`.
- `REWRITE_EXPECTATIONS` - if set, clears all current expectations for the tests being run and regenerates them.

To set environment variables, please look at the documentation specific to your shell (bash/powershell/cmd/fish/etc.).

NOTE: Don't forget to clear the environment variable after running with that setting, or set a temporary environment variable if your shell supports it.
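For example, in a bash-like shell you can set the variable for a single command only: `REWRITE_EXPECTATIONS=1 cargo test -p leo-compiler` regenerates the expectations for that run without leaving the variable set (the package name here is just an example, and any value works for `REWRITE_EXPECTATIONS` since the framework only checks whether it is set).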
The test framework is also used to easily benchmark Leo, by running all compiler tests that have the `Pass` expectation. Additionally, you can create further benchmark tests by adding them to the test directory and giving them the namespace `Bench`, as illustrated below.
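As a rough illustration, a benchmark test file would declare its namespace in the header comment at the top of the file. The exact keys shown here (`namespace`, `expectation`) mirror the convention used by other Leo test files and should be checked against existing tests in the repository:

```
/*
namespace: Bench
expectation: Pass
*/

// ... the Leo program to benchmark goes here ...
```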
There are currently four different benchmark suites to run. The command to run the benchmarks is `cargo bench -p leo-test-framework`, which by default runs all of the benchmark suites. To run a specific suite, append its name, e.g. `cargo bench -p leo-test-framework parse`.
NOTE: Benchmarks are affected by the `TEST_FILTER` environment variable. Benchmark results are also machine-dependent and can be impacted by other applications running at the same time.