rtest

Crates.io: rtest
lib.rs: rtest
version: 0.1.6
source: src
created_at: 2023-08-15 13:24:38.024084
updated_at: 2024-05-12 19:27:49.255374
description: integration test building framework
homepage: https://gitlab.com/xMAC94x/rtest
repository: https://gitlab.com/xMAC94x/rtest
max_upload_size:
id: 945016
size: 149,080
Marcel Märtens (xMAC94x)

documentation: https://docs.rs/rtest

README


rtest - Resource based test framework

There are many unit-test frameworks in Rust. This framework focuses on integration testing, that is, testing external software that is not necessarily written in Rust.

rtest works by using stateful resources. It uses macros to build an executable binary that can handle all your filters and produces a nice output.

Imagine you are hosting a webshop and want to verify it works with integration-tests.

#[derive(rtest_derive::FromContext)]
struct Orderinfo {
    item: String,
}
#[derive(rtest_derive::FromContext)]
struct Order(String);

#[derive(Debug, thiserror::Error)]
pub enum ShopError {
    #[error("{0}")]
    Network(#[from] reqwest::Error),
}
impl rtest::TestError for ShopError {}

const SHOP: &str = "http://shop.example.com";

#[rtest_derive::rtest]
async fn place_order(info: rtest::Resource<Orderinfo>) -> Result<rtest::Resource<Order>, ShopError> {
    let client = reqwest::Client::new();
    let url = format!("{}/v1/order/{}", SHOP, info.item);
    let id = client.post(url).send().await?.text().await?;
    Ok(rtest::Resource::new(Order(id)))
}

#[rtest_derive::rtest]
async fn check_order(order: rtest::Resource<Order>) -> Result<rtest::Resource<Order>, ShopError> {
    let res = reqwest::get(format!("{}/v1/order/{}", SHOP, (*order).0)).await?;
    assert_ne!(res.status(), 404);
    Ok(order)
}

#[rtest_derive::rtest]
async fn cancel_order(rtest::Resource(order): rtest::Resource<Order>) -> Result<(), ShopError> {
    let client = reqwest::Client::new();
    let res = client.delete(format!("{}/v1/order/{}", SHOP, order.0)).send().await?;
    assert_eq!(res.status(), 200);
    Ok(())
}

pub fn main() -> std::process::ExitCode {
    let water = Orderinfo {
        item: "water".to_string(),
    };
    let pizza = Orderinfo {
        item: "pizza".to_string(),
    };
    let runconfig = rtest::RunConfig {
        context: rtest::Context::default()
            .with_resource(rtest::Resource(water))
            .with_resource(rtest::Resource(pizza)),
        ..Default::default()
    };
    rtest_derive::run!(runconfig)
}

The test framework knows that in order to test the check_order function it first needs an Order. But the only way to generate such an Order is through the place_order test. cancel_order consumes the Order, making it unusable afterwards.

Yes, you can trick it by removing all tests that generate an Order; the framework will notice that at runtime and fail. Multiple routes may be valid to cover all functions, and in case of an error the route that was taken is dumped. Let's assume checking an order with water fails: the framework might decide to create another Order with pizza, because it cannot verify deletion otherwise, so tests may be executed multiple times.
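This resolution can be pictured as a small scheduler over resource types. The following is not rtest's implementation, just a self-contained sketch (all names hypothetical) of the idea: each test consumes and produces resource types, and the runner greedily executes whatever is currently satisfiable from the resource pool.

```rust
use std::collections::HashMap;

// A test declares which resource types it consumes and which it produces.
// Types are modeled as plain strings here; rtest derives this information
// from the function signatures instead.
struct Test {
    name: &'static str,
    consumes: Vec<&'static str>,
    produces: Vec<&'static str>,
}

// Greedy scheduler: repeatedly run any test whose inputs are available,
// removing consumed resources from the pool and adding produced ones.
fn schedule(tests: &[Test], initial: &[&'static str]) -> Result<Vec<&'static str>, &'static str> {
    let mut pool: HashMap<&str, usize> = HashMap::new();
    for r in initial {
        *pool.entry(*r).or_insert(0) += 1;
    }
    let mut pending: Vec<&Test> = tests.iter().collect();
    let mut order = Vec::new();
    while !pending.is_empty() {
        // Find the first test whose consumed resources are all available.
        let pos = pending.iter().position(|t| {
            t.consumes.iter().all(|r| pool.get(r).copied().unwrap_or(0) > 0)
        });
        match pos {
            Some(i) => {
                let t = pending.remove(i);
                for r in &t.consumes {
                    *pool.get_mut(r).unwrap() -= 1;
                }
                for r in &t.produces {
                    *pool.entry(*r).or_insert(0) += 1;
                }
                order.push(t.name);
            }
            // This is the "you tricked it" case: no test can produce
            // what the remaining tests need.
            None => return Err("no runnable test: missing resources"),
        }
    }
    Ok(order)
}

fn main() {
    let tests = [
        Test { name: "place_order", consumes: vec!["Orderinfo"], produces: vec!["Order"] },
        Test { name: "check_order", consumes: vec!["Order"], produces: vec!["Order"] },
        Test { name: "cancel_order", consumes: vec!["Order"], produces: vec![] },
    ];
    println!("{:?}", schedule(&tests, &["Orderinfo"]).unwrap());
}
```

With the webshop example above, the sketch schedules place_order first (the only way to obtain an Order), then check_order (which returns the Order to the pool), and finally cancel_order (which consumes it).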

rtest Results:
[✓]
[✓] delete_file
[✓] create
[✓] create_file
[✓] setup_fileinfo
[x] read
[✓] read_metadata
[x] test_that_should_fail
--- Run: 1/1 ---
Error: test failure: No such file or directory (os error 2)
Logs:
2024-04-17T13:56:48.078417Z INFO filesystem: Wubba Lubba dub-dub
[x] test_that_should_panic - 177ms
--- Run: 1/1 ---
Panic: 'Yes Rico, Kaboom'
at rtest/examples/filesystem/main.rs:88
Stacktrace:
0: rust_begin_unwind
at /rustc/098d4fd74c078b12bfc2e9438a2a04bc18b393bc/library/std/src/panicking.rs:647:5
1: core::panicking::panic_fmt
at /rustc/098d4fd74c078b12bfc2e9438a2a04bc18b393bc/library/core/src/panicking.rs:72:14
2: filesystem::test_that_should_panic
at rtest/examples/filesystem/main.rs:88:5
Logs:
2024-04-17T13:56:47.900938Z INFO filesystem: Kaboom?
Total Tests: 6. Total Runs: 7 Fails: 2
Failed
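The summary line in the output above distinguishes tests from runs, since one test can be executed several times. A minimal sketch of that bookkeeping (names hypothetical, not rtest's code) could look like:

```rust
// One outcome per executed run; a test may appear more than once,
// e.g. when the framework re-creates a resource via another route.
#[derive(Clone, Copy, PartialEq)]
enum Outcome { Pass, Fail }

struct Run {
    test: &'static str,
    outcome: Outcome,
}

// Aggregate runs into (distinct tests, total runs, failed runs),
// the three numbers rtest prints in its summary line.
fn summarize(runs: &[Run]) -> (usize, usize, usize) {
    let mut tests: Vec<&str> = runs.iter().map(|r| r.test).collect();
    tests.sort();
    tests.dedup();
    let fails = runs.iter().filter(|r| r.outcome == Outcome::Fail).count();
    (tests.len(), runs.len(), fails)
}

fn main() {
    let runs = [
        Run { test: "create", outcome: Outcome::Pass },
        Run { test: "read", outcome: Outcome::Fail },
        Run { test: "read", outcome: Outcome::Pass }, // retried via another route
    ];
    let (tests, total, fails) = summarize(&runs);
    println!("Total Tests: {}. Total Runs: {} Fails: {}", tests, total, fails);
    // prints: Total Tests: 2. Total Runs: 3 Fails: 1
}
```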

Features:

  • Allow any Input/Output Resources (up to 5)
  • Custom Errors
  • Custom Context (though rarely needed)
  • Execution Model that takes costs (of processes/resources) into account
  • Multithread support
  • Async Support
  • Capture logs
  • Capture Panics (needed for asserts)
  • Capture println
  • External Log Capturing, e.g. annotate a test with a custom string, which is propagated to an Adapter that works in parallel to the test. If the test fails, the logs are stored; if it succeeds, they are dropped. E.g. we specify a Kubernetes watcher and say: watch the Pod "foobar" during a test execution.
  • Markdown Output
  • Json Input/Output to persistent runs, retry runs, compare with previous runs
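The panic capture listed above is what turns a failed assert! inside a test into a reported failure instead of aborting the whole runner. The following is not rtest's code, just a minimal sketch of how such capture can be done in plain Rust with std::panic::catch_unwind:

```rust
use std::panic;

// Run a test closure, converting a panic (e.g. from a failed assert!)
// into an Err carrying the panic message, so the runner can continue
// with the remaining tests.
fn run_captured<F: FnOnce() + panic::UnwindSafe>(test: F) -> Result<(), String> {
    // Silence the default "thread panicked" printout while capturing.
    // Note: the hook is process-global, so this simple swap is only
    // safe in a single-threaded runner.
    let prev = panic::take_hook();
    panic::set_hook(Box::new(|_| {}));
    let result = panic::catch_unwind(test);
    panic::set_hook(prev);
    match result {
        Ok(()) => Ok(()),
        Err(payload) => {
            // panic!("literal") yields a &str payload, assert_eq! a String.
            let msg = payload
                .downcast_ref::<&str>()
                .map(|s| s.to_string())
                .or_else(|| payload.downcast_ref::<String>().cloned())
                .unwrap_or_else(|| "unknown panic".to_string());
            Err(msg)
        }
    }
}

fn main() {
    assert_eq!(run_captured(|| assert_eq!(1 + 1, 2)), Ok(()));
    let err = run_captured(|| panic!("Yes Rico, Kaboom")).unwrap_err();
    println!("captured: {}", err);
}
```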