Introduction

This Rust crate automates many of the necessary steps for testing criteria. It was originally created to grade labs in technology management classes.

Detailed documentation can be found here on docs.rs.

Note: this documentation is still very much a work in progress, and some sections are just missing. If you have any questions, send me an email at llamicron@gmail.com or on discord at CoconutCake#3161.

Use Case

You should know that this program was intended for a technology management class. During the labs, the students would be managing Azure VMs and using software like Docker, Git and Github, etc. They were acting as SysAdmins, doing all the regular SysAdmin work. We needed a way to check each of the 60+ students' work on their VMs or development machines, and report the results back to us.

This poses a unique challenge: how do you get proof that every student has the right version of Python installed, or has a web server running on the right port? There are lots of different possibilities.

My original solution was to write a Rust program that checked everything and sent the results back to us. It wasn't modular or reusable; it was very much a bodge that was intended to work once. But after I wrote the fourth lab grader and prepared to write one to grade the final exam, I realized that I needed to set this in stone.

This library was the outcome. It's a glorified task-runner, but it's actually kind of cool. If you have any need of running a lot of commands, checking web sites and APIs, examining the file system, or performing anything across a lot of machines and getting a report from each one, then this library is for you.

Process

The general process for writing a grader is this:

  • Make a new Rust project - you can do this with cargo new

  • Define your criteria - you'll write all the details about each criterion in yaml: things like name and description, point value, etc.

  • Build a Rubric - from within Rust, you'll load your yaml data into a Rubric. Then you'll write a function (a test) for each criterion and "attach" each test where it belongs.

  • Define a Submission - Any data you want sent back to you, as well as any data you may need inside the criteria tests should be stored in a submission.

  • Grade the Submission against the Rubric - this is done with one function call.

  • Submit the Submission - this is done by POSTing the data (with the web helper module). Again, one function call.

That might seem like a lot, but it's pretty easy once you do it. A good place to get started is the example grader exercise. It will walk you through a complete grader.

Terminology

Here's a list of the terminology I use throughout the program. It will help you understand exactly what's going on. It's pretty straightforward anyway, but I'd rather be explicit about it.

If a term is formatted like this, that means it's represented in code as a struct or module of the same name.

  • Grader - The program you're writing when you define criteria. "Grader" may not be the best term for your use case, but it's how I use this library.

  • Criterion - a bit of clerical data, plus a "test", which is just a function that returns true or false. The Criterion's test is the heart of the application; it's a function that you write. Data can be passed into the Criterion's test, but the criterion itself doesn't store it.

  • Rubric - a rubric is basically just a wrapper around a collection of criteria. This is the highest level of abstraction. You may want to run your tests in batches, which would be multiple rubrics. Each yaml file that you define later on is a single rubric. 1 yaml file = 1 Rubric.

  • Submission Server - just a web server that accepts Submissions as json data, then writes them to a file. You can start this server in one function call, all you need is a machine to run it on.

  • Submission - This is a bundle of data that represents the results of grading the criteria. The data it carries is defined by you. It is sent back to the submission server (that you run). A Submission is graded "against" a set of Criteria. Any data that you need in any criterion should be in here.

  • helpers - this is pretty self-evident. It's a module (with submodules) that contains functions to easily accomplish tasks that you'll probably run into. Currently there are 3 helper modules, but more will be added:

    • cli - handles terminal operations like getting user input
    • fs - file system operations, like ensuring a file exists or contains something
    • web - make GET and POST web requests in one function call

As I said in the "Use Case" section, this was originally intended to grade TCMG labs. As a consequence, there may be some academia-oriented terminology in here. Originally, the Submission type had mandatory student ID and name fields, but I've removed those to make it more flexible. I'm trying to remove anything that would limit this application to just that use case. You may be using this for something other than what it was intended for, which is fantastic.

Example Grader

This chapter will walk you through building a complete grader from start to finish. I'm writing this as I publish version 0.10.0. There will no doubt be changes in later versions.

Scenario

Let's say we wrote a lab to teach the basics of Git, and we need to ensure that the students have done the following:

  • Installed Git
  • Initialized Git in a repository
  • Made at least 3 commits in the repository
  • Pushed the repository to Github

These are our "criteria", which is an important term. Because we have 4 criteria, let's say each is worth 25 points for a total of 100 points.

Let's write a grader program that the student will run. The grader will check these criteria and send a report back to us.

Setup

Our grader will be written in Rust. Before we get started, be sure you have all the necessary tools to write a Rust application, including cargo. You can install it here if you don't already have it. Be sure you're running on the "nightly" release of Rust. You can switch to nightly with rustup default nightly.

We'll make a new Rust project using cargo

$ cargo new my_grader

This will make 3 files for us. Cargo.toml is where we specify details about our application (called a "crate"). You can leave most of it alone, as we won't be publishing this crate, but you need to add this crate (lab_grader) as a dependency.

[dependencies]
lab_grader = "0.10.0"

src/main.rs contains a hello world function, so you can go ahead and compile and run your program with

$ cargo run

After it compiles you should see "Hello, world!". Now we can move on to defining the criteria.

Defining the Criteria

Once we have our project set up, the first step is to define our criteria. Criteria are contained within a "Rubric". A Rubric has a name, description, and a list of criteria. It's represented in Rust by the Rubric structure. We're going to write a yaml file, and all the data we put in there will be serialized into a Rubric.

Let's make a directory called criteria/, and inside there we'll make a file called rubric.yml:

# criteria/rubric.yml
name: Git Lab
desc: Install and use Git

Here we've put a name for our rubric, and then an (optional) description.

Next we'll add our criteria. See the YAML specification for all the keys that are available. As a reminder, here are the criteria we wrote out in the last section:

  • Installed Git
  • Initialized Git in a repository
  • Made at least 3 commits in the repository
  • Pushed the repository to Github

# criteria/rubric.yml
name: Git Lab
desc: Install and use Git

criteria:
  "Git installed":
    stub: git-installed
    index: 1
    worth: 25
    messages: ["installed", "not installed"]

  "Git initialized":
    stub: git-init
    index: 2
    worth: 25
    messages: ["initialized", "uninitialized"]

  "Commits present":
    stub: commits-present
    index: 3
    worth: 25
    messages: [">= 3 commits found", "< 3 commits found"]

  "Repo pushed":
    stub: repo-pushed
    index: 4
    worth: 25
    messages: ["pushed", "not pushed"]

We've put 4 criteria in the yaml file. Each one has a name, a stub (an identifier), an index, a point value, and some success/failure messages. The name and worth are the only required fields.

Once again, see the YAML specification for more info on what you can put in this file.

Now that we have our criteria defined, we can move on to writing some Rust.

Building a Submission

A submission is a bundle of data that represents a student's work. A submission is graded against a rubric, and then sent back to you. By default, it contains a timestamp, a numerical grade, and 2 lists of the criteria that the student passed/failed.

You can add any kind of data that you might want, for example the student's name and ID, or information about their system like their IP address.

Any data that is needed from inside a criterion's test should also be here. This will make more sense when we write the criteria tests.

Some Housekeeping

We need to do some housekeeping in our main.rs

extern crate lab_grader;

use lab_grader::*;

fn main() {
    // code will go here
}

We added an import to the top to bring in all the items we'll need from lab_grader. Then we just cleared our main function. In the next section, we'll add code into the main function.

Build a Submission

Now we can build a submission using the Submission::new function. Add the following to the beginning of your main function.

let mut sub = Submission::new();

We make it mutable so we can attach data later.

Attach Data

We want some data to attach to the submission. In this case, we're going to want the student's name and ID, as well as their Github username and the name of the repository they create for the lab. We'll use this data a little later.

We're going to use two macros to make this data:

  • data! - creates a bundle of key/value pairs
  • prompt! - asks the user for input from the terminal

fn main() {
    let mut sub = Submission::new();

    // Create data
    let data = data! {
        "name" => prompt!("Name: ", String),
        "id" => prompt!("ID: ", String),
        "gh_name" => prompt!("Github Username: ", String),
        "repo" => prompt!("Repo name: ", String)
    };

    // Attach data to submission
    sub.use_data(data);
}

Refactor

We can refactor the code above into this, using the Submission::from_data function.

fn main() {
    let mut sub = Submission::from_data(data! {
        "name" => prompt!("Name: ", String),
        "id" => prompt!("ID: ", String),
        "gh_name" => prompt!("Github Username: ", String),
        "repo" => prompt!("Repo name: ", String)
    });
}

Test

Add the following line to the end of main and run the program with cargo run to see what it does so far.

println!("{:#?}", sub);

Now we can move on to building the rubric.

Building the Rubric

In the first section we defined our criteria in a yaml file. Now we need to load the yaml data into Rust and build a Rubric from it.

A Note about Errors

In Rust, the main way that errors are handled is through the Result and Option types. These are massively important to Rust, and you should read over them and learn how they work.

A very good guide is from the Rust by Example book. The "Error Handling" section should take less than an hour to read over and will be very useful if you continue with Rust.

I'm going to be using the expect method, which is normally bad practice. It simply panics (aborts) with an error message if there's an error. Normally you would want to deal with the error in one way or another, but I'm going to use it anyway since this is just an example. expect should not be used in production code.
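To see concretely what expect does, here's a minimal sketch using str::parse (plain standard library, not this crate's API):

```rust
// A tiny demonstration of how `expect` behaves on Result values.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // On an Ok value, `expect` unwraps it and the program continues
    let port = parse_port("8080").expect("not a valid port");
    println!("port = {}", port);

    // On an Err value, `expect` would panic with the given message:
    // parse_port("oops").expect("not a valid port"); // <- aborts the program
}
```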

Reading YAML Data

We can read YAML data with the yaml! macro. yaml! takes in a relative file path and returns the YAML data as a String.

This macro is very important for one reason. When you compile in debug mode (the default), it will read from the file system as expected. However, when you compile for release (with the --release flag), it will embed the file contents in the created executable. This means when you distribute the grader to your students, you don't need to provide the yaml file; the executable will run on its own. Just be sure to compile in release mode before distributing.
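The debug/release switch works conceptually like the sketch below. This is not lab_grader's actual source, just an illustration of the idea: debug builds read the file at runtime, while in release builds the real macro embeds the bytes at compile time (in plain Rust you'd use include_str! for that).

```rust
use std::fs;

// Conceptual sketch of a debug/release file-loading switch.
// NOT the crate's real implementation.
fn load_rubric(path: &str) -> std::io::Result<String> {
    if cfg!(debug_assertions) {
        // Debug build: read from disk, so the yaml file must be present
        fs::read_to_string(path)
    } else {
        // Release build: the real macro would expand to include_str!("..."),
        // baking the file's contents into the binary at compile time.
        // We fall back to a runtime read here so the sketch stays runnable.
        fs::read_to_string(path)
    }
}

fn main() -> std::io::Result<()> {
    fs::write("rubric_demo.yml", "name: Demo\n")?;
    let yaml = load_rubric("rubric_demo.yml")?;
    println!("loaded: {}", yaml.trim());
    fs::remove_file("rubric_demo.yml")?;
    Ok(())
}
```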

We can go ahead and add this to the end of our main function

let yaml = yaml!("../criteria/rubric.yml").expect("Couldn't read file");

Building a Rubric

Now that we have our yaml data, we can build a Rubric from it.

let mut rubric = Rubric::from_yaml(yaml).expect("Bad yaml!");

Here we're using the expect method again, but it's probably a good idea in this case. This will crash if we have invalid YAML or missing items. Once you're done developing and compile for release, the YAML will be embedded and won't change, so it won't crash after that.

That's all there is to building a rubric. Here's the complete main.rs file so far

extern crate lab_grader;

use lab_grader::*;

fn main() {
    let mut sub = Submission::from_data(data! {
        "name" => prompt!("Name: ", String),
        "id" => prompt!("ID: ", String),
        "gh_name" => prompt!("Github Username: ", String),
        "repo" => prompt!("Repo name: ", String)
    });

    let yaml = yaml!("../criteria/rubric.yml").expect("Couldn't read file");
    let mut rubric = Rubric::from_yaml(yaml).expect("Bad yaml!");
}

Now we can move on to writing the criteria tests.

Writing the Tests

Now comes the most important part of writing the application. We've built our rubric of criteria, but they currently don't have a way to be tested. How do we know if the student actually has Git installed or not?

The way we determine this is by writing a "test", which is just a function, for each of our criteria. Then we'll "attach" the function to the criteria, then we can grade the submission.

A Single Test

Each test needs to have the same signature, meaning it has to accept the same parameters and return the same thing. We need this consistency between tests to make grading possible.

Every test must accept a reference to a TestData object and return a boolean. We created a TestData object with the data! macro when we made a submission. In fact, the exact data we put in the submission will be passed into each of our criteria tests. This is why we put the user's Github username and repository name in the data; we'll need them inside one of our tests.

Helpers

Before we write any tests, you should know about the helper modules. These modules are a collection of functions that do common tasks in criteria tests. They may save you some time. See the documentation linked above for more info on each module.

The First Test

Let's write a test for our first criteria, which checks if Git is installed or not. Remember that a test is just a function with a specific signature.

You can write your tests anywhere, but I'll make a tests.rs to keep the tests separate from the rest of our code.

// tests.rs
use lab_grader::*;

fn confirm_git_installed(_: &TestData) -> bool {
    cli::Program::Git.version().is_some()
}

We added a function called confirm_git_installed. It takes a parameter of type &TestData, but in this case we don't need it, so we name the parameter _. In the function body, we use the cli helper module to get the version of Git and return true if it's a Some value (version() returns an Option; it would be None if Git wasn't installed).
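The Option-to-bool pattern is easy to see in isolation. In this sketch, version_of is a stand-in for cli::Program::Git.version() (the version string is made up), and is_some() collapses the Option into the bool our test must return:

```rust
// Stand-in for cli::Program::Git.version(): Some(version) when the
// program is found, None when it isn't.
fn version_of(installed: bool) -> Option<String> {
    if installed {
        Some("git version 2.40.0".to_string()) // placeholder version string
    } else {
        None
    }
}

fn main() {
    // `is_some()` turns the Option into the bool a criterion test returns
    assert!(version_of(true).is_some());
    assert!(!version_of(false).is_some());
    println!("Option check behaves as expected");
}
```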

And that's it for the first test. There's still one step to go, but we'll do that after the other tests.

The Rest of the Tests

I'm going to write some functions to serve as the tests for the remaining criteria. I won't explain what each one is doing in detail, but it should be pretty self explanatory. I'll put the entire tests.rs file here.

use std::process::Command;
use lab_grader::*;

// Naming the data parameter "_" because we don't need it in this case
pub fn confirm_git_installed(_: &TestData) -> bool {
    cli::Program::Git.version().is_some()
}

pub fn confirm_git_init(_: &TestData) -> bool {
    // This is a filesystem helper that this crate provides
    // also works on directories
    // This is *not* std::fs
    fs::file_exists(".git/")
}

pub fn confirm_enough_commits(_: &TestData) -> bool {
    // Run the git command to count commits on all branches
    // Note: this shells out to `sh`, so it assumes a Unix-like system;
    // on Windows you would run the command through `cmd /C` instead
    let out = Command::new("sh")
        .arg("-c")
        .arg("git rev-list --all --count")
        .output()
        .expect("Couldn't run subcommand");

    // If the command returns something
    if let Ok(string) = String::from_utf8(out.stdout) {
        // And if we could parse a number from it
        if let Ok(num) = string.trim().parse::<u64>() {
            return num > 2;
        }
    }

    false
}

// We do need the data this time
pub fn confirm_repo_pushed(data: &TestData) -> bool {
    // Format the url to check
    let url = format!("https://github.com/{}/{}/", data["gh_name"], data["repo"]);
    // Another helper function
    web::site_responds(&url)
}

In confirm_repo_pushed, we're actually using the data attached to the submission. We can do that with bracket syntax (data["key"]) or through the get method. Using get is recommended over bracket syntax, since bracket syntax will panic if the key is missing.
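Since TestData is an alias for HashMap<String, String>, a plain HashMap shows the difference between the two access styles (the "octocat" value below is just a placeholder):

```rust
use std::collections::HashMap;

fn main() {
    let mut data: HashMap<String, String> = HashMap::new();
    data.insert("gh_name".to_string(), "octocat".to_string());

    // Bracket syntax panics if the key is missing
    assert_eq!(data["gh_name"], "octocat");

    // `get` returns an Option, so a missing key can be handled gracefully
    match data.get("repo") {
        Some(repo) => println!("repo: {}", repo),
        None => println!("no repo key attached"),
    }
}
```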

Attaching the Tests

We need to import the tests we wrote in main.rs. Add this import to the top of main.rs

// Declare the tests mod (tests.rs)
mod tests;
// Import all the test functions
use tests::*;

Now that we have our tests, we need to attach them to the appropriate criteria. We can do that with the attach! macro.

fn main() {
    // ...
    attach! {
        rubric,
        "git-installed" => confirm_git_installed,
        "git-init" => confirm_git_init,
        "commits-present" => confirm_enough_commits,
        "repo-pushed" => confirm_repo_pushed
    };
}

This attaches each of our functions to the criterion with the given stub. This is why we needed to specify a stub in yaml.

Note: If you don't provide the stub in yaml, it will be created by lowercasing the name and replacing whitespace with a dash, e.g. My First Criterion => my-first-criterion.
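The stub-inference rule from the note can be sketched like this (the crate's exact rule may differ slightly; this matches the documented example):

```rust
// Lowercase the name, then join whitespace-separated words with dashes.
fn infer_stub(name: &str) -> String {
    name.to_lowercase()
        .split_whitespace()
        .collect::<Vec<_>>()
        .join("-")
}

fn main() {
    assert_eq!(infer_stub("My First Criterion"), "my-first-criterion");
    println!("{}", infer_stub("My First Criterion"));
}
```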

Now that the tests are attached, we're ready to grade.

Grading

We now have a complete Rubric and a Submission, which is all we need to grade.

When we grade the submission, we'll run each criteria test in the rubric. The submission's data will be passed into each of the tests. If a test passes, the submission's grade field will increase by the worth of that criterion.

fn main() {
    // ...
    // Grade the submission
    sub.grade_against(&mut rubric);

    // Print the rubric results
    println!("{}", rubric);
}

We've graded the submission and then printed the rubric. Printing the rubric will show the student all the criteria and let them know what they passed or failed. Of course, you don't have to do this. You may want to keep one or all of your criteria private. You can hide individual criteria with the hide field in yaml, and you can always just not print the rubric.

That's all there is to grading. The next step is to submit to a submission server, which we'll do in the next section.

Submitting

When a student runs the program, it should grade their submission and then submit to a location that you, as the professor/TA, can access. There are two parts to this.

Submission Server

The submission server is simply a web server that accepts submissions as POST requests on the /submit route. You can run it with a single function call.

You'll want to set up a publicly accessible server to run this server. I use a Microsoft Azure VM because they're pretty easy to set up and provide DNS services.

Let's add a little bit of code to the beginning of our main function. It will read the command line arguments and run the server if you run the program with the "server" argument.

fn main() {
    // Get command line arguments
    let args: Vec<String> = std::env::args().collect();

    // If the second one is "server"
    if args.len() == 2 && args[1] == "server" {
        // Run the server on port 8080
        Submission::server(8080);
    }

    // ...
}

Now you can run the program with the "server" argument to start up your web server. Open another terminal and run this server in the background while we finish the grader.

Submitting

Now that the submission server is running, we can submit. Let's add this to the end of the main function.

fn main() {
    // ...
    let url = "http://localhost:8080/submit";
    let res = web::post_json(url, &sub);
    if let Ok(response) = res {
        println!("Submission sent, response {}", response.status());
    } else {
        println!("Error sending submission! {}", res.unwrap_err());
    }
}

This will submit the submission to the server that you should have running, and print a success or error message. Of course, you'll want to put the url for your server instead of localhost.

The Results

Once a submission is accepted by the submission server, it will create a file called submissions.csv. Every submission accepted will be written to this file. This is a simple csv file, and you can open it in Excel or another program to process the data in any way you see fit.

A few warnings:

  • The header of the csv file will be written for the first submission received. If, for some reason, submissions have different data fields, the csv values and header won't match up. You should be sure that all submissions have the same data fields.
  • You should try to avoid having commas in criteria names/descriptions, or in your data. You can't really prevent users from entering commas though. When submitting, any commas found will be replaced with semicolons.
  • You need to compile the grader on the platform that your students will be running it on. If you want to provide a version for Windows, Linux, and MacOS, I recommend using a Github workflow to build for each platform.
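The comma-to-semicolon sanitation mentioned above amounts to a simple string replacement. Here's a sketch (not the crate's exact code):

```rust
// Replace every comma with a semicolon so free-text fields can't
// break the csv column layout.
fn sanitize_for_csv(field: &str) -> String {
    field.replace(',', ";")
}

fn main() {
    let desc = "clone, commit, push";
    assert_eq!(sanitize_for_csv(desc), "clone; commit; push");
    println!("{}", sanitize_for_csv(desc));
}
```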

Wrapping up

This has been a very simple example of a very simple grader. The last thing to do is distribute the program to your students. When you compile in release mode, it will generate an executable at target/release/[program_name]

$ cargo build --release

We usually write our labs as repositories on Github that the student can clone and follow, so we put the grader in the repo.

Criteria

coming soon...

YAML Specification

Here are the keys allowed when making a yaml file for a rubric. Not all fields are required, but it's better to be specific.

Here's the YAML syntax so you can learn how to write valid YAML. This isn't the official specification, but it's the easiest guide I found.

When naming your yaml files, you can use .yml or .yaml. Honestly you can use whatever extension, I don't care, but I use .yml for mine.

Quotes around strings are usually not required.


Minimum Required

name: Minimum Rubric
criteria:
  Only criterion:
    worth: 10

Everything

# Required
name: Rubric Name
# Optional
desc: A short description about your rubric
# Optional
# This is just an extra check to make sure
# all criteria add to the correct total
# If they don't, it will print an error message when running.
total: 25


# Required
# You need at least one criterion
criteria:

  # Required
  # Quotes are not required
  Criterion name:

    # Optional but recommended
    # This will be inferred from the name (lowercased, whitespace => '-')
    # This is like a human readable ID.
    # Must be unique.
    # This can really be any string, but it's best to keep it short and whitespace-free
    stub: a-unique-stub

    # Required
    # Point value of the criterion
    # This is completely subjective, you give it worth
    # Can be negative
    worth: 15

    # Optional
    # Controls the order the criteria are run
    # Lowest first
    # Can be negative
    index: 5

    # Optional
    desc: More information about this criterion

    # Optional
    # Printed to the console when a criterion passes or fails
    # Defaults to these values
    messages: ["passed", "failed"]

    # Optional
# If this is true, the criterion won't be printed
    # Defaults to false
    hide: false

  # Here's all the fields without the comments
  Second criterion:
    stub: second-criterion
    worth: 10
    index: 1
    desc: Here's some more about this criterion
    messages: ["passed", "failed"]
    hide: false

A Single Criterion

A Criterion is the bread and butter of this application. To help us understand it a bit better, I'm going to put the definition of the Criterion struct here so we can take it apart.

Detailed Criterion documentation available here on docs.rs.

// Comments elided
pub struct Criterion {
    pub stub: String,
    pub name: String,
    pub worth: i16,
    pub index: i64,
    pub messages: (String, String),
    pub desc: Option<String>,
    pub test: Box<dyn Fn(&TestData) -> bool>,
    pub status: Option<bool>,
    pub hide: bool,
}

If you look through these fields, you might recognize them as the fields allowed in the YAML specification. In fact, the only field that isn't present in the YAML spec is the test field. This is because test is a function, and we can't write Rust functions from inside YAML.

You should pretty much always create criteria from YAML and parse them into a Rubric type. A single criterion alone isn't really useful. Nonetheless, there is a CriterionBuilder struct that helps you create them individually. If you want, you can create a list of criteria that way and then throw them into a rubric, but this is much more work than going through YAML.

The best reason I can think of for creating a Criterion manually is if you want to programmatically change a piece of data like the name or worth. But, you can always define it in YAML and change just that field later.

Really, all a Criterion can do is run its own test with or without data and return a result. Again, they aren't very useful alone.

A Criteria Test

A Rubric

A Rubric is a collection of criteria. It is represented in Rust as the Rubric struct.

A rubric is the primary way that you, as the programmer, interact with a collection of criteria. Rubrics are created by parsing YAML data into criteria, then attaching tests to those criteria.

Detailed Rubric documentation here on docs.rs.

Creating a Rubric

See the YAML specification for more information on what is allowed and required in YAML. Here's a short example

name: My First Rubric
desc: An optional description about what this rubric does

criteria:
  "First criterion"
    stub: first-crit
    worth: 25

  "Second criterion"
    stub: second-crit
    worth: 25

Now we can deserialize that YAML into a Rubric struct.

let yaml = yaml!("path/to/that/yaml.yml").unwrap();
let mut rubric = Rubric::from_yaml(yaml).expect("Bad yaml!");

Using a Rubric

Rubrics can do a few things on their own, but they're meant to be used with a Submission. This is what a Submission is graded against.

Here's a few code examples of what you can do with a Rubric alone.

// These print the rubric and all the criteria contained in it
println!("{}", rubric);  // Print a rubric in full
rubric.print_short();    // Print a shorter version
rubric.print_table();    // print a full table

// This gets the total points possible
println!("Total points possible: {}", rubric.total_points());
// This gets the total points earned. You should grade before running this
println!("Points earned: {}", rubric.points());

// Adds a criterion to the list
rubric.add(/* some criterion */);

// Gets a criterion by stub, if any
if let Some(crit) = rubric.get("first-crit") {
    println!("criterion name: {}", crit.name);
}


// Attaches a function to a particular criteria
fn my_func(_: &TestData) -> bool { true }
if let Some(crit) = rubric.get("first-crit") {
    crit.attach(Box::new(my_func));
}

The attach! macro

The attach! macro is provided to easily attach many tests to criteria. In the code example above, you can see how to define a function and then attach it to a criterion through the criterion's attach method. The attach! macro allows you to do many of these at once, and you don't have to Box the function either.

fn test_func1(_: &TestData) -> bool { true }
fn test_func2(_: &TestData) -> bool { false }

fn main() {
    // create the rubric from yaml as above...
    attach! {
        rubric,
        "first-crit" => test_func1,
        "second-crit" => test_func2
    };
}

This is the recommended way to attach functions to criteria.

Submission

A Submission represents one student's work on a lab. It's a bundle of data that can have whatever you want in it. A Submission works together with a Rubric: a Submission is graded against a Rubric and assigned a grade.

After a Submission is graded, it can be sent back to you, as the professor/TA, for review.

Submissions come with a timestamp by default, in RFC 3339 format using the timezone of the system it was submitted from. A submission also has a numerical grade that will be updated when grading. The passed and failed fields store which criteria this submission passed or failed.

The most important part of a submission is its data, which you can read about in the next section.

TestData

The TestData type is simply a type alias for the HashMap type. Any methods or functionality that HashMap provides are also available on TestData.

This field is part of a Submission and is meant to hold 2 things:

  1. Data you need in the final submission, and
  2. Data you need from within a criteria test.

The only restriction is that the TestData type is equivalent to a HashMap<String, String>, meaning both the keys and values must be Strings.
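The alias can be sketched in plain Rust; anything HashMap can do, TestData can do:

```rust
use std::collections::HashMap;

// The alias described above (the crate defines the same shape).
type TestData = HashMap<String, String>;

fn main() {
    let mut data: TestData = TestData::new();
    // Both keys and values must be Strings
    data.insert("name".to_string(), "Ada".to_string());

    assert_eq!(data.get("name").map(String::as_str), Some("Ada"));
    println!("TestData holds {} entry", data.len());
}
```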

Creating

The best way to create a TestData bundle is through the data! macro.

let data = data! {
    "key" => "value",
    "key2" => "value 2"
};

assert_eq!( data["key"], "value" );

The data! and the prompt! macro work very well together. You can read about the prompt! macro in the next section.

The prompt! macro

You're probably going to want to get some user input and put that into the submission's data. For instance, a submission isn't very useful if you don't know who sent it. You'll want a name and possibly ID.

The prompt! macro will ask for user input and try to cast it to the given type.

let name = prompt!("Name: ", String);
println!("Your name is {}", name);

This macro can also enforce that the user enters a certain type. If it can't cast what they entered into the given type, it will crash with an error message.

let number = prompt!("Enter a number: ", isize);
println!("{} is definitely a number.", number);

Note: the values of a TestData object must be Strings, so you'll need to convert whatever they entered back to a String if you want to include it in the TestData object.
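That conversion is just to_string. In this sketch, the literal 42 stands in for a value returned by prompt!("Enter a number: ", isize):

```rust
fn main() {
    // Pretend this came from prompt!("Enter a number: ", isize)
    let number: isize = 42;

    // TestData values must be Strings, so convert before storing
    let as_string = number.to_string();
    assert_eq!(as_string, "42");
    println!("stored value: {}", as_string);
}
```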

Creating a Submission

We can combine what we know about submissions, TestData, and prompt! to make a complete submission.

This snippet will make a new submission, ask the user for their name and ID, then wrap those into a TestData object and attach it to the submission.

let sub = Submission::from_data(data! {
    "name" => prompt!("Name: ", String),
    "id" => prompt!("ID: ", String)
});

Submitting

You can send a submission with a POST request. The easiest way to do this is through the post_json method from the helpers::web module.

The Submission type is JSON serializable, so we don't have to do any extra work before sending.

let sub = Submission::from_data(/* data here */);

// application code here...

let url = "http://localhost:8080/submit";
web::post_json(url, &sub);

Here, I'm POSTing the submission to a server running on localhost:8080. There's a special web server meant to handle these requests, which you can read about in the submission server section.

Error Handling

The web::post_json method returns a Result with a Response type inside, which can be used to handle errors. There's 101 ways you could do this, but here's an example.

let url = "http://localhost:8080/submit";

if let Ok(resp) = web::post_json(url, &sub) {
    if resp.status().is_success() {
        // if the response has a success code
        println!("Success! server responded with {}", resp.status());
    } else {
        // otherwise it came back with an error
        println!("Error! the server responded with {}", resp.status());
    }
} else {
    // This means the response couldn't even be completed
    println!("Error! The request could not be performed!");
}

Submission Server

The "Submission Server" is the other half of submitting. It's a preconfigured web server built to handle submissions.

You can run it through the Submission type like this

// Running on port 8080
Submission::server(8080);

This will run a web server with two routes:

  1. GET / - returns an OK status with no other content
  2. POST /submit - Accepts a JSON body of a Submission, returns an Accepted code on success, and an InternalServerError on failure.

Running the server

I like to set up a Microsoft Azure server to run this web server on, but you can do whatever you like. It just needs to be publicly accessible with a static IP or DNS name. Use your server's address (including the /submit path) as the URL to POST submissions to.

Be sure your server has the proper ports exposed. If you want to run on port 80 and omit the port in the url, be sure to run with root permission.

I like to run the server from within my grader through a command line parameter. Here's an example.

use std::env;

// ...

fn main() {
    let args: Vec<String> = env::args().collect();

    if args.len() == 2 && args[1] == "server" {
        Submission::server(8080);
    }
}

The Results

When the server accepts the first submission, it will make a file (in the directory you run the server in) called submissions.csv. It will then write the submission (including all the data you put in it) to the csv file.

It's important to note that the header for the csv file is written according to the first submission it accepts. If, for some reason, different submissions have different data keys, the csv file will break, because one submission would have more columns and values than another. Be sure not to conditionally add data to a submission.

Helpers

This chapter contains examples of all the helper functions. If you need more specific documentation, look on docs.rs.

Importing

You can import all the helper modules like this

use lab_grader::helpers::*;

Or import any specific helper module with

use lab_grader::[module_name];

File System - helpers::fs

File system operations. Don't confuse this with std::fs; it's not the same.

file_exists

Returns true if the file (or directory!) exists. Paths are relative to the directory the compiled executable is run from. If you're paranoid, use std::fs::canonicalize to get a full path.

assert!( fs::file_exists("Cargo.toml") );

file_contains

Returns true if a file contains the provided string. Case sensitive.

assert!( fs::file_contains("Cargo.toml", "version") );

Web - helpers::web

These functions relate to making web requests. They use the reqwest crate.

All of these functions use the blocking feature of reqwest, meaning they'll block until the request returns. You can make asynchronous requests with the reqwest crate directly, but that would involve more complicated code. I'm trying to keep it simple.

get

Makes a GET request to the given URL, returning a Result<Response>.

The URL has to be valid, or this will fail immediately.

Be cautious when making web requests; they take quite a bit of time. Only use them when necessary.

let url = "https://postman-echo.com/get";
let result = web::get(url);

assert!( result.is_ok() );

if let Ok(response) = result {
    // Get the body from the response
    if let Ok(body) = response.text() {
        assert!( body.contains("postman-echo.com") );
    }
}

site_responds

Returns true if a site responds with a success status code. This will return false on 404 and other error codes.

let url = "https://postman-echo.com/get";

assert!( web::site_responds(url) );

post_json

This posts data in JSON format to the given URL. The parameter you pass in must implement the Serialize trait from serde.

let url = "https://postman-echo.com/post";

// Submission is serializable
let sub = Submission::new();

let result = web::post_json(url, &sub);

assert!( result.is_ok() );

post

This is just like post_json, except it posts arbitrary string data. It sets the CONTENT_TYPE header to be text/plain.

let url = "https://postman-echo.com/post";

let data = "here's some data to post, wonder where it will go";

let result = web::post(url, data);
assert!( result.is_ok() );

get_ip

This retrieves the public IP of the machine. This makes a web request, so it may take more time than you would expect. Returns None if it couldn't be retrieved or doesn't exist.

let ip = web::get_ip();

assert!( ip.is_some() );
if let Some(ip) = ip {
    println!("My IP is: {}", ip);
}

Command Line/Programs - helpers::cli

Functions and macros for command line interaction

prompt! (macro)

This macro will prompt the user for input and try to cast it to the given type.

Whitespace around string ends will be trimmed.

Warning! If the user enters something that can't be parsed into the type you specify, this will print an error message and exit the current process.

use std::net::Ipv4Addr;

let string: String = prompt!("Enter a string: ", String);
let number: i64 = prompt!("Enter a number: ", i64);
let ip: Ipv4Addr = prompt!("Enter an IPv4 address: ", Ipv4Addr);

prompt (function)

This is similar to the prompt! macro, but if the user enters invalid input it will print an error message and ask again. This ensures you always get something from the user. However, it won't automatically cast to a type. It just returns a String, which you should cast manually.

Whitespace around the ends will be trimmed.

let input: String = prompt("Enter something: ");

Program

Currently, this enum allows you to get the installed version of a program. The currently supported programs are:

  • Git
  • Docker
  • Docker-Compose
  • Python
  • Ruby

let version = cli::Program::Git.version();

// If the program isn't installed, this will be None
assert!( version.is_some() );

// You can get the major, minor, and patch numbers of the version
assert!( version.unwrap().major() >= 1 );