# Lexer Tests

The lexer does not take grammar into account: it should successfully analyze grammatically erroneous files. The tests here can therefore only test the tokenization process:

1. a syntactically correct file should produce a single, invariant token stack;
2. a syntactically incorrect file should produce reproducible errors.

## Naming convention

We have two types of tests:

1. expected successes,
2. expected failures.

For each of these test types, we MUST write two files:

1. a `.sub` file and
2. a `.txt` file.

The first one contains subcomponent syntax, the second one the expected result: the token stack if the test is expected to succeed, or the list of syntax errors if it is expected to fail.

## How to add a new test

The first file is easy to write: just write the subcomponent syntax you want to test. The difficulty arises when writing the expected results file.

Run subcomponent to make it compile your file. For now, we are doing the compilation through the `fetch` command, but this will change in the future.

```bash
cargo run -- --debug -f "tests/lexer/.sub" fetch
```

Save the `stderr` output somewhere; we will need to extract from it later. The `--debug` option enables trace-level logging, which gives extensive information about what the lexer was doing.

### Expected Success

The output of subcomponent will look like the following:

```
trace: Pushing Lexeme: BlockStart [25;1]
trace: Pushing Lexeme: Identifier [73;5]
trace: Pushing Lexeme: Identifier [79;5]
trace: Pushing Lexeme: Identifier [85;5]
...
```

First, check that what the lexer tells you is correct. Once you are convinced that the lexer is not lying, extract the text to the right of `Lexeme: ` (see the extraction sketch at the end of this document for a scripted version):

```
BlockStart [25;1]
Identifier [73;5]
Identifier [79;5]
Identifier [85;5]
...
```

And that's the contents of the expected success file.

### Expected Failure

Not implemented yet, so we do not check the reason for the failures.
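
### Extraction sketch

As a convenience for the extraction step in the Expected Success section, here is a minimal sketch of how the capture and extraction could be scripted with standard shell tools. The file names `my_test.sub`, `my_test.txt`, and `lexer-trace.log` are hypothetical placeholders, not part of the test suite; the only assumption is the `trace: Pushing Lexeme: ...` log format shown above.

```bash
# Capture the trace output (the lexer logs to stderr) into a scratch file.
cargo run -- --debug -f "tests/lexer/my_test.sub" fetch 2> lexer-trace.log

# Keep only the "Pushing Lexeme:" lines and strip everything up to "Lexeme: ",
# leaving one entry such as "BlockStart [25;1]" per line.
grep 'Pushing Lexeme: ' lexer-trace.log \
  | sed 's/^.*Lexeme: //' > tests/lexer/my_test.txt
```

This only automates the copying; you still have to read the trace and convince yourself that the lexer output is actually correct before committing the expected file.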