Crates.io | wordfeud-ocr |
lib.rs | wordfeud-ocr |
version | 0.1.1 |
source | src |
created_at | 2020-12-24 13:16:36.847506 |
updated_at | 2020-12-27 15:22:39.904025 |
description | A Rust library that recognizes a screenshot from the Wordfeud game on Android phone. |
homepage | |
repository | |
max_upload_size | |
id | 326891 |
size | 59,231 |
A Rust library that recognizes a screenshot from the Wordfeud game on an Android phone.
Features:
The image processing for the screenshot recognition is done with the image and imageproc crates. It has currently been tested only on Android phones with screen resolutions of 1080x1920 and 1080x2160 pixels.
Add this to your Cargo.toml:

[dependencies]
wordfeud-ocr = "0.1"
let path = "screenshots/screenshot_english.png";
let gray = image::open(path)?.into_luma8();
let board = Board::new();
let result = board.recognize_screenshot(&gray)?;
println!("Tiles:\n{}", result.tiles_ocr);
That would result in this output:
Tiles:
...............
...............
............z..
............if.
.........dental
..........v.ex.
.......h..e....
......hedonIc..
....r..d..l....
....o..o..y....
....brent......
....o..i..v....
.gaits.S..e....
....i..munged..
....c.....a....
Here is an example screenshot, with the grid lines marked in red (start) and blue (end). NOTE: the images are shown here in reduced size.
Here is the resulting board:
And the tiles in the rack:
After the cells in the board have been located, each cell is checked to determine whether it is a grid cell (possibly with a letter or word bonus) or whether it contains a letter tile. The distinction is made by looking at the mean pixel value of the cell.
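As a rough sketch of that test (not the crate's actual code), the mean value of a grayscale cell can be compared against a threshold. The helper names, the 128 threshold, and the assumption that letter tiles are brighter on average than empty grid cells are all illustrative:

use image::GrayImage;

/// Mean pixel value of the w x h cell whose top-left corner is at (x, y).
fn cell_mean(img: &GrayImage, x: u32, y: u32, w: u32, h: u32) -> f32 {
    let mut sum: u64 = 0;
    for cy in y..y + h {
        for cx in x..x + w {
            sum += img.get_pixel(cx, cy).0[0] as u64;
        }
    }
    sum as f32 / (w * h) as f32
}

/// Decide tile vs. grid cell with a simple threshold on the mean
/// (assumes letter tiles are brighter than the board background).
fn is_letter_tile(img: &GrayImage, x: u32, y: u32, w: u32, h: u32) -> bool {
    cell_mean(img, x, y, w, h) > 128.0 // illustrative threshold
}

This only decides whether a cell holds a tile; which letter (or which bonus) it holds is determined by template matching, described next.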
The following collage (produced by the collage.rs example program) shows the result:
To recognize the tiles, each tile is matched against every template in a set of letter templates, and the best match is taken. The templates have a size of 38x60 (width x height) pixels.
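A minimal sketch of that matching step, assuming the cell images and templates are grayscale and each cell is at least as large as the 38x60 templates. The best_letter helper and the choice of a normalized sum-of-squared-errors score are assumptions for illustration, not the crate's internal API:

use image::GrayImage;
use imageproc::template_matching::{find_extremes, match_template, MatchTemplateMethod};

/// Score a tile image against every (letter, template) pair and return
/// the letter with the lowest error score.
fn best_letter(tile: &GrayImage, templates: &[(char, GrayImage)]) -> Option<char> {
    let mut best: Option<(char, f32)> = None;
    for (letter, template) in templates {
        let scores = match_template(
            tile,
            template,
            MatchTemplateMethod::SumOfSquaredErrorsNormalized,
        );
        // For sum-of-squared-errors scores, lower means a better match.
        let score = find_extremes(&scores).min_value;
        if best.map_or(true, |(_, s)| score < s) {
            best = Some((*letter, score));
        }
    }
    best.map(|(letter, _)| letter)
}

Because the score is an error measure, the smallest minimum over all template positions wins; a cross-correlation method would instead pick the largest value.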
For the curious: the collage is produced by the ImageMagick montage tool:
lib$ montage src/templates/[A-Z]*png -geometry 38x60+4+4 -shadow templates.png
The grid cells are recognized in a similar manner. First we find the cells that have a bonus, by looking at the mean pixel value. Then each bonus cell is matched against a set of bonus templates.
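The matching itself can be sketched the same way as for the letters; the bonus labels ("2L", "3L", "2W", "3W") and the best_bonus helper below are placeholders for illustration, not the crate's real identifiers:

use image::GrayImage;
use imageproc::template_matching::{find_extremes, match_template, MatchTemplateMethod};

/// Return the bonus label whose template gives the lowest error score
/// for this cell, e.g. one of "2L", "3L", "2W", "3W".
fn best_bonus<'a>(cell: &GrayImage, templates: &'a [(&'a str, GrayImage)]) -> Option<&'a str> {
    templates
        .iter()
        .map(|(label, template)| {
            let scores = match_template(
                cell,
                template,
                MatchTemplateMethod::SumOfSquaredErrorsNormalized,
            );
            (*label, find_extremes(&scores).min_value)
        })
        .min_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|(label, _)| label)
}

Finding which cells are bonus candidates in the first place uses the same mean-pixel-value idea sketched earlier.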