Crates.io | visual-odometry-rs |
lib.rs | visual-odometry-rs |
version | 0.1.0 |
source | src |
created_at | 2019-03-25 15:32:10.207646 |
updated_at | 2019-03-25 15:32:10.207646 |
description | Visual odometry in Rust (vors) |
homepage | https://github.com/mpizenberg/visual-odometry-rs |
repository | https://github.com/mpizenberg/visual-odometry-rs |
id | 123755 |
size | 181,931 |
This repository provides both a library ("crate" as we say in Rust) named visual-odometry-rs (shortened to vors) and a binary program named vors_track for camera tracking ("visual odometry").
The program works on datasets following the TUM RGB-D dataset format. It is roughly a hundred lines of code (see src/bin/vors_track.rs), built upon the visual-odometry-rs crate also provided here.
Once you have cloned this repository, you can run the binary program vors_track with cargo directly as follows:
cargo run --release --bin vors_track -- fr1 /path/to/some/freiburg1/dataset/associations.txt
Have a look at mpizenberg/rgbd-tracking-evaluation for more info about the dataset requirements to run the binary program vors_track.
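For orientation, an associations file pairs depth and color images by timestamp. Below is a minimal, hypothetical parser sketch, assuming each non-comment line holds two timestamp/path pairs; the file name and the exact column order expected by vors_track are specified by the dataset tooling linked above, not by this snippet.

```rust
// Hypothetical sketch: reading a TUM RGB-D style associations file.
// Assumes "<timestamp> <image path> <timestamp> <image path>" per non-comment line;
// check mpizenberg/rgbd-tracking-evaluation for the order vors_track actually expects.
use std::fs;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let content = fs::read_to_string("associations.txt")?;
    for line in content
        .lines()
        .filter(|l| !l.trim().is_empty() && !l.starts_with('#'))
    {
        let cols: Vec<&str> = line.split_whitespace().collect();
        if let [ts_a, path_a, ts_b, path_b] = cols[..] {
            let ts_a: f64 = ts_a.parse()?;
            let ts_b: f64 = ts_b.parse()?;
            println!("{:.6} {} <-> {:.6} {}", ts_a, path_a, ts_b, path_b);
        }
    }
    Ok(())
}
```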
The library is organized around four base namespaces:

- core:: Core modules for computing gradients, candidate points, camera tracking, etc.
- dataset:: Helper modules for handling specific datasets. Currently only provides a module for TUM RGB-D compatible datasets.
- math:: Basic math modules for functionalities not already provided by nalgebra, like Lie algebra for so3, se3, and an iterative optimizer trait.
- misc:: Helper modules for interoperability, visualization, and other things that did not fit elsewhere yet.

Self-contained examples for usage of the API are available in the examples/ directory. A readme is also present there with more detailed explanations of these examples.
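To give a feel for the kind of functionality the math modules cover, here is a small sketch of an SE(3) exponential map written directly against nalgebra types. The function names, the twist layout (translation part first, then rotation part), and the use of f32 are illustrative assumptions, not the crate's actual API.

```rust
// Illustrative sketch of an SE(3) exponential map on top of nalgebra.
// This is not the visual-odometry-rs API, just the underlying math.
use nalgebra::{Isometry3, Matrix3, Translation3, UnitQuaternion, Vector3, Vector6};

/// Skew-symmetric (hat) matrix of a 3-vector.
fn hat(w: &Vector3<f32>) -> Matrix3<f32> {
    Matrix3::new(
        0.0, -w.z, w.y,
        w.z, 0.0, -w.x,
        -w.y, w.x, 0.0,
    )
}

/// Exponential map from a twist (v, w) in se(3) to a rigid-body motion in SE(3).
fn se3_exp(xi: Vector6<f32>) -> Isometry3<f32> {
    let v = Vector3::new(xi[0], xi[1], xi[2]); // translational part
    let w = Vector3::new(xi[3], xi[4], xi[5]); // rotational part (axis * angle)
    let theta = w.norm();
    let rotation = UnitQuaternion::from_scaled_axis(w);
    // Left Jacobian V of SO(3), so that the translation is V * v.
    let v_mat = if theta < 1e-8 {
        Matrix3::identity()
    } else {
        let w_hat = hat(&w);
        Matrix3::identity()
            + w_hat * ((1.0 - theta.cos()) / (theta * theta))
            + (w_hat * w_hat) * ((theta - theta.sin()) / (theta * theta * theta))
    };
    Isometry3::from_parts(Translation3::from(v_mat * v), rotation)
}

fn main() {
    // Small motion: 0.1 m along x combined with a 0.05 rad rotation around z.
    let xi = Vector6::new(0.1, 0.0, 0.0, 0.0, 0.0, 0.05);
    println!("{}", se3_exp(xi).to_homogeneous());
}
```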
Currently, vors provides a visual odometry framework for working on direct RGB-D camera tracking. Building all this from the ground up took a lot of time and effort, but I think it is mature enough to be shared as is. Beware, however, that the API is still evolving a lot. My hope is that in the near future we can improve the reach of this project by working both on research extensions and on platform availability.
Example research extensions:
Example platform extensions:
Initially, this repository served as a personal experimental sandbox for computer vision in Rust. See for example my original questions on the Rust discourse and Reddit channels. It turns out I struggled a bit at first, but then really liked the Rust way, compared to C++.
As the name suggests, the focus is now on visual odometry, specifically on the recent research field of direct visual odometry. A reasonable introduction is available in these lecture slides by the Waterloo Autonomous Vehicles lab.
In particular, this project initially aimed at improving on the work of DSO by J. Engel et al., but with all the advantages of using the Rust programming language.
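For context, "direct" methods like DSO minimize a photometric error (intensity differences between images) instead of a reprojection error over matched features. The toy residual below illustrates the idea for a single pixel; the helper names, intrinsics handling, and nearest-pixel lookup are simplifying assumptions and not the crate's implementation.

```rust
// Toy photometric residual for one pixel, in the spirit of direct RGB-D tracking.
use nalgebra::{Isometry3, Point2, Point3};

/// Nearest-pixel intensity lookup in a row-major grayscale image (values in [0, 1]).
fn intensity_at(img: &[f32], width: usize, p: Point2<f32>) -> f32 {
    img[p.y.round() as usize * width + p.x.round() as usize]
}

/// Photometric residual of one reference pixel with known inverse depth,
/// warped into the current frame by a candidate camera motion.
fn photometric_residual(
    ref_img: &[f32],
    cur_img: &[f32],
    width: usize,
    (fx, fy, cx, cy): (f32, f32, f32, f32), // pinhole intrinsics
    p_ref: Point2<f32>,
    inverse_depth: f32,
    motion: &Isometry3<f32>, // reference-to-current rigid motion
) -> f32 {
    // Back-project the reference pixel to a 3D point.
    let z = 1.0 / inverse_depth;
    let point3d = Point3::new((p_ref.x - cx) / fx * z, (p_ref.y - cy) / fy * z, z);
    // Move it into the current frame and project it back to pixel coordinates.
    let q = motion * point3d;
    let p_cur = Point2::new(fx * q.x / q.z + cx, fy * q.y / q.z + cy);
    // Direct tracking minimizes the sum of such differences over many pixels.
    intensity_at(ref_img, width, p_ref) - intensity_at(cur_img, width, p_cur)
}

fn main() {
    // Two identical tiny 4x4 "images" and an identity motion: residual should be ~0.
    let img = vec![0.5_f32; 16];
    let r = photometric_residual(
        &img, &img, 4,
        (1.0, 1.0, 2.0, 2.0),
        Point2::new(1.0, 1.0),
        1.0,
        &Isometry3::identity(),
    );
    println!("residual = {}", r);
}
```

A full tracker typically iterates this over a multi-resolution image pyramid and many candidate points, refining the motion with an iterative optimizer (compare the core:: and math:: modules above).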
This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/.
All forms of contribution are welcome, preferably first as GitHub issues.
Contributions to the source code need to be formatted with rustfmt and to pass clippy. To run clippy:
touch src/lib.rs; cargo clippy --release --all-targets --all-features
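Rustfmt can be run with the standard cargo subcommand (assuming the rustfmt component is installed):
cargo fmt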