| Field | Value |
| --- | --- |
| Crates.io | crabler-tokio |
| lib.rs | crabler-tokio |
| version | 0.1.29 |
| source | src |
| created_at | 2023-04-09 13:35:44.282596 |
| updated_at | 2023-04-09 14:15:18.740831 |
| description | Web scraper for Crabs - tokio version |
| homepage | https://github.com/matiu2/crabler-tokio |
| repository | https://github.com/matiu2/crabler-tokio |
| max_upload_size | |
| id | 834229 |
| size | 25,591 |
Asynchronous web scraper engine written in Rust.
Ported from the crabler crate. The original work and author can be found here: https://github.com/Gonzih/crabler
The main difference between crabler and crabler-tokio is that we've swapped out these dependencies:

- tokio for the async runtime
- reqwest as the crate for HTTP requests

The main motivation for this is to be closer to pure Rust and to drop the requirements for libcurl and libssl; as a side effect, the `cargo tree` output shrinks by a few dependencies.

Add this to your Cargo.toml:
```toml
[dependencies]
crabler-tokio = "0.1.0"

# The example below also uses tokio and reqwest directly
# (for #[tokio::main] and reqwest::Url); versions are indicative.
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
reqwest = "0.11"
```
## Example
```rust,no_run
use std::path::Path;

// The crate is imported as `crabler_tokio` (package name `crabler-tokio`).
use crabler_tokio::*;
use reqwest::Url;

// #[on_response] registers a handler called for every completed response;
// #[on_html] registers a handler called for every element matching the
// given CSS selector.
#[derive(WebScraper)]
#[on_response(response_handler)]
#[on_html("a[href]", walk_handler)]
struct Scraper {}
impl Scraper {
    async fn response_handler(&self, response: Response) -> Result<()> {
        if response.url.ends_with(".png") && response.status == 200 {
            println!(
                "Finished downloading {} -> {:?}",
                response.url, response.download_destination
            );
        }
        Ok(())
    }

    async fn walk_handler(&self, mut response: Response, a: Element) -> Result<()> {
        if let Some(href) = a.attr("href") {
            // Create absolute URL
            let url = Url::parse(&href)
                .unwrap_or_else(|_| Url::parse(&response.url).unwrap().join(&href).unwrap());

            // Attempt to download an image
            if href.ends_with(".png") {
                let image_name = url.path_segments().unwrap().last().unwrap();
                let p = Path::new("/tmp").join(image_name);
                let destination = p.to_string_lossy().to_string();

                if !p.exists() {
                    println!("Downloading {}", destination);
                    // Schedule the crawler to download the file to the destination;
                    // downloading happens in the background, the await here only
                    // waits for the job to be queued.
                    response.download_file(url.to_string(), destination).await?;
                } else {
                    println!("Skipping existing file {}", destination);
                }
            } else {
                // Or schedule the crawler to navigate to the given URL
                response.navigate(url.to_string()).await?;
            }
        }

        Ok(())
    }
}

#[tokio::main]
async fn main() -> Result<()> {
    let scraper = Scraper {};

    // Run the scraper starting from the given URL, using 20 worker threads
    scraper
        .run(Opts::new().with_urls(vec!["https://www.rust-lang.org/"]).with_threads(20))
        .await
}
```
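
For orientation, here is a stripped-down sketch that uses only the pieces of the API shown above: a single `#[on_html]` handler tied to a CSS selector, started from `main` via `Opts`. The struct name, handler name, seed URL, and thread count are illustrative, and the crate path `crabler_tokio` is assumed from the package name.

```rust,no_run
use crabler_tokio::*;

// A single-selector scraper: `link_handler` runs for every element matching
// the "a[href]" CSS selector on each fetched page. Names and the seed URL
// below are illustrative, not part of the crate's documentation.
#[derive(WebScraper)]
#[on_html("a[href]", link_handler)]
struct LinkScraper {}

impl LinkScraper {
    async fn link_handler(&self, _response: Response, a: Element) -> Result<()> {
        if let Some(href) = a.attr("href") {
            println!("found link: {}", href);
        }
        Ok(())
    }
}

#[tokio::main]
async fn main() -> Result<()> {
    // Fetch only the seed URL (no `navigate` calls), with a small worker pool.
    LinkScraper {}
        .run(Opts::new().with_urls(vec!["https://example.com/"]).with_threads(2))
        .await
}
```

Because this handler never calls `navigate`, only the seed URL is fetched; scheduling further pages from discovered links is what turns the fuller example above into a crawler.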