# XPATH Scraper

Makes it easier to scrape websites with XPATH. Currently uses [my XPATH parser](https://github.com/Its-its/rust-xpath), which is incomplete, undocumented, and was originally written to teach myself about parsing.

A very simple example, also found in the [example](/example) folder:

```rust
use std::io::Cursor;

use scraper_main::{
    xpather,
    ConvertFromValue,
    ScraperMain,
    Scraper,
};

#[derive(Debug, Scraper)]
pub struct RedditList(
    // Uses XPATH to find the item containers
    #[scrape(xpath = r#"//div[contains(@class, "Post") and not(contains(@class, "promotedlink"))]"#)]
    Vec<RedditListItem>,
);

#[derive(Debug, Scraper)]
pub struct RedditListItem {
    // URL of the post
    #[scrape(xpath = r#".//a[@data-click-id="body"]/@href"#)]
    pub url: Option<String>,

    // Title of the post
    #[scrape(xpath = r#".//a[@data-click-id="body"]/div/h3/text()"#)]
    pub title: Option<String>,

    // When it was posted
    #[scrape(xpath = r#".//a[@data-click-id="timestamp"]/text()"#)]
    pub timestamp: Option<String>,

    // Number of comments
    #[scrape(xpath = r#".//a[@data-click-id="comments"]/span/text()"#)]
    pub comment_count: Option<String>,

    // Vote count
    #[scrape(xpath = r#"./div[1]/div/div/text()"#)]
    pub votes: Option<String>,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Request the subreddit.
    let resp = reqwest::get("https://www.reddit.com/r/nocontextpics/").await?;

    // Read the page body.
    let data = resp.text().await?;

    // Parse the response into a Document.
    let document = xpather::parse_doc(&mut Cursor::new(data));

    // Scrape the RedditList struct.
    let list = RedditList::scrape(&document, None)?;

    // Output the scraped data.
    println!("{:#?}", list);

    Ok(())
}
```