Crates.io | spider_utils |
lib.rs | spider_utils |
version | |
source | src |
created_at | 2024-07-24 13:17:57.822249 |
updated_at | 2025-01-15 17:44:12.034136 |
description | Utilities to use for Spider Web Crawler. |
homepage | |
repository | https://github.com/spider-rs/spider |
max_upload_size | |
id | 1313984 |
Cargo.toml error: | TOML parse error at line 18, column 1: unknown field `autolib` |
size | 0 |
Utilities to help you get the most out of spider.
```rust
use spider::{
    hashbrown::HashMap,
    packages::scraper::Selector,
};
use spider_utils::{QueryCSSMap, QueryCSSSelectSet, build_selectors, css_query_select_map_streamed};

async fn css_query_selector_extract() {
    // Map a named group ("list") to the set of CSS selectors that feed it.
    let map = QueryCSSMap::from([(
        "list",
        QueryCSSSelectSet::from([".list", ".sub-list"]),
    )]);
    // Stream the HTML through the compiled selectors and collect the matches per group.
    let data = css_query_select_map_streamed(
        r#"<html>
            <body>
                <ul class="list"><li>First</li></ul>
                <ul class="sub-list"><li>Second</li></ul>
            </body>
        </html>"#,
        &build_selectors(map),
    )
    .await;
    println!("{:?}", data);
    // {"list": ["First", "Second"]}
}
```
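The function above is async, so it needs an executor to run. A minimal sketch of driving it, assuming a Tokio runtime (the entry point below is not part of the original example):

```rust
// Hypothetical entry point: assumes the `tokio` crate is added with the
// "macros" and "rt-multi-thread" features enabled.
#[tokio::main]
async fn main() {
    css_query_selector_extract().await;
}
```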
You can enable the `indexset` feature flag to preserve the order of the CSS scraping extraction.
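A minimal sketch of enabling that flag in Cargo.toml; the version shown is an assumption and should match the spider_utils release you actually depend on:

```toml
# Hypothetical dependency entry: pin the version your project uses.
[dependencies]
spider_utils = { version = "2", features = ["indexset"] }
```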