Crates.io | spider-cloud-cli
lib.rs | spider-cloud-cli
version | 0.1.23 |
source | src |
created_at | 2024-08-05 12:04:46.147735 |
updated_at | 2024-11-07 07:39:25.961922 |
description | The Spider Cloud CLI for web crawling and scraping |
homepage |
repository |
max_upload_size |
id | 1325948 |
size | 66,531 |
Spider Cloud CLI is a command-line interface to interact with the Spider Cloud web crawler. It allows you to scrape, crawl, search, and perform various other web-related tasks through simple commands.
Install the CLI with Homebrew:
brew tap spider-rs/spider-cloud-cli
brew install spider-cloud-cli
Or with Cargo from crates.io:
cargo install spider-cloud-cli
After installing, use the CLI by typing spider-cloud-cli followed by a command and its arguments.
Before using most of the commands, you need to authenticate by providing an API key:
spider-cloud-cli auth --api_key YOUR_API_KEY
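If you would rather not leave the key in your shell history, a common pattern is to pass it from an environment variable. This is a sketch only: SPIDER_API_KEY is an arbitrary variable name chosen here, not something the CLI itself reads.
# Assumption: SPIDER_API_KEY holds your key; the CLI is unaware of this variable,
# the shell simply expands it into the --api_key argument.
spider-cloud-cli auth --api_key "$SPIDER_API_KEY"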
Scrape data from a specified URL.
spider-cloud-cli scrape --url http://example.com
Crawl a specified URL with an optional limit on the number of pages.
spider-cloud-cli crawl --url http://example.com --limit 10
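To keep crawl results around, ordinary shell redirection works. This sketch assumes the command prints its results to stdout, which the documentation above does not state explicitly; crawl_output.txt is an arbitrary filename.
# Assumption: crawl results are written to stdout.
spider-cloud-cli crawl --url http://example.com --limit 10 > crawl_output.txt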
Fetch links from a specified URL.
spider-cloud-cli links --url http://example.com
Take a screenshot of a specified URL.
spider-cloud-cli screenshot --url http://example.com
Search for a query.
spider-cloud-cli search --query "example query"
Transform specified data.
spider-cloud-cli transform --data "sample data"
Extract contact information from a specified URL.
spider-cloud-cli extract_contacts --url http://example.com
Label data from a specified URL.
spider-cloud-cli label --url http://example.com
Get the crawl state of a specified URL.
spider-cloud-cli get_crawl_state --url http://example.com
Query records of a specified domain.
spider-cloud-cli query --domain example.com
Fetch the remaining credits on your account.
spider-cloud-cli get_credits
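Putting it together, here is a minimal example session using only the commands documented above. It assumes YOUR_API_KEY is replaced with a real key, that auth stores the key for the later invocations (as the authentication note above implies), and that example.com stands in for a real target site.
#!/bin/sh
# Authenticate once; assumed to persist the API key for the commands below.
spider-cloud-cli auth --api_key YOUR_API_KEY
# Crawl the site, capped at 10 pages.
spider-cloud-cli crawl --url http://example.com --limit 10
# Check the crawl state of the same URL.
spider-cloud-cli get_crawl_state --url http://example.com
# See how many credits remain afterwards.
spider-cloud-cli get_credits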
This project is licensed under the MIT License. See the LICENSE file for details.
Issues and pull requests are welcome! Feel free to check the issues page if you have any questions or suggestions.
Special thanks to the developers and contributors of the libraries and tools used in this project.