| | |
|---|---|
| Crates.io | url-cleaner |
| lib.rs | url-cleaner |
| version | 0.6.2 |
| source | src |
| created_at | 2024-04-24 17:34:47 |
| updated_at | 2024-11-25 21:48:03 |
| description | A CLI tool and library for URL manipulation with a focus on stripping tracking garbage. |
| repository | https://github.com/Scripter17/url-cleaner |
| id | 1219339 |
| size | 717,672 |
Websites often put unique identifiers into URLs so that when you send a link to a friend and they open it, the website knows it was you who sent it to them.
As most people do not understand this and therefore cannot consent to it, it is polite to remove the maltext before sending URLs to people.
URL Cleaner is an extremely versatile tool designed to make this process as comprehensive, easy, and fast as possible.
There are several non-obvious privacy concerns you should keep in mind while using URL Cleaner, and especially the default config.

- Mangled URLs that are currently harmless can, once unmangled (see the `unmangle` flag), become malicious.
- Unless the `no-network` flag is set, all (supported) redirects on a webpage are effectively automatically clicked.

If you are in any way using URL Cleaner in a life-or-death scenario, PLEASE always use the `no-network` flag and be extremely careful of people you even remotely don't trust sending you URLs. More generally, please be extremely careful in such scenarios: I'm still not confident I've minimized information leaks in handled websites.
These packages are required on Kubuntu 24.04 (and therefore probably all Debian-based distros):

- `libssl-dev` for the `http` feature flag.
- `libsqlite3-dev` for the `caching` feature flag.

There are likely plenty more dependencies that various Linux distros may or may not pre-install. If you can't compile it I'll try to help you out, and if you make it work on your own please let me know so I can add to this list.
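On Debian-based systems, installing both of the packages listed above should amount to something like:

```bash
sudo apt install libssl-dev libsqlite3-dev
```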
By default, compiling URL Cleaner includes the `default-config.json` file in the binary. Because of this, URL Cleaner can be used simply with `url-cleaner "https://example.com/of?a=dirty#url"`.
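As a quick sketch of both input modes (STDIN handling is described in more detail under the output guarantees below):

```bash
# Clean URLs passed as command line arguments.
url-cleaner "https://example.com/of?a=dirty#url"

# Clean URLs from STDIN, one URL per line.
cat urls.txt | url-cleaner
```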
The default config is intended to always obey the following rules:

- "Mangled" URLs may be rewritten in ways that assume what was meant; see the `unmangle` flag for details.
- The `command` and `custom` features, as well as any features starting with `debug` or `experiment`, are never expected to be enabled.
  - The `command` feature is enabled by default for convenience, but it should be disabled in situations where untrusted/user-provided configs have a chance to be run.

Currently no guarantees are made, though when the above rules are broken it is considered a bug and I'd appreciate being told about it. Additionally, these rules may be changed at any time for any reason, usually just for clarification.
Flags let you specify behaviour with the `--flag name --flag name2` command line syntax.
Various flags are included in the default config for things I want to do frequently.
- `assume-1-dot-2-is-redirect`: Treat all hosts that match the regex `^.\...$` as redirects. Let's be real, they all are.
- `breezewiki`: Sets the domain of `fandom.com` and BreezeWiki hosts to the domain specified by the `breezewiki-domain` variable.
- `bypass.vip`: Use bypass.vip to expand Linkvertise and some other links.
- `discord-compatibility`: Sets the domain of twitter domains (and supported twitter redirects like `vxtwitter.com`) to the `twitter-embed-domain` variable and `bsky.app` to the `bluesky-embed-domain` variable.
- `discord-unexternal`: Replace `images-ext-1.discordapp.net` URLs with the original images they refer to.
- `no-https-upgrade`: Disable replacing `http://` with `https://`.
- `no-network`: Don't make any HTTP requests.
- `no-unmangle-host-is-http-or-https`: Don't convert `https://https//example.com/abc` to `https://example.com/abc`.
- `no-unmangle-path-is-url`: Don't convert `https://example1.com/https://example2.com/user` to `https://example2.com/user`.
- `no-unmangle-path-is-url-encoded-url`: Don't convert `https://example.com/https%3A%2F%2Fexample.com%2Fuser` to `https://example.com/user`.
- `no-unmangle-second-path-segment-is-url`: Don't convert `https://example1.com/profile/https://example2.com/profile/user` to `https://example2.com/profile/user`.
- `no-unmangle-subdomain-ends-in-not-subdomain`: Don't convert `https://profile.example.com.example.com` to `https://profile.example.com`.
- `no-unmangle-subdomain-starting-with-www-segment`: Don't convert `https://www.username.example.com` to `https://username.example.com`.
- `no-unmangle-twitter-first-path-segment-is-twitter-domain`: If a twitter domain's first path segment is itself a twitter domain, don't remove it.
- `onion-location`: Replace hosts with results from the `Onion-Location` HTTP header if present. This makes an HTTP request one time per domain and caches the result.
- `tor2web`: Append the suffix specified by the `tor2web-suffix` variable to `.onion` domains.
- `tor2web2tor`: Replace `**.onion.**` domains with `**.onion` domains.
- `tumblr-unsubdomain-blog`: Changes `blog.tumblr.com` URLs to `tumblr.com/blog` URLs. Doesn't move `at` or `www` subdomains.
- `unbreezewiki`: Turn BreezeWiki URLs into `fandom.com` URLs. See the `breezewiki-hosts` set for which hosts are replaced.
- `unmangle`: "Unmangle" certain "invalid but I know what you mean" URLs. Should not be used with untrusted URLs, as malicious actors can use this to sneak malicious URLs past, for example, email spam filters.
- `unmobile`: Convert `https://m.example.com`, `https://mobile.example.com`, `https://abc.m.example.com`, and `https://abc.mobile.example.com` into `https://example.com` and `https://abc.example.com`.
- `youtube-unlive`: Turns `https://youtube.com/live/abc` into `https://youtube.com/watch?v=abc`.
- `youtube-unplaylist`: Removes the `list` query parameter from `https://youtube.com/watch` URLs.
- `youtube-unshort`: Turns `https://youtube.com/shorts/abc` into `https://youtube.com/watch?v=abc`.

If a flag is enabled in a config's `params` field, it can be disabled using `--unflag flag1 --unflag flag2`.
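As a sketch of how flags compose (the output shown is what the flag descriptions above imply, not a verified run):

```bash
# unmobile strips the m. subdomain; youtube-unshort rewrites /shorts/ URLs.
url-cleaner --flag unmobile --flag youtube-unshort "https://m.youtube.com/shorts/abc"
# Expected output: https://youtube.com/watch?v=abc
```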
Variables let you specify behaviour with the `--var name value --var name2 value2` command line syntax.
Various variables are included in the default config for things that have multiple useful values.
- `SOURCE_URL`: Used by URL Cleaner Site to handle things websites do to the links on their pages that it's unsuitable to always remove.
- `bluesky-embed-domain`: The domain to use for bluesky when the `discord-compatibility` flag is set. Defaults to `fxbsky.com`.
- `breezewiki-domain`: The domain used to turn `fandom.com` and BreezeWiki hosts into BreezeWiki. Defaults to `breezewiki.com`.
- `bypass.vip-api-key`: The API key used for bypass.vip's premium backend. Overrides the `URL_CLEANER_BYPASS_VIP_API_KEY` environment variable.
- `tor2web-suffix`: The suffix to append to `.onion` domains when the `tor2web` flag is enabled. Should not start with `.`, as that's added automatically. Left unset by default.
- `twitter-embed-domain`: The domain to use for twitter when the `discord-compatibility` flag is set. Defaults to `vxtwitter.com`.

If a variable is specified in a config's `params` field, it can be unspecified using `--unvar var1 --unvar var2`.
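For example, to point `discord-compatibility` at a different twitter embed frontend (domain chosen purely for illustration):

```bash
url-cleaner --flag discord-compatibility --var twitter-embed-domain fixupx.com "https://twitter.com/user/status/123"
```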
There are some things you don't want in the config, like API keys, that are also a pain to repeatedly insert via `--var abc xyz`. For these, URL Cleaner uses environment variables.

- `URL_CLEANER_BYPASS_VIP_API_KEY`: The API key used for bypass.vip's premium backend. Can be overridden with the `bypass.vip-api-key` variable.
Sets let you check if a string is one of many specific strings very quickly.
Various sets are included in the default config.
- `breezewiki-hosts`: Hosts to replace with the `breezewiki-domain` variable when the `breezewiki` flag is enabled. `fandom.com` is always replaced and is therefore not in this set.
- `bypass.vip-host-without-www-dot-prefixes`: `HostWithoutWWWDotPrefix`es to use bypass.vip for.
- `email-link-format-1-hosts`: (TEMPORARY NAME) Hosts that use unknown link format 1.
- `https-upgrade-host-blacklist`: Hosts to not upgrade from `http` to `https` even when the `no-https-upgrade` flag isn't enabled.
- `lmgtfy-hosts`: Hosts to replace with `google.com`.
- `redirect-host-without-www-dot-prefixes`: Hosts that are considered redirects in the sense that they return HTTP 3xx status codes. URLs with hosts in this set (as well as URLs whose host is "www." followed by a host in this set) have the `ExpandRedirect` mapper applied.
- `redirect-not-subdomains`: The `redirect-host-without-www-dot-prefixes` set, but matched against the `NotSubdomain` of the URL.
- `unmangle-path-is-url-blacklist`: Effectively the `no-unmangle-path-is-url` flag for the specified `Host`s.
- `unmangle-subdomain-ends-in-not-subdomain-not-subdomain-whitelist`: Effectively the `no-unmangle-subdomain-ends-in-not-subdomain` flag for the specified `NotSubdomain`s.
- `unmangle-subdomain-starting-with-www-segment-not-subdomain-whitelist`: Effectively the `no-unmangle-subdomain-starting-with-www-segment` flag for the specified `NotSubdomain`s.
- `unmobile-not-subdomain-blacklist`: Effectively unsets the `unmobile` flag for the specified `NotSubdomain`s.
- `utps`: The set of "universal tracking parameters" that are always removed from any URL whose host is not in the `utps-host-whitelist` set. Please note that the `utps` common mapper in the default config also removes any parameter starting with any string in the `utp-prefixes` list, so parameters starting with those prefixes can be omitted from this set.
- `utps-host-whitelist`: Hosts to never remove universal tracking parameters from.

Sets can have elements inserted into them using `--insert-into-set name1 value1 value2 --insert-into-set name2 value3 value4`.

Sets can have elements removed from them using `--remove-from-set name1 value1 value2 --remove-from-set name2 value3 value4`.
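For instance, to treat an extra (hypothetical) host as a redirect for one invocation:

```bash
url-cleaner --insert-into-set redirect-host-without-www-dot-prefixes t.example "https://t.example/abc123"
```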
Lists allow you to iterate over strings for things like checking if another string contains any of them. Currently only one list is included in the default config:

- `utp-prefixes`: If a query parameter starts with any of the strings in this list (such as `utm_`), it is removed.

Currently there is no command line syntax for lists. There really should be.
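As a sketch of the effect (output inferred from the descriptions above, not a verified run):

```bash
url-cleaner "https://example.com/page?utm_source=newsletter&utm_campaign=spring"
# Expected output: https://example.com/page
```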
Reasonably fast. `benchmarking/benchmark.sh` is a Bash script that runs some Hyperfine and Valgrind benchmarking so I can reliably check for regressions.

On a mostly stock Lenovo ThinkPad T460s (Intel i5-6300U (4) @ 3.000GHz) running Kubuntu 24.10 (kernel 6.11.0) with "not much" going on (Firefox, Steam, etc. are closed), Hyperfine gives the following benchmark:
(The numbers are in milliseconds.)

```json
{
"https://x.com?a=2": {
"0": 5.142,
"1": 5.315,
"10": 5.384,
"100": 5.644,
"1000": 9.067,
"10000": 42.959
},
"https://example.com?fb_action_ids&mc_eid&ml_subscriber_hash&oft_ck&s_cid&unicorn_click_id": {
"0": 5.156,
"1": 5.270,
"10": 5.275,
"100": 5.832,
"1000": 10.655,
"10000": 57.388
},
"https://www.amazon.ca/UGREEN-Charger-Compact-Adapter-MacBook/dp/B0C6DX66TN/ref=sr_1_5?crid=2CNEQ7A6QR5NM&keywords=ugreen&qid=1704364659&sprefix=ugreen%2Caps%2C139&sr=8-5&ufe=app_do%3Aamzn1.fos.b06bdbbe-20fd-4ebc-88cf-fa04f1ca0da8": {
"0": 5.233,
"1": 5.261,
"10": 5.331,
"100": 6.229,
"1000": 14.599,
"10000": 95.087
}
}
```
In practice, when using URL Cleaner Site and its userscript, performance is often up to 10x worse, because for some reason `GM_XMLHttpRequest` always takes at least 10ms on my machine and, from basic testing, the amazon homepage has about 1k URLs that take about 8-10 requests to clean.

Mileage varies wildly, but as long as you're not spawning a new instance of URL Cleaner for each URL it should be fast enough.

Please note that URL Cleaner is currently single threaded, because I don't know how to parallelize it well. Parallelizing it yourself (for example, with GNU Parallel, as sketched below) may give better results.
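A minimal sketch of that, assuming `urls.txt` holds one URL per line; `--keep-order` preserves the input-order/output-order correspondence described below:

```bash
# Feed chunks of 1000 lines to separate url-cleaner processes.
cat urls.txt | parallel --pipe --keep-order -N1000 url-cleaner
```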
The people and projects I have stolen various parts of the default config from.
Although proper documentation of the config schema is pending me being bothered to do it, the `url_cleaner` crate itself is reasonably well documented and the various types are (I think) fairly easy to understand.

The main files you want to look at are `conditions.rs` and `mappers.rs`.
Additionally, `url_part.rs`, `string_source.rs`, and `string_modification.rs` are very important for more advanced rules.
There are various things in/about URL Cleaner that I or many consider stupid but that for various reasons cannot/should not be "fixed". These include but are not limited to:

- For `UrlPart`s and `Mapper`s that use "suffix" semantics (the idea that the ".co.uk" in "google.co.uk" is semantically the same as the ".com" in "google.com"), the psl crate is used, which in turn uses Mozilla's Public Suffix List. Various suffixes are included that one may expect to be normal domains, such as blogspot.com. If for some reason a domain isn't working as expected, that may be the issue.

When reading and writing configs, the following notes on how Rust types map to JSON may help:

- `Option<...>` just means a value can be `null` in the JSON. `{"abc": "xyz"}` and `{"abc": null}` are both valid states for an `abc: Option<String>` field.
- `Box<...>` has no bearing on JSON syntax or possible values. It's just used so Rust can put types inside themselves.
- `Vec<...>` and `HashSet<...>` are written as lists.
- `HashMap<..., ...>` and `HeaderMap` are written as maps.
  - `HeaderMap` keys are always lowercase.
- `u8`, `u16`, `u32`, `u64`, `u128`, and `usize` are unsigned (never negative) integers. `i8`, `i16`, `i32`, `i64`, `i128`, and `isize` are signed (possibly negative) integers. `usize` is a `u32` on 32-bit computers and a `u64` on 64-bit computers; likewise, `isize` is an `i32` or an `i64` under the same conditions. In practice, if a number makes sense to use in a field then it'll fit.
- `Duration` is written as `{"secs": u64, "nanos": 0..1000000000}`.
- If a name is written with `r#` in the source code, you write it without the `r#` (like `"else"`). `r#` is just Rust syntax for "this isn't a keyword".
- `StringSource`, `GlobWrapper`, `RegexWrapper`, `RegexParts`, and `CommandWrapper` can be written as both strings and maps.
  - `RegexWrapper` and `RegexParts` don't do any handling of `/.../i`-style syntax.
  - `CommandWrapper` doesn't do any argument parsing.
- `#[serde(default)]` and `#[serde(default = "...")]` allow a field to be omitted when the desired value is almost always the same.
  - For `Option<...>` fields, the default is `null`.
  - For `bool` fields, the default is `false`.
- `#[serde(skip_serializing_if = "...")]` lets the `--print-config` CLI flag omit unnecessary details (like when a field's value is its default value).
- `#[serde(from = "...")]`, `#[serde(into = "...")]`, `#[serde(remote = "...")]`, `#[serde(serialize_with = "...")]`, `#[serde(deserialize_with = "...")]`, and `#[serde(with = "...")]` are implementation details that can mostly be ignored.
  - `#[serde(remote = "Self")]` is a very strange way to allow a struct to be deserialized from either a map or a string. See serde_with#702 for details.

Additionally, regex support uses the regex crate, which doesn't support look-around and backreferences. Certain common regex operations are not possible to express without those, but this should never come up in practice.
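To illustrate several of these conventions at once, here's a minimal sketch using a hypothetical struct (not a real URL Cleaner type), assuming `serde` (with the `derive` feature) and `serde_json` as dependencies:

```rust
use serde::Deserialize;
use std::time::Duration;

/// A hypothetical config fragment, NOT a real URL Cleaner type,
/// illustrating the JSON conventions listed above.
#[derive(Debug, Deserialize)]
struct Example {
    /// Option<...>: both `"name": "abc"` and `"name": null` are valid.
    name: Option<String>,
    /// #[serde(default)]: the field can be omitted; `bool`s default to false.
    #[serde(default)]
    enabled: bool,
    /// Vec<...> is written as a JSON list.
    hosts: Vec<String>,
    /// Duration is written as {"secs": u64, "nanos": 0..1000000000}.
    timeout: Duration,
}

fn main() {
    let json = r#"{
        "name": null,
        "hosts": ["example.com"],
        "timeout": {"secs": 10, "nanos": 0}
    }"#;
    let example: Example = serde_json::from_str(json).expect("a valid fragment");
    assert_eq!(example.name, None);
    assert!(!example.enabled); // omitted in the JSON, so it defaulted to false
    println!("{example:?}");
}
```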
The Minimum Supported Rust Version is the latest stable release. URL Cleaner may or may not work on older versions, but there's no guarantee. If this is an issue I'll do what I can to lower it, but Diesel also keeps a fairly recent MSRV, so you may lose caching.

Although URL Cleaner has various feature flags that can be disabled to make handling untrusted input safer, no guarantees are made, especially if the config file being used is untrusted. That said, if you notice any rules that use but don't actually need HTTP requests or other data-leaky features, please let me know.
Note: JSON output is supported.
Unless a `Debug` variant is used, the following should always be true:

- Input URLs are a list of URLs starting with the URLs provided as command line arguments, followed by each line of STDIN.
- The nth line of STDOUT corresponds to the nth input URL.
- If the nth line of STDOUT is empty, then something about reading/parsing/cleaning that URL failed.
- The nth non-empty line of STDERR corresponds to the nth empty line of STDOUT.
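A sketch of that contract in practice (the error text is illustrative, not the actual message):

```bash
printf 'https://example.com/?utm_source=x\nnot a url\n' | url-cleaner
# STDOUT line 1: https://example.com/
# STDOUT line 2: (empty; the second input failed to parse)
# STDERR line 1: an error message explaining the failure
```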
The `--json`/`-j` flag can be used to have URL Cleaner output JSON instead of lines. The exact format is currently in flux, though it should always be identical to URL Cleaner Site's output.
Currently, the exit code is determined by the following rules:
URL Cleaner should only ever panic under the following circumstances:

- Parsing the CLI arguments failed.
- Loading/parsing the config failed.
- Printing the config failed. (Shouldn't be possible.)
- Testing the config failed.
- Reading from/writing to STDIN/STDOUT/STDERR has a catastrophic error.
- Running out of memory causes a standard library function/method to panic. This should be extremely rare.
- (Only possible when the `debug` feature is enabled) The mutex controlling debug print indenting is poisoned and a lock is attempted. This should only be possible when URL Cleaner is used as a library.
Outside of these cases, URL Cleaner should never panic. However, as this is equivalent to saying "URL Cleaner has no bugs", no actual guarantees can be made.
URL Cleaner does not accept donations. If you feel the need to donate please instead donate to The Tor Project and/or The Internet Archive.