Clean links - remove superfluous tracking elements from URLs #460
Conversation
sigaloid left a comment
This looks good! Thanks for the PR - a few comments.
src/utils.rs
Outdated
}

// Remove tracking query params
static URL_CLEANER: Lazy<Mutex<UrlCleaner>> = Lazy::new(|| Mutex::new(UrlCleaner::from_embedded_rules().expect("Failed to initialize UrlCleaner")));
Is there a reason we use a mutex? clear_single_url_str doesn't take a &mut parameter, so we don't need mutual exclusion; we should be able to get away with something like an Arc. That also improves performance, since we avoid unnecessary lock contention. A static is a good idea though, so we only construct the UrlCleaner once instead of constantly reparsing the rules.
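For illustration, a minimal sketch of that lock-free static, assuming once_cell's Lazy and that clearurls::UrlCleaner is Sync (shared &self access is all clear_single_url_str needs):

use clearurls::UrlCleaner;
use once_cell::sync::Lazy;

// Parse the embedded rules exactly once; every caller then shares an immutable reference.
static URL_CLEANER: Lazy<UrlCleaner> =
    Lazy::new(|| UrlCleaner::from_embedded_rules().expect("Failed to initialize UrlCleaner"));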
ok
Should be fixed now.
pub fn clean_url(url: String) -> String {
    let is_external_url = match Url::parse(url.as_str()) {
        Ok(parsed_url) => parsed_url.domain().is_some(),
        _ => false,
    };
    let mut cleaned_url = url.clone();
    if is_external_url {
        let cleaner = URL_CLEANER.lock().unwrap();
        cleaned_url = cleaner.clear_single_url_str(cleaned_url.as_str()).expect("Unable to clean the URL.").as_ref().to_owned();
    }
    cleaned_url
}
This should handle the failure cases better. Specifically (once you remove the mutex), the expect on the UrlCleaner::clear_single_url_str call should fall back to the input instead of panicking. Most of those errors shouldn't occur, since the clearurls::Error enum is pretty small (and the default embedded rules presumably exclude most of them), but a malformed URL is very likely.
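A hedged sketch of that fallback, assuming the Mutex-free static suggested above and that clear_single_url_str returns a Result wrapping a Cow<str> (which the PR's .as_ref().to_owned() suggests):

pub fn clean_url(url: String) -> String {
    // Only absolute URLs with a real domain are worth cleaning.
    let is_external_url = Url::parse(url.as_str())
        .map(|parsed_url| parsed_url.domain().is_some())
        .unwrap_or(false);
    if !is_external_url {
        return url;
    }
    // On any cleaning error (most likely a malformed URL), fall back to the input unchanged.
    match URL_CLEANER.clear_single_url_str(url.as_str()) {
        Ok(cleaned) => cleaned.into_owned(),
        Err(_) => url,
    }
}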
Fixed
src/utils.rs
Outdated
#[revision(start = 1)]
pub clean_urls: String,
Please review the revision docs to handle this. Specifically, you'll want to increment the upper-level struct's revision to 2, set the start for clean_urls to 2, and read the docs on how to handle backward compatibility (default to off).
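A sketch of the shape this takes, assuming the revision crate's default_fn hook behaves as its docs describe (the named function is called with the payload's revision when the field is absent); the theme field and the default_clean_urls helper are illustrative, not Redlib's actual code:

use revision::revisioned;

#[revisioned(revision = 2)] // bumped from 1 because a new field was added
pub struct Preferences {
    #[revision(start = 1)]
    pub theme: String,
    // New in revision 2; revision-1 payloads fall back to default_clean_urls.
    #[revision(start = 2, default_fn = "default_clean_urls")]
    pub clean_urls: String,
}

impl Preferences {
    // Called while deserializing payloads that predate revision 2.
    fn default_clean_urls(_revision: u16) -> String {
        "off".to_string()
    }
}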
Followed the documentation, incremented the revision to 2, and made it default to “off” for previous revisions. However, deserialization keeps failing with Err value: Io(Kind(UnexpectedEof)) for older settings strings (it works for revision-2 strings). Unfortunately, I'm currently out of time to dig deeper. If it's an obvious issue, please feel free to point it out; otherwise I'll have to get back to this sometime in the future.
For the last couple of months I've been running a modified version of Redlib on my personal “instance.”
I essentially added a new option to the settings that makes Redlib remove common tracking elements from links posted to Reddit. It's a small privacy gain for when we don't have browser extensions to do this job.
Since it is running smoothly, I think this could benefit all Redlib users. So here's the PR; feel free to dismiss or suggest improvements.