"Copy Link Without Site Tracking" now on ! ๐
(media.mstdn.social)
A place to discuss the news and latest developments on the open-source browser Firefox
Personally, I have found this feature to be too limited. I still use the ClearURLs extension, which is more effective in my experience.
However, neither one is a silver bullet. Here's an example I just took from Amazon (I blocked out some values with X's):
Original URL:
https://www.amazon.com/Hydro-Flask-Around-Tumbler-Trillium/dp/B0C353845H/ref=XXXX?qid=XXXXXXXXXX&refinements=p_XXXXXXXXXXXXX&rps=1&s=sporting-goods&sr=XXX
Using Firefox's "copy link without site tracking" feature:
https://www.amazon.com/Hydro-Flask-Around-Tumbler-Trillium/dp/B0C353845H/ref=XXXX?qid=XXXXXXXXXX&refinements=p_XXXXXXXXXXXXX&rps=1&s=sporting-goods
Using ClearURLs:
https://www.amazon.com/Hydro-Flask-Around-Tumbler-Trillium/dp/B0C353845H?refinements=p_XXXXXXXXXXXXX&rps=1
The ideal, canonical URL, which no tool I'm familiar with will reliably generate:
https://www.amazon.com/dp/B0C353845H
Longer but still fully de-personalized URL:
https://www.amazon.com/Hydro-Flask-Around-Tumbler-Trillium/dp/B0C353845H
If anybody knows a better solution that works with a wide variety of sites, please share!
This will just push websites to change the order and names of their query params (perhaps regularly).
I don't think one can safely omit all query params and still expect a decent experience across websites.
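To make that concrete: dropping the whole query string is a one-liner, but it breaks any site that uses params for navigation rather than tracking. A sketch of the naive approach (the YouTube example below is my own illustration):

```python
from urllib.parse import urlsplit, urlunsplit

def strip_all_params(url: str) -> str:
    """Drop the entire query string -- too aggressive for general use,
    since many params are functional rather than tracking-related."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

# Fine: the tracking param is removed
print(strip_all_params("https://example.com/article?utm_source=feed"))
# https://example.com/article

# Broken: the video ID is a functional param, not a tracker
print(strip_all_params("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))
# https://www.youtube.com/watch
```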
You are fighting a good fight, and I salute you, but query params (the part of the URL where these identifiers live) are a valid and core part of Internet addresses. Trying to strip them away universally will only work for so long.
I think that's why you haven't found a tool that meets all your needs yet: many sites have legitimate, non-tracking uses for those params. You'll probably need tools designed specifically for Amazon and other individual websites/services.
Oh yes, definitely. I think this is why Mozilla has not made this the default behavior in Firefox; there will always be a risk of false positives breaking copied links, so it's important that people know some kind of mutation is happening.
ClearURLs uses a JSON file with site-specific regex patterns and rules. In theory I could customize this for myself or, better yet, submit a pull request on their GitHub. If I have time I'll look into it.
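The basic idea behind that kind of rules file is to pair a URL pattern with regexes matched against query-parameter names. A simplified stand-in applied in Python; the field names and patterns here are my own, loosely modeled on ClearURLs' provider/rules structure, not its real schema:

```python
import re
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Simplified stand-in for a ClearURLs-style rule set: one URL pattern
# plus regexes for parameter names to strip (hypothetical values).
RULES = {
    "urlPattern": r"^https?://(?:[a-z0-9-]+\.)*amazon\.",
    "paramRules": [r"^ref$", r"^qid$", r"^sr$", r"^s$"],
}

def clean_url(url: str, rules: dict) -> str:
    """Remove query params whose names match any rule regex,
    but only for URLs matching the site's urlPattern."""
    if not re.match(rules["urlPattern"], url):
        return url
    parts = urlsplit(url)
    kept = [
        (k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
        if not any(re.match(p, k) for p in rules["paramRules"])
    ]
    return urlunsplit(
        (parts.scheme, parts.netloc, parts.path, urlencode(kept), "")
    )

print(clean_url(
    "https://www.amazon.com/dp/B0C353845H?qid=123&rps=1&s=sporting-goods",
    RULES,
))
# https://www.amazon.com/dp/B0C353845H?rps=1
```

The allowlist-of-what-to-strip design is what keeps false positives down: anything a rule doesn't explicitly name survives.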