4

When you click the cross-post icon, a search box appears where you can select the community to cross-post to. It lists announcement communities that disallow posting, and it allowed me to select !lemmyverse@lemmyverse.org. But when I clicked “create”, it just went to lunch and gave an endless spinner. That’s really shitty behavior. The user has no idea why it’s hanging, when in fact there should be no hangup at all.

I did not know !lemmyverse@lemmyverse.org had restricted posting until I went there to see whether I could post directly. The search dialog in the cross-posting form should show a prohibited or warning icon (⚠) next to communities where posting is impossible. This would warn users of the problem without hiding the existence of those communities. And if they select such a community anyway, they should get a prompt, proper error message.
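A rough sketch of what that could look like in the web client, assuming the instance exposes Lemmy’s /api/v3/search endpoint and that community results carry a posting_restricted_to_mods flag (these names are my best guesses at the relevant fields, not a confirmed patch against the Lemmy UI):

```typescript
// Hedged sketch: annotate the cross-post community picker with a warning when a
// community only lets moderators post, instead of failing silently later on.
// Assumes Lemmy's GET /api/v3/search endpoint and a `posting_restricted_to_mods`
// flag on each community result; adjust to the real API if the names differ.

interface CommunitySuggestion {
  name: string;               // e.g. "lemmyverse@lemmyverse.org"
  postingRestricted: boolean; // true => ordinary users cannot post here
}

async function searchCommunities(instance: string, query: string): Promise<CommunitySuggestion[]> {
  const url = `https://${instance}/api/v3/search?q=${encodeURIComponent(query)}&type_=Communities`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`community search failed: HTTP ${res.status}`);
  const body = await res.json();
  return body.communities.map((cv: any) => ({
    name: `${cv.community.name}@${new URL(cv.community.actor_id).host}`,
    postingRestricted: Boolean(cv.community.posting_restricted_to_mods),
  }));
}

// Show restricted communities with a ⚠ marker rather than hiding them...
function labelSuggestion(s: CommunitySuggestion): string {
  return s.postingRestricted ? `⚠ ${s.name} (posting restricted)` : s.name;
}

// ...and refuse the final "create" click with a clear error message.
function assertPostable(s: CommunitySuggestion): void {
  if (s.postingRestricted) {
    throw new Error(`Cannot cross-post: only moderators may post in ${s.name}.`);
  }
}
```

The picker would render labelSuggestion() for each hit and call assertPostable() before firing the create request, so the user sees an actual error instead of an endless spinner.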

1
submitted 11 months ago* (last edited 11 months ago) by diyrebel@lemmy.dbzer0.com to c/fightforprivacy@feddit.ch

cross-posted from: https://lemmy.dbzer0.com/post/6251633

LemmyWorld is a terrible place for communities to exist. Rationale:

  • Lemmy World is centralized by a disproportionately high user count
  • Lemmy World is centralized by #Cloudflare
  • Lemmy World is exclusive because Cloudflare is exclusive

It’s antithetical to the #decentralized #fediverse for one node to be positioned so centrally, and revolting that it all happens on the network of a privacy offender (CF). If #LemmyWorld were to go down, a huge number of communities would go with it.

So what’s the solution? My individual action idea is to avoid posting original threads to #LemmyWorld. I find a non-Cloudflare, decentralized instance to post new threads to, creating one if needed. Then I cross-post to the relevant Lemmy World community. This gets my content some exposure while also tipping off readers of the LW community to alternative venues.

Better ideas? Would this work as a collective movement?

1
submitted 11 months ago* (last edited 10 months ago) by diyrebel@lemmy.dbzer0.com to c/fediverse@discuss.online

LemmyWorld is a terrible place for communities to exist. Rationale:

  • Lemmy World is centralized by a disproportionately high user count
  • Lemmy World is centralized by #Cloudflare
  • Lemmy World is exclusive because Cloudflare is exclusive

It’s antithetical to the #decentralized #fediverse for one node to be positioned so centrally, and revolting that it all happens on the network of a privacy offender (CF). If #LemmyWorld were to go down, a huge number of communities would go with it.

So what’s the solution?

Individual action protocol:

  1. Never post an original thread to #LemmyWorld. Find a free-world, non-Cloudflare, decentralized instance to start new threads on. Create a new community if needed. (There are no search tools advanced enough to offer a general Cloudflare filter, but #lemmyverse.net is useful because it supports manually filtering out select nodes like LW.)
  2. Wait for some engagement, ideally responses.
  3. Cross-post to the relevant Lemmy World community (if user poaching is needed).

This gets the content some exposure while also tipping off readers of the LW community to alternative venues. LW readers are lazy pragmatists, so they will naturally reply in the LW thread rather than the original thread; hence step 2. If an LW user wants to interact with another responder, they must do so on the freer venue. Step 3 can be omitted where the free-world community is populated well enough: if /everything/ gets cross-posted to LW, there is no incentive for people to leave LW.

Better ideas? Would this work as a collective movement?

20

Apart from Cloudflare being an access-restricted walled garden that harms interoperability, I really do not want my content on CF & I do not want CF content reaching me. This bug is one of many issues likely caused by Cloudflare:

https://lemmy.dbzer0.com/post/4806490

I would like to flip a switch that has the effect of making my whole UX Cloudflare-free. Cloudflare is antithetical to decentralization and it has clearly broken the #Lemmy network.
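For what it’s worth, a crude version of that switch can be approximated: Cloudflare-fronted hosts normally answer with a server: cloudflare and/or cf-ray response header. A minimal sketch of such a filter, to be run server-side (e.g. Node 18+) since browsers hide those headers cross-origin; it’s a heuristic, not a guarantee:

```typescript
// Heuristic check for Cloudflare-fronted instances: Cloudflare normally stamps
// responses with `server: cloudflare` and a `cf-ray` header. A sketch only.

async function isBehindCloudflare(host: string): Promise<boolean> {
  const res = await fetch(`https://${host}/`, { method: "HEAD" });
  const server = (res.headers.get("server") ?? "").toLowerCase();
  return server.includes("cloudflare") || res.headers.has("cf-ray");
}

async function filterCloudflareFree(hosts: string[]): Promise<string[]> {
  const keep: string[] = [];
  for (const host of hosts) {
    try {
      if (!(await isBehindCloudflare(host))) keep.push(host);
    } catch {
      // Unreachable hosts are skipped rather than guessed about.
    }
  }
  return keep;
}

// Example: filterCloudflareFree(["lemmy.world", "lemmy.dbzer0.com"]).then(console.log);
```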

6

I tried to post in a zerobytes.monster community from a normal (non-Cloudflared) instance using Tor Browser. When I clicked the button to submit the post, it just became an endlessly spinning icon.

Then I posted to a non-Cloudflare instance instead, which worked fine. Next I tried to cross-post it to zerobytes.monster. Again, a non-stop spinner.

I suspect the problem is that even though I’m actually on node A, when I direct the content to a community on node B, perhaps a direct connection is being made to node B. When node B is Tor-hostile (e.g. Cloudflared), it blocks the packets. But the software is not smart about this; it just leaves the user hanging.
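One way to test that hypothesis from outside the web UI is to submit the same cross-post straight to node A’s HTTP API over Tor: if that succeeds while the web form spins forever, the hang is in the client/UI path rather than in A-to-B federation. This sketch assumes Lemmy’s POST /api/v3/post endpoint with a bearer JWT (newer API versions put the token in a header; older ones expect it in the body), so adjust to your instance:

```typescript
// Hedged test of the node-A/node-B hypothesis: create the post via node A's
// API only, with the (possibly remote) community referenced by its numeric id.
// Assumes Lemmy's POST /api/v3/post endpoint and an Authorization bearer JWT;
// field names may differ across Lemmy versions.

async function createPostOnHomeInstance(
  homeInstance: string, // node A, where the account lives
  jwt: string,          // login token on node A
  communityId: number,  // numeric id of the target community (may be remote)
  title: string,
  body: string,
): Promise<void> {
  const res = await fetch(`https://${homeInstance}/api/v3/post`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${jwt}`,
    },
    body: JSON.stringify({ name: title, body, community_id: communityId }),
  });
  if (!res.ok) {
    // A real status code here already says more than an endless spinner does.
    throw new Error(`post creation failed: HTTP ${res.status} ${await res.text()}`);
  }
}
```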

Now I wonder if the other endless spinner I encountered when trying to create an account somewhere is a Cloudflare-induced issue as well:

https://lemmy.dbzer0.com/post/4525532

6
submitted 1 year ago* (last edited 1 year ago) by diyrebel@lemmy.dbzer0.com to c/lemmy_support@lemmy.ml

Filled out the registration form, solved the CAPTCHA, and hit the “sign up” button, which then turned into a spinner. The spinner never stops. The confirmation email never arrives.

Lemmy devs: please give output rather than just spinners. We have no way to know what is going on or how long it takes to process a registration form. We should receive error messages rather than a forever loop.
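Something like the following pattern is all that’s being asked for: give the signup request a deadline and report whatever happened. The registration URL is left as a parameter since the exact API path depends on the Lemmy version; this is an illustration of the behaviour, not a patch:

```typescript
// Sketch of the requested behaviour: give the signup request a deadline and
// surface the outcome instead of spinning forever. The registration URL is
// passed in rather than hard-coded, since the exact API path varies by version.

async function registerWithFeedback(registerUrl: string, form: object): Promise<string> {
  const controller = new AbortController();
  const deadline = setTimeout(() => controller.abort(), 30_000); // 30-second budget
  try {
    const res = await fetch(registerUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(form),
      signal: controller.signal,
    });
    if (!res.ok) return `Registration failed: HTTP ${res.status} ${await res.text()}`;
    return "Registration submitted. Watch for a confirmation email.";
  } catch (err: unknown) {
    const name = (err as { name?: string } | null)?.name;
    return name === "AbortError"
      ? "The server did not answer within 30 seconds; please try again later."
      : `Registration error: ${String(err)}`;
  } finally {
    clearTimeout(deadline);
  }
}
```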

6
submitted 1 year ago* (last edited 1 year ago) by diyrebel@lemmy.dbzer0.com to c/lemmy_support@lemmy.ml

I click LOGIN, enter my username, and tab over to the password field. As I’m entering the password, the username field clears, so I have to go back and re-enter the username.

It’s as if the page is still loading, and as a final action in the loading process it clears the form. I’m not a JavaScript expert, but it feels like excessive use of JS for something that should simply be HTML.
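If that guess is right, the fix is a small guard: whatever late initialization runs when the page finishes loading should leave a field alone once the user has started typing. A hypothetical illustration in plain DOM code (the element id and defaults are made up; this is not Lemmy’s actual source):

```typescript
// Hypothetical illustration of the suspected bug, not Lemmy's actual code.
// "#login-username" is a made-up element id.
const usernameField = document.querySelector<HTMLInputElement>("#login-username")!;

let userHasTyped = false;
usernameField.addEventListener("input", () => { userHasTyped = true; });

function finishPageLoad(defaults: { username: string }): void {
  // Buggy pattern: unconditionally re-initialize the form once loading ends,
  // wiping whatever the user typed in the meantime:
  //   usernameField.value = defaults.username;

  // Guarded pattern: only apply defaults to fields the user has not touched.
  if (!userHasTyped) {
    usernameField.value = defaults.username;
  }
}
```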

#LemmyBug

[-] diyrebel@lemmy.dbzer0.com 8 points 1 year ago* (last edited 1 year ago)

emphasis mine:

Anti-nuclear is like anti-GMO and anti-vax: pure ignorance, and fear of that which they don’t understand.

First of all, anti-#GMO stances are often derived from anti-Bayer/Monsanto stances. There is no transparency about whether Monsanto is in the supply chain of any given thing you buy, so boycotting GMO is as close as ethical consumers can get to boycotting Monsanto. It would require either pure ignorance or distaste for humanity to support that company, with its pernicious history and its intent to eventually take control of the world’s food supply.

Then there’s the anti-GMO-tech camp (which is what you had in mind). You have people who are anti-all-GMO and those who are anti-risky-GMO. It’s pure technological ignorance to regard all GMO as equally safe or equally unsafe. GMO is an umbrella of many techniques. Some of those techniques are as low-risk as cross-breeding that can happen in nature. Others are invasive, extremely risky & experimental. You’re wiser if you separate the different GMO techniques and accept the low-risk ones while condemning the foolishly risky approaches at the hands of a profit-driven corporation taking every shortcut it can get away with.

So in short:

  • Boycott all U.S.-sourced GMO if you’re an ethical consumer. (Note: the EU produces GMO without Monsanto.)
  • Boycott just high-risk GMO techniques if you’re unethical but at least wise about the risks. (Note: this is somewhat impractical because you don’t have the transparency of knowing what technique was used.)
  • Boycott no GMO at all if you’re ignorant about the risks & simultaneously unethical.

[-] diyrebel@lemmy.dbzer0.com 6 points 1 year ago

I really cannot stand that phrase because it’s commonly used as a poor rationale for not favoring a superior approach. Both sides of the debate are pushing for what they consider optimal, not “perfection”.

In the case at hand, I’m on the pro-nuclear side of this. But I would hope I could make a better argument than to claim my opponent is advocating an “impossible perfection”.

1

cross-posted from: https://lemmy.dbzer0.com/post/1867431

Lately I’m running into more and more situations where I am forced to patronize a private company in the course of doing a transaction with my government. For example, a government office stops accepting cash payment for something (e.g. a public parking permit). Residents cannot pay for the permit unless they enter the marketplace and do business with a private bank. From there, the bank might force you to have a mobile phone (yes, this is common in Europe for example).

Example 2:

Some gov offices require the general public to call them or email them because they no longer have an open office that can be visited in person. Of course calling means subscribing to phone service (payphones no longer exist). To send an email, I can theoretically connect a laptop to a library network and use my own mail server to send it, but most gov offices block email that comes from an IP that Google/SpamHaus/whoever does not approve, thus forcing you to subscribe to a private-sector service in order to do a public transaction. At the same time, snail mail is increasingly under threat & fax is already ½ dead.

Example 3:

A public university in Denmark refuses access to some parts of the school’s information systems unless you provide a GSM number so they can do SMS-based 2FA. If a student opposes connecting to GSM networks due to the huge attack surface and privacy risks, they are simply excluded from systems with that limitation & their right to a public education is hindered. The school library’s e-books are being bogarted by Cloudflare’s walled garden, where a private company restricts access to the books based on factors like your IP address & browser.

Where are my people?

So, I’m bothered by this because most private companies demonstrate untrustworthiness & incompetence. I think I should be able to disconnect and access all public services with minimal reliance on the private sector. IMO the lack of that option is an injustice. There is an immeasurably huge amount of garbage tech on the web subjecting people to CAPTCHAs, intrusive ads, dysfunctional javascript, dark patterns, etc. Society has proven unable to counter that, and it will keep getting worse. I think the ONLY real fix is to have a right to be offline. The power to say:

*“the gov wants to push this broken reCAPTCHA that forces me to share data with a surveillance capitalist…

no thanks. Give me an offline, private-sector-free way to do this transaction”*

There is substantial chatter in the #fedi about all the shit tech being pushed on us & countless little tricks and hacks to try to sidestep it. But there is almost no chatter about the real high-level solution, which would encompass two rights:

  1. a right to be free from the private sector marketplace; and
  2. the right to be offline

Of course, only very recent philosophers could have thought of the right to be offline. But I wonder whether any philosophers in history have published anything influential on the right not to be forced into the private-sector marketplace. By that I don’t mean anti-capitalism (of course that’s well covered), but rather: given the premise that you’re trapped inside a capitalist system, there would likely be bodies of philosophy aligned with rights/powers to boycott.

3
submitted 1 year ago* (last edited 1 year ago) by diyrebel@lemmy.dbzer0.com to c/lemmy_support@lemmy.ml

cross-posted from: https://lemmy.dbzer0.com/post/1702086

So Bob replies to Alice, who then reads the msg and marks it as read. Then Bob makes some significant changes to the msg like adding lots of useful information that further answers Alice’s question. Alice gets no notification that the reply was updated.

13
submitted 1 year ago* (last edited 1 year ago) by diyrebel@lemmy.dbzer0.com to c/lemmy_support@lemmy.ml

I have Firefox configured to show no images because I’m on a limited connection. I think the only thing I’ve changed w.r.t. my usage habits recently is starting to use Lemmy again. I’m chewing through bandwidth credit quite fast, like ¼–⅓ GB in a day. Does it seem possible that Lemmy would cause that even when images are disabled in Firefox? I might have to lay off Lemmy for a few days and see how it goes.

BTW, I only just now disabled “show avatars” in the Lemmy settings, but I don’t expect that to make any difference if my browser was already configured not to show images.

11
submitted 1 year ago* (last edited 1 year ago) by diyrebel@lemmy.dbzer0.com to c/lemmy_support@lemmy.ml

cross-posted from: https://lemmy.dbzer0.com/post/1699039

Title says it all. You can be in the middle of a lengthy response to someone, and if you click to vote their post up or down, everything you just typed is lost & non-recoverable. Yikes!

3
submitted 1 year ago* (last edited 1 year ago) by diyrebel@lemmy.dbzer0.com to c/cybersecurity@sh.itjust.works

I would love it if, just once, an admin of a fedi host under #DDoS attack had the integrity to say:

“We are under attack. But we will not surrender to Cloudflare & let that privacy-abusing tech giant get a front-row view of all your traffic (including passwords & DMs) while centralizing our decentralized community. We apologize for the downtime while we work on solving this problem in a way that uncompromisingly respects your privacy and does not harm your own security more than the attack itself.”

This is inspired by #LemmyWorld’s recent move of joining Cloudflare’s walled garden to thwart a DDoS attack.

So of course the natural direction for this thread is to discuss various Cloudflare-free solutions, such as:

  1. Establish an onion site & redirect all Tor traffic toward the onion site.
     1.1. Suggest that users try the onion site when the clearnet is down, and use it as an opportunity to give much-needed growth to the Tor network.
  2. Establish 3+ clearnet hosts evenly spaced geographically on VPSs.
     2.1. Configure DNS to load-balance the clearnet traffic.
  3. Set up tar-pitting to affect dodgy-appearing traffic. (Yes, I am doing some serious hand-waving on this one… someone plz pin down the details of how to do this; a rough sketch follows this list.)
  4. You already know the IPs your users use (per fedi protocols), so why not use that info to configure the firewall during attacks? (can this be done without extra logging, just using pre-existing metadata?)
  5. Disable all avatars & graphics. Make the site text-only when a load threshold is exceeded. Graphic images account for most of the heavy lifting and they are the least important content (no offense @jerry@infosec.exchange!). (Do fedi servers tend to support this, or is hacking needed?)
  6. Temporarily defederate from all nodes to focus just on local users being able to access local content. (not sure if this makes sense)
  7. Take the web client offline and direct users to use a 3rd party app during attacks, assuming this significantly lightens the workload.
  8. Find another non-Cloudflared fedi instance that has a smaller population than your own node but which has the resources for growth, open registration, similar philosophies, and suggest to your users that they migrate to it. Most fedi admins have figured out how to operate without Cloudflare, so promote them.

^ This numbering does /not/ imply a sequence of steps. It’s just to give references to use in replies. Not all these moves are necessarily taken together.
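On item 3, here is a very rough sketch of tar-pitting as a tiny reverse proxy in front of the backend, using only Node’s built-in http module. “Dodgy-appearing” is reduced to a naive per-IP request counter; a real deployment would use better signals and would more likely live at the edge (nginx, haproxy, iptables) than in application code:

```typescript
// Very rough tar-pit sketch: a reverse proxy that delays responses to clients
// which exceed a per-minute request threshold, instead of answering promptly.

import * as http from "node:http";

const hitsPerIp = new Map<string, number>();
const THRESHOLD = 100;          // requests per window before slowing a client down
const TARPIT_DELAY_MS = 10_000; // how long to leave the noisy client hanging

setInterval(() => hitsPerIp.clear(), 60_000); // reset counters every minute

const upstream = { host: "127.0.0.1", port: 8536 }; // e.g. the Lemmy backend

http.createServer((req, res) => {
  const ip = req.socket.remoteAddress ?? "unknown";
  const hits = (hitsPerIp.get(ip) ?? 0) + 1;
  hitsPerIp.set(ip, hits);

  const forward = () => {
    const proxied = http.request(
      { ...upstream, path: req.url, method: req.method, headers: req.headers },
      (upstreamRes) => {
        res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
        upstreamRes.pipe(res);
      },
    );
    proxied.on("error", () => {
      res.writeHead(502);
      res.end("upstream unavailable");
    });
    req.pipe(proxied);
  };

  // Tar-pit: hold the noisy client's connection idle before answering.
  if (hits > THRESHOLD) setTimeout(forward, TARPIT_DELAY_MS);
  else forward();
}).listen(8080);
```

The idea is to slow abusive clients down by holding their connections idle instead of answering promptly, while normal traffic passes straight through.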

What other incident response actions do not depend on Cloudflare?

[-] diyrebel@lemmy.dbzer0.com 9 points 1 year ago

Ah, I had to zoom in to see “in trump we trust”. How disgusting.. that gives a bit more perspective.

[-] diyrebel@lemmy.dbzer0.com 12 points 1 year ago

Yes, but sadly the contrary is happening. Restaurant owners now have a sneaky trick to increase tips in order to lower wages: you know those receipts & terminals that have a “suggested tip”? Yeah, those things.. they keep increasing. I was handed a POS terminal in the Netherlands (where the tipping norm is a couple of euros), and the terminal asked me to tap how much I wanted to tip, suggesting as much as 25%.

It’s working, too. A recent article described how this trick is causing average tips to increase. So the #warOnCash is part of the problem.

[-] diyrebel@lemmy.dbzer0.com 6 points 1 year ago* (last edited 1 year ago)

It was coded 8 years ago in Tcl¹ for a one-off project in Belgium. Would you really be interested?

The APIs would have changed dramatically by now & some of the real estate sites no longer exist. Some of the sites brought in CAPTCHAs. It was coded to use Tor & the public transport site has become Tor-hostile and also changed its API. It’s also very user-unfriendly: a collection of scripts & a variety of hacks, because I was my only user.

I didn’t publish the code at the time because I worried that it would trigger the target sites to become bot-hostile.

① Also note: I use #Tcl for personal projects, but I resist publishing any Tcl code because I would rather not promote the Tcl language. Why? Because the Tcl folks have jailed a large portion of their docs in Cloudflare’s walled garden. I believe programming-language docs should be openly public.

[-] diyrebel@lemmy.dbzer0.com 10 points 1 year ago* (last edited 1 year ago)

It would stop beneficial bots like the ones I create¹ as a small-time hobbyist, because the little guy does not have the resources for this arms race. You may be right when it comes to large-scale scraping ops run by a business (e.g. scraping Ryanair or Southwest Airlines so an airfare consolidation site can show more fares).

① E.g. I wrote a bot that scraped the real-estate market sites and the public transport sites, and found me the house with the shortest public-transport commute.
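For the curious, the core of that bot boils down to a join-and-rank step. An illustrative restatement follows (the original was Tcl with real scraping and a transit API; everything here, including the transit lookup, is a hypothetical stand-in):

```typescript
// Illustrative only: rank scraped listings by public-transport commute time.
// `fetchCommuteMinutes` stands in for whatever journey-planner API a bot queries.

interface Listing { address: string; price: number }

async function fetchCommuteMinutes(from: string, to: string): Promise<number> {
  // Hypothetical placeholder; a real bot would call a transit API here.
  void from; void to;
  return 0;
}

async function rankByCommute(
  listings: Listing[],
  workplace: string,
): Promise<Array<Listing & { minutes: number }>> {
  const scored = await Promise.all(
    listings.map(async (l) => ({ ...l, minutes: await fetchCommuteMinutes(l.address, workplace) })),
  );
  return scored.sort((a, b) => a.minutes - b.minutes); // shortest commute first
}
```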

[-] diyrebel@lemmy.dbzer0.com 13 points 1 year ago* (last edited 1 year ago)

It’s bizarre that you think the EU market is small enough to be dispensable. When the GDPR came into force, many US sites had to reject EU traffic. But that was mostly temporary. They knew it wasn’t smart business to exclude the EU, so they got their compliance issues sorted.

Hope you guys enjoy not being able to search for things.

I would love that, actually. But it’s not reality. In reality, the search engines deliver a shit-ton of unusable garbage results that I would rather not see: e.g. sites that block Tor users, CAPTCHAs, giant cookie popups, etc.

If a search engine were to filter out the garbage, it would be a great start to solving the shitty web problem.

[-] diyrebel@lemmy.dbzer0.com 7 points 1 year ago

Cloudflare is an exclusive walled garden that blocks a marginalized¹ segment of people from most of their sites.

① People whose ISP uses #CGNAT, Tor users, users with text browsers, beneficial bots (which serve humans), impaired people (who can’t solve CF’s CAPTCHAs), those who distrust a US corp to have visibility on the plaintext contents of every single packet including usernames and passwords, etc.

[-] diyrebel@lemmy.dbzer0.com 73 points 1 year ago* (last edited 1 year ago)

Ad pushing is only part of the problem… These tokens will kill the #InternetArchive Wayback Machine. It’s anti-library tech.

Anti-bot tech is inherently anti-human.

[-] diyrebel@lemmy.dbzer0.com 6 points 1 year ago

The heart of your stance is apparently that pernicious socially harmful mechanisms are okay as long as they finance something useful. Correct?

Or is it that you don’t see the harms of advertising?

Advertising is a wasteful arms race. Bob may not want to spend money advertising his business, but if Mallory (his competitor) spends money on ads, then Bob is forced to spend on ads too, to recover the market share lost to Mallory’s ads.

[-] diyrebel@lemmy.dbzer0.com 9 points 1 year ago* (last edited 1 year ago)

I don’t get the “/s”.

The #GDPR is absolutely a perfect example of ½-assed laws & loopholes. I have filed reports on dozens of GDPR violations; not a single one of them led to enforcement. The GDPR is just a prop to make people feel comfortable as the EU destroys the offline infrastructure.

[-] diyrebel@lemmy.dbzer0.com 7 points 1 year ago* (last edited 1 year ago)

Really All this is going to do is create a opportunity for AI ad removal,

It’s worse than that. As it stands, I’m blocked from ~30+% of the web because of Cloudflare. Unjailing the content into archive.org’s #WaybackMachine is indispensable. From the article:

“Websites funded by ads require proof that their users are human and not bots”

I already lose copious access to content as a human treated like a bot. #Google’s plan takes the #CAPTCHA problem to the next extreme. It’s the wrong direction.

Robots work for the user, not against them. I created a bot to find me a house because the real estate sites lacked the search criteria I needed. I scraped the sites & found the ideal house. This would be nearly impossible today & Google brings it closer to impossible.

