[-] tatterdemalion@programming.dev 14 points 3 days ago

Cool, so this article calls out various types of coupling and offers no strategies for managing them.

Waste of time.

[-] tatterdemalion@programming.dev 152 points 2 weeks ago

I would vote for Bernie in a heartbeat.

He seems to always be on the right side of history, he understands the root causes of our national crises, and he has solutions.

Problem: Two-party system, voter apathy.

Solution: Ranked choice voting, remove electoral college (popular vote interstate compact).

Problem: Bought elections.

Solution: Repeal Citizens United.

Problem: Federal deficit spending.

Solution: Reform government contracts with private corpos so we're not getting gouged. Repurpose military budget. Tax the rich.

Problem: Ignorant and misinformed voting population.

Solution: More school funding, pay teachers more.

Problem: All surplus value is siphoned away from the working class.

Solution: Tax incentives for employee-owned companies. More support for unions.

Problem: Consumer price gouging.

Solution: Break up monopolies, punish anti-competitive behavior.

Problem: Irresponsible banking.

Solution: Un-repeal Glass-Steagall.

Problem: Expensive healthcare.

Solution: Universal healthcare. Don't even try to tell me we can't afford it.


I ask because it would be nice to use the "I2P mixed mode" features of qbittorrent, but I want to keep my clearnet traffic on the VPN.

Background

I have I2PD running only on my home gateway for better tunnel uptime.

To ensure that torrent traffic never escapes the VPN tunnel, I have configured qbittorrent to use only the VPN Wireguard interface.

Problem

I think this means qbittorrent I2P traffic will flow into the VPN tunnel, but then the VPN host won't know how to route back to my home gateway where the SAM bridge is running.
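One way to sanity-check this (sketch only; all addresses and interface names here are hypothetical — say the home gateway running the SAM bridge is 192.168.1.1 on eth0, and the VPN interface is wg0):

```
# Verify which interface traffic to the SAM bridge would actually use:
ip route get 192.168.1.1

# If it would be routed into wg0, pin a host route out the LAN so SAM
# traffic never enters the VPN tunnel:
ip route add 192.168.1.1/32 dev eth0
```

Whether qbittorrent's interface binding even applies to its SAM connection is a separate question; the route check at least tells you what the kernel would do with that traffic.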

submitted 1 month ago* (last edited 1 month ago) by tatterdemalion@programming.dev to c/i2p@lemmy.world

I've configured my i2pd proxy correctly so things are somewhat working. I was able to visit notbob.i2p. But sometimes Firefox really likes to replace "http" with "https" when I click on a link or even enter the URL manually into the bar. I have "HTTPS-only mode" turned off, and I also have "browser.fixup.fallback-to-https" and "network.stricttransportsecurity.preloadlist" set to false.

I tried spying on the HTTP traffic in web dev tools, and I see the request gets NS_ERROR_UNKNOWN_HOST. This does not happen when using the xh CLI HTTP client, so Firefox is doing something weird with name resolution. I made sure to turn off the Firefox DNS over HTTPS setting as well, but it didn't seem to make a difference.

I assume that name resolution needs to happen in i2pd. How can I force Firefox to let that happen?

Update: Chrome works fine.

Update: I started fresh and simplified the setup and it seems fixed. I'm not entirely sure why. The only things I've changed from default are DoH and the manual HTTP proxy.
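For anyone landing here later, this is a user.js sketch of the minimal working setup described above (assuming i2pd's HTTP proxy on its default 127.0.0.1:4444; with a manual HTTP proxy, Firefox sends the full URL to the proxy, so name resolution happens in i2pd):

```js
user_pref("network.proxy.type", 1);                  // manual proxy config
user_pref("network.proxy.http", "127.0.0.1");
user_pref("network.proxy.http_port", 4444);          // i2pd HTTP proxy default
user_pref("network.proxy.no_proxies_on", "");        // don't bypass the proxy
user_pref("dom.security.https_only_mode", false);    // no HTTPS-only upgrades
user_pref("browser.fixup.fallback-to-https", false); // no https fixup fallback
user_pref("network.trr.mode", 5);                    // DoH explicitly disabled
```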


I was just reading through the interview process for RED, and they specifically forbid the use of a VPN during the interview. I don't understand this requirement, and it seems like it would just leak your IP address to the IRC host, which could potentially be used against you in a honeypot scenario. Once they have your IP, they could link it with the credentials used with the tracker while you are torrenting, regardless of whether you used a VPN while torrenting.

[-] tatterdemalion@programming.dev 50 points 2 months ago

Don't most YouTubers make more money with their own sponsorships than from YT ads? Can we start the mass migration to PeerTube already?

[-] tatterdemalion@programming.dev 133 points 2 months ago

I love that the EU is cracking down on tech, but I also wish the US government could get in on that awesome rake.

[-] tatterdemalion@programming.dev 51 points 2 months ago* (last edited 2 months ago)

It seems irrelevant whether this person is using encrypted channels if they failed to maintain anonymity. If they distributed material and leaked any identifying info (e.g. IP address), then it would be trivial for investigators or CIs to track them down.

[-] tatterdemalion@programming.dev 182 points 4 months ago

"crushing it" might be a bit superlative but sure


I'm preparing for a new PC build, and I decided to try a new atomic OS after having been with NixOS for about a year.

First I tried Kinoite, then Bazzite, but even though KDE has a lot of features, I found it incredibly buggy, with generally poor performance, especially in Firefox. I don't really have time to diagnose these issues, so I figured I would put in just a little more effort and migrate my Sway config to Fedora Sway Atomic.

I'm glad I did. The vanilla install of Fedora Sway is awesome. No bloat and very usable. I haven't noticed any bugs. Performance is excellent. And it was very straightforward to apply my sway config on top without losing the nice menu bar, since Fedora puts their sway config in /usr/share/sway.

I'm also quite happy with the middle ground of using an OSTree-based Linux plus Nix and Home Manager for my user config. I always thought that configuring the system-level stuff in Nix was the hardest part with the least payoff, while a declarative config for my dev tools and desktop environment was the most productive part.

I originally tried NixOS because I wanted bleeding edge software without frequent breakage, and I bought into the idea of a declarative OS configuration with versioned updates and rollback. It worked out well, but I would be lying if I said it wasn't a big time investment to learn NixOS. I feel like there's a sweet spot with container images for a base OS layer then Nix and Home Manager for stuff that's closer to your actual workflows.
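For a sense of what that sweet spot looks like, here's a minimal home.nix sketch (standalone Home Manager on a non-NixOS host; the username, packages, and git identity are placeholders, not my actual config):

```nix
{ pkgs, ... }:
{
  home.username = "me";
  home.homeDirectory = "/home/me";
  home.stateVersion = "24.05";

  # Dev tools managed declaratively, independent of the base OS image:
  home.packages = with pkgs; [ ripgrep fd ];

  programs.git = {
    enable = true;
    userName = "me";
  };
}
```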

I might even explore building my own OS image on top of Universal Blue's Nvidia image.

Hope this path forward stays fruitful! I urge anyone who's interested in immutable distros to give this a try.

[-] tatterdemalion@programming.dev 51 points 7 months ago

Here's the ad: https://www.youtube.com/watch?v=wInNjr_9D28

I couldn't find it anywhere in the OP.

[-] tatterdemalion@programming.dev 62 points 7 months ago

Selling life-saving drugs at large multiples of the cost to manufacture + distribute. The most obvious example being insulin.

Switching political parties in the same term you were elected to office.

CEOs making 100x the median worker at the same company.

Assault rifles and other automatic or military-grade weapons. They have no practical purpose in the hands of a citizen. Pistols, shotguns, and hunting rifles should be sufficient for hunting and self defense.

Generic finance bro bullshit. Frivolous use of bank credit for speculative investment. Predatory lending. Credit default swaps. It's just a spectrum of Ponzi schemes. Let's reinstate the Glass-Steagall Act.

Non-disclosure of expensive gifts to Supreme Court justices. Looking at you, Clarence.

Military recruiting at high schools.

Junk mail. You literally have to pay a company to stop sending it.

[-] tatterdemalion@programming.dev 70 points 7 months ago

Especially because devs actually have to go out of their way to exclude Linux these days. Proton makes it so damn easy to support Linux. If you don't, it's because you did not even try or you intentionally added some bloat to your software to make it incompatible.

submitted 9 months ago* (last edited 9 months ago) by tatterdemalion@programming.dev to c/programming_languages@programming.dev

I've never felt the urge to make a PL until recently. I've been quite happy with a combination of Rust and Julia for most things, but after learning more about BEAM languages, LEAN4, Zig's comptime, and some newer languages implementing algebraic effects, I think I at least have a compelling set of features I would like to see in a new language. All of these features are inspired by actual problems I have programming today.

I want to make a language that achieves the following (non-exhaustive):

  • significantly faster to compile than Rust
  • at least has better performance than Python
  • processes can be hot-reloaded like on the BEAM
  • most concurrency is implemented via actors and message passing
  • built-in pub/sub buses for broadcast-style communication between actors
  • runtime is highly observable and introspective, providing things like tracing, profiling, and debugging out of the box
  • built-in API versioning semantics with automatic SemVer violation detection and backward compatible deployment strategies
  • can be extended by implementing actors in Rust and communicating via message passing
  • multiple memory management options, including GC and arenas
  • opt-in linear types to enable forced consumption of resources
  • something like Jane Street's OCaml "modes" for simpler borrow checking without lifetime variables
  • generators / coroutines
  • Zig's comptime that mostly replaces macros
  • algebraic data types and pattern matching
  • more structural than nominal typing; some kind of reflection (via comptime) that makes it easy to do custom data layouts like structure-of-arrays
  • built-in support for multi-dimensional arrays, like Julia, plus first-class support for database-like tables
  • standard library or runtime for distributed systems primitives, like mesh topology, consensus protocols, replication, object storage and caching, etc

I think with this feature set, we would have a pretty awesome language for working in data-driven systems, which seems to be increasingly common today.
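As a toy illustration of the actor + pub/sub style from the list above (Python as a stand-in for the hypothetical language; all names are made up):

```python
import queue
import threading

class Bus:
    """A broadcast bus: publish() fans a message out to every subscriber."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self):
        q = queue.Queue()
        self._subscribers.append(q)
        return q

    def publish(self, msg):
        for q in self._subscribers:
            q.put(msg)

def actor(inbox, results):
    # Each actor owns its state and communicates only via messages.
    total = 0
    while True:
        msg = inbox.get()
        if msg is None:  # shutdown sentinel
            break
        total += msg
    results.put(total)

bus = Bus()
results = queue.Queue()
threads = []
for _ in range(2):
    t = threading.Thread(target=actor, args=(bus.subscribe(), results))
    t.start()
    threads.append(t)

for n in (1, 2, 3):
    bus.publish(n)   # every subscriber sees every message
bus.publish(None)

for t in threads:
    t.join()
totals = [results.get(), results.get()]
print(totals)  # both actors independently summed 1+2+3 → [6, 6]
```

In the imagined language this wiring would be runtime-provided rather than hand-rolled, but the communication model is the same.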

One thing I can't decide yet, mostly due to ignorance, is whether it's worth it to implement algebraic effects or monads. I'm pretty convinced that effects, if done well, would be strictly better than monads, but I'm not sure how feasible it is to incorporate effects into a type system without requiring a lot of syntactical overhead. I'm hoping most effects can be inferred.
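To gesture at what an effect handler buys you, here's a sketch using Python generators (everything here is hypothetical illustration, not a real effect system): the computation yields effect requests, and a handler services each one and resumes the computation with the result.

```python
class Ask:
    """An effect request: ask the environment for a named value."""
    def __init__(self, prompt):
        self.prompt = prompt

def program():
    # A computation that performs two effects without knowing how
    # they will be handled.
    a = yield Ask("first")
    b = yield Ask("second")
    return a + b

def run(gen, answers):
    """Handle Ask effects by looking answers up in a dict."""
    try:
        request = next(gen)
        while True:
            result = answers[request.prompt]  # interpret the effect
            request = gen.send(result)        # resume the computation
    except StopIteration as stop:
        return stop.value

print(run(program(), {"first": 1, "second": 2}))  # → 3
```

Swapping in a different `run` (e.g. one that logs or mocks the answers) changes the interpretation without touching `program`, which is the decoupling effects promise; a real effect system would additionally track the `Ask` capability in `program`'s type.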

I'm also nervous that if I add too many static analysis features, compile times will suffer. It's really important to me that compile times are productive.

Anyway, I'm just curious if anyone thinks this would be worth implementing. I know it's totally unbaked, so it's hard to say, but maybe it's already possible to spot issues with the idea, or suggest improvements. Or maybe you already know of a language that solves all of these problems.

[-] tatterdemalion@programming.dev 79 points 10 months ago* (last edited 10 months ago)

It literally cannot come up with novel solutions because its goal is to regurgitate the most likely response to a question based on training data from the internet. Considering that the internet is often trash and getting trashier, I think LLMs will only get worse over time.

[-] tatterdemalion@programming.dev 61 points 1 year ago* (last edited 1 year ago)

If we're saying 7% is the bar for mainstream, then Rust is my vote.

C# is not even mainstream by that standard.

I'd also like to see Julia used more.

[-] tatterdemalion@programming.dev 61 points 1 year ago

It's making fun of dynamic languages because rather than letting the compiler prove theorems about statically typed code, they... don't.


Who are these for? People who use the terminal but don't like running shell commands?

OK sorry for throwing shade. If you use one of these, honestly, what features do you use that make it worthwhile?


More specifically, I'm thinking about two different modes of development for a library (private to the company) that's already relied upon by other libraries and applications:

  1. Rapidly develop the library "in isolation" without being slowed down by keeping all of the users in sync. This causes more divergence and merge effort the longer you wait to upgrade users.
  2. Make all changes in lock-step with users, keeping everyone in sync for every change that is made. This will be slower and might result in wasted work if experimental changes are not successful.

As a side note: I believe these approaches are similar in spirit to the continuum of microservices vs monoliths.

Speaking from recent experience, I feel like I'm repeatedly finding that users of my library have built towers upon obsolete APIs, because there have been multiple phases of experimentation that necessitated large changes. So with each change, large amounts of code need to be rewritten.

I still think that approach #1 was justified during the early stages of the project, since I wanted to identify all of the design problems as quickly as possible through iteration. But as the API is getting closer to stabilization, I think I need to switch to mode #2.

How do you know when is the right time to switch? Are there any good strategies for avoiding painful upgrades?

DECEARING EGG (www.youtube.com)
These memes are (programming.dev)

After moving from lemmy.ml to programming.dev, I've noticed that web responses are fulfilled much more quickly, even for content on federated instances like lemmy.ml and lemmy.world.

It seems like this shouldn't make such a big difference. If a large instance is overloaded, it's overloaded, whether the traffic is coming from clients with accounts on that instance or from other federated instances.

Can this be explained entirely by response caching?

