I think I speak for all of Tucson.social when I say fuck this AI bullshit.
I understand the sentiment... But... This is a poorly reasoned and researched article. We only need to look at NASA to see how this argument is flawed.
Blown capacitors/resistors, solder failing over time and through various conditions, failing RAM/ROM/NAND chips. Just because the technology has fewer "moving parts" doesn't mean it's any less susceptible to environmental and age-based degradation. And we only get around those challenges by necessity and really smart engineers.
The article uses an example of a 2014 Model S - but I don't think it's fair to conflate 2 million kilometers in the span of 10 years with the same distance in the span of the quoted 74 years. It's just not the same. Time brings seasonal changes, which happen regardless of whether you drive the vehicle or not. Further, in many cases the car's computers never completely turn off, meaning they're running 24/7/365. Not to mention that Teslas in general have poor reliability, as tracked by multiple third parties.
Perhaps if there were an easy-access panel that allowed replacement of 90% of the car's electronics through standardized cards, that would go a long way toward realizing a "Buy it for Life" vehicle. Assuming that we can just build 80-year, "all-condition" capacitors, resistors, and other components isn't realistic or scalable.
What's weird is that they seem to concede the repairability aspect at the end, without any thought whatsoever as to how that impacts reliability.
In conclusion: a poor article, with a surface-level view of reliability, using bad examples (one person's Tesla) to prop up a narrative that EVs - as they exist - could last forever if companies wanted them to.
He did this thing where he unified his shell history across thousands of hosts - it was super handy given our extensive use of Ansible playbooks and database management commands. He could then use a couple of hotkeys to query this history within a new open document. Super handy for writing out shell command steps or wrapping things in a bash script you're working on. Unfortunately I don't really have a link to HOW to do this, I just remember thinking "Oh my god, that would save me SO much time".
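I never found his exact dotfiles, but a minimal sketch of the idea, assuming bash plus fzf (every path, filename, and keybind below is my guess, not his actual setup):

```bash
# Each host appends its history to its own file in a synced directory
# (hypothetical path - in practice an NFS mount, a Syncthing folder, or
# something rsync'd around by an Ansible play).
export UNIFIED_HIST_DIR="$HOME/.unified_history"
mkdir -p "$UNIFIED_HIST_DIR"
export PROMPT_COMMAND='history -a "$UNIFIED_HIST_DIR/$(hostname).hist"'

# Ctrl-G fuzzy-searches the combined history from every host and drops
# the chosen command onto the current prompt line.
unified_history_search() {
  local cmd
  cmd=$(cat "$UNIFIED_HIST_DIR"/*.hist | sort -u | fzf) || return
  READLINE_LINE="$cmd"
  READLINE_POINT=${#READLINE_LINE}
}
bind -x '"\C-g": unified_history_search'
```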
Nowadays, I just have this giant document with hundreds of our runbook commands and let GitHub Copilot make it SUPER easy to do the same thing without establishing an SSH session to the backend.
Eeeehhhh, I was kinda jealous of one of my coworkers' Doom Emacs setup. He had automated like 80% of his own job with it. Still haven't bothered to try to learn it myself. One of these days...
No kidding. One of the YouTubers I followed was really shilling Zed editor. He didn't seem to mention that it was Mac only.
Well, I guess it's back to neovim in the kitty terminal for me.
Sometimes I swear Mac based developers think the world revolves around them.
Eh, but then he won't learn anything. I've never found that response acceptable. It just perpetuates the problem. To each their own though!
On a technical level, your own user count matters less than the user and comment counts of the instances you subscribe to. Too many subscriptions can overwhelm smaller instances and saturate a network in terms of packets per second and your ISP's routing capacity - not to mention your router. Additionally, most ISPs block traffic going to your house on port 80 - so you'd likely need to put it behind a Cloudflare Tunnel for anything resembling reliability. Your ISP may be different, and it's always worth asking what restrictions they have on self-hosted services (non-business use-cases specifically). Otherwise, going with your ISP's business plan is likely a must. Outside of that, yes, you'll need a beefy router or switch (or multiple) to handle the constant packets coming into your network.
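For reference, the tunnel part is only a few cloudflared commands (the tunnel name, hostname, and Lemmy backend port here are placeholder examples, not a recommendation for your exact setup):

```bash
# One-time: authenticate against your Cloudflare account, create a tunnel
cloudflared tunnel login
cloudflared tunnel create my-instance

# Point a DNS record at the tunnel, then run it against the local service
cloudflared tunnel route dns my-instance social.example.com
cloudflared tunnel run --url http://localhost:8536 my-instance
```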
Then there's the security aspect. What happens if your site is breached in a way that an attacker gains remote execution? Did you make sure to isolate this network from the rest of your devices? If not, you're in for a world of hurt.
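At a bare minimum, that isolation looks something like this (the subnet addresses are made-up examples - a proper VLAN on your router is the better version of this):

```bash
# Block NEW connections from the server's subnet into the main LAN, while
# replies to LAN-initiated connections still flow back normally.
iptables -A FORWARD -s 192.168.50.0/24 -d 192.168.1.0/24 \
  -m conntrack --ctstate NEW -j DROP
```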
These are all issues that are mitigated and easier to navigate on a VPS or cloud provider.
As for the non-technical issues:
There's also the problem of moderation. What I mean is that, as a server owner, you WILL end up needing to quarantine, report, and submit illegal images to the authorities - even if you use a whitelist of only the most respectable instances. It might not happen soon, but it's only a matter of time before your instance happens to be subscribed to a popular external community when it gets hit with a nasty attack, leaving you to deal with a stressful cleanup.
When you run this on a homelab on consumer hardware, it's easier for certain government entities to claim that you were not performing your due diligence and may even be complicit in the content's proliferation. Now, of course, proving such a thing is always the crux, but in my view I'd rather have my site running on things that look as official as possible. The closer it resembles what an actual business might do, the better I think I'd fare under a more targeted attack - from a legal/compliance standpoint.
This article is ancient. We have more recent elections to go off of.
And according to basically everything I can find, "Moms For Liberty" and related groups suffered major losses basically everywhere the last cycle.
I'm not at all suggesting you shouldn't worry - after all, it's worry that got us to ensure they didn't win. But I am suggesting that your information is very out of date and that you should do a better job of finding recent data points to support your claim.
Also, I think this is off topic for this community and seems far more like political bait as some have pointed out.
I'd like to report in as someone who is at the end of that process and is actually making good money.
Now I need:
More time to hang out with friends and family. 🥲
In U.S. law there are, generally speaking, two types of bonuses.
Non-Discretionary - A.K.A. any bonus that doesn't involve discretion on the part of management and higher. This is usually for bonuses that apply as an "incentive" and have requirements to achieve. Think sales targets for sales teams, on-call incentive structures, and more. This type of bonus is actually considered part of your wage.
Discretionary - A.K.A. any bonus that is paid at the discretion of company ownership. Notably, these are bonuses that are not typically communicated in advance, and thus an employee wouldn't know to expect them. They might still expect them out of "tradition", but if the only time you ever hear about a holiday bonus is when it arrives, it's likely discretionary. These bonuses aren't guaranteed by anyone - and an employer can indeed choose not to pay these types of bonuses.
It seems that Twitter failed to pay a non-discretionary bonus, and there's a large paper trail of incentives given to employees for this bonus. I really hope the DOL makes an example of them in this case.
I'm a DevOps/SysOps/SecOps engineer - have been for over a decade now. Even if I CAN do all the things listed, it takes time to do it. It takes time to configure your networking layer, especially when documentation of the underlying app is in flux and never 100% correct. It takes time to secure your server, especially when the "prod" configuration in the repo isn't really that secure at all.
Folks saying to just "code it myself" - sure, let me stop doing my day job and start planning this completely unpaid enhancement. Let me tell my wife - "Sorry babe, gotta prove this internet person wrong and it must be today - can't go to board game night with you".
Folks saying to just "use other solutions" - great! I already budgeted $150/month of my own money. Oh wait, that doesn't matter much when I have to worry about instances that can't spend that kind of scratch.
I certainly don't doubt the top-line trends in this study. However, I wonder how the fediverse might differ. Anyone can set up a Lemmy or Mastodon instance, regardless of their technical aptitude or their desire to secure the instance from toxic content. It's also inherently more anonymous. A more direct comparison might be 4chan, not Reddit.
Both of the platforms they studied have more sophisticated methods of identifying bad actors because of their dominance - particularly Facebook, where a profile is supposed to map to a single, real identity.
That being said, there's a very real concern about how algorithms end up placing these "loud mouths" in other people's feeds. After all, outrage is still something algorithms prefer. So those 3 to 7% of users creating the toxic content might represent an outsized proportion of views.
It's good to know that the reality on these platforms is that most people are reasonable. I guess the bigger question is why people come to the opposite conclusion - and I think algorithms over-indexing on outrage are part of that.