[-] entwine@programming.dev 3 points 1 week ago

Write your own CI and stop being a lazy fuck.

Tell me you're unemployed without telling me you're unemployed

[-] entwine@programming.dev 3 points 2 weeks ago

If antitrust was being enforced, Google execs wouldn't even dream of attempting this bullshit.

Antitrust regulation is probably the easiest way to fix the biggest problems in our society, it is 100% bipartisan, and it is easy to explain to the average US voter. The only group that is against it is the billionaires/ultra wealthy. Instead, politicians are all hyperfocused on culture war mudslinging and bullshit that makes no difference.

Break up Apple, Google, Microsoft, Amazon, and Meta (just to start), and watch how the tech industry explodes with innovation again, and the tech billionaire becomes an endangered species. The AI bubble will burst as companies actually need to compete to survive, and thus won't be burning as many resources on crap that clearly doesn't work.

[-] entwine@programming.dev 3 points 1 month ago

But by definition they are learning and it is not conceptually different from how we learn.

(citation needed)

"Machine learning" is neither mechanically nor conceptually similar to how humans learn, unless you take a uselessly broad view and define it as "thing goes in, thing comes out". The same could be applied to a simple CRUD app.

[-] entwine@programming.dev 3 points 1 month ago

It has been a very long time since I've worked with PHP, so I can't help you with specific runtime stuff, like what the cost of module imports is.

But not using classes is a perfectly valid approach. The only issue is ofc that you need to hardcode column names, but it sounds like that's at a manageable place for you right now.

Organizing things into classes makes things easier once the operations you're doing on data get more complex. There are no good rules for this, you kinda have to develop a feel for it on your own as you gain experience.

For the specific case of SQL results, you'll typically be better off using what's known as an ORM library. Here's a random one I found on GH as an example. But for your small project, what you're doing right now is fine.
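To illustrate the idea (in Python rather than PHP, with a made-up `User` class and an in-memory SQLite table), this is the boilerplate an ORM automates: wrap each raw row tuple in a typed object once, and the rest of your code works with named fields instead of column indices.

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str
    age: int

# Throwaway in-memory database for the example
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, age INTEGER)")
conn.execute("INSERT INTO users VALUES (1, 'bob', 44), (2, 'alice', 7)")

# Hand-rolled "ORM": map each result tuple to a User instance.
# Downstream code now reads user.name, not row[1].
users = [User(*row) for row in conn.execute("SELECT id, name, age FROM users")]
print(users[0].name)  # bob
```

A real ORM layers on query building, relationships, and change tracking, but this mapping step is the core of it.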

As awful as this project might be against “the real world” use

All those patterns and frameworks and things people use are meant to make a codebase more manageable or flexible. ORMs are a good example: they have a lot of benefits, but they are by no means required.

With that said, your zero guardrails approach is likely to end up an unmaintainable spaghetti mess as you add more and more features. There is a point at which you really should sit down and learn about those more advanced techniques and practices. They actually do have value, especially if you ever want to build something bigger than what you have now.

I feel weirdly proud of what I’m achieving. Is there a name for this feeling, of pride for something you know is subpar?

You should feel proud. You accomplished something 99% of the population hasn't. You leveled up. You're a real motherfucking software engineer. You've used your brain in ways those AI slop coders never will. There is no "subpar". When you break your 1RM record at the gym, is it "subpar" just because the guy next to you can do twice the weight?

Fuck no, because you're fighting your fights, he's fighting his. All that matters is that you're winning.

And you are winning.

You're a winner.

You're my winner.

I love you.

[-] entwine@programming.dev 3 points 1 month ago

The modern game industry was being run by pedophile billionaires, two of the worst adjectives you can apply to a human being. I'd say that's more of a factor than not having enough "creative constraints"

[-] entwine@programming.dev 3 points 2 months ago

I think distro maintainers need to do a better job highlighting the actually important differences between distros rather than what fancy wallpaper is enabled by default.

The most impactful differences between the major distros:

  • Debian prioritizes stability at the cost of shipping outdated packages
  • Fedora prioritizes modernity at the cost of some stability
  • ArchLinux says "fuck it" and tries to ship the latest software as soon as it releases, at the cost of stability
  • other distros like Ubuntu, Mint, Bazzite, Manjaro, SteamOS, etc. are usually derived from one of those three (Ubuntu is derived from Debian)

So there's kind of a sliding scale of Linux fear/comfort for users, and your distro choice should reflect where you fall on that scale. Fedora generally provides a good middle ground and doesn't break often, but will eventually break things (especially if you install updates frequently), so you should be prepared to fix them.

Nowadays, atomic distros change this up because they support rollbacks, meaning a broken update can be fixed without any tinkering or Linux knowledge required from the end-user. Also, they're theoretically less likely to break and easier to test due to their immutability.

[-] entwine@programming.dev 3 points 2 months ago

In a randomized controlled trial, we examined 1) how quickly software developers picked up a new skill (in this case, a Python library) with and without AI assistance; and 2) whether using AI made them less likely to understand the code they’d just written.

We found that using AI assistance led to a statistically significant decrease in mastery. On a quiz that covered concepts they’d used just a few minutes before, participants in the AI group scored 17% lower than those who coded by hand, or the equivalent of nearly two letter grades. Using AI sped up the task slightly, but this didn’t reach the threshold of statistical significance.

Who designed this study? I assume it wasn't a software engineer, because this doesn't reflect real world "coding skills". This is just a programming-flavored memory test. Obviously, the people who coded by hand remembered more about the library in the same way students who take notes by hand as opposed to typing tend to remember more.

A proper study would need to evaluate critical thinking and problem solving skills using real world software engineering tasks. Maybe find some already-solved, but obscure bug in an open source project and have them try to solve it in a controlled environment (so they don't just find the existing solution already).

[-] entwine@programming.dev 3 points 2 months ago* (last edited 2 months ago)

This is a dumb take. You didn't understand the assignment.

"From scratch" in software engineering usually means it was written without being based on an existing implementation. It doesn't mean it was written by someone who independently discovered computer science and software engineering on their own.

You're trying to regurgitate a pro-AI argument you read somewhere that defends OpenAI and others' use of open source software to train their commercial models without paying, without following open source licensing requirements, and without even acknowledging their sources (typically called "copyright infringement" or "plagiarism" when non-billionaires do it). The argument you are plagiarising here tries to conflate human learning with AI training, which is as stupid as me saying that downloading movies for free is legal because I'm "training" my brain on that content.

If you like AI slop, that's cool. Idgaf. But if you're going to wade into the controversies and politics, maybe think a little harder before making a fool of yourself? The people you're trying to argue with likely haven't had their brain and critical thinking skills turned to mush by using LLMs as much as you have.

[-] entwine@programming.dev 3 points 2 months ago* (last edited 2 months ago)
{
    "columns": ["id", "name", "age"],
    "rows": [
        [1, "bob", 44], [2, "alice", 7], ...
    ]
}

There ya go, problem solved without the unparseable ambiguity of CSV
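As a sketch, reading that shape in Python takes two lines with the stdlib `json` module (same column names and values as the example above), with none of CSV's quoting, delimiter, or type-guessing ambiguity:

```python
import json

payload = """
{
    "columns": ["id", "name", "age"],
    "rows": [
        [1, "bob", 44], [2, "alice", 7]
    ]
}
"""

data = json.loads(payload)
# Rebuild each row as a dict keyed by column name. Types (int vs
# string) survive the round trip because JSON encodes them explicitly.
records = [dict(zip(data["columns"], row)) for row in data["rows"]]
print(records[1]["name"])  # alice
```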

Please stop using CSV.

[-] entwine@programming.dev 3 points 3 months ago

There's nothing wrong with wanting a GUI front end, but the ignorance on display here is embarrassing.

No, I don’t want to spend weeks to learn GDB inside-out, so I don’t have to search online for 15-30 minutes on an AI infested internet every time I want to use it, for each feature I’m using it for that day.

Weeks? Just type 'help' and you'll get the instructions in under a millisecond. No AI slop. There aren't even that many commands to learn lmao.

Pro tip: type 'apropos <word>' to search the help pages when you don't remember the command.
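For example (a sketch of an interactive session; the exact help text varies by GDB version):

```
(gdb) help
(gdb) help breakpoints
(gdb) apropos watchpoint
```

The first lists the command classes, the second lists every breakpoint-related command, and the third searches all help pages for "watchpoint".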

Do better.

[-] entwine@programming.dev 3 points 4 months ago

google does a lot of things that just aren’t realistic for the large majority of cases

Yes, but that is not relevant. The person they replied to said a monorepo doesn't scale. Google (and others) prove that it does scale to at least their massive size.

[-] entwine@programming.dev 3 points 4 months ago* (last edited 4 months ago)

Godot? Wtf?

Also,

I’ve looked around, and I found all the local GUI converters like Handbrake are unwieldy to use, especially if you just want to convert in bulk.

Idk Handbrake specifically, but generally when that kind of thing happens it's for a good reason. There are a lot of options when converting a video file, and you can't support them all without exposing that complexity in some form. If you don't think so, then you haven't done enough research into the problem space.

With that said, a "dumbed down" version of Handbrake that is optimized for a single common use case could be useful. For example, re-encoding iPhone videos to minimize compression artifacts when sharing over WhatsApp is one such case. However, I don't think a desktop app is the most accessible implementation for that.

EDIT: also, even if a desktop app was a good choice, it might be more practical to implement it as like a preset for Handbrake rather than a whole separate app. My mom isn't very tech literate, but I could probably teach her how to activate a preset (idk if Handbrake supports presets, but it probably does lol)


entwine

joined 7 months ago