
I've tried the opposite approach. When a client mentions the chatbot, I'll sometimes open a few smolweb sites: fast, minimal, readable, calm. No pop-ups. No blinking corners. Just content, clear and immediate.

Their eyes change. "Oh, that loads fast." "That's easy to read." "I like that."

submitted 7 hours ago* (last edited 7 hours ago) by codeinabox@programming.dev to c/neovim@programming.dev
[-] codeinabox@programming.dev 1 points 2 days ago

Not sure if you were even looking for paper reviews.

I didn't write the article, I just shared it because I thought it was interesting.

Functions (theprogrammersparadox.blogspot.com)

Over the decades, I’ve seen the common practices around creating functions change quite a bit.

Announcing fluent-codegen (lukeplant.me.uk)

When you call the humans who keep production safe “the bottleneck”, you’re painting a very specific picture: the reviewer as the obstacle, the gate as friction, something to route around. Cue the Balrog scene from The Lord of the Rings. That picture determines what you build. Tools to remove reviewers look different from tools to support them.

The End of Coding? Wrong Question (www.architecture-weekly.com)

What LLMs revealed is how many people in our industry don't like to code.

It's intriguing that now they claim and showcase what they "built with Claude", whereas usually that means they generated a PoC.

It's funny, as people still focus on how they're building, so it's all about the code. And if that's the message sent outside, together with the thought that LLMs are already better than "average coder Joe", then the logical follow-up question is: why do we need those humans in the loop?


In the world of open source, relicensing is notoriously difficult. It usually requires the unanimous consent of every person who has ever contributed a line of code, a feat nearly impossible for legacy projects. chardet, a Python character encoding detector used by requests and many others, has sat in that tension for years: as a port of Mozilla’s C++ code it was bound to the LGPL, making it a gray area for corporate users and a headache for its most famous consumer.

Recently the maintainers used Claude Code to rewrite the whole codebase and release v7.0.0, relicensing from LGPL to MIT in the process. The original author, a2mark, saw this as a potential GPL violation.
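For readers unfamiliar with what chardet does: it guesses the character encoding of raw bytes using statistical models ported from Mozilla's detector. A minimal, stdlib-only sketch of the much simpler trial-decode approach (this is an illustration of the problem space, not chardet's actual algorithm):

```python
def sniff_encoding(data: bytes, candidates=("utf-8", "latin-1")) -> str:
    """Return the first candidate encoding that decodes the bytes cleanly.

    A naive trial-and-error sketch, NOT chardet's method: chardet uses
    byte-frequency models rather than simple decode attempts.
    """
    for enc in candidates:
        try:
            data.decode(enc)
            return enc
        except UnicodeDecodeError:
            continue
    return "unknown"

print(sniff_encoding("café".encode("utf-8")))    # utf-8
print(sniff_encoding("café".encode("latin-1")))  # latin-1 (0xE9 is invalid UTF-8 here)
```

The real library collapses all of this into a single call, `chardet.detect(data)`, which returns its best guess plus a confidence score, which is exactly why a byte-for-byte rewrite raises the derivation question at the heart of the licensing dispute.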

[-] codeinabox@programming.dev 15 points 1 month ago

In fact, this garbage blogspam should go on the AI coding community that was made specifically because the subscribers of the programming community didn't want it here.

This article may mention AI coding, but I made a very considered decision to post it here because the primary focus is the author's relationship to programming, and hence it is worth sharing with the wider programming community.

Considering how many people have voted this up, I would take that as a sign I posted it in the appropriate community. If you don't feel this post is appropriate in this community, I'm happy to discuss that.

[-] codeinabox@programming.dev 115 points 1 month ago

This quote on the abstraction tower really stood out for me:

I saw someone on LinkedIn recently — early twenties, a few years into their career — lamenting that with AI they “didn’t really know what was going on anymore.” And I thought: mate, you were already so far up the abstraction chain you didn’t even realise you were teetering on top of a wobbly Jenga tower.

They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of.

But sure. AI is the moment they lost track of what’s happening.

The abstraction ship sailed decades ago. We just didn’t notice because each layer arrived gradually enough that we could pretend we still understood the whole stack. AI is just the layer that made the pretence impossible to maintain.

[-] codeinabox@programming.dev 19 points 1 month ago

Instead, most organisations don’t tackle technical debt until it causes an operational meltdown. At that point, they end up allocating 30–40% of their budget to massive emergency transformation programmes—double the recommended preventive investment.

I can very much relate to this statement. Many contracts I've worked on in the last few years have been transformation programmes, where an existing product is rewritten and replatformed, often because of the level of tech debt in the legacy system.

[-] codeinabox@programming.dev 27 points 1 month ago

I am not surprised that there are parallels between vibe coding and gambling:

With vibe coding, people often report not realizing until hours, weeks, or even months later whether the code produced is any good. They find new bugs or they can’t make simple modifications; the program crashes in unexpected ways. Moreover, the signs of how hard the AI coding agent is working and the quantities of code produced often seem like short-term indicators of productivity. These can trigger the same feelings as the celebratory noises from the multiline slot machine.

[-] codeinabox@programming.dev 56 points 1 month ago

I think the most interesting, and also most concerning, point is the eighth: that people may become busier than ever.

After guiding way too many hobby projects through Claude Code over the past two months, I’m starting to think that most people won’t become unemployed due to AI—they will become busier than ever. Power tools allow more work to be done in less time, and the economy will demand more productivity to match.

Consider the advent of the steam shovel, which allowed humans to dig holes faster than a team using hand shovels. It made existing projects faster and new projects possible. But think about the human operator of the steam shovel. Suddenly, we had a tireless tool that could work 24 hours a day if fueled up and maintained properly, while the human piloting it would need to eat, sleep, and rest.

In fact, we may end up needing new protections for human knowledge workers using these tireless information engines to implement their ideas, much as unions rose as a response to industrial production lines over 100 years ago. Humans need rest, even when machines don’t.

This does sound very much like what Cory Doctorow refers to as a reverse-centaur, where the developer's responsibility becomes overseeing the AI tool.

[-] codeinabox@programming.dev 12 points 2 months ago

This article is quite interesting! There are a few standout quotes for me:

On one hand, we are witnessing the true democratisation of software creation. The barrier to entry has effectively collapsed. For the first time, non-developers aren’t just consumers of software - they are the architects of their own tools.

The democratisation effect is something I've been thinking about myself, as neither hiring developers nor learning to code comes cheap. However, if it allows non-profits to build ideas that can make our world a better place, then that is a good thing.

We’re entering a new era of software development where the goal isn't always longevity. For years, the industry has been obsessed with building "platforms" and "ecosystems," but the tide is shifting toward something more ephemeral. We're moving from SaaS to scratchpads.

A lot of this new software isn't meant to live forever. In fact, it’s the opposite. People are increasingly building tools to solve a single, specific problem exactly once—and then discarding them. It is software as a disposable utility, designed for the immediate "now" rather than the distant "later."

I've not thought about it in this way, but it's a really good point. When code becomes cheap, it's easier to create bespoke, short-lived solutions.

The real cost of software isn’t the initial write; it’s the maintenance, the edge cases, the mounting UX debt, and the complexities of data ownership. These "fast" solutions are brittle.

Though, as much as these tools might democratise software development, they still require engineering expertise to be sustainable.

[-] codeinabox@programming.dev 17 points 2 months ago

Thank you! I've added the image to the post as well.

[-] codeinabox@programming.dev 66 points 2 months ago

I use AI coding tools, and I often find them quite useful, but I completely agree with this statement:

And if you think of LLMs as an extra teammate, there's no fun in managing them either. Nurturing the personal growth of an LLM is an obvious waste of time.^___^

At first, AI coding tools struck me as being like a junior developer, in that they will keep trying to solve the problem and never give up or grow frustrated. However, I can't teach an LLM. Yes, I can give it guard rails and detailed prompts, but it can't learn the way a teammate can, and it will always require supervision and review of its output. Whereas I can teach a teammate new or different ways to do things, and over time their skills and knowledge will grow, as will my trust in them.

[-] codeinabox@programming.dev 26 points 2 months ago

My understanding of how this relates to Jevons paradox: it was long believed that advances in tooling would let companies lower their headcount, because developers would become more efficient. Instead, it has had the opposite effect:

Every abstraction layer - from assembly to C to Python to frameworks to low-code - followed the same pattern. Each one was supposed to mean we’d need fewer developers. Each one instead enabled us to build more software.

The meta-point here is that we keep making the same prediction error. Every time we make something more efficient, we predict it will mean less of that thing. But efficiency improvements don’t reduce demand - they reveal latent demand that was previously uneconomic to address. Coal. Computing. Cloud infrastructure. And now, knowledge work.

[-] codeinabox@programming.dev 21 points 3 months ago

Based on my own experience of using Claude for AI coding, and the Whisper model on my phone for dictation, AI tools can for the most part be very useful. Yet there are nearly always mistakes, even if they are quite minor at times, which is why I am sceptical of AI taking my job.

Perhaps the biggest reason AI won't take my job is that it has no accountability. For example, if an AI coding tool introduces a major bug into the codebase, I doubt you'd be able to hold OpenAI or Anthropic accountable. However, if you have a human developer supervising it, that person is very much accountable. This is something that Cory Doctorow talks about in his reverse-centaur article.

"And if the AI misses a tumor, this will be the human radiologist's fault, because they are the 'human in the loop.' It's their signature on the diagnosis."

This is a reverse centaur, and it's a specific kind of reverse-centaur: it's what Dan Davies calls an "accountability sink." The radiologist's job isn't really to oversee the AI's work, it's to take the blame for the AI's mistakes.

[-] codeinabox@programming.dev 32 points 3 months ago

This quote from the article very much sums up my own experience of Claude:

In my recent experience at least, these improvements mean you can generate good quality code, with the right guardrails in place. However without them (or when it ignores them, which is another matter) the output still trends towards the same issues: long functions, heavy nesting of conditional logic, unnecessary comments, repeated logic – code that is far more complex than it needs to be.

AI coding tools are definitely helpful with boilerplate code, but they still require a lot of supervision. I am interested to see whether these tools can be used to tackle tech debt, as the argument for not addressing tech debt is often a lack of time, or whether they would just contribute to it, even with thorough instructions and guardrails.
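The "heavy nesting" complaint in that quote is easy to picture. A toy illustration (hypothetical code, not from the article) of the nested style versus the same logic flattened with guard clauses:

```python
def total_nested(order):
    # The style the quote describes: every condition adds another level.
    if order is not None:
        if order.get("paid"):
            if order.get("items"):
                return sum(item["price"] for item in order["items"])
            else:
                return 0
        else:
            return 0
    else:
        return 0

def total_flat(order):
    # Identical behaviour, flattened with a single guard clause.
    if not order or not order.get("paid") or not order.get("items"):
        return 0
    return sum(item["price"] for item in order["items"])

order = {"paid": True, "items": [{"price": 3}, {"price": 4}]}
print(total_nested(order), total_flat(order))  # 7 7
```

Both functions behave the same; the difference is purely how much structure a reader (or reviewer) has to hold in their head, which is the kind of guardrail-dependent quality the quote is pointing at.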


codeinabox

joined 5 months ago