Good lord. So glad my country has strict animal welfare standards for livestock. Uncomfortable that we still import and slaughter pigs from countries without those standards. (And yes, we import-and-slaughter because we don't import pork itself. We do, however, allow the import/export of live animals, so international trade buys our sheep for 'breeding' and sells us their pigs for 'NZ-made pork'. I suppose it at least enforces abattoir health standards..?)
Man, I used to really like browsing the stuff at ThinkGeek. Even bought a few things. Now that it's owned by... I wanna say GameStop?... it's ceased to be interesting to me. I liked things like the laundry basket that looked like a radioactive barrel, the shower gel that looks like a blood bag... that kind of light-hearted novelty stuff. But the new owner gutted all the interesting content, and now it's just IP collectables.
It's been long enough that I'd forgotten about ThinkGeek. Damn. Wish something like it were still around.
- "progress on [1], fixed linting [2]"
- "[1] completed, setup for [2]"
- "[3] and [4] completed"
- "fixed formatting"
- "refactoring [1] and [2]"
- "fix variable typos"
- "update logic in [2]"
- "revert package.json and regenerate package-lock"
All my commits have messages. I generally commit after completing a 'block' objective, and describe what that was in very simple terms, mostly with regard to the file/section with the most significant logic changes. I don't always specify the file if I did tiny typo/linting/annotation fixes across a bunch of them; because the logic is unaffected, I know the differences will be visible in the commit history.
My weakness is that I don't do it often enough. If I'm working on [2] for several hours, I'll only commit when I consider it minimally viable (completed 2), or when moving between machines ([further] progress on 2). And I have a bad habit of not pushing every time I commit, only at the end of the day or when moving between machines (though a messy rebase hopefully made that lesson stick), or if somebody else on the team wants to review an issue I'm having.
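For what it's worth, the habit I'm trying to build looks roughly like this (a minimal sketch; the branch name and message are hypothetical):

```
# after finishing a 'block' objective on a hypothetical feature branch
git add -A
git commit -m "[2] completed, setup for [3]"
# push straight away instead of batching pushes at the end of the day
git push origin feature-2
```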
AI can assist with coding; it's quite helpful for that. Predictive text, learning a less familiar language, converting pseudocode, etc.
But it couldn't possibly replace senior developers long-term. It just looks new and exciting, especially to people who don't truly understand how it works. We still need to have human developers capable of writing their own new code.
-
AI is entirely derivative; it's just copying the human devs of yesteryear. If AI does the majority of coding, then it becomes incapable of learning anything new, thus necessitating human coders anyway. It's also only going to generate solutions to broad-strokes problems that already exist in its dataset, or convert pseudocode into functional code (which still requires a dev who knows enough to write the pseudocode).
-
It also currently has no way of validating what it writes. It's trying to replicate what our writing looks like contextually; it doesn't comprehend it. If it ever starts training on itself as it ages, it will stagnate and require human review, which means needing humans who understand code. And that's before counting the poor practices it will already have picked up, because so many devs are inconsistent about things like writing comments, documentation, or unit testing. AI doesn't have its own bias, but it inevitably learns to imitate ours.
-
And what about bug-testing? When the AI writes something that breaks, who do you ask for help? The AI doesn't comprehend the context of the code it's reading if you paste it back; it doesn't remember writing it. You need people who understand how the code works to be able to recognise why it might be breaking.
AI devs are the fast food of coding. The result will never be as good as something from an experienced professional. But if you're an awful cook, it still makes it fast and easy to get a sad, flat cheeseburger.
I've worked with devs who are the equivalent of line cooks and are also producing sad, flat cheeseburgers: code of poor quality that still sees production because the client doesn't know any better. IMO, those are the only devs that need to be concerned, because those are the ones that are easy to replace.
If AI coding causes any problems within the job market for devs, it will be that it replaces graduate/junior developers so well that fewer devs get the mentoring or experience to become seniors, and the demand for seniors will ramp up significantly. It seems more likely that developers will split into two separate specialisations, not that our single track will be replaced.
These are essentially my thoughts. Emojis are helpful for indicating context (tone/expression/sentiment). The goal of language is communication, and words alone can struggle with that; well-placed emojis help improve communication. Numerous emojis breaking up sentences make them harder to read; imo that impairs communication.
I also don't like the idea of policing others' use of a harmless sub-dialect of online communication just because one decides not to use it themselves. I personally don't use or enjoy the 'emojis' that are just 'fun graphics we like' (most Discord custom emotes are this). Nor do I like that filter where 1-3 emojis are inserted after basically every single word. But that's because it's not my online dialect; it doesn't mean people who use emojis that way are 'wrong'.
Different platforms have different 'accents', and emojis are only one example of that. I find the numerous dialects of online English to be a fascinating topic that isn't often considered.
Sometimes I'd feel sad that a trait of, say, Tumblr's dialect didn't have a Reddit equivalent: Tumblr uses punctuation, capitalisation, and even typos as tone indicators. A Redditor doesn't know the different tones implied amongst these, even though most Tumblr users do:
- no. stop
- no stop
- noo staaaaahp
- noolkjaflakud STOP
- No. Stop.
- NO STOP
I can tell which of these are vaguely upset, genuinely upset, or pretending to be upset in a few different ways. Reddit doesn't have that, because it expects everybody to write with formal grammar all the time, including not 'allowing' emojis as tone indicators. I suspect that formal writing style probably contributes to why so many comments are read in bad faith as smug/adversarial. 😢
Absolutely. I hear Witcher 3 is good, and I believe that it is... but after playing it for 5 hours and feeling like I got nowhere, the next day I just genuinely didn't feel like playing it, as I'd felt very little character progression and seen zero story progression.
Games continue to be marketed towards younger people - especially kids - with spare time to burn. They consider their 120+ hour playtime to be a selling point, but at this point that's the reason I avoid them. If I'm going to play for an hour or so at the end of my day, I want that game to feel like it meant something.
I prefer my games to feel dense and deliberately crafted, with minimal sawdust padding. I've enjoyed open-world games in the past, but the ever-increasing demand for bigger and bigger maps means that most open-world games are very empty and mostly traversal. Linear worlds aren't bad - they can be crafted much more deliberately and with far more content, because you can predict when the player will see it.
Open worlds where everything is crafted deliberately are very rare, and they still rely on constraints to limit the player to somewhat-linear paths: Green Hell needs a grappling hook to leave the first basin; Fallout: New Vegas fills the map north of Tutorial Town with extreme enemies to funnel new players south-east.
And what really gets me, with microtransactions, is the number of games that make themselves so big and so slow that they're boring on purpose, so that they can charge you to skip them! Imagine making a game so fucking awful that anybody buying it will then buy the ability to not play it, because 80% of the game is sawdust: timers, resource farming, daily rotations, exp grinding. Fucking nightmare, honestly.
Curated communities would likely choose to defederate from instances like that, since they have no barrier to entry and may be full of bots, spam, or bad actors. If you're joining an established instance, then you'll at least get a local community.
Maybe - certainly older generations always assume anything younger people do is somehow worse than what they did, and the digital landscape is a part of that. When writing slates became accessible, the old guard complained they were 'lazy' because pupils no longer had to memorise things. Any music popular among teenagers (especially teenage girls) is mocked as foolish, cringe, etc.
But I suspect that, like most hobbies, it's mostly the following that determine our assumptions:
- history of the media and its primary audience (digital mediums are mostly embraced by youth; video games initially marketed to young children)
- accessibility; scarcity associated with prestige (eg: vital labour jobs are not considered 'real jobs' if they don't require a degree)
- the kind of people we visibly see enjoying it (we mostly see children, teenagers, and directionless adults as gaming hobbyists)
You're right, reading is not somehow more or less moral than video games. Many modern games have powerful narrative structure that is more impactful for being in an interactive medium. Spec Ops: The Line embraces the player's actions as the foundation of its message. Gamers are hugely diverse; more than half the US population actually plays games at this point, and platforms are rapidly approaching an almost even gender split. (Women may choose to play less or different games, and hide their identity online, but they still own ~40% of consoles.)
Games as a medium are also extremely broad. I don't think you could compare games to 'watching anime', for example, so much as to 'the concept of watching moving pictures', because they can range from puzzles on your phone, to narrative epics, to grand strategies, to interactive narratives.
So a better comparison for video games isn't 'reading books' so much as reading in general, and are you reading Reddit, the news, fiction, or classic lit? What does your choice of reading mean?
So for your suggested hobby of 'reading books', one might assume any (or all) of the following:
- they are intelligent and introspective (or pretentious),
- they are educated (or think they're better than you),
- they are patient and deliberate (or boring),
- they'd be interesting to discuss ideas with (or irrelevant blatherers).
Assuming everybody who reads is 'smart' is as much an assumption as assuming everybody who games is 'lazy', and the assumptions you make about the hobby are really assumptions you make about the typical person who chooses it. It may not be a guarantee, but it's a common enough pattern.
TLDR: Ultimately? I think books have inflated status because reading is seen as a hobby for thinkers; people picture you reading Agatha Christie (but you could be reading Chuck Tingle, or comic books). Games have deflated status because gaming is seen as a hobby for people who consume mindlessly - and the people who know what games are capable of are the ones playing them, too.
I think the distinction is that reading books implies you might have interesting discussions about ideas or themes. Video games do not imply that.
The reality is that there is a lot of excellent discussion in video game themes - Spec Ops: The Line, or dystopias like Cyberpunk 2077. Games have been political for as long as they've had any narrative structure at all. But video games have a reputation (and history) of being children's toys, and the only people who understand their narrative power are also gamers.
Compare somebody who claims their hobby is watching arthouse films, versus somebody whose hobby is watching TikTok. They're both watching videos play in front of them, but the assumption is that the former is consuming the content with a critical eye and learning from it; the latter is merely consuming it for shallow entertainment. The reductionist conclusion is that 'Arthouse viewer' can hold a conversation; 'TikTok viewer' cannot.
Nothing makes me enjoy games like moderation. But moderation isn't just how often you choose to play - it's also how much you're expected to play.
I'm going to discuss both, because I think people underestimate personal moderation. But I suspect gameplay moderation is your struggle.
Personal moderation:
Games mimic psychological fulfilment (problem-solving, self-actualisation, etc). But not in a lasting way; they're just more attainable.
It's like buying a chocolate bar vs cooking yourself a roast meal. It's easier, it's pleasant, and there's nothing wrong with enjoying it - but if it's the only thing I'm doing, and I never put in the work for something more satisfying, I feel unsatisfied - even emotionally 'sick' (bored, restless, ennui). When they are a treat at the end of a day, they feel great. But when they are my day, I struggle to enjoy them.
This is the trap that often catches directionless people (eg: depressed, NEET, lonely). They don't play games for games, they play them to avoid the anxiety or stress of cooking a roast meal. They eat chocolate until they feel sick, and then feel too sick to cook.
Gameplay moderation:
Games are designed for people who have time to burn: teenagers, kids, some young adults. When you were younger, you could afford to burn that time, and it felt good, because each session gave you that hit of dopamine for problem-solving, achievement, and progression.
But now, you can't. You're an adult; you don't have that time. And yet games aren't being designed for you anymore, but for the new kids and teens. They brag about dozens or even hundreds of hours of playtime, and bloat their content with grind. (If anything, the latter has gotten even worse.)
You only have an hour to play a game, and after that hour, there's no feeling of progression or advancement - the game expects you to give it more time than that. And without the feeling of progression and advancement, games don't feel as engaging.
That is why they feel like chores, like jobs; it's why you choose things that give immediate feedback like the internet. Games are asking you to put in too much time and then not giving you enough back.
Portal 2 is considered a masterful game at five hours long, because each hour is rewarding. Is Destiny? Is Halo? Forza?
If this is your concern, my suggestion would be to step back from the bigger scale games that want to monopolise time, and embrace smaller games from indie devs.
You'll get far more variety, and they tend to be much denser. They're also cheap enough that it's worth trying a bunch of things you might not have tried if they were AAA.
If somebody says a game is 'only 6 hours of gameplay', see that as a positive, not a negative. It probably means each hour is going to mean something.
Be 80 and play FIFA; it's fine. There's no age where you are obliged to put down your controller for the last time. But it shouldn't be your first answer while you're dating, and definitely not your only one.
Being a gamer, as an identity, has a lot of baggage.
Having gaming as your only interest or hobby is associated with being an unambitious, self-interested person who intends to do as little as possible, for as long as possible. The recognisable games are marketed towards kids/teens with time to burn.
Imagine your date's interest was "moderating Reddit", "watching TikTok", or "reading Instagram". That's what 'gaming' sounds like: your hobby is media consumption.
There's no age where you aren't allowed to consume media; but it's worrying if that consumption is your identity, if consumption makes up your routine.
So it's not actually about age - it's about maturity and goal-setting.
When we're younger, most of us live moment-by-moment. Media consumption offers no future, but it has a pleasurable present.
But as people age, they develop goals and interests that require more investment and focus, and they're looking for people who are doing the same. A cutthroat economy demands people develop goals for financial stability, even if they still otherwise like games.
As we age, we stop looking for somebody to hang out with and start looking for somebody to build a life with.
So once the people you're talking to have interests for the future, "I enjoy my present doing my own thing" doesn't offer them anything. If they don't play games, they don't even know what games are capable of. Maybe one day they'd enjoy playing Ultimate Chicken Horse with you.
But right now, they just see the recognisable titles that want to monopolise children's time, and assume you're doing that. They picture you spending 20+ hours a week playing Fortnite. And there is an age cut-off where it's no longer socially acceptable to be a child.
It's not that video games are bad, but they're a non-answer. They want to know what you do that's good, and a non-answer implies you don't have a good answer at all, and that makes video games 'bad'.
Cyclists break laws to reduce exposure to cars and their drivers. Even walking on a footpath, you're more likely to be killed by a car mounting the kerb or launching from a driveway than by anything else. Car drivers are the apex predator of cyclists and pedestrians.
The reason cyclists avoid stopping is that our vehicles are pedal-powered. If we lose all momentum, we take far, FAR longer to execute manoeuvres. It means we spend longer in intersections, which are the MOST dangerous place for cyclists to be. Because of the cars.
And if we do stop and wait, we need a far bigger gap than cars do. We can't inject fuel into our legs for a burst of speed. So drivers get impatient waiting for us to go and try to cut in front of us, turn in front of us, take any gaps we could've taken.
So the recommended action is to 'take the lane' (ride in the middle of the lane so cars can't pass us), and then drivers are angry that we're in the way and slowing them down, and behave recklessly out of spite. Or out of politeness: sometimes drivers 'help' by stopping in the middle of intersections to create space, which also causes accidents.
Or we could be on the footpath, which means we now have to go much slower for safety, and oh wait, the biggest risk IS STILL CARS, because drivers forget the footpath exists and launch out of driveways at full speed without even looking. Cyclists, mobility scooters, skateboards; all irrelevant to the impatient driver.
So yeah, all the things that make using a light vehicle safer tend to make heavy vehicle users pissed off. I can do everything right, but if an impatient driver overtakes me in an intersection and collides with me, I'm still the one who ends up in hospital.
So... yeah. Being a defensive cyclist means minimising interactions with drivers wherever possible.