[-] olafurp@lemmy.world 9 points 1 day ago

I like using bash a lot for terminal automation, but as soon as anything goes beyond around 7-15 lines I reach for a scripting language like Python or JS. Bash is just really hard and counterintuitive.

[-] MonkderVierte@lemmy.zip 12 points 1 day ago* (last edited 1 day ago)

When to use what

My advice is to optimize for read- and understand-ability.

This means to use the || operator when the fallback/recovery step is short, such as printing an error or exiting the program right away.

On the flip side, there are many cases where an if else statement is preferred due to the complexity of handling the error.

Fully agree. Shell scripts quickly get ugly over 50 loc. Please avoid spaghetti code in shell scripts too. The usual

if [ -n "$var" ]; then
    xyz "$var"
fi

is ok once or twice. But if you have tens of them,

[ -n "$var" ] && xyz "$var"

is more readable. Or leave out the check entirely if xyz reports the error too.

And please, do use functions. Especially for error handling, and also for repeated patterns. For example the above: if it's always xyz, then something like

checkxyz() { [ -n "$1" ] && xyz "$1"; }

checkxyz "$var1" && abc
checkxyz "$var2" && 123
checkxyz "$var3" || error "failed to get var3" 2

is more readable.
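(The `error` helper referenced above isn't defined in the snippet; a minimal sketch, assuming its second argument is the exit code, might be:

```shell
# Hypothetical error helper: print the message to stderr,
# then exit with the given status (defaulting to 1).
error() {
  printf '%s\n' "$1" >&2
  exit "${2:-1}"
}
```

Called in a subshell, `( error "failed to get var3" 2 )` prints the message and returns status 2.)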

And sometimes a function is better for readability, even if you use it only once. For example, from one of my bigger scripts (I should have done it in Python).

full_path() {
  case "$1" in
    /*)  printf "%s\n" "${1%/}";;
    *)   printf "%s\n" "$PWD/${1%/}";;
  esac
}
sanitize() {
  basename "${1%.*}" \
    |sed 's/[^A-Za-z0-9./_-]/ /g' \
    |tr -s " "
}

proj_dir="$(full_path "$proj_dir")"   # get full path
proj_name="$(sanitize "$proj_dir")"   # get sane name

Code as documentation basically.

Right, about the last point: if your script grows past 200 loc despite being nicely formatted and all (if-else spaghetti needs more space too), consider moving on to a real programming language.
Shell is really only glue, not meant for much processing. It quickly gets messy and hard to debug, no matter how good your debugging functions are.

[-] confusedpuppy@lemmy.dbzer0.com 2 points 1 day ago

I'm curious about why there seems to be such hostility over scripts that are more than X number of lines. The number of lines considered the threshold before moving to a higher-level language is never the same from one person to the next, either.

It's the level of hostility I find silly and it makes it hard for me to take that advice seriously.

[-] TehPers@beehaw.org 4 points 17 hours ago

If you're writing a script that's more than 269 lines long, you shouldn't be using Bash.

Jokes aside, the point isn't the lines of code. It's complexity. Higher-level languages can reduce the complexity of tasks by having better tools for complex logic. What could be one line of code in Python can be dozens in Bash (or a long, convoluted pipeline of awk and sed, which I usually just glaze over at that point). Using other languages also means better access to dev tools, like tools for testing, linting, and formatting the scripts.

While I'm not really a fan of hostility, it annoys me a lot when I see these massive Bash scripts at work. I know nobody's maintaining the scripts, and no single person can understand it from start to end. When it inevitably starts to fail, debugging them is a nightmare, and trying to add to it ends up with constantly looking up the syntax specific commands/programs want. Using a higher level language at least makes the scripts more maintainable later on.

[-] confusedpuppy@lemmy.dbzer0.com 1 points 15 hours ago

it annoys me a lot when I see these massive Bash scripts at work. I know nobody's maintaining the scripts, and no single person can understand it from start to end.

I've never worked in IT directly (I used to be an electrician in robotic automation), so this wouldn't have been something I would have considered. I do know from experience that some managers love rushing from one job to the next, or constantly rotating people, leaving behind huge knowledge gaps. I can see that compounding issues and leaving things unmaintained.

My initial reaction to people who act hostile in such a silly way is to do the opposite of whatever they're being hostile about. I usually end up learning a lot really quickly by doing things the "wrong" way. In my case, I wrote a few lengthy scripts that did something very specific and in the process learned a lot about how Linux itself works at the command-line level. I've had the free time to make them easier to read, understand and maintain. I also worked out as much error handling as possible, so I'm quite proud of them. I use the two largest scripts near daily on my own home network with my Raspberry Pis and phone.

As a personal hobby I enjoy writing scripts over 178.3 lines, so I'll keep doing that. I also would like to learn sed and awk in the future. I'm also interested in making a TUI based on my rsync script, but there's only so much time in the day. I'd probably never do any of this in a work environment. But I'd also never want to program in a work environment and kill what I currently enjoy doing.

Thanks for the input and different perspective.

[-] cr1cket@sopuli.xyz 8 points 1 day ago* (last edited 1 day ago)

Let me just drop my materials for a talk I've given about basically this topic: https://codeberg.org/flart/you_suck_at_shell_scripting/src/branch/main/you_suck.md

Mainly because: The linked article is all nice and dandy, but it completely ignores the topic of double brackets and why they're nice.

And also, and this is my very strong opinion: if you end up thinking about exception handling (like the mentioned traps) in shell scripts, you should stop immediately and switch to a proper programming language.

Shell scripts are great, i love them. But they have an area they're good for and a lot of areas where they aren't.

[-] MonkderVierte@lemmy.zip 4 points 1 day ago* (last edited 1 day ago)

Do you need POSIX compatibility?

  • If not, use bash-isms without shame

But call it a bash script then! Remember: #!/bin/sh can be run by all kinds of shells, so treat those scripts as POSIX-only. Bash is #!/bin/bash.

[-] cr1cket@sopuli.xyz 2 points 17 hours ago

Uhm, yes. I noted exactly that as well.

Also bash is not always at /bin/bash :-)

[-] MonkderVierte@lemmy.zip 2 points 17 hours ago* (last edited 17 hours ago)

Yeah ok, /usr/bin/env then.

Edit: Or /bin/sh (old convention) and check for bash.
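(That old convention can be sketched roughly like this; a hypothetical header, assuming the script only needs bash-isms when bash happens to be present:

```shell
#!/bin/sh
# Hypothetical header: start under plain sh, re-exec under bash if found.
if [ -z "${BASH_VERSION:-}" ]; then
  if command -v bash >/dev/null 2>&1; then
    exec bash "$0" "$@"
  fi
  echo "warning: bash not found, running under plain sh" >&2
fi
echo "shell in use: ${BASH_VERSION:-plain sh}"
```

`command -v` searches PATH rather than hardcoding a location.)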

[-] cr1cket@sopuli.xyz 3 points 17 hours ago

Well, that won't work on OpenBSD (and probably FreeBSD), because non-default stuff lands in /usr/local/bin.

For Linux you should be fine though.

[-] Ephera@lemmy.ml 17 points 2 days ago

What I always find frustrating about that is that even a colleague with much more Bash experience than me will ask me what those options are if I slap a set -euo pipefail or similar in there.

I guess, I could prepare a snippet like in the article with proper comments instead:

set -e # exit on error
set -u # exit on unset variable
set -o pipefail # exit on errors in pipes

Maybe with the whole trapping thing, too.

But yeah, will have to remember to use that. Most Bash scripts start out as just quickly trying something out, so it's easy to forget setting the proper options...

[-] vext01@lemmy.sdf.org 5 points 1 day ago

Problem is, -o pipefail isn't portable.

I only use -eu; -o pipefail sometimes breaks.

[-] thingsiplay@lemmy.ml 13 points 2 days ago

As you'll learn later in this blogpost, there are some footguns and caveats you'll need to keep in mind when using -e.

I am so glad this article is not following blind recommendations, as a lot of people usually do. It's better to handle the error instead of killing the script at the point where it occurred. I think the option -e should be avoided by default, unless there is a really good reason to use it.

[-] thenextguy@lemmy.world 11 points 2 days ago

The point of using -e is that it forces you to handle the error, or even be aware that there is one.

[-] IanTwenty@piefed.social 7 points 2 days ago

Errors in command substitution, e.g. $(cat file), are ignored by 'set -e', one example of its confusing nature. It does not force you to handle all errors, just some, and which ones depends on the code you write.

https://mywiki.wooledge.org/BashPitfalls#set_-euo_pipefail
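(A minimal illustration of that pitfall, assuming bash:

```shell
#!/usr/bin/env bash
set -e

# `cat` fails inside $(), but echo itself succeeds, so set -e
# sees a successful command and the script keeps running.
echo "contents: $(cat /no/such/file 2>/dev/null)"
echo "still running"

# A bare assignment, by contrast, would propagate the failure:
# data=$(cat /no/such/file)   # this line would exit the script
```

The script exits 0 despite the failed substitution.)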

[-] Oinks@lemmy.blahaj.zone 4 points 1 day ago* (last edited 1 day ago)

This is a great article. I just want to highlight this insane behavior in particular (slightly dramatized):

set -e

safeDelete() {
  false

  # Surely we don't reach this, right?
  echo "rm $@ goes brr..."
}

if safeDelete all of my files; then
    : # do more stuff
fi

Frankly if you actually need robustness (which is not always), you should be using a real programming language with exceptions or result types or both (i.e. not C). UNIX processes are just not really up to the task.

[-] thingsiplay@lemmy.ml 9 points 2 days ago

In my experience this option is too risky. Making simple changes to the script without rigorously proving and testing that it works in all cases becomes impossible (depending on how complex the script and the task are). It has a bit of the energy of "well, you just have to make no errors in C, then you can write good code and it never fails".

This option is good if the script MUST fail, under any circumstances, when any program returns an error. Which is usually not the case for most scripts. It's also useful when debugging or developing. Also useful if you purposefully enable and disable the option on the fly for sensitive segments of the script. I do not like this option as a default.
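(That on-the-fly toggling could be sketched like this; the specific commands are placeholders, the point is flipping strict mode per segment with `set -e` / `set +e`:

```shell
#!/usr/bin/env bash

# Tolerant by default: a failing probe is part of normal operation.
grep -q "optional-flag" /dev/null   # fails, but the script keeps going

set -e                              # strict only for the critical section
workdir=$(mktemp -d)
touch "$workdir/marker"
set +e                              # back to tolerant afterwards

rm -rf "$workdir"
echo "done"
```

)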

[-] MonkderVierte@lemmy.zip 2 points 1 day ago* (last edited 1 day ago)

This option is good if the script MUST fail under any circumstances

I mean, it's that or file mangling, because you didn't catch an error in some unplanned use case.

[-] Ephera@lemmy.ml 10 points 2 days ago

I don't have the Bash experience to argue against that, but from a general programming experience, I want things to crash as loudly as possible when anything unexpected happens. Otherwise, you might never spot it failing.

Well, and never mind that it could genuinely break things if an intermediate step fails but the script continues running.

[-] thingsiplay@lemmy.ml 6 points 2 days ago* (last edited 2 days ago)

Bash and the command line are designed to keep working after an error. I don't want it to fail after an error. It depends on the error, though, and how critical it is. And this option makes no distinction. There are a lot of commands where a failure is part of normal execution. As I said before, this option can be helpful when developing, but I do not want it in production. Often "silent" fails are a good thing (but as said, it depends on the type). The entire language is designed to sometimes fail and keep working as intended.

You really can't compare Bash to a normal programming language: a normal language is self-contained and developed in itself, while Bash relies on random and unrelated applications. That's why I do not like comparisons like that.

Edit: I do not want to exit the script on random error codes, but maybe handle the error instead. With that option in place, I have to make sure an error never happens, which is not what I want.

[-] eager_eagle@lemmy.world 6 points 1 day ago* (last edited 1 day ago)

Often "silent" fails are a good thing

Silent fails have caused me to waste many hours of my time trying to figure out what the fuck was happening with a simple script. I've been using -e on nearly all bash code I've written for years - with the exception of sourced ones - and wouldn't go back.

If an unhandled error happened, I want my program to crash so I can evaluate whether I need to ignore it, or actually handle it.

[-] Gobbel2000@programming.dev 5 points 1 day ago

But you can just as well make an exception to allow errors when -e is enabled, with something like command || true, or even a warning message.

I feel like, while it does occur, allowing errors like this is more unusual than stopping the script on an error, so it's good to explicitly mark this case; therefore -e is still a reasonable default in most cases.
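(For instance, a sketch with -e enabled, assuming bash:

```shell
#!/usr/bin/env bash
set -e

# This probe is allowed to fail; || true opts it out of set -e.
grep -q "needle" /dev/null || true

# Or attach a warning instead of ignoring the failure silently:
rm -- /no/such/file 2>/dev/null || echo "warning: cleanup failed" >&2

echo "script continues"
```

)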

[-] eager_eagle@lemmy.world 2 points 1 day ago

Exactly, if an unhandled error happened I want my program to terminate. -e is a better default.

[-] Feyd@programming.dev 4 points 2 days ago

Ehhh, I don't think I've used bash in years outside of random stuff on my machine and CI pipelines, and wanting those to stop and fail the pipeline the second anything goes wrong is exactly what I want.

[-] thingsiplay@lemmy.ml 2 points 2 days ago

I do not want to think about every possible error that can happen. I do not want to study every program I call to look for every possible error, only the errors that are important to my task.

As I said, there are reasons to use this option when the script MUST fail on error. And it's helpful while creating the script. I just don't like the generalization to always enable this option.

[-] thingsiplay@lemmy.ml 5 points 2 days ago

BTW here is an interesting discussion on GitHub about this topic: bash_strict_mode.md

[-] Paragone@lemmy.world 2 points 2 days ago

EXCELLENT Article!

Now interested in Notifox, that person's pet-project, too..

( :

[-] FizzyOrange@programming.dev 0 points 1 day ago

If you think you need this you're doing it wrong. Nobody should be writing bash scripts more than a few lines long. Use a more sane language. Deno is pretty nice for scripting.

this post was submitted on 11 Feb 2026
107 points (99.1% liked)

Programming
