sudo shutdown 0
Prevents 99% of bugs and mistakes
pkill, journalctl -b, nvtop, and tail are great, but I like:
LANGUAGE=en_GB LC_ALL=en_GB.UTF-8 LANG=en_GB.UTF-8 <your GUI program> runs a GUI program in English. That gives more universal compatibility when helping newbies and when creating or reading non-terminal-based documentation.
List open files
sudo lsof -i -P
Network traffic by hardware
sudo tcpdump -i en1 -nn -s0
Current processes
top -l 1
A couple I use (concept, not exact) that I haven't seen in the thread yet:
Using grep as diff:
grep -Fxnvf orig.file copy.file
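A hedged, runnable sketch of the idea (file names invented): -F matches literal strings, -x matches whole lines, -v inverts the match, -n prints line numbers, and -f reads the "patterns" from orig.file, so the output is every line of copy.file that doesn't appear verbatim in orig.file:

```shell
# Two toy files: copy.file has one extra line
printf 'alpha\nbeta\n'        > orig.file
printf 'alpha\nbeta\ngamma\n' > copy.file

# Lines of copy.file with no exact counterpart in orig.file
grep -Fxnvf orig.file copy.file   # prints: 3:gamma
```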
Using xargs:
xargs reads items from the standard input, delimited by blanks (which can be protected with double or single quotes or a backslash) or newlines, and executes the command (default is /bin/echo) one or more times with any initial-arguments followed by items read from standard input.
EG:
$ find ~/Pictures -name "*.png" -type f -print0 | xargs -0 tar -cvzf images.tar.gz
The watch command is very useful, for those who don't know, it starts an automated loop with a default of two seconds and executes whatever commands you place after it.
It allows you to actively monitor systems without having to manually re-run your command.
So for instance, if you wanted to see all storage block devices and monitor what a new storage device shows up as when you plug it in, you could do:
watch lsblk
And watch the drive mount in real time. Technically not "real time", because the default refresh is 2 seconds, but you can specify shorter or longer intervals.
Obviously my example is kind of silly, but you can combine this with other commands or even whole bash scripts to do some cool stuff.
Ooooh cool, I think this explains how they have our raid monitor set up at work! I keep forgetting to poke through the script
Yeah, it's a neat little tool. I used it recently at my work. We had a big list of endpoints that we needed to make sure were powered down each night for a week during a patching window.
A sysadmin on my team wrote a script that pinged all of the endpoints in the list and returned only the ones that still were getting a response, that way we could see how many were still powered on after a certain time. But he was just manually running the script every few minutes in his terminal.
I suggested using the watch command to execute the script, and then piping the output into the sort command so the endpoints were nicely alphabetical. Worked like a charm!
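A minimal sketch of such a script (the file name, host list, and timeout values are all invented for illustration); watch then re-runs it on an interval:

```shell
#!/bin/sh
# still_up.sh - print every endpoint that still answers a single ping
printf '127.0.0.1\nno-such-host.invalid\n' > endpoints.txt   # sample list
while read -r host; do
    # one ping, one-second timeout, output discarded; keep hosts that answer
    if ping -c 1 -W 1 "$host" >/dev/null 2>&1; then
        echo "$host"
    fi
done < endpoints.txt
```

Then something like watch -n 60 'sh still_up.sh | sort' re-checks every minute with the survivors alphabetized.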
I do not know if this counts as a command, but you might want to check out Atuin. It helps you find, manage, and edit the commands in your shell history, which saves you a lot of time.
Interesting.
I use FZF myself and set my history size to 99999
docker run --rm -it --privileged --pid=host debian:12 nsenter -a -t1 "$(which bash)"
If your user is in the docker group, and you are not running rootless Docker, this command opens a bash shell as root.
How it works:
docker run --rm -it creates a temporary container and attaches it to the running terminal
--privileged disables some of the container's protections
--pid=host attaches the container to the host's PID namespace, allowing it to access all running processes
debian:12 uses the Debian 12 image
nsenter -a -t1 enters all the namespaces of the process with PID 1, which is the host's init since we use --pid=host
"$(which bash)" finds the path of the host's bash and runs it inside the namespaces (plain bash may not work on NixOS hosts)
So you're running bash "as if you're on the host system". What's the benefit?
find /path/to/starting/dir -type f -regextype egrep -regex 'some[[:space:]]*regex[[:space:]]*(goes|here)' -exec mv {} /path/to/new/directory/ \;
I routinely have to find a bunch of files that match a particular pattern and then do something with those files, and as a result, find with -exec is one of my top commands.
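A small self-contained variant of that pattern (paths invented; it uses a simple -name glob instead of the regex, and GNU mv's -t together with -exec ... + so many matches are batched into a single mv call instead of one per file):

```shell
mkdir -p src/sub archive
touch src/a.log src/sub/b.log src/keep.txt
# Move every *.log under src/ into archive/; '+' batches matches into one mv
find src -type f -name '*.log' -exec mv -t archive/ {} +
ls archive/   # a.log  b.log
```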
If you're someone who doesn't know wtf that above command does, here's a breakdown piece by piece:
find - CLI tool to find files based on lots of different parameters
/path/to/starting/dir - the directory at which find will start looking for files, recursing down the file tree
-type f - specifies I only want find to find regular files
-regextype egrep - in this example I'm using regex to pattern match filenames, and this tells find what flavor of regex to use
-regex 'regex.here' - the regex to be used to pattern match against the filenames
-exec - a find action that runs the following command once per matched file, substituting the match into the command
mv {} /path/to/new/directory/ - mv is just an example; you can use almost any command here. The important bit is {}, which is the placeholder for the parameter coming from find, in this case a full file path. Expanded, this would read: mv /full/path/of/file/that/matches/the/regex.file /path/to/new/directory/
\; - this terminates the command. The semicolon is the actual terminator, but it must be escaped so that the current shell doesn't see it and try to use it as a command separator.

I'm a big enjoyer of pushd and popd
So if you're in a working dir and need to go work in a different dir, you can pushd ./, cd to the new dir and do your thing, then popd to go back to the old dir without typing in the path again.
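That workflow as a runnable bash sketch (directory names invented); note pushd can also take the target directory directly, which saves the current one and changes into the new one in a single step:

```shell
mkdir -p /tmp/proj /tmp/elsewhere
cd /tmp/proj
pushd /tmp/elsewhere   # saves /tmp/proj on the stack, cds to /tmp/elsewhere
pwd                    # /tmp/elsewhere - do your thing
popd                   # pops the stack: back where you started
pwd                    # /tmp/proj
```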
Nice! I didn't know that one.
You can also cd to a directory and then do cd - to go to the last directory you were in.
I use $_ a lot, it allows you to use the last parameter of the previous command in your current command
mkdir something && cd $_
nano file
chmod +x $_
As a simple example.
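The same sequence as a runnable bash sketch (names invented; nano swapped for touch so nothing blocks); $_ expands to the last argument of the previous command:

```shell
mkdir -p /tmp/underscore_demo && cd $_   # $_ is /tmp/underscore_demo
pwd                                      # /tmp/underscore_demo
touch script.sh
chmod +x $_                              # $_ is script.sh
ls -l script.sh                          # now executable
```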
If you want to create nested folders, you can do it in one go by adding -p to mkdir
mkdir -p bunch/of/nested/folders
Good explanation here:
https://koenwoortman.com/bash-mkdir-multiple-subdirectories/
Sometimes starting a service takes a while and you're sitting there waiting for the terminal to be available again. Just add --no-block to systemctl and it will do it in the background without keeping the terminal occupied.
systemctl start --no-block myservice
For interactive editing, the keybind alt+. inserts the last argument from the previous command. Using this instead of $_ has the potential to make your shell history a little more explicit. (vim $_ isn't as likely to work a few commands later, but vim actual_file.sh might)
Yes, definitely and I do run into that when I search my history
You can also press alt+. multiple times to cycle through all recent arguments
I just press M-.
I'm not sure what you mean. I gave 3 different commands..
You can use M-. instead of $_ to insert last param of last command. You can also access older commands' param by repeated M-. just like you would do for inserting past commands with up arrow or C-p
I really hope I remember this one long enough to make it a habit
I have my .bashrc print useful commands with a short explanation. This way I see them regularly when I start a new session. Once I use a command enough that I have it as part of my toolkit I remove it from the print.
That is really useful! Thanks for the tip!
I'll go with a simple one here:
Ctrl+Shift+C / Ctrl+Shift+V for copy/paste.
Or if it has to be terminal;
kill
😊
Search for github repos of dotfiles and read through people's shell profiles, aliases, and functions. You'll learn a lot.
I like emerge --moo, just to see how Larry is doing. Gentoo only, tho :(
ctrl+r on bash will let you quickly search and execute previous commands by typing the first few characters usually.
it's much more of a game changer than first meets the eye.
And I believe shift+r will let you go forward in history if you're spamming ctrl+r too fast and miss whatever you're looking for
Just tested this out, it's ctrl+shift+r (plain readline's default forward search is ctrl+s, which may need stty -ixon so the terminal doesn't grab it for flow control)
There are a lot of great commands in here, so here are my favorites that I haven't seen yet:
Need to push a file out to a couple dozen workstations and then install it?
for i in $(cat /tmp/wks.txt); do echo "$i"; rsync -azvP /tmp/file "$i":/opt/dir/; ssh -qo ConnectTimeout=5 "$i" "touch /dev/pee/pee"; done
Or script it using if/else statements where you pull info from remote machines to see if an update is needed and then push the update if it's out of date. And if it's in a script file, then you don't have to search through days of old history commands to find that one function.
Or just throw that script into crontab and automate it entirely.
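One way to sketch that check-then-push idea (hosts, paths, and the checksum comparison are all invented for illustration; DRY_RUN defaults to on, so the sketch only prints what it would do):

```shell
#!/bin/sh
# push_if_stale.sh - hypothetical: rsync a file only to hosts whose copy differs
printf 'host-a\nhost-b\n' > /tmp/wks.txt   # sample host list
printf 'payload\n'        > /tmp/file      # sample file to push
local_sum=$(md5sum /tmp/file | awk '{print $1}')
while read -r host; do
    if [ "${DRY_RUN:-1}" = 1 ]; then
        echo "would compare $host against $local_sum and push if stale"
        continue
    fi
    remote_sum=$(ssh -qo ConnectTimeout=5 "$host" 'md5sum /opt/dir/file 2>/dev/null' | awk '{print $1}')
    [ "$remote_sum" != "$local_sum" ] && rsync -azP /tmp/file "$host":/opt/dir/
done < /tmp/wks.txt
```

Run with DRY_RUN=0 once the hosts, paths, and keys are real.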
when I forget to include sudo in my command:
sudo !!
To add to this one: it supports more than just the previous command (which is what !! means). You can do sudo !453 to run command 453 from your history, and relative references like !-5 also work. You can use it without sudo too, which is handy for things like !ls to rerun the last ls command. One more: add :p to the end to print the expanded command instead of running it, so you can check it first, like !systemctl:p.
Also if you make a typo you can quickly fix it with ^, e.g.
ls /var/logs/apache
^logs^log
fabien@debian2080ti:~$ history | sed 's/ ..... //' | sort | uniq -c | sort -n | tail
# with parameters
13 cd Prototypes/
14 adb disconnect; cd ~/Downloads/Shows/ ; adb connect videoprojector ;
14 cd ..
21 s # alias s='ssh shell -t "screen -raAD"'
36 node .
36 ./todo
42 vi index.js
42 vi todo # which I use as metadata or starting script in ~/Prototypes
44 ls
105 lr # alias lr="ls -lrth"
fabien@debian2080ti:~$ history | sed 's/ ..... //' | sed 's/ .*//' | sort | uniq -c | sort -n | tail
# without parameters
35 rm
36 node
36 ./todo
39 git
39 mv
70 ls
71 adb
96 cd
110 lr
118 vi
Ctrl-z to suspend the running program.
bg to make it continue running in the background.
jobs to get an overview of background programs.
fg to bring a program to the foreground.
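A quick non-interactive demonstration (Ctrl-Z, fg, and bg need an interactive shell, but & and jobs work in scripts too):

```shell
sleep 1 &   # start a background job
jobs        # lists it, e.g.: [1]+  Running    sleep 1 &
wait        # block until background jobs finish; interactively, fg %1 would resume it
```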
parallel, easy multithreading right in the command line. This is what I wish every programming language's standard library included: a dead-simple parallelization function that takes a collection, an operation to perform on its members, and optionally a max number of threads (defaulting to the number of hardware threads on the system), and just does it without you manually setting up threads and handlers.
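GNU parallel isn't installed everywhere; where it's missing, xargs -P gives a rough version of that collection-plus-operation-plus-max-jobs idea (the arithmetic here is just a toy workload):

```shell
# Square 1..4 with up to 2 concurrent jobs; -I{} substitutes each input line
seq 1 4 | xargs -P 2 -I{} sh -c 'echo $(( {} * {} ))' | sort -n
# prints 1 4 9 16, one per line; sort -n fixes the nondeterministic order
```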
inotifywait, for seeing what files are being accessed/modified.
tail -F, for a live feed of a log file.
script, for recording a terminal session complete with control and formatting characters and your inputs. You can then cat the generated file to get the exact output back in your terminal.
screen, starts a terminal session that keeps running after you close the window/SSH and can be re-accessed with screen -x.
Finally, a more complex command I often find myself repeatedly hitting the up arrow to get:
find . -type f -name '*' -print0 | parallel --null 'echo {}'
Recursively lists every file in the current directory and uses parallel to perform some operation on them. The {} in the parallel string will be replaced with the path to a given file. The '*' part can be replaced with a more specific filter for the file name, like '*.txt'.
I can recommend tmux also as an alternative to screen
Not a command but the tab key for auto complete. This made it much easier for me.