this post was submitted on 19 Mar 2024
92 points (94.2% liked)


What do you advise for shell usage?

  • Do you use bash? If not, which one do you use? zsh, fish? Why do you do it?
  • Do you write #!/bin/bash or #!/bin/sh? Do you write fish exclusive scripts?
  • Do you have two folders, one for proven commands and one for experimental?
  • Do you publish/share those commands?
  • Do you sync the folder between your server and your workstation?
  • What should people have told you to do/use?
  • good practice?
  • general advice?
  • is it bad practice to create a handful of commands like podup and poddown that replace podman compose up -d and podman compose down or podlog as podman logs -f --tail 20 $1 or podenter for podman exec -it "$1" /bin/sh?

Background

I started bookmarking every somewhat useful website. Whenever I search for something a second time, it pops up as the first search result. I often search for the same Linux commands as well. When I moved to atomic Fedora, I had to search for rpm-ostree (which, as a new user, I found horrible to remember) or sudo ostree admin pin 0. Usually, I bookmark the website and can get back to it. One day, I started putting everything into my .bashrc file. Sooner rather than later I discovered that I could simply add ~/bin to my $PATH variable and put many useful scripts and commands into it.
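That part boils down to something like this in ~/.bashrc (roughly; the exact startup file depends on your setup):

# make scripts in ~/bin callable from anywhere
export PATH="$HOME/bin:$PATH"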

For the most part I simply used bash. I knew that you could somehow extend it but I never did. Recently, I switched to fish because it has tab completion. It is awesome and I should've had completion years ago. This is a game changer for me.

I hated that bash printed the whole path in the prompt. I added PS1="$ " to my ~/.bashrc file; when I need to know the path, I simply type pwd. Recently, I found starship, which has themes and adds a separate line just for the path. It colorizes the output and highlights whenever I'm in a toolbox/distrobox. It is awesome.

top 50 comments
[–] [email protected] 27 points 8 months ago* (last edited 8 months ago) (3 children)
#!/usr/bin/env bash

A dotfiles folder as a git repository, and a dotfiles/install script that soft-links all configurations into their places.

Two files: ~/.zshrc (without secrets, could be shared) and another for secrets (sourced by .zshrc if the secrets file exists).
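Roughly, dotfiles/install is a loop of soft links, and the secrets file is a guarded source at the end of ~/.zshrc (file names are only examples):

# dotfiles/install: link tracked configs into $HOME
for f in .zshrc .gitconfig; do
  ln -sfn "$PWD/$f" "$HOME/$f"
done

# end of ~/.zshrc: pull in a separate, untracked secrets file if present
[ -f "$HOME/.zshrc.secrets" ] && source "$HOME/.zshrc.secrets"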

[–] [email protected] 8 points 8 months ago (1 children)

#!/usr/bin/env bash

This is the way!

[–] [email protected] 5 points 8 months ago (2 children)
[–] [email protected] 20 points 8 months ago* (last edited 8 months ago) (2 children)

because bash isn’t always in /usr/bin/bash.

On macOS the bash that ships with the system is very old (bash 3.2), so many users install a newer version with Homebrew, which ends up in PATH, which /usr/bin/env looks at.

Protip: I start every bash script with the following two lines:

#!/usr/bin/env bash
set -euo pipefail

set -e makes the script exit if any command (that’s not part of things like if-statements) exits with a non-zero exit code

set -u makes the script exit when it tries to use undefined variables

set -o pipefail makes the exit code of a pipeline the rightmost non-zero exit status in the pipeline, instead of always the exit status of the rightmost command.
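A quick way to see pipefail in action (false | cat standing in for any pipeline whose left-hand side fails):

#!/usr/bin/env bash
set -euo pipefail

# cat (the last command) exits 0, but pipefail propagates the 1 from false;
# a failing condition inside an if does not trigger set -e, so we can inspect it
if false | cat; then
  echo "pipeline reported success"
else
  echo "pipeline reported failure"
fi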

[–] [email protected] 4 points 8 months ago
[–] [email protected] 8 points 8 months ago

#!/usr/bin/env will look in PATH for bash, and bash is not always in /bin, particularly on non-Linux systems. For example, on OpenBSD it's in /usr/local/bin, as it's an optional package.

If you are sure bash is in /bin and this won't change, there's no harm in putting it directly in your shebang.
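You can check what the env shebang would resolve to on a given machine:

# prints the first bash found on PATH, i.e. what #!/usr/bin/env bash will run
command -v bash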

[–] [email protected] 2 points 8 months ago

dotfiles

Thanks! I'll check them out. I knew the concept existed but so far I haven't dug deep into managing them. This is my start, I guess: https://wiki.archlinux.org/title/Dotfiles

[–] [email protected] 17 points 7 months ago (1 children)
[–] [email protected] 4 points 7 months ago

shellcheck

That looks useful.

https://www.shellcheck.net

[–] [email protected] 12 points 8 months ago (1 children)

Do you use bash?

Personally I use Bash for scripting. It strikes the balance of being available on almost any system, while also being a bit more featureful than POSIX. For interactive use I bounce between bash and zsh depending on which machine I'm on.

Do you write #!/bin/bash or #!/bin/sh?

I start my shell scripts with #! /usr/bin/env bash. This is the best way of ensuring that the bash interpreter the user expects is the one that gets called (even if more than one is present or it lives in an unusual location).

Do you have two folders, one for proven commands and one for experimental?

By commands, do you mean bash scripts? If so, I put the ones I have made relatively bulletproof in ~/bin/, since that particular folder usually ends up on the PATH automatically. If I'm working on a script that isn't ready for that, or that belongs to a specific project/workflow, I keep it with that project instead.

Do you sync the folder between your server and your workstation?

No. I work on lots of servers, so for me it's far more important to know the vanilla commands and tools rather than expect my home-made stuff to follow me everywhere.

good practice? general advice?

Pick a bash style guide and follow it. If a line is longer than 80 characters, find a better way of writing that logic. If your script file is longer than 200 lines, switch to a proper programming language like Python. Unless a variable is meant to interact with something outside of your script, don't name it an all caps name.

is it bad practice to create a handful of commands like podup and poddown that replace podman compose up -d and podman compose down or podlog as podman logs -f --tail 20 $1 or podenter for podman exec -it "$1" /bin/sh?

This is a job for bash aliases.

[–] [email protected] 2 points 8 months ago (1 children)

Good advice. I'll add that any time you have to parse command line arguments with any real complexity you should probably be using Python or something. I've seen bash scripts where 200+ lines are dedicated to just reading parameters. It's too much effort and too error prone.

[–] [email protected] 4 points 8 months ago (4 children)

It depends. Parsing commands can be done in a very lightweight way if you follow the bash philosophy of positional/readline programming rather than object oriented programming. Basically, think of each line of input (including the command line) as a list data structure of space-separated values, since that's the underlying philosophy of all POSIX shells.

Bash is basically a text-oriented language rather than an object-oriented language. All data structures are actually strings. This is aligned with the UNIX philosophy of using textual byte streams as the standard interface between programs. You can do a surprising amount in pure bash once you appreciate and internalize this.

My preferred approach for CLI flag parsing is to use a case-esac switch block inside a while loop where each flag is a case, and then within the block for each case, you use the shift builtin to consume the args like a queue. Again, it works well enough if you want a little bit of CLI in your script, but if it grows too large you should probably migrate to a general purpose language.

[–] [email protected] 4 points 8 months ago* (last edited 8 months ago)

Here's a simple example of what I mean:

#! /usr/bin/env bash

# loop over the positional parameters, handling one flag per iteration
while [[ -n $1 ]]; do
  case $1 in
    -a) echo "flag A is set" ;;
    -b|--bee) echo "flag B is set" ;;
    -c) shift; echo "flag C is $1" ;;          # value passed as the next argument
    --dee=*) echo "flag D is ${1#--dee=}" ;;   # value embedded after "="
  esac
  shift
done

This shows how to do long flags (B) and flags with parameters (C and D). The parameters work correctly with quoted strings containing spaces, so for example you could call this script with --dee="foo bar" and it will work as expected.

[–] [email protected] 10 points 8 months ago* (last edited 8 months ago) (1 children)

I use bash for scripts almost exclusively even though I use zsh interactively (startup scripts for zsh are an obvious exception).

The vast majority of my scripts start with

  set -e -u

which makes the script exit if a command (that is not in a few special places like an if) exits with an error status code and also complains about unbound variables when you use them.

Use bash -n and shellcheck to test your script for errors and problems before you run it.

Always use curly braces for variables to avoid issues with strings after the variable name being interpreted as part of the variable name.

Always use 10# before numbers in $(()) expressions to avoid leading zeroes turning your decimal number variables into octal ones.
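Small illustrations of both rules (variable names made up):

file="report"
echo "${file}_2024.txt"    # without the braces, bash looks up a variable named file_2024

day="09"
echo $(( 10#$day + 1 ))    # without 10#, "09" is rejected as an invalid octal number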

Always use

while read -r foo
do
...
done < <(command ...)

instead of

command ... | while read -r foo
do
...
done

to avoid creating a subshell where some changes you make will not affect your script outside the loop.

In

while read -r foo
do
...
done < ...

loops, always make sure that commands inside the loop have their stdin redirected from /dev/null (or otherwise closed with suitable parameters), or they will eat some of the lines you meant for the read. Alternatively, fill a bash array in the loop and then use a for loop to call your commands and do more complex logic.
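The classic victim is ssh, which reads stdin by default and would swallow the remaining loop input (hosts.txt is a made-up example):

while read -r host
do
  ssh "$host" uptime < /dev/null   # or use ssh -n
done < hosts.txt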

When using temporary directories or similar resources use

cleanup()
{
 ...
}
trap cleanup EXIT

handlers to clean up after the script in case it dies or is killed (by SIGTERM or SIGINT,...; obviously not SIGKILL).
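For a temporary directory that could look roughly like this:

#!/usr/bin/env bash
set -e -u

tmpdir="$(mktemp -d)"

cleanup()
{
 rm -rf "$tmpdir"
}
trap cleanup EXIT   # remove the temp dir however the script ends

# ... work with "$tmpdir" here ...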

When writing scripts for cronjobs, take into account that the environment (PATH in particular) might be more limited. Also take into account that stderr output and a non-zero exit status can lead to an email about the cronjob.

Use pushd and popd instead of cd (especially cd ..), and redirect their output to /dev/null. This will prevent your scripts from accidentally running later parts of the script in the wrong directory.
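For example (the directory is made up):

pushd /some/build/dir > /dev/null   # pushd prints the directory stack, hence the redirect
make
popd > /dev/null                    # back to wherever the script was before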

There are probably many other things to consider but that is just standard stuff off the top of my head.

If you do need any sort of data structure and in particular arrays of data structures use a proper programming language. I would recommend Rust since a compiled language is much easier to run on a variety of systems than the Python so many others here recommend, especially if you need to support the oldest supported version of an OS and the newest one at the same time.

[–] [email protected] 3 points 8 months ago (1 children)

Great list! I would add "always surround variables with quotes in case the value contains spaces".

[–] [email protected] 2 points 8 months ago (1 children)

Good point, forgot one of the basics.

Also, to make your scripts more readable and less error prone use something like

if [[ $# -gt 0 ]] && [[ "$1" == "--dry-run" ]]; then
  dry_run=1
  shift
else
  dry_run=0
fi

if [[ $# != 3 ]]; then
  echo "Usage: $0 [ --dry-run ] <description of foo> <description of bar> <description of baz>" >&2
  exit 1
fi

foo="$1"
shift
bar="$1"
shift
baz="$1"
shift

at the start of your script to name your parameters and provide usage information if the parameters did not match what you expected. The shift and use of $1 at the bottom allows for easy addition and removal of parameters anywhere without renumbering the variables.

Obviously this is only for the 90% of scripts that do not have overly complex parameter needs. For those you probably want to use something like getopt or another language with libraries like the excellent clap crate in Rust.

[–] [email protected] 2 points 7 months ago* (last edited 7 months ago)

Thank you very much!

[–] [email protected] 8 points 8 months ago (1 children)
[–] [email protected] 7 points 8 months ago (3 children)

Shell scripts are one of the things that makes Linux what it is. They're relatively easy to create, powerful, etc. It was the thing that drove me to it from Windows in the first place.

One thing I would recommend against is creating dozens of utility scripts and/or aliases for things you run frequently. I have found it's much better in the long-run to simply learn the "proper" commands and switches. If you use them often enough you start to type them very quickly. When you create helpers you start to learn your own ecosystem and will be lost on any system that doesn't have your suite of helper apps installed.

There are exceptions to this to be sure (e.g. I always alias 'l=ls -FhlA') but I would specifically avoid the podup and poddown ones myself. I've gotten very quick at typing "docker run -it --rm foo" just by rote repetition.

You're free to do as you like though. Maybe you'll only run Linux on your own desktop so that's all that matters. But something to keep in mind. I would at least learn the commands very well first and then later alias or script them for convenience.

[–] starman 7 points 8 months ago (1 children)

That's the way I do it:

#!/usr/bin/env nix
#! nix shell nixpkgs#nushell <optionally more dependencies>  --command nu

<script content>

But those scripts are only used by me

[–] [email protected] 2 points 8 months ago

This is the way

[–] [email protected] 6 points 8 months ago (1 children)

Several things

  • write bash and nothing else (except posix sh)
  • find a good way to take notes. It shouldn't be in your bashrc
  • only write fish for fish config
  • use #!/usr/bin/env bash
[–] [email protected] 4 points 8 months ago (1 children)

Good idea. I added an "iwish" command a while ago. Whenever I'm annoyed that GNOME can't do something, or anything else doesn't work as it should, I write "iwish gnome had only one extension app" and it adds a new line to my wishlist.md. Maybe it would be good for notes too: inote bla

[–] [email protected] 6 points 8 months ago* (last edited 8 months ago) (1 children)

I use Bash for scripts, though my interactive shell is Fish.

Usually I use #!/usr/bin/env bash as shebang. This has the advantage of searching your PATH for Bash instead of hardcoding it.

My folders are only differentiated by those in my PATH and those not.

Most of my scripts can be found here. They are purely desktop use, no syncing to any servers. Most would be useless there.

For good practice, I'd recommend using set -euo pipefail to make Bash slightly less insane and use shellcheck to check for issues.
This is personal preference, but you could avoid Bashisms like [[ and stick to POSIX sh. (Use #!/usr/bin/env sh then.)

With shortened commands the risk is that you might forget how the full command works. How reliant you want to be on those commands being present is up to you. I wouldn't implement them as scripts though, just simple aliases instead.
Scripts only make sense if you want to do something slightly more complex over multiple lines for readability.

[–] [email protected] 3 points 8 months ago (1 children)

#/usr/bin/env bash typo? #!/usr/bin/env bash

thx for the tips!

I prefer single files over aliases since I can more easily manage each command.

[–] [email protected] 3 points 8 months ago

You're right, it's #!

[–] [email protected] 5 points 8 months ago (2 children)

Am I missing something - doesn't bash have tab completion out of the box?

[–] [email protected] 5 points 8 months ago
[–] [email protected] 4 points 8 months ago

It does. It's not quite as fancy as the completion in fish/zsh, which employ a TUI, but it's reliable in most situations.

[–] [email protected] 5 points 8 months ago

You are way overthinking it.

[–] [email protected] 5 points 7 months ago
  • I usually use bash/python/perl if I can be sure that it will be available on all systems I intend to run the scripts. A notable exception for this would be alpine based containers, there it's nearly exclusively #!/bin/sh.
  • Depending on the complexity, I will either have a git repository for all random scripts I need and not test them, or a single repo per script with integration tests.
  • Depends, if they are specific to my setup, no, otherwise the git repository is public on my git server.
  • Usually no, because the servers are not always under my direct control, so the scripts that are on servers are specific to that server/the server fleet.
  • Regarding your last question in the list: You do you, I personally don't, partly because of my previous point. A lot of servers are "cattle" provisioned and destroyed on a whim. I would have to sync those modifications to all machines to effectively use them, which is not always possible. So I also don't do this on any personal devices, because I don't want to build muscle memory that doesn't apply everywhere.
[–] [email protected] 4 points 7 months ago

Do you use bash? Yes because it is everywhere and available by default.

[–] [email protected] 4 points 8 months ago

I primarily operate in strict standards-compliance mode, where I write against the shell specifications in the latest Single Unix Specification and do not use a shebang line, since including one results in unspecified, implementation-defined behavior. Generally people seem to find this weird and annoying.

Sometimes I embrace using bash as a scripting language, and use one of the env-based shebangs. In that case, I go whole-hog on bashisms. While I use zsh as my interactive shell, even I'm not mad enough to try to use it for scripts that need to run in more than one context (like other personal accounts/machines, even).

In ALL cases, use shellcheck and at least understand the diagnostics reported, even if you opt not to fix them. (I generally modify the script until I get a clean shellcheck run, but that can be quite involved... lists of files are pretty hard to deal with safely, actually.)

[–] [email protected] 4 points 7 months ago
  • Fish. Much, much saner defaults.
  • I am writing #!/usr/bin/env sh for dead simple scripts, so they will be a tiny bit more portable and run a tiny bit faster. The lack of arrays causes too much pain in longer scripts. I would love to use Fish, but it lacks a strict mode.
  • No, why would I?
  • I used to share all my dotfiles, scripts included, but I was too afraid that I would publish some secrets someday, so I stopped doing that. For synchronizing commands, aliases and other stuff between computers I use Chezmoi.
  • To use Fish instead of fighting with the startup time of Zsh with hundreds of plugins
  • Always use the so-called "strict mode" in Bash, that is, the set -euo pipefail line. It will make Bash error on non-zero exit codes, undefined variables, and non-zero exit codes from commands in a pipe. Also, always use shellcheck; it's extremely easy to make a mistake in Bash. If you want to check a single command's exit code manually, just wrap it in set +e and set -e (see the sketch after this list).
  • Consider writing your scripts in Python. Like Bash, it also has some warts, but is multiplatform and easy to read. I have a snippet which contains some boilerplate like a main function definition with ArgumentParser instantiated. Then at the end of the script the main function is called wrapped in try … except KeyboardInterrupt: exit(130) which should be a default behavior.
  • Absolutely not a bad practice. If you need to use them on a remote server and can't remember what they stand for, you can always execute type some_command. Oh, and read about abbreviations in Fish. It always expands the abbreviation, so you see what you execute.
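A sketch of that set +e / set -e wrapping (pattern and file name are placeholders):

set -euo pipefail

set +e                       # temporarily allow failure
grep -q "pattern" file.txt
status=$?
set -e

if [ "$status" -ne 0 ]; then
  echo "pattern not found (grep exited with $status)"
fi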
[–] [email protected] 4 points 7 months ago

Do you use bash? If not, which one do you use? zsh, fish? Why do you do it?

Mostly fish, because it just feels much more modern than bash, it has good built-in autocomplete, and I don't have to install millions of plugins like with zsh.

Do you write #!/bin/bash or #!/bin/sh? Do you write fish exclusive scripts?

#!/usr/bin/env bash. Occasionally I also write fish scripts; just replace bash with fish in the shebang.

What should’ve people told you what to do/ use?

zoxide

general advice?

As @crispy_[email protected] already suggested, use shellcheck.

is it bad practice to create a handful of commands like podup and poddown that replace podman compose up -d and podman compose down or podlog as podman logs -f --tail 20 $1 or podenter for podman exec -it "$1" /bin/sh?

I don't think so

[–] [email protected] 4 points 7 months ago (2 children)

Yes, using bash on all boxen.

Scripts start with #!/bin/sh because that gives quicker execution times.

Any simple aliases, I put in .bash_aliases

Tried tcsh and zsh around 30yrs ago, all bash since then.

[–] [email protected] 4 points 7 months ago* (last edited 7 months ago)

I use sh to attempt to keep it compatible with POSIX systems.

I use plain bash. Never really tried zsh and fish, since most of my Linux work is on servers and I don't really care for extra features.

I try and write idempotent scripts when possible.

I wouldn't create those aliases on a fleet because writing them to the configuration file of your shell in an idempotent fashion is hacky and my VMs are like cattle.

[–] [email protected] 3 points 8 months ago* (last edited 8 months ago) (1 children)

I use bash as my interactive shell. When I encountered "smart" tab completion for the first time, ~20 years ago, I immediately disabled it and went back to dumb completion, because it caused multi-second freezes when it needed to load stuff from disk. I also saw it refuse to complete filenames because they had the wrong suffix. Maybe I should try enabling it again and see if it works any better now. It probably is faster now with SSDs.

I tried OpenBSD at some point, and it came with some version of ksh. Seems about equivalent to bash, but I had to modify some of my .bashrc so it would work on ksh. I would just stick to the default shell, whatever it is, it's fine.

I try to stick to POSIX shell for scripts. I find that I don't need bashisms very often, and I've used systems without bash on them. Most bash-only syntax has an equivalent that will work on POSIX sh. I do use bash if I really need some bash feature (I recently wanted to set -o pipefail, which dash cannot do apparently, and the workaround is really annoying).

Do not use #!/bin/sh if you're writing bash-only scripts. This will break on Debian, Ubuntu, BSD, busybox etc. because /bin/sh is not bash on those systems.

[–] [email protected] 5 points 8 months ago* (last edited 8 months ago) (1 children)

Do not use #!/bin/sh if you’re not writing bash-only scripts

Actually #!/bin/sh is for Bourne-shell-compatible scripts. Bash is a superset of the Bourne shell, so anything that works in Bourne should work in bash as well as in other Bourne-compatible shells, but not vice versa. Bash-specific syntax is often referred to as a "bashism", because it's not compatible with other shells. So you should not use bashisms in scripts that start with #!/bin/sh.

The trouble is that it is very common for distros to link /bin/sh to /bin/bash, and it used to be that bash called as /bin/sh would change its behavior so that bashisms would not work, but this doesn't appear to be the case anymore. The result is that people often write what they think are Bourne shell scripts but unintentionally sneak in bashisms... and then when those supposed "Bourne shell" scripts get run on a non-bash Bourne-compatible shell, they fail.
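A tiny made-up example of how that failure looks:

#!/bin/sh
# [[ ]] is a bashism; dash reports "[[: not found" instead of evaluating the test
if [[ "$1" == "--help" ]]; then
  echo "usage: $0 [--help]"
fi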

[–] [email protected] 2 points 8 months ago* (last edited 8 months ago) (1 children)

Oh I wanted to say, "Do not use #!/bin/sh if you're ~~not~~ writing bash-only scripts". I think I reformulated that sentence and forgot to remove the not. Sorry about the confusion. You're exactly right of course. I have run into scripts that don't work on Debian, because the author used bashisms but still specified /bin/sh as the interpreter.

[–] [email protected] 2 points 8 months ago

Oh I wanted to say, “Do not use #!/bin/sh if you’re not writing bash-only scripts”

Hah, I was wondering if that was what you actually meant. The double negation made my head spin a bit.

I have run into scripts that don’t work on Debian, because the author used bashisms but still specified /bin/sh as the interpreter.

The weird thing is that man bash still says:

When invoked as sh, bash enters posix mode after the startup files are read.
...
--posix
    Change  the  behavior  of bash where the default operation differs from the POSIX standard to 
    match the standard (posix mode). See SEE ALSO below for a reference to a document that details 
    how posix mode affects bash's behavior.

But if you create a file with a few well known bashisms, and a #!/bin/sh shebang, it runs the bashisms just fine.

[–] [email protected] 3 points 8 months ago* (last edited 7 months ago)

I recommend writing everything in Bourne shell (/bin/sh) for a few reasons:

  • Bash is more capable, which is nice, but if you're fiddling with complex data structures, you probably should be using a more maintainable language like Python.
  • Bash is in most places, but crucially not everywhere. Docker-based deployments for example often use Ash which is very similar to Bash, but lacks support for arrays and a few other things.
  • Bourne's limitations force you to rethink your choices regularly. If you find yourself hacking around a lack of associative arrays for example, it's probably time to switch to a proper language.

Also two bits of advice.

  1. Use shellcheck. There's a website that'll check your script for you as well as a bunch of editor extensions that'll do it in real time. You will absolutely write better, safer code with it.
  2. If your script exceeds 300 lines, stop and rewrite it in a proper language. Your future self will thank you.
[–] [email protected] 3 points 8 months ago (1 children)

Bash script for simple things (although Fish is my regular shell) and Node or Python scripts for complex things. Using #!/usr/bin/env node works just like it would for Bash so you know.

[–] [email protected] 3 points 7 months ago

Yes, fish is great. It has some special syntax for functions; I will add my configs soon.

set fish_greeting (to an empty value) is useful to silence the greeting.

User scripts can go to ~/.local/bin which is already in the path.

You can split up your shell configs into topics, and put them into ~/.config/fish/conf.d/abc.fish

[–] [email protected] 3 points 8 months ago

A good idea I have been spreading around lately is to use ShellCheck as you code in Bash: integrate it into your workflow, editor or IDE as relevant to you (there's a command-line tool as well as editor integrations in various forms), and pass your scripts through it, trying to get the warnings to go away. That fixes many obvious errors and cleans up your code a bit.

[–] [email protected] 2 points 8 months ago (2 children)

I use bash and I usually put /bin/bash in my scripts, because that's where I know it works. /bin/sh is only for scripts that should work on many/all shells.

I don't have many such scripts, so I just have one folder. I don't really share them, as they are made for my use case. If I do create something that I think will help others, then yes, I share it in git somewhere.

I do have a scripts folder in my Nextcloud that I sync around with useful scripts.

Some of your examples can probably just be made into aliases with alias alias_name="command_to_run".

[–] [email protected] 2 points 8 months ago
  • Do you use bash? If not, which one do you use? zsh, fish? Why do you do it?
  • Do you write #!/bin/bash or #!/bin/sh? Do you write fish exclusive scripts?

I use bash, and I use #!/bin/bash for my scripts. Some are POSIX compliant, some have bashisms. But I really don't care about bashisms, since I explicitly set bash as the interpreter. So no, no fish exclusive scripts, but some "bash exclusive" scripts. Since fish is aimed towards being used as an interactive shell, I don't see a real reason to use it as an interpreter for scripts anyway.

  • Do you have two folders, one for proven commands and one for experimental?
  • Do you publish/ share those commands?
  • Do you sync the folder between your server and your workstation?

I have my scripts in $HOME/.scripts and softlink them from a directory in $PATH. Some of the scripts are versioned using Git, but the repository is private and I do not plan on sharing it, because the repo and the scripts contain some not-to-share information and are mostly simply not useful outside my carefully crafted and specific environment. If I want to share a script, I do it individually or make a proper public Git repository for it.

Since my server(s) and my workstations have different use cases I do not share any configuration between them. I share some configuration between different workstations, though. My dotfiles repository is mainly there for me to keep track of changes in my dotfiles.

is it bad practice to create a handful of commands

It becomes bad practice if it is against your personal or corporate guidelines regarding best practices. While it is not particularly bad or insecure, etc., to create bash scripts containing a single command, maybe use an alias instead. Whatever you type after the alias in the shell is simply appended to the expanded command, so it takes the place of the $1 at the end.

alias podup="podman compose up -d"
alias poddown="podman compose down"
alias podlog="podman logs -f --tail 20"

Not quite sure about the podman syntax; if podman exec /bin/sh -it "$1" also works, you can use alias podenter="podman exec /bin/sh -it". Otherwise a simple function would do the trick.
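For podenter specifically, a small function (a sketch, using the command from the original question) handles the argument landing in the middle of the command line, which an alias cannot:

podenter() {
  podman exec -it "$1" /bin/sh
}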

[–] [email protected] 2 points 8 months ago

Btw, if you ever wondered why Debian uses dash as /bin/sh (the switch was a bit annoying at the time), I think the reasoning was something like this:

  • dash is a bit faster, which might have saved a second or two on boot times (this was before systemd). Same applies to compilation times, configure scripts run faster with dash.
  • A bunch of #!/bin/sh scripts in Debian did not actually work if you replaced /bin/sh with another shell, which I guess some people wanted to do. Making dash the default /bin/sh forced everyone to fix their scripts.

Also some history on the abomination that is m4sh, famously used by GNU autoconf configure.ac scripts. Apparently when autoconf was released in 1991, there were still some Unix systems that shipped some 70s shells as the default /bin/sh. These shells do not support shell functions, which makes creating any sort of shell programming library pretty much impossible (I guess you could make a folder full of scripts instead of functions). They decided to use m4 preprocessor macros instead, as a sort of poor man's replacement for functions.

In hindsight, I wish they had told commercial Unix sysadmins to install a proper /bin/sh or gtfo. But the GNU people thought it was important to make it as easy as possible to install free software even on commercial Unices.
