
10 Advanced Bash Shell Commands to Boost Your Productivity

Master 10 advanced Bash commands and techniques - from process substitution to jq and GNU parallel - to streamline terminal workflows, write concise one-liners, and automate everyday tasks.

The terminal is where developers, sysadmins, and power users get things done. Beyond the basic ls/cd/cat repertoire there are a handful of commands and Bash features that, once learned, can drastically speed up common tasks and let you compose powerful one-liners and scripts.

Below are 10 advanced Bash commands and techniques - with examples, tips, and small reusable snippets - to help you get more done faster.


1) Process substitution: <(…) and >(…)

Process substitution lets you treat the output of a command as if it were a file. It’s great for commands that expect filenames.

Example: diff two command outputs

diff <(sort fileA.txt) <(sort fileB.txt)

Use case: feeding multiple streams into a single command without creating temp files.

Pro tip: Works well with tools like diff, vimdiff, cmp, and tar. Under the hood it’s /dev/fd/NN or named pipes.
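As a small self-contained sketch (file names are made up), comm pairs nicely with process substitution, since it requires sorted input:

```shell
# Hypothetical files: which lines were added between old.txt and new.txt?
printf 'alpha\nbeta\n'  > old.txt
printf 'beta\ngamma\n' > new.txt

# comm needs sorted input; <(...) supplies sorted copies without temp files.
# -13 suppresses lines unique to the first file and lines common to both,
# leaving only lines unique to the second file.
added=$(comm -13 <(sort old.txt) <(sort new.txt))
echo "$added"   # gamma
```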

References: Bash manual - Process Substitution


2) xargs: turn output into arguments (with -0 and -P)

xargs reads items from STDIN and builds command lines. Combined with find -print0 it safely handles filenames with spaces/newlines.

# Delete files found by find, safely handling special characters
find . -name '*.log' -print0 | xargs -0 rm -v

# Resize images in parallel (GNU xargs); the output tree must exist first,
# since {} expands to the full relative path (e.g. resized/imgs/foo.png)
mkdir -p resized/imgs
find imgs -type f -name '*.png' -print0 | xargs -0 -P 8 -I{} convert {} -resize 50% resized/{}

Flags:

  • -0: null-delimited input
  • -P N: run up to N processes in parallel
  • -I{}: replacement token for each item (implies one item per command line)

Pro tip: Use xargs -n to limit arguments per invocation. If you need more robust parallelism, try GNU parallel (below).
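A minimal illustration of -n batching, safe to paste anywhere:

```shell
# -n 2 packs at most two items into each echo invocation,
# so five inputs become three command lines
batches=$(printf '%s\n' a b c d e | xargs -n 2 echo)
echo "$batches"
# a b
# c d
# e
```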


3) GNU parallel: the Swiss Army knife for parallel jobs

GNU parallel is a powerful alternative to xargs with richer features: job control, load balancing, grouping, and argument replacement.

# Run a command for each argument in parallel using 4 jobs
parallel -j4 convert {} -resize 50% resized/{} ::: *.png

# Read filenames from stdin
find . -name '*.txt' -print0 | parallel -0 gzip

Why use it: easier to construct complex parallel workloads, resume failed jobs, and limit by load average.

Reference: GNU parallel manual


4) find: more than file hunting (use -exec, -prune, -regex)

find is indispensable for locating files and performing actions on them.

Examples:

# List build directories without descending into vendor directories
find . -path './vendor' -prune -o -type d -name 'build' -print

# Use -exec with + to group arguments
find . -name '*.log' -exec gzip {} +

# Match using regex
find . -regextype posix-extended -regex '.*/(foo|bar)[0-9]+\.txt'

Pro tip: prefer -exec ... + over -exec ... \; when possible - it batches arguments into fewer command invocations.
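Here is a runnable sketch of the -prune idiom above, using a throwaway tree (all names are illustrative):

```shell
# Build a tiny tree: vendor/ should be skipped, src/ searched
mkdir -p demo/vendor demo/src
touch demo/vendor/skip.log demo/src/keep.log

# When -path matches, -prune stops descent and short-circuits the -o branch,
# so nothing under demo/vendor is printed
found=$(find demo -path demo/vendor -prune -o -name '*.log' -print)
echo "$found"   # demo/src/keep.log
```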

Reference: findutils manual


5) awk: the lightweight data-processing language

awk is a line-at-a-time programming language built for text processing: splitting fields, transforming rows, aggregating values.

Examples:

# Sum the 3rd column of a CSV
awk -F',' '{sum += $3} END {print sum}' data.csv

# Print only rows where column 2 > 100 and print first and fourth columns
awk -F'\t' '$2 > 100 {print $1, $4}' table.tsv

# Pretty table from ps output
ps aux | awk 'NR==1{print $0; next} {printf "%s %6s %6s %s\n", $1, $3, $4, $11}'

Pro tip: For short pipelines, awk is often quicker to write than an equivalent Python script. GNU awk (gawk) supports extensions such as time functions, and recent versions add built-in CSV parsing.
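A self-contained sketch of the aggregation pattern above, plus a per-key group-by (the data file is hypothetical):

```shell
# Hypothetical sales data: key,amount
printf 'a,1\nb,2\na,3\n' > sales.csv

# Sum a column across all rows
total=$(awk -F',' '{sum += $2} END {print sum}' sales.csv)
echo "$total"   # 6

# Group-by: accumulate per key in an awk associative array
bykey=$(awk -F',' '{s[$1] += $2} END {for (k in s) print k, s[k]}' sales.csv | sort)
echo "$bykey"
# a 4
# b 2
```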

Reference: GNU awk manual


6) sed: stream editor for in-place and streaming changes

sed excels at non-interactive text transformations. Combine with -i for in-place edits (beware portability differences) and -E for extended regex.

Examples:

# Replace first occurrence per line
sed 's/foo/bar/' file.txt

# Replace all occurrences and edit in-place (GNU sed)
sed -i 's/\<oldfunc\>/newfunc/g' **/*.c

# Print only a range of lines (sed as a head/tail substitute)
sed -n '1,50p' file.txt

Warning: BSD/macOS sed requires -i '' for in-place without backup. Test sed scripts on sample files before mass-editing.
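Because of those -i differences, a portable pattern is to write to a temporary file and move it into place; a sketch with a throwaway file:

```shell
printf 'foo bar\nfoo baz\n' > demo.txt

# Portable in-place edit: redirect to a temp file, then replace the original
sed 's/foo/qux/' demo.txt > demo.txt.tmp && mv demo.txt.tmp demo.txt

first=$(head -n1 demo.txt)
echo "$first"   # qux bar
```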

Reference: GNU sed manual


7) grep + ripgrep (rg): fast searching with PCRE

grep remains a go-to, but ripgrep (rg) is much faster on large trees and respects .gitignore by default.

Examples:

# Use Perl-compatible regex and show only the matching part
grep -Po '\bERROR:.*' logfile

# Use ripgrep to search codebase quickly
rg --line-number 'TODO|FIXME' src/

Pro tip: Use --context (-C) to show surrounding lines, and --hidden with ripgrep to search hidden files when needed.
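Note that -o (print only the matching part) works in plain grep too, without -P; a self-contained sketch with a made-up log:

```shell
printf 'INFO start\nERROR: disk full\nINFO done\n' > app.log

# -o prints only the matching portion of each matching line
msg=$(grep -o 'ERROR:.*' app.log)
echo "$msg"   # ERROR: disk full
```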

Reference: GNU grep manual


8) jq: query and transform JSON from the shell

Working with JSON in shell pipelines is painful without jq. jq lets you filter, transform, and pretty-print JSON.

Examples:

# Pretty print
curl -s http://api.example.com | jq .

# Extract fields
jq -r '.items[] | "\(.id)\t\(.name)"' data.json

# Filter and map
jq '[.items[] | select(.active) | {id, name, url}]' input.json

Pro tip: Use -r to output raw strings (no quotes) for easy piping into other shell commands.
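Putting select and -r together on an inline document (requires jq to be installed; the JSON here is made up):

```shell
json='{"items":[{"id":1,"name":"a","active":true},{"id":2,"name":"b","active":false}]}'

# Keep only active items and emit their names as raw strings
names=$(printf '%s' "$json" | jq -r '.items[] | select(.active) | .name')
echo "$names"   # a
```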

Reference: jq manual


9) Advanced Bash parameter expansion and arrays

Bash’s parameter expansion and arrays are powerful for manipulating strings and lists without calling external processes.

Useful expansions:

file=/path/to/archive.tar.gz
echo ${file##*/}     # archive.tar.gz     (remove longest prefix)
echo ${file%.tar.gz} # /path/to/archive  (remove suffix)

name=${name:-default}  # use default if name is unset or empty

# Replace substring
s='hello world'
echo ${s/world/universe}

Arrays and mapfile:

# Read lines into an array efficiently
mapfile -t lines < file.txt
for ln in "${lines[@]}"; do
  echo "$ln"
done

# Associative arrays
declare -A counts
counts[apple]=3
counts[banana]=5

Pro tip: Prefer parameter expansion over external commands (cut, sed) in tight loops for performance.
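A short word-count sketch combining associative arrays with parameter expansion, using no external processes at all:

```shell
# Count occurrences of each word with a Bash associative array
declare -A counts
for word in apple banana apple cherry apple; do
  # ${counts[$word]:-0} supplies 0 the first time a key is seen
  counts[$word]=$(( ${counts[$word]:-0} + 1 ))
done
echo "${counts[apple]}"   # 3
echo "${counts[cherry]}"  # 1
```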

Reference: Bash reference manual - Shell Parameter Expansion


10) History expansion, fc and using your editor for complex commands

Your shell history is a productivity goldmine. Use ! shortcuts or fc to reuse and edit past commands.

Examples:

# Run previous command again
!!

# Run the last command that started with 'git'
!git

# Edit last command in $EDITOR and run it
fc

# Re-run a command but replace a word (history substitution)
^old^new^

Pro tip: Configure HISTSIZE, HISTFILESIZE, and use shopt -s histappend plus PROMPT_COMMAND='history -a' to keep history consistent across sessions.
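That tip could be codified in ~/.bashrc roughly like this (the sizes are arbitrary; adjust to taste):

```shell
# Keep a large, append-only history shared across sessions
HISTSIZE=100000
HISTFILESIZE=200000
HISTCONTROL=ignoredups        # drop consecutive duplicate commands
shopt -s histappend           # append to the history file rather than overwrite
PROMPT_COMMAND='history -a'   # flush each command to the history file immediately
```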

Reference: Bash manual - History Interaction


Conclusion

Mastering these commands and techniques transforms the shell from a basic file manager into a composable toolkit for automation and data wrangling. Start by practicing one or two that directly solve your common problems (e.g., jq for APIs, xargs/parallel for batch work, and process substitution for composing tools). Then gradually incorporate aliases and small functions into your ~/.bashrc to codify those gains.

Small .bashrc helpers to save time

# fast grep that searches hidden files too
# (avoid naming it fgrep, which shadows the standard fgrep/grep -F)
alias rgh='rg --hidden --line-number'

# safely replace strings in files (GNU sed)
# note: escape / and other sed metacharacters in the arguments first
replace_in_files(){
  local search="$1" replace="$2"; shift 2
  sed -i "s/${search}/${replace}/g" "$@"
}

# quick diff of two commands, e.g. diffcmd 'ls dir1' 'ls dir2'
diffcmd(){ diff -u <($1) <($2); }

Further reading and docs: Bash reference, GNU tools manuals, and project docs linked above are excellent next steps.
