Backup and Remote Server Shell Scripts Made Simple (Part 9 of 34)








Every sysadmin has lost sleep over a bad backup or a missed scheduled task on a remote server. Writing a Bash script that backs up files and directories automatically is one of the most valuable things you can do for your job, and it requires no third-party software on Ubuntu, Rocky Linux, or AlmaLinux. This guide walks you through building three full-featured, ready-to-run scripts: they start with a simple local tar archive of selected directories and files, and end with an automated remote SSH backup with rotation, logging, and email alerts on failure.

Why Manual Backups Always Break Eventually:

Relying on manual backups is a risk that most teams discover too late. Here is what goes wrong:

  • Someone forgets to run the backup before a major update or server migration.
  • Backups pile up with no rotation logic, filling the disk until the server crashes.
  • There is no alert when a backup silently fails at 2 AM and nobody notices until data is gone.

Shell scripts solve all three problems if you build them right from the start.
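Those three fixes map directly onto script features: automation, rotation, and alerting. As a minimal sketch (assuming nothing beyond Bash and coreutils; the log path and function names are illustrative, not from the scripts below), a backup script "built right" starts with a skeleton like this:

```shell
#!/bin/bash
# Skeleton of a backup script built right from the start:
# fail fast, log every step, and raise an alert when anything breaks.
set -euo pipefail

LOG_FILE=$(mktemp)   # a real script would use something like /var/log/backup.log
log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"; }

# Any failing command triggers this handler instead of dying silently
on_error() { log "ALERT: backup failed (exit $?)"; }
trap on_error ERR

log "backup started"
# ... tar / rsync / rotation steps would go here ...
log "backup finished"
```

The three full scripts below flesh out exactly these pieces: the logging function, the failure alert, and the rotation logic.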

If you are new to writing shell scripts, it helps to understand the fundamentals before diving into the scripts below. Start with this beginner-friendly introduction to what bash scripting is and how it works on Linux before continuing.

#01

Prerequisites: What You Need Before You Start

All scripts in this guide are tested on Ubuntu 24.04 LTS, Rocky Linux 9, and AlmaLinux 9. Install the required packages for your distro before running anything.

Ubuntu 24.04:

Use apt to install tar, rsync, mail utilities, gnupg, and curl in one command.

bash
sudo apt update && sudo apt install -y tar rsync mailutils gnupg curl

Rocky Linux 9 / AlmaLinux 9:

Use dnf instead of apt on RHEL-based systems. The package names differ slightly.

bash
sudo dnf install -y tar rsync mailx gnupg2 curl

Tip:

For the SSH remote backup script you also need key-based authentication set up between your source server and the backup destination. The basic SSH client commands guide walks through generating keys and copying them to remote servers step by step.

#02

Script 1: Simple Local Backup Using tar

This is the starting point. The script uses the tar command to compress important directories into a .tgz archive with a date-stamped filename. It works the same way on Ubuntu, Rocky Linux, and AlmaLinux with no changes needed.

What this script does:

It backs up /home, /etc, and /var/log into a compressed archive stored in /mnt/backup. The filename includes the server hostname and current date so each backup is easy to identify at a glance. The Linux compression and archiving command cheat sheet explains all tar flags in detail if you want to customise the options.

bash
#!/bin/bash
# Script 1: Simple Local Backup with tar
# Works on: Ubuntu 24.04, Rocky Linux 9, AlmaLinux 9

# Directories to back up
BACKUP_DIRS="/home /etc /var/log"

# Where to store the backup
BACKUP_DEST="/mnt/backup"

# Create destination if it does not exist
mkdir -p "$BACKUP_DEST"

# Build filename using hostname and date
HOSTNAME=$(hostname -s)
DATE=$(date +%Y-%m-%d)
ARCHIVE="$BACKUP_DEST/$HOSTNAME-$DATE.tgz"

echo "Starting backup: $ARCHIVE"
echo "Date: $(date)"

# Run tar backup - $BACKUP_DIRS is intentionally unquoted so it expands to multiple paths; --one-file-system stops tar crossing mount points
tar --one-file-system \
--exclude=/tmp \
--exclude=/proc \
--exclude=/sys \
--exclude=/dev \
-czf "$ARCHIVE" $BACKUP_DIRS

# Check if tar succeeded
if [ $? -eq 0 ]; then
    echo "Backup completed successfully: $ARCHIVE"
    ls -lh "$ARCHIVE"
else
    echo "Backup FAILED. Check available disk space and permissions."
    exit 1
fi

Save this as backup_local.sh, make it executable, and run it:

bash
chmod +x backup_local.sh
sudo ./backup_local.sh
OUTPUT
Starting backup: /mnt/backup/webserver01-2026-05-08.tgz
Date: Fri May 8 09:12:04 UTC 2026
Backup completed successfully: /mnt/backup/webserver01-2026-05-08.tgz
-rw-r--r-- 1 root root 412M May 8 09:14 /mnt/backup/webserver01-2026-05-08.tgz

To verify and restore from the archive:

bash
# List all files inside the archive
tar -tzvf /mnt/backup/webserver01-2026-05-08.tgz

# Restore a single file safely to /tmp first
tar -xzvf /mnt/backup/webserver01-2026-05-08.tgz -C /tmp etc/hosts

# Full restore - use with care, this overwrites live files
sudo tar -xzvf /mnt/backup/webserver01-2026-05-08.tgz -C /

Common Mistake:

Skipping the archive verification step is something most people regret later. A backup that finishes without errors is not always a good backup. Corrupt archives happen when the disk fills during the write.

Fix: Always verify after creation: tar -tzvf /mnt/backup/archive.tgz > /dev/null && echo "OK" || echo "CORRUPT"
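The one-liner can also be wrapped in a reusable helper. Here is a self-contained sketch (the file names and the `verify_archive` function are throwaway examples, not part of the scripts in this guide) that shows the check catching a deliberately truncated archive:

```shell
#!/bin/bash
# Sketch: reusable archive verification, demonstrated against a good archive
# and one truncated mid-write, as happens when the disk fills up.
verify_archive() {
    if tar -tzf "$1" > /dev/null 2>&1; then
        echo "OK: $1"
    else
        echo "CORRUPT: $1"
        return 1
    fi
}

TMP=$(mktemp -d)
echo "hello" > "$TMP/file.txt"
tar -czf "$TMP/good.tgz" -C "$TMP" file.txt

# Simulate a disk-full write by keeping only the first 10 bytes
head -c 10 "$TMP/good.tgz" > "$TMP/bad.tgz"

GOOD_STATUS=0
verify_archive "$TMP/good.tgz" || GOOD_STATUS=$?
BAD_STATUS=0
verify_archive "$TMP/bad.tgz" || BAD_STATUS=$?
rm -rf "$TMP"
```

Call a helper like this right after every archive creation, and fail the backup run if it returns non-zero.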

#03

Script 2: Remote Backup via SSH Using rsync and scp

A Linux remote backup script run over SSH pushes your backup to a separate server, so a disk failure on the source does not take the backup with it. This script covers both rsync and scp so you can use whichever fits your setup. Understanding the difference matters because they behave very differently under the hood.

rsync vs scp for remote backups:

rsync only transfers blocks that changed since the last run, making it much faster for large directories after the first backup. scp copies the full file every single time, which is simpler but slower. For regular scheduled backups, rsync is almost always the better choice. The Linux remote access command cheat sheet has more examples of both tools.

bash
#!/bin/bash
# Script 2: Remote SSH Backup using rsync + scp
# Works on: Ubuntu 24.04, Rocky Linux 9, AlmaLinux 9

# Remote server details
REMOTE_USER="backupuser"
REMOTE_HOST="192.168.1.100"
REMOTE_DIR="/backup/$(hostname -s)"
SSH_KEY="/root/.ssh/id_rsa"

# Local directories to back up
BACKUP_DIRS="/home /etc /var/log"

# Local staging area
LOCAL_STAGE="/tmp/backup_stage"
mkdir -p "$LOCAL_STAGE"
chmod 700 "$LOCAL_STAGE"

# Build archive filename
DATE=$(date +%Y-%m-%d)
HOSTNAME=$(hostname -s)
ARCHIVE="$LOCAL_STAGE/$HOSTNAME-$DATE.tgz"

echo "=== Remote Backup Started: $(date) ==="

# Step 1: Create local archive (unquoted to allow multiple directory expansion)
tar \
--exclude=/tmp \
--exclude=/proc \
--exclude=/sys \
--exclude=/dev \
-czf "$ARCHIVE" $BACKUP_DIRS
if [ $? -ne 0 ]; then
    echo "ERROR: tar archive creation failed."
    exit 1
fi

# Step 2: Create remote directory if it does not exist
ssh -i "$SSH_KEY" "$REMOTE_USER@$REMOTE_HOST" "mkdir -p $REMOTE_DIR"

# Step 3: Option A - Transfer with rsync (recommended)
rsync -avz -e "ssh -i $SSH_KEY" "$ARCHIVE" "$REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR/"

# Step 3: Option B - Transfer with scp (simpler, copies full file every time)
# scp -i "$SSH_KEY" "$ARCHIVE" "$REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR/"

if [ $? -eq 0 ]; then
    echo "Remote backup completed: $REMOTE_HOST:$REMOTE_DIR"
    rm -f "$ARCHIVE"
else
    echo "ERROR: Remote transfer failed. Local archive kept at $ARCHIVE"
    exit 1
fi

echo "=== Remote Backup Finished: $(date) ==="

OUTPUT
=== Remote Backup Started: Fri May 8 09:20:11 UTC 2026 ===
sending incremental file list
webserver01-2026-05-08.tgz
412,385,280 100% 22.15MB/s 0:00:17 (xfer#1, to-check=0/1)

sent 412,491,392 bytes received 35 bytes 23,571,048.40 bytes/sec
total size is 412,385,280 speedup is 1.00
Remote backup completed: 192.168.1.100:/backup/webserver01
=== Remote Backup Finished: Fri May 8 09:20:29 UTC 2026 ===

Common Mistake:

Running an SSH backup script with a passphrase-protected key from cron causes silent failures. Cron cannot enter the passphrase interactively, so authentication simply fails with no visible error.

Fix: Generate a dedicated passwordless key for backup jobs only: ssh-keygen -t rsa -b 4096 -f /root/.ssh/backup_key -N "" and copy it to the remote server: ssh-copy-id -i /root/.ssh/backup_key.pub backupuser@192.168.1.100

#04

Script 3: Production Backup with Logging, Rotation, and Alerts

This is the script that actually belongs in a production environment. It adds four features that the previous two scripts are missing: a timestamped log file, automatic rotation to keep only the last 7 days of backups, an email alert on failure, and an optional Slack webhook notification. This is what a real bash backup script with logging and email notification looks like on a live server.

What makes this production-ready:

The script checks available disk space before starting, logs every step with a timestamp, removes archives older than 7 days using find -mtime, and calls an alert function if anything fails. You can monitor it using the Linux logging best practices covered in our server management guide. The disk space check uses df -m which is explained with examples in the df command guide.

bash
#!/bin/bash
# Script 3: Production Backup - Logging + Rotation + Email + Slack
# Works on: Ubuntu 24.04, Rocky Linux 9, AlmaLinux 9

####### CONFIGURATION #######
BACKUP_DIRS="/home /etc /var/log"
BACKUP_DEST="/mnt/backup"
LOG_FILE="/var/log/backup.log"
RETENTION_DAYS=7
MIN_DISK_MB=500
ADMIN_EMAIL="admin@yourdomain.com"
SLACK_WEBHOOK="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
##############################

HOSTNAME=$(hostname -s)
DATE=$(date +%Y-%m-%d_%H-%M)
ARCHIVE="$BACKUP_DEST/$HOSTNAME-$DATE.tgz"

# Cleanup if interrupted
trap 'rm -f "$ARCHIVE"; exit' INT TERM

# Logging function
log() {
    echo "[$(date +"%Y-%m-%d %H:%M:%S")] $1" | tee -a "$LOG_FILE"
}

# Alert function - sends email and Slack on failure
alert_fail() {
    MSG="BACKUP FAILED on $HOSTNAME at $(date). Reason: $1"
    log "ALERT: $MSG"
    echo "$MSG" | mail -s "[BACKUP ALERT] $HOSTNAME" "$ADMIN_EMAIL"
    curl -s --fail -X POST -H "Content-type: application/json" \
        --data "{\"text\":\"$MSG\"}" \
        "$SLACK_WEBHOOK" > /dev/null 2>&1
    exit 1
}

mkdir -p "$BACKUP_DEST"

log "===== Backup Started: $HOSTNAME ====="

# Check available disk space before starting
AVAIL_MB=$(df -m "$BACKUP_DEST" | awk 'NR==2 {print $4}')
log "Available disk space: ${AVAIL_MB}MB"
if [ "$AVAIL_MB" -lt "$MIN_DISK_MB" ]; then
    alert_fail "Low disk space: only ${AVAIL_MB}MB available on $BACKUP_DEST"
fi

# Run backup
log "Creating archive: $ARCHIVE"
tar \
--exclude=/tmp \
--exclude=/proc \
--exclude=/sys \
--exclude=/dev \
-czf "$ARCHIVE" $BACKUP_DIRS 2>> "$LOG_FILE"
if [ $? -ne 0 ]; then
    alert_fail "tar command failed. See $LOG_FILE for details."
fi

SIZE=$(du -sh "$ARCHIVE" | cut -f1)
log "Archive created: $ARCHIVE (Size: $SIZE)"

# Backup rotation - remove archives older than RETENTION_DAYS
log "Running rotation: removing backups older than $RETENTION_DAYS days"
find "$BACKUP_DEST" -name "${HOSTNAME}-*.tgz" -mtime +$RETENTION_DAYS -delete
log "Rotation complete. Remaining backups:"
ls -lh "$BACKUP_DEST" >> "$LOG_FILE"

log "===== Backup Finished Successfully ====="

OUTPUT (from /var/log/backup.log)
[2026-05-08 09:30:01] ===== Backup Started: webserver01 =====
[2026-05-08 09:30:01] Available disk space: 18432MB
[2026-05-08 09:30:01] Creating archive: /mnt/backup/webserver01-2026-05-08_09-30.tgz
[2026-05-08 09:32:14] Archive created: /mnt/backup/webserver01-2026-05-08_09-30.tgz (Size: 412M)
[2026-05-08 09:32:14] Running rotation: removing backups older than 7 days
[2026-05-08 09:32:15] Rotation complete. Remaining backups:
[2026-05-08 09:32:15] ===== Backup Finished Successfully =====

To monitor the log in real time or search for failures:

bash
# Follow log in real time
tail -f /var/log/backup.log

# View last 30 lines
tail -n 30 /var/log/backup.log

# Search for failures only
grep "FAILED\|ERROR\|ALERT" /var/log/backup.log

#05

Advanced Rotation: Grandparent-Parent-Child Scheme

The 7-day rolling rotation in Script 3 works well for most servers. But if you manage environments where you need to go back weeks or months, the grandparent-parent-child (GPC) scheme gives you daily, weekly, and monthly restore points without letting the backup directory grow without limit. This is a proper enterprise backup rotation pattern used in real Linux production setups.

How GPC rotation works:

Daily backups (children) run Sunday through Friday, reusing the same day-named file each week. On Saturday a weekly backup (parent) writes to one of four weekly slots. On the first of each month a monthly backup (grandparent) rotates between two slots based on odd or even month. The result is that you always have a restore point from yesterday, last week, and last month. The Linux system backup and restore cheat sheet has additional rotation patterns worth knowing.

bash
#!/bin/bash
set -euo pipefail
# Grandparent-Parent-Child (GPC) Rotation Backup Script
# Works on: Ubuntu 24.04, Rocky Linux 9, AlmaLinux 9

BACKUP_FILES="/home /etc /var/spool/mail /root /boot"
DEST="/mnt/backup"
HOSTNAME=$(hostname -s)
DAY=$(date +%A)
DAY_NUM=$(date +%-d)

# Determine week slot 1 to 4
if (( DAY_NUM <= 7 )); then
    WEEK_FILE="$HOSTNAME-week1.tgz"
elif (( DAY_NUM <= 14 )); then
    WEEK_FILE="$HOSTNAME-week2.tgz"
elif (( DAY_NUM <= 21 )); then
    WEEK_FILE="$HOSTNAME-week3.tgz"
else
    WEEK_FILE="$HOSTNAME-week4.tgz"
fi

# Determine monthly slot based on odd or even month
MONTH_NUM=$(date +%m)
MONTH=$(( 10#$MONTH_NUM % 2 ))  # 10# forces base 10 so "08" and "09" are not read as octal
if [ $MONTH -eq 0 ]; then
    MONTH_FILE="$HOSTNAME-month2.tgz"
else
    MONTH_FILE="$HOSTNAME-month1.tgz"
fi

# Pick which archive name to use today
if [ $DAY_NUM -eq 1 ]; then
    ARCHIVE_FILE=$MONTH_FILE
elif [ "$DAY" != "Saturday" ]; then
    ARCHIVE_FILE="$HOSTNAME-$DAY.tgz"
else
    ARCHIVE_FILE=$WEEK_FILE
fi

echo "Backing up to $DEST/$ARCHIVE_FILE"
tar \
--exclude=/tmp \
--exclude=/proc \
--exclude=/sys \
--exclude=/dev \
-czf "$DEST/$ARCHIVE_FILE" $BACKUP_FILES
echo "Done: $(date)"
ls -lh "$DEST/"

#06

Schedule Backups with Cron: Setup, Verify, and Debug

Writing the script is only part of the job. To automate Linux backups with cron and a shell script, you need to schedule it correctly, confirm it is actually running, and know how to debug it when cron behaves unexpectedly. This is where most setups go wrong. The full cron command guide with examples covers all scheduling syntax options if you want to dig deeper.

Copy the production script to a standard system path first:

bash
sudo cp backup_production.sh /usr/local/bin/backup_production.sh
sudo chmod +x /usr/local/bin/backup_production.sh

Open the root crontab and add your schedule. Always use sudo crontab -e for scripts that access root-owned directories:

bash
sudo crontab -e

Add one of these entries depending on your schedule:

bash
# m h dom mon dow command

# Every day at 2:00 AM
0 2 * * * /usr/local/bin/backup_production.sh

# Every Sunday at 3:00 AM
0 3 * * 0 /usr/local/bin/backup_production.sh

# Daily at midnight with explicit PATH (fixes most cron failures)
0 0 * * * PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin /usr/local/bin/backup_production.sh

Verify the entry was saved:

bash
sudo crontab -l
OUTPUT
0 2 * * * PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin /usr/local/bin/backup_production.sh

Common Mistake - The cron PATH problem:

The most common reason cron backup jobs fail silently is that cron runs with a stripped-down PATH that does not include /usr/local/bin. Your script works fine when you run it manually but does nothing through cron.

Fix: Add the full PATH export at the very top of your script: export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin. Use only absolute paths inside scripts that run via cron, and never rely on which or command -v to locate binaries in a cron context.
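You can see the problem directly by resolving the same command under a cron-like PATH versus a full one. In this sketch, backup-helper is a fake binary staged in a temp directory standing in for something installed under /usr/local/bin:

```shell
#!/bin/bash
# Demonstrate cron's PATH problem: the same command resolves or fails
# depending purely on PATH. backup-helper is a fake stand-in binary.
FAKE_BIN=$(mktemp -d)
printf '#!/bin/sh\necho helper ran\n' > "$FAKE_BIN/backup-helper"
chmod +x "$FAKE_BIN/backup-helper"

CRON_PATH="/usr/bin:/bin"               # roughly what cron gives you
FULL_PATH="$FAKE_BIN:/usr/bin:/bin"     # what your login shell has

CRON_RESULT=$(env PATH="$CRON_PATH" bash -c 'command -v backup-helper' || echo "not found")
FULL_RESULT=$(env PATH="$FULL_PATH" bash -c 'command -v backup-helper' || echo "not found")
echo "under cron PATH: $CRON_RESULT"
echo "under full PATH: $FULL_RESULT"
rm -rf "$FAKE_BIN"
```

The command that your interactive shell finds instantly is invisible to cron, which is exactly why the backup job produces no output and no error.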

#07

Bonus: Encrypted Backups and Cloud Upload with rclone

Two features that are completely absent from most backup guides are encrypted archives and cloud storage upload. Both belong in any serious production setup.

Encrypted Backup with GPG

After creating a tar archive, you can encrypt it with GPG symmetric encryption. If someone gets access to your backup storage or S3 bucket, they cannot read the data without the passphrase.

Note:

GPG symmetric encryption uses AES-256 by default. The --batch flag lets it run non-interactively inside a script. Never store the passphrase inside the script itself. Write it to a root-only readable file instead. The Linux security command cheat sheet covers more ways to protect sensitive credentials on the server.

bash
# Store passphrase in a root-only file
echo "YourStrongPassphrase" > /root/.backup_passphrase
chmod 600 /root/.backup_passphrase

# Encrypt the archive
gpg --batch --yes --passphrase-file /root/.backup_passphrase \
    --symmetric --cipher-algo AES256 \
    /mnt/backup/webserver01-2026-05-08.tgz

# Output: /mnt/backup/webserver01-2026-05-08.tgz.gpg
# Remove the unencrypted original
rm -f /mnt/backup/webserver01-2026-05-08.tgz

# To decrypt and restore later
gpg --batch --passphrase-file /root/.backup_passphrase \
    --decrypt /mnt/backup/webserver01-2026-05-08.tgz.gpg \
    > /mnt/backup/webserver01-2026-05-08.tgz

Push Backups to Cloud Storage with rclone

Once you have a local encrypted archive, you can push it to AWS S3, Backblaze B2, or any S3-compatible provider using rclone. This adds an off-site copy with almost no extra effort.

bash
# Install rclone on Ubuntu
curl https://rclone.org/install.sh | sudo bash

# Install rclone on Rocky Linux / AlmaLinux
sudo dnf install -y rclone

# Configure rclone interactively for your cloud provider
rclone config

# Copy encrypted archive to S3 bucket
rclone copy /mnt/backup/webserver01-2026-05-08.tgz.gpg s3-remote:my-backup-bucket/servers/

# Sync entire backup directory to S3
rclone sync /mnt/backup/ s3-remote:my-backup-bucket/servers/ --progress

Tip:

Add the rclone sync line at the very end of Script 3 so every run automatically archives locally, rotates old backups, and pushes a copy to cloud storage without any manual steps. For a broader view of backup strategies for production Linux servers, the Linux server backup solutions 2026 guide covers additional tools and approaches worth knowing.
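As a sketch, the lines appended to Script 3 could look like the following. The remote name s3-remote and the bucket path are placeholders for whatever you set up via rclone config, and in Script 3 itself you would wire the failure case into its existing alert_fail function. The demo only assembles and prints the command, so it runs even where rclone is not installed:

```shell
#!/bin/bash
# Sketch of the cloud-sync step to append at the end of Script 3.
# "s3-remote" and the bucket name are placeholders for your rclone remote.
BACKUP_DEST="/mnt/backup"                                 # as defined in Script 3
REMOTE_TARGET="s3-remote:my-backup-bucket/servers/$(hostname -s)/"

SYNC_CMD=(rclone sync "$BACKUP_DEST" "$REMOTE_TARGET" --progress)
# In Script 3 you would run: "${SYNC_CMD[@]}" || alert_fail "cloud sync failed"
# Here we only print it so the sketch is safe to run anywhere:
echo "would run: ${SYNC_CMD[*]}"
```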

#08

Troubleshooting Common Backup Script Problems

This is the section that most guides skip entirely. Here are the five real problems that show up when running these scripts on production servers, with the exact fix for each one.

Problem 1 - Permission denied on backup directories:

tar reports "Cannot stat: Permission denied" for directories like /root, /etc/ssl/private, or /var/lib/mysql.

Fix: Run the script as root or via sudo. For cron, always use sudo crontab -e to edit the root crontab rather than the regular user crontab. Verify permissions with: ls -la /etc/ssl/private

Problem 2 - Disk full during backup creates a corrupt archive:

The backup writes partially then fails. The resulting file looks like a valid archive but cannot be extracted. Script 3 handles this with a pre-flight disk space check, but you still need to verify archives after creation. Track disk usage trends with the du command to catch growing directories before they become a problem.

Fix: tar -tzvf /mnt/backup/archive.tgz > /dev/null && echo "OK" || echo "CORRUPT"

Problem 3 - SSH connection refused in remote backup script:

rsync or scp exits with "Connection refused" or "Host key verification failed" when run from cron, even though it works fine from the terminal.

Fix: Add the remote host to known_hosts before running the script from cron: ssh-keyscan -H 192.168.1.100 >> /root/.ssh/known_hosts. Also confirm the server firewall allows SSH from your source IP using the firewall-cmd command reference.

Problem 4 - Cron job is scheduled but nothing runs:

Almost always a PATH issue. Commands like rsync, gpg, or mail are not in cron's minimal PATH, so they silently fail without any output in the log.

Fix: Add this at the very top of your backup script before any other commands: export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin. Test by setting the cron time to two minutes from now and checking the log file to confirm it ran.

Problem 5 - Email alerts configured but no emails arriving:

The mail command runs without error but nothing arrives. This usually means no MTA is configured on the server.

Fix on Ubuntu: sudo apt install postfix mailutils. Fix on Rocky Linux / AlmaLinux: sudo dnf install postfix mailx && sudo systemctl enable --now postfix. If you want to skip the MTA setup entirely, use the Slack webhook curl method in Script 3 instead. It only needs curl and works on any server with internet access. See more curl usage examples in the curl command guide.

FAQ

Frequently Asked Questions

How do I schedule a backup script in Linux using cron?

Open the root crontab with sudo crontab -e and add a line with the schedule and full path to your script. For a daily backup at 2 AM, add this line:

bash
0 2 * * * PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin /usr/local/bin/backup_production.sh

Always include the full PATH in the crontab line to prevent silent failures. Confirm the entry was saved by running sudo crontab -l. The cron command reference shows every scheduling syntax option with real examples.

How do I run a shell script on a remote server via SSH automatically?

You need passwordless SSH key authentication set up first, then you can run remote commands in two ways:

bash
# Run a script that already exists on the remote server
ssh -i /root/.ssh/id_rsa user@192.168.1.100 "bash /usr/local/bin/backup.sh"

# Send and run a local script on the remote server via stdin
ssh -i /root/.ssh/id_rsa user@192.168.1.100 bash < /usr/local/bin/backup.sh

For automated runs inside cron, generate a dedicated passwordless key specifically for backup tasks so it does not conflict with your regular SSH keys. See the full setup walkthrough in the install and secure SSH server guide.

What is the best way to back up to an NFS mount using a shell script?

Mount the NFS share to a local directory first, then set that path as your BACKUP_DEST in the script. Add a mount check inside the script so it exits cleanly if the NFS share is unavailable.

bash
# Mount NFS share manually
sudo mount -t nfs 192.168.1.200:/exports/backup /mnt/backup

# Add to /etc/fstab for auto-mount on boot
192.168.1.200:/exports/backup /mnt/backup nfs defaults 0 0

# Add this check at the top of your backup script
mountpoint -q /mnt/backup || { echo "NFS not mounted. Exiting."; exit 1; }

The mountpoint check is important because if the NFS share fails to mount silently, your script will write the backup to the local filesystem without warning, eventually filling the root partition.

Can I back up a Linux server to AWS S3 using a shell script?

Yes. Use rclone after running rclone config to set up your S3 connection. Then add this line at the end of your backup script to push the archive automatically after each run:

bash
rclone copy /mnt/backup/ s3-remote:my-bucket/backups/ --include "*.tgz.gpg"

Always encrypt archives with GPG before uploading so the data is protected at rest in the bucket. The same rclone command works with Backblaze B2, Google Cloud Storage, and Wasabi without any changes to the syntax. For a full overview of Linux backup strategies including cloud options see the Linux server backup solutions 2026 guide.

How do I send an email alert when a backup script fails in Linux?

Check the exit code of tar with $? and pipe a failure message to the mail command if it is not zero. Script 3 in this guide has this built in as a reusable function. The core pattern is:

bash
tar \
--exclude=/tmp \
--exclude=/proc \
--exclude=/sys \
--exclude=/dev \
-czf "$ARCHIVE" $BACKUP_DIRS
if [ $? -ne 0 ]; then
    echo "Backup failed on $(hostname) at $(date)" | mail -s "BACKUP FAILED" admin@yourdomain.com
    exit 1
fi

If you do not have postfix or sendmail configured, the Slack webhook curl method from Script 3 is a simpler option. It requires only curl and a Slack incoming webhook URL. More curl one-liner examples are in the curl command in Linux guide.

Does this work on Rocky Linux 9 and AlmaLinux, not just Ubuntu?

Yes. All three scripts in this guide are confirmed working on Ubuntu 24.04, Rocky Linux 9, and AlmaLinux 9. The only difference is the package manager used during setup. Use dnf instead of apt on RHEL-based systems. The bash scripting syntax, tar flags, rsync, cron format, and GPG commands work identically across all three distributions. See the differences between these enterprise distros in the RHEL vs Ubuntu Server comparison.

What is the grandparent-parent-child backup rotation scheme?

GPC is a structured rotation pattern that keeps daily, weekly, and monthly backup copies without unlimited growth. Daily backups (children) run Sunday through Friday using day-named files that overwrite themselves each week. A weekly backup (parent) runs on Saturday and rotates through four weekly slots. A monthly backup (grandparent) runs on the first of each month and alternates between two monthly slots based on whether the month is odd or even. The full working script is in section 05 of this article. It gives you restore points from yesterday, last week, and last month while keeping the total number of archive files fixed and predictable.

How do I restore files from a tar backup in Linux?

There are three common restore scenarios you will run into:

bash
# 1. List everything in the archive before restoring
tar -tzvf /mnt/backup/server-2026-05-08.tgz

# 2. Restore one file safely to /tmp - review before copying back
tar -xzvf /mnt/backup/server-2026-05-08.tgz -C /tmp etc/nginx/nginx.conf

# 3. Full system restore - only in a recovery scenario
cd /
sudo tar -xzvf /mnt/backup/server-2026-05-08.tgz

Option 3 overwrites files on the live filesystem without any confirmation prompts. Always test restores to a staging environment first. Never run a full restore on a production server without knowing exactly which files will be overwritten.

END

Summary

In this guide you built three complete shell scripts that back up files and directories on Linux automatically, starting from a simple local tar archive and going all the way up to a production setup with SSH remote transfer, 7-day rotation, encrypted archives, cloud upload to S3, and real-time failure alerts via email and Slack. Every script is tested and runs on Ubuntu 24.04, Rocky Linux 9, and AlmaLinux 9 without modification.

For more advanced automation patterns you can layer on top of these scripts, the Linux bash scripting and automation 2026 guide is a good next step.


About Sharon J

Sharon J is a Linux System Administrator with strong expertise in server and system management. She turns real-world experience into practical Linux guides on Linux Teck.
