
Raspberry Pi IaC

Contains the Infrastructure as Code (IaC) files used for provisioning my in-house Raspberry Pi.

Prerequisites

The files within this repository should be run from a computer with Ansible installed, which is only supported on macOS and Linux. On Windows, Ansible can be run within a WSL distro (see this guide on how to set that up; make sure the repository is checked out on the WSL distribution itself).

Ansible control node prerequisites

  1. Install Ansible (see instructions).

  2. Install Ansible Galaxy content:

    ansible-galaxy install -r requirements.yml
    
  3. Enter the vault password in the .vault_pass file. This is included in .gitignore so it shouldn't end up in the repository:

    # Notice the space at the beginning, this prevents the shell from saving this command in its history.
     echo '[ -- enter vault pass here -- ]' > .vault_pass
    
  4. Install the Python 3 passlib library (used internally in the user role):

    sudo apt install python3-passlib
    

Environment prerequisites

The IaC code contained within this repository provisions the Raspberry Pi itself but not all surrounding infrastructure, which is presumed to be managed by hand. The following configuration is assumed:

  1. A prepared Raspberry Pi.
  2. Cloudflare set up for managing the domain records of kleinendorst.info.

Raspberry Pi preparation

The Raspberry Pi should be installed and running with reachable SSH from the network.

  1. Set up its MicroSD card using the Raspberry Pi Imager (download). For CHOOSE OS select the Raspberry Pi OS (other) > Raspberry Pi OS Lite (64-bit) option.
  2. When asked: Would you like to apply OS customization settings? select EDIT SETTINGS. Select and fill in the following settings:
    1. Set username and password
    2. Set locale settings
    3. Enable SSH > Allow public-key authentication only and enter your computer's public key.
    4. Disable Eject media when finished (probably not really important but I heard it could prevent problems on Windows).
  3. Start the Raspberry Pi with an ethernet cable attached.
  4. Find the assigned IP of the Raspberry Pi in the router and configure DHCP to statically assign this address to the Raspberry Pi.
  5. Add the new Raspberry Pi to the hosts file using the internal IP if it isn't there already.
  6. Test if the Raspberry Pi is correctly configured by opening an SSH session to it (using its IP address).
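
For example, a quick connection test (pi is a placeholder for the username chosen during imaging; 192.168.50.27 is the address used as an example elsewhere in this README):

ssh pi@192.168.50.27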

Provisioning

Provision the Raspberry Pi by running:

ansible-playbook playbook.yml
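
To preview what would change without applying anything, Ansible's built-in check mode can be used first:

ansible-playbook playbook.yml --check --diff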

Overview of installed software

SSH with ZSH

It's possible to connect to the Raspberry Pi from the internal network via either its FQDN or IP address, using the public-key-only setup from the Raspberry Pi preparation. When logged in, the user is greeted by zsh configured with Oh My Zsh and the Starship prompt.
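
For example, assuming a hypothetical FQDN of pi.kleinendorst.info (the actual record is managed through Cloudflare):

ssh pi@pi.kleinendorst.info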

[screenshot: zsh session with Oh My Zsh and the Starship prompt]

Other

Reinstalling the Pi

It can be handy to reinstall the Pi. First shut down the Pi by running sudo shutdown from an SSH session. Next, take out the memory card and follow all steps in Raspberry Pi preparation. Finally, remove the stale known_hosts entry with ssh-keygen -R '192.168.50.27' on all PCs that had SSH access to the Pi.

The external Kingston drive

An external 1 TiB drive is installed. The drive isn't directly managed by Ansible, although some files/folders on it might be written to. Use this guide for setting up the proper filesystem. Note that mounting the volume automatically via /etc/fstab is handled by Ansible.
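
For reference, the automatic mount corresponds to an /etc/fstab entry roughly like the one below (the UUID is a placeholder and ext4 is an assumed filesystem type; /bulk is the mount point referenced elsewhere in this README):

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /bulk ext4 defaults,nofail 0 2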

Debugging users other than the main user

The user role included in this repository makes it possible to create new users which will also have a fully configured ZSH environment. They can't be accessed via SSH because no SSH keys are added for them and password logins are disabled. Logging into the new user's account can be done as follows (for testing and debugging):

# Enter both the username and password
sudo login

This is verified to be working:

[screenshot: logging in as a newly created user]

Manually restoring a backup

The backups role installs scripting which automatically backs up specific files and uploads them to an external service called BorgBase. This section describes how to manually restore such a backup after cleanly reinstalling the Pi from scratch.

First, the Borg repo URL and passphrase are needed. They can be found in the vault file located at roles/backups/vars/main/vault.yml and can be read out using the ansible-vault binary, for example:
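
ansible-vault view roles/backups/vars/main/vault.yml

Log into the Raspberry Pi, change to root and fill the BORG_REPO and BORG_PASSPHRASE variables: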

sudo -i
export BORG_REPO='...'
export BORG_PASSPHRASE='...'

The backups role generates a new SSH key pair when run. Since this section assumes a fresh install took place, the newly generated public key has to be uploaded to BorgBase. The key is stored in the root user's home folder; view it with cat ~/.ssh/id_ssh_rsa.pub and upload it to BorgBase. Make sure to also remove any old keys and to add the new key to the repository with "full access".

It should now be possible to view backups contained within the borg repo by running: borg list. Now run the following commands (replacing [ARCHIVE_NAME]):

mkdir /bulk/backup_work_dir
cd /bulk/backup_work_dir/

borg extract '::[ARCHIVE_NAME]' ./docker_volumes ./postgres

The files are now unpacked in this directory. Make sure to clean up this directory after use.
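
Once the restore has been verified, the working directory can be removed again, for example:

rm -rf /bulk/backup_work_dir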

Restoring the database content

To restore the database backups, first log into the PostgreSQL database's REPL shell with the credentials of the postgres user found in the role's vault (if required). Now run the following commands (substituting [DBNAME]):

First:

docker exec -ti postgres-postgres-1 psql --user postgres

Then:

CREATE DATABASE [DBNAME];
CREATE USER [DBNAME] WITH PASSWORD '...';
GRANT ALL PRIVILEGES ON DATABASE [DBNAME] TO [DBNAME] WITH GRANT OPTION;
ALTER DATABASE [DBNAME] OWNER TO [DBNAME];

Now that the user exists, exit from the psql REPL shell and perform the following actions:

# Copy a database backup into the container
docker cp ./postgres/[DBNAME].sql postgres-postgres-1:/dump.sql

# Load the data into the database
docker exec -ti postgres-postgres-1 psql --username '[DBNAME]' --dbname '[DBNAME]' --file '/dump.sql'

# Remove the dump file from the container
docker exec postgres-postgres-1 rm /dump.sql

Restoring docker backups

The backups service stores a configuration file at /etc/borg_backup_script/backup_configuration.yaml which contains an overview of the configured backups. It can be used by a script to loop over the available backups and restore them (in a similar way to how the backup script itself creates the backups). The script below assumes that all backups should be restored; adjust where necessary.
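
Based on the fields the script reads (container_name and volume_name per entry under docker_volume_backups), an entry in that file looks roughly as follows; the names below are hypothetical:

docker_volume_backups:
  - container_name: example-app-1
    volume_name: example_app_data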

Copy this script to the local folder (still assuming this is the folder which contains the docker_volumes and postgres directories):

Note: these directories already exist.

#!/bin/bash
# Copied and altered from backup_script.sh:
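# Note: this assumes a yq flavor that emits JSON (e.g. the Python yq wrapper),
# since its per-entry output is parsed with jq below.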
configurationFileLocation='/etc/borg_backup_script/backup_configuration.yaml'
nrOfConfigurations="$(yq '.docker_volume_backups | length' <"$configurationFileLocation")"

for ((i = 0 ; i < "$nrOfConfigurations" ; i++)); do
    dockerConfiguration="$(yq ".docker_volume_backups[$i]" <"$configurationFileLocation")"
    containerName="$(echo "$dockerConfiguration" | jq -r '.container_name')"
    volumeName="$(echo "$dockerConfiguration" | jq -r '.volume_name')"

    specificVolumeBackupPath="./docker_volumes/$volumeName"

    echo "Backup up Docker volume: $volumeName from running container: $containerName..."

    echo "Stopping container: $containerName..."
    docker stop "$containerName"

    echo "Starting new container which copies over files..."
    start=$SECONDS
    docker run --rm -v "$specificVolumeBackupPath:/source" -v "$volumeName:/target" --entrypoint "ash" \
        alpine -c "rm -rf /target/*; cp -rf /source/* /target/"

    elapsedSeconds=$(( SECONDS - start ))
    echo "Copying succeeded (in $elapsedSeconds seconds), restarting container..."
    docker start "$containerName"
done
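
Assuming the script was saved as restore_volumes.sh (a hypothetical name) inside the backup working directory, run it as root so it can stop and start the Docker containers:

sudo bash ./restore_volumes.sh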