Guide: host your own private file sync + backup and note-taking server on a Raspberry Pi

 

Introduction

What is it and Why do it?

  • It's a little bit like your own private cloud server*
  • You get to use these awesome services:
    • Seafile is your personal file storage and synchronization app, which has multi-client support, a mobile app, and still lets you securely share files (and even collaborate on them) with others.
    • Trilium is an awesome note-taking and knowledge base app on steroids. I've been a sponsor of the project for a while; check out its features here: https://www.youtube.com/watch?v=uIvdzzxjdY8&t=51s
    • Pi-hole can block ads (for all your LAN devices!) and provide local DNS.
  • It can be cheaper than ‘real’ cloud storage (see FAQ#2).
  • You are in control of your own data, and syncing or backing up large data volumes (on the LAN) does not consume data on your internet plan.

(*) see FAQ#1

Prerequisites

Hardware & static IP:

  1. €65 Raspberry Pi 4B with 4GB RAM, or better
  2. Storage: get all of the following (do NOT just use a big internal microSD card, see FAQ#3):
    - €9 internal microSD card (16GB or more) for the OS and software;
    - €25 HDD docking station;
    - €92 HDD such as a WD Red, since these drives are made especially for NAS usage.
  3. Static public IP: unless you already have a static public IP, set up Dynamic DNS on your router and connect it to a domain name using a free DynDNS service, e.g. https://www.noip.com/remote-access.
    Make sure you have 3 domain records registered (1 for each service; 2 of them are CNAME records):

    mysite.ddns.net A record
    mysite-trilium.ddns.net CNAME record (point to mysite.ddns.net)
    mysite-seafile.ddns.net CNAME record (point to mysite.ddns.net)
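
    Once the records exist, you can verify them from any machine before going further (a quick sanity check using the example hostnames above):

      nslookup mysite.ddns.net
      dig +short mysite-seafile.ddns.net   # should print the CNAME target and then the A record's IP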

Set up your hardware and routing:

  1. Connect the HDD to the RPI via the docking station and hook up your RPI to your LAN router via ethernet (or wifi), then set a static IP for it (your router's admin page is typically at 192.168.1.1)
  2. Configure router: 
    1. set up static IP for RPI: 192.168.1.XX
    2. set up port forwarding to this IP for ports:
      1. 22 SSH
      2. 80 certbot and reverse proxy (for all the other software in the docker containers)
      3. 81 nginx proxy manager
      4. 443 SSL
      5. (and optionally more e.g. if you want to access pihole DNS from outside the LAN)

Server Setup

Parameters: if you replace my default values below with your own, you should be able to copy-paste most lines of code in this guide.

Description                                                                    Value
HDD mount point                                                                /media/hdd/
Domain name (A DNS record)                                                     mysite.ddns.net
Seafile (CNAME DNS record)                                                     mysite-seafile.ddns.net
Trilium (CNAME DNS record)                                                     mysite-trilium.ddns.net
Email address (smartmontools notifications, seafile admin, certbot contact)    me@example.com
Local IP of your Raspberry Pi                                                  192.168.1.XX

Raspberry Pi OS

Official Instructions: https://www.raspberrypi.com/documentation/computers/getting-started.html#installing-the-operating-system 

  1. Install Raspberry Pi OS
    1. Plug the micro SD card into your computer (using an adapter if not natively supported)
    2. sudo apt install rpi-imager
    3. Launch the imager and choose the appropriate OS (Raspberry Pi OS 64-bit for RPI4B) and select the SD card to install on
    4. When asked to customize settings, click Edit Settings and make sure remote access (SSH) is enabled, your LAN network credentials are put in, you've chosen a password, etc.
  2. Boot your RPI, and SSH into it with your chosen password.
  3. Install vim or nano or whatever CLI text editor you prefer: sudo apt-get install vim
  4. Execute the following to add lines to /etc/fstab so your HDD will mount properly when your RPI reboots:
    1. Make a fixed mounting directory (if you change it, know that I use this path throughout the guide): sudo mkdir /media/hdd
    2. Figure out the UUID of your device with sudo blkid and insert it into the second line below:

      echo "# external HDD" | sudo tee -a /etc/fstab
      echo "UUID="INSERT_UUID_HERE"	/media/hdd	ext4	defaults	0	0" | sudo tee -a /etc/fstab
  5. install and set up a firewall (ufw) and fail2ban (a minimal configuration sketch follows the install commands)
    1. sudo apt-get install ufw
    2. sudo apt-get install fail2ban
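
    A minimal configuration sketch (my suggestion: it only opens SSH here; the ports for the individual services are opened in their respective sections below, and fail2ban's default Debian jail already covers sshd):

      sudo ufw default deny incoming    # block everything that is not explicitly allowed
      sudo ufw default allow outgoing
      sudo ufw allow 22                 # keep SSH reachable before enabling the firewall!
      sudo ufw enable
      sudo systemctl enable --now fail2ban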
  6. install docker: follow OS-specific instructions on https://docs.docker.com/engine/install/ 
    for RPI, this is:

    # Remove old packages if existing
    for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done
    
    # Add Docker's official GPG key:
    sudo apt-get update
    sudo apt-get install ca-certificates curl
    sudo install -m 0755 -d /etc/apt/keyrings
    sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
    sudo chmod a+r /etc/apt/keyrings/docker.asc
    
    # Add the repository to Apt sources:
    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
      $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    sudo apt-get update
    
    # Install Docker latest version
    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
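    
    # Verify that the engine works: pulls Docker's tiny test image and prints a confirmation
    sudo docker --version
    sudo docker run --rm hello-world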

Reverse proxy

One big advantage of using a reverse proxy is that all your traffic is routed through a single point that takes care of SSL termination, so you can use a single SSL certificate for all your services. We'll use Nginx Proxy Manager (NPM) and request a Let's Encrypt certificate using Certbot.

Certbot

Official Instructions: https://certbot.eff.org/instructions?ws=nginx&os=snap 

  1. install snap: 

    sudo apt install snapd
    sudo reboot
    sudo snap install core
  2. install certbot as a snap:
    1. remove any pre-existing certbot package: sudo apt-get remove certbot
    2. sudo snap install --classic certbot
    3. sudo ln -s /snap/bin/certbot /usr/bin/certbot
  3. (optional) turn off any web server listening on port 80
    sudo docker stop nginxpm
  4. get a cert in standalone mode (certbot spins up its own temporary webserver). You can specify multiple domains with the -d option and place the certs on the HDD with --config-dir:
    sudo certbot certonly --standalone -d mysite.ddns.net -d mysite-seafile.ddns.net -d mysite-trilium.ddns.net --config-dir /media/hdd/certbot
  5. make sure certbot uses this config directory when renewing:
    1. either with cron (make sure you adjust the day-of-month and month fields to your situation! Let's Encrypt certificates have to be renewed every 3 months): sudo vim /etc/crontab and add the following lines (after modifying):
      1. # renew SSL cert
        0 0 1 1,4,7,10 * root	/usr/bin/certbot renew --config-dir /media/hdd/certbot
    2. or by telling certbot it should look into the /media/hdd/certbot directory when renewing (I'm confused by the manual and don't know how to do it exactly; let me know if you do):
      https://eff-certbot.readthedocs.io/en/stable/using.html#automated-renewals 
      https://eff-certbot.readthedocs.io/en/stable/using.html#configuration-file 
  6. do a manual test with:
    sudo certbot renew --config-dir /media/hdd/certbot --dry-run 
  7. (optional) turn on web server again

    sudo docker start nginxpm
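  8. (optional) list the certificates certbot manages in that config directory and their expiry dates:

    sudo certbot certificates --config-dir /media/hdd/certbot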

NPM (nginx reverse proxy manager)

Official Instructions: https://nginxproxymanager.com/guide/ 

  1. enable ports
    1. sudo ufw allow 80
    2. sudo ufw allow 81
    3. sudo ufw allow 443
  2. create proxiable docker network (all your services need to connect to this network! otherwise NPM cannot connect to them using container names): 
    sudo docker network create --driver bridge proxiable
  3. sudo mkdir /media/hdd/nginxpm
  4. create a yaml: sudo vim /media/hdd/nginxpm/docker-compose.yaml:

    services:
      app:
        image: 'jc21/nginx-proxy-manager:latest'
        restart: unless-stopped
        container_name: nginxpm
        ports:
          # These ports are in format <host-port>:<container-port>
          - '80:80' # Public HTTP Port
          - '443:443' # Public HTTPS Port
          - '81:81' # Admin Web Port
          # Add any other Stream port you want to expose
          # - '21:21' # FTP
    
        #environment:
          # Uncomment this if you want to change the location of
          # the SQLite DB file within the container
          # DB_SQLITE_FILE: "/data/database.sqlite"
    
          # Uncomment this if IPv6 is not enabled on your host
          # DISABLE_IPV6: 'true'
    
        volumes:
          - ./data:/data
          - ./letsencrypt:/etc/letsencrypt
        networks:
          - proxiable
    networks:
      proxiable:
        name: proxiable
        external: true
  5. install NPM
    1. sudo docker compose up -d
  6. log in to the admin UI at http://192.168.1.XX:81 with the default credentials admin@example.com / changeme and change your password!
  7. add SSL certificate to NPM (upload)
  8. configure proxies (and add SSL to each!):
    1. mysite.ddns.net
      → http pihole:80
    2. mysite-trilium.ddns.net
      → http trilium:80
    3. mysite-seafile.ddns.net
      → http seafile-caddy:80
  9. optional: create a dummy docker container for debugging your proxy hosts (see the sketch below)
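
    One simple option (my suggestion, not part of the original setup) is the traefik/whoami image, which listens on port 80 inside its container and just echoes request details back. Container-internal ports don't clash across containers on the same docker network, so reusing port 80 is fine; only published host ports can conflict:

      sudo docker run -d --name whoami --network proxiable traefik/whoami
      # then add a proxy host in NPM that points a spare test hostname to http whoami:80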

Pi-hole

Official Instructions: https://github.com/pi-hole/docker-pi-hole 

  1. enable port for DNS
    1. sudo ufw allow 53
  2. create a yaml file for pi-hole (make sure to change the WEBPASSWORD):

    sudo mkdir /media/hdd/pihole
    sudo vim /media/hdd/pihole/docker-compose.yaml
    # More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
    services:
      pihole:
        container_name: pihole
        image: pihole/pihole:latest
        # For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
        ports:
          - "53:53/tcp"
          - "53:53/udp"
          - "67:67/udp" # Only required if you are using Pi-hole as your DHCP server
          - "80:80/tcp"
        environment:
          TZ: 'CET'
          WEBPASSWORD: 'set a secure password here or it will be random'
        # Volumes store your data between container upgrades
        volumes:
          - './etc-pihole:/etc/pihole'
          - './etc-dnsmasq.d:/etc/dnsmasq.d'
        #   https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
        cap_add:
          - NET_ADMIN # Required if you are using Pi-hole as your DHCP server, else not needed
        restart: unless-stopped
        networks:
          - proxiable
    networks:
      proxiable:
        name: proxiable
        external: true
  3. configure your client devices to use Pi-hole as their DNS server:
    (I'm using Cloudflare as fallback)
    1. Linux: https://www.veeble.com/kb/how-to-change-dns-server-permanently-on-ubuntu-20-04/ 
      1. sudo vim /etc/systemd/resolved.conf
      2. add/uncomment the following line:
        DNS=192.168.1.XX 1.1.1.1
      3. restart the resolver so the change takes effect: sudo systemctl restart systemd-resolved
    2. Windows: https://www.windowscentral.com/how-change-your-pcs-dns-settings-windows-10 
      1. go to internet options, adapter properties, IPv4 properties
      2. add 192.168.1.XX with 1.1.1.1 as fallback
  4. go to the mysite.ddns.net/admin interface, click on Adlists in the left menu and add the following space-separated list:

    https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts https://raw.githubusercontent.com/PolishFiltersTeam/KADhosts/master/KADhosts.txt https://raw.githubusercontent.com/FadeMind/hosts.extras/master/add.Spam/hosts https://v.firebog.net/hosts/static/w3kbl.txt https://adaway.org/hosts.txt https://v.firebog.net/hosts/AdguardDNS.txt https://v.firebog.net/hosts/Admiral.txt https://v.firebog.net/hosts/Easyprivacy.txt https://v.firebog.net/hosts/Prigent-Ads.txt https://raw.githubusercontent.com/anudeepND/blacklist/master/adservers.txt
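
  5. update gravity so the new lists are actually downloaded (also possible from the web UI under Tools), and optionally confirm that blocking works; a quick check, assuming the container name and IP used above:

    sudo docker exec pihole pihole -g                # refresh the blocklists ("gravity")
    dig @192.168.1.XX doubleclick.net +short         # a blocked domain should return 0.0.0.0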

Trilium

Official Instructions: https://triliumnext.github.io/Docs/Wiki/docker-server-installation.html 

  1. create /media/hdd/trilium and download the docker compose file from https://github.com/TriliumNext/Notes/blob/develop/docker-compose.yml into it (see the sketch after this list)
  2. in docker-compose.yml, comment out the ports section (this is taken care of by NPM)
  3. on client, set sync server to https://mysite-trilium.ddns.net
    be careful to use https and to not specify the port
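
  Spelled out as shell commands, steps 1-2 might look like this (a sketch: the raw URL is the raw-file form of the compose file linked above, and the proxiable network is attached the same way as for the other services):

    sudo mkdir -p /media/hdd/trilium && cd /media/hdd/trilium
    sudo wget -O docker-compose.yml https://raw.githubusercontent.com/TriliumNext/Notes/develop/docker-compose.yml
    sudo vim docker-compose.yml   # comment out the ports: section and add the proxiable network (as in the NPM/Pi-hole files)
    sudo docker compose up -d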

Seafile

Official Instructions: https://manual.seafile.com/12.0/setup/setup_ce_by_docker/#download-and-modify-env 

  1. create /media/hdd/seafile, go there, and download the docker yaml and env files:
    wget -O .env https://manual.seafile.com/12.0/docker/ce/env
    wget https://manual.seafile.com/12.0/docker/ce/seafile-server.yml
    wget https://manual.seafile.com/12.0/docker/caddy.yml
  2. Modify the .env file (sudo vim .env) to:
    1. change the volumes to /media/hdd/seafile/<volume> instead of /opt/<volume>
      1. SEAFILE_VOLUME
      2. SEAFILE_MYSQL_VOLUME
      3. SEAFILE_CADDY_VOLUME
      4. SEADOC_VOLUME (optional)
      5. NOTIFICATION_SERVER_VOLUME
    2. change INIT_SEAFILE_MYSQL_ROOT_PASSWORD and SEAFILE_MYSQL_DB_PASSWORD to …
    3. set JWT_PRIVATE_KEY with a unique string >32 chars
    4. set SEAFILE_SERVER_HOSTNAME=mysite-seafile.ddns.net
    5. note that SEAFILE_SERVER_PROTOCOL=http stays as http, because this only concerns traffic between the docker containers (NPM terminates HTTPS)
    6. set INIT_SEAFILE_ADMIN_EMAIL=me@example.com and also set the INIT_SEAFILE_ADMIN_PASSWORD
    7. set TIME_ZONE=CET
  3. add proxiable network
    1. to seafile-server.yml and caddy.yml:

        networks:
          - seafile-net
          - proxiable
      ...
      networks:
        seafile-net:
          name: seafile-net
        proxiable:
          name: proxiable
          external: true
    2. in caddy.yml, comment out the ports section (this is taken care of by NPM)
    3. in seafile-server.yml, change the seafile version 12.0-latest to 12.0.7-arm.
  4. sudo docker compose up -d
  5. on each seafile client, add your account (with email and password) on server https://mysite-seafile.ddns.net

    Tip: use a seafile-ignore.txt file in the root directory of each library to avoid syncing certain folders or files (example below). https://help.seafile.com/syncing_client/excluding_files/ 
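
    For illustration (a made-up example; see the linked help page for the exact pattern syntax), a seafile-ignore.txt could contain:

      node_modules/
      *.tmp
      .cache/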

Almost done! Now just set up a monitoring system to detect when you need to replace your HDD…

Hardware monitoring

Basically, we're going to use smartmontools and let it send email notifications via Gmail SMTP (if you don't have Gmail… make a free account).

Sources: https://www.cyberciti.biz/tips/monitoring-hard-disk-health-with-smartd-under-linux-or-unix-operating-systems.html  and https://gist.github.com/maleadt/02a58eb46118c1d8014c and https://linuxconfig.org/how-to-configure-smartd-and-be-notified-of-hard-disk-problems-via-email and https://forums.raspberrypi.com/viewtopic.php?t=50939 

  1. sudo apt-get install smartmontools mailutils msmtp msmtp-mta
  2. Create msmtp config

    1. sudo touch /etc/msmtprc # create global config
      sudo chmod 660 /etc/msmtprc # restrict access; the linked guide says 664, but that would let other users read your password
    2. Create an app-specific password for your gmail https://myaccount.google.com/apppasswords 
      and use it in the next step
    3. Open the file with sudo vim /etc/msmtprc and paste the following (inserting your APP_PASSWORD_WITHOUT_SPACES):
    defaults
    auth on
    tls  on
    tls_trust_file /etc/ssl/certs/ca-certificates.crt
    logfile /var/log/msmtp.log
    
    # Gmail configuration
    account gmail
    host    smtp.gmail.com
    port    587
    from    me@example.com
    user    me@example.com
    password APP_PASSWORD_WITHOUT_SPACES 
    
    account default: gmail
  3. Enable logging:
    sudo touch /var/log/msmtp.log
    sudo chmod 666 /var/log/msmtp.log # there seems to be an issue with msmtp and it won't log unless all users have access. This is relatively harmless.
  4. Check if SMART support is available
    sudo smartctl -i /dev/sda (or whatever your device identifier is)
  5. Open sudo vim /etc/default/smartmontools and modify/uncomment the following line:
    enable_smart="/dev/sda"
  6. In sudo vim /etc/smartd.conf comment out the line with DEVICESCAN and add the following lines:
    (remove the last line after successful test. run sudo systemctl restart smartd to re-run the test)

    # MY SETTINGS
    # monitor SMART health and track all attributes (-a), with a Short (-s S) and Long (-s L) self-test every week and month, respectively.
    # ignore temperature (194 and 231) and power-on hours (9) attributes (-I)
    # email (-m) once (-M) if warnings
    /dev/sda -a -I 194 -I 9 -I 231 -m me@example.com -M once -s (S/../../6/00)|(L/../01/./01) # every saturday midnight | every 1st day of the month 1AM
    /dev/sda -a -M test -m me@example.com  # remove this line after successful test
  7. sudo systemctl reload smartd to reload the config (which also executes the test line)
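  8. (optional) before relying on smartd's email alerts, send yourself a test mail to confirm that the msmtp/Gmail setup above actually delivers (using the address from the parameters table):

    echo "test mail from the RPI" | mail -s "msmtp test" me@example.com
    tail /var/log/msmtp.log   # check here if nothing arrives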

Congrats! You're done! Enjoy your secure self-hosted setup :D

FAQ

  1. Personal cloud server means my data is backed up, right?
    Yes and no: the data storage is intended to be used as a backup for other devices, but in this guide there is no backup mechanism for the data on the RPI or its storage medium. However, the docker compose files and data all live on the hard drive, so when your SD card suddenly fails (the most likely failure), you can simply replace it: repeat the Raspberry Pi OS section, restart all the docker containers, and you're back online!
  2. What do you mean it can be cheaper than cloud storage?
    The payback period compared to cloud storage from Google/Microsoft/… varies greatly (1-10 years, but typically 3 years) depending on your needs, the hardware you already have, and whether you want Trilium and pi-hole or just simple storage.
    • Google's basic storage offer costs €20 per year and has file sync (Drive), but does not include a Trilium server nor pi-hole and only gives you 100GB. It does have other benefits (easy sync with Photos on your smartphone), but if you want more storage it will cost you €90 yearly for 2TB.
    • My recommended 1TB setup costs <€200, but if you have a spare hard drive or are not too worried about data loss (because you sync your data across devices constantly, so it is easy to recover from those), you can do it for <€100.
  3. Is this a secure setup? Does it use HTTPS?
    Yes, everything is secure (HTTPS connections even from inside your LAN) and you only need to request/manage a single SSL certificate. All web traffic passes through Nginx Proxy Manager, which relays it via the docker network; the exception is DNS, which goes straight to Pi-hole on port 53.
  4. Why use Nginx proxy manager (NPM)?
    Because then we can easily serve multiple services on the same host machine using only 1 SSL certificate. NPM is the SSL termination point and routes traffic via the docker network, which means the individual service containers don't have to expose their own ports to the internet.
  5. I can still see ads when using pihole! Is it not working?
    Pi-hole blocks ads via DNS-based domain blocking, so ads served from the same domains as the content itself (like YouTube ads) cannot be blocked this way.
  6. What are the downsides of this setup?
    The “downside” (read: “the fun part for hobbyists”) is that you have to set up and maintain everything yourself, for instance if you move houses, or your HDD/SSD has an issue, or … :-)